Flying solo

When we're working alone, everything takes longer, but we can also get weird and lean on approaches that don't make sense on a team.

Foregoing automation

Foregoing automation tends to be a better trade-off when we're working alone. For one thing, any particular inefficiency scales to a fraction of what it would be with a larger group using the same system. For example, if I decide to automate the creation of descriptions on this blog, that would save (seconds per description written) * (number of descriptions written). If writing a description is a few seconds of copying and pasting text from the body of a post, and I write two posts per week, then the time saved is 10 seconds * 2 posts per week * 52 weeks per year ≈ 17 minutes per year. With maintenance over the course of a year, the cost of adding this feature could easily come to an hour or more. So, when working alone, it might make sense to skip the feature entirely, especially if there's value in seeing how the blogging habit takes shape; it's very possible that in a year I won't be interested in blogging, or that the way I've automated writing descriptions won't apply anymore.
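A quick way to sanity-check this kind of trade-off is to just write the arithmetic down. The numbers below are only the assumptions from the example, not measurements:

```python
# Back-of-the-envelope break-even math for the description-automation example.
seconds_per_description = 10
posts_per_week = 2
weeks_per_year = 52

minutes_saved_solo = seconds_per_description * posts_per_week * weeks_per_year / 60
print(f"solo: ~{minutes_saved_solo:.0f} minutes/year")  # ~17 minutes/year

# The same inefficiency scaled across a thousand writers, using the
# rounded 17-minute per-writer figure:
hours_saved_team = 17 * 1000 / 60
print(f"team of 1000: ~{hours_saved_team:.0f} hours/year")  # ~283 hours/year
```

At solo scale the automation loses to an hour of development and maintenance; at team scale it wins by a couple orders of magnitude.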

OTOH, if a thousand people were writing descriptions on my blog, their 283 hours of cumulative time spent would easily outweigh my hour spent developing the feature. This is even more true once you consider the added time spent explaining the workflow and reviewing posts, and the increased likelihood of mistakes.

So, it's not just that we can get away with being messier when we're working by ourselves; it actually makes a lot of sense to be messy when systems are under less load.

Tech debt

The situation with tech debt is similar. The cost of tech debt largely comes in the form of comprehending it and understanding its limitations. A form that doesn't validate properly and returns a 500 upon receipt of invalid input is far less likely to be an issue for the developer who built it than for someone else. If the developer saved significant time in this way, then it'd likely be worth the cost.

Keeping good notes can help a lot with maintaining a certain level of tech debt without letting its effects leak out in risky ways. For example, a system of tagged notes would let us list out, e.g., some-product admin forms. Then, when we create the form with the missing validations, we can also create a little cheatsheet for using it. That can be much cheaper than writing validations but almost as effective: we're essentially converting our form validations into preconditions to be guaranteed by administrators with the aid of some documentation.
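As a hypothetical sketch of what that conversion looks like (the function and form here are invented, not from any real system), the precondition lives in the docstring and the cheatsheet rather than in code:

```python
# Instead of validating input in the handler, we document the precondition
# and trust the single administrator who uses the form.
def set_discount(product_id: str, percent: str) -> int:
    """Admin-only. PRECONDITION (documented in the tagged cheatsheet):
    `percent` must be an integer string from 0 to 100. Invalid input
    raises instead of returning a friendly error -- accepted tech debt.
    """
    percent_value = int(percent)  # no validation; blows up on bad input
    return percent_value

print(set_discount("widget", "15"))  # 15
```

The failure mode (a raw exception, or a 500 in a web context) is still there; it's just been made acceptable by narrowing who operates the form.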

Using many programming languages

Similar, too, is the use of multiple languages. Clearly, different languages have different strengths and weaknesses, and, in particular, some languages have better libraries for doing certain things. When working on a large codebase, we become accustomed to writing in an idiomatic, predictable way, usually in one or two general-purpose programming languages, to keep things accessible to larger teams.

But when we're alone, working with multiple languages frequently comes at a low cost:

  • Only one programmer needs to know all of the languages in use
  • If we relegate different languages to different verticals (think "Python handles e-commerce"), then we don't need bindings at all. We can combine languages by compiling to a common output language (frequently JavaScript), by running different services behind a reverse proxy like NGINX, or via microservices plus some form of IPC (though I tend to dislike that approach because it comes with a bunch of overhead).
  • If we do write bindings between languages but assume certain preconditions (tech debt), then the bindings can be simplified considerably
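As a sketch of the reverse-proxy approach, an NGINX config along these lines routes each vertical to its own language's service (the paths, ports, and services are all made up for illustration):

```nginx
# Hypothetical routing: each vertical is its own service in its own language.
server {
    listen 80;

    location /shop/ {    # e.g. a Python e-commerce service
        proxy_pass http://127.0.0.1:8001;
    }

    location /blog/ {    # e.g. a JavaScript/Node service
        proxy_pass http://127.0.0.1:8002;
    }
}
```

The languages never call each other directly; they only share HTTP and whatever data store sits behind them.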

And, of course, these low costs can come with very large rewards.

Programming more abstractly, and leaning into your own skillset

Most abstractions are kind of leaky, and it's generally difficult to make clean, non-leaky abstractions. Often, either writing out an idiom or taking the time to make a really clean abstraction is the correct approach when working on a larger team, because time spent debugging and working with a gnarly abstraction can quickly outweigh the time it saves. Worse, overly complicated abstractions are likely to introduce bugs (and possibly vulnerabilities) in production code.

But, as discussed regarding tech debt, it's easier to manage complexity that you created yourself. So, when working alone, it makes sense to consider what you can manage and lean into added complexity that saves time. This is especially true for code that is likely to be short-lived, and most code in early-stage prototypes is short-lived.

This approach could include things like:

  • Leaning on regular expressions (ghastly!)
  • Creating a thin abstraction around your database that's likely to produce difficult-to-read errors
  • Hacking in extra code rather than updating a deployment configuration
  • Foregoing CI in favor of running all of your checks locally
  • Skipping automated tests in favor of production logging and reverts where necessary

...and more.
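For instance, the regex shortcut might look like this: a quick pattern pulls fields out of a line format we control, with no real parser and no tolerance for shapes we haven't seen (the log format here is invented):

```python
import re

# Quick-and-dirty extraction from a log format we control. Risky on a team;
# tolerable solo, because we know exactly what these lines look like.
LINE = re.compile(r"^(?P<ts>\S+) (?P<level>INFO|WARN|ERROR) (?P<msg>.*)$")

def parse_line(line: str) -> dict:
    match = LINE.match(line)
    if match is None:
        raise ValueError(f"unexpected log line: {line!r}")  # our "validation"
    return match.groupdict()

print(parse_line("2024-01-02T03:04:05 ERROR disk full"))
```

If the format ever changes, this breaks loudly, which is exactly the feedback loop we're counting on.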

Quarantining complexity

When we think of managing complexity, the idea of a quarantine is often useful. For example, if we take a potentially malicious program and run it in a sandbox, then our risk is greatly reduced. Nothing about the underlying program changed; only the context in which it exists and runs did.

Programming abstractions can be employed to a similar effect. For example, we can play fast and loose with certain kinds of code review if we know that there's a strong type system in play. Even a strong type system can't protect us from many issues, but it can narrow the risk, and if what remains is acceptable risk, then we could skip review. Similarly, a polymorphic instance could be very gnarly, but if it plugs into a clean set of interfaces, then the added complexity is largely contained. In fact, all modern programming languages are built on the idea that we can quarantine our use of assembly code by creating higher-level abstractions that a compiler manages.

As in our earlier example, documentation can be a lightweight way to capture approaches that work within the limits of a brittle implementation, allowing workflows that quarantine complexity. When one can direct the humans involved, workflows are by far the easiest, quickest thing to change about the lifecycle of doing something with a program. But humans are also fallible in many ways; if a procedure isn't working in short order, then consider building out a more robust program.

Scale

Most of the advice above is a bad idea over the course of 2+ years, because the cost of cutting corners will ultimately outweigh the cost of doing things well. However, many, many things don't last two years, and we shouldn't reduce our range for the sake of making things durable that were never going to last. For most things, experimentation and exposure are needed to know whether they're useful; these techniques help us gain that exposure efficiently.

And there should always be at least a rough estimate when opting to cut corners. Some things genuinely are done fifty times a day, and in that case we're unlikely to save time by cutting corners. Furthermore, the math on these things is likely to change: adding one person to a one-person team more than doubles all of our cost-of-debt estimates. In that vein, adding more people to a team is like getting a bigger boat: not always better, or even faster.

Conclusion

There's much to be gained when we stop doing what we're "supposed" to do. However, this post is, in essence, describing the various footguns you can employ to save time, and footguns are risky. So this is advanced stuff: to employ any of these strategies successfully, a developer needs a good idea of the risks involved and an awareness of which complexity they cannot manage (including a "Well, shit, that didn't work" feedback loop).