Debugging workflows
Let's face it: As programmers, we typically build small variations on the same theme over and over. That isn't to say that programming isn't fulfilling or challenging (I strongly believe that it is both), but a daily Sudoku puzzle can also be challenging and fulfilling, and I think that past a certain level of experience, programming hits the same way. In fact, I suspect that all skills end up feeling like this after a certain level of experience. While those of us who have chosen programming as our profession enjoy (hopefully) these puzzles, we'd still like to solve them quickly, because when we do, we get to see big results.
Editing skill and muscle memory
Editing workflows tend to be particularly fertile territory for improvement. The biggest constraints seem to be knowing which changes will be time-saving in the long run and then integrating those changes into our workflows. My best advice here is to allocate time each day for thinking through workflows and making small adjustments. Making small changes regularly tends to work best, since every change we make needs to be worked into our regular rhythms; trying to make too many changes at once breaks those rhythms entirely, and then we have to rebuild things from the ground up.
I find it helpful to keep a running list, as I'm working, of areas that could use improvement, which I can then use to inform subsequent adjustments.
Estimation
When we're estimating how long things take in order to find candidates for time-saving adjustments, I think it's important to formulate a real hypothesis about where the time goes and then to test it. As programmers, we tend to avoid this kind of hypothesis testing, because the feeling is that things are too unpredictable to draw meaningful conclusions from, since no two coding sessions are the same. However, if we zoom in, we usually find that our workflows consist of a bunch of repeated activities once we factor out the smattering of inconsistencies that fragment the larger workflows.
One instance of really low-hanging fruit is repeated commands. I don't want to
overemphasize the benefits of saving a few keystrokes, because it's easy to
build broken workflows around those kinds of optimizations, since it can make
the resulting commands incomprehensible. However, with heavily repeated
commands, I think there's often an opportunity to save significant time in the
long term. If a command takes an extra five seconds to type, but it's run thirty times per day, then: 30 * 5 = 150 seconds per day, which is 150 * 5 / 60 = 12.5 minutes per (five-day) week, which is 12.5 * 52 / 60 ≈ 10.8 hours per year.
So, often when we scale things out to a year or more, even very small
improvements have a big impact.
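The arithmetic above can be wrapped in a small helper for trying out different numbers; the five-second delay and thirty daily runs are just the example's figures:

```python
def yearly_hours(seconds_per_run, runs_per_day, workdays_per_week=5, weeks_per_year=52):
    """Hours per year spent on an extra delay of `seconds_per_run` seconds."""
    seconds_per_year = seconds_per_run * runs_per_day * workdays_per_week * weeks_per_year
    return seconds_per_year / 3600

# Five extra seconds, thirty runs per day:
print(round(yearly_hours(5, 30), 1))  # → 10.8
```

Plugging in your own measurements makes it easy to see which commands are worth shortening and which aren't.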
This is somewhat compelling for a single improvement, but very compelling when we can systematically identify areas for improvement. The value of reliable estimates is in knowing that something will actually save you ten minutes per day and, therefore, being able to improve it incrementally.
Documentation
The Manifesto for Agile Software Development probably didn't do documentation and planning any favors. Of course, everything exists in a context, and the problems were different when it was written, but I still think the philosophy has led to an unfortunate tendency for teams to see documentation as a sort of ugly thing that's mostly a waste of time.
Yet, the economics of good documentation make it probably the most compelling time-saving workflow tool that I know of. Good documentation allows us to take on technical debt at a greatly reduced cost, because tech debt that we understand has far less potential for harm. It also provides a way for ideas to take shape iteratively that's much faster than coding things out to see if they'll work. Granted, sometimes there's no substitute for a good experiment, and we shouldn't spin on ideas when there are too many unknowns. But it is even more unwise to do all of your thinking in the form of costly, failing code.
Documentation is a way of knowing about things before they happen to you. So, if it takes ten hours to debug a setup, then well-written documentation could save the next person ten hours at the cost of whatever time you spent writing it. And if that setup applies to fifty people, then that's five hundred hours. The gains are similar to what we'd get with good automation, but much less costly to achieve.
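The break-even reasoning here is simple enough to sketch; the two-hour writing cost below is an assumed figure, not one from the text:

```python
def net_hours_saved(debug_hours, readers, writing_hours):
    """Team-wide hours saved once documentation replaces re-debugging a setup."""
    return debug_hours * readers - writing_hours

# Ten hours of debugging, fifty future readers, an assumed two hours of writing:
print(net_hours_saved(10, 50, 2))  # → 498
```

Even a pessimistic writing cost barely dents the total once more than a handful of people hit the same problem.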
One piece of advice: While there are many reasons to write (or collaborate on) documentation, the point of reading documentation is primarily to figure out something important that you wouldn't have known or done otherwise. So, as a writer, don't try to make your documentation comprehensive; rather, try to relay value.
Idioms
Documenting coding idioms, in particular, yields a lot of value. It can save a lot of thinking and twiddling when it comes to those daily puzzles that we all solve repeatedly. Often, a programmatic abstraction is not as accessible as a nicely documented idiom.
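As an illustration (the specific idiom here is my own hypothetical, not one from the text), a documented idiom can be as lightweight as a short comment explaining why the team prefers a plain pattern over an abstraction:

```python
# Idiom: group items by key with dict.setdefault rather than a custom
# Grouper class. We prefer this form because it's standard-library-only
# and every reader recognizes it at a glance.
def group_by_first_letter(words):
    groups = {}
    for word in words:
        groups.setdefault(word[0], []).append(word)
    return groups

print(group_by_first_letter(["ant", "bat", "ape"]))  # → {'a': ['ant', 'ape'], 'b': ['bat']}
```

The comment carries the idiom forward; the next person copies the pattern instead of reinventing (or over-abstracting) it.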
Pitfalls
As mentioned previously, documented technical debt comes at a decreased cost. Most documentation tends toward the constructive, but documenting what doesn't work can give the reader more freedom to solve their own problems creatively.
Why don't we automate more?
If you're reading this article, then you're probably in the business of automating things. You might not think of what you do this way, but at the end of the day, we're providing instructions to a computer that tell it how to do the same things over and over again within a framework of stateful decision making. Nevertheless, it's routine to do things that other people, and possibly (probably) you, have done hundreds or thousands of times before. So, what's keeping us from automating more?
Well, building good programming abstractions is hard, and all forms of programmatic automation rely on building abstractions that are sufficient to hold up the desired automation. Suffice it to say that most attempts at abstraction of large domains that make use of polymorphism, data-driven development, code generation, and other very general constructs end up being more trouble than they're worth. We're usually better off repeating idioms than leaning on constructions that are brittle and leaky, even when they promise to save a lot of time in their best-case scenario. Typically, the programmer ends up reading and debugging the abstraction to make a change, only to find that it wasn't built for their intended use case, and is then faced with the choice of whether to break up the abstraction and rewrite it, favoring idioms, or to perpetuate the disease. Don't misunderstand: I like abstractions, and I think that they can certainly play a big role in solving inefficient workflows; I just think we have to be very deliberate about how we build them, and we probably should plan on investing real time in building good ones.