There is reassuringly steady progress in software development productivity. In only a few short years, what counts as acceptable development output has changed dramatically. The usual culprits are:
- faster hardware and networks, which lead to
- better programming languages, tools, and platforms, which produce
- better frameworks, libraries, services, and processes.
The core idea behind all this is that each layer improves independently, and the layers stack and compound to give programmers more power with less effort. But as I look back over the decade-plus I’ve been in software professionally, and review the available evidence from the decades before that, there weren’t actually many inflection points. Some examples:
- Managed code – huge win over unmanaged code like C++, freeing most developers of the tedious and error-prone task of managing memory
- Improved paradigms – like functional programming, that provide a lot of expressive power
- Cloud platforms and the browser as an execution environment – groundbreaking, changing the deployment and delivery of most software
- jQuery, the JVM, and other abstractions – abstracting multiple target platforms behind a single API
- Package managers – finally! A way to deliver on the promise of open source and VCS to allow developers to continuously invest in code as a global community
- Ruby on Rails – dramatically sped up development by reducing boilerplate, spawning clones in many languages, and inspiring features in many other frameworks
- React.js – rewrote the game in terms of complex user-facing UI
I watched at least one variation of the last five play out during my career, so I (eventually) know something earth-shattering when I see it. But one thing is pretty obvious: these revolutions in productivity only happen once every few years — we might only see a new, original idea two or three times a decade — and they make a big, sustained difference to the industry. In the regular day-to-day, improvements are incremental.
OK. This seems obvious, and a bit banal. Why is this important to recognize? Because once you say all this out loud, it should make you suspicious every time you hear “we switched to this new (framework|database|language) and it’s incredible how much more productive we are”. I hear this a lot.
We delude ourselves and each other into thinking each hot new framework will revolutionize things, but the math rarely adds up. Maybe the people saying these things are lying, or have ulterior motives like trying to sell a service or product. But more commonly, I think something else is going on, something that doesn’t get highlighted often enough: if you’re writing or rewriting something from scratch, you’ve removed a bunch of constraints, so of course you’re going to be more productive.
So in reality you’re often comparing “maintaining a production instance in framework A” with “writing a new replacement in framework B”. Of course framework B feels more productive, but that often has nothing to do with the qualities of B, and everything to do with starting from scratch in B.
And we all know almost all developers love greenfield work, because there are wayyy fewer constraints. You can make a bunch of choices without having to accommodate legacy designs, code, or features. Productivity is off the charts and each new feature flows out almost effortlessly, because the number of codepaths and feature interactions hasn’t yet exploded. That’s the magic of greenfield.
If you know that removing constraints can help you go faster, you can use this to your advantage. At Privy, we recently removed a bunch of features from our service. Overall, it was pretty awesome, and since then we’ve been able to ship features that we couldn’t ship before, or that wouldn’t have made sense before because they would’ve taken too much work or too much time.
How and why we removed features
Usually one of two ways:
One: In the middle of a major refactoring or rewrite, someone would encounter a (usually small) feature and ask “do we still need this?”
- Sometimes we looked at the data or talked it over with the business team, confirming that the feature was mostly unused. Ripping it out was generally easy; in fact, many of the features didn’t have proper or updated documentation, and nobody noticed or complained when they disappeared. Many of these were “flair” or visibility/testing features that helped a lot during development, and made some larger feature seem more complete, but weren’t used much in practice. Saying no was usually the “easy” thing to do — often the feature had no owner or sponsor, and removing it would save time on a project that was probably behind schedule.
- “Yes, we need this feature.” This was very rare; I think the engineering team is pretty good at intuiting when rewriting a feature would cost more than the value it provides, so those are the cases that get discussed the most. We’ve been taking steps to sharpen product engineering’s sense in this area even further.
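The “look at the data” step above can be sketched as a quick usage audit. This is a minimal, hypothetical sketch — the event tuples, table shape, and thresholds are illustrative assumptions, not our actual analytics schema:

```python
from datetime import datetime, timedelta

def rarely_used_features(events, total_users, days=90, threshold=0.01):
    """Flag features used by fewer than `threshold` of users recently.

    `events` is an iterable of (feature_name, user_id, timestamp) tuples --
    a stand-in for whatever analytics store you actually query. Note that
    features with zero recent events won't appear here at all; a real audit
    should check the full feature list against this result.
    """
    cutoff = datetime.now() - timedelta(days=days)
    users_per_feature = {}
    for feature, user, ts in events:
        if ts >= cutoff:
            # Count distinct users, not raw events, so one power user
            # hammering a feature doesn't make it look widely adopted.
            users_per_feature.setdefault(feature, set()).add(user)
    return sorted(
        feature
        for feature, users in users_per_feature.items()
        if len(users) / total_users < threshold
    )
```

The point of counting distinct users rather than events is that removal decisions hinge on how many customers you’d upset, not how noisy the feature’s telemetry is.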
Two: We started planning a change or new feature, and during diligence uncovered a bunch of work that would need to be done to continue supporting the old feature:
- Often, long discussions would ensue about how to work around the legacy code and its assumptions, since legacy features uncovered early tend to be larger and more visible. Immense effort sometimes went into these workarounds, slowing down the new, as-yet-unbuilt feature.
- Other times, as happened recently, we revisited legacy features for the nth time and finally decided to cut them out. In retrospect, I wish we had done this sooner. Removing features that add negative value is an investment in future velocity, and one of the few “pure wins with no downsides” you can make.
In the end, we went as far as to shut down an entire legacy product embedded in our current product, which was definitely a strange experience. Knowing that sunsetting the legacy product would mean a specific dollar amount of revenue churn (and in some cases, refunding annual commitments) held me back from making the right call earlier. But this is almost pure sunk-cost fallacy: we all know that the effort to maintain that revenue would cost us much more future revenue in misdirected effort; there’s no reason to bias toward the existing code and commit to maintaining it.
Any proper startup should be running a healthy number of experiments and not all of them will pan out. Kill features to move faster.
Often in greenfield projects you can even make mistakes, and someone else will pay for them later. But if you never have to pay for those mistakes, you short-circuit your learning cycles and important developer instincts. Having to maintain software is the best way to understand the pain of unmaintainable software designs.