I’m often asked how much testing is enough. The answer is frustratingly nuanced: it depends. It depends on your downside risk; it depends on your stability requirements; it depends where in the dependency hierarchy the tested code lives. And on and on.
I’ve begun to realize there’s one place where I can give simple advice that works almost all the time: write tests before fixing bugs.
I’ve never done TDD, and I doubt most engineers have – it requires precise specifications and interfaces to be defined before the code is written. TDD practitioners explain that this is the point of TDD. But requirements are emergent and often hidden, and even in the best of times implementation details force changes to even well-constructed plans.
When building a class for the first time, testing the interface beforehand is a chore, because we might have to change that interface during integration just to get things working end-to-end – which forces us to rewrite the tests. And what if the requirements weren’t clear? Updating tests as you iterate on v1 is a self-imposed tax. We don’t even know what to focus the tests on yet, since we have no experience to tell us where the bugs lurk.
This isn’t true when fixing bugs. The requirements and expected behavior are clear – that’s why the bug report was written up. It has already shone a spotlight on a case that was missed in QA or requirements gathering. In other words, the bug report is valuable information – confirmation, even – that a test is needed, and that it is missing.
So now I write a failing test, watch it fail, then fix the bug. Unlike TDD:
- The test has immediately provided value: it confirmed the existence of a bug, and now covers a case that was demonstrably missed by existing processes and QA.
- It will provide ongoing value in the future: it ensures this specific bug can’t be reintroduced (because we already proved to ourselves the test will fail if that happens).
- It is likely to cover an important failure mode we actually care about. After all, if nobody cared, a bug wouldn’t have been filed.
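The workflow above can be sketched in a few lines. This is a minimal illustration, not anyone’s real code: `slugify` is a hypothetical function, and the bug report is invented (“consecutive spaces produce doubled hyphens”).

```python
import re

def slugify(text: str) -> str:
    # Hypothetical buggy v1 replaced each space individually, so runs
    # of spaces became runs of hyphens:
    #   return text.strip().lower().replace(" ", "-")
    # The fix below was written only AFTER the test was seen to fail:
    return re.sub(r"\s+", "-", text.strip().lower())

def test_slugify_collapses_consecutive_spaces():
    # Written first, straight from the bug report. It failed against
    # the buggy implementation, confirming the report; now it guards
    # against the bug being reintroduced.
    assert slugify("hello  world") == "hello-world"
```

The key step is running the new test against the unfixed code and watching it go red – that is the proof the test actually captures the reported failure.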
It takes a lot of judgment to know how much testing is enough. So it’s refreshing to have an easy testing rule to follow: if it’s buggy, write a test before you fix it.
It’s much better, in my opinion, to add most code coverage as v1 starts to integrate and we begin to have reasonable confidence in the interfaces. Even then, I like to focus my time on the tricky and complex cases, and give the simple classes quick-and-dirty coverage with an 80/20 mentality. The goal is that when there is a bug, the bug fixer can just add a new case to some existing tests, versus creating 800 lines of setup code and tests to reproduce the failure. In real life, deferring tests until later in a project paints them as juicy targets to cut when schedules slip, but that’s for another story…
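That “add one case, not 800 lines of setup” goal often falls out of table-driven tests. A hypothetical sketch (`normalize_phone` and the cases are invented for illustration):

```python
def normalize_phone(raw: str) -> str:
    """Hypothetical helper: keep only the digits of a phone number."""
    return "".join(ch for ch in raw if ch.isdigit())

# A table of input/expected pairs. When a bug is reported, reproducing
# it is one appended row, not a fresh test file full of setup code.
CASES = [
    ("555-867-5309", "5558675309"),
    ("(555) 867 5309", "5558675309"),
    ("+1 555 867 5309", "15558675309"),  # appended from a bug report
]

def test_normalize_phone():
    for raw, expected in CASES:
        assert normalize_phone(raw) == expected
```

The table is the cheap part; the judgment call is keeping the shared setup small enough that appending a row stays trivial.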