You're Doing Agile Wrong

Agile is the hip new software development methodology. Everyone from Fortune 100 firms to doughnut shops is trying to implement it these days. When even a US Department of Defense contractor is trying to become agile, a serious paradigm shift away from the traditional waterfall approach must be underway, presumably because agile is better. Or is it?

The problem with agile is that most people don’t understand it. It’s commonly interpreted to mean:

  • less emphasis on requirements
  • less design
  • more coding, sooner
  • shorter milestones

Only the last of these vaguely resembles what agile is trying to accomplish. Let’s begin by looking at two graphs of time spent on each major phase (graphs from David Longstreet’s “Agile Method and Other Fairy Tales”):

Of note, the best companies don’t code until 64% of the way into a project.

As you can see, the best-in-class companies spent two and a half times as much time on requirements as the worst-in-class companies, and more than twice as much time on design. They spent roughly half as much time coding, and half as much testing.

However, these graphs alone don’t tell us much – only that the best companies did more requirements and design work than the worst companies. How did these two sets actually perform?

It turns out the best companies spent close to 7 hours per feature point, compared to almost 65 hours per feature point at the worst companies. That’s more than a nine-fold productivity gap between the best and worst companies.
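The arithmetic behind that claim is simple enough to check directly (the 7 and 65 hours-per-feature-point figures are taken from the comparison above):

```python
best_hours_per_fp = 7    # best-in-class companies, from the figures above
worst_hours_per_fp = 65  # worst-in-class companies

gap = worst_hours_per_fp / best_hours_per_fp
print(f"Productivity gap: {gap:.1f}x")  # roughly 9.3x
```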

Can this productivity gap really be attributed to the development process alone? After all, correlation is not sufficient to prove causation, right? Recall that the worst teams spent twice as much of their overall development effort on coding and testing as the best teams, reflecting a proportionally smaller investment in requirements and design. To answer the question, we need to look at how much it costs to discover requirements earlier versus later. That is, how much would it cost to implement a requirement discovered during testing versus one discovered during the requirements elicitation phase?

It turns out it costs ten times as much to implement the same feature point when it is discovered during testing as when it is discovered during requirements. If we now revisit our first two graphs, we can conclude with high confidence that the worst-performing companies are performing badly precisely because they are misallocating their development time. In fact, we can estimate how far from optimal a company or team is by using the “dollars per FP” chart in conjunction with a report of the phase in which each FP requirement was discovered there.
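That estimation procedure can be sketched in a few lines. Note this is an illustrative sketch only: the 10x ratio between testing and requirements comes from the discussion above, but the intermediate multipliers for design and coding, and the sample discovery counts, are invented for demonstration.

```python
# Relative cost to implement a feature point (FP), by the phase in
# which its requirement was discovered. Only the 10x testing-vs-
# requirements ratio is from the article; the intermediate values
# are assumed placeholders.
COST_MULTIPLIER = {
    "requirements": 1.0,
    "design": 2.0,    # assumption, not from the article
    "coding": 5.0,    # assumption, not from the article
    "testing": 10.0,  # from the article: 10x requirements
}

def excess_cost_ratio(fp_by_phase):
    """Actual relative cost divided by the optimal cost
    (i.e. if every FP had been discovered during requirements)."""
    total_fps = sum(fp_by_phase.values())
    actual = sum(COST_MULTIPLIER[phase] * count
                 for phase, count in fp_by_phase.items())
    optimal = COST_MULTIPLIER["requirements"] * total_fps
    return actual / optimal

# Hypothetical team that discovers half its requirements in testing:
sample = {"requirements": 10, "design": 5, "coding": 10, "testing": 25}
print(f"{excess_cost_ratio(sample):.1f}x the optimal cost")  # 6.4x
```

A team that discovered every requirement up front would score 1.0x; the further the ratio climbs above that, the more the team is paying for late discovery.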

[to be continued]
