Iterative Development and the Efficiency Gap

Jenny and I were talking yesterday about short, time-boxed releases. Breaking a project into short, frequently delivered releases is a technique that has been gaining in popularity lately. Agile methodologies like Scrum and XP rely on it, but it can be found as phased releases in traditional development shops as well. It's clear why it's popular with clients: working software is delivered frequently, and working software is an intuitive and satisfying measure of progress. And there are definite advantages to iteration from the development perspective. It's easier to respond to changes in scope and requirements. Project planning is easier as well: instead of estimating how long the whole project will take, the team can select a subset of the scope that will fit into each iteration. And, of course, iteration planning works very well with other techniques like continuous integration, refactoring and test-driven development, all of which can be explicitly planned for within the release cycle.

But is there a cost?

Jenny pointed out to me that there is one, and I think she's right. Even when the feature set of the project can be neatly divided (and not every project can be broken into time-boxed releases), each release adds overhead. Each release must be planned. The team needs to package and deploy the software. The release needs to be wrapped up, and the team needs to get up to speed on the next one. And while the development team may be able to handle all of that relatively seamlessly, users tend to move more slowly and be less adaptive or responsive to change.

The problem is compounded if there is a software testing component. Any time the development team touches existing code, the testers need to run regression tests to make sure the existing behavior is intact. And an iteration that is relatively easy on the developers isn't necessarily quick for the testers: small changes scattered across many parts of the software can require a large testing effort. This is why iteration can mean a great deal of rework for the test team.

Any time a project is broken into a set of time-boxed release cycles, there's going to be an "efficiency gap": the difference between the effort required to develop the project all at once and the larger effort required to build and deliver the same software in small, iterative phases. Whether to apply iterative development to a particular project can then come down to whether the cost of iteration or phase planning is offset by the gain in flexibility and stakeholder perception.
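Put as a back-of-the-envelope formula (my notation, nothing standard), where \(E_{\text{iter}}\) is the total effort summed over all the phased releases and \(E_{\text{once}}\) is the effort to build the same scope in a single pass:

\[
\text{efficiency gap} = E_{\text{iter}} - E_{\text{once}}, \qquad E_{\text{iter}} \ge E_{\text{once}}
\]

Iteration pays off when the value of the flexibility and visibility it buys exceeds this gap.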

So can this efficiency gap be measured? It can't be measured directly (the same project can't be built both ways, so one of the two efforts is always hypothetical), so it would be necessary to come up with a proxy for it. One proxy might be regression defects: track the effort, or the lines of code, required to repair defects discovered by unit or functional tests that passed in previous releases. Another would be to compare the number of lines of code added to the number of lines modified or deleted; the higher this ratio, the less overlap between releases. These metrics could be useful for determining exactly how long a phase should be: if the iterations look inefficient, it might make sense to add more scope and time to each iteration. If that improves efficiency, the team can keep adjusting the iteration size until it finds a good "sweet spot".
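As a concrete illustration of the second proxy, here's a minimal sketch, assuming the project's history lives in git and that each release is marked with a tag (the tag names below are hypothetical). Since a modified line shows up in a diff as one deletion plus one insertion, the deletion count serves as a rough stand-in for "lines modified or deleted":

```python
import subprocess

def diff_stats(old_ref, new_ref):
    """Return (lines_added, lines_removed) between two git refs."""
    out = subprocess.run(
        ["git", "diff", "--numstat", old_ref, new_ref],
        capture_output=True, text=True, check=True,
    ).stdout
    added = removed = 0
    for line in out.splitlines():
        a, r, _path = line.split("\t", 2)
        if a != "-":  # binary files report "-" for both counts; skip them
            added += int(a)
            removed += int(r)
    return added, removed

# Hypothetical release tags marking the previous and current iteration.
added, removed = diff_stats("release-1.0", "release-1.1")
ratio = added / removed if removed else float("inf")
print(f"added-to-changed ratio for this iteration: {ratio:.2f}")
```

A ratio well above 1 suggests the iteration was mostly new code; a ratio near or below 1 suggests heavy churn in existing code, which is exactly the rework the test team ends up paying for. Tracked over several releases, it gives a crude trend line for experimenting with iteration length.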
