For modern software development I’ve been working on some key metrics that seem to correlate with high-quality and profitable software.
List of Metrics
- Number of customer reported bugs
- Automated test code coverage
- Completed user stories
- Code analysis warnings
- Changesets per day
- Regressive bugs
- Number of broken builds
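To make these concrete, here is a minimal sketch (in Python, with field names and sample values of my own invention, not taken from any particular project) of how you might capture the metrics once per release so you can watch the trends over time:

    from dataclasses import dataclass

    @dataclass
    class ReleaseMetrics:
        customer_reported_bugs: int
        test_coverage_pct: float
        completed_user_stories: int
        code_analysis_warnings: int
        changesets_per_day: float
        regressive_bugs: int
        broken_builds: int

    # Hypothetical numbers for one release cycle.
    current = ReleaseMetrics(
        customer_reported_bugs=0,
        test_coverage_pct=96.5,
        completed_user_stories=3,
        code_analysis_warnings=0,
        changesets_per_day=3.5,
        regressive_bugs=0,
        broken_builds=0,
    )
    print(current)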
Profitability
There are two components to profitability: how much money you make (revenue) and how much you spend (expenses). These metrics contribute positively to both. On the revenue side in particular, I would also encourage early and frequent feedback from customers through Customer Preview Programs.
Revenue
So how do these metrics improve revenue? Obviously, software with few customer-reported bugs is more attractive to users. I’ve seen software that was so buggy that customers switched to a different solution. I’ve also been on the other side, where the software was of such high quality that customers bought it in large part because it “just worked”.
In addition to the stability of the software, velocity helps too. You don’t want to nag your customers with daily updates, but you also don’t want to wait 18 months between releases, particularly for fairly new software. High code coverage, numerous completed user stories, and frequent changesets all contribute to higher release velocity. You can release more frequently if your test cycles are short, and there’s a reason to ship more frequently if you’ve added the top-priority user stories from your backlog.
Expense
On the expense side, good performance on all of these metrics keeps costs down.
If you have few customer-reported bugs, you’re spending your time on user stories and not on bugs.
Automated test coverage reduces testing time and prevents the injection of new bugs, or at least detects them at the earliest possible moment: before you even check in your code (if you’re running all your tests as you should). Automated test coverage also reduces the staffing level needed for testing. Domain experts can spend their time determining whether the features work as desired (usability and usefulness) rather than looking for bugs (cases where the software didn’t do what the user expected).
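For example, a tiny pre-commit hook can enforce the “run all your tests before you check in” habit. This is just a sketch; it assumes pytest is your test runner and that the script is installed as .git/hooks/pre-commit:

    #!/usr/bin/env python3
    # Sketch of a pre-commit hook: run the full automated suite and
    # block the checkin if anything fails (assumes pytest).
    import subprocess
    import sys

    result = subprocess.run(["pytest", "-q"])
    if result.returncode != 0:
        print("Tests failed - fix them before checking in.")
        sys.exit(1)  # a non-zero exit aborts the commit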
If you can complete user stories quickly, that reduces cost. That is, the cost per user story is low.
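As a back-of-the-envelope example (all figures here are hypothetical), the arithmetic looks like this:

    # Hypothetical numbers: 5 people at $3,000/week completing 3 stories.
    weekly_team_cost = 5 * 3000
    stories_per_week = 3
    cost_per_story = weekly_team_cost / stories_per_week
    print(f"Cost per user story: ${cost_per_story:,.0f}")  # $5,000

Complete more stories with the same team and the cost per story drops.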
Keeping code analysis warnings at zero helps prevent bugs too.
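One way to hold that line is to fail the build whenever the analyzer reports anything. The sketch below uses pyflakes and a src/ directory purely as an example; substitute whatever analysis tool and paths your stack uses:

    # Sketch of a CI gate: zero code analysis warnings or the build fails.
    import subprocess
    import sys

    result = subprocess.run(["pyflakes", "src/"], capture_output=True, text=True)
    warnings = [line for line in result.stdout.splitlines() if line.strip()]
    if warnings:
        print(f"{len(warnings)} code analysis warnings - the target is zero:")
        print("\n".join(warnings))
        sys.exit(1)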
Changesets per day usually correlates with the completion of user stories.
A regressive bug means the software used to work, but implementing new functionality broke something. Fixing it takes time, and that time increases the expense of the project just to get back to where you were.
And finally, if your build is always broken, it is hard to ship, and a broken build usually prevents developers from being productive.
Targets
So those are the metrics, but what are good targets for them? The answer may vary with the size of your project and development team, but here is a rough guide.
Customer Reported Bugs – I’ve seen sizable projects with only 1 or 2 customer-reported bugs after several years, and I’ve seen at least 10 such projects, so this is not a crazy target. Your target should be zero, but a small number shouldn’t deflate you.
Automated Test Coverage – Almost all of those 10 projects had greater than 95% code coverage. Obviously strive for 100%, but there are times when you just can’t test a few lines.
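If you want the build to hold you to that number, most coverage tools can enforce a floor. A minimal sketch, assuming pytest with the pytest-cov plugin and a src/ layout:

    # Fail the build if coverage drops below 95% (assumes pytest-cov).
    import subprocess
    import sys

    result = subprocess.run(["pytest", "--cov=src", "--cov-fail-under=95"])
    sys.exit(result.returncode)  # non-zero when coverage is under the floor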
Completed User Stories – This is much harder to set a target for. What is the size of your user stories? What is the size of your team? If you simply look at the user stories you complete yourself, you remove one of those factors. I feel pretty good if I can complete 1-3 user stories each week; I don’t feel good if I complete 0.
Code Analysis Warnings – Zero!
Changesets per Day – The most productive and highest-quality code I’ve seen is usually written by developers who have 3 to 4 checkins per day. I seem to run across 3 groups (a rough way to measure this from your own history is sketched after this list):
- Developers that break code into small testable pieces that follow the SOLID principles and have 3-4 checkins each day.
- Developers that have 2-3 checkins per week.
- Developers that have 1 or 2 checkins per month.
I once saw several developers who would check in code about once every 2 to 3 months – or about 4 to 6 times per year!
Now don’t “Check in just to check in”. But as you build your code base and keep it high quality, it should become easier and easier to add new functionality more frequently.
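If you’re curious where you fall, here is a rough sketch of measuring changesets per active day from git history (the 30-day window is arbitrary, and other version control systems would need a different log command):

    # Count commits per calendar day over the last 30 days.
    import subprocess
    from collections import Counter

    dates = subprocess.run(
        ["git", "log", "--since=30 days ago", "--pretty=%ad", "--date=short"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    per_day = Counter(dates)  # commits grouped by YYYY-MM-DD
    if per_day:
        average = sum(per_day.values()) / len(per_day)
        print(f"Average changesets per active day: {average:.1f}")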
Regressive Bugs – Zero! A regression is usually a sign that you’re missing some critical automated tests. It’s not much fun to go back and get your code to do what it used to do; the whole purpose of agile software development is to keep moving forward.
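Each regression you do hit is a candidate for a permanent test. Here is a minimal sketch of pinning a fixed bug so it can’t silently come back; the discount function and the bug it references are made up for illustration:

    # Regression test pinning a (hypothetical) fixed bug.
    import unittest

    def apply_discount(total, percent):
        # Bug (hypothetical): discounts over 100% used to produce negative totals.
        return max(total - total * min(percent, 100) / 100, 0.0)

    class RegressionTests(unittest.TestCase):
        def test_discount_never_goes_negative(self):
            # Reproduces the original report; fails if the old behaviour returns.
            self.assertEqual(apply_discount(50.0, 150), 0.0)

    if __name__ == "__main__":
        unittest.main()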
Number of Broken Builds – Zero! That’s harder, I think, than zero code analysis warnings, but I hope the build is never broken for more than 1 hour. (Which of course means you keep your code lean so your build can run in less than 1 hour.) This is much easier to accomplish if you are checking code in every few hours; the most you’ll lose if you have to back out a change is a few hours’ work.
I’d be interested to hear if you have other metrics for high-quality software development.