Improving Metrics With Test-Driven Development (TDD)

There are still many people questioning the value of unit testing, and even more questioning the Test-Driven Development (TDD) approach. In this article, Krzysztof Jelski shares five metrics that should improve when you adopt this agile software development and testing practice.

Author: Krzysztof Jelski, Pragmatists, https://www.pragmatists.pl/

You’ve already heard of Test-Driven Development (TDD). Finally you get down to trying it out for real. You even manage to convince your whole team to adopt it with you. Now you ask yourself how to find out whether TDD actually benefits your team. Let me share with you five metric ideas that you can monitor while adopting TDD.

1. Time spent running and manually testing your application

Coding…
Deploy…(coffee)…
Yup, here it is!
Login: test. Password: test.
Click. Click. Click.
…(page loading)…
Click. Typetypetypetype. Click.
Checking… oh no! The calculations are wrong!
Coding…

Does that look familiar? With TDD you will need a lot less of that. Most of the time you will get enough confirmation from a passing test and you won’t feel the urge to run the app.
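To make the contrast concrete, here is a minimal sketch in Python (the `order_total` function and its discount rule are hypothetical, just standing in for your own production code). A test like this runs in milliseconds and gives you the same confirmation the deploy-and-click-through cycle above was giving you.

```python
import unittest

# Hypothetical price calculation standing in for "the calculations" in the
# story above; your real production code goes here.
def order_total(unit_price, quantity, discount_rate=0.0):
    """Return the order total after applying a percentage discount."""
    return round(unit_price * quantity * (1 - discount_rate), 2)

class OrderTotalTest(unittest.TestCase):
    def test_applies_discount_to_the_whole_order(self):
        # One passing assertion replaces the deploy/login/click-through cycle.
        self.assertEqual(order_total(10.00, 3, discount_rate=0.10), 27.00)

    def test_no_discount_by_default(self):
        self.assertEqual(order_total(10.00, 3), 30.00)

if __name__ == "__main__":
    unittest.main()
```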

Of course, I mean the case of starting with little or no automated testing. Though, from what I have seen so far, consistently practiced test-last development is a very rare animal. Usually developers lack the motivation to add automated tests after the implementation is finished. In other words, even if your team already writes tests, TDD will make them write more tests of better quality, which means reducing the time needed to manually check coding results.

Conclusion: measure the time it takes to run your application and test it manually. With TDD it will decrease.

Fine print: your mileage may vary. The time savings may depend heavily on the technology. For example, for frontend work with tools like LiveEdit, the saving might be negligible. But for mobile apps running on a real device, or for a heavyweight application server, expect it to be really significant.

2. Object-oriented design quality

Adopting Test-Driven Development leads to improvements in design. You can monitor design quality by means of static code analysis. Interestingly, there are scientific experiments showing a statistically significant improvement (reduction) in coupling (specifically, the Coupling Between Objects metric, as defined in the CK metrics suite).

What other metrics can you expect to improve when you apply TDD consistently?
* Cohesion (increase)
* Method length (decrease)
* Cyclomatic complexity (decrease)

Conclusion: install Sonar (https://www.sonarqube.org/) or another static code analysis tool. Observe the key design metrics and watch the Technical Debt gauge go down.
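To illustrate what lower coupling tends to look like in practice, here is a hypothetical Python sketch (the `ReportService` and `Mailer` names are made up). Test-driving a class usually pushes you to hand it its collaborators from the outside, which is exactly what keeps the Coupling Between Objects metric down and makes the class trivially testable.

```python
from typing import Protocol

class Mailer(Protocol):
    """Small abstraction the service depends on, instead of a concrete SMTP client."""
    def send(self, recipient: str, body: str) -> None: ...

class ReportService:
    def __init__(self, mailer: Mailer) -> None:
        self._mailer = mailer  # injected collaborator, easy to fake in tests

    def send_report(self, recipient: str, rows: list[str]) -> None:
        body = "\n".join(rows)
        self._mailer.send(recipient, body)

# In a test, a tiny fake is enough -- no mail server, no framework:
class FakeMailer:
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, recipient: str, body: str) -> None:
        self.sent.append((recipient, body))

def test_report_is_sent_to_the_recipient() -> None:
    mailer = FakeMailer()
    ReportService(mailer).send_report("boss@example.com", ["row 1", "row 2"])
    assert mailer.sent == [("boss@example.com", "row 1\nrow 2")]
```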

3. Code coverage

This metric measures the percentage of your production code that is exercised (run) by the tests. With TDD you can expect it to increase, of course. Remember that a certain level of code coverage is never a target in itself. If you set a strict coverage goal of 80% or 95%, you risk the team gaming the metric. A more sensible use of code coverage is to identify areas of code that need more tests.

But what should you do when you start adopting TDD and you already have a big legacy system with close to zero coverage? Should you monitor whether the coverage increases by 0.001% each week?

Well, no. There is a much better way. What you should monitor is the code coverage for new code. That is, each time you commit code, you count the code coverage only for those production classes that were touched in the commit. This metric should stay at a reasonably high level, close to what you get with TDD in a greenfield project (I would say between 80% and 100%). For a good discussion of this metric, check out the Sonar blog.
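Here is a rough sketch of the idea in Python, assuming you can extract per-file coverage percentages from whatever coverage tool you use (that part is left abstract here). Real implementations, such as the one discussed on the Sonar blog, typically weight by the lines that were actually changed rather than averaging whole files, so treat this as an illustration of the concept only.

```python
import subprocess

def changed_python_files(base: str = "HEAD~1") -> set[str]:
    """Return production .py files touched since `base` (tests excluded)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {
        path for path in out.splitlines()
        if path.endswith(".py") and not path.startswith("tests/")
    }

def new_code_coverage(per_file_coverage: dict[str, float],
                      changed: set[str]) -> float:
    """Average coverage over the files touched in the last commit.

    `per_file_coverage` maps file path -> percent covered; obtaining it
    (e.g. by parsing your coverage tool's report) is up to your toolchain.
    """
    touched = [per_file_coverage[f] for f in changed if f in per_file_coverage]
    if not touched:
        return 100.0  # nothing measurable changed in this commit
    return sum(touched) / len(touched)

if __name__ == "__main__":
    # The coverage numbers below are made up for illustration.
    report = {"billing/invoice.py": 92.0, "billing/tax.py": 78.5}
    print(f"New-code coverage: {new_code_coverage(report, changed_python_files()):.1f}%")
```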

Conclusion: monitor Code Coverage for new code. Expect it to rise after adopting TDD.

4. Bugfixing time

Automated developer tests will catch whole classes of defects. You will stop wasting time on simple calculation mistakes, rounding errors or simple regression bugs. For the whole team, expect a significant reduction in the time spent identifying, tracking and fixing bugs. Of course, there is still room for exploratory testing. Also, don’t expect that you’ll magically stop making mistakes or missing things when specifying how the software should work. But still, if you go from little or no automated testing into the world of TDD, you will save a lot of time on bugfixing.
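As a small illustration of the kind of defect that stops costing you time, here is a hypothetical regression test in Python pinning down a rounding rule (the `split_amount` function is made up). Once such a test exists, that particular bug cannot silently come back.

```python
from decimal import Decimal, ROUND_HALF_UP
import unittest

def split_amount(total: Decimal, parts: int) -> list[Decimal]:
    """Split `total` into `parts` shares, rounded to cents; the rounding remainder goes to the first share."""
    share = (total / parts).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    remainder = total - share * (parts - 1)
    return [remainder] + [share] * (parts - 1)

class SplitAmountRegressionTest(unittest.TestCase):
    def test_shares_always_add_up_to_the_total(self):
        # Pins the behaviour that once broke; re-running this check is free forever.
        total = Decimal("100.00")
        self.assertEqual(sum(split_amount(total, 3)), total)

if __name__ == "__main__":
    unittest.main()
```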

Conclusion: monitor the time your team reports spending on identifying, managing and fixing defects. Expect it to decrease after adopting TDD.

5. Average Lead Time

Lead Time is the time elapsed between starting work on a certain feature and delivering it to the production environment. You can monitor the Lead Time for each particular feature or just take the average.
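In case the arithmetic needs spelling out, here is a tiny Python sketch of the metric itself; the feature start and delivery dates below are made up for illustration.

```python
from datetime import datetime
from statistics import mean

# Hypothetical (started, delivered) timestamps per feature, e.g. exported
# from your issue tracker. Lead time = delivery date minus start date.
features = [
    (datetime(2017, 3, 1), datetime(2017, 3, 9)),
    (datetime(2017, 3, 6), datetime(2017, 3, 10)),
    (datetime(2017, 3, 8), datetime(2017, 3, 20)),
]

lead_times_days = [(done - started).days for started, done in features]
print("Lead times (days):", lead_times_days)                 # [8, 4, 12]
print("Average lead time:", mean(lead_times_days), "days")   # 8 days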

How Test-Driven Development (TDD) improves the lead time metric

When adopting TDD, the Average Lead Time is definitely worth monitoring. Why? Because some TDD benefits show up not at implementation time, but later in your value stream. Other improvements show up on features you implement after a few weeks or months of development. Let me share a few mechanisms that lead to a shorter Average Lead Time with a TDD process in place.

* Less manual testing

With the increased number and quality of automated developer tests, you can expect a reduction in the time spent on manual tests performed by your testers or users.

* Smoother development

Another thing is that improved design quality makes it easier to work with the code in the long run. Developers can be much more efficient at reading and navigating test-driven, high-quality code. You can check out Robert Martin’s Clean Code book for a great discussion of how important it is to create readable code. That leads to loads of saved time after months or years of development.

* Less rework

Test-Driven Development, by requiring developers to specify in detail how the code should work (the automated test cases), also tends to improve the specification skills of the team. The delivery team starts to ask better questions and discuss more specific test cases when talking through what needs to be done with the business expert. And yes, if you thought of BDD (Behaviour-Driven Development) or SBE (Specification by Example), that is exactly the right association. TDD is often the first step towards using examples in communication with business stakeholders. Now, better specifications mean that the team will spend less time on rework or on building things that turn out not to be needed.

Conclusion: Monitor the Average Lead Time for your project (yes, your JIRA can do that). Expect it to go down with TDD.

So, if you want to improve the above metrics in your project and think TDD may be a good way to do it, definitely give it a try!

References

Clean Code: A Handbook of Agile Software Craftsmanship, Robert C. Martin, Prentice Hall

About the author

Krzysztof Jelski manages all the training efforts of Pragmatists, offering software teams a unique training experience in technical agile practices. He has delivered workshops in Test-Driven Development and other skills to more than 300 people over the last 5 years. Krzysztof also helps development teams collaborate more closely with business people. This article was originally published on https://blog.pragmatists.com/5-metrics-that-will-improve-when-your-team-adopts-tdd-acd8f447e21b and is reproduced with permission from Pragmatists.
