what do you do with your code coverage results?

I was reading Dave Laribee's latest post "Code Coverage: what is it good for?" and there's a great tidbit in there:

while code coverage isn't proof that our testing efforts will yield higher maintainability, it does tell us a team's commitment to a test practice.

This is exactly what I believe is the main benefit of having an agreed code coverage metric in a team - ensuring a continued commitment to test practices over the life of a project. In a test-driven environment it highlights when test-first isn't being followed, or when the tests aren't covering all of the production code being written.
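
As a concrete illustration - a minimal sketch of my own, assuming a Python project using coverage.py, not anything prescribed in Dave's post - an agreed figure can be turned into a build-breaking check with the fail_under setting in a .coveragerc file:

    # .coveragerc - hypothetical enforcement of an agreed coverage figure
    [report]
    # coverage report exits non-zero (failing the build) when total
    # coverage falls below this percentage
    fail_under = 90

Run coverage report as part of the build and any drop below the agreed figure breaks it, which keeps the commitment visible rather than relying on people remembering to check.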

Once that commitment is in place, I believe code coverage results can also help you grow as a developer - reviewing where you often slip from pure test-first development and leave areas of implementation untouched by a test can help you avoid doing so in the future.

100% coverage isn't unachievable in a test-first environment and is something I think you can push for, but it does raise the question of how much benefit you gain from the extra work of filtering out agreed edge cases and exceptional circumstances (see the sketch after this paragraph).
However, if you want everybody to be purely test-first, with no production code left uncovered by a test, then I think 100% is the only goal you can aim for.
It seems to me that any other target number is pretty arbitrary once you accept that you can't achieve 100% - it just becomes a stake in the ground that people can dance around.
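
Most coverage tools also let you mark those agreed exclusions explicitly rather than chasing them with tests. A minimal sketch, again assuming coverage.py on a Python codebase (the function and scenario are hypothetical):

    def load_config(path):
        """Return the contents of a config file, or None if it is missing."""
        try:
            with open(path) as f:
                return f.read()
        except IOError:  # pragma: no cover - agreed exceptional case, excluded from the metric
            return None

The pragma removes the excluded branch from the denominator, so the team can still aim for a meaningful 100% on everything it has agreed should be tested.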

Are the developers performing test-driven development, and are they doing it consistently?
That's really all you need from the metric.


