Don’t omit tests from coverage

Thursday 29 August 2019

There’s a common idea out there that I want to refute. It’s this: when measuring coverage, you should omit your tests from measurement. Searching GitHub shows that lots of people do this.
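With Python's coverage.py, for example, the configuration that gets copied around tends to look something like this (a hypothetical .coveragerc; the patterns are only illustrative):

```ini
# .coveragerc -- the common pattern this post argues against:
# leave the test code out of measurement entirely.
[run]
omit =
    */tests/*
    test_*.py
```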

This is a bad idea. Your tests are real code, and the whole point of coverage is to give you information about your code. Why wouldn’t you want that information about your tests?

You might say, “but all my tests run all their code, so it’s useless information.” Consider this scenario: you have three tests written, and you need a fourth, similar to the third. You copy/paste the third test, tweak the details, and now you have four tests. Except oops, you forgot to change the name of the test.

Tests are weird: you have to name them, but the names don’t matter. Nothing calls the name directly. It’s really easy to end up with two same-named tests. Which means you only have one test, because the new one silently overwrites the old. Coverage would alert you to the problem: the overwritten test’s body shows up as never executed.
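Here’s a minimal sketch of how that plays out (the test class and the parse_date module are made up):

```python
import unittest

from myproject import parse_date  # hypothetical code under test


class TestParseDate(unittest.TestCase):
    def test_parse_date(self):
        self.assertEqual(parse_date("2019-08-29").year, 2019)

    # Copy/pasted from above with the details tweaked, but the name
    # was accidentally kept.  This definition silently replaces the
    # one above, so the first test never runs.
    def test_parse_date(self):
        self.assertEqual(parse_date("2019-08-29").month, 8)
```

If the test file is included in coverage, the body of the first test_parse_date shows up as unexecuted, which is the tip-off.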

Also, if your test suite is large, you likely have helper code in there as well as straight-up tests. Are you sure you need all that helper code? If you run coverage on the tests (and the helpers), you’d know about some weird clause in there that is never used. That’s odd, why is that? It’s probably useful to know. Maybe it’s a case you no longer need to consider. Maybe your tests aren’t exercising everything you thought.
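As a sketch of the kind of thing that turns up, imagine a fixture-loading helper like this (entirely hypothetical):

```python
import json
import xml.etree.ElementTree as ET


def load_fixture(path):
    """Load a fixture file for tests to make assertions against."""
    if path.endswith(".json"):
        with open(path) as f:
            return json.load(f)
    elif path.endswith(".xml"):
        # Coverage on the test suite would show this branch never runs:
        # maybe nothing uses XML fixtures anymore.
        return ET.parse(path)
    raise ValueError(f"Unsupported fixture type: {path}")
```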

The only argument against running coverage on tests is that it “artificially” inflates the results. True, it’s much easier to get 100% coverage on a test file than a product file. But so what? Your coverage goal was chosen arbitrarily anyway. Instead of aiming for 90% coverage, you should include your tests and aim for 95% coverage. 90% doesn’t have a magical meaning.
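If you enforce that goal with a coverage.py threshold, including the tests just means bumping the number; the exact value here is only an example:

```ini
# .coveragerc
[report]
fail_under = 95
```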

What’s the downside of including tests in coverage? “People will write more tests as a way to get the easy coverage.” Sounds good to me. If your developers are trying to game the stats, they’ll find a way, and you have bigger problems.

True, it makes the reports larger, but if your tests are 100% covered, you can exclude those files from the report with the [report] skip_covered setting.
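In coverage.py terms, that’s one setting (or the equivalent --skip-covered option on the coverage report command):

```ini
# .coveragerc
[report]
skip_covered = True
```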

Your tests are important. You’ve put significant work into them. You want to know everything you can about them. Coverage can help. Don’t omit tests from coverage.

Comments

In my experience, it's the tests that don't get to 100% coverage, as any code written to provide additional diagnostics on test failures (e.g. evaluating some expression to produce a message to accompany a failed assertion) won't be exercised in a successful build.

Which, I guess, means writing tests for the tests...

If you run flake8/pylint on your tests, you will also see mistakes like same-named tests.

@Steve the tests not evaluating every branch because not all are expected to run unless there's a failure is *exactly* what "pragma: no cover" is for.

You absolutely do not write tests for tests; when you have stuff you don't expect to run, you tell coverage it won't run :)

I confess that until I read this post I was one of the guilty ones. My test runner script has now been updated to use --skip-covered.

A nice side-effect is having the report output get cleaner and cleaner (we use the text report from the command line as part of our workflow) as our tests get better coverage. Thank you!

It's a little weird for the top 90% of the article to be language-agnostic, and then to throw in a Python-specific answer-to-a-caveat near the end.
