Thursday 29 August 2019
There’s a common idea out there that I want to refute. It’s this: when measuring coverage, you should omit your tests. Searching GitHub shows that lots of people do this.
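If you’ve seen this on GitHub, it usually looks something like this hypothetical .coveragerc, using coverage.py’s [run] omit setting to leave out everything under a tests/ directory (the path is just an illustration):

    [run]
    omit =
        tests/*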
This is a bad idea. Your tests are real code, and the whole point of coverage is to give you information about your code. Why wouldn’t you want that information about your tests?
You might say, “but all my tests run all their code, so it’s useless information.” Consider this scenario: you have three tests written, and you need a fourth, similar to the third. You copy/paste the third test, tweak the details, and now you have four tests. Except oops, you forgot to change the name of the test.
Tests are weird: you have to name them, but the names don’t matter. Nothing calls the name directly. It’s really easy to end up with two tests with the same name, which means you really only have one test, because the new definition overwrites the old. Coverage would alert you to the problem: the overwritten test’s body never runs, so its lines show up as unexecuted.
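Here’s a minimal sketch of how that plays out with unittest-style tests (the names and numbers are invented). The copied test keeps the old name, so the second definition silently replaces the first, and coverage flags the first body as never executed:

    import unittest

    class TestAddition(unittest.TestCase):
        def test_small_numbers(self):
            self.assertEqual(1 + 1, 2)

        def test_small_numbers(self):       # copy/pasted, name never changed:
            self.assertEqual(10 + 10, 20)   # this definition replaces the one above.

    # Only one test actually runs. A coverage report over this file shows the
    # first test's body as unexecuted, which is the tip-off.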
Also, if your test suite is large, you likely have helper code in there as well as straight-up tests. Are you sure you need all that helper code? If you run coverage on the tests (and the helpers), you’d know about some weird clause in there that is never used. That’s odd: why is that? It’s probably useful to know. Maybe it’s a case you no longer need to consider. Maybe your tests aren’t exercising everything you thought.
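As a sketch, suppose a made-up helper like this lives in your test suite. Coverage over the tests would show the admin branch as never taken, and that’s exactly the kind of question worth asking:

    def make_user(name, admin=False):
        """Hypothetical test helper: build a user record for test cases."""
        user = {"name": name, "role": "member"}
        if admin:
            # If no test passes admin=True any more, coverage reports this
            # line as never run. Dead weight, or a missing test?
            user["role"] = "admin"
        return user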
The only argument against running coverage on tests is that it “artificially” inflates the results. True, it’s much easier to get 100% coverage on a test file than a product file. But so what? Your coverage goal was chosen arbitrarily anyway. Instead of aiming for 90% coverage, you should include your tests and aim for 95% coverage. 90% doesn’t have a magical meaning.
What’s the downside of including tests in coverage? “People will write more tests as a way to get the easy coverage.” Sounds good to me. If your developers are trying to game the stats, they’ll find a way, and you have bigger problems.
True, it makes the reports larger, but if your tests are 100% covered, you can exclude those files from the report with the [report] skip_covered setting.
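For example, in a .coveragerc:

    [report]
    skip_covered = True

The same thing is available as --skip-covered on the coverage report command line.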
Your tests are important. You’ve put significant work into them. You want to know everything you can about them. Coverage can help. Don’t omit tests from coverage.
Comments
Which, I guess, means writing tests for the tests...
You absolutely do not write tests for tests. When you have stuff you don't expect to run, you tell coverage it won't run :)
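In coverage.py that usually means a pragma comment on the line you don't expect to run; here's a tiny illustrative snippet with made-up names:

    if rare_debug_mode:        # pragma: no cover
        dump_internal_state()  # never expected to run; the pragma excludes it from the report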
A nice side-effect is that the report output (we use the text report from the command line as part of our workflow) gets cleaner and cleaner as our tests get better coverage. Thank you!