A long-awaited feature of coverage.py is now available in a rough form: Who Tests What annotates coverage data with the name of the test function that ran the code.
To try it out:
- Install coverage.py v5.0a3.
- Add this line literally to the [run] section of your .coveragerc file:

      dynamic_context = test_function

- Run your tests.
- The .coverage file is now a SQLite database. There is no change to reporting yet, so you will need to do your own querying of the SQLite database to get information out. See below for a description of the database schema.
The database can be accessed in any SQLite-compatible way you like. Note that the schema is not (yet) part of the public API. That is, it may not be guaranteed to stay the same. This is one of the things yet to be decided. For now though, the database has these tables:
- file: maps full file paths to file_ids: id, path
- context: maps contexts (test function names) to context_ids: id, context
- line: the line execution data: file_id, context_id, lineno
- arc: similar to line, but for branch coverage: file_id, context_id, fromno, tono
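To get a feel for the four tables, here's a small sketch in Python that counts the rows in each one. It assumes the schema described above; since the schema is not (yet) part of the public API, a missing table is reported as None rather than raising.

```python
import sqlite3

def dump_schema(db_path=".coverage"):
    """Return a row count for each table described above.

    Assumes the v5.0a3 schema (file, context, line, arc); since the
    schema is not a public API, a missing table maps to None.
    """
    con = sqlite3.connect(db_path)
    counts = {}
    for table in ("file", "context", "line", "arc"):
        try:
            (n,) = con.execute(f"select count(*) from {table}").fetchone()
        except sqlite3.OperationalError:
            n = None  # table missing: schema may differ in your version
        counts[table] = n
    con.close()
    return counts
```

Running this against a branch-coverage run should show rows in "arc" and none in "line", and vice versa for a run without branch coverage.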
It’s not the most convenient, but the information is all there. If you used branch coverage, then the important data is in the “arc” table, and “line” is empty. If you didn’t use branch coverage, then “line” has data and “arc” is empty. For example, using the sqlite3 command-line tool, here’s a query to see which tests ran a particular line:
sqlite> select distinct context.context from arc, file, context
...> where arc.file_id = file.id
...> and arc.context_id = context.id
...> and file.path like '%/xmlreport.py'
...> and arc.tono = 122;
BTW, there are also “static contexts” if you are interested in keeping coverage data from different test runs separate: see Measurement Contexts in the docs for details.
Some things to note and think about:
- The test function name recorded includes the test class if we can figure it out. Sometimes this isn’t possible. Would it be better to record the filename and line number?
- Is test_function too fine-grained for some people? Maybe chunking to the test class or even the test file would be enough?
- Better would be to have test runner plugins that could tell us the test identifier. Anyone want to help with that?
- What other kinds of dynamic contexts might be useful?
- What would be good ways to report on this data? How are you navigating the data to get useful information from it?
- How is the performance?
- We could have a “coverage extract” command that would be like the opposite of “coverage combine”: it could pull out a subset of the data so a readable report could be made from it.
Please try this out, and let me know how it goes. Thanks.