Ned Batchelder : Blog
Wicked hack: Python bytecode tracing
April 2008
Something I've been noodling on since PyCon is how to improve code coverage testing in Python, in particular, finding a way to measure bytecode execution instead of just line execution. After a fruitful investigation, I know a lot more about how CPython executes code, and I've found a way to trace each bytecode.
As I mentioned in the Hidden Conditionals section of Flaws in coverage measurement, measuring line execution misses details within the lines. My example was:
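The example listing didn't survive in this copy; here's a snippet of the same shape (my reconstruction, not necessarily the original), where line 2 runs and gets full line-coverage credit, but the divide-by-zero arm of its conditional expression is never evaluated:

```python
a = 1
b = a if a else 1 / 0   # line 2 executes; the 1/0 arm never does
```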
A line coverage tool says that line 2 was executed, but never points out that a divide by zero error is lurking there.
The problem is that Python's tracing facility (sys.settrace) is based on source lines. Your callback function is invoked for each line executed. At PyCon, Matt Harrison floated the possibility of a source transformation tool which would take your Python code and rewrite it so that the operations were spread out over more lines so that the trace function would be invoked more often. This would allow for tracing of the operations within lines.
It's an intriguing idea, but seems difficult and risky: the rewriting process could introduce errors, and there could be constructs which can't be pulled apart successfully.
I thought a better approach would be to modify the Python interpreter itself. If we could have the interpreter call a tracing function for each bytecode, we'd have an authoritative trace without intricate code munging. This approach has problems of its own, though, not least that everyone who wanted bytecode tracing would have to build and deploy a patched Python.
But I was interested enough to explore the possibility, so I went digging into the Python interpreter sources to see how sys.settrace did its work. I found the answer to how it works, and also a cool trick to accomplish bytecode tracing without modifying the interpreter.
The bytecode interpreter invokes the trace function every time an opcode to be executed is the first opcode on a new source line. But how does it know which opcodes those are? The key is the co_lnotab member in the code object. This is a string, interpreted as pairs of one-byte unsigned integers. To re-use the example from The Structure of .pyc Files, here's some bytecode:
and here's its line number information: a firstlineno of 1, and an lnotab of '\x0c\x01\x0e\x01'.
The lnotab bytes are pairs of small integers, so this entry represents (12, 1) and (14, 1).
The two numbers in each pair are a bytecode offset delta and a source line number delta. The firstlineno value of 1 means that the bytecode at offset zero is line number 1. Each entry in the lnotab is then a delta to the bytecode offset and a delta to the line number to get to the next line. So bytecode offset 12 is line number 2, and bytecode offset 26 (12+14) is line number 3. The line numbers at the left of the disassembled bytecode are computed this way from firstlineno and lnotab.
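The disassembly listing itself is gone from this copy, but an equivalent one can be regenerated with the dis module. This is a sketch using a made-up three-line sample; note that modern CPython has replaced co_lnotab with an opaque co_linetable, and dis.findlinestarts is the stable way to see the decoded offset-to-line map:

```python
import dis

# A made-up three-line sample; any short module-level code works.
src = "a, b = 1, 0\nif a or b:\n    c = a + b\n"
code = compile(src, "sample.py", "exec")

dis.dis(code)  # the line numbers in the left column come from the line table

print(code.co_firstlineno)            # base line number for offset 0
# (offset, line) pairs decoded from the code object's line table;
# in 2008-era CPython this information was encoded as lnotab deltas.
print(list(dis.findlinestarts(code)))
```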
(There are more details to deal with deltas larger than 255. Complete info is in the CPython source: compile.c, search for "All about a_lnotab".)
As the Python interpreter executes bytecodes, it checks the current offset against this map, and when the resulting source line number differs from the previous bytecode's, it calls the trace function.
Here's where the hack comes in: what if we lie about line numbers? What would happen if we change the .pyc file to have a different mapping of bytecode offsets to line numbers?
To set up the test, here's a sample.py:
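The listing didn't survive here; this reconstruction is consistent with the trace output discussed below: three lines, with a short-circuit on line 2 so that some of line 2's bytecodes are skipped at runtime.

```python
# sample.py -- reconstructed. 'a' is true, so the short-circuit 'or'
# never examines 'b', leaving some of line 2's bytecodes unexecuted
# even though line 2 "runs" for line-coverage purposes.
a, b = 1, 0
if a or b:
    print("Hello")
```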
and here's tracesample.py:
Running tracesample.py shows the trace at work. As each line is executed, my trace function is invoked, and it digs into the frame object to get the filename and line number. From the output, we can see that we executed lines 1, 2, and 3 in turn.
To lie about line numbers, I wrote a small tool to rewrite .pyc files. It copies everything verbatim, except it changes the firstlineno and lnotab entries. As a simple proof of concept, we'll make the lnotab map claim that every byte offset is a new line number: it will consist entirely of (1,1) entries. And because byte offsets start at zero, I'll change the firstlineno entry to zero. Here's hackpyc.py:
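The hackpyc.py listing is gone from this copy. Below is a Python 3 sketch of the same mechanics, not the original code: the original built the new code object with the Python 2 new module and rewrote firstlineno and lnotab, but modern CPython replaced lnotab with an opaque co_linetable, so this sketch demonstrates the .pyc rewriting by shifting co_firstlineno (the shift of 100 is arbitrary) rather than faking a line per bytecode.

```python
# hackpyc.py -- a Python 3 sketch of the .pyc-rewriting idea.
import marshal
import sys
import types

HEADER_SIZE = 16  # magic, flags, mtime, size -- CPython 3.7+ layout

def hack_code(code, shift=100):
    """Return a copy of code (and any nested code objects) with its
    line numbers shifted, so the trace function is told a lie."""
    consts = tuple(
        hack_code(c, shift) if isinstance(c, types.CodeType) else c
        for c in code.co_consts
    )
    return code.replace(
        co_consts=consts,
        co_firstlineno=code.co_firstlineno + shift,
    )

def hack_pyc(path):
    # Copy the header verbatim; unmarshal, rewrite, remarshal the code.
    with open(path, "rb") as f:
        header = f.read(HEADER_SIZE)
        code = marshal.load(f)
    with open(path, "wb") as f:
        f.write(header)
        marshal.dump(hack_code(code), f)

if __name__ == "__main__" and len(sys.argv) > 1:
    hack_pyc(sys.argv[1])
```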
This is fairly straightforward; the only hiccup was that code objects' members are read-only, so I couldn't just update the parts I wanted: I had to create a whole new code object with new.code.
Running "hackpyc.py sample.pyc" rewrites sample.pyc to lie about its line numbers. Now when tracesample.py runs, the "line number" reported to the trace function is really a bytecode offset, and the interpreter invokes the trace function for every bytecode executed!
We can see that execution jumped from 15 to 25, skipping the bytecodes that examine the b variable. This is just the sort of detail about execution that line-oriented coverage measurement could never tell us.
As I see it, the good things about this technique are that it gives an authoritative, interpreter's-eye trace of every bytecode, and that it needs no changes to the interpreter and no rewriting of source code.
The problems so far: the rewrite destroys the real line numbers, so the trace (and anything else that reads the line table, such as tracebacks) sees byte offsets instead of source lines, and every .pyc has to be rewritten before the run.
This is only a quick demonstration of a technique; it isn't useful yet, but I think it could be made useful.
PS: As a result of this investigation, I also think it would be simple to patch the interpreter to call a trace function on every bytecode.