Sometimes the automation really knows best

Tuesday 26 July 2005

Recently I was working on improving the automated test coverage for Cog, preparing for another feature release. I was approaching 100% coverage: there was only one line left untested. It was a very simple, undocumented function, cog.msg, which just printed its argument to stdout. I thought about yanking it from the code, but looking at my own use of Cog, I saw that I had used it a few times, and others might want to as well, so I decided to put in a test for it, for completeness. If it weren’t the last untested line in the source, I probably would have skipped the test.
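
The test I needed was about as simple as tests get: roughly this shape, though this is a sketch in today's Python with a stand-in function, not the actual code from Cog's suite:

    import sys
    import unittest
    from io import StringIO

    def msg(s):
        # Stand-in for cog.msg: write the argument to stdout.
        sys.stdout.write(s)

    class TestMsg(unittest.TestCase):
        def test_msg_goes_to_stdout(self):
            # Swap stdout for a buffer we can inspect, then restore it.
            old_stdout = sys.stdout
            sys.stdout = StringIO()
            try:
                msg("hello\n")
                output = sys.stdout.getvalue()
            finally:
                sys.stdout = old_stdout
            self.assertEqual(output, "hello\n")

    if __name__ == "__main__":
        unittest.main()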

So I put in the test, ran the test suite, and it failed! It turned out I had broken the function a while back, during a global search-and-replace to use an explicit stdout member rather than sys.stdout. Go figure. The test I had put in “just for completeness” found a genuine bug, and one I would have hit in my own environment as soon as I tried deploying the code.
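
I can't reconstruct the exact diff, but the shape of the break is easy to guess at: a blind textual replace turns a module-level function into code referencing a name that doesn't exist at that scope, and nothing complains until the line actually executes. Something like this (my reconstruction, not the real code):

    import sys

    # Before: the helper writes straight to sys.stdout.
    def msg_before(s):
        sys.stdout.write(s)

    # After a global replace of "sys.stdout" with "self.stdout", the
    # module-level helper references a name that doesn't exist there.
    # Python happily compiles it; it raises NameError only when called,
    # which is exactly why an untested line can stay broken for months.
    def msg_after(s):
        self.stdout.write(s)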

Now Cog has 100% test coverage (w00t!) and I learned my lesson about automated tests: they really work, and often know more than you do.

Comments

Out of curiosity, what are you using for your coverage analysis?

Why, my own updated version of coverage.py, of course!

This feels like a planted question, but I didn't plant it!
