
Coverage v3.0 beta 2

Thursday 30 April 2009

Beta 2 of Coverage v3.0 is now ready. Coverage is a tool for measuring code coverage of Python programs, usually during testing.

Kits are available as source (coverage-3.0b2.tar.gz), or as Windows installers for Python 2.3, 2.4, 2.5, or 2.6. The repository is also available on bitbucket.

Feedback is welcome however you see fit, but particularly good ways are tickets on bitbucket, or email on the testing-in-python mailing list.

Changes in Coverage since v2.x:

  • HTML reports and annotation of source files: use the new -b (browser) switch. Thanks to George Song for code, inspiration and guidance.
  • The trace function is implemented in C for speed. Coverage runs are now much faster. Thanks to David Christian for productive micro-sprints and other encouragement.
  • Code in the Python standard library is not measured by default. If you need to measure standard library code, use the -L switch during execution.
  • .coverage data files have a new pickle-based format designed for better extensibility.
  • Source annotation into a directory (-a -d) behaves differently. The annotated files are named with their hierarchy flattened so that same-named files from different directories no longer collide. Also, only files in the current tree are included.
  • Programs executed with -x now behave more as they should, for example, __file__ has the correct value.
  • Executable lines are identified by reading the line number tables in the compiled code, removing a great deal of complicated analysis code (a short sketch of this idea follows the list).
  • Coverage is now a package rather than a module. Functionality has been split into classes.
  • Python versions 2.3 through 2.6 are supported.
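
Here's that sketch of the executable-lines bullet: a simplified, hedged illustration (not coverage's actual code) of reading line numbers from a compiled code object, using dis.findlinestarts from the standard library (present in recent Python versions). It only examines the top-level code object; a real tool would also recurse into the code objects of nested functions and classes.

# linestarts.py: simplified sketch of finding executable lines by reading
# the line number table of a compiled code object.  Not coverage's code.
import dis, sys

filename = sys.argv[1]
source = open(filename).read()
code = compile(source, filename, "exec")

# dis.findlinestarts decodes the code object's line number table
# (co_lnotab), yielding (bytecode offset, source line number) pairs.
for offset, lineno in dis.findlinestarts(code):
    print "line %d is executable" % lineno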

Between the folds

Wednesday 22 April 2009

Between the Folds looks like an interesting documentary, "exploring the science, art, creativity and ingenuity of the world's best paper folders." It's one of those topics that seems slight, but opens up before you to exceed your expectations.

I wish it were possible to see these films more easily. When Helvetica came out, it took me a year and a half to see it. Objectified may not take as long, since it is showing in Boston next month.

Or maybe not, since the e-commerce site the MFA uses said there were no tickets left, but then I got an email confirming my purchase of a ticket. Now it's an adventure!

This is turning into a case study in the small ways shopping carts can be a nightmare. One nice touch in this case was a countdown timer, showing the time remaining before the contents of the cart would be emptied. But I had to register for an account to buy the tickets (ugh, why?), and while I was filling out that form, the timer ran out, and popped me over to a page that said, "Unexpected error encountered." What?? How could it be unexpected?? You were counting down to it!

Predicting the future of marriage equality

Monday 13 April 2009

Nate Silver, who did a stellar job tracking the 2008 presidential election, has built a model for predicting when each state would vote against banning gay marriage. His model results in a list predicting the year each state could tip to the marriage equality side of the scoreboard. Kentucky, South Carolina, Oklahoma, Tennessee, Arkansas, Alabama and Mississippi would be the last hold-outs, taking until the 2020s.

The model is a simple one, so there are huge possibilities for reality to go differently, but it's making good predictions already, accurately guessing how California voted on Prop 8. The comments on the post are the usual debate about marriage equality, mixed in with some reports from the field ("Idaho will never vote against a ban," vs "Idaho is closer than you think," and "You've overlooked the Mormon influence in Utah," etc).

BTW, The Map Scroll mapped the data for those who want a visual reminder of where the Deep South is...

I've long thought that acceptance of marriage equality was only a matter of time, so it's interesting to see Nate put some quantitative analysis behind it. In the past, I've casually tossed off 50 years as a possibility, but Nate is giving us hope that it may only take 20.

Jimfl's things to look at

Sunday 12 April 2009

I've never met Jim Flanagan, but I've followed his shadowy trail across the internet for some time. Every year or so, it shifts, and I lose the scent. Then he re-appears to comment on a post of mine, and I pick it up again. I'm always glad when I do.

Today's finds:

Tweenbots in the city

Sunday 12 April 2009

Tweenbots are Kacie Kinzer's experiment in low-tech anthropomorphized "robots" tugging on strangers' heartstrings to help them get to their destination. A simple motorized robot is left at a starting point with a flag on top announcing his desired destination. Strangers notice and intervene to keep him headed in the right direction and out of trouble. Adorable, ingenious, surprising, inspiring, and enheartening all at once.

Post 2000

Wednesday 8 April 2009

This is my 2000th blog post. My 1000th post was over four years ago. Interesting things have happened since then:

Apart from specific posts, there are larger themes. I've written occasionally about disability (most recently, Obama's special joke), and have been touched by the people who contact me because of it. My blog is mostly about software and things of interest to software types, which maybe makes it more special that I can sometimes reach the sub-culture of software types living with a disabled child. It isn't something we engineers feel comfortable discussing, but it's an important aspect of our lives, so it's good to give it some air time now and then.

More times than I can recall, I've written blog posts to explain what I know about a topic, knowing that I will know even more after reading the comments posted here. This is incredibly valuable to me, both because of the knowledge gained, and as a reinforcement of community.

The volume of posts here has waxed and waned with the availability of time, and my interests during that time, but I've always valued the connections I make via this site. Thanks everyone, for helping to make it what it is.

Running a Python file as main

Tuesday 7 April 2009

I tried running a set of unit tests from work in the latest version of coverage.py, and was surprised to see that the tests failed. Digging into it, I found that the value of __file__ was wrong for my main program. It used that value to find the expected output files to compare results against, and since the value was wrong, it didn't find any expected output files, so the tests failed.

Consider this main program, myprog.py:

print "__file__ is", __file__
print "__name__ is", __name__

When run from the command line as "python myprog.py", it says:

__file__ is myprog.py
__name__ is __main__

The way coverage.py ran the main program could be boiled down to this:

# runmain1.py: run its argument as a Python main program.
import sys
import __main__

mainfile = sys.argv[1]
execfile(mainfile, __main__.__dict__)

Running "python runmain1.py myprog.py" produces:

__file__ is runmain1.py
__name__ is __main__

Because we imported __main__, and used its globals as myprog's globals, it thinks it is runmain1.py instead of myprog.py. That's why my unit tests failed: they tried to find data files alongside coverage.py, rather than alongside the unit test files.

This is a better way to do it:

# runmain2.py: run its argument as a Python main program.
import imp, sys

mainfile = sys.argv[1]

src = open(mainfile)
try:
    imp.load_module('__main__', src, mainfile, (".py", "r", imp.PY_SOURCE))
finally:
    src.close()

This imports the target file as a real __main__ module, giving it the proper __file__ value.
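
For comparison, running "python runmain2.py myprog.py" should produce the same output as running myprog.py directly, since imp.load_module gives the new module the name and filename we pass in:

__file__ is myprog.py
__name__ is __main__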

The old execfile-into-__main__ technique is used in lots of tools that offer to run your Python main files for you, and I'm not sure why more people don't have problems with it. Probably because __file__ manipulation is uncommon. I've updated coverage.py to use the new technique. I hope there isn't a gotcha I'm overlooking that means it's a bad way to do this.

Updated: I found the gotcha: it creates a compiled (.pyc) file next to the source, which the old execfile approach never did.
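
One possible workaround (my own sketch, not necessarily what coverage.py does) is to bypass the import machinery entirely: create a fresh module object to serve as __main__, set its __file__, then compile and exec the source into it. Since nothing goes through the importer, no compiled file gets written:

# runmain3.py: hypothetical variant -- exec the source into a brand-new
# __main__ module so __file__ and __name__ come out right, without
# leaving a compiled file behind.
import imp, sys

mainfile = sys.argv[1]

# Make a new module to act as __main__ and install it in sys.modules.
main_mod = imp.new_module('__main__')
sys.modules['__main__'] = main_mod
main_mod.__file__ = mainfile

src = open(mainfile)
try:
    # The extra newline protects against files that don't end with one.
    code = compile(src.read() + '\n', mainfile, 'exec')
finally:
    src.close()

exec code in main_mod.__dict__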

Subjective-C

Wednesday 1 April 2009

A few ideas have finally come together. I've been looking at Objective-C, the language Apple uses for Mac and iPhone development, and it's pretty nice. Then over the weekend, I saw Raymond Hettinger's fabulous PyCon presentation, Easy AI with Python. I wanted to do some AI experimenting of my own, and figured a new language was the way to go.

So I'm starting work on Subjective-C, the first language with semantics specified loosely enough to build systems that mimic human thought. It borrows ideas from lots of sources, of course. Reflection and introspection will play a central role. Python's duck-typing will be adapted to DWIM-typing. List incomprehensions will allow for more concise coding. Eclipse will be the IDE of choice, since it already has Views and Perspectives of its own.

BTW: this joke has been made before, but not as often as I would have thought.

This idea started when Antonio and I were talking on the L in Chicago, and he said, "Objective-C," but because of the noise I thought he said, "Subjective-C." I thought it was funny, but now I wonder, who thought of it? He didn't, and I didn't, but we were the only ones there!
