|Ned Batchelder : Blog | Code | Text | Site|
Lots of things happening in coverage.py world these days. Turns out I broke the XML report a long time ago, so that directories were not reported as packages. I honestly don't know why I let that sit for so long. It's fixed now, but I feel bad that I've ignored people's bug reports and pull requests. I'll try to be more responsive.
The fix is in coverage.py v4.0a3. Also, the reports now use file names instead of a weird hybrid. Previously, the file "a/b/c.py" was reported as "a/b/c"; now it is shown as "a/b/c.py". This works better now that non-Python files can be reported, since we can't assume the extension is .py.
Oh, did I mention that now you can coverage-measure your Django templates?
Also in the XML report, there's now a configuration setting to determine the directory depth that will be reported as packages. The default is that all directories will be reported as packages, but if you set the depth to 2, then only the upper two layers of directories will be reported.
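In .coveragerc terms, it's something like this (a sketch; check the 4.0a3 docs for the exact option name):

```ini
[xml]
package_depth = 2
```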
Try coverage.py v4.0a3.
New programmers often need small projects to work on as they hone their skills. Exercises in courses are too small, and don't leave much room for self-direction or extending to follow the interests of the student. "Real" projects, either in the open-source world or at work, tend to be overwhelming and come with real-world constraints that prevent experimentation.
Kindling projects are meant to fill this gap: simple enough that a new learner can take them on, but with possibilities for extension and creativity. Large enough that there isn't one right answer, but designed to be hacked on by a learner simply to flex their muscles.
To help people find projects like these, I've made a page: Kindling Projects. I'm hoping it will grow over time and be useful to people.
If you have suggestions, send them in.
A long experiment has come to fruition: coverage.py support for Django templates. I've added plugin support to coverage.py, and implemented a plugin for Django templates. If you want to try it in its current alpha state, read on.
The plugin itself is pip installable:
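The install is just the package from PyPI (the package name as of this alpha):

```shell
pip install django_coverage_plugin
```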
To run it, add these settings to your .coveragerc:
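Something like this (option names may shift during the alpha):

```ini
[run]
plugins = django_coverage_plugin
```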
Then run your tests under coverage.py. It requires coverage.py >= 4.0a2, so it may not work with other coverage-related tools such as test-runner coverage plugins, or coveralls.io. The plugin works on Django >= 1.4, and Python 2 or 3.
You will see your templates listed in your coverage report alongside your Python modules. They have a .html extension but no directory; that's still to be fixed.
The technique used to measure the coverage is the same that Dmitry Trofimov used in dtcov, but integrated into coverage.py as a plugin, and made more performant. I'd love to see how well it works in a real production project. If you want to help me with it, feel free to drop me an email.
The coverage.py plugin mechanism is designed to be generally useful for hooking into the collection and reporting phases of coverage.py, specifically to support non-Python files. I've also got a plugin for Mako templates, but it needs some fixes from Mako. If you have non-Python files you'd like to support in coverage.py, let's talk.
The recent holidays gave us Christmas and New Year's Day on a Thursday, so I also had two Fridays off. This gave me two four-day weekends in a row. At the same time, I got a pull request against Cog from Doug Hellmann. Together, these gave me the time and the reason to update Cog.
So I cleaned up a couple of old pull requests and open issues, and modernized the repo quite a bit.
Cog 2.4 is available now, with three new features:
It was nice to revisit this old friend, and be able to tend it and ship it.
Across more than 30 repos, we have more than 9500 pull requests. To get detailed information about all of them would require at least 9500 requests to the GitHub API. But GitHub rate-limits API use to 5000 requests per hour, so I can't collect details across all of our pull requests in one go.
Most of those pull requests are old, and closed. They haven't changed in a long time. GitHub supports ETags, and any request that responds with 304 Not Modified isn't counted against your rate limit. So I should be able to use ETags to mostly get cached information, and still be able to get details for all of my pull requests.
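The ETag dance can be sketched as a small hypothetical helper (my names, not part of any library): cache each response along with its ETag, and send the ETag back on the next request via If-None-Match.

```python
import requests

# Sketch of ETag-based caching against an API like GitHub's. A 304 Not
# Modified response is not counted against the rate limit, so re-checking
# cached objects should be nearly free.
_cache = {}   # url -> (etag, parsed JSON body)

def get_cached(url, session=None):
    session = session or requests.Session()
    headers = {}
    if url in _cache:
        # Send the saved ETag; the server replies 304 if nothing changed.
        headers["If-None-Match"] = _cache[url][0]
    resp = session.get(url, headers=headers)
    if resp.status_code == 304:
        return _cache[url][1]          # reuse the cached copy
    resp.raise_for_status()
    body = resp.json()
    _cache[url] = (resp.headers.get("ETag"), body)
    return body
```

A real version would persist the cache to disk between runs, but the conditional-request logic is the same.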
I ran my program with this, and it didn't seem to help: I was still running out of requests against the API. Doing a lot of debugging, I figured out why. The reason is instructive for API design.
When you ask the GitHub API for details of a pull request, you get a JSON response that looks like this (many details omitted, see the GitHub API docs for the complete response):
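An abbreviated sketch of the shape of that response (field values invented, most fields omitted):

```json
{
  "number": 4437,
  "state": "closed",
  "title": "Fix the frobulator",
  "user": { "login": "someone", "avatar_url": "https://..." },
  "base": {
    "repo": {
      "open_issues_count": 123,
      "pushed_at": "2015-01-17T12:34:56Z",
      "updated_at": "2015-01-17T12:34:56Z",
      "owner": { "login": "someone", "avatar_url": "https://..." }
    }
  }
}
```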
GitHub has done a common thing with their REST API: they include details of related objects. So this pull request response also includes details of the users involved, and the repos involved, and the repos include details of their users, and so on.
The ETag for a response fingerprints the entire response. That means that if any data in the response changes, the ETag will change, which means that the cached copy will be ignored and the full response will be returned.
Look again at the repo information included: open_issues_count changes every time an issue is opened or closed. A pull request is a kind of issue, so that happens a lot. There's also pushed_at and updated_at, which will change frequently.
So when I'm getting details about a pull request that has been closed and dormant for (let's say) a year, the ETag will still change many times a day, because of other irrelevant activity in the repo. I didn't need those repo details on the pull request in the first place, but I always thought it was just harmless bulk. Nope, it's actively hurting my ability to use the API effectively.
Some REST APIs give you control over the fields returned, or the related objects included in responses, but GitHub's does not. I don't know how to use the GitHub API the way I wanted to.
So the pull request response has lots of details I don't need (the repo's owner's avatar URL?), and omits plenty of details I'm likely to need, like commits, comments, and so on. I understand: they aren't including one-to-many information at all. But I'd rather see the one-to-many than the almost certainly useless one-to-one information that is included, which makes automatic caching impossible.
Luckily, my co-worker David Baumgold had a good idea and the energy to implement it: webhookdb replicates GitHub data to a relational database, using webhooks to keep the two in sync. It works great: now I can make queries against Postgres to get details of pull requests! No rate limiting, and I can use SQL if it's a better way to express my questions.
One of the challenging things about programming is being able to really see code the way the computer is going to see it. Sometimes the human-only signals are so strong, we can't ignore them. This is one of the reasons I like indentation-significant languages like Python: people attend to the indentation whether the computer does or not, so you might as well have the people and computers looking at the same thing.
I was reminded of this problem yesterday while trying to debug a sample application I was toying with. It has a config file with some strings and dicts in it. It reads in part like this:
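A reconstruction along these lines (the variable names are from the file; the string contents are invented stand-ins), with triple-quoted strings used as "comments":

```python
# Hedged reconstruction of the config file in question; the triple-quoted
# "comments" are really string expressions, which matters below.
SECRET_KEY = "change me"
"""
Secret key for the application.
"""

PYLTI_URL_FIX = {
    """
    Remap URL to fix the server's misrepresentation of the https protocol.
    """
    "https://localhost:8000/": {
        "https://localhost:8000/": "http://localhost:8000/"
    },
}
```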
When I saw this file, I thought, "That's a weird way to comment things," but didn't worry more about it. Then later when the response was failing, I debugged into it, and realized what was wrong with this file. Before reading on, do you see what it is?
• • •
• • •
• • •
Python concatenates adjacent string literals. This is handy for making long strings without having to worry about backslashes. In real code, this feature is little-used, and it happens in a surprising place here. The "docstring" for the dictionary is implicitly concatenated to the first key. PYLTI_URL_FIX has a key that's 163 characters long: " Remap URL to ... URL.\nhttps://localhost:8000/", including three newlines.
But SECRET_KEY isn't affected. Why? Because the SECRET_KEY assignment line is a complete statement all by itself, so it doesn't continue onto the next line. Its "docstring" is a statement all by itself. The PYLTI_URL_FIX docstring is inside the braces of the dictionary, so it's all part of one 13-line statement. All the tokens are considered together, and the adjacent strings are concatenated.
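A minimal illustration of the difference (invented names): a string on its own line after a complete statement is a separate expression statement, but inside braces it concatenates with the next literal.

```python
# After a complete assignment, the string is its own statement: X is safe.
X = "value"
"""This string is its own statement; X is unaffected."""

# Inside the braces, the statement continues, so the string fuses
# with the first key.
d = {
    """This string is inside the braces, so it fuses with the key."""
    "key": 1,
}
print(list(d))   # not ["key"]
```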
As odd as this code was, it was still hard to see what was going to happen, because the first string was clearly meant as a comment, both in its token form (a multiline string, starting in the first column) and in its content (English text explaining the dictionary). The second string is clearly intended as a key in the dict (short, containing data, indented). But all of those signals are human signals, not computer signals. So I as a human attended to them and misunderstood what would happen when the computer saw the same text and ignored those signals.
The fix of course is to use conventional comments. Programming is hard, yo. Stick to the conventions.
I have a document challenge. It's a perfect job for Lotus Notes. What do I use in its place today?
I want to keep track of a bunch of web sites, say 100-200 of them. For each, I want a free-form document that lets me keep notes about them. But I also have structured information I want to track for each, like an email contact, a GitHub repo, some statistics, and so on. I want to be able to display these documents in summarized lists, so that some of the structured information is displayed in a table, and I can sort and filter the documents based on that information.
This is exactly what Lotus Notes did well. Is there something that can do it now? Ideally, it would be part of a Confluence wiki, but other options would be good too. (Please don't say SharePoint...)
CouchDB is the perfect backend for a system like this (no wonder, it was written by Damien Katz, and inspired by his time at Lotus), but is there a GUI client that makes it a complete application?
Say what you will about Lotus Notes, it was really good at this kind of job.
It was our first time organizing a conference, and we did it on short notice, about four months. Where we didn't know what to do, we mostly made it be like PyCon: 30-minute talks, 10 minutes between talks, a few opportunities for lightning talks, etc.
Judging from the #openedxcon tweet stream, and from talking to people afterward, people seemed to really like it.
I gave part of the edX keynote, and as usually happens when I give a talk, there are things I know I am going to say, and things that seem to just pop out of my mouth once I get going. I was showing two examples of long-tail Open edX sites, and making the point that edX would never have put these particular courses online itself. I said,
But it got tweeted as:
How meta: I say something, then the community turns it into something else, beyond my control! This was widely re-quoted, and was repeated by our CEO at another edX event later that week.
There's a difference between "beyond our reach" and "beyond our control." Not a huge difference, but I was talking more about reach at the time. But maybe that's a sign that things really are working, when it is beyond your control, and it's still good. Just like I "said."
And Open edX is going well: there are about 60 sites running 400 courses, all over the world. EdX has an outsized goal of educating a billion students by 2020, and Open edX will be a significant part of that. The 160 people at the conference were excited to help by running their own sites and courses. The conference was a success, even the parts beyond our control...
My son Max is a senior at NYU, studying film. He has to finish his senior project. It costs real money to make real films. Give him some money to help!
In 1964, Richard Feynman gave a series of seven lectures at Cornell called The Character of Physical Law. They were recorded by the BBC, and are now on YouTube. These are great.
These are not advanced lectures; they were intended for a general audience, and Feynman does a great job inhabiting the world of fundamental physics. He's clearly one of the top experts, but explains in such a personal, approachable style that you are right alongside him as he explores this world looking for answers, following in the footsteps of Newton and Einstein.
If you've never heard Feynman, at least dip into the first one if only to hear his deep, thick New York accent. He is also witty: he places the French Revolution in 1783, and says, "Accurate to three decimal places, not bad for a physicist!" It's disarmingly out of character for an intellectual, but Feynman is the real thing, discussing not just the basics of forces and particles, but the philosophical implications for scientists and all thinkers.
I converted the videos to pure audio and listened to them in my car, which meant I couldn't see what he was drawing on the blackboard, but it was enlightening nonetheless. Highly recommended: The Character of Physical Law.
I do a lot of side projects in the Python world. I write coverage.py. I give talks at PyCon, like the one about iterators, or the one with the Unicode sandwich. I hang out in the #python IRC channel and help people with their programs. I organize Boston Python.
I enjoy these things, and I don't get paid for them. But if you want to help me out, here's how you can: my son Max is in his last semester at NYU film school, which means he and his friends are making significant short films. These films need funding. If you've liked something I've done for you in the Python world, how about tossing some money over to a film?
Max will be doing a film of his own this semester, but his Kickstarter isn't live yet. In the meantime, he's the cinematographer on his friend Jacob's film Go To Hell. So give a little money to Jacob, and in a month or so I'll hit you up again to give a lot of money to Max. :)
The first alpha of the next major version of coverage.py is available: coverage.py v4.0a1.
The big new feature is support for the gevent, greenlet, and eventlet concurrency libraries. Previously, these libraries' behind-the-scenes stack swapping would confuse coverage.py. Now coverage adapts to give accurate coverage measurement. To enable it, use the "concurrency" setting to specify which library you are using.
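In .coveragerc, that looks something like this, using gevent as the example library:

```ini
[run]
concurrency = gevent
```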
Huge thanks to Peter Portante for getting the concurrency support started, and Joe Jevnik for the last final push.
Also new is that coverage.py will read its configuration from setup.cfg if there is no .coveragerc file. This lets you keep more of your project configuration in one place.
Lastly, the textual summary report now shows missing branches if you are using branch coverage.
One warning: I'm moving around lots of internals. People have a tendency to use whatever internals they need to get their plugin or tool to work, so some of those third-party packages may now be broken. Let me know what you find.
Full details of other changes are in the CHANGES.txt file.