Coverage.py has a trace function written in C, for speed. It uses the Python C API, which is notoriously tricky to get right because you have to manage reference counts yourself.

I've made some significant changes to the trace function recently, to add plugin support to the C tracer. Adding tests for badly behaved plugins, I managed to crash Python. Not a traceback, a for-real crash in CPython.

Naturally, this means something is wrong in my C extension. Poring over the code, I couldn't see anything amiss. I'd long been intrigued by the idea of David Malcolm's CPyChecker, a gcc plugin that performs static path analysis to find mistakes in Python C extensions, so I decided to give it a try.

The best instructions are on A. Jesse Jiryu Davis' blog: Analyzing Python C Extensions With CPyChecker. I installed Fedora as suggested, and got the compiler running without much trouble (I just typed "yum" every time I wanted to type "apt-get").

The simple way to run the checker worked fine:

CC=~/gcc-python-plugin/gcc-with-cpychecker python setup.py build

This generates very nice HTML reports, in two different styles, that walk you through a path in your code leading to a bad outcome. Well, a supposedly bad outcome: like Jesse, I found some false positives.

With the default settings, the checker only considers 256 paths through a function then stops, to avoid combinatorial explosions. But my functions had many more paths than that.

I increased the memory allotted to my Fedora VM in its Vagrantfile, then told CPyChecker to push on and examine a quarter-million paths:

CC=~/gcc-python-plugin/gcc-with-cpychecker \
    CFLAGS="--maxtrans 250000" python setup.py build

This found a few issues, but none that explain the crash I'm chasing. Next step: rebuild CPython --with-pydebug.
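
That just means configuring a CPython source build with the debug flag, which enables extra assertions and reference-count checking in the interpreter:

$ ./configure --with-pydebug
$ make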

BTW, Stefan Behnel has rewritten my extension in Cython, and I should seriously consider switching over, so that this kind of thing can't happen any more.

For my Mom's 75th birthday (tomorrow), we made a laptop cake, because she is as much tied to her computer as I am:

Laptop cake

Plain yellow cake, cut in half, with the "screen" propped up with Yodels. The keyboard is bite-sized Snickers. The trackpad is a Petite Ecolier cookie, upside-down with its signature chocolate layer hidden in the cake. The screen has Skittles for the window controls, and various candies for the icons in the dock. A swirled candy is standing in as the spinning beach ball!

If you have not seen our cakes before, they are not polished or perfect. We have a great time making them, and don't worry too much about technical accuracy. We don't use professional materials like fondant; we use stuff you can get in the supermarket, and the candy aisle is an important stop. They are fun, and they taste great!

Lots of things happening in coverage.py world these days. Turns out I broke the XML report a long time ago, so that directories were not reported as packages. I honestly don't know why I let that sit for so long. It's fixed now, but I feel bad that I've ignored people's bug reports and pull requests. I'll try to be more responsive.

The fix is in coverage.py v4.0a3. Also, the reports now use file names instead of a weird hybrid: previously, the file "a/b/c.py" was reported as "a/b/c"; now it is shown as "a/b/c.py". This works better now that non-Python files can be reported, since we can't assume every file's extension is .py.

Oh, did I mention that now you can coverage-measure your Django templates?

Also in the XML report, there's now a configuration setting to determine the directory depth that will be reported as packages. By default, every directory is reported as a package, but if you set the depth to 2, only the top two levels of directories are.
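
If you want to try it, here's a minimal .coveragerc sketch. I'm assuming the setting is the package_depth option in the [xml] section, which is how it's spelled in current coverage.py; check the docs for the exact name in your version:

[xml]
# Only the top two levels of directories are reported as packages.
package_depth = 2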

Try coverage.py v4.0a3.

New programmers often need small projects to work on as they hone their skills. Exercises in courses are too small, and don't leave much room for self-direction or extending to follow the interests of the student. "Real" projects, either in the open-source world or at work, tend to be overwhelming and come with real-world constraints that prevent experimentation.

Kindling projects are meant to fill this gap: simple enough that a new learner can take them on, but with possibilities for extension and creativity. Large enough that there isn't one right answer, but designed to be hacked on by a learner simply to flex their muscles.

To help people find projects like these, I've made a page: Kindling Projects. I'm hoping it will grow over time and be useful to people.

If you have suggestions, send them in.

A long experiment has come to fruition: coverage.py support for Django templates. I've added plugin support to coverage.py, and implemented a plugin for Django templates. If you want to try it in its current alpha state, read on.

The plugin itself is pip installable:

$ pip install django_coverage_plugin

To run it, add these settings to your .coveragerc:

[run]
# Makes it slower, won't be needed eventually
timid = True

plugins =
    django_coverage_plugin

Then run your tests under coverage.py. It requires coverage.py >= 4.0a2, so it may not work with other coverage-related tools such as test-runner coverage plugins, or coveralls.io. The plugin works on Django >= 1.4, and Python 2 or 3.
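
For a typical Django project, that might look something like this (your test invocation may differ):

$ coverage run ./manage.py test
$ coverage html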

You will see your templates listed in your coverage report alongside your Python modules. They have a .html extension but no directory; that's still to be fixed.

The technique used to measure the coverage is the same that Dmitry Trofimov used in dtcov, but integrated into coverage.py as a plugin, and made more performant. I'd love to see how well it works in a real production project. If you want to help me with it, feel free to drop me an email.

The coverage.py plugin mechanism is designed to be generally useful for hooking into the collection and reporting phases of coverage.py, specifically to support non-Python files. I've also got a plugin for Mako templates, but it needs some fixes from Mako. If you have non-Python files you'd like to support in coverage.py, let's talk.

The recent holidays gave us Christmas and New Year's Day on a Thursday, so I also had two Fridays off. This gave me two four-day weekends in a row. At the same time, I got a pull request against Cog from Doug Hellmann. Together, these gave me the time and the reason to update Cog.

So I cleaned up a couple of old pull requests and open issues, and modernized the repo quite a bit.

Cog 2.4 is available now, with three new features:

  • A --delimiters option lets you control the three delimiters that separate the cog code and its result from the rest of the file (see the sketch below). Thanks, Doug Hellmann.
  • A -n=ENCODING option that lets you specify the encoding for the input and output files. Thanks, Petr Gladkiy.
  • A --verbose option that lets you control how much chatter is in the output while cogging.
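
For context, here's a sketch of a cog file using the default delimiters, the ones --delimiters lets you replace:

# [[[cog
# import cog
# for name in ["alpha", "beta"]:
#     cog.outl("def %s(): pass" % name)
# ]]]
def alpha(): pass
def beta(): pass
# [[[end]]]

Running cog.py -r on the file regenerates the lines between ]]] and [[[end]]] in place.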

It was nice to revisit this old friend, and be able to tend it and ship it.

For the Open edX project, we like to collect statistics about our pull requests. GitHub provides a very capable API that gives me all sorts of information.

Across more than 30 repos, we have more than 9500 pull requests. Getting detailed information about all of them would take at least 9500 requests to the GitHub API. But GitHub rate-limits API use to 5000 requests an hour, so I can't collect details for all of our pull requests in one pass.

Most of those pull requests are old, and closed. They haven't changed in a long time. GitHub supports ETags, and any request that responds with 304 Not Modified isn't counted against your rate limit. So I should be able to use ETags to mostly get cached information, and still be able to get details for all of my pull requests.
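
In miniature, the conditional request dance looks like this (a sketch using requests; the URL is just an example):

import requests

URL = "https://api.github.com/repos/octocat/Hello-World/pulls/1347"

# The first response includes an ETag fingerprinting its content.
resp = requests.get(URL)
etag = resp.headers["ETag"]

# Replaying the request with If-None-Match gets a 304 Not Modified
# if nothing changed, and 304s don't count against the rate limit.
resp = requests.get(URL, headers={"If-None-Match": etag})
if resp.status_code == 304:
    print("unchanged: use the cached copy")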

I'm using requests to access the API. The CacheControl package offers really easy integration of http caching:

import requests

from cachecontrol import CacheControlAdapter
from cachecontrol.caches import FileCache

# ...

# Cache responses on disk.  CacheControl sends If-None-Match headers
# and reuses the cached body when a 304 Not Modified comes back.
session = requests.Session()
adapter = CacheControlAdapter(cache=FileCache(".webcache"))
session.mount("http://", adapter)
session.mount("https://", adapter)

I ran my program with this, and it didn't seem to help: I was still running out of requests against the API. Doing a lot of debugging, I figured out why. The reason is instructive for API design.

When you ask the GitHub API for details of a pull request, you get a JSON response that looks like this (many details omitted, see the GitHub API docs for the complete response):

{
  "id": 1,
  "url": "https://api.github.com/repos/octocat/Hello-World/pulls/1347",
  "number": 1347,
  "state": "open",
  "title": "new-feature",
  "body": "Please pull these awesome changes",
  "created_at": "2011-01-26T19:01:12Z",
  "updated_at": "2011-01-26T19:01:12Z",
  "closed_at": "2011-01-26T19:01:12Z",
  "merged_at": "2011-01-26T19:01:12Z",
  "head": {
    "label": "new-topic",
    "ref": "new-topic",
    "sha": "6dcb09b5b57875f334f61aebed695e2e4193db5e",
    "user": {
      "login": "octocat",
      ...
    },
    "repo": {
      "id": 1296269,
      "owner": {
        "login": "octocat",
        ...
      },
      "name": "Hello-World",
      "full_name": "octocat/Hello-World",
      "description": "This your first repo!",
      "private": false,
      "fork": false,
      "url": "https://api.github.com/repos/octocat/Hello-World",
      "homepage": "https://github.com",
      "language": null,
      "forks_count": 9,
      "stargazers_count": 80,
      "watchers_count": 80,
      "size": 108,
      "default_branch": "master",
      "open_issues_count": 0,
      "has_issues": true,
      "has_wiki": true,
      "has_pages": false,
      "has_downloads": true,
      "pushed_at": "2011-01-26T19:06:43Z",
      "created_at": "2011-01-26T19:01:12Z",
      "updated_at": "2011-01-26T19:14:43Z",
      "permissions": {
        "admin": false,
        "push": false,
        "pull": true
      }
    }
  },
  "base": {
    "label": "master",
    "ref": "master",
    "sha": "6dcb09b5b57875f334f61aebed695e2e4193db5e",
    "user": {
      "login": "octocat",
      ...
    },
    "repo": {
      "id": 1296269,
      "owner": {
        "login": "octocat",
        ...
      },
      "name": "Hello-World",
      "full_name": "octocat/Hello-World",
      "description": "This your first repo!",
      "private": false,
      "fork": false,
      "url": "https://api.github.com/repos/octocat/Hello-World",
      "homepage": "https://github.com",
      "language": null,
      "forks_count": 9,
      "stargazers_count": 80,
      "watchers_count": 80,
      "size": 108,
      "default_branch": "master",
      "open_issues_count": 0,
      "has_issues": true,
      "has_wiki": true,
      "has_pages": false,
      "has_downloads": true,
      "pushed_at": "2011-01-26T19:06:43Z",
      "created_at": "2011-01-26T19:01:12Z",
      "updated_at": "2011-01-26T19:14:43Z",
      "permissions": {
        "admin": false,
        "push": false,
        "pull": true
      }
    }
  },
  "user": {
    "login": "octocat",
    ...
  },
  "merge_commit_sha": "e5bd3914e2e596debea16f433f57875b5b90bcd6",
  "merged": false,
  "mergeable": true,
  "merged_by": {
    "login": "octocat",
    ...
  },
  "comments": 10,
  "commits": 3,
  "additions": 100,
  "deletions": 3,
  "changed_files": 5
}

GitHub has done a common thing with their REST API: they include details of related objects. So this pull request response also includes details of the users involved, and the repos involved, and the repos include details of their users, and so on.

The ETag for a response fingerprints the entire response. That means that if any data in the response changes, the ETag will change, which means that the cached copy will be ignored and the full response will be returned.

Look again at the repo information included: open_issues_count changes every time an issue is opened or closed. A pull request is a kind of issue, so that happens a lot. There's also pushed_at and updated_at, which will change frequently.

So when I'm getting details about a pull request that has been closed and dormant for (let's say) a year, the ETag will still change many times a day, because of other irrelevant activity in the repo. I didn't need those repo details on the pull request in the first place, but I always thought it was just harmless bulk. Nope, it's actively hurting my ability to use the API effectively.

Some REST APIs give you control over the fields returned, or the related objects included in responses, but GitHub's does not. I don't know how to use the GitHub API the way I wanted to.

So the pull request response has lots of details I don't need (the repo's owner's avatar URL?), and omits plenty of details I'm likely to need, like commits, comments, and so on. I understand: they aren't including one-to-many information at all. But I'd rather have the one-to-many than the almost certainly useless one-to-one information that is included, and that is making automatic caching impossible.

Luckily, my co-worker David Baumgold had a good idea and the energy to implement it: webhookdb replicates GitHub data to a relational database, using webhooks to keep the two in sync. It works great: now I can make queries against Postgres to get details of pull requests! No rate limiting, and I can use SQL if it's a better way to express my questions.
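
For example, something like this works (a sketch: the table and column names here are made up for illustration, not necessarily webhookdb's actual schema):

import psycopg2

conn = psycopg2.connect("dbname=webhookdb")
cur = conn.cursor()
# Count merged pull requests per repo.
cur.execute("""
    select base_repo, count(*)
    from pull_request
    where merged_at is not null
    group by base_repo
    order by count(*) desc
""")
for repo, num_merged in cur.fetchall():
    print(repo, num_merged)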

One of the challenging things about programming is being able to really see code the way the computer is going to see it. Sometimes the human-only signals are so strong, we can't ignore them. This is one of the reasons I like indentation-significant languages like Python: people attend to the indentation whether the computer does or not, so you might as well have the people and computers looking at the same thing.

I was reminded of this problem yesterday while trying to debug a sample application I was toying with. It has a config file with some strings and dicts in it. It reads in part like this:

SECRET_KEY = 'you-will-never-guess'
""" secret key for authentication
"""

PYLTI_URL_FIX = {
""" Remap URL to fix edX's misrepresentation of https protocol.
    You can add another dict entry if you have trouble with the
    PyLti URL.
"""

    "https://localhost:8000/": {
        "https://localhost:8000/": "http://localhost:8000/"
    },
    "https://localhost/": {
        "https://localhost/":"http://192.168.33.10/"
    }
}

When I saw this file, I thought, "That's a weird way to comment things," but didn't worry more about it. Then later when the response was failing, I debugged into it, and realized what was wrong with this file. Before reading on, do you see what it is?

•    •    •

•    •    •

•    •    •

Python concatenates adjacent string literals. This is handy for making long strings without having to worry about backslashes. In real code, this feature is little-used, and it happens in a surprising place here. The "docstring" for the dictionary is implicitly concatenated to the first key. PYLTI_URL_FIX has a key that's 163 characters long: " Remap URL to ... URL.\nhttps://localhost:8000/", including three newlines.

But SECRET_KEY isn't affected. Why? Because the SECRET_KEY assignment line is a complete statement all by itself, so it doesn't continue onto the next line. Its "docstring" is a statement all by itself. The PYLTI_URL_FIX docstring is inside the braces of the dictionary, so it's all part of one 13-line statement. All the tokens are considered together, and the adjacent strings are concatenated.
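
A stripped-down demonstration of the same trap:

# Adjacent string literals concatenate, even across line breaks.
greeting = "Hello, " "world"        # == "Hello, world"

d = {
    """this looks like a comment """
    "key": "value",
}
print(list(d))      # ['this looks like a comment key'] -- one mangled key!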

As odd as this code was, it was still hard to see what was going to happen, because the first string was clearly meant as a comment, both in its token form (a multiline string, starting in the first column) and in its content (English text explaining the dictionary). The second string is clearly intended as a key in the dict (short, containing data, indented). But all of those signals are human signals, not computer signals. So I as a human attended to them and misunderstood what would happen when the computer saw the same text and ignored those signals.

The fix of course is to use conventional comments. Programming is hard, yo. Stick to the conventions.

I have a document challenge. It's a perfect job for Lotus Notes. What do I use in its place today?

I want to keep track of a bunch of web sites, say 100-200 of them. For each, I want a free-form document that lets me keep notes about them. But I also have structured information I want to track for each, like an email contact, a GitHub repo, some statistics, and so on. I want to be able to display these documents in summarized lists, so that some of the structured information is displayed in a table, and I can sort and filter the documents based on that information.
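
Roughly, each document would look something like this (a Python sketch, with made-up fields):

site = {
    "name": "example.edu",
    # Free-form notes, the body of the document:
    "notes": "Long-form text about the site goes here...",
    # Structured fields, for sorting and filtering in summary views:
    "contact": "someone@example.edu",
    "github_repo": "https://github.com/example/their-fork",
    "stats": {"courses": 12, "students": 3400},
}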

This is exactly what Lotus Notes did well. Is there something that can do it now? Ideally, it would be part of a Confluence wiki, but other options would be good too. (Please don't say SharePoint...)

CouchDB is the perfect backend for a system like this (no wonder, it was written by Damien Katz, and inspired by his time at Lotus), but is there a GUI client that makes it a complete application?

Say what you will about Lotus Notes, it was really good at this kind of job.

This week at work we ran the first Open edX Conference, bringing together people using Open edX, the open source platform that powers edX. It was an exciting, exhilarating, exhausting time.

It was our first time organizing a conference, and we did it on short notice, about four months. Where we didn't know what to do, we mostly made it like PyCon: 30-minute talks, 10 minutes between talks, a few opportunities for lightning talks, and so on.

Judging from the #openedxcon tweet stream, and from talking to people afterward, people seemed to really like it.

I gave part of the edX keynote, and as usually happens when I give a talk, there are things I know I am going to say, and things that seem to just pop out of my mouth once I get going. I was showing two examples of long-tail Open edX sites, and making the point that edX would never have put these particular courses online itself. I said,

It isn't really open source until stuff just starts happening that's completely beyond your reach.

But it got tweeted as:

"It isn't open source until stuff starts happening that is beyond our control." - Ned Batchelder @OpenEdX #openedxcon

How meta: I say something, then the community turns it into something else, beyond my control! This was widely re-quoted, and was repeated by our CEO at another edX event later that week.

There's a difference between "beyond our reach" and "beyond our control." Not a huge difference, but I was talking more about reach at the time. But maybe that's a sign that things really are working, when it is beyond your control, and it's still good. Just like I "said."

And Open edX is going well: there are about 60 sites running 400 courses, all over the world. EdX has an outsized goal of educating a billion students by 2020, and Open edX will be a significant part of that. The 160 people at the conference were excited to help by running their own sites and courses. The conference was a success, even the parts beyond our control...

My son Max is a senior at NYU, studying film. He has to finish his senior project. It costs real money to make real films. Give him some money to help!

He's made a really compelling pitch video, so even if you don't think you're going to give money, at least go and watch it to see how cool my son is... :)

Max pitching his fundraiser

In 1964, Richard Feynman gave a series of seven lectures at Cornell called The Character of Physical Law. They were recorded by the BBC, and are now on YouTube. These are great.

These are not advanced lectures; they were intended for a general audience, and Feynman does a great job inhabiting the world of fundamental physics. He's clearly one of the top experts, but he explains things in such a personal, approachable style that you are right alongside him as he explores this world looking for answers, following in the footsteps of Newton and Einstein.

If you've never heard Feynman, at least dip into the first one, if only to hear his deep, thick New York accent. He is also witty: he places the French Revolution in 1783, and says, "Accurate to three decimal places, not bad for a physicist!" It's disarmingly out of character for an intellectual, but Feynman is the real thing, discussing not just the basics of forces and particles, but the philosophical implications for scientists and all thinkers.

I converted the videos to pure audio and listened to them in my car, which meant I couldn't see what he was drawing on the blackboard, but it was enlightening nonetheless. Highly recommended: The Character of Physical Law.
