Finding fuzzy floats

Sunday 9 July 2017

For a 2D geometry project I needed to find things based on 2D points. Conceptually, I wanted to have a dict that used pairs of floats as the keys. This would work, except that floats are inexact, and so have difficulty with equality checking. The "same" point might be computed in two different ways, giving slightly different values, but I want them to match each other in this dictionary.

I found a solution, though I'm surprised I didn't find it described elsewhere.

The challenges have nothing to do with the two-dimensional aspect, and everything to do with using floats, so for the purposes of explaining, I'll simplify the problem to one-dimensional points. I'll have a class with a single float in it to represent my points.

First let's look at what happens with a simple dict:

>>> from collections import namedtuple
>>> Pt = namedtuple("Pt", "x")
>>>
>>> d = {}
>>> d[Pt(1.0)] = "hello"
>>> d[Pt(1.0)]
'hello'
>>> d[Pt(1.000000000001)]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: Pt(x=1.000000000001)

As long as our floats are precisely equal, the dict works fine. But as soon as we get two floats that are meant to represent the same value, but are actually slightly unequal, then the dict is useless to us. So a simple dict is no good.

To get the program running at all, I used a dead-simple structure that I knew would work: a list of key/value pairs. By defining __eq__ on my Pt class to compare with some fuzz, I could find matches that were slightly unequal:

>>> import math
>>> class Pt(namedtuple("Pt", "x")):
...     def __eq__(self, other):
...         return math.isclose(self.x, other.x)
...
>>> def get_match(pairs, pt):
...     for pt2, val in pairs:
...         if pt2 == pt:
...             return val
...     return None
...
>>> pairs = [
...     (Pt(1.0), "hello"),
...     (Pt(2.0), "goodbye"),
... ]
>>>
>>> get_match(pairs, Pt(2.0))
'goodbye'
>>> get_match(pairs, Pt(2.000000000001))
'goodbye'

This works, because now we are using an inexact closeness test to find the match. But it's an O(n) linear search, which isn't great. Also, there's no way to define __hash__ to match that __eq__ test: hashes must be equal whenever values are equal, and since chains of nearly equal values link any two floats together, the only consistent hash would be a constant one. So our points are no longer usefully hashable.

Trying to make things near each other be equal naturally brings rounding to mind. Maybe that could work? Let's define __eq__ and __hash__ based on rounding the value:

>>> class Pt(namedtuple("Pt", "x")):
...     def __eq__(self, other):
...         return round(self.x, 6) == round(other.x, 6)
...     def __hash__(self):
...         return hash(round(self.x, 6))
...
>>> d = {}
>>> d[Pt(1.0)] = "apple"
>>> d[Pt(1.0)]
'apple'
>>> d[Pt(1.00000001)]
'apple'

Nice! We have matches based on values being close enough, and we can use a dict to get O(1) behavior. But:

>>> d[Pt(1.000000499999)] = "cat"
>>> d[Pt(1.000000500001)]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: Pt(x=1.000000500001)

Because rounding has sharp edges (ironically!), there will always be pairs of numbers arbitrarily close together that will round away from each other.

So rounding was close, but not good enough. We can't guarantee that we won't have these pairs of numbers that are in just the wrong place so that rounding will give the wrong answer.

I thought about other data structures: bisection in a sorted list sounded good (O(log n)), but that requires testing for less-than, and I wasn't sure how the fuzziness would affect it. Reading about other geometric algorithms led me to things like k-d trees, which seem exotic and powerful, but I didn't see how to put them to use here.

Then early one morning when I wasn't even trying to solve the problem, I had an idea: round the values two different ways. The problem with rounding was that it failed for values that straddled the rounding boundary. We can round twice, the second time with half the rounding window added, so the two are half out of phase. That way, if the first rounding separates the values, the second rounding won't.

We can't use double-rounding to produce a canonical value for comparisons and hashes, so instead we make a new data structure. We'll use a dict to map points to values, just as we did originally. A second dict will map rounded points to the original points. When we look up a point, if it isn't in the first dict, then round it two ways, and see if either is in the second dict. If it is, then we know what original point to use in the first dict.

That's a lot of words. Here's some code:

class Pt(namedtuple("Pt", "x")):
    # no need for special __eq__ or __hash__
    pass

class PointMap:
    def __init__(self):
        self._items = {}
        self._rounded = {}

    ROUND_DIGITS = 6
    JITTERS = [0, 0.5 * 10 ** -ROUND_DIGITS]

    def _round(self, pt, jitter):
        return Pt(round(pt.x + jitter, ndigits=self.ROUND_DIGITS))

    def __getitem__(self, pt):
        val = self._items.get(pt)
        if val is not None:
            return val

        for jitter in self.JITTERS:
            pt_rnd = self._round(pt, jitter)
            pt0 = self._rounded.get(pt_rnd)
            if pt0 is not None:
                return self._items[pt0]

        raise KeyError(pt)

    def __setitem__(self, pt, val):
        self._items[pt] = val
        for jitter in self.JITTERS:
            pt_rnd = self._round(pt, jitter)
            self._rounded[pt_rnd] = pt

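To see it in action, here's the kind of session I'd expect (my example, not from the original post):

>>> pm = PointMap()
>>> pm[Pt(1.000000499999)] = "cat"
>>> pm[Pt(1.000000500001)]
'cat'
>>> pm[Pt(2.0)]
Traceback (most recent call last):
  ...
KeyError: Pt(x=2.0)

The two keys here straddle the 1.0000005 rounding boundary that defeated plain rounding, but the jittered second rounding maps them to the same entry.
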
This is now O(1), wonderful! I was surprised that I couldn't find something like this already explained somewhere. Perhaps finding values like I need to do is unusual? Or this is well-known, but under some name I couldn't find? Or is it broken in some way I can't see?

Triangular Fibonacci numbers

Saturday 17 June 2017

Yesterday in my post about 55, I repeated Wikipedia's claim that 55 is the largest number that is both triangular and in the Fibonacci sequence. Chris Emerson commented to ask for a proof. After a moment's thought, I realized I had no idea how to prove it.

The proof is in On Triangular Fibonacci Numbers, a dense 10-page excursion into number theory I don't understand.

While I couldn't follow the proof, I can partially test the claim empirically, which leads to fun with Python and itertools, something which is much more in my wheelhouse.

I started by defining generators for triangular numbers and Fibonacci numbers:

import itertools

def tri():
    """Generate an infinite sequence of triangular numbers."""
    n = 0
    for i in itertools.count(start=1):
        n += i
        yield n
        
print(list(itertools.islice(tri(), 50)))
[1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91, 105, 120, 136, 153, 171,
 190, 210, 231, 253, 276, 300, 325, 351, 378, 406, 435, 465, 496, 528, 561,
 595, 630, 666, 703, 741, 780, 820, 861, 903, 946, 990, 1035, 1081, 1128,
 1176, 1225, 1275]

def fib():
    """Generate an infinite sequence of Fibonacci numbers."""
    a, b = 1, 1
    while True:
        yield a
        b, a = a, a+b
        
print(list(itertools.islice(fib(), 50)))
[1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584,
 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811,
 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352,
 24157817, 39088169, 63245986, 102334155, 165580141, 267914296, 433494437,
 701408733, 1134903170, 1836311903, 2971215073, 4807526976, 7778742049,
 12586269025, 20365011074]

The Fibonacci sequence grows much faster!

My first impulse was to make two sets of the numbers in the sequences, and intersect them, but building a very large set took too long. So instead I wrote a function that took advantage of the ever-increasing nature of the sequences to look for equal elements in two monotonic sequences:

def find_same(s1, s2):
    """Find equal elements in two monotonic sequences."""
    try:
        i1, i2 = iter(s1), iter(s2)
        n1, n2 = next(i1), next(i2)
        while True:
            while n1 < n2:
                n1 = next(i1)
            if n1 == n2:
                yield n1
                n1 = next(i1)
            while n2 < n1:
                n2 = next(i2)
            if n1 == n2:
                yield n1
                n1 = next(i1)
    except StopIteration:
        return

And a function to cut off an infinite sequence once a certain ceiling had been reached:

def upto(s, n):
    """Stop a monotonic sequence once it gets to n."""
    for i in s:
        if i > n:
            return
        yield i

Now I could ask: what values less than a quadrillion are in both the triangular numbers and the Fibonacci sequence?

>>> list(find_same(upto(fib(), 1e15), upto(tri(), 1e15)))
[1, 3, 21, 55]

This doesn't prove the claim for all numbers, but it shows that 55 is the largest number under a quadrillion that is in both sequences.

Another way to do this is to take advantage of the asymmetry in growth rate. The Fibonacci sequence up to a quadrillion is only 72 elements. Make that a set, then examine the triangular numbers up to a quadrillion and keep the ones that are in the Fibonacci set, as sketched below. And I'm certain there are other techniques too.
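
A sketch of that approach (reusing fib(), tri(), and upto() from above):

def same_by_set(limit):
    """Intersect the two sequences, using a set of the sparser one."""
    fibs = set(upto(fib(), limit))      # only ~72 elements up to 1e15
    return [t for t in upto(tri(), limit) if t in fibs]

print(same_by_set(1e15))    # prints [1, 3, 21, 55]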

I can't explain why, but composable generators please me. This was fun. :)

Re-ruling .rst

Friday 12 May 2017

Sometimes, you need a small job done, and you can write a small Python program, and it does just what you need, and it pleases you.

I have some Markdown files to convert to reStructuredText. Pandoc does a really good job. But it chooses a different order for heading punctuation than our house style, and I didn't see a way to control it.

But it was easy to write a small thing to do the small thing:

import re
import sys

# The order we want our heading rules.
GOOD_RULES = '#*=-.~'

# A rule is any line of all the same non-word character, 3 or more.
RULE_RX = r"^([^\w\d])\1\1+$"

def rerule_file(f):
    rules = {}
    for line in f:
        line = line.rstrip()
        rule_m = re.search(RULE_RX, line)
        if rule_m:
            if line[0] not in rules:
                rules[line[0]] = GOOD_RULES[len(rules)]
            line = rules[line[0]] * len(line)
        print(line)

rerule_file(sys.stdin)

If you aren't conversant in .rst: there's no fixed order to which punctuation means which level heading. The first rule encountered is heading 1, the next style found is heading 2, and so on.
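
For example, assuming the script is saved as rerule.py and Pandoc underlined headings with "=" and then "-" (the file names here are hypothetical):

$ cat pandoc_out.rst
Title
=====

Section
-------
$ python rerule.py < pandoc_out.rst
Title
#####

Section
*******

The first rule character seen ("=") maps to "#", the second ("-") maps to "*", and each rule keeps its original length.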

There might be other ways to do this, but this makes me happy.

Shell = Maybe

Monday 24 April 2017

A common Python help question: how do I get Python to run this complicated command-line program? Often, the answer involves details of how shells work. I tried my hand at explaining what a shell does, why you might want to avoid one, how to avoid it from Python, and why you might want to use one anyway: Shell = Maybe.

Text-mode menu bar indicators

Monday 17 April 2017

I recently upgraded my Mac operating system, and decided to try out a new feature: automatically hiding the menu bar. This gives me back another sliver of vertical space. But it has a drawback: I no longer have the time, battery life, and speaker volume indicators available at a glance.

I went looking for a thing that I figured must exist: a Mac app that would display that information in a dock icon. I already have a dock clock. I found a dock battery indicator, though it tried so hard to be cute and pictorial, I couldn't tell what it was telling me.

Asking around, I got a recommendation for GeekTool. It lets you draw a panel on your desktop, and then draw in the panel with the output of a script. Now the ball was back in my court: I could build my own thing.

I'd long ago moved the dock to the left side of the screen (again, to use all the vertical space for my own stuff.) This left a small rectangle of desktop visible at the upper left and lower left, even with maximized windows. I drew a panel in the upper left of the desktop, and set it to run this script every five seconds:

#!/usr/bin/env python3.6

import datetime
import re
import subprocess

def block_eighths(eighths):
    """Return the Unicode string for a block of so many eighths."""
    assert 0 <= eighths <= 8
    if eighths == 0:
        return "\u2003"
    else:
        return chr(0x2590 - eighths)

def gauge(percent):
    """Return a two-char string drawing a 16-part gauge."""
    slices = round(percent / (100 / 16))
    b1 = block_eighths(min(slices, 8))
    b2 = block_eighths(max(slices - 8, 0))
    return b1 + b2

now = datetime.datetime.now()
print(f"{now:%-I:%M\n%-m/%-d}")

batt = subprocess.check_output(["pmset", "-g", "batt"]).decode('utf8').splitlines()
m = re.search(r"\d+%", batt[1])
if m:
    level = m.group(0)
    batt_percent = int(level[:-1])
else:
    level = "??%"
    batt_percent = 0    # fall back so the gauge below still has a value
if "discharging" in batt[1]:
    arrow = "\u25bc"        # BLACK DOWN-POINTING TRIANGLE
elif "charging" in batt[1]:
    arrow = "\u25b3"        # WHITE UP-POINTING TRIANGLE
else:
    arrow = ""

print(level + arrow)
print(gauge(batt_percent) + "\u2578")   # BOX DRAWINGS HEAVY LEFT

vol = subprocess.check_output(["osascript", "-e", "get volume settings"]).decode('utf8')
m = re.search(r"^output volume:(\d+), .* muted:(\w+)", vol)
if m:
    level, muted = m.groups()
    if muted == 'true':
        level = "\u20e5"        # COMBINING REVERSE SOLIDUS OVERLAY
    print(f"\u24cb{level}")     # CIRCLED LATIN CAPITAL LETTER V

# For debugging: output the raw data, but pushed out of view.
print(f"{'':30}{batt}")
print(f"{'':30}{vol}")

This displays the time, date, battery level (both numerically and as a crudely drawn battery gauge), whether the battery is charging or discharging, and the volume level:

All that information, crammed into a tiny corner

BTW, if you are looking for Python esoterica, there are a few little-known things going on in this line:

print(f"{now:%-I:%M\n%-m/%-d}")

Finding Unicode characters to represent things was a bit of a challenge. I couldn't find exactly what I need for the right tip of the battery gauge, but it works well enough.

GeekTool also does web pages, though in a quick experiment I couldn't make it do something useful, so I stuck with text mode. There also seem to be old forks of GeekTool that offer text colors, which could be cool, but it started to feel a bit off the beaten path.

This works great for what it does.

Clean-text bookmarklet

Saturday 8 April 2017

I love the text-based web. I love that people can speak their minds, express opinions, encourage each other, and create a lively world of words. This also means they are free to design their text in, shall we say, expressive ways. Those ways are not always ideal for actually reading the words.

Today I really liked Tiberius Hefflin's Part of That World, about the need to recognize non-code contributions in open source projects. You should read it, it is good and true.

But when I first got to the page, I saw this:

Screenshot of gray text on black background, slightly letterspaced

To start with the positive, this text has an elegance to it. It gives a peaceful quiet impression. It pairs perfectly with the mermaid illustration on the page. But I find it hard to read. This typeface is too weak to be light-on-dark, and letterspacing is almost always a bad idea for body text. It isn't even white-on-black, it's 70% white on black, so the letters seem to be hiding in the dark.

I don't mean to pick on this page. It's a well-designed page. There's clearly a mood being created here, and it's been established well. There are many pages online that veer much farther from the usual than this.

My solution for pages like this is a bookmarklet to strip away idiosyncrasies in text layout. It changes text to almost-black on white, removes letterspacing and shadows, and changes full-justified text to left-justified. When I use the bookmarklet on Part of That World, it looks like this:

Screenshot of cleaned-up text

You might prefer the original. That's fine, to each their own. You might feel like the personality has been bleached from this text. To some extent, that's true. But I saw the original, and can choose between them. This helped me to read the words, and not get snagged on the design of the page.

This is the bookmarklet: Clean text.

This is the JavaScript code in the bookmarklet, formatted and tweaked so you can read it:

javascript:(function () {
    var newSS = document.createElement('link'),
        styles = (
            '* { ' +
                'background: #fff; color: #111; ' +
                'letter-spacing: 0; text-shadow: none; hyphens: none; ' +
            '}' +
            ':link, :link * { color: #0000EE; } ' +
            ':visited, :visited * { color: #551A8B; }'
        ).replace(/;/g,' !important;');
    newSS.rel = 'stylesheet';
    newSS.href = 'data:text/css,' + escape(styles);
    document.getElementsByTagName('head')[0].appendChild(newSS);
    var els = document.getElementsByTagName('*');
    for (var i = 0, el; el = els[i]; i++) {
        if (getComputedStyle(el).textAlign === 'justify') {
            el.style.textAlign = 'left';
        }
    }
})();

There are other solutions to eccentrically designed pages. You could read blogs in a single aggregating RSS reader, but then everything is completely homogenized, and you don't even get a chance to experience the design as the author intended. Writers can (and do) flock to sites like Medium, which again homogenizes the design.

By the way, full disclosure: I don't like the design of my own site, the page you are (probably) currently reading. I have been working on a re-design on and off for months. Maybe eventually it will be finished. The text will be serif, and larger, with a responsive layout and fewer distractions. Some day.

IronPython is weird

Wednesday 15 March 2017

Have you fully understood how Python 2 and Python 3 deal with bytes and Unicode? Have you watched Pragmatic Unicode (also known as the Unicode Sandwich, or unipain) forwards and backwards? You're a Unicode expert! Nothing surprises you any more.

Until you try IronPython...

Turns out IronPython 2.7.7 has str as unicode!

C:\Users\Ned>"\Program Files\IronPython 2.7\ipy.exe"
IronPython 2.7.7 (2.7.7.0) on .NET 4.0.30319.42000 (32-bit)
Type "help", "copyright", "credits" or "license" for more information.
>>> "abc"
'abc'
>>> type("abc")
<type 'str'>
>>> u"abc"
'abc'
>>> type(u"abc")
<type 'str'>
>>> str is unicode
True
>>> str is bytes
False

String literals work kind of like they do in Python 2: \u escapes are recognized in u"" strings but not in "" strings, yet both produce the same type:

>>> "abc\u1234"
'abc\\u1234'
>>> u"abc\u1234"
u'abc\u1234'

Notice that the repr of this str/unicode type uses a u-prefix if any character is non-ASCII, but if the string is all ASCII, the prefix is omitted.

OK, so how do we get a true byte string? I guess we could encode a unicode string? WRONG. Encoding a unicode string produces another unicode string with the encoded byte values as code points!:

>>> u"abc\u1234".encode("utf8")
u'abc\xe1\x88\xb4'
>>> type(_)
<type 'str'>

Surely we could at least read the bytes from a file with mode "rb"? WRONG.

>>> type(open("foo.py", "rb").read())
<type 'str'>
>>> type(open("foo.py", "rb").read()) is unicode
True

On top of all this, I couldn't find docs that explain that this happens. The IronPython docs just say, "Since IronPython is a implementation of Python 2.7, any Python documentation is useful when using IronPython," and then links to the python.org documentation.

A decade-old article on InfoQ, The IronPython, Unicode, and Fragmentation Debate, discusses this decision, and points out correctly that it's due to needing to mesh well with the underlying .NET semantics. It seems very odd not to have documented it some place. Getting coverage.py working even minimally on IronPython was an afternoon's work of discovering each of these oddnesses empirically.

Also, that article quotes Guido van Rossum (from a comment on Calvin Spealman's blog):

You realize that Jython has exactly the same str==unicode issue, right? I've endorsed this approach for both versions from the start. So I don't know what you are so bent out of shape about.

I guess things have changed with Jython in the intervening ten years, because it doesn't behave that way now:

$ jython
Jython 2.7.1b3 (default:df42d5d6be04, Feb 3 2016, 03:22:46)
[Java HotSpot(TM) 64-Bit Server VM (Oracle Corporation)] on java1.8.0_31
Type "help", "copyright", "credits" or "license" for more information.
>>> 'abc'
'abc'
>>> type(_)
<type 'str'>
>>> str is unicode
False
>>> type("abc")
<type 'str'>
>>> type(u"abc")
<type 'unicode'>
>>> u"abc".encode("ascii")
'abc'
>>> u"abc"
u'abc'

If you want to support IronPython, be prepared to rethink how you deal with bytes and Unicode. I haven't run the whole coverage.py test suite on IronPython, so I don't know if other oddities are lurking there.

Rubik's algorithms

Sunday 26 February 2017

Recently, a nephew asked about how to solve a Rubik's Cube. I couldn't sit down with him to show him what I knew, so I looked around the web for explanations. I was surprised by two things: first, that all the pages offering solutions seemed to offer the same one, even down to the colors discussed: "Start by making a white cross, ..., finally, finish the yellow side."

Second, that the techniques (or "algorithms") were often given without explanation. They're presented as something to memorize.

My own solving technique uses a few algorithms constructed in a certain way that I describe in Two-Part Rubik's Algorithms. I wrote them up as a resource I hope my nephew will be able to use.

A Rubik's Cube with two edges flipped

BTW, that page makes use of Conrad Rider's impressive TwistySim library.

Https

Saturday 25 February 2017

Someone posted a link to my latest blog post on /r/Python, but somehow got an https link for it. That's odd: my site doesn't even properly serve content over https. People were confused by the broken link.

I should say, my site didn't even serve content over https, because now it does. I'd been meaning to enable https, and force its use, for a long time. This broken link pushed it to the top of the list.

Let's Encrypt is the certificate authority of choice these days, because they are free and automatable. And people say they make it easy, but I have to say, I would not have classified this as easy. I'm sure it's easier than it used to be, but it's still a confusing maze of choices, with decision points you are expected to navigate.

Actually getting everything installed requires sudo, or, without sudo, third-party tools with instructions from obscure blog posts. There's clearly still room for improvement.

Once you have the certificate in place, you need to redirect your http site to https. Then you have to fix the http references in your site. Protocol-relative (or scheme-less) URLs, like "//example.com/script.js", are handy here.

It's all done now, the entire site should always be https. I'm glad I finally got the kick in the pants to do it. If you find something wrong, let me know.

A tale of two exceptions, continued

Thursday 23 February 2017

In my last blog post, A tale of two exceptions, I laid out the long drawn-out process of trying to get a certain exception to make tests skip in my test runner. I ended on a solution I liked at the time.

But it still meant having test-specific code in the product code, even if it was only a single line to set a base class for an exception. It didn't feel right to say "SkipTest" in the product code, even once.

In that blog post, I said,

One of the reasons I write this stuff down is because I'm hoping to get feedback that will improve my solution, or advance my understanding. ... a reader might object and say, "you should blah blah blah."

Sure enough, Ionel said,

A better way is to handle this in coverage's test suite. Possible solution: wrap all your tests in a decorator that reraises with a SkipException.

I liked this idea. The need was definitely a testing need, so it should be handled in the tests. First I tried doing something with pytest to get it to do the conversion of exceptions for me. But I couldn't find a way to make it work.

So: how to decorate all my tests? The decorator itself is fairly simple. Just call the method with all the arguments, and return its value, but if it raises StopEverything, then raise SkipTest instead:

import unittest

def convert_skip_exceptions(method):
    """A decorator for test methods to convert StopEverything to SkipTest."""
    def wrapper(*args, **kwargs):
        """Run the test method, and convert exceptions."""
        try:
            result = method(*args, **kwargs)
        except StopEverything:
            raise unittest.SkipTest("StopEverything!")
        return result
    return wrapper
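
Applied by hand to a single test, it would look like this (a hypothetical example; in the real suite, StopEverything comes from the product code):

import unittest

class StopEverything(Exception):
    """Stand-in for coverage.py's exception, just for this sketch."""

class ExampleTest(unittest.TestCase):
    @convert_skip_exceptions
    def test_analysis(self):
        raise StopEverything()      # unittest reports this as a skip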

But decorating all the test methods would mean adding a @convert_skip_exceptions line to hundreds of test methods, which I clearly was not going to do. I could use a class decorator, which meant I would only have to add a decorator line to dozens of classes. That also felt like too much to do and remember to do in the future when I write new test classes.

It's not often I say this, but: it was time for a metaclass. Metaclasses are one of the darkest magics Python has, and they can be mysterious. At heart, they are simple, but in a place you don't normally think to look. Just as a class is used to make objects, a metaclass is used to make classes. Since there's something I want to do every time I make a new class (decorate its methods), a metaclass gives me the tools to do it.

import types

class SkipConvertingMetaclass(type):
    """Decorate all test methods to convert StopEverything to SkipTest."""
    def __new__(mcs, name, bases, attrs):
        for attr_name, attr_value in attrs.items():
            right_name = attr_name.startswith('test_')
            right_type = isinstance(attr_value, types.FunctionType)
            if right_name and right_type:
                attrs[attr_name] = convert_skip_exceptions(attr_value)

        return super(SkipConvertingMetaclass, mcs).__new__(mcs, name, bases, attrs)

There are details here that you can skip as incantations if you like. Classes are all instances of "type", so if we want to make a new thing that makes classes, it derives from type to get those same behaviors. The method that gets called when a new class is made is __new__. It gets passed the metaclass itself (just as classmethods get cls and instance methods get self), the name of the class, the tuple of base classes, and a dict of all the names and values defining the class (the methods, attributes, and so on).

The important part of this metaclass is what happens in the __new__ method. We look at all the attributes being defined on the class. If the name starts with "test_", and it's a function, then it's a test method, and we decorate the value with our decorator. Remember that @-syntax is just a shorthand for passing the function through the decorator, which we do here the old-fashioned way.
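
That is, these two spellings define the same decorated method:

@convert_skip_exceptions
def test_foo(self):
    ...

# ...which is shorthand for:

def test_foo(self):
    ...
test_foo = convert_skip_exceptions(test_foo)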

Then we use super to let the usual class-defining mechanisms in "type" do their thing. Now all of our test methods are decorated, with no explicit @-lines in the code. There's only one thing left to do: make sure all of our test classes use the metaclass:

CoverageTestMethodsMixin = SkipConvertingMetaclass('CoverageTestMethodsMixin', (), {})

class CoverageTest(
    ... some other mixins ...
    CoverageTestMethodsMixin,
    unittest.TestCase,
):
    """The base class for all coverage.py test classes."""

Metaclasses make classes, just the way classes make instances: you call them. Here we call our metaclass with the arguments it needs (class name, base classes, and attributes) to make a class called CoverageTestMethodsMixin.
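
Calling the metaclass directly like this is the spelling that works in both Python 2 and 3. In Python 3-only code, the same mixin would usually be written with the metaclass keyword:

class CoverageTestMethodsMixin(metaclass=SkipConvertingMetaclass):
    """Test classes deriving from this get their test methods decorated."""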

Then we use CoverageTestMethodsMixin as one of the base classes of CoverageTest, which is the class used to derive all of the actual test classes.

Pro tip: if you are using unittest-style test classes, make a single class the base of all of your test classes; you will be glad you did.

After all of these class machinations, what have we got? Our test classes all derive from a base class which uses a metaclass to decorate all the test methods. As a result, any test which raises StopEverything will instead raise SkipTest to the test runner, and the test will be skipped. There's now no mention of SkipTest in the product code at all. Better.

A tale of two exceptions

Sunday 22 January 2017

It was the best of times, it was the worst of times...

This week saw the release of three different versions of Coverage.py. This is not what I intended. Clearly something was getting tangled up. It had to do with some tricky exception handling. The story is kind of long and intricate, but has a number of chewy nuggets that fascinate me. Your mileage may vary.

Writing it all out, many of these missteps seem obvious and stupid. If you take nothing else from this, know that everyone makes mistakes, and we are all still trying to figure out the best way to solve some problems.

It started because I wanted to get the test suite running well on Jython. Jython is hard to support in Coverage.py: it can do "coverage run", but because it doesn't have the same internals as CPython, it can't do "coverage report" or any of the other reporting code. Internally, there's one place in the common reporting code where we detect this, and raise an exception. Before all the changes I'm about to describe, that code looked like this:

for attr in ['co_lnotab', 'co_firstlineno']:
    if not hasattr(self.code, attr):
        raise CoverageException(
            "This implementation of Python doesn't support code analysis.\n"
            "Run coverage.py under CPython for this command."
        )

The CoverageException class is derived from Exception. Inside of Coverage.py, all exceptions raised are derived from CoverageException. This is a good practice for any library. For the coverage command-line tool, it means we can catch CoverageException at the top of main() so that we can print the message without an ugly traceback from the internals of Coverage.py.
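
In sketch form, the shape of that top-level catch looks like this (not coverage.py's actual main, and run_command is a hypothetical name):

def main():
    try:
        run_command()               # all the real work
    except CoverageException as err:
        print(err)                  # a clean message, no traceback
        return 1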

The problem with running the test suite under Jython is that this "can't support code analysis" exception was being raised from hundreds of tests. I wanted to get to zero failures or errors, either by making the tests pass (where the operations were supported on Jython) or skipping the tests (where the operations were unsupported).

There are lots of tests in the Coverage.py test suite that are skipped for all sorts of reasons. But I didn't want to add decorators or conditionals to hundreds of tests for the Jython case. First, it would be a lot of noise in the tests. Second, it's not always immediately clear from a test that it is going to touch the analysis code. Lastly and most importantly, if someday in the future I figured out how to do analysis on Jython, or if it grew the features to make the current code work, I didn't want to have to then remove all that test-skipping noise.

So I wanted to somehow automatically skip tests when this particular exception was raised. The unittest module already has a way to do this: tests are skipped by raising a unittest.SkipTest exception. If the exception raised for "can't support code analysis" derived from SkipTest, then the tests would be skipped automatically. Genius idea!

So in 4.3.2, the code changed to this (spread across a few files):

from coverage.backunittest import unittest

class StopEverything(unittest.SkipTest):
    """An exception that means everything should stop.

    This derives from SkipTest so that tests that spring this trap will be
    skipped automatically, without a lot of boilerplate all over the place.

    """
    pass

class IncapablePython(CoverageException, StopEverything):
    """An operation is attempted that this version of Python cannot do."""
    pass

...

# Alternative Python implementations don't always provide all the
# attributes on code objects that we need to do the analysis.
for attr in ['co_lnotab', 'co_firstlineno']:
    if not hasattr(self.code, attr):
        raise IncapablePython(
            "This implementation of Python doesn't support code analysis.\n"
            "Run coverage.py under another Python for this command."
        )

It felt a little off to derive a product exception (StopEverything) from a testing exception (SkipTest), but that seemed acceptable. One place in the code, I had to deal specifically with StopEverything. In an inner loop of reporting, we catch exceptions that might happen on individual files being reported. But if this exception happens once, it will happen for all the files, so we wanted to end the report, not show this failure for every file. In pseudo-code, the loop looked like this:

for f in files_to_report:
    try:
        generate_report_for_file(f)
    except StopEverything:
        # Don't report this on single files, it's a systemic problem.
        raise
    except Exception as ex:
        record_exception_for_file(f, ex)

This all seemed to work well: the tests skipped properly, without a ton of noise all over the place. There were no test failures in any supported environment. Ship it!

Uh-oh: very quickly, reports came in that coverage didn't work on Python 2.6 any more. In retrospect, it was obvious: the whole point of the "from coverage.backunittest" line in the code above was that Python 2.6 doesn't have unittest.SkipTest. For the Coverage.py tests on 2.6, I install unittest2 to get a backport of the things 2.6 is missing, and that gave me SkipTest; but without my test requirements installed, it doesn't exist.

So my tests passed on 2.6 because I installed a package that provided what was missing, but in the real world, unittest.SkipTest is truly missing.

This is a conundrum that I don't have a good answer to:

How can you test your code to be sure it works properly when the testing requirements aren't installed?

To fix the problem, I changed the definition of StopEverything. Coverage.py 4.3.3 went out the door with this:

class StopEverything(unittest.SkipTest if env.TESTING else object):
    """An exception that means everything should stop."""
    pass

The env.TESTING setting was a pre-existing variable: it's true if we are running the coverage.py test suite. This also made me uncomfortable: as soon as you start conditionalizing on whether you are running tests or not, you have a very slippery slope. In this case it seemed OK, but it wasn't: it hid the fact that deriving an exception from object is a dumb thing to do.

So 4.3.3 failed also, and not just on Python 2.6. As soon as an exception was raised inside that reporting loop that I showed above, Python noticed that I was trying to catch a class that doesn't derive from Exception. Of course, my test suite didn't catch this, because when I was running my tests, my exception derived from SkipTest.

Changing "object" to "Exception" would fix the problem, but I didn't like the test of env.TESTING anyway. So for 4.3.4, the code is:

class StopEverything(getattr(unittest, 'SkipTest', Exception)):
    """An exception that means everything should stop."""
    pass

This is better, first because it uses Exception rather than object. But also, it's duck-typing the base class rather than depending on env.TESTING.

But as I kept working on getting rid of test failures on Jython, I got to this test failure (pseudo-code):

def test_sort_report_by_invalid_option(self):
    msg = "Invalid sorting option: 'Xyzzy'"
    with self.assertRaisesRegex(CoverageException, msg):
        coverage.report(sort='Xyzzy')

This is a reporting operation, so Jython will fail with a StopEverything exception saying, "This implementation of Python doesn't support code analysis." StopEverything is a CoverageException, so the assertRaisesRegex will catch it, but it will fail because the messages don't match.

StopEverything is both a CoverageException and a SkipTest, but the SkipTest is the more important aspect. To fix the problem, I did this, but felt silly:

def test_sort_report_by_invalid_option(self):
    msg = "Invalid sorting option: 'Xyzzy'"
    with self.assertRaisesRegex(CoverageException, msg):
        try:
            coverage.report(sort='Xyzzy')
        except SkipTest:
            raise SkipTest()

I knew this couldn't be the right solution. Talking it over with some co-workers (OK, I was griping and whining), we came up with the better solution. I realized that CoverageException is used in the code base to mean, "an ordinary problem from inside Coverage.py." StopEverything is not an ordinary problem. It reminded me of typical mature exception hierarchies, where the main base class, like Exception, isn't actually the root of the hierarchy. There are always a few special-case classes that derive from a real root higher up.

For example, in Python, the classes Exception, SystemExit, and KeyboardInterrupt all derive from BaseException. This is so "except Exception" won't interfere with SystemExit and KeyboardInterrupt, two exceptions meant to forcefully end the program.
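
A quick illustration of that layering (my example):

try:
    raise KeyboardInterrupt()
except Exception:
    print("never reached: KeyboardInterrupt isn't an Exception")
except BaseException:
    print("caught at the true root of the hierarchy")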

I needed the same thing here, for the same reason. I want to have a way to catch "all" exceptions without interfering with the exceptions that mean "end now!" I adjusted my exception hierarchy, and now the code looks like this:

class BaseCoverageException(Exception):
    """The base of all Coverage exceptions."""
    pass

class CoverageException(BaseCoverageException):
    """A run-of-the-mill exception specific to coverage.py."""
    pass

class StopEverything(
        BaseCoverageException,
        getattr(unittest, 'SkipTest', Exception)
    ):
    """An exception that means everything should stop."""
    pass

Now I could remove the weird SkipTest dance in that test. The catch clause in my main() function changes from CoverageException to BaseCoverageException, and things work great. The end...?

One of the reasons I write this stuff down is because I'm hoping to get feedback that will improve my solution, or advance my understanding. As I lay out this story, I can imagine points of divergence: places in this narrative where a reader might object and say, "you should blah blah blah." For example:

  • "You shouldn't bother supporting 2.6." Perhaps not, but that doesn't change the issues explored here, just makes them less likely.
  • "You shouldn't bother supporting Jython." Ditto.
  • "You should just have dependencies for the things you need, like unittest2." Coverage.py has a long-standing tradition of having no dependencies. This is driven by a desire to be available to people porting to new platforms, without having to wait for the dependencies to be ported.
  • "You should have more realistic integration testing." I agree. I'm looking for ideas about how to test the scenario of having no test dependencies installed.

That's my whole tale. Ideas are welcome.

Update: the story continues, but fair warning: metaclasses ahead!
