Two useful sites for choosing color palettes, both from map-making
backgrounds. They both treat qualitative, sequential, and diverging palettes
as different needs, which I found insightful.
- Paul Tol’s notes, which give
special consideration to color-blindness. He has some visual demonstrations
that picked up my own slight color-blindness.
- Cynthia Brewer’s ColorBrewer, with
interactive elements so you can create your own palette for your particular
needs.
Color Palette Ideas is different:
palettes based on photographs, but it can also be a good source for ideas.
As an update to my ancient blog post
about this same topic, Adobe Color and
paletton both have tools for generating
palettes in lots of over-my-head ways. And Color Synth Axis
is still very appealing to the geek in me, though it needs Flash, and so I fear
is not long for this world...
Yesterday I pleaded,
Bug #915: please help!
It got posted to Hacker News,
where Robert Xiao (nneonneo) did some impressive debugging and
found the answer.
The user’s code used mocks to simulate an OSError when trying to make a
temporary file:
with patch('tempfile._TemporaryFileWrapper') as mock_ntf:
    mock_ntf.side_effect = OSError()
Inside tempfile.NamedTemporaryFile, the error handling misses the possibility
that _TemporaryFileWrapper will fail:
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
try:
    file = _io.open(fd, mode, buffering=buffering,
                    newline=newline, encoding=encoding, errors=errors)
    return _TemporaryFileWrapper(file, name, delete)
except BaseException:
    _os.unlink(name)
    _os.close(fd)
    raise
If _TemporaryFileWrapper fails, the file descriptor fd is closed, but the
file object referencing it still exists. Eventually, it will be garbage
collected, and the file descriptor it references will be closed again.
But file descriptors are just small integers which will be reused. The
failure in bug 915 is that the file descriptor did get reused, by SQLite. When
the garbage collector eventually reclaimed the file object leaked by
NamedTemporaryFile, it closed a file descriptor that SQLite was using. Boom.
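To make the failure mode concrete, here’s a small standalone sketch (my own
demonstration, not coverage.py or stdlib code, and the file names are made up)
of a stale file object closing a descriptor that the OS has since handed to
someone else:

```python
import os

f = open("first.txt", "w")      # wraps some small file descriptor
fd = f.fileno()
os.close(fd)                    # the descriptor is closed behind the object's back

g = open("second.txt", "w")     # the OS hands out the lowest free descriptor,
print(g.fileno() == fd)         # so this usually prints True

f.close()                       # closes fd *again*, yanking it away from g
g.write("boom")
g.flush()                       # OSError: bad file descriptor
```

In the bug, the second close happened whenever the garbage collector got around
to the leaked file object, which is what made it so hard to trace.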
There are two improvements to be made here. First, the user code should be
mocking public functions, not internal details of the Python stdlib. In
fact, the variable is already named mock_ntf as if it had been a mock of
NamedTemporaryFile at some point.
NamedTemporaryFile would be a better mock because that is the function being
used by the user’s code. Mocking _TemporaryFileWrapper is relying on an
internal detail of the standard library.
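As a sketch of what that better mock might look like (assuming the code under
test calls tempfile.NamedTemporaryFile directly; the patch target string is the
point here):

```python
from unittest.mock import patch

# Patch the public function the code under test actually calls, instead of
# the private _TemporaryFileWrapper helper inside the stdlib.
with patch("tempfile.NamedTemporaryFile") as mock_ntf:
    mock_ntf.side_effect = OSError()
    # ... exercise the code that is expected to handle the OSError ...
```

If the code under test imports NamedTemporaryFile into its own namespace, the
patch target would need to name that module instead, but either way the mock
stays on the public interface.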
The other improvement is to close the leak in NamedTemporaryFile.
That request is now bpo39318.
As it happens, the leak had also been reported as
- Hacker News can be helpful, in spite of the tangents about shell
redirection, authorship attribution, and GitHub monoculture.
- There are always people more skilled at debugging. I had no idea you could
- Error handling is hard to get right. Edge cases can be really subtle.
Bugs can linger for years.
I named Robert Xiao at the top, but lots of people chipped in effort to help
get to the bottom of this. ikanobori posted it to Hacker News in the first
place. Chris Caron reported the original #915 and stuck with the process as it
dragged on. Thanks everybody.
Updated: this was solved on Hacker News. Details in
Bug #915: solved!
I just released coverage.py 5.0.3, with two bug fixes. There was another bug
I really wanted to fix, but it has stumped me. I’m hoping someone can figure it
out. Bug #915 describes a disk I/O failure. Thanks to some help from Travis support, Chris
Caron has provided instructions for reproducing it in Docker, and they work: I
can generate disk I/O errors at will. What I can’t figure out is what
coverage.py is doing wrong that causes the errors.
To reproduce it, start a Travis-based docker image:
cid=$(docker run -dti --privileged=true --entrypoint=/sbin/init \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
docker exec -it $cid /bin/bash
Then in the container, run these commands:
su - travis
git clone --branch=nedbat/debug-915 https://github.com/nedbat/apprise-api.git
cd apprise-api
pip install tox
tox -e bad,good
This will run two tox environments, called good and bad. Bad
will fail with a disk I/O error, good will succeed. The difference is that bad
uses the pytest-cov plugin, good does not. Two detailed debug logs will be
created: debug-good.txt and debug-bad.txt. They show what operations were
executed in the SqliteDb class in coverage.py.
The Big Questions: Why does bad fail? What is it doing at the SQLite level
that causes the failure? And most importantly, what can I change in coverage.py
to prevent the failure?
Some observations and questions:
- If I change the last line of the steps to “tox -e good,bad” (that is, run
the environments in the other order) then the error doesn’t happen. I don’t
understand why that would make a difference.
- I’ve tried adding time.sleep’s to try to slow the pace of database access,
but maybe in not enough places? And if this fixes it, what’s the right way to
productize that change?
- I’ve tried using the detailed debug log to create a small Python program
that in theory accesses the SQLite database in exactly the same way, but I
haven’t managed to create the error that way. What aspect of the access am I
missing? (A rough sketch of the replay idea is just below this list.)
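A minimal sketch of that kind of replay program, for the curious. The
statements and the database file name here are placeholders, not the real
contents of debug-bad.txt:

```python
# Hypothetical replay of logged SQLite operations; the statements are
# placeholders, not the actual ones from the debug log.
import sqlite3

STATEMENTS = [
    "create table if not exists demo (id integer primary key, value text)",
    "insert into demo (value) values ('x')",
    "select count(*) from demo",
]

con = sqlite3.connect("replay.db")
for stmt in STATEMENTS:
    con.execute(stmt)
con.commit()
con.close()
```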
If you come up with answers to any of these questions, I will reward you
somehow. I am also eager to chat if that would help you solve the mysteries.
I can be reached on email,
as nedbat on IRC,
or in Slack. Please get in
touch if you have any ideas. Thanks.
Our card this year, drawn by Ben, of
course. The five gnomes are Susan, me, Ben, Max, and Nat:
Providing detailed command output in GitHub issues is hard: I want to be
complete, but I don’t want to paste unreadable walls of text. Some commands
have long output that is usually uninteresting (pip install), but which every
once in a while has a useful clue. I want to include that output without making
it hard to find the important stuff.
While working on an issue with coverage.py 5.0,
I came up with a way to show commands and their output that I think works well.
I used GitHub’s <details> support to present
the commands I ran with their output in collapsible sections. I like the
way it came out: you can copy all the commands, or open a section to see what
happened for the command you’re interested in.
The raw markdown looks like this:
<details>
<summary>pip install '.[dev]'</summary>

Using cached https://files.pythonhosted.org/packages/0d/46/5b6a6c13fee40f9dfaba84de1394bfe082c0c7d95952ba0ffbd56ce3a3f7/aenum-2.1.2-py3-none-any.whl
Using cached https://files.pythonhosted.org/packages/4b/2a/0276479a4b3caeb8a8c1af2f8e4355746a97fab05a372e4a2c6a6b876165/idna-2.7-py2.py3-none-any.whl
Using cached https://files.pythonhosted.org/packages/ea/cd/35485615f45f30a510576f1a56d1e0a7ad7bd8ab5ed7cdc600ef7cd06222/asn1crypto-0.24.0-py2.py3-none-any.whl

</details>
(The GitHub renderer was very particular about the blank lines around the
<details> and <summary> tags, so be sure to include them if you try it.)
Other people have done this: after I wrote this comment, one of the newer
coverage.py issues used the same technique, but with <tt> in the summaries to
make them look like commands, nice. There are a few manual steps to get that
result, but I’ll be refining how to produce that style more conveniently from a
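Whatever that tooling ends up being, a hypothetical helper along these lines
shows the general idea (a sketch, not an existing tool; the function name is
made up):

```python
# Run a command, capture its output, and print a <details> block ready to
# paste into a GitHub issue.  (On GitHub you might also want to wrap the
# output in a code block so it renders literally.)
import subprocess
import sys

def details_block(cmd):
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return (
        "<details>\n"
        f"<summary>{cmd}</summary>\n"
        "\n"
        f"{result.stdout}{result.stderr}\n"
        "</details>\n"
    )

if __name__ == "__main__":
    print(details_block(" ".join(sys.argv[1:])))
```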
While trying to reproduce an issue with coverage.py 5.0,
I had a test suite that showed the problem, but it was inconvenient to run the
whole suite repeatedly, because it took too long. I wanted to find just one
test (or small handful of tests) that would demonstrate the problem.
But I knew nothing about these tests. I didn’t know what subset might be
useful, or even what subsets there were, so I had to try random subsets and hope
for the best.
I selected random subsets with a new trick: I used
the -k option (which selects tests by a substring of their names) with single
consonants. “pytest -k b” will run only the tests with a b in their name, for
example. Then I tried “-k c”, “-k d”, “-k f”, and so on. Some letters ran the
whole test suite (“-k t” is useless because t is in every test name), but some
ran usefully small collections.
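If you want to script the brute-force part, something like this is one way to
do it; pytest.main and --collect-only are real pytest features, and the
consonant list is just the arbitrary part of the trick:

```python
# Collect (but don't run) the tests each single consonant would select, to
# find a letter that picks a usefully small subset.  Skip "t" and "s", which
# appear in every test name.
import pytest

for letter in "bcdfghjklmnpqrvwxz":
    print(f"==== -k {letter} ====")
    pytest.main(["--collect-only", "-q", "-k", letter])
```

Calling pytest.main repeatedly in one process can have quirks with some
plugins, so running the same loop from the shell works just as well.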
This is a mindless way to select tests, but I knew nothing about this test
suite, so it was a quick way to run fewer than all of them. Running “-k q” was
the best (only 16 tests). Then I looked at the test names, and selected yet
smaller subsets with more thought. In the end, I could reduce it to just one
test that demonstrated the problem.