For the last month or so, the IRC world has been embroiled in drama over the
new ownership of Freenode. For me, it culminated yesterday when I was banned
from the network.
I’m not going to try to recap what happened in detail, but I can give you my
overall perspective on it. The new owners started on the wrong foot, and then
mishandled every subsequent interaction. At every turn, people feared the new
owners and staff were going to do something malicious. Then something bad would
happen, people would say, “look: malice!” and the new staff would say, “it
wasn’t malice, it was a mistake!” Then it would happen again.
A month ago, when the new trends were becoming clear, the operators of the
#python channel (including me) decided to move #python to the new
Libera.chat network being run by the old
Freenode staff. But we also stayed in the Freenode channel to let people know
where everyone had gone.
Yesterday, after a heated debate in the Freenode channel where I was accused
of splitting the community, I got k-lined (banned entirely from Freenode). The
reason given was “spamming”, because of my recurring message about the move to
Libera. Then the entire Freenode #python channel was closed. So much for caring
about the community.
Was it malice or was it a mistake? Does it matter? It’s not a good way to run a
network. After the channel was closed, people asking staff about what happened
were banned for asking. That wasn’t a mistake.
I can’t claim to know the minds of the new Freenode owners or staff. All I
can do is see their actions, or I could until they banned me from Freenode. I
know that some of the new staff are people we had come to know over the years as
persistent disrupters in #python. The people advocating for the new Freenode
staff seem to trend towards the anti-code-of-conduct, “free speech means I don’t
have to care” cohort. And the new staff seems to be using force to silence
people asking questions. It’s clear that transparency is not a strong value for
them.
Setting aside network drama, the big picture here is that the Freenode
#python community isn’t split: it’s alive and well. It’s just not on Freenode
anymore, it’s on Libera.
Freenode was a good thing. But the domain name of the server was the least
important part of it, just a piece of technical trivia. There’s no reason to
stick with Freenode just because it is called Freenode. As with any way of
bringing people together, the important part is the people. If all of the
people go someplace else, follow them there, and continue.
At work, we work in GitHub pull requests that
get merged to the main branch. We also have twice-yearly community release
branches, and a small fraction of the main-branch changes need to be copied onto
the current release branch. Trying to automate choosing the commits to
cherry-pick led me into some Git and GitHub complexities.
GitHub has three different ways to finish up a pull request, which complicates
the process of figuring out what to cherry-pick. Before getting into
cherry-picking, let’s look at the three ways a pull request can finish. Suppose we
have four commits on the main branch (A-B-C-D), and a pull request for a feature
branch started from B with two commits (F-G) on it:
The F-G pull request can be brought into the main branch in three ways.
First, the F-G commits can be merged to main with a merge commit:
Second, the two commits can be rebased onto main as two new commits
Fr-Gr (for F-rebased and G-rebased):
Lastly, the two commits can be squashed down to one new commit FGs
(for F and G squashed):
Note that for rebased and squashed pull requests, the original commits F-G
will not be reachable from the main branch, and will eventually disappear from
the repo, indicated by their dashed outlines.
Now let’s consider the release branch. This is a branch made twice a year to
mark community releases of the platform. Once the branch is made, some fixes
need to be cherry-picked onto it from the main branch. We can’t just merge the
fixes, because that would bring the entire history of the main branch into the release.
Cherry-picking lets us take just the commits we want.
As an example, here E has been cherry-picked as Ec:
The question now is:
To get the changes from a finished pull request onto the
release branch, what commits should we cherry-pick?
The two rules are:
The commits should make the same changes to the release branch that were made
to the main branch, and
The commits should be reachable from the main
branch, in case we need to later investigate how the changes came to be.
GitHub doesn’t record what approach was used to finish a pull request (unless
I’ve missed something). It records what it calls the “merge commit”. For
a merged pull request, this is the actual merge commit. For rebased and squashed
pull requests, it’s the final commit that ended up on the main branch.
In the case of a merged pull request, the answer is easy: cherry-pick the two
original commits in the pull request. We can tell the pull request was merged
because the merge commit (with a thicker outline) has two parents (it’s actually
the only kind of commit that has two parents).
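A sketch of how that check could be automated, assuming you have the sha of
the pull request’s recorded “merge commit” (GitHub’s API exposes it as
`merge_commit_sha`); the function names here are my own:

```python
import subprocess

def parent_shas(rev_list_line):
    # One line of `git rev-list --parents -n 1 <sha>` output is the commit's
    # sha followed by the shas of its parents.
    return rev_list_line.split()[1:]

def finished_by_merge(sha, repo="."):
    # A true merge commit has two or more parents; rebased and squashed
    # pull requests leave ordinary single-parent commits on the main branch.
    line = subprocess.run(
        ["git", "-C", repo, "rev-list", "--parents", "-n", "1", sha],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return len(parent_shas(line)) >= 2
```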
But for rebased and squashed pull requests, the answer is not so simple. We
can tell the pull request wasn’t merged, because the recorded “merge commit”
isn’t a merge. Somehow we have to figure out how many commits starting with the
merge commit are the right ones to take. For a rebased pull request we’d like
to cherry-pick as many commits as the pull request had:
And for a squashed pull request, we want to cherry-pick just the one squashed
commit:
But how to tell the difference between these two situations? I don’t know
the best approach. Maybe comparing the commit messages? My first way was to
look at the count of added and deleted lines. If the merge commit changes as
many lines as the pull request as a whole, then just take that one commit. But
that could be wrong if a rebased pull request had overlapping commits, and the
last commit changed all the lines.
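The line-count heuristic can be sketched like this, parsing the kind of
summary `git diff --shortstat` prints. The helper names are my own, and as
noted above, the guess fails when a rebased pull request’s last commit
touched all the lines:

```python
import re

def parse_shortstat(shortstat):
    # Parse output like `git diff --shortstat A..B` produces, e.g.
    # " 3 files changed, 10 insertions(+), 2 deletions(-)".
    # Either count can be absent if there were no insertions or deletions.
    ins = re.search(r"(\d+) insertion", shortstat)
    dels = re.search(r"(\d+) deletion", shortstat)
    return (
        int(ins.group(1)) if ins else 0,
        int(dels.group(1)) if dels else 0,
    )

def looks_squashed(merge_commit_stat, whole_pr_stat):
    # If the single recorded "merge commit" changes as many lines as the
    # whole pull request did, guess it was squashed; otherwise assume it
    # was rebased and more commits need to be taken.
    return parse_shortstat(merge_commit_stat) == parse_shortstat(whole_pr_stat)
```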
Is there some bit of information I’ve overlooked? Does git or GitHub have a
way to unambiguously distinguish these cases?
Is there any way to find the coordinates of a Mandelbrot image from the
image? Even a guess as to the rough neighborhood?
I recently saw this as someone’s avatar:
This is clearly the Mandelbrot fractal, but where is it? What coordinates
and magnification? Without accompanying information, is it possible to find it?
I’d like to explore that region, but how can I find it?
This problem reminds me of Shazam,
the seemingly magical app that listens to what’s playing in your environment,
and tells you what song it is.
Is there any way?
BTW, the way I solved this problem in my own long-neglected Mandelbrot
explorer Aptus is to write data records into the
PNG files it produces.
For example, you can download the image from the Aptus page, and use
imagemagick to see
what data it contains:
Aptus also knows how to read these files, so you can open a PNG it produced,
and you will be exploring where it was captured. It’s like jumping into a
photo to visit the place it was taken. I’ve used the same technique in other
projects.
Too bad more images don’t carry metadata to help you re-find their location
in mathematical space.
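For the curious, the PNG format makes this kind of metadata easy: text lives
in tEXt chunks. Here’s a standard-library sketch of building one; the keyword
and coordinate string are made up for illustration, and this isn’t the exact
record format Aptus writes:

```python
import struct
import zlib

def text_chunk(keyword, value):
    # A PNG tEXt chunk is: a 4-byte big-endian data length, the type "tEXt",
    # the data (keyword, a NUL byte, then the text), and a CRC-32 computed
    # over the type and data. A chunk built this way can be spliced into a
    # PNG file right after its IHDR chunk.
    data = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    crc = zlib.crc32(b"tEXt" + data) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + b"tEXt" + data + struct.pack(">I", crc)

# Hypothetical coordinates, just to show the shape of the idea.
chunk = text_chunk("Comment", "center=-0.5+0.0i diam=3.0")
```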
I’ve made a change to coverage.py, and I could use your help testing it
before it’s released to the world.
tl;dr: install this and let me know if you don’t like the results:
pip install coverage==5.6b1
What’s changed? Previously, coverage.py didn’t understand about third-party
code you had installed. With no options specified, it would measure and report
on that code, for example in site-packages. A common solution was to use
--source=. to only measure code in the current directory
tree. But many people put their virtualenv in the current directory, so
third-party code installed into the virtualenv would still get reported.
Now, coverage.py understands where third-party code gets installed, and won’t
measure code it finds there. This should produce more useful results with less
work on your part.
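To give a feel for the idea, the standard library can tell you where
third-party code lands. This is only a sketch of the concept, not
coverage.py’s actual implementation, and `third_party_dirs` is my own name:

```python
import site
import sysconfig

def third_party_dirs():
    # Places pip installs packages: the interpreter's purelib and platlib
    # paths, plus any site-packages directories the site module knows about.
    paths = {sysconfig.get_paths()["purelib"], sysconfig.get_paths()["platlib"]}
    if hasattr(site, "getsitepackages"):
        paths.update(site.getsitepackages())
    return sorted(paths)
```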
This was a bit tricky because the --source option can
also specify an importable name instead of a directory, and it had to still
measure that code even if it was installed where third-party code goes.
As of now, there is no way to change this new behavior. Third-party code is
simply never measured.
This is kind of a big change, and there could easily be unusual arrangements
that aren’t handled properly. I would like to find out about those before an
official release. Try the new version and let me know what you find out:
pip install coverage==5.6b1
In particular, I would like to know if any of the code you wanted measured
wasn’t measured, or if there is code being measured that “obviously” shouldn’t
be. Testing on Debian (or a derivative like Ubuntu) would be helpful; I know
they have different installation schemes.
At work, to keep up with mailing lists and GitHub notifications, I had more
than fifty GMail filters. It wasn’t too bad to create them by hand with the
GMail UI, but I’m sure there were filters there I didn’t need any more.
But then I wanted a filter with both an if-action, and an else-action. Worse,
I wanted if-A, then do this, if-B, do this, else, do that. GMail filters just
aren’t constructed that way. It was going to be a pain to set them up and
maintain them by hand.
Looking around for tools, I found gmail-britta,
a Ruby DSL. This was the right kind of tool for me, except I don’t write Ruby.
I hadn’t found gmail-yaml-filters,
but I don’t think I want to write YAML.
The GMail API itself seemed promising, but my work GMail account wouldn’t let
me follow its authentication
steps. Honestly, I often run afoul of authentication when trying to use APIs.
(See Support windows bar calendar for another project I
built in a strange way specifically to avoid having to figure out
authentication.)
So naturally, I built my own module to do it:
Gefilte Fish is a Python DSL
(domain-specific language) of sorts to create GMail filters. (The name is
fitting since this is the start of Passover.) Using gefilte, you write Python
code to express your filters. Running your program outputs XML that you then
import into GMail to create the filters.
The DSL lets you write this to make filters:
    # Make the filter-maker and use its DSL. All of the methods of GitHubFilter
    # are now usable as global functions.
    fish = GefilteFish()
    with fish.dsl():
        # Google's spam moderation messages should never get sent to spam.
        with replyto("email@example.com"):
            never_spam()
            mark_important()

        # If the subject and body have these, label it "liked".
        with subject(exact("[Confluence]")).has(exact("liked this page")):
            label("liked")

        with from_("firstname.lastname@example.org"):
            # Skip the inbox (archive them).
            skip_inbox().label("github")

        # GitHub sends to synthetic addresses to provide information.
        with to("email@example.com"):
            label("mine").star()

        # Notifications from some repos are special.
        with repo("myproject/tasks") as f:
            label("todo")
        with f.elif_(repo("otherproject/something")) as f:
            label("otherproject")
        with f.else_():
            # But everything else goes into "Code reviews".
            label("Code reviews")

        # Some inbound addresses come to me, mark them so I understand what
        # I'm looking at in my inbox.
        for toaddr, the_label in [
            ("firstname.lastname@example.org", "info@"),
            ("email@example.com", "security@"),
            ("firstname.lastname@example.org", "con20"),
            ("email@example.com", "con21"),
        ]:
            with to(toaddr):
                label(the_label)
To make the DSL flow somewhat naturally, I definitely bent the rules on what
is considered good Python. But it let me write succinct descriptions of the
filters I want, while still having the power of a programming language.