How should I distribute alphas?

Saturday 27 September 2014

I thought today was going to be a good day. I was going to release the first alpha version of 4.0. I finally finished the support for gevent and other concurrency libraries like it, and I wanted to get the code out for people to try it.

So I made the kits and pushed them to PyPI. I used to not do that, because people would get the betas by accident. But pip now understands about pre-releases and real releases, and won’t install an alpha version by default. Only if you explicitly use --pre will you get an alpha.
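The ordering pip uses here comes from PEP 440: an alpha like 4.0a1 is a pre-release that sorts before the final 4.0. A quick illustration using the third-party `packaging` library (an assumption for illustration — pip has its own internal implementation of the same rules):

```python
from packaging.version import Version

alpha = Version("4.0a1")
final = Version("4.0")

# PEP 440: alphas are flagged as pre-releases...
assert alpha.is_prerelease
assert not final.is_prerelease

# ...and sort before the corresponding final release.
assert alpha < final
```

This is why a bare `pip install coverage` skips the alpha: pre-releases are excluded from consideration unless `--pre` is given.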

About 10 minutes after I pushed the kits, someone I was chatting with on IRC said, “Did you just release a new version of coverage?” Turns out his Travis build was failing.

He was using coveralls to report his coverage statistics, and it was failing. It turns out coveralls uses internals from coverage.py to do its work, and I've made big refactorings to those internals, so their code was broken. But how did the alpha get installed in the first place?

He was using tox, and it turns out that when tox installs dependencies, it defaults to using the --pre switch! Why? I don’t know.

OK, I figured I would just hide the new version on PyPI. That way, if people wanted to try it, they could use “pip install coverage==4.0a1”, and no one else would be bothered with it. Nope: pip will find the newer version even if it is hidden on PyPI. Why? I don’t know.

In my opinion:

  • Coveralls shouldn’t have used internals.
  • Tox shouldn’t use the --pre switch by default.
  • Pip shouldn’t install hidden versions when there is no version information specified.

So now the kit is removed entirely from PyPI while I figure out a new approach. Some possibilities, none of them great:

  1. Distribute the kit the way I used to, with a download on my site. This sucks because I don’t know if there’s a way to do this so that pip will find it, and I don’t know if it can handle pre-built binary kits like that.
  2. Do whatever I need to do so that coveralls will continue to work. This sucks because I don’t know how much I will have to add back, and I don’t want to establish a precedent, and it doesn’t solve the problem that people really don’t expect to be using alphas of their testing tools on Travis.
  3. Make a new package on PyPI: coverage-prerelease, and instruct people to install from there. This sucks because tools like coveralls won’t refer to it, so either you can’t ever use it with coveralls, or if you install it alongside, then you have two versions of coverage fighting with each other? I think?
  4. Make a pull request against coveralls to fix their use of the now-missing internals. This sucks (but not much) because I don’t want to have to understand their code, and I don’t have a simple way to run it, and I wish they had tried to stick to supported methods in the first place.
  5. Leave it broken, and let people fix it by overriding their tox.ini settings to not use --pre, or wait until people complain to coveralls and they fix their code. This sucks because there will be lots of people with broken builds.

Software is hard, yo.


Definitely 4, unless there is a very good reason why tox should be installing unstable packages by default. You shouldn't be spoon-feeding projects that refuse to follow best practices.
FYI, when Joao wrote his comment, #4 was "leave it broken."

Joao, do you think tox's default of using unstable packages is a good one?
No, I think tox is broken and needs to be fixed unless they can provide a very good reason for their behavior.
Software distribution is only hard because people insist on doing strange and sometimes stupid things with other people's software. And then the user gets the short end of the stick. It doesn't have to be this way, except that there's always that one person who f's it up for the rest of us.
I'd assume this is a coveralls issue, not a tox issue:

I'm not 100% sure, but tox probably needs the `--pre` flag to be able to install a package version that has been explicitly required.

Not specifying an EXACT version in your requirements.txt is imho foolish and reckless, and coveralls should stop doing that.
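For example, a fully pinned requirements.txt would look something like this (the version numbers here are illustrative, not coveralls' actual pins):

```
# requirements.txt — exact pins mean a surprise alpha can't slip in
coverage==3.7.1
coveralls==0.4.4
```

With exact pins, pip installs precisely those versions regardless of whether `--pre` is passed.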

But if, in another project, you actually need a prerelease version for any reason, tox would fail with a confusing "package version not found" error without the `--pre` option supplied.

It might be argued that tox should just document better the possibility of enabling the `--pre` option.

For those who're interested, I think that the relevant tox.ini configuration option is this one:
I meant: " tox should just document better the possibility of enabling the `--pre` option. " while not enabling `--pre` by default.
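For the record, the workaround at the time was to override tox's default install command in tox.ini, which is where the implicit `--pre` came from (a sketch, assuming a tox version that supports `install_command`):

```ini
[testenv]
# replace tox's default install command, which passed --pre to pip
install_command = pip install {opts} {packages}
```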

Anyhow, I just realized that coveralls doesn't use tox by itself, and thus the issue might not have been fixed even with a strict bound in the requirements.txt, if they had a different way to slurp the dependencies into the environment.

For another discussion on the issue of specifying libraries version bounds: (in the Haskell ecosystem)

Unfortunately, actually having exact dependencies probably isn't a good idea, without being able to install multiple versions of the same library in the same environment :/
UPDATE: coveralls has updated their requirements to require coverage <4.0 (thanks to ionelmc for the pull request). That may be enough, we'll see...

To clarify: if you specify a version to install, pip will install that version, whether you have --pre or not.
I see... I just checked the hg blame for tox, and apparently that pip behavior hasn't always been the same.

I just reported this issue on the tox bug tracker:
My vote: #4, #5 – I've been using Tox for years, admittedly with version-pinned requirements, and the --pre behaviour is an unpleasant surprise. I'm glad to see coveralls fixed the unpinned install, but I suspect there are plenty of other surprises lurking around.
