Updated multi-parameter interactive Jupyter notebook

Monday 12 February 2024

A few years ago I wrote Multi-parameter Jupyter notebook interaction, a post about an interactive Jupyter notebook. The notebook worked at the time, but when I dusted it off recently, it didn’t. I’ve renovated it and cleaned it up a little, and now it works again.

It’s a Jupyter notebook with a simulation of late-career money flows to figure out possibilities for retirement. It uses widgets to give you sliders to adjust parameters to see how the outcome changes. It also lets you pick one of the parameters to auto-plot with multiple values, which gives a more visceral way to understand the effect different variables have.

Screenshot of the sliders and resulting plot of outcomes

You can get the notebook itself if you like.

One way to package Python code right now

Saturday 10 February 2024

A year or so ago, I couldn’t find a step-by-step guide to packaging a Python project that didn’t get bogged down in confusing options and choices, so I wrote my own: pkgsample. After I wrote it, I found the PyPA Packaging Python Projects tutorial, which is very good, so I never made a post here about my sample.

Since then, I’ve shown my sample to people a number of times, and they liked it, so I guess it’s helpful. Here’s what I wrote about it back when I first created it:

•    •    •

The Python packaging world is confusing. There are decades of history and change, and competing tools, with new ones arriving frequently. I don’t want to criticize anyone; let’s just take it as a fact of life right now.

But I frequently see questions from people who have written some Python code, and would like to get it packaged. They have a goal in mind, and it is not to learn about competing tools, intricate standards, or historical artifacts. They are fundamentally uninterested in the mechanics of packaging. They just want to get their code packaged.

There are lots of pages out there that try to explain things, but they all seem to get distracted by the options, asking our poor developer to choose between alternatives they don’t understand, with no clear implications.

I’m also not criticizing the uninterested developer. I am that developer! I don’t know what all these things are, or how they compete and overlap: build, twine, hatch, poetry, flit, wheel, pdm, setuptools, distutils, pep517, shiv, etc., etc.

I just want someone to tell me what to do so my code will install on users’ machines. Once that works, I can go back to fixing bugs, adding features, writing docs, and so on.

So I wrote pkgsample to be the instructions I couldn’t find. It’s simple and stripped down, and does not ask you to make choices you don’t care about. It tells you what to do. It gives you one way to make a simple Python package that works right now. It isn’t THE way. It’s A way. It will probably work for you.

I am at liberty

Tuesday 30 January 2024

As of a few weeks ago, I am between gigs. Riffing on some corporate-speak from a recent press release: “2U and I have mutually determined that 2U is laying me off.”

I feel OK about it: work was becoming increasingly frustrating, and I have some severance pay. 2U is in a tough spot as a company, so at least these layoffs seemed like an actual tactic rather than another pointless please-the-investors move by companies flush with profits and cash. 2U struggling also makes being laid off a more appealing option than remaining there after a difficult cut.

edX was a good run for me. We had a noble mission: educate the world. The software was mostly open source (Open edX), which meant our efforts could power education that we as a corporation didn’t want to pursue.

Broadly speaking, my job was to oversee how to do open source well. I loved the mission of education combined with the mission of open source. I loved seeing the community do things together that edX alone could not. I have many good friends at 2U and in the community. I hope they can make everything work out well, and I hope I can do a good job staying in touch with them.

I don’t know what my next gig will be. I like writing software. I like having developers as my customers. I am good at building community both inside and outside of companies. I am good at helping people. I’m interested to hear ideas.

You (probably) don’t need to learn C

Wednesday 24 January 2024

On Mastodon I wrote that I was tired of people saying, “you should learn C so you can understand how a computer really works.” I got a lot of replies which did not change my mind, but did help me understand better how abstractions are inescapable in computers.

People made a number of claims. C was important because syscalls are defined in terms of C semantics (they are not). They said it was good for exploring limited-resource computers like Arduinos, but most people don’t program for those. They said it was important because C is more performant, but Python programs often offload the compute-intensive work to libraries other people have written, and these days that work is often on a GPU. Someone said you need it to debug with strace, then someone said they use strace all the time and don’t know C. Someone even said C was good because it explains why NUL isn’t allowed in filenames, but who tries to do that, and why learn a language just for that trivia?

I’m all for learning C if it will be useful for the job at hand, but you can write lots of great software without knowing C.

A few people repeated the idea that C teaches you how code “really” executes. But C is an abstract model of a computer, and modern CPUs do all kinds of things that C doesn’t show you or explain. Pipelining, cache misses, branch prediction, speculative execution, multiple cores, even virtual memory are all completely invisible to C programs.

C is an abstraction of how a computer works, and chip makers work hard to implement that abstraction, but they do it on top of much more complicated machinery.

C is far removed from modern computer architectures: there have been 50 years of innovation since it was created in the 1970’s. The gap between C’s model and modern hardware is the root cause of famous vulnerabilities like Meltdown and Spectre, as explained in C is Not a Low-level Language.

C can teach you useful things, like how memory is a huge array of bytes, but you can also learn that without writing C programs. People say C teaches you about memory allocation. Yes it does, but you can learn what that means as a concept without learning a programming language. And besides, what will Python or Ruby developers do with that knowledge other than appreciate that their languages do that work for them and they no longer have to think about it?
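You can even poke at the bytes-of-memory idea from Python itself. A small sketch using the standard struct module, just to show the concept is reachable without C:

```python
import struct

# A 32-bit integer occupies four bytes; "<i" asks for little-endian layout.
raw = struct.pack("<i", 1027)       # 1027 == 0x0403
print(list(raw))                    # [3, 4, 0, 0]: low byte first

# bytearray is literally a mutable array of bytes you can index and slice.
buf = bytearray(raw)
buf[0] = 255                        # overwrite the low byte
print(struct.unpack("<i", bytes(buf))[0])   # 1279 == 255 + 4*256
```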

Pointers came up a lot in the Mastodon replies. Pointers underpin concepts in higher-level languages, but you can explain those concepts as references instead, and skip pointer arithmetic, aliasing, and null pointers completely.
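In Python, for instance, the reference idea shows up with no pointer arithmetic at all:

```python
a = [1, 2, 3]
b = a              # b is a second reference to the same list, not a copy
b.append(4)
print(a)           # [1, 2, 3, 4]: both names see the change
print(a is b)      # True: they refer to the same object

c = list(a)        # an actual copy: a new list object
c.append(5)
print(a)           # still [1, 2, 3, 4]
```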

A question I asked a number of people: what mistakes are JavaScript/Ruby/Python developers making if they don’t know these things (C, syscalls, pointers)?”. I didn’t get strong answers.

We work in an enormous tower of abstractions. I write programs in Python, which provides me abstractions that C (its underlying implementation language) does not. C provides an abstract model of memory and CPU execution which the computer implements on top of other mechanisms (microcode and virtual memory). When I made a wire-wrapped computer, I could pretend the signal travelled through wires instantaneously. For other hardware designers, that abstraction breaks down and they need to consider the speed electricity travels. Sometimes you need to go one level deeper in the abstraction stack to understand what’s going on. Everyone has to find the right layer to work at.

Andy Gocke said it well:

When you no longer have problems at that layer, that’s when you can stop caring about that layer. I don’t think there’s a universal level of knowledge that people need or is sufficient.

“like jam or bootlaces” made another excellent point:

There’s a big difference between “everyone should know this” and “someone should know this” that seems to get glossed over in these kinds of discussions.

C can teach you many useful and interesting things. It will make you a better programmer, just as learning any new-to-you language will because it broadens your perspective. Some kinds of programming need C, though other languages like Rust are ably filling that role now too. C doesn’t teach you how a computer really works. It teaches you a common abstraction of how computers work.

Find a level of abstraction that works for what you need to do. When you have trouble there, look beneath that abstraction. You won’t be seeing how things really work, you’ll be seeing a lower-level abstraction that could be helpful. Sometimes what you need will be an abstraction one level up. Is your Python loop too slow? Perhaps you need a C loop. Or perhaps you need numpy array operations.
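That last suggestion is easy to make concrete. A small comparison, assuming numpy is installed: the array expression computes the same answer as the Python loop, with the iteration happening in compiled code instead.

```python
import numpy as np

# The pure-Python loop: one interpreted iteration per element.
total = 0
for x in range(1_000_000):
    total += x * x

# The numpy version: one array expression, with the loop running in C.
arr = np.arange(1_000_000)
np_total = int((arr * arr).sum())

print(total == np_total)   # True
```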

You (probably) don’t need to learn C.

Randomly sub-setting test suites

Sunday 14 January 2024

I needed to run random subsets of my test suite to narrow down the cause of some mysterious behavior. I didn’t find an existing tool that worked the way I wanted, so I cobbled something together.

I wanted to run 10 random tests (out of 1368), and keep choosing randomly until I saw the bad behavior. Once I had a selection of 10, I wanted to be able to whittle it down to try to reduce it further.

I tried a few different approaches, and here’s what I came up with, two tools in the coverage.py repo that combine to do what I want:

  • A pytest plugin (select_plugin.py) that lets me run a command to output the names of the exact tests I want to run,
  • A command-line tool (pick.py) to select random lines of text from a file. For convenience, blank or commented-out lines are ignored.

More details are in the comment at the top of pick.py, but here’s a quick example:

  1. Get all the test names in tests.txt. These are pytest “node” specifications:
    pytest --collect-only -q | grep :: > tests.txt
  2. Now tests.txt has a line per test node. Some are straightforward:
    tests/test_cmdline.py::CmdLineStdoutTest::test_version
    tests/test_html.py::HtmlDeltaTest::test_file_becomes_100
    tests/test_report_common.py::ReportMapsPathsTest::test_map_paths_during_html_report
    but with parameterization they can be complicated:
    tests/test_files.py::test_invalid_globs[bar/***/foo.py-***]
    tests/test_files.py::FilesTest::test_source_exists[a/b/c/foo.py-a/b/c/bar.py-False]
    tests/test_config.py::ConfigTest::test_toml_parse_errors[[tool.coverage.run]\nconcurrency="foo"-not a list]
  3. Run a random bunch of 10 tests:
    pytest --select-cmd="python pick.py sample 10 < tests.txt"
    We’re using --select-cmd to specify the shell command that will output the names of tests. Our command uses pick.py to select 10 random lines from tests.txt.
  4. Run many random bunches of 10, announcing the seed each time:
    for seed in $(seq 1 100); do
        echo seed=$seed
        pytest --select-cmd="python pick.py sample 10 $seed < tests.txt"
    done
  5. Once you find a seed that produces the small batch you want, save that batch:
    python pick.py sample 10 17 < tests.txt > bad.txt
  6. Now you can run that bad batch repeatedly:
    pytest --select-cmd="cat bad.txt"
  7. To reduce the bad batch, comment out lines in bad.txt with a hash character, and the tests will be excluded. Keep editing until you find the small set of tests you want.
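The selection logic itself is simple enough to sketch. This is my guess at the shape of pick.py’s “sample” mode; the real tool lives in the coverage.py repo and has more to it:

```python
import random

def pick_sample(lines, count, seed=None):
    """Pick `count` random lines, ignoring blanks and '#' comments."""
    candidates = [
        line.strip() for line in lines
        if line.strip() and not line.strip().startswith("#")
    ]
    rng = random.Random(seed)     # same seed, same selection
    return rng.sample(candidates, min(count, len(candidates)))

tests = [
    "tests/test_a.py::test_one",
    "# tests/test_b.py::test_two",   # commented out: excluded
    "",
    "tests/test_c.py::test_three",
]
print(pick_sample(tests, 2, seed=17))
```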

I like that this works and I understand it. I like that it’s based on the bedrock of text files and shell commands. I like that there’s room for different behavior in the future by adding to how pick.py works. For example, it doesn’t do any bisecting now, but it could be adapted to it.

As usual, there might be a better way to do this, but this works for me.

Coverage.py with sys.monitoring

Wednesday 27 December 2023

New in Python 3.12 is sys.monitoring, a lighter-weight way to monitor the execution of Python programs. Coverage.py 7.4.0 can now optionally use sys.monitoring instead of sys.settrace, the facility that has underpinned coverage.py for nearly two decades. This is a big change, both in Python and in coverage.py. It would be great if you could try it out and provide some feedback.
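To give a sense of what the new facility looks like (this is a toy, not how coverage.py itself uses it), here is a minimal line-event counter. It needs Python 3.12+, so it falls back to None on older versions:

```python
import sys

def count_line_events(fn):
    """Count LINE events fired while fn runs, via sys.monitoring (3.12+)."""
    mon = sys.monitoring
    tool = mon.COVERAGE_ID       # a tool-id slot reserved for coverage tools
    mon.use_tool_id(tool, "demo")
    hits = []
    mon.register_callback(tool, mon.events.LINE,
                          lambda code, line: hits.append(line))
    mon.set_events(tool, mon.events.LINE)
    try:
        fn()
    finally:
        mon.set_events(tool, 0)  # stop monitoring and clean up
        mon.register_callback(tool, mon.events.LINE, None)
        mon.free_tool_id(tool)
    return len(hits)

if sys.version_info >= (3, 12):
    n = count_line_events(lambda: sum(i * i for i in range(10)))
    print(n > 0)    # True
else:
    n = None
```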

Using sys.monitoring should reduce the overhead of coverage measurement, often to less than 5%, but of course your timings might be different. One of the things I would like to know is what your real-world speed improvements are like.

Because the support is still a bit experimental, you need to define an environment variable to use it: COVERAGE_CORE=sysmon. Eventually, sys.monitoring will be automatically used where possible, but for now you need to explicitly request it.
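Concretely, that means setting the variable for a run:

```shell
# Opt in to the experimental sys.monitoring core (coverage.py 7.4.0+, Python 3.12+):
COVERAGE_CORE=sysmon python -m coverage run -m pytest
python -m coverage report
```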

Some things won’t work with sys.monitoring: plugins and dynamic contexts aren’t yet supported, though eventually they will be. Execution will be faster for line coverage, but not yet for branch coverage. Let me know how it works for you.

This has been in the works since at least March. I hope I haven’t forgotten something silly in getting it out the door.
