Cog resurgence

Friday 14 January 2022

My cog tool has been having a resurgence of late: a number of people are discovering it’s useful to run a little bit of Python code inside otherwise static files.

Hynek Schlawack used it to de-duplicate his pyproject.toml.

Automator extraordinaire Simon Willison started using it to keep docs up to date.

Brett Cannon even called it trendy!

Of course, some people were using it before it was cool.

With all this buzz(!), Tobias Macey invited me on podcast.__init__ to talk about it: Episode 347: Generate Your Text Files With Python Using Cog. It was fun to talk about this little tool I wrote nearly 18 years ago that has been plugging away all this time, and is now being re-discovered.

Even I am finding new uses for cog. I started using it to keep coverage.py docs up to date, I built my crazy over-engineered GitHub profile (source) with it, and I even used it on the source file of this blog post to pull in the tweets!

Gem: exploding string alternatives

Tuesday 28 December 2021

Here’s a Python gem: a small bit of Python that uses the power of the language and standard library well.

It’s a function to list strings generated by a pattern with embedded alternatives. It takes an input string with brace-wrapped possibilities, and generates all the strings made from making choices among them:

>>> list(explode("{Alice,Bob} ate a {banana,donut}."))
[
    'Alice ate a banana.',
    'Alice ate a donut.',
    'Bob ate a banana.',
    'Bob ate a donut.'
]

Here’s the function:

import itertools
import re
from typing import Iterable

def explode(pattern: str) -> Iterable[str]:
    """
    Expand the brace-delimited possibilities in a string.
    """
    seg_choices = []
    for segment in re.split(r"(\{.*?\})", pattern):
        if segment.startswith("{"):
            seg_choices.append(segment.strip("{}").split(","))
        else:
            seg_choices.append([segment])

    for parts in itertools.product(*seg_choices):
        yield "".join(parts)

I call this a gem because it’s concise without being tricky, and uses Python’s tools to strong effect. Let’s look at how it works.

re.split: The first step is to break the string into pieces. I used re.split(): it takes a regular expression, divides the string wherever the pattern matches, and returns a list of parts.

A subtlety I make use of here: if the splitting regex has capturing groups (parenthesized pieces), then the captured strings are also included in the result list. The pattern matches anything enclosed in curly braces, and I’ve parenthesized the whole thing so that the braced pieces appear in the split list.

For our sample string, re.split will return these segments:

['', '{Alice,Bob}', ' ate a ', '{banana,donut}', '.']

There’s an initial empty string which might seem concerning, but it won’t be a problem.
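
You can see the effect of the capturing group directly (a quick demonstration, separate from the function): without the group the braced pieces are discarded, and with it they are kept:

>>> import re
>>> re.split(r"\{.*?\}", "{Alice,Bob} ate a {banana,donut}.")
['', ' ate a ', '.']
>>> re.split(r"(\{.*?\})", "{Alice,Bob} ate a {banana,donut}.")
['', '{Alice,Bob}', ' ate a ', '{banana,donut}', '.']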

Grouping: I used that list of segments to make another list: the choices for each segment. If a segment starts with a brace, I strip off the braces and split on commas to get the list of alternatives. The segments that don’t start with a brace are the fixed parts of the string, so I add each one as a one-choice list. This gives us a uniform list of lists, where each inner list holds the choices for one segment of the result.

For our sample string, this is the seg_choices list:

[[''], ['Alice', 'Bob'], [' ate a '], ['banana', 'donut'], ['.']]

itertools.product: To generate all the combinations, I used itertools.product(), which does much of the heavy lifting of this function. It takes a number of iterables as arguments and generates all the different combinations of choosing one element from each. My seg_choices list is exactly the list of arguments needed for itertools.product, and I can apply it with the star syntax.
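
As a quick illustration of product and the star syntax (separate from the function):

>>> import itertools
>>> list(itertools.product(["Alice", "Bob"], ["banana", "donut"]))
[('Alice', 'banana'), ('Alice', 'donut'), ('Bob', 'banana'), ('Bob', 'donut')]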

The values from itertools.product are tuples of the choices made. The last step is to join them together and use yield to provide the value to the caller.

Nice.

Load-balanced xdist

Saturday 11 December 2021

I wrote a pytest plugin to evenly balance tests across xdist workers.

Back story: the coverage.py test suite seemed to be running oddly: it would run to near-completion, and then stall before actually finishing. To understand why, I added some debug output to see what tests were running on which workers.

I have some very slow tests (they create a virtualenv and install packages). It turned out those tests were being run near the end of the test suite, after their worker had already run a bunch of other tests. So that one worker was taking 10 seconds longer to finish than all the others. This is what made the test suite seem to stall at the end.

I figured it would be easy to schedule tests more optimally. We could record the time each test takes, then use those times in the next test run to schedule the longer tests first, and to balance the total time across workers.
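
This is the classic greedy approach to load balancing: sort the tests by recorded duration, longest first, and hand each one to the worker with the least total time so far. Here’s a minimal sketch of the idea (assign_tests is a hypothetical illustration, not the plugin’s actual code):

import heapq

def assign_tests(durations: dict[str, float], num_workers: int) -> list[list[str]]:
    """Split test ids into num_workers chunks with roughly equal total time."""
    chunks: list[list[str]] = [[] for _ in range(num_workers)]
    # Heap of (total seconds assigned so far, worker index).
    heap = [(0.0, i) for i in range(num_workers)]
    heapq.heapify(heap)
    # Longest tests first, each assigned to the currently least-loaded worker.
    for test_id, seconds in sorted(durations.items(), key=lambda kv: kv[1], reverse=True):
        total, worker = heapq.heappop(heap)
        chunks[worker].append(test_id)
        heapq.heappush(heap, (total + seconds, worker))
    return chunks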

The result is balance_xdist_plugin.py (commit). It’s written to be a pytest plugin, though it’s still in the coverage.py repo, so it’s not usable by others yet. And there are two things that aren’t fully general:

  • The data is written to a “tmp” directory, when it should use the pytest caching feature.
  • The number of workers is assumed to be 8, because I couldn’t figure out how to get the true number.

You can indicate that certain tests should all be assigned to the same worker. This helps with slow session-scoped fixtures, like my virtualenv-creating tests. It’s also an escape hatch if you have tests that aren’t truly isolated from each other. (Full disclosure: coverage.py has a few of these and I can’t figure out what’s wrong...)

The plugin worked: the test suite runs slightly faster than before, though as is typical, not as much faster as I thought it would be. A side benefit is that the fastest tests now run at the end, so there’s a satisfying acceleration toward the finish line.

Maybe this plugin will be useful to others? Maybe people have improvements?

Computing a GitHub Action matrix with cog

Sunday 7 November 2021

I had a complex three-axis GitHub Action matrix, but needed to skip some combinations. I couldn’t get what I needed with the direct YAML syntax, so I used Cog to generate the matrix with Python.

The matrix built Python wheels with cibuildwheel, and it worked. It had 15 jobs, but each built a different number of architectures (ubuntu built three, windows two, macos only one). This made the overall run take longer, and made it harder to dig through the logs to see if everything went OK. Conceptually the matrix had three axes, but it was expressed as two, with a list of architectures attached to each job:

strategy:
  matrix:
    os:
      - ubuntu-latest
      - macos-latest
      - windows-latest
    cibw_build:
      - cp36
      - cp37
      - cp38
      - cp39
      - cp310
    include:
      - os: ubuntu-latest
        cibw_arch: x86_64 i686 aarch64
      - os: windows-latest
        cibw_arch: x86 AMD64
      - os: macos-latest
        cibw_arch: x86_64

I wanted to make the architectures a third axis, but I couldn’t figure out how to use the YAML syntax to limit the choices for each OS. It seemed like the only way to get a ragged three-axis matrix was to list the combinations explicitly. If you know a way, I’m still interested to hear it.

What I wanted was a way to compute the matrix with a bit more power. There are examples out there of using fromJSON to build a matrix, but I didn’t need it to be recomputed every run. I just wanted a way to not have to type out 30 combinations by hand.

I’ve often needed this sort of thing: a static file with just a bit of computed content. This is what Cog was meant for, and it worked great here too. This is what my computed matrix looks like now:

strategy:
  matrix:
    include:
      # To change the matrix, edit the choices, then process this file with cog:
      #
      # $ python -m pip install cogapp
      # $ python -m cogapp -rP .github/workflows/kit.yml
      #
      #
      # [[[cog
      #   #----- vvv Choices for the matrix vvv -----
      #   oss = ["ubuntu", "macos", "windows"]
      #   pys = ["cp36", "cp37", "cp38", "cp39", "cp310"]
      #   archs = {
      #       "ubuntu": ["x86_64", "i686", "aarch64"],
      #       "macos": ["x86_64"],
      #       "windows": ["x86", "AMD64"],
      #   }
      #   #----- ^^^ ---------------------- ^^^ -----
      #
      #   import json
      #   for the_os in oss:
      #       for the_py in pys:
      #           for the_arch in archs[the_os]:
      #               them = {
      #                   "os": the_os,
      #                   "py": the_py,
      #                   "arch": the_arch,
      #               }
      #               print(f"- {json.dumps(them)}")
      # ]]]
      - {"os": "ubuntu", "py": "cp36", "arch": "x86_64"}
      - {"os": "ubuntu", "py": "cp36", "arch": "i686"}
      - {"os": "ubuntu", "py": "cp36", "arch": "aarch64"}
      - {"os": "ubuntu", "py": "cp37", "arch": "x86_64"}
      - {"os": "ubuntu", "py": "cp37", "arch": "i686"}
      - {"os": "ubuntu", "py": "cp37", "arch": "aarch64"}
      - {"os": "ubuntu", "py": "cp38", "arch": "x86_64"}
      - {"os": "ubuntu", "py": "cp38", "arch": "i686"}
      - {"os": "ubuntu", "py": "cp38", "arch": "aarch64"}
      - {"os": "ubuntu", "py": "cp39", "arch": "x86_64"}
      - {"os": "ubuntu", "py": "cp39", "arch": "i686"}
      - {"os": "ubuntu", "py": "cp39", "arch": "aarch64"}
      - {"os": "ubuntu", "py": "cp310", "arch": "x86_64"}
      - {"os": "ubuntu", "py": "cp310", "arch": "i686"}
      - {"os": "ubuntu", "py": "cp310", "arch": "aarch64"}
      - {"os": "macos", "py": "cp36", "arch": "x86_64"}
      - {"os": "macos", "py": "cp37", "arch": "x86_64"}
      - {"os": "macos", "py": "cp38", "arch": "x86_64"}
      - {"os": "macos", "py": "cp39", "arch": "x86_64"}
      - {"os": "macos", "py": "cp310", "arch": "x86_64"}
      - {"os": "windows", "py": "cp36", "arch": "x86"}
      - {"os": "windows", "py": "cp36", "arch": "AMD64"}
      - {"os": "windows", "py": "cp37", "arch": "x86"}
      - {"os": "windows", "py": "cp37", "arch": "AMD64"}
      - {"os": "windows", "py": "cp38", "arch": "x86"}
      - {"os": "windows", "py": "cp38", "arch": "AMD64"}
      - {"os": "windows", "py": "cp39", "arch": "x86"}
      - {"os": "windows", "py": "cp39", "arch": "AMD64"}
      - {"os": "windows", "py": "cp310", "arch": "x86"}
      - {"os": "windows", "py": "cp310", "arch": "AMD64"}
    # [[[end]]]

If you haven’t seen cog before, this is how it works: it finds chunks of Python code between [[[cog and ]]] markers, executes them, and inserts the output into the file up to the [[[end]]] marker. Existing output is replaced.
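
A tiny standalone example (unrelated to the workflow above) shows the round trip; the lines between ]]] and [[[end]]] are whatever the Python code printed the last time cog ran:

# [[[cog
#   for color in ["red", "green", "blue"]:
#       print(f"{color}_enabled = true")
# ]]]
red_enabled = true
green_enabled = true
blue_enabled = true
# [[[end]]]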

Here, the 30 lines of combinations are the output. They weren’t in the file originally; they were created when I ran cog and it re-wrote the whole file. If I change the lists of choices, or the Python code, and re-run cog, it will remove those 30 lines and replace them with the new output.

This is perfect for this use: the choices for the matrix will change only infrequently, and by hand. When they do, I can edit the lists in the Python code and re-run cog to update the generated matrix.

Coverage goals

Monday 1 November 2021

There’s a feature request to add a per-file threshold to coverage.py. I didn’t add the feature; instead, I wrote a proof-of-concept: goals.py.

Coverage.py has a --fail-under option that will check the total coverage percentage, and exit with a failing status if it is too low. This lets people set a goal, and then check that they are meeting it in their CI systems.
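
For example, this CI step fails if total coverage drops below 85%:

$ coverage report --fail-under=85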

The feature request is to check each file individually, rather than the project as a whole, to exert tighter control over the goal. That sounds fine, but I could see it would actually be more involved than that, because people sometimes have layered goals: 100% coverage in tests and 85% in product code, or whatever.

I suggested implementing it as a separate tool that used data from a JSON report. Then, I did just that.

The goals.py tool is flexible: you give it a percentage and a list of glob patterns. It collects the files matching the patterns and checks the coverage of that set. You can choose to measure the group as a whole, or each file individually, and patterns can be negated to remove files from consideration.

For example:

# Check all Python files collectively, except in the tests/ directory.
$ python goals.py --group 85 '**/*.py' '!tests/*.py'

# We definitely want complete coverage of anything related to html.
$ python goals.py --group 100 '**/*html*.py'

# No Python file should be below 90% covered.
$ python goals.py --file 90 '**/*.py'

Each run of goals.py checks one set of files against one goal, but you can run it multiple times if you want to check multiple goals.
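
The core of such a tool is small: read the JSON report, select files by pattern, and compare a percentage. Here’s a minimal sketch of that logic (check_goal is a hypothetical illustration, not goals.py itself; it assumes the coverage.py JSON report layout and uses fnmatch as a rough stand-in for true glob matching):

import fnmatch
import json

def check_goal(report_path: str, goal: float, patterns: list[str], group: bool = True) -> bool:
    """Return True if the files matching the patterns meet the coverage goal."""
    with open(report_path) as f:
        files = json.load(f)["files"]
    selected: set[str] = set()
    for pat in patterns:
        if pat.startswith("!"):
            # Negated patterns remove already-selected files.
            selected -= set(fnmatch.filter(selected, pat[1:]))
        else:
            selected.update(fnmatch.filter(files, pat))
    if group:
        covered = sum(files[f]["summary"]["covered_lines"] for f in selected)
        total = sum(files[f]["summary"]["num_statements"] for f in selected)
        pct = 100.0 * covered / total if total else 100.0
        return pct >= goal
    return all(files[f]["summary"]["percent_covered"] >= goal for f in selected)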

If you want to have more control over your coverage goals, give goals.py a try. It might turn into a full-fledged coverage.py feature, or maybe it’s enough as it is.

Feedback is welcome, either here or on the original feature request.

Django Chat podcast

Wednesday 13 October 2021

I had a great conversation on the Django Chat podcast with Will Vincent and Carlton Gibson.

Things we talked about:

  • Walking
  • Right and wrong ways to do things
  • Geographic meetups during virtual times
  • Open source attention
  • Coverage.py
  • The evolution of the Python standard library
  • Python 3.10’s trace behavior
  • Coverage as a measure of test quality
  • UX of test information
  • Developer gamification
  • Upgrading Django with third-party packages
  • Convincing people to test
  • Using non-public interfaces
  • Cog
  • Side projects as outlets
  • Rewriting my wacky personal site (this site)
  • edX being acquired by 2U
  • Open source from first principles
