Variable fonts

Tuesday 17 September 2019

We’re all used to fonts coming in different weights (normal, bold), or sometimes different widths (normal, condensed, extended). Geometrically, there’s no reason that these variations need to be discrete. It’s a limitation of technology that we’ve been given a few specific weights or widths to choose from.

Over the years there have been a few attempts to make those variation dimensions continuous rather than discrete. Knuth’s Metafont was one, Adobe’s Multiple Master fonts were another. The latest is OpenType’s variable fonts.

In a variable font, the type designer not only decides on the shapes of the glyphs, but on the axes of variability. Weight and width are two obvious ones, but the choice is arbitrary.

One of the great things about variable fonts is that browsers have good support for them. You can use a variable font on a web page, and set the values of the variability dimensions using CSS.

Browser support also means you can play with the variability without any special tools. Nick Sherman’s v-fonts.com is a gallery and playground of variable fonts. Each is displayed with sliders for its dimensions. You can drag the sliders and see the font change in real time in your browser.

Many of the fonts are gimmicky, either to show off the technology, or because exotic display faces are where variability can be used most broadly. Here are a few that demonstrate variability to its best advantage:

Antonia Variable includes an optical size axis. Optical size refers to the adjustments that have to be made to shapes to compensate for the size of the font. At tiny sizes, letters have to be wider and their features sturdier for the type to remain legible while still looking like the same family. It’s kind of like how babies have the same features as adults, but smaller and plumper.

Sample of Antonia Variable

Bradley DJR Variable is another good example of an optical size axis.

Sample of Bradley DJR Variable

UT Morph is an ultra-geometric display face with two stark axes, positive and negative. This shows how variability can be used to control completely new aspects of a design.

Sample of UT Morph

Recursive has some really interesting axes that use variability in eye-opening ways without being cartoonish: proportion (how monospaced is it), expression (how swoopy is it), and italic (changes a few letter shapes).

Sample of Recursive

Variable fonts are still a new technology, but we’ll see them being used more and more. Don’t expect to see fonts stretching and squashing before your eyes though. Site designers will use variability to make some choices, and you won’t even realize variability was involved. Like all good typography, it won’t draw attention to itself.

Don’t omit tests from coverage

Thursday 29 August 2019

There’s a common idea out there that I want to refute. It’s this: when measuring coverage, you should omit your tests from measurement. Searching GitHub shows that lots of people do this.

This is a bad idea. Your tests are real code, and the whole point of coverage is to give you information about your code. Why wouldn’t you want that information about your tests?

You might say, “but all my tests run all their code, so it’s useless information.” Consider this scenario: you have three tests written, and you need a fourth, similar to the third. You copy/paste the third test, tweak the details, and now you have four tests. Except oops, you forgot to change the name of the test.

Tests are weird: you have to name them, but the names don’t matter. Nothing calls the name directly. It’s really easy to end up with two same-named tests. Which means you only have one test, because the new one overwrites the old. Coverage would alert you to the problem.
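
Here’s a sketch of that scenario (the file and test names are made up). Only the second test_mul survives, and measuring coverage on the test file would show the body of the first one as never executed:

# test_maths.py — a hypothetical example of the copy/paste mistake

def test_add():
    assert 1 + 1 == 2

def test_mul():
    assert 2 * 3 == 6

def test_mul():            # pasted and tweaked, but the name wasn't changed
    assert 2 * 4 == 8      # this definition silently replaces the one above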

Also, if your test suite is large, you likely have helper code in there as well as straight-up tests. Are you sure you need all that helper code? If you run coverage on the tests (and the helpers), you’d know about some weird clause in there that is never used. That’s odd, why is that? It’s probably useful to know. Maybe it’s a case you no longer need to consider. Maybe your tests aren’t exercising everything you thought.

The only argument against running coverage on tests is that it “artificially” inflates the results. True, it’s much easier to get 100% coverage on a test file than a product file. But so what? Your coverage goal was chosen arbitrarily anyway. Instead of aiming for 90% coverage, you should include your tests and aim for 95% coverage. 90% doesn’t have a magical meaning.

What’s the downside of including tests in coverage? “People will write more tests as a way to get the easy coverage.” Sounds good to me. If your developers are trying to game the stats, they’ll find a way, and you have bigger problems.

True, it makes the reports larger, but if your tests are 100% covered, you can exclude those files from the report with the [report] skip_covered setting.
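
For reference, that setting goes in your coverage configuration, for example in a .coveragerc file like this:

[report]
skip_covered = True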

Your tests are important. You’ve put significant work into them. You want to know everything you can about them. Coverage can help. Don’t omit tests from coverage.

Why your mock doesn’t work

Friday 2 August 2019

Mocking is a powerful technique for isolating tests from undesired interactions among components. But often people find their mock isn’t taking effect, and it’s not clear why. Hopefully this explanation will clear things up.

BTW: it’s really easy to over-use mocking. These are good explanations of alternative approaches:

A quick aside about assignment

Before we get to fancy stuff like mocks, I want to review a little bit about Python assignment. You may already know this, but bear with me. Everything that follows is going to be directly related to this simple example.

Variables in Python are names that refer to values. If we assign a second name, the names don’t refer to each other, they both refer to the same value. If one of the names is then assigned again, the other name isn’t affected:

x = 23
y = x
x = 12

If this is unfamiliar to you, or you just want to look at more pictures like this, Python Names and Values goes into much more depth about the semantics of Python assignment.

Importing

Let’s say we have a simple module like this:

# mod.py

val = "original"

def update_val():
    global val
    val = "updated"

We want to use val from this module, and also call update_val to change val. There are two ways we could try to do it. At first glance, it seems like they would do the same thing.

The first version imports the names we want, and uses them:

# code1.py

from mod import val, update_val

print(val)
update_val()
print(val)

The second version imports the module, and uses the names as attributes on the module object:

# code2.py

import mod

print(mod.val)
mod.update_val()
print(mod.val)

This seems like a subtle distinction, almost a stylistic choice. But code1.py prints “original original”: the value hasn’t changed! Code2.py does what we expected: it prints “original updated.” Why the difference?

Let’s look at code1.py more closely:

# code1.py

from mod import val, update_val

print(val)
update_val()
print(val)

After “from mod import val”, when we first print val, we have this:

Diagram: mod.py’s name “val” and code1.py’s name “val” both refer to the same string, ‘original’.

“from mod import val” means, import mod, and then do the assignment “val = mod.val”. This makes our name val refer to the same object as mod’s name val.
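
Put another way, the from-import form behaves roughly like this sketch (the real import machinery differs in its details):

# roughly what "from mod import val" does:
import mod
val = mod.val      # an assignment, exactly like "y = x"
del mod            # the name "mod" itself isn't bound in our namespace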

After “update_val()”, when we print val again, our world looks like this:

Diagram: mod.py’s “val” now refers to ‘updated’, but code1.py’s “val” still refers to ‘original’.

update_val has reassigned mod’s val, but that has no effect on our val. This is the same behavior as our x and y example, but with imports instead of more obvious assignments. In code1.py, “from mod import val” is an assignment from mod.val to val, and works exactly like “y = x” does. Later assignments to mod.val don’t affect our val, just as later assignments to x don’t affect y.

Now let’s look at code2.py again:

# code2.py

import mod

print(mod.val)
mod.update_val()
print(mod.val)

The “import mod” statement means, make my name mod refer to the entire mod module. Accessing mod.val will reach into the mod module, find its val name, and use its value.

Diagram: code2.py’s name “mod” refers to the mod module, whose “val” refers to ‘original’.

Then after “update_val()”, mod’s name val has been changed:

Diagram: mod.py’s “val” now refers to ‘updated’; code2.py’s “mod” still refers to the module itself.

Now we print mod.val again, and see its updated value, just as we expected.

OK, but what about mocks?

Mocking is a fancy kind of assignment: replace an object (or function) with a different one. We’ll use the mock.patch function in a with statement. It makes a mock object, assigns it to the name given, and then restores the original value at the end of the with statement.

Let’s consider this (very roughly sketched) product code and test:

# product.py

from os import listdir

def my_function():
    files = listdir(some_directory)
    # ... use the file names ...

# test.py

from unittest import mock

def test_it():
    with mock.patch("os.listdir") as listdir:
        listdir.return_value = ['a.txt', 'b.txt', 'c.txt']
        my_function()

After we’ve imported product.py, both the os module and product.py have a name “listdir” which refers to the built-in listdir() function. The references look like this:

Diagram: the os module’s “listdir” and product.py’s “listdir” both refer to the real listdir() function.

The mock.patch in our test is really just a fancy assignment to the name “os.listdir”. During the test, the references look like this:

Diagram: the os module’s “listdir” now refers to the mock, but product.py’s “listdir” still refers to the real listdir() function.

You can see why the mock doesn’t work: we’re mocking something, but it’s not the thing our product code is going to call. This situation is exactly analogous to our code1.py example from earlier.

You might be thinking, “ok, so let’s do that code2.py thing to make it work!” If we do, it will work. Your product code and test will now look like this (the test code is unchanged):

# product.py

import os

def my_function():
    files = os.listdir(some_directory)
    # ... use the file names ...

# test.py

from unittest import mock

def test_it():
    with mock.patch("os.listdir") as listdir:
        listdir.return_value = ['a.txt', 'b.txt', 'c.txt']
        my_function()

When the test is run, the references look like this:

Diagram: the os module’s “listdir” refers to the mock; product.py’s name “os” refers to the os module, so the product code finds the mock.

Because the product code refers to the os module, changing the name in the module is enough to affect the product code.

But there’s still a problem: this will mock that function for any module using it. This might be a more widespread effect than you intended. Perhaps your product code also calls some helpers, which also need to list files. The helpers might end up using your mock (depending how they imported os.listdir!), which isn’t what you wanted.
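
For example, suppose the product code also relies on a helper module (helpers.py here is hypothetical). While “os.listdir” is patched, the helper gets the mock too:

# helpers.py — a hypothetical helper, also used by the product code

import os

def count_files(dirname):
    # While mock.patch("os.listdir") is active, this call finds the mock,
    # so the helper sees the fake file names whether you wanted that or not.
    return len(os.listdir(dirname))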

Mock it where it’s used

The best approach to mocking is to mock the object where it is used, not where it is defined. Your product and test code will look like this:

# product.py

from os import listdir

def my_function():
    files = listdir(some_directory)
    # ... use the file names ...

# test.py

from unittest import mock

def test_it():
    with mock.patch("product.listdir") as listdir:
        listdir.return_value = ['a.txt', 'b.txt', 'c.txt']
        my_function()

The only difference here from our first try is that we mock “product.listdir”, not “os.listdir”. That seems odd, because listdir isn’t defined in product.py. That’s fine, the name “listdir” is in both the os module and in product.py, and they are both references to the thing you want to mock. Neither is a more real name than the other.

By mocking where the object is used, we have tighter control over what callers are affected. Since we only want product.py’s behavior to change, we mock the name in product.py. This also makes the test more clearly tied to product.py.

As before, our references look like this once product.py has been fully imported:

Diagram: the os module’s “listdir” and product.py’s “listdir” both refer to the real listdir() function.

The difference now is how the mock changes things. During the test, our references look like this:

Diagram: product.py’s “listdir” now refers to the mock, while the os module’s “listdir” still refers to the real listdir() function.

The code in product.py will use the mock, and no other code will. Just what we wanted!

Is this OK?

At this point, you might be concerned: it seems like mocking is kind of delicate. Notice that even with our last example, how we create the mock depends on something as arbitrary as how we imported the function. If our code had “import os” at the top, we wouldn’t have been able to create our mock properly. This is something that could be changed in a refactoring, but at least mock.patch will fail in that case.

You are right to be concerned: mocking is delicate. It depends on implementation details of the product code to construct the test. There are many reasons to be wary of mocks, and there are other approaches to solving the problems of isolating your product code from problematic dependencies.

If you do use mocks, at least now you know how to make them work, but again, there are other approaches. See the links at the top of this page.

Set_env.py

Sunday 21 July 2019

A good practice when writing complicated software is to put in lots of debugging code. This might be extra logging, or special modes that tweak the behavior to be more understandable, or switches to turn off some aspect of your test suite so you can focus on the part you care about at the moment.

But how do you control that debugging code? Where are the on/off switches? You don’t want to clutter your real UI with controls. A convenient option is environment variables: you can access them simply in the code, your shell has ways to turn them on and off at a variety of scopes, and they are invisible to your users.
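
Reading such a switch in the code can be as simple as checking the environment once at import time. A small sketch, with a made-up variable name:

# a hypothetical debug switch: set MYPROG_DEBUG_CALLS=1 in your shell to enable it
import os

DEBUG_CALLS = bool(os.environ.get("MYPROG_DEBUG_CALLS"))

def do_work(arg):
    if DEBUG_CALLS:
        print(f"do_work({arg!r})")
    # ... the real work ...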

Though if they are invisible to your users, they are also invisible to you! How do you remember what exotic options you’ve coded into your program, and how do you easily see what is set, and change what is set?

I’ve been using environment variables like this in coverage.py for years, but only recently made it easier to work with them.

To do that, I wrote set_env.py. It scans a tree of files for special comments describing environment variables, then shows you the values of those variables. You can type quick commands to change the values, and when the program is done, it updates your environment. It’s not a masterpiece of engineering, but it works for me.

As an example, this line appears in coverage.py:

# $set_env.py: COVERAGE_NO_PYTRACER - Don't run the tests under the Python tracer.

This line is found by set_env.py, so it knows that COVERAGE_NO_PYTRACER is one of the environment variables it should fiddle with.
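
The scanning only has to look for that comment marker. A minimal sketch of the idea (not the actual set_env.py code):

import re
from pathlib import Path

DEFINITION_RX = re.compile(r"\$set_env\.py: (\w+) - (.*)")

def find_variables(filenames):
    """Map environment variable names to their descriptions."""
    variables = {}
    for filename in filenames:
        for line in Path(filename).read_text(errors="replace").splitlines():
            match = DEFINITION_RX.search(line)
            if match:
                variables[match.group(1)] = match.group(2)
    return variables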

When I run set_env.py in the coverage.py tree, I get something like this:

$ set_env
Read 298 files
 1:              COVERAGE_AST_DUMP                  Dump the AST nodes when parsing code.
 2:               COVERAGE_CONTEXT                  Set to 'test_function' for who-tests-what
 3:                 COVERAGE_DEBUG                  Options for --debug.
 4:           COVERAGE_DEBUG_CALLS                  Lots and lots of output about calls to Coverage.
 5:                COVERAGE_ENV_ID                  Use environment-specific test directories.
 6:              COVERAGE_KEEP_TMP                  Keep the temp directories made by tests.
 7:          COVERAGE_NO_CONTRACTS                  Disable PyContracts to simplify stack traces.
 8:            COVERAGE_NO_CTRACER                  Don't run the tests under the C tracer.
 9:           COVERAGE_NO_PYTRACER = '1'            Don't run the tests under the Python tracer.
10:               COVERAGE_PROFILE                  Set to use ox_profile.
11:            COVERAGE_TRACK_ARCS                  Trace every arc added while parsing code.
12:                 PYTEST_ADDOPTS                  Extra arguments to pytest.

(# [value] | x # ... | ? | q)>

All of the files were scanned, and 12 environment variables found. We can see that COVERAGE_NO_PYTRACER has the value “1”, and none of the others are in the environment. At the prompt, if I type “4”, then COVERAGE_DEBUG_CALLS (line 4) will be toggled to “1”. Type “4” again, and it is cleared. Typing “4 yes please” will set it to “yes please”, but often I just need something or nothing, so toggling “1” as the value works.

One bit of complexity here is that a program you run in your shell can’t change environment variables for subsequent programs, which is exactly what we need. So “set_env” is actually a shell alias:

alias set_env='$(set_env.py $(git ls-files))'

This runs set_env.py against all of the files checked-in to git, and then executes whatever set_env.py outputs. Naturally, set_env.py outputs shell commands to set environment variables. If ls-files produces too much output, you can use globs there also, so “**/*.py” might be useful.

Like I said, it’s not a masterpiece, but it works for me. If there are other tools out there that do similar things, I’d like to hear about them.

Coverage.py 5.0a6: context reporting

Wednesday 17 July 2019

I’ve released another alpha of coverage.py 5.0: coverage.py 5.0a6. There are some design decisions ahead that I could use feedback on.

Important backstory:

  • The big feature in 5.0 is “contexts”: recording data for varying execution context, also known as Who Tests What. The idea is to record not just that a line was executed, but also which tests ran each line.
  • Some of the changes in alpha 6 were driven by a hackathon project at work: using who-tests-what on the large Open edX codebase. We wanted to collect context information, and then for each new pull request, run only the subset of tests that touched the lines you changed. Initial experiments indicate this could be a huge time-savings.

Big changes in this alpha:

  • Support for contexts when reporting. The --show-contexts option annotates lines with the names of contexts recorded for the line. The --contexts option lets you filter the report to only certain contexts. Big thanks to Stephan Richter and Albertas Agejevas for the contribution.
  • Our largest test suite at work has 29k tests. The .coverage SQLite data file was 659Mb, which was too large to work with. I changed the database format to use a compact bitmap representation for line numbers, which reduced the data file to 69Mb, a huge win. (A rough sketch of the bitmap idea follows this list.)
  • The API to the CoverageData object has changed.
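
The storage format is an internal detail of coverage.py, but the general bitmap idea is easy to sketch in plain Python. This is only an illustration with made-up helper names, not the actual coverage.py code:

def lines_to_bitmap(lines):
    """Pack line numbers into bytes: bit n is set if line n was executed."""
    if not lines:
        return b""
    bitmap = bytearray(max(lines) // 8 + 1)
    for line in lines:
        bitmap[line // 8] |= 1 << (line % 8)
    return bytes(bitmap)

def bitmap_to_lines(bitmap):
    """Unpack the bytes back into a sorted list of line numbers."""
    return [
        byte_i * 8 + bit_i
        for byte_i, byte in enumerate(bitmap)
        for bit_i in range(8)
        if byte & (1 << bit_i)
    ]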

Some implications of these changes:

  • The HTML reporting on contexts is good for small test suites, but very quickly becomes unwieldy if you have more than 100 tests. Please try using it and let me know what kind of reporting would be helpful.
  • The new, more compact data file is harder to query. The older, larger data file had a schema designed to be useful for ad-hoc querying: a classic third-normal-form representation of the data. Now I consider the database schema to be a private implementation detail. Should we have a new “coverage sql” report command that exports the data to a convenient SQLite file?
  • Because CoverageData has changed, you will need an updated version of pytest-cov if you use that plugin. The future of the plugin is somewhat up in the air. If you would like to help maintain it, get in touch. You can install the up-to-date code with:
    pip install git+https://github.com/nedbat/pytest-cov.git@nedbat/cov5-combine#egg=pytest-cov==0.0
  • To support our hackathon project, we wrote a new pytest plugin: it uses pytest hooks to indicate the test boundaries, and can read the database and the code diff to choose the subset of tests to run. This plugin is in very rough shape (as in, it hasn’t yet fully worked), but if you are interested in participating in this experiment, get in touch. The code is here nedbat/coverage_pytest_plugin. I don’t think this will remain as an independent plugin, so again, if you want to help with future maintenance or direction, let me know.
  • All of our experimentation (and improvements) for contexts involve line coverage. Branch coverage only complicates the problems of storage and reporting. I’ve mused about how to store branch data more compactly in the past, but nothing has been done.

I know this is a lot, and the 5.0 alpha series has been going on for a while. The features are shaping up to be powerful and useful. All of your feedback has been very helpful, keep it coming.

Changelog podcast: me, double-dipping

Saturday 29 June 2019

I had a great conversation with Jerod Santo on the Changelog podcast: The Changelog 351: Maintainer spotlight! Ned Batchelder. We talked about Open edX, and coverage.py, and maintaining open source software.

One of Jerod’s questions was unexpected: what other open source maintainers do I appreciate? Two people that came to mind were Daniel Hahler and Julian Berman. Some people are well-known in the Python community because they are the face of large widely used projects. Daniel and Julian are known to me for a different reason: they seem to make small contributions to many projects. I see their names in the commits or issues of many repos I wander through, including my own.

This is a different kind of maintainership: not guiding large efforts, but providing little pushes in lots of places. If I had had the presence of mind, I would have also mentioned Anthony Sottile for the same reason.

And I would have mentioned Mariatta, for a different reason: her efforts are focused on CPython, but on the contribution process and tooling around it, rather than the core code itself. A point I made in the podcast was that people and process challenges are often the limiting factor to contribution, not technical challenges. Mariatta has been at the forefront of the efforts to open up CPython contribution, and I wish I had mentioned her in the podcast.

And I am sure there are people I am overlooking that should be mentioned in these appreciations. My apologies to you if you are in that category...
