Testing: exceptions and caches

Sunday 25 January 2026

Nicer ways to test exceptions and to test cached function results.

Two testing-related things I found recently.

Unified exception testing

Kacper Borucki blogged about parameterizing exception testing, and linked to pytest docs and a StackOverflow answer with similar approaches.

The common way to test exceptions is to use pytest.raises as a context manager, and have separate tests for the cases that succeed and those that fail. Instead, this approach lets you unify them.

I tweaked it to this, which I think reads nicely:

from contextlib import nullcontext as produces

import pytest
from pytest import raises

@pytest.mark.parametrize(
    "example_input, result",
    [
        (3, produces(2)),
        (2, produces(3)),
        (1, produces(6)),
        (0, raises(ZeroDivisionError)),
        ("Hello", raises(TypeError)),
    ],
)
def test_division(example_input, result):
    with result as e:
        assert (6 / example_input) == e

One parameterized test that covers both good and bad outcomes. Nice.

AntiLRU

The @functools.lru_cache decorator (and its convenience cousin @cache) is a good way to save the result of a function so that you don’t have to compute it repeatedly. But it hides an implicit global in your program: the dictionary of cached results.

This can interfere with testing. Your tests should all be isolated from each other. You don’t want a side effect of one test to affect the outcome of another test. The hidden global dictionary will do just that. The first test calls the cached function, then the second test gets the cached value, not a newly computed one.

Ideally, lru_cache would only be used on pure functions, where the result depends only on the arguments. Then you don’t need to worry about interactions between tests, because the answer will be the same for the second test anyway.
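A quick example of the safe case, a pure cached function, where a cache entry left over from another test is harmless because any test would compute the same answer:

```python
from functools import lru_cache

@lru_cache
def fib(n):
    # Pure: the result depends only on n, so a cached value
    # left over from a previous test is still the right answer.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```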

But lru_cache is also used on functions that pull information from the environment, perhaps from a network API call. The tests might mock out the API to check the behavior under different circumstances. This is where the interference becomes a real problem.

The lru_cache decorator makes a .cache_clear method available on each decorated function. I had some code that explicitly called that method on the cached functions. But then I added a new cached function, forgot to update the conftest.py code that cleared the caches, and my tests started failing.
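The manual approach looks roughly like this (get_config is a hypothetical cached function standing in for the real ones; in practice the fixture would live in conftest.py and the cached functions would be imported from the application):

```python
import functools

import pytest

@functools.lru_cache
def get_config():
    # Hypothetical cached function standing in for real ones.
    return {"env": "test"}

@pytest.fixture(autouse=True)
def clear_caches():
    # Runs around every test.  Each cached function must be listed
    # explicitly -- forgetting a new one reintroduces the cross-test
    # leakage described above.
    yield
    get_config.cache_clear()
```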

A more convenient approach is provided by pytest-antilru: it’s a pytest plugin that monkeypatches @lru_cache to track all of the cached functions, and clears them all between tests. The caches are still in effect during each test, but can’t interfere between them.

It works great. I was able to get rid of all of the manually maintained cache clearing in my conftest.py.
