In “Why your mock doesn’t work,” I explained that you have to mock objects where they are used, not where they are defined. Now we’ll talk about another reason your mock may be mysteriously failing to capture activity.
At work a test like this was failing:
# -- cool_feature.py ------------

import logging

logger = logging.getLogger(__name__)

def expensive_prep():
    ...

def do_cool_thing(text):
    expensive_prep()
    logger.info(f"Cool feature is doing {text!r}")
# -- test_cool_feature.py ------------

from unittest.mock import patch

from cool_feature import do_cool_thing

@patch("cool_feature.expensive_prep")
@patch("cool_feature.logger")
def test_cool_feature_announcement(mock_prep, mock_logger):
    do_cool_thing("hello")
    mock_logger.info.assert_called_once_with(
        "Cool feature is doing 'hello'"
    )
We couldn’t see why the log mock wasn’t being called at all. The code was straightforward and the test was simple: why wasn’t the mock catching the log call?
Mocks are slippery. That’s their strength and their weakness. They will happily let you call any method on them, or access any attribute, and when you do, they will give you another slippery mock that will let you do whatever you want. This can be very powerful, but also very misleading.
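To see just how permissive a plain Mock is, consider this sketch (the method names are deliberately nonsense):

```python
from unittest.mock import Mock

m = Mock()

# Any attribute access succeeds and returns another Mock,
# which can itself be called, chained, and so on:
result = m.no_such_method().whatever.attribute

# Even assertions about methods that exist nowhere "work":
m.some_typo_method()
m.some_typo_method.assert_called_once()  # passes silently
```

Nothing here is an error as far as Mock is concerned, which is exactly what makes mistakes hard to spot.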
The problem with our test was the order of the mocks in the function signature. The first decorator corresponds to the last argument, because the first decorator is the last one applied. That’s the inside-out nature of decorators. It should be:
@patch("cool_feature.expensive_prep")
@patch("cool_feature.logger")
def test_cool_feature_announcement(mock_logger, mock_prep):
When the order was wrong, we called the logger, and then were asserting that “cool_feature.expensive_prep” had its “info” method called. That (non-existent) method hadn’t been called, so our assert failed.
We were using the mocks completely incorrectly, but mocks don’t mind: you can do whatever you want, and they let you, with no notification.
Once the order of the mocks in the signature was fixed, the test passed.
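If it helps to convince yourself of the ordering rule, here’s a small standalone check (patching stdlib functions purely for illustration):

```python
import os
from unittest.mock import patch

@patch("os.getcwd")   # first decorator: applied last, so it's the LAST argument
@patch("os.listdir")  # closest to the function: applied first, so the FIRST argument
def check_order(mock_listdir, mock_getcwd):
    # Inside the function, each mock has replaced its target:
    assert os.listdir is mock_listdir
    assert os.getcwd is mock_getcwd
    return "order confirmed"

print(check_order())
```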
But there’s a way we could write this to be more foolproof. Mocks can be made strict with their “spec=True” option. When it is set, the mock will raise an error if you use a method or attribute that didn’t exist on the object being mocked.
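For example, with a toy class (the class and the typo are just for illustration):

```python
from unittest.mock import Mock

class Greeter:
    def greet(self, name):
        return f"Hello, {name}"

loose = Mock()
loose.grete("world")       # typo, but a plain Mock doesn't care

strict = Mock(spec=Greeter)
strict.greet("world")      # fine: greet exists on Greeter
try:
    strict.grete("world")  # typo: spec makes this an error
except AttributeError as e:
    print(e)               # Mock object has no attribute 'grete'
```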
Even better is the autospec=True option to mock.patch. This makes the mock strict with spec=True, and also recursively auto-applies spec=True to any new mock you get from the mock:
@patch("cool_feature.expensive_prep", autospec=True)
@patch("cool_feature.logger", autospec=True)
def test_cool_feature_announcement(mock_logger, mock_prep):
If we had been using autospec from the start, the incorrect test would have raised a helpful error right away:
AttributeError: 'function' object has no attribute 'info'
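Beyond attribute names, autospec also checks call signatures against the real object. A quick sketch, patching a stdlib function just to demonstrate:

```python
import json
from unittest.mock import patch

with patch("json.dumps", autospec=True) as mock_dumps:
    json.dumps({"a": 1})   # matches the real signature: recorded
    try:
        json.dumps()       # the real dumps() requires an argument
    except TypeError as e:
        print(e)
    mock_dumps.assert_called_once_with({"a": 1})
```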
I wish autospec=True was the default. I can see why there are occasional needs for truly permissive mocks, but mostly we intend to use them in strict ways, and want to know when we don’t.
BTW, the usual advice applies: avoid mocking if you can, since even when you get your mocks working properly, there are still dangerous pitfalls. I’ll re-recommend these two resources about how to test differently:
- Augie Fackler and Nathaniel Manista’s PyCon talk Stop Mocking and Start Testing.
- Itamar Turner-Trauring’s article Fast tests for slow services: why you should use verified fakes.
Comments
We have a flake8 plugin to catch usages where autospec or spec_set is not set:
https://github.com/level12/flake8-level12#plugins
To reduce verbosity, I’ve also started adding a mock_patch() and mock_patch_obj() to every project I work on that default those values to true, and then use them in place of mock’s functions.

On a side note: using f-strings for logging is less than optimal. You should instead habituate this form:
logger.info("Cool feature is doing %s", text)
Why? In the f-string case, the formatting is always applied. In the improved form, formatting is only applied if the log record is processed by a handler.
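To make the difference visible, here’s a sketch with a counter class (the class and names are illustrative):

```python
import logging

class Expensive:
    """Counts how often its __str__ is invoked."""
    str_calls = 0
    def __str__(self):
        Expensive.str_calls += 1
        return "expensive result"

logger = logging.getLogger("demo")
logger.setLevel(logging.WARNING)  # DEBUG records will be discarded

obj = Expensive()

# f-string: formatting (and __str__) runs before logging ever sees it.
logger.debug(f"value is {obj}")
print(Expensive.str_calls)   # 1

Expensive.str_calls = 0
# %-style: logging drops the record first, so __str__ never runs.
logger.debug("value is %s", obj)
print(Expensive.str_calls)   # 0
```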
For some kinds of objects, calling the __str__ method can take noticeable time (e.g. NumPy arrays or dataclasses with complex substructure). On the other hand, passing them to logger.info and friends as arguments is very cheap. This means that, particularly for chatty debug calls, you don’t want string formatting to be applied when the handler might throw away the message anyway.

In response to JL: using f-strings in log messages is often cited as a drawback, but in practice it often makes extremely little difference, and avoiding them can actually be a perf hit. There’s a discussion on the pylint GitHub about this: https://github.com/PyCQA/pylint/issues/2354#issuecomment-414526879 and https://github.com/PyCQA/pylint/issues/2395
While you’re correct that a __str__ method can be arbitrarily complex (and therefore arbitrarily expensive), the very common case is that __str__ is quite cheap. In general, f-strings are also more performant than %-interpolation, so using %-interpolation in a log message can actually cause an (admittedly negligible) perf hit. Given that f-strings are far more readable, I think it’s better to habituate them as the pattern to use, and only use % formatting where it makes sense (because of the cost of str, or some other tangible reason).

In terms of the article: Ned, thanks for writing this. I always get the order of arguments wrong when there are multiple patch decorators (it gets even more confusing when you also have pytest fixtures in the mix).