The value of unit tests

Thursday 11 February 2016

Seems like testing and podcasts are in the air... First, I was interviewed on Brian Okken’s Python Test podcast. I wasn’t sure what to expect. The conversation went in a few different directions, and it was really nice to just chat with Brian for 45 minutes. We talked about coverage.py, testing, doing presentations, edX, and a few other things.

Then I see that Brian was himself a guest on Talk Python to Me, Michael Kennedy’s podcast about all things Python.

On that episode, Brian does a good job arguing against some of the prevailing beliefs about testing. For example, he explains why unit tests are bad and integration tests are good. His argument boils down to this: you should test the promises you’ve made. Unit tests mostly deal with internal details that are not promises you’ve made to the outside world, so why focus on testing them? The important thing is whether your product behaves right from the outside.

I liked this argument; it made sense. But I don’t think I agree with it. Or rather, I completely agree with it and come to a different conclusion.

When I build a complex system, I can’t deal with the whole thing at once. I need to think of it as a collection of smaller pieces. And the boundaries between those pieces need to remain somewhat stable. So they are promises, not to the outside world, but to myself. And since I have made those promises to myself, I want unit tests to be sure I’m keeping those promises.
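
A promise like that is easiest to see as a unit test against an internal boundary. Here’s a minimal sketch (the helper function and its behavior are hypothetical, just to make the idea concrete): no outside caller ever sees this function, but the test records the contract I’ve made with myself about it.

```python
# Hypothetical internal boundary: other parts of the system rely on this helper
# returning normalized, deduplicated tags in sorted order.
def normalize_tags(raw_tags):
    """Lowercase, strip, and deduplicate a list of tag strings."""
    return sorted({tag.strip().lower() for tag in raw_tags if tag.strip()})

# The unit test pins down the promise, so a later refactor can't quietly break it.
def test_normalize_tags_keeps_its_promise():
    assert normalize_tags(["  Python", "testing", "python ", ""]) == ["python", "testing"]
```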

Another value of unit tests is that they are a way to chop up combinatorial explosions. If my system has three main components, and each of them can be in ten different states, I’ll need 1000 integration tests to cover all the possibilities. If I can test each component in isolation, then I only need 30 unit tests to cover the possibilities, plus a small number of integration tests to consider everything mashed together. Not to mention, the unit tests will be faster than the integration tests. Which would you rather have? 1000 slow tests, or 30 fast tests plus 20 slow tests?
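
To make the counting concrete, here’s a pytest sketch under the same assumptions (three hypothetical components with ten states each; none of the names come from a real system): thirty parametrized unit tests cover each component’s states in isolation, and a handful of integration tests spot-check combinations instead of enumerating all 1000.

```python
import itertools

import pytest

COMPONENTS = ["parser", "scheduler", "reporter"]  # three hypothetical components
STATES = range(10)                                # ten states each

# 3 * 10 = 30 fast unit tests: one per (component, state) pair.
@pytest.mark.parametrize("component,state", itertools.product(COMPONENTS, STATES))
def test_component_in_isolation(component, state):
    # In a real suite this would exercise one component's behavior in one state.
    assert component in COMPONENTS and state in STATES

# A small number of slow integration tests: spot-check that the pieces fit
# together, rather than enumerating all 10 ** 3 = 1000 combinations.
@pytest.mark.parametrize("a,b,c", [(0, 0, 0), (9, 9, 9), (3, 7, 5)])
def test_everything_mashed_together(a, b, c):
    # In a real suite this would drive the whole system end to end.
    assert {a, b, c} <= set(STATES)
```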

Sure, it’s possible to overdo unit testing. And it’s really easy to have all your unit tests pass and still have a broken system. You need integration tests to be sure everything fits together properly. Finding the right balance is an art. I really like hearing Brian’s take on it. Give it a listen.

Comments

Hey there Ned, you're even more famous now! That's your second podcast interview recently, right? :) Now we need to get you onto Talk Python To Me and you'll have completed the circuit :)

I can see where he's coming from, in that so often people write a TON of unit tests and don't bother with integration testing at all, which I think is a shame.

However, especially in a dynamic language like Python, I think unit tests can be a great developer tool to ensure that the different units are actually obeying the contracts they advertise.
>If my system has three main components, and each of them can be in ten different states, I'll need 1000 integration tests to cover all the possibilities. If I can test each component in isolation, then I only need 30 unit tests to cover the possibilities, plus a small number of integration tests to consider everything mashed together.

Interesting point. Also, it reminded me of equivalence classes in testing, which provide analogous benefits.
>If my system has three main components, and each of them can be in ten different states, I'll need 1000 integration tests to cover all the possibilities. If I can test each component in isolation, then I only need 30 unit tests to cover the possibilities, plus a small number of integration tests to consider everything mashed together.

This doesn't test all of the 1000 states, though. It only tests for the ten states each component can be in (plus the integration testing, which you have to do anyways). If you wrote 30 integration tests for the components' states, that would cover exactly the same amount of combinations. Of course these tests would be more complex and hence slower to execute, but you can't fight against combinatorial explosion by changing your testing strategy! You can only fight combinatorial explosion by breaking dependencies in the system.

And as for testing with state equivalence classes: It doesn't make sense. If you've proved some component's internal states equivalent with respect to the component's behavior (i.e. its public interface), then those states should not be distinct in the first place; otherwise you either have duplicated code or the parts that make up the state aren't orthogonal. In any case, that's a mistake in the component's design.

The only exception here is accidental state or complexity, like performance optimization (caches etc).
@Jonas Haag: If you're referring to my previous comment (in your second-to-last paragraph above), I did not say that equivalence classes were applicable to Ned's example and reasoning. I just said:

>Also, it reminded me of equivalence classes in testing, which provide analogous benefits.
I like this tweet by Ram, which summarises the different roles these tests play quite well:

https://twitter.com/artagnon/status/668264916349018112


>Broad functional tests will tell you what broke, and specific unit tests will tell you what to fix. You need both.


Unit tests do have a certain "addictive" quality about them, especially when coverage is also involved, and I often tend to write too many, which makes my resistance to changing things high.
