Kim writes about Managed Developer Testing, referring to an in-depth article by Alberto Savoia about "a set of management practices supported by automated tools which [he believes] are essential to make a developer testing effort successful".
Kim says "our engineering team has nominally agreed that test-driven development is a good thing, but I'm having a lot of trouble keeping it going". I know the problem. In my experience, there are a few factors that make developer testing so difficult to sustain:
- Developers aren't used to writing tests, so we're talking about a change of culture, which is hard.
- Design for testability is rarely considered when the system is first architected, so there are large chunks of code that are very difficult to test automatically. This reinforces developers' sense that unit testing is a waste of time.
- Developers have an innate confidence in their own code, so they view unit tests as a formality that won't yield any benefit.
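The testability point deserves an illustration. Here's a minimal sketch (in Python, with hypothetical names) of the kind of refactoring that makes "difficult to test automatically" code testable: instead of a function reaching out to a live service directly, the dependency is passed in, so a test can substitute a fake.

```python
import unittest
from unittest.mock import Mock

class PriceService:
    """Stand-in for a real external dependency (e.g. a network call)."""
    def current_price(self, sku):
        raise NotImplementedError("talks to a live service in production")

def order_total(skus, price_service):
    # Because the service is injected rather than constructed inside the
    # function, a unit test can pass a fake and avoid the live dependency.
    return sum(price_service.current_price(sku) for sku in skus)

class OrderTotalTest(unittest.TestCase):
    def test_sums_prices_from_injected_service(self):
        fake = Mock()
        fake.current_price.side_effect = {"a": 2.0, "b": 3.5}.get
        self.assertEqual(order_total(["a", "b"], fake), 5.5)

if __name__ == "__main__":
    unittest.main()
```

When code is architected this way from the start, tests like the one above are cheap to write; when it isn't, each test has to fight the design first.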
All of those are back-pressures on developer testing, but in my experience the worst is this: the constant push to get features "done" as quickly as possible. There's little support in a traditionally-run engineering department for "holding up" a feature while the tests are written. Once it's coded and seems to work, we call it done and move on to the next feature. And if there isn't already a strong culture and infrastructure for automated developer testing in place, any effort to start bears the entire burden of creating both, so it's a very high bar to clear.
As I'm learning with my personal projects, automated testing is a wonderful thing. I wish I knew how to get it in place on larger projects.