Matthias Noback
August 26, 2016
In a previous article we answered the question of why you would write tests, or specifications, for your software project. In this article we’ll discuss the costs and benefits of writing tests.
The price you pay for writing automated tests consists of the following parts (and probably more):
- You need to become familiar with the test frameworks (test runners as well as tools for generating test doubles). This will initially take some time and may slow you down.
- If your code is not well designed already, or if you’re not producing clean code yet, writing the tests will be a pain. This will slow you down and may even lead you to give up entirely.
- If you’re not very familiar with writing tests, you’ll likely end up with tests that are hard to maintain. This will slow you down during later stages of the project.
Below are some suggestions to keep the costs low.
1. Reducing the cost of learning to work with the tools
First, get yourself familiar with the tools by going over the documentation, or maybe by following an online course. This initial investment should get you up and running quickly; you’ll learn the intricate details along the way. Make sure to read a lot of tests, whether written by team members who already know more about testing or found in the test suites of your favorite frameworks and libraries.
2. Reducing the cost of testing with badly designed code
Automated testing and writing good production code go hand in hand. If the production code is not well designed when you start testing it, it already carries a certain amount of (possibly invisible) technical debt. By writing these tests, you’re finally paying off that debt. This is often a good thing, since it prepares the code for the future.
You can prevent a large initial investment by starting with some high-level tests, testing the application at the User Interface (UI) level. Then, with that safety net in place, you can safely add code or make changes at the lower levels. Of course, you’ll write tests for those smaller changes and, while building up more certainty, you will eventually be able to discard some of the high-level tests.
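To make this a bit more concrete, here is a minimal sketch of such a high-level test, written with PHPUnit; the URL and the expected page content are placeholders for whatever your own application exposes:

```php
<?php
// A minimal sketch of a high-level "safety net" test. The base URL is a
// placeholder for a (test) instance of your own application.
use PHPUnit\Framework\TestCase;

final class HomepageSmokeTest extends TestCase
{
    public function testHomepageRespondsWithAWelcomeMessage(): void
    {
        // Hypothetical URL; point this at your own test environment.
        $html = file_get_contents('http://localhost:8000/');

        $this->assertNotFalse($html, 'Expected the homepage to be reachable');
        $this->assertStringContainsString('Welcome', $html);
    }
}
```

A handful of tests like this won’t tell you much about individual classes, but they will quickly tell you whether a change at a lower level has broken the application as a whole.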
Read more about strategies for improving the cost/benefit factor of writing tests in the article Economy of Tests by Mathias Verraes and Konstantin Kudryashov.
3. Reducing the cost of bad tests
Producing bad tests can make your life just as sour as writing bad production code does. Test code benefits from many of the improvements that you would normally make to production code. But on top of the usual code smells, tests come with their own set of smells: tests can be fragile, contain duplication, contain too much logic themselves, etc. It’s best to learn about these test smells. Consider books like xUnit Test Patterns by Gerard Meszaros or Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce.
Instead of reading up on this topic (which of course requires an even bigger investment), you could also decide to act on every signal that something about your test is wrong. This requires some discipline, but if you already tend to your production code carefully, it will be easy to apply that same intuition for clean code to the less familiar activity of writing test code.
For example, if setting up a particular test takes too many lines of code, look for an intermediate method or even a dedicated object that can do the heavy lifting. Or, if your test code contains a lot of assertions and expectations, try to combine them into one, more meaningful assertion.
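As an illustration, here is a sketch of both ideas in PHPUnit; the Order and OrderLine classes are hypothetical stand-ins for your own domain code:

```php
<?php
// A sketch of the two refactorings mentioned above: setup code moved into a
// named helper method, and several low-level assertions combined into one
// meaningful, custom assertion. Order and OrderLine are hypothetical classes.
use PHPUnit\Framework\TestCase;

final class OrderTest extends TestCase
{
    public function testAShippedOrderIsComplete(): void
    {
        // Instead of many lines of inline setup, delegate to a named helper.
        $order = $this->aPaidOrderWithTwoLines();

        $order->ship();

        // Instead of asserting on status, shipping date and open lines
        // separately, express the expectation once, in domain language.
        $this->assertOrderIsComplete($order);
    }

    private function aPaidOrderWithTwoLines(): Order
    {
        $order = new Order();
        $order->addLine(new OrderLine('product-1', 1));
        $order->addLine(new OrderLine('product-2', 3));
        $order->markAsPaid();

        return $order;
    }

    private function assertOrderIsComplete(Order $order): void
    {
        $this->assertTrue($order->isComplete(), 'Expected the order to be complete');
    }
}
```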
Again, there are lots of good examples of test refactorings in the books I just mentioned, but I’ve found that many of these improvements can eventually be discovered by simply continuing on your way, trusting your own feeling that “something is wrong” about a test.
Does writing tests make you go faster or slower?
The answer to this question has several parts.
First, to answer this question you should not pay too much attention to the fact that, due to the costs described above, testing may at times feel slow. Just as you keep learning to write better production code, you will keep learning how to write better test code. This is an ongoing, and fascinating, learning process. It will teach you a lot about programming itself, and the quality of your production code will improve along the way.
Every programmer I’ve spoken with so far who has reached an intermediate level of familiarity with automated testing tells me that writing tests definitely makes them a faster and more effective programmer. Once you get the hang of it, you’ll never want to go back - you won’t want to trade that feeling of safety and security for endless “var_dump-ing and die-ing”, or for refreshing a page in a browser and filling in that form yet again.
Second, the answer to this question will often be used to determine whether or not a team should start writing tests. Being fast or slow isn’t a good criterion for making this decision. As explained in the previous article, testing is about delivering the right thing. It’s also about proving that the thing isn’t broken and that it works in the exact way the customer expects it to. Being faster or slower doesn’t have anything to do with that.
Third, consider a simple piece of production code: it will be much easier and faster to simply write that piece of code than to also write test code for it. Besides writing that test code, you will also spend time running it, improving it, submitting it as part of a pull request, processing code review comments about it, and so on.
However, you also need to consider the increased cost of maintaining untested code. As explained in the previous article, having no tests is bad for the future of the code: it will be scary to make changes to it, and its expected behavior isn’t documented anywhere. So we shouldn’t only ask whether you are a faster programmer with or without writing tests; it’s also about the health of the project, now and in the future.
When writing tests remains painful
At some point you’ll be past the initial learning period: you know how to write different types of tests, using several of the available tools. You’re in a more comfortable place on the learning curve. Things should be wonderful, and you should start reaping some of the major benefits now.
Sometimes, though, you will still feel like you’re struggling. There appears to be no way to make your tests finish in under 30 seconds. Tests fail randomly, only to succeed again later, without you changing anything. Or, equally bad: all the tests pass, but the application running in production is obviously broken.
There are several causes of, and several solutions to, these common situations.
First of all, your test suite may not be in good shape: the ratio of unit tests to integration tests to end-to-end tests may be off. Read up on the concept of the test pyramid to find out why this is bad and how to fix it. I can also recommend Ciaran McNulty’s talks on the subject of testing (e.g. Building a Pyramid or Why Your Test Suite Sucks).
Second, there may be too big a difference between the way you test your application while developing it and the way that same application is configured in a production environment. This calls for a revision of your current delivery/deployment strategy. You could look into setting up an “acceptance” environment that resembles your production environment as closely as possible. For more useful suggestions on this topic, read Continuous Delivery by Jez Humble and David Farley.
Finally, it may be time to reconsider the tools you are using. Some of the technical choices underlying (web development) frameworks won’t make it easy for you to write tests. In general: the more third-party code relies on static (i.e. non-interchangeable) or global (i.e. not injected) values or objects, the harder it will be to keep the code you write on top of it testable. The same is true for code that relies on something outside of the running application, like the filesystem or a network connection.
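To give a deliberately simplified example of what this means in practice: logic that reaches out to a global value, like the current time, is hard to test, while the same logic with an injected collaborator is easy to test. The Clock interface and the class names below are made up for the occasion:

```php
<?php
// Hard to test: the discount depends on a global value (the system clock)
// that a test can't control.
final class FridayDiscountCalculatorHardToTest
{
    public function discountPercentage(): int
    {
        return date('N') === '5' ? 10 : 0; // 10% discount on Fridays
    }
}

// Easier to test: the clock is an injected, replaceable collaborator.
interface Clock
{
    public function now(): \DateTimeImmutable;
}

final class FridayDiscountCalculator
{
    private Clock $clock;

    public function __construct(Clock $clock)
    {
        $this->clock = $clock;
    }

    public function discountPercentage(): int
    {
        return $this->clock->now()->format('N') === '5' ? 10 : 0;
    }
}
```

In a test you can now pass in a Clock implementation that always returns a fixed date, which makes the calculator’s behavior fully deterministic.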
If your framework or library gets in the way like this, you could of course look for alternatives that are more supportive of your endeavour. Or you could push the framework or library further towards the “edges” of your application. This is where a hexagonal architecture becomes helpful.
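As a rough sketch of that idea (again, all names are hypothetical): the application core only depends on a small interface (a “port”), while the code that touches the outside world lives in an adapter at the edge, where a test can easily replace it:

```php
<?php
// The port: an interface owned by the application core. The core never knows
// which concrete implementation it is talking to.
interface Mailer
{
    public function sendWelcomeMail(string $recipientEmail): void;
}

// An adapter at the edge: the only place that actually touches the outside world.
final class NativeMailer implements Mailer
{
    public function sendWelcomeMail(string $recipientEmail): void
    {
        mail($recipientEmail, 'Welcome!', 'Thanks for signing up.');
    }
}

// In a test you can swap in a fake that simply records what was "sent".
final class FakeMailer implements Mailer
{
    /** @var string[] */
    public array $sentTo = [];

    public function sendWelcomeMail(string $recipientEmail): void
    {
        $this->sentTo[] = $recipientEmail;
    }
}
```

Because the code that talks to the outside world is confined to an adapter, the rest of the application can be tested without it.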
Conclusion
We already knew about the major benefits of testing (we discussed them in the previous article), but now we’ve also considered the price of testing. We’ve covered several ways to reduce or prevent the costs that come from learning how to write tests, dealing with untestable production code, and maintaining the quality of your test suite.
We’ve considered the question: does writing tests make you slower or faster? And finally, we’ve looked at what to do when you find that all your test efforts don’t seem to pay off.
If you’re still in doubt whether writing tests will be worth the effort, I would recommend applying Mathias Verraes’ approach to testing: first be religious about it, then evaluate later to find out whether it works for you.
What remains to be discussed is the question: when to write tests? Before, during, or after writing production code? And: in which cases may you allow yourself not to write any tests at all?