Matthias Noback
September 1, 2016
In previous articles we already answered some important questions: why would you write tests in the first place? And: will it make you faster or slower?
Now we need to answer one other important question that should guide you in your daily quest to improve the quality of the code you deliver: when should you write a test?
When, as in: for which projects?
I previously remarked that:
If you deliver code without tests, it’s legacy code.
This isn’t entirely true, because we should first make an important distinction. We naturally distinguish production code from test code, but production code isn’t always going to end up running on a production server or on an end user’s computer. For example, I’ve written quite a lot of workshop code, sample code, throw-away code (like a simple conversion script), private (never shared) code, code that will only be useful (and used) for a very short time, etc. For all these types of code one could argue that writing tests isn’t absolutely necessary, the reason being that no tests are needed to prevent such code from becoming unmaintainable legacy code: it simply won’t live long enough for that.
Of course it will still prove very useful to produce tests for these types of code, since a big advantage of having an automated test suite is that it will save you a lot of manual, exploratory testing.
I’ve often skipped testing because the code I was about to write was going to be very simple and bug-free. This almost always turned out not to be true. Each time, I knew I had made a bad choice: it made me very inefficient at producing a working solution.
Now think of the kind of code that is the exact opposite of the types mentioned above: code that will be deployed to a production environment, code that will run on an end user’s machine, code that will be used for more than a couple of days, code that is not meant as a learning tool but is instead the “real deal”. All of this code requires tests: proof of its correctness and effectiveness.
When, as in: at which point in your development workflow?
There’s another way in which we can interpret the initial question: while you’re developing software, at which point in your development workflow should you write tests?
We’ll discuss several options now, including possible advantages or disadvantages, and some suggestions.
Before writing any code
The best-known moment to write tests is right before writing the corresponding production code. This approach is known as test-driven development (TDD), or test-first development. It helps you write no more production code than is needed for the feature you’re currently implementing. It also automatically provides you with a large set of regression tests, covering every unit of code.
TDD has a nice flow to it. It divides the large “problem” of software development into tiny pieces. It allows you to take small, safe steps, incrementally solving the problem at hand. This is why TDD is such a nice way to discover algorithms.
This approach also helps drive the design of your code units (e.g. classes). As explained in a previous article, the quality of your code will greatly improve when you start submitting it to tests. When test-driving a piece of code, you’re likely to end up with a good design. And once the test is “green” (i.e. passing), you can refactor the code with impunity (i.e. make structural improvements without changing any of its behavior).
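To make one red-green cycle concrete, here is a minimal sketch using PHPUnit (the Discount class and its expected behavior are invented for illustration). You write the failing test first, then just enough production code to make it pass:

```php
<?php

use PHPUnit\Framework\TestCase;

// Red: written first, this test fails because Discount doesn't exist yet.
final class DiscountTest extends TestCase
{
    public function testAppliesAPercentageDiscountToAnAmount()
    {
        $discount = new Discount(10);

        $this->assertSame(90.0, $discount->applyTo(100.0));
    }
}

// Green: the simplest production code that makes the test pass.
final class Discount
{
    private $percentage;

    public function __construct($percentage)
    {
        $this->percentage = $percentage;
    }

    public function applyTo($amount)
    {
        return $amount * (100 - $this->percentage) / 100;
    }
}
```

With the test green, you can refactor both the test and the production code, rerunning the test after every small step.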
Good design without TDD
When developers become more experienced test writers, they often skip the test-first part. I don’t know of any resource describing this as a valid approach, but in my experience it can work quite well. As mentioned in a previous article, when you learn to write automated tests for your code, the quality of your production code starts to increase as a result of the need to make it testable. This means that while learning to write good tests, you learn to write better code in general, and these design skills turn into general intuitions. Once those are acquired, a developer can start trusting their expert intuition for good design. It enables them to let go of the “official” TDD cycle of red-green-refactor and interweave production code with tests, alternating between a test-first and a test-after-the-fact approach.
As a replacement for manual testing
Without a test-first approach we still need a way to exercise the code we write to see if it behaves as expected. If we don’t have a test suite that does this, we’ll resort to more primitive measures like hitting the “Refresh” button in a browser, running console commands again and again with different arguments, filling in complicated forms, etc.
Tests in these cases don’t guide the design; they just make running the code easier. Replacements for manual test scenarios can occur at different levels of granularity.
Often you’ll use the same testing tools, but your tests will look different from real unit tests or acceptance tests. A test may be written using an xUnit framework like PHPUnit, yet not be a real unit test (i.e. it doesn’t test the unit in isolation, or it doesn’t cover all possible behaviors, etc.). In this case, PHPUnit’s test runner is used to save you some time, but the test provides neither certainty nor a full specification of the unit’s behavior.
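For example (a hypothetical sketch; the script path and fixture below are made up), the following test uses PHPUnit’s runner to replace manually re-running a conversion script. It is not a unit test: it shells out, touches the file system, and only covers one happy path, but it saves you from typing the command by hand every time:

```php
<?php

use PHPUnit\Framework\TestCase;

// Not a real unit test: it runs the whole script and checks a single
// happy path, but it replaces a repetitive manual test scenario.
final class ConvertCsvScriptTest extends TestCase
{
    public function testItConvertsTheSampleCsvFileToJson()
    {
        exec('php bin/convert-csv.php tests/fixtures/sample.csv /tmp/sample.json', $output, $exitCode);

        $this->assertSame(0, $exitCode);
        $this->assertFileExists('/tmp/sample.json');
    }
}
```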
The same goes for so-called “functional” tests, which may be written in the Gherkin language and executed by a Cucumber-like tool (e.g. Behat for PHP). These tests describe, for example, which URL to visit, which button to click, and which HTML element to expect to appear. Such scenarios quite literally mimic the steps an actual developer would take to verify that the application works.
These tests look like acceptance tests, but they aren’t: the steps in the scenarios are described in technical “computer” terms (e.g. which classes the HTML elements should have, or which URL path will be used), not in terms of how you would describe the desired behavior of your application, regardless of how it will eventually be implemented. Those implementation details should be hidden in the executable code behind each of the steps described in the scenario.
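As a sketch of what hiding those details can look like (the class name, step wording, and URL path below are invented, and it assumes Behat with the Mink extension): the scenario contains a behavioral step like “When I add "Fancy Teapot" to my cart”, while the technical details live in the step definition:

```php
<?php

use Behat\MinkExtension\Context\MinkContext;

class CheckoutContext extends MinkContext
{
    /**
     * Implements the behavioral scenario step:
     *   When I add "Fancy Teapot" to my cart
     *
     * @When I add :productName to my cart
     */
    public function iAddProductToMyCart($productName)
    {
        // The URL path and button label are implementation details,
        // hidden here instead of leaking into the scenario.
        $this->visitPath('/products/' . rawurlencode($productName));
        $this->pressButton('Add to cart');
    }
}
```

This way the scenario remains a description of behavior, and only the context class needs to change when the implementation does.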
Tests as a replacement for manual testing have the advantage that they save time while you’re developing (since they provide an automated way to “click” through an application). They also provide a safeguard against regressions. Keep in mind that they (most often) are not proper specifications of the code, nor of the use cases of the application at large.
After a “spike”
Instead of writing tests before or while coding, it’s also possible to write tests after writing production code. A relatively short span of writing production code is often called a “spike”: a period in which you think hard, look things up in API documentation, and try to figure things out. Eventually you’ll have something crude that works. The spike is over once you’ve settled on a particular design, including (parts of) an implementation.
Basically, a spike is just what programmers have done for years and years. The new thing, once you know about testing, is that after a spike you get an insecure feeling, because you don’t have tests for the code you produced during it.
At this point you should of course add some unit tests (and any other type of test you deem necessary). But the downside of this test-after-the-fact approach is that writing unit tests now can feel very tedious, and might slow you down so much that you feel tempted to stop writing tests altogether. After all, you’ve already seen the production code working!
Some suggestions to improve the spike situation:
- Stop implementing untested code as soon as you’ve had the “design breakthrough”.
- Afterwards, throw away the code written during the spike and start over, this time with a TDD approach. This might seem slow, but it’s actually pretty fast, since you’ve already done most of the work (i.e. the thinking).
- Another option is to think more beforehand: don’t put your thoughts into code immediately, but first think them through, possibly making some notes on paper.
- If you feel tempted to stop writing tests altogether: remember why you write tests, and think about the intended future of your code.
Closing remarks: stop the shaming
Test advocates often like to shame people into testing: “You’re doing it wrong!” And developers often like to make fun of it: “Like, we’re all writing tests for our code, right? ;)” Both responses are inadequate. We shouldn’t feel ashamed for not delivering tests with our code. We should simply admit that we don’t know how to do it yet, and ask for help. We need to practice, learn along the way, and continuously improve our testing skills. Because if there’s one thing we shouldn’t have a single doubt about, it’s the necessity of producing tests for our production code.