This is a blog post about the trade-offs I’ve observed during agile software delivery with test driven development (TDD).
Agile and TDD are amazing, and I use them daily.
My hope is that raising awareness of the trade-offs we make while applying Agile and TDD will help us make better decisions.
Agile software development
Agile software development is a balance of two priorities.
On the one hand, we need to develop an application that satisfies the current requirements completely, correctly and economically.
On the other hand, we need to allow for enough flexibility so that we can adapt to new requirements easily.
Even though those two priorities may look mutually exclusive, that is not necessarily the case.
Test driven development (TDD) is a software development process often used with Agile. TDD focuses on closing the gap between the moment we write the code and the moment we know the code is doing what we expect. As developers, we work in very small increments, and after every increment we get feedback about the correctness of our application.
TDD depends on writing many tests, and on writing them before the application code is even written.
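As a minimal sketch of one such increment (the `vat_amount` function and the rate are made up for illustration): the test is written first and fails because the function does not exist yet, then we write just enough application code to make it pass.

```python
# Illustrative TDD increment; `vat_amount` is a hypothetical function.
# Step 1 (red): write the test first. It fails because vat_amount doesn't exist yet.
# Step 2 (green): write just enough application code to make the test pass.

def vat_amount(net_price, rate):
    """Application code written *after* the test below, just enough to pass it."""
    return round(net_price * rate, 2)

def test_vat_amount():
    # The test encodes the expected behaviour before the implementation exists.
    assert vat_amount(100.0, 0.21) == 21.0

test_vat_amount()
```

Each such increment is tiny, which is exactly what keeps the feedback loop short.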
Agile and TDD work very well together for many reasons. Some that I find important are:
Tests are proofs that our application satisfies the requirements completely and correctly.
The test-first approach discourages writing more software than you really need, supporting the Agile requirement of being economical.
Additionally, the test-first approach affects the design of your application code.
Tests require your application to have “articulation points”: places where you can decompose your application so that you can test the components individually, supporting the Agile requirement of being flexible.
Sidepoint: Sometimes I hear developers who are reluctant to make changes in their application “just to make testing easier”. A good answer is to remind them that “testability” is a very important architectural requirement.
If an application scores low on that requirement, then something is wrong with the application’s architecture.
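One common way to create such an articulation point, sketched here with made-up names, is to pass a collaborator in explicitly instead of hard-wiring it, so the component can be tested in isolation:

```python
# Illustrative sketch; `PriceCalculator` and `rate_lookup` are hypothetical names.
# With the rate source hard-wired inside, the class could not be tested without
# the real rate source. Injecting it creates the articulation point.

class PriceCalculator:
    def __init__(self, rate_lookup):
        # The injected callable is the articulation point: a test can pass
        # its own implementation here.
        self._rate_lookup = rate_lookup

    def gross_price(self, net_price, country):
        return round(net_price * (1 + self._rate_lookup(country)), 2)

# A test can now exercise PriceCalculator in complete isolation:
calculator = PriceCalculator(rate_lookup=lambda country: 0.20)
assert calculator.gross_price(100.0, "GB") == 120.0
```

The production code wires in the real rate source; the test substitutes a trivial one.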
Writing a test
Now that we have established how awesome TDD is, let’s try to understand it a bit better so that we can use it appropriately. For every test we write there is a trade-off:
Pros: The test proves that given some very specific input, we know how the code under test is going to behave.
Cons: The test pours concrete into some of our design decisions.
The method names, the return types, and the number and types of the parameters of every piece of application code you use in the test now become harder to change, because changing them requires the extra effort of updating the test.
That is neither flexible nor economical.
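As an illustration of this coupling (all names are made up): the test below hard-codes the class name, the constructor’s parameters, and the method’s name and return type, so any later design change to those must also be paid for in the test.

```python
# Illustrative sketch with hypothetical names.
class Invoice:
    def __init__(self, net_amount, vat_rate):
        self.net_amount = net_amount
        self.vat_rate = vat_rate

    def total(self):
        return round(self.net_amount * (1 + self.vat_rate), 2)

def test_invoice_total():
    # This test pours concrete into several design decisions: if we later
    # rename `total`, reorder the constructor arguments, or move `vat_rate`
    # onto per-line items, this test must be rewritten too.
    invoice = Invoice(100.0, 0.21)
    assert invoice.total() == 121.0

test_invoice_total()
```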
Please keep in mind, I am not talking about re-writing tests that conflict with new requirements.
I am talking about tests that you have to rewrite because the design of the application needs to evolve to accommodate new requirements.
It is definitely something that I have experienced in code bases that are more than a couple of years old.
I try to extend the code base to introduce a new feature.
Either a considerable amount of the development effort goes towards adjusting and updating unrelated tests, or the cost of refactoring the tests weighs on how I implement the feature in the first place, pushing me towards the path of least resistance.
So, tests are good until they aren’t.
How do you choose which tests to write and which not?
Choosing which test to write next
I recently re-read “Test Driven Development: By Example” by Kent Beck.
The author suggests we keep a To-Do list of what we need to do to keep us focused.
Initially, the list contains high level items very close to the acceptance criteria that our application should adhere to.
Those items are usually too big and complex to be implemented in one go, so, to keep working in smaller steps with continuous feedback, we come up with smaller To-Do items, each one small and specific but still challenging.
The list evolves as we progress: we get to cross off some items and introduce new ones.
Here is an example To-Do list for the imaginary feature of providing a new “Sales API”. The feature was too big to implement in one step, so I broke it down into smaller items:

- Create the endpoint for the Sales API
- Compute the VAT for each sale
Those items are smaller but still challenging for me.
It is important to stress that this list does not depend only on the feature but also on the developer’s expertise with the technology and the domain.
In the example above, a NodeJS developer with years of experience with Express would not need a sub-task just for creating an endpoint, since for them this is not a challenging task from which they can learn something new.
Similarly, a developer less experienced in the domain might have broken the VAT computation sub-task down further into smaller tasks, the first of which would be to compute the VAT only for the current country.
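A sketch of what that first, deliberately small test and its implementation might look like (the function name, country code, and rate are all made up):

```python
# Illustrative: `vat_for` is a hypothetical function; "NL" and 21% are assumed.
# Smallest first step: support only the current country.
VAT_RATES = {"NL": 0.21}

def vat_for(amount, country):
    return round(amount * VAT_RATES[country], 2)

def test_vat_for_current_country_only():
    assert vat_for(100.0, "NL") == 21.0

test_vat_for_current_country_only()
```

Later sub-tasks would extend `VAT_RATES` and add tests for the remaining countries.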
This list now represents our awareness and mastery over the domain, the existing code and the technology.
The more aware we are, the more abstract and closer to the business language we can keep this list.
The more uncertainty and unknowns we have, the more we need to decompose bigger tasks into smaller, more concrete and technical sub-tasks.
This list is also driving which tests we are going to write.
With high awareness and mastery, we are able to write fewer tests.
Each test is closer to the business language and tests a larger chunk of our application’s logic, while maintaining our confidence that our code is working.
By avoiding writing many smaller tests, we keep the cost of future refactoring low.
By testing from the outside in as much as possible, we remain flexible to refactor the internals of our application, since fewer tests depend on them.
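A sketch of the difference, with all names hypothetical: the outside-in test below exercises only the public entry point, so the internal helpers can be renamed, merged, or split without touching the test.

```python
# Illustrative sketch with made-up names.
def _net_total(sales):          # internal helper: free to refactor
    return sum(s["amount"] for s in sales)

def _vat(amount, rate=0.21):    # internal helper: free to refactor
    return round(amount * rate, 2)

def sales_report(sales):
    """Public entry point: the only thing the outside-in test depends on."""
    net = _net_total(sales)
    return {"net": net, "vat": _vat(net), "gross": round(net + _vat(net), 2)}

def test_sales_report_outside_in():
    # Tests the behaviour through the public API; nothing pins down the
    # internal helpers, so they remain cheap to change.
    report = sales_report([{"amount": 100.0}, {"amount": 50.0}])
    assert report == {"net": 150.0, "vat": 31.5, "gross": 181.5}

test_sales_report_outside_in()
```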
Don’t drink too much Kool-Aid
We humans tend to overestimate our abilities; it is very likely that we are not as comfortable with the domain and the technologies as we think we are. Be humble and receptive to signals that point to you not being fully aware of what is going on.
When something unexpected happens that makes you question whether your understanding is complete, do create smaller sub-tasks and more specific tests.
Speed and Test Complexity
Two more considerations when testing larger chunks are speed and test complexity.
When the code under test includes many components, tests can get slow.
The slowness is seldom caused by instantiating the application’s domain object graph, but rather by frameworks and external integrations (e.g. a database).
I still encourage you to mock those out if they slow you down.
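A sketch of mocking out such an integration with Python’s standard `unittest.mock` (the repository class and its method are made up for illustration):

```python
# Illustrative sketch: `SalesRepository` would be a slow database layer;
# the test replaces it with a fast in-memory mock.
from unittest.mock import Mock

class SalesService:
    def __init__(self, repository):
        self._repository = repository

    def total_sales(self, country):
        # In production this call would hit the database; in the test it won't.
        return sum(self._repository.sales_for(country))

# The test wires in a mock instead of the real repository.
repository = Mock()
repository.sales_for.return_value = [100.0, 50.0, 25.0]

service = SalesService(repository)
assert service.total_sales("NL") == 175.0
repository.sales_for.assert_called_once_with("NL")
```

The domain logic is still exercised end to end; only the slow edge is replaced.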
When testing larger chunks, it can also happen that your tests need to do a lot of setup to make sure everything is in the proper state for one specific test.
That makes the test harder to understand.
Also, the more application code a test depends on, the more likely it is to break.
This could be one reason to break tests down and test the individual components separately.
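One common mitigation for noisy setup, sketched here with made-up names, is to pull the repetitive parts into a small test-data builder so each test states only what is relevant to it:

```python
# Illustrative: a tiny test-data builder hides the uninteresting setup.
def make_sale(amount=100.0, country="NL", refunded=False):
    # Defaults cover the 'uninteresting' state; tests override only what matters.
    return {"amount": amount, "country": country, "refunded": refunded}

def refundable_total(sales):
    return sum(s["amount"] for s in sales if not s["refunded"])

def test_refunded_sales_are_excluded():
    # Only the fields relevant to this behaviour are spelled out.
    sales = [make_sale(amount=100.0), make_sale(amount=50.0, refunded=True)]
    assert refundable_total(sales) == 100.0

test_refunded_sales_are_excluded()
```

The builder keeps the test readable without hiding the detail the test is actually about.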
Working in a team
As we’ve seen in the previous section, the development of a feature using TDD depends very much on the developer. How does that work in a team where people have varying degrees of mastery and awareness?
My suggestion: as a senior or lead, your priority is to ensure that people gain mastery over TDD. TDD is the tool for managing uncertainty as we work to gain mastery over the domain and the technology.
“Test Driven Development: By Example” by Kent Beck