Sunday, January 6, 2019

Test-Driven Development - you are doing it wrong!

Test-Driven Development is one of those techniques that is somehow not as widely used as it should be. I know a lot of developers who agree on the benefits it brings. Yet the same developers, when asked about practicing TDD, answer: “it does not work for me”.
When I try to understand the “why” behind this statement, they struggle to find an explanation. They say they only add simple features. They say their application is not that complex. They have no time. Many of them tried, but it was “too time-consuming”; it took too much effort.

I pair with some of them, and on a couple of occasions I had long conversations with them to understand how they were doing it, what went wrong, and what conclusions they drew. In many cases, the problem lies in the basics - they simply tried to apply TDD to every newly created object.

Based on my experience, I can say this is the most common mistake developers make. What is so bad about it? This mistake leads them to abandon TDD completely.

TDD in a nutshell

Test-Driven Development is a design technique. When you use it, you won’t solve non-existent problems, you won’t add useless code, and you will write code that is easy to use and understand.

The essence of TDD boils down to three steps:
  • Red - write a test scenario that will fail.
  • Green - add just enough code (but no more) to meet the new requirement.
  • Refactor - improve the quality of the written code, on both the production and test sides.

If you want to read more about TDD, you can start here, here and here.

What does the mistake look like?

We start with the first cycle.
  • RED - we add a new test scenario that fails (the code still has to compile).
  • GREEN - we add just enough code to meet the requirement from the newly added test.
  • REFACTOR - we improve our code and tests.
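The original code snippets are not shown here, but the first cycle might look like this for a hypothetical SUT - say, an order-price calculator (all names below are invented for illustration):

```python
# RED: a failing test scenario for a hypothetical OrderPrice SUT.
def test_total_of_empty_order_is_zero():
    assert OrderPrice([]).total() == 0

# GREEN: just enough code to make the new test pass - and no more.
class OrderPrice:
    def __init__(self, items):
        self.items = items

    def total(self):
        return sum(self.items)

# REFACTOR: improve names and remove duplication in both the production
# code and the tests (here there is nothing to clean up yet).
```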

So far, so good. Let’s add another scenario:
  • RED - we add the next failing scenario.
  • GREEN - we meet the new requirement, and this time doing so introduces two new classes.
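The second cycle is often the moment when new collaborators appear. As a sketch (again with invented names, not the original example), suppose the new requirement is a percentage discount - meeting it forces a new class into existence:

```python
# RED: a new failing scenario - orders can have a discount.
def test_total_applies_percentage_discount():
    price = OrderPrice([100], discount=PercentageDiscount(10))
    assert price.total() == 90

# GREEN: meeting the requirement introduces a brand-new dependency.
class PercentageDiscount:
    def __init__(self, percent):
        self.percent = percent

    def apply(self, amount):
        return amount * (100 - self.percent) / 100

class OrderPrice:
    def __init__(self, items, discount=None):
        self.items = items
        self.discount = discount

    def total(self):
        amount = sum(self.items)
        return self.discount.apply(amount) if self.discount else amount
```

It is exactly at this point - with `PercentageDiscount` freshly created - that the temptation described next kicks in.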

And that’s the moment when many developers stop further development of the System Under Test (SUT). They move on to adding tests for the two newly created classes. And that’s not all - they try to do this in the TDD manner.
These two mistakes lead many of them to abandon TDD. They start to think the technique costs them too much time and effort, which I would even agree with - if TDD actually worked this way.

What is wrong with it, and why?

To make it perfectly clear - this is not how Test-Driven Development works.
Let me explain why:
  • SUT development is the source of the scenarios you have to support. It is the reason why you write the code in the first place. If you jump into tests of other classes before the SUT functionality is fully written, you may be tempted to add to those classes functionality you only think will be useful later.
  • Slower SUT development - you have to add tests to each dependency, putting the current development on hold and switching context.
  • Before you are done with SUT development, there are plenty of refactoring steps to take. The current design may change many times, and there’s a chance you will get rid of already created classes. If you write tests in the TDD manner for each SUT dependency, you will start to treat TDD as a time-waster during the refactoring step.
    Until you are done, the design is not established and may change. If it changes, all those tests and all your effort will be thrown away.

How to make it better?

  • Stay focused on SUT development - that’s why you write this code in the first place.
  • Don’t waste your time polishing or developing dependencies that may change.
  • Don’t waste your time testing dependencies that may disappear after the next SUT test scenario is added.
  • Until SUT development is done, remember - the next step after refactoring is adding a new failing scenario.

What should you do instead of writing tests for newly created dependencies? Just get back to TDD: after refactoring, move straight to the RED step.

Using TDD does not mean you have to TDD everything

TDD is a design technique. It helps you establish a good design and avoid producing useless code.
TDD is NOT a testing technique. This means that once you start using it, not each and every test case you write in the future has to be added this way.

You may have to write more tests, without TDD, even after SUT development is done:
  • You will have to test your dependencies. Yes, there may be a good reason to add a couple of new tests in the TDD manner to polish their shape, but this is not always the case. However, there will always be already existing functionality, produced during SUT development, that you have to cover.
  • Once you have established the SUT design and can no longer add any new failing tests, it does not always mean you are done with testing. It just means you are done with TDD.
    Sometimes there's a good reason to add more complex test cases simply to double-check that the functionality really does what you want.
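For example (still with invented names), once the SUT is finished you might pin down behaviour a dependency acquired along the way with plain, after-the-fact tests - no RED/GREEN cycle involved:

```python
class PercentageDiscount:
    """A dependency that grew while the SUT was being test-driven."""
    def __init__(self, percent):
        self.percent = percent

    def apply(self, amount):
        return amount * (100 - self.percent) / 100

# Plain coverage tests written after the fact - this is testing, not TDD:
# they document behaviour the class already has, including edge cases
# the SUT scenarios never exercised directly.
def test_zero_percent_changes_nothing():
    assert PercentageDiscount(0).apply(80) == 80

def test_full_discount_gives_zero():
    assert PercentageDiscount(100).apply(80) == 0
```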

I hope this article helped you understand how to follow the TDD steps in the right way, where TDD doesn't make sense, and why the fact that TDD is over doesn't mean testing is done.

If you have any further questions or comments, don't hesitate to share them below the article.


  1. Hello, thank you for this clear post, I enjoyed reading it! I have one question that always comes back when working on a user story - let's say a web service that provides some functionality. What entry point would you choose for your SUT? Would it be the controller in this case, or maybe the service?

    1. By service you mean the unit responsible for the application logic? If yes, then the entry point would be the service. Why is that? Because that's where the functionality I'm interested in lives.
