Test-driven development (TDD)

For me, TDD is a tool to help with focus and design. If somebody else manages to produce clean code with tests, then I don't really care if they used TDD or not. It's tempting to talk about TDD when discussing automated testing, but the fact that TDD produces a set of automated tests is best thought of as a happy accident. (This thought was prompted by a little exchange on Twitter.)

Using TDD on real projects

Most introductions to TDD use a small example to explain the basic ideas. However, there are some issues that tend to crop up when trying to use TDD on production codebases. Since this is a topic that is less commonly covered, I wrote Moving from Practice to Production with Test-Driven Development for Simple-Talk.

Writing tests that don't fail

One question I'm often asked is: what if I write a test that doesn't immediately fail, say, to make sure an edge case works properly? This doesn't fit neatly into the TDD cycle of red-green-refactor, but if the test still adds value, I'd say add the test. The one thing I would do is to make sure that the test does actually work. In other words, I'd make a small modification either to the test or the production code to see the test fail.

With apologies for triteness, imagine we need to write a function is_less_than_ten. We start with a test:

@istest
def is_less_than_ten_is_true_for_numbers_less_than_ten():
    assert is_less_than_ten(9)

Which leads to the not-very-interesting implementation:

def is_less_than_ten(value):
    return True

We then add another test case:

@istest
def is_less_than_ten_is_false_for_numbers_greater_than_ten():
    assert not is_less_than_ten(11)

We then change our implementation to get the test passing:

def is_less_than_ten(value):
    return value < 10

At this point, our implementation looks correct. However, we haven't tested an important edge case: what happens with the input 10? We might write a test case:

@istest
def is_less_than_ten_is_false_for_ten():
    assert not is_less_than_ten(10)

If we run the test, we discover that the function behaves correctly. In other words, it's not a failing test. However, it's quite possible that we made a mistake in the implementation, for instance using "less than or equal" rather than "less than". Therefore, it's still useful to write the test. Just to make sure it tests what I think it does, I'd change the implementation to behave incorrectly by replacing the "less than" with "less than or equal". Once I've seen that the test fails in this case, I'd revert the code and commit the test.
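
To make that concrete, the temporary break might look something like this sketch, with the change reverted as soon as the test has been seen to fail:

def is_less_than_ten(value):
    # Deliberately wrong: "less than or equal" instead of "less than",
    # purely to check that the new edge-case test really can fail.
    return value <= 10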

What's the point of making a test pass by returning a constant value?

When using TDD, I often make the first test pass by just returning the expected value. For instance, reusing the example from above, given the test:

@istest
def is_less_than_ten_is_true_for_numbers_less_than_ten():
    assert is_less_than_ten(9)

We can make it pass like so:

def is_less_than_ten(value):
    return True

While it's certainly true that the current implementation of is_less_than_ten is not especially useful, going through the red-green-refactor cycle has value. Specifically, we've made sure that the plumbing for our tests, including importing the code under test, is working properly.

But doesn't "do the simplest thing that works" suggest that you hard-code outputs for each input?

For our is_less_than_ten function, if we have three test cases (9, 10 and 11), we could write a passing implementation like so:

def is_less_than_ten(value):
    results = {
        9: True,
        10: False,
        11: False
    }
    return results[value]

We could add more test cases, but the implementation could just be extended with more hard-coded values. However, I'd argue that there comes a point where implementing the logic we expect is simpler than adding yet another case. The problem lies in quantifying that notion of simplicity. As you write more code, you get a feel for what's appropriate, but that's not tremendously helpful as advice for others. Examples help, but a formalisation of the informal rules we use to make the next test pass is an interesting idea. Uncle Bob did just that when he wrote the Transformation Priority Premise.

When discussing this with Graham Helliwell, he suggested a "common sense guiding rule":

When TDD says "do the simplest thing to make a test pass", splitting the execution path through your code really does not count as simple. This discourages if statements, and discourages loops an order of magnitude more so.

I don't think the rule quite captures everything in the transformations that Uncle Bob laid out, but it captures the spirit well while still being concrete.
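
As an illustration of the rule (my own sketch, not from that discussion): making the second test pass by branching on the input would count as splitting the execution path, whereas the single-expression implementations above would not.

def is_less_than_ten(value):
    # Splitting the execution path: each known test input gets its own branch.
    if value == 9:
        return True
    return False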

However, the "always use hard-coded values where possible" train of thought does lead somewhere interesting. Suppose that the implementation keeps adding hard-coded values to make the tests pass: how would we force a more sensible implementation? The answer lies in something like QuickCheck. If we generate random test cases, hard-coding every input becomes impractical, so we force a more sensible implementation, with the added bonus of checking a much wider range of values.
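
For instance, using a property-based testing library for Python such as Hypothesis (the idea isn't tied to any particular library, so treat this as one possible sketch), the tests draw their inputs from generated ranges rather than fixed values:

from hypothesis import given
from hypothesis import strategies as st

@given(st.integers(max_value=9))
def is_less_than_ten_is_true_for_generated_numbers_below_ten(value):
    # Hypothesis generates many integers below ten, not just 9.
    assert is_less_than_ten(value)

@given(st.integers(min_value=10))
def is_less_than_ten_is_false_for_generated_numbers_of_ten_or_more(value):
    # Hypothesis generates many integers of ten or more, not just 10 and 11.
    assert not is_less_than_ten(value)

A hard-coded dictionary can't keep up with inputs it has never seen, so the real comparison becomes the simplest thing that works.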

Other resources