Test-driven development (TDD)
I find test-driven development:
- makes me think about code from the caller's perspective, rather than the implementation, leading to code that's more pleasant to use
- keeps me focused on the next small step, rather than implementing things that aren't needed
- helps me make continual progress, rather than being stuck trying to solve the whole problem at once
- encourages separation of concerns, since otherwise the tests will be hard to write and read
If you get all of those benefits without TDD, great! If people aren't used to working in an incremental style, or tend to focus on the implementation rather than the interface, then TDD can be a useful way of trying a different approach. Like any unfamiliar approach, it's likely to be painful at first, so working with someone more experienced will often help.
Another benefit of TDD is that you end up with a suite of automated tests, but I tend to view this as a happy accident rather than the essential value of TDD.
Writing tests that don't fail
Although TDD is the main way I write tests, it's not the only way. Especially when checking edge cases, there might be an additional case I want to test that already passes, even though I've been working in small steps. I don't think there's anything wrong with this, so long as the test adds value. One thing I'll do is change the code under test so that the test fails, just to make sure it's testing the right thing and that the error message is understandable.
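For example, reusing the is_less_than_ten example: a test for negative numbers will probably pass first time, so to check that it's wired up correctly, I might temporarily break the implementation (both snippets below are illustrative, not real code):

@istest
def is_less_than_ten_is_true_for_negative_numbers():
    assert is_less_than_ten(-1)

def is_less_than_ten(value):
    # Temporarily broken so that the test above should fail:
    # negative numbers are now (wrongly) not considered less than ten.
    return 0 <= value < 10

Once I've seen the test fail for the right reason, with a comprehensible message, I revert the breakage.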
What's the point of making a test pass by returning a constant value?
When using TDD, I often make the first test pass by just returning the expected value. For instance, reusing the example from above, given the test:
@istest
def is_less_than_ten_is_true_for_numbers_less_than_ten():
    assert is_less_than_ten(9)
We can make it pass like so:
def is_less_than_ten(value):
    return True
While it's certainly true that the current implementation of is_less_than_ten is not especially useful, going through the red-green-refactor cycle has value. Specifically, we've made sure that the plumbing for our tests and imports is working properly.
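To force a less trivial implementation, the next test in the cycle needs to be one that the constant implementation fails. A hypothetical next step:

@istest
def is_less_than_ten_is_false_for_ten():
    assert not is_less_than_ten(10)

This goes red against the hard-coded return True, which is exactly what drives the implementation forward.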
But doesn't "do the simplest thing that works" suggest that you hard-code outputs for each input?
For our is_less_than_ten function, if we have three test cases (9, 10, 11), we could write a passing implementation like so:
def is_less_than_ten(value):
    results = {
        9: True,
        10: False,
        11: False,
    }
    return results[value]
We could add more test cases, but the implementation could just be extended with more hard-coded values. However, I'd argue that there comes a point where implementing the logic we actually expect is simpler than adding yet another case. The problem is in quantifying that notion of simplicity. As you write more code, you get a feel for what's appropriate, but that's not tremendously helpful as advice for others. Examples help, but formalising the informal rules we use to make the next test pass is an interesting idea. Uncle Bob did just that when he wrote the Transformation Priority Premise.
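For is_less_than_ten, that point arrives almost immediately, since the logic we actually expect is shorter than the lookup table:

def is_less_than_ten(value):
    return value < 10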
When I discussed this with Graham Helliwell, he suggested a "common sense guiding rule":
When TDD says "do the simplest thing to make a test pass", splitting the execution path through your code really does not count as simple. This discourages if statements, and discourages loops an order of magnitude more so.
I don't think the rule quite captures everything in the transformations that Uncle Bob laid out, but it captures the spirit well while still being concrete.
However, the "always use hard-coded values where possible" train of thought does go somewhere interesting. Suppose that the implementation keeps adding hard-coded values to make the tests pass. How would we force a more sensible implementation? The answer is in something like QuickCheck. That is, if we generate random test cases, we force a more sensible implementation, with the added bonus of checking a much wider range of values.