Archive: Testing
Sunday 15 January 2023 14:44
Predicates are awkward to test.
Or, to be more precise,
predicates are awkward to test in such a way that the test will reliably fail if the behaviour under test stops working.
To see why, let's look at an example: a permission check.
Suppose I'm writing a system where only admins should have permission to publish articles.
I might write the following predicate:
function hasPermissionArticlePublish(user: User): boolean {
    return user.isAdmin;
}

test("admins have permission to publish articles", () => {
    const user = {isAdmin: true};
    const result = hasPermissionArticlePublish(user);
    assert.isTrue(result);
});

test("users that aren't admins don't have permission to publish articles", () => {
    const user = {isAdmin: false};
    const result = hasPermissionArticlePublish(user);
    assert.isFalse(result);
});
I then realise that only active admins should have permission to publish articles,
so I update the function (eliding the additional tests for brevity):
function hasPermissionArticlePublish(user: User): boolean {
    return user.isActive && user.isAdmin;
}
Since I'm using TypeScript, I'll need to update any existing tests to include the new field to keep the compiler happy:
test("users that aren't admins don't have permission to publish articles", () => {
const user = {isActive: false, isAdmin: false};
const result = hasPermissionArticlePublish(user);
assert.isFalse(result);
});
Whoops! Although the test still passes, it's actually broken:
since isActive is false, it'll pass regardless of the value of isAdmin.
How can we prevent this sort of mistake?
Solution #1: Consistent construction of test data
The idea here is that we have a set of inputs that we know will make our unit under test behave one way,
and then we make some minimal change to that set of inputs to make the unit under test behave another way.
In our example,
we'd have a test for a user that does have permission:
test("active admins have permission to publish articles", () => {
const user = {isActive: true, isAdmin: true};
const result = hasPermissionArticlePublish(user);
assert.isTrue(result);
});
We can extract the user from this test into a constant,
and then make a minimal change to the user to cause the permission check to fail:
const userWithPermission = {isActive: true, isAdmin: true};

test("active admins have permission to publish articles", () => {
    const result = hasPermissionArticlePublish(userWithPermission);
    assert.isTrue(result);
});

test("users that aren't admins don't have permission to publish articles", () => {
    const user = {...userWithPermission, isAdmin: false};
    const result = hasPermissionArticlePublish(user);
    assert.isFalse(result);
});
This is a fairly unintrusive solution: the changes we needed to make were small.
The downside is that we've effectively coupled two of our tests together:
our test that non-admins can't publish articles relies on another test to check that userWithPermission
can indeed publish an article.
As the predicate and the data become more complicated, maintaining the tests and the relationships between them becomes more awkward.
Solution #2: Testing the counter-case
To break the coupling between test cases that our first solution introduced,
we can instead test both of the cases we care about in a single test:
test("users that aren't admins don't have permission to publish articles", () => {
const admin = {isActive: true, isAdmin: true};
const nonAdmin = {...admin, isAdmin: false};
const adminResult = hasPermissionArticlePublish(admin);
const nonAdminResult = hasPermissionArticlePublish(nonAdmin);
assert.isTrue(adminResult);
assert.isFalse(nonAdminResult);
});
If we made the same mistake as before by setting isActive to false, then our first assertion would fail.
Much like our first solution, we can now be more confident that we are indeed testing how the function under test behaves as we vary the isAdmin property,
except that our confidence in this test no longer relies on another test.
Both this approach and the previous approach work less well when the predicate itself is more complex.
When the predicate is checking that a set of conditions are all true,
it's easy enough to take a user that satisfies all conditions and change one property so that a condition is no longer satisfied.
When there are more complicated interactions between the inputs, this approach becomes trickier to use.
Solution #3: Returning more information from the function under test
A fundamental issue here is that there are many reasons why the permission check might fail,
but we only get out a true or a false.
In other words, the return value of the function doesn't give us enough information.
So, another solution would be to extract a function that returns the information we want,
which can then be used both by the original permission check function and our tests.
function hasPermissionArticlePublish(user: User): boolean {
    return checkPermissionArticlePublish(user) === "SUCCESS";
}

type PermissionCheckResult =
    | "SUCCESS"
    | "FAILURE_USER_NOT_ACTIVE"
    | "FAILURE_USER_NOT_ADMIN";

function checkPermissionArticlePublish(user: User): PermissionCheckResult {
    if (!user.isActive) {
        return "FAILURE_USER_NOT_ACTIVE";
    } else if (!user.isAdmin) {
        return "FAILURE_USER_NOT_ADMIN";
    } else {
        return "SUCCESS";
    }
}
test("users that aren't admins don't have permission to publish articles", () => {
const user = {isActive: true, isAdmin: false};
const result = checkPermissionArticlePublish(user);
assert.isEqual(result, "FAILURE_NOT_ADMIN");
});
This requires changing the code under test,
but it allows us to get the answer we want (why did the permission check fail?) directly,
rather than having to infer it in the tests.
You'd probably also want to have a couple of test cases against hasPermissionArticlePublish directly,
just to check it's using the result of checkPermissionArticlePublish correctly.
In this simple case, the extra code might not seem worth it,
but being able to extract this sort of information can be increasingly useful as the condition becomes more complex.
It's also a case where we might be willing to make our production code a little more complex in exchange for simplifying and having more confidence in our tests.
Conclusion
I've used all of these techniques successfully in the past,
and often switched between them as the problem being solved changes.
There are certainly other solutions -- for instance, property-based testing --
but hopefully the ones I've described give some food for thought if you find yourself faced with a similar challenge.
It's also worth noting that this problem isn't really specific to predicates:
it happens any time that the function under test returns less information than you would like to assert on in your test.
Interestingly, despite having faced this very problem many times,
I haven't really seen anybody write about these specific techniques or the problem in general.
It's probably just the case that I've been looking in the wrong places and my search skills are poor:
pointers to other writing on the topic would be much appreciated!
Topics: Testing
Saturday 7 January 2023 18:30
James Shore has written
a new draft of his "Testing Without Mocks: A Pattern Language" article
which I thoroughly recommend reading, or at least the accompanying Mastodon thread.
To help me understand how this approach differs from a typical "testing with mocks" approach,
I found it useful to think about three key questions:
- Which parts of the system -- whether that be code, data, infrastructure or something else -- do you replace with test doubles?
- How do you make those test doubles?
- Who's responsible for making the test doubles?
By investigating the different answers to these questions,
we can then see how we might mix and match those answers to explore other possible approaches.
Let's dig in!
Question #1: which parts of the system do you replace with test doubles?
The typical approach with mocks is to replace an object's immediate dependencies with mocks.
At the other end of the spectrum, if you're running end-to-end tests,
you might use production code with non-production config,
say to point the system at a temporary (but real) database.
The Nullables approach says that we should run as much of our own code as possible, but avoid using real infrastructure.
So, we replace those dependencies on infrastructure at the lowest level, for instance by replacing stdout.write().
If you're mocking immediate dependencies, then you often end up mocking an awful lot of your interfaces at one point or another.
I suspect an advantage of replacing dependencies at the lowest level is a much smaller,
and therefore more manageable, set of interfaces that you have to replace.
Question #2: how do we make the test doubles?
Rather than using a mocking framework, or making the real thing with different config,
you replace the lowest level of infrastructure with embedded stubs.
One way I think of embedded stubs is as in-memory implementations that do just enough to let the code run and the tests pass:
they don't need to be full implementations.
For instance, instead of calling random(), just return a constant value.
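As a rough sketch of that idea (in Python, with invented names rather than anything taken from James Shore's article), an embedded stub for a source of randomness might look something like this:
import random

class RealRandom:
    # Production implementation: backed by real infrastructure.
    def next_float(self) -> float:
        return random.random()

class StubRandom:
    # Embedded stub: just enough in-memory behaviour to let the code run.
    def next_float(self) -> float:
        # Instead of calling random(), just return a constant value.
        return 0.5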
We then make the other objects in our system nullable:
that is, we provide a way to instantiate them with dependencies that are themselves either nullable or embedded stubs.
(This glosses over a lot of important detail in the original article,
such as configurable responses and output tracking, which I'm leaving out for brevity.)
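As a rough Python sketch of what a nullable object might look like (with hypothetical names, and leaving out those same details), the factory for the null instance sits right alongside the production factory:
import sys

class ReportPrinter:
    # Production code whose only infrastructure dependency is an output stream.
    def __init__(self, stream):
        self._stream = stream

    @classmethod
    def create(cls):
        # Production instance: real infrastructure at the lowest level.
        return cls(sys.stdout)

    @classmethod
    def create_null(cls):
        # Nullable instance: the same production logic, with an embedded stub underneath.
        return cls(_StubStream())

    def print_summary(self, lines):
        for line in lines:
            self._stream.write(line + "\n")

class _StubStream:
    # Embedded stub standing in for stdout: discards whatever is written.
    def write(self, text):
        pass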
Question #3: who's responsible for making the test doubles?
Or, to put it another way, where does the code that sets up dependencies for tests live?
The embedded stub pattern means that all of the code that implements an interface,
whether for production or for testing, is in one place,
rather than (for instance) each test case mocking an interface and having to correctly simulate how it works.
Putting this code in the same file as the production code
means the knowledge of how the interface is supposed to work is in one place,
reducing the risk of inconsistency and improving quality through repeated use.
Similarly, higher level interfaces have functions to create nullable instances in the same file as the functions that create the production instances.
So, again, the knowledge of how to create a test instance of X is in one place, which is the same place as X itself, rather than scattered across multiple tests.
Mixing and matching
Now, I reckon you could pick and choose your answers to these questions.
For instance, suppose your default is replacing immediate dependencies in the test case using a mocking framework.
You could:
- keep using a mocking framework (different answer to question #2), but
- choose to mock the lowest level of infrastructure (same answer to question #1), and
- put all of the code that sets up the mocks (directly or indirectly) in one place (same answer to question #3).
Or you could:
- throw away the mocking framework and hand-write stubs (same answer to question #2), but
- still replace immediate dependencies (different answer to question #1), and
- write a separate implementation in each test case/suite (different answer to question #3).
These different combinations come with all sorts of different trade-offs, and some will be more useful than others.
Personally, I've gotten a lot of mileage out of:
- making test doubles without a mocking framework, and
- putting the code to set up testable instances of X in the same place as X itself
(so the knowledge of how X should work is in one place, and the code to simulate X isn't duplicated), but
- varying exactly at what level dependencies are replaced:
sometimes immediate dependencies, sometimes the lowest level of infrastructure, sometimes somewhere in the middle.
I often find that "somewhere in the middle" is where the simplest and most stable interface to replace
(and therefore the one that leads to less brittle tests with clearer intent) can be found.
It's entirely possible that this is an artefact of poor design choices on my part though!
Conclusion
These three questions gave me a way to interrogate the approach that James Shore describes,
as well as more traditional approaches such as end-to-end testing and testing with mocks.
To be clear, I think these three questions are a way to interrogate and explore approaches,
not to characterise them entirely.
Each combination of answers will present its own particular challenges that need solving:
if you haven't already done so,
I strongly encourage you to read James Shore's original article to see how he does so.
We can, to some extent, mix and match the answers of these approaches,
allowing us to consider and explore alternatives that match our own preferences and context.
Even if an approach isn't the right choice at a given moment,
perhaps some aspects of the approach or the underlying thinking can lead us to interesting new thoughts.
Thanks to James Shore for responding to my ramblings when I originally thought about this on Mastodon.
Topics: Mocking, Testing
Monday 18 February 2013 10:38
Code reuse is often discussed, but what about test reuse? I don't just mean
reusing common code between tests -- I mean running exactly the same tests
against different code. Imagine you're writing a number of different
implementations of the same interface. If you write a suite of tests against the
interface, any one of your implementations should be able to make the tests
pass. Taking the idea even further, I've found that you can reuse
the same tests whenever you're exposing the same functionality through different
methods, whether as a library, an HTTP API, or a command line interface.
As an example, suppose you want to start up a virtual machine from some
Python code. We could use QEMU, a command line application on Linux that lets
you start up virtual machines. Invoking QEMU directly is a bit ugly, so we wrap it
up in a class. As an example of usage, here's what a single test case might look
like:
def can_run_commands_on_machine():
    provider = QemuProvider()
    with provider.start("ubuntu-precise-amd64") as machine:
        shell = machine.shell()
        result = shell.run(["echo", "Hello there"])
        assert_equal("Hello there\n", result.output)
We create an instance of QemuProvider, use the start method to start a virtual
machine, and then run a command on the virtual machine and check the output.
However, other than the original construction of the
virtual machine provider, there's nothing in the test that relies on QEMU
specifically. So, we could rewrite the test to accept provider as an argument to
make it implementation-agnostic:
def can_run_commands_on_machine(provider):
    with provider.start("ubuntu-precise-amd64") as machine:
        shell = machine.shell()
        result = shell.run(["echo", "Hello there"])
        assert_equal("Hello there\n", result.output)
If we decided to implement a virtual machine provider using a different
technology, for instance by writing the class VirtualBoxProvider, then we can
reuse exactly the same test case. Not only does this save us
from duplicating the test code, it means that we have a degree of confidence
that each implementation can be used in the same way.
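One possible way to wire this up (sketched here with pytest parametrisation rather than the nose-based approach mentioned at the end of this post; QemuProvider and VirtualBoxProvider are assumed to exist as described above):
import pytest

@pytest.fixture(params=[QemuProvider, VirtualBoxProvider])
def provider(request):
    # Each provider implementation is instantiated in turn, so every test
    # using this fixture runs once per implementation.
    return request.param()

def test_can_run_commands_on_machine(provider):
    with provider.start("ubuntu-precise-amd64") as machine:
        shell = machine.shell()
        result = shell.run(["echo", "Hello there"])
        assert result.output == "Hello there\n"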
If other people are implementing your interface, you could provide the same
suite of tests so they can run it against their own implementation. This can
give them some confidence that they've implemented your interface
correctly.
What about when you're implementing somebody else's
interface? Writing your own set of implementation-agnostic tests and
running it against existing implementations is a great way to check that you've
understood the interface. You can then use the same tests against your code
to make sure your own implementation is correct.
We can take the idea of test reuse a step further by testing user interfaces
with the same suites of tests that we use to implement the underlying library.
Using our virtual machine example, suppose we write a command line interface (CLI)
to let people start virtual machines manually. We could test the CLI by writing
a separate suite of tests. Alternatively, we could write an adaptor that invokes
our own application to implement the provider interface:
class CliProvider(object):
    def start(self, image_name):
        output = subprocess.check_output([
            _APPLICATION_NAME, "start", image_name
        ])
        return CliMachine(_parse_output(output))
Now, we can make sure that our command-line interface behaves correctly
using the same suite of tests that we used to test the underlying code. If
our interface is just a thin layer on top of the underlying code, then
writing such an adaptor is often reasonably straightforward.
I often find writing clear and clean UI tests is hard. Keeping a clean separation between
the intent of the test and the implementation is often tricky, and it takes
discipline to stop the implementation details from leaking out. Reusing tests
in this way forces you to hide those details behind the common interface.
If you're using nose in Python to
write your tests, then I've put the code I've been using to do this in a separate
library called
nose-set-tests.
Topics: Testing, Software design, Python
Tuesday 27 September 2011 20:14
I heartily endorse this fine article on writing maintainable code. What do you mean I'm biased because I wrote it?
Topics: Software design, Software development, Testing
Thursday 13 May 2010 14:37
Picking names sensibly in code is crucial to readability. For instance, let's say
you're storing prices in your database, and they're represented by a Price class.
What does a PriceHelper do? Naming something a helper isn't helpful in the least.
On the other hand, the class PriceFetcher probably fetches prices from somewhere.
Others have written about what names not to use for classes, but I want to talk
about a different set of names. I want to talk about how we name our tests. For
instance, let's say I'm implementing a join method in Java that will glue together
a list of strings. Being a good programmer, I write a test to describe its behaviour:
class JoinTest {
    @Test
    public void testJoin() {
        assertEquals("", join(", ", emptyList()));
        assertEquals("Apples", join(", ", asList("Apples")));
        assertEquals(
            "Apples, Bananas, Coconuts",
            join(", ", asList("Apples", "Bananas", "Coconuts"))
        );
    }
}
So, what's wrong with calling the method testJoin? The answer is that the method tells us nothing about what we expect to happen, just that we're testing the join method. By picking a name that describes what we expect, we can help the next reader of the code to understand what's going on a little better:
class JoinTest {
    @Test public void
    joiningEmptyListReturnsEmptyString() {
        assertEquals("", join(", ", emptyList()));
    }

    @Test public void
    joiningListOfOneStringReturnsThatString() {
        assertEquals("Apples", join(", ", asList("Apples")));
    }

    @Test public void
    joiningListOfStringsConcatenatesThoseStringsSeparatedBySeparator() {
        assertEquals(
            "Apples, Bananas, Coconuts",
            join(", ", asList("Apples", "Bananas", "Coconuts"))
        );
    }
}
Splitting up tests in this fashion also means that it's easier to pin down failures, since you can work out exactly which test cases are failing.
This works well with Test Driven Development (TDD). Each time you want to add a new behaviour and extend the specification a little further by adding a failing test, the test method describes the new behaviour in both the test code and its name. In fact, having rearranged the code so that the test method names are on their own line without modifiers, we can read a specification of our class simply by folding up the bodies of our test methods:
class JoinTest {
    joiningEmptyListReturnsEmptyString() {
    joiningListOfOneStringReturnsThatString() {
    joiningListOfStringsConcatenatesThoseStringsSeparatedBySeparator() {
}
This reflects the view that test driven development is just as much about specification as testing. In fact, you could go as far as saying that regression tests are a happy side-effect of TDD. Either way, the intent of each test is now that much clearer.
Topics: Testing
Monday 30 November 2009 19:43
Recently, I've been discussing with a variety of people the merits of various styles of programming. One person's complaint about object orientated programming is that it leads to huge hierarchies of code, where you constantly have to move up and down the class hierarchy to find out what's going on. I've encountered such code on plenty of occasions, but I disagree that it's a problem with object orientation. Instead, I believe it's a problem with inheritance.
Of course, this all depends on what we mean by object orientation. So, let's take a simple example in Java. In this case, we're going to make a class that generates names.
public class NameGenerator {
    public String generateName() {
        return generateFirstName() + " " + generateLastName();
    }

    protected String generateFirstName() {
        // Generate a random first name
        ...
    }

    protected String generateLastName() {
        // Generate a random last name
        ...
    }
}
So, NameGenerator will generate names using the default implementations of generateFirstName and generateLastName. If we want to change the behaviour of the class, say so that we only generate male names, we can override the generateFirstName method. This is known as the template pattern, the idea being that by putting the overall algorithm in one method that delegates to other methods, we avoid code duplication while still allowing different behaviours in different subclasses.
However, in my experience, the template pattern is a terrible way to write code. In some piece of code, if we see that the NameGenerator is being used, we might take a quick look at the class to see how it behaves. Unfortunately, the fact that subclasses may be overriding some methods to change behaviour isn't immediately obvious. This can make debugging difficult, since we start seeing behaviour we don't expect according to the class definition we have right in front of us. This becomes especially confusing if one of the methods affects control flow, for instance if we use one of the protected methods in a conditional, such as an if statement.
Testing also becomes more difficult. When we subclass NameGenerator, what methods should we be testing? We could always test the public method, but then we'll be testing the same code over and over again. We could test just the protected methods, but then we aren't testing the public method fully since its behaviour changes depending on the protected methods. Only testing the public method if we think its behaviour will have changed is error-prone and brittle to changes made in behaviour in NameGenerator.
The solution is to use composition rather than inheritance. In this case, the first step is to pull out the two protected methods into two interfaces. The first interface will generate first names, and the second interface will generate last names. We then use dependency injection to get hold of implementations of these interfaces. This means that, rather than the class creating its dependencies itself, we have them passed in via the constructor. So, our NameGenerator now looks like this:
public class NameGenerator {
    private final FirstNameGenerator firstNameGenerator;
    private final LastNameGenerator lastNameGenerator;

    public NameGenerator(FirstNameGenerator firstNameGenerator,
                         LastNameGenerator lastNameGenerator) {
        this.firstNameGenerator = firstNameGenerator;
        this.lastNameGenerator = lastNameGenerator;
    }

    public String generateName() {
        return firstNameGenerator.generate() + " " + lastNameGenerator.generate();
    }
}
Now we've made the dependencies on the generation of first and last names clear and explicit. We've also made the class easier to test, since we can just pass mocks or stubs into the constructor. This also encourages having a number of smaller classes interacting, rather than a few enormous classes. The code is also more modular, allowing it to be reused more easily. For instance, if there's somewhere in the code where we just want to generate first names, we now have an interface FirstNameGenerator that we can use straight away.
For more on dependency injection, Misko Hevery has a good blog post on the subject, as well as a tech talk on dependency injection.
Notice how the new version of our code is completely devoid of inheritance. That's not to say that we have a completely flat class hierarchy -- we need implementations of FirstNameGenerator and LastNameGenerator, but it's important to keep inheritance and implementation distinct. Despite the lack of inheritance, I believe that this is still an example of object orientated programming in action. So, if not inheritance, what characterises object orientation?
I would first suggest encapsulation -- in this case, we don't care how the first and last name generators work, we just call their methods to get a result out. In this way, we can concentrate on one small part of the program at a time.
This doesn't tell the whole story though. If we didn't allow any subtyping, or rather polymorphism, then our example would not be able to use different implementations of FirstNameGenerator. By having polymorphism, we allow different implementations to be used so that we can use the same algorithm without rewriting it.
So, there we have my view of object orientation -- composition of small objects using encapsulation and polymorphism, while avoiding inheritance and, in particular, the template pattern.
Topics: Software design, Testing
Friday 23 October 2009 19:57
The debate between static and dynamic typing has gone on for years. Static typing is frequently promoted as effectively providing free tests. For instance, consider the following Python snippet:
def reverse_string(str):
    ...

reverse_string("Palindrome is not a palindrome") # Works fine
reverse_string(42) # 42 is not a string
reverse_string() # Missing an argument
In a static language, those last two calls would probably be flagged up as compilation errors, while in a dynamic language you'd have to wait until runtime to find the errors. So, has static typing caught errors we would have missed?
Not quite. We said we'd encounter these errors at runtime -- which includes tests. If we have good test coverage, then unit tests will pick up these sorts of trivial examples almost as quickly as the compiler. Many people claim that this means you need to invest more time in unit testing with dynamic languages. I've yet to find this to be true. Whether in a dynamic language or a static language, I end up writing very similar tests, and they rarely, if ever, have anything to do with types. If I've got the typing wrong in my implementation, then the unit tests just won't pass.
Far more interesting is what happens at the integration test level. Even with fantastic unit test coverage, you can't have tested every part of the system. When you wrote that unit, you made some assumptions about what types other functions and classes took as arguments and returned. You even made some assumptions about what the methods are called. If these assumptions were wrong (or, perhaps more likely, change) in a static language, then the compiler will tell you. Quite often in a static language, if you want to change an interface, you can just make the change and fix the compiler errors -- “following the red”.
In a dynamic language, it's not that straightforward. Your unit tests as well as your code will now be out-of-date with respect to its dependencies. The interfaces you've mocked in the unit tests won't necessarily fail because, in a dynamic language, they don't have to be closely tied to the real interfaces you'll be using.
Time for another example. Sticking with Python, let's say we have a web application that has a notion of tags. In our system, we save tags into the database, and can fetch them back out by name. So, we define a TagRepository class.
class TagRepository(object):
    def fetch_by_name(self, name):
        ...
        return tag
Then, in a unit test for something that uses the tag repository, we mock the class and the method fetch_by_name. (This example uses my Funk mocking framework. See what a subtle plug that was?)
...
tag_repository = context.mock()
expects(tag_repository).fetch_by_name('python').returns(python_tag)
...
So, our unit test passes, and our application works just fine. But what happens when we change the tag repository? Let's say we want to rename fetch_by_name to get_by_name. Our unit tests for TagRepository are updated accordingly, while the tests that mock the tag repository are now out-of-date -- but our unit tests will still pass.
One solution is to use our mocking framework differently -- for instance, you could have passed TagFetcher to Funk, so that it will only allow methods defined on TagFetcher to be mocked:
...
tag_repository = context.mock(TagFetcher)
# The following will raise an exception if TagFetcher.fetch_by_name is not a method
expects(tag_repository).fetch_by_name('python').returns(python_tag)
...
Of course, this isn't always possible, for instance if you're dynamically generating/defining classes or their methods.
The real answer is that we're missing integration tests -- if all our tests pass after renaming TagRepository's methods, but not changing its dependents, then nowhere are we testing the integration between TagRepository and its dependents. We've left a part of our system completely untested. Just like with unit tests, the difference in integration tests between dynamic and static languages is minimal. In both, you want to be testing functionality. If you've got the typing wrong, then this will fall out of the integration tests since the functionality won't work.
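For example, an integration test along these lines would have caught the rename, since it exercises the real TagRepository together with a dependent rather than a mock (the TagPage class and the test database setup are hypothetical, invented here for illustration):
def test_tag_page_lists_tag_fetched_by_name():
    # Assumes a test database containing a tag named "python" has been set up
    # elsewhere; the details of that setup are elided here.
    tag_repository = TagRepository()
    tag_page = TagPage(tag_repository)
    # If fetch_by_name were renamed to get_by_name without updating TagPage,
    # this test would fail -- unlike the unit test with the mocked repository.
    assert "python" in tag_page.render()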
So, does that mean the discussed advantages of static typing don't exist? Sort of. While compilation and unit tests are usually quick enough to be run very frequently, integration tests tend to take longer, since they often involve using the database or file IO. With static typing, you might be able to find errors more quickly.
However, if you've got the level of unit and integration testing you should have in any project, whether it's written in a static or dynamic language, then I don't think using a static language will mean you have fewer errors or fewer tests. With the same high level of testing, a project using a dynamic language is just as robust as one written in a static language. Since integration tests are often more difficult to write than unit tests, they are often missing. Yet relying on static typing is not the answer -- static typing says nothing about how you're using values, just their type, and as such makes weak assertions about the functionality of your code. Static typing is no substitute for good tests.
Topics: Mocking, Testing
Monday 19 October 2009 15:08
Funk 0.2 has just been released -- you can find it on the Cheese Shop, or you can always get the latest version from Gitorious. You can also take a peek at Funk's documentation.
The most important change is a change of syntax. Before, you might have written:
database = context.mock()
database.expects('save').with_args('python').returns(42)
database.allows('save').with_args('python').returns(42)
database.expects_call().with_args('python').returns(42)
database.allows_call().with_args('python').returns(42)
database.set_attr(connected=False)
Now, rather than calling the methods on the mock itself, you should use the functions in funk:
from funk import expects
from funk import allows
from funk import expects_call
from funk import allows_call
from funk import set_attr
...
database = context.mock()
expects(database).save.with_args('python').returns(42)
allows(database).save.with_args('python').returns(42)
expects_call(database).with_args('python').returns(42)
allows_call(database).with_args('python').returns(42)
set_attr(database, connected=False)
If you want, you can leave out the use of with_args, leading to a style very similar to JMock:
from funk import expects
from funk import allows
from funk import expects_call
from funk import allows_call
...
database = context.mock()
expects(database).save('python').returns(42)
allows(database).save('python').returns(42)
expects_call(database)('python').returns(42)
allows_call(database)('python').returns(42)
To help transition old code over, you can use funk.legacy:
from funk.legacy import with_context
@with_context
def test_view_saves_tags_to_database(context):
    database = context.mock()
    database.expects('save')
One final change in the interface is that has_attr has been renamed to set_attr. Hopefully, the interface should be more stable from now on.
There's also a new feature: you can now specify base classes for mocks. Let's say we have a class called TagRepository, with a single method fetch_all(). If we try to mock calls to fetch_all(), everything will work fine. If we try to mock calls to any other methods on TagRepository, an AssertionError will be raised:
@with_context
def test_tag_displayer_writes_all_tag_names_onto_separate_lines(context):
    tag_repository = context.mock(TagRepository)
    expects(tag_repository).fetch_all().returns([Tag('python'), Tag('debian')]) # Works fine
    expects(tag_repository).fetch_all_tags() # Raises an AssertionError
Two words of caution about using this feature. Firstly, this only works if
the method is explicitly defined on the base class. This is often not the case
if the method is dynamically generated, such as by overriding __getattribute__ on the type.
Secondly, this is no substitute for integration testing. While it's true that the
unit test above would not have failed, there should have been some integration
test in your system that would have failed due to the method name change. The
aim of allowing you to specify the base class is so that you can find that
failure a little quicker.
If you find any bugs or have any suggestions, please feel free to leave a comment.
Topics: Funk, Mocking, Python, Testing
Monday 28 September 2009 17:42
Roll up, roll up! That's right folks, I've written a Python mocking framework.
Another mocking framework?
Yup. As for why, there are a few reasons.
The simplest is to see just how difficult it was to write a usable mocking framework. It turns out not to be all that difficult – I managed to write the core of the framework over a weekend, although plenty of time was spent tweaking and adding behaviour afterwards.
A somewhat better reason is that none of the existing Python mocking frameworks that I could find really did what I wanted. The closest I found was Fudge. My criteria were something like:
- Not using the record/replay pattern.
- Allowing the expected calls and their arguments to be set up beforehand.
- Allowing differentiation between the methods that have to be called, and methods that can be called.
So what's wrong with Fudge?
Fudge meets all of these expectations. So what went wrong?
Firstly, I found Fudge too strict on ordering. Imagine I have a TagRepository that returns me tags from a database. I want to mock this object since I don't want to make a database trip in a unit test. So, in Fudge, I would set up the mock like so:
@with_fakes
def test_gets_python_and_debian_tags():
    tag_repository = fudge.Fake()
    tag_repository.expects('with_name').with_args('python').returns(python_tag)
    tag_repository.next_call().with_args('debian').returns(debian_tag)
    # Rest of the test
This would require me to get the Python tag before the Debian tag – yet I really didn't care which method I called first. I'm also not a fan of the syntax – for the first expectation, expects is used, yet for the second expectation, next_call is used.
The second problem I had was that, if you only set up one expectation on a method, you could call it many times. So, with the example above, if you had only set up the expectation for the Python tag, you could get the Python tag any number of times, so long as you asked for it at least once.
I dislike this since, by adding a second expectation, we have now changed the behaviour of the first. This does not lend itself to being able to modify or refactor the test quickly.
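To illustrate the behaviour being described (a sketch using only the Fudge API already shown above, and relying on the description of how Fudge behaved at the time):
@with_fakes
def test_gets_python_tag_twice():
    tag_repository = fudge.Fake()
    tag_repository.expects('with_name').with_args('python').returns(python_tag)
    # With only the one expectation set up, both of these calls are allowed...
    tag_repository.with_name('python')
    tag_repository.with_name('python')
    # ...but adding a next_call() expectation for 'debian' above would have
    # turned the second call into a failure.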
Finally, Fudge used a global context for mocks. The ramification of this is that, when using the decorator @with_fakes, each test inherits the mocks set up for the previous test. For instance:
@with_fakes
def test_tag_is_saved_if_name_is_valid():
    database = fudge.Fake()
    database.expects('save').with_args('python')
    tag_repository = TagRepository(database)
    tag_repository.save('python')

@with_fakes
def test_tag_is_not_saved_if_name_is_blank():
    tag_repository = TagRepository(None)
    tag_repository.save('')
The second test above would fail since it does not save the Python tag to the database created in the first test. This seemed somewhat unintuitive to me, so I ended up rolling my own decorator. At the start of each test, it would remove all of the mocks currently set up so that I could start from a blank slate.
The other effect is that it makes it difficult to nest mock contexts – admittedly, something I rarely need to do, but it can be useful to make a quick assertion that requires some amount of mocking.
Okay, what does Funk do differently?
Let's take a look at how we'd write that first test using Funk:
@with_context
def test_gets_python_and_debian_tags(context):
    tag_repository = context.mock()
    tag_repository.expects('with_name').with_args('python').returns(python_tag)
    tag_repository.expects('with_name').with_args('debian').returns(debian_tag)
    # Rest of the test
The first difference is that using the @with_context decorator means that we get a context passed in. If you want to build your own context, you can do so simply by calling funk.Context().
Secondly, Funk doesn't care what order you call the two methods in.
Finally, even if you only set one expectation, Funk expects the method to be called once. Setting up further expectations will not affect the existing expectations.
Show me the code!
You can see the Git repository over on Gitorious, or grab version 0.1 from the Cheese Shop. Feel free to try it out, although the API isn't 100% stable yet. The source includes some documentation, but you might also want to take a look at some of the tests to get an idea of what you can do.
Topics: Funk, Mocking, Python, Testing