Monday 28 September 2009 17:42
Roll up, roll up! That's right folks, I've written a Python mocking framework.
Another mocking framework?
Yup. As for why, there are a few reasons.
The simplest is to see just how difficult it would be to write a usable mocking framework. It turns out not to be all that difficult – I managed to write the core of the framework over a weekend, although plenty of time was spent tweaking and adding behaviour afterwards.
A somewhat better reason is that none of the existing Python mocking frameworks that I could find really did what I wanted. The closest I found was Fudge. My criteria were something like:
- Not using the record/replay pattern.
- Allowing the expected calls and their arguments to be set up beforehand.
- Allowing differentiation between the methods that have to be called, and methods that can be called.
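To make that last criterion concrete, here's how the distinction plays out with Python's standard-library unittest.mock (not Funk's or Fudge's API, just an illustration of the idea):

```python
from unittest.mock import Mock

# A stub standing in for a real tag repository.
tag_repository = Mock()
tag_repository.with_name.return_value = 'python-tag'

# "Can be called": configuring a return value places no requirement
# on the method ever being invoked.
result = tag_repository.with_name('python')

# "Must be called": an explicit assertion afterwards fails the test
# if the call never happened, or happened with the wrong arguments.
tag_repository.with_name.assert_called_once_with('python')
```

With expectation-first frameworks like Funk, that "must be called" requirement is declared up front rather than asserted at the end.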
So what's wrong with Fudge?
Fudge meets all of these expectations. So what went wrong?
Firstly, I found Fudge too strict on ordering. Imagine I have a TagRepository that returns me tags from a database. I want to mock this object since I don't want to make a database trip in a unit test. So, in Fudge, I would set up the mock like so:
```python
@with_fakes
def test_gets_python_and_debian_tags():
    tag_repository = fudge.Fake()
    tag_repository.expects('with_name').with_args('python').returns(python_tag)
    tag_repository.next_call().with_args('debian').returns(debian_tag)
    # Rest of the test
```
This would require me to get the Python tag before the Debian tag – yet I really didn't care which method I called first. I'm also not a fan of the syntax: expects is used for the first expectation, yet next_call for the second.
The second problem I had was that, if you only set up one expectation on a method, you could call it many times. So, with the example above, if you had only set up the expectation for the Python tag, you could get the Python tag any number of times, so long as you asked for it at least once.
I dislike this since, by adding a second expectation, we have now changed the behaviour of the first. This does not lend itself to being able to modify or refactor the test quickly.
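The stricter behaviour I wanted can be sketched with the standard library's unittest.mock, again purely as an illustration rather than either framework's API:

```python
from unittest.mock import Mock

tag_repository = Mock()
tag_repository.with_name.return_value = 'python-tag'

tag_repository.with_name('python')
tag_repository.with_name('python')  # a second, unintended call

# assert_called_once_with surfaces the extra call instead of
# silently allowing it, which is the behaviour Funk makes the
# default for every expectation.
try:
    tag_repository.with_name.assert_called_once_with('python')
    strict_check_passed = True
except AssertionError:
    strict_check_passed = False
```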
Finally, Fudge used a global context for mocks. The ramification of this is that, when using the @with_fakes decorator, each test inherits the mocks set up for the previous test. For instance:
```python
@with_fakes
def test_tag_is_saved_if_name_is_valid():
    database = fudge.Fake()
    database.expects('save').with_args('python')
    tag_repository = TagRepository(database)
    tag_repository.save('python')

@with_fakes
def test_tag_is_not_saved_if_name_is_blank():
    tag_repository = TagRepository(None)
    tag_repository.save('')
```
The second test above would fail since it does not save the Python tag to the database created in the first test. This seemed somewhat unintuitive to me, so I ended up rolling my own decorator. At the start of each test, it would remove all of the mocks currently set up so that I could start from a blank slate.
The other effect is that it makes it difficult to nest mock contexts – admittedly, something I rarely need to do, but it can be useful to make a quick assertion that requires some amount of mocking.
Okay, what does Funk do differently?
Let's take a look at how we'd write that first test using Funk:
```python
@with_context
def test_gets_python_and_debian_tags(context):
    tag_repository = context.mock()
    tag_repository.expects('with_name').with_args('python').returns(python_tag)
    tag_repository.expects('with_name').with_args('debian').returns(debian_tag)
    # Rest of the test
```
The first difference is that using the @with_context decorator means that we get a context passed in. If you want to build your own context, you can do so simply by calling
Secondly, Funk doesn't care what order you call the two methods in.
Finally, even if you only set one expectation, Funk expects the method to be called once. Setting up further expectations will not affect the existing expectations.
Show me the code!
You can see the Git repository over on Gitorious, or grab version 0.1 from the Cheese Shop. Feel free to try it out, although the API isn't 100% stable yet. The source includes some documentation, but you might also want to take a look at some of the tests to get an idea of what you can do.