Assumption Testing

I like to write narratives when I’m testing code. Do A. Test some stuff. Do B. Test some stuff.

The problem with these narratives is that an error in part A can result in cascading test failures in B, C, and so on. The root cause is usually not too hard to figure out, but it's definitely annoying to see one bug fill up your console with tracebacks from 50 test failures.
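For concreteness, here's what one of these narrative tests might look like (all of the domain code is a made-up stand-in for illustration, not from the real test suite):

```python
import unittest

def create_account(name):            # stand-in for real step-A code
    return {"name": name, "orders": []}

def place_order(account, item):      # stand-in for real step-B code
    account["orders"].append(item)
    return {"item": item, "status": "pending"}

class NarrativeTest(unittest.TestCase):
    def test_a_create_account(self):
        # Do A. Test some stuff.
        self.account = create_account("alice")
        self.assertEqual(self.account["name"], "alice")

    def test_b_place_order(self):
        # Do B. Test some stuff. B has to redo A to get its state,
        # so a bug in create_account fails this test (and C, D, ...) too.
        self.test_a_create_account()
        order = place_order(self.account, "widget")
        self.assertEqual(order["status"], "pending")
```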

One way to deal with this is to compartmentalize your tests, i.e., make A and B separate tests and mock out any references to A in B. It's easy to overdo this, however. A lot of the time, you actually do want to test the interaction between A and B (and C and D, and so forth).
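The compartmentalized version of step B might look something like this, using unittest.mock (the third-party mock library on older Pythons) to stand in for step A; again, the domain code is invented for illustration:

```python
import unittest
from unittest import mock

def place_order(account, item):      # stand-in for real step-B code
    return {"account": account.name, "item": item, "status": "pending"}

class IsolatedTest(unittest.TestCase):
    def test_place_order_alone(self):
        # Mock out step A so a create_account bug can't fail this test...
        account = mock.Mock()
        account.name = "alice"
        order = place_order(account, "widget")
        self.assertEqual(order["status"], "pending")
        # ...but the real create_account / place_order interaction
        # is never exercised.
```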

What we really want to do is to run the narrative and stop as soon as we hit a failure. However, there doesn't appear to be a way to control the flow and order of execution between different tests in Python's native unittest and doctest modules. That basically leaves writing really long, unwieldy test functions, which isn't very maintainable.

So I ended up hacking together an extension of unittest (and doctest, sort of) and named it Antf. The basic idea is to add a way for a test case to declare that it depends on functionality tested in another test case. If test A depends on test B, then we test B first. If B fails, then the test runner passes over A.
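As a minimal sketch of the idea (a sketch only, not the actual Antf implementation), a decorator can record the assumptions and the test case can skip itself when one of them has already failed:

```python
import unittest

_failed = set()  # method names of tests that have already failed

def assumes(*deps):
    """Declare that a test assumes the named tests passed."""
    def decorator(fn):
        fn._assumes = deps
        return fn
    return decorator

class AssumptionTestCase(unittest.TestCase):
    # Note: this sketch leans on unittest's default alphabetical
    # ordering to run dependencies first; Antf instead reorders
    # tests based on the declared dependencies.
    def setUp(self):
        method = getattr(self, self._testMethodName)
        for dep in getattr(method, "_assumes", ()):
            if dep in _failed:
                self.skipTest("assumed test %s failed" % dep)

    def run(self, result=None):
        if result is None:
            result = unittest.TestResult()
        super().run(result)
        # Remember failures/errors so dependent tests get passed over.
        if any(t is self for t, _ in result.failures + result.errors):
            _failed.add(self._testMethodName)
        return result

class FlowTest(AssumptionTestCase):
    def test_a_create_account(self):
        self.fail("step A is broken")        # forces B to be skipped

    @assumes("test_a_create_account")
    def test_b_place_order(self):
        pass                                  # skipped, not failed
```

Running FlowTest reports one failure and one skip instead of two failures, which is the behavior described above.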

Possible issues include namespace collisions in keeping track of which test cases have already been run, and circular assumption references. Neither seems like a real show-stopper right now, though, so I've gone ahead and pasted the code below the fold.
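For the circular-reference concern, a guard along these lines (hypothetical, not part of the pasted code) could walk the declared assumptions and refuse to run when it finds a cycle:

```python
def find_cycle(assumptions):
    """assumptions maps a test name to the names it assumes.
    Returns a test name on a cycle, or None if the graph is acyclic."""
    visiting, done = set(), set()

    def visit(name):
        if name in done:
            return None
        if name in visiting:
            return name                       # back-edge: cycle found
        visiting.add(name)
        for dep in assumptions.get(name, ()):
            hit = visit(dep)
            if hit is not None:
                return hit
        visiting.discard(name)
        done.add(name)
        return None

    for name in assumptions:
        hit = visit(name)
        if hit is not None:
            return hit
    return None

# find_cycle({"test_a": ["test_b"], "test_b": ["test_a"]}) -> "test_a"
```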
