Happiness comes from small victories. In order to save on unnecessary typing, I’ve made a few shortcuts in Django for my most common “python manage.py” commands.
For example, my test command is longer than most because I use a different settings file for tests to keep the environments separate. So I popped open an editor and created this:
#!/bin/sh
python manage.py test "$@" --settings=settings_test
Save it as something short, like ‘ts’, then make it executable:
chmod u+x ts
Now instead of typing python manage.py test … , I can simply type ./ts and be done with it. Note that things like ./ts my_app still work, since the arguments pass straight through.
Last step: update your repository to ignore these files so you don’t accidentally piss off anyone you’re working with.
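If the project uses git, for instance, the ignore entries are just the script names you picked (these are examples, not a fixed list):

```
# .gitignore — personal manage.py shortcut scripts
ts
```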
I spent lunch today at a talk by David Bollier focusing on how to govern (or manage, depending on your semantics) the digital commons. His premise, more or less, is:
- There now exists a digital commons not that different from the commons from way back when. Whereas villagers once benefited from a shared space for, say, sheep grazing, Internet users now benefit from shared code and media (among other things).
- Commons have to be maintained and protected (see “tragedy of the commons”). What Bollier was interested in was less the shared space and more the norms and relationships that allowed users of the commons to protect it and not abuse it.
- Having given numerous examples of how people did this for the regular commons, he asked: how do we do the same for the digital commons?
Just goes to show that few things are new — we’re just changing the scale and tweaking the metaphors is all.
I like to write narratives when I’m testing code. Do A. Test some stuff. Do B. Test some stuff.
The problem with these narratives is that an error in part A can result in cascading test failures in B, C, etc. It’s usually not too hard to figure out, but it’s definitely annoying to see one bug fill up your console with tracebacks from 50 test failures.
One way to deal with this is to compartmentalize your tests, i.e. make A and B separate tests and mock out any references to A in B. It’s easy to overdo this, however. A lot of the time, you actually do want to test the interaction between A and B (and C and D and so forth).
What we really want to do is run the narrative and stop as soon as we hit a failure. However, there doesn’t appear to be a way to control the flow and order of testing between different tests in Python’s native unittest and doctest modules. That basically leaves writing really long and unwieldy test functions. Not very maintainable.
So I ended up hacking together an extension of unittest (and doctest, sort of) and named it Antf. The basic idea is that we add functionality for specifying that a test case depends on functionality tested in another test case. If test A depends on test B, then we test B first. If B fails, then the test runner passes over A.
Possible issues include namespace collisions in keeping track of which test cases have already been run, and circular assumption references. Neither seems like a real show-stopper right now, though, so I’ve gone ahead and pasted the code below the fold.
Getting back into the blogging thing