PowerPoint lets you add notes to each slide that are not visible when you play your PowerPoint as a slide show. Let’s say you want to remove all of those notes, e.g. so you can distribute the PowerPoint file, and don’t want to delete them all by hand.
If you’re using one of the newer versions of PowerPoint on a PC, this is straightforward enough. You just pull up the Document Inspector and tell it to remove notes, along with other possibly sensitive metadata. Here’s how to do it in PowerPoint 2007 and PowerPoint 2010.
But let’s say you’re using a Mac. As far as I can tell, there’s no way to remove notes in PowerPoint for Mac 2011 (if there’s a way to do it, please let me know in the comments). You may be able to use some VBA macros, but explaining scripting to someone with little technical experience can be difficult.
Continue reading “Remove Notes from Powerpoint (PPTX)”
Click here to try out the demo.
Posted my first Django snippet! This concerns a quick and dirty hack for getting composite indexing in MySQL. It’s also a simple example of how to use Django’s post_syncdb signal.
Django currently comes with a unique_together meta attribute you can use to specify unique combinations of fields. I think the backends create an index in the database from this. However, I couldn’t find anything for simply creating non-unique indexes, hence the hack you see in the snippet.
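The hack boils down to listening for post_syncdb and running a raw CREATE INDEX against MySQL. Here’s a framework-free sketch of the SQL-building half; the composite_index_sql helper and the index-naming scheme are my own illustration, not part of the snippet or of Django’s API:

```python
# Build the raw CREATE INDEX statement that a post_syncdb handler could
# execute against MySQL. Helper name and naming scheme are illustrative.
def composite_index_sql(table, fields, index_name=None):
    """Return a non-unique composite CREATE INDEX statement (MySQL syntax)."""
    name = index_name or "%s_%s_idx" % (table, "_".join(fields))
    cols = ", ".join("`%s`" % f for f in fields)
    return "CREATE INDEX `%s` ON `%s` (%s);" % (name, table, cols)

# composite_index_sql("app_book", ["author", "year"]) yields a statement
# indexing (author, year) on the app_book table.
```

In the snippet itself, a handler connected to the post_syncdb signal would run a statement like this through a database cursor once for each model that declares a composite index.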
I don’t really have the patience to make it work fully for the other backends since I don’t use them currently (aside from SQLite3 for testing), but it’s GoodEnoughForMe.
I like to write narratives when I’m testing code. Do A. Test some stuff. Do B. Test some stuff.
The problem with these narratives is that an error in part A can result in cascading test failures in B, C, etc. It’s usually not too hard to figure out, but it’s definitely annoying to see one bug fill up your console with tracebacks from 50 test failures.
One way to deal with this is to compartmentalize your tests, i.e. make A and B separate tests and mock out any references to A in B. It’s easy to overdo this, however. A lot of times, you actually do want to test the interaction between A and B (and C and D and so forth).
What we really want to do is run the narrative and stop as soon as we hit a failure. However, there doesn’t appear to be a way to control the flow and order of testing between different tests in Python’s native unittest and doctest modules. That basically leaves writing really long and unwieldy test functions. Not very maintainable.
So I ended up hacking together an extension of unittest (and doctest, sort of) and named it Antf. The basic idea is that we add functionality for specifying that a test case depends on functionality tested in another test case. If test A depends on test B, then we test B first. If B fails, then the test runner passes over A.
Possible issues include namespace collisions when keeping track of which test cases have already been looked at, and circular assumption references. These don’t seem to be real show-stoppers right now though, so I’ve gone ahead and pasted the code below the fold.
Continue reading “Assumption Testing”