[Michael working on cleaning up the unittest module] It seems like most of the good ideas have been captured already. I'll throw two more (low priority) ideas out there.
1) A randomized test runner/option that runs test methods in a random
   order (like regrtest.py -r, but for methods).

2) A decorator to verify that a test method is supposed to fail.

#2 is useful for getting test cases into the code sooner rather than
later.  I'm pretty sure I have a patch that implements this
(http://bugs.python.org/issue1399935).  It didn't fit in well with the
old unittest structure, but it seems closer to the direction you are
headed.

One other idea that probably ought not be done just yet: add a way of
failing while letting the test continue.  We use this at work (not in
Python, though) and, when used appropriately, it works quite well.  It
provides more information about the failure.  It looks something like
this:

    def testMethod(self):
        # setup
        self.assertTrue(precondition)
        self.expectTrue(value)
        self.expectEqual(expected_result, other_value)

All the expect methods duplicate the assert methods.  Asserts cause the
test to fail immediately; expects don't fail immediately, allowing the
test to continue.  All the expect failures are collected and printed at
the end of the method run.  I was a little skeptical about assert vs.
expect at first, but it has proven useful in the long run.  As I said, I
don't think this should be done now, maybe later.

n
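
P.S. In case it helps, here is a rough sketch of how the expect variants
might be layered on top of TestCase.  It is purely illustrative: the
ExpectTestCase name, keeping the recorded failures in a list created in
setUp(), and reporting them from tearDown() are just one possible shape,
not how our in-house version is implemented.

    import unittest

    class ExpectTestCase(unittest.TestCase):
        """Sketch: 'expect' variants that record failures instead of
        aborting the test method immediately."""

        def setUp(self):
            # Subclasses overriding setUp would need to call super().setUp().
            self._expect_failures = []

        def expectTrue(self, value, msg=None):
            # Soft check: record the failure and keep the test running.
            if not value:
                self._expect_failures.append(
                    msg or "expectTrue failed: %r" % (value,))

        def expectEqual(self, expected, actual, msg=None):
            # Soft check: record a mismatch instead of raising right away.
            if expected != actual:
                self._expect_failures.append(
                    msg or "expectEqual failed: %r != %r" % (expected, actual))

        def tearDown(self):
            # Report every recorded expectation failure at the end of the method.
            if self._expect_failures:
                self.fail("%d expectation(s) failed:\n%s" % (
                    len(self._expect_failures),
                    "\n".join(self._expect_failures)))

    class ExampleTest(ExpectTestCase):
        def testMethod(self):
            self.assertTrue(True)       # hard check: aborts the test if it fails
            self.expectTrue(1 == 2)     # soft check: recorded, test continues
            self.expectEqual(4, 2 + 1)  # soft check: also recorded

    if __name__ == "__main__":
        unittest.main()

Running ExampleTest reports the method as a single failure listing both
unmet expectations, while a failing assert would still abort the method
on the spot.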