On Wed, Aug 22, 2007 at 07:44:02PM -0400, Alexandre Vassalotti wrote:
> When I was fixing tests failing in the py3k branch, I found the number
> of duplicate failures annoying. Often, a single bug in an important
> method or function caused a large number of testcases to fail. So, I
> thought of a simple mechanism for avoiding such cascading failures.
>
> My solution is to add a notion of dependency to testcases. A typical
> usage would look like this:
>
>     @depends('test_getvalue')
>     def test_writelines(self):
>         ...
>         memio.writelines([buf] * 100)
>         self.assertEqual(memio.getvalue(), buf * 100)
>         ...
>
> Here, running the test is pointless if test_getvalue fails. So, by
> making test_writelines depend on the success of test_getvalue, we can
> ensure that the report won't be polluted with unnecessary failures.
>
> Also, I believe this feature will lead to more orthogonal tests, since
> it encourages the user to write smaller tests with fewer dependencies.
>
> I wrote an example implementation (included below) as a proof of
> concept. If the idea gets enough support, I will implement it and add
> it to the unittest module.
>
> -- Alexandre
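Alexandre's proof-of-concept implementation was attached to the original
message and is not reproduced in this archive. As a rough illustration of
the mechanism only (not his code), a minimal sketch could keep a
module-level set of failed test names and skip any test whose dependency
appears in it. It assumes unittest.SkipTest, which unittest only grew
later (Python 2.7/3.1), and relies on dependencies sorting before their
dependents, which unittest's default alphabetical method ordering happens
to guarantee for test_getvalue/test_writelines:

    import functools
    import io
    import unittest

    _failed = set()   # names of test methods that have failed so far

    def depends(*dependencies):
        """Skip the decorated test if any named dependency has failed."""
        def decorator(test_method):
            @functools.wraps(test_method)
            def wrapper(self, *args, **kwargs):
                for name in dependencies:
                    if name in _failed:
                        raise unittest.SkipTest(
                            "dependency %r failed" % name)
                try:
                    return test_method(self, *args, **kwargs)
                except unittest.SkipTest:
                    raise
                except Exception:
                    # Record the failure so dependent tests can skip.
                    _failed.add(test_method.__name__)
                    raise
            return wrapper
        return decorator

    class MemIOTest(unittest.TestCase):
        buf = "1234567890"

        @depends()  # no dependencies, but the wrapper records failures
        def test_getvalue(self):
            memio = io.StringIO(self.buf)
            self.assertEqual(memio.getvalue(), self.buf)

        @depends('test_getvalue')
        def test_writelines(self):
            memio = io.StringIO()
            memio.writelines([self.buf] * 100)
            self.assertEqual(memio.getvalue(), self.buf * 100)

    if __name__ == '__main__':
        unittest.main()

With this sketch, if test_getvalue fails, test_writelines is reported as
skipped with the dependency named, instead of producing a second,
redundant failure.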
I like this idea. Be sure to have an option to ignore dependencies and
run all tests. Also, when skipping tests because a dependency failed,
have unittest print out an indication that the test was skipped due to a
dependency rather than silently running fewer tests. Otherwise it could
be deceptive and appear that only one test was affected.

Greg
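Both suggestions slot naturally into the sketch above. As a hedged
variant (IGNORE_TEST_DEPS is a hypothetical name, not an existing
unittest or regrtest option), the dependency check can be gated by an
environment variable, and raising SkipTest with a reason string keeps
the skip visible: unittest prints the reason in verbose mode and counts
it in the "skipped=N" summary rather than silently running fewer tests:

    import functools
    import os
    import unittest

    _failed = set()
    # Hypothetical switch: set IGNORE_TEST_DEPS=1 to run every test even
    # when one of its dependencies has already failed.
    IGNORE_DEPS = bool(os.environ.get("IGNORE_TEST_DEPS"))

    def depends(*dependencies):
        def decorator(test_method):
            @functools.wraps(test_method)
            def wrapper(self, *args, **kwargs):
                if not IGNORE_DEPS:
                    for name in dependencies:
                        if name in _failed:
                            # The reason is shown in verbose output and
                            # counted in the skipped total, so the skip
                            # is never silent.
                            raise unittest.SkipTest(
                                "dependency %r failed" % name)
                try:
                    return test_method(self, *args, **kwargs)
                except unittest.SkipTest:
                    raise
                except Exception:
                    _failed.add(test_method.__name__)
                    raise
            return wrapper
        return decorator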