On 8/22/07, Alexandre Vassalotti <[EMAIL PROTECTED]> wrote:
> When I was fixing tests failing in the py3k branch, I found the number
> of duplicate failures annoying. Often, a single bug in an important
> method or function caused a large number of test cases to fail. So, I
> thought of a simple mechanism for avoiding such cascading failures.
>
> My solution is to add a notion of dependency to test cases. A typical
> usage would look like this:
>
>     @depends('test_getvalue')
>     def test_writelines(self):
>         ...
>         memio.writelines([buf] * 100)
>         self.assertEqual(memio.getvalue(), buf * 100)
>         ...
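For illustration only, here is a minimal sketch of how a depends decorator
along these lines could be wired up. It is not Alexandre's actual patch: the
result table _RESULTS, the _depends_on attribute, and the use of
TestCase.skipTest (available in unittest since 2.7/3.1) are all assumptions
made for the sketch. Each test records whether it passed, and a dependent
test is skipped only when a prerequisite has already failed; if the
prerequisite simply has not run yet, the test runs anyway.

    import functools
    import unittest

    # Illustrative module-level table: test method name -> True (passed) or
    # False (failed). Tests that have not run yet are simply absent.
    _RESULTS = {}

    def depends(*prerequisites):
        """Skip the decorated test if any named prerequisite has failed."""
        def decorator(test_method):
            @functools.wraps(test_method)
            def wrapper(self, *args, **kwargs):
                for name in prerequisites:
                    if _RESULTS.get(name) is False:
                        self.skipTest("prerequisite %s failed" % name)
                try:
                    outcome = test_method(self, *args, **kwargs)
                except Exception:
                    # Record the failure, then let the test framework see it.
                    _RESULTS[test_method.__name__] = False
                    raise
                _RESULTS[test_method.__name__] = True
                return outcome
            # Hypothetical attribute a dependency-aware loader could inspect.
            wrapper._depends_on = prerequisites
            return wrapper
        return decorator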
This definitely seems like a neat idea. Some thoughts:

* How do you deal with dependencies that cross test modules? Say test A
  depends on test B: how do we know whether it's worthwhile to run A if B
  hasn't been run yet? It looks like you run the test anyway (I haven't
  studied the code closely), but that doesn't seem ideal.

* This might be implemented in the wrong place. For example, the
  [x for x in dir(self) if x.startswith('test')] you do is most certainly
  better placed in a custom TestLoader implementation. That might also make
  it possible to order tests based on their dependency graph, which could be
  a step toward addressing the point above (see the sketch after this
  message).

But despite that, I think it's a cool idea and worth pursuing. Could you set
up a branch (probably of py3k) so we can see how this plays out in the
large?

Collin Winter
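To make the TestLoader suggestion concrete, here is one possible loader
sketch that topologically sorts a TestCase's methods by the hypothetical
_depends_on attribute from the decorator sketch above. The class name and
attribute are assumptions for illustration, not an existing unittest API;
unknown prerequisites (for example, cross-module ones) and cycles simply
fall back to the default ordering.

    import unittest

    class DependencyAwareLoader(unittest.TestLoader):
        """Illustrative loader: order test methods so prerequisites run first."""

        def getTestCaseNames(self, testCaseClass):
            names = super().getTestCaseNames(testCaseClass)
            deps = {
                name: set(getattr(getattr(testCaseClass, name),
                                  '_depends_on', ()))
                for name in names
            }
            ordered = []
            seen = set()

            def visit(name, stack=()):
                # Skip names already placed, names not defined on this
                # TestCase (e.g. cross-module prerequisites), and cycles.
                if name in seen or name not in deps or name in stack:
                    return
                for prerequisite in deps[name]:
                    visit(prerequisite, stack + (name,))
                seen.add(name)
                ordered.append(name)

            for name in names:
                visit(name)
            return ordered

Such a loader could then be passed to the test runner, for example with
unittest.main(testLoader=DependencyAwareLoader()), so that prerequisites run
before the tests that depend on them within a single TestCase.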