2009/3/16 Evgeny <evgeny.zis...@gmail.com>:
> Thing is. It just does not matter THAT much.
> The case you describe is fairly rare in the xUnit world, or in any
> world I would guess.
And as I said, I got bitten by it just last week. Another way I've
been bitten is when doing slightly more complex xUnit work, where I
couldn't just let introspection find all the test cases
automatically. Once you start registering test cases into test
suites by hand and making sure they all get run, it becomes very
easy to leave some out, and xUnit provides absolutely no protection
against that. In fact, in that case you end up building the
equivalent of perl's plan yourself.

> The testing suite does not have a "will", it is only a tool.
>
> When the testing suite works, it just works; when people have
> confidence in it for some reason, then there is usually a reason
> behind that.
>
> Let me demonstrate with an example:
> A group of Java developers are using JUnit to write unit tests for
> their software. That software is being built and tested on a
> continuous integration server (the likes of CruiseControl). And they
> even went as far as to draw a graph and a report of the running unit
> tests.
>
> They know:
> - how many unit tests were executed each run
> - how much time each unit test took to run (and the total time)
> - which unit tests passed, and which failed
> - the behavior of some tests over time (a bad test can randomly
>   fail/pass for example)
>
> If you would tell them that each time they write a unit test, they
> also need to go to some file and increment some counter, they would
> probably either not do it, or say you are crazy.
>
> The major idea is to make it easier for a developer to write stuff.
> That's why people invent IDEs (I use vi personally). So that the
> actual developer will not be annoyed by things that are much better
> done automatically, like for example updating a counter each time
> he writes one line of test code.

As has already been pointed out, it is impossible to do this
automatically. Not because counting how many tests will run is
equivalent to the halting problem - getting around that is actually
quite easy: just run the script and see. The real reason it's
impossible is that a plan is a summary of what you think you wrote
and what you think it will do. Your computer can only see what you
actually wrote and what it actually will do. So an automatically
calculated plan will always be "correct" and thus never tells you
anything.

Alternatively, think of the plan as a meta-test: a test for your
testing code. It is the equivalent of putting

  is($tests_run_count, $tests_i_planned_count);

at the end of your test script. Letting the computer calculate the
plan is the equivalent of putting

  is($tests_run_count, $tests_run_count);

at the end of your test script. It's pointless; it will always pass.

Sometimes a plan is more trouble than it's worth; you might even
think it's always more work than it's worth. However, for it to be
worth anything at all, it must involve work.

A possibly easier alternative to the current planning system is
available if you use revision control. Keep the plan in an external
file: say foo.t's plan goes in foo.plan. When you run foo.t, it
writes the actual test count into foo.count. Before checking in
changes to foo.t, you run it and then cp foo.count foo.plan. When you
look at the diff for your checkin, you should see foo.plan changing
in line with your changes to foo.t. Wrap this all up in a script and
put it in your RCS's hooks/triggers mechanism so that it all happens
automatically, and make a module Test::FilePlan to take care of
reading and writing the foo.{plan,count} files. Both ideas are
sketched below.
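To see the meta-test in action, here's a minimal Test::More sketch
(the test data and names are invented for illustration). If a bug
accidentally empties @cases, the explicit plan of 4 fails the run,
while no_plan would report one passing test and say nothing:

  use strict;
  use warnings;
  use Test::More tests => 4;   # the plan: what I *think* this runs

  ok(1, 'setup worked');

  my @cases = (1, 2, 3);       # invented test data
  for my $case (@cases) {
      ok($case > 0, "case $case is positive");
  }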
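And a rough sketch of what Test::FilePlan might look like -
hypothetical code, not a real CPAN module, using the foo.{plan,count}
naming from above:

  package Test::FilePlan;
  use strict;
  use warnings;
  use Test::Builder;

  my ($base, $tb);    # shared with the END block below

  sub import {
      ($base = $0) =~ s/\.t\z//;        # foo.t -> foo
      $tb = Test::Builder->new;
      if (open my $in, '<', "$base.plan") {
          chomp(my $expected = <$in>);  # plan file exists: use it
          $tb->plan(tests => $expected);
      }
      else {
          $tb->plan('no_plan');         # first run: no plan yet
      }
  }

  # When the script finishes, record how many tests actually ran,
  # so you can "cp foo.count foo.plan" before checking in.
  END {
      if ($tb) {
          open my $out, '>', "$base.count"
              or die "can't write $base.count: $!";
          print {$out} $tb->current_test, "\n";
      }
  }

  1;

Then foo.t says "use Test::FilePlan;" instead of declaring a plan,
and the RCS hook just diffs foo.count against foo.plan.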
So you can automatically generate the number, but you still need a
human to check whether the number is changing correctly,

F

> I won't argue that the plan counter does not have its use. It
> probably does. But what it also does is annoy the developer. That
> is why you would probably see "no_plan" used in most of the testing
> code in the wild (I am not talking about CPAN).
>
> just my opinion, you are welcome to argue your reasons if you feel
> differently.
>
> - evgeny