On 1/9/2012 8:05 PM, Roy Smith wrote:
In article <11jrt8-l32....@satorlaser.homedns.org>,
  Ulrich Eckhardt <ulrich.eckha...@dominolaser.com> wrote:

Some people advocate that the test framework should
intentionally randomize the order, to flush out inter-test dependencies
that the author didn't realize existed (or intend).
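A minimal sketch of such a randomized run with the stock unittest
machinery (the "tests" directory name and the one-level flattening are
just illustrative):

    import random
    import unittest

    # Load the tests as usual, then shuffle them before running so that
    # hidden inter-test dependencies get a chance to show themselves.
    loader = unittest.TestLoader()
    suite = loader.discover("tests")

    groups = list(suite)     # roughly one sub-suite per test module
    random.shuffle(groups)   # randomize the execution order
    unittest.TextTestRunner().run(unittest.TestSuite(groups))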

If you now
happen to influence one test with another and the next run randomizes
the tests differently, you will never see the fault again. Without this
reproducibility, you don't gain anything but the bad stomach feeling
that something is wrong.

The standard solution to that is to print out the PRNG initialization
state and provide a way in your test harness to re-initialize it to that
state.  I've done things like that in test scenarios where it is
difficult or impossible to cover the problem space deterministically.
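Something along these lines, for example (the command-line handling is
just one possible way to feed the seed back in):

    import random
    import sys
    import time

    # Pick a seed, announce it, and let it be passed back in on the
    # command line so a failing randomized run can be replayed exactly.
    seed = int(sys.argv[1]) if len(sys.argv) > 1 else int(time.time())
    print("Random seed: %d" % seed)
    random.seed(seed)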

Your unfortunate case is where test X creates persistent state that must
be present in order for test X+1 to produce meaningful results. This
kind of dependency obviously blows, as it means you can't debug test X+1
separately. I'd call this an operational dependency.

This kind of dependency is IMHO a bug in the tests themselves.
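The usual fix is for each test to build (and tear down) the state it
needs itself, typically in setUp(). A toy sketch of what that looks
like (the Order class is invented purely for the example):

    import unittest

    class Order(object):
        """Toy stand-in for whatever object the real tests exercise."""
        def __init__(self):
            self.open = True
        def cancel(self):
            self.open = False

    class TestOrders(unittest.TestCase):
        # Each test builds its own state in setUp(), so test_cancel no
        # longer depends on test_create having run (and passed) first.
        def setUp(self):
            self.order = Order()

        def test_create(self):
            self.assertTrue(self.order.open)

        def test_cancel(self):
            self.order.cancel()
            self.assertFalse(self.order.open)

    if __name__ == "__main__":
        unittest.main()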

For the most part, I'm inclined to agree.  However, there are scenarios
where having each test build the required state from scratch is
prohibitively expensive.  Imagine if you worked at NASA and wanted to run
test_booster_ignition(), test_booster_cutoff(),
test_second_stage_ignition(), and test_self_destruct().  I suppose you
could run them in random order, but you'd use up a lot of rockets that
way.

Somewhat more seriously, let's say you wanted to do test queries against
a database with 100 million records in it.  You could rebuild the
database from scratch for each test, but doing so might take hours per
test.  Sometimes, real life is just *so* inconvenient.
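For what it's worth, unittest's class-level fixtures are one way to
split the difference: pay for the expensive setup once, and keep the
individual tests read-only so they stay order-independent. A sketch,
with a toy in-memory sqlite table standing in for the real database:

    import sqlite3
    import unittest

    class TestBigQueries(unittest.TestCase):
        # Build the expensive fixture once per class rather than once
        # per test.  The real thing would be the 100-million-record
        # database; an in-memory sqlite table stands in for it here.
        @classmethod
        def setUpClass(cls):
            cls.db = sqlite3.connect(":memory:")
            cls.db.execute("CREATE TABLE records (id INTEGER, name TEXT)")
            cls.db.executemany("INSERT INTO records VALUES (?, ?)",
                               ((i, "name%d" % i) for i in range(1000)))

        @classmethod
        def tearDownClass(cls):
            cls.db.close()

        # The tests only *read* the shared fixture, so they remain
        # independent of each other and of the order they run in.
        def test_count(self):
            (count,) = self.db.execute(
                "SELECT COUNT(*) FROM records").fetchone()
            self.assertEqual(count, 1000)

        def test_lookup(self):
            row = self.db.execute("SELECT name FROM records WHERE id = ?",
                                  (42,)).fetchone()
            self.assertEqual(row[0], "name42")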

There is another kind of dependency, which I'd call a logical dependency.
This occurs when e.g. test X tests for the presence of an API and test Y
tests that API's behaviour. In other words, Y has no chance of succeeding
if X has already failed.
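unittest can at least make that relationship explicit: the behaviour
test declares what it depends on and is reported as a skip, rather
than a second redundant failure, when the presence check would fail.
A rough sketch (the HAS_API probe is just a stand-in):

    import unittest

    # Stand-in for "is the API there at all?"; a real suite would probe
    # the module or attribute it actually cares about.
    HAS_API = hasattr(str, "format")

    class TestSomeAPI(unittest.TestCase):
        def test_api_present(self):
            self.assertTrue(HAS_API)

        @unittest.skipUnless(HAS_API, "API not available")
        def test_api_behaviour(self):
            # Only meaningful when the API exists; skipped rather than
            # failing a second time when it doesn't.
            self.assertEqual("{0}-{1}".format(1, 2), "1-2")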

Sure.  I run into that all the time.  A trivial example would be the
project I'm working on now.  I've come to realize that a long unbroken
string of E's means, "Dummy, you forgot to bring the application server
up before you ran the tests".  It would be nicer if the test suite could
have run a single test which proved it could create a TCP connection and
when that failed, just stop.
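One way to get roughly that effect with unittest is a module-level
fixture that checks connectivity once and skips everything else when
the check fails (the host and port here are made up):

    import socket
    import unittest

    # Made-up address for the application server the tests talk to.
    APP_SERVER = ("localhost", 8000)

    def setUpModule():
        # One connectivity check up front: if the server isn't
        # reachable, every test in this module is reported as skipped
        # instead of a long unbroken string of E's.
        try:
            socket.create_connection(APP_SERVER, timeout=2).close()
        except socket.error:
            raise unittest.SkipTest("application server %s:%d not reachable"
                                    % APP_SERVER)

    class TestQueries(unittest.TestCase):
        def test_placeholder(self):
            self.assertTrue(True)   # the real tests would go here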

Many test cases in the Python test suite have multiple asserts. I believe both resource sharing and sequential dependencies are reasons. I consider 'one assert (test) per testcase' to be on a par with 'one class per file'.
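A small illustration of what I mean (an invented example, not taken
from the stdlib suite): the asserts all inspect the same result, so
splitting them into separate test methods would only repeat the setup.

    import unittest

    class TestDateParsing(unittest.TestCase):
        # One test case, several asserts over one shared result; a
        # failure message still points at the exact assert that broke.
        def test_parse_date(self):
            year, month, day = (int(part)
                                for part in "2012-01-09".split("-"))
            self.assertEqual(year, 2012)
            self.assertEqual(month, 1)
            self.assertEqual(day, 9)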

--
Terry Jan Reedy
