I'm setting up a unit-testing system for a pile of code right now
that will soon expand into hundreds of tests. I've currently got a simple
program that finds all of the tests in the system and hands this list to
Test::Harness to be run.

However, I'd like to be able to do two things:

    1. Run the tests in dependency order. That is, if module A uses
    module B, the tests for module B should be executed before those for
    module A.

    2. Skip tests of code whose dependencies have been tested and found
    to be failing. For example, if the test for the database connection
    module fails, nothing that uses that module should have its tests
    run. (Those tests should, of course, be reported as skipped.)

This should make it easier to fix problems because we'll see right away
where we need to start fixing things, and we won't have to go through
reams of test failures to find the one low level module failure that
caused them all.
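To make the idea concrete, here's a rough sketch of the ordering and
skipping logic I have in mind. This is not built on Test::Harness at
all; the %deps map and the run_test callback are hypothetical
stand-ins for however the dependencies get discovered and the .t files
actually get run:

```perl
#!/usr/bin/perl
# Sketch only: order tests by module dependency and skip anything
# downstream of a failure.  %deps and run_test() are hypothetical.
use strict;
use warnings;

# Module => [ modules it uses ]
my %deps = (
    'A'  => ['B'],
    'B'  => ['DB'],
    'DB' => [],
);

# Depth-first topological sort: a module's dependencies come out
# before the module itself.
sub test_order {
    my (%deps) = @_;
    my (@order, %seen);
    my $visit;
    $visit = sub {
        my $mod = shift;
        return if $seen{$mod}++;
        $visit->($_) for @{ $deps{$mod} || [] };
        push @order, $mod;
    };
    $visit->($_) for sort keys %deps;
    return @order;
}

# Run tests in dependency order; mark dependents of failures as
# skipped rather than running them.
sub run_all {
    my ($run_test, %deps) = @_;
    my %status;    # module => 'ok' | 'fail' | 'skip'
    for my $mod (test_order(%deps)) {
        if (grep { $status{$_} ne 'ok' } @{ $deps{$mod} || [] }) {
            $status{$mod} = 'skip';    # a dependency failed or was skipped
        }
        else {
            $status{$mod} = $run_test->($mod) ? 'ok' : 'fail';
        }
    }
    return %status;
}

# Example: the DB test fails, so B and A are skipped, not run.
my %status = run_all( sub { $_[0] ne 'DB' }, %deps );
print "$_: $status{$_}\n" for sort keys %status;
```

With the failing DB module above this reports DB as failed and both B
and A as skipped, which is exactly the single-point-of-failure report
I'm after; the open question is how to hang this off Test::Harness
rather than reimplementing the running and reporting myself.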

I'm not very familiar with Test::Harness usage, and I'm wondering
if someone could suggest a good way of doing this. I've looked at
examples/mini_harness.plx; is using those private methods really
the suggested way to go about things?

Also, from looking at the code, I get the impression that analyze_file
is supposed to run the perl file given to it as an argument. But
the docs say otherwise:

    Like "analyze", but it reads from the given $test_file.

Well, analyze reads test results, so presumably the documentation
is saying that this is supposed to be a file containing the test
results, not the test code. Hmmm.

cjs
-- 
Curt Sampson  <[EMAIL PROTECTED]>   +81 90 7737 2974   http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC
