I don't get this logic.
Why can't something that wants to monitor the test process do something
other than make test?
They can run a make, and/or make test-prep or whatever, and then call
into an alternative test harness framework to monitor the tests.
Can you explain why this is a no-go in more detail?
OK, so bear in mind here my use case is to encapsulate an install run
for a distribution.
In doing this, I have some desirable behaviours.
Primarily, I want to be doing the testing in a way that is as close as I
possibly can get to the way it will be done "for real" by users.
So the path I will most likely take is not to run "make test" myself at
all.
I suspect in the end what I will do is inject the distro to be tested
into the local CPAN cache and tell an actual CPAN client:
cpan> test LOCAL/Some-Distro-0.01.tar.gz
And let it recurse and do all the normal things that a CPAN client does.
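To make the injection step concrete, here is a minimal sketch of dropping a tarball into a local CPAN mirror layout where a stock CPAN client can resolve it under a LOCAL author ID. The mirror path, the author ID, and the stand-in tarball are all assumptions for illustration; a real setup would also need a CHECKSUMS file and a urllist entry pointing at the mirror.

```shell
# Hypothetical local mirror location and distro name
MIRROR=/tmp/minicpan-demo
DIST=Some-Distro-0.01.tar.gz

# CPAN mirrors shard authors by the first letters of the author ID,
# so author LOCAL lives under authors/id/L/LO/LOCAL
mkdir -p "$MIRROR/authors/id/L/LO/LOCAL"

# Stand-in empty tarball, just so this sketch is self-contained
tar -czf "/tmp/$DIST" -T /dev/null
cp "/tmp/$DIST" "$MIRROR/authors/id/L/LO/LOCAL/"

# Then point CPAN.pm's urllist at file://$MIRROR and, inside the client:
#   cpan> test LOCAL/Some-Distro-0.01.tar.gz
ls "$MIRROR/authors/id/L/LO/LOCAL"
```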
If I'm testing on Perl 5.005, I want to see what breaks when using the
5.005 default CPAN.pm client (Test::Harness upgraded to 3.0 may be a
necessary evil here). I want to see old Makefiles generated by
ExtUtils::MakeMaker failing...
I want to see Module::Build failing to install because it's not
mentioned as a dependency by something that needs it.
I want to see what is ACTUALLY going to happen in real usages of CPAN,
or as close as I can get.
Now, I don't want to monitor the tests. Not at all.
I'll get a basic PASS or FAIL level result from the testing harness.
HOWEVER, I also want a complete perfect copy of all the TAP streams, as
well as the normal output of the installation run.
That way I can extract those TAP streams and do interesting and shiny
things with them OUTSIDE the virtual machine, possibly months in the future.
So I want to set an environment variable before I start the test run,
and then, once I'm finished, pick up all the TAP streams, attach them to
the normal output, and pull that off the virtual machine.
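A sketch of how that capture flow could look. Test::Harness 3.x reads a PERL_TEST_HARNESS_DUMP_TAP environment variable and writes each raw TAP stream into the named directory (verify against your version's docs); the dummy TAP file below is a stand-in so the bundling step has something to pick up.

```shell
# Tell the harness where to dump raw TAP streams
export PERL_TEST_HARNESS_DUMP_TAP=/tmp/tap-capture
mkdir -p "$PERL_TEST_HARNESS_DUMP_TAP"

# ... the install/test run would go here, e.g. the CPAN client doing
#     test LOCAL/Some-Distro-0.01.tar.gz

# Stand-in TAP stream so this sketch is self-contained
mkdir -p "$PERL_TEST_HARNESS_DUMP_TAP/t"
printf '1..1\nok 1 - placeholder\n' > "$PERL_TEST_HARNESS_DUMP_TAP/t/basic.t"

# Afterwards, bundle the captured TAP to pull off the virtual machine
tar -czf /tmp/run-output.tar.gz -C "$PERL_TEST_HARNESS_DUMP_TAP" .
tar -tzf /tmp/run-output.tar.gz
```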
The less I deviate from normal user behaviour, the more accurate my
testing is.
Adam K