I realize I forgot to post the location of the code. It's still too
early to put on CPAN, in the wrong hands it can do some pretty naughty
things, but I'll put it here.
http://www.pobox.com/~schwern/src/CPAN-Test-0.12.tar.gz
Be *very* careful about running bin/cpan_smoke_module.plx. It's
currently wired to my personal settings and does some pretty cavalier
rm -rf's.
You also might be interested in this. It's the list of possible
mistakes CPAN::Test looks for.
http://www.pobox.com/~schwern/cgi-bin/perl-qa-wiki.cgi?CPANTestMistakes
I've halted having CPAN::Test post to cpan-testers for the time being.
I'll spool the information in my own local database until we figure
out a better way to sort out all this information.
On Mon, Sep 24, 2001 at 11:01:53AM +0200, Roland Giersig wrote:
> What I'd like to know: how have you solved the problem of interactive
> configuration that some modules need? Or do you leave the 'perl
> Makefile.PL' step interactive and automate only the 'make; make test'?
Currently, it just has a timeout. If Makefile.PL takes too long, we
assume it's either hung or is simply waiting for input.
Next step is to make it differentiate between a Makefile.PL that's
hung, one that's waiting for input, and one that simply takes a long
time (such as Tk). We can use a combination of utime(), STDOUT
watching and STDIN tying to do this.
- utime incrementing + continual STDOUT output == takes a long time
- utime incrementing + no STDOUT output == possible infinite loop
- utime stopped + no STDOUT output == possibly hung
- Tied STDIN detects an attempt to read == needs interaction
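The STDIN tying in that last case can be sketched like so. This is
hypothetical code, not what CPAN::Test actually does: tie STDIN to a
class whose READLINE records that a read was attempted and hands back
an empty answer so the child doesn't block.

```perl
#!/usr/bin/perl -w
# Sketch of detecting a read on STDIN via tie().  Hypothetical;
# CPAN::Test's real implementation may differ.
use strict;

package Detect::STDIN;

my $Read_attempted = 0;

sub TIEHANDLE { bless {}, shift }

# Called whenever someone does <STDIN>.  Note the attempt and
# feed back an empty answer so the caller doesn't hang.
sub READLINE  { $Read_attempted = 1; return "\n" }

sub sought    { $Read_attempted }

package main;

tie *STDIN, 'Detect::STDIN';

# Simulate a Makefile.PL asking a question.
my $answer = <STDIN>;

print "needs interaction\n" if Detect::STDIN::sought();
```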
'make' and 'make test' need to have similar safeguards. Right now
they have nothing.
Next step is to start encouraging CPAN authors to honor the
PERL_SKIP_TTY environment variable, which the core tests currently
use. It tips off modules that they're being run without a
controlling terminal (or controlling human), so they should avoid
asking questions.
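A Makefile.PL honoring it might boil down to something like this (the
prompt_or_default helper and its default values are my own invention,
not an existing API):

```perl
#!/usr/bin/perl -w
# Sketch of honoring PERL_SKIP_TTY in a Makefile.PL-style script.
use strict;

sub prompt_or_default {
    my($question, $default) = @_;

    # No controlling terminal (or human) -- don't ask, take the default.
    return $default if $ENV{PERL_SKIP_TTY};

    print "$question [$default] ";
    chomp(my $answer = <STDIN>);
    return length $answer ? $answer : $default;
}

# Pretend we're running under an automated smoker.
$ENV{PERL_SKIP_TTY} = 1;

my $prefix = prompt_or_default("Where should Frobnitz install?", "/usr/local");
print "Installing to $prefix\n";
```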
Ultimately, we'll try for an expected query/response system.
Hopefully the # of modules which require hand-configuration will be
low. For those that need it, a simple config file can be set up for
each module with the usual questions and the usual answers.
But I'm going to leave this for much later. It's very complicated,
high maintenance and we should be able to get > 80% of the modules
tested without worrying about it.
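For what it's worth, a per-module answer file needn't be fancy.
Something like this might do (format entirely invented):

    # Tk.answers -- hypothetical query/response config
    Where do you want Tk installed?        | /usr/local
    Run the interactive widget tests? [y/n] | n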
> What about dependencies?
Dependencies are handled. The AI::Categorize run, for example,
handled its Storable dependency. Even twisted chains of
dependencies like Class::DBI's are handled fine. The reporting code
needs to be made a little more intelligent to differentiate between a
module failing because a dependency failed to install and a module
failing on its own.
Undeclared dependencies can also be handled fairly intelligently. The
system can examine the test output for the tell-tale signs of a
missing module ("Can't locate Bar.pm in @INC..."), attempt to recover
by installing that module, and continue with the tests. It would then
recommend that the author declare those dependencies.
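The scraping part is simple enough to sketch (assumed logic, not
CPAN::Test's real parser):

```perl
#!/usr/bin/perl -w
# Sketch of scanning test output for undeclared dependencies.
use strict;

# Canned test output standing in for a real 'make test' run.
my $output = <<'END';
t/basic.....Can't locate Bar/Baz.pm in @INC (@INC contains: blib/lib)
t/basic.....Can't locate Storable.pm in @INC (@INC contains: blib/lib)
END

my %missing;
while( $output =~ m{Can't locate (\S+)\.pm in \@INC}g ) {
    (my $module = $1) =~ s{/}{::}g;   # Bar/Baz.pm -> Bar::Baz
    $missing{$module}++;
}

print "Undeclared dependency: $_\n" for sort keys %missing;
```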
> Regarding failure, I don't think this should be completely automatic.
> The testsuite should collect all non-(PASS|UNKNOWN) and present them to
> the tester in a compact form for examination. The tester then decides
> to either just send out the result or to investigate further based upon
> output from 'make' and 'make test'. That way you can start testing in
> the background and only have to put your attention to it when the test
> run is finished, and then only the (FAIL|NA)s have to be examined,
> which should be only a few.
Yes, that's basically the plan. The human load involved will be small
enough when testing the 10 or so new uploads a day. Running through
the backlog, OTOH, may get ugly.
There's one NA I handle automatically: detecting whether the current
perl is too old. If the module declares that via a well-formed
"use 5.006", it can be detected.
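That detection might look roughly like this (the pattern is my guess
at what "well-formed" means here):

```perl
#!/usr/bin/perl -w
# Sketch of spotting a minimum-perl declaration in a Makefile.PL.
use strict;

my $makefile_pl = <<'END';
use 5.006;
use ExtUtils::MakeMaker;
END

# Look for a bare 'use 5.xxx;' line.
my $min_perl;
$min_perl = $1 if $makefile_pl =~ /^\s*use\s+(5[\d._]+)\s*;/m;

if( defined $min_perl and $] < $min_perl ) {
    print "NA: needs perl $min_perl, this is $]\n";
}
else {
    print "perl version ok\n";
}
```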
> I'm fantasizing about creating a real testserver, which would
> automatically scan the mailing list, do the tests and report back the
> results
That's the plan. CPAN::Test already has a read_update_email() which
can read the CPAN module update emails and report what modules just
got uploaded. The plan is to feed that information into a smoke
testing program and get a continual test. Information is spat out via
CPAN::Test::Reporter and also fed into a local database.
> Which brings me to another point: right now, only the package name is
> sent out to the mailing list. This means that a tester has to wait a
> few hours or days until the nearest CPAN repository is in sync and the
> package can be downloaded from there. How about attaching the package
> onto the announcement mail, at least if the package size is smaller
> than a certain amount?
A much simpler solution is for CPAN testers to use ftp.funet.fi.
> And this would also allow to submit not-yet-released modules and have
> them automatically tested beforehand.
Down the road, cpan-testers could establish its own holding pen for
authors wishing to have their modules beaten with sticks before
uploading to CPAN.
--
Michael G. Schwern <[EMAIL PROTECTED]> http://www.pobox.com/~schwern/
Perl6 Quality Assurance <[EMAIL PROTECTED]> Kwalitee Is Job One
"Let's face it," said bearded Rusty Simmons, opening a can after the
race. "This is a good excuse to drink some beer." At 10:30 in the
morning? "Well, it's past noon in Dublin," said teammate Mike
[Joseph] Schwern. "It's our duty."
-- "Sure, and It's a Great Day for Irish Runners"
Newsday, Sunday, March 20, 1988