>>>>> On Sat, 22 Dec 2007 11:12:43 -0800, chromatic <[EMAIL PROTECTED]> said:

  > On Saturday 22 December 2007 05:22:05 Andreas J. Koenig wrote:
 >> 1. You get a fail report with an error message that doesn't tell you
 >>    exactly what went wrong.
 >> 
 >> 2. You rewrite your test in a way that it does tell you more.
 >> 
 >> 3. Release.
 >> 
 >> 4. If you now understand the problem, fix it, else goto 1.
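
(A minimal sketch of step 2, assuming a Test::More-based suite;
some_function() is a hypothetical stand-in for whatever the test
exercises:)

    use strict;
    use warnings;
    use Test::More tests => 1;

    my $got = some_function();

    # A bare ok($got eq 'expected') only yields "not ok 1" in the
    # report. is() shows got/expected, and diag() adds the context
    # that makes a cpantesters fail report actually tell you
    # something.
    is $got, 'expected', 'some_function returns the expected string'
        or diag "perl $], os $^O, interpreter $^X";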

  > I agree in principle, but in practice I seem to get a lot of failure
  > reports from someone who's installed Perl on Windows into a directory
  > with spaces in the name, and whatever installer there is just can't
  > cope with that situation.

It seems pretty unlikely to me that such a user has managed to
install the whole toolchain. CPAN::Reporter has a lot of dependencies
that must be installed before it can start sending reports. And if
they were good enough to get that going, I'd expect them to send only
the occasional crappy report. Let those drop on the floor. Your
source of information is the cpantesters website, where all reports
are collected; draw your educated conclusions there about what is
systematic breakage and what is a glitch.

  > I'm not sure how to write a test for "Tester's installation of Perl
  > is fatally broken and can't actually install anything."

In this case you should be able to differentiate between mail from
clueless users and mail from the testing infrastructure without
losing too much time.
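
For the spaces-in-path breakage specifically, one can at least make
the report self-explanatory. A minimal sketch, assuming Test::More
0.62 or later for BAIL_OUT (the file name is only a suggestion):

    # t/00-sanity.t
    use strict;
    use warnings;
    use Test::More tests => 1;

    # If the interpreter itself lives in a path with spaces and the
    # installer is known to choke on that, fail with a message that
    # explains itself instead of producing a mysterious build error.
    BAIL_OUT("perl lives in '$^X', a path with spaces")
        if $^X =~ / /;

    pass 'interpreter path contains no spaces';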

The really tricky cases lie elsewhere. The testing infrastructure
will sometimes send out false positives because something in it
broke, like Test::Harness or ExtUtils::MakeMaker. That's why the
testing modules need to report how they compose their results (one
way to do that from the distribution side is sketched below). So far
I (as a smoker) have recognized these glitches pretty quickly,
stopped the testing, rolled back, and informed all parties. Still,
this gives me a serious headache, because many testers will not be
able to react accordingly.
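
A minimal sketch of such self-reporting with plain Test::More diag()
(again, the file name is only a suggestion):

    # t/00-toolchain-versions.t
    use strict;
    use warnings;
    use Test::More tests => 1;

    # diag() the versions of the modules that composed this result,
    # so a false positive caused by a broken Test::Harness or
    # ExtUtils::MakeMaker can be recognized as a glitch from the
    # report alone.
    for my $mod (qw(Test::More Test::Harness ExtUtils::MakeMaker)) {
        (my $file = $mod) =~ s{::}{/}g;
        my $loaded = eval { require "$file.pm"; 1 };
        diag sprintf '%-22s %s', $mod,
            $loaded ? ($mod->VERSION || 'unknown') : 'not loadable';
    }
    pass 'toolchain versions reported';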

-- 
andreas
