On Wed, Sep 3, 2008 at 8:38 PM, chromatic <[EMAIL PROTECTED]> wrote:
> minds!", I ask one more question.  What do the tests and reports to which
> I've objected *actually* test and report?

I think there's a sort of Heisenberg or Schroedinger principle at work
here, however.  The tests both PASS and FAIL on any given platform
(the cat is alive and dead at the same time) until we actually observe
them, and the act of observation itself entangles us in the result.

Did tests fail because Test::Harness version X was used or Test::More
version Y was used?  Did tests fail because the Makefile.PL or
Build.PL didn't specify requirements correctly?

So I think the question can only ever be "did tests pass given the
entire environment in which the tests were running?"

If we adopt the new (old, according to Graham) grading, we then have
the following:

PASS --> yes
FAIL --> no
NA --> we never got that far and never will on this platform
UNKNOWN --> we never got that far for some other reason
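To make the mapping concrete, here is a minimal sketch of that four-way grading as a decision function. The field names (ran_tests, tests_passed, unsupported_platform) are purely illustrative assumptions for this sketch, not anything CPAN Testers or Test::Reporter actually exposes:

```python
# Hypothetical sketch of the proposed grading; the inputs are invented
# stand-ins for "what the smoker observed", not real report attributes.

def grade(ran_tests: bool, tests_passed: bool, unsupported_platform: bool) -> str:
    """Map one report's outcome onto the proposed four grades."""
    if ran_tests:
        # We got far enough to run the suite: a clean yes or no.
        return "PASS" if tests_passed else "FAIL"
    # We never got that far: was the platform itself the blocker?
    return "NA" if unsupported_platform else "UNKNOWN"

print(grade(True, True, False))    # PASS
print(grade(True, False, False))   # FAIL
print(grade(False, False, True))   # NA
print(grade(False, False, False))  # UNKNOWN
```

The point of the sketch is that PASS/FAIL only make sense once the suite actually ran in its full environment; everything before that point collapses into NA or UNKNOWN.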

Eventually, would it be good to break UNKNOWN down into some of the
various reasons we can detect?  Sure, but I suspect that this shift
alone is enough to reduce the pain and provide more distinct value.

David
