Summary:  if your TAP suddenly jumps from test #2 to test #29,
Test::Harness reports that tests #3 through #28 have 'failed'.  

TAPx::Harness does not report them as failures, but as parse errors
(because it's bad TAP).  Is a 'parse error' a reasonable compromise?
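
For example, given nothing but this (a made-up stream, not one of the
sample tests):

  1..29
  ok 1
  ok 2
  ok 29

Test::Harness lists 3-28 as failed; TAPx::Harness flags the gap as a
parse error instead.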

More information below, if you're curious.

Cheers,
Ovid

Running 'prove' against three of the sample tests included with
TAPx::Parser generates the following summary:

  Failed Test                Stat Wstat Total Fail  List of Failed
  ----------------------------------------------------------------
  t/sample-tests/bignum                     2   ??  ??
  t/sample-tests/bignum_many                2 1999  3-100000
  t/sample-tests/combined                  10    2  3 10
   (1 subtest UNEXPECTEDLY SUCCEEDED), 1 subtest skipped.
  Failed 3/3 test scripts. -9/14 subtests failed.
  Files=3, Tests=14,  3 wallclock secs ( 0.02 cusr +  0.02 csys =  0.04 CPU)
  Failed 3/3 test programs. -9/14 subtests failed.

A couple of interesting things to note there.  First, 'bignum' can't
figure out which tests failed at all.  Second, 'bignum_many' reports
almost 2000 failed tests even though only 11 tests were run (9 of them
with ridiculously high test numbers), and its List of Failed doesn't
seem to make much sense, though it sort of does once you look at the
test:

  print <<DUMMY;
  1..2
  ok 1
  ok 2
  ok 99997
  ok 99998
  ok 99999
  ok 100000
  ok 100001
  ok 100002
  ok 100003
  ok 100004
  ok 100005
  DUMMY

Test::Harness reports that the 'missing' test numbers are failures.
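
The bookkeeping behind that is roughly the following (a sketch of the
idea only, not Test::Harness's actual code): keep a counter of the next
expected test number and treat every number you skip over as a failure.

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Sketch only: read TAP on STDIN and collect the numbers that never
  # showed up.  Plan handling and other edge cases are deliberately
  # ignored.
  my $expected = 1;
  my @missing;
  while ( my $line = <STDIN> ) {
      next unless $line =~ /^(?:not )?ok (\d+)/;
      my $number = $1;
      push @missing, $expected .. $number - 1 if $number > $expected;
      $expected = $number + 1;
  }
  print "Missing (counted as failed): @missing\n" if @missing;

Pipe the bignum_many test above through that and @missing picks up
every skipped number, which is essentially what Test::Harness is
calling failures.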

Here's the test summary output from 'runtests' (you can see the actual
parse errors by supplying the -p switch):

  Test Summary Report
  -------------------
  t/sample-tests/bignum      (Wstat: 0 Tests: 4 Failed: 2)
    Failed tests:  136211425-136211426
    Errors encountered while parsing tap
  t/sample-tests/bignum_many (Wstat: 0 Tests: 11 Failed: 9)
    Failed tests:  99997-100005
    Errors encountered while parsing tap
  t/sample-tests/combined    (Wstat: 0 Tests: 10 Failed: 2)
    Failed tests:  3, 10
    TODO passed:   9
    Tests skipped: 7
  Files=3, Tests=25,  0 wallclock secs ( 0.09 cusr +  0.03 csys =  0.12 CPU)

In this case, I am *not* reporting the 'missing' tests as failures, but
rather as parse errors.  I think the above output makes much more
sense, but rather than just say 'this is how it works', I'd prefer to
know whether there's a reason I should report the missing tests as
having failed (that's a bit tricky to do with TAPx::Harness because I
don't get Result objects for tests which are not run).
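
If failures do turn out to be the right answer, one way to fake it
would be to synthesize the gap from the test numbers on the results
that *do* come back, with the same counter trick as above.  A rough
sketch (off the top of my head and untested, so the constructor
arguments and method names may be off):

  use strict;
  use warnings;
  use TAPx::Parser;

  # Untested sketch: slurp raw TAP from STDIN and note which test
  # numbers never produced a result object.
  my $tap    = do { local $/; <STDIN> };
  my $parser = TAPx::Parser->new( { tap => $tap } );

  my $expected = 1;
  my @never_ran;
  while ( my $result = $parser->next ) {
      next unless $result->is_test;
      my $number = $result->number;
      push @never_ran, $expected .. $number - 1 if $number > $expected;
      $expected = $number + 1;
  }
  # @never_ran could then be reported as failed, even though there are
  # no Result objects for those tests.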

--

Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
