Re: Reporting setup problems separately?

2007-07-25 Thread Gabor Szabo

SKIP might be a good idea to avoid running tests that cannot work due
to some missing prereq or bad environment or whatever, but it still
does not solve the reporting problem.

There can be calls like this

ok($ok, 'environment is ready');

and calls like this

ok($ok, 'system under test works well');

So far people have only given suggestions, but IMHO none from experience.
So does that mean others are not interested in separate reporting of
"the thing is broken" vs. "could not even execute the test",

or did I phrase my question incorrectly?

This might also be a conceptual question.

In a CPAN module it might not be a big issue.
E.g. in a database-related module, if the user who is installing the
module did not provide the necessary connection information, we can
just skip the tests.
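In Test::More that kind of skip can be expressed with `plan skip_all`
before any tests run. A minimal sketch; the TEST_DSN environment
variable is an assumption, substitute however your module actually
receives its connection information:

```perl
use strict;
use warnings;
use Test::More;

# Assumption: connection info arrives via an environment variable.
# skip_all ends the test script immediately with a skip, which the
# harness reports separately from passes and failures.
plan skip_all => 'no database connection information provided'
    unless $ENV{TEST_DSN};

plan tests => 1;
ok( 1, 'database-dependent test' );
```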

In my situation the test execution and its result is the product; this
is done in a QA department. People in the department seem to want a
clear separation between "the environment is broken, could not execute
some of the tests" and "the product is broken, ring the alarm bells".

So how do others do that?

Gabor


Re: Reporting setup problems separately?

2007-07-25 Thread David Cantrell

Gabor Szabo wrote:


So does that mean others are not interested in separate reporting of
"the thing is broken" vs. "could not even execute the test"


I'm very much interested in it.  I've not been following the rest of 
this thread, but if you're talking about reporting things like XS 
modules not compiling/linking, then I certainly think it would be useful.


However, there is a danger of false positives.  Lots of ordinary users
(never mind people doing smoke testing!) would, I imagine, be quite
likely to report a build error when trying to build, e.g., GD-$latest
when they have an ancient libgd on their system.  It's hard to tell the
difference, at least without paying significant attention to detail,
between that, which is not an error, and the genuine error of
GD-$latest failing to build against libgd-$latest.


So I suggest that these particular reports should only ever be sent to
authors who have explicitly opted in to them by, e.g., setting an
option in META.yml.


--
David Cantrell


Re: Reporting setup problems separately?

2007-07-25 Thread A. Pagaltzis
* Gabor Szabo [EMAIL PROTECTED] [2007-07-24 08:45]:
 That is, there is a phase where I setup the test environment
 and then there is the actual test. Obviously each phase can
 consist of several ok() functions.

Right. The setup phase is called the “fixture” in testing lingo
and it should not be a test. If the fixture setup fails, it makes
no sense to try to continue and execute any further tests, as
they’ll all break anyway. So you do the setup prior to any tests
and `die` if any of it fails.
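A sketch of that pattern, assuming a hypothetical setup_environment()
routine and TEST_ENV_READY flag standing in for whatever fixture work
your suite really does:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical setup routine; stands in for whatever fixture work
# the suite needs (database connection, temp files, daemons, ...).
sub setup_environment {
    return $ENV{TEST_ENV_READY};    # assumption: a flag signals readiness
}

# Fixture phase: not a test. If it fails, die before planning any
# tests; the harness then reports a dubious/aborted run, which is
# distinct from a test failure in the product itself.
setup_environment()
    or die "setup failed: test environment is not ready\n";

plan tests => 1;
ok( 1, 'system under test works well' );
```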

 I would like to be able to display them in a different way.

If you `die`, the harness will report it differently.


* David Golden [EMAIL PROTECTED] [2007-07-25 04:40]:
 Or use a SKIP block.

Not in this case. SKIP will continue silently without registering
a failure, but this is an issue of setup failure for non-optional
tests, for which you want a loud complaint.

The right approach is `die`.
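For genuinely optional tests, by contrast, a SKIP block is the right
tool. A minimal sketch; the HAS_OPTIONAL_FEATURE check is an
assumption, substitute your real feature detection:

```perl
use strict;
use warnings;
use Test::More tests => 2;

ok( 1, 'required test that always runs' );

SKIP: {
    # Assumption: an env flag tells us whether the optional
    # feature is present. skip records 1 skipped test and
    # jumps past the block without registering a failure.
    skip 'optional feature not available', 1
        unless $ENV{HAS_OPTIONAL_FEATURE};

    ok( 1, 'test of the optional feature' );
}
```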

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/