Re: Reporting setup problems separately?

2007-07-25 Thread A. Pagaltzis
* Gabor Szabo <[EMAIL PROTECTED]> [2007-07-24 08:45]:
> That is, there is a phase where I set up the test environment
> and then there is the actual test. Obviously each phase can
> consist of several ok() functions.

Right. The setup phase is called the “fixture” in testing lingo
and it should not be a test. If the fixture setup fails, it makes
no sense to try to continue and execute any further tests, as
they’ll all break anyway. So you do the setup prior to any tests
and `die` if any of it fails.
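
A minimal sketch of that shape, reusing the prepare_*() and test_*()
calls from your original mail:

use strict;
use warnings;
use Test::More tests => 2;

# Fixture setup: these are not tests, so complain loudly and stop if
# anything fails.
prepare_first()  or die "setup failed: could not prepare first";
prepare_second() or die "setup failed: could not prepare second";

# The actual tests.
ok( test_first(),  "first is working" );
ok( test_second(), "second is working" );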

> I would like to be able to display them in a different way.

If you `die`, the harness will report it differently.


* David Golden <[EMAIL PROTECTED]> [2007-07-25 04:40]:
> Or use a SKIP block.

Not in this case. SKIP will continue silently without registering
a failure, but this is an issue of setup failure for non-optional
tests, for which you want a loud complaint.

The right approach is `die`.
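
For contrast, a sketch of what the SKIP-based version would look like,
and why it stays quiet:

use strict;
use warnings;
use Test::More tests => 2;

SKIP: {
    skip "could not prepare first", 2 unless prepare_first();

    ok( test_first(),  "first is working" );
    ok( test_second(), "second is working" );
}

# If prepare_first() fails, both tests are skipped, the plan is still
# satisfied and the script passes -- no loud complaint anywhere.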

Regards,
-- 
Aristotle Pagaltzis // 


Re: Reporting setup problems separately?

2007-07-25 Thread David Cantrell

Gabor Szabo wrote:


> So does that mean others are not interested in separate reporting of
> "thing is broken" vs "could not even execute test"


I'm very much interested in it.  I've not been following the rest of 
this thread, but if you're talking about reporting things like XS 
modules not compiling/linking, then I certainly think it would be useful.


However, there is a danger of false positives.  Lots of ordinary users 
(never mind people doing smoke testing!) would, I imagine, be quite 
likely to report a build "error" when trying to build, e.g., GD-$latest 
when they have an ancient libgd on their system.  It's hard to tell the 
difference, at least without paying significant attention to detail, 
between that, which is not an error, and the genuine error of GD-$latest 
failing to build against libgd-$latest.


So I suggest that these particular reports should only ever be sent to 
authors who have explicitly opted in to them by, e.g., setting an option 
in META.yml.
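
There is no standard field for that today, so it would have to be a
made-up, non-standard key; something like (the x_ prefix here just
marks it as custom):

# hypothetical, non-standard key -- not part of any META.yml spec
name: GD
x_report_setup_failures: 1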


--
David Cantrell


Re: Reporting setup problems separately?

2007-07-24 Thread Gabor Szabo

SKIP might be a good idea to avoid running tests that cannot work due
to some missing prereq, a bad environment or whatever, but it still
does not solve the reporting problem.

There can be calls like this:

ok($ok, "environment is ready");

and calls like this:

ok($ok, "system under test works well");

So far people have only given suggestions, but IMHO none from experience.
So does that mean others are not interested in separate reporting of
"thing is broken" vs "could not even execute test"

or did I phrase my question incorrectly?

Maybe this is a conceptual question as well.

In a CPAN module it might not be a big issue: e.g. in a database-related
module, if the user who is installing the module did not provide the
necessary connection information, we can just skip the tests.

In my situation the test execution and its result is the product; this
is done in a QA department. People in the department seem to want a
clear separation of "the environment is broken, could not execute some
of the tests" and "the product is broken, ring the alarm bells".

So how do others do that?

Gabor


Re: Reporting setup problems separately?

2007-07-24 Thread David Golden

On 7/24/07, Adam Kennedy <[EMAIL PROTECTED]> wrote:

> He's doing it within a single test script; BAIL_OUT is for the entire
> series of test scripts.
>
> "die" is BAIL_OUT for a single test :)


That works.  Or use a SKIP block.

David


Re: Reporting setup problems separately?

2007-07-24 Thread Adam Kennedy

David Golden wrote:

> On 7/24/07, Gabor Szabo <[EMAIL PROTECTED]> wrote:
>> As it is written now, both setup-phase failures and
>> real failures are displayed in the same way.
>> When it fails I know something went wrong but I don't know if
>> it is in the test environment (e.g. not enough disk space) or it is
>> in the product I am testing.
>
> If the setup phase fails, is there any point to continuing the tests
> and showing the real failures?  That seems like an ideal situation for
> BAIL_OUT($why).


He's doing it within a single test script; BAIL_OUT is for the entire
series of test scripts.


"die" is BAIL_OUT for a single test :)

And I'm afraid the only thing that comes to mind for me is a special 
string in either OK or the diagnostics.
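
For instance, with "SETUP-FAILURE:" as an arbitrary marker string:

ok( prepare_first(), "first prepared" )
    or diag("SETUP-FAILURE: could not prepare first");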


Adam K


Re: Reporting setup problems separately?

2007-07-24 Thread David Golden

On 7/24/07, Gabor Szabo <[EMAIL PROTECTED]> wrote:

> As it is written now, both setup-phase failures and
> real failures are displayed in the same way.
> When it fails I know something went wrong but I don't know if
> it is in the test environment (e.g. not enough disk space) or it is
> in the product I am testing.


If the setup phase fails, is there any point to continuing the tests
and showing the real failures?  That seems like an ideal situation for
BAIL_OUT($why).
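
As a minimal sketch, reusing prepare_first() from the original mail:

use strict;
use warnings;
use Test::More tests => 2;

# Stop the entire test run, not just this script, if setup fails.
prepare_first()
    or BAIL_OUT("test environment is broken: could not prepare first");

ok( test_first(),  "first is working" );
ok( test_second(), "second is working" );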

Regards,
David Golden


Reporting setup problems separately?

2007-07-23 Thread Gabor Szabo

Hi,

I have a test that looks like this:

ok(prepare_first(), "first prepared");
ok(test_first(), "first is working");

ok(prepare_second(), "second prepared");
ok(test_second(), "second is working");


That is, there is a phase where I set up the test environment
and then there is the actual test. Obviously each phase can
consist of several ok() functions.

As it is written now, both setup-phase failures and
real failures are displayed in the same way.
When it fails I know something went wrong but I don't know if
it is in the test environment (e.g. not enough disk space) or it is
in the product I am testing.

I would like to be able to display them in a different way.
Currently the only thing I could think of is making sure the label of each ok()
call in the setup phase starts with the word "CONFIG" and then letting the
harness understand this.
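
Roughly, a hypothetical config_ok() wrapper (not an existing Test::More
function) along those lines:

use strict;
use warnings;
use Test::More tests => 4;

# Hypothetical helper: prefix the label so a custom harness could
# separate setup checks from real tests.
sub config_ok {
    my ( $ok, $label ) = @_;
    # report failures at the caller's line, not inside this wrapper
    local $Test::Builder::Level = $Test::Builder::Level + 1;
    return ok( $ok, "CONFIG: $label" );
}

config_ok( prepare_first(),  "first prepared" );
ok(        test_first(),     "first is working" );

config_ok( prepare_second(), "second prepared" );
ok(        test_second(),    "second is working" );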

I wonder how others do this?

Gabor