# from Chris Dolan
# on Sunday 02 December 2007 17:35:
The problem with skipped tests is that they're easier for developers
to ignore than TODO tests.
This is true, but if the test can correctly detect whether it should
skip, that's probably a better way to go. An option for the
*developer*
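A test that detects for itself whether it should skip, along the lines suggested above, might be sketched like this with core Test::More; `Some::Optional::Module` is a placeholder name, not a module from this thread:

```perl
use Test::More tests => 2;

SKIP: {
    # Probe for the optional dependency instead of skipping unconditionally,
    # so the tests still run wherever the module happens to be available.
    eval { require Some::Optional::Module };
    skip "Some::Optional::Module not installed", 2 if $@;

    ok( Some::Optional::Module->can('new'), 'constructor exists' );
    ok( defined Some::Optional::Module->VERSION, 'module reports a version' );
}
```

The point is that the skip condition is computed at run time, so the tests are exercised on every machine that can support them, rather than being silently ignored everywhere.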
* Ovid [EMAIL PROTECTED] [2007-12-02 16:50]:
Breaking the toolchain is bad.
You can almost imagine Curtis murmuring those words even in
his sleep…
Regards,
--
Aristotle Pagaltzis // http://plasmasturm.org/
On 3 Dec 2007, at 04:34, Michael G Schwern wrote:
[snip]
This doesn't mean that people don't use dies_ok() when they should use
throws_ok(). Every tool is open to abuse. The solution is not to remove
the tool, but to figure out why it's being abused. Maybe the answer is as
simple as
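The dies_ok()/throws_ok() distinction can be sketched with plain eval and core Test::More, without Test::Exception itself; frob() is a made-up function standing in for real code that throws:

```perl
use Test::More tests => 2;

sub frob { die "invalid widget\n" }   # hypothetical code under test

eval { frob() };

# dies_ok() only verifies that the code died at all:
ok( $@, 'frob() died' );

# throws_ok() additionally verifies *what* was thrown, which is
# usually the check the author actually wants:
like( $@, qr/invalid widget/, 'died with the expected error' );
```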
On 3 Dec 2007, at 10:26, A. Pagaltzis wrote:
* Ovid [EMAIL PROTECTED] [2007-12-02 16:50]:
Breaking the toolchain is bad.
You can almost imagine Curtis murmuring those words even in
his sleep…
I have lost count of the number of times Andy/Ovid said this over the
LPW weekend :-)
Adrian
# from nadim khemir
# on Monday 03 December 2007 15:11:
There is no simple solution to this problem.
There is, but the data needs to be more complicated than a boolean.
Please. Calling it failure just gets everyone hung up on semantics
(and rightly so, because computers are awfully picky
On Sunday 02 December 2007 18:48, Chris Dolan wrote:
...
In this fashion, we use TODO tests to track when bugs in our
dependent packages get fixed, and therefore when we can remove the
corresponding workarounds.
Chris
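The pattern described above might be sketched like this; the dependency name and the probe are invented for illustration:

```perl
use Test::More tests => 1;

# Stand-in probe for a bug in a dependency; starts returning true
# once the upstream package is fixed.
sub upstream_handles_wide_chars { 0 }

TODO: {
    local $TODO = "Upstream::Dep mangles wide characters; workaround in place";

    # While the bug exists this fails, which TODO reports as expected.
    # Once the dependency is fixed it passes unexpectedly, which is the
    # cue to remove the workaround.
    ok( upstream_handles_wide_chars(), 'upstream fixed, drop our workaround' );
}
```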
This discussion is interesting, and it's always educational to understand why
other
So I read two primary statements here.
1) Anything unexpected is suspicious. This includes unexpected success.
2) Anything unexpected should be reported back to the author.
The first is controversial, and leads to the conclusion that TODO passes
should fail.
The second is not controversial,
use Test::More;
pass();
plan tests => 2;
pass();
Why shouldn't this work? Currently you get a "You tried to run a test without
a plan" error, but what is it really protecting the test author from?
Historically, there was a clear technical reason. It used to be that
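For what it's worth, Test::More later grew done_testing() (in 0.88, after this thread), which permits exactly this kind of deferred plan; a sketch on a modern Test::More:

```perl
use Test::More;   # note: no plan declared up front

pass('first test');
pass('second test');

# Declare the plan at the end, once the test count is known.
done_testing(2);
```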
Michael G Schwern [EMAIL PROTECTED] wrote:
It also makes it technically possible to allow the test to change its plan
mid-stream, though the consequences and interface for that do require some
thought.
With some sugar, that could actually be quite handy for something like
test blocks. E.g.:
# from David Golden
# on Monday 03 December 2007 19:55:
With some sugar, that could actually be quite handy for something like
test blocks. E.g.:
{
    plan add => 2;
    ok( 1, 'wibble' );
    ok( 1, 'wobble' );
}
or maybe make the block a sub
block {
    subplan 2;
    ok( 1, 'wibble' );
    ok( 1, 'wobble' );
};
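The block-with-its-own-plan idea sketched above is close to what Test::More later shipped as subtest() (in 0.94, after this thread); a sketch on a modern Test::More:

```perl
use Test::More tests => 1;

# Each subtest carries its own nested plan and counts as one
# top-level test, so the outer plan stays simple.
subtest 'wibble and wobble' => sub {
    plan tests => 2;
    ok( 1, 'wibble' );
    ok( 1, 'wobble' );
};
```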
David Golden wrote:
Michael G Schwern [EMAIL PROTECTED] wrote:
It also makes it technically possible to allow the test to change its plan
mid-stream, though the consequences and interface for that do require some
thought.
With some sugar, that could actually be quite handy for something like
test blocks.
Eric Wilhelm wrote:
# from David Golden
# on Monday 03 December 2007 19:55:
With some sugar, that could actually be quite handy for something like
test blocks. E.g.:
{
    plan add => 2;
    ok( 1, 'wibble' );
    ok( 1, 'wobble' );
}
or maybe make the block a sub
block {
subplan 2;
* Michael G Schwern [EMAIL PROTECTED] [2007-12-04 03:00]:
So I read two primary statements here.
1) Anything unexpected is suspicious. This includes unexpected
success.
2) Anything unexpected should be reported back to the author.
The first is controversial, and leads to the conclusion that TODO passes
should fail.