Ovid writes:
> it's getting harder to find out which TODO tests are unexpectedly
> passing.
> [...]
> Suggestions?
I am spoiling my own YAPC::EU 2009 talk here but try this toolchain:
# the toolchain
$ cpan App::DPath
# prepare example project which contains passing
Ovid wrote:
> - Original Message
>> From: chromatic
>
>> Add diagnostics to TODO tests and let your test harness do what it's
>> supposed to do. Shoving yet more optional behavior in the test process
>> continues to violate the reasons for having separate test processes and
>> TAP analyzers.
# from Ovid
# on Tuesday 14 July 2009 03:33:
>> Fork/branch Test::Builder and make it work yourself. When it's ready
>> and usable, ask Schwern to evaluate, improve and merge.
>>
>> Code = Conversation. :)
>
>I know. I've thought about that, but truth be told, I'm really
> getting burnt out with the Perl community right now.
On Tue, Jul 14, 2009 at 5:43 AM, Ovid wrote:
> We have no diagnostics. We've never had diagnostics (the ad-hoc things
> going to STDERR don't count because they can't be synched or reliably
> parsed). Thus, I can't add diagnostics to the TODO tests until Schwe
On Tuesday 14 July 2009 02:43:44 Ovid wrote:
> Thus, I'm trying to think of a way of solving my problem now, not at some
> hypothetical date in the future.
Next option: write your own test harness which dies when it encounters a bonus
test. This should take you less than an afternoon.
If that
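The harness-that-dies-on-bonus-tests idea can be sketched with TAP::Parser (bundled with Test::Harness 3.x, in core since Perl 5.10.1). This is a minimal, hypothetical checker, not anyone's production code; the inline TAP stream and test names are invented for illustration, and a real harness would pass `{ source => $test_file }` instead:

```perl
#!/usr/bin/perl
# Sketch: scan a TAP stream and complain about any "bonus" test,
# i.e. a TODO test that unexpectedly passes.
use strict;
use warnings;
use TAP::Parser;    # ships with Test::Harness 3.x

# Inline TAP for self-containment (hypothetical content).
my $tap = <<'TAP';
1..2
ok 1 - ordinary test
ok 2 - half-done feature # TODO not implemented yet
TAP

my @bonus;
my $parser = TAP::Parser->new({ tap => $tap });
while ( my $result = $parser->next ) {
    next unless $result->is_test;
    # todo_passed() is true when a test carries a TODO directive
    # but actually passed.
    push @bonus, $result->description if $result->todo_passed;
}

warn "Unexpectedly passing TODO test(s): @bonus\n" if @bonus;
```

Swapping the `warn` for a `die` gives the "dies when it encounters a bonus test" behaviour suggested above.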
On Tue, Jul 14, 2009 at 03:33:29AM -0700, Ovid wrote:
> I know. I've thought about that, but truth be told, I'm really getting burnt
> out with the Perl community right now. Lots of people are being rude,
> thinking that being "right" is all they need to justify being arrogant and
> it's sapp
- Original Message
> From: Salve J Nilsen
>
> Fork/branch Test::Builder and make it work yourself. When it's ready and
> usable,
> ask Schwern to evaluate, improve and merge.
>
> Code = Conversation. :)
I know. I've thought about that, but truth be told, I'm really getting burnt
out with the Perl community right now.
Ovid said:
- Original Message
From: chromatic
Add diagnostics to TODO tests and let your test harness do what
it's supposed to do. Shoving yet more optional behavior in the
test process continues to violate the reasons for having separate
test processes and TAP analyzers.
- Original Message
> From: chromatic
> Add diagnostics to TODO tests and let your test harness do what it's supposed
> to do. Shoving yet more optional behavior in the test process continues to
> violate the reasons for having separate test processes and TAP analyzers.
On Monday 13 July 2009 06:56:15 Ovid wrote:
> We currently have over 30,000 tests in our system. It's getting harder to
> manage them. In particular, it's getting harder to find out which TODO
> tests are unexpectedly passing. It would be handy to have some option to
> force TODO tests to die or bailout if they pass
ing aggregated is needed. Thus, I thought
> about a BAILOUT or forced failure for the TODO at that point.
Can't you just put the TAP into a database, and do a quick query to
extract the file, test number, and line of passing TODO tests? Surely
some code might need to be written, but
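The database suggestion can be sketched with core modules only: walk the TAP per file, keep a queryable index of (file, test number, description) rows for passing TODO tests, and swap the in-memory structure for an SQLite table (via DBD::SQLite) if persistence is wanted. The file names and TAP content here are made up for illustration:

```perl
use strict;
use warnings;
use TAP::Parser;    # ships with Test::Harness 3.x

# Hypothetical per-file TAP streams standing in for real test runs.
my %tap_by_file = (
    't/alpha.t' => "1..2\nok 1\nok 2 - old bug # TODO still open\n",
    't/beta.t'  => "1..1\nok 1 - fine\n",
);

# The "query": every passing TODO test, with file and test number.
my @rows;
for my $file ( sort keys %tap_by_file ) {
    my $parser = TAP::Parser->new({ tap => $tap_by_file{$file} });
    while ( my $r = $parser->next ) {
        next unless $r->is_test && $r->todo_passed;
        push @rows, [ $file, $r->number, $r->description ];
    }
}

printf "%s test %s: %s\n", @$_ for @rows;
</imports>
```

A real setup would INSERT each row into a database instead of pushing onto `@rows`, making the "which TODO tests pass" question a one-line SELECT.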
Gabor Szabo wrote:
AFAIK due to the number of tests it won't work well in Smolder - but I
have not tried it.
I was referring to a future version of it ;-)
It's worth a try. Our main test suite at $work has 23,000+ tests and Smolder
handles it just fine.
--
Michael Peters
Plus Three, LP
On Mon, Jul 13, 2009 at 5:10 PM, Michael Peters wrote:
> Gabor Szabo wrote:
>
>> I think it would be better to have a tool (Smolder) be able to display
>> various drill-downs from the aggregated test report.
>
> If you want to see what Smolder would do to your tests, create a TAP archive
> and then
Gabor Szabo wrote:
I think it would be better to have a tool (Smolder) be able to display
various drill-downs from the aggregated test report.
If you want to see what Smolder would do to your tests, create a TAP archive and
then you can upload it to the "Junk" project at http://smolder.plusth
- Original Message
> From: Gabor Szabo
>
> I think it would be better to have a tool (Smolder) be able to display
> various drill-downs from the aggregated test report.
> e.g. list of all the TODOs
> list of all the TODOs that pass
> etc...
How would Smolder (which we're not using since
On Mon, Jul 13, 2009 at 4:56 PM, Ovid wrote:
>
> We currently have over 30,000 tests in our system. It's getting harder to
> manage them. In particular, it's getting harder to find out which TODO tests
> are unexpectedly passing. It would be handy to have some option to force
> TODO tests to die or bailout if they pass
We currently have over 30,000 tests in our system. It's getting harder to
manage them. In particular, it's getting harder to find out which TODO tests
are unexpectedly passing. It would be handy to have some option to force TODO
tests to die or bailout if they pass (note that thi
* chromatic <[EMAIL PROTECTED]> [2008-05-18 07:20]:
> People already have to modify the TODO test to add whatever
> kind of positive assertion you postulate; why is writing a
> separate test a barrier?
Because it’s hidden behind an internal interface that would have
to be exposed? Or any other rea
On Saturday 17 May 2008 20:48:24 Aristotle Pagaltzis wrote:
> You’re not following.
>
> 1. There is non-broken code which isn’t being tested directly.
>
> 2. There is a test that ensures its correctness, but only
>indirectly, as part of testing something else.
>
> 3. That something else is cur
* Michael G Schwern <[EMAIL PROTECTED]> [2008-05-18 05:30]:
> Aristotle Pagaltzis wrote:
>>> As a technique, paying attention to how broken code changes,
>>> why does it matter that broken code breaks differently? What
>>> does this information tell you that might fix code?
>>
>> It means there is a known internal dependency on some other part
>> of the code that is not being
Aristotle Pagaltzis wrote:
As a technique, paying attention to how broken code changes,
why does it matter that broken code breaks differently? What
does this information tell you that might fix code?
It means there is a known internal dependency on some other part
of the code that is not being
* Michael G Schwern <[EMAIL PROTECTED]> [2008-05-14 08:50]:
> As I understand it, you want to know when broken code breaks
> differently.
Indeed.
> I can sort of see the point as a regression test... sort of.
> But if you're at the point of fussiness that broken code has to
> break in a specific
Smylers wrote:
Bram writes:
At the moment foo() returns 3.
Time passes and code changes.
Now there are 3 options:
foo() returns 1, this will result in 'unexpected todo test passed'
being outputted;
foo() returns 3, no special output is produced;
foo() returns 4, no special output is produced.
On Monday 12 May 2008 20.41.58 Bram wrote:
> > Leave your todo test as it was to start with.
> > Create a new test file "development_values_that_should_not_change.t" in
> > your developer test directory (that's not just for pod, right).
> Which is not practical at all.
> That means maintaining an ext
On Mon, May 12, 2008 at 08:41:58PM +0200, Bram wrote:
> I'm not saying that it shouldn't change.
> What I'm saying is that the value may change but that if it changes I
> would like to be informed about it.
>
>
> >Leave your todo test as it was to start with.
> >
> >Create a new test file "dev
On Monday 12 May 2008 11:41:58 Bram wrote:
> What I'm suggesting is that it outputs something similar for TODO test
> that return an unexpected result.
TODO means "if it's anything but this, tell me". You're trying to extract two
bits of information from one bit of data. Zombie Claude Shannon
Quoting nadim khemir <[EMAIL PROTECTED]>:
On Monday 12 May 2008 16.23.46 Bram wrote:
Then what happens if it starts returning 4?
Then the test script will report a FAIL, and users will/might start
ignoring failures.
Which is a bad thing (IMHO).
The todo test indicates that something doesn't behave as it should.
On Monday 12 May 2008 16.23.46 Bram wrote:
> Then what happens if it starts returning 4?
> Then the test script will report a FAIL, and users will/might start
> ignoring failures.
> Which is a bad thing (IMHO).
>
> The todo test indicates that something doesn't behave as it should.
> If it suddenly
--- Fergal Daly <[EMAIL PROTECTED]> wrote:
> I almost posted this a few hours ago but then decided not to since
> I'm
> not sure I like the thread at all. I'm posting it now because while
> I'm not a huge fan of the idea itself, the problems you list are due
> to a lazy interpretation of the idea
2008/5/12 Ovid <[EMAIL PROTECTED]>:
> --- Smylers <[EMAIL PROTECTED]> wrote:
>
>> If you believe that (until the TODO is done) foo will consistently
>> return 3, and you wish to be alerted if it suddenly starts returning
>> 4,
>> then surely you can do that with a non-TODO test which checks for its
>> being 3?
Quoting Smylers <[EMAIL PROTECTED]>:
Bram writes:
At the moment foo() returns 3.
Time passes and code changes.
Now there are 3 options:
foo() returns 1, this will result in 'unexpected todo test passed'
being outputted;
foo() returns 3, no special output is produced;
foo() returns 4, no special output is produced.
--- Smylers <[EMAIL PROTECTED]> wrote:
> If you believe that (until the TODO is done) foo will consistently
> return 3, and you wish to be alerted if it suddenly starts returning
> 4,
> then surely you can do that with a non-TODO test which checks for its
> being 3?
Sure you can do that:
my $r
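Ovid's reply is cut off at `my $r`, but the pairing Smylers describes might look like this sketch: a plain test pinning the current (known-wrong) value next to the TODO test for the desired one. `foo()` and its return values are the thread's hypothetical example, not real code:

```perl
use strict;
use warnings;
use Test::More tests => 2;

sub foo { 3 }    # stand-in: currently returns the wrong value

# Pin today's known-wrong behaviour: this fails loudly if foo() drifts
# from 3 to, say, 4 -- the alert Bram is asking for.
is( foo(), 3, 'foo() still returns the known-wrong value 3' );

TODO: {
    local $TODO = 'foo() should eventually return 1';
    is( foo(), 1, 'foo() returns the correct value' );
}
```

When the TODO is finally done, the first test starts failing and gets deleted along with the TODO wrapper.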
Bram writes:
> At the moment foo() returns 3.
>
> Time passes and code changes.
>
> Now there are 3 options:
>
> foo() returns 1, this will result in 'unexpected todo test passed'
> being outputted;
> foo() returns 3, no special output is produced;
> foo() returns 4, no special output is produced.
* Ovid <[EMAIL PROTECTED]> [2008-05-12 11:35]:
> Alternatively, persistent TAP could potentially track TODO
> results and handle the $WAS for you, but this is quite a ways
> off and has the problem that we cannot always identify which
> tests are which.
Plus, you still need a way to specify which
ently complaining to Schwern about the issue with
TODO tests but didn't see this obvious solution :)
Case in point: working on a codebase once where my only unusual test
output was a TODO test and I happened to notice that the failure
changed after some refactoring. This in turn led me to di
[This idea was first submitted to p5p... See:
http://www.nntp.perl.org/group/perl.perl5.porters/2008/05/msg136540.html
]
While adding some todo tests (for t/op/range.t) I ran into some
limitations (IMHO).
Assume the following TODO test:
{
local $TODO = "test that foo() retu
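The quoted test is cut off above; from the rest of the thread (foo() currently returns 3, while the TODO expects 1) it was presumably something along these lines. This is a hypothetical reconstruction, not Bram's actual code:

```perl
use strict;
use warnings;
use Test::More tests => 1;

sub foo { 3 }    # hypothetical: the broken implementation under discussion

TODO: {
    local $TODO = "test that foo() returns 1";
    is( foo(), 1, 'foo() should return 1' );
}
```

As written, this emits `not ok 1 ... # TODO`, which the harness treats as an expected failure; Bram's complaint is that a change from 3 to 4 produces no special output at all.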
test script
> changes the output of
> TODO tests in Test::Harness.
>
> == begin test.pl==
> use strict;
> use warnings;
> use lib '../../perl/lib';
> use Test::More;
> use Test::Files;
>
> plan tests => 2;
> TODO: {
> local $TODO = "TODO
--- Michael G Schwern <[EMAIL PROTECTED]> wrote:
> Julien Beasley wrote:
> > Hi,
> >
> > I've found that using Test::Files in a test script
> changes the output of
> > TODO tests in Test::Harness.
>
> Here's the problem.
>
>
Julien Beasley wrote:
Hi,
I've found that using Test::Files in a test script changes the output of
TODO tests in Test::Harness.
Here's the problem.
$ perl -wle 'use Test::Files; print Test::Builder->new->exported_to'
Test::Files
exported_to() is the mechanism Test
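Schwern's point can be seen directly with public Test::Builder methods: todo() looks for `$TODO` in the package recorded by exported_to(), so a module that resets it (as Test::Files did at the time) makes your `$TODO` invisible. The package name `Some::Other::Module` is hypothetical, and the exact fallback behaviour on very old Test::Builder versions is an assumption:

```perl
use strict;
use warnings;
use Test::Builder;

my $tb = Test::Builder->new;
our $TODO;

# With exported_to pointing at our package, Test::Builder finds $TODO.
$tb->exported_to('main');
{
    local $TODO = 'not done yet';
    print "first: ",  ( $tb->todo // 'nothing' ), "\n";
}

# A module that reset exported_to (as Test::Files once did) hides it.
$tb->exported_to('Some::Other::Module');    # hypothetical package
{
    local $TODO = 'not done yet';
    print "second: ", ( $tb->todo // 'nothing' ), "\n";
}

$tb->ok( 1, 'dummy assertion so the session has a valid plan' );
$tb->done_testing;
```

The first lookup finds the TODO reason; the second does not, which is exactly why TODO output changed when Test::Files was loaded.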
Hi,
I've found that using Test::Files in a test script changes the output of
TODO tests in Test::Harness.
== begin test.pl==
use strict;
use warnings;
use lib '../../perl/lib';
use Test::More;
use Test::Files;
plan tests => 2;
TODO: {
local $TODO = "TODO Testing";
- Original Message
From: Michael G Schwern <[EMAIL PROTECTED]>
> > Ah, crud. I need to support it then. Bummer. I'll try to get a release
> > out there when I can, then.
>
> Don't bother, it's a poorly designed feature and likely unused. I don't want
> to see it pushed forward into T
On 8 Sep 2006, at 01:52, Michael G Schwern wrote:
Adrian Howard wrote:
Maybe this is the right time to think about mechanisms supporting
different versions of the TAP protocol?
http://perl-qa.yi.org/index.php/TAP_version
I meant in the context of Ovid's TAPx::Parser code.
Rather than add
Adrian Howard wrote:
Maybe this is the right time to think about mechanisms supporting
different versions of the TAP protocol?
http://perl-qa.yi.org/index.php/TAP_version
On 6 Sep 2006, at 14:33, Ovid wrote:
- Original Message
From: Sébastien Aperghis-Tramoni <[EMAIL PROTECTED]>
Hmm, that's curious. However, if it's undocumented I would argue against
supporting it right now. What benefit does it gain us?
This comes from the Good Old Test.pm module:
> > ok 2
> > not ok 3
> > ok 4
> > ok 5
> > DUMMY_TEST
> >
> > As one can see, the "1..5" plan is followed by the "todo 3 2;"
> > directive. This is supposed to indicate something about plan ahead todo
> > tests. (Instead of the "# TODO" directives in the
mer I can
believe. The latter... I'd be interested to know of a single use in the wild. And if
there is anyone using it they'd likely be better served by moving to Test::Legacy and
inline TODO tests. Or backporting inline TODO tests to Test.pm.
Ah, crud. I need to support it then
Shlomi Fish wrote:
"t/sample-tests/todo" in the Test-Harness distribution reads:
<<<<<<<<<<<
print <<DUMMY_TEST;
1..5
todo 3 2;
ok 1
ok 2
not ok 3
ok 4
ok 5
DUMMY_TEST
>>>>>>>>>>>
As one can see, the "1..5" plan is followed by the "todo 3 2;" directive.
This is supposed to indicate something about plan ahead todo tests.
- Original Message
From: Sébastien Aperghis-Tramoni <[EMAIL PROTECTED]>
> > Hmm, that's curious. However, if it's undocumented I would argue against
> > supporting it right now. What benefit does it gain us?
>
> This comes from the Good Old Test.pm module:
>
> $ perl -MTest -e 'plan test
Selon Ovid <[EMAIL PROTECTED]>:
> > As one can see, the "1..5" plan is followed by the "todo 3 2;" directive.
> > This is supposed to indicate something about plan ahead todo tests.
> > [...]
> > I'd like to know what I should do about th
On Sep 6, 2006, at 3:59 AM, Ovid wrote:
Hmm, that's curious. However, if it's undocumented I would argue
against supporting it right now. What benefit does it gain us?
The flip to that is that we've always said that Test::Harness is the
reference implementation. In a way, it is documented
> DUMMY_TEST
> >>>>>>>>>>>
>
> As one can see, the "1..5" plan is followed by the "todo 3 2;" directive.
> This is supposed to indicate something about plan ahead todo tests. (Instead
> of the "# TODO" directives in the
"t/sample-tests/todo" in the Test-Harness distribution reads:
<<<<<<<<<<<
print <<DUMMY_TEST;
1..5
todo 3 2;
ok 1
ok 2
not ok 3
ok 4
ok 5
DUMMY_TEST
>>>>>>>>>>>
As one can see, the "1..5" plan is followed by the "todo 3 2;" directive.
This is supposed to indicate something about plan ahead todo tests.
ded
> > TODO PASSED tests 1-2
> >
> > All tests successful (1 subtest UNEXPECTEDLY SUCCEEDED).
> > Passed Test Stat Wstat Total Pass Passed List of Passed
> > ---
> > th_test.t 21 50.00% 1-2
> > Files=1, Tests=2, 0 wallclock secs ( 0.11 cusr +
--
> th_test.t 21 50.00% 1-2
> Files=1, Tests=2, 0 wallclock secs ( 0.11 cusr + 0.01 csys = 0.12
> CPU)
>
> The line starting TODO PASSED shows all TODO tests, not those that
> unexpectedly succeeded, which confused me a bit. Also, the final
>
t of Passed
> ---
> th_test.t 21 50.00% 1-2
> Files=1, Tests=2, 0 wallclock secs ( 0.11 cusr + 0.01 csys = 0.12
> CPU)
>
> The line starting TODO PASSED shows all TODO tests
Test            Stat Wstat Total Pass  Passed  List of Passed
-------------------------------------------------------------
th_test.t                      2    1  50.00%  1-2
Files=1, Tests=2, 0 wallclock secs ( 0.11 cusr + 0.01 csys = 0.12 CPU)
The line starting TODO PASSED shows all TODO tests, not those that
unexpectedly succeeded, which confused me a bit.
On 4/19/06, Andy Lester <[EMAIL PROTECTED]> wrote:
> > BTW, the patch only shows TODO pass status when no failures occur.
> >
> > Oh and obviously all of Test::Harness'es tests pass. :-)
>
> This patch doesn't apply against my latest dev version of
> Test::Harness. I'm going to have to massage it
BTW, the patch only shows TODO pass status when no failures occur.
Oh and obviously all of Test::Harness'es tests pass. :-)
This patch doesn't apply against my latest dev version of
Test::Harness. I'm going to have to massage it manually.
But I like the idea. Thanks.
xoa
--
Andy Lester
If so, I think that that would be useful, as it would mean that any (real)
> TODO test that unexpectedly started passing would be noticed.
>
> I bring this up because we seem to have inadvertently fixed really old regexp
> bugs that we didn't have a test case for, but I realise that
One of my unwritten TODOs is to go through the current Perlbug
queue and
write test cases for all the currently testable problems.
Hey! That's one of my unwritten TODOs, too!
In the long term, however, it would be great if Test::Harness recognized
individual TODO test cases that passed an
any (real)
TODO test that unexpectedly started passing would be noticed.
I bring this up because we seem to have inadvertently fixed really old regexp
bugs that we didn't have a test case for, but I realise that right now adding
TODO tests wouldn't actually have been *that* useful - if
On Tue, Apr 18, 2006 at 01:21:37PM -0500, Steve Peters wrote:
> One of my unwritten TODOs is to go through the current Perlbug queue and
> write test cases for all the currently testable problems. My hope is
> that unexpected fixes would be caught much sooner in these cases. I've
> made a bit of
> If so, I think that that would be useful, as it would mean that any (real)
> TODO test that unexpectedly started passing would be noticed.
>
> I bring this up because we seem to have inadvertently fixed really old regexp
> bugs that we didn't have a test case for, but I realise
TODO test that unexpectedly started passing would be noticed.
I bring this up because we seem to have inadvertently fixed really old regexp
bugs that we didn't have a test case for, but I realise that right now adding
TODO tests wouldn't actually have been *that* useful - if a TODO passes we
don
don't have tests yet...
> Putting them as TODO tests might just shift the problem -- too many
> bugs, not enough people looking at RT to solve them -- into "not enough
> people are looking at the TODO tests to solve them".
But putting them in the code moves them closer
" : "# '$r$c' ne 'SS'\nnot ok ", $test++, "\n";
eval $code;
print $c eq 'V' ? "ok " : "# '$c' ne 'V'\nnot ok ", $test++, "\n";
+}
+
+
+{
+ curr_test($test);
+ package main;
+ eval q{::is(
chromatic wrote in perl.qa :
> One idea is attaching a simple test case to every bug report that
> doesn't have test code that's nearly right for the core. It's a lot
> easier to touch up a test case than it is to write one, so we could do
> a lot of good by turning bug reports into executable
Should they be done as TODOs? Is there a distinct set of test files for
them? etc? Can they use Test::More? etc. etc. etc.
I like the idea and I can see how it would make some things easier. We
might get a lot of benefit from a little simpler idea, though.
Putting them as TODO tests might just shift the problem -- too many bugs, not
enough people looking at RT to solve them -- into "not enough people are
looking at the TODO tests to solve them".
What's the current approach to turning perlbugs into tests?
Should they be done as TODOs? Is there a distinct set of test files for
them? etc? Can they use Test::More? etc. etc. etc.
e.g. http://bugs6.perl.org/rt2/Ticket/Display.html?id=5430
can have a fairly simple test like:
package
> "KS" == Kurt Starsinic <[EMAIL PROTECTED]> writes:
KS> I'd like to do that, but I haven't (yet) figured out how to say
KS> "perl --exactly-the-command-line-options-this-perl-was-called-with".
KS> For example, I don't know how to discover the -I options that I was
KS> called with. Any ideas?
On Sun, Feb 18, 2001 at 01:48:50AM -0500, [EMAIL PROTECTED] wrote:
> A few weeks ago I brought up the idea of unifying the format of todo
> tests and skip tests:
> http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2001-01/msg00883.html
>
> Well, here it is. This is a patch
A few weeks ago I brought up the idea of unifying the format of todo
tests and skip tests:
http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2001-01/msg00883.html
Well, here it is. This is a patch to both t/TEST and Test::Harness so
they honor this style of test output:
not ok 13
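The patch excerpt is cut off, but the output style it refers to is what later became the standard TAP todo/skip directive form; the reason strings here are invented:

```
not ok 13 # TODO the wibble feature isn't finished yet
ok 14 # skip no wibble support on this platform
```

A harness honoring this style counts test 13 as an expected failure, and would flag it specially if it ever came back as `ok 13 # TODO`.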
Paul Johnson <[EMAIL PROTECTED]> writes:
>On Wed, Feb 14, 2001 at 07:27:10PM +, [EMAIL PROTECTED] wrote:
>
>> >BEGIN { plan tests => 14, todo => [3,4] }
>>
>> Which requires you to got edit the numbers when you add tests.
>> Not wanting to do that is why I started use'ing Test.
>
>Same here.
On Thu, Feb 15, 2001 at 07:29:05AM -0500, barries wrote:
> What's the benefit of maintaining a count? Perl's a lot better at it
> than I am.
This was already discussed on p5p (I think) but I'll repeat the basic
arguments here.
Why have Perl maintain the count? It's lazier, some tests don't have
Paul Johnson <[EMAIL PROTECTED]> writes:
>On Wed, Feb 14, 2001 at 07:27:10PM +, [EMAIL PROTECTED] wrote:
>
>> >BEGIN { plan tests => 14, todo => [3,4] }
>>
>> Which requires you to got edit the numbers when you add tests.
>> Not wanting to do that is why I started use'ing Test.
>
>Same here.
On Wed, Feb 14, 2001 at 11:28:25PM +0100, Paul Johnson wrote:
> On Wed, Feb 14, 2001 at 07:27:10PM +, [EMAIL PROTECTED] wrote:
>
> Same here. But you have to edit the number of tests anyway, and I think
> you _should_ have to.
What's the benefit of maintaining a count? Perl's a lot better at it
than I am.
On Wed, Feb 14, 2001 at 07:27:10PM +, [EMAIL PROTECTED] wrote:
> >BEGIN { plan tests => 14, todo => [3,4] }
>
> Which requires you to got edit the numbers when you add tests.
> Not wanting to do that is why I started use'ing Test.
Same here. But you have to edit the number of tests anyway,
Paul Johnson <[EMAIL PROTECTED]> writes:
>On Mon, Feb 12, 2001 at 11:58:19PM -0500, [EMAIL PROTECTED] wrote:
>
>> bugs which have yet to be fixed. The syntax is a little weird, but
>> that can still be fixed as it's undocumented. Basically, it looks (or
>
>I'm not sure whether you're talking about
On Tue, Feb 13, 2001 at 10:06:58PM +0100, Paul Johnson wrote:
> I'm not sure whether you're talking about the API or the textual output
> here.
The textual output and, to a certain extent, Test.pm's API.
http://mailarchive.activestate.com/mail/msg/perl5-porters:464006 has
the details.
On Mon, Feb 12, 2001 at 11:58:19PM -0500, [EMAIL PROTECTED] wrote:
> bugs which have yet to be fixed. The syntax is a little weird, but
> that can still be fixed as it's undocumented. Basically, it looks (or
I'm not sure whether you're talking about the API or the textual output
here. The API l
On Tue, Feb 13, 2001 at 01:28:02PM -0500, Kurt Starsinic wrote:
> I'd like to do that, but I haven't (yet) figured out how to say
> "perl --exactly-the-command-line-options-this-perl-was-called-with".
> For example, I don't know how to discover the -I options that I was
> called with. Any ideas?
On Tue, Feb 13, 2001 at 01:15:47PM -0500, [EMAIL PROTECTED] wrote:
> On Tue, Feb 13, 2001 at 12:58:22PM -0500, Kurt Starsinic wrote:
> > Can this also be applied to tests that are *supposed* to fail?
>
> Not really, todo has a fairly specific meaning and the test is
> definitely failing and is definitely supposed to be fixed.
On Tue, Feb 13, 2001 at 12:58:22PM -0500, Kurt Starsinic wrote:
> Can this also be applied to tests that are *supposed* to fail?
Not really, todo has a fairly specific meaning and the test is
definitely failing and is definitely supposed to be fixed.
In your case, all you have to do is this:
On Mon, Feb 12, 2001 at 11:58:19PM -0500, [EMAIL PROTECTED] wrote:
> There's an obscure feature of Test::Harness and Test.pm I stumbled on
> recently. You can declare that certain tests are supposed to fail,
> they represent tests on features which haven't been implemented yet or
> bugs which hav
There's an obscure feature of Test::Harness and Test.pm I stumbled on
recently. You can declare that certain tests are supposed to fail,
they represent tests on features which haven't been implemented yet or
bugs which have yet to be fixed. The syntax is a little weird, but
that can still be fixed as it's undocumented.
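The feature being described is Test.pm's `todo` argument to plan(), quoted elsewhere in this thread as `plan tests => 14, todo => [3,4]`. A minimal sketch (test count and bodies invented; Test.pm ships with Perl):

```perl
use strict;
use warnings;
use Test;

# The old Test.pm syntax: declare test 3 as an expected failure
# ("todo") in the plan itself, by test number.
BEGIN { plan tests => 3, todo => [3] }

ok(1);   # ordinary pass
ok(1);   # ordinary pass
ok(1);   # declared todo but passes: the very "unexpected success"
         # this thread is about; Test::Harness will flag it
```

The numbering-based syntax is the weirdness complained about above: add a test before number 3 and the todo list silently points at the wrong test.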