Quoting nadim khemir <[EMAIL PROTECTED]>:

> On Monday 12 May 2008 16.23.46 Bram wrote:
> > Then what happens if it starts returning 4?
> > Then the test script will report a FAIL, and users will/might start
> > ignoring failures.
> > Which is a bad thing (IMHO).
> >
> > The todo test indicates that something doesn't behave as it should.
> > If it suddenly starts returning another value, which is still just as
> > wrong, then it shouldn't result in a FAIL.
> >
> > Incorrect? Yes.
> > Not expected? Yes.
> > Fail? No, since that particular code is known to misbehave (in what way
> > it misbehaves can (and will) change over time).


> In the particular test, foo() could get altered in quite a lot of ways
> without anyone realizing that it also affects the behavior of foo().
>
> This could be a good change (< 3) or a bad one (> 3), but the bottom line
> is that something changed that wasn't expected, and that most likely
> means there are tests missing to cover all the behavior of the change.
>
> Mixing normal tests with tests for failures that shouldn't change (isn't
> that a strange test to start with?) is asking for trouble.

I'm not saying that it shouldn't change.
What I'm saying is that the value may change, but that if it does change I
would like to be informed about it.
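
To make that concrete, here is a minimal sketch of the kind of todo test I
have in mind, written with Test::More for illustration (the core suite uses
t/test.pl, but the idea is the same); foo() and the values 3 and 4 are
placeholders, not a real test from the suite:

use strict;
use warnings;
use Test::More tests => 1;

our $TODO;    # Test::More checks this variable to mark todo tests

# Placeholder for the misbehaving code: documented to return 3,
# but known to return 4 today.
sub foo { return 4 }

TODO: {
    local $TODO = 'foo() does not return the documented value yet';
    is( foo(), 3, 'foo() returns 3' );
}

Whether foo() returns 4, 5 or -1, the harness output is exactly the same:
an expected todo failure. That silent change is what I would like to be
told about.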


> Leave your todo test as it was to start with.
>
> Create a new test file "development_values_that_shoulnot_change.t" in your
> developer test directory (that's not just for POD, right?).

Which is not practical at all.
It means maintaining an extra test file for every file that defines a todo
test, and it means duplicating every todo test.

Which, in the case of perl, means 68 extra files and at least 150 duplicated
tests.
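
Each of those duplicated tests would boil down to something like the
following sketch (again with Test::More and a placeholder foo(), pinning
whatever wrong value the code happens to return today):

use strict;
use warnings;
use Test::More tests => 1;

# Placeholder for the misbehaving code.
sub foo { return 4 }

# Pin the currently observed, known-wrong value so that any change is noticed.
is( foo(), 4, 'foo() still returns the known-wrong value 4' );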

And that still leaves the problem of when the test file is run.
Should it be run by make test? Should it be run by ./perl harness?

It can't be run by make test, since that would mean it produces a FAIL if
the behavior changes (confusing users).

It can't be run by ./perl harness either, since then the smoke reports would
produce a FAIL (confusing testers and making each Smoke report bogus, since
it compares the output of make test and ./perl harness).

So that would mean running it only when someone thinks the value might have
changed. Which means never, since the whole idea is to catch an unexpected
change of behavior.


> This setup will not surprise your users and you have the test you want.


What I proposed won't surprise users either.

This is what currently happens when a TODO test unexpectedly passes:

$ make test
All tests successful.
u=11.31  s=1.60  cu=647.87  cs=54.29  scripts=1449  tests=201092
make[2]: Leaving directory `/opt/perl/perl-blead'
make[1]: Leaving directory `/opt/perl/perl-blead'

$ cd t ; ./perl harness op/range.t
op/range......ok
All tests successful.

Test Summary Report
-------------------
op/range.t (Wstat: 0 Tests: 135 Failed: 0)
  TODO passed:   118
Files=1, Tests=135, 0 wallclock secs ( 0.06 usr 0.01 sys + 0.03 cusr 0.01 csys = 0.11 CPU)
Result: PASS


What I'm suggesting is that it outputs something similar for TODO tests that
return an unexpected result.

Again, make test only outputs "All tests successful."
Users will only run make test and thus can't be confused by the extra output.
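
Just to illustrate the kind of information I mean (not as a replacement for
reporting it in the harness), a single test script could already record it
by hand; foo(), the value 4 and $last_known_wrong below are placeholders:

use strict;
use warnings;
use Test::More tests => 1;

our $TODO;    # Test::More checks this variable to mark todo tests

# Placeholders: the documented value is 3, the last known wrong value is 4.
sub foo { return 4 }

my $last_known_wrong = 4;

TODO: {
    local $TODO = 'foo() does not return the documented value yet';
    my $got = foo();
    is( $got, 3, 'foo() returns 3' );

    # Not a failure, just information: flag it when the wrong value changes.
    diag("foo() now returns '$got' instead of the known wrong value $last_known_wrong")
        if $got != $last_known_wrong;
}

Doing that by hand in every todo test is of course exactly the duplication I
want to avoid, which is why I would rather see the harness report it.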



Kind regards,

Bram

