On Monday, 13.11.2006 at 12:00 +0100, Bernd Fondermann wrote:

> > IMO a failing test is as valuable as a passing one. Maybe even more
> > because it reminds us to do something.
> 
> > I don't think that it is an indicator of quality to have always 100% of
> > tests passing.
> 
> My unit test 1x1 says: test runs have a binary result. It is either
> green or red. If red, you have to fix the test. If you do nothing, the
> test will soon be accompanied by a second failing test. Nobody checks
> for failing tests anymore.

I do not propose to do nothing when a test changes from green to red. I
propose committing tests that fail because you don't have a solution
yet.
I can't see the advantage in accepting only passing tests.

> That does not necessarily mean every failing test is subject to an
> immediate fix. For example, this is not possible in
> test-driven development, which is based on failing tests.

Would you accept failing tests used for TDD in James?

> But intended-to-fail failing tests obscures all
> oops-something-bad-happened failing tests.

I think that is only a management/administrative issue. If there are a
lot of tests, we need a workflow that ensures no change in the results
is overlooked.
Or do you refer to psychological aspects? I consider the message "don't
start working on a test, it could fail" worse than "there is one
failing test, nobody would mind a second...".

> > There often are known bugs for a long time, some of them not even
> > documented anywhere.
> 
> Agreed. It is a good thing to have tests that document
> bugs/errors/missing features/evolving code.
> But these tests should not be failing, be commented out, or be
> contained in a separate "failing-tests-suite".
> They should _succeed_, with a comment pointing to the JIRA they
> reproduce and document.
> When such an issue gets fixed, the test fails, will be detected and
> the assertion in question can be inverted. Voila.

And when an Exception is thrown, do you add a try/catch? Who guarantees
that nobody forgets what the correct behavior is?
I have never heard of this as a good practice for unit tests. IMO *this*
would obscure the tests.
The assertion logic could be more complicated than a single boolean
assert (which could be an anti-pattern, of course).
The developer working on the fix runs the tests again and again, maybe
using a debugger. I think this should be done on the original test, not
on an inverted one.
Unit tests can be part of the specification. They should not be changed
in any way as long as they are valid.
From the JUnit FAQ (okay, somewhat TDD-related): "When all the tests
pass, you know you're done!"
Before changing/inverting anything I would comment them out completely.
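For reference, the inverted-assertion pattern being debated could be sketched like this in plain Java (deliberately without the JUnit jar so it stands alone; `normalize` and the JIRA number are invented for illustration):

```java
// Sketch of a test that documents a known bug by asserting the
// *current* (wrong) behavior. Method name and issue id are made up.
public class KnownBugTest {

    // Hypothetical buggy method: it should trim, but currently does not.
    static String normalize(String s) {
        return s; // bug: whitespace is not trimmed
    }

    // Inverted assertion: "green" as long as the bug is present.
    // See JIRA-123 (hypothetical). Once the bug is fixed, this check
    // fails and the assertion can be turned back around.
    static boolean documentsKnownBug() {
        return !" a ".trim().equals(normalize(" a "));
    }

    public static void main(String[] args) {
        System.out.println(documentsKnownBug()
                ? "green (bug still present)"
                : "red (bug fixed?)");
    }
}
```

The objection above is exactly about the second step: someone has to remember to un-invert the assertion, and until then the test reads as the opposite of the specified behavior.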

> > Tests are not written to steal someone's time. I don't like the idea of
> > forcing developers to run tests. I hate being forced. Commenting out
> > those things in build files is my first reaction to it.
> > I consider every committer responsible to do whatever is needed to
> > ensure the quality of his commits.
> > IMO it's a psychological mistake to give the false impression that the
> > code will be automatically verified before committing.
> > Everyone should be aware of one's responsibilities. IMO that is more
> > effective than forcing someone.
> > I can't see the big catastrophe if a committer causes a test to fail by
> > mistake.
> 
> Fully agreed. Unit tests are only one means among others to assure
> good code. They must be easy to run and must not be easy to forget.
> 
> What is your objection targeted to? The change to the ant file?

I don't care too much about the current changes. Running tests from ant
is okay. I would not like to force tests to run on compile/dist-lite.
I don't like haltonfailure="yes" in the nightly build.
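For comparison, a nightly target that runs all tests and publishes a report instead of failing the build could look roughly like this (a sketch only; the target, property, and directory names are assumptions, not the actual James build file):

```xml
<!-- Sketch: run all tests, never halt, publish an HTML report. -->
<target name="nightly-tests" depends="compile">
  <junit haltonfailure="no" printsummary="on">
    <classpath refid="test.classpath"/>
    <formatter type="xml"/>
    <batchtest todir="${build.dir}/test-reports">
      <fileset dir="${build.test.classes}" includes="**/*Test.class"/>
    </batchtest>
  </junit>
  <!-- aggregate the XML results into a browsable report
       in the same folder as the nightly build output -->
  <junitreport todir="${build.dir}/test-reports">
    <fileset dir="${build.dir}/test-reports" includes="TEST-*.xml"/>
    <report format="frames" todir="${build.dir}/test-reports/html"/>
  </junitreport>
</target>
```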

> > I propose to accept failing tests in the codebase. The nightly build
> > should not fail just because tests are failing. Instead we could
> > provide a test report in the same folder.
> > Of course tests run from ant should not be interrupted by the first
> > failure. (haltonfailure="no")
> +1

Maybe we could start a vote on that after discussing.

> >
> > Well, of course this brings some administrative overhead. Maybe we need
> > a solution to sort our tests using TestSuites.
> > We could separate the tests known to fail, so we are warned at once
> > when a change causes additional tests to fail.
> -1

Why? Doesn't this avoid the obscuring issue? The regular tests pass and
you have a binary result. Wouldn't it be nice to have all known bugs in
one TestSuite?
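A plain-Java sketch of that split (deliberately without the JUnit jar so it stands alone; all test names and the issue id are invented): the regular suite keeps its binary green/red meaning, while the known-bug suite is only reported.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: run two separate suites. The build is judged only on the
// regular suite; the known-bug suite is reported, not enforced.
public class KnownBugSuiteSketch {

    /** Runs each named check; returns true only if all of them pass. */
    static boolean allGreen(Map<String, Runnable> suite) {
        boolean green = true;
        for (Map.Entry<String, Runnable> e : suite.entrySet()) {
            try {
                e.getValue().run();
            } catch (Throwable t) {
                System.out.println("FAIL: " + e.getKey());
                green = false;
            }
        }
        return green;
    }

    public static void main(String[] args) {
        Map<String, Runnable> regular = new LinkedHashMap<>();
        regular.put("parserHandlesPlainLogin", () -> { /* passes */ });

        Map<String, Runnable> knownBugs = new LinkedHashMap<>();
        knownBugs.put("JIRA-123 search charset",
                () -> { throw new AssertionError("known bug"); });

        // The build is judged on the regular suite only...
        System.out.println("regular green: " + allGreen(regular));
        // ...while the known-bug suite is merely reported.
        System.out.println("known bugs still failing: " + !allGreen(knownBugs));
    }
}
```

In JUnit terms this would simply be two top-level TestSuites wired into different ant targets, so a newly failing regular test is visible immediately.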

> > I don't consider slow tests bad tests. They are just inappropriate
> > for ad hoc results. Even if they take an hour it's okay, but we need
> > to find a way not to force people to run them all the time.
> 
> Not agreed. According to my experience unit tests should be fast and
> easy. Otherwise your unit test setup is flawed or could be simplified.

I agree. That's why I tried to avoid the term "unit test". :-) 

> Other tests, of course, for example integration or compliance test
> could take much more time.

Right. As I started to test which IMAP commands work in the current
implementation, I wrote integration tests. These have proven to be very
useful.
If the problem arises that they are too slow for every-day use, we
should separate them, but not change them.

Joachim



