On Tuesday, 14.11.2006, 09:31 +0100, Bernd Fondermann wrote:
> > > > IMO a failing test is as valuable as a passing one. Maybe even more
> > > > because it reminds us to do something.
> > >
> > > > I don't think that it is an indicator of quality to have always 100% of
> > > > tests passing.
> > >
> > > My unit test 1x1 says: test runs have a binary result. It is either
> > > green or red. If red, you have to fix the test. If you do nothing, the
> > > test will soon be accompanied by a second failing test. Nobody checks
> > > for failing tests anymore.
> >
> > I do not propose to do nothing when a test changes from green to red. I
> > propose committing tests that fail because you don't have a solution so
> > far.
> > I can't see the advantage in accepting only passing tests.
>
> Because that's the fundamental unit test paradigm. The whole red/green
> thing is built on this.
... which presupposes small iterations: write a test, make it pass?
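Just to make sure we mean the same thing by "small iterations", a minimal
JUnit 3 style sketch (all names made up, not actual James code): the test is
written first, fails, and the smallest implementation that turns it green
goes in together with it.

    import junit.framework.TestCase;

    // Red/green in the small: write the test first, watch it fail,
    // then implement just enough to make it pass before committing.
    public class SubjectDecoderTest extends TestCase {

        public void testDecodesPlainAsciiSubject() {
            assertEquals("hello", SubjectDecoder.decode("hello"));
        }

        // minimal implementation, written only after the test existed
        static class SubjectDecoder {
            static String decode(String raw) {
                return raw;
            }
        }
    }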
BTW: An argument against having failing tests came to my mind:
Tests age because code and requirements evolve. Passing tests
will begin to fail and will be fixed to reflect the new requirements.
A failing test may become outdated without anybody noticing.
> > Do you refer to psychological aspects? I consider the message "don't
> > start working on a test, it could fail" worse than "there is one
> > failing test, nobody would mind a second...".
>
> If somebody starts working on tests, they _will_ fail at the
> beginning. But going away with a failing test and not cleaning up is not
> polite.
But it's okay to commit code that has only 70% of the required
functionality, maybe just enough to start integration.
Why not commit tests that show what is missing? I think that is quite
polite, and probably more transparent than a bunch of TODOs.
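To illustrate what I mean (hypothetical names, not actual James code), such a
committed test would fail until the feature exists and states exactly what is
still missing:

    import junit.framework.TestCase;

    // A committed, honestly failing test documenting functionality that is
    // known to be missing (hypothetical example).
    public class ImapSearchCommandTest extends TestCase {

        public void testSearchBySubject() {
            ImapSearchCommand command = new ImapSearchCommand();
            // fails today and documents what still has to be done
            assertTrue("SEARCH by subject is not implemented yet",
                       command.supportsSearchBySubject());
        }

        // stub standing in for the real command class
        static class ImapSearchCommand {
            boolean supportsSearchBySubject() {
                return false; // the missing part of the 70% commit
            }
        }
    }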
Okay, you may say that tests should be inverted... I'm still not
convinced of the benefits, except that they work around limitations
in the current tools.
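If I understand the inversion proposal correctly, the same test would be
committed like this instead, so the suite stays green until somebody
implements the feature and remembers to flip the assertion back (my reading
of the proposal, not an agreed convention):

    import junit.framework.TestCase;

    // The "inverted" variant: the suite stays green, but the test now
    // asserts the broken behaviour and has to be re-inverted later.
    public class ImapSearchCommandInvertedTest extends TestCase {

        public void testSearchBySubjectStillMissing() {
            ImapSearchCommand command = new ImapSearchCommand();
            assertFalse("re-invert this test once SEARCH by subject works",
                        command.supportsSearchBySubject());
        }

        static class ImapSearchCommand {
            boolean supportsSearchBySubject() {
                return false;
            }
        }
    }

That flip-back step in the comment is exactly the re-inverting I call
error-prone further down.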
> > I never heard of this as a good practice for unit tests. IMO *this*
> > would obscure the tests.
> > The assertion logic could be more complicated than a single boolean
> > assert. (which could be an anti-pattern of course)
>
> But not so complicated as to justify polluting the whole test suite with failures.
I think we agreed that it makes no sense to randomly mix failing and
passing tests...
> > The developer working on the fix runs the tests again and again, maybe
> > using a debugger. I think this should be done on the original, not on an
> > inverted one.
>
> Agreed. But you are talking about work in progress on a working copy.
This requires re-inverting the inverted test, which is IMO error-prone.
> > Unit tests can be part of the definition. They should not be changed in
> > any way as long as they are valid.
>
> Not agreed. This is a too dogmatic point of view.
Maybe. But I have noticed that they help me a lot as a "definition" in my
part-time open-source developer job: what does the code I started writing
last week actually do?
> > > Other tests, of course, for example integration or compliance tests,
> > > could take much more time.
> >
> > Right. As I started to test which IMAP commands work in the current
> > implementation, I wrote integration tests. These have proven to be very
> > useful.
> > If the problem arises that they are too slow for every-day use, we
> > should separate them, but not change them.
>
> Integration tests don't belong in a unit test suite.
> By definition, in Java, unit tests test (more or less) a single class
> ("the unit").
What do you mean by that statement with regard to current James
development? Separating integration tests from unit tests? Not using
JUnit for integration tests?
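For what it's worth, here is a minimal sketch of how I picture the
separation with plain JUnit 3 (class names hypothetical): two suites, so the
slow end-to-end tests can run on their own schedule without touching the
unit suite.

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    public class AllTests {

        // fast unit tests, run on every build
        public static Test unitSuite() {
            TestSuite suite = new TestSuite("unit tests");
            suite.addTestSuite(ParserTest.class);
            return suite;
        }

        // slow integration tests, run separately (e.g. nightly)
        public static Test integrationSuite() {
            TestSuite suite = new TestSuite("IMAP integration tests");
            suite.addTestSuite(ImapRoundTripTest.class);
            return suite;
        }

        // placeholder test cases, only here to keep the sketch self-contained
        public static class ParserTest extends TestCase {
            public void testParsesSimpleCommand() { assertTrue(true); }
        }
        public static class ImapRoundTripTest extends TestCase {
            public void testFullCommandAgainstServer() { assertTrue(true); }
        }
    }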
Joachim