Joachim Draeger wrote:
On Monday, 13.11.2006, at 12:00 +0100, Bernd Fondermann wrote:

IMO a failing test is as valuable as a passing one. Maybe even more so,
because it reminds us to do something.
I don't think that always having 100% of the tests passing is an
indicator of quality.
My unit-testing 1x1 (the basics) says: test runs have a binary result. It is
either green or red. If it is red, you have to fix the test. If you do nothing,
the test will soon be accompanied by a second failing test, and then nobody
checks for failing tests anymore.

I do not propose doing nothing when a test changes from green to red. I
propose committing tests that fail because you don't have a solution
yet.
I can't see the advantage in accepting only passing tests.

The advantage is that every developer who runs the tests and gets a failure knows that the problem is in their local copy, and can skip manually checking every entry on a "known failing tests" list.

What will happen when we have 423 failing tests and 613 passing? You break something and there will be 424 failing and 612 passing: how do you know which one is the new failure? How do you rerun only that failing test?

Modern IDEs provide a way to re-execute only the failing tests: I always use this feature. I would lose time if we had failing tests.

That does not necessarily mean every failing test needs an
immediate fix. For example, that is not possible in
test-driven development, which is based on failing tests.

Would you accept failing tests used for TDD in James?

No, unless we start using TDD for James.
And if we decide to do that, maybe we should define a different way: separate the tests that need to be fixed from the ones whose implementation is still to be written (see the sketch below).
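
As a minimal sketch of what such a split could look like in the Ant build (the *FutureTest naming convention, target names and properties are purely illustrative assumptions, nothing James actually defines): the regular target excludes the not-yet-implemented tests, and a separate target runs only those.

  <!-- illustrative only: keep intentionally failing "to be implemented"
       tests out of the regular run by a naming convention -->
  <target name="test" depends="compile-unit-tests">
    <junit fork="yes" haltonfailure="yes">
      <classpath refid="test.classpath"/>
      <formatter type="plain"/>
      <batchtest todir="${build.test.results}">
        <fileset dir="${test.src.dir}">
          <include name="**/*Test.java"/>
          <exclude name="**/*FutureTest.java"/>
        </fileset>
      </batchtest>
    </junit>
  </target>

  <!-- TDD-style tests written ahead of the implementation; expected to
       fail, so they never break the regular run -->
  <target name="test-future" depends="compile-unit-tests">
    <junit fork="yes" haltonfailure="no">
      <classpath refid="test.classpath"/>
      <formatter type="plain"/>
      <batchtest todir="${build.test.results}">
        <fileset dir="${test.src.dir}" includes="**/*FutureTest.java"/>
      </batchtest>
    </junit>
  </target>

That way a red regular run always means a regression, while the TDD-style tests stay visible in their own report.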

But intended-to-fail failing tests obscure all
oops-something-bad-happened failing tests.

I think that is only a management/administrative issue. If there are a
lot of tests, we need a workflow so that we don't overlook any changes in the
results. Do you refer to psychological aspects? I consider the message "don't
start working on a test, it could fail" worse than "there is one
failing test, nobody would mind a second...".

I want complete code committed: we commit a feature only when it is complete, or at least somehow working. Having failing tests means it is incomplete. Just wait a few days until you have fixed it.

If I wrote half an algorithm and it does not compile, I don't commit it: I will do it as soon as it compiles, or as soon as it seems to be working if it is already called by someone. Otherwise I simply wait. If I want to work on a long-running issue and want to commit non-compiling code or something similar, I can do that in a branch.

I don't care too much about the current changes. Running tests from Ant
is okay. I would not like to force tests on compile/dist-lite. I don't like haltonfailure="yes" in the nightly build.
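
For reference, a minimal sketch of the alternative in an Ant <junit> task (property and path names are illustrative assumptions): instead of haltonfailure="yes", which aborts the nightly build at the first failing test, the run below executes the whole suite, records the outcome in a property, and only fails the build at the very end.

  <junit fork="yes" haltonfailure="no" failureproperty="tests.failed">
    <classpath refid="test.classpath"/>
    <formatter type="xml"/>
    <batchtest todir="${build.test.results}">
      <fileset dir="${test.src.dir}" includes="**/*Test.java"/>
    </batchtest>
  </junit>
  <!-- the nightly run still ends up red, but only after every test has executed -->
  <fail if="tests.failed" message="There were failing unit tests."/>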

I don't care about when we run tests. I care that tests pass.
If I'm allowed to comment out a failing test when I know it has been created by someone else, and to open a JIRA issue for it, I'm happy anyway.

I can't afford writing unit tests for other people's code (even if it happens from time to time), and I really feel that passing tests have value. If we start having some failing tests, this will make me ignore possible new failing tests.

Stefano

