On 11/13/06, Joachim Draeger <[EMAIL PROTECTED]> wrote:

Hi!

IMO a failing test is as valuable as a passing one. Maybe even more
because it reminds us to do something.

I don't think that always having 100% of tests passing is an indicator
of quality.

My unit-testing basics say: test runs have a binary result. It is either
green or red. If it is red, you have to fix the test. If you do nothing,
the test will soon be accompanied by a second failing test, and nobody
checks for failing tests anymore.

That does not necessarily mean every failing test needs an immediate
fix. For example, this is not possible in test-driven development,
which is based on failing tests.

But intended-to-fail failing tests obscure all
oops-something-bad-happened failing tests.

Therefore...

There are often bugs that have been known for a long time, some of them
not even documented anywhere.

Agreed. It is a good thing to have tests that document
bugs/errors/missing features/evolving code.
But these tests should not be failing, be commented out, or be contained
in a separate "failing-tests suite".
They should _succeed_, with a comment pointing to the JIRA issue they
reproduce and document; see the sketch below.
When such an issue gets fixed, the test fails, the failure gets noticed,
and the assertion in question can be inverted. Voila.
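
To make the practice concrete, here is a minimal sketch in JUnit 3
style (class name, behaviour and the parseHeader() helper are made up
for illustration): the test asserts the current, buggy behaviour on
purpose, so it passes until the bug is fixed.

    import junit.framework.TestCase;

    public class HeaderParserBugTest extends TestCase {

        // Documents a known bug (the JIRA issue key would go here):
        // folded headers are currently returned with the embedded
        // CRLF instead of being unfolded.
        public void testFoldedHeaderIsNotUnfolded() {
            String parsed = parseHeader("Subject: a\r\n b");

            // Asserts the *buggy* behaviour on purpose. When the bug
            // gets fixed this assertion fails, the failure is noticed,
            // and it can then be inverted to assert the correct result.
            assertEquals("a\r\n b", parsed);
        }

        // Stand-in for the production code under test.
        private String parseHeader(String raw) {
            return raw.substring(raw.indexOf(':') + 1).trim();
        }
    }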

When a test is failing we should treat it with the same priority
and thoroughness as issues in JIRA.
If there are other things that are more important, or the solution is
not clear, it just stays there failing.

Not agreed, see above.

In the current situation there is a barrier to committing tests, because
a new test might break the holy nightly build and the committer would be
responsible for it.

With the practice I pointed out above this is a non-issue.

IMO a failing test is as valuable as a passing one. Maybe even more
because it reminds us to do something.

Agreed. It is even so important that we have to get it fixed, to keep
awareness of failing tests. A failing test obscures other failing
tests.

Tests are not written to steal someone's time. I don't like the idea of
forcing developers to run tests. I hate being forced. Commenting
those things out in the build files is my first reaction to it.
I consider every committer responsible for doing whatever is needed to
ensure the quality of his commits.
IMO it's a psychological mistake to give the false impression that the
code will be automatically verified before committing.
Everyone should be aware of one's responsibilities. IMO that is more
effective than forcing someone.
I can't see the big catastrophe if a committer causes a test to fail by
mistake.

Fully agreed. Unit tests are only one means among others to assure
good code. They must be easy to run and must not be easy to forget.

What is your objection aimed at? The change to the ant file?

I propose to accept failing tests in the codebase. The nightly build
should not fail just because tests are failing. Instead we could provide
a test report in the same folder.
Of course, tests run from ant should not be interrupted by the first
failure (haltonfailure="no").

+1
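
As a rough sketch (target names, property names and paths below are
made up; haltonfailure, the xml formatter, batchtest and junitreport
are standard features of ant's junit tasks), such a target could look
roughly like this, so a failure doesn't stop the run and the report
lands in a folder the nightly build can publish:

    <target name="run-tests" depends="compile-tests">
      <junit haltonfailure="no" printsummary="yes"
             failureproperty="tests.failed">
        <classpath refid="test.classpath"/>
        <formatter type="xml"/>
        <batchtest todir="${build.dir}/test-results">
          <fileset dir="${test.classes.dir}" includes="**/*Test.class"/>
        </batchtest>
      </junit>
      <junitreport todir="${build.dir}/test-results">
        <fileset dir="${build.dir}/test-results" includes="TEST-*.xml"/>
        <report format="frames" todir="${build.dir}/test-report"/>
      </junitreport>
    </target>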


Well, of course this brings some administrative overhead. Maybe we need
a way to sort our tests into TestSuites.
We could separate the tests known to fail, so we are warned at once
when a change causes additional tests to fail.

-1

I don't consider slow tests bad tests. They are just inappropriate for
ad hoc results. Even if they take an hour it's okay, but we need to
find a way not to force people to run them all the time.

Not agreed. In my experience, unit tests should be fast and easy.
Otherwise your unit test setup is flawed or could be simplified.
Other tests, of course, such as integration or compliance tests,
may take much more time.

Bernd

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
