Hi! I don't think that always having 100% of tests passing is an indicator of quality. In other words: it's no problem to create a lot of tests that pass.
There often are bugs that have been known for a long time, some of them not even documented anywhere. It's all right to have open bugs in the issue tracker and to work on them by priority. IMO it's very good to have a corresponding test that fails as long as the issue isn't fixed, and it should be in svn. I don't think it's convenient or practical to have it only as a patch in the issue tracker: it has to evolve with the code base, and every developer should have easy access to it.

When a test fails we should treat it with the same priority and thoroughness as issues in JIRA. If there are other things that are more important, or the solution is not clear, it just stays there failing. In the current situation there is a barrier to committing tests, because a test might break the holy nightly build and the committer would be responsible for it. IMO a failing test is as valuable as a passing one, maybe even more, because it reminds us to do something. Tests are not written to steal someone's time.

I don't like the idea of forcing developers to run tests. I hate being forced; commenting those things out of the build files is my first reaction to it. I consider every committer responsible for doing whatever is needed to ensure the quality of his commits. IMO it's a psychological mistake to give the false impression that the code will be automatically verified before committing. Everyone should be aware of their own responsibilities; IMO that is more effective than forcing someone. I can't see the big catastrophe if a committer causes a test to fail by mistake.

I propose to accept failing tests in the codebase. The nightly build should not fail just because tests are failing; instead we could provide a test report in the same folder. Of course, tests run from Ant should not be interrupted by the first failure (haltonfailure="no").

Well, of course this brings some administrative overhead. Maybe we need a way to sort our tests using TestSuites: we could separate the tests known to fail, so that we are warned at once when a change makes additional tests fail (see the sketch below).

I don't consider slow tests bad tests, they are just inappropriate for quick ad hoc results. Even if they take an hour it's okay, but we need to find a way not to force people to run them all the time.

Joachim
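P.S.: To make the TestSuite idea a bit more concrete, here is a rough sketch in JUnit 3 style; all class and method names are made up:

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    /**
     * Sketch (names are made up): a suite that collects the tests which are
     * expected to fail because of open issues, so a test report can show them
     * separately from the tests that are expected to pass.
     */
    public class KnownFailuresSuite {

        /** Placeholder test documenting an open, not yet fixed issue. */
        public static class OpenIssueTest extends TestCase {
            public void testBehaviourFromOpenIssue() {
                // fails until the corresponding issue is fixed
                fail("documented in the issue tracker, not fixed yet");
            }
        }

        public static Test suite() {
            TestSuite suite = new TestSuite("tests known to fail (open issues)");
            suite.addTestSuite(OpenIssueTest.class);
            return suite;
        }
    }

A second suite could collect the tests expected to pass; the nightly report would then show at a glance whether a change made additional tests fail.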