On Tuesday, 14.11.2006, 00:04 +0100, Stefano Bagnara wrote:
> >>> But intended-to-fail failing tests obscure all
> >>> oops-something-bad-happened failing tests.
> >> Right. This is the BIG problem of failing tests.
> >
> > Okay, but I don't think that not writing, not executing, or
> > inverting them is better.
>
> I think tests should be written and executed. But if someone is
> committing code that would break the tests, he should simply wait to
> commit. As soon as the tests work as well, he will commit.
> Simple and easy. We are not using test-driven development, so I think
> this is the simple way to go.
I think TDD does not propose writing code that breaks the tests. :-) It proposes writing code that makes the tests pass. Of course nobody should commit things that break more than they fix; breaking existing functionality is always worse. Maybe my initial statement was a bit too provocative: saying that your last commit broke the tests is synonymous with saying that your commit probably broke existing functionality. The quality of the code should reach an acceptable level before a commit, but it is usually not completely finished. I am just saying that it is a good thing to publish a test that documents a limitation by failing. Okay, this is a TDD practice, and maybe there is a consensus that we don't want to support TDD here.

> I can't have an in-memory list of currently non-passing tests and the
> cause. We have thousands of tests; it would be a pain in the ass.

I agree with that, but for me it is only an administrative issue. It requires a tool and/or a workflow to keep track of it. If we come to the conclusion that it would make things too complicated, I'll accept that.

> It seems that everyone has their own argument: maybe we should simply
> start a vote to decide how to keep working.

Right, but it's too early at the moment. If it were just about mixing our currently passing tests with new failing ones, or randomly starting to commit breaking code, I would vote against that too. :-)

> My personal preference is to avoid like hell committing code that
> will make tests fail, just like committing code that does not compile
> or run: of course the last one is the most difficult to determine,
> but if it happens (and it has happened to me many times) it should be
> handled as high priority. We can make (by discussing) an exception to
> this "rule" in special cases where the fix is more urgent than having
> passing tests, but in trunk I give much more importance to not
> wasting committers' time than to providing fast fixes.

I completely agree with that. My main issue is committing tests that fail, not code that makes tests fail. My initial statement, "When there is a failing test we should treat it with the same priority and thoroughness as issues in JIRA. If there are other things that are more important, or the solution is not clear, it just stays there failing," was not meant as a free ticket for breaking existing functionality or committing premature code. And I agree that an exception would need a discussion. I was just referring to my impression that everyone started searching for a quick solution, with the priority on making the tests pass, which ended in commenting them out.

Joachim
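P.S. To make the "tool and/or workflow" idea above a bit more concrete, here is a minimal sketch, assuming we could use a JUnit 4 style @Ignore annotation (the class name, the test method, and the omitted issue key are hypothetical, just for illustration). A known-to-fail test stays in the code base and in the test report, but is shown as "ignored" instead of turning the build red and hiding unexpected failures:

    import org.junit.Ignore;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Hypothetical example: a test documenting a known limitation.
    // The runner skips it and reports it as "ignored", so it does not
    // obscure unexpected (oops-something-bad-happened) failures.
    public class KnownLimitationTest {

        // The @Ignore reason would point at the tracking issue in JIRA
        // (issue key deliberately omitted here).
        @Ignore("known limitation, see the corresponding JIRA issue")
        @Test
        public void testDocumentedLimitation() {
            // This assertion currently fails and records the limitation
            // until the real fix is committed.
            assertTrue(featureUnderTest());
        }

        // Stand-in for the real production call under test.
        private boolean featureUnderTest() {
            return false;
        }
    }

Grepping for @Ignore would then give the "list of currently non-passing tests and the cause" without anyone having to keep it in memory.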
