Andi Vajda wrote:
> Even by 'strictly' following the rules, when a failure is intermittent,
> you easily get into the situation of a bunch of check-ins having
> happened since the possibly bad one. I think Bryan's alternative is an
> improvement.

I was just thinking about 100% reproducible cases, or close to 100%.

It can be really hard to figure out which checkin caused a rare
intermittent bug. Reasonably reliable intermittent bugs should be dealt
with like 100% reproducible cases. The rare ones we have handled by
filing bugs and otherwise proceeding normally.

I don't think it would be a good idea to turn off intermittent tests.
First, when they succeed, they still provide information that new code
hasn't made the test fail 100% of the time. Second, it is pretty easy
to check the new logs to see whether a failure is a known intermittent one.

If you really want to go the route of disabling all intermittent tests,
then I am afraid we'll have to turn off the whole functional test
suite right now, because there are at least two intermittent bugs that
manifest as test timeouts and crashes.


I have a somewhat related question regarding test failures. Should we
stop further tests as soon as we see the first failure? That would
shorten Tinderbox cycle time when there is a problem. What we currently
do is run all unit tests; if those pass, we run all functional tests
(and if those pass, the perf boxes run all perf tests).
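For illustration only, here is a minimal sketch of that staged, fail-fast idea using Python's stdlib unittest (Chandler being a Python project); the test case names are hypothetical and this is not the actual Tinderbox setup:

```python
import unittest

# Hypothetical stages, mirroring the unit -> functional ordering.
class UnitTests(unittest.TestCase):
    def test_parsing(self):
        self.assertEqual(int("42"), 42)

    def test_formatting(self):
        self.assertEqual("%d" % 7, "7")

class FunctionalTests(unittest.TestCase):
    def test_end_to_end(self):
        self.assertTrue(all([True, True]))

def run_stage(case):
    # failfast=True makes the runner stop the stage at the first failure,
    # which is the cycle-time saving being discussed.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(case)
    result = unittest.TextTestRunner(failfast=True).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    # Only move on to the functional stage if the unit stage passed,
    # and stop each stage as soon as it sees a failure.
    if run_stage(UnitTests):
        run_stage(FunctionalTests)
```

The same gating logic applies whatever the real runner is; the trade-off is that stopping early hides any further failures that a full run would have reported in the same cycle.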

-- 
  Heikki Toivonen



_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Open Source Applications Foundation "chandler-dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/chandler-dev