> > 4) In the case we go another 2 days with no response from a module owner,
> > we will disable the test.
> 
> Are you talking about newly-added tests, or tests that have been
> passing for a long time and recently started failing?
> 
> In the latter case, the burden should fall on the regressing patch,
> and the regressing patch should get backed out instead of disabling
> the test.

I had overlooked the newly-added-test case; I agree that backing the patch out is the right thing there.

> 
> If this plan is applied to existing tests, then it will lead to
> style system mochitests being turned off due to other regressions
> because I'm the person who wrote them and the module owner, and I
> don't always have time to deal with regressions in other parts of
> code (e.g., the JS engine) leading to these tests failing
> intermittently.
> 
> If that happens, we won't have the test coverage we need to add new
> CSS properties or values.

Interesting point.  Are these tests failing often?  Can we invest some minimal 
time to make them more reliable, either in the test cases themselves or in the 
test harness?

As long as there is a dialog in the filed bugs, I would find it hard to believe 
that we would disable a test and have it come as a surprise.

> 
> More generally, it places a much heavier burden on contributors who
> have been part of the project longer, who are also likely to be
> overburdened in other ways (e.g., reviews).  That's why the burden
> needs to be placed on the regressing change rather than the original
> author of the test.

I am open to ideas to help figure out the offending changes.  My understanding 
is that many of the test failures are due to small adjustments to the system, or 
even to the order in which the tests are run, causing the tests to fail 
intermittently.

I know there are a lot of new faces in the Mozilla community every month; could 
we offload some (not all) of this work to Mozillians with less on their plate?
 
> 
> These 10% and 50% numbers don't feel right to me; I think the
> thresholds should probably be substantially lower.  But I think it's
> easier to think about these numbers in failures/day, at least for
> me.

Great feedback on this.  Maybe we pick the top 10 from Orange Factor 
(http://brasstacks.mozilla.com/orangefactor/index.html), or we cut the numbers 
in half.  10% and 50% were sort of last-resort numbers I came up with; ideally 
there would already have been a conversation/bug about the problem.
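To make the two framings comparable, here is a rough sketch of how a per-run 
failure rate maps to failures/day. The runs-per-day figure is a made-up 
assumption for illustration, not a real tbpl statistic:

```python
# Hypothetical conversion from a per-run intermittent-failure rate to an
# expected failures/day figure; runs_per_day below is an assumed number,
# not an actual tbpl statistic.
def failures_per_day(failure_rate, runs_per_day):
    """Expected failures per day for a given per-run failure rate."""
    return failure_rate * runs_per_day

# Assuming 200 test runs/day, the proposed thresholds would mean:
print(failures_per_day(0.10, 200))  # 10% of runs -> 20.0 failures/day
print(failures_per_day(0.50, 200))  # 50% of runs -> 100.0 failures/day
```

Framed that way, even the 10% threshold is a lot of oranges per day, which may 
support the argument for substantially lower numbers.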

> > 2) When we are bringing a new platform online (Android 2.3, b2g, etc.) many 
> > tests will need to be disabled prior to getting the tests on tbpl.
> 
> That's reasonable as long as work is done to try to get the tests
> enabled (at a minimum, actually enabling all the tests that are
> passing reliably, rather than stopping after enabling the passing
> tests in only some directories).

One thing I have heard is coming online is a way to track the # of tests 
available/disabled per platform; that would really help ensure that we are not 
ignoring thousands of tests on a new platform.
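As a rough illustration of the kind of tracking I mean (the manifest lines and 
skip-if syntax below are simplified stand-ins, not the real mochitest manifest 
parser, and count_skips is a hypothetical helper), a script could walk the test 
manifests and tally skip-if annotations per platform keyword:

```python
# Toy sketch of per-platform disabled-test counting; the manifest format
# here is a simplified stand-in for real mochitest .ini manifests.
from collections import Counter

def count_skips(manifest_lines):
    """Count skip-if annotations per platform keyword in manifest text."""
    platforms = ("android", "b2g", "linux", "win", "mac")
    counts = Counter()
    for line in manifest_lines:
        line = line.strip()
        if line.startswith("skip-if"):
            for platform in platforms:
                if platform in line:
                    counts[platform] += 1
    return counts

# Made-up example manifest fragment:
manifest = [
    "[test_css_values.html]",
    "skip-if = android_version == '10'",
    "[test_new_platform.html]",
    "skip-if = b2g",
]
print(count_skips(manifest))  # Counter({'android': 1, 'b2g': 1})
```

Running something like this per directory and per platform on every push would 
give exactly the available/disabled numbers mentioned above.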
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform