Big thanks to Aus for getting work on this kicked off in bug 1222215. If
anyone wants to help out with this effort, please take a look at all the
blue runs here [1] and grab any intermittent test bugs that you are
familiar with.

[1]
https://treeherder.mozilla.org/#/jobs?repo=gaia&revision=80a5920fbf8c49400f457501cf80b81fd30468de
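
As a rough aside (my own back-of-the-envelope math, not anything measured
from the harness): if you assume runs are independent and that retries stop
on the first pass, a test with per-run pass rate p comes out green with
probability 1 - (1 - p)^25 under the 5-runs-per-chunk x 5-chunk-retries
scheme Michael describes below. A quick sketch in Python, with purely
hypothetical pass rates:

    # Hypothetical per-run pass rates, for illustration only.
    for p in (0.2, 0.5, 0.8):
        p_green = 1 - (1 - p) ** 25  # 5 in-chunk runs x 5 chunk retries
        print(p, round(p_green, 4))
    # prints: 0.2 0.9962 / 0.5 1.0 / 0.8 1.0

So even a test that passes only one run in five still shows up green more
than 99% of the time, which is why these retries hide our worst offenders
so effectively.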

On Thu, Nov 5, 2015 at 5:43 PM, Johnny Stenback <[email protected]> wrote:

> Fair enough. I personally don't think it's worth any more time trying
> to prove this one way or another, as we've seen intermittent issues
> arise time and time again from seemingly unrelated changes. The A-team
> at Mozilla has tons of data on this from years of tracking oranges on
> tbpl and now Treeherder; jgriffin can point you to that if needed.
>
> My point is simply that if we care at all about quality, then we need
> a test harness that brings intermittent issues to light as opposed to
> one that tries to hide them. From the OP here it sounds like we have
> the latter.
>
> - jst
>
>
> On Thu, Nov 5, 2015 at 8:36 AM, Gareth Aye <[email protected]> wrote:
> > Just to be clear, I meant to ask questions, and you can neither agree
> > nor disagree with a question. The assertion here is that the oranges
> > are masking real issues. My intention was really to ask to what extent
> > we know that oranges are masking real issues. I only added my own
> > experience that many regressions have resulted in permareds rather than
> > oranges to support the idea that we might look into quantifying the
> > badness of the situation before creating more noise for sheriffs. That
> > part is falsifiable, and it would make more sense to argue (if you're
> > intent on disagreeing with me : ) that it's not worth quantifying the
> > extent to which oranges mask real issues for reasons x, y, z, etc.
> >
> > On Thu, Nov 5, 2015 at 11:07 AM, Johnny Stenback <[email protected]> wrote:
> >>
> >> On Wed, Nov 4, 2015 at 10:27 AM, Gareth Aye <[email protected]> wrote:
> >> > On Wed, Nov 4, 2015 at 10:39 AM, Michael Henretty
> >> > <[email protected]> wrote:
> >> >>
> >> >> Hi Gaia Folk,
> >> >>
> >> >> If you've been doing Gaia core work for any length of time, you are
> >> >> probably aware that we have *many* intermittent Gij test failures on
> >> >> Treeherder [1]. But the problem is even worse than you may know! You
> >> >> see, each Gij test is run 5 times within a test chunk (e.g. Gij4)
> >> >> before it is marked as failing. Then that chunk itself is retried up
> >> >> to 5 times before the whole thing is marked as failing. This means
> >> >> that for a test to be marked as "passing," it only has to run
> >> >> successfully once in 25 times. I'm not kidding. Our retry logic,
> >> >> especially the retries inside the test chunk, makes it hard to know
> >> >> which intermittent tests are our worst offenders. This is bad.
> >> >
> >> >
> >> > I'm not sure that it is so bad. From my own experience, regressions
> >> > rarely cause intermittent failures. They mostly pop up as permareds.
> >> > I think it would make sense to demonstrate that we are, in fact,
> >> > masking a lot of real broken functionality before making our
> >> > intermittents noisier for sheriffs.
> >>
> >> I couldn't disagree more. A decade+ of Firefox and Gecko test
> >> automation has mountains of evidence that intermittent failures are
> >> caused by regressions or exposed by seemingly unrelated changes.
> >>
> >> - jst
> >
> >
>
