On Tuesday, March 7, 2017 at 1:59:14 PM UTC-5, Steve Fink wrote:
> Is there a mechanism in place to detect when disabled intermittent tests 
> have been fixed?
> 
> E.g., every so often you could rerun disabled tests individually a bunch 
> of times. Or, if you can distinguish which tests are failing, run them 
> all a bunch of times and pick apart the wreckage to see which ones are 
> now consistently passing. I'm not suggesting those, just using them as 
> example solutions to illustrate what I mean.
> 
> On 03/07/2017 10:33 AM, Honza Bambas wrote:
> > I presume that when a test is disabled, a bug is filed and triaged 
> > within the responsible team like any regular bug.  Only that way do we 
> > not forget to push on fixing it and getting it back into rotation.
> >
> > Are there also data or stats on how often tests with a strong 
> > orange factor catch actual regressions?  I.e., fail in a different way 
> > than the filed "intermittent" one and uncover a real bug, leading to a 
> > patch backout or the filing of a regular functionality regression bug.  
> > If that number is found to be high(ish) for a test, the priority of 
> > fixing it after its disabling should be raised.
> >
> > -hb-
> >
> >

I am happy to see the discussion here.  Overall, we do not have data to 
indicate whether we are fixing a bug in the product or just patching the 
test.  I agree we should track that, and I will try to do so going forward.  
I recall one case of that happening this quarter; I suspect there are others.

Most of the disabled tests are tracked on bugs marked leave-open with the 
relevant developers CCed, so what value would a new bug bring?  If it would 
be better, I am happy to create a new bug.

I have seen one test get fixed after being disabled, but that is it for this 
quarter.  Possibly there are others, but it is hard to know.  If we followed 
the tree rules for visibility, many of the jobs would be hidden and we would 
get no value from them.

I think running the disabled tests on try once in a while seems useful.  One 
could argue that is the role of the people who own the tests, but possibly we 
could make it easier to do.  I could see adding a |tags = disabled| 
annotation and running the disabled tests x20 in a nightly M(d) job.  If we 
were to do that, who would look at the results, and how would we get that 
information to all of the teams who care about the tests?
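
To illustrate the |tags = disabled| idea, a manifest entry might look roughly 
like the sketch below.  This is just a sketch: the test name, bug number, and 
tag name are made up, though mochitest manifests do support skip-if 
conditions and a tags key for selecting tests.

```ini
; Hypothetical mochitest.ini entry: the test stays disabled for normal
; runs, but the tag lets a periodic job select and rerun it.
[test_example_intermittent.html]
skip-if = true  # disabled for frequent intermittent failures, bug NNNNNN
tags = disabled-intermittent
```

A nightly job (or a developer on try) could then rerun just the tagged tests 
with something like |mach mochitest --tag disabled-intermittent --repeat 20| 
and check whether any of them now pass consistently.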
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform