Everything that's marked flaky should be tracked in some way. When we add
something to the flaky tests, someone should at least passively be working
on it, e.g. gathering more information by adding logs or similar.
For everything that's already flaky we should make an effort to work
through it and either make it not flaky or get test coverage for the
implementation code via different tests. So eventually there should be
nothing flaky that is not on somebody's radar. We might run into a
tragedy-of-the-commons problem with tests that nobody feels responsible
for. Having more tests fail likely won't solve that, though, since it
still doesn't assign responsibility.

I think the time is better spent tackling flaky tests related to the code
you feel responsible for. Once we have significantly cut down the number
of flaky tests, we should sit down and divide up the orphaned ones. In the
meantime, the current 400+ flaky tests provide plenty of opportunity to
improve the situation.
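
Patrick's git-archeology idea below could be sketched roughly like this
(a sketch only, assuming the @Flaky annotation lives in Java sources under
the current directory of a git checkout):

```shell
# For every Java file currently containing @Flaky, find the oldest commit
# that changed the number of @Flaky occurrences in that file -- roughly
# the point at which the test was first marked flaky.
grep -rl '@Flaky' --include='*.java' . | while read -r f; do
  # -S limits the log to commits that add/remove the given string;
  # --reverse lists oldest first, so the first line is the introduction.
  first=$(git log --reverse --date=short --format='%ad %h' -S '@Flaky' -- "$f" | head -n 1)
  echo "$first  $f"
done
```

A follow-up step could compare that first date against whatever threshold
we agree on and flag the tests that have been flaky too long.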

On Fri, Nov 3, 2017 at 10:33 AM, Patrick Rhomberg <prhomb...@pivotal.io>
wrote:

> Hello, all!
>
>   I was considering doing some git archeology centered around identifying
> how long any given test class containing a @Flaky has had that
> annotation.  Ultimately, I think it would be good to add a test that would
> fail when any one test has been flaky for too long.  I feel like many of
> our flaky tests have fallen by the wayside, and this could provide the
> impetus to resolve these issues in a timely fashion.
>   This leads naturally to the question: How long should a test be allowed
> to remain marked Flaky?  Certainly, flaky tests are most often of the
> non-deterministic, hard-to-reproduce variety, so some leeway is deserved.
> Two weeks?  One month?
>   Thoughts?
>
> Imagination is Change.
> ~Patrick Rhomberg
>
