How do we plan on verifying #4? Also, root-causing a flaky test back to
the new code that introduced it (i.e., it passes on commit, then fails 5%
of the time thereafter) is a non-trivial pursuit (thinking of #2 here),
and a pretty common problem in this environment.
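
For #4 specifically, one option (just a sketch, assuming the build
publishes JaCoCo's standard XML report; the CoverageGate class name and
the report path below are invented for illustration) is a small gate that
fails the build when overall line coverage drops below 90%:

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    public class CoverageGate {
        public static void main(String[] args) throws Exception {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            // JaCoCo reports declare a DTD; tell the parser not to fetch it.
            factory.setFeature(
                "http://apache.org/xml/features/nonvalidating/load-external-dtd",
                false);
            // args[0] is the report path, e.g. build/jacoco/jacoco.xml
            Document doc = factory.newDocumentBuilder().parse(args[0]);
            long covered = 0, missed = 0;
            // In JaCoCo's schema the report-level <counter> elements are
            // direct children of <report>; we want the LINE counter.
            NodeList children = doc.getDocumentElement().getChildNodes();
            for (int i = 0; i < children.getLength(); i++) {
                Node n = children.item(i);
                if (n instanceof Element && "counter".equals(n.getNodeName())
                        && "LINE".equals(((Element) n).getAttribute("type"))) {
                    covered = Long.parseLong(((Element) n).getAttribute("covered"));
                    missed = Long.parseLong(((Element) n).getAttribute("missed"));
                }
            }
            double ratio = covered == 0 ? 0.0
                    : (double) covered / (covered + missed);
            System.out.printf("line coverage: %.1f%%%n", ratio * 100);
            if (ratio < 0.90) {
                System.err.println("coverage below 90% threshold");
                System.exit(1); // non-zero exit fails the CI step
            }
        }
    }

Run as "java CoverageGate build/jacoco/jacoco.xml" at the end of the test
target. Note this only enforces a whole-bundle number; per-patch "new
code" coverage would need a diff-aware tool on top.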

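On the flaky-test side (#2), a blunt but useful trick is to rerun a
suspect test many times before a patch merges: a test that fails 5% of
the time fails roughly 64% of the time across 20 runs, so repetition
turns a rare failure into a reproducible one you can bisect back to a
commit. A minimal sketch as a JUnit 4 rule (the RepeatRule name is
hypothetical; an off-the-shelf repeat-runner would do the same job):

    import org.junit.rules.TestRule;
    import org.junit.runner.Description;
    import org.junit.runners.model.Statement;

    /** Runs each test it is applied to N times in a row. */
    public class RepeatRule implements TestRule {
        private final int times;

        public RepeatRule(int times) {
            this.times = times;
        }

        @Override
        public Statement apply(final Statement base, Description description) {
            return new Statement() {
                @Override
                public void evaluate() throws Throwable {
                    // Any failing iteration propagates and fails the test.
                    for (int i = 0; i < times; i++) {
                        base.evaluate();
                    }
                }
            };
        }
    }

Used as: @Rule public RepeatRule repeat = new RepeatRule(20); on the
suspect test class.
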
On Mon, Mar 27, 2017 at 6:51 PM, Nate McCall <zznat...@gmail.com> wrote:

> I don't want to lose track of the original idea from François, so
> let's do this formally in preparation for a vote. Having this all in
> place will make the transition to new testing infrastructure more
> goal-oriented and keep us more focused moving forward.
>
> Does anybody have specific feedback/discussion points on the following
> (awesome, IMO) proposal:
>
> Principles:
>
> 1. Tests always pass. This is the starting point. If we don't care
> about test failures, then we should stop writing tests. A recurring
> failing test carries no signal and is better deleted.
> 2. The code is tested.
>
> Assuming we can align on these principles, here is a proposal for
> their implementation.
>
> Rules:
>
> 1. Each new release passes all tests (no flakiness).
> 2. If a patch has a failing test (a test touching the same code path),
> the code or test should be fixed prior to the patch being accepted.
> 3. Bug fixes should have one test that fails prior to the fix and
> passes after the fix.
> 4. New code should have at least 90% test coverage.
>
