Excuse me for jumping into an old thread, but from my experience I have a
very clear opinion about situations like this, as I have encountered them before:

Tests are there to give *certainty*.
*Would you like to cross an intersection on a green light if you could not
be sure that green really means green?*
Do you want to rely on tests that flip green, red, green, red? What if a red
is a real red and you miss it because you have learned to ignore it as
flaky?

IMHO there are only three ways to deal with broken/red tests:
- Fix the underlying issue
- Fix the test
- Delete the test

If I cannot trust a test, it is better not to have it at all. Otherwise
people get used to staring at red lights and start to drive anyway.

This causes:
- Uncertainty
- Loss of trust
- Confusion
- More work
- *Less quality*

Just as an example:
A few days ago I created a patch. Then I ran the utests and one test failed.
Hmm, did I break it? I had to check it twice by checking out the former
state and running the tests again, only to find out that it wasn't my change
that made it fail. That's annoying.
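For what it's worth, the check I mean can be sketched like this: set the
uncommitted patch aside with `git stash`, re-run the failing test against the
pristine tree, then restore the patch. The throwaway repo and the `patch.txt`
file below are invented purely to keep the example self-contained; in practice
you would do this in your real working copy with your real test command.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m base
echo "my change" > patch.txt               # stands in for the local patch
git stash push -q -u -m "set patch aside"  # stash tracked + untracked changes
# ...here you would re-run the failing test against the former state...
test ! -f patch.txt && echo "tree is pristine; re-run the test now"
git stash pop -q                           # bring the patch back afterwards
test -f patch.txt && echo "patch restored"
```

If the test still fails on the pristine tree, the failure predates your patch.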

Sorry again, I'm rather new here, but what I just read reminded me a lot of
situations I found myself in years ago.
So: +1, John

2016-12-03 7:48 GMT+01:00 sankalp kohli <kohlisank...@gmail.com>:

> Hi,
>     I don't see any update on this thread. We will go ahead and make
> Dtest a blocker for cutting releases for anything after 3.10.
>
> Please respond if anyone has an objection to this.
>
> Thanks,
> Sankalp
>
>
>
> On Mon, Nov 21, 2016 at 11:57 AM, Josh McKenzie <jmcken...@apache.org>
> wrote:
>
> > Caveat: I'm strongly in favor of us blocking a release on a non-green
> > test board of either utest or dtest.
> >
> >
> > > put something in prod which is known to be broken in obvious ways
> >
> > In my experience the majority of fixes are actually shoring up
> > low-quality / flaky tests or fixing tests that have been invalidated by
> > a commit but do not indicate an underlying bug. Inferring "tests are
> > failing so we know we're asking people to put things in prod that are
> > broken in obvious ways" is hyperbolic. A more correct statement would
> > be: "Tests are failing so we know we're shipping with a test that's
> > failing" which is not helpful.
> >
> > Our signal to noise ratio with tests has been very poor historically;
> > we've been trying to address that through aggressive triage and
> > assigning out test failures however we need far more active and
> > widespread community involvement if we want to truly *fix* this problem
> > long-term.
> >
> > On Mon, Nov 21, 2016 at 2:33 PM, Jonathan Haddad <j...@jonhaddad.com>
> > wrote:
> >
> > > +1.  Kind of silly to advise people to put something in prod which
> > > is known to be broken in obvious ways
> > >
> > > On Mon, Nov 21, 2016 at 11:31 AM sankalp kohli <kohlisank...@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >     We should not cut a release if Dtests are not passing. I won't
> > > > block 3.10 on this since we are just discussing this.
> > > >
> > > > Please provide feedback on this.
> > > >
> > > > Thanks,
> > > > Sankalp
> > > >
> > >
> >
>



-- 
Benjamin Roth
Prokurist

Jaumo GmbH · www.jaumo.com
Wehrstraße 46 · 73035 Göppingen · Germany
Phone +49 7161 304880-6 · Fax +49 7161 304880-1
AG Ulm · HRB 731058 · Managing Director: Jens Kammerer
