>
> I am reacting to what I currently see
> happening in the project; tests fail as the norm and this is kinda seen as
> expected, even though it goes against the policies as I understand it.

After over half a decade of watching us all continue to struggle with this
problem, I've come around to the school of "apply pain" (meant as
light-heartedly as you can take it) when there's a failure to incentivize
fixing; specifically, the only idea I can think of here is blocking merge
of any PR with failing tests. We go through this cycle as we approach each
major release: the gatekeeper of "we're obviously not going to cut a
release with failing tests" kicks in, and we clean them up. After the
release, the pressure is off, we exhale, relax, and flaky test failures
(and others) start to creep back in.

If the status quo is the world we want to live in, that's totally fine and
no judgement intended - we can build tooling around test failure history,
known flaky tests, etc., to optimize engineer workflows around that
expectation. But what I keep seeing on threads like this (and have always
heard brought up in conversation) is that our collective *moral* stance is
that we should have green test boards and not merge code that introduces
failing tests.
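For what it's worth, the tooling I'm gesturing at above could start as
small as flagging tests whose recent history mixes passes and failures.
A rough sketch (purely hypothetical names and thresholds, not any existing
Cassandra tooling):

```python
# Hypothetical sketch: classify a test from its recent run history.
# A test that both passed and failed within the window is "flaky";
# one that failed every run is a hard failure worth blocking merge on.
def classify(history, window=10):
    """history: list of booleans, True = pass, ordered oldest first."""
    recent = history[-window:]
    if all(recent):
        return "stable"
    if not any(recent):
        return "hard-failure"
    return "flaky"
```

Something like this, fed by CI result history, could let a merge gate
block only on "hard-failure" while routing "flaky" results to a triage
list instead of silently normalizing red boards.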

Not looking to prescribe or recommend anything; just hoping the
observation above might be of interest or value to the conversation.

On Thu, Jan 23, 2020 at 4:17 PM Michael Shuler <mich...@pbandjelly.org>
wrote:

> On 1/23/20 3:53 PM, David Capwell wrote:
> >
> > 2) Nightly build email to dev@?
>
> Nope. builds@c.a.o is where these go.
> https://lists.apache.org/list.html?bui...@cassandra.apache.org
>
> Michael
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: dev-h...@cassandra.apache.org
>
>
