Hi,

I concede it would be fine to do it gradually. Once the pace at which issues
are addressed outstrips the pace at which new development introduces them, I
think things will go well.

Ariel

On Tue, Jan 10, 2017, at 11:17 AM, Josh McKenzie wrote:
> @ariel: you're letting the perfect be the enemy of the good here. We (as a
> project) have been releasing with a smattering of test failures and upgrade
> edge-cases for as long as anyone can remember. While that doesn't make it
> ideal or justify continuing the behavior, getting a green testall + dtest
> for 3.10 is a strong incremental improvement. Integrating other tests into
> the "block if not green" set on subsequent releases is likewise an
> improvement.
> 
> I strongly advocate for incremental change in expectations of the
> community's behavior rather than a black-and-white, "this has to be
> perfect or we block" mentality.
> 
> Sankalp's proposal of progressively tightening up our standards allows us
> to get code out the door and regain some of the momentum lost to the 3.10
> release failures and blocking, and it gives us time as a community to
> adjust our behavior without the burden of an ever-slipping release hanging
> over our heads. There are plenty of bugfixes in the 3.X line; the more time
> people have to kick the tires on that code, the more things we can find and
> the better future releases will be.
> 
> On Tue, Jan 10, 2017 at 10:33 AM, Ariel Weisberg <ar...@weisberg.ws>
> wrote:
> 
> > Hi,
> >
> > At least some of those failures are real. I don't think we should
> > release 3.10 until the real failures are addressed. As I said earlier,
> > one of them is a wrong-answer bug that is not going to be fixed in 3.10.
> >
> > Can we just ignore failures because we think they don't mean anything?
> > Who is going to check which of the 60 failures is real?
> >
> > These tests were passing just fine at the beginning of December, then
> > commits happened, and now the tests are failing. That is exactly what
> > they're for. They are good tests. I don't think it matters whether
> > today's failures are "real", because those are valid tests, and they
> > don't test anything if they fail for spurious reasons. They are as
> > critical a part of the Cassandra infrastructure as the storage engine or
> > the network code.
> >
> > In my opinion the tests need to be fixed, and people need to fix them as
> > they break them. We need to figure out how to get from a state where
> > people break tests and it goes unnoticed to one where a break is noticed
> > and fixed within a time frame that fits the release schedule.
> >
> > My personal opinion is that releases are a reward for finishing the job.
> > Releasing without finishing the job creates the wrong incentive
> > structure for the community. If you break something, you are no longer
> > the person who blocked the release; you are just one of several people
> > breaking things without consequence.
> >
> > I think that rapid feedback and triage, combined with blocking releases
> > on the things individual contributors have broken, is the way to more
> > consistent releases, both schedule-wise and quality-wise.
> >
> > As for delaying 3.10: who exactly is the consumer chomping at the bit to
> > get another release? One that doesn't reliably upgrade from a previous
> > version?
> >
> > Ariel
> >
> > On Tue, Jan 10, 2017, at 08:13 AM, Josh McKenzie wrote:
> > > First, I think we need to clarify if we're blocking on just testall +
> > > dtest or blocking on *all test jobs*.
> > >
> > > If the latter, upgrade tests are the elephant in the room:
> > > http://cassci.datastax.com/view/cassandra-3.11/job/cassandra-3.11_dtest_upgrade/lastCompletedBuild/testReport/
> > >
> > > Do we have confidence that the reported failures are all test problems
> > > and not w/Cassandra itself? If so, is that documented somewhere?
> > >
> > > On Mon, Jan 9, 2017 at 7:33 PM, Nate McCall <zznat...@gmail.com> wrote:
> > >
> > > > I'm not sure I understand the culmination of the past couple of
> > > > threads on this.
> > > >
> > > > With a situation like:
> > > > http://cassci.datastax.com/view/cassandra-3.11/job/cassandra-3.11_dtest/lastCompletedBuild/testReport/
> > > >
> > > > We have some sense of stability on what might be flaky tests(?).
> > > > Again, I'm not sure what our criteria are, specifically.
> > > >
> > > > Basically, it feels like we are in a stalemate right now. How do we
> > > > move forward?
> > > >
> > > > -Nate
> > > >
> >
