Re: NGCC 2018?

2018-07-24 Thread Nate McCall
This was discussed amongst the PMC recently. We did not come to a
conclusion and there were not terribly strong feelings either way.

I don't feel like we need to hustle to get "NGCC" in place,
particularly given our decided focus on 4.0. However, that should not
stop us from doing an additional 'c* developer' event in September to
coincide with Distributed Data Summit.

On Wed, Jul 25, 2018 at 5:03 AM, Patrick McFadin  wrote:
> Ben,
>
> Lynn Bender had offered a space the day before Distributed Data Summit in
> September (http://distributeddatasummit.com/) since we are both platinum
> sponsors. I thought he and Nate had talked about that being a good place
> for NGCC since many of us will be in town already.
>
> Nate, now that I've spoken for you, you can clarify, :D
>
> Patrick
>




Re: reroll the builds?

2018-07-24 Thread dinesh.jo...@yahoo.com.INVALID
If that's the case, I'm +1 on rerolling the builds.
Dinesh 

On Tuesday, July 24, 2018, 9:18:14 AM PDT, Jason Brown 
 wrote:  
 
I did run the dtests against the last release SHAs (3.0.16 and 3.11.2).
Notes about those runs are all the way at the bottom of the gist. CircleCI
URLs: https://circleci.com/workflow-run/5a1df5a1-f0c1-4ab4-a7db-e5551e7a5d38
/ https://circleci.com/workflow-run/a4369ab0-ae11-497a-8e10-de3995d10f25.

The current HEAD of the 3.0 & 3.11 branches has significantly fewer failing
dtests, and the failing tests on HEAD are a subset of those from the last
release.
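
A quick way to sanity-check that subset claim, assuming each CircleCI run's
failing test names have been exported to a plain-text file (the file names
below are hypothetical), is a small set comparison in Python:

    # Compare failing dtest names between a HEAD run and the previous release run.
    # Assumes one test identifier per line in each (hypothetical) results file.
    def load_failures(path):
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    head = load_failures("head_failures.txt")        # exported from the HEAD run
    previous = load_failures("3.11.2_failures.txt")  # exported from the 3.11.2 run

    print(f"HEAD: {len(head)} failing, previous release: {len(previous)} failing")
    print("HEAD failures are a subset of the previous release's:", head <= previous)

    # Anything listed here would be a regression introduced since the last release.
    for test in sorted(head - previous):
        print("new failure:", test)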


On Tue, Jul 24, 2018 at 9:03 AM, dinesh.jo...@yahoo.com.INVALID <
dinesh.jo...@yahoo.com.invalid> wrote:

> Hi Jason,
> I agree - we should release with the data loss bug fix. I went over the
> gist - apart from the Python errors and test teardown failures, there seem
> to be a few failures that look legitimate. Any chance you can run the
> dtests on the previous release SHAs and compare the dtest failures? If
> they're the same / similar, we know at least we're at parity with the
> previous release :)
> Dinesh
>
>    On Tuesday, July 24, 2018, 8:18:50 AM PDT, Jason Brown <
> jasedbr...@gmail.com> wrote:
>
>  TL;DR We are in a better place than we were for the 3.0.16 and 3.11.2
> releases. The current fails are not fatal, although they warrant
> investigation. My opinion is that due to the critical data loss bugs that are
> fixed by CASSANDRA-14513 and CASSANDRA-14515, we should cut the builds now.
>
> I've run the HEAD of the 3.0 and 3.11 branches vs the 3.0.16 and 3.11.2
> release SHAs, and there are far fewer failing dtests now. In comparison:
>
> - 3.11
> -- HEAD - 5-6 failing tests
> -- 3.11.2 - 18-20 failures
>
> - 3.0
> -- HEAD - 14-16 failures
> -- 3.0.16 - 22-25 failures
>
> The raw dump of my work can be found here:
> https://gist.github.com/jasobrown/e7ecf6d0bf875d1f4a08ee06ac7eaba0. I've
> applied no effort to clean it up, but it's available (includes links to the
> circleci runs). I haven't completed an exhaustive analysis of the failures
> to see how far they go back as things become tricky (or, at least, very
> time intensive to research) with the pytest/python-3 update with
> CASSANDRA-14134. Thus some of the failures might be in the dtests
> themselves (I suspect a couple of the failures are), but most are probably
> legit failures.
>
> As this thread is about cutting the releases, I'll save any significant
> analysis for a followup thread. I will say that the current failures are a
> subset of the previous release's failures, and those failures are not data
> loss bugs.
>
> Overall, I feel far more comfortable getting the data loss fixes out
> without any further delay than waiting for a few minor fixes. I will triage
> the dtest failures over the coming days. There are some open tickets, and
> I'll try to corral those with any new ones.
>
> Thanks,
>
> -Jason
>
>
> On Mon, Jul 23, 2018 at 10:26 AM, dinesh.jo...@yahoo.com.INVALID <
> dinesh.jo...@yahoo.com.invalid> wrote:
>
> > I can help out with the triage / rerunning dtests if needed.
> > Dinesh
> >
> >    On Monday, July 23, 2018, 10:22:18 AM PDT, Jason Brown <
> > jasedbr...@gmail.com> wrote:
> >
> >  I spoke with some people over here, and I'm going to spend a day doing a
> > quick triage of the failing dtests. There are some fixes for data loss
> bugs
> > that are critical to get out in these builds, so I'll ensure the current
> > failures are within an acceptable level of flakiness in order to
> unblock
> > those fixes.
> >
> > Will have an update shortly ...
> >
> > -Jason
> >
> > On Mon, Jul 23, 2018 at 9:18 AM, Jason Brown 
> wrote:
> >
> > > Hi all,
> > >
> > > First, thanks Joey for running the tests. Your pass/fail counts are
> > > basically in line with what I've seen for the last several months.
> > (I
> > > don't have an aggregated list anywhere, just observations from recent
> > runs).
> > >
> > > Second, it's beyond me why there's such inertia to actually cutting a
> > > release. We're getting up to almost *six months* since the last
> release.
> > > Are there any grand objections at this point?
> > >
> > > Thanks,
> > >
> > > -Jason
> > >
> > >
> > > On Tue, Jul 17, 2018 at 4:01 PM, Joseph Lynch 
> > > wrote:
> > >
> > >> We ran the tests against 3.0, 2.2 and 3.11 using circleci and there
> are
> > >> various failing dtests but all three have green unit tests.
> > >>
> > >> 3.11.3 tentative (31d5d87, test branch
> > >>  > >> cassandra_3.11_temp_testing>,
> > >> unit tests 
> > >> pass, 5
> > >>  and 6
> > >>  > >> tests/containers/8>
> > >> dtest failures)
> > >> 3.0.17 tentative (d52c7b8, test branch
> > >>  >,
> > >> unit
> > >> tests 

Re: NGCC 2018?

2018-07-24 Thread Ben Bromhead
That was a factor in our thinking and my suggested timing/city, but as you
know such an event is more than just space in a conference room :)

On Tue, Jul 24, 2018 at 1:03 PM Patrick McFadin  wrote:

> Ben,
>
> Lynn Bender had offered a space the day before Distributed Data Summit in
> September (http://distributeddatasummit.com/) since we are both platinum
> sponsors. I thought he and Nate had talked about that being a good place
> for NGCC since many of us will be in town already.
>
> Nate, now that I've spoken for you, you can clarify, :D
>
> Patrick
>
>
> On Mon, Jul 23, 2018 at 2:25 PM Ben Bromhead  wrote:
>
> > The year has gotten away from us a little bit, but now is as good a time
> as
> > any to put out a general call for interest in an NGCC this year.
> >
> > Last year Gary and Eric did an awesome job organizing it in San Antonio.
> > This year it might be a good idea to do it in another city?
> >
> > We at Instaclustr are happy to sponsor/organize/run it, but ultimately
> this
> > is a community event and we only want to do it if there is a strong
> desire
> > to attend from the community and it meets the wider needs.
> >
> > Here are a few thoughts we have had in no particular order:
> >
> >- I was thinking it might be worth doing it in SF/Bay Area around the
> >dates of distributed data day (14th of September) as I know a number
> of
> >folks will be in town for it.
> >- Typically NGCC has focused on being a single day, single track
> >conference with scheduled sessions and an unconference set of ad-hoc
> > talks
> >at the end. It may make sense to change this up given the pending
> freeze
> >(maybe make this more like a commit/review fest)? Or keep it in the
> same
> >format but focus on the 4.0 work at hand.
> >- Any community members who want to get involved again in the more
> >organizational side of it (Gary, Eric)?
> >- Any other sponsors (doesn't have to be monetary, can be space,
> >resource etc) who want to get involved?
> >
> > If folks are generally happy with the end approach we'll post details as
> > soon as possible (given it's July right now)!
> >
> > Ben
> >
> >
> > --
> > Ben Bromhead
> > CTO | Instaclustr 
> > +1 650 284 9692
> > Reliability at Scale
> > Cassandra, Spark, Elasticsearch on AWS, Azure, GCP and Softlayer
> >
>
-- 
Ben Bromhead
CTO | Instaclustr 
+1 650 284 9692
Reliability at Scale
Cassandra, Spark, Elasticsearch on AWS, Azure, GCP and Softlayer


Re: NGCC 2018?

2018-07-24 Thread Patrick McFadin
Ben,

Lynn Bender had offered a space the day before Distributed Data Summit in
September (http://distributeddatasummit.com/) since we are both platinum
sponsors. I thought he and Nate had talked about that being a good place
for NGCC since many of us will be in town already.

Nate, now that I've spoken for you, you can clarify, :D

Patrick


On Mon, Jul 23, 2018 at 2:25 PM Ben Bromhead  wrote:

> The year has gotten away from us a little bit, but now is as good a time as
> any to put out a general call for interest in an NGCC this year.
>
> Last year Gary and Eric did an awesome job organizing it in San Antonio.
> This year it might be a good idea to do it in another city?
>
> We at Instaclustr are happy to sponsor/organize/run it, but ultimately this
> is a community event and we only want to do it if there is a strong desire
> to attend from the community and it meets the wider needs.
>
> Here are a few thoughts we have had in no particular order:
>
>- I was thinking it might be worth doing it in SF/Bay Area around the
>dates of distributed data day (14th of September) as I know a number of
>folks will be in town for it.
>- Typically NGCC has focused on being a single day, single track
>conference with scheduled sessions and an unconference set of ad-hoc
> talks
>at the end. It may make sense to change this up given the pending freeze
>(maybe make this more like a commit/review fest)? Or keep it in the same
>format but focus on the 4.0 work at hand.
>- Any community members who want to get involved again in the more
>organizational side of it (Gary, Eric)?
>- Any other sponsors (doesn't have to be monetary, can be space,
>resource etc) who want to get involved?
>
> If folks are generally happy with the end approach we'll post details as
> soon as possible (given it's July right now)!
>
> Ben
>
>
> --
> Ben Bromhead
> CTO | Instaclustr 
> +1 650 284 9692
> Reliability at Scale
> Cassandra, Spark, Elasticsearch on AWS, Azure, GCP and Softlayer
>


Re: NGCC 2018?

2018-07-24 Thread Jeff Beck
I am an organizer of a conference (GR8Conf US) here in Minneapolis. I
would be happy to assist with organizing, and we also have recording
equipment to support up to 3 tracks. We already operate under a non-profit
if we need a legal entity.

If it is in the Bay Area we can't help much with space, but we have contacts
here if there is any desire to have it in the Midwest.

Jeff

On Mon, Jul 23, 2018 at 4:25 PM Ben Bromhead  wrote:

> The year has gotten away from us a little bit, but now is as good a time as
> any to put out a general call for interest in an NGCC this year.
>
> Last year Gary and Eric did an awesome job organizing it in San Antonio.
> This year it might be a good idea to do it in another city?
>
> We at Instaclustr are happy to sponsor/organize/run it, but ultimately this
> is a community event and we only want to do it if there is a strong desire
> to attend from the community and it meets the wider needs.
>
> Here are a few thoughts we have had in no particular order:
>
>- I was thinking it might be worth doing it in SF/Bay Area around the
>dates of distributed data day (14th of September) as I know a number of
>folks will be in town for it.
>- Typically NGCC has focused on being a single day, single track
>conference with scheduled sessions and an unconference set of ad-hoc
> talks
>at the end. It may make sense to change this up given the pending freeze
>(maybe make this more like a commit/review fest)? Or keep it in the same
>format but focus on the 4.0 work at hand.
>- Any community members who want to get involved again in the more
>organizational side of it (Gary, Eric)?
>- Any other sponsors (doesn't have to be monetary, can be space,
>resource etc) who want to get involved?
>
> If folks are generally happy with the end approach we'll post details as
> soon as possible (given it's July right now)!
>
> Ben
>
>
> --
> Ben Bromhead
> CTO | Instaclustr 
> +1 650 284 9692 <(650)%20284-9692>
> Reliability at Scale
> Cassandra, Spark, Elasticsearch on AWS, Azure, GCP and Softlayer
>


Re: reroll the builds?

2018-07-24 Thread Jason Brown
I did run the dtests against the last release SHAs (3.0.16 and 3.11.2).
Notes about those runs are all the way at the bottom of the gist. CircleCI
URLs: https://circleci.com/workflow-run/5a1df5a1-f0c1-4ab4-a7db-e5551e7a5d38
/ https://circleci.com/workflow-run/a4369ab0-ae11-497a-8e10-de3995d10f25.

The current HEAD of the 3.0 & 3.11 branches has significantly fewer failing
dtests, and the failing tests on HEAD are a subset of those from the last
release.


On Tue, Jul 24, 2018 at 9:03 AM, dinesh.jo...@yahoo.com.INVALID <
dinesh.jo...@yahoo.com.invalid> wrote:

> Hi Jason,
> I agree - we should release with the data loss bug fix. I went over the
> gist - apart from the Python errors and test teardown failures, there seem
> to be a few failures that look legitimate. Any chance you can run the
> dtests on the previous release SHAs and compare the dtest failures? If
> they're the same / similar, we know at least we're at parity with the
> previous release :)
> Dinesh
>
> On Tuesday, July 24, 2018, 8:18:50 AM PDT, Jason Brown <
> jasedbr...@gmail.com> wrote:
>
>  TL;DR We are in a better place than we were for the 3.0.16 and 3.11.2
> releases. The current fails are not fatal, although they warrant
> investigation. My opinion is that due to the critical data loss bugs that are
> fixed by CASSANDRA-14513 and CASSANDRA-14515, we should cut the builds now.
>
> I've run the HEAD of the 3.0 and 3.11 branches vs the 3.0.16 and 3.11.2
> release SHAs, and there are far fewer failing dtests now. In comparison:
>
> - 3.11
> -- HEAD - 5-6 failing tests
> -- 3.11.2 - 18-20 failures
>
> - 3.0
> -- HEAD - 14-16 failures
> -- 3.0.16 - 22-25 failures
>
> The raw dump of my work can be found here:
> https://gist.github.com/jasobrown/e7ecf6d0bf875d1f4a08ee06ac7eaba0. I've
> applied no effort to clean it up, but it's available (includes links to the
> circleci runs). I haven't completed an exhaustive analysis of the failures
> to see how far they go back as things become tricky (or, at least, very
> time intensive to research) with the pytest/python-3 update with
> CASSANDRA-14134. Thus some of the failures might be in the dtests
> themselves (I suspect a couple of the failures are), but most are probably
> legit failures.
>
> As this thread is about cutting the releases, I'll save any significant
> analysis for a followup thread. I will say that the current failures are a
> subset of the previous release's failures, and those failures are not data
> loss bugs.
>
> Overall, I feel far more comfortable getting the data loss fixes out
> without any further delay than waiting for a few minor fixes. I will triage
> the dtest failures over the coming days. There are some open tickets, and
> I'll try to corral those with any new ones.
>
> Thanks,
>
> -Jason
>
>
> On Mon, Jul 23, 2018 at 10:26 AM, dinesh.jo...@yahoo.com.INVALID <
> dinesh.jo...@yahoo.com.invalid> wrote:
>
> > I can help out with the triage / rerunning dtests if needed.
> > Dinesh
> >
> >On Monday, July 23, 2018, 10:22:18 AM PDT, Jason Brown <
> > jasedbr...@gmail.com> wrote:
> >
> >  I spoke with some people over here, and I'm going to spend a day doing a
> > quick triage of the failing dtests. There are some fixes for data loss
> bugs
> > that are critical to get out in these builds, so I'll ensure the current
> > failures are within an acceptable level of flakiness in order to
> unblock
> > those fixes.
> >
> > Will have an update shortly ...
> >
> > -Jason
> >
> > On Mon, Jul 23, 2018 at 9:18 AM, Jason Brown 
> wrote:
> >
> > > Hi all,
> > >
> > > First, thanks Joey for running the tests. Your pass/fail counts are
> > > basically in line with what I've seen for the last several months.
> > (I
> > > don't have an aggregated list anywhere, just observations from recent
> > runs).
> > >
> > > Second, it's beyond me why there's such inertia to actually cutting a
> > > release. We're getting up to almost *six months* since the last
> release.
> > > Are there any grand objections at this point?
> > >
> > > Thanks,
> > >
> > > -Jason
> > >
> > >
> > > On Tue, Jul 17, 2018 at 4:01 PM, Joseph Lynch 
> > > wrote:
> > >
> > >> We ran the tests against 3.0, 2.2 and 3.11 using circleci and there
> are
> > >> various failing dtests but all three have green unit tests.
> > >>
> > >> 3.11.3 tentative (31d5d87, test branch
> > >>  > >> cassandra_3.11_temp_testing>,
> > >> unit tests 
> > >> pass, 5
> > >>  and 6
> > >>  > >> tests/containers/8>
> > >> dtest failures)
> > >> 3.0.17 tentative (d52c7b8, test branch
> > >>  >,
> > >> unit
> > >> tests  pass, 14
> > >>  and 15
> > >> 

Re: reroll the builds?

2018-07-24 Thread dinesh.jo...@yahoo.com.INVALID
Hi Jason,
I agree - we should release with the data loss bug fix. I went over the gist -
apart from the Python errors and test teardown failures, there seem to be a few 
failures that look legitimate. Any chance you can run the dtests on the 
previous release SHAs and compare the dtest failures? If they're the same / 
similar, we know at least we're at parity with the previous release :)
Dinesh 

On Tuesday, July 24, 2018, 8:18:50 AM PDT, Jason Brown 
 wrote:  
 
 TL;DR We are in a better place than we were for the 3.0.16 and 3.11.2
releases. The current fails are not fatal, although they warrant
investigation. My opinion is that due to the critical data loss bugs that are
fixed by CASSANDRA-14513 and CASSANDRA-14515, we should cut the builds now.

I've run the HEAD of the 3.0 and 3.11 branches vs the 3.0.16 and 3.11.2
release SHAs, and there are far fewer failing dtests now. In comparison:

- 3.11
-- HEAD - 5-6 failing tests
-- 3.11.2 - 18-20 failures

- 3.0
-- HEAD - 14-16 failures
-- 3.0.16 - 22-25 failures

The raw dump of my work can be found here:
https://gist.github.com/jasobrown/e7ecf6d0bf875d1f4a08ee06ac7eaba0. I've
applied no effort to clean it up, but it's available (includes links to the
circleci runs). I haven't completed an exhaustive analysis of the failures
to see how far they go back as things become tricky (or, at least, very
time intensive to research) with the pytest/python-3 update with
CASSANDRA-14134. Thus some of the failures might be in the dtests
themselves (I suspect a couple of the failures are), but most are probably
legit failures.

As this thread is about cutting the releases, I'll save any significant
analysis for a followup thread. I will say that the current failures are a
subset of the previous release's failures, and those failures are not data
loss bugs.

Overall, I feel far more comfortable getting the data loss fixes out
without any further delay than waiting for a few minor fixes. I will triage
the dtest failures over the coming days. There are some open tickets, and
I'll try to corral those with any new ones.

Thanks,

-Jason


On Mon, Jul 23, 2018 at 10:26 AM, dinesh.jo...@yahoo.com.INVALID <
dinesh.jo...@yahoo.com.invalid> wrote:

> I can help out with the triage / rerunning dtests if needed.
> Dinesh
>
>    On Monday, July 23, 2018, 10:22:18 AM PDT, Jason Brown <
> jasedbr...@gmail.com> wrote:
>
>  I spoke with some people over here, and I'm going to spend a day doing a
> quick triage of the failing dtests. There are some fixes for data loss bugs
> that are critical to get out in these builds, so I'll ensure the current
> failures are within an acceptable level of flakiness in order to unblock
> those fixes.
>
> Will have an update shortly ...
>
> -Jason
>
> On Mon, Jul 23, 2018 at 9:18 AM, Jason Brown  wrote:
>
> > Hi all,
> >
> > First, thanks Joey for running the tests. Your pass/fail counts are
> > basically in line with what I've seen for the last several months.
> (I
> > don't have an aggregated list anywhere, just observations from recent
> runs).
> >
> > Second, it's beyond me why there's such inertia to actually cutting a
> > release. We're getting up to almost *six months* since the last release.
> > Are there any grand objections at this point?
> >
> > Thanks,
> >
> > -Jason
> >
> >
> > On Tue, Jul 17, 2018 at 4:01 PM, Joseph Lynch 
> > wrote:
> >
> >> We ran the tests against 3.0, 2.2 and 3.11 using circleci and there are
> >> various failing dtests but all three have green unit tests.
> >>
> >> 3.11.3 tentative (31d5d87, test branch
> >>  >> cassandra_3.11_temp_testing>,
> >> unit tests 
> >> pass, 5
> >>  and 6
> >>  >> tests/containers/8>
> >> dtest failures)
> >> 3.0.17 tentative (d52c7b8, test branch
> >> ,
> >> unit
> >> tests  pass, 14
> >>  and 15
> >>  dtest failures)
> >> 2.2.13 tentative (3482370, test branch
> >>  >> dra/tree/2.2-testing>,
> >> unit tests 
> >> pass, 9
> >>  and 10
> >>  >> tests/containers/8>
> >> dtest failures)
> >>
> >> It looks like many (~6) of the failures in 3.0.x are related to
> >> snapshot_test.TestArchiveCommitlog. I'm not sure if this is abnormal.
> >>
> >> I don't see a good historical record to know if these are just flakes,
> but
> >> if we only want to go on green builds perhaps we can either disable the
> >> flaky tests or fix them 

Re: reroll the builds?

2018-07-24 Thread Jason Brown
TL;DR We are in a better place than we were for the 3.0.16 and 3.11.2
releases. The current fails are not fatal, although they warrant
investigation. My opinion is that due to the critical data loss bugs that are
fixed by CASSANDRA-14513 and CASSANDRA-14515, we should cut the builds now.

I've run the HEAD of the 3.0 and 3.11 branches vs the 3.0.16 and 3.11.2
release SHAs, and there are far fewer failing dtests now. In comparison:

- 3.11
-- HEAD - 5-6 failing tests
-- 3.11.2 - 18-20 failures

- 3.0
-- HEAD - 14-16 failures
-- 3.0.16 - 22-25 failures

The raw dump of my work can be found here:
https://gist.github.com/jasobrown/e7ecf6d0bf875d1f4a08ee06ac7eaba0. I've
applied no effort to clean it up, but it's available (includes links to the
circleci runs). I haven't completed an exhaustive analysis of the failures
to see how far they go back as things become tricky (or, at least, very
time intensive to research) with the pytest/python-3 update with
CASSANDRA-14134. Thus some of the failures might be in the dtests
themselves (I suspect a couple of the failures are), but most are probably
legit failures.

As this thread is about cutting the releases, I'll save any significant
analysis for a followup thread. I will say that the current failures are a
subset of the previous release's failures, and those failures are not data
loss bugs.

Overall, I feel far more comfortable getting the data loss fixes out
without any further delay than waiting for a few minor fixes. I will triage
the dtest failures over the coming days. There are some open tickets, and
I'll try to corral those with any new ones.

Thanks,

-Jason
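
For the flakiness triage mentioned above, one rough approach is to re-run a
suspect dtest several times and record its pass rate. This is only a sketch,
assuming a pytest-based dtest checkout (post CASSANDRA-14134); the test
identifier is hypothetical, and real dtest invocations typically need extra
configuration flags (e.g. pointing at a built Cassandra tree):

    # Re-run a single pytest-based dtest N times to get a rough pass rate.
    # The test id is hypothetical; substitute a real failing dtest, and add
    # whatever extra pytest flags your dtest environment requires.
    import subprocess

    TEST_ID = "some_module_test.py::TestSomething::test_case"  # hypothetical
    RUNS = 10

    passes = 0
    for i in range(RUNS):
        # pytest exits with code 0 when all selected tests pass.
        result = subprocess.run(["pytest", TEST_ID], capture_output=True)
        passed = result.returncode == 0
        passes += passed
        print(f"run {i + 1}: {'pass' if passed else 'fail'}")

    print(f"{passes}/{RUNS} passed ({100.0 * passes / RUNS:.0f}% pass rate)")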


On Mon, Jul 23, 2018 at 10:26 AM, dinesh.jo...@yahoo.com.INVALID <
dinesh.jo...@yahoo.com.invalid> wrote:

> I can help out with the triage / rerunning dtests if needed.
> Dinesh
>
> On Monday, July 23, 2018, 10:22:18 AM PDT, Jason Brown <
> jasedbr...@gmail.com> wrote:
>
>  I spoke with some people over here, and I'm going to spend a day doing a
> quick triage of the failing dtests. There are some fixes for data loss bugs
> that are critical to get out in these builds, so I'll ensure the current
> failures are within an acceptable level of flakiness in order to unblock
> those fixes.
>
> Will have an update shortly ...
>
> -Jason
>
> On Mon, Jul 23, 2018 at 9:18 AM, Jason Brown  wrote:
>
> > Hi all,
> >
> > First, thanks Joey for running the tests. Your pass/fail counts are
> > basically in line with what I've seen for the last several months.
> (I
> > don't have an aggregated list anywhere, just observations from recent
> runs).
> >
> > Second, it's beyond me why there's such inertia to actually cutting a
> > release. We're getting up to almost *six months* since the last release.
> > Are there any grand objections at this point?
> >
> > Thanks,
> >
> > -Jason
> >
> >
> > On Tue, Jul 17, 2018 at 4:01 PM, Joseph Lynch 
> > wrote:
> >
> >> We ran the tests against 3.0, 2.2 and 3.11 using circleci and there are
> >> various failing dtests but all three have green unit tests.
> >>
> >> 3.11.3 tentative (31d5d87, test branch
> >>  >> cassandra_3.11_temp_testing>,
> >> unit tests 
> >> pass, 5
> >>  and 6
> >>  >> tests/containers/8>
> >> dtest failures)
> >> 3.0.17 tentative (d52c7b8, test branch
> >> ,
> >> unit
> >> tests  pass, 14
> >>  and 15
> >>  dtest failures)
> >> 2.2.13 tentative (3482370, test branch
> >>  >> dra/tree/2.2-testing>,
> >> unit tests 
> >> pass, 9
> >>  and 10
> >>  >> tests/containers/8>
> >> dtest failures)
> >>
> >> It looks like many (~6) of the failures in 3.0.x are related to
> >> snapshot_test.TestArchiveCommitlog. I'm not sure if this is abnormal.
> >>
> >> I don't see a good historical record to know if these are just flakes,
> but
> >> if we only want to go on green builds perhaps we can either disable the
> >> flaky tests or fix them up? If someone feels strongly we should fix
> >> particular tests up please link a jira and I can take a whack at some of
> >> them.
> >>
> >> -Joey
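
As an aside on spotting clusters like the snapshot_test.TestArchiveCommitlog
failures mentioned above: given a flat list of failing dtest identifiers (the
file name below is hypothetical), a quick per-module tally makes such
groupings obvious:

    # Tally failing dtests per module to spot clusters such as the ~6
    # snapshot_test.TestArchiveCommitlog failures noted above. Assumes one
    # "module::Class::test_name"-style identifier per line in a hypothetical file.
    from collections import Counter

    with open("3.0_failures.txt") as f:  # hypothetical export of failing tests
        failures = [line.strip() for line in f if line.strip()]

    by_module = Counter(test.split("::")[0] for test in failures)

    for module, count in by_module.most_common():
        print(f"{count:3d}  {module}")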
> >>
> >> On Tue, Jul 17, 2018 at 9:35 AM Michael Shuler 
> >> wrote:
> >>
> >> > On 07/16/2018 11:27 PM, Jason Brown wrote:
> >> > > Hey all,
> >> > >
> >> > > The recent builds were -1'd, but it appears the issues have been
> >> resolved
> >> > > (2.2.13 with CASSANDRA-14423, and 3.0.17 / 3.11.3 reverting
> >> > >