It would be incredibly helpful for us to have some empirical data and agreed
upon terms and benchmarks to help us navigate discussions like this:
* How widely used is a feature in C* deployments worldwide?
* What are the primary issues users face when deploying them? Scaling them?
I see this discussion as several decisions which can be made in small
increments.
1. In release cycles, when can we propose a feature to be deprecated or
marked experimental? Ideally a new feature should come out as experimental if
required, but we have several that are candidates now. We can work on
> On Jun 30, 2020, at 4:52 PM, joshua.mcken...@gmail.com wrote:
>
> I followed up with the clarification about unit and dtests for that reason
> Dinesh. We test experimental features now.
I hit send before seeing your clarification. I personally feel that unit and
dtests may not surface
>>> Instead of ripping it out, we could instead disable them in the yaml
>>> with big fat warning comments around it.
FYI we have already disabled use of materialized views, SASI, and transient
replication by default in 4.0
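For context, those defaults in 4.0 are plain opt-in flags in cassandra.yaml. A sketch of what the relevant settings look like (flag names from memory of trunk; verify against the yaml shipped with your build):

```yaml
# Materialized views are experimental and disabled by default in 4.0.
enable_materialized_views: false

# SASI indexes are likewise experimental and off by default.
enable_sasi_indexes: false

# Transient replication is experimental and off by default.
enable_transient_replication: false
```

Operators already relying on these features can flip the flags back to true, which matches the "raise the bar for new users without breaking existing ones" intent.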
I followed up with the clarification about unit and dtests for that reason
Dinesh. We test experimental features now.
If we’re talking about adding experimental features to the 4.0 quality testing
effort, how does that differ from just saying “we won’t release until we’ve
tested and stabilized
Thank you to all those who responded.
One potential way we could speed up sussing out issues is running regular "Bug
Bashes" with the help of the user community. We could periodically post stats
and recognize folks who contribute the most issues. This would help gain
confidence in the builds
> On Jun 30, 2020, at 4:05 PM, Brandon Williams wrote:
>
> Instead of ripping it out, we could instead disable them in the yaml
> with big fat warning comments around it. That way people already
> using them can just enable them again, but it will raise the bar for
> new users who ignore/miss
On Tue, Jun 30, 2020 at 5:41 PM wrote:
> Given we’re at a place where things like MV’s and sasi are backing production
> cases (power users one would hope or smaller use cases) I don’t think ripping
> those features out and further excluding users from the ecosystem is the
> right move.
> On Jun 30, 2020, at 3:40 PM, joshua.mcken...@gmail.com wrote:
>
> I don’t think we should hold up releases on testing experimental features.
> Especially with how many of them we have.
>
> Given we’re at a place where things like MV’s and sasi are backing production
> cases (power users one would hope or smaller use cases) I don’t think ripping
> those features out and further excluding users from the ecosystem is the
> right move.
Just to clarify one thing. I understand experimental features to be alpha /
beta quality, and as such the guarantees of correctness to differ from the
other features presented in the database. We should likely articulate this in
the wiki and docs if we have not.
In the case of mv’s, since they
I don’t think we should hold up releases on testing experimental features.
Especially with how many of them we have.
Agree re: needing a more quantitative bar for new additions which we can also
retroactively apply to experimental features to bring up to speed and
eventually graduate. Probably
> On Jun 30, 2020, at 3:27 PM, David Capwell wrote:
>
> If that is the case then shouldn't we add MV to "4.0 Quality: Components
> and Test Plans" (CASSANDRA-15536)? It is currently missing, so adding it
> to the testing road map would be a clear sign that someone is planning to
> champion and own this feature; if people feel that this is a broken
> feature,
On Wed, Jul 1, 2020 at 10:27 AM David Capwell wrote:
> If that is the case then shouldn't we add MV to "4.0 Quality: Components
> and Test Plans" (CASSANDRA-15536)? It is currently missing, so adding it
> to the testing road map would be a clear sign that someone is planning to
> champion and own this feature; if people feel that this is a broken
> feature,
If that is the case then shouldn't we add MV to "4.0 Quality: Components
and Test Plans" (CASSANDRA-15536)? It is currently missing, so adding it
to the testing road map would be a clear sign that someone is planning to
champion and own this feature; if people feel that this is a broken
feature,
I think the point is that we need to have a clear plan of action to bring
features up to an acceptable standard. That also implies a need to agree how
we determine if a feature has reached an acceptable standard - both going
forwards and retrospectively. For those that don't reach that
Let's forget I said anything about release cadence. That's another thread
entirely and a good deep conversation to explore. Don't want to derail.
If there's a question about "is anyone stepping forward to maintain MV's",
I can say with certainty that at least one full time contributor I work
with
I don't think we can realistically expect majors, with the deprecation cycle
they entail, to come every six months. If nothing else, we would have too many
versions to maintain at once. I personally think all the project needs on that
front is clearer roadmapping at the start of a release
On Tue, Jun 30, 2020 at 1:46 PM Joshua McKenzie
wrote:
> We're just short of 98 tickets on the component since its original merge
> so at least *some* work has been done to stabilize them. Not to say I'm
> endorsing running them at massive scale today without knowing what you're
> doing, to be
I think, just as importantly, we also need to grapple with what went wrong when
features landed this way. These were not isolated occurrences, which suggests
structural issues were at play.
I'm not sure if a retrospective is viable with this organisational structure,
but we can perhaps
Seems like a reasonable point of view to me Sankalp. I'd also suggest we
try to find other sources of data than just the user ML, like searching on
github for instance. A collection of imperfect metrics beats just one in my
experience.
Though I would ask why we're having this discussion this late
> So from my PoV, I'm against us just voting to deprecate and remove without
> going into more depth into the current state of things and what options are
> on the table, since people will continue to build MV's at the client level
> which, in theory, should have worse correctness and performance
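To make "MV's at the client level" concrete: the application writes to the base table and then manually maintains its own denormalized lookup table, with no server-side guarantee that the two stay in sync. A minimal sketch in plain Python (dicts standing in for Cassandra tables; all names hypothetical):

```python
# Base table and a hand-maintained "view", keyed differently.
users_by_id = {}      # base table: user_id -> row
users_by_email = {}   # client-maintained view: email -> row

def upsert_user(user_id, email, name):
    """Write the base row, then mirror it into the view table.

    Server-side MVs do the stale-entry cleanup for you; a client-level
    view has to remember it on every write path, and a failure between
    the two writes leaves the tables inconsistent.
    """
    old = users_by_id.get(user_id)
    if old is not None and old["email"] != email:
        del users_by_email[old["email"]]  # drop the stale view entry
    row = {"user_id": user_id, "email": email, "name": name}
    users_by_id[user_id] = row
    users_by_email[email] = row

upsert_user(1, "a@example.com", "Ada")
upsert_user(1, "b@example.com", "Ada")  # email changed; old view entry removed
```

This is exactly the bookkeeping that tends to be done wrong in applications, which is the correctness argument against pushing MV maintenance to clients.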
We're just short of 98 tickets on the component since its original merge
so at least *some* work has been done to stabilize them. Not to say I'm
endorsing running them at massive scale today without knowing what you're
doing, to be clear. They are perhaps our largest loaded gun of a feature of
+1
On Tue, Jun 30, 2020 at 2:44 PM Jon Haddad wrote:
>
> A couple days ago when writing a separate email I came across this DataStax
> blog post discussing MVs [1]. Imagine my surprise when I noticed the date
> was five years ago...
>
> While at TLP, I helped numerous customers move off of MVs, mostly because
> they affected stability of clusters in a horrific
> On Jun 30, 2020, at 12:43 PM, Jon Haddad wrote:
>
> As we move forward with the 4.0 release, we should consider this an
> opportunity to deprecate materialized views, and remove them in 5.0. We
> should take this opportunity to learn from the mistake and raise the bar
> for new features to
+1 for deprecation and removal (assuming a credible plan to fix them doesn't
materialize)
> On Jun 30, 2020, at 12:43 PM, Jon Haddad wrote:
>
> A couple days ago when writing a separate email I came across this DataStax
> blog post discussing MVs [1]. Imagine my surprise when I noticed the date
> was five years ago...
A couple days ago when writing a separate email I came across this DataStax
blog post discussing MVs [1]. Imagine my surprise when I noticed the date
was five years ago...
While at TLP, I helped numerous customers move off of MVs, mostly because
they affected stability of clusters in a horrific
It is a good catch, Mick. :-)
I will triage those tickets to be sure that our view of things is accurate.
On Tue, Jun 30, 2020 at 11:38 AM Berenguer Blasi
wrote:
> That's a very good point. At the risk of saying something silly or being
> captain obvious, as I am not familiar with the project dynamics
That's a very good point. At the risk of saying something silly or being
captain obvious, as I am not familiar with the project dynamics, there
should be a periodic 'backlog triage' or similar. Otherwise we'll have
the impression we have just a handful of pending issues while another
10x packet is
That is a good finger-in-the-air starting point, imo. We'd have to adjust
the backing filter to reflect exactly what we want. But we have the data
and a graph report available already at hand, which is good :-)
On 30/6/20 11:09, Benjamin Lerer wrote:
>> It would be nice to have a graph on our
>
> Berenguer pointed out to me that we already have a graph to track those
> things:
>
> It would be nice to have a graph on our weekly status of the number of
> issues reported on 4.0. I feel like having a visual representation of the
> number of bugs on 4.0 over time would be really helpful to give us a
> feeling of the progress on its stability.
>
Berenguer pointed out to me
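For what it's worth, the kind of over-time bug count being described could be backed by a simple JIRA filter; a hypothetical JQL starting point (field values illustrative, to be adjusted to whatever the existing report actually uses):

```
project = CASSANDRA AND issuetype = Bug AND fixVersion = 4.0 ORDER BY created ASC
```

Feeding a filter like that into JIRA's Created vs. Resolved chart gadget would give the visual trend Benjamin describes.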
Thanks a lot for starting this thread Dinesh.
> As a baseline expectation, we thought big users of Cassandra should be
> running the latest trunk internally and testing it out for their particular
> use cases. We wanted them to file as many jiras as possible based on their
> experience. Operations