+1

On Wed, Jul 1, 2020 at 1:55 PM Jon Haddad <j...@jonhaddad.com> wrote:

> I think coming up with a formal, comprehensive guide for determining
> whether we can merge these sorts of high-impact features is a great idea.
>
> I'm also on board with applying the same standard to the experimental
> features.
>
> On Wed, Jul 1, 2020 at 1:45 PM Joshua McKenzie <jmcken...@apache.org> wrote:
>
> > Which questions and how we frame it aside, it's clear we have some
> > foundational thinking to do, articulate, and agree upon as a project
> > before we can reasonably make decisions about deprecation, promotion, or
> > inclusion of features in the project.
> >
> > Is that fair?
> >
> > If so, I propose we set this thread down for now in deference to us
> > articulating the quality bar we set for features in the DB and how we
> > achieve it, and then retroactively applying that bar to existing
> > experimental features. Should we determine nobody is stepping up to
> > maintain an experimental feature in a reasonable time frame, we can cross
> > the bridge of weighing scale of adoption and the perceived impact of
> > deprecation and removal on the user community at that time.
> >
> > > On Wed, Jul 1, 2020 at 9:59 AM Benedict Elliott Smith <bened...@apache.org> wrote:
> >
> > > I humbly suggest these are the wrong questions to ask.  Instead, two
> > > sides of just one question matter: how did we miss these problems, and
> > > what would we have needed to do procedurally to not have missed them?
> > > Whatever it is, we need to do it now to have confidence other things
> > > were not missed, as well as for all future features.
> > >
> > > We should start by producing a list of what we think is necessary for
> > > deploying successful features.  We can then determine what items are
> > > missing that would have been needed to catch a problem.  Obvious things
> > > are:
> > >
> > >   * integration tests at scale
> > >   * integration tests with a variety of extreme workloads
> > >   * integration tests with various cluster topologies
> > >   * data integrity tests as part of the above
> > >   * all of the above as reproducible tests incorporated into the source
> > >     tree (see the sketch below)
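> > >
> > > For the last item, a minimal sketch of the shape I have in mind, using
> > > the in-JVM dtest API we already ship in the source tree. The class name,
> > > schema, config flag, and polling here are illustrative, not an existing
> > > test:
> > >
> > > import org.apache.cassandra.distributed.Cluster;
> > > import org.apache.cassandra.distributed.api.ConsistencyLevel;
> > > import org.junit.Assert;
> > > import org.junit.Test;
> > >
> > > public class MVDataIntegrityTest
> > > {
> > >     @Test
> > >     public void baseWritesAreVisibleThroughTheView() throws Throwable
> > >     {
> > >         // Three nodes, so view replicas generally differ from base replicas.
> > >         try (Cluster cluster = Cluster.build(3)
> > >                 .withConfig(c -> c.set("enable_materialized_views", true))
> > >                 .start())
> > >         {
> > >             cluster.schemaChange("CREATE KEYSPACE ks WITH replication = " +
> > >                 "{'class': 'SimpleStrategy', 'replication_factor': 3}");
> > >             cluster.schemaChange("CREATE TABLE ks.base (pk int, ck int, " +
> > >                 "v int, PRIMARY KEY (pk, ck))");
> > >             cluster.schemaChange("CREATE MATERIALIZED VIEW ks.mv AS " +
> > >                 "SELECT * FROM ks.base WHERE pk IS NOT NULL " +
> > >                 "AND ck IS NOT NULL AND v IS NOT NULL " +
> > >                 "PRIMARY KEY (v, pk, ck)");
> > >
> > >             for (int i = 0; i < 100; i++)
> > >                 cluster.coordinator(1).execute(
> > >                     "INSERT INTO ks.base (pk, ck, v) VALUES (?, ?, ?)",
> > >                     ConsistencyLevel.QUORUM, i, i, i);
> > >
> > >             // View updates propagate asynchronously, so poll briefly for
> > >             // convergence before asserting data integrity: every base row
> > >             // must be readable through the view.
> > >             Object[][] rows = new Object[0][];
> > >             for (int attempt = 0; attempt < 30 && rows.length < 100; attempt++)
> > >             {
> > >                 Thread.sleep(100);
> > >                 rows = cluster.coordinator(1).execute(
> > >                     "SELECT * FROM ks.mv", ConsistencyLevel.QUORUM);
> > >             }
> > >             Assert.assertEquals(100, rows.length);
> > >         }
> > >     }
> > > }
> > >
> > > The same harness then extends to the other items on the list: larger
> > > topologies, extreme workloads, and fault injection, all reproducible
> > > from the tree.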
> > >
> > > We can then ensure Jira accurately represents all of the known issues
> > > with MVs (and other features).  This includes those that are poorly
> > > defined (such as "doesn't scale").
> > >
> > > Then we can look at all issues and ask: would this approach have caught
> > > it, and if not, what do we need to add to the guidelines to prevent a
> > > recurrence - and also to ensure this problem is unique?  In future we
> > > can ask, for bugs found in features built to these guidelines: why
> > > didn't the process catch this bug?  Do the guidelines need additional
> > > items, or greater specificity about how to meet given criteria?
> > >
> > > I do not think that data from deployments - even if reliably obtained -
> > > can tell us much besides which problems we prioritise.
> > >
> > > On 01/07/2020, 01:58, "joshua.mcken...@gmail.com" <joshua.mcken...@gmail.com> wrote:
> > >
> > >     It would be incredibly helpful for us to have some empirical data
> > >     and agreed upon terms and benchmarks to help us navigate discussions
> > >     like this:
> > >
> > >       * How widely used is a feature in C* deployments worldwide?
> > >       * What are the primary issues users face when deploying them?
> > >         Scaling them? During failure scenarios?
> > >       * What does the engineering effort to bridge these gaps look like?
> > >         Who will do that? On what time horizon?
> > >       * What does our current test coverage for this feature look like?
> > >       * What shape of defects are arising with the feature? In a
> > >         specific subsection of the module or usage?
> > >       * Do we have an agreed upon set of standards for labeling a
> > >         feature stable? As experimental? If not, how do we get there?
> > >       * What effort will it take to bridge from where we are to where we
> > >         agree we need to be? On what timeline is this acceptable?
> > >
> > >     I believe these are not only answerable questions, but fundamentally
> > >     the underlying themes our discussion alludes to. They’re also
> > >     questions that apply to a lot more than just MVs, and they tie into
> > >     what you’re speaking to above, Benedict.
> > >
> > >
> > >     > On Jun 30, 2020, at 8:32 PM, sankalp kohli <kohlisank...@gmail.com> wrote:
> > >     >
> > >     > I see this discussion as several decisions which can be made in
> > >     > small increments.
> > >     >
> > >     > 1. In release cycles, when can we propose a feature to be
> > >     > deprecated or marked experimental? Ideally a new feature should
> > >     > come out experimental if required, but we have several that are
> > >     > candidates now. We can work on integrating this into the release
> > >     > lifecycle doc we already have.
> > >     > 2. What is the process of making an existing feature experimental?
> > >     > How does it affect major releases around testing?
> > >     > 3. What is the process of deprecating/removing an experimental
> > >     > feature? (Assuming experimental features should be
> > >     > deprecated/removed.)
> > >     >
> > >     > Coming to MVs, I think we need more data before we can say we
> > >     > should deprecate them. Here are some questions which should be
> > >     > part of the deprecation process:
> > >     > 1. Talk to customers who use them and understand what the impact
> > >     > is. Give them a forum to talk about it.
> > >     > 2. Do we have enough resources to bring this feature out of the
> > >     > experimental feature list in the next one or two major releases?
> > >     > We cannot have too many experimental features in the database.
> > >     > Marking a feature experimental should not be a parking place for a
> > >     > non-functioning feature, but a place to hold it while we stabilize
> > >     > it.
> > >     >
> > >     >> On Tue, Jun 30, 2020 at 4:52 PM <joshua.mcken...@gmail.com> wrote:
> > >     >>
> > >     >> I followed up with the clarification about unit and dtests for
> > >     >> that reason, Dinesh. We test experimental features now.
> > >     >>
> > >     >> If we’re talking about adding experimental features to the 4.0
> > >     >> quality testing effort, how does that differ from just saying “we
> > >     >> won’t release until we’ve tested and stabilized these features
> > >     >> and they’re no longer experimental”?
> > >     >>
> > >     >> Maybe I’m just misunderstanding something here?
> > >     >>
> > >     >>>> On Jun 30, 2020, at 7:12 PM, Dinesh Joshi <djo...@apache.org> wrote:
> > >     >>>
> > >     >>>> On Jun 30, 2020, at 4:05 PM, Brandon Williams <dri...@gmail.com> wrote:
> > >     >>>>
> > >     >>>> Instead of ripping them out, we could instead disable them in
> > >     >>>> the yaml with big fat warning comments around them.  That way
> > >     >>>> people already using them can just enable them again, but it
> > >     >>>> will raise the bar for new users who ignore/miss the warnings
> > >     >>>> in the logs and just use them.
> > >     >>>
> > >     >>> Not a bad idea. Although the real issue is that users enable MVs
> > >     >>> on a 3-node cluster with a few megs of data and conclude that
> > >     >>> MVs will horizontally scale with the size of their data. This is
> > >     >>> what causes issues for users who naively roll them out in
> > >     >>> production and discover that MVs do not scale with their data
> > >     >>> growth. So whatever we do, the big fat warning should educate
> > >     >>> the unsuspecting operator.
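> > >     >>>
> > >     >>> For instance, something along these lines in cassandra.yaml (a
> > >     >>> sketch: the flag name matches the 4.0-era defaults file, and the
> > >     >>> comment wording is only illustrative):
> > >     >>>
> > >     >>> # WARNING: materialized views are experimental and are not
> > >     >>> # recommended for production use. They will appear to work on a
> > >     >>> # small cluster with a few megabytes of data, but view
> > >     >>> # maintenance does NOT scale the way base tables do, and
> > >     >>> # failure/repair scenarios can leave a view inconsistent with
> > >     >>> # its base table. Enable only after testing at your expected
> > >     >>> # data volume and topology.
> > >     >>> enable_materialized_views: false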
> > >     >>>
> > >     >>> Dinesh