Re: Proposal to retroactively mark materialized views experimental

2017-10-01 Thread Marcus Eriksson
I was just thinking that we should try really hard to avoid adding
experimental features - they are experimental due to lack of testing right?
There should be a clear path to making the feature non-experimental (or get
it removed) and having that path discussed on dev@ might give more
visibility to it.

I'm also struggling a bit to find good historic examples of "this would
have been better off as an experimental feature" - I used to think that it
would have been good to commit DTCS with some sort of experimental flag,
but that would not have made DTCS any better - it would have been better to
do more testing, realise that it does not work and then not commit it at
all of course.

Does anyone have good examples of features where it would have made sense
to commit them behind an experimental flag? SASI might be a good example,
but for MVs - if we knew how painful they would be, they really would not
have gotten committed at all, right?

/Marcus

On Sat, Sep 30, 2017 at 7:42 AM, Jeff Jirsa  wrote:

> Reviewers should be able to suggest when experimental is warranted, and
> conversation on dev+jira to justify when it’s transitioned from
> experimental to stable?
>
> We should remove the flag as soon as we’re (collectively) confident in a
> feature’s behavior - at least correctness, if not performance.
>
>
> > On Sep 29, 2017, at 10:31 PM, Marcus Eriksson  wrote:
> >
> > +1 on marking MVs experimental, but should there be some point in the
> > future where we consider removing them from the code base unless they
> have
> > gotten significant improvement as well?
> >
> > We probably need to enforce some kind of process for adding new
> > experimental features in the future - perhaps a mail like this one to
> dev@
> > motivating why it should be experimental?
> >
> > /Marcus
> >
> > On Sat, Sep 30, 2017 at 1:15 AM, Vinay Chella
> 
> > wrote:
> >
> >> We tried perf testing MVs internally here but did not see good results
> with
> >> it, hence paused its usage. +1 on tagging certain features which are not
> >> PROD ready or not stable enough.
> >>
> >> Regards,
> >> Vinay Chella
> >>
> >>> On Fri, Sep 29, 2017 at 7:22 PM, Ben Bromhead 
> wrote:
> >>>
> >>> I'm a fan of introducing experimental flags in general as well, +1
> >>>
> >>>
> >>>
>  On Fri, 29 Sep 2017 at 13:22 Jon Haddad  wrote:
> 
>  I’m very much +1 on this, and to new features in general.
> 
>  I think having a clear line in which we classify something as
> >> production
>  ready would be nice.  It would be great if committers were using the
>  feature in prod and could vouch for it’s stability.
> 
> > On Sep 29, 2017, at 1:09 PM, Blake Eggleston 
>  wrote:
> >
> > Hi dev@,
> >
> > I’d like to propose that we retroactively classify materialized views
> >>> as
>  an experimental feature, disable them by default, and require users to
>  enable them through a config setting before using.
> >
> > Materialized views have several issues that make them (effectively)
>  unusable in production. Some of the issues aren’t just implementation
>  problems, but problems with the design that aren’t easily fixed. It’s
>  unfair of us to make features available to users in this state without
>  providing a clear warning that bad or unexpected things are likely to
>  happen if they use it.
> >
> > Obviously, this isn’t great news for users that have already adopted
>  MVs, and I don’t have a great answer for that. I think that’s sort of
> a
>  sunk cost at this point. If they have any MV related problems, they’ll
> >>> have
>  them whether they’re marked experimental or not. I would expect this
> to
>  reduce the number of users adopting MVs in the future though, and if
> >> they
>  do, it would be opt-in.
> >
> > Once MVs reach a point where they’re usable in production, we can
> >>> remove
>  the flag. Specifics of how the experimental flag would work can be
> >>> hammered
>  out in a forthcoming JIRA, but I’d imagine it would just prevent users
> >>> from
>  creating new MVs, and maybe log warnings on startup for existing MVs
> if
> >>> the
>  flag isn’t enabled.
> >
> > Let me know what you think.
> >
> > Thanks,
> >
> > Blake
> 
> 
>  -
>  To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
>  For additional commands, e-mail: dev-h...@cassandra.apache.org
> 
>  --
> >>> Ben Bromhead
> >>> CTO | Instaclustr 
> >>> +1 650 284 9692
> >>> Reliability at Scale
> >>> Cassandra, Spark, Elasticsearch on AWS, Azure, GCP and Softlayer
> >>>
> >>
>
>
>


Re: Proposal to retroactively mark materialized views experimental

2017-10-01 Thread DuyHai Doan
How should we transition one feature from the "experimental" state to the
"production ready" state? On which criteria?



On Sun, Oct 1, 2017 at 12:12 PM, Marcus Eriksson  wrote:

> I was just thinking that we should try really hard to avoid adding
> experimental features - they are experimental due to lack of testing right?
> There should be a clear path to making the feature non-experimental (or get
> it removed) and having that path discussed on dev@ might give more
> visibility to it.
>
> I'm also struggling a bit to find good historic examples of "this would
> have been better off as an experimental feature" - I used to think that it
> would have been good to commit DTCS with some sort of experimental flag,
> but that would not have made DTCS any better - it would have been better to
> do more testing, realise that it does not work and then not commit it at
> all of course.
>
> Does anyone have good examples of features where it would have made sense
> to commit them behind an experimental flag? SASI might be a good example,
> but for MVs - if we knew how painful they would be, they really would not
> have gotten committed at all, right?
>
> /Marcus
>
> On Sat, Sep 30, 2017 at 7:42 AM, Jeff Jirsa  wrote:
>
> > Reviewers should be able to suggest when experimental is warranted, and
> > conversation on dev+jira to justify when it’s transitioned from
> > experimental to stable?
> >
> > We should remove the flag as soon as we’re (collectively) confident in a
> > feature’s behavior - at least correctness, if not performance.
> >
> >
> > > On Sep 29, 2017, at 10:31 PM, Marcus Eriksson 
> wrote:
> > >
> > > +1 on marking MVs experimental, but should there be some point in the
> > > future where we consider removing them from the code base unless they
> > have
> > > gotten significant improvement as well?
> > >
> > > We probably need to enforce some kind of process for adding new
> > > experimental features in the future - perhaps a mail like this one to
> > dev@
> > > motivating why it should be experimental?
> > >
> > > /Marcus
> > >
> > > On Sat, Sep 30, 2017 at 1:15 AM, Vinay Chella
> > 
> > > wrote:
> > >
> > >> We tried perf testing MVs internally here but did not see good results
> > with
> > >> it, hence paused its usage. +1 on tagging certain features which are
> not
> > >> PROD ready or not stable enough.
> > >>
> > >> Regards,
> > >> Vinay Chella
> > >>
> > >>> On Fri, Sep 29, 2017 at 7:22 PM, Ben Bromhead 
> > wrote:
> > >>>
> > >>> I'm a fan of introducing experimental flags in general as well, +1
> > >>>
> > >>>
> > >>>
> >  On Fri, 29 Sep 2017 at 13:22 Jon Haddad  wrote:
> > 
> >  I’m very much +1 on this, and to new features in general.
> > 
> >  I think having a clear line in which we classify something as
> > >> production
> >  ready would be nice.  It would be great if committers were using the
> >  feature in prod and could vouch for it’s stability.
> > 
> > > On Sep 29, 2017, at 1:09 PM, Blake Eggleston  >
> >  wrote:
> > >
> > > Hi dev@,
> > >
> > > I’d like to propose that we retroactively classify materialized
> views
> > >>> as
> >  an experimental feature, disable them by default, and require users
> to
> >  enable them through a config setting before using.
> > >
> > > Materialized views have several issues that make them (effectively)
> >  unusable in production. Some of the issues aren’t just
> implementation
> >  problems, but problems with the design that aren’t easily fixed.
> It’s
> >  unfair of us to make features available to users in this state
> without
> >  providing a clear warning that bad or unexpected things are likely
> to
> >  happen if they use it.
> > >
> > > Obviously, this isn’t great news for users that have already
> adopted
> >  MVs, and I don’t have a great answer for that. I think that’s sort
> of
> > a
> >  sunk cost at this point. If they have any MV related problems,
> they’ll
> > >>> have
> >  them whether they’re marked experimental or not. I would expect this
> > to
> >  reduce the number of users adopting MVs in the future though, and if
> > >> they
> >  do, it would be opt-in.
> > >
> > > Once MVs reach a point where they’re usable in production, we can
> > >>> remove
> >  the flag. Specifics of how the experimental flag would work can be
> > >>> hammered
> >  out in a forthcoming JIRA, but I’d imagine it would just prevent
> users
> > >>> from
> >  creating new MVs, and maybe log warnings on startup for existing MVs
> > if
> > >>> the
> >  flag isn’t enabled.
> > >
> > > Let me know what you think.
> > >
> > > Thanks,
> > >
> > > Blake
> > 
> > 
> >  
> > 
> >

Re: Proposal to retroactively mark materialized views experimental

2017-10-01 Thread Jeff Jirsa
Historical examples are anything that you wouldn’t bet your job on for the 
first release:

UDF/UDA in 2.2
Incremental repair - would have yanked the flag following 9143
SASI - probably still experimental 
Counters - all sorts of correctness issues originally, no longer true since the 
rewrite in 2.1
Vnodes - or at least shuffle
CDC - is the API going to change or is it good as-is? 
CQL - we’re on v3, what’s that say about v1?

Basically anything where we can’t definitively say “this feature is going to 
work for you, build your product on it” because companies around the world are 
trying to make that determination on their own, and they don’t have the same 
insight that the active committers have.

The transition out we could define as a fixed number of releases or a dev@ 
vote; I don’t think you’ll find something that applies to all experimental 
features, so being flexible is probably the best bet there


-- 
Jeff Jirsa


> On Oct 1, 2017, at 3:12 AM, Marcus Eriksson  wrote:
> 
> I was just thinking that we should try really hard to avoid adding
> experimental features - they are experimental due to lack of testing right?
> There should be a clear path to making the feature non-experimental (or get
> it removed) and having that path discussed on dev@ might give more
> visibility to it.
> 
> I'm also struggling a bit to find good historic examples of "this would
> have been better off as an experimental feature" - I used to think that it
> would have been good to commit DTCS with some sort of experimental flag,
> but that would not have made DTCS any better - it would have been better to
> do more testing, realise that it does not work and then not commit it at
> all of course.
> 
> Does anyone have good examples of features where it would have made sense
> to commit them behind an experimental flag? SASI might be a good example,
> but for MVs - if we knew how painful they would be, they really would not
> have gotten committed at all, right?
> 
> /Marcus
> 
>> On Sat, Sep 30, 2017 at 7:42 AM, Jeff Jirsa  wrote:
>> 
>> Reviewers should be able to suggest when experimental is warranted, and
>> conversation on dev+jira to justify when it’s transitioned from
>> experimental to stable?
>> 
>> We should remove the flag as soon as we’re (collectively) confident in a
>> feature’s behavior - at least correctness, if not performance.
>> 
>> 
>>> On Sep 29, 2017, at 10:31 PM, Marcus Eriksson  wrote:
>>> 
>>> +1 on marking MVs experimental, but should there be some point in the
>>> future where we consider removing them from the code base unless they
>> have
>>> gotten significant improvement as well?
>>> 
>>> We probably need to enforce some kind of process for adding new
>>> experimental features in the future - perhaps a mail like this one to
>> dev@
>>> motivating why it should be experimental?
>>> 
>>> /Marcus
>>> 
>>> On Sat, Sep 30, 2017 at 1:15 AM, Vinay Chella
>> 
>>> wrote:
>>> 
 We tried perf testing MVs internally here but did not see good results
>> with
 it, hence paused its usage. +1 on tagging certain features which are not
 PROD ready or not stable enough.
 
 Regards,
 Vinay Chella
 
> On Fri, Sep 29, 2017 at 7:22 PM, Ben Bromhead 
>> wrote:
> 
> I'm a fan of introducing experimental flags in general as well, +1
> 
> 
> 
>> On Fri, 29 Sep 2017 at 13:22 Jon Haddad  wrote:
>> 
>> I’m very much +1 on this, and to new features in general.
>> 
>> I think having a clear line in which we classify something as
 production
>> ready would be nice.  It would be great if committers were using the
>> feature in prod and could vouch for it’s stability.
>> 
>>> On Sep 29, 2017, at 1:09 PM, Blake Eggleston 
>> wrote:
>>> 
>>> Hi dev@,
>>> 
>>> I’d like to propose that we retroactively classify materialized views
> as
>> an experimental feature, disable them by default, and require users to
>> enable them through a config setting before using.
>>> 
>>> Materialized views have several issues that make them (effectively)
>> unusable in production. Some of the issues aren’t just implementation
>> problems, but problems with the design that aren’t easily fixed. It’s
>> unfair of us to make features available to users in this state without
>> providing a clear warning that bad or unexpected things are likely to
>> happen if they use it.
>>> 
>>> Obviously, this isn’t great news for users that have already adopted
>> MVs, and I don’t have a great answer for that. I think that’s sort of
>> a
>> sunk cost at this point. If they have any MV related problems, they’ll
> have
>> them whether they’re marked experimental or not. I would expect this
>> to
>> reduce the number of users adopting MVs in the future though, and if
 they
>> do, it would be opt-in.
>>> 
>>> Once MVs reach a point where they’re usable in production, we can

Re: Proposal to retroactively mark materialized views experimental

2017-10-01 Thread Blake Eggleston
I'm not sure the main issue in the case of MVs is testing. In this case it 
seems to be that there are some design issues, and/or the design only works 
in some overly restrictive use cases. That MVs were committed knowing about 
these issues seems to be the real problem. So in the case of MVs, sure, I don't 
think they should have ever made it to an experimental stage.

Thinking of how an experimental flag fits in with the project going forward 
though, I disagree that we should avoid adding experimental features. On the 
contrary, I think leaning towards classifying new features as experimental 
would be better for users, especially larger features and changes.

Even with well spec'd, well tested, and well designed features, there will 
always be edge cases that you didn't think of, or you'll have made assumptions 
about the other parts of C* it relies on that aren't 100% correct. Small 
problems here can often affect correctness, or result in data loss. So, I think 
it makes sense to avoid marking them as ready for regular use until they've had 
time to bake in clusters where there are some expert operators that are 
sophisticated enough to understand the implications of running them, detect 
issues, and report bugs.

Regarding historical examples, in hindsight I think committing 8099, or at the 
very least, parts of it, behind an experimental flag would have been the right 
thing to do. It was a huge change that we're still finding issues with 2 years 
later.
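
As a concrete sketch of the opt-in from the original proposal: it could be a 
single cassandra.yaml guard. The setting name and default below are 
illustrative only; the real details would be hammered out in the JIRA.

```yaml
# Hypothetical cassandra.yaml entry - name and default are placeholders.
# When false, CREATE MATERIALIZED VIEW statements would be rejected, and
# nodes that already have MVs defined would log a warning at startup.
enable_materialized_views: false
```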

On October 1, 2017 at 6:08:50 AM, DuyHai Doan (doanduy...@gmail.com) wrote:

How should we transition one feature from the "experimental" state to the  
"production ready" state? On which criteria?  



On Sun, Oct 1, 2017 at 12:12 PM, Marcus Eriksson  wrote:  

> I was just thinking that we should try really hard to avoid adding  
> experimental features - they are experimental due to lack of testing right?  
> There should be a clear path to making the feature non-experimental (or get  
> it removed) and having that path discussed on dev@ might give more  
> visibility to it.  
>  
> I'm also struggling a bit to find good historic examples of "this would  
> have been better off as an experimental feature" - I used to think that it  
> would have been good to commit DTCS with some sort of experimental flag,  
> but that would not have made DTCS any better - it would have been better to  
> do more testing, realise that it does not work and then not commit it at  
> all of course.  
>  
> Does anyone have good examples of features where it would have made sense  
> to commit them behind an experimental flag? SASI might be a good example,  
> but for MVs - if we knew how painful they would be, they really would not  
> have gotten committed at all, right?  
>  
> /Marcus  
>  
> On Sat, Sep 30, 2017 at 7:42 AM, Jeff Jirsa  wrote:  
>  
> > Reviewers should be able to suggest when experimental is warranted, and  
> > conversation on dev+jira to justify when it’s transitioned from  
> > experimental to stable?  
> >  
> > We should remove the flag as soon as we’re (collectively) confident in a  
> > feature’s behavior - at least correctness, if not performance.  
> >  
> >  
> > > On Sep 29, 2017, at 10:31 PM, Marcus Eriksson   
> wrote:  
> > >  
> > > +1 on marking MVs experimental, but should there be some point in the  
> > > future where we consider removing them from the code base unless they  
> > have  
> > > gotten significant improvement as well?  
> > >  
> > > We probably need to enforce some kind of process for adding new  
> > > experimental features in the future - perhaps a mail like this one to  
> > dev@  
> > > motivating why it should be experimental?  
> > >  
> > > /Marcus  
> > >  
> > > On Sat, Sep 30, 2017 at 1:15 AM, Vinay Chella  
> >   
> > > wrote:  
> > >  
> > >> We tried perf testing MVs internally here but did not see good results  
> > with  
> > >> it, hence paused its usage. +1 on tagging certain features which are  
> not  
> > >> PROD ready or not stable enough.  
> > >>  
> > >> Regards,  
> > >> Vinay Chella  
> > >>  
> > >>> On Fri, Sep 29, 2017 at 7:22 PM, Ben Bromhead   
> > wrote:  
> > >>>  
> > >>> I'm a fan of introducing experimental flags in general as well, +1  
> > >>>  
> > >>>  
> > >>>  
> >  On Fri, 29 Sep 2017 at 13:22 Jon Haddad  wrote:  
> >   
> >  I’m very much +1 on this, and to new features in general.  
> >   
> >  I think having a clear line in which we classify something as  
> > >> production  
> >  ready would be nice. It would be great if committers were using the  
> >  feature in prod and could vouch for it’s stability.  
> >   
> > > On Sep 29, 2017, at 1:09 PM, Blake Eggleston  >  
> >  wrote:  
> > >  
> > > Hi dev@,  
> > >  
> > > I’d like to propose that we retroactively classify materialized  
> views  
> > >>> as  
> >  an experimental feature, disable them by default, and require users  
> to  
> >  ena

Re: Proposal to retroactively mark materialized views experimental

2017-10-01 Thread DuyHai Doan
So basically we're saying that even with a lot of tests, you're never sure
you've covered all the possible edge cases, and the real stamp of "production
readiness" only comes when the "experimental" features have been deployed in
various clusters with various scenarios/use-cases - just re-phrasing Blake
here. Totally +1 on the idea.

Now I can foresee a problem with the "experimental" flag: nobody
(in the community) will use it or even dare to play with it, and thus the
"experimental" features never get a chance to be tested, and we break
the bug-report/bug-fix iteration ...

How many times have I seen users on the ML asking which version of C* is
the most fit for production, and the answer was always at least 1 major
version behind the current released major (2.1 was recommended when 3.x was
released, and so on ...)?

The fundamental issue here is that a lot of folks in the community do not
want to take any risk and take a conservative approach for production,
which is fine and perfectly understandable. But it means that the implicit
contract of OSS software, e.g. "you get the software for free, in exchange
you give feedback and bug reports to improve it", is completely
broken.

Let's take the example of MV. MV was shipped with 3.0 --> considered not
stable --> nobody/few people uses MV --> few bug reports --> bugs didn't
have chance to get fixed --> the problem lasts until now

About SASI, how many people really played with it thoroughly, apart from some
toy examples? Same causes, same consequences. And we can't even blame its
design, because fundamentally the architecture is pretty solid; it's just a
question of usage and feedback.

I suspect that this broken community QA/feedback loop also partially
explains the failure of the tick/tock releases, but that's only my own
interpretation here.

So if we don't figure out how to restore the strong "new feature/community
bug report" feedback loop, we're going to face the same issues and the
same debate again in the future.


On Sun, Oct 1, 2017 at 5:30 PM, Blake Eggleston 
wrote:

> I'm not sure the main issue in the case of MVs is testing. In this case it
> seems to be that there are some design issues, and/or the design only works
> in some overly restrictive use cases. That MVs were committed knowing about
> these issues seems to be the real problem. So in the case of MVs, sure, I
> don't think they should have ever made it to an experimental stage.
>
> Thinking of how an experimental flag fits in with the project going
> forward though, I disagree that we should avoid adding experimental
> features. On the contrary, I think leaning towards classifying new features
> as experimental would be better for users, especially larger features and
> changes.
>
> Even with well spec'd, well tested, and well designed features, there will
> always be edge cases that you didn't think of, or you'll have made
> assumptions about the other parts of C* it relies on that aren't 100%
> correct. Small problems here can often affect correctness, or result in
> data loss. So, I think it makes sense to avoid marking them as ready for
> regular use until they've had time to bake in clusters where there are some
> expert operators that are sophisticated enough to understand the
> implications of running them, detect issues, and report bugs.
>
> Regarding historical examples, in hindsight I think committing 8099, or at
> the very least, parts of it, behind an experimental flag would have been
> the right thing to do. It was a huge change that we're still finding issues
> with 2 years later.
>
> On October 1, 2017 at 6:08:50 AM, DuyHai Doan (doanduy...@gmail.com)
> wrote:
>
> How should we transition one feature from the "experimental" state to the
> "production ready" state? On which criteria?
>
>
>
> On Sun, Oct 1, 2017 at 12:12 PM, Marcus Eriksson 
> wrote:
>
> > I was just thinking that we should try really hard to avoid adding
> > experimental features - they are experimental due to lack of testing
> right?
> > There should be a clear path to making the feature non-experimental (or
> get
> > it removed) and having that path discussed on dev@ might give more
> > visibility to it.
> >
> > I'm also struggling a bit to find good historic examples of "this would
> > have been better off as an experimental feature" - I used to think that
> it
> > would have been good to commit DTCS with some sort of experimental flag,
> > but that would not have made DTCS any better - it would have been better
> to
> > do more testing, realise that it does not work and then not commit it at
> > all of course.
> >
> > Does anyone have good examples of features where it would have made sense
> > to commit them behind an experimental flag? SASI might be a good example,
> > but for MVs - if we knew how painful they would be, they really would not
> > have gotten committed at all, right?
> >
> > /Marcus
> >
> > On Sat, Sep 30, 2017 at 7:42 AM, Jeff Jirsa  wrote:
> >
> > > Reviewers should be able to sugge

Re: Proposal to retroactively mark materialized views experimental

2017-10-01 Thread Blake Eggleston
I think you're presenting a false dichotomy here. Yes, there are people who are 
not interested in taking risks with C* and are still running 1.2, and there are 
probably a few people who would put trunk in prod if we packaged it up for 
them, but there's a whole spectrum of users in between. Operator competence / 
sophistication has the same sort of spectrum.

I'd expect the amount of feedback on experimental features to be a function 
of the quality of the design / implementation and the amount of user interest. 
If you're not getting feedback on an experimental feature, it's probably poorly 
implemented, or no one's interested in it.

I don't think labelling features is going to kill the user <-> developer 
feedback loop. It will probably slow down the pace of feature development a 
bit, but it's been slowing down anyway, and that's a good thing imo.

On October 1, 2017 at 9:14:45 AM, DuyHai Doan (doanduy...@gmail.com) wrote:

So basically we're saying that even with a lot of tests, you're never sure  
you've covered all the possible edge cases, and the real stamp of "production  
readiness" only comes when the "experimental" features have been deployed in  
various clusters with various scenarios/use-cases - just re-phrasing Blake  
here. Totally +1 on the idea.  

Now I can foresee a problem with the "experimental" flag: nobody  
(in the community) will use it or even dare to play with it, and thus the  
"experimental" features never get a chance to be tested, and we break  
the bug-report/bug-fix iteration ...  

How many times have I seen users on the ML asking which version of C* is  
the most fit for production, and the answer was always at least 1 major  
version behind the current released major (2.1 was recommended when 3.x was  
released, and so on ...)?  

The fundamental issue here is that a lot of folks in the community do not  
want to take any risk and take a conservative approach for production,  
which is fine and perfectly understandable. But it means that the implicit  
contract of OSS software, e.g. "you get the software for free, in exchange  
you give feedback and bug reports to improve it", is completely  
broken.  

Let's take the example of MV. MV was shipped with 3.0 --> considered not  
stable --> nobody/few people uses MV --> few bug reports --> bugs didn't  
have chance to get fixed --> the problem lasts until now  

About SASI, how many people really played with it thoroughly, apart from some  
toy examples? Same causes, same consequences. And we can't even blame its  
design, because fundamentally the architecture is pretty solid; it's just a  
question of usage and feedback.  

I suspect that this broken community QA/feedback loop also partially  
explains the failure of the tick/tock releases, but that's only my own  
interpretation here.  

So if we don't figure out how to restore the strong "new feature/community  
bug report" feedback loop, we're going to face the same issues and the  
same debate again in the future  


On Sun, Oct 1, 2017 at 5:30 PM, Blake Eggleston   
wrote:  

> I'm not sure the main issue in the case of MVs is testing. In this case it  
> seems to be that there are some design issues, and/or the design only works  
> in some overly restrictive use cases. That MVs were committed knowing about  
> these issues seems to be the real problem. So in the case of MVs, sure, I  
> don't think they should have ever made it to an experimental stage.  
>  
> Thinking of how an experimental flag fits in with the project going  
> forward though, I disagree that we should avoid adding experimental  
> features. On the contrary, I think leaning towards classifying new features  
> as experimental would be better for users. Especially larger features and  
> changes.  
>  
> Even with well spec'd, well tested, and well designed features, there will  
> always be edge cases that you didn't think of, or you'll have made  
> assumptions about the other parts of C* it relies on that aren't 100%  
> correct. Small problems here can often affect correctness, or result in  
> data loss. So, I think it makes sense to avoid marking them as ready for  
> regular use until they've had time to bake in clusters where there are some  
> expert operators that are sophisticated enough to understand the  
> implications of running them, detect issues, and report bugs.  
>  
> Regarding historical examples, in hindsight I think committing 8099, or at  
> the very least, parts of it, behind an experimental flag would have been  
> the right thing to do. It was a huge change that we're still finding issues  
> with 2 years later.  
>  
> On October 1, 2017 at 6:08:50 AM, DuyHai Doan (doanduy...@gmail.com)  
> wrote:  
>  
> How should we transition one feature from the "experimental" state to the  
> "production ready" state? On which criteria?  
>  
>  
>  
> On Sun, Oct 1, 2017 at 12:12 PM, Marcus Eriksson   
> wrote:  
>  
> > I was just thinking that we should try really hard to avoi

Spawning nodes for testing

2017-10-01 Thread me
Hi there,
I am playing with the Gossip classes and looking for a way to create nodes and 
join them to a cluster while debugging in IDEA. Is there a way to make this 
process simple? Or should I do something like Docker containers?

Thanks a lot!
Cheers

Salih



Re: Spawning nodes for testing

2017-10-01 Thread Jeff Jirsa
Check out CCM - it’s how the project writes distributed tests 

https://github.com/pcmanus/ccm
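
A minimal session might look like this (cluster name and Cassandra version 
are just illustrative; assumes ccm and a JDK are installed):

```shell
# Create a local 3-node cluster on Cassandra 3.11.0 and start it
ccm create gossip-test -v 3.11.0 -n 3 -s

# Check that node1..node3 came up
ccm status

# View a node's log while poking at gossip
ccm node1 showlog

# Tear the cluster down when done
ccm remove
```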

-- 
Jeff Jirsa


> On Oct 1, 2017, at 10:25 AM, m...@salih.xyz wrote:
> 
> Hi there,
> I am playing with the Gossip classes and looking for a way to create nodes and 
> join them to a cluster while debugging in IDEA. Is there a way to make this 
> process simple? Or should I do something like Docker containers?
> 
> Thanks a lot!
> Cheers
> 
> Salih
> 


Cassandra 3.11.1 (snapshot build) - io.netty.util.Recycler$Stack memory leak

2017-10-01 Thread Steinmaurer, Thomas
Hello,

I also posted this to the users list, but possibly it is better targeted at this 
list, since 3.11.1 is close to being released?

We were facing a memory leak with 3.11.0 
(https://issues.apache.org/jira/browse/CASSANDRA-13754) and thus upgraded our 
loadtest environment to a snapshot build of 3.11.1. Having run it for > 48 
hrs now, we still see a steady increase in heap utilization.

Eclipse Memory Analyzer shows 147 instances of io.netty.util.Recycler$Stack 
with a total retained heap usage of ~1.8G, growing over time.

Should this be fixed already by CASSANDRA-13754 or is this something new?
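
In case it helps anyone reproduce: an illustrative way to watch live-object 
counts without taking full heap dumps (the PID is a placeholder; jmap ships 
with the JDK):

```shell
# Histogram of live objects, filtered to the suspected leaking class;
# run twice a few hours apart and compare the instance counts.
jmap -histo:live 12345 | grep 'io.netty.util.Recycler$Stack'
```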

Thanks,
Thomas


The contents of this e-mail are intended for the named addressee only. It 
contains information that may be confidential. Unless you are the named 
addressee or an authorized designee, you may not copy or use it, or disclose it 
to anyone else. If you received it in error please notify us immediately and 
then destroy it. Dynatrace Austria GmbH (registration number FN 91482h) is a 
company registered in Linz whose registered office is at 4040 Linz, Austria, 
Freistädterstraße 313


Re: Spawning nodes for testing

2017-10-01 Thread me
Thanks Jeff, I will check this out.

On 1 Oct 2017 20:31 +0300, Jeff Jirsa , wrote:
> Check out CCM - it’s how the project writes distributed tests
>
> https://github.com/pcmanus/ccm
>
> --
> Jeff Jirsa
>
>
> > On Oct 1, 2017, at 10:25 AM, m...@salih.xyz wrote:
> >
> > Hi there,
> > I am playing with the Gossip classes and looking for a way to create nodes 
> > and join them to a cluster while debugging in IDEA. Is there any way to make 
> > this process simple? Or should I do something like Docker containers?
> >
> > Thanks a lot!
> > Cheers
> >
> > Salih
> >


Re: Proposal to retroactively mark materialized views experimental

2017-10-01 Thread Dave Brosius

triggers


On 10/01/2017 11:25 AM, Jeff Jirsa wrote:

Historical examples are anything that you wouldn’t bet your job on for the 
first release:

UDF/UDA in 2.2
Incremental repair - would have yanked the flag following 9143
SASI - probably still experimental
Counters - all sorts of correctness issues originally, no longer true since the 
rewrite in 2.1
Vnodes - or at least shuffle
CDC - is the API going to change or is it good as-is?
CQL - we’re on v3, what’s that say about v1?

Basically anything where we can’t definitively say “this feature is going to 
work for you, build your product on it” because companies around the world are 
trying to make that determination on their own, and they don’t have the same 
insight that the active committers have.

The transition out we could define as a fixed number of releases or a dev@ 
vote. I don’t think you’ll find one rule that applies to all experimental 
features, so being flexible is probably the best bet there.
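One possible shape for such a gate is a per-feature switch in cassandra.yaml, defaulting to off. The option names below are illustrative only; nothing like this exists in the config at the time of this thread:

```yaml
# cassandra.yaml (hypothetical): experimental features ship disabled,
# and the default flips to true only once a dev@ vote declares the
# feature stable (or the feature is removed instead).
enable_materialized_views: false
enable_sasi_indexes: false
```

Flipping a default in a release would then be the explicit, visible signal that a feature has graduated out of experimental status.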





-
To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
For additional commands, e-mail: dev-h...@cassandra.apache.org



Re: Proposal to retroactively mark materialized views experimental

2017-10-01 Thread Josh McKenzie
>
> I think committing 8099, or at the very least, parts of it, behind an
> experimental flag would have been the right thing to do.

With a major refactor like that, it's a staggering amount of extra work to
keep a parallel rewrite of core storage-engine components accessible
alongside the existing engine behind an experimental flag in the same branch.
I think the complexity of having two such channels in parallel in the
code-base would be an altogether different kind of burden, along with making
the work take considerably longer. The argument for modularizing a change
like that, however, is something I can get behind as a matter of general
principle. As we discussed at NGCC, the amount of static state in the C*
code-base makes this an aspirational goal rather than a reality all too
often, unfortunately.

Not looking to get into the discussion of the appropriateness of 8099 and
other major refactors like it (nio MessagingService for instance) - but
there's a difference between building out new features and shielding the
code-base and users from their complexity and reliability and refactoring
core components of the code-base to keep it relevant.

On Sun, Oct 1, 2017 at 5:01 PM, Dave Brosius  wrote:

> triggers
>
>
> On 10/01/2017 11:25 AM, Jeff Jirsa wrote:
>
>> Historical examples are anything that you wouldn’t bet your job on for
>> the first release:
>>
>> UDF/UDA in 2.2
>> Incremental repair - would have yanked the flag following 9143
>> SASI - probably still experimental
>> Counters - all sorts of correctness issues originally, no longer true
>> since the rewrite in 2.1
>> Vnodes - or at least shuffle
>> CDC - is the API going to change or is it good as-is?
>> CQL - we’re on v3, what’s that say about v1?
>>
>> Basically anything where we can’t definitively say “this feature is going
>> to work for you, build your product on it” because companies around the
>> world are trying to make that determination on their own, and they don’t
>> have the same insight that the active committers have.
>>
>> The transition out we could define as a fixed number of releases or a dev@
>> vote, I don’t think you’ll find something that applies to all experimental
>> features, so being flexible is probably the best bet there
>>
>>
>>
>
>
>