Re: Downgradability

2023-02-22 Thread Henrik Ingo
... ok apparently shift+enter sends messages now?

I was just saying that if at least the file format AND system tables - anything
written to disk - can be protected with a switch, then it allows for a quick
downgrade by shutting down the entire cluster and restarting with the
downgraded binary. It's a start.

To be able to do that live in a distributed system requires considering much
more: gossip, streaming, drivers, and ultimately all features, because we
don't want an application developer to use a shiny new thing that a) may
not be available on all nodes, or b) may disappear if the cluster has to be
downgraded later.

henrik

On Thu, Feb 23, 2023 at 1:14 AM Henrik Ingo 
wrote:

> Just this once I'm going to be really brief :-)
>
> Just wanted to share for reference how Mongodb implemented
> downgradeability around their 4.4 version:
> https://www.mongodb.com/docs/manual/release-notes/6.0-downgrade-sharded-cluster/
>
> Jeff you're right. Ultimately this is about more than file formats.
> However, ideally if at least the
>
> On Mon, Feb 20, 2023 at 10:02 PM Jeff Jirsa  wrote:
>
>> I'm not even convinced even 8110 addresses this - just writing sstables
>> in old versions won't help if we ever add things like new types or new
>> types of collections without other control abilities. Claude's other email
>> in another thread a few hours ago talks about some of these surprises -
>> "Specifically during the 3.1 -> 4.0 changes a column broadcast_port was
>> added to system/local.  This means that 3.1 system can not read the table
>> as it has no definition for it.  I tried marking the column for deletion in
>> the metadata and in the serialization header.  The later got past the
>> column not found problem, but I suspect that it just means that data
>> columns after broadcast_port shifted and so incorrectly read." - this is a
>> harder problem to solve than just versioning sstables and network
>> protocols.
>>
>> Stepping back a bit, we have downgrade ability listed as a goal, but it's
>> not (as far as I can tell) universally enforced, nor is it clear at which
>> point we will be able to concretely say "this release can be downgraded to
>> X".   Until we actually define and agree that this is a real goal with a
>> concrete version where downgrade-ability becomes real, it feels like things
>> are somewhat arbitrarily enforced, which is probably very frustrating for
>> people trying to commit work/tickets.
>>
>> - Jeff
>>
>>
>>
>> On Mon, Feb 20, 2023 at 11:48 AM Dinesh Joshi  wrote:
>>
>>> I’m a big fan of maintaining backward compatibility. Downgradability
>>> implies that we could potentially roll back an upgrade at any time. While I
>>> don’t think we need to retain the ability to downgrade in perpetuity it
>>> would be a good objective to maintain strict backward compatibility and
>>> therefore downgradability until a certain point. This would imply
>>> versioning metadata and extending it in such a way that prior version(s)
>>> could continue functioning. This can certainly be expensive to implement
>>> and might bloat on-disk storage. However, we could always offer an option
>>> for the operator to optimize the on-disk structures for the current version
>>> then we can rewrite them in the latest version. This optimizes the storage
>>> and opens up new functionality. This means new features that can work with
>>> old on-disk structures will be available while others that strictly require
>>> new versions of the data structures will be unavailable until the operator
>>> migrates to the new version. This migration IMO should be irreversible.
>>> Beyond this point the operator will lose the ability to downgrade which is
>>> ok.
>>>
>>> Dinesh
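The migrate-then-finalize flow Dinesh describes above can be sketched roughly as follows. This is an illustrative state machine only; the class, enum, and method names are hypothetical and not Cassandra API.

```java
// Illustrative sketch: the operator keeps downgradability until an explicit,
// one-way migration to the new on-disk structures. Names are hypothetical.
class DiskFormatState
{
    enum State { BACKWARD_COMPATIBLE, FINALIZED }

    private State state = State.BACKWARD_COMPATIBLE;

    /** Downgrade is possible only before the one-way migration. */
    boolean canDowngrade()
    {
        return state == State.BACKWARD_COMPATIBLE;
    }

    /** Irreversible: rewrite structures in the latest version, unlocking new features. */
    void finalizeMigration()
    {
        state = State.FINALIZED;
    }
}
```

The key property is that `finalizeMigration()` has no inverse, matching the "this migration IMO should be irreversible" point above.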
>>>
>>> On Feb 20, 2023, at 10:40 AM, Jake Luciani  wrote:
>>>
>>> 
>>> There has been progress on
>>>
>>> https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-8928
>>>
>>> Which is similar to what datastax does for DSE. Would this be an
>>> acceptable solution?
>>>
>>> Jake
>>>
>>> On Mon, Feb 20, 2023 at 11:17 AM guo Maxwell 
>>> wrote:
>>>
 It seems “An alternative solution is to implement/complete
 CASSANDRA-8110 ”
 can give us more options if it is finished

 Branimir Lambov wrote on Mon, Feb 20, 2023, at 11:03 PM:

> Hi everyone,
>
> There has been a discussion lately about changes to the sstable format
> in the context of being able to abort a cluster upgrade, and the fact that
> changes to sstables can prevent downgraded nodes from reading any data
> written during their temporary operation with the new version.
>
> Most of the discussion is in CASSANDRA-18134
> , and is
> spreading into CASSANDRA-14277
>  and
> CASSANDRA-17698
> , none of
> which is a good place to 

Re: Downgradability

2023-02-22 Thread Henrik Ingo
Just this once I'm going to be really brief :-)

Just wanted to share for reference how Mongodb implemented downgradeability
around their 4.4 version:
https://www.mongodb.com/docs/manual/release-notes/6.0-downgrade-sharded-cluster/

Jeff you're right. Ultimately this is about more than file formats.
However, ideally if at least the

On Mon, Feb 20, 2023 at 10:02 PM Jeff Jirsa  wrote:

> I'm not even convinced even 8110 addresses this - just writing sstables in
> old versions won't help if we ever add things like new types or new types
> of collections without other control abilities. Claude's other email in
> another thread a few hours ago talks about some of these surprises -
> "Specifically during the 3.1 -> 4.0 changes a column broadcast_port was
> added to system/local.  This means that 3.1 system can not read the table
> as it has no definition for it.  I tried marking the column for deletion in
> the metadata and in the serialization header.  The later got past the
> column not found problem, but I suspect that it just means that data
> columns after broadcast_port shifted and so incorrectly read." - this is a
> harder problem to solve than just versioning sstables and network
> protocols.
>
> Stepping back a bit, we have downgrade ability listed as a goal, but it's
> not (as far as I can tell) universally enforced, nor is it clear at which
> point we will be able to concretely say "this release can be downgraded to
> X".   Until we actually define and agree that this is a real goal with a
> concrete version where downgrade-ability becomes real, it feels like things
> are somewhat arbitrarily enforced, which is probably very frustrating for
> people trying to commit work/tickets.
>
> - Jeff
>
>
>
> On Mon, Feb 20, 2023 at 11:48 AM Dinesh Joshi  wrote:
>
>> I’m a big fan of maintaining backward compatibility. Downgradability
>> implies that we could potentially roll back an upgrade at any time. While I
>> don’t think we need to retain the ability to downgrade in perpetuity it
>> would be a good objective to maintain strict backward compatibility and
>> therefore downgradability until a certain point. This would imply
>> versioning metadata and extending it in such a way that prior version(s)
>> could continue functioning. This can certainly be expensive to implement
>> and might bloat on-disk storage. However, we could always offer an option
>> for the operator to optimize the on-disk structures for the current version
>> then we can rewrite them in the latest version. This optimizes the storage
>> and opens up new functionality. This means new features that can work with
>> old on-disk structures will be available while others that strictly require
>> new versions of the data structures will be unavailable until the operator
>> migrates to the new version. This migration IMO should be irreversible.
>> Beyond this point the operator will lose the ability to downgrade which is
>> ok.
>>
>> Dinesh
>>
>> On Feb 20, 2023, at 10:40 AM, Jake Luciani  wrote:
>>
>> 
>> There has been progress on
>> https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-8928
>>
>> Which is similar to what datastax does for DSE. Would this be an
>> acceptable solution?
>>
>> Jake
>>
>> On Mon, Feb 20, 2023 at 11:17 AM guo Maxwell 
>> wrote:
>>
>>> It seems “An alternative solution is to implement/complete
>>> CASSANDRA-8110 ”
>>> can give us more options if it is finished
>>>
>>> Branimir Lambov wrote on Mon, Feb 20, 2023, at 11:03 PM:
>>>
 Hi everyone,

 There has been a discussion lately about changes to the sstable format
 in the context of being able to abort a cluster upgrade, and the fact that
 changes to sstables can prevent downgraded nodes from reading any data
 written during their temporary operation with the new version.

 Most of the discussion is in CASSANDRA-18134
 , and is
 spreading into CASSANDRA-14277
  and
 CASSANDRA-17698 ,
 none of which is a good place to discuss the topic seriously.

 Downgradability is a worthy goal and is listed in the current roadmap.
 I would like to open a discussion here on how it would be achieved.

 My understanding of what has been suggested so far translates to:
 - avoid changes to sstable formats;
 - if there are changes, implement them in a way that is
 backwards-compatible, e.g. by duplicating data, so that a new version is
 presented in a component or portion of a component that legacy nodes will
 not try to read;
 - if the latter is not feasible, make sure the changes are only applied
 if a feature flag has been enabled.
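The third option above - applying format changes only behind a feature flag - can be sketched as below. The class, enum, and flag names are illustrative assumptions, not actual Cassandra code.

```java
// Minimal sketch of feature-flag gating: a cluster that never enables the
// flag keeps writing legacy-format sstables, so downgrade stays possible.
class SSTableWriteVersionSelector
{
    enum Version { LEGACY, NEW }

    private final boolean newFormatFlagEnabled;

    SSTableWriteVersionSelector(boolean newFormatFlagEnabled)
    {
        this.newFormatFlagEnabled = newFormatFlagEnabled;
    }

    /** All newly written sstables stay on the legacy version until the operator opts in. */
    Version versionForNewSSTables()
    {
        return newFormatFlagEnabled ? Version.NEW : Version.LEGACY;
    }
}
```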

 To me this approach introduces several risks:
 - it bloats file and parsing complexity;
 - it discourages improvement 

Re: Downgradability

2023-02-22 Thread Benedict
Those tickets mostly do not need to break compatibility, and it is pretty easy 
for them to avoid doing so without any additional facilities.

Only the TTL ticket has an excuse, as it debatably needs to support a higher 
version under certain non-default config settings. However there are no 
serialisation changes, so even here this is only a matter of selecting the 
version we write to the descriptor and no other changes at all.

Can somebody explain to me what is so burdensome, that we seem to be spending 
longer debating it than it would take to implement the necessary changes?

> On 22 Feb 2023, at 21:23, Jeremiah D Jordan  wrote:
> 
> We have multiple tickets about to merge that introduce new on disk format 
> changes.  I see no reason to block those indefinitely while we figure out how 
> to do the on disk format downgrade stuff.
> 
> -Jeremiah

Re: Downgradability

2023-02-22 Thread Jeremiah D Jordan
We have multiple tickets about to merge that introduce new on disk format 
changes.  I see no reason to block those indefinitely while we figure out how 
to do the on disk format downgrade stuff.

-Jeremiah

> On Feb 22, 2023, at 3:12 PM, Benedict  wrote:
> 
> Ok I will be honest, I was fairly sure we hadn’t yet broken downgrade - but I 
> was wrong. CASSANDRA-18061 introduced a new column to a system table, which 
> is a breaking change. 
> 
> But that’s it, as far as I can tell. I have run a downgrade test successfully 
> after reverting that ticket, using the one line patch below. This makes every 
> in-jvm upgrade test also a downgrade test. I’m sure somebody more familiar 
> with dtests can readily do the same there.
> 
> While we look to fix 18061 and enable downgrade tests (and get a clean run of 
> the full suite), can we all agree not to introduce new breaking changes?
> 
> 
> index e41444fe52..085b25f8af 100644
> --- a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
> +++ b/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
> @@ -104,6 +104,7 @@ public class UpgradeTestBase extends DistributedTestBase
>                           .addEdge(v40, v41)
>                           .addEdge(v40, v42)
>                           .addEdge(v41, v42)
> +                         .addEdge(v42, v41)
>                           .build();
> 
> 
>> On 22 Feb 2023, at 15:08, Jeff Jirsa  wrote:
>> 
>> When people are serious about this requirement, they’ll build the downgrade 
>> equivalents of the upgrade tests and run them automatically, often, so 
>> people understand what the real gap is and when something new makes it break 
>> 
>> Until those tests exist, I think collectively we should all stop pretending 
>> like this is dogma. Best effort is best effort. 
>> 
>> 
>> 
>>> On Feb 22, 2023, at 6:57 AM, Branimir Lambov  
>>> wrote:
>>> 
>>> 
>>> > 1. Major SSTable changes should begin with forward-compatibility in a 
>>> > prior release.
>>> 
>>> This requires "feature" changes, i.e. new non-trivial code for previous 
>>> patch releases. It also entails porting over any further format 
>>> modification.
>>> 
>>> Instead of this, in combination with your second point, why not implement 
>>> backwards write compatibility? The opt-in is then clearer to define (i.e. 
>>> upgrades start with e.g. a "4.1-compatible" settings set that includes file 
>>> format compatibility and disabling of new features, new nodes start with 
>>> "current" settings set). When the upgrade completes and the user is happy 
>>> with the result, the settings set can be replaced.
>>> 
>>> Doesn't this achieve what you want (and we all agree is a worthy goal) with 
>>> much less effort for everyone? Supporting backwards-compatible writing is 
>>> trivial, and we even have a proof-of-concept in the stats metadata 
>>> serializer. It also simplifies by a serious margin the amount of work and 
>>> thinking one has to do when a format improvement is implemented -- e.g. the 
>>> TTL patch can just address this in exactly the way the problem was 
>>> addressed in earlier versions of the format, by capping to 2038, without 
>>> any need to specify, obey or test any configuration flags.
>>> 
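The "capping to 2038" approach mentioned above can be sketched as follows. The constant and method names are illustrative, not the actual TTL patch; the only assumption taken from the discussion is that legacy formats store local expiration as a signed 32-bit count of seconds since the epoch.

```java
// Sketch: when writing in a legacy-compatible format, cap expiration times
// so they still fit the old 32-bit on-disk encoding (max: 2038-01-19).
class LegacyTtlCap
{
    // 2^31 - 1 seconds since the epoch, the largest instant a signed
    // 32-bit expiration field can represent.
    static final long MAX_LEGACY_EXPIRATION_SECONDS = Integer.MAX_VALUE;

    /** Cap an expiration time so it remains representable in the legacy encoding. */
    static long capForLegacyWrite(long expirationSeconds)
    {
        return Math.min(expirationSeconds, MAX_LEGACY_EXPIRATION_SECONDS);
    }
}
```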
>>> >> It’s a commitment, and it requires every contributor to consider it as 
>>> >> part of work they produce.
>>> 
>>> > But it shouldn't be a burden. Ability to downgrade is a testable problem, 
>>> > so I see this work as a function of the suite of tests the project is 
>>> > willing to agree on supporting.
>>> 
>>> I fully agree with this sentiment, and I feel that the current "try to not 
>>> introduce breaking changes" approach is adding the burden, but not the 
>>> benefits -- because the latter cannot be proven, and are most likely 
>>> already broken.
>>> 
>>> Regards,
>>> Branimir
>>> 
>>> On Wed, Feb 22, 2023 at 1:01 AM Abe Ratnofsky  wrote:
 Some interesting existing work on this subject is "Understanding and 
 Detecting Software Upgrade Failures in Distributed Systems" - 
 https://dl.acm.org/doi/10.1145/3477132.3483577, also summarized by Andrey 
 Satarin here: 
 https://asatarin.github.io/talks/2022-09-upgrade-failures-in-distributed-systems/
 
 They specifically tested Cassandra 

Re: Downgradability

2023-02-22 Thread Benedict
Ok I will be honest, I was fairly sure we hadn’t yet broken downgrade - but I 
was wrong. CASSANDRA-18061 introduced a new column to a system table, which 
is a breaking change.

But that’s it, as far as I can tell. I have run a downgrade test successfully 
after reverting that ticket, using the one line patch below. This makes every 
in-jvm upgrade test also a downgrade test. I’m sure somebody more familiar 
with dtests can readily do the same there.

While we look to fix 18061 and enable downgrade tests (and get a clean run of 
the full suite), can we all agree not to introduce new breaking changes?

index e41444fe52..085b25f8af 100644
--- a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
+++ b/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
@@ -104,6 +104,7 @@ public class UpgradeTestBase extends DistributedTestBase
                          .addEdge(v40, v41)
                          .addEdge(v40, v42)
                          .addEdge(v41, v42)
+                         .addEdge(v42, v41)
                          .build();

> On 22 Feb 2023, at 15:08, Jeff Jirsa  wrote:
> 
> When people are serious about this requirement, they’ll build the downgrade 
> equivalents of the upgrade tests and run them automatically, often, so 
> people understand what the real gap is and when something new makes it break.
> 
> Until those tests exist, I think collectively we should all stop pretending 
> like this is dogma. Best effort is best effort.
> 
>> On Feb 22, 2023, at 6:57 AM, Branimir Lambov  wrote:
>> 
>> > 1. Major SSTable changes should begin with forward-compatibility in a 
>> > prior release.
>> 
>> This requires "feature" changes, i.e. new non-trivial code for previous 
>> patch releases. It also entails porting over any further format 
>> modification.
>> 
>> Instead of this, in combination with your second point, why not implement 
>> backwards write compatibility? The opt-in is then clearer to define (i.e. 
>> upgrades start with e.g. a "4.1-compatible" settings set that includes file 
>> format compatibility and disabling of new features, new nodes start with 
>> "current" settings set). When the upgrade completes and the user is happy 
>> with the result, the settings set can be replaced.
>> 
>> Doesn't this achieve what you want (and we all agree is a worthy goal) with 
>> much less effort for everyone? Supporting backwards-compatible writing is 
>> trivial, and we even have a proof-of-concept in the stats metadata 
>> serializer. It also simplifies by a serious margin the amount of work and 
>> thinking one has to do when a format improvement is implemented -- e.g. the 
>> TTL patch can just address this in exactly the way the problem was 
>> addressed in earlier versions of the format, by capping to 2038, without 
>> any need to specify, obey or test any configuration flags.
>> 
>> >> It’s a commitment, and it requires every contributor to consider it as 
>> >> part of work they produce.
>> 
>> > But it shouldn't be a burden. Ability to downgrade is a testable problem, 
>> > so I see this work as a function of the suite of tests the project is 
>> > willing to agree on supporting.
>> 
>> I fully agree with this sentiment, and I feel that the current "try to not 
>> introduce breaking changes" approach is adding the burden, but not the 
>> benefits -- because the latter cannot be proven, and are most likely 
>> already broken.
>> 
>> Regards,
>> Branimir
>> 
>> On Wed, Feb 22, 2023 at 1:01 AM Abe Ratnofsky  wrote:
>> 
>>> Some interesting existing work on this subject is "Understanding and 
>>> Detecting Software Upgrade Failures in Distributed Systems" - 
>>> https://dl.acm.org/doi/10.1145/3477132.3483577, also summarized by Andrey 
>>> Satarin here: 
>>> https://asatarin.github.io/talks/2022-09-upgrade-failures-in-distributed-systems/
>>> 
>>> They specifically tested Cassandra upgrades, and have a solid list of 
>>> defects that they found. They also describe their testing mechanism 
>>> DUPTester, which includes a component that confirms that the leftover 
>>> state from one version can start up on the next version. There is a wider 
>>> scope of upgrade defects highlighted in the paper, beyond SSTable version 
>>> support.
>>> 
>>> I believe the project would benefit from expanding our test suite 
>>> similarly, by parametrizing more tests on upgrade version pairs.
>>> 
>>> Also, per Benedict's comment:
>>> 
>>> > It’s a commitment, and it requires every contributor to consider it as 
>>> > part of work they produce.
>>> 
>>> But it shouldn't be a burden. Ability to downgrade is a testable problem, 
>>> so I see this work as a function of the suite of tests the project is 
>>> willing to agree on supporting.
>>> 
>>> Specifically - I agree with Scott's proposal to emulate the HDFS 
>>> upgrade-then-finalize approach. I would also support automatic 
>>> finalization based on a time threshold or similar, to balance the 
>>> priorities of safe and straightforward upgrades. Users need to be aware of 
>>> the range of SSTable formats supported by a given 
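Parametrizing tests on upgrade version pairs, as suggested above, could look roughly like the helper below: the same pair generator covers both directions, so every upgrade pair gains a downgrade counterpart. The class and method names are hypothetical, not the actual in-jvm dtest API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: enumerate ordered (from, to) version pairs for upgrade tests,
// optionally including the reversed pairs as downgrade tests.
class VersionPairs
{
    /** Every ordered pair drawn from an ascending version list. */
    static List<String[]> pairs(List<String> ascendingVersions, boolean includeDowngrades)
    {
        List<String[]> result = new ArrayList<>();
        for (int i = 0; i < ascendingVersions.size(); i++)
        {
            for (int j = i + 1; j < ascendingVersions.size(); j++)
            {
                // upgrade direction: older -> newer
                result.add(new String[]{ ascendingVersions.get(i), ascendingVersions.get(j) });
                // downgrade direction: newer -> older
                if (includeDowngrades)
                    result.add(new String[]{ ascendingVersions.get(j), ascendingVersions.get(i) });
            }
        }
        return result;
    }
}
```

For three versions this yields three upgrade pairs, or six pairs once downgrades are included.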

Re: [DISCUSS] Allow UPDATE on settings virtual table to change running configuration

2023-02-22 Thread David Capwell
I guess back to the point of the thread, we need a way to know what configs are 
mutable for the settings virtual table, so need some way to denote that the 
config replica_filtering_protection.cached_rows_fail_threshold is mutable.  
Given the way that the yaml config works, we can’t rely on the presence of 
“final” or not, so need some way to mark a config is mutable for that table, 
does anyone want to offer feedback on what works best for them?

Out of all proposals given so far “volatile” is the least verbose but also not 
explicit (as this thread is showing there is debate on if this should be 
present), new annotations are a little more verbose but would be explicit (no 
surprises), and getter/setters in different classes (such as DD) is the most 
verbose and suffers from not being explicit, with ambiguity when mapping back to 
Config.

Given the above, annotations sounds like the best option, but do we really want 
our config to look as follows?

@Replaces(oldName = "native_transport_idle_timeout_in_ms",
          converter = Converters.MILLIS_DURATION_LONG, deprecated = true)
@Mutable
public DurationSpec.LongMillisecondsBound native_transport_idle_timeout =
    new DurationSpec.LongMillisecondsBound("0ms");
@Mutable
public DurationSpec.LongMillisecondsBound transaction_timeout =
    new DurationSpec.LongMillisecondsBound("30s");
@Mutable
public double phi_convict_threshold = 8.0;
public String partitioner; // assume immutable by default?
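For reference, the annotation approach sketched above could be backed by a runtime-retained marker that the settings virtual table reads reflectively. The @Mutable definition, the ConfigSketch fields, and the lookup method below are hypothetical illustrations, not existing Cassandra code.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Hypothetical marker: fields carrying it may be updated at runtime.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Mutable {}

class ConfigSketch
{
    @Mutable
    public double phi_convict_threshold = 8.0;

    public String partitioner; // no annotation: immutable by default
}

class SettingsTableSketch
{
    /** The settings table can decide per-field updatability from the marker. */
    static boolean isMutable(String fieldName)
    {
        try
        {
            Field field = ConfigSketch.class.getDeclaredField(fieldName);
            return field.isAnnotationPresent(Mutable.class);
        }
        catch (NoSuchFieldException e)
        {
            throw new IllegalArgumentException("unknown config field: " + fieldName, e);
        }
    }
}
```

This keeps mutability explicit at the declaration site, which is the "no surprises" property argued for above.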


> On Feb 22, 2023, at 6:20 AM, Benedict  wrote:
> 
> Could you describe the issues? Config that is globally exposed should ideally 
> be immutable with final members, in which case volatile is only necessary if 
> you’re using the config parameter in a tight loop that you need to witness a 
> new value - which shouldn’t apply to any of our config.
> 
> There are some weird niches, like updating long values on some (unsupported 
> by us) JVMs that may tear. Technically you also require it for visibility 
> with the JMM. But in practice it is mostly unnecessary. Often what seems to 
> be a volatile issue is really something else.
> 
>> On 22 Feb 2023, at 13:18, Benjamin Lerer  wrote:
>> 
>> I have seen issues with some updatable parameters which were missing the 
>> volatile keyword.
>> 
>> On Wed, Feb 22, 2023 at 11:36 AM, Aleksey Yeshchenko  wrote:
>> FWIW most of those volatile fields, if not in fact all of them, should NOT 
>> be volatile at all. Someone started the trend and most folks have been 
>> copycatting or doing the same for consistency with the rest of the codebase.
>> 
>> Please definitely don’t rely on that.
>> 
>>> On 21 Feb 2023, at 21:06, Maxim Muzafarov  wrote:
>>> 
>>> 1. Rely on the volatile keyword in front of fields in the Config class;
>>> 
>>> I would say this is the most confusing option for me because it
>>> doesn't give us all the guarantees we need, and also:
>>> - We have no explicit control over what exactly we expose to a user.
>>> When we modify the JMX API, we're implementing a new method for the
>>> MBean, which in turn makes this action an explicit exposure;
>>> - The volatile keyword is not the only way to achieve thread safety,
>>> and looks strange from a public API design standpoint;
>>> - A good example is the setEnableDropCompactStorage method, which
>>> changes the volatile field, but is only visible for testing purposes;
>> 
>> 



Re: [DISCUSSION] Cassandra's code style and source code analysis

2023-02-22 Thread Jacek Lewandowski
I suppose it can be easy for the existing feature branches if they have a
single commit. Don't we need to adjust each commit for multi-commit feature
branches?

On Wed, 22 Feb 2023 at 19:48, Maxim Muzafarov 
wrote:

> Hello everyone,
>
> I have created an issue CASSANDRA-18277 that may help us move forward
> with code style changes. It only affects the way we store the IntelliJ
> code style configuration and has no effect on any current (or any)
> releases, so it should be safe to merge. So, once the issue is
> resolved, every developer that checkouts a release branch will use the
> same code style stored in that branch. This in turn makes rebasing a
> big change like the import order [1] a really straightforward matter
> (by pressing Ctrl + Opt + O in their local branch to organize
> imports).
>
> See:
>
> Move the IntelliJ Idea code style and inspections configuration to the
> project's root .idea directory
> https://issues.apache.org/jira/browse/CASSANDRA-18277
>
>
>
> [1] https://issues.apache.org/jira/browse/CASSANDRA-17925
>
> On Wed, 25 Jan 2023 at 13:05, Miklosovic, Stefan
>  wrote:
> >
> > Thank you Maxim for doing this.
> >
> > It is nice to see this effort materialized in a PR.
> >
> > I would wait until bigger chunks of work are committed to trunk (like
> CEP-15) to not collide too much. I would say we can postpone doing this
> until the actual 5.0 release, last weeks before it so we would not clash
> with any work people would like to include in 5.0. This can go in anytime,
> basically.
> >
> > Are people on the same page?
> >
> > Regards
> >
> > 
> > From: Maxim Muzafarov 
> > Sent: Monday, January 23, 2023 19:46
> > To: dev@cassandra.apache.org
> > Subject: Re: [DISCUSSION] Cassandra's code style and source code analysis
> >
> > NetApp Security WARNING: This is an external email. Do not click links
> or open attachments unless you recognize the sender and know the content is
> safe.
> >
> >
> >
> >
> > Hello everyone,
> >
> > You can find the changes here:
> > https://issues.apache.org/jira/browse/CASSANDRA-17925
> >
> > While preparing the code style configuration for the Eclipse IDE, I
> > discovered that there was no easy way to have complex grouping options
> > for the set of packages. So we need to add extra blank lines between
> > each group of packages so that all the configurations for Eclipse,
> > NetBeans, IntelliJ IDEA and checkstyle are aligned. I should have
> > checked this earlier for sure, but I only did it for static imports
> > and some groups, my bad. The resultant configuration looks like this:
> >
> > java.*
> > [blank line]
> > javax.*
> > [blank line]
> > com.*
> > [blank line]
> > net.*
> > [blank line]
> > org.*
> > [blank line]
> > org.apache.cassandra.*
> > [blank line]
> > all other imports
> > [blank line]
> > static all other imports
> >
> > The pull request is here:
> > https://github.com/apache/cassandra/pull/2108
> >
> > The configuration-related changes are placed in a dedicated commit, so
> > it should be easy to make a review:
> >
> https://github.com/apache/cassandra/pull/2108/commits/84e292ddc9671a0be76ceb9304b2b9a051c2d52a
> >
> > 
> >
> > Another important thing to mention is that the total amount of changes
> > for organising imports is really big (more than 2000 files!), so we
> > need to decide the right time to merge this PR. Although rebasing or
> > merging changes to development branches should become much easier
> > ("Accept local" + "Organize imports"), we still need to pay extra
> > attention here to minimise the impact on major patches for the next
> > release.
> >
> > On Mon, 16 Jan 2023 at 13:16, Maxim Muzafarov  wrote:
> > >
> > > Stefan,
> > >
> > > Thank you for bringing this topic up. I'll prepare the PR shortly with
> > > option 4, so everyone can take a look at the amount of changes. This
> > > does not force us to go exactly this path, but it may shed light on
> > > changes in general.
> > >
> > > What exactly we're planning to do in the PR:
> > >
> > > 1. Checkstyle AvoidStarImport rule, so no star imports will be allowed.
> > > 2. Checkstyle ImportOrder rule, for controlling the order.
> > > 3. The IDE code style configuration for Intellij IDEA, NetBeans, and
> > > Eclipse (it doesn't exist for Eclipse yet).
> > > 4. The import order according to option 4:
> > >
> > > ```
> > > java.*
> > > javax.*
> > > [blank line]
> > > com.*
> > > net.*
> > > org.*
> > > [blank line]
> > > org.apache.cassandra.*
> > > [blank line]
> > > all other imports
> > > [blank line]
> > > static all other imports
> > > ```
> > >
> > >
> > >
> > > On Mon, 16 Jan 2023 at 12:39, Miklosovic, Stefan
> > >  wrote:
> > > >
> > > > Based on the voting we should go with option 4?
> > > >
> > > > Two weeks passed without anybody joining so I guess folks are all
> happy with that or this just went unnoticed?
> > > >
> > > > Let's give it time until the end of this week (Friday 12:00 

Re: [DISCUSSION] Cassandra's code style and source code analysis

2023-02-22 Thread Maxim Muzafarov
Hello everyone,

I have created an issue CASSANDRA-18277 that may help us move forward
with code style changes. It only affects the way we store the IntelliJ
code style configuration and has no effect on any current (or any)
releases, so it should be safe to merge. So, once the issue is
resolved, every developer that checkouts a release branch will use the
same code style stored in that branch. This in turn makes rebasing a
big change like the import order [1] a really straightforward matter
(by pressing Ctrl + Opt + O in their local branch to organize
imports).

See:

Move the IntelliJ Idea code style and inspections configuration to the
project's root .idea directory
https://issues.apache.org/jira/browse/CASSANDRA-18277



[1] https://issues.apache.org/jira/browse/CASSANDRA-17925
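For reference, the "option 4" import order discussed in this thread (with a blank line between every group, as the Eclipse configuration requires) can be illustrated with a sketch. This is purely illustrative: JDK-bundled packages stand in for the third-party groups, the class name is invented, and the net.* and org.apache.cassandra.* groups are shown only as a comment since no such packages are available in a standalone file.

```java
// Illustrative only: imports grouped per the agreed order, each group
// separated by a blank line (java.*, javax.*, com.*, net.*, org.*,
// org.apache.cassandra.*, all others, static imports).
import java.util.List;

import javax.xml.parsers.DocumentBuilderFactory;

import com.sun.net.httpserver.HttpServer;

import org.w3c.dom.Document;

// (net.* and org.apache.cassandra.* groups would appear here in a real file)

import static java.util.Collections.emptyList;

public class ImportOrderExample
{
    public static List<String> names()
    {
        return emptyList();
    }

    public static void main(String[] args)
    {
        System.out.println("names=" + names());
    }
}
```

The checkstyle ImportOrder rule would then enforce the same grouping that the IDE configurations produce.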

On Wed, 25 Jan 2023 at 13:05, Miklosovic, Stefan
 wrote:
>
> Thank you Maxim for doing this.
>
> It is nice to see this effort materialized in a PR.
>
> I would wait until bigger chunks of work are committed to trunk (like CEP-15) 
> to not collide too much. I would say we can postpone doing this until the 
> actual 5.0 release, last weeks before it so we would not clash with any work 
> people would like to include in 5.0. This can go in anytime, basically.
>
> Are people on the same page?
>
> Regards
>
> 
> From: Maxim Muzafarov 
> Sent: Monday, January 23, 2023 19:46
> To: dev@cassandra.apache.org
> Subject: Re: [DISCUSSION] Cassandra's code style and source code analysis
>
> NetApp Security WARNING: This is an external email. Do not click links or 
> open attachments unless you recognize the sender and know the content is safe.
>
>
>
>
> Hello everyone,
>
> You can find the changes here:
> https://issues.apache.org/jira/browse/CASSANDRA-17925
>
> While preparing the code style configuration for the Eclipse IDE, I
> discovered that there was no easy way to have complex grouping options
> for the set of packages. So we need to add extra blank lines between
> each group of packages so that all the configurations for Eclipse,
> NetBeans, IntelliJ IDEA and checkstyle are aligned. I should have
> checked this earlier for sure, but I only did it for static imports
> and some groups, my bad. The resultant configuration looks like this:
>
> java.*
> [blank line]
> javax.*
> [blank line]
> com.*
> [blank line]
> net.*
> [blank line]
> org.*
> [blank line]
> org.apache.cassandra.*
> [blank line]
> all other imports
> [blank line]
> static all other imports
>
> The pull request is here:
> https://github.com/apache/cassandra/pull/2108
>
> The configuration-related changes are placed in a dedicated commit, so
> it should be easy to make a review:
> https://github.com/apache/cassandra/pull/2108/commits/84e292ddc9671a0be76ceb9304b2b9a051c2d52a
>
> 
>
> Another important thing to mention is that the total amount of changes
> for organising imports is really big (more than 2000 files!), so we
> need to decide the right time to merge this PR. Although rebasing or
> merging changes to development branches should become much easier
> ("Accept local" + "Organize imports"), we still need to pay extra
> attention here to minimise the impact on major patches for the next
> release.
>
> On Mon, 16 Jan 2023 at 13:16, Maxim Muzafarov  wrote:
> >
> > Stefan,
> >
> > Thank you for bringing this topic up. I'll prepare the PR shortly with
> > option 4, so everyone can take a look at the amount of changes. This
> > does not force us to go exactly this path, but it may shed light on
> > changes in general.
> >
> > What exactly we're planning to do in the PR:
> >
> > 1. Checkstyle AvoidStarImport rule, so no star imports will be allowed.
> > 2. Checkstyle ImportOrder rule, for controlling the order.
> > 3. The IDE code style configuration for Intellij IDEA, NetBeans, and
> > Eclipse (it doesn't exist for Eclipse yet).
> > 4. The import order according to option 4:
> >
> > ```
> > java.*
> > javax.*
> > [blank line]
> > com.*
> > net.*
> > org.*
> > [blank line]
> > org.apache.cassandra.*
> > [blank line]
> > all other imports
> > [blank line]
> > static all other imports
> > ```
> >
> >
> >
> > On Mon, 16 Jan 2023 at 12:39, Miklosovic, Stefan
> >  wrote:
> > >
> > > Based on the voting we should go with option 4?
> > >
> > > Two weeks passed without anybody joining so I guess folks are all happy 
> > > with that or this just went unnoticed?
> > >
> > > Let's give it time until the end of this week (Friday 12:00 UTC).
> > >
> > > Regards
> > >
> > > 
> > > From: Maxim Muzafarov 
> > > Sent: Tuesday, January 3, 2023 14:31
> > > To: dev@cassandra.apache.org
> > > Subject: Re: [DISCUSSION] Cassandra's code style and source code analysis
> > >
> > > Folks,
> 

Re: Downgradability

2023-02-22 Thread Josh McKenzie
> why not implement backwards write compatibility?
+1 to this from a philosophical perspective. Keeping prior releases completely 
in the dark about new release sstable formats is a clean approach, and we 
should already have the code around to ser/deser the prior version's data on 
the next version.
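The ser/deser idea above can be sketched as a version-parameterized serializer. All names here are invented for illustration and are not Cassandra's actual serializer API; the point is only that a newer node can keep writing the previous version's layout by gating new fields on the target version.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Arrays;

// Hypothetical sketch: serializers take the target format version as a
// parameter, so the newer binary can still produce the prior version's bytes.
public class VersionedSerializer
{
    public static final int VERSION_40 = 12;
    public static final int VERSION_41 = 13;

    // A field introduced in the newer format is only written when the
    // target version supports it.
    public static void serialize(DataOutput out, int version, int ttl, int newField) throws IOException
    {
        out.writeInt(ttl);
        if (version >= VERSION_41)
            out.writeInt(newField);
    }

    public static int[] deserialize(DataInput in, int version) throws IOException
    {
        int ttl = in.readInt();
        // Reading an old-format stream falls back to a default value.
        int newField = version >= VERSION_41 ? in.readInt() : 0;
        return new int[] { ttl, newField };
    }

    public static int[] roundTrip(int version, int ttl, int newField)
    {
        try
        {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            serialize(new DataOutputStream(bytes), version, ttl, newField);
            return deserialize(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())), version);
        }
        catch (IOException e)
        {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args)
    {
        System.out.println(Arrays.toString(roundTrip(VERSION_40, 600, 7))); // [600, 0]
        System.out.println(Arrays.toString(roundTrip(VERSION_41, 600, 7))); // [600, 7]
    }
}
```

The same pattern is what makes "keep prior releases in the dark" tractable: the write path already branches on a version, so targeting the older version is a configuration choice rather than new machinery.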

On Wed, Feb 22, 2023, at 10:07 AM, Jeff Jirsa wrote:
> When people are serious about this requirement, they’ll build the downgrade 
> equivalents of the upgrade tests and run them automatically, often, so people 
> understand what the real gap is and when something new makes it break 
> 
> Until those tests exist, I think collectively we should all stop pretending 
> like this is dogma. Best effort is best effort. 
> 
> 
> 
>> On Feb 22, 2023, at 6:57 AM, Branimir Lambov  
>> wrote:
>> 
>> > 1. Major SSTable changes should begin with forward-compatibility in a 
>> > prior release.
>> 
>> This requires "feature" changes, i.e. new non-trivial code for previous 
>> patch releases. It also entails porting over any further format modification.
>> 
>> Instead of this, in combination with your second point, why not implement 
>> backwards write compatibility? The opt-in is then clearer to define (i.e. 
>> upgrades start with e.g. a "4.1-compatible" settings set that includes file 
>> format compatibility and disabling of new features, new nodes start with 
>> "current" settings set). When the upgrade completes and the user is happy 
>> with the result, the settings set can be replaced.
>> 
>> Doesn't this achieve what you want (and we all agree is a worthy goal) with 
>> much less effort for everyone? Supporting backwards-compatible writing is 
>> trivial, and we even have a proof-of-concept in the stats metadata 
>> serializer. It also simplifies by a serious margin the amount of work and 
>> thinking one has to do when a format improvement is implemented -- e.g. the 
>> TTL patch can just address this in exactly the way the problem was addressed 
>> in earlier versions of the format, by capping to 2038, without any need to 
>> specify, obey or test any configuration flags.
>> 
>> >> It’s a commitment, and it requires every contributor to consider it as 
>> >> part of work they produce.
>> 
>> > But it shouldn't be a burden. Ability to downgrade is a testable problem, 
>> > so I see this work as a function of the suite of tests the project is 
>> > willing to agree on supporting.
>> 
>> I fully agree with this sentiment, and I feel that the current "try to not 
>> introduce breaking changes" approach is adding the burden, but not the 
>> benefits -- because the latter cannot be proven, and are most likely already 
>> broken.
>> 
>> Regards,
>> Branimir
>> 
>> On Wed, Feb 22, 2023 at 1:01 AM Abe Ratnofsky  wrote:
>>> Some interesting existing work on this subject is "Understanding and 
>>> Detecting Software Upgrade Failures in Distributed Systems" - 
>>> https://dl.acm.org/doi/10.1145/3477132.3483577,
>>> also summarized by Andrey Satarin here: 
>>> https://asatarin.github.io/talks/2022-09-upgrade-failures-in-distributed-systems/
>>> 
>>> 
>>> They specifically tested Cassandra upgrades, and have a solid list of 
>>> defects that they found. They also describe their testing mechanism 
>>> DUPTester, which includes a component that confirms that the leftover state 
>>> from one version can start up on the next version. There is a wider scope 
>>> of upgrade defects highlighted in the paper, beyond SSTable version support.
>>> 
>>> I believe the project would benefit from expanding our test suite 
>>> similarly, by parametrizing more tests on upgrade version pairs.
>>> 
>>> Also, per Benedict's comment:
>>> 
>>> > It’s a commitment, and it requires every contributor to consider it as 
>>> > part of work they produce.
>>> 
>>> But it shouldn't be a burden. Ability to downgrade is a testable problem, 
>>> so I see this work as a function of the suite of tests the project is 
>>> willing to agree on supporting.
>>> 
>>> Specifically - I agree with Scott's proposal to emulate the HDFS 
>>> upgrade-then-finalize approach. I would also support automatic finalization 
>>> based on a time threshold or similar, to balance the priorities of safe and 
>>> straightforward upgrades. Users need to be aware of the range of SSTable 
>>> formats supported by a given version, and how to handle when their SSTables 
>>> wouldn't be supported by an upcoming upgrade.
>>> 
>>> --
>>> Abe
>> 
>> 
>> --
>> Branimir Lambov
>> e. branimir.lam...@datastax.com
>> w. www.datastax.com
>> 


Re: Downgradability

2023-02-22 Thread Jeff Jirsa
When people are serious about this requirement, they’ll build the downgrade
equivalents of the upgrade tests and run them automatically, often, so people
understand what the real gap is and when something new makes it break.

Until those tests exist, I think collectively we should all stop pretending
like this is dogma. Best effort is best effort.

On Feb 22, 2023, at 6:57 AM, Branimir Lambov  wrote:

> > 1. Major SSTable changes should begin with forward-compatibility in a
> > prior release.
> 
> This requires "feature" changes, i.e. new non-trivial code for previous
> patch releases. It also entails porting over any further format
> modification.
> 
> Instead of this, in combination with your second point, why not implement
> backwards write compatibility? The opt-in is then clearer to define (i.e.
> upgrades start with e.g. a "4.1-compatible" settings set that includes
> file format compatibility and disabling of new features, new nodes start
> with "current" settings set). When the upgrade completes and the user is
> happy with the result, the settings set can be replaced.
> 
> Doesn't this achieve what you want (and we all agree is a worthy goal)
> with much less effort for everyone? Supporting backwards-compatible
> writing is trivial, and we even have a proof-of-concept in the stats
> metadata serializer. It also simplifies by a serious margin the amount of
> work and thinking one has to do when a format improvement is implemented
> -- e.g. the TTL patch can just address this in exactly the way the
> problem was addressed in earlier versions of the format, by capping to
> 2038, without any need to specify, obey or test any configuration flags.
> 
> >> It’s a commitment, and it requires every contributor to consider it as
> >> part of work they produce.
> 
> > But it shouldn't be a burden. Ability to downgrade is a testable
> > problem, so I see this work as a function of the suite of tests the
> > project is willing to agree on supporting.
> 
> I fully agree with this sentiment, and I feel that the current "try to
> not introduce breaking changes" approach is adding the burden, but not
> the benefits -- because the latter cannot be proven, and are most likely
> already broken.
> 
> Regards,
> Branimir
> 
> On Wed, Feb 22, 2023 at 1:01 AM Abe Ratnofsky  wrote:
> > Some interesting existing work on this subject is "Understanding and
> > Detecting Software Upgrade Failures in Distributed Systems" -
> > https://dl.acm.org/doi/10.1145/3477132.3483577,
> > also summarized by Andrey Satarin here:
> > https://asatarin.github.io/talks/2022-09-upgrade-failures-in-distributed-systems/
> > 
> > They specifically tested Cassandra upgrades, and have a solid list of
> > defects that they found. They also describe their testing mechanism
> > DUPTester, which includes a component that confirms that the leftover
> > state from one version can start up on the next version. There is a
> > wider scope of upgrade defects highlighted in the paper, beyond SSTable
> > version support.
> > 
> > I believe the project would benefit from expanding our test suite
> > similarly, by parametrizing more tests on upgrade version pairs.
> > 
> > Also, per Benedict's comment:
> > 
> > > It’s a commitment, and it requires every contributor to consider it
> > > as part of work they produce.
> > 
> > But it shouldn't be a burden. Ability to downgrade is a testable
> > problem, so I see this work as a function of the suite of tests the
> > project is willing to agree on supporting.
> > 
> > Specifically - I agree with Scott's proposal to emulate the HDFS
> > upgrade-then-finalize approach. I would also support automatic
> > finalization based on a time threshold or similar, to balance the
> > priorities of safe and straightforward upgrades. Users need to be aware
> > of the range of SSTable formats supported by a given version, and how
> > to handle when their SSTables wouldn't be supported by an upcoming
> > upgrade.
> > 
> > --
> > Abe
> 
> --
> Branimir Lambov
> e. branimir.lam...@datastax.com
> w. www.datastax.com



Re: Downgradability

2023-02-22 Thread Branimir Lambov
> 1. Major SSTable changes should begin with forward-compatibility in a
prior release.

This requires "feature" changes, i.e. new non-trivial code for previous
patch releases. It also entails porting over any further format
modification.

Instead of this, in combination with your second point, why not implement
backwards write compatibility? The opt-in is then clearer to define (i.e.
upgrades start with e.g. a "4.1-compatible" settings set that includes file
format compatibility and disabling of new features, new nodes start with
"current" settings set). When the upgrade completes and the user is happy
with the result, the settings set can be replaced.

Doesn't this achieve what you want (and we all agree is a worthy goal) with
much less effort for everyone? Supporting backwards-compatible writing is
trivial, and we even have a proof-of-concept in the stats metadata
serializer. It also simplifies by a serious margin the amount of work and
thinking one has to do when a format improvement is implemented -- e.g. the
TTL patch can just address this in exactly the way the problem was
addressed in earlier versions of the format, by capping to 2038, without
any need to specify, obey or test any configuration flags.
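The "capping to 2038" approach mentioned above can be sketched as follows. This is a simplified illustration, not the actual patch: the constant and method names are invented, mirroring the classic limit of a signed 32-bit seconds-since-epoch field.

```java
// Hypothetical sketch of capping an expiration time so it still fits the
// older on-disk format's 32-bit field (the "2038 problem").
public final class ExpirationCap
{
    // 2^31 - 1 seconds after the epoch falls on 2038-01-19, the largest
    // value a signed 32-bit seconds field can hold.
    public static final long MAX_EXPIRATION_SECONDS_2038 = Integer.MAX_VALUE;

    // When writing a backwards-compatible format, an out-of-range expiration
    // is capped rather than rejected or requiring a config flag.
    public static int capForLegacyFormat(long expirationSeconds)
    {
        return (int) Math.min(expirationSeconds, MAX_EXPIRATION_SECONDS_2038);
    }

    public static void main(String[] args)
    {
        long farFuture = 4_000_000_000L; // ~year 2096, overflows 32 bits
        System.out.println(capForLegacyFormat(farFuture)); // prints 2147483647
        System.out.println(capForLegacyFormat(1_000L));    // prints 1000
    }
}
```

The appeal of this shape is exactly the point made above: the writer silently produces the most faithful value the old format can represent, with no flags to specify, obey or test.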

>> It’s a commitment, and it requires every contributor to consider it as
part of work they produce.

> But it shouldn't be a burden. Ability to downgrade is a testable problem,
so I see this work as a function of the suite of tests the project is
willing to agree on supporting.

I fully agree with this sentiment, and I feel that the current "try to not
introduce breaking changes" approach is adding the burden, but not the
benefits -- because the latter cannot be proven, and are most likely
already broken.

Regards,
Branimir

On Wed, Feb 22, 2023 at 1:01 AM Abe Ratnofsky  wrote:

> Some interesting existing work on this subject is "Understanding and
> Detecting Software Upgrade Failures in Distributed Systems" -
> https://dl.acm.org/doi/10.1145/3477132.3483577,
> also summarized by Andrey Satarin here:
> https://asatarin.github.io/talks/2022-09-upgrade-failures-in-distributed-systems/
>
> They specifically tested Cassandra upgrades, and have a solid list of
> defects that they found. They also describe their testing mechanism
> DUPTester, which includes a component that confirms that the leftover state
> from one version can start up on the next version. There is a wider scope
> of upgrade defects highlighted in the paper, beyond SSTable version support.
>
> I believe the project would benefit from expanding our test suite
> similarly, by parametrizing more tests on upgrade version pairs.
>
> Also, per Benedict's comment:
>
> > It’s a commitment, and it requires every contributor to consider it as
> part of work they produce.
>
> But it shouldn't be a burden. Ability to downgrade is a testable problem,
> so I see this work as a function of the suite of tests the project is
> willing to agree on supporting.
>
> Specifically - I agree with Scott's proposal to emulate the HDFS
> upgrade-then-finalize approach. I would also support automatic finalization
> based on a time threshold or similar, to balance the priorities of safe and
> straightforward upgrades. Users need to be aware of the range of SSTable
> formats supported by a given version, and how to handle when their SSTables
> wouldn't be supported by an upcoming upgrade.
>
> --
> Abe
>


-- 
Branimir Lambov
e. branimir.lam...@datastax.com
w. www.datastax.com


Re: [DISCUSS] Allow UPDATE on settings virtual table to change running configuration

2023-02-22 Thread Benedict
Could you describe the issues?

Config that is globally exposed should ideally be immutable with final
members, in which case volatile is only necessary if you’re using the config
parameter in a tight loop in which you need to witness a new value - which
shouldn’t apply to any of our config.

There are some weird niches, like updating long values on some (unsupported
by us) JVMs that may tear. Technically you also require it for visibility
with the JMM. But in practice it is mostly unnecessary. Often what seems to
be a volatile issue is really something else.

> On 22 Feb 2023, at 13:18, Benjamin Lerer  wrote:
> 
> I have seen issues with some updatable parameters which were missing the
> volatile keyword.
> 
> Le mer. 22 févr. 2023 à 11:36, Aleksey Yeshchenko  a écrit :
> > FWIW most of those volatile fields, if not in fact all of them, should
> > NOT be volatile at all. Someone started the trend and most folks have
> > been copycatting or doing the same for consistency with the rest of the
> > codebase.
> > 
> > Please definitely don’t rely on that.
> > 
> > > On 21 Feb 2023, at 21:06, Maxim Muzafarov  wrote:
> > > 
> > > 1. Rely on the volatile keyword in front of fields in the Config class;
> > > 
> > > I would say this is the most confusing option for me because it
> > > doesn't give us all the guarantees we need, and also:
> > > - We have no explicit control over what exactly we expose to a user.
> > > When we modify the JMX API, we're implementing a new method for the
> > > MBean, which in turn makes this action an explicit exposure;
> > > - The volatile keyword is not the only way to achieve thread safety,
> > > and looks strange from a public API design point of view;
> > > - A good example is the setEnableDropCompactStorage method, which
> > > changes the volatile field, but is only visible for testing purposes;
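The immutable-with-final-members suggestion can be sketched like this. The class and field names are hypothetical, not Cassandra's actual Config: the idea is that final fields give safe publication, so only the single reference swap needs volatile rather than every individual setting.

```java
public class ConfigHolder
{
    // Hypothetical immutable settings snapshot: all fields final, so any
    // thread that sees the object sees fully initialized values.
    public static final class Config
    {
        public final int compactionThroughputMbPerSec;
        public final boolean dropCompactStorageEnabled;

        public Config(int throughput, boolean dropEnabled)
        {
            this.compactionThroughputMbPerSec = throughput;
            this.dropCompactStorageEnabled = dropEnabled;
        }
    }

    // The one volatile reference is the only synchronization point: updates
    // replace the whole snapshot instead of mutating individual fields.
    private static volatile Config current = new Config(64, false);

    public static Config get()
    {
        return current;
    }

    public static void update(Config next)
    {
        current = next;
    }

    public static void main(String[] args)
    {
        update(new Config(128, true));
        System.out.println(get().compactionThroughputMbPerSec); // prints 128
    }
}
```

A reader also gets a consistent view of related settings by reading `get()` once, which per-field volatiles cannot guarantee.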


Re: [DISCUSS] Allow UPDATE on settings virtual table to change running configuration

2023-02-22 Thread Aleksey Yeshchenko
Could maybe be an issue for some really tight unit tests. In actual use, the 
updates to those fields will be globally visible near instantly without the 
volatile keyword.

> On 22 Feb 2023, at 13:17, Benjamin Lerer  wrote:
> 
> I have seen issues with some updatable parameters which were missing the 
> volatile keyword.
> 
> Le mer. 22 févr. 2023 à 11:36, Aleksey Yeshchenko  > a écrit :
>> FWIW most of those volatile fields, if not in fact all of them, should NOT 
>> be volatile at all. Someone started the trend and most folks have been 
>> copycatting or doing the same for consistency with the rest of the codebase.
>> 
>> Please definitely don’t rely on that.
>> 
>>> On 21 Feb 2023, at 21:06, Maxim Muzafarov >> > wrote:
>>> 
>>> 1. Rely on the volatile keyword in front of fields in the Config class;
>>> 
>>> I would say this is the most confusing option for me because it
>>> doesn't give us all the guarantees we need, and also:
>>> - We have no explicit control over what exactly we expose to a user.
>>> When we modify the JMX API, we're implementing a new method for the
>>> MBean, which in turn makes this action an explicit exposure;
>>> - The volatile keyword is not the only way to achieve thread safety,
>>> and looks strange from a public API design point of view;
>>> - A good example is the setEnableDropCompactStorage method, which
>>> changes the volatile field, but is only visible for testing purposes;
>> 



Re: [DISCUSS] Allow UPDATE on settings virtual table to change running configuration

2023-02-22 Thread Benjamin Lerer
I have seen issues with some updatable parameters which were missing the
volatile keyword.

Le mer. 22 févr. 2023 à 11:36, Aleksey Yeshchenko  a
écrit :

> FWIW most of those volatile fields, if not in fact all of them, should NOT
> be volatile at all. Someone started the trend and most folks have been
> copycatting or doing the same for consistency with the rest of the codebase.
>
> Please definitely don’t rely on that.
>
> On 21 Feb 2023, at 21:06, Maxim Muzafarov  wrote:
>
> 1. Rely on the volatile keyword in front of fields in the Config class;
>
> I would say this is the most confusing option for me because it
> doesn't give us all the guarantees we need, and also:
> - We have no explicit control over what exactly we expose to a user.
> When we modify the JMX API, we're implementing a new method for the
> MBean, which in turn makes this action an explicit exposure;
> - The volatile keyword is not the only way to achieve thread safety,
> and looks strange from a public API design point of view;
> - A good example is the setEnableDropCompactStorage method, which
> changes the volatile field, but is only visible for testing purposes;
>
>
>


Re: Removing columns from sstables

2023-02-22 Thread Claude Warren, Jr via dev
Close.  The column is still in the table, so the v3.x code that reads
system.local will detect it and fail on an unknown column, as that code
appears to read the actual on-disk format.

It sounds like the short answer is that there is no way to physically
remove the column from the on-disk format once it is added.

On Wed, Feb 22, 2023 at 11:28 AM Erick Ramirez 
wrote:

> When a column is dropped from a table, it is added to the
> system.dropped_columns table so it doesn't get returned in the results. Is
> that what you mean? 
>
>>


Re: Removing columns from sstables

2023-02-22 Thread Erick Ramirez
When a column is dropped from a table, it is added to the
system.dropped_columns table so it doesn't get returned in the results. Is
that what you mean? 
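The mechanism described here can be sketched abstractly. These are invented names, not Cassandra's actual read path: the point is that cells of a dropped column are filtered at read time when they were written before the drop, which is why the data never has to be physically rewritten right away.

```java
import java.util.HashMap;
import java.util.Map;

// Abstract sketch of dropped-column filtering: record when each column was
// dropped, and hide any cell older than its column's drop time.
public class DroppedColumnsFilter
{
    private final Map<String, Long> droppedAtMicros = new HashMap<>();

    public void recordDrop(String column, long dropTimeMicros)
    {
        droppedAtMicros.put(column, dropTimeMicros);
    }

    // A cell stays visible if its column was never dropped, or if the cell
    // was written after the drop (e.g. the column was re-added).
    public boolean isVisible(String column, long cellTimestampMicros)
    {
        Long droppedAt = droppedAtMicros.get(column);
        return droppedAt == null || cellTimestampMicros > droppedAt;
    }

    public static void main(String[] args)
    {
        DroppedColumnsFilter filter = new DroppedColumnsFilter();
        filter.recordDrop("broadcast_port", 2_000L);
        System.out.println(filter.isVisible("broadcast_port", 1_000L)); // prints false
        System.out.println(filter.isVisible("broadcast_port", 3_000L)); // prints true
    }
}
```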

>


Removing columns from sstables

2023-02-22 Thread Claude Warren, Jr via dev
Greetings,

I have been looking through the code and I can't find any place where
columns are removed from an sstable.   I have found that rows can be
deleted.  Columns can be marked as deleted.  But I have found no place
where the deleted cell is removed from the row.  Is there the concept of
completely removing all traces of the column from the table?

The specific case I am working on is downgrading v4.x system.local table to
v3.1 format.  This involves the removal of the broadcast_port column so
that the hardcoded definition of the v3.1 table can read the sstable from
disk.

Any assistance or pointers would be appreciated,
Claude


Re: [DISCUSS] Allow UPDATE on settings virtual table to change running configuration

2023-02-22 Thread Aleksey Yeshchenko
FWIW most of those volatile fields, if not in fact all of them, should NOT be 
volatile at all. Someone started the trend and most folks have been copycatting 
or doing the same for consistency with the rest of the codebase.

Please definitely don’t rely on that.

> On 21 Feb 2023, at 21:06, Maxim Muzafarov  wrote:
> 
> 1. Rely on the volatile keyword in front of fields in the Config class;
> 
> I would say this is the most confusing option for me because it
> doesn't give us all the guarantees we need, and also:
> - We have no explicit control over what exactly we expose to a user.
> When we modify the JMX API, we're implementing a new method for the
> MBean, which in turn makes this action an explicit exposure;
> - The volatile keyword is not the only way to achieve thread safety,
> and looks strange from a public API design point of view;
> - A good example is the setEnableDropCompactStorage method, which
> changes the volatile field, but is only visible for testing purposes;