[jira] [Commented] (CASSANDRA-18935) Unable to write to counter table if native transport is disabled on startup

2023-10-17 Thread Maxwell Guo (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776464#comment-17776464
 ] 

Maxwell Guo commented on CASSANDRA-18935:
-

So, if I understand your description correctly, why not set the flag when running 
nodetool enablebinary if we find it is not already set?

> Unable to write to counter table if native transport is disabled on startup
> ---
>
> Key: CASSANDRA-18935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18935
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Cameron Zemek
>Priority: Normal
> Attachments: 18935-3.11.patch
>
>
>  
> {code:java}
>     if ((nativeFlag != null && Boolean.parseBoolean(nativeFlag)) || 
> (nativeFlag == null && DatabaseDescriptor.startNativeTransport()))
>     {
>     startNativeTransport();
>     StorageService.instance.setRpcReady(true);
>     } {code}
> The startup code here only sets RpcReady if native transport is enabled. If 
> you call 
> {code:java}
> nodetool enablebinary{code}
> then this flag doesn't get set.
> But with the change from CASSANDRA-13043 it requires RpcReady set to true in 
> order to get a leader for the counter update.
> Not sure what the correct fix is here, seems to only really use this flag for 
> counters. So thinking perhaps the fix is to just move this outside the if 
> condition.
>  
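
The fix the reporter suggests — moving setRpcReady outside the conditional — can be sketched as a small standalone simulation of the startup logic quoted above (class and field names here are illustrative stand-ins, not Cassandra's real internals):

```java
public class RpcReadyFixSketch {
    static boolean rpcReady = false;
    static boolean nativeTransportStarted = false;

    // Simplified stand-in for the startup logic in the report.
    static void startup(String nativeFlag, boolean configDefault) {
        if ((nativeFlag != null && Boolean.parseBoolean(nativeFlag)) ||
            (nativeFlag == null && configDefault)) {
            nativeTransportStarted = true;
        }
        // The suggested fix: set RpcReady unconditionally, outside the if,
        // so counters can elect a leader even when the native transport is
        // only enabled later via `nodetool enablebinary`.
        rpcReady = true;
    }

    public static void main(String[] args) {
        startup("false", true); // native transport disabled at boot
        System.out.println(nativeTransportStarted); // false
        System.out.println(rpcReady);               // true with the fix
    }
}
```

With the original code, the second print would also be false in this scenario, which is exactly the state that blocks counter writes.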



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18905) Index.Group is incorrectly unregistered from the SecondaryIndexManager

2023-10-17 Thread Zhao Yang (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776444#comment-17776444
 ] 

Zhao Yang commented on CASSANDRA-18905:
---

LGTM

> Index.Group is incorrectly unregistered from the SecondaryIndexManager
> --
>
> Key: CASSANDRA-18905
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18905
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/2i Index
>Reporter: Mike Adamson
>Assignee: Mike Adamson
>Priority: Urgent
> Fix For: 5.0, 5.1
>
>
> An Index.Group is removed from the SecondaryIndexManager during 
> unregisterIndex if it contains no indexes after the index is unregistered.
> The code for removing the group uses the wrong key to remove the group from 
> the indexGroups map. It is using the group object rather than the group name 
> that is used as the key in the map.
> This means that the group is not added again if a new index is registered 
> using that group. The knock on from this is that the 
> StorageAttachedIndexGroup unregisters itself from the Tracker when it has no 
> indexes after an index is removed. The same group with no tracker is then 
> used for new indexes. This group then receives no notifications about sstable 
> or memtable updates. The ultimate side effect of this is that, memtables are 
> not released, resulting in memory leaks and indexes are not updated with new 
> sstables and their associated index files.
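
The bug described above is a classic wrong-key removal on a name-keyed map. A minimal sketch (the `Group` type and map here are hypothetical stand-ins for `Index.Group` and the `indexGroups` map, not the real SecondaryIndexManager code):

```java
import java.util.HashMap;
import java.util.Map;

public class GroupKeyBugSketch {
    // Hypothetical stand-in for Index.Group.
    static class Group {
        final String name;
        Group(String name) { this.name = name; }
    }

    // Returns how many groups remain after trying to unregister one.
    static int unregister(boolean useNameAsKey) {
        Map<String, Group> indexGroups = new HashMap<>();
        Group g = new Group("sai");
        indexGroups.put(g.name, g);
        if (useNameAsKey)
            indexGroups.remove(g.name); // correct: the name is the map key
        else
            indexGroups.remove(g);      // the bug: the group object is never a key, so this is a no-op
        return indexGroups.size();
    }

    public static void main(String[] args) {
        System.out.println(unregister(false)); // 1 - buggy removal leaves a stale group behind
        System.out.println(unregister(true));  // 0 - removing by name works
    }
}
```

`Map.remove(Object)` silently returns null when the argument equals no key, which is why the stale, tracker-less group survives and keeps absorbing new indexes.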






[cassandra] branch cep-21-tcm-review updated: Suggested tiny fixes No CI was run as it is just small fixes and CI is having a bad day with this branch anyway

2023-10-17 Thread edimitrova
This is an automated email from the ASF dual-hosted git repository.

edimitrova pushed a commit to branch cep-21-tcm-review
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cep-21-tcm-review by this push:
 new c38301e23a Suggested tiny fixes No CI was run as it is just small 
fixes and CI is having a bad day with this branch anyway
c38301e23a is described below

commit c38301e23afd99d475ab2b4c479af3b5e1579268
Author: Ekaterina Dimitrova 
AuthorDate: Tue Oct 17 19:52:08 2023 -0400

Suggested tiny fixes
No CI was run as it is just small fixes and CI is having a bad day with 
this branch anyway
---
 ci/harry_simulation.sh |  4 +---
 .../org/apache/cassandra/auth/AuthKeyspace.java|  4 ++--
 .../statements/schema/CreateKeyspaceStatement.java |  4 +++-
 .../cassandra/locator/InetAddressAndPort.java  |  4 ++--
 .../schema/DistributedMetadataLogKeyspace.java | 10 -
 .../apache/cassandra/schema/ReplicationParams.java |  2 +-
 .../cassandra/service/StorageServiceMBean.java | 24 +++---
 .../cassandra/tcm/ClusterMetadataService.java  |  8 +++-
 .../org/apache/cassandra/tcm/log/LogState.java |  9 +++-
 .../cassandra/tcm/ownership/PlacementForRange.java |  4 ++--
 .../tcm/ownership/VersionedEndpoints.java  |  6 --
 .../simulator/test/ShortPaxosSimulationTest.java   |  8 
 12 files changed, 42 insertions(+), 45 deletions(-)

diff --git a/ci/harry_simulation.sh b/ci/harry_simulation.sh
index 537784b3b7..47d254463f 100755
--- a/ci/harry_simulation.sh
+++ b/ci/harry_simulation.sh
@@ -35,8 +35,6 @@ common=(-Dstorage-config=$current_dir/../test/conf
 -Dcassandra.test.sstableformatdevelopment=true
 -Djava.security.egd=file:/dev/urandom
 -Dcassandra.testtag=.jdk11
--Dstorage-config=$current_dir/../test/conf
--Djava.awt.headless=true
 -Dcassandra.keepBriefBrief=true
 -Dcassandra.allow_simplestrategy=true
 -Dcassandra.strict.runtime.checks=true
@@ -57,7 +55,7 @@ common=(-Dstorage-config=$current_dir/../test/conf
 
-Dcassandra.test.logConfigPath=$current_dir/../test/conf/log4j2-dtest-simulator.xml
 -Dcassandra.test.logConfigProperty=log4j.configurationFile
 
-Dlog4j2.configurationFile=$current_dir/../test/conf/log4j2-dtest-simulator.xml
--javaagent:$current_dir/../lib/jamm-0.3.2.jar
+-javaagent:$current_dir/../lib/jamm-0.4.0.jar
 -javaagent:$current_dir/../build/test/lib/jars/simulator-asm.jar
 
-Xbootclasspath/a:$current_dir/../build/test/lib/jars/simulator-bootstrap.jar
 -XX:ActiveProcessorCount=4
diff --git a/src/java/org/apache/cassandra/auth/AuthKeyspace.java 
b/src/java/org/apache/cassandra/auth/AuthKeyspace.java
index 98a9496064..7546a2e948 100644
--- a/src/java/org/apache/cassandra/auth/AuthKeyspace.java
+++ b/src/java/org/apache/cassandra/auth/AuthKeyspace.java
@@ -41,13 +41,13 @@ public final class AuthKeyspace
 {
 }
 
-public static final int DEFAULT_RF = 
CassandraRelevantProperties.SYSTEM_AUTH_DEFAULT_RF.getInt();
+private static final int DEFAULT_RF = 
CassandraRelevantProperties.SYSTEM_AUTH_DEFAULT_RF.getInt();
 
 /**
  * Generation is used as a timestamp for automatic table creation on 
startup.
  * If you make any changes to the tables below, make sure to increment the
  * generation and document your change here.
- *
+ * 
  * gen 0: original definition in 3.0
  * gen 1: compression chunk length reduced to 16KiB, 
memtable_flush_period_in_ms now unset on all tables in 4.0
  */
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/schema/CreateKeyspaceStatement.java
 
b/src/java/org/apache/cassandra/cql3/statements/schema/CreateKeyspaceStatement.java
index 6d432fe5a8..68b91d0ad5 100644
--- 
a/src/java/org/apache/cassandra/cql3/statements/schema/CreateKeyspaceStatement.java
+++ 
b/src/java/org/apache/cassandra/cql3/statements/schema/CreateKeyspaceStatement.java
@@ -35,7 +35,9 @@ import org.apache.cassandra.db.guardrails.Guardrails;
 import org.apache.cassandra.exceptions.AlreadyExistsException;
 import org.apache.cassandra.locator.LocalStrategy;
 import org.apache.cassandra.locator.SimpleStrategy;
-import org.apache.cassandra.schema.*;
+import org.apache.cassandra.schema.Keyspaces;
+import org.apache.cassandra.schema.KeyspaceMetadata;
+import org.apache.cassandra.schema.Schema;
 import org.apache.cassandra.schema.KeyspaceParams.Option;
 import org.apache.cassandra.schema.Keyspaces.KeyspacesDiff;
 import org.apache.cassandra.service.ClientState;
diff --git a/src/java/org/apache/cassandra/locator/InetAddressAndPort.java 
b/src/java/org/apache/cassandra/locator/InetAddressAndPort.java
index 50f3368b20..60c7fd5bf1 100644
--- a/src/java/org/apache/cassandra/locator/InetAddressAndPort.java
+++ b/src/java/org/apache/cassandra/locator/InetAddressAndPort.java
@@ 

[jira] [Updated] (CASSANDRA-18935) Unable to write to counter table if native transport is disabled on startup

2023-10-17 Thread Cameron Zemek (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cameron Zemek updated CASSANDRA-18935:
--
Attachment: 18935-3.11.patch

> Unable to write to counter table if native transport is disabled on startup
> ---
>
> Key: CASSANDRA-18935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18935
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Cameron Zemek
>Priority: Normal
> Attachments: 18935-3.11.patch
>
>
>  
> {code:java}
>     if ((nativeFlag != null && Boolean.parseBoolean(nativeFlag)) || 
> (nativeFlag == null && DatabaseDescriptor.startNativeTransport()))
>     {
>     startNativeTransport();
>     StorageService.instance.setRpcReady(true);
>     } {code}
> The startup code here only sets RpcReady if native transport is enabled. If 
> you call 
> {code:java}
> nodetool enablebinary{code}
> then this flag doesn't get set.
> But with the change from CASSANDRA-13043 it requires RpcReady set to true in 
> order to get a leader for the counter update.
> Not sure what the correct fix is here, seems to only really use this flag for 
> counters. So thinking perhaps the fix is to just move this outside the if 
> condition.
>  






[jira] [Updated] (CASSANDRA-18935) Unable to write to counter table if native transport is disabled on startup

2023-10-17 Thread Cameron Zemek (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cameron Zemek updated CASSANDRA-18935:
--
Description: 
 
{code:java}
    if ((nativeFlag != null && Boolean.parseBoolean(nativeFlag)) || 
(nativeFlag == null && DatabaseDescriptor.startNativeTransport()))
    {
    startNativeTransport();
    StorageService.instance.setRpcReady(true);
    } {code}
The startup code here only sets RpcReady if native transport is enabled. If you 
call 
{code:java}
nodetool enablebinary{code}
then this flag doesn't get set.

But with the change from CASSANDRA-13043 it requires RpcReady set to true in 
order to get a leader for the counter update.

Not sure what the correct fix is here, seems to only really use this flag for 
counters. So thinking perhaps the fix is to just move this outside the if 
condition.

 

  was:
 
{code:java}
    if ((nativeFlag != null && Boolean.parseBoolean(nativeFlag)) || 
(nativeFlag == null && DatabaseDescriptor.startNativeTransport()))
    {
    startNativeTransport();
    StorageService.instance.setRpcReady(true);
    } {code}
The startup code here only sets RpcReady if native transport is enabled. If you 
call 
{code:java}
nodetool enablebinary{code}
then this flag doesn't get set.

But with the change from CASSANDRA-13043 it requires RpcReady set to true in 
other to get a leader for the counter update.

Not sure what the correct fix is here, seems to only really use this flag for 
counters. So thinking perhaps the fix is to just move this outside the if 
condition.

 


> Unable to write to counter table if native transport is disabled on startup
> ---
>
> Key: CASSANDRA-18935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18935
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Cameron Zemek
>Priority: Normal
>
>  
> {code:java}
>     if ((nativeFlag != null && Boolean.parseBoolean(nativeFlag)) || 
> (nativeFlag == null && DatabaseDescriptor.startNativeTransport()))
>     {
>     startNativeTransport();
>     StorageService.instance.setRpcReady(true);
>     } {code}
> The startup code here only sets RpcReady if native transport is enabled. If 
> you call 
> {code:java}
> nodetool enablebinary{code}
> then this flag doesn't get set.
> But with the change from CASSANDRA-13043 it requires RpcReady set to true in 
> order to get a leader for the counter update.
> Not sure what the correct fix is here, seems to only really use this flag for 
> counters. So thinking perhaps the fix is to just move this outside the if 
> condition.
>  






[jira] [Created] (CASSANDRA-18935) Unable to write to counter table if native transport is disabled on startup

2023-10-17 Thread Cameron Zemek (Jira)
Cameron Zemek created CASSANDRA-18935:
-

 Summary: Unable to write to counter table if native transport is 
disabled on startup
 Key: CASSANDRA-18935
 URL: https://issues.apache.org/jira/browse/CASSANDRA-18935
 Project: Cassandra
  Issue Type: Bug
Reporter: Cameron Zemek


 
{code:java}
    if ((nativeFlag != null && Boolean.parseBoolean(nativeFlag)) || 
(nativeFlag == null && DatabaseDescriptor.startNativeTransport()))
    {
    startNativeTransport();
    StorageService.instance.setRpcReady(true);
    } {code}
The startup code here only sets RpcReady if native transport is enabled. If you 
call 
{code:java}
nodetool enablebinary{code}
then this flag doesn't get set.

But with the change from CASSANDRA-13043 it requires RpcReady set to true in 
order to get a leader for the counter update.

Not sure what the correct fix is here, seems to only really use this flag for 
counters. So thinking perhaps the fix is to just move this outside the if 
condition.

 






[jira] [Commented] (CASSANDRA-18221) CEP-15: (Accord) Define a configuration system for Accord to allow overriding of internal configs and integrate with Cassandra yaml

2023-10-17 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776418#comment-17776418
 ] 

David Capwell commented on CASSANDRA-18221:
---

+1 to the accord changes; sorry, looks like I looked at the wrong PR =(

> CEP-15: (Accord) Define a configuration system for Accord to allow overriding 
> of internal configs and integrate with Cassandra yaml
> ---
>
> Key: CASSANDRA-18221
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18221
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Accord
>Reporter: David Capwell
>Assignee: Jacek Lewandowski
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 5.x
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Users are used to modifying cassandra via yaml and JMX (for dynamic configs) 
> but accord does not integrate with this right now; we should enhance Accord 
> to expose configs that are defined in cassandra yaml.
> As an extension on this, we should figure out which configs are “dynamic” and 
> allow overriding via JMX or system vtable.
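
One shape such a bridge could take is a registry of Accord settings seeded from cassandra.yaml, with the "dynamic" subset updatable at runtime (e.g. from a JMX operation or a system vtable write). All names below are hypothetical illustrations, not actual Cassandra or Accord APIs:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public class AccordConfigBridgeSketch {
    // Settings that may be overridden after startup, keyed by name.
    static final Map<String, Consumer<String>> dynamicSettings = new ConcurrentHashMap<>();
    static volatile long recoveryDelayMillis = 1000; // example internal config, seeded from yaml

    static void registerDynamic(String name, Consumer<String> applier) {
        dynamicSettings.put(name, applier);
    }

    // Entry point a JMX operation or vtable update could call.
    static boolean override(String name, String value) {
        Consumer<String> applier = dynamicSettings.get(name);
        if (applier == null) return false; // unknown or non-dynamic setting
        applier.accept(value);
        return true;
    }

    public static void main(String[] args) {
        registerDynamic("accord.recovery_delay_ms",
                        v -> recoveryDelayMillis = Long.parseLong(v));
        System.out.println(override("accord.recovery_delay_ms", "250")); // true
        System.out.println(recoveryDelayMillis); // 250
    }
}
```

Static, non-dynamic settings would simply never be registered, so overrides for them are rejected rather than applied inconsistently.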






[jira] [Commented] (CASSANDRA-18934) Downgrade to 4.1 fails due to schema changes

2023-10-17 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776417#comment-17776417
 ] 

David Capwell commented on CASSANDRA-18934:
---

For Accord we needed to add a keyspace and a table property, and found we couldn't 
do that because it would break downgrade!  system_schema.tables has 
"extensions", which we could use, but the keyspaces table doesn't!  So we need 
to define how to make such schema changes without breaking downgrade, which 
seems related to the compaction table change...

If we need to maintain support for downgrading from the latest release to 
previous releases, this will become an ongoing issue we need to address...
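
The workaround hinted at here — tucking a new property into a pre-existing map column instead of adding a brand-new schema column that older releases cannot deserialize — can be sketched as follows (the property key is purely illustrative, not a real Accord setting):

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class ExtensionsCompatSketch {
    // Store a new property in the pre-existing extensions map rather than
    // in a new column the old release has never heard of.
    static Map<String, byte[]> extensionsWith(String key, String value) {
        Map<String, byte[]> extensions = new HashMap<>();
        extensions.put(key, value.getBytes(StandardCharsets.UTF_8));
        return extensions;
    }

    public static void main(String[] args) {
        Map<String, byte[]> ext = extensionsWith("accord.transactional_mode", "full");
        // A 4.1 node simply ignores unknown extension keys, instead of failing
        // commit-log replay with "Unknown column ... during deserialization".
        System.out.println(new String(ext.get("accord.transactional_mode"),
                                      StandardCharsets.UTF_8));
    }
}
```

This only helps for tables, though — as the comment notes, the keyspaces table carries no such extensions column, which is exactly the gap being raised.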

> Downgrade to 4.1 fails due to schema changes
> 
>
> Key: CASSANDRA-18934
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18934
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: David Capwell
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> We are required to support 5.0 downgrading to 4.1 as a migration step, but we 
> don’t have tests to show this is working… I wrote a quick test to make sure a 
> change we needed in Accord wouldn’t block the downgrade and see that we fail 
> right now.
> {code}
> ERROR 20:56:39 Exiting due to error while processing commit log during 
> initialization.
> org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException:
>  Unexpected error deserializing mutation; saved to 
> /var/folders/h1/s_3p1x3s3hl0hltbpck67m0hgn/T/mutation418421767150092dat.
>   This may be caused by replaying a mutation against a table with the same 
> name but incompatible schema.  Exception follows: java.lang.RuntimeException: 
> Unknown column compaction_properties during deserialization
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:464)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:397)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:244)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:147)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:191)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:223)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:204)
> {code}
> This was caused by a schema change in CASSANDRA-18061
> {code}
> /*
>  * Licensed to the Apache Software Foundation (ASF) under one
>  * or more contributor license agreements.  See the NOTICE file
>  * distributed with this work for additional information
>  * regarding copyright ownership.  The ASF licenses this file
>  * to you under the Apache License, Version 2.0 (the
>  * "License"); you may not use this file except in compliance
>  * with the License.  You may obtain a copy of the License at
>  *
>  * http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> package org.apache.cassandra.distributed.upgrade;
> import java.io.IOException;
> import java.io.File;
> import java.util.concurrent.atomic.AtomicBoolean;
> import org.junit.Test;
> import org.apache.cassandra.distributed.api.IUpgradeableInstance;
> public class DowngradeTest extends UpgradeTestBase
> {
> @Test
> public void test() throws Throwable
> {
> AtomicBoolean first = new AtomicBoolean(true);
> new TestCase()
> .nodes(1)
> .withConfig(c -> {
> if (first.compareAndSet(true, false))
> c.set("storage_compatibility_mode", "CASSANDRA_4");
> })
> .downgradeTo(v41)
> .setup(cluster -> {})
> // Uncomment if you want to test what happens after reading the commit log, 
> which fails right now
> //.runBeforeNodeRestart((cluster, nodeId) -> {
> //IUpgradeableInstance inst = cluster.get(nodeId);
> //File f = new File((String) 
> inst.config().get("commitlog_directory"));
> //deleteRecursive(f);
> //})
> .runAfterClusterUpgrade(cluster -> {})
> .run();
> }
> private void deleteRecursive(File f)
> {
> if (f.isDirectory())
> {
> File[] children = f.listFiles();
> if (children != null)
> {
> for (File c : children)
>   

[jira] [Commented] (CASSANDRA-18934) Downgrade to 4.1 fails due to schema changes

2023-10-17 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776410#comment-17776410
 ] 

David Capwell commented on CASSANDRA-18934:
---

Sorry, just fixed the description... the test was passing because I wipe the commit 
log on restart; I wanted to see if anything else failed on boot... I just 
updated the test to comment that out so it fails with the commit log issue.

> Downgrade to 4.1 fails due to schema changes
> 
>
> Key: CASSANDRA-18934
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18934
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: David Capwell
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> We are required to support 5.0 downgrading to 4.1 as a migration step, but we 
> don’t have tests to show this is working… I wrote a quick test to make sure a 
> change we needed in Accord wouldn’t block the downgrade and see that we fail 
> right now.
> {code}
> ERROR 20:56:39 Exiting due to error while processing commit log during 
> initialization.
> org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException:
>  Unexpected error deserializing mutation; saved to 
> /var/folders/h1/s_3p1x3s3hl0hltbpck67m0hgn/T/mutation418421767150092dat.
>   This may be caused by replaying a mutation against a table with the same 
> name but incompatible schema.  Exception follows: java.lang.RuntimeException: 
> Unknown column compaction_properties during deserialization
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:464)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:397)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:244)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:147)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:191)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:223)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:204)
> {code}
> This was caused by a schema change in CASSANDRA-18061
> {code}
> /*
>  * Licensed to the Apache Software Foundation (ASF) under one
>  * or more contributor license agreements.  See the NOTICE file
>  * distributed with this work for additional information
>  * regarding copyright ownership.  The ASF licenses this file
>  * to you under the Apache License, Version 2.0 (the
>  * "License"); you may not use this file except in compliance
>  * with the License.  You may obtain a copy of the License at
>  *
>  * http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> package org.apache.cassandra.distributed.upgrade;
> import java.io.IOException;
> import java.io.File;
> import java.util.concurrent.atomic.AtomicBoolean;
> import org.junit.Test;
> import org.apache.cassandra.distributed.api.IUpgradeableInstance;
> public class DowngradeTest extends UpgradeTestBase
> {
> @Test
> public void test() throws Throwable
> {
> AtomicBoolean first = new AtomicBoolean(true);
> new TestCase()
> .nodes(1)
> .withConfig(c -> {
> if (first.compareAndSet(true, false))
> c.set("storage_compatibility_mode", "CASSANDRA_4");
> })
> .downgradeTo(v41)
> .setup(cluster -> {})
> // Uncomment if you want to test what happens after reading the commit log, 
> which fails right now
> //.runBeforeNodeRestart((cluster, nodeId) -> {
> //IUpgradeableInstance inst = cluster.get(nodeId);
> //File f = new File((String) 
> inst.config().get("commitlog_directory"));
> //deleteRecursive(f);
> //})
> .runAfterClusterUpgrade(cluster -> {})
> .run();
> }
> private void deleteRecursive(File f)
> {
> if (f.isDirectory())
> {
> File[] children = f.listFiles();
> if (children != null)
> {
> for (File c : children)
> deleteRecursive(c);
> }
> }
> f.delete();
> }
> }
> {code}
> {code}
> diff --git 
> a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
>  
> 

[jira] [Updated] (CASSANDRA-18934) Downgrade to 4.1 fails due to schema changes

2023-10-17 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-18934:
--
Description: 
We are required to support 5.0 downgrading to 4.1 as a migration step, but we 
don’t have tests to show this is working… I wrote a quick test to make sure a 
change we needed in Accord wouldn’t block the downgrade and see that we fail 
right now.

{code}
ERROR 20:56:39 Exiting due to error while processing commit log during 
initialization.
org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: 
Unexpected error deserializing mutation; saved to 
/var/folders/h1/s_3p1x3s3hl0hltbpck67m0hgn/T/mutation418421767150092dat.
  This may be caused by replaying a mutation against a table with the same name 
but incompatible schema.  Exception follows: java.lang.RuntimeException: 
Unknown column compaction_properties during deserialization
at 
org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:464)
at 
org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:397)
at 
org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:244)
at 
org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:147)
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:191)
at 
org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:223)
at 
org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:204)
{code}

This was caused by a schema change in CASSANDRA-18061

{code}
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.cassandra.distributed.upgrade;

import java.io.IOException;
import java.io.File;
import java.util.concurrent.atomic.AtomicBoolean;

import org.junit.Test;

import org.apache.cassandra.distributed.api.IUpgradeableInstance;

public class DowngradeTest extends UpgradeTestBase
{
@Test
public void test() throws Throwable
{
AtomicBoolean first = new AtomicBoolean(true);
new TestCase()
.nodes(1)
.withConfig(c -> {
if (first.compareAndSet(true, false))
c.set("storage_compatibility_mode", "CASSANDRA_4");
})
.downgradeTo(v41)
.setup(cluster -> {})
// Uncomment if you want to test what happens after reading the commit log, 
which fails right now
//.runBeforeNodeRestart((cluster, nodeId) -> {
//IUpgradeableInstance inst = cluster.get(nodeId);
//File f = new File((String) 
inst.config().get("commitlog_directory"));
//deleteRecursive(f);
//})
.runAfterClusterUpgrade(cluster -> {})
.run();
}

private void deleteRecursive(File f)
{
if (f.isDirectory())
{
File[] children = f.listFiles();
if (children != null)
{
for (File c : children)
deleteRecursive(c);
}
}
f.delete();
}
}
{code}

{code}
diff --git 
a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
 
b/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
index 5ee8780204..b4111e3b44 100644
--- 
a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
+++ 
b/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
@@ -226,6 +226,12 @@ public class UpgradeTestBase extends DistributedTestBase
 return this;
 }

+public TestCase downgradeTo(Semver to)
+{
+upgrade.add(new TestVersions(versions.getLatest(CURRENT), 
Collections.singletonList(versions.getLatest(to;
+return this;
+}
+
 /**
  * performs all supported upgrade paths that exist in between from and 
to that include the current version.
  * This call is equivalent to calling {@code upgradesTo(from, 
CURRENT).upgradesFrom(CURRENT, to)}.
{code}

  was:
We are required to support 

[jira] [Comment Edited] (CASSANDRA-18934) Downgrade to 4.1 fails due to schema changes

2023-10-17 Thread Maxim Muzafarov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776400#comment-17776400
 ] 

Maxim Muzafarov edited comment on CASSANDRA-18934 at 10/17/23 9:39 PM:
---

[~claude] I think you will find this issue interesting


was (Author: mmuzaf):
[jira] [Commented] (CASSANDRA-18934) Downgrade to 4.1 fails due to schema changes

2023-10-17 Thread Maxim Muzafarov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776400#comment-17776400
 ] 

Maxim Muzafarov commented on CASSANDRA-18934:
-

[I think you will find this issue interesting

> Downgrade to 4.1 fails due to schema changes
> 
>
> Key: CASSANDRA-18934
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18934
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: David Capwell
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>

[jira] [Updated] (CASSANDRA-18934) Downgrade to 4.1 fails due to schema changes

2023-10-17 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-18934:
--
Description: 
We are required to support 5.0 downgrading to 4.1 as a migration step, but we 
don’t have tests to show this is working… I wrote a quick test to make sure a 
change we needed in Accord wouldn’t block the downgrade and see that we fail 
right now.

{code}
ERROR 20:56:39 Exiting due to error while processing commit log during 
initialization.
org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: 
Unexpected error deserializing mutation; saved to 
/var/folders/h1/s_3p1x3s3hl0hltbpck67m0hgn/T/mutation418421767150092dat.
  This may be caused by replaying a mutation against a table with the same name 
but incompatible schema.  Exception follows: java.lang.RuntimeException: 
Unknown column compaction_properties during deserialization
at 
org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:464)
at 
org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:397)
at 
org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:244)
at 
org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:147)
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:191)
at 
org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:223)
at 
org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:204)
{code}

This was caused by a schema change in CASSANDRA-18061

{code}
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.cassandra.distributed.upgrade;

import java.io.IOException;
import java.io.File;
import java.util.concurrent.atomic.AtomicBoolean;

import org.junit.Test;

import org.apache.cassandra.distributed.api.IUpgradeableInstance;

public class DowngradeTest extends UpgradeTestBase
{
@Test
public void test() throws Throwable
{
AtomicBoolean first = new AtomicBoolean(true);
new TestCase()
.nodes(1)
.withConfig(c -> {
if (first.compareAndSet(true, false))
c.set("storage_compatibility_mode", "CASSANDRA_4");
})
.downgradeTo(v41)
.setup(cluster -> {})
.runBeforeNodeRestart((cluster, nodeId) -> {
IUpgradeableInstance inst = cluster.get(nodeId);
File f = new File((String) 
inst.config().get("commitlog_directory"));
deleteRecursive(f);
})
.runAfterClusterUpgrade(cluster -> {})
.run();
}

private void deleteRecursive(File f)
{
if (f.isDirectory())
{
File[] children = f.listFiles();
if (children != null)
{
for (File c : children)
deleteRecursive(c);
}
}
f.delete();
}
}
{code}
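The hand-rolled {{deleteRecursive}} helper in the test above can equivalently be written with {{java.nio.file}}, walking the tree depth-first so children are deleted before their parents. This is only a sketch independent of the test harness, not part of the actual patch:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class DeleteRecursive
{
    // Walks the tree and deletes entries in reverse lexicographic order,
    // which visits children before parents (a parent path is a prefix of its children).
    static void deleteRecursive(Path root) throws IOException
    {
        if (!Files.exists(root))
            return;
        try (Stream<Path> walk = Files.walk(root))
        {
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> {
                    try { Files.delete(p); }
                    catch (IOException e) { throw new UncheckedIOException(e); }
                });
        }
    }

    public static void main(String[] args) throws IOException
    {
        Path dir = Files.createTempDirectory("commitlog");
        Files.createFile(dir.resolve("segment-1.log"));
        deleteRecursive(dir);
        System.out.println(Files.exists(dir)); // false - directory and contents removed
    }
}
```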

{code}
diff --git 
a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
 
b/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
index 5ee8780204..b4111e3b44 100644
--- 
a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
+++ 
b/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
@@ -226,6 +226,12 @@ public class UpgradeTestBase extends DistributedTestBase
 return this;
 }

+public TestCase downgradeTo(Semver to)
+{
> +upgrade.add(new TestVersions(versions.getLatest(CURRENT), Collections.singletonList(versions.getLatest(to))));
+return this;
+}
+
 /**
  * performs all supported upgrade paths that exist in between from and 
to that include the current version.
  * This call is equivalent to calling {@code upgradesTo(from, 
CURRENT).upgradesFrom(CURRENT, to)}.
{code}


[jira] [Updated] (CASSANDRA-18934) Downgrade to 4.1 fails due to schema changes

2023-10-17 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-18934:
--
 Bug Category: Parent values: Correctness (12982), Level 1 values: Unrecoverable Corruption / Loss (13161)
   Complexity: Normal
Discovered By: Unit Test
Fix Version/s: 5.0.x
   5.x
 Severity: Critical
   Status: Open  (was: Triage Needed)

> Downgrade to 4.1 fails due to schema changes
> 
>
> Key: CASSANDRA-18934
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18934
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: David Capwell
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-18934) Downgrade to 4.1 fails due to schema changes

2023-10-17 Thread David Capwell (Jira)
David Capwell created CASSANDRA-18934:
-

 Summary: Downgrade to 4.1 fails due to schema changes
 Key: CASSANDRA-18934
 URL: https://issues.apache.org/jira/browse/CASSANDRA-18934
 Project: Cassandra
  Issue Type: Bug
  Components: Local/Startup and Shutdown
Reporter: David Capwell





--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18800) Create 5.0 Landing Page

2023-10-17 Thread Hugh Lashbrooke (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776397#comment-17776397
 ] 

Hugh Lashbrooke commented on CASSANDRA-18800:
-

Approved landing page content is here: 
[https://docs.google.com/document/d/1M_FQNnnvtiLfaGBH_IZjSRyIr56HaOlnZnU0ViHtk8Q/edit]



This has been submitted to be published.

> Create 5.0 Landing Page
> ---
>
> Key: CASSANDRA-18800
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18800
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Documentation/Website
>Reporter: Hugh Lashbrooke
>Assignee: Hugh Lashbrooke
>Priority: Normal
>
> As discussed on the dev list, the upcoming release of Apache Cassandra 5.0 
> would benefit from a public landing page: 
> [https://lists.apache.org/thread/xwb1bxpdcof1bk3x8wnbo5wrlkklq07k]
> The landing page would educate users about what is coming up in this 
> important release, highlighting why upgrading will be valuable to them, as 
> well as guiding them into more community activities, such as Town Halls and 
> Contributor Meetings, where they can learn more and become further involved.
> The landing page should include:
>  * An overview of the release with a brief summary of the major features
>  * Links to each of the relevant CEP pages on Confluence
>  * CTAs to community platforms and activities - Slack, Meetups, Town Halls, 
> Contributor Meetings, etc.
> At the same time, the CEP pages on Confluence should be updated to ensure 
> they include the most current information about the feature and enhanced with 
> any videos or other resources that highlight the work being done.
> All of the new and updated content should be written to be accessible and 
> understandable to end users.
> I will draft the initial page and add it to this ticket.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18903) Incremental repair never cleans up completed sessions from CoordinatorSessions.sessions

2023-10-17 Thread Abe Ratnofsky (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776370#comment-17776370
 ] 

Abe Ratnofsky commented on CASSANDRA-18903:
---

PRs up:

||Release branch||PR||
|trunk|https://github.com/apache/cassandra/pull/2810|
|5.0|https://github.com/apache/cassandra/pull/2811|
|4.1|https://github.com/apache/cassandra/pull/2812|
|4.0|https://github.com/apache/cassandra/pull/2813|


> Incremental repair never cleans up completed sessions from 
> CoordinatorSessions.sessions
> ---
>
> Key: CASSANDRA-18903
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18903
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair
>Reporter: Abe Ratnofsky
>Assignee: Abe Ratnofsky
>Priority: Normal
>
> Currently, there is nothing cleaning up repaired sessions from 
> org.apache.cassandra.repair.consistent.CoordinatorSessions#sessions. This 
> causes memory to leak for cluster members with long uptimes and lots of 
> incremental repair.
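A common remedy for this kind of leak is to remove a session from the coordinator's map as soon as its future completes, whether it succeeded or failed. The following is a minimal sketch with hypothetical names, not the actual CoordinatorSessions code:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class SessionRegistry
{
    private final Map<UUID, CompletableFuture<String>> sessions = new ConcurrentHashMap<>();

    // Register a session and remove it from the map once it finishes -
    // without the whenComplete hook, entries accumulate for the node's uptime.
    public CompletableFuture<String> register(UUID id, CompletableFuture<String> session)
    {
        sessions.put(id, session);
        session.whenComplete((result, error) -> sessions.remove(id));
        return session;
    }

    public int liveSessions()
    {
        return sessions.size();
    }

    public static void main(String[] args)
    {
        SessionRegistry registry = new SessionRegistry();
        CompletableFuture<String> repair = new CompletableFuture<>();
        registry.register(UUID.randomUUID(), repair);
        System.out.println(registry.liveSessions()); // 1 while the repair is in flight
        repair.complete("FINALIZED");
        System.out.println(registry.liveSessions()); // 0 after completion
    }
}
```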



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18933) Correct comment for nc SSTable format

2023-10-17 Thread Francisco Guerrero (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776300#comment-17776300
 ] 

Francisco Guerrero commented on CASSANDRA-18933:


[~jlewandowski] would you be able to take a look since you added the original 
feature?

> Correct comment for nc SSTable format
> -
>
> Key: CASSANDRA-18933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18933
> Project: Cassandra
>  Issue Type: Task
>  Components: Local/SSTable
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In [CASSANDRA-18134 |https://issues.apache.org/jira/browse/CASSANDRA-18134], 
> the {{nc}} SSTable format was introduced. The patch was merged into {{5.0+}}, 
> however the comment in source incorrectly mentions that the format was added 
> to version {{4.1}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18933) Correct comment for nc SSTable format

2023-10-17 Thread Francisco Guerrero (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776299#comment-17776299
 ] 

Francisco Guerrero commented on CASSANDRA-18933:


trunk: https://github.com/apache/cassandra/pull/2808
5.0: https://github.com/apache/cassandra/pull/2809

> Correct comment for nc SSTable format
> -
>
> Key: CASSANDRA-18933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18933
> Project: Cassandra
>  Issue Type: Task
>  Components: Local/SSTable
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
>  Time Spent: 20m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18933) Correct comment for nc SSTable format

2023-10-17 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776295#comment-17776295
 ] 

Brandon Williams commented on CASSANDRA-18933:
--

+1

> Correct comment for nc SSTable format
> -
>
> Key: CASSANDRA-18933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18933
> Project: Cassandra
>  Issue Type: Task
>  Components: Local/SSTable
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
>  Time Spent: 10m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18905) Index.Group is incorrectly unregistered from the SecondaryIndexManager

2023-10-17 Thread Caleb Rackliffe (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776290#comment-17776290
 ] 

Caleb Rackliffe commented on CASSANDRA-18905:
-

+1 on both PRs

> Index.Group is incorrectly unregistered from the SecondaryIndexManager
> --
>
> Key: CASSANDRA-18905
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18905
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/2i Index
>Reporter: Mike Adamson
>Assignee: Mike Adamson
>Priority: Urgent
> Fix For: 5.0
>
>
> An Index.Group is removed from the SecondaryIndexManager during 
> unregisterIndex if it contains no indexes after the index is unregistered.
> The code for removing the group uses the wrong key to remove the group from 
> the indexGroups map. It is using the group object rather than the group name 
> that is used as the key in the map.
> This means that the group is not added again if a new index is registered 
> using that group. The knock-on effect is that StorageAttachedIndexGroup 
> unregisters itself from the Tracker when it has no indexes after an index is 
> removed. The same group, now with no tracker, is then used for new indexes 
> and receives no notifications about sstable or memtable updates. The ultimate 
> side effect is that memtables are not released, resulting in memory leaks, 
> and indexes are not updated with new sstables and their associated index 
> files.
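The wrong-key bug is easy to reproduce with a plain map: calling remove with the value instead of the key is a silent no-op, so the stale entry survives. A minimal sketch with hypothetical names, not the actual SecondaryIndexManager code:

```java
import java.util.HashMap;
import java.util.Map;

public class WrongKeyRemoval
{
    // Simulates the buggy unregister path: removing by the group OBJECT (the map
    // value) from a map keyed by group NAME silently removes nothing.
    static int removeByObject(Map<String, Object> groups, Object group)
    {
        groups.remove(group); // wrong key type: Map.remove(Object) finds no match
        return groups.size();
    }

    // The fix: remove using the same key that was used on insertion.
    static int removeByName(Map<String, Object> groups, String name)
    {
        groups.remove(name);
        return groups.size();
    }

    public static void main(String[] args)
    {
        Map<String, Object> indexGroups = new HashMap<>();
        Object group = new Object();
        indexGroups.put("sai_group", group);

        System.out.println(removeByObject(indexGroups, group));     // 1 - stale entry survives
        System.out.println(removeByName(indexGroups, "sai_group")); // 0 - actually removed
    }
}
```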



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18905) Index.Group is incorrectly unregistered from the SecondaryIndexManager

2023-10-17 Thread Caleb Rackliffe (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Rackliffe updated CASSANDRA-18905:

Fix Version/s: 5.1

> Index.Group is incorrectly unregistered from the SecondaryIndexManager
> --
>
> Key: CASSANDRA-18905
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18905
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/2i Index
>Reporter: Mike Adamson
>Assignee: Mike Adamson
>Priority: Urgent
> Fix For: 5.0, 5.1
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18134) Improve handling of min/max clustering in sstable

2023-10-17 Thread Francisco Guerrero (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776291#comment-17776291
 ] 

Francisco Guerrero commented on CASSANDRA-18134:


I've created this jira: https://issues.apache.org/jira/browse/CASSANDRA-18933

> Improve handling of min/max clustering in sstable
> -
>
> Key: CASSANDRA-18134
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18134
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 5.0, 5.0-alpha1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This patch improves the following things:
> # SSTable metadata will store a covered slice in addition min/max 
> clusterings. The difference is that for slices there is available the type of 
> a bound rather than just a clustering. In particular it will provide the 
> information whether the lower and upper bound of an sstable is opened or 
> closed. The legacy min/max clustering will be stored until a new major format 
> {{o}} to ensure backward compatibility
> # SSTable metadata will store a flag whether the SSTable contains any 
> partition level deletions or not
> # SSTable metadata will store the first and the last keys of the sstable. 
> This is mostly for consistency - key range is logically a part of stats 
> metadata. So far it is stored at the end of the index summary. After this 
> change, index summary will be no longer needed to read key range of an 
> sstable (although we will keep storing key range as before for compatibility 
> reasons)
> # The above two changes required introducing a new minor format for SSTables 
> - {{nc}}
> # Single partition read command makes use of the above changes. In particular 
> an sstable can be skipped when it does not intersect with the column filter, 
> does not have partition level deletions and does not have statics; In case 
> there are partition level deletions, but the other conditions are satisfied, 
> only the partition header needs to be accessed (tests attached)
> # Skipping sstables assuming those three conditions are satisfied has been 
> implemented also for partition range queries (tests attached). Also added a 
> minor separate statistic to record the number of accessed sstables in 
> partition reads, because now not all of them need to be accessed. That 
> statistic is also needed in tests to confirm skipping.
> # The artificial lower bound marker is now an object on its own and is not 
> implemented as a special case of range tombstone bound. Instead it sorts 
> right before the lowest available bound in the data
> # Extended the lower bound optimization usage due to 1 and 2
> # Do not initialize iterator just to get a cached partition and associated 
> columns index. The purpose of using lower bound optimization was to avoid 
> opening an iterator of an sstable if possible.
> See also CASSANDRA-14861
> The changes in this patch include work of [~blambov], [~slebresne], 
> [~jakubzytka] and [~jlewandowski]
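Point 1 of the list above turns on the bounds of a covered slice being open or closed, not just on the clustering values. A minimal, self-contained sketch of why that matters (hypothetical types, not Cassandra's actual slice API):

```java
// Standalone illustration: two slices with inclusive/exclusive bounds,
// and an intersection test that an sstable-skipping decision could use.
public class SliceSketch
{
    record Bound(int value, boolean inclusive) {}
    record Slice(Bound lower, Bound upper) {}

    /** Two slices intersect unless one ends strictly before the other begins. */
    static boolean intersects(Slice a, Slice b)
    {
        return !endsBefore(a.upper(), b.lower()) && !endsBefore(b.upper(), a.lower());
    }

    // An upper bound ends before a lower bound when its value is smaller, or when
    // the values are equal but at least one of the two bounds is exclusive (open).
    static boolean endsBefore(Bound upper, Bound lower)
    {
        if (upper.value() < lower.value())
            return true;
        return upper.value() == lower.value() && !(upper.inclusive() && lower.inclusive());
    }

    public static void main(String[] args)
    {
        Slice sstable = new Slice(new Bound(0, true), new Bound(10, false)); // [0, 10)
        Slice query   = new Slice(new Bound(10, true), new Bound(20, true)); // [10, 20]
        System.out.println(intersects(sstable, query)); // false: the open upper bound excludes 10
    }
}
```

With only min/max clusterings and no open/closed flag, the two ranges above would appear to touch at 10 and the sstable could not be skipped; the bound type is what makes the skip safe.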






[jira] [Updated] (CASSANDRA-18915) CorruptSSTableException due to invalid Columns subset bytes; too many bits set

2023-10-17 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated CASSANDRA-18915:

Resolution: Duplicate
Status: Resolved  (was: Triage Needed)

> CorruptSSTableException due to invalid Columns subset bytes; too many bits set
> --
>
> Key: CASSANDRA-18915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18915
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Maxim Muzafarov
>Assignee: Maxim Muzafarov
>Priority: Normal
>
> The following exception occurred:
> {code}
> ERROR [ValidationExecutor:1096] CassandraDaemon.java:581 - Exception in 
> thread Thread[ValidationExecutor:1096,1,main]
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> /opt/cassandra/data/gateways/reply_messages_ids-bc0242asdfasdfs45f3d493/nb-11-big-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:138)
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:100)
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:376)
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:188)
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:157)
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:523)
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:391)
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>   at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133)
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:210)
>   at org.apache.cassandra.repair.Validator.rowHash(Validator.java:204)
>   at org.apache.cassandra.repair.Validator.add(Validator.java:182)
>   at 
> org.apache.cassandra.repair.ValidationManager.doValidation(ValidationManager.java:123)
>   at 
> org.apache.cassandra.repair.ValidationManager$1.call(ValidationManager.java:162)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.io.IOException: Invalid Columns subset bytes; too many bits 
> set:11010
>   at 
> org.apache.cassandra.db.Columns$Serializer.deserializeSubset(Columns.java:548)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:597)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeOne(UnfilteredSerializer.java:478)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:434)
>   at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$CurrentFormatIterator.computeNext(SSTableSimpleIterator.java:84)
>   at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$CurrentFormatIterator.computeNext(SSTableSimpleIterator.java:62)
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>   at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:126)
>   ... 21 common frames omitted
> {code}
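The root-cause line in the trace ("Invalid Columns subset bytes; too many bits set:11010") refers to the bitmap a row uses to encode which of the table's columns it carries. A hedged sketch of the sanity check involved (simplified; the real deserializer in `Columns.Serializer.deserializeSubset` is more involved):

```java
// Illustration of the "too many bits set" check: a decoded column-subset
// bitmap can never have more bits set than the table has columns.
public class ColumnsSubsetSketch
{
    static void validateSubset(long encodedBitmap, int columnCount) throws java.io.IOException
    {
        int bitsSet = Long.bitCount(encodedBitmap);
        if (bitsSet > columnCount)
            throw new java.io.IOException("Invalid Columns subset bytes; too many bits set:"
                                          + Long.toBinaryString(encodedBitmap));
    }

    public static void main(String[] args) throws java.io.IOException
    {
        validateSubset(0b101L, 3); // fine: 2 bits set, 3 columns in the schema
        try
        {
            validateSubset(0b11010L, 2); // 3 bits set but only 2 columns: corrupt
        }
        catch (java.io.IOException e)
        {
            System.out.println(e.getMessage());
        }
    }
}
```

When a check like this fires during validation it surfaces as the `CorruptSSTableException` above: the on-disk bytes no longer agree with the columns the reader expects.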






[jira] [Commented] (CASSANDRA-18923) BLOG - Apache Cassandra 5.0 Features: Dynamic Data Masking

2023-10-17 Thread Diogenese Topper (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776283#comment-17776283
 ] 

Diogenese Topper commented on CASSANDRA-18923:
--

Fixed. 
Preview here: 
https://raw.githack.com/nonstopdtop/cassandra-website/CASSANDRA-18923_generated/content/_/blog/Apache-Cassandra-5.0-Features-Dynamic-Data-Masking.html

> BLOG - Apache Cassandra 5.0 Features: Dynamic Data Masking
> --
>
> Key: CASSANDRA-18923
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18923
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation/Blog
>Reporter: Diogenese Topper
>Priority: Normal
>
> This ticket is to capture the work associated with publishing the blog 
> "Apache Cassandra 5.0 Features: Dynamic Data Masking"
> This blog can be published as soon as possible. If it cannot be published 
> within a week of the noted publish date *(October 11)*, please contact me, 
> suggest changes, or correct the date in the pull request to the appropriate 
> time for the blog to go live (in both blog.adoc and the blog post's file).






[jira] [Assigned] (CASSANDRA-18932) Harry-found CorruptSSTableException / RT Closer issue when reading entire partition

2023-10-17 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov reassigned CASSANDRA-18932:
---

Assignee: Maxim Muzafarov

> Harry-found CorruptSSTableException / RT Closer issue when reading entire 
> partition
> ---
>
> Key: CASSANDRA-18932
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18932
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Maxim Muzafarov
>Priority: Normal
> Attachments: node1_.zip, operation.log.zip
>
>
> While testing some new machinery for Harry, I have encountered a new RT 
> closer / SSTable Corruption issue. I have grounds to believe this was 
> introduced during the last year.
> The issue seems to happen because of an intricate interleaving of flushes with 
> writes and deletes.
> {code:java}
> ERROR [ReadStage-2] 2023-10-16 18:47:06,696 JVMStabilityInspector.java:76 - 
> Exception in thread Thread[ReadStage-2,5,SharedPool]
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> RandomAccessReader:BufferManagingRebufferer.Aligned:CompressedChunkReader.Mmap(/Users/ifesdjeen/foss/java/apache-cassandra-4.0/data/data1/harry/table_1-07c35a606c0a11eeae7a4f6ca489eb0c/nc-5-big-Data.db
>  - LZ4Compressor, chunk length 16384, data length 232569)
>         at 
> org.apache.cassandra.io.sstable.AbstractSSTableIterator$AbstractReader.hasNext(AbstractSSTableIterator.java:381)
>         at 
> org.apache.cassandra.io.sstable.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:242)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133)
>         at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:376)
>         at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:188)
>         at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:157)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:534)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:402)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133)
>         at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:151)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:101)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:86)
>         at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:343)
>         at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:201)
>         at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:186)
>         at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:48)
>         at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:346)
>         at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:2186)
>         at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2581)
>         at 
> org.apache.cassandra.concurrent.ExecutionFailure$2.run(ExecutionFailure.java:163)
>         at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:143)
>         at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at 

[jira] [Commented] (CASSANDRA-18932) Harry-found CorruptSSTableException / RT Closer issue when reading entire partition

2023-10-17 Thread Maxim Muzafarov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776284#comment-17776284
 ] 

Maxim Muzafarov commented on CASSANDRA-18932:
-

I've assigned this issue to myself and will look at it soon. I'll close 
CASSANDRA-18915 as a duplicate since the discussion here is more valuable. 

> Harry-found CorruptSSTableException / RT Closer issue when reading entire 
> partition
> ---
>
> Key: CASSANDRA-18932
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18932
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Maxim Muzafarov
>Priority: Normal
> Attachments: node1_.zip, operation.log.zip
>
>
> While testing some new machinery for Harry, I have encountered a new RT 
> closer / SSTable Corruption issue. I have grounds to believe this was 
> introduced during the last year.
> The issue seems to happen because of an intricate interleaving of flushes with 
> writes and deletes.
> {code:java}
> ERROR [ReadStage-2] 2023-10-16 18:47:06,696 JVMStabilityInspector.java:76 - 
> Exception in thread Thread[ReadStage-2,5,SharedPool]
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> RandomAccessReader:BufferManagingRebufferer.Aligned:CompressedChunkReader.Mmap(/Users/ifesdjeen/foss/java/apache-cassandra-4.0/data/data1/harry/table_1-07c35a606c0a11eeae7a4f6ca489eb0c/nc-5-big-Data.db
>  - LZ4Compressor, chunk length 16384, data length 232569)
>         at 
> org.apache.cassandra.io.sstable.AbstractSSTableIterator$AbstractReader.hasNext(AbstractSSTableIterator.java:381)
>         at 
> org.apache.cassandra.io.sstable.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:242)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133)
>         at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:376)
>         at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:188)
>         at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:157)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:534)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:402)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133)
>         at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:151)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:101)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:86)
>         at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:343)
>         at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:201)
>         at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:186)
>         at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:48)
>         at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:346)
>         at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:2186)
>         at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2581)
>         at 
> org.apache.cassandra.concurrent.ExecutionFailure$2.run(ExecutionFailure.java:163)
>         at 

[jira] [Updated] (CASSANDRA-18933) Correct comment for nc SSTable format

2023-10-17 Thread Francisco Guerrero (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francisco Guerrero updated CASSANDRA-18933:
---
Change Category: Code Clarity
 Complexity: Low Hanging Fruit
Component/s: Local/SSTable
 Status: Open  (was: Triage Needed)

> Correct comment for nc SSTable format
> -
>
> Key: CASSANDRA-18933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18933
> Project: Cassandra
>  Issue Type: Task
>  Components: Local/SSTable
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
>
> In [CASSANDRA-18134 |https://issues.apache.org/jira/browse/CASSANDRA-18134], 
> the {{nc}} SSTable format was introduced. The patch was merged into {{5.0+}}; 
> however, the comment in the source incorrectly states that the format was added 
> to version {{4.1}}.






[jira] [Created] (CASSANDRA-18933) Correct comment for nc SSTable format

2023-10-17 Thread Francisco Guerrero (Jira)
Francisco Guerrero created CASSANDRA-18933:
--

 Summary: Correct comment for nc SSTable format
 Key: CASSANDRA-18933
 URL: https://issues.apache.org/jira/browse/CASSANDRA-18933
 Project: Cassandra
  Issue Type: Task
Reporter: Francisco Guerrero
Assignee: Francisco Guerrero


In [CASSANDRA-18134 |https://issues.apache.org/jira/browse/CASSANDRA-18134], 
the {{nc}} SSTable format was introduced. The patch was merged into {{5.0+}}; 
however, the comment in the source incorrectly states that the format was added to 
version {{4.1}}.






[jira] [Updated] (CASSANDRA-18933) Correct comment for nc SSTable format

2023-10-17 Thread Francisco Guerrero (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francisco Guerrero updated CASSANDRA-18933:
---
Test and Documentation Plan: Only the comment in code is modified
 Status: Patch Available  (was: In Progress)

> Correct comment for nc SSTable format
> -
>
> Key: CASSANDRA-18933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18933
> Project: Cassandra
>  Issue Type: Task
>  Components: Local/SSTable
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
>
> In [CASSANDRA-18134 |https://issues.apache.org/jira/browse/CASSANDRA-18134], 
> the {{nc}} SSTable format was introduced. The patch was merged into {{5.0+}}; 
> however, the comment in the source incorrectly states that the format was added 
> to version {{4.1}}.






[cassandra] branch cep-21-tcm-review updated: Add JavaDoc - jacek - wip

2023-10-17 Thread jlewandowski
This is an automated email from the ASF dual-hosted git repository.

jlewandowski pushed a commit to branch cep-21-tcm-review
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cep-21-tcm-review by this push:
 new 341dd2c95c Add JavaDoc - jacek - wip
341dd2c95c is described below

commit 341dd2c95c59aad831ed668ce4377a93958a3ead
Author: Jacek Lewandowski 
AuthorDate: Tue Oct 17 18:41:45 2023 +0200

Add JavaDoc - jacek - wip
---
 src/java/org/apache/cassandra/db/SystemKeyspace.java  |  2 ++
 .../org/apache/cassandra/tcm/ClusterMetadata.java | 19 +++
 .../apache/cassandra/tcm/ClusterMetadataService.java  | 11 +++
 .../org/apache/cassandra/tcm/InProgressSequence.java  |  4 
 .../org/apache/cassandra/tcm/RemoteProcessor.java |  3 +++
 src/java/org/apache/cassandra/tcm/log/LocalLog.java   |  1 +
 .../cassandra/tcm/sequences/InProgressSequences.java  |  3 +++
 .../cassandra/tcm/sequences/ProgressBarrier.java  |  4 ++--
 8 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/src/java/org/apache/cassandra/db/SystemKeyspace.java 
b/src/java/org/apache/cassandra/db/SystemKeyspace.java
index 51074de248..402c17219f 100644
--- a/src/java/org/apache/cassandra/db/SystemKeyspace.java
+++ b/src/java/org/apache/cassandra/db/SystemKeyspace.java
@@ -164,6 +164,8 @@ public final class SystemKeyspace
 public static final String PREPARED_STATEMENTS = "prepared_statements";
 public static final String REPAIRS = "repairs";
 public static final String TOP_PARTITIONS = "top_partitions";
+
+// --- TCM tables ---
 public static final String METADATA_LOG = "local_metadata_log";
 public static final String SNAPSHOT_TABLE_NAME = "metadata_snapshots";
 public static final String SEALED_PERIODS_TABLE_NAME = 
"metadata_sealed_periods";
diff --git a/src/java/org/apache/cassandra/tcm/ClusterMetadata.java 
b/src/java/org/apache/cassandra/tcm/ClusterMetadata.java
index f9b08fccd7..022910d0fb 100644
--- a/src/java/org/apache/cassandra/tcm/ClusterMetadata.java
+++ b/src/java/org/apache/cassandra/tcm/ClusterMetadata.java
@@ -37,6 +37,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.CassandraRelevantProperties;
+import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.dht.Range;
@@ -54,6 +55,8 @@ import org.apache.cassandra.schema.Keyspaces;
 import org.apache.cassandra.schema.ReplicationParams;
 import org.apache.cassandra.tcm.extensions.ExtensionKey;
 import org.apache.cassandra.tcm.extensions.ExtensionValue;
+import org.apache.cassandra.tcm.listeners.MetadataSnapshotListener;
+import org.apache.cassandra.tcm.log.LocalLog;
 import org.apache.cassandra.tcm.membership.Directory;
 import org.apache.cassandra.tcm.membership.Location;
 import org.apache.cassandra.tcm.membership.NodeAddresses;
@@ -79,6 +82,18 @@ import org.apache.cassandra.utils.vint.VIntCoding;
 import static 
org.apache.cassandra.config.CassandraRelevantProperties.LINE_SEPARATOR;
 import static org.apache.cassandra.db.TypeSizes.sizeof;
 
+/**
+ * Represents all transactional metadata of the cluster. It is versioned, 
immutable and serializable.
+ * CMS guarantees that all the nodes in the cluster see the same cluster 
metadata for the given epoch.
+ * When the metadata gets updated by a node, the new version must be 
associated with the new epoch.
+ *
+ * Epochs are grouped into periods. The number of epochs that can fit into a 
period is defined by
+ * {@link DatabaseDescriptor#getMetadataSnapshotFrequency()}. When a period is 
completed, its number is incremented by
+ * {@link LocalLog#snapshotListener()}, and then a snapshot is created by the 
{@link MetadataSnapshotListener}.
+ * Both are triggered by the {@link LocalLog#processPendingInternal()} method, 
which processes the log entries.
+ *
+ * @see MetadataSnapshots for more information about cluster metadata snapshots
+ */
 public class ClusterMetadata
 {
 public static final Serializer serializer = new Serializer();
@@ -356,6 +371,10 @@ public class ClusterMetadata
 return VersionedEndpoints.forToken(writeEndpoints.lastModified(), 
endpointsForToken.build());
 }
 
+/**
+ * Builds a new cluster metadata based on top of the provided one, 
registering the keys of all the overridden
+ * items.
+ */
 public static class Transformer
 {
 private final ClusterMetadata base;
diff --git a/src/java/org/apache/cassandra/tcm/ClusterMetadataService.java 
b/src/java/org/apache/cassandra/tcm/ClusterMetadataService.java
index 6871289e27..6ca65f2931 100644
--- a/src/java/org/apache/cassandra/tcm/ClusterMetadataService.java
+++ b/src/java/org/apache/cassandra/tcm/ClusterMetadataService.java
@@ -81,6 +81,17 @@ import static 

[jira] [Commented] (CASSANDRA-13911) IllegalStateException thrown by UPI.Serializer.hasNext() for some SELECT queries

2023-10-17 Thread Szymon Miezal (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776257#comment-17776257
 ] 

Szymon Miezal commented on CASSANDRA-13911:
---

Does anyone remember the motivation for this particular condition 
change 
[https://github.com/apache/cassandra/commit/1efdf330e291a41cd8051e0c1195f75b5d352370#diff-212cd446e1b8c2aeb27818def0ba8c67370a1e00c5f657c85e42f0d43adfe05bR559?]

I have found that 
[https://github.com/apache/cassandra-dtest/commit/51ad68ec45c7a40de1c51b31651632f2e87ceaa4]
 passes without it as well, which is suspicious.

The motivation for the question is that I have found that, in the case of DISTINCT 
queries, we make an additional SRP call for every call of 
https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/service/DataResolver.java#L729
 due to exactly that condition.

> IllegalStateException thrown by UPI.Serializer.hasNext() for some SELECT 
> queries
> 
>
> Key: CASSANDRA-13911
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13911
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Coordination
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 3.0.15, 3.11.1
>
>
> Certain combinations of rows, in presence of per partition limit (set 
> explicitly in 3.6+ or implicitly to 1 via DISTINCT) cause 
> {{UnfilteredPartitionIterators.Serializer.hasNext()}} to throw 
> {{IllegalStateException}} .
> Relevant code snippet:
> {code}
> // We can't answer this until the previously returned iterator has been fully 
> consumed,
> // so complain if that's not the case.
> if (next != null && next.hasNext())
> throw new IllegalStateException("Cannot call hasNext() until the previous 
> iterator has been fully consumed");
> {code}
> Since {{UnfilteredPartitionIterators.Serializer}} and 
> {{UnfilteredRowIteratorSerializer.serializer}} deserialize partitions/rows 
> lazily, it is required for correct operation of the partition iterator to 
> have the previous partition fully consumed, so that deserializing the next 
> one can start from the correct position in the byte buffer. However, that 
> condition won’t always be satisfied, as there are legitimate combinations of 
> rows that do not consume every row in every partition.
> For example, look at [this 
> dtest|https://github.com/iamaleksey/cassandra-dtest/commits/13911].
> In case we end up with a following pattern of rows:
> {code}
> node1, partition 0 | 0
> node2, partition 0 |   x x
> {code}
> , where {{x}} and {{x}} are row tombstones for rows 1 and 2, it’s sufficient 
> for {{MergeIterator}} to only look at row 0 in partition from node1 and at 
> row tombstone 1 from node2 to satisfy the per partition limit of 1. The 
> stopping merge result counter will stop iteration right there, leaving row 
> tombstone 2 from node2 unvisited and not deserialized. Switching to the next 
> partition will in turn trigger the {{IllegalStateException}} because we 
> aren’t done yet.
> The stopping counter is behaving correctly, so is the {{MergeIterator}}. I’ll 
> note that simply removing that condition is not enough to fix the problem 
> properly - it’d just cause us to deserialize garbage, trying to deserialize a 
> new partition from a position in the bytebuffer that precedes remaining rows 
> in the previous partition.
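The contract described above — each partition must be fully consumed before the serializer can position itself at the next one — can be reduced to a small standalone model (hypothetical names; Cassandra's real iterators are byte-positioned over a buffer):

```java
import java.util.Iterator;
import java.util.List;

// Toy model of a lazily-deserializing partition iterator: asking about the next
// partition is only legal once the previous partition's rows are drained,
// because in the real serializer the next partition's bytes begin where the
// previous one's end.
public class LazyPartitionsSketch
{
    private final Iterator<List<Integer>> partitions;
    private Iterator<Integer> current;

    LazyPartitionsSketch(Iterator<List<Integer>> partitions)
    {
        this.partitions = partitions;
    }

    boolean hasNext()
    {
        if (current != null && current.hasNext())
            throw new IllegalStateException("Cannot call hasNext() until the previous iterator has been fully consumed");
        return partitions.hasNext();
    }

    Iterator<Integer> next()
    {
        current = partitions.next().iterator();
        return current;
    }

    public static void main(String[] args)
    {
        LazyPartitionsSketch it = new LazyPartitionsSketch(List.of(List.of(1, 2), List.of(3)).iterator());
        Iterator<Integer> p = it.next();
        p.next(); // consume only row 1; the stopping counter does something similar
        try
        {
            it.hasNext(); // rows remain unconsumed, so this must fail
        }
        catch (IllegalStateException e)
        {
            System.out.println("threw: previous partition not fully consumed");
        }
        while (p.hasNext()) p.next(); // drain the rest, as a correct fix must guarantee
        System.out.println(it.hasNext()); // true: the second partition is now reachable
    }
}
```

In the real byte-backed iterator, dropping the check without draining would start deserializing the next partition from a position that still precedes the previous partition's remaining rows — the "deserialize garbage" failure described above.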






[cassandra] 01/02: Add javadoc WIP

2023-10-17 Thread blerer
This is an automated email from the ASF dual-hosted git repository.

blerer pushed a commit to branch cep-21-tcm-review
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 73c7ca4a5f90cec5297c3983752506646645146a
Author: Benjamin Lerer 
AuthorDate: Thu Oct 5 16:41:48 2023 +0200

Add javadoc WIP
---
 .../config/CassandraRelevantProperties.java|  3 +-
 src/java/org/apache/cassandra/config/Config.java   |  7 +-
 .../cassandra/config/DatabaseDescriptor.java   |  6 ++
 .../cql3/statements/DescribeStatement.java |  6 ++
 .../org/apache/cassandra/db/SystemKeyspace.java| 24 ++
 .../apache/cassandra/schema/DistributedSchema.java |  4 +
 src/java/org/apache/cassandra/schema/Schema.java   |  7 ++
 .../cassandra/schema/SchemaTransformation.java |  2 +-
 .../org/apache/cassandra/tcm/ClusterMetadata.java  | 25 ++
 .../cassandra/tcm/ClusterMetadataService.java  | 22 --
 src/java/org/apache/cassandra/tcm/Discovery.java   | 22 +-
 src/java/org/apache/cassandra/tcm/Epoch.java   | 92 +-
 .../apache/cassandra/tcm/MetadataSnapshots.java| 56 +
 .../org/apache/cassandra/tcm/MetadataValue.java| 14 
 src/java/org/apache/cassandra/tcm/Period.java  |  4 +
 .../cassandra/tcm/RecentlySealedPeriods.java   |  8 +-
 src/java/org/apache/cassandra/tcm/Retry.java   | 79 +++
 src/java/org/apache/cassandra/tcm/Sealed.java  | 27 +++
 src/java/org/apache/cassandra/tcm/Startup.java | 19 +
 .../org/apache/cassandra/tcm/Transformation.java   | 34 
 .../cassandra/tcm/listeners/ChangeListener.java|  4 +-
 .../tcm/listeners/MetadataSnapshotListener.java|  7 ++
 src/java/org/apache/cassandra/tcm/log/Entry.java   | 17 
 .../org/apache/cassandra/tcm/log/LocalLog.java | 31 +++-
 .../org/apache/cassandra/tcm/log/LogState.java | 11 +++
 .../org/apache/cassandra/tcm/log/LogStorage.java   |  8 ++
 .../org/apache/cassandra/tcm/log/Replication.java  | 22 +-
 .../cassandra/tcm/log/SystemKeyspaceStorage.java   | 12 ++-
 .../apache/cassandra/tcm/migration/Election.java   | 11 +++
 .../tcm/ownership/UniformRangePlacement.java   |  6 +-
 .../cassandra/tcm/transformations/SealPeriod.java  |  2 +-
 31 files changed, 531 insertions(+), 61 deletions(-)

diff --git 
a/src/java/org/apache/cassandra/config/CassandraRelevantProperties.java 
b/src/java/org/apache/cassandra/config/CassandraRelevantProperties.java
index 32c5d4fb59..554e407bbf 100644
--- a/src/java/org/apache/cassandra/config/CassandraRelevantProperties.java
+++ b/src/java/org/apache/cassandra/config/CassandraRelevantProperties.java
@@ -511,8 +511,9 @@ public enum CassandraRelevantProperties
  */
 
TCM_PROGRESS_BARRIER_BACKOFF_MILLIS("cassandra.progress_barrier_backoff_ms", 
"1000"),
 
TCM_PROGRESS_BARRIER_TIMEOUT_MILLIS("cassandra.progress_barrier_timeout_ms", 
"360"),
+
 /**
- * size of in-memory index of max epoch -> sealed period
+ * Maximum size of the {@code RecentlySealedPeriods} in-memory index.
  */
 
TCM_RECENTLY_SEALED_PERIOD_INDEX_SIZE("cassandra.recently_sealed_period_index_size", "10"),
 
diff --git a/src/java/org/apache/cassandra/config/Config.java b/src/java/org/apache/cassandra/config/Config.java
index 01f9be8c15..927e830ce1 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -174,8 +174,13 @@ public class Config
 public volatile DurationSpec.LongMillisecondsBound cms_await_timeout = new DurationSpec.LongMillisecondsBound("12ms");
 public volatile int cms_default_max_retries = 10;
 public volatile DurationSpec.IntMillisecondsBound cms_default_retry_backoff = new DurationSpec.IntMillisecondsBound("50ms");
+
 /**
- * How often we should snapshot the cluster metadata.
+ * Specify how often a snapshot of the cluster metadata must be taken.
+ * The frequency is expressed in epochs. A frequency of 100, for example, means that a snapshot will be taken every time
+ * the epoch is a multiple of 100.
+ * Taking a snapshot will also seal a period (i.e. a cluster metadata partition). Therefore, the snapshot frequency also determines the size of the
+ * {@code system.local_metadata_log} and {@code cluster_metadata.distributed_metadata_log} table partitions.
  */
 public volatile int metadata_snapshot_frequency = 100;
 
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index f22f845b38..365dd255d6 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -4985,6 +4985,12 @@ public class DatabaseDescriptor
 return conf.cms_await_timeout;
 }
 
+/**
+ * Returns how often a snapshot of the cluster metadata must be taken.
+ * The frequency is expressed in epochs. 

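The javadoc above describes metadata_snapshot_frequency as triggering a snapshot whenever the epoch is a multiple of the configured value. A minimal standalone sketch of that check (class, field, and method names here are illustrative, not the actual trunk API):

```java
public class SnapshotFrequencyCheck
{
    // Illustrative stand-in for Config.metadata_snapshot_frequency (default 100)
    private final int metadataSnapshotFrequency;

    public SnapshotFrequencyCheck(int metadataSnapshotFrequency)
    {
        this.metadataSnapshotFrequency = metadataSnapshotFrequency;
    }

    // A snapshot (and a sealed period) is due whenever the epoch is a
    // positive multiple of the configured frequency.
    public boolean shouldSnapshot(long epoch)
    {
        return epoch > 0 && epoch % metadataSnapshotFrequency == 0;
    }

    public static void main(String[] args)
    {
        SnapshotFrequencyCheck check = new SnapshotFrequencyCheck(100);
        System.out.println(check.shouldSnapshot(100)); // true
        System.out.println(check.shouldSnapshot(150)); // false
    }
}
```

With a frequency of 100, epochs 100, 200, 300, ... seal a period, which is what bounds the `local_metadata_log` partition sizes mentioned above.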
[cassandra] branch cep-21-tcm-review created (now 509d33155f)

2023-10-17 Thread blerer
This is an automated email from the ASF dual-hosted git repository.

blerer pushed a change to branch cep-21-tcm-review
in repository https://gitbox.apache.org/repos/asf/cassandra.git


  at 509d33155f Simplify the RecentlySealedPeriod logic

This branch includes the following new commits:

 new 73c7ca4a5f Add javadoc WIP
 new 509d33155f Simplify the RecentlySealedPeriod logic

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] 02/02: Simplify the RecentlySealedPeriod logic

2023-10-17 Thread blerer
This is an automated email from the ASF dual-hosted git repository.

blerer pushed a commit to branch cep-21-tcm-review
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 509d33155f9e222a086b3d61c969208dad6fe722
Author: Benjamin Lerer 
AuthorDate: Tue Oct 17 13:10:15 2023 +0200

Simplify the RecentlySealedPeriod logic
---
 .../cassandra/tcm/RecentlySealedPeriods.java   | 92 +-
 src/java/org/apache/cassandra/tcm/Sealed.java  |  2 +-
 .../cassandra/tcm/RecentlySealedPeriodsTest.java   | 18 +++--
 3 files changed, 48 insertions(+), 64 deletions(-)

diff --git a/src/java/org/apache/cassandra/tcm/RecentlySealedPeriods.java 
b/src/java/org/apache/cassandra/tcm/RecentlySealedPeriods.java
index cc16ae656e..cca1d7a213 100644
--- a/src/java/org/apache/cassandra/tcm/RecentlySealedPeriods.java
+++ b/src/java/org/apache/cassandra/tcm/RecentlySealedPeriods.java
@@ -18,11 +18,12 @@
 
 package org.apache.cassandra.tcm;
 
-import java.util.Arrays;
-import java.util.Collections;
 import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.ImmutableSortedMap;
 
 import org.apache.cassandra.config.CassandraRelevantProperties;
 
@@ -34,50 +35,40 @@ import org.apache.cassandra.config.CassandraRelevantProperties;
  * the target is outside the range of this index, we eventually fall back to a read
  * from the system.metadata_sealed_periods table.
  */
-public class RecentlySealedPeriods
+public final class RecentlySealedPeriods
 {
-public static final RecentlySealedPeriods EMPTY = new RecentlySealedPeriods(new Sealed[0]);
+public static final RecentlySealedPeriods EMPTY = new RecentlySealedPeriods(ImmutableSortedMap.of());
 
 /**
  * The maximum number of sealed periods stored in memory.
  */
-private int maxSize = CassandraRelevantProperties.TCM_RECENTLY_SEALED_PERIOD_INDEX_SIZE.getInt();
-private Sealed[] recent;
+private final int maxSize = CassandraRelevantProperties.TCM_RECENTLY_SEALED_PERIOD_INDEX_SIZE.getInt();
+private final NavigableMap<Epoch, Sealed> recent;
 
-private RecentlySealedPeriods(Sealed first)
-{
-this.recent = new Sealed[]{first};
-}
-
-private RecentlySealedPeriods(Sealed[] recent)
+private RecentlySealedPeriods(ImmutableSortedMap<Epoch, Sealed> recent)
 {
 this.recent = recent;
 }
 
 public static RecentlySealedPeriods init(List<Sealed> recent)
 {
-Collections.sort(recent);
-return new RecentlySealedPeriods(recent.toArray(new Sealed[recent.size()]));
+ImmutableSortedMap.Builder<Epoch, Sealed> builder = ImmutableSortedMap.naturalOrder();
+for (Sealed sealed : recent)
+{
+builder.put(sealed.epoch, sealed);
+}
+return new RecentlySealedPeriods(builder.build());
 }
 
-
 public RecentlySealedPeriods with(Epoch epoch, long period)
 {
-if (recent == null)
-{
-return new RecentlySealedPeriods(new Sealed(period, epoch));
-}
-else
-{
-int toCopy = Math.min(recent.length, maxSize - 1);
-int newSize = Math.min(recent.length + 1, maxSize);
-Sealed[] newList = new Sealed[newSize];
-System.arraycopy(recent, recent.length - toCopy, newList, 0, toCopy);
-newList[newSize - 1] = new Sealed(period, epoch);
-// shouldn't be necessary, but is cheap
-Arrays.sort(newList, Sealed::compareTo);
-return new RecentlySealedPeriods(newList);
-}
+NavigableMap<Epoch, Sealed> toKeep = recent.size() < maxSize ? recent
+                                                             : recent.tailMap(recent.firstKey(), false);
+
+return new RecentlySealedPeriods(ImmutableSortedMap.naturalOrder()
+                                                   .putAll(toKeep)
+                                                   .put(epoch, new Sealed(period, epoch))
+                                                   .build());
 }
 
 /**
@@ -86,28 +77,27 @@ public class RecentlySealedPeriods
  * as long as its epoch is greater than the target.
  * If the target epoch is greater than the max epoch in the latest sealed
  * period, then assume there is no suitable snapshot.
- * @param epoch
+ * @param epoch the target epoch
  * @return
  */
 public Sealed lookupEpochForSnapshot(Epoch epoch)
 {
+if (recent.isEmpty())
+return Sealed.EMPTY;
+
 // if the target is > the highest indexed value there's no need to
 // scan the index. Instead, just signal to the caller that no suitable
 // sealed period was found.
-if (recent.length > 0)
-{
-Sealed latest = recent[recent.length - 1];
-return latest.epoch.isAfter(epoch) ? latest : Sealed.EMPTY;
-}
-return 
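The simplified logic above keeps at most maxSize sealed periods in memory, evicting the entry with the smallest epoch once the index is full. A self-contained sketch of that bounded-index behaviour using a plain TreeMap (the real class uses Guava's ImmutableSortedMap keyed by Epoch; the names and long keys below are illustrative):

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class BoundedSealedIndex
{
    private final int maxSize;
    // Maps epoch -> sealed period number, sorted by epoch.
    private final NavigableMap<Long, Long> recent = new TreeMap<>();

    public BoundedSealedIndex(int maxSize)
    {
        this.maxSize = maxSize;
    }

    // Mirrors the simplified with(): when the index is full, the entry with
    // the smallest epoch is dropped before the new one is added.
    public void with(long epoch, long period)
    {
        if (recent.size() >= maxSize)
            recent.pollFirstEntry();
        recent.put(epoch, period);
    }

    public long oldestEpoch()
    {
        return recent.firstKey();
    }

    public int size()
    {
        return recent.size();
    }
}
```

Lookups for epochs older than `oldestEpoch()` would then fall back to the system.metadata_sealed_periods table, as the class javadoc describes.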

[jira] [Comment Edited] (CASSANDRA-18798) Appending to list in Accord transactions uses insertion timestamp

2023-10-17 Thread Henrik Ingo (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776200#comment-17776200
 ] 

Henrik Ingo edited comment on CASSANDRA-18798 at 10/17/23 2:06 PM:
---

Pushed new snapshot of progress: 
https://github.com/henrikingo/cassandra/commit/4b2292bfa52ed713163abbc4f72b8300bf630e8e

This commit "fixes" the issue in the sense that 
{{updateAllTimestampAndLocalDeletionTime()}} will now also update the {{path}} 
variable for elements of a ListType. However, this does not actually fix the 
issue. In the unit test that's also part of the patch, the transactions end up 
always having the same timestamp, and hence generate the same TimeUUID.

(Note that separately we might wonder what would happen if we append 2 list 
elements in the same transaction?)

To emphasize the point that the above does the right thing given the original 
assumptions: if I just use {{nextTimeUUID()}}, which generates new UUIDs rather 
than just mapping the current timestamp to a deterministic UUID, then the test 
"passes", though I doubt that would be correct in a real cluster with multiple 
nodes. It works on a single node because this code executes serially in the 
Accord execution phase, so newly generated UUIDs are ordered correctly, even if 
they are not the correct UUIDs (i.e. derived from the Accord transaction id).

But ok, debugging this I realized another issue, which I first thought was with 
the test setup, but might be some kind of race condition. It turns out the two 
transactions in the unit test end up executing with the exact same timestamps.

{noformat}
lastmicros 0
DEBUG [node2_CommandStore[1]:1] node2 2023-10-17 15:39:35,579 
AccordMessageSink.java:167 - Replying ACCORD_APPLY_RSP ApplyApplied to 
/127.0.0.1:7012
DEBUG [node1_RequestResponseStage-1] node1 2023-10-17 15:39:35,580 
AccordCallback.java:49 - Received response ApplyApplied from /127.0.0.2:7012
lastmicros 0
raw 0  (NO_LAST_EXECUTED_HLC=-9223372036854775808
lastExecutedTimestamp [0,0,0,0]
lastmicros 1697546374434000
raw 0  (NO_LAST_EXECUTED_HLC=-9223372036854775808
raw -9223372036854775808  (NO_LAST_EXECUTED_HLC=-9223372036854775808
lastExecutedTimestamp [0,0,0,0]
lastExecutedTimestamp [10,1697546374434000,10,1]
lastmicros 1697546374434000
raw -9223372036854775808  (NO_LAST_EXECUTED_HLC=-9223372036854775808
lastExecutedTimestamp [10,1697546374434000,10,1]
timestamp 1697546374434000executeAt[10,1697546374434000,10,1]
timestamp 1697546374434000executeAt[10,1697546374434000,10,1]
{noformat}

But adding a sleep to one thread resolves the issue (and actually makes the 
test pass):
{code}
ForkJoinTask<?> add2 = ForkJoinPool.commonPool().submit(() -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        // It's ok
    }

    latch.awaitThrowUncheckedOnInterrupt();
    SHARED_CLUSTER.get(1).executeInternal("BEGIN TRANSACTION " +
            "UPDATE " + currentTable + " SET l = l + [2] WHERE k = 1; " +
            "COMMIT TRANSACTION");
    completionOrder.add(2);
});

{code}

{noformat}
lastmicros 1697544893676000
raw 1697544893676000  (NO_LAST_EXECUTED_HLC=-9223372036854775808
lastExecutedTimestamp [10,1697544893676000,10,1]
lastmicros 1697544894677000
raw -9223372036854775808  (NO_LAST_EXECUTED_HLC=-9223372036854775808
lastExecutedTimestamp [10,1697544894677000,10,1]
timestamp 1697544894677000executeAt[10,1697544894677000,10,1]
DEBUG [node2_CommandStore[1]:1] node2 2023-10-17 15:14:54,728 
AccordMessageSink.java:167 - Replying ACCORD_APPLY_RSP ApplyApplied to 
/127.0.0.1:7012
DEBUG [node1_RequestResponseStage-1] node1 2023-10-17 15:14:54,728 
AccordCallback.java:49 - Received response ApplyApplied from /127.0.0.1:7012
DEBUG [node2_Messaging-EventLoop-3-4] node2 2023-10-17 15:14:54,728 
AccordVerbHandler.java:54 - Receiving 
PreAccept{txnId:[10,1697544894711000,0,1], 
txn:{read:TxnRead{TxnNamedRead{name='RETURNING:', 
key=distributed_test_keyspace:DecoratedKey(-4069959284402364209, 0001), 
update=Read(distributed_test_keyspace.tbl0 columns=*/[l] rowFilter= limits= 
key=1 filter=names(EMPTY), nowInSec=0)}}}, 
scope:[distributed_test_keyspace:-4069959284402364209]} from /127.0.0.1:7012
DEBUG [node1_CommandStore[1]:1] node1 2023-10-17 15:14:54,730 
AbstractCell.java:144 - timestamp: 1697544894677000   buffer: 0newPath: 
java.nio.HeapByteBuffer[pos=0 lim=16 cap=16]
lastmicros 1697544893676000
raw 1697544893676000  (NO_LAST_EXECUTED_HLC=-9223372036854775808
lastExecutedTimestamp [10,1697544893676000,10,1]
lastmicros 1697544894677000
raw -9223372036854775808  (NO_LAST_EXECUTED_HLC=-9223372036854775808
lastExecutedTimestamp [10,1697544894677000,10,1]
DEBUG [node1_RequestResponseStage-1] node1 

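The identical timestamps in the log above mean both transactions drew the same microsecond value, so a deterministic timestamp-to-UUID mapping yields the same TimeUUID twice. The usual guard against this is to force each new timestamp strictly past the last one handed out. A hedged sketch of that general pattern (illustrative names; this is not the Accord code path):

```java
import java.util.concurrent.atomic.AtomicLong;

public class MonotonicMicros
{
    private final AtomicLong lastMicros = new AtomicLong(0);

    // Returns currentMicros if it is past the last value handed out,
    // otherwise last + 1, so two callers can never observe the same value
    // even when the wall clock has not advanced between them.
    public long next(long currentMicros)
    {
        while (true)
        {
            long last = lastMicros.get();
            long candidate = Math.max(currentMicros, last + 1);
            if (lastMicros.compareAndSet(last, candidate))
                return candidate;
        }
    }
}
```

With such a guard, the two back-to-back transactions in the unit test would get 1697546374434000 and 1697546374434001 rather than colliding, which is what the added sleep papers over.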
[cassandra] 01/01: Merge branch 'cassandra-5.0' into trunk

2023-10-17 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 3d15be1d5ea35c63f93e07e9c158a657a8e37ee8
Merge: 302b272b14 802bd5fe13
Author: mck 
AuthorDate: Tue Oct 17 12:14:46 2023 +0200

Merge branch 'cassandra-5.0' into trunk

* cassandra-5.0:
  ninja-fix – reusing git clone under build needs reset and permissions

 .build/run-tests.sh | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --cc .build/run-tests.sh
index cf2fb04354,cf5ddcc1b8..35b166f863
--- a/.build/run-tests.sh
+++ b/.build/run-tests.sh
@@@ -102,7 -104,8 +105,8 @@@ _build_all_dtest_jars() 
  [ "${java_version}" -eq 11 ] && export CASSANDRA_USE_JDK11=true
  
  pushd ${TMP_DIR}/cassandra-dtest-jars >/dev/null
 -for branch in cassandra-4.0 cassandra-4.1 cassandra-5.0 ; do
 +for branch in cassandra-4.0 cassandra-4.1 cassandra-5.0 trunk ; do
+ git reset --hard HEAD && git clean -qxdff  || echo "failed to reset/clean ${TMP_DIR}/cassandra-dtest-jars… continuing…"
  git checkout --quiet $branch
  dtest_jar_version=$(grep 'property\s*name=\"base.version\"' build.xml |sed -ne 's/.*value=\"\([^"]*\)\".*/\1/p')
  if [ -f "${DIST_DIR}/dtest-${dtest_jar_version}.jar" ] ; then





[cassandra] branch trunk updated (302b272b14 -> 3d15be1d5e)

2023-10-17 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


from 302b272b14 Merge branch 'cassandra-5.0' into trunk
 add 802bd5fe13 ninja-fix – reusing git clone under build needs reset and 
permissions
 new 3d15be1d5e Merge branch 'cassandra-5.0' into trunk

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .build/run-tests.sh | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)





[cassandra] branch cassandra-5.0 updated (987d03c142 -> 802bd5fe13)

2023-10-17 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a change to branch cassandra-5.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git


from 987d03c142 Fixes for in-tree scripts: offline mode, maybe-build for 
fqltool-test, jvm-dtest-upgrade
 add 802bd5fe13 ninja-fix – reusing git clone under build needs reset and 
permissions

No new revisions were added by this update.

Summary of changes:
 .build/run-tests.sh | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)





[jira] [Commented] (CASSANDRA-18932) Harry-found CorruptSSTableException / RT Closer issue when reading entire partition

2023-10-17 Thread Alex Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776045#comment-17776045
 ] 

Alex Petrov commented on CASSANDRA-18932:
-

[~brandon.williams] sure, the attached sstables can be used to stably reproduce 
the issue. I did try against 
[https://github.com/apache/cassandra/commit/15be17ecef53adf575732fc8aa0f86eb1a774092]
 and can confirm that at that commit this issue doesn't repro.

As soon as I have a Harry branch up, I can also share a dtest that reproduces 
this in about 17 seconds, and we can try shrinking the commands, since we do 
know which page/row it breaks on.

> Harry-found CorruptSSTableException / RT Closer issue when reading entire 
> partition
> ---
>
> Key: CASSANDRA-18932
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18932
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Priority: Normal
> Attachments: node1_.zip, operation.log.zip
>
>
> While testing some new machinery for Harry, I have encountered a new RT 
> closer / SSTable Corruption issue. I have grounds to believe this was 
> introduced during the last year.
> Issue seems to happen because of intricate interleaving of flushes with 
> writes and deletes.
> {code:java}
> ERROR [ReadStage-2] 2023-10-16 18:47:06,696 JVMStabilityInspector.java:76 - 
> Exception in thread Thread[ReadStage-2,5,SharedPool]
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> RandomAccessReader:BufferManagingRebufferer.Aligned:CompressedChunkReader.Mmap(/Users/ifesdjeen/foss/java/apache-cassandra-4.0/data/data1/harry/table_1-07c35a606c0a11eeae7a4f6ca489eb0c/nc-5-big-Data.db
>  - LZ4Compressor, chunk length 16384, data length 232569)
>         at 
> org.apache.cassandra.io.sstable.AbstractSSTableIterator$AbstractReader.hasNext(AbstractSSTableIterator.java:381)
>         at 
> org.apache.cassandra.io.sstable.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:242)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133)
>         at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:376)
>         at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:188)
>         at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:157)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:534)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:402)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
>         at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
>         at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>         at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133)
>         at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:151)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:101)
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:86)
>         at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:343)
>         at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:201)
>         at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:186)
>         at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:48)
>         at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:346)
>         at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:2186)
>