[ https://issues.apache.org/jira/browse/CASSANDRA-18934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790884#comment-17790884 ]

Maxwell Guo commented on CASSANDRA-18934:
-----------------------------------------

I found that if we just do a normal drain, the commitlog is cleaned up and no 5.1-format data is left behind for 4.1's log replay.
For this test I just modified the dtest's
[code|https://github.com/apache/cassandra/blob/trunk/test/distributed/org/apache/cassandra/distributed/impl/Instance.java#L873]
to call the commitlog's forceRecycleAllSegments, and this part passes.
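To make the idea concrete, here is a minimal sketch (not the actual dtest patch) of the drain-plus-recycle step; exactly where it would be hooked into Instance's shutdown path is an assumption on my side:
{code:java}
// Minimal sketch of the idea above, assuming it runs in the in-JVM dtest Instance
// shutdown path right after the node is drained. The hook point is an assumption;
// StorageService.drain() and CommitLog.forceRecycleAllSegments() are existing APIs.
import org.apache.cassandra.db.commitlog.CommitLog;
import org.apache.cassandra.service.StorageService;

public final class DrainAndRecycleSketch
{
    public static void drainForDowngrade() throws Exception
    {
        // Flush memtables and stop accepting writes, as a normal drain does.
        StorageService.instance.drain();
        // Recycle all commitlog segments so no 5.1-format segments are left
        // for a 4.1 node to replay on its next startup.
        CommitLog.instance.forceRecycleAllSegments();
    }
}
{code}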
But I also found that an exception occurs at 4.1's startup stage, like this:
{code:java}
ERROR [SSTableBatchOpen:2] 2023-11-29 10:33:00,974 JVMStabilityInspector.java:68 - Exception in thread Thread[SSTableBatchOpen:2,5,SSTableBatchOpen]
java.lang.IllegalStateException: org.apache.cassandra.exceptions.UnknownColumnException: Unknown column compaction_properties during deserialization
        at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:514)
        at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:381)
        at org.apache.cassandra.io.sstable.format.SSTableReader$2.run(SSTableReader.java:551)
        at org.apache.cassandra.concurrent.FutureTask$1.call(FutureTask.java:81)
        at org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:47)
        at org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:57)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.cassandra.exceptions.UnknownColumnException: Unknown column compaction_properties during deserialization
        at org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:337)
        at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:510)
{code}
That is, when opening the sstable for system.compaction_history, C* uses the system table metadata built into 4.1, but the 5.1 sstable carries 5.1's metadata in its serialization header, which contains the newly added column; see
[here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SerializationHeader.java#L323]
(hence question (1) below).
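For illustration only, here is a simplified sketch of that check (not the actual trunk code, which lives in SerializationHeader.Component.toHeader):
{code:java}
// Simplified illustration of why SSTableReader.open fails: the column names stored
// in the 5.1 sstable's serialization header are resolved against the node's own
// (4.1) table metadata, and any name the local schema does not know triggers an
// UnknownColumnException. Class and method names here are made up for the sketch.
import java.util.Set;

final class HeaderResolutionSketch
{
    static void resolve(Set<String> columnsInSSTableHeader, Set<String> columnsInLocalSchema) throws Exception
    {
        for (String column : columnsInSSTableHeader)
        {
            if (!columnsInLocalSchema.contains(column))
                // In Cassandra this surfaces as org.apache.cassandra.exceptions.UnknownColumnException,
                // wrapped into the IllegalStateException seen in the log above.
                throw new Exception("Unknown column " + column + " during deserialization");
        }
    }
}
{code}
So with compaction_properties present in the 5.1 header but absent from 4.1's hard-coded system.compaction_history schema, the open fails.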

I have some questions here:
(1) Can we modify 4.1's code?
(2) I just had a simple idea, let's discuss it together. :D I suggest adding a new system table that just stores the schema change and the version in which the schema was changed, like:
create table system.version_detail (keyspace text, table text, version_changed text, add_column text, del_column text, properties text, primary key(keyspace, table, version_changed));
so when downgrading we can use this info to skip the exception. The data can be inserted at the setup stage, like SystemKeyspaceMigrator41 does (see the sketch after this list).
(3) If we cannot change the code of 4.1, is there some other solution?
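A hedged sketch of idea (2), just to illustrate the shape of the proposed table and a SystemKeyspaceMigrator41-style insert. Every name here (version_detail, keyspace_name, table_name, added_column, dropped_column, properties) is a placeholder; "keyspace" and "table" from the inline proposal are CQL reserved words, so they are renamed in the sketch, and the real table definition would live with the other system tables rather than be created at runtime:
{code:java}
// Hypothetical sketch of the proposed system.version_detail table and a
// migration-style insert, in the spirit of SystemKeyspaceMigrator41. All table
// and column names are placeholders, not an existing Cassandra schema.
import org.apache.cassandra.cql3.QueryProcessor;

final class VersionDetailSketch
{
    // Proposed schema: one row per (keyspace, table, version) schema change.
    static final String PROPOSED_SCHEMA =
        "CREATE TABLE system.version_detail ("
        + "  keyspace_name text,"
        + "  table_name text,"
        + "  version_changed text,"
        + "  added_column text,"
        + "  dropped_column text,"
        + "  properties text,"
        + "  PRIMARY KEY (keyspace_name, table_name, version_changed))";

    static void recordCompactionHistoryChange()
    {
        // Example row describing the change that currently breaks the 4.1 downgrade,
        // so a downgraded node could recognize and skip the unknown column.
        QueryProcessor.executeInternal(
            "INSERT INTO system.version_detail (keyspace_name, table_name, version_changed, added_column) "
            + "VALUES (?, ?, ?, ?)",
            "system", "compaction_history", "5.1", "compaction_properties");
    }
}
{code}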

Looking forward to more replies and any other opinions.




> Downgrade to 4.1 fails due to schema changes
> --------------------------------------------
>
>                 Key: CASSANDRA-18934
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-18934
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Local/Startup and Shutdown
>            Reporter: David Capwell
>            Assignee: Maxwell Guo
>            Priority: Normal
>             Fix For: 5.x
>
>
> We are required to support 5.0 downgrading to 4.1 as a migration step, but we
> don’t have tests to show this is working… I wrote a quick test to make sure a
> change we needed in Accord wouldn’t block the downgrade, and saw that we fail
> right now.
> {code}
> ERROR 20:56:39 Exiting due to error while processing commit log during initialization.
> org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: Unexpected error deserializing mutation; saved to /var/folders/h1/s_3p1x3s3hl0hltbpck67m0h0000gn/T/mutation4184214444767150092dat.  This may be caused by replaying a mutation against a table with the same name but incompatible schema.  Exception follows: java.lang.RuntimeException: Unknown column compaction_properties during deserialization
>       at org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:464)
>       at org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:397)
>       at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:244)
>       at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:147)
>       at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:191)
>       at org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:223)
>       at org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:204)
> {code}
> This was caused by a schema change in CASSANDRA-18061
> {code}
> /*
>  * Licensed to the Apache Software Foundation (ASF) under one
>  * or more contributor license agreements.  See the NOTICE file
>  * distributed with this work for additional information
>  * regarding copyright ownership.  The ASF licenses this file
>  * to you under the Apache License, Version 2.0 (the
>  * "License"); you may not use this file except in compliance
>  * with the License.  You may obtain a copy of the License at
>  *
>  *     http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> package org.apache.cassandra.distributed.upgrade;
> import java.io.IOException;
> import java.io.File;
> import java.util.concurrent.atomic.AtomicBoolean;
> import org.junit.Test;
> import org.apache.cassandra.distributed.api.IUpgradeableInstance;
> public class DowngradeTest extends UpgradeTestBase
> {
>     @Test
>     public void test() throws Throwable
>     {
>         AtomicBoolean first = new AtomicBoolean(true);
>         new TestCase()
>         .nodes(1)
>         .withConfig(c -> {
>             if (first.compareAndSet(true, false))
>                 c.set("storage_compatibility_mode", "CASSANDRA_4");
>         })
>         .downgradeTo(v41)
>         .setup(cluster -> {})
> // Uncomment if you want to test what happens after reading the commit log, which fails right now
> //        .runBeforeNodeRestart((cluster, nodeId) -> {
> //            IUpgradeableInstance inst = cluster.get(nodeId);
> //            File f = new File((String) inst.config().get("commitlog_directory"));
> //            deleteRecursive(f);
> //        })
>         .runAfterClusterUpgrade(cluster -> {})
>         .run();
>     }
>     private void deleteRecursive(File f)
>     {
>         if (f.isDirectory())
>         {
>             File[] children = f.listFiles();
>             if (children != null)
>             {
>                 for (File c : children)
>                     deleteRecursive(c);
>             }
>         }
>         f.delete();
>     }
> }
> {code}
> {code}
> diff --git a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java b/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
> index 5ee8780204..b4111e3b44 100644
> --- a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
> +++ b/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
> @@ -226,6 +226,12 @@ public class UpgradeTestBase extends DistributedTestBase
>              return this;
>          }
> +        public TestCase downgradeTo(Semver to)
> +        {
> +            upgrade.add(new TestVersions(versions.getLatest(CURRENT), Collections.singletonList(versions.getLatest(to))));
> +            return this;
> +        }
> +
>          /**
>           * performs all supported upgrade paths that exist in between from and to that include the current version.
>           * This call is equivalent to calling {@code upgradesTo(from, CURRENT).upgradesFrom(CURRENT, to)}.
> {code}


