[ https://issues.apache.org/jira/browse/CASSANDRA-19363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacek Lewandowski updated CASSANDRA-19363:
------------------------------------------
    Description: 
While testing CASSANDRA-18824 on 3.11, we noticed one flaky result of the newly 
added decommission test. It looked innocent; however, digging into the logs 
revealed that, for some reason, the data being pumped into the cluster went to 
the decommissioned node instead of the working node.

That is, data were inserted into a 2-node cluster (RF=1) while, say, node1 was 
being decommissioned. The expected behavior is that the data end up on node2 
afterwards. However, in this roughly 1-in-1000 flaky run, the situation was the 
opposite: the data went to the decommissioned node, resulting in a total loss.
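
For context, with RF=1 each partition has exactly one replica, so once node1 has left the ring, node2 should own the entire token range and hold all of the inserted rows. The test keyspace is created by the dtest framework's {{init(cluster, 1)}}; roughly (the DDL below is only a sketch of an assumption, the real statement is generated by the framework) it corresponds to:
{code:java}
// Sketch of what init(cluster, 1) sets up for the shared test keyspace;
// the concrete keyspace name and options come from the in-jvm dtest framework.
cluster.schemaChange("CREATE KEYSPACE IF NOT EXISTS " + KEYSPACE +
                     " WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
{code}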

I haven't found the reason. I don't know if it is a test failure or a 
production code problem. I cannot prove that it is only a 3.11 problem. I'm 
creating this ticket because if this is a real issue and exists on newer 
branches, it is serious.
 
The logs artifact has been lost in CircleCI, so I'm attaching the one I 
downloaded earlier; unfortunately, it is cleaned up a bit. The relevant part is:
{noformat}
DEBUG [node1_isolatedExecutor:3] node1 ColumnFamilyStore.java:949 - Enqueuing flush of tbl: 38.965KiB (0%) on-heap, 0.000KiB (0%) off-heap
DEBUG [node1_PerDiskMemtableFlushWriter_1:1] node1 Memtable.java:477 - Writing Memtable-tbl(5.176KiB serialized bytes, 100 ops, 0%/0% of on/off-heap limit), flushed range = (max(-3074457345618258603), max(3074457345618258602)]
DEBUG [node1_PerDiskMemtableFlushWriter_2:1] node1 Memtable.java:477 - Writing Memtable-tbl(5.176KiB serialized bytes, 100 ops, 0%/0% of on/off-heap limit), flushed range = (max(3074457345618258602), max(9223372036854775807)]
DEBUG [node1_PerDiskMemtableFlushWriter_0:1] node1 Memtable.java:477 - Writing Memtable-tbl(5.176KiB serialized bytes, 100 ops, 0%/0% of on/off-heap limit), flushed range = (min(-9223372036854775808), max(-3074457345618258603)]
DEBUG [node1_PerDiskMemtableFlushWriter_2:1] node1 Memtable.java:506 - Completed flushing /node1/data2/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-3-big-Data.db (1.059KiB) for commitlog position CommitLogPosition(segmentId=1704397819937, position=47614)
DEBUG [node1_PerDiskMemtableFlushWriter_1:1] node1 Memtable.java:506 - Completed flushing /node1/data1/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-2-big-Data.db (1.091KiB) for commitlog position CommitLogPosition(segmentId=1704397819937, position=47614)
DEBUG [node1_PerDiskMemtableFlushWriter_0:1] node1 Memtable.java:506 - Completed flushing /node1/data0/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-1-big-Data.db (1.260KiB) for commitlog position CommitLogPosition(segmentId=1704397819937, position=47614)
DEBUG [node1_MemtableFlushWriter:1] node1 ColumnFamilyStore.java:1267 - Flushed to [BigTableReader(path='/node1/data0/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-1-big-Data.db'), BigTableReader(path='/node1/data1/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-2-big-Data.db'), BigTableReader(path='/node1/data2/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-3-big-Data.db')] (3 sstables, 17.521KiB), biggest 5.947KiB, smallest 5.773KiB
DEBUG [node2_isolatedExecutor:1] node2 ColumnFamilyStore.java:949 - Enqueuing flush of tbl: 38.379KiB (0%) on-heap, 0.000KiB (0%) off-heap
DEBUG [node2_PerDiskMemtableFlushWriter_0:1] node2 Memtable.java:477 - Writing Memtable-tbl(5.176KiB serialized bytes, 100 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
DEBUG [node2_PerDiskMemtableFlushWriter_0:1] node2 Memtable.java:506 - Completed flushing /node2/data2/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-1-big-Data.db (3.409KiB) for commitlog position CommitLogPosition(segmentId=1704397821653, position=54585)
DEBUG [node2_MemtableFlushWriter:1] node2 ColumnFamilyStore.java:1267 - Flushed to [BigTableReader(path='/node2/data2/distributed_test_keyspace/tbl-7fb7aa20ab3a11eeac381f661fe8b82f/me-1-big-Data.db')] (1 sstables, 7.731KiB), biggest 7.731KiB, 
{noformat}

As one can see, node1 flushed 3 sstables of {{tbl}} even though it had already 
been decommissioned, while node2 did not flush much. This is the opposite of 
what happens in a passing run of the test.
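
If this reproduces, the sstable imbalance could also be confirmed programmatically rather than by reading the logs. A minimal sketch (not part of the original test), assuming the standard in-jvm dtest API plus internal {{ColumnFamilyStore}} access from inside the instance:
{code:java}
// Hypothetical check: flush both nodes, then count the live sstables of tbl
// on each instance, mirroring what the DEBUG log above shows. Assumes
// org.apache.cassandra.db.Keyspace is importable on the instance side.
cluster.get(1).flush(KEYSPACE);
cluster.get(2).flush(KEYSPACE);
for (int n = 1; n <= 2; n++)
{
    int sstables = cluster.get(n).callOnInstance(
            () -> Keyspace.open(KEYSPACE).getColumnFamilyStore("tbl").getLiveSSTables().size());
    logger.info("node{} has {} live sstables of tbl", n, sstables);
}
{code}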

The test code is as follows:
{code:java}
        try (Cluster cluster = init(builder().withNodes(2)
                                             .withTokenSupplier(evenlyDistributedTokens(2))
                                             .withNodeIdTopology(NetworkTopology.singleDcNetworkTopology(2, "dc0", "rack0"))
                                             .withConfig(config -> config.with(NETWORK, GOSSIP))
                                             .start(), 1))
        {
            IInvokableInstance nodeToDecommission = cluster.get(1);
            IInvokableInstance nodeToRemainInCluster = cluster.get(2);

            // Start decommission on nodeToDecommission
            cluster.forEach(statusToDecommission(nodeToDecommission));
            logger.info("Decommissioning node {}", nodeToDecommission.broadcastAddress());

            // Add data to cluster while node is decommissioning
            int numRows = 100;
            cluster.schemaChange("CREATE TABLE IF NOT EXISTS " + KEYSPACE + ".tbl (pk int, ck int, v int, PRIMARY KEY (pk, ck))");
            insertData(cluster, 1, numRows, ConsistencyLevel.ONE);

            // Check data before cleanup on nodeToRemainInCluster
            assertEquals(100, nodeToRemainInCluster.executeInternal("SELECT * FROM " + KEYSPACE + ".tbl").length); // <----------- HERE it was reported 0 rows
        }
{code}
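
When investigating, a diagnostic right before the failing assert could read each node locally, so the counts reflect what each node physically stores rather than how a coordinator would route the read. A minimal sketch, assuming the same in-jvm dtest API and helpers as the test above:
{code:java}
// Hypothetical diagnostic (not part of the original test): executeInternal
// performs a local read on that instance, so the lengths show where the
// inserted rows actually landed.
Object[][] onDecommissioned = nodeToDecommission.executeInternal("SELECT * FROM " + KEYSPACE + ".tbl");
Object[][] onRemaining = nodeToRemainInCluster.executeInternal("SELECT * FROM " + KEYSPACE + ".tbl");
logger.info("local row counts: decommissioned node = {}, remaining node = {}",
            onDecommissioned.length, onRemaining.length);
{code}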


  was:
While testing CASSANDRA-18824 on 3.11, we noticed one flaky result of the newly 
added decommission test. It looked innocent; however, when digging into the 
logs, it turned out that, for some reason, the data that were being pumped into 
the cluster went to the decommissioned node instead of going to the working 
node.

That is, the data were inserted into a 2-node cluster (RF=1) while, say, node2 
got decommissioned. The expected behavior would be that the data land in node1 
after that. However, for some reason, in this 1/1000 flaky test, the situation 
was the opposite, and the data went to the decommissioned node, resulting in a 
total loss.

I haven't found the reason. I don't know if it is a test failure or a 
production code problem. I cannot prove that it is only a 3.11 problem. I'm 
creating this ticket because if this is a real issue and exists on newer 
branches, it is serious.
 


> Weird data loss in 3.11 flakiness during decommission
> -----------------------------------------------------
>
>                 Key: CASSANDRA-19363
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-19363
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Consistency/Bootstrap and Decommission
>            Reporter: Jacek Lewandowski
>            Priority: Normal
>             Fix For: 3.11.x
>
>         Attachments: bad.txt
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
