[
https://issues.apache.org/jira/browse/CASSANDRA-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222365#comment-15222365
]
Tyler Hobbs commented on CASSANDRA-11349:
-----------------------------------------
I think there are a couple of things wrong with the current patch.
First, the comparator needs to continue to compare the full cell name first,
and only break ties on range tombstones with {{compare(t1.max, t2.max)}}.
Second, I believe we should remove the timestamp tie-breaking behavior from the
comparator in general, and not just for validation compactions. In other
words, I think we're doing the comparison incorrectly for all compactions right
now.
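For illustration, here's roughly what I have in mind (a sketch only; {{RT}}, {{compareCellNames()}} and {{compareBounds()}} are placeholders, not the actual 2.1 types/methods):
{noformat}
public int compare(RT t1, RT t2)
{
    // always compare the full cell name first
    int cmp = compareCellNames(t1.name(), t2.name());
    if (cmp != 0)
        return cmp;
    // break ties on the range bound only -- with no timestamp
    // tie-breaking, two RTs that differ only in timestamp compare equal
    return compareBounds(t1.max(), t2.max());
}
{noformat}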
We want the comparison to return 0 whenever range tombstones have equal names
and ranges, even if they have different timestamps. This will result in
{{LazilyCompactedRow.Reducer.reduce()}} being called in one round with each of
the tombstones that only differ in timestamp. The logic in
{{LCR.Reducer.reduce()}} already handles the case of multiple range tombstones
with different timestamps by picking the one with the highest timestamp, so
these will correctly be reduced to a single RT. It looks like the current
codebase will keep both range tombstones during a compaction, which isn't
necessarily harmful, but is suboptimal. For repair purposes, though, this is
incorrect as it produces a different digest.
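Conceptually, the reduction then boils down to the following (again just a sketch, with {{timestampOf()}} as a placeholder, not the actual {{LCR.Reducer}} code):
{noformat}
// given two RTs with equal name and range, keep the highest timestamp
RangeTombstone reduce(RangeTombstone current, RangeTombstone candidate)
{
    return timestampOf(candidate) > timestampOf(current) ? candidate : current;
}
{noformat}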
To summarize: I think all we need to do is remove the timestamp tie-breaking
logic from the existing comparator.
[~slebresne] should double-check my logic, though.
> MerkleTree mismatch when multiple range tombstones exist for the same
> partition and interval
> ---------------------------------------------------------------------------------------------
>
> Key: CASSANDRA-11349
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11349
> Project: Cassandra
> Issue Type: Bug
> Reporter: Fabien Rousseau
> Assignee: Stefan Podkowinski
> Labels: repair
> Fix For: 2.1.x, 2.2.x
>
> Attachments: 11349-2.1.patch
>
>
> We observed that repair, for some of our clusters, streamed a lot of data and
> many partitions were "out of sync".
> Moreover, the read repair mismatch ratio is around 3% on those clusters,
> which is really high.
> After investigation, it appears that, if two range tombstones exist for a
> partition over the same range/interval, they're both included in the merkle
> tree computation.
> But if, for some reason, the two range tombstones were already compacted
> into a single range tombstone on another node, this will result in a merkle
> tree difference.
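> For illustration ({{updateDigest()}} here is a placeholder, not the actual
> code path):
> {noformat}
> // node 1: both tombstones feed the partition's MD5 digest
> updateDigest(digest, rt1);   // same range, timestamp t1
> updateDigest(digest, rt2);   // same range, timestamp t2 > t1
> // node 2: compaction already merged them, only rt2 remains
> updateDigest(digest, rt2);
> // => the two digests differ, hence the MerkleTree mismatch
> {noformat}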
> Currently, this is clearly bad because MerkleTree differences are dependent
> on compactions (and if a partition is deleted and created multiple times, the
> only way to ensure that repair "works correctly"/"doesn't overstream data" is
> to major compact before each repair... which is not really feasible).
> Below is a list of steps to easily reproduce this case:
> {noformat}
> ccm create test -v 2.1.13 -n 2 -s
> ccm node1 cqlsh
> CREATE KEYSPACE test_rt WITH replication = {'class': 'SimpleStrategy',
> 'replication_factor': 2};
> USE test_rt;
> CREATE TABLE IF NOT EXISTS table1 (
> c1 text,
> c2 text,
> c3 float,
> c4 float,
> PRIMARY KEY ((c1), c2)
> );
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 2);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> # now flush only one of the two nodes
> ccm node1 flush
> ccm node1 cqlsh
> USE test_rt;
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 3);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> ccm node1 repair
> # now grep the log and observe that some inconsistencies were detected
> # between nodes (while it shouldn't have detected any)
> ccm node1 showlog | grep "out of sync"
> {noformat}
> The consequences of this are a costly repair and the accumulation of many
> small SSTables (up to thousands over a rather short period of time when
> using vnodes, until compaction absorbs those small files), as well as an
> increased size on disk.