[ https://issues.apache.org/jira/browse/CASSANDRA-8979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392955#comment-14392955 ]

Yuki Morishita commented on CASSANDRA-8979:
-------------------------------------------

[~slebresne], [[email protected]]
Sorry, I missed the mixed minor version cluster case. Thanks for catching that.
In my opinion, users may see more streaming while they run a mixed-version 
cluster.
But I don't think they keep a mixed-version cluster forever; they will eventually 
upgrade the whole cluster to the same version, so the impact is temporary.
So I'm +1 to this change in a minor version (with the latest patches by Stefan).

As for the dtest failing on trunk, it is caused by another bug in trunk and 
will be fixed in CASSANDRA-9099.

> MerkleTree mismatch for deleted and non-existing rows
> -----------------------------------------------------
>
>                 Key: CASSANDRA-8979
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8979
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Stefan Podkowinski
>            Assignee: Stefan Podkowinski
>             Fix For: 2.1.5
>
>         Attachments: 8979-AvoidBufferAllocation-2.0_patch.txt, 
> 8979-LazilyCompactedRow-2.0.txt, 8979-RevertPrecompactedRow-2.0.txt, 
> cassandra-2.0-8979-lazyrow_patch.txt, cassandra-2.0-8979-validator_patch.txt, 
> cassandra-2.0-8979-validatortest_patch.txt, 
> cassandra-2.1-8979-lazyrow_patch.txt, cassandra-2.1-8979-validator_patch.txt
>
>
> Validation compaction will currently create different hashes for rows that 
> have been deleted compared to nodes that have not seen the rows at all or 
> have already compacted them away. 
> In case this sounds familiar to you, see CASSANDRA-4905, which was supposed to 
> prevent hashing of expired tombstones. That check still seems to be in place 
> but does not address the issue completely, or a change in 2.0 rendered the 
> patch ineffective. 
> The problem is that rowHash() in the Validator will return a new hash in any 
> case, whether or not the PrecompactedRow actually updated the digest. This 
> leads to the situation where a purged PrecompactedRow does not change the 
> digest, yet we end up with a different tree compared to not having rowHash() 
> called at all (as when the row doesn't exist anymore); see the sketch after 
> this description.
> As an implication, repair jobs will constantly detect mismatches between 
> older sstables containing purgeable rows and nodes that have already compacted 
> these rows away. After transferring the reported ranges, the newly created 
> sstables will immediately get deleted again during the following compaction. 
> This will happen again on each repair run until the sstable with the purgeable 
> row finally gets compacted. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
