[ https://issues.apache.org/jira/browse/CASSANDRA-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15270766#comment-15270766 ]

Marcus Eriksson commented on CASSANDRA-8911:
--------------------------------------------

Have pushed a bunch of new commits to the 
[branch|https://github.com/krummas/cassandra/commits/marcuse/8911]

Apart from a bunch of fixes, it adds a nodetool command to trigger a repair: 
{{$ nodetool -p7100 mutationbasedrepair stresscql datatest -w 2000 -r 8000}}, 
where -w is the page size and -r is the number of rows per second to repair.
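
To illustrate the throttling, here is a minimal sketch of how a rows-per-second 
limit over paged reads could be wired up with Guava's {{RateLimiter}} (all class 
and method names below are made up for illustration, this is not the code on the 
branch):

{code:java}
import com.google.common.util.concurrent.RateLimiter;

public class ThrottledPageRepair
{
    private final int pageSize;            // -w: rows to read per page
    private final RateLimiter rowLimiter;  // -r: rows per second to repair

    public ThrottledPageRepair(int pageSize, int rowsPerSecond)
    {
        this.pageSize = pageSize;
        this.rowLimiter = RateLimiter.create(rowsPerSecond);
    }

    public void repair(PageSource source)
    {
        Page page;
        while ((page = source.nextPage(pageSize)) != null)
        {
            if (page.rowCount() == 0)
                break;
            // Block until this many rows fit under the rows/sec budget.
            rowLimiter.acquire(page.rowCount());
            // Hash the page and send it, together with its PK range and row
            // count, to the other replicas.
            sendDigestToReplicas(page);
        }
    }

    // Placeholders for the pieces described elsewhere in the ticket.
    interface PageSource { Page nextPage(int maxRows); }
    interface Page { int rowCount(); }
    private void sendDigestToReplicas(Page page) { /* messaging elided */ }
}
{code}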

Also pushed a bunch of dtests 
[here|https://github.com/krummas/cassandra-dtest/commits/marcuse/8911]

My plan for this is:
* Add a separate memtable that skips the commitlog; instead we record the last 
repaired page when we flush this memtable, so if we lose the memtable we start 
repairing again from the point where we last flushed it. We can probably also 
skip serving reads from this separate memtable (see the sketch after this list).
* Make it impossible to start if you run DTCS
* Clean up code, get it reviewed etc.
* Release it as an "experimental" feature
* Create new tickets:
**  make it incremental
** continuously run this
** investigate how to handle gc grace
** make it safe to use with DTCS
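
As a rough illustration of the first bullet (resume from the last flushed page), 
assuming hypothetical class names that do not exist in the branch:

{code:java}
import java.nio.ByteBuffer;

public class RepairProgress
{
    private volatile ByteBuffer lastFlushedPageEnd; // PK boundary of the last flushed page

    // Called when the commitlog-less repair memtable is flushed: durably record
    // how far we have repaired (e.g. in a local system table).
    public void onRepairMemtableFlush(ByteBuffer pageEnd)
    {
        lastFlushedPageEnd = pageEnd;
        persistLocally(pageEnd);
    }

    // After a restart the memtable contents are gone, so repair resumes from the
    // last flushed boundary instead of the beginning of the range.
    public ByteBuffer resumeFrom()
    {
        return lastFlushedPageEnd != null ? lastFlushedPageEnd : readPersisted();
    }

    private void persistLocally(ByteBuffer pageEnd) { /* e.g. a local system table write */ }
    private ByteBuffer readPersisted() { /* read back after restart */ return null; }
}
{code}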

The reason I prefer to release it incrementally like this is that I don't want 
us to waste a lot of time if the approach does not really work in real life.

> Consider Mutation-based Repairs
> -------------------------------
>
>                 Key: CASSANDRA-8911
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8911
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Tyler Hobbs
>            Assignee: Marcus Eriksson
>             Fix For: 3.x
>
>
> We should consider a mutation-based repair to replace the existing streaming 
> repair.  While we're at it, we could do away with a lot of the complexity 
> around merkle trees.
> I have not planned this out in detail, but here's roughly what I'm thinking:
>  * Instead of building an entire merkle tree up front, just send the "leaves" 
> one-by-one.  Instead of dealing with token ranges, make the leaves primary 
> key ranges.  The PK ranges would need to be contiguous, so that the start of 
> each range would match the end of the previous range. (The first and last 
> leaves would need to be open-ended on one end of the PK range.) This would be 
> similar to doing a read with paging.
>  * Once one page of data is read, compute a hash of it and send it to the 
> other replicas along with the PK range that it covers and a row count.
>  * When the replicas receive the hash, they perform a read over the same PK 
> range (using a LIMIT of the row count + 1) and compare hashes (unless the row 
> counts don't match, in which case this can be skipped); a sketch of this step 
> follows the list.
>  * If there is a mismatch, the replica will send a mutation covering that 
> page's worth of data (ignoring the row count this time) to the source node.
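> As a minimal sketch (hypothetical names, not actual code), the replica-side 
> compare in the step above could look something like this:
> {code:java}
> import java.nio.ByteBuffer;
> import java.util.Arrays;
> 
> public class PageDigestVerifier
> {
>     // Runs on a replica that receives a page digest from the node driving the repair.
>     public Result verify(ByteBuffer pkStart, ByteBuffer pkEnd, int rowCount, byte[] expectedHash)
>     {
>         // Read the same PK range with a LIMIT of rowCount + 1 so a differing
>         // row count is detected without hashing anything.
>         LocalPage local = readRange(pkStart, pkEnd, rowCount + 1);
>         if (local.rowCount() != rowCount)
>             return Result.MISMATCH;
>         return Arrays.equals(local.hash(), expectedHash) ? Result.MATCH : Result.MISMATCH;
>     }
> 
>     public enum Result { MATCH, MISMATCH }
>     interface LocalPage { int rowCount(); byte[] hash(); }
>     private LocalPage readRange(ByteBuffer start, ByteBuffer end, int limit) { /* local read elided */ return null; }
> }
> {code}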
> Here are the advantages that I can think of:
>  * With the current repair behavior of streaming, vnode-enabled clusters may 
> need to stream hundreds of small SSTables.  This results in increased 
> compaction load on the receiving node.  With the mutation-based approach, 
> memtables would naturally merge these.
>  * It's simple to throttle.  For example, you could give a number of rows/sec 
> that should be repaired.
>  * It's easy to see what PK range has been repaired so far.  This could make 
> it simpler to resume a repair that fails midway.
>  * Inconsistencies start to be repaired almost right away.
>  * Less special code (?)
>  * Wide partitions are no longer a problem.
> There are a few problems I can think of:
>  * Counters.  I don't know if this can be made safe, or if they need to be 
> skipped.
>  * To support incremental repair, we need to be able to read from only 
> repaired sstables.  Probably not too difficult to do.


