[ https://issues.apache.org/jira/browse/CASSANDRA-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207377#comment-14207377 ]

Yuki Morishita commented on CASSANDRA-8193:
-------------------------------------------

First of all, thanks for the patch!
I reviewed it based on 2.0, but because the patch adds a new feature, I'd rather 
put this into 2.1+. (So go ahead and apply it to 2.0.x yourself after the review.)

So, some comments:

* If the replication factor is set to 1 for each DC, then it will behave the same as 
ParallelRequestCoordinator. It needs to fall back to the current behavior in this 
case.
* It looks like the ParallelRequestCoordinator class can be declared as {{... implements 
IRequestCoordinator<InetAddress>}}.
* DatacenterAwareRequestCoordinator uses an AtomicInteger, but a primitive int works 
just as well here (see the sketch after these comments).
* nit: put braces on a new line.
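
For illustration, here is a rough sketch of how the per-DC coordinator could look once those comments are addressed. This is not the attached patch: the add/start/completed shape of IRequestCoordinator, the IRequestProcessor callback, and the DatacenterResolver lookup (standing in for the snitch's getDatacenter) are assumptions on my part.

{code}
import java.net.InetAddress;
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Rough sketch only; names and the interface shape are assumptions, not the attached patch.
interface IRequestProcessor<R>
{
    void process(R request);
}

interface IRequestCoordinator<R>
{
    void add(R request);
    void start();
    int completed(R request); // returns how many requests remain
}

class DatacenterAwareRequestCoordinator implements IRequestCoordinator<InetAddress>
{
    // Hypothetical stand-in for the snitch's getDatacenter(endpoint) lookup.
    interface DatacenterResolver
    {
        String datacenterOf(InetAddress endpoint);
    }

    // One queue of pending endpoints per datacenter.
    private final Map<String, Queue<InetAddress>> requestsByDatacenter = new HashMap<String, Queue<InetAddress>>();
    private final IRequestProcessor<InetAddress> processor;
    private final DatacenterResolver resolver;
    private int remaining = 0; // plain int; no concurrent callers assumed here

    DatacenterAwareRequestCoordinator(IRequestProcessor<InetAddress> processor, DatacenterResolver resolver)
    {
        this.processor = processor;
        this.resolver = resolver;
    }

    public void add(InetAddress endpoint)
    {
        String dc = resolver.datacenterOf(endpoint);
        Queue<InetAddress> queue = requestsByDatacenter.get(dc);
        if (queue == null)
        {
            queue = new ArrayDeque<InetAddress>();
            requestsByDatacenter.put(dc, queue);
        }
        queue.add(endpoint);
        remaining++;
    }

    public void start()
    {
        // Kick off the head of every per-DC queue: one node per DC
        // computes its merkle tree at the same time.
        for (Queue<InetAddress> queue : requestsByDatacenter.values())
        {
            if (!queue.isEmpty())
                processor.process(queue.peek());
        }
    }

    public int completed(InetAddress endpoint)
    {
        // The finished endpoint is at the head of its DC's queue;
        // remove it and start the next node in that DC, if any.
        Queue<InetAddress> queue = requestsByDatacenter.get(resolver.datacenterOf(endpoint));
        queue.remove();
        if (!queue.isEmpty())
            processor.process(queue.peek());
        return --remaining;
    }
}
{code}

Keeping the counter a plain int assumes completed() is only ever called from one thread; if responses can arrive concurrently, the AtomicInteger would of course still be needed.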

> Multi-DC parallel snapshot repair
> ---------------------------------
>
>                 Key: CASSANDRA-8193
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8193
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Jimmy Mårdell
>            Assignee: Jimmy Mårdell
>            Priority: Minor
>             Fix For: 2.0.12
>
>         Attachments: cassandra-2.0-8193-1.txt
>
>
> The current behaviour of snapshot repair is to let one node at a time 
> calculate a merkle tree. This is to ensure that only one node at a time is doing 
> the expensive calculation. The drawback is that the merkle tree calculation 
> takes even longer overall.
> In a multi-DC setup, I think it would make more sense to have one node in 
> each DC calculate the merkle tree at the same time. This would yield a 
> significant improvement when you have many data centers.
> I'm not sure how relevant this is in 2.1, but I don't see us upgrading to 2.1 
> any time soon. Unless there is an obvious drawback that I'm missing, I'd like 
> to implement this in the 2.0 branch.
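
To make the expected gain concrete (purely illustrative numbers, not from the ticket): with 12 nodes spread evenly across 3 DCs and roughly 10 minutes per merkle tree, the current one-node-at-a-time behaviour takes about 12 x 10 = 120 minutes, while one node per DC at a time takes about (12 / 3) x 10 = 40 minutes. The wall-clock time then scales with the number of nodes per DC rather than the total number of nodes.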



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
