Thank you guys. It makes sense.
I'll have repair -pr scheduled on each node.
On Thu, Oct 18, 2012 at 3:39 AM, aaron morton wrote:
Without -pr the repair works on all token ranges the node is a replica for.
With -pr it only repairs data in the token range it is assigned. In your case,
when you ran it on node 0 with RF 3, the token range from node 0 was repaired
on nodes 0, 1 and 2. The other token ranges on nodes 0, 1 and 2 were not
repaired.
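Aaron's distinction can be sketched with a toy model (this is a hypothetical ring placement for illustration, not Cassandra source; the 9 nodes and RF=3 come from the thread). Assume range i lives on nodes i, i+1 and i+2 mod 9: then plain repair on node 0 touches every range node 0 replicates, while -pr touches only range 0, though still on all three of that range's replicas.

```python
# Illustrative sketch (assumed simple ring placement, NOT Cassandra code):
# 9 nodes, RF=3, range i replicated on nodes i, i+1, i+2 (mod 9).
N, RF = 9, 3

def replicas(range_id):
    """Nodes holding a copy of token range `range_id`."""
    return [(range_id + k) % N for k in range(RF)]

def ranges_replicated_on(node):
    """All token ranges a node holds (what plain `repair` covers)."""
    return [r for r in range(N) if node in replicas(r)]

# Without -pr, repair on node 0 covers every range it replicates:
print(ranges_replicated_on(0))   # [0, 7, 8]

# With -pr, repair on node 0 covers only its primary range (range 0),
# but the validation work still runs on all of that range's replicas:
print(replicas(0))               # [0, 1, 2]
```

This is why compactionstats shows the same Validation tasks on node-{01,02} even though only node-00's primary range is being repaired.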
>
> In my mind it does make sense, and what you're saying is correct. But I
> read that it was better to run repair on each node with the "-pr" option.
>
> Alain
>
Yes, that's correct. By running repair -pr on each node you repair the whole
cluster without duplicating any work.
Andrey
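Andrey's "no duplication" claim can be checked in the same toy model (assumed ring placement, not Cassandra internals): summing what -pr repairs across all nine nodes covers each range exactly once, whereas plain repair on every node would hit each range RF = 3 times.

```python
# Toy ring model (assumed placement, NOT Cassandra internals):
# range r is replicated on nodes r, r+1, r+2 (mod N).
from collections import Counter

N, RF = 9, 3

def replicas(r):
    return {(r + k) % N for k in range(RF)}

# `repair -pr` on every node: each node repairs only its primary range.
with_pr = Counter(range(N))  # node i's primary range is range i
# plain `repair` on every node: each node repairs all ranges it replicates.
without_pr = Counter(r for node in range(N)
                     for r in range(N) if node in replicas(r))

print(set(with_pr.values()))     # {1} -> each range repaired exactly once
print(set(without_pr.values()))  # {3} -> each range repaired RF times
```

So scheduling repair -pr on every node does the minimum total work, while scheduling plain repair on every node triples it.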
What if the first node in a range is down? Then -pr would be ineffective for
that range.
In my mind it does make sense, and what you're saying is correct. But I
read that it was better to run repair on each node with the "-pr" option.
forget it. this was nonsense.
On Mon, Oct 15, 2012 at 10:05 PM, Alexis Midon wrote:
I see. So if I don't use the '-pr' option, triggering repair on node-00 is
sufficient to repair the first 3 nodes.
No need to cron a repair on node-{01,02}.
correct?
thanks for your answer.
On Mon, Oct 15, 2012 at 6:51 PM, Andrey Ilinykh wrote:
Only one range (the one node-00 is responsible for) will get repaired on all
three nodes.
Andrey
On Mon, Oct 15, 2012 at 11:56 AM, Alexis Midon wrote:
+1. Is this a consensus activity, and is the master fixed, or does the voting
process migrate during the cycles?
On Oct 15, 2012, at 2:56 PM, Alexis Midon wrote:
Hi all,
I have a 9-node cluster with a replication factor R=3. When I run repair
-pr on node-00, I see the exact same load and activity on node-{01,02}.
Specifically, compactionstats shows the same Validation tasks.
Does this mean that all 3 nodes will be repaired when nodetool returns? Or do
I still need to run repair on node-{01,02}?