[ https://issues.apache.org/jira/browse/SOLR-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14995003#comment-14995003 ]

Ishan Chattopadhyaya commented on SOLR-8227:
--------------------------------------------


bq. That should not be a problem, no? At point X, when the replica starts 
recovering from a leader, the leader has the latest data. The replica also 
starts accepting documents while it is recovering, so even if the leader changes 
at this point it would still have the latest data.

There could be a scenario where the recovering replica is partitioned away from 
the leader, but not from the non-leader replica it is recovering from. In that 
case, updates that arrive while the recovery is going on never reach this 
replica. Do you think it will lead to an out-of-sync replica?

I think after recovering from another active non-leader replica, we should 
again do a peer sync with the leader, just to be sure we haven't missed an 
update, e.g. in the event of a partition from the leader.
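The flow suggested above can be sketched roughly as follows. This is a hedged illustration only: the `Replica` type, `pickRecoverySource` method, and the final leader peer-sync step are hypothetical names, not actual SolrCloud APIs.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the proposed recovery flow: replicate from any
// active replica (preferring a non-leader, to spread load off the leader),
// then finish with a peer sync against the leader so updates missed during
// a partition from the leader are not silently lost.
public class RecoveryFlow {
    record Replica(String name, boolean active, boolean leader) {}

    // Choose an active replica as the replication source, preferring a
    // non-leader; fall back to the leader if it is the only active replica.
    static Optional<Replica> pickRecoverySource(List<Replica> replicas) {
        return replicas.stream()
                .filter(Replica::active)
                .min((a, b) -> Boolean.compare(a.leader(), b.leader()));
    }

    public static void main(String[] args) {
        List<Replica> shard = List.of(
                new Replica("core_node1", true, true),    // leader
                new Replica("core_node2", true, false),   // active follower
                new Replica("core_node3", false, false)); // down
        Replica source = pickRecoverySource(shard).orElseThrow();
        System.out.println(source.name()); // prints core_node2
        // After replicating from `source`, a final PeerSync with the leader
        // (not shown here) would confirm no update was missed.
    }
}
```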

> Recovering replicas should be able to recover from any active replica
> ---------------------------------------------------------------------
>
>                 Key: SOLR-8227
>                 URL: https://issues.apache.org/jira/browse/SOLR-8227
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Varun Thacker
>
> Currently when a replica goes into recovery it uses the leader to recover. It 
> first tries to do a PeerSync. If that's not successful it does a replication. 
> Most of the time it ends up doing a full replication, because segment merging 
> and autoCommits cause segments to be formed differently on the replicas (we 
> should explore improving that in another issue). 
> But when many replicas are recovering and hitting the leader, the leader can 
> become a bottleneck. Since Solr is a CP system, we should be able to recover 
> from any of the 'active' replicas instead of just the leader. 
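The two-step recovery described in the issue can be sketched as a simple decision: try the cheap PeerSync first, and fall back to full index replication only when the gap is too large. This is a hedged simplification, not Solr's actual PeerSync implementation; `canPeerSync` and the `window` parameter are illustrative names.

```java
import java.util.HashSet;
import java.util.Set;

// Hedged sketch of the PeerSync-then-replication decision: if the updates
// we are missing fit inside the peer's recent-updates window, they can be
// fetched individually (PeerSync); otherwise a full replication is needed.
public class PeerSyncSketch {
    // Returns true if PeerSync suffices: the set of update versions the
    // peer has and we lack is small enough to fetch one by one.
    static boolean canPeerSync(Set<Long> ourVersions, Set<Long> peerVersions, int window) {
        Set<Long> missing = new HashSet<>(peerVersions);
        missing.removeAll(ourVersions); // updates the peer has that we lack
        return missing.size() <= window;
    }

    public static void main(String[] args) {
        Set<Long> ours = Set.of(101L, 102L, 103L);
        Set<Long> peer = Set.of(101L, 102L, 103L, 104L, 105L);
        // Two missing updates fit in a window of 100, so PeerSync suffices.
        System.out.println(canPeerSync(ours, peer, 100)); // prints true
    }
}
```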



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
