[
https://issues.apache.org/jira/browse/SOLR-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847614#comment-13847614
]
Timothy Potter commented on SOLR-5552:
--------------------------------------
Here's a first cut at a solution (sans unit tests), which relies on a new Slice
property, last_known_leader_core_url. However, I'm open to other suggestions if
someone sees a cleaner way to solve this issue.
During the leader recovery process outlined in this ticket's description, the
ShardLeaderElectionContext can use this property as a hint for replicas to defer
to the previously known leader if it is one of the replicas trying to recover.
Specifically, this patch only applies when all replicas are "down" and the
previously known leader is on a "live" node and is among the replicas trying to
recover. This may be too restrictive, but it covers this issue nicely and
minimizes the chance of regression for other leader election / recovery cases.
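Roughly speaking, the check boils down to something like the following
simplified sketch (not the actual patch code; the ReplicaInfo type and the
shouldDefer helper are illustrative stand-ins for the real Slice / ZkStateReader
plumbing):
>>>
// Illustrative sketch only -- not the actual SOLR-5552 patch. ReplicaInfo and
// shouldDefer are hypothetical stand-ins for the real cluster-state classes.
import java.util.List;
import java.util.Set;

public class LeaderDeferralCheck {

  /** Minimal stand-in for a replica's published state within the Slice. */
  public static class ReplicaInfo {
    final String coreUrl;
    final String nodeName;
    final String state; // e.g. "down", "recovering", "active"

    public ReplicaInfo(String coreUrl, String nodeName, String state) {
      this.coreUrl = coreUrl;
      this.nodeName = nodeName;
      this.state = state;
    }
  }

  /**
   * True when this core should defer to the last known leader: every replica
   * in the slice is "down", the last known leader is one of those replicas,
   * its node is live, and we are not that leader ourselves.
   */
  public static boolean shouldDefer(List<ReplicaInfo> replicas,
                                    Set<String> liveNodes,
                                    String lastKnownLeaderCoreUrl,
                                    String myCoreUrl) {
    if (lastKnownLeaderCoreUrl == null) {
      return false;
    }
    boolean allDown = true;
    boolean leaderPresentAndLive = false;
    for (ReplicaInfo r : replicas) {
      if (!"down".equals(r.state)) {
        allDown = false;
      }
      if (r.coreUrl.equals(lastKnownLeaderCoreUrl) && liveNodes.contains(r.nodeName)) {
        leaderPresentAndLive = true;
      }
    }
    return allDown && leaderPresentAndLive && !lastKnownLeaderCoreUrl.equals(myCoreUrl);
  }
}
<<<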
Here are some log messages from the replica as it exits the
waitForReplicasToComeUp process that show this patch working:
>>>
2013-12-13 08:51:26,992 [coreLoadExecutor-3-thread-1] INFO
solr.cloud.ShardLeaderElectionContext - Enough replicas found to continue.
2013-12-13 08:51:26,992 [coreLoadExecutor-3-thread-1] INFO
solr.cloud.ShardLeaderElectionContext - Last known leader is
http://cloud84:8984/solr/cloud_shard1_replica1/ and I am
http://cloud85:8985/solr/cloud_shard1_replica2/
2013-12-13 08:51:26,992 [coreLoadExecutor-3-thread-1] INFO
solr.cloud.ShardLeaderElectionContext - Found previous? true and numDown is 2
2013-12-13 08:51:26,992 [coreLoadExecutor-3-thread-1] INFO
solr.cloud.ShardLeaderElectionContext - All 2 replicas are down. Choosing to
let last known leader http://cloud84:8984/solr/cloud_shard1_replica1/ try first
...
2013-12-13 08:51:26,992 [coreLoadExecutor-3-thread-1] INFO
solr.cloud.ShardLeaderElectionContext - There may be a better leader candidate
than us - going back into recovery
<<<
The end result was that my shard recovered correctly and the data remained
consistent between the leader and the replica. I've also tried this with 3
replicas in a Slice, and with the case where the last known leader doesn't come
back; both work as they did previously.
Lastly, I'm not entirely certain I like how the property gets set in the Slice
constructor. Would it be better to set this property in the Overseer, or even to
store last_known_leader_core_url in a separate znode, such as
/collections/<COLL>/last_known_leader/shardN? I do see some comments in places
about keeping the leader property on the Slice vs. in the leader Replica, so
maybe that figures into this as well.
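For illustration, here's a rough sketch of what the separate-znode alternative
could look like, written against the raw ZooKeeper client rather than Solr's
SolrZkClient (the LastKnownLeaderZk class and its methods are hypothetical; the
path layout follows the suggestion above, and parent znode creation and error
handling are omitted):
>>>
// Illustrative sketch only: persist the last known leader's core URL in its own
// znode, e.g. /collections/<COLL>/last_known_leader/shardN. Assumes the parent
// path already exists.
import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class LastKnownLeaderZk {

  /** Record the current leader's core URL, overwriting any previous value. */
  public static void write(ZooKeeper zk, String collection, String shard, String coreUrl)
      throws KeeperException, InterruptedException {
    String path = "/collections/" + collection + "/last_known_leader/" + shard;
    byte[] data = coreUrl.getBytes(StandardCharsets.UTF_8);
    if (zk.exists(path, false) == null) {
      zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } else {
      zk.setData(path, data, -1); // -1 means "any version"
    }
  }

  /** Read the last known leader's core URL, or null if none has been recorded. */
  public static String read(ZooKeeper zk, String collection, String shard)
      throws KeeperException, InterruptedException {
    String path = "/collections/" + collection + "/last_known_leader/" + shard;
    if (zk.exists(path, false) == null) {
      return null;
    }
    return new String(zk.getData(path, false, null), StandardCharsets.UTF_8);
  }
}
<<<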
> Leader recovery process can select the wrong leader if all replicas for a
> shard are down and trying to recover
> --------------------------------------------------------------------------------------------------------------
>
> Key: SOLR-5552
> URL: https://issues.apache.org/jira/browse/SOLR-5552
> Project: Solr
> Issue Type: Bug
> Components: SolrCloud
> Reporter: Timothy Potter
> Labels: leader, recovery
>
> One particular issue that leads to out-of-sync shards, related to SOLR-4260
> Here's what I know so far, which admittedly isn't much:
> As cloud85 (replica before it crashed) is initializing, it enters the wait
> process in ShardLeaderElectionContext#waitForReplicasToComeUp; this is
> expected and a good thing.
> A short time later, cloud84 (leader before it crashed)
> begins initializing and gets to a point where it adds itself as a possible
> leader for the shard (by creating a znode under
> /collections/cloud/leaders_elect/shard1/election), which leads to cloud85
> being able to return from waitForReplicasToComeUp and try to determine who
> should be the leader.
> cloud85 then tries to run the SyncStrategy, which can never work because in
> this scenario the Jetty HTTP listener is not active yet on either node, so
> all replication work that uses HTTP requests fails on both nodes ... PeerSync
> treats these failures as indicators that the other replicas in the shard are
> unavailable (or whatever) and assumes success. Here's the log message:
> 2013-12-11 11:43:25,936 [coreLoadExecutor-3-thread-1] WARN
> solr.update.PeerSync - PeerSync: core=cloud_shard1_replica1
> url=http://cloud85:8985/solr couldn't connect to
> http://cloud84:8984/solr/cloud_shard1_replica2/, counting as success
> The Jetty HTTP listener doesn't start accepting connections until long after
> this process has completed and already selected the wrong leader.
> From what I can see, we seem to have a leader recovery process that is based
> partly on HTTP requests to the other nodes, but the HTTP listener on those
> nodes isn't active yet. We need a leader recovery process that doesn't rely
> on HTTP requests. Perhaps leader recovery for a shard w/o a current leader
> may need to work differently than leader election in a shard that has
> replicas that can respond to HTTP requests? All of what I'm seeing makes
> perfect sense for leader election when there are active replicas and the
> current leader fails.
> All this aside, I'm not asserting that this is the only cause for the
> out-of-sync issues reported in this ticket, but it definitely seems like it
> could happen in a real cluster.
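To make the failure mode in the quoted description concrete, here's a small
hypothetical sketch (not Solr's actual PeerSync code) of the problematic
pattern: a sync attempt whose connection failure is counted as success, which is
exactly wrong when the peer's HTTP listener simply hasn't started accepting
connections yet.
>>>
// Illustrative sketch only -- not Solr's PeerSync. The method and URL are
// hypothetical; the point is the catch block that "counts as success".
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class SyncSketch {

  /** Try to fetch recent versions from a peer; on connect failure, count it as success. */
  static boolean syncWithPeer(String peerCoreUrl) {
    try {
      HttpURLConnection conn =
          (HttpURLConnection) new URL(peerCoreUrl + "get?getVersions=100").openConnection();
      conn.setConnectTimeout(5000);
      conn.connect();
      // ... compare version lists and fetch missing updates (omitted) ...
      return conn.getResponseCode() == 200;
    } catch (IOException e) {
      // The peer may be gone for good -- but it may also just not be accepting
      // HTTP connections yet, as in the scenario above. Treating this as success
      // lets a stale replica win the leader election.
      return true;
    }
  }
}
<<<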