[
https://issues.apache.org/jira/browse/CURATOR-498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16731158#comment-16731158
]
Jordan Zimmerman edited comment on CURATOR-498 at 12/31/18 4:21 AM:
--------------------------------------------------------------------
I have a theory as to what's going on. I've diagrammed it below.
Curator has a feature called "protection" mode that handles the case of an
ephemeral node being created on the server but the client connection failing
before the server returns the name of the created ZNode. I believe there is
a potential bug here. If the connection is lost for long enough, Curator will
want to kill the session. Session deletions must be handled by the Leader ZK
instance. At the same time that the session kill is being processed, Curator's
protection mode handling could be asking the follower it's connected to for
the current list of children - this can be handled directly by the follower
instance without needing to call the leader. So, in this scenario, the client
will get a list of children that includes the ZNode that will be deleted as
part of killing the session. This would explain the behavior you're seeing.
I need to think more about this. I'll first try to create a test scenario that
reproduces this.
!CURATOR-498.png!
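To make the race concrete, here is a minimal sketch (the ensemble address and
the paths are placeholders; the direct getChildren() call stands in for the
protected-node lookup that Curator performs internally):
{code:java}
import java.util.List;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class ProtectionRaceSketch
{
    public static void main(String[] args) throws Exception
    {
        // placeholder connect string
        CuratorFramework client = CuratorFrameworkFactory.newClient(
            "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // protection mode prefixes the node name with a UUID so the client
        // can find the node even if the create() response was lost
        String path = client.create()
            .withProtection()
            .withMode(CreateMode.EPHEMERAL_SEQUENTIAL)
            .forPath("/latch/latch-");
        System.out.println(path);

        // if the connection was lost long enough for Curator to kill the
        // session, a follower can still serve this read and include the
        // ZNode that the ZK leader is about to delete with the session
        List<String> children = client.getChildren().forPath("/latch");
        System.out.println(children);

        client.close();
    }
}
{code}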
> LeaderLatch deletes leader and leaves it hung besides a second leader
> ---------------------------------------------------------------------
>
> Key: CURATOR-498
> URL: https://issues.apache.org/jira/browse/CURATOR-498
> Project: Apache Curator
> Issue Type: Bug
> Affects Versions: 4.0.1, 4.1.0
> Environment: ZooKeeper 3.4.13, Curator 4.1.0 (selecting explicitly
> 3.4.13), Linux
> Reporter: Shay Shimony
> Assignee: Jordan Zimmerman
> Priority: Major
> Attachments: CURATOR-498.png, HaWatcher.log, LeaderLatch0.java,
> ha.tar.gz, logs.tar.gz
>
>
> The Curator app I am working on uses the LeaderLatch to select a leader out
> of 6 clients.
> While testing my app, I noticed that when I make ZK lose its quorum for a
> while and then restore it, then after Curator in my app restores its
> connection to ZK, sometimes not all 6 clients are found in the latch
> path (checked with zkCli.sh). That is, I see 5 instead of 6.
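>
> For reference, the latch setup in each client is essentially the following
> sketch (the connect string and participant id are placeholders, not my
> actual configuration):
> {code:java}
> import org.apache.curator.framework.CuratorFramework;
> import org.apache.curator.framework.CuratorFrameworkFactory;
> import org.apache.curator.framework.recipes.leader.LeaderLatch;
> import org.apache.curator.retry.ExponentialBackoffRetry;
>
> // one of the 6 clients; connect string and id are placeholders
> CuratorFramework client = CuratorFrameworkFactory.newClient(
>     "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
> client.start();
>
> LeaderLatch latch = new LeaderLatch(client, "/ha/leader-latch", "client-N");
> latch.start();
> // after quorum loss and restore, zkCli.sh sometimes shows only 5 of the
> // 6 expected latch ZNodes under /ha/leader-latch
> {code}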
> After investigating a little, I have a suspicion that LeaderLatch deleted the
> leader in its setNode method.
> To investigate, I copied the LeaderLatch code and added some log messages,
> and from them it seems like a very old create() background callback was
> unexpectedly invoked and corrupted the current leader with its stale path
> name. That is, this old callback called setNode with its stale name,
> replacing the leader node with itself and deleting the leader. This leaves
> the client running, thinking it is the leader, while another leader is
> elected.
> If my analysis is correct, then it seems we need to have this obsolete
> create callback cancelled (I think its session was suspended at 22:38:54 and
> then lost at 22:39:04 - so on SUSPENDED, cancel in-flight callbacks; see the
> sketch below).
> Please see the attached log file and the modified LeaderLatch0.
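>
> One possible shape for the SUSPENDED handling is sketched below (the
> generation counter is illustrative only - it is not an existing Curator
> internal):
> {code:java}
> import java.util.concurrent.atomic.AtomicLong;
>
> import org.apache.curator.framework.state.ConnectionState;
> import org.apache.curator.framework.state.ConnectionStateListener;
>
> // illustrative only: bump a generation counter on SUSPENDED/LOST so that
> // background callbacks issued before the disconnect can see they are stale
> final AtomicLong generation = new AtomicLong();
>
> ConnectionStateListener listener = (c, newState) -> {
>     if (newState == ConnectionState.SUSPENDED || newState == ConnectionState.LOST) {
>         generation.incrementAndGet();   // anything in flight is now stale
>     }
> };
> client.getConnectionStateListenable().addListener(listener);
>
> // capture the generation when create() is submitted...
> long issuedAt = generation.get();
> // ...and in the background callback, drop stale results before setNode:
> // if (generation.get() != issuedAt) { return; }
> {code}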
>
> In the log, note that at 22:39:26 it shows that 0000000485 is replaced by
> 0000000480 and then probably deleted.
> Note also that at 22:38:52, 34 seconds earlier, we can see that it was in the
> reset() method ("RESET OUR PATH") and possibly triggered the creation of
> 0000000480 at that point.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)