[
https://issues.apache.org/jira/browse/CURATOR-504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16803405#comment-16803405
]
Yuri Tceretian commented on CURATOR-504:
----------------------------------------
I think this is more serious than I initially believed. I encountered a
situation where an instance of LeaderLatch got into a loop of creating/deleting
znodes, which stopped only after we restarted the service.
The
[gist|https://gist.github.com/yuri-tceretian/1f5ea4f37f574cae7ec01d75f5c11f9a]
(I can't attach files to the issue) contains a snippet of client logs that
shows a huge number of requests produced by a single client. In one of the lines
you can find the list of znodes under the latch's path at that time:
{{'_c_7c38ac16-5844-41b8-a8af-cd4f4624c3e9-latch-0002560578}}
{{'_c_500ff830-754d-42a8-b695-6a1d6422a306-latch-0000000136}}
{{'_c_33a67632-037b-4b30-9623-dfdf60ddbbe9-latch-0002560577}}
{{'_c_6d2fdd32-8e16-453a-a953-dcc06fccc380-latch-0000000130}}
{{'_c_f37f97c2-8f2d-4bf6-b363-dfcd5e480fa3-latch-0002560576}}
{{'_c_c83883a8-2d0a-46d5-afec-ff67f9f6d5d0-latch-0000000133}}
{{'_c_1bc15cc0-c46e-4270-9502-9e0673eec763-latch-0000000128}}
As you can see, there is a node with index 0002560576, which means that about
2.6 million znodes were created in roughly two hours, on the order of 350
creates per second (the three znodes with large indices can be explained by two
leader latches that shared the same connection and pointed to the same latch
path). I believe the problem started after the client's session timed out and
it reconnected to ZooKeeper.
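To illustrate, here is roughly how one could try to observe the churn locally
(a sketch only, assuming curator-test's {{TestingServer}}, a short session
timeout and a {{/test/latch}} path, none of which come from our production
setup):
{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.test.TestingServer;

public class LatchChurnRepro {
    public static void main(String[] args) throws Exception {
        try (TestingServer server = new TestingServer()) {
            CuratorFramework client = CuratorFrameworkFactory.builder()
                    .connectString(server.getConnectString())
                    .sessionTimeoutMs(5000) // short timeout so a restart can expire the session
                    .retryPolicy(new ExponentialBackoffRetry(100, 3))
                    .build();
            client.start();

            LeaderLatch latch = new LeaderLatch(client, "/test/latch");
            latch.start();
            latch.await(10, TimeUnit.SECONDS); // wait until our latch znode exists

            server.restart(); // force a disconnect/reconnect, like an unstable ensemble

            // A healthy latch settles back to a single child znode; in the buggy case
            // the child list keeps churning and the sequence numbers keep climbing.
            for (int i = 0; i < 10; i++) {
                Thread.sleep(1000);
                System.out.println(client.getChildren().forPath("/test/latch"));
            }

            latch.close();
            client.close();
        }
    }
}
{code}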
> Race conditions in LeaderLatch after reconnecting to ensemble
> -------------------------------------------------------------
>
> Key: CURATOR-504
> URL: https://issues.apache.org/jira/browse/CURATOR-504
> Project: Apache Curator
> Issue Type: Bug
> Affects Versions: 4.1.0
> Reporter: Yuri Tceretian
> Assignee: Jordan Zimmerman
> Priority: Minor
> Attachments: 51868597-65791000-231c-11e9-9bfa-1def62bc3ea1.png,
> Screen Shot 2019-01-31 at 10.26.59 PM.png,
> XP91JuD048Nl_8h9NZpH01QZJMfCLewjfd2eQNfOsR6GuApPNV.png
>
>
> We use LeaderLatch in many places in our system, and when the ZooKeeper
> ensemble is unstable and clients are reconnecting, the logs are full of
> messages like the following:
> {{[2017-08-31
> 19:18:34,562][ERROR][org.apache.curator.framework.recipes.leader.LeaderLatch]
> Can't find our node. Resetting. Index: -1 {}}}
> According to the
> [implementation|https://github.com/apache/curator/blob/4251fe328908e5fca37af034fabc190aa452c73f/curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java#L529-L536],
> this can happen in two cases:
> * When the internal state `ourPath` is null
> * When the list of latch znodes does not contain the expected one.
> I believe we hit the first condition because of races that occur after the
> client reconnects to ZooKeeper.
> * The client reconnects to ZooKeeper, and LeaderLatch gets the event and calls
> the reset method, which sets the internal state (`ourPath`) to null, removes
> the old latch znode and creates a new one. This happens in the thread
> "Curator-ConnectionStateManager-0".
> * Almost simultaneously, LeaderLatch gets another event, NodeDeleted
> ([here|https://github.com/apache/curator/blob/4251fe328908e5fca37af034fabc190aa452c73f/curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java#L543-L554]),
> and tries to re-read the list of latches and check leadership. This happens
> in the thread "main-EventThread".
> Therefore, sometimes the `checkLeadership` method is called while `ourPath` is
> still null; a minimal model of this interleaving is sketched below.
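> To make the interleaving concrete, here is a minimal, self-contained model of
> the race (a sketch only, not Curator's actual code; the class, field and
> thread names are illustrative):
> {code:java}
> import java.util.concurrent.atomic.AtomicInteger;
> import java.util.concurrent.atomic.AtomicReference;
>
> // Simplified stand-in for LeaderLatch's internal state and the two competing callbacks.
> public class LatchRaceSketch {
>     static final AtomicReference<String> ourPath = new AtomicReference<>();
>     static final AtomicInteger sequence = new AtomicInteger();
>
>     // Plays the role of reset() on "Curator-ConnectionStateManager-0".
>     static void reset() {
>         ourPath.set(null); // window in which checkLeadership() observes null
>         // In the real code the new znode is created by an async background callback,
>         // so the null window is much wider than in this sketch.
>         ourPath.set("/latch/node-" + sequence.incrementAndGet());
>     }
>
>     // Plays the role of checkLeadership() on "main-EventThread" after NodeDeleted.
>     static void checkLeadership() {
>         if (ourPath.get() == null) { // "Can't find our node. Resetting. Index: -1"
>             reset();                 // in LeaderLatch this deletes/creates yet another znode
>         }
>     }
>
>     public static void main(String[] args) throws InterruptedException {
>         Thread connectionThread = new Thread(LatchRaceSketch::reset);      // reconnect handling
>         Thread eventThread = new Thread(LatchRaceSketch::checkLeadership); // NodeDeleted callback
>         connectionThread.start();
>         eventThread.start();
>         connectionThread.join();
>         eventThread.join();
>         System.out.println("znodes created: " + sequence.get()); // 2 when the race fires, else 1
>     }
> }
> {code}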
> Below is an approximate diagram of what happens:
> !51868597-65791000-231c-11e9-9bfa-1def62bc3ea1.png|width=1261,height=150!