In step 3, during the time that zk1 is partitioned, latch1 cannot know whether 
it's the leader or not. Further, if zk2/zk3 are communicating then they form a 
quorum. Therefore, latch2 will declare that it is leader. So, latch1 MUST go 
into a non-leader state. It doesn't matter if latch1 reconnects before the 
session expires. The issue is how it manages its state while it is partitioned. 
To your point, latch1 could check to see if it's still leader when it 
reconnects, but I'm not sure that there's a huge win here. Any clients of the 
latch will have to have handled the leadership loss during the partition.
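That handling is just the normal LeaderLatchListener contract anyway. A minimal 
sketch, assuming a started CuratorFramework client, a placeholder latch path, 
and hypothetical startLeaderWork/stopLeaderWork hooks:

import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;

LeaderLatch latch = new LeaderLatch(client, "/my/leader/path"); // placeholder path
latch.addListener(new LeaderLatchListener() {
    @Override
    public void isLeader() {
        startLeaderWork(); // hypothetical hook: begin leader-only duties
    }

    @Override
    public void notLeader() {
        stopLeaderWork();  // hypothetical hook: stop immediately; another
                           // latch may already be leader on the other side
    }
});
latch.start();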

-JZ


From: chao chu [email protected]
Reply: [email protected] [email protected]
Date: March 10, 2014 at 10:21:29 AM
To: user [email protected]
Subject:  Re: Leader Latch recovery after suspended state  

Thanks for your reply first.

>>> Technically, this is already the case. When a RECONNECTED is received,
>>> LeaderLatch will attempt to regain leadership. The problem is that when
>>> there is a network partition there is no way to guarantee that you are
>>> still the leader. If there is a quorum in another segment of the cluster,
>>> a new leader might be elected there.

I don't quite understand what you meant here. But let me explain the case I 
mentioned in detail:

1. let's say there is a 3-server zk ensemble {zk1, zk2, zk3}, and two 
participants in the leader election {leader latch1, leader latch2}
2. latch1 is the current leader, and connected to zk1
3. the connection latch1 <--> zk1 breaks somehow
4. but soon after (within the session timeout), latch1 reconnects (maybe to zk2)

I guess the problem is that LeaderLatch calls setLeadership(false) (meaning 
that there must be a leader change) as soon as it gets a SUSPENDED state (or 
actually a ZK DISCONNECTED event).

While in this case, ideally (just my personal thinking), since latch1 
reconnected in time, its znode will still be there and no one else (latch2 in 
this example) will observe any events because of it. Once reconnected, if 
latch1 detects that its znode is still there, checkLeadership should still set 
it as leader, and there would be no leader change (no 'isLeader' or 'notLeader' 
would be called) during the whole process. Thus, we can avoid an unnecessary 
leader switch (which, as I mentioned, can be very expensive in most cases).
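Roughly, the check I'm imagining would look something like this (just a sketch, 
NOT what LeaderLatch does today; it assumes we remembered the name of the 
sequential node we created earlier):

import java.util.Collections;
import java.util.List;
import org.apache.curator.framework.CuratorFramework;

boolean stillLeaderAfterReconnect(CuratorFramework client, String latchPath,
                                  String ourNodeName) throws Exception {
    // If the session never expired, our ephemeral node is still there.
    if (client.checkExists().forPath(latchPath + "/" + ourNodeName) == null) {
        return false; // session expired meanwhile; must re-enter the election
    }
    // Node survived: we are still leader iff it still has the lowest sequence.
    // (Lexical sort works here because all latch nodes share the same prefix
    // and zero-padded sequence numbers.)
    List<String> children = client.getChildren().forPath(latchPath);
    Collections.sort(children);
    return children.get(0).equals(ourNodeName);
}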

Does this make any sense to you? Thanks.



On Mon, Mar 10, 2014 at 10:52 PM, Jordan Zimmerman <[email protected]> 
wrote:
Please provide an implementation/fix and submit a pull request on GitHub.

> I also have a related question about not only re-using the znode, but imho,
> it would be great if LeaderLatch could survive a temporary
> ConnectionLossException (i.e., due to a transient network issue).
Technically, this is already the case. When a RECONNECTED is received, 
LeaderLatch will attempt to regain leadership. The problem is that when there 
is a network partition there is no way to guarantee that you are still the 
leader. If there is a quorum in another segment of the cluster, a new leader 
might be elected there.
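For reference, these states surface to application code through Curator's 
ConnectionStateListener; a minimal sketch, assuming a started CuratorFramework 
client:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.state.ConnectionState;
import org.apache.curator.framework.state.ConnectionStateListener;

client.getConnectionStateListenable().addListener(new ConnectionStateListener() {
    @Override
    public void stateChanged(CuratorFramework client, ConnectionState newState) {
        switch (newState) {
            case SUSPENDED:   // connection lost; leadership is now in doubt
                break;
            case RECONNECTED: // connected again; LeaderLatch re-enters the election
                break;
            case LOST:        // session expired; the ephemeral latch node is gone
                break;
            default:
                break;
        }
    }
});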

-JZ

From: chao chu [email protected]
Reply: [email protected] [email protected]
Date: March 10, 2014 at 9:39:50 AM
To: [email protected] [email protected]
Subject:  Re: Leader Latch recovery after suspended state

Hi,

Just want to see if there is any progress on this?

I also have a related question about not only re-using the znode, but imho, it 
would be great if LeaderLatch could survive a temporary ConnectionLossException 
(i.e., due to a transient network issue).

I guess in most cases the context switch due to leader re-election is quite 
expensive; we might not want to do that just because of some transient issue. 
If the current leader can reconnect within the session timeout, it should 
still hold the leadership and no leader change would happen in between. The 
rationale is similar to the difference between ConnectionLossException (which 
is recoverable) and SessionExpiredException (which is not).
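For what it's worth, the session timeout in question is the one configured on 
the Curator client; a minimal sketch, with placeholder connect string and 
timeout values:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Placeholder connect string and timeout values.
CuratorFramework client = CuratorFrameworkFactory.builder()
        .connectString("zk1:2181,zk2:2181,zk3:2181")
        .sessionTimeoutMs(60000)   // disconnects shorter than this keep the
                                   // session (and ephemeral nodes) alive
        .connectionTimeoutMs(15000)
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        .build();
client.start();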

What are your thoughts on this? Thanks a lot!

Regards,


On Wed, Aug 21, 2013 at 2:05 AM, Jordan Zimmerman <[email protected]> 
wrote:
Yes, I was suggesting how to patch Curator.

On Aug 20, 2013, at 10:59 AM, Calvin Jia <[email protected]> wrote:

Currently this is not supported in the Curator library, but is the Curator 
library (specifically LeaderLatch's reset method) the correct/logical place to 
add this feature if I want it?


On Tue, Aug 20, 2013 at 10:34 AM, Jordan Zimmerman <[email protected]> 
wrote:
On reset() it could check to see if its node still exists. It would make the 
code a lot more complicated though.
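Something along these lines, perhaps (purely a sketch; ourPath, 
checkLeadership, and createLatchNode are hypothetical stand-ins for the 
latch's internals):

// Purely a sketch of the suggested guard in reset(); ourPath, checkLeadership,
// and createLatchNode are hypothetical stand-ins for the latch's internals.
if (ourPath != null && client.checkExists().forPath(ourPath) != null) {
    // The original ephemeral node survived the reconnect: keep it and just
    // re-evaluate leadership instead of deleting and recreating the node.
    checkLeadership();
} else {
    // The node is gone (session expired): fall through to the normal
    // delete-and-recreate logic.
    createLatchNode();
}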

-JZ

On Aug 20, 2013, at 10:25 AM, Calvin Jia <[email protected]> wrote:

A leader latch enters the suspended state after failing to receive a response 
from the first ZK machine it heartbeats to (this takes two thirds of the 
session timeout). For the last third, it tries to contact another ZK machine. 
If it is successful, it will enter the RECONNECTED state.

However, on reconnect, despite the fact that the original node it created in 
ZK is still there, it will create another ephemeral-sequential node (the reset 
method is called). This means it will relinquish leadership if there is 
another machine with a latch on the same path.
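For context, the latch node is an ephemeral-sequential znode, so a recreated 
node always gets a higher sequence number and sorts behind every other 
participant's existing node. A minimal sketch of the creation call, with a 
placeholder path:

import org.apache.zookeeper.CreateMode;

// Each participant's latch node is ephemeral-sequential. A node created after
// reconnecting gets a strictly higher sequence number than everyone else's
// existing nodes, so the recreated latch sorts last and loses leadership.
String ourPath = client.create()
        .creatingParentsIfNeeded()
        .withMode(CreateMode.EPHEMERAL_SEQUENTIAL)
        .forPath("/my/leader/path/latch-"); // placeholder path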

Is there any way to reconnect and reuse the original ZK node?

Thanks!






--
ChuChao



--
ChuChao
