I think that Mahadev was correct that there is some confusion here.
Leader election is normally a term used for an operation that is entirely
internal to ZK. It is very robust and you probably don't need to worry about it.
You can then use ZK in your application to pick a lead machine for other
operations. In that case, essentially every failure scenario is handled by
the standard recipe. In your example where the master and slave are cut
off, but both still have access to ZK, all that will happen is that the
master cannot communicate with the slave. Both will still be clear about
who is in which role.
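To illustrate why both sides stay clear about their roles, here is a minimal in-memory sketch of the standard election recipe. These are hypothetical names, not real ZooKeeper API calls; the real recipe uses ephemeral sequential znodes under an election path, with each contender watching the znode just ahead of its own.

```python
class FakeElection:
    """In-memory stand-in for ZK's standard leader-election recipe.

    In real ZooKeeper, each contender creates an ephemeral sequential
    znode under an election path; the contender with the lowest sequence
    number is the leader, and each contender watches the znode just ahead
    of it. Here we model only the observable behavior.
    """
    def __init__(self):
        self._contenders = []   # ordered by (simulated) sequence number
        self._callbacks = {}    # name -> leader-change callback

    def join(self, name, on_leader_change):
        self._contenders.append(name)
        self._callbacks[name] = on_leader_change

    def leader(self):
        return self._contenders[0] if self._contenders else None

    def session_expired(self, name):
        # The ephemeral znode vanishes; surviving contenders are notified.
        self._contenders.remove(name)
        del self._callbacks[name]
        for cb in self._callbacks.values():
            cb(self.leader())

# Master and slave agree on who the leader is as long as each can reach
# ZK, even if they cannot reach each other.
election = FakeElection()
events = []
election.join("master", lambda l: events.append(("master sees", l)))
election.join("slave", lambda l: events.append(("slave sees", l)))
print(election.leader())   # -> master
```

The point of the sketch: role assignment is derived entirely from state held in ZK, so a partition between master and slave changes nothing about who holds which role.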
The case where the master is cut off from both ZK and the slave is also
handled well as is the case where the master is cut off from ZK, but not
from the slave. In both cases, the master will get a connection loss event
and stop trying to act like a master and the slave will be notified that the
master has dropped out of its role.
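The client-side rule described above can be sketched as a small state machine (hypothetical names; the event values mirror ZooKeeper's Disconnected and Expired session events, but this is not the real client API):

```python
from enum import Enum

class SessionEvent(Enum):
    SYNC_CONNECTED = 1
    DISCONNECTED = 2   # connection loss: the session may still be alive
    EXPIRED = 3        # session gone: ephemeral znodes are deleted

class MasterRole:
    """On connection loss the master must stop acting as master: it
    cannot tell whether its leadership znode still exists, so the safe
    move is to step down until the connection is re-established and
    leadership is confirmed (or re-acquired after session expiry)."""
    def __init__(self):
        self.acting_as_master = False

    def elected(self):
        self.acting_as_master = True

    def on_session_event(self, event):
        if event in (SessionEvent.DISCONNECTED, SessionEvent.EXPIRED):
            self.acting_as_master = False

role = MasterRole()
role.elected()
role.on_session_event(SessionEvent.DISCONNECTED)
print(role.acting_as_master)   # -> False
```

Note the deliberately conservative choice: the master steps down on mere disconnection, not just expiry, because during a connection loss it cannot distinguish "session alive" from "someone else already elected".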
On Fri, Apr 30, 2010 at 4:05 PM, Lei Gao <l...@linkedin.com> wrote:
> Hi Mahadev,
> Why would the leader be disconnected from ZK? ZK is fine communicating with
> the leader in this case. We are talking about asymmetric network failure.
> Yes, the leader could consider all the slaves to be down if it tracks the
> liveness of all the slaves itself. But I guess if ZK is used for membership
> management, neither the leader nor the slaves will be considered
> disconnected, because they can all connect to ZK.
> On 4/30/10 3:47 PM, "Mahadev Konar" <maha...@yahoo-inc.com> wrote:
> > Hi Lei,
> > In this case, the Leader will be disconnected from the ZK cluster and will
> > give up its leadership. Since it is disconnected, the ZK cluster will
> > realize that the Leader is dead!....
> > When the ZK cluster realizes that the Leader is dead (because it hasn't
> > heard from the Leader within the configurable session timeout), the slaves
> > will be notified of this via watches in the zookeeper cluster. The slaves
> > will realize that the Leader is gone, will re-elect a new Leader, and will
> > start working with the new Leader.
> > Does that answer your question?
> > You might want to look through the documentation of ZK to understand its
> > use cases and how it solves these kinds of issues....
> > Thanks
> > mahadev
> > On 4/30/10 2:08 PM, "Lei Gao" <l...@linkedin.com> wrote:
> >> Thank you all for your answers. It clarifies a lot of my confusion about
> >> the service guarantees of ZK. I am still struggling with one failure case
> >> (I am not trying to be a pain in the neck, but I need to have a full
> >> understanding of what ZK can offer before I make a decision on whether to
> >> use it in my cluster.)
> >> Assume the following topology:
> >>    Leader ==== ZK cluster
> >>       \\        //
> >>        \\      //
> >>         \\    //
> >>          Slave(s)
> >> If I hit an asymmetric network failure such that the connections between
> >> the Leader and Slave(s) are broken while all other connections are still
> >> alive, will my system hang after some point? Because no new leader
> >> election will be initiated by the slaves, and the leader can't get work to
> >> the slave(s).
> >> Thanks,
> >> Lei
> >> On 4/30/10 1:54 PM, "Ted Dunning" <ted.dunn...@gmail.com> wrote:
> >>> If one of your user clients can no longer reach one member of the ZK
> >>> cluster, then it will try to reach another. If it succeeds, then it will
> >>> continue without any problems as long as the ZK cluster itself is OK.
> >>> This applies for all the ZK recipes. You will have to be a little bit
> >>> careful to handle connection loss, but that should get easier soon (and
> >>> isn't all that difficult anyway).
> >>> On Fri, Apr 30, 2010 at 1:26 PM, Lei Gao <l...@linkedin.com> wrote:
> >>>> I am not talking about the leader election within the zookeeper cluster.
> >>>> I guess I didn't make the discussion context clear. In my case, I run a
> >>>> cluster that uses zookeeper for doing the leader election. Yes, the
> >>>> nodes in my cluster are the clients of zookeeper. Those nodes depend on
> >>>> zookeeper to elect a leader and figure out what the current leader is.
> >>>> So if the zookeeper (think of it as a stand-alone entity) becomes
> >>>> unavailable in the way I've described earlier, how can I handle such a
> >>>> situation so my cluster can still function while a majority of nodes
> >>>> still connect to each other (but not to the zookeeper)?