There is no such way currently, and implementing it would probably be harder than implementing the addition and removal of servers in a ZooKeeper cluster. It does seem like a useful feature, though, to be able to gracefully decommission a running ZooKeeper leader.
ben

On Tuesday 22 July 2008 17:16:05 Patrick Hunt wrote:
> Is there a way to ask the zk cluster to switch leaders? Switch in such a way that it doesn't cause a "virtual bounce"? (I mean, can we code something that would enable this?)
>
> Patrick
>
> Benjamin Reed wrote:
> > Creative idea. This should work. It's kind of a pain. I don't think there is a Jira opened for this, but you should open one. It's really a matter of implementation. You should be able to add (or remove) a machine to ZooKeeper and just have it get integrated without having to do any restarting. The real problem with the restarting is that eventually you will get to the leader, which will cause a virtual bounce of everyone when a new leader gets elected, so you really don't save much by doing it gradually.
> >
> > ben
> >
> > Austin Shoemaker wrote:
> >> We are using ZooKeeper to implement consistent hashing, and need to be able to add or remove hosts from the ZooKeeper service without interrupting its functionality.
> >>
> >> According to http://zookeeper.wiki.sourceforge.net/ZooKeeperConfiguration: "Every machine that is part of the ZooKeeper service needs to know about every other machine. So, we need to have a list of machines in the configuration file."
> >>
> >> Can we add a single ZooKeeper server to the service with an expanded host list without interrupting the proper functioning of the service? The old servers will each have host list s_1, ..., s_k, and the new server s_k+1 will have host list s_1, ..., s_k, s_k+1. Now we need to restart the rest of the servers one by one from s_1 to s_k with the new host list s_1, ..., s_k, s_k+1. Is this a violation of the service specification?
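For reference, the rolling-expansion procedure Austin describes can be sketched as a config change plus a one-server-at-a-time restart loop. Everything below (hostnames, paths, the `zkServer.sh` location) is a hypothetical illustration, not a tested recipe from this thread; as Ben notes above, restarting whichever server is the current leader will still trigger a leader election.

```shell
# Sketch of the rolling expansion (hypothetical hosts and paths).
# Old ensemble: s1..s3. New server: s4.
# Step 1: start s4 with the expanded list (server.1..server.4 in its zoo.cfg).
# Step 2: push the expanded list to each old server and restart them one at a time.

cat > zoo.cfg.new <<'EOF'
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=s1:2888:3888
server.2=s2:2888:3888
server.3=s3:2888:3888
server.4=s4:2888:3888
EOF

for host in s1 s2 s3; do
  # Dry run: echo the commands rather than executing them.
  echo "scp zoo.cfg.new $host:/etc/zookeeper/zoo.cfg"
  echo "ssh $host 'zkServer.sh restart'"
done
```

Restarting one follower at a time keeps a quorum up throughout; only the leader's restart causes the "virtual bounce" Ben mentions, since the whole ensemble pauses for the new election.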
> >>
> >> Thanks,
> >>
> >> Austin

_______________________________________________
Zookeeper-user mailing list
Zookeeper-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/zookeeper-user