I forgot to mention this. You may also consider adding more ZooKeeper
servers and setting the weight of such servers to zero. We will be
introducing this possibility in 3.2.1 (the upcoming release). Zero-
weight servers simulate observers, but they do not behave exactly as
observers, since they still send all the messages required for the
agreement phase of our update protocol.
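As a rough sketch, the weights might be configured like this (the host names, and the split between three voting servers and two zero-weight servers, are hypothetical; see the "Cluster options" section of the Administrator's Guide for the exact syntax):

```
# zoo.cfg sketch (hypothetical hosts): three voting servers plus
# two zero-weight servers that carry client load but contribute
# nothing to the quorum vote.
server.1=host1:2888:3888
server.2=host2:2888:3888
server.3=host3:2888:3888
server.4=host4:2888:3888
server.5=host5:2888:3888
group.1=1:2:3:4:5
weight.1=1
weight.2=1
weight.3=1
weight.4=0
weight.5=0
```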
In this way, I expect you to be able to scale to the number of watches
you're talking about, assuming you can add enough servers.
To compute the number of ZooKeeper servers you need, I suspect you
only need to determine how many connections you want each server to
handle, and divide the total number of clients by that number. For
example, if you have an ensemble of 21 servers, and only followers
accept client connections, then each of the 20 followers will have to
handle 5K connections (assuming your 100K case).
Is 5K connections per server reasonable?
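The arithmetic above can be sketched as a small helper (a back-of-envelope estimate, assuming the leader accepts no client connections and the clients spread evenly across the followers):

```java
// Back-of-envelope estimate of client connections per follower,
// assuming the leader accepts no client connections and clients
// are spread evenly across the remaining servers.
public class ConnectionEstimate {
    static int connectionsPerFollower(int clients, int ensembleSize) {
        int followers = ensembleSize - 1; // leader excluded
        return (int) Math.ceil((double) clients / followers);
    }

    public static void main(String[] args) {
        // 100K clients on a 21-server ensemble -> 5000 per follower
        System.out.println(connectionsPerFollower(100_000, 21));
    }
}
```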
For some information on how to set the weight of servers, check the
"Cluster options" section of the Administrator's guide of 3.2.0. I
believe we will have some more documentation in the upcoming release.
On Aug 30, 2009, at 4:32 AM, Mahadev Konar wrote:
100K clients would be a stretch. We have never tested at that scale.
100K watches should not be a problem at all. I am more concerned about
the number of client connections that would result at each of the
servers. In the case of 5 servers, that would be 20K persistent
clients to each ZooKeeper server, which seems really high and not
really feasible. With a higher number of quorum servers, like 13 or
so, you should have a more reasonable number of connections per
server, but again you would just be running at the edge (in case some
number of servers went down).
Are all 100K machines running in the same data center? If not (which
is more likely the case?), I would suggest running separate ZooKeeper
ensembles in different data centers and using a bridge to keep them
synchronized with each other.
If you can shed more light on the setup and use case of these 100K
machines, I think we can work out a reasonable solution.
On 8/29/09 6:16 PM, "Ted Dunning" <ted.dunn...@gmail.com> wrote:
That is probably a bit beyond reasonable levels of scaling. For one
thing, putting 100,000 machines close together in a network is a bit
unusual. The two major limitations are likely to be memory for keeping
the watches on the server side and bandwidth for publishing the
notifications.
That said, ZK is solid enough that I would not be surprised if it
handled that level with sufficient memory and a low enough update rate.
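Those two limits can be put in rough numbers (a sketch only; the per-watch memory cost and per-notification message size below are assumptions, not measured figures):

```java
// Rough sizing of the two limits discussed above: server-side
// memory for holding watches, and bandwidth for delivering watch
// notifications. The per-watch and per-message byte counts are
// illustrative assumptions.
public class WatchSizing {
    static long watchMemoryBytes(long watches, long bytesPerWatch) {
        return watches * bytesPerWatch;
    }

    static long notificationBytes(long watchers, long bytesPerMessage) {
        // one notification per watcher when the watched znode changes
        return watchers * bytesPerMessage;
    }

    public static void main(String[] args) {
        // 100K watches at an assumed ~250 bytes each -> ~25 MB of heap
        System.out.println(watchMemoryBytes(100_000, 250));
        // 100K notifications at an assumed ~100 bytes each -> ~10 MB
        // pushed onto the network per triggering update
        System.out.println(notificationBytes(100_000, 100));
    }
}
```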
On Sat, Aug 29, 2009 at 2:51 PM, Avinash Lakshman wrote:
Is it possible to have 100K machines register for a watch on a given
znode? Theoretically it should work, but can ZK scale to that many
watchers when it comes to delivering watch notifications? Perhaps no
one has experience in dealing with this, but is there any fundamental
limitation I should be aware of? These 100K machines are only
interested in