100K clients would be a stretch. We have never tested at that scale.
100K watches should not be a problem at all. I am more concerned about the
number of client connections that would result to each of the zookeeper
servers.
In the case of 5 servers, that would be 20K persistent client connections
to each of the zookeeper servers, which seems really high and not really
feasible. With a larger quorum, say 13 servers or so, you would have a more
reasonable number of connections per server, but you would still be running
at the edge (in case some of the servers went down, the surviving servers
would have to absorb their clients).
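The arithmetic above can be sketched quickly. This is only a back-of-envelope
estimate assuming clients spread evenly across the ensemble; the function
name and the even-distribution assumption are mine, not from any ZooKeeper
tooling:

```python
def connections_per_server(num_clients, ensemble_size, servers_down=0):
    """Estimate persistent client connections per live ZooKeeper server,
    assuming clients distribute evenly across the servers still up.
    (Illustrative sketch only -- real client placement is not uniform.)"""
    live = ensemble_size - servers_down
    return num_clients // live

# 100K clients against a 5-server ensemble: 20K connections each.
print(connections_per_server(100_000, 5))        # 20000
# A 13-server ensemble spreads the same load much thinner.
print(connections_per_server(100_000, 13))       # 7692
# But lose 3 of those 13 and each survivor picks up the slack.
print(connections_per_server(100_000, 13, 3))    # 10000
```

This is why a larger quorum only buys headroom while every server is up:
the failure case pushes each survivor right back toward the edge.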
Are all 100K machines running in the same data center? If not (which I think
is more likely the case?), I would suggest running different zookeeper
ensembles in different data centers and using a bridge to keep them
synchronized amongst themselves.
If you can shed more light on the setup and use case of these 100K
machines, I think we can work out a reasonable solution.
On 8/29/09 6:16 PM, "Ted Dunning" <ted.dunn...@gmail.com> wrote:
> That is probably a bit beyond reasonable levels of scaling. For one thing,
> putting 100,000 machines close together in a network is a bit tricky. The
> two major limitations are likely to be memory for keeping the watches on the
> server side and bandwidth for publishing the notifications.
> That said, ZK is solid enough that I would not be surprised if it scaled to
> that level with sufficient memory and low enough update rate.
> On Sat, Aug 29, 2009 at 2:51 PM, Avinash Lakshman <
> avinash.laksh...@gmail.com> wrote:
>> Hi All
>> Is it possible to have 100K machines register for a watch on a znode? I
>> think theoretically yes it should work, but can ZK scale to this many
>> instances when it comes to delivering watch notifications? Perhaps no one
>> has practical experience dealing with this, but is there any fundamental
>> limitation I should be aware of? These 100K machines are only interested in receiving