Hi all,
I would like to customize the ZooKeeper configuration so that when a client
is disconnected, the server immediately removes all ephemeral nodes
associated with that client.
No waiting for the session timeout, or at least only a very short wait. But I
would like to keep the feature that
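One reason "instantaneous" removal is hard to configure: the server negotiates the session timeout with the client, clamping whatever the client requests into the range [minSessionTimeout, maxSessionTimeout], which default to 2x and 20x the server's tickTime. A minimal sketch of that clamping (function name and defaults are illustrative, following the documented defaults):

```python
def negotiated_session_timeout(requested_ms, tick_ms=2000,
                               min_ms=None, max_ms=None):
    """Clamp a client-requested session timeout the way the server does.

    minSessionTimeout and maxSessionTimeout default to 2x and 20x
    tickTime respectively, per the admin guide.
    """
    if min_ms is None:
        min_ms = 2 * tick_ms
    if max_ms is None:
        max_ms = 20 * tick_ms
    return max(min_ms, min(requested_ms, max_ms))

# With the common tickTime of 2000 ms, even a request for a 1 ms
# timeout is raised to 4000 ms, so ephemeral nodes cannot vanish
# "instantaneously" after an unclean disconnect.
print(negotiated_session_timeout(1))               # 4000
print(negotiated_session_timeout(1, tick_ms=100))  # 200
```

Lowering tickTime shrinks the floor, but at the cost of more heartbeat traffic and a higher chance of spurious expirations.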
Greetings,
As some of you already know, we've been using ZooKeeper at Canonical
for a project we've been pushing (Ensemble, http://j.mp/dql6Fu).
We've already written txzookeeper (http://j.mp/d3Zx7z), to
integrate the Python bindings with Twisted, and we're also in the
process of creating a
I had a question about the number of clients against a ZooKeeper cluster. I was
looking at having between 10,000 and 100,000 (towards 100,000) watchers within
a single datacenter at a given time. Assuming that some fraction of that
number are active clients and the r/w ratio is well within the
Can you clarify what you mean when you say 10-100K watchers? Do you mean
10-100K clients with 1 active watch, or some lesser number of clients with more
watches, or a few clients doing a lot of watches and other clients doing other
things?
-Original Message-
From: Jeremy Hanna
Camille, that's a very good question. Largest cluster I've heard about
is 10k sessions.
Jeremy - largest I've ever tested was a 3 server cluster with ~500
sessions. Each session created 10k znodes (100 bytes each) and
set 5 watches on each. So 5 million znodes and 25 million watches. I
then
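The arithmetic behind those totals, spelled out (all numbers come straight from the post above):

```python
sessions = 500
znodes_per_session = 10_000
watches_per_znode = 5

total_znodes = sessions * znodes_per_session
total_watches = total_znodes * watches_per_znode

print(total_znodes)   # 5,000,000 znodes across the cluster
print(total_watches)  # 25,000,000 watches
```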
This is exactly the scenario that you use to test session expiration: make one
connection to a ZooKeeper server and then another with the same session id and
password, and
close the second connection, which causes the first to expire. It is only a
clean close that will cause this to happen, though (one where
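The dance described above, as an outline (names follow the Java client, where the constructor accepts a session id and password; treat this as pseudocode, not runnable code, since it needs a live ensemble):

```
zk1 = connect(servers, timeout)              # creates a new session
sid, pwd = zk1.sessionId, zk1.sessionPasswd  # capture the credentials

zk2 = connect(servers, timeout, sid, pwd)    # attach to the SAME session
zk2.close()                                  # clean close kills the session

# zk1 now receives an Expired event and its ephemeral nodes are removed
```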
Hi Camille,
Check out ZKClient: https://github.com/sgroschupf/zkclient
The way this client deals with sessions is pretty nice and clean and I ended
up using a lot of this code as the basis for my Java client.
Looking at the code base, it feels like a pretty dumb wrapper on top of the
standard ZK client.
fyi: I haven't heard of anyone running over 10k sessions. I've tried
20k before and had issues, you may want to look at this sooner rather
than later.
* Server GC tuning will be an issue (be sure to use CMS/incremental).
* Be sure to disable clients accessing the leader (server configuration
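The two tips above translate into configuration along these lines (fragments are illustrative; `leaderServes` is the standard server option for keeping clients off the leader, and the JVM flags are one common CMS/incremental choice, not the only one):

```
# zoo.cfg -- example fragment
tickTime=2000
# Keep the leader from serving client connections so it can devote
# itself to coordinating the quorum
leaderServes=no
```

```
# conf/java.env -- example JVM flags for CMS/incremental GC
export JVMFLAGS="-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
```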
Right now, if you have a partition between client and server A, I would not
expect
server A to see a clean close from the client, but one of the various
exceptions
that cause the socket to close.
Please don't get me wrong, but I find it very funny to rely on the
stability of a network
We tested up to the ulimit (~16K) of connections against a single server and
performance was ok, but I would definitely try to do some serious load testing
before I put a system into production that I knew was going to have that load
from the get-go.
The system degrades VERY ungracefully when
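The ~16K figure mentioned above is the per-process open-file limit, since each connected client holds one socket on the server. You can check it with:

```shell
# Show the current per-process open-file limit; this caps the number of
# concurrent client connections a single ZooKeeper server can accept.
# Raising it typically means editing /etc/security/limits.conf (or the
# service's init script) rather than just calling ulimit interactively.
ulimit -n
```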
Ah, I see. You are manually reestablishing the connection to B using the
session identifier for the session with A.
The problem is that when you call close on a session, it kills the
session. We don't really have a way to close a handle without doing that.
(actually there is a test class that
At Canonical we've been using ZooKeeper heavily in the development of a new
project (Ensemble), as noted by Gustavo.
I just wanted to give a quick overview of the client library we're using for
it. It's called txzookeeper; it has 100% test coverage, and implements
various queue, lock, and
On Thu, Nov 18, 2010 at 3:46 PM, Jeremy Hanna
jeremy.hanna1...@gmail.com wrote:
Unless I misunderstand, active watches aren't open sessions. If that's the
case, I don't think we'll hit the 10K-20K number of open sessions at a given
time. However, that's a good boundary to keep in mind as we