Actually you can do it today, but not as easily. Look at the patch for 544: in your test you need to create your own subclass of ZooKeeper, and then you can use the cnxn field in the same way the patch does in order to access the data. Not particularly hard, but we've wrapped it up in a nice package for 3.3.0.
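The real trick subclasses org.apache.zookeeper.ZooKeeper in Java to reach its protected connection object. As a rough illustration of the shape of that approach only, here is an in-memory Python mock; SimpleClient, _Cnxn, and all names below are invented stand-ins, not ZooKeeper API:

```python
class _Cnxn:
    """Stand-in for the client connection object that knows the server address."""
    def __init__(self, server):
        self.server = server

class SimpleClient:
    """Stand-in for the ZooKeeper client; keeps its connection non-public."""
    def __init__(self, server):
        self._cnxn = _Cnxn(server)

class InspectableClient(SimpleClient):
    """Test subclass that exposes which server the connection points at,
    mirroring how a ZooKeeper subclass can reach the cnxn field."""
    def connected_server(self):
        return self._cnxn.server

client = InspectableClient("zk1.example.com:2181")
print(client.connected_server())  # -> zk1.example.com:2181
```

The point is only that a test-local subclass can surface internal connection state without changing the client itself.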

I wouldn't stress out too much about this particular issue, though. If you implement a reasonable recipe, ZooKeeper is doing the heavy lifting (and the theory), and things should just work out in the end. I'd be interested to hear if you do find anything interesting, though.



Mahadev Konar wrote:
Hi Martin,
 Currently you cannot access the server that the client is connected to.
This was fixed in this jira

But again, this does not tell you whether you are connected to the primary or
to one of the followers. So you will in any case have to do some manual
testing, specifying the client host:port address as just the primary, or just
a follower (for the follower test case).
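The manual approach above amounts to pointing the client at a single server rather than the whole ensemble, by narrowing the connect string. The host names below are hypothetical; note also that which server is the leader can change after an election, so the role should be rechecked before each test run:

```text
# Whole ensemble: the client may end up on any server
zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# Primary-only test: list just the server currently acting as leader
zk1.example.com:2181

# Follower test: list just a known follower
zk2.example.com:2181
```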

Leaking information like whether the server is the primary or not can lead
applications to use that information in the wrong way. So we never exposed
this information! :)


On 2/24/10 11:25 AM, "Martin Waite" <> wrote:


I take the point that the watch is useful for stopping clients from unnecessarily
pestering the zk nodes.

I think that this is something I will have to experiment with and see how it
goes.  I only need to place about 10k locks per minute, so I am hoping that
whatever approach I take is well within the headroom of Zookeeper on some
reasonable boxes.

Is it possible for the client to know whether it has connected to the
current primary or not? During my testing I would like to make sure that
the approach works both when the client is attached to the primary and when
it is attached to a lagged non-primary node.


On 24 February 2010 18:42, Ted Dunning <> wrote:

Random back-off like this is unlikely to succeed (it seems to me). Better to
use the watch on the locks directory to make the wait as long as necessary
AND as short as possible.

On Wed, Feb 24, 2010 at 8:53 AM, Patrick Hunt <> wrote:

Anyone interested in locking an explicit resource attempts to create an
ephemeral node in /locks with the same ### as the resource they want. If
interested in just getting "any" resource, then you would
getchildren(/resources) and getchildren(/locks) and attempt to lock a
resource not in the intersection (avail). This can be done efficiently since
resources won't change much: just cache the results of getchildren and set a
watch at the same time. To lock a resource, randomize "avail" and attempt to
lock each in turn. If all of avail fail to acquire the lock, then hold off
for some time, re-getchildren(/locks), and start over.
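A minimal sketch of that recipe, simulated in-memory rather than against a live ensemble: the sets stand in for getchildren(/resources) and getchildren(/locks), and a successful ephemeral create is modeled as an atomic set insertion. All names here are invented for illustration, not ZooKeeper API:

```python
import random

def try_lock_any(resources, locks, rng=random):
    """Attempt to lock any available resource, per the recipe above.

    resources: set of resource ids (children of /resources)
    locks:     set of ids already locked (children of /locks); adding to
               this set models a successful ephemeral create in /locks.
    Returns the id locked, or None if every attempt failed, in which case
    the caller should hold off, re-read /locks, and start over.
    """
    avail = list(resources - locks)   # resources not in the intersection
    rng.shuffle(avail)                # randomize to spread contention
    for rid in avail:
        if rid not in locks:          # create would fail if the node exists
            locks.add(rid)            # simulated ephemeral create succeeds
            return rid
    return None

resources = {"001", "002", "003"}
locks = {"002"}
print(try_lock_any(resources, locks))  # one of "001" or "003"
```

In the real recipe the `rid not in locks` check and the insertion are a single create() call, which either succeeds or fails atomically on the server; the simulation collapses that into one step.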

Ted Dunning, CTO
