Hi Martin,
Currently you cannot find out which server the client is connected to.
This was fixed in this JIRA:

http://issues.apache.org/jira/browse/ZOOKEEPER-544

But even that does not tell you whether you are connected to the primary or
to one of the followers. So you will anyway have to do some manual testing,
specifying the client host:port connect string as just the primary, or just
a follower (for the follower test case).
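
One way to check which role a given server is playing before pointing a test
client at it is the "stat" four-letter word: connect to a server's client
port, send "stat", and look for the "Mode:" line in the reply. A minimal
sketch in Java; zk1:2181 is a placeholder host:port:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.Socket;

    public class ZkMode {
        // Send the "stat" four-letter word to one server; the reply
        // includes a line like "Mode: leader" (or follower / standalone).
        public static String mode(String host, int port) throws Exception {
            Socket s = new Socket(host, port);
            try {
                s.getOutputStream().write("stat".getBytes());
                s.getOutputStream().flush();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("Mode:")) {
                        return line.substring("Mode:".length()).trim();
                    }
                }
                return "unknown";
            } finally {
                s.close();
            }
        }

        public static void main(String[] args) throws Exception {
            // zk1:2181 is a placeholder; run this against each server in
            // the ensemble to find out which one is currently the leader.
            System.out.println(mode("zk1", 2181));
        }
    }

Running this against each server tells you who the current leader is, so you
can build a connect string containing only that server (or only a follower).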

Leaking information like whether the server is the primary or not can cause
applications to use it in the wrong way. So we never exposed this
information! :)

Thanks
mahadev




On 2/24/10 11:25 AM, "Martin Waite" <waite....@googlemail.com> wrote:

> Hi,
> 
> I take the point that the watch is useful for stopping clients unnecessarily
> pestering the zk nodes.
> 
> I think that this is something I will have to experiment with and see how it
> goes.  I only need to place about 10k locks per minute, so I am hoping that
> whatever approach I take is well within the headroom of ZooKeeper on some
> reasonable boxes.
> 
> Is it possible for the client to know whether it has connected to the
> current primary or not? During my testing I would like to make sure that
> the approach works both when the client is attached to the primary and when
> attached to a lagged non-primary node.
> 
> regards,
> Martin
> 
> On 24 February 2010 18:42, Ted Dunning <ted.dunn...@gmail.com> wrote:
> 
>> Random back-off like this is unlikely to succeed (seems to me).  Better to
>> use the watch on the locks directory to make the wait as long as possible
>> AND as short as possible.
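
A minimal sketch of what that watch-based wait could look like, assuming the
/locks parent path from the recipe quoted below (the surrounding retry loop
is omitted):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class LocksWatch {
        // Block until the children of /locks change, instead of sleeping
        // for a random interval. The watch set by getChildren is one-shot:
        // it fires once on the next create/delete under /locks.
        public static void awaitLocksChange(ZooKeeper zk) throws Exception {
            final CountDownLatch changed = new CountDownLatch(1);
            zk.getChildren("/locks", new Watcher() {
                public void process(WatchedEvent event) {
                    changed.countDown();
                }
            });
            changed.await();
        }
    }

Note the watch fires on any child change, including other clients taking
locks, so the caller still re-scans /locks and may end up waiting again.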
>> 
>> On Wed, Feb 24, 2010 at 8:53 AM, Patrick Hunt <ph...@apache.org> wrote:
>> 
>>> Anyone interested in locking an explicit resource attempts to create an
>>> ephemeral node in /locks with the same ### as the resource they want
>>> access to. If interested in just getting "any" resource then you would
>>> getchildren(/resources) and getchildren(/locks) and attempt to lock
>>> anything not in the intersection (avail). This could be done efficiently
>>> since resources won't change much: just cache the results of getchildren
>>> and set a watch at the same time. To lock a resource, randomize "avail"
>>> and attempt to lock each in turn. If all of avail fail to acquire the
>>> lock, then have some random holdoff time, then re-getchildren(locks) and
>>> start over.
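
A rough sketch of that recipe in Java, assuming /resources and /locks already
exist and their children are named by resource id; the caching of
getchildren(/resources) under a watch that Patrick describes is left out for
brevity:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs.Ids;
    import org.apache.zookeeper.ZooKeeper;

    public class AnyResourceLock {
        // Try to lock "any" resource: diff /resources against /locks,
        // shuffle the available set, and race to create an ephemeral node.
        // Returns the locked resource name, or null if every attempt failed.
        public String lockAny(ZooKeeper zk) throws Exception {
            List<String> resources = zk.getChildren("/resources", false);
            List<String> locks = zk.getChildren("/locks", false);
            List<String> avail = new ArrayList<String>(resources);
            avail.removeAll(locks);          // avail = resources - locks
            Collections.shuffle(avail);
            for (String r : avail) {
                try {
                    zk.create("/locks/" + r, new byte[0],
                            Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                    return r;                // we own r for this session
                } catch (KeeperException.NodeExistsException e) {
                    // someone else got there first; try the next candidate
                }
            }
            return null;
        }
    }

When lockAny returns null, the caller applies the random holdoff (or waits on
a /locks watch, per Ted's suggestion above) and starts over. The lock is
released by deleting the znode, or automatically when the session ends.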
>>> 
>> 
>> 
>> 
>> --
>> Ted Dunning, CTO
>> DeepDyve
>> 
