Hi, I take the point that the watch is useful for stopping clients from unnecessarily pestering the ZooKeeper nodes.
I think this is something I will have to experiment with and see how it goes. I only need to place about 10k locks per minute, so I am hoping that whatever approach I take is well within the headroom of ZooKeeper on some reasonable boxes.

Is it possible for the client to know whether it has connected to the current primary or not? During my testing I would like to make sure that the approach works both when the client is attached to the primary and when attached to a lagged non-primary node.

regards,

Martin

On 24 February 2010 18:42, Ted Dunning <ted.dunn...@gmail.com> wrote:
> Random back-off like this is unlikely to succeed (seems to me). Better to
> use the watch on the locks directory to make the wait as long as possible
> AND as short as possible.
>
> On Wed, Feb 24, 2010 at 8:53 AM, Patrick Hunt <ph...@apache.org> wrote:
>
> > Anyone interested in locking an explicit resource attempts to create an
> > ephemeral node in /locks with the same ### as the resource they want
> > access to. If interested in just getting "any" resource then you would
> > getchildren(/resources) and getchildren(/locks) and attempt to lock
> > anything not in the intersection (avail). This could be done efficiently
> > since resources won't change much; just cache the results of getchildren
> > and set a watch at the same time. To lock a resource, randomize "avail"
> > and attempt to lock each in turn. If all avail fail to acquire the lock,
> > then have some random holdoff time, then re-getchildren(locks) and start
> > over.
>
>
> --
> Ted Dunning, CTO
> DeepDyve
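On the question of knowing which server the client landed on: ZooKeeper calls the primary the leader, and each server will report its role in reply to the four-letter `stat` command on the client port, as a `Mode:` line (leader, follower, or standalone). A minimal sketch of asking a server for its role, assuming it is reachable on the given host/port (the function names here are illustrative, not from any ZooKeeper API):

```python
import socket

def parse_mode(stat_output):
    """Extract the server role from a 'stat' reply, e.g. 'leader' or 'follower'."""
    for line in stat_output.splitlines():
        if line.startswith("Mode:"):
            return line.split(":", 1)[1].strip()
    return None

def server_mode(host="localhost", port=2181, timeout=5.0):
    """Send ZooKeeper's four-letter 'stat' command and return the Mode: value."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"stat")
        sock.shutdown(socket.SHUT_WR)  # server replies then closes
        reply = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
    return parse_mode(reply.decode("utf-8", errors="replace"))
```

For testing against a lagged follower specifically, pointing the client's connect string at a single follower's address (rather than the whole ensemble) pins it to that server.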
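For reference, Patrick's "lock any available resource" scheme quoted above might be sketched as below. The ZooKeeper calls are stubbed out behind two small callables so the selection logic itself is visible and testable; the paths and function names are illustrative, not from the thread or the ZooKeeper API:

```python
import random

def try_lock_any(get_children, try_create):
    """One pass of the scheme: diff /resources against /locks, shuffle the
    available set, and try to grab each candidate in turn.

    get_children(path) -> list of child node names under path
    try_create(path)   -> True if the ephemeral lock node was created,
                          False if someone else already holds it

    Returns the name of the locked resource, or None if every attempt
    failed (the caller should then hold off, or wait on a /locks watch,
    and retry from a fresh getchildren).
    """
    resources = set(get_children("/resources"))
    locked = set(get_children("/locks"))
    avail = list(resources - locked)   # "anything not in the intersection"
    random.shuffle(avail)              # spread contending clients out
    for name in avail:
        if try_create("/locks/" + name):
            return name
    return None
```

In real client code `try_create` would attempt an ephemeral `create()` and treat a node-exists error as False, and, as Ted suggests, the retry wait would be driven by a watch on /locks rather than a fixed random holdoff.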