I think that depends on the "resources" and details of the use case we may not have. Martin mentioned that his lock frequency is high with very low hold times. This leads me to assume (ahem) in my response/thinking that the ratio of resources to "lockers" is very high (many resources, few lockers, each taking one lock at a time). If that's the case then I think it's fine, no?

An alternative in my mind: say you have resources ~= lockers (again, hard to say based on the info we have whether this makes sense or not). If the resources are heterogeneous then I would go with a scheme more like you suggest (to minimize starvation, for example). If the resources are homogeneous then I'd go with something more along the lines of a long-lived lock (or a "master with failover" type scheme, if you want to think about it that way).


Ted Dunning wrote:
Random back-off like this is unlikely to succeed (seems to me).  Better to
use the watch on the locks directory to make the wait exactly as long as
necessary and as short as possible.
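The difference Ted is pointing at can be sketched with a `threading.Event` standing in for a ZooKeeper watch on the locks directory (this is an illustrative simulation, not the ZooKeeper API): the waiter blocks until notified, so it waits exactly as long as the lock is held, with no polling interval added and no wasted wakeups.

```python
import threading
import time

# Event stands in for a watch on the /locks directory: it fires
# when the lock holder's ephemeral node goes away.
lock_released = threading.Event()

def holder(hold_time):
    time.sleep(hold_time)   # simulate holding the lock
    lock_released.set()     # "watch" fires on release

def waiter():
    lock_released.wait()    # block until the watch fires; no sleep/retry loop
    return "acquired"

t = threading.Thread(target=holder, args=(0.05,))
t.start()
result = waiter()
t.join()
```

Compare with a random back-off loop, which on average over-sleeps by half its back-off window and wastes a re-read of the locks directory on every spurious retry.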

On Wed, Feb 24, 2010 at 8:53 AM, Patrick Hunt <ph...@apache.org> wrote:

Anyone interested in locking an explicit resource attempts to create an
ephemeral node in /locks with the same ### as the resource they want access
to. If interested in just getting "any" resource then you would
getchildren(/resources) and getchildren(/locks) and attempt to lock anything
not in the intersection (avail). This can be done efficiently since
resources won't change much: just cache the results of getchildren and set a
watch at the same time. To lock a resource, randomize "avail" and attempt to
lock each in turn. If all of avail fail to acquire the lock, hold off for some
random time, then re-getchildren(/locks) and start over.
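The "lock any resource" scheme above can be sketched roughly as follows. The client here is a hypothetical stand-in (getchildren plus an atomic create that fails if the node exists), not the real ZooKeeper API; in the real system the node under /locks would be ephemeral so it disappears if the locker dies.

```python
import random
import time

class FakeClient:
    """Toy stand-in for a ZooKeeper-like client (illustrative only)."""
    def __init__(self, resources):
        self._tree = {"/resources": set(resources), "/locks": set()}

    def getchildren(self, path):
        return set(self._tree[path])

    def create(self, path):
        # Atomic create: fails if the node already exists,
        # which is what makes it usable as a lock.
        parent, _, name = path.rpartition("/")
        if name in self._tree[parent]:
            raise FileExistsError(path)
        self._tree[parent].add(name)

def lock_any(client, max_rounds=10):
    """Try to lock any available resource; random holdoff between rounds."""
    for _ in range(max_rounds):
        resources = client.getchildren("/resources")
        locked = client.getchildren("/locks")
        avail = list(resources - locked)   # resources with no lock node
        random.shuffle(avail)              # randomize to spread contention
        for name in avail:
            try:
                client.create("/locks/" + name)   # ephemeral in the real system
                return name                       # lock acquired
            except FileExistsError:
                continue   # raced with another locker; try the next one
        time.sleep(random.uniform(0.0, 0.05))     # random holdoff, then re-read
    return None
```

With two resources, two calls to lock_any grab distinct resources and a third comes back empty-handed once everything is locked.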
