Internally, ZkMultiLock constructs single-path ZkReadLock and ZkWriteLock
objects to handle the lock paths you add to it. These work in a similar way
to the locks described in the ZooKeeper recipes.
If you add only a single lock path to ZkMultiLock, then when you call
acquire() it behaves exactly like a ZkReadLock or ZkWriteLock.
However, if you add multiple paths, it proceeds differently. In this case,
it constructs an array containing a single-path lock object for each path,
and calls tryAcquire() on each.
If any of these locks fails to acquire because it is already held,
ZkMultiLock calls release() on all the locks in the array (one aspect of the
work on ZkReadLock and ZkWriteLock was enabling them to accept calls to
release() before they reach an acquired state).
Having released all the locks, ZkMultiLock then waits for a delay determined
by a binary exponential backoff-style algorithm, constructs a new array of
equivalent single-path lock objects, and calls tryAcquire() on each again.
This repeats until all the single-path locks are acquired in a single pass
over the array.
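To make the loop above concrete, here is a minimal sketch of the try-all / release-all / back-off cycle. This is not the actual Cages implementation: the SinglePathLock interface and InMemoryLock class are hypothetical stand-ins so the sketch runs without ZooKeeper, and unlike the real ZkMultiLock it reuses the same lock objects on each pass rather than constructing fresh ones.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Hypothetical stand-in for a single-path lock like ZkReadLock/ZkWriteLock.
interface SinglePathLock {
    boolean tryAcquire();   // true if acquired immediately, never blocks
    void release();         // must be safe to call before acquisition succeeds
}

// In-memory implementation so the sketch is runnable without a cluster.
class InMemoryLock implements SinglePathLock {
    static final Set<String> held = new HashSet<>();
    final String path;
    boolean acquired = false;

    InMemoryLock(String path) { this.path = path; }

    public boolean tryAcquire() {
        synchronized (held) {
            if (held.contains(path)) return false;
            held.add(path);
            acquired = true;
            return true;
        }
    }

    public void release() {
        synchronized (held) {
            if (acquired) { held.remove(path); acquired = false; }
        }
    }
}

class MultiLockSketch {
    // All-or-nothing acquisition: try every lock; on any failure release
    // them all (including the ones never acquired), back off, and retry.
    static void acquireAll(List<SinglePathLock> locks) throws InterruptedException {
        Random rnd = new Random();
        int attempt = 0;
        while (true) {
            boolean allAcquired = true;
            for (SinglePathLock lock : locks) {
                if (!lock.tryAcquire()) { allAcquired = false; break; }
            }
            if (allAcquired) return;
            for (SinglePathLock lock : locks) lock.release();
            // Binary exponential backoff: random delay in [0, 2^attempt) ms,
            // with the exponent capped so waits stay bounded.
            attempt = Math.min(attempt + 1, 10);
            Thread.sleep(rnd.nextInt(1 << attempt));
        }
    }
}
```

The key property is that the loop never waits while holding a partial set of locks; all waiting happens with zero locks held.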
The advantage of this approach is that if an operation requires several
locks, acquiring them all together using ZkMultiLock means you cannot get
into a deadlock situation: because tryAcquire() never blocks and every lock
is released on failure, no client ever waits while holding a lock, which
removes the hold-and-wait condition deadlock requires.
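A tiny demonstration of why this avoids the classic lock-ordering deadlock. With blocking nested locks, a client holding /a and another holding /b could each wait forever for the other's lock; with try-then-release-all, a client that fails partway is left holding nothing. The code below is an illustrative toy, not the Cages API: it models held lock paths as a simple in-memory set.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of all-or-nothing acquisition over a set of lock paths.
class NoDeadlockDemo {
    static final Set<String> held = new HashSet<>();

    // Try to take every path; if any is taken, roll back and report failure.
    static boolean tryAcquireAll(String... paths) {
        Set<String> got = new HashSet<>();
        for (String p : paths) {
            if (!held.add(p)) {       // path already held by someone else
                held.removeAll(got);  // roll back everything we took
                return false;
            }
            got.add(p);
        }
        return true;
    }
}
```

Because the failed acquirer rolls back completely, the client that got in first can always make progress and eventually release its locks, at which point the other client's retry succeeds.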
Where lock paths are heavily contended this can be less efficient than using
nested single-path locks, but in practice most lock paths aren't that
contended, and you just need to guard against the occasional contention that
would otherwise mess your data up.
For that reason, I am certainly asking everyone to stick to ZkMultiLock in
our work - there's nothing worse than distributed deadlock!
On 12 May 2010 00:51, Patrick Hunt <ph...@apache.org> wrote:
> Hi Dominic, this looks really interesting thanks for open sourcing it. I
> really like the idea of providing higher level concepts. I only just looked
> at the code, it wasn't clear on first pass what happens if you multilock on
> 3 paths, the first 2 are success, but the third fails. How are the locks
> cleared? How about the case where the client loses connectivity to the
> cluster, what happens in this case (both if partial locks are acquired, and
> the case where all the locks were acquired (for example how does the caller
> know if the locks are still held or released due to client partitioned from
> the cluster, etc...)).
> I'll try d/l the code and looking at it more, I see some javadoc in there
> as well so that's great.
> On 05/11/2010 04:02 PM, Dominic Williams wrote:
>> Anyone looking for a Java client library for ZooKeeper, please checkout:
>> Cages - http://cages.googlecode.com
>> The library will be expanded and feedback will be helpful.
>> Many thanks,