If I understand your situation correctly, you have a lock that may have more than 100,000 processes contending for it. Since this can cause a problem for getChildren, you want a way to have the server do the check for you without returning everything.

The isFirst method would return true if you are first (sorted in UTF-8 order?) in the list of children, and you can set a watch on that condition. What do the path and type arguments do?
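For reference, here is a minimal sketch of how "first" is usually decided on the client side today, assuming every child is created with the same lockPrefix so the zero-padded sequence suffix makes plain lexicographic order match creation order (the helper name isFirstChild is just illustrative):

import java.util.Collections;
import java.util.List;

// Illustrative only: is our node the lowest-sequenced child of the parent?
static boolean isFirstChild(List<String> children, String ourPath) {
    // ourPath is the full path returned by create(); children are bare names
    String ourName = ourPath.substring(ourPath.lastIndexOf('/') + 1);
    return ourName.equals(Collections.min(children)); // lexicographic minimum
}

That is the ordering I had in mind when asking about UTF-8 order above.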


ben

On 06/03/2010 03:20 AM, Joe Zou wrote:

Hi All:

Using ZooKeeper to build a distributed lock is a main feature for us. We currently implement the lock function with the code below:

public void lock() throws InterruptedException {
    do {
        if (path == null) {
            path = zk.create(lockPrefix, null, acl, CreateMode.EPHEMERAL_SEQUENTIAL);
        }
        List<String> children = zk.getChildren(parentPath, false);
        if (isFirst(children, path)) {
            // our node has the lowest sequence number: lock acquired
            return;
        } else {
            final CountDownLatch latch = new CountDownLatch(1);
            String nearestChild = findLastBefore(children, path);
            Stat stat = zk.exists(parentPath + "/" + nearestChild, new Watcher() {
                public void process(WatchedEvent event) {
                    latch.countDown();
                }
            });
            if (stat != null) {
                latch.await();
            } else {
                // acquire lock success
                return;
            }
        }
    } while (true);
}

In a highly concurrent case the lock node may accumulate a very large number of ephemeral children, so getChildren may return a packet exceeding the size limit (4MB by default), and it also becomes a performance problem. To avoid this, I plan to add a new isFirst interface to ZooKeeper. I don't know whether it is useful as a general feature, but I think it should help in the highly concurrent situation. Below is a snippet of the code change; the attachment is the full diff.

public void lock() throws InterruptedException {
    do {
        if (path == null) {
            path = zk.create(lockPrefix, null, acl, CreateMode.EPHEMERAL_SEQUENTIAL);
        }
        final CountDownLatch latch = new CountDownLatch(1);
        if (!zk.isFirst(parentPath, path, type, new Watcher() {
                public void process(WatchedEvent event) {
                    latch.countDown();
                }
            })) {
            latch.await();
        } else {
            // acquire success
            return;
        }
    } while (true);
}

As we know, only the first node can acquire the lock, so when a child of the lock-type parent node is removed, the server needs to trigger the watcher to notify the node that is now first.
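Conceptually, the server-side bookkeeping could look roughly like the sketch below. This is only my rough idea of the shape of it; the names (FirstChildWatches, pendingWatchers, onChildRemoved) are made up for illustration and are not taken from the server code or the attached diff:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: remembers the watcher registered by each isFirst()
// call and, when a child is deleted, notifies whoever is now first.
class FirstChildWatches {
    // child path -> watcher registered by the session that created it
    private final Map<String, Runnable> pendingWatchers = new ConcurrentHashMap<>();

    void register(String childPath, Runnable watcher) {
        pendingWatchers.put(childPath, watcher);
    }

    // called after a child of the lock parent has been removed
    void onChildRemoved(String parentPath, List<String> remainingChildren) {
        if (remainingChildren.isEmpty()) {
            return;
        }
        String first = Collections.min(remainingChildren); // lowest sequence number
        Runnable watcher = pendingWatchers.remove(parentPath + "/" + first);
        if (watcher != null) {
            watcher.run(); // tell the new first node it can now acquire the lock
        }
    }
}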

The second lock requirement is:

In our current project, each save needs to acquire multiple locks. In a distributed environment this can easily cause deadlock or lock starvation. So we need a state lock: the lock node keeps multiple states that are used to judge whether a node can acquire the lock or not. Example:

Client1:    lock(id1, id2, id3)  -> znode---0000000001

Client2:    lock(id2, id3)       -> znode---0000000002

Client3:    lock(id4)            -> znode---0000000003

Client2 needs to wait for the lock until Client1 unlocks it, but Client3 can acquire the lock at once. This judging logic lives in the ZooKeeper server. We add a LockState interface:

public interface LockState {
    String PATH_SEPERATOR = "/";
    String PATH_DELIMIT = "|";

    boolean isConflict(LockState state);

    byte[] getBytes();
}

Any new lock strategy can be added by implementing the interface.
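As an illustration of the kind of strategy I mean, below is a minimal sketch of a LockState implementation where two states conflict when their id sets overlap, which matches the Client1/Client2/Client3 example above. The class name IdSetLockState and the byte encoding are my own assumptions for this sketch, not part of the attached diff:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: a lock state over a set of ids; two states conflict
// when they share at least one id (e.g. lock(id1,id2,id3) vs lock(id2,id3)).
public class IdSetLockState implements LockState {
    private final Set<String> ids;

    public IdSetLockState(String... ids) {
        this.ids = new HashSet<>(Arrays.asList(ids));
    }

    // decode from the bytes stored in the znode, assuming "|"-delimited ids
    public static IdSetLockState fromBytes(byte[] data) {
        return new IdSetLockState(new String(data).split("\\" + PATH_DELIMIT));
    }

    @Override
    public boolean isConflict(LockState other) {
        if (!(other instanceof IdSetLockState)) {
            return true; // unknown state type: be conservative and treat as conflicting
        }
        Set<String> overlap = new HashSet<>(ids);
        overlap.retainAll(((IdSetLockState) other).ids);
        return !overlap.isEmpty();
    }

    @Override
    public byte[] getBytes() {
        return String.join(PATH_DELIMIT, ids).getBytes();
    }
}

With this, lock(id1,id2,id3) and lock(id2,id3) conflict because id2 and id3 overlap, while lock(id4) does not conflict with either and can acquire the lock at once.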

Attached is my code diff against 3.2.2 and some lock usage cases.

Best Regards

Joe Zou

