[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15836532#comment-15836532
 ] 

Jordan Zimmerman edited comment on ZOOKEEPER-1416 at 1/25/17 9:39 PM:
----------------------------------------------------------------------

I'd note that this issue has 9 votes (including you, it seems). I'm not sure 
what you want me to say. This would be an excellent addition to ZooKeeper that 
people have been requesting for years. Do you have issues with the 
implementation? I've already seen how it simplifies writing a TreeCache style 
implementation (here is the code: https://github.com/apache/curator/pull/181). 
The performance overhead for this is negligible when considering the use case. 
The purpose of this feature is to support what had to be done manually in 
Curator - TreeCache. Have a look at the TreeCache code and see how complex it 
is. Now compare that to https://github.com/apache/curator/pull/181 to see how 
much easier it is with this new API.

For simplicity, look just at this class - it does the work: 
https://github.com/apache/curator/blob/1089eedc1a29469250c161a575e7b3bfb300d5d7/curator-recipes/src/main/java/org/apache/curator/framework/recipes/watch/InternalCuratorCache.java

update: actually the performance with this new feature will be _better_ than 
having to use one-time triggers. Note the use-case. People want _every_ event 
for a tree of nodes. This is a very common use case with ZK.
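For illustration, here is a sketch of what the client side could look like with this feature. The addWatch call and AddWatchMode constant reflect the proposed API shape, so treat this as a sketch rather than a final signature:

```java
import org.apache.zookeeper.AddWatchMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class TreeWatchSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });

        // One registration covers /base and every descendant, and it stays
        // armed after firing -- no one-time trigger reset, no per-node watches.
        Watcher treeWatcher = event ->
                System.out.println(event.getType() + " " + event.getPath());
        zk.addWatch("/base", treeWatcher, AddWatchMode.PERSISTENT_RECURSIVE);
    }
}
```

Compare that with what a tree cache must do today: call getChildren/getData with a fresh watcher on every node, and re-register after every fire.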

another thing: this change uses far, far, far less memory than the current 
alternative for writing a tree cache. Currently, you have to have watchers on 
every parent and every child, recursively. This escalates very quickly. The 
reason I picked this issue up in the first place was that we were seeing 
ridiculous memory usage with our TreeCache implementation. If we have this 
change, 1 watcher can watch an entire tree of nodes (again, a very common use 
case).
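A back-of-the-envelope illustration of that escalation, using pure bookkeeping with no ZooKeeper calls (the two-registrations-per-node figure assumes one data watch plus one child watch per znode, as a TreeCache-style recipe needs):

```java
import java.util.List;
import java.util.Map;

public class WatchCountSketch {
    // Count the watch registrations a per-node tree cache needs:
    // one data watch plus one child watch on every znode in the subtree.
    static int perNodeWatches(Map<String, List<String>> tree, String root) {
        int count = 2; // data watch + child watch on this znode
        for (String child : tree.getOrDefault(root, List.of())) {
            count += perNodeWatches(tree, child);
        }
        return count;
    }

    public static void main(String[] args) {
        // A tiny hypothetical tree: /base with children a and b, a has child x.
        Map<String, List<String>> tree = Map.of(
                "/base", List.of("/base/a", "/base/b"),
                "/base/a", List.of("/base/a/x"));
        System.out.println(perNodeWatches(tree, "/base")); // 8 for 4 znodes
        // With a persistent recursive watch the count is 1, regardless of size.
    }
}
```

The per-node count grows linearly with the tree (and is re-established on every reconnect), whereas the recursive watch stays at one registration.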



> Persistent Recursive Watch
> --------------------------
>
>                 Key: ZOOKEEPER-1416
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1416
>             Project: ZooKeeper
>          Issue Type: Improvement
>          Components: c client, documentation, java client, server
>            Reporter: Phillip Liu
>            Assignee: Jordan Zimmerman
>         Attachments: ZOOKEEPER-1416.patch, ZOOKEEPER-1416.patch
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> h4. The Problem
> A ZooKeeper Watch can be placed on a single znode and when the znode changes 
> a Watch event is sent to the client. If there are thousands of znodes being 
> watched, when a client (re)connects, it has to send thousands of watch 
> requests. At Facebook, we have this problem storing information for thousands 
> of db shards. Consequently, a naming service that consumes the db shard 
> definition issues thousands of watch requests each time the service starts 
> or changes its client watcher.
> h4. Proposed Solution
> We add the notion of a Persistent Recursive Watch in ZooKeeper. Persistent 
> means no Watch reset is necessary after a watch-fire. Recursive means the 
> Watch applies to the node and descendant nodes. A Persistent Recursive Watch 
> behaves as follows:
> # Recursive Watch supports all Watch semantics: CHILDREN, DATA, and EXISTS.
> # CHILDREN and DATA Recursive Watches can be placed on any znode.
> # EXISTS Recursive Watches can be placed on any path.
> # A Recursive Watch behaves like an auto-watch registrar on the server side. 
> Setting a Recursive Watch means setting watches on all descendant znodes.
> # When a watch on a descendant fires, no subsequent event is fired until a 
> corresponding getData(..) on the znode is called; the Recursive Watch then 
> automatically re-applies the watch on the znode. This maintains the existing 
> Watch semantics on an individual znode.
> # A Recursive Watch overrides any watches placed on a descendant znode. 
> Practically this means the Recursive Watch Watcher callback is the one 
> receiving the event, and the event is delivered exactly once.
> A goal here is to reduce the number of semantic changes. The guarantee of no 
> intermediate watch event until data is read will be maintained. The only 
> difference is we will automatically re-add the watch after read. At the same 
> time we add the convenience of reducing the need to add multiple watches for 
> sibling znodes and in turn reduce the number of watch messages sent from the 
> client to the server.
> There are some implementation details that need to be hashed out. Initial 
> thinking is to have the Recursive Watch create per-node watches. This will 
> cause a lot of watches to be created on the server side. Currently, each 
> watch is stored as a single bit in a bit set relative to a session - up to 3 
> bits per client per znode. If there are 100m znodes with 100k clients, each 
> watching all nodes, then this strategy will consume approximately 3.75TB of 
> ram distributed across all Observers. Seems expensive.
> Alternatively, each time a watch event from a Recursive Watch is fired, the 
> server can add the path to a blacklist of paths for which no further Watch 
> events are sent, regardless of Watch settings. The memory utilization is 
> relative to the number of outstanding reads, and in the worst case it's 
> 1/3 * 3.75TB using the parameters given above.
> Otherwise, a relaxation of the no-intermediate-watch-event-until-read 
> guarantee is required. If the server can send watch events regardless of 
> whether one has already been fired without a corresponding read, then the 
> server can simply fire watch events without tracking.
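As a quick sanity check on the 3.75TB estimate in the quoted description (treating TB as decimal terabytes):

```java
public class WatchMemoryEstimate {
    public static void main(String[] args) {
        long znodes  = 100_000_000L;     // 100m znodes
        long clients = 100_000L;         // 100k clients, each watching all nodes
        long bitsPerClientPerZnode = 3;  // up to 3 watch bits per client per znode

        long totalBits   = znodes * clients * bitsPerClientPerZnode; // 3e13 bits
        double terabytes = totalBits / 8.0 / 1e12;                   // bits -> bytes -> TB
        System.out.println(terabytes); // 3.75
    }
}
```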



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
