OK, the PathChildrenCache (or preferably TreeCache once Curator 2.7.0 is released) would do the job.
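For reference, here's a rough, untested sketch of what the PathChildrenCache listener hookup looks like (the connection string, the /test path, the class name and the printlns are just placeholders for this thread; the TreeCache version should look much the same once 2.7.0 is out, just with a TreeCacheListener):

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.cache.PathChildrenCache;
    import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
    import org.apache.curator.framework.recipes.cache.PathChildrenCacheListener;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class RegistrationWatcher {
        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "localhost:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();

            // cache the children of /test; 'true' also caches each child's data
            PathChildrenCache cache = new PathChildrenCache(client, "/test", true);
            cache.getListenable().addListener(new PathChildrenCacheListener() {
                @Override
                public void childEvent(CuratorFramework c, PathChildrenCacheEvent event) {
                    // fires once per add/remove/update; rapid register/deregister
                    // pairs can still collapse into fewer events (see below)
                    switch (event.getType()) {
                        case CHILD_ADDED:
                            System.out.println("registered:   " + event.getData().getPath());
                            break;
                        case CHILD_REMOVED:
                            System.out.println("deregistered: " + event.getData().getPath());
                            break;
                        case CHILD_UPDATED:
                            System.out.println("updated:      " + event.getData().getPath());
                            break;
                        default:
                            break;   // connection state changes, etc.
                    }
                }
            });
            cache.start(PathChildrenCache.StartMode.BUILD_INITIAL_CACHE);
        }
    }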
Something to be aware of, though: you may not get notified if a machine quickly registers and then deregisters (or deregisters and then registers). Implement a TreeCacheListener and you'll get notified of any changes (additions, removals).

Also note that with 4-5k nodes under a single zNode (i.e. all under /test or whatever) you may run into some performance issues with ZK. You may need to hash them into a bunch of different zNodes (e.g. /test/1, /test/2) if you're having issues (see the short PS at the bottom of this mail for what I mean).

cheers

On Mon, Nov 3, 2014 at 4:40 PM, Tony Jackson <[email protected]> wrote:

> I have a client (or several clients, responsible for different services)
> watching a particular znode, and it will notify other services that newly
> registered machines are now available. The registered machines may number
> around 4k-5k, so something like a diff would be preferred if possible.
> Otherwise the clients watching the znodes need to cache and diff by
> themselves, which seems like an overhead for the clients because they need
> to hold a lot of information about the registered nodes and diff it against
> the old state.
>
> However, I am naive about this, so I appreciate any advice.
>
> Thanks
>
> >Sent: Monday, November 03, 2014 at 1:11 PM
> >From: "Cameron McKenzie" <[email protected]>
> >To: [email protected]
> >Subject: Re: Watcher latency question
> >
> >hey Tony,
> >The way that watches work in ZK is that once they fire they need to be
> >added back again. If the data in ZK changes before the watch is reset,
> >then the client will not find out about this change in data.
> >
> >i.e. You're watching the data in node /test
> >- Data changes to state 'A'
> >- Watch fires
> >- Data changes to state 'B'
> >- Data changes to state 'A'
> >- Reset watch
> >- Data changes to state 'C'
> >- Watch fires
> >
> >You're going to miss the intermediate state where the data transitioned
> >from state A to state B and back to state A again. This is just a
> >limitation of ZK; there's a window of opportunity for these events to be
> >missed. I don't think that the PathCache in Curator is going to solve this
> >problem.
> >
> >Do you have a particular use case where missing these transitions is an
> >issue?
> >
> >cheers
> >Cam
> >
> >>On Mon, Nov 3, 2014 at 4:04 PM, Tony Jackson <[email protected]> wrote:
> >>I read some articles mentioning that the ZooKeeper watcher has a latency
> >>issue: after the previous watcher is triggered and before the next watcher
> >>is placed, it is possible that the client will not receive notifications.
> >>
> >>http://www.quora.com/Does-Zookeeper-clients-keep-open-lots-of-TCP-connections-if-so-how-scalable-is-it-Any-limits
> >>
> >>Then on the internet some people recommend using the Curator path cache,
> >>which helps watch child znodes being added, updated, etc. So my question:
> >>is that the right recipe to use if I want to avoid the watcher latency
> >>problem? Otherwise, which recipe should I use instead, or how do I avoid
> >>such a problem with Curator?
> >>
> >>http://curator.apache.org/curator-recipes/path-cache.html
> >>
> >>Thanks
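PS: by "hash them into a bunch of different zNodes" I mean something roughly like this on the registration side (machineId, the bucket count of 16 and the use of an ephemeral node are all just assumptions for the example):

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.zookeeper.CreateMode;

    public class BucketedRegistration {
        private static final int NUM_BUCKETS = 16;   // arbitrary; tune to taste

        // registers a machine under /test/<bucket>/<machineId> instead of one flat parent
        static void register(CuratorFramework client, String machineId) throws Exception {
            int bucket = Math.abs(machineId.hashCode() % NUM_BUCKETS);
            client.create()
                  .creatingParentsIfNeeded()
                  .withMode(CreateMode.EPHEMERAL)
                  .forPath("/test/" + bucket + "/" + machineId);
        }
    }

On the watching side you'd then run one PathChildrenCache per bucket (or a single TreeCache over /test once that's available).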
