[ https://issues.apache.org/jira/browse/ZOOKEEPER-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13177485#comment-13177485 ]
Patrick Hunt commented on ZOOKEEPER-1177:
-----------------------------------------

I ran some rough performance numbers against this patch on trunk (NEW) vs. without it (OLD). I modified testSizeInBytes to create 10k watchers and 1k paths, with each watcher watching all of the paths - 10m watches in total. (OLD failed with 10k/10k even at 2g of heap, while NEW ran fine with 512m.)

{noformat}
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) Server VM (build 20.1-b02, mixed mode)

ant -Dtest.junit.maxmem=2g -Dtest.output=yes -Dtestcase=WatchManagerTest clean test-core-java

add     - add 10m watches
size    - run size() on the manager
dump    - dump the watches to /dev/null (bypath and byid)
trigger - trigger the 10m watches

The numbers settled down to something like this after letting the VM warm up:

NEW
    [junit] 1753ms to add
    [junit] size:10000000
    [junit] 1ms to size
    [junit] 3424ms to dumpwatches true
    [junit] 3066ms to dumpwatches false
    [junit] 2318ms to trigger

OLD
    [junit] 9736ms to add
    [junit] size:10000000
    [junit] 0ms to size
    [junit] 5615ms to dumpwatches true
    [junit] 3035ms to dumpwatches false
    [junit] 5530ms to trigger

Notice:
add     - ~5 times faster
size    - approximately the same, even though NEW is scanning all bitsets
dump    - faster for bypath, about the same for byid
trigger - ~2 times faster
{noformat}

Here are the numbers with 1k watchers and 10k paths:

{noformat}
NEW
    [junit] 1219ms to add
    [junit] size:10000000
    [junit] 0ms to size
    [junit] 3527ms to dumpwatches true
    [junit] 3680ms to dumpwatches false
    [junit] 1426ms to trigger

OLD
    [junit] 7020ms to add
    [junit] size:10000000
    [junit] 1ms to size
    [junit] 3585ms to dumpwatches true
    [junit] 3251ms to dumpwatches false
    [junit] 2843ms to trigger

Both OLD and NEW do better in this case than in the 10k/1k case. NEW is still significantly ahead of OLD.
{noformat}

> Enabling a large number of watches for a large number of clients
> ----------------------------------------------------------------
>
>                 Key: ZOOKEEPER-1177
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1177
>             Project: ZooKeeper
>          Issue Type: Improvement
>          Components: server
>    Affects Versions: 3.3.3
>            Reporter: Vishal Kathuria
>            Assignee: Vishal Kathuria
>             Fix For: 3.5.0
>
>         Attachments: ZooKeeper-with-fix-for-findbugs-warning.patch, ZooKeeper.patch, Zookeeper-after-resolving-merge-conflicts.patch
>
>
> In my ZooKeeper deployment, I see the watch manager consuming several GB of memory, so I dug a bit deeper.
> In the scenario I am testing, I have 10K clients connected to an observer. There are about 20K znodes in ZooKeeper, each about 1K in size - about 20M of data in total.
> Each client fetches and puts watches on all the znodes. That is 200 million watches.
> A single watch seems to take about 100 bytes. I am currently at 14,528,037 watches, and according to the YourKit profiler, WatchManager has already consumed 1.2 GB. This is not going to work, as it might end up needing 20 GB of RAM just for the watches.
> So we need a more compact way of storing watches. Here are the possible solutions:
> 1. Use a bitmap instead of the current hashmap. In this approach, each znode gets a unique id when it is created. For every session, we keep a bitmap indicating the set of znodes that session is watching. A bitmap, assuming 100K znodes, would be about 12KB. For 10K sessions, we could track the watches in roughly 120MB instead of 20GB.
> 2. The second idea is based on the observation that clients watch znodes in sets (for example, all znodes under a folder). Multiple clients watch the same set, and the total number of sets is a couple of orders of magnitude smaller than the total number of znodes. In my scenario, there are about 100 sets. So instead of keeping track of watches at the znode level, keep track of them at the set level.
> It may mean that get may also need to be implemented at the set level. With this, we can store the watches in about 100MB.
> Are there any other suggestions for solutions?
> Thanks

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
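[Editor's note] The bitmap scheme in proposal 1 above (the approach the NEW benchmark numbers exercise) can be sketched roughly as follows. This is a hypothetical illustration, not the actual patch: the class name BitmapWatchSketch, its methods, and the dense-id assignment are all assumptions made for the example.

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of proposal 1: instead of a HashMap of watcher sets
// per path, assign each znode a dense integer id and track each session's
// watches as a BitSet indexed by that id.
public class BitmapWatchSketch {
    private final Map<String, Integer> pathToId = new HashMap<>();      // znode path -> dense id
    private final Map<Long, BitSet> watchesBySession = new HashMap<>(); // session id -> watched znode ids
    private int nextId = 0;

    // Assign ids 0, 1, 2, ... as paths are first seen.
    private int idFor(String path) {
        Integer id = pathToId.get(path);
        if (id == null) {
            id = nextId++;
            pathToId.put(path, id);
        }
        return id;
    }

    public void addWatch(long sessionId, String path) {
        watchesBySession
            .computeIfAbsent(sessionId, s -> new BitSet())
            .set(idFor(path));
    }

    public boolean isWatching(long sessionId, String path) {
        Integer id = pathToId.get(path);
        BitSet bits = watchesBySession.get(sessionId);
        return id != null && bits != null && bits.get(id);
    }

    public static void main(String[] args) {
        BitmapWatchSketch m = new BitmapWatchSketch();
        m.addWatch(1L, "/app/config");
        System.out.println(m.isWatching(1L, "/app/config")); // true
        System.out.println(m.isWatching(1L, "/app/other"));  // false
    }
}
```

This matches the arithmetic in the description: one bit per znode means 100K znodes cost about 12.5KB per session, so 10K sessions need roughly 125MB, versus ~20GB at ~100 bytes per watch in the hashmap scheme.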