size of data / number of znodes

2009-12-15 Thread Michael Bauland
Hello, I'm new to the ZooKeeper project and wondering whether our use case is a good one for ZooKeeper. I read the documentation but couldn't find an answer. At some point it says, "A common property of the various forms of coordination data is that they are relatively small: measured in

Re: size of data / number of znodes

2009-12-15 Thread Benjamin Reed
There aren't any limits on the number of znodes; it's just limited by your memory. There are two things (probably more :) to keep in mind: 1) the 1M limit also applies to the children list. You can't grow the list of children to more than 1M (the sum of the names of all of the children)
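[A rough back-of-envelope sketch of that children-list budget, assuming the limit Ben refers to is the ~1 MB default request/response size and assuming a small per-name overhead; both figures are illustrative, not from the thread:]

    # Assumption: ~1 MB cap on the serialized children list, plus a small
    # per-name overhead; both numbers are estimates for illustration only.
    MAX_BUFFER = 1 * 1024 * 1024   # assumed ~1 MB ceiling
    AVG_NAME_LEN = 20              # hypothetical average child-name length, bytes
    PER_ENTRY_OVERHEAD = 4         # assumed length prefix per name on the wire

    max_children = MAX_BUFFER // (AVG_NAME_LEN + PER_ENTRY_OVERHEAD)
    print("roughly %d children of ~%d-byte names before hitting the cap"
          % (max_children, AVG_NAME_LEN))

[Under these assumptions a single parent with ~20-byte child names tops out at a few tens of thousands of children, which is why a deeper hierarchy scales better than one flat parent.]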

Re: size of data / number of znodes

2009-12-15 Thread Patrick Hunt
See this recent benchmark I did: http://bit.ly/4ekN8G In this case I have 20 clients doing 10k znodes each (200k znodes of size 100 bytes each, with 1 million watches). However, I have tested a similar setup with 400 clients (so 4 million znodes and 20 million watches). As Ben mentioned there are
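[For concreteness, a minimal sketch of what one client's share of such a load could look like with the zkpython bindings discussed in these threads; the /bench paths, the noop watcher, and the exact zookeeper.init/create/exists usage are assumptions based on the zkpython API of that era, not code from Patrick's benchmark:]

    import zookeeper

    OPEN_ACL_UNSAFE = [{"perms": zookeeper.PERM_ALL,
                        "scheme": "world", "id": "anyone"}]

    def noop_watcher(handle, event_type, state, path):
        pass  # a real benchmark would record watch firings and latencies

    handle = zookeeper.init("localhost:2181", noop_watcher)
    payload = "x" * 100  # 100-byte data per znode, matching the benchmark

    # Hypothetical parent chain for this client's znodes.
    for parent in ("/bench", "/bench/client-0"):
        if zookeeper.exists(handle, parent) is None:
            zookeeper.create(handle, parent, "", OPEN_ACL_UNSAFE, 0)

    # 10k znodes, each with one watch left on it via exists(); the 1M-watch
    # figure implies several watchers per znode across clients in the real run.
    for i in range(10000):
        path = "/bench/client-0/node-%05d" % i
        zookeeper.create(handle, path, payload, OPEN_ACL_UNSAFE, 0)
        zookeeper.exists(handle, path, noop_watcher)

    zookeeper.close(handle)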

Return data size in zkpython

2009-12-15 Thread Rich Schumacher
Hey all, I'm working on using ZooKeeper for an internal application at Digg. I've been using the zkpython package, and I just noticed that the data I was receiving from a zookeeper.get() call was being truncated. After some quick digging, I found that zookeeper.c limits the data returned to
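[One way to see the truncation Rich describes is to compare what get() hands back against the dataLength recorded in the znode's stat. A hedged sketch, assuming zkpython's zookeeper.get() returns a (data, stat) pair whose stat dict carries a dataLength field; the path is made up:]

    import zookeeper

    handle = zookeeper.init("localhost:2181")

    # stat["dataLength"] is the znode's true payload size on the server,
    # so a shorter return value means the C binding clipped the data.
    data, stat = zookeeper.get(handle, "/app/some-large-znode")
    if len(data) < stat["dataLength"]:
        print("truncated: got %d of %d bytes"
              % (len(data), stat["dataLength"]))

    zookeeper.close(handle)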

Re: Return data size in zkpython

2009-12-15 Thread Henry Robinson
Hi - See https://issues.apache.org/jira/browse/ZOOKEEPER-627 and the attached patch. I've upped the limit to a 1 MB buffer. I've also added a fourth parameter to zookeeper.get - if you set this integer parameter to the size of the buffer you are expecting, zkpython will return no more than this
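[A hedged usage sketch of the patched call as Henry describes it, sizing the new fourth argument from the znode's stat so the full payload (up to the raised ~1 MB ceiling) comes back. The argument order (watcher, then buffer size) and the path are assumptions here; check the ZOOKEEPER-627 patch for the exact signature:]

    import zookeeper

    handle = zookeeper.init("localhost:2181")

    # Ask the server how big the data actually is, then size the buffer to match.
    stat = zookeeper.exists(handle, "/app/some-large-znode")
    data, stat = zookeeper.get(handle, "/app/some-large-znode",
                               None, stat["dataLength"])
    assert len(data) == stat["dataLength"]

    zookeeper.close(handle)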