Hello,
I'm new to the ZooKeeper project and wondering whether our use case is a
good fit for ZooKeeper. I read the documentation but couldn't find an
answer. At one point it says:
> A common property of the various forms of coordination data is that they are
> relatively small: measured in kilobytes.
There aren't any limits on the number of znodes; it's just limited by
your memory. There are two things (probably more :) to keep in mind:
1) The 1M limit (jute.maxbuffer) also applies to the children list. You can't grow
the list of children to more than 1M (the sum of the names of all of the
children); otherwise the list of children can't be retrieved, because the
getChildren response would exceed the buffer size.
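To make the children-list limit concrete, here is a rough back-of-the-envelope check in plain Python. The 1 MB figure is ZooKeeper's default jute.maxbuffer; the 4-byte length prefix per name and the example node names are illustrative assumptions, not part of the thread.

```python
# Rough estimate of whether a children list fits under ZooKeeper's
# default 1 MB jute.maxbuffer. This is an illustration only; the exact
# wire overhead per name may differ slightly.
JUTE_MAXBUFFER = 1024 * 1024  # default request/response size cap, in bytes

def children_payload_bytes(names):
    # Assume each child name is serialized as a length-prefixed string:
    # a 4-byte length plus the UTF-8 bytes of the name.
    return sum(4 + len(n.encode("utf-8")) for n in names)

def fits_in_buffer(names, limit=JUTE_MAXBUFFER):
    return children_payload_bytes(names) <= limit

# e.g. 100,000 children named "node-000000" .. "node-099999":
# 100,000 * (4 + 11) = 1,500,000 bytes, which blows past the 1 MB cap.
names = [f"node-{i:06d}" for i in range(100_000)]
print(children_payload_bytes(names), fits_in_buffer(names))  # 1500000 False
```

So even short child names add up quickly; a flat directory of ~70k such children is already near the limit under these assumptions.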
See this recent benchmark I did: http://bit.ly/4ekN8G
In this case I have 20 clients doing 10k znodes each (200k znodes of size
100 bytes each, with 1 million watches). However, I have tested a similar
setup with 400 clients (so 4 million znodes and 20 million watches).
As Ben mentioned, there are no limits on the number of znodes; you're
bounded only by memory.
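The arithmetic behind the benchmark figures above can be sketched directly; the per-znode server-side overhead is not stated in the thread, so only the raw data volume is computed here.

```python
# Back-of-the-envelope data-volume math for the 20-client benchmark above.
clients = 20
znodes_per_client = 10_000
data_bytes_per_znode = 100

total_znodes = clients * znodes_per_client       # 20 * 10k = 200,000 znodes
raw_data = total_znodes * data_bytes_per_znode   # ~20 MB of znode payload
print(total_znodes, raw_data)  # 200000 20000000
```

Scaling the same numbers to 400 clients gives the 4 million znodes mentioned, i.e. roughly 400 MB of raw payload before any per-znode bookkeeping overhead.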
Hey all,
I'm working on using ZooKeeper for an internal application at Digg. I've been
using the zkpython package and I just noticed that the data I was receiving
from a zookeeper.get() call was being truncated. After some quick digging I
found that zookeeper.c limits the data returned to 512 bytes.
Hey Rich -
That's a really dumb restriction :) I'll open a JIRA and get it fixed asap.
Thanks for the report!
Henry
On Tue, Dec 15, 2009 at 4:38 PM, Rich Schumacher wrote:
> Hey all,
>
> I'm working on using ZooKeeper for an internal application at Digg. I've
> been using the zkpython package
Hi -
See https://issues.apache.org/jira/browse/ZOOKEEPER-627, and the attached
patch. I've upped the limit to a 1MB buffer. I've also added a fourth
parameter to zookeeper.get - if you set this integer parameter to the size
of the buffer you are expecting, zkpython will return no more than this many
bytes.
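The truncation semantics being discussed can be illustrated without a ZooKeeper server. The sketch below is plain Python simulating a fixed-size read buffer; `read_with_buffer` is a hypothetical stand-in, not the zkpython API itself, and the 512-byte default mirrors the old hard-coded limit from the report above.

```python
# Plain-Python illustration of reading through a fixed-size buffer:
# at most `bufferlen` bytes of the node's data come back to the caller.
# `read_with_buffer` is a made-up name for illustration, not zkpython.
def read_with_buffer(data: bytes, bufferlen: int = 512) -> bytes:
    return data[:bufferlen]

payload = b"x" * 2000
print(len(read_with_buffer(payload)))               # 512 - old default truncates
print(len(read_with_buffer(payload, 1024 * 1024)))  # 2000 - 1 MB buffer fits all
```

This is why the symptom looked like silent data loss: the client happily returned the first 512 bytes with no error, which is exactly what a caller-supplied buffer size is meant to make explicit.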