You will need to time how long it takes to read all that state back in and adjust the initTime accordingly; it will probably take a while to pull all that data into memory.
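The knob in question is presumably ZooKeeper's initLimit (zoo.cfg has no literal "initTime" setting; initLimit bounds, in ticks, how long a follower may take to load the snapshot and sync with the leader). A minimal zoo.cfg sketch with purely illustrative values:

```
# zoo.cfg -- illustrative values, not recommendations
tickTime=2000      # length of one tick, in ms
initLimit=300      # 300 ticks * 2000 ms = 10 min for a follower to load
                   # the snapshot from disk and sync with the leader
syncLimit=10       # ticks a follower may lag before being dropped
dataDir=/var/lib/zookeeper
clientPort=2181
```

If loading a multi-GB snapshot takes, say, 8 minutes on your hardware, initLimit * tickTime needs to comfortably exceed that, or followers will be considered failed before they finish starting up.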


On 10/05/2010 11:36 AM, Avinash Lakshman wrote:
I have run it with over 5 GB of heap and over 10M znodes. We will definitely run
it with over 64 GB of heap. Technically I do not see any limitation.
However, I will let the experts chime in.


On Tue, Oct 5, 2010 at 11:14 AM, Mahadev Konar <> wrote:

Hi Maarten,
  I definitely know of a group which uses around 3GB of heap for
zookeeper, but I have never heard of someone with such huge requirements. I would
say it would definitely be a learning experience with such high memory, and
I definitely think it would be very, very useful for others in the community as
well.


On 10/5/10 11:03 AM, "Maarten Koopmans" <> wrote:


I just wondered: has anybody ever run zookeeper "to the max" on a 68GB
quadruple extra large high memory EC2 instance? With, say, 60GB allocated
to the heap?

Because EC2 with EBS is a nice way to grow your zookeeper cluster (data
on the
EBS volumes, upgrade as your memory utilization grows....) - I just wonder
what the limits are there, or if I am going where angels fear to tread...
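For what it's worth, a 60GB heap on such an instance would be set through the JVM flags ZooKeeper's start script picks up. A hedged sketch (the flag values are illustrative assumptions, not a tested recommendation; zkServer.sh reads JVMFLAGS via zkEnv.sh):

```shell
# Illustrative only: fixed 60 GB heap (-Xms = -Xmx avoids resizing pauses)
# plus a concurrent collector, since a stop-the-world full GC on a heap
# this size could easily outlast session timeouts.
export JVMFLAGS="-Xms60g -Xmx60g -XX:+UseConcMarkSweepGC"
bin/zkServer.sh start
```

GC pause behavior at that heap size is probably the thing to measure first, before any ZooKeeper-level limit bites.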

