I think the issue of having to write a full ~60GB snapshot file at
intervals would make this prohibitive, particularly on EC2 via EBS. At
a scale like that I think you'd be better off with a traditional
database or a NoSQL database like Cassandra, possibly using ZooKeeper
for transaction locking/coordination on top.
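As a back-of-the-envelope check on that snapshot cost (the numbers here are assumptions, not measurements; sustained EBS write throughput varies widely by volume type and load):

```python
# Back-of-envelope estimate of how long one full snapshot write takes.
# Assumed, not measured: a 60 GB snapshot and ~100 MB/s of sustained
# EBS write throughput.
snapshot_gb = 60
throughput_mb_s = 100

seconds = snapshot_gb * 1024 / throughput_mb_s
print(f"~{seconds / 60:.0f} minutes per full snapshot")  # ~10 minutes
```

Ten-ish minutes of heavy sequential I/O per snapshot, repeated at every snapshot interval, is what makes this look prohibitive.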


-Dave Wright

On Tue, Oct 5, 2010 at 5:27 PM, Patrick Hunt <ph...@apache.org> wrote:
> Tuning GC is going to be critical, otherwise all the sessions will time out
> (and potentially expire) during GC pauses.
>
> Patrick
>
> On Tue, Oct 5, 2010 at 1:18 PM, Maarten Koopmans <maar...@vrijheid.net> wrote:
>
>> Yes, and syncing after a crash will be interesting as well. Of note: I am
>> running it with a 6GB heap now, but it's not filled yet. I do have smoke
>> tests though, so maybe I'll give it a try.
>>
>>
>>
>> On 5 Oct 2010, at 21:13, Benjamin Reed <br...@yahoo-inc.com> wrote:
>>
>> >
>> > you will need to time how long it takes to read all that state back in
>> > and adjust the initLimit accordingly. It will probably take a while to
>> > pull all that data into memory.
>> >
>> > ben
>> >
>> > On 10/05/2010 11:36 AM, Avinash Lakshman wrote:
>> >> I have run it with over 5 GB of heap and over 10M znodes. We will
>> >> definitely run it with over 64 GB of heap. Technically I do not see any
>> >> limitation. However, I will let the experts chime in.
>> >>
>> >> Avinash
>> >>
>> >> On Tue, Oct 5, 2010 at 11:14 AM, Mahadev Konar <maha...@yahoo-inc.com> wrote:
>> >>
>> >>> Hi Maarten,
>> >>>  I definitely know of a group that uses around a 3GB heap for
>> >>> ZooKeeper, but I have never heard of requirements this large. I would
>> >>> say it would definitely be a learning experience with memory that
>> >>> high, and one I think would be very, very useful for others in the
>> >>> community as well.
>> >>>
>> >>> Thanks
>> >>> mahadev
>> >>>
>> >>>
>> >>> On 10/5/10 11:03 AM, "Maarten Koopmans"<maar...@vrijheid.net>  wrote:
>> >>>
>> >>>> Hi,
>> >>>>
>> >>>> I just wondered: has anybody ever run ZooKeeper "to the max" on a
>> >>>> 68GB quadruple extra large high-memory EC2 instance? With, say, 60GB
>> >>>> allocated or so?
>> >>>>
>> >>>> Because EC2 with EBS is a nice way to grow your ZooKeeper cluster
>> >>>> (data on the EBS volumes, upgrade as your memory utilization
>> >>>> grows...). I just wonder what the limits are there, or if I am going
>> >>>> where angels fear to tread...
>> >>>>
>> >>>> --Maarten
>> >>>>
>> >>>
>> >
>> >
>>
>
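For anyone trying this, a hedged sketch of the two knobs the thread touches on: startup sync time and GC. The values below are placeholders chosen only to illustrate the arithmetic, not recommendations:

```
# zoo.cfg -- placeholder values, not recommendations
tickTime=2000    # length of one tick, in ms
initLimit=300    # ticks allowed for followers to connect and sync with
                 # the leader; 300 * 2000 ms = 10 minutes, sized to cover
                 # reloading a very large snapshot into memory
syncLimit=10     # ticks a follower may lag behind the leader

# JVM side (HotSpot, circa 2010): a concurrent collector helps keep
# pauses below the session timeout, e.g.
#   java -Xmx60g -XX:+UseConcMarkSweepGC -XX:+UseParNewGC ...
```

The key relationship is that initLimit * tickTime must comfortably exceed the measured time to read the snapshot back into memory, and GC pauses must stay well under the session timeout.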
