On Tue, Apr 15, 2014 at 9:42 AM, Wouter van Atteveldt <
[email protected]> wrote:

> On Tue, Apr 15, 2014 at 2:00 PM, [email protected] <
> [email protected]> wrote:
>
>> This is not Elasticsearch related. With a heap of such an extreme size
>> as 40g, you must expect garbage collection pauses lasting minutes, on
>> every JVM I know of.
>>
>>
> Right, but it is actually advised to give Elasticsearch a lot of heap,
> right? The whole index is around 140G, so I would have thought that all
> frequently used parts would get loaded into memory, but it still starts
> running slowly after a while.
>
> Any ideas?
>
>
Go with 30GB.  30GB is the magic number because much above that the JVM
can no longer use compressed object pointers (compressed oops), so a heap
just over the ~32GB threshold is actually less effective than one just
under it.  You can learn more by following the links in this:
http://stackoverflow.com/questions/13549787/can-i-use-more-heap-than-32-gb-with-compressed-oops
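
A quick way to check whether a given heap size still gets compressed oops
(a sketch, assuming a 64-bit Oracle/OpenJDK 7+ java on your path):

    java -Xmx30g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
    java -Xmx40g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

The first should report UseCompressedOops := true and the second false.
For Elasticsearch itself you would normally set the heap with
ES_HEAP_SIZE=30g rather than passing -Xmx by hand.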

Beyond that, you may want to look at what is actually happening when
collections run.  This article is about Cassandra, but it seems pretty
on the ball:
http://tech.shift.com/post/74311817513/cassandra-tuning-the-jvm-for-read-heavy-workloads
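
Before tuning anything, turn on GC logging so you can see which
collections account for the pauses.  A minimal sketch with the standard
HotSpot flags of that era (the log path is just an example):

    ES_HEAP_SIZE=30g \
    ES_JAVA_OPTS="-XX:+PrintGCDetails -XX:+PrintGCDateStamps \
                  -XX:+PrintGCApplicationStoppedTime \
                  -Xloggc:/var/log/elasticsearch/gc.log" \
    bin/elasticsearch

If the log shows long old-generation (CMS) pauses, the survivor-space and
CMSInitiatingOccupancyFraction advice in that article carries over, since
Elasticsearch ships with CMS by default.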

Beyond that, scale out.
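
Adding a node is mostly a matter of giving it the same cluster name; if
multicast discovery doesn't work on your network, point it at the
existing nodes (cluster and host names here are hypothetical):

    # elasticsearch.yml on the new node
    cluster.name: my-cluster
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["es-node1:9300", "es-node2:9300"]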

Nik
