I’d like to verify best practices for setting eviction thresholds.
There’s not much written on the topic. I’m following the guidelines at:
https://pubs.vmware.com/vfabric5/index.jsp?topic=/com.vmware.vfabric.gemfire.6.6/managing/heap_use/controlling_heap_use.html
and hoping that they are still current.

I have about 750GB of data, half historical on disk and half active in memory,
in a cluster of servers with 36GB RAM and 28GB heaps (roughly 20% overhead).
The read/write ratio is about 60/40, with lots of OQL queries, which need
memory space to run. A small percentage of the queries will hit disk. I'm
thinking I want to leave the JVM 50% heap headroom. Based on the above, here
is what I am thinking:

20% overhead between physical RAM and heap (36GB RAM with a 28GB heap) - why?
Java needs memory for its own use outside the heap.
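
To pin the heap I plan to set initial and max heap to the same value so the
JVM never resizes it. Via gfsh that looks like this (the server name is a
placeholder; --initial-heap and --max-heap map to -Xms and -Xmx):

    start server --name=server1 --initial-heap=28g --max-heap=28g ...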

-XX:+UseCompressedOops     - why? The heap is < 32GB, so compressed oops apply
and give me more usable space. Space is more valuable than speed in this
instance.
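
As far as I know, Java 6u23 and later already enable this by default for heaps
under ~32GB, but I pass it explicitly so the intent is visible:

    start server ... --J=-XX:+UseCompressedOops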
 
--eviction-heap-percentage=50        - why? I want to start evicting around
14GB, which gives the JVM 50% headroom. I found that when I raised this to 70%
I was getting OOM exceptions with several OQL queries. I'm thinking of
lowering it even further, to 40. Tradeoffs?
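
Related to this, my understanding is that the eviction threshold only acts on
regions configured for heap LRU, so each region that should overflow needs
eviction attributes along these lines in cache.xml (the region name here is
just an example):

    <region name="historical-data">
      <region-attributes>
        <eviction-attributes>
          <lru-heap-percentage action="overflow-to-disk"/>
        </eviction-attributes>
      </region-attributes>
    </region>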

-XX:CMSInitiatingOccupancyFraction=40   - why? I want the GC to already be
working when eviction starts. This is from point 3 in the above link under
"Set the JVM's GC Tuning Parameters".
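
For completeness, the full set of GC flags I intend to pass (standard HotSpot
CMS flags; -XX:+UseCMSInitiatingOccupancyOnly keeps the JVM from adaptively
overriding the 40% threshold):

    --J=-XX:+UseConcMarkSweepGC
    --J=-XX:+UseCMSInitiatingOccupancyOnly
    --J=-XX:CMSInitiatingOccupancyFraction=40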

--critical-heap-percentage=90        - why? Past this point GemFire protects
the member by throwing LowMemoryException on memory-consuming operations, so I
want a buffer between eviction at 50% and the critical threshold.
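
Putting it all together, each server would start with roughly this one-line
gfsh command (wrapped here for readability; the name and locator address are
placeholders):

    start server --name=server1 --locators=locator-host[10334]
        --initial-heap=28g --max-heap=28g
        --eviction-heap-percentage=50 --critical-heap-percentage=90
        --J=-XX:+UseCompressedOops --J=-XX:+UseConcMarkSweepGC
        --J=-XX:+UseCMSInitiatingOccupancyOnly
        --J=-XX:CMSInitiatingOccupancyFraction=40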


Would you consider the above a general best-practice approach?

Wes Williams | Sr. Data Engineer
781.606.0325
http://pivotal.io/big-data/pivotal-gemfire 