I think 32g is a good max heap ceiling to aim for since it allows compressed oops
(the JVM now just defaults to compressed oops whenever the max heap is under 32g).
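
If you want to verify what the JVM actually decided at runtime, here's a
minimal sketch (the class name CheckOops is mine) that reads the flag back
through the HotSpot diagnostic MXBean, available since JDK 7:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class CheckOops {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean hs = ManagementFactory
                    .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // Prints "true" when the JVM enabled compressed oops
            // (the default when -Xmx is under 32g).
            System.out.println("UseCompressedOops = "
                    + hs.getVMOption("UseCompressedOops").getValue());
        }
    }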

Be aware that in Java 8 you can now have compressed oops with heaps up to 64g
by widening the object alignment (-XX:ObjectAlignmentInBytes=16), at the cost
of extra padding per object.
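
The arithmetic behind that limit: a compressed oop is a 32-bit offset scaled
by the object alignment, so the addressable heap is 2^32 * alignment. A quick
sketch (OopRange is just an illustrative name):

    public class OopRange {
        public static void main(String[] args) {
            // Addressable heap = 2^32 * object alignment
            long defaultAlign = (1L << 32) * 8;   // 8-byte alignment -> 32g
            long widerAlign   = (1L << 32) * 16;  // 16-byte alignment -> 64g
            System.out.println(defaultAlign + " bytes vs " + widerAlign + " bytes");
        }
    }

The wider alignment buys the 64g range but pads every object, so it only pays
off if the heap actually needs to grow past 32g.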

The larger the heap, the longer the worst-case GC pause.

Also, according to the following, if your heap is under roughly 26g the JVM
can typically use zero-based compressed oops, which have less of a
performance impact:

> Zero-Based Compressed Ordinary Object Pointers (oops)
> When using compressed oops in a 64-bit Java Virtual Machine process, the
> JVM software asks the operating system to reserve memory for the Java heap
> starting at virtual address zero. If the operating system supports such a
> request and can reserve memory for the Java heap at virtual address zero,
> then zero-based compressed oops are used.
> Use of zero-based compressed oops means that a 64-bit pointer can be
> decoded from a 32-bit object offset without adding in the Java heap base
> address. For heap sizes less than 4 gigabytes, the JVM software can use a
> byte offset instead of an object offset and thus also avoid scaling the
> offset by 8. Encoding a 64-bit address into a 32-bit offset is
> correspondingly efficient.
> For Java heap sizes up around 26 gigabytes, any of Solaris, Linux, and
> Windows operating systems will typically be able to allocate the Java heap
> at virtual address zero.


This quote came from:
http://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html
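
To make the zero-based point concrete, the decode the doc describes looks
roughly like this (illustrative only, not the JVM's actual code; OopDecode is
my name):

    class OopDecode {
        // Nonzero heap base: every oop decode pays an extra add.
        static long decode(long heapBase, int oop) {
            return heapBase + ((oop & 0xFFFFFFFFL) << 3);
        }
        // Zero-based mode: heap base is 0, so the add disappears.
        static long decodeZeroBased(int oop) {
            return (oop & 0xFFFFFFFFL) << 3;
        }
    }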

On Fri, Jun 26, 2015 at 6:24 PM, Real Wes Williams <[email protected]>
wrote:

> I’d like to verify best practices for setting eviction thresholds.
> There’s not much written on them. I’m following the guidelines at:
>
> https://pubs.vmware.com/vfabric5/index.jsp?topic=/com.vmware.vfabric.gemfire.6.6/managing/heap_use/controlling_heap_use.html
> and hoping that they are still current.
>
> I have about 750GB of data, half historical on disk and half active in
> memory, in a cluster of servers with 36GB RAM and 28GB heaps (roughly 20%
> overhead). The read/write ratio is about 60%/40%, with lots of OQL queries,
> which need memory to run. A small percentage of the queries will hit disk.
> I want to give Java 50% headroom. Based on the above, here is what I am
> thinking:
>
> 20% overhead between RAM limit and heap  (36GB RAM with 28GB heap)  -
> why? Java needs memory for its own use outside the heap.
>
> -XX:+UseCompressedOops     - why? Heap is < 32GB and this gives me more
> space. Space is more valuable than speed in this instance.
>
> --eviction-heap-percentage=50             - why? I want to start evicting
> around 14GB, which gives the JVM 50% headroom. I found that when I raised
> this to 70% I was getting OOM exceptions with several OQL queries. I'm
> thinking of lowering this even to 40. Tradeoffs?
>
> -XX:CMSInitiatingOccupancyFraction=40   - why? I want the GC to be working
> when eviction starts. This is from point 3 in the above link under "Set
> the JVM's GC Tuning Parameters".
>
> --critical-heap-percentage=90
>
>
> Would you consider the above a general best-practice approach?
>
> Wes Williams | Sr. Data Engineer
> 781.606.0325
> http://pivotal.io/big-data/pivotal-gemfire
>
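
For what it's worth, the eviction and critical thresholds you list can also
be set programmatically through GemFire's ResourceManager. A rough sketch,
assuming the com.gemstone.gemfire packages of that release (HeapThresholds is
my name; the JVM flags shown in comments belong on the server command line):

    import com.gemstone.gemfire.cache.Cache;
    import com.gemstone.gemfire.cache.CacheFactory;
    import com.gemstone.gemfire.cache.control.ResourceManager;

    public class HeapThresholds {
        public static void main(String[] args) {
            // JVM flags (command line, not code):
            //   -Xms28g -Xmx28g -XX:+UseCompressedOops
            //   -XX:+UseConcMarkSweepGC
            //   -XX:CMSInitiatingOccupancyFraction=40
            //   -XX:+UseCMSInitiatingOccupancyOnly
            Cache cache = new CacheFactory().create();
            ResourceManager rm = cache.getResourceManager();
            rm.setEvictionHeapPercentage(50.0f);  // start evicting at 50% of heap
            rm.setCriticalHeapPercentage(90.0f);  // shed load at 90% of heap
        }
    }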
