Hi,

At Adobe, we use large commits to update our content repository atomically.
Those large commits require a large amount of heap memory or the JVM throws
OOMEs and the commit fails.

In one setup, we are configuring the JVM with a max heap size of 32GB, yet
we still hit OOMEs.
I analysed the heap dump taken at the occurrence of the OOME with Eclipse
Memory Analyzer and noticed that

1. HashMap objects consume the most heap space (~10GB) ; and
2. 54% of the HashMap instances contain fewer than 12 elements ; and
3. ~40% of the HashMap instances contain exactly 1 element ; and
4. the ModifiedNodeState instances reference ~10GB worth of HashMaps

Since HashMaps account for the vast majority of the memory consumed, memory
consumption could be reduced by creating HashMaps with a higher fill ratio.
Looking at the code in [0], it seems HashMaps are sometimes created with the
default capacity.
Specifying the initial capacity for every new HashMap instance in [0] as
either the required capacity or 1 (if no better guess is available) would
improve the HashMap fill ratio and thus decrease the memory footprint of
commits.
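To illustrate the point, here is a minimal sketch (standalone, not Oak code;
the "jcr:primaryType" entry is just a placeholder): a single-entry HashMap
built with the default constructor gets a 16-bucket backing table on first
insert, while one built with an initial capacity of 1 gets a much smaller
table holding identical contents.

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapSizing {

    // Default constructor: the backing table is allocated with 16 buckets
    // on the first insert, so a single-entry map leaves most slots empty.
    static Map<String, String> defaultSized() {
        Map<String, String> m = new HashMap<>();
        m.put("jcr:primaryType", "nt:unstructured");
        return m;
    }

    // Initial capacity of 1: the backing table stays far smaller (it only
    // grows when the load factor is exceeded), yet the map's contents and
    // behaviour are identical.
    static Map<String, String> rightSized() {
        Map<String, String> m = new HashMap<>(1);
        m.put("jcr:primaryType", "nt:unstructured");
        return m;
    }

    public static void main(String[] args) {
        // Same logical content, different backing-array footprint.
        System.out.println(defaultSized().equals(rightSized())); // prints "true"
    }
}
```

Per entry the saving is small, but multiplied across the millions of
single-entry maps a large commit creates, it should add up.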

wdyt ?

Regards,

Timothee

[0]
org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState#ModifiedNodeState
