Hi Kris,

Thanks very much for the response!
I didn’t expect the RSS of the Java process to be 300GB when I set the heap
size to 300GB, since from a lot of other use cases I know that G1 has a higher
memory overhead than CMS.
I got the 450G number from our internal metrics, which essentially read the
/proc/meminfo file to compute the memory footprint. On the machine there is no
other process taking a lot of memory (more than 1% of total memory, i.e.
single-digit GBs).
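
For reference, the check is roughly along these lines (a simplified sketch, not
the exact metrics code; the exact /proc/meminfo fields used may differ):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    // Simplified sketch of a /proc/meminfo based footprint check.
    // Values in /proc/meminfo are reported in kB.
    public class MemInfoFootprint {
        public static void main(String[] args) throws IOException {
            Map<String, Long> kb = new HashMap<>();
            for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                // Lines look like "MemTotal:       528279748 kB"
                String[] parts = line.trim().split("\\s+");
                kb.put(parts[0].replace(":", ""), Long.parseLong(parts[1]));
            }
            // MemAvailable requires a reasonably recent kernel (3.14+).
            long usedGb = (kb.get("MemTotal") - kb.get("MemAvailable")) / (1024 * 1024);
            System.out.println("Used memory: ~" + usedGb + " GB");
        }
    }
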
I turned on the NMT option and printed the JVM native memory summary:
Native Memory Tracking:

Total: reserved=477GB, committed=476GB
- Java Heap (reserved=300GB, committed=300GB)
    (mmap: reserved=300GB, committed=300GB)
- Thread (reserved=1GB, committed=1GB)
    (thread #723)
    (stack: reserved=1GB, committed=1GB)
- GC (reserved=23GB, committed=23GB)
    (malloc=12GB #20497380)
    (mmap: reserved=11GB, committed=11GB)
- Internal (reserved=152GB, committed=152GB)
    (malloc=152GB #19364496)
- Native Memory Tracking (reserved=1GB, committed=1GB)
    (tracking overhead=1GB)
- Unknown (reserved=1GB, committed=0GB)
    (mmap: reserved=1GB, committed=0GB)
Internal (direct byte buffers) takes most of the extra space. The GC overhead
looks OK in this case.
This is pretty weird: I ran the same app with CMS on the same machine, and
there is no such Internal section in the output. Do you know why this happens?
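
For reference, this is the kind of allocation that I believe gets accounted
under Internal on JDK 8 -- a minimal standalone sketch, not our Namenode code.
Direct buffers live outside the Java heap, so they add to RSS on top of -Xmx,
and they can at least be capped with -XX:MaxDirectMemorySize:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch: direct ByteBuffers are malloc'd off-heap and, on JDK 8,
    // show up under NMT's "Internal" category. Run with
    // -XX:NativeMemoryTracking=summary and inspect the live process with
    // "jcmd <pid> VM.native_memory summary".
    public class DirectBufferDemo {
        public static void main(String[] args) throws Exception {
            List<ByteBuffer> buffers = new ArrayList<>();
            for (int i = 0; i < 64; i++) {
                // 16 MB each, 1 GB total; none of it counts against -Xmx,
                // but all of it is visible in the process RSS.
                buffers.add(ByteBuffer.allocateDirect(16 * 1024 * 1024));
            }
            System.out.println("Allocated " + buffers.size() + " direct buffers");
            Thread.sleep(60_000); // keep the process alive long enough to inspect
        }
    }
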
Thanks,
Fengnan
> On Feb 8, 2019, at 2:07 PM, Krystal Mok <[email protected]> wrote:
>
> Hi Fengnan,
>
> This is Kris Mok, currently working at Databricks. I used to work on the
> HotSpot and Zing JVMs.
> Just curious how you got to the conclusion that G1 is taking 450GB of memory.
> Did you start the VM with -Xmx300g expecting the RSS of that Java process to
> be close to 300GB? If that's the case, that's an unreasonable expectation to
> begin with.
>
> It's very common for G1 itself to have a high memory overhead due to the
> design of its Remembered Sets (RSets), and that can be tuned to use less
> memory by making the RSets more coarse-grained, with the tradeoff that the
> root scanning pause can take longer because a larger portion of the heap has
> to be scanned.
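>
> For example, a quick way to see which of those knobs you're currently running
> with is to query them from inside the process (a sketch for inspection only,
> not a tuning recommendation; the flag names below are from JDK 8 HotSpot and
> may not exist on other versions):
>
>     import com.sun.management.HotSpotDiagnosticMXBean;
>     import java.lang.management.ManagementFactory;
>
>     public class PrintG1RSetFlags {
>         public static void main(String[] args) {
>             HotSpotDiagnosticMXBean bean =
>                 ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
>             // Larger regions and coarser RSet entry limits generally shrink
>             // the RSet footprint at the cost of longer scans.
>             for (String flag : new String[] {"G1HeapRegionSize",
>                     "G1RSetRegionEntries", "G1RSetSparseRegionEntries"}) {
>                 try {
>                     System.out.println(flag + " = " + bean.getVMOption(flag).getValue());
>                 } catch (IllegalArgumentException e) {
>                     System.out.println(flag + " is not available on this JVM");
>                 }
>             }
>         }
>     }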
>
> But I doubt that's what you're actually seeing. To confirm where the memory
> went within the HotSpot JVM, please turn on NMT (Native Memory Tracking) and
> see how much memory each component within the JVM is using.
>
> https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
> https://docs.oracle.com/javase/8/docs/technotes/guides/vm/nmt-8.html
>
> - Kris
>
> On Fri, Feb 8, 2019 at 1:51 PM Fengnan Li <[email protected]> wrote:
> Hi All,
>
> We are trying to use G1 for our HDFS Namenode to see whether it delivers
> better GC behavior overall than the CMS we currently use. However, with a
> 200G heap size, G1 cannot even start our Namenode with the production image:
> the process gets killed for running out of memory after about 1 hour (while
> loading the initial data). With the same heap size, CMS works properly with
> around 98% throughput and an average pause of 120ms.
>
> We use pretty much the basic options and have tried a little tuning, but
> without much progress. Is there a way to lower the overall memory footprint
> of G1?
>
> We managed to start the application with a 300G heap size, but overall G1
> consumes about 450G of memory, which is problematic.
>
> Thanks,
> Fengnan
