Hi,

On 06.05.22 23:45, charlie hunt wrote:
Yes, GC's internal data structures.

  as Charlie said.

How much of a reduction you will see likely varies depending on the application (and G1 region size). The type of application that will experience larger reductions is one with a large number of cross G1 region references. A cache-like application usually has a large number of these, since updates to the cache introduce new references to an older or longer-lived object which is likely held in a different G1 region.
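A minimal, made-up sketch of such an access pattern (class names and sizes are invented for illustration; this is not the benchmark from the blog post): the long-lived map below keeps being updated with freshly allocated values, and once those values are promoted, the references between the map's old region(s) and the values' regions are exactly the kind of cross-region references G1 has to track in its remembered sets.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative cache-like workload, not a real benchmark.
public class CacheChurn {
    // Long-lived map: its internal table survives many GCs and ends up
    // in old G1 regions.
    static final ConcurrentHashMap<Integer, byte[]> CACHE = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        while (true) {
            // Each update stores a freshly allocated value under an existing
            // key. The old table entry now references a new object; once that
            // object is promoted to a different region, the reference becomes
            // a cross-region reference recorded in a remembered set.
            CACHE.put(rnd.nextInt(1_000_000), new byte[256]);
        }
    }
}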

On 5/6/22 4:36 PM, Stefan Reich wrote:
Ah, so the memory sections that are now smaller are basically the GC's internal data structures, rather than the general heap?

Yes. Significantly so. There was a presentation at Oracle Developer Live in March that shows the progress in that area for such a cache-like application at https://inside.java/2022/05/02/odl-jdk8-to-jdk18-gc/ ; around 17:49 it talks about memory footprint reductions for the G1 collector over time.

Just to make it clear: besides the application, this is highly dependent on the garbage collector; e.g. Parallel GC needs a roughly constant amount of memory equal to the "Floor" line in that blog post, and Serial GC roughly half of that, with all their other trade-offs with respect to latency/throughput, of course.

As mentioned at the very end of that blog post, we are working on something that might make G1 GC data structure overhead comparable to Serial GC plus remembered sets (you can see the new "floor" for that change by looking at the "Prototype (calculated)" line, at the level it holds in the first ~50s). That change basically removes a constant amount that is exactly 1.5% of the Java heap size from GC data structure overhead.
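To put a rough number on that 1.5% (just illustrative arithmetic, not a measurement): for a 32 GB Java heap that is about 0.5 GB of GC data structure overhead going away, and for a 128 GB heap roughly 2 GB.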

Maybe that change makes it into JDK 19; as usual, no guarantees.

That kind of puts things in perspective. Still a great improvement. Has it been tested how much the overall memory footprint of the JVM decreases in larger benchmarks?

In JDK 8 the rule of thumb was that you probably need to budget around 20% of the Java heap for G1 GC data structures, in JDK 11 around 10%, and with JDK 18 (probably) around 5%, to be fairly safe for all but the largest outliers with default ergonomics (i.e. the application we use for demonstration purposes is at the upper end). You *can* tune GC remembered set memory footprint quite a bit in earlier releases, but there are also other changes in later JDKs that can't be reproduced by setting some options. The blog contains a few posts about these.
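As a quick worked example of those rules of thumb (illustrative numbers only): for a 32 GB Java heap you would budget roughly 32 GB * 20% ≈ 6.4 GB for G1 GC data structures on JDK 8, 32 GB * 10% ≈ 3.2 GB on JDK 11, and 32 GB * 5% ≈ 1.6 GB on JDK 18.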

Many (typically throughput-oriented) applications need (much) less remembered set data structure memory though.


On Fri, 6 May 2022 at 23:28, charlie hunt <charlie.h...@oracle.com> wrote:

    Hi Stefan,

    The graph Thomas shows in his blog is the GC part of the NMT
    output which does not include metadata.

    The GC part of the NMT output includes native memory allocated on
    behalf of the GC itself, such as the card table or remembered sets,
    i.e. those things that the GC needs to do its work. The native
    memory allocated for the Java heap is in the "Java Heap" section of
    the NMT output. Class metadata is in the "Class" section of the NMT
    output.

    There is a "Native Memory Tracking Memory Categories" table that
    lists the sections / categories reported by NMT and a description
    of each at:
    
https://docs.oracle.com/en/java/javase/17/troubleshoot/diagnostic-tools.html#GUID-5EF7BB07-C903-4EBD-A9C2-EC0E44048D37

That documentation isn't current for JDK 18: there is a new GC-related category called "GCCardSet" that measures only the memory overhead of the so-called (G1) remembered set, since that is a significant part of the GC total.

TL;DR: the previous "GC" category has been split into "GC" and "GCCardSet", and the two need to be added together to be comparable with the "GC" category of earlier versions.
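E.g. if the NMT summary showed, say, 900 MB committed for "GC" and 600 MB committed for "GCCardSet" (made-up numbers), the figure comparable to the old "GC" category would be about 1.5 GB.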

I'll try to get the documentation updated for JDK 19 at least, possibly JDK 18.
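For reference, these numbers come from standard NMT usage, i.e. something like the following (<pid>, <other options> and the application name are placeholders):

  # enable NMT when starting the application
  java -XX:NativeMemoryTracking=summary <other options> MyApp

  # print the per-category summary ("Java Heap", "Class", "GC", and on
  # JDK 18 also "GCCardSet")
  jcmd <pid> VM.native_memory summary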


    hths,

    Charlie

    On 5/6/22 2:13 PM, Stefan Reich wrote:
    I'm right now just trying to get over how amazing this graph is:
    https://tschatzl.github.io/2022/03/14/jdk18-g1-parallel-gc-changes.html
    


    35-40% savings in memory use just by using JDK 18??? Who would
    have expected such a major improvement after 17 iterations of the
    language!

    Just so I'm sure I'm reading it correctly... the graph basically
    shows the Java heap's memory footprint in terms of committed
    native memory. Right?

    So it would include Eden, tenured generations, humongous objects,
    but not metadata and code. Is that correct?

    Basically tell me how much I should celebrate this. lol

    I did switch another small, pretty crammed (8 GB RAM) server over
    to JDK 18 and it does feel like there's a lot more memory
    available on it now and everything is a lot faster too.

    Cheers
    Stefan


Hth,
  Thomas

_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use@openjdk.java.net
https://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
