[ https://issues.apache.org/jira/browse/GROOVY-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17947823#comment-17947823 ]

Suo L. commented on GROOVY-11640:
---------------------------------

Update:

The Full GC was reproduced again today (with MaxMetaspaceSize raised to 1g and 
MaxGCPauseMillis to 100ms), and it happened twice:

node running since: 2025-04-07

jvm args:

_-XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=1g -XX:MaxMetaspaceFreeRatio=80 
-XX:MinMetaspaceExpansion=8m -XX:MaxMetaspaceExpansion=32m -XX:+UseG1GC 
-XX:ParallelGCThreads=8 -XX:ConcGCThreads=2 -XX:MaxGCPauseMillis=100_

full gc log:
{code:java}
2025-04-28T15:27:25.894+0800: 1833011.785: [GC pause (Metadata GC Threshold) 
(young) (initial-mark), 0.0932363 secs]
[Parallel Time: 86.7 ms, GC Workers: 8]
[GC Worker Start (ms): Min: 1833011806.3, Avg: 1833011806.4, Max: 1833011806.4, 
Diff: 0.1]
[Ext Root Scanning (ms): Min: 56.5, Avg: 57.3, Max: 59.0, Diff: 2.5, Sum: 458.7]
[Update RS (ms): Min: 3.5, Avg: 3.6, Max: 3.7, Diff: 0.1, Sum: 28.8]
[Processed Buffers: Min: 141, Avg: 154.9, Max: 167, Diff: 26, Sum: 1239]
[Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.3]
[Code Root Scanning (ms): Min: 0.0, Avg: 0.4, Max: 0.7, Diff: 0.6, Sum: 3.0]
[Object Copy (ms): Min: 4.3, Avg: 5.6, Max: 6.4, Diff: 2.1, Sum: 44.5]
[Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Termination Attempts: Min: 1, Avg: 8.1, Max: 13, Diff: 12, Sum: 65]
[GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.6]
[GC Worker Total (ms): Min: 66.9, Avg: 67.0, Max: 67.1, Diff: 0.1, Sum: 536.0]
[GC Worker End (ms): Min: 1833011873.4, Avg: 1833011873.4, Max: 1833011873.4, 
Diff: 0.0]
[Code Root Fixup: 0.0 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.2 ms]
[Other: 6.2 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 3.9 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Register: 0.0 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.3 ms]
[Eden: 1708.0M(4888.0M)->0.0B(4880.0M) Survivors: 24.0M->32.0M Heap: 
2874.3M(8192.0M)->1178.4M(8192.0M)]
[Times: user=0.49 sys=0.00, real=0.09 secs]
2025-04-28T15:27:25.988+0800: 1833011.880: [GC 
concurrent-root-region-scan-start]
2025-04-28T15:27:25.988+0800: 1833011.880: [Full GC (Metadata GC Threshold) 
2025-04-28T15:27:25.995+0800: 1833011.887: [GC concurrent-root-region-scan-end, 
0.0074271 secs]
2025-04-28T15:27:25.995+0800: 1833011.887: [GC concurrent-mark-start]
1178M->598M(8192M), 2.3124652 secs]
[Eden: 0.0B(4880.0M)->0.0B(4912.0M) Survivors: 32.0M->0.0B Heap: 
1178.4M(8192.0M)->598.8M(8192.0M)], [Metaspace: 537858K->168217K(1413120K)]
[Times: user=2.01 sys=0.62, real=2.31 secs]
2025-04-28T15:27:28.301+0800: 1833014.193: [GC concurrent-mark-abort]
 {code}
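
To watch how close Metaspace gets to the new 1g limit between GCs, a small JMX 
probe like the following could be run inside the application (a minimal sketch, 
not code from the application; the MetaspacePeek class name is made up, and the 
pool names assume a HotSpot JVM):
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MetaspacePeek {
    public static void main(String[] args) {
        // HotSpot exposes "Metaspace" and "Compressed Class Space" as memory pools.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if ("Metaspace".equals(name) || "Compressed Class Space".equals(name)) {
                MemoryUsage usage = pool.getUsage();
                MemoryUsage peak = pool.getPeakUsage();
                System.out.printf("%s: used=%.1f MB, committed=%.1f MB, peak=%.1f MB%n",
                        name,
                        usage.getUsed() / 1048576.0,
                        usage.getCommitted() / 1048576.0,
                        peak.getUsed() / 1048576.0);
            }
        }
    }
}
{code}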

> GC pause by Metadata GC Threshold for weeks then turned to Full GC
> ------------------------------------------------------------------
>
>                 Key: GROOVY-11640
>                 URL: https://issues.apache.org/jira/browse/GROOVY-11640
>             Project: Groovy
>          Issue Type: Question
>    Affects Versions: 4.0.8
>         Environment: Server: 8 CPU, 16 GB memory, docker;
> OS: Centos6;
> JDK: Oracle jdk8-191;
> JVM args:
> -server -Xms8192m -Xmx8192m -XX:MaxDirectMemorySize=4g 
> -XX:MaxMetaspaceSize=768m -XX:+UseG1GC -XX:ParallelGCThreads=8 
> -XX:ConcGCThreads=2 -XX:MaxGCPauseMillis=50
> Groovy ver: 4.0.8 
>            Reporter: Suo L.
>            Priority: Major
>              Labels: Metaspace, g1gc, java-8, jvm
>         Attachments: groovy_usage_1.txt, groovy_usage_2.txt
>
>
> I have a Java application that runs Groovy scripts heavily. After running for 
> weeks, the system started hitting a Full GC every day, each taking about 
> {*}one second, which is not acceptable{*}.
> How can I reduce the STW GC time?
> Environment:
> {code:java}
> Server: 8 CPU, 16 GB memory, docker
> OS: Centos6
> JDK: Oracle jdk8-191
> JVM args:
> -server -Xms8192m -Xmx8192m -XX:MaxDirectMemorySize=4g 
> -XX:MaxMetaspaceSize=768m -XX:+UseG1GC -XX:ParallelGCThreads=8 
> -XX:ConcGCThreads=2 -XX:MaxGCPauseMillis=50
> Groovy ver: 4.0.8 {code}
> -------
> What I have investigated:
> 1.
> Before the first Full GC, the Metaspace size threshold grew a little after 
> each Metadata-triggered minor GC, e.g. (sizes in MB):
> 1st:
>     meta size before gc: 502.14
>     meta size after gc: 359.07
> 2nd
>     meta size before gc: 535.11
>     meta size after gc: 361.08
> 3rd
>     meta size before gc: 559.96
>     meta size after gc: 358.70
> ...
> After the threshold had grown to about 630 MB, every Metadata GC turned into 
> a Full GC.
> The first Full GC happened about one month after startup, and since then it 
> recurs roughly every 24 hours.
> It does not look like a Metaspace leak, since after every Full GC the 
> Metaspace size drops back to around 300 MB.
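> A per-GC record of Metaspace usage like the numbers above can also be 
> captured in-process with a GC notification listener (a minimal sketch, 
> assuming a HotSpot/OpenJDK 8 runtime where com.sun.management is available; 
> MetaspaceGcWatcher is only an illustrative name, not application code):
> {code:java}
> import com.sun.management.GarbageCollectionNotificationInfo;
> import java.lang.management.GarbageCollectorMXBean;
> import java.lang.management.ManagementFactory;
> import java.lang.management.MemoryUsage;
> import javax.management.NotificationEmitter;
> import javax.management.NotificationListener;
> import javax.management.openmbean.CompositeData;
> 
> public class MetaspaceGcWatcher {
>     public static void install() {
>         for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
>             NotificationListener listener = (notification, handback) -> {
>                 if (!GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
>                         .equals(notification.getType())) {
>                     return;
>                 }
>                 GarbageCollectionNotificationInfo info = GarbageCollectionNotificationInfo
>                         .from((CompositeData) notification.getUserData());
>                 MemoryUsage before = info.getGcInfo().getMemoryUsageBeforeGc().get("Metaspace");
>                 MemoryUsage after = info.getGcInfo().getMemoryUsageAfterGc().get("Metaspace");
>                 if (before != null && after != null) {
>                     System.out.printf("%s (%s): Metaspace %.2f MB -> %.2f MB%n",
>                             info.getGcName(), info.getGcCause(),
>                             before.getUsed() / 1048576.0, after.getUsed() / 1048576.0);
>                 }
>             };
>             // HotSpot's GarbageCollectorMXBeans also implement NotificationEmitter.
>             ((NotificationEmitter) gc).addNotificationListener(listener, null, null);
>         }
>     }
> }
> {code}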
> 2.
> After enabling -XX:+TraceClassLoading and -XX:+TraceClassUnloading, I found 
> hundreds of lines like the following printed from time to time:
> {code:java}
>  [Loaded java.lang.invoke.LambdaForm$MH/1818461913 from 
> java.lang.invoke.LambdaForm]
> [Loaded java.lang.invoke.LambdaForm$MH/1950338341 from 
> java.lang.invoke.LambdaForm]
> [Loaded java.lang.invoke.LambdaForm$MH/411925311 from 
> java.lang.invoke.LambdaForm]
> ...
> //Metadata GC Threshold
> ...
> [Unloading class java.lang.invoke.LambdaForm$MH/1871712942 0x00000007c7ac3828]
> [Unloading class java.lang.invoke.LambdaForm$MH/55669925 0x00000007c7ac3428]
> [Unloading class java.lang.invoke.LambdaForm$MH/1781124732 
> 0x00000007c7ac3028] {code}
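> These LambdaForm$MH classes are synthetic classes the JVM generates while 
> linking MethodHandle / invokedynamic call sites, and they are stored in 
> Metaspace; Groovy's invokedynamic ("indy") call sites rely on the same 
> machinery. A minimal sketch that produces the same kind of class-load lines 
> under -XX:+TraceClassLoading (LambdaFormDemo is only an illustrative name, 
> not application code):
> {code:java}
> import java.lang.invoke.MethodHandle;
> import java.lang.invoke.MethodHandles;
> import java.lang.invoke.MethodType;
> 
> public class LambdaFormDemo {
>     public static void main(String[] args) throws Throwable {
>         // Looking up and invoking a MethodHandle makes the JVM spin hidden
>         // java.lang.invoke.LambdaForm$MH classes for the call-site plumbing.
>         MethodHandle concat = MethodHandles.lookup().findVirtual(
>                 String.class, "concat",
>                 MethodType.methodType(String.class, String.class));
>         System.out.println((String) concat.invokeExact("foo", "bar"));
>     }
> }
> {code}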
> I tried to dump these classes via Arthas, but the attempt crashed the process.
> 3.
> I tried raising MaxGCPauseMillis from 50ms to 100ms; the Metaspace threshold 
> still keeps growing, so the problem is not solved.
> -------------
> Metadata GC Threshold log sample:
> {code:java}
> 2025-04-25T08:44:54.824+0800: 1549660.716: [GC pause (Metadata GC Threshold) 
> (young) (initial-mark), 0.0793971 secs]
> [Parallel Time: 73.7 ms, GC Workers: 8]
> [GC Worker Start (ms): Min: 1549660734.7, Avg: 1549660734.7, Max: 
> 1549660734.8, Diff: 0.1]
> [Ext Root Scanning (ms): Min: 47.2, Avg: 47.7, Max: 48.5, Diff: 1.3, Sum: 
> 381.3]
> [Update RS (ms): Min: 3.4, Avg: 3.5, Max: 3.6, Diff: 0.1, Sum: 28.1]
> [Processed Buffers: Min: 135, Avg: 155.5, Max: 183, Diff: 48, Sum: 1244]
> [Scan RS (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.0, Sum: 0.4]
> [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.2, Diff: 0.2, Sum: 0.3]
> [Object Copy (ms): Min: 4.3, Avg: 5.1, Max: 5.5, Diff: 1.2, Sum: 41.0]
> [Termination (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.4]
> [Termination Attempts: Min: 1, Avg: 87.0, Max: 121, Diff: 120, Sum: 696]
> [GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.7]
> [GC Worker Total (ms): Min: 56.5, Avg: 56.5, Max: 56.6, Diff: 0.2, Sum: 452.2]
> [GC Worker End (ms): Min: 1549660791.2, Avg: 1549660791.2, Max: 1549660791.3, 
> Diff: 0.1]
> [Code Root Fixup: 0.0 ms]
> [Code Root Purge: 0.0 ms]
> [Clear CT: 0.2 ms]
> [Other: 5.4 ms]
> [Choose CSet: 0.0 ms]
> [Ref Proc: 3.1 ms]
> [Ref Enq: 0.0 ms]
> [Redirty Cards: 0.1 ms]
> [Humongous Register: 0.0 ms]
> [Humongous Reclaim: 0.0 ms]
> [Free CSet: 0.3 ms]
> [Eden: 2272.0M(4896.0M)->0.0B(4896.0M) Survivors: 16.0M->16.0M Heap: 
> 3627.3M(8192.0M)->1358.9M(8192.0M)]
> [Times: user=0.48 sys=0.00, real=0.08 secs]
> 2025-04-25T08:44:54.904+0800: 1549660.796: [GC 
> concurrent-root-region-scan-start]
> 2025-04-25T08:44:54.917+0800: 1549660.809: [GC 
> concurrent-root-region-scan-end, 0.0130581 secs]
> 2025-04-25T08:44:54.917+0800: 1549660.809: [GC concurrent-mark-start]
> 2025-04-25T08:44:55.259+0800: 1549661.151: [GC concurrent-mark-end, 0.3418808 
> secs]
> 2025-04-25T08:44:55.268+0800: 1549661.160: [GC remark 
> 2025-04-25T08:44:55.268+0800: 1549661.160: [Finalize Marking, 0.0009449 secs] 
> 2025-04-25T08:44:55.269+0800: 1549661.161: [GC ref-proc, 0.0094765 secs] 
> 2025-04-25T08:44:55.279+0800: 1549661.170: [Unloading, 0.4803602 secs], 
> 0.4925154 secs]
> [Times: user=0.74 sys=0.48, real=0.50 secs] {code}
> (I have no Full GC log yet.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
