Thanks Daniel,

I have reviewed all the changelog <https://groovy-lang.org/changelogs.html>
titles between 4.0.8 and 4.0.26, and nothing in them seems strongly related to
my issue.
Upgrading is definitely one of my test options, but I will need weeks to push
it through the production environment test.
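
In the meantime, the meta sizes in my original mail below (before/after each
"Metadata GC Threshold" collection) can also be watched from inside the
process with plain java.lang.management. A minimal sketch (the class name,
the 60-second interval and the System.out logging are only illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MetaspaceWatcher {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(MetaspaceWatcher::logMetaspace,
                0, 60, TimeUnit.SECONDS);
    }

    // Prints used/committed size of the non-heap "Metaspace" pool (HotSpot name).
    static void logMetaspace() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if ("Metaspace".equals(pool.getName())) {
                MemoryUsage u = pool.getUsage();
                System.out.printf("metaspace used=%.2fMB committed=%.2fMB%n",
                        u.getUsed() / 1048576.0, u.getCommitted() / 1048576.0);
            }
        }
    }
}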


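On the LambdaForm$MH churn in the trace output quoted below: if the scripts
are re-parsed on every execution, each parse produces a fresh set of classes,
and the invokedynamic call sites in them can in turn generate new
LambdaForm$MH classes, which would match that load/unload pattern. A minimal
sketch of caching the compiled Script class instead (assuming GroovyShell is
the embedding API; the cache key and instantiation details are only
illustrative, not our actual code):

import groovy.lang.Binding;
import groovy.lang.GroovyShell;
import groovy.lang.Script;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachingScriptRunner {

    private final GroovyShell shell = new GroovyShell();

    // One compiled Script class per distinct source text; parsing the same
    // text again would otherwise load a brand-new set of classes every time.
    private final Map<String, Class<? extends Script>> compiled =
            new ConcurrentHashMap<>();

    public Object run(String sourceText, Binding binding) throws Exception {
        Class<? extends Script> scriptClass =
                compiled.computeIfAbsent(sourceText, src -> shell.parse(src).getClass());
        Script script = scriptClass.getDeclaredConstructor().newInstance();
        script.setBinding(binding);
        return script.run();
    }
}

Whether the application actually re-parses scripts like that is something I
still need to confirm in the code.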


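For the JIRA ticket, a reproducer would probably be along these lines: a loop
that compiles and evaluates a tiny script on every iteration while the
metaspace usage is watched (the script text and iteration count are
placeholders; I have not yet confirmed that it shows the same Full GC
behaviour):

import groovy.lang.Binding;
import groovy.lang.GroovyShell;

public class MetaspaceChurnRepro {

    public static void main(String[] args) {
        for (int i = 0; i < 1_000_000; i++) {
            Binding binding = new Binding();
            binding.setVariable("x", i);
            // A new GroovyShell plus evaluate() per iteration forces a full
            // compile each time, i.e. the class-generation pattern suspected
            // to drive the metaspace growth.
            Object result = new GroovyShell(binding).evaluate("x * 2 + 1");
            if (i % 100_000 == 0) {
                System.out.println(i + " -> " + result);
            }
        }
    }
}
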
On Sun, Apr 27, 2025 at 2:12 AM Daniel Sun <sun...@apache.org> wrote:

> Hi Luis,
>
> I find 4.0.8 a bit outdated. Please give Groovy 4.0.26 a try.
>
> BTW, could you submit a JIRA ticket and attach a script to reproduce the
> issue?
>
> Cheers,
> Daniel Sun
>
> On 2025/04/25 05:13:15 Luis Ashurei wrote:
> > Hello,
> >
> > I have a Java application that runs Groovy scripts heavily. After
> > running for weeks, the system started hitting a Full GC every day; each
> > one takes about one second, which is not acceptable.
> >
> > Any suggestions are appreciated.
> >
> > (I also started a Stack Overflow question here:
> > https://stackoverflow.com/questions/79591808/gc-pause-by-metadata-gc-threshold-for-weeks-then-turned-to-full-gc
> > )
> >
> > -------
> > Environment:
> > Server: 8 CPU, 16 GB memory, docker
> > OS: CentOS 6
> > JDK: Oracle JDK 1.8.0_191
> > JVM args:
> > -server -Xms8192m -Xmx8192m -XX:MaxDirectMemorySize=4g
> > -XX:MaxMetaspaceSize=768m -XX:+UseG1GC -XX:ParallelGCThreads=8
> > -XX:ConcGCThreads=2 -XX:MaxGCPauseMillis=50
> > Groovy ver: 4.0.8
> >
> > -------
> > What I have investigated:
> >
> > 1.
> > Before the first Full GC, each "Metadata GC Threshold" minor GC was
> > triggered at a slightly larger meta size than the previous one.
> > e.g.:
> > 1st:
> >     meta size before gc: 502.14
> >     meta size after gc: 359.07
> > 2nd:
> >     meta size before gc: 535.11
> >     meta size after gc: 361.08
> > 3rd:
> >     meta size before gc: 559.96
> >     meta size after gc: 358.70
> > ...
> >
> > After the triggering meta size grew to a threshold of about 630MB, the
> > collections turned into Full GCs every time. The first Full GC happened
> > about one month after startup, and then roughly every 24 hours after that.
> > It does not look like a leak, since every Full GC brings the meta size
> > back down to 300MB+.
> >
> > 2.
> > After enabling -XX:+TraceClassLoading and -XX:+TraceClassUnloading, I found
> > hundreds of log lines like the following printed from time to time:
> >
> > [Loaded java.lang.invoke.LambdaForm$MH/1818461913 from
> > java.lang.invoke.LambdaForm]
> > [Loaded java.lang.invoke.LambdaForm$MH/1950338341 from
> > java.lang.invoke.LambdaForm]
> > [Loaded java.lang.invoke.LambdaForm$MH/411925311 from
> > java.lang.invoke.LambdaForm]
> > ...
> > //Metadata GC Threshold
> > ...
> > [Unloading class java.lang.invoke.LambdaForm$MH/1871712942 0x00000007c7ac3828]
> > [Unloading class java.lang.invoke.LambdaForm$MH/55669925 0x00000007c7ac3428]
> > [Unloading class java.lang.invoke.LambdaForm$MH/1781124732 0x00000007c7ac3028]
> >
> > I tried to dump the classes via arthas, but it failed and the process crashed.
> >
> > 3.
> > I tried raising MaxGCPauseMillis from 50ms to 100ms, but the meta size
> > threshold still keeps increasing, so the problem is not solved.
> >
> > -------------
> > Metadata GC Threshold log sample:
> >
> > 2025-04-25T08:44:54.824+0800: 1549660.716: [GC pause (Metadata GC Threshold) (young) (initial-mark), 0.0793971 secs]
> > [Parallel Time: 73.7 ms, GC Workers: 8]
> > [GC Worker Start (ms): Min: 1549660734.7, Avg: 1549660734.7, Max: 1549660734.8, Diff: 0.1]
> > [Ext Root Scanning (ms): Min: 47.2, Avg: 47.7, Max: 48.5, Diff: 1.3, Sum: 381.3]
> > [Update RS (ms): Min: 3.4, Avg: 3.5, Max: 3.6, Diff: 0.1, Sum: 28.1]
> > [Processed Buffers: Min: 135, Avg: 155.5, Max: 183, Diff: 48, Sum: 1244]
> > [Scan RS (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.0, Sum: 0.4]
> > [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.2, Diff: 0.2, Sum: 0.3]
> > [Object Copy (ms): Min: 4.3, Avg: 5.1, Max: 5.5, Diff: 1.2, Sum: 41.0]
> > [Termination (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.4]
> > [Termination Attempts: Min: 1, Avg: 87.0, Max: 121, Diff: 120, Sum: 696]
> > [GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.7]
> > [GC Worker Total (ms): Min: 56.5, Avg: 56.5, Max: 56.6, Diff: 0.2, Sum: 452.2]
> > [GC Worker End (ms): Min: 1549660791.2, Avg: 1549660791.2, Max: 1549660791.3, Diff: 0.1]
> > [Code Root Fixup: 0.0 ms]
> > [Code Root Purge: 0.0 ms]
> > [Clear CT: 0.2 ms]
> > [Other: 5.4 ms]
> > [Choose CSet: 0.0 ms]
> > [Ref Proc: 3.1 ms]
> > [Ref Enq: 0.0 ms]
> > [Redirty Cards: 0.1 ms]
> > [Humongous Register: 0.0 ms]
> > [Humongous Reclaim: 0.0 ms]
> > [Free CSet: 0.3 ms]
> > [Eden: 2272.0M(4896.0M)->0.0B(4896.0M) Survivors: 16.0M->16.0M Heap: 3627.3M(8192.0M)->1358.9M(8192.0M)]
> > [Times: user=0.48 sys=0.00, real=0.08 secs]
> > 2025-04-25T08:44:54.904+0800: 1549660.796: [GC concurrent-root-region-scan-start]
> > 2025-04-25T08:44:54.917+0800: 1549660.809: [GC concurrent-root-region-scan-end, 0.0130581 secs]
> > 2025-04-25T08:44:54.917+0800: 1549660.809: [GC concurrent-mark-start]
> > 2025-04-25T08:44:55.259+0800: 1549661.151: [GC concurrent-mark-end, 0.3418808 secs]
> > 2025-04-25T08:44:55.268+0800: 1549661.160: [GC remark 2025-04-25T08:44:55.268+0800: 1549661.160: [Finalize Marking, 0.0009449 secs] 2025-04-25T08:44:55.269+0800: 1549661.161: [GC ref-proc, 0.0094765 secs] 2025-04-25T08:44:55.279+0800: 1549661.170: [Unloading, 0.4803602 secs], 0.4925154 secs]
> > [Times: user=0.74 sys=0.48, real=0.50 secs]
> >
> > (I have no Full GC log sample yet.)
> >
>
