[ https://issues.apache.org/jira/browse/GROOVY-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17953806#comment-17953806 ]
ASF GitHub Bot commented on GROOVY-11640:
-----------------------------------------

Copilot commented on code in PR #2233:
URL: https://github.com/apache/groovy/pull/2233#discussion_r2105761391


##########
src/main/java/org/codehaus/groovy/vmplugin/v8/IndyInterface.java:
##########
@@ -327,8 +328,15 @@ public static Object fromCache(CacheableCallSite callSite, Class<?> sender, Stri
         }
         if (mhw.isCanSetTarget() && (callSite.getTarget() != mhw.getTargetMethodHandle()) && (mhw.getLatestHitCount() > INDY_OPTIMIZE_THRESHOLD)) {
-            callSite.setTarget(mhw.getTargetMethodHandle());
-            if (LOG_ENABLED) LOG.info("call site target set, preparing outside invocation");
+            if (callSite.getFallbackRound().get() > INDY_FALLBACK_CUTOFF) {
+                if (callSite.getTarget() != callSite.getDefaultTarget()) {
+                    // reset the call site target to default forever to avoid JIT deoptimization storm further
+                    callSite.setTarget(callSite.getDefaultTarget());
+                }
+            } else {
+                callSite.setTarget(mhw.getTargetMethodHandle());
+                if (LOG_ENABLED) LOG.info("call site target set, preparing outside invocation");

Review Comment:
   [nitpick] Consider refactoring or adding inline comments to clarify the nested conditional logic for better readability and maintainability.
   ```suggestion
           if (shouldResetCallSiteTarget(callSite)) {
               resetCallSiteTarget(callSite);
           } else {
               updateCallSiteTarget(callSite, mhw);
   ```

##########
src/main/java/org/codehaus/groovy/vmplugin/v8/CacheableCallSite.java:
##########
@@ -119,6 +120,11 @@ public long incrementFallbackCount() {
     }
 
     public void resetFallbackCount() {
         fallbackCount.set(0);
+        fallbackRound.incrementAndGet();

Review Comment:
   Consider adding JavaDoc or inline comments to explain the role of fallbackRound in the fallback reset process to enhance code clarity.

##########
src/main/java/org/codehaus/groovy/vmplugin/v8/IndyInterface.java:
##########
@@ -48,8 +48,9 @@
  * methods and classes.
  */
 public class IndyInterface {
-    private static final long INDY_OPTIMIZE_THRESHOLD = SystemUtil.getLongSafe("groovy.indy.optimize.threshold", 10_000L);
-    private static final long INDY_FALLBACK_THRESHOLD = SystemUtil.getLongSafe("groovy.indy.fallback.threshold", 10_000L);
+    private static final long INDY_OPTIMIZE_THRESHOLD = SystemUtil.getLongSafe("groovy.indy.optimize.threshold", 1_000L);
+    private static final long INDY_FALLBACK_THRESHOLD = SystemUtil.getLongSafe("groovy.indy.fallback.threshold", 1_000L);
+    private static final long INDY_FALLBACK_CUTOFF = SystemUtil.getLongSafe("groovy.indy.fallback.cutoff", 100L);

Review Comment:
   Consider adding a comment that explains the purpose of INDY_FALLBACK_CUTOFF and how it impacts the fallback mechanism to aid future maintainers.
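For illustration, one possible shape of the extraction asked for in the first review comment. The helper names come from the suggestion block and are hypothetical (they do not exist in IndyInterface today); the sketch assumes the members visible in the hunk above, and that mhw is Groovy's MethodHandleWrapper.

```java
// Hypothetical helpers inside IndyInterface, next to fromCache; sketch only.

/** Has this call site completed more fallback rounds than INDY_FALLBACK_CUTOFF allows? */
private static boolean shouldResetCallSiteTarget(CacheableCallSite callSite) {
    return callSite.getFallbackRound().get() > INDY_FALLBACK_CUTOFF;
}

/** Pin the call site to its default target for good, to avoid a JIT deoptimization storm. */
private static void resetCallSiteTarget(CacheableCallSite callSite) {
    if (callSite.getTarget() != callSite.getDefaultTarget()) {
        callSite.setTarget(callSite.getDefaultTarget());
    }
}

/** Install the cached method handle as the call site target, as the original branch did. */
private static void updateCallSiteTarget(CacheableCallSite callSite, MethodHandleWrapper mhw) {
    callSite.setTarget(mhw.getTargetMethodHandle());
    if (LOG_ENABLED) LOG.info("call site target set, preparing outside invocation");
}
```

Either way, the behaviour from the diff is unchanged: once a call site crosses the cutoff it is reset to its default target instead of being retargeted again.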
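Likewise, for the second and third review comments, a sketch of the kind of documentation they ask for. The wording is illustrative, not taken from the Groovy sources; it assumes fallbackRound is a java.util.concurrent.atomic.AtomicLong, which its get()/incrementAndGet() usage in the hunks suggests.

```java
// In CacheableCallSite (suggested wording only):
/**
 * Counts completed fallback rounds: incremented each time resetFallbackCount() clears
 * the fallback count. IndyInterface compares this value with INDY_FALLBACK_CUTOFF to
 * decide when to stop retargeting the call site and pin it to its default target.
 */
private final AtomicLong fallbackRound = new AtomicLong();

public void resetFallbackCount() {
    fallbackCount.set(0);
    fallbackRound.incrementAndGet(); // one more fallback round completed
}

// In IndyInterface (suggested wording only):
// Maximum number of fallback rounds before a call site is permanently reset to its
// default target, to break the retarget/deoptimize cycle; configurable via
// -Dgroovy.indy.fallback.cutoff (default 100).
private static final long INDY_FALLBACK_CUTOFF = SystemUtil.getLongSafe("groovy.indy.fallback.cutoff", 100L);
```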
> GC pause by Metadata GC Threshold for weeks then turned to Full GC
> ------------------------------------------------------------------
>
>                 Key: GROOVY-11640
>                 URL: https://issues.apache.org/jira/browse/GROOVY-11640
>             Project: Groovy
>          Issue Type: Improvement
>    Affects Versions: 4.0.8
>         Environment: Server: 8 CPU, 16 GB memory, docker;
> OS: Centos6;
> JDK: Oracle jdk8-191;
> JVM args:
> -server -Xms8192m -Xmx8192m -XX:MaxDirectMemorySize=4g
> -XX:MaxMetaspaceSize=768m -XX:+UseG1GC -XX:ParallelGCThreads=8
> -XX:ConcGCThreads=2 -XX:MaxGCPauseMillis=50
> Groovy ver: 4.0.8
>            Reporter: Suo L.
>            Assignee: Daniel Sun
>            Priority: Major
>              Labels: Metaspace, g1gc, java-8, jvm
>         Attachments: groovy_usage_1.txt, groovy_usage_2.txt
>
> I have a Java application that runs Groovy scripts heavily. After running for
> weeks, the system started hitting a Full GC every day, each one pausing for about
> {*}one second, which is not acceptable{*}.
> How can I reduce the STW GC time?
> Environment:
> {code:java}
> Server: 8 CPU, 16 GB memory, docker
> OS: Centos6
> JDK: Oracle jdk8-191
> JVM args:
> -server -Xms8192m -Xmx8192m -XX:MaxDirectMemorySize=4g
> -XX:MaxMetaspaceSize=768m -XX:+UseG1GC -XX:ParallelGCThreads=8
> -XX:ConcGCThreads=2 -XX:MaxGCPauseMillis=50
> Groovy ver: 4.0.8 {code}
> -------
> What I have investigated:
> 1.
> Before the first Full GC, every "Metadata GC Threshold" minor GC left the metaspace
> threshold a little higher than before.
> e.g. (sizes in MB):
> 1st:
> meta size before gc: 502.14
> meta size after gc: 359.07
> 2nd:
> meta size before gc: 535.11
> meta size after gc: 361.08
> 3rd:
> meta size before gc: 559.96
> meta size after gc: 358.70
> ...
> After the threshold grew to about 630 MB, these collections turned into Full GCs every time.
> The first Full GC happened about one month after startup, and then roughly every 24 hours.
> There is no resource leak: after every Full GC the metaspace size drops back to about 300 MB.
> 2.
> After enabling -XX:+TraceClassLoading and -XX:+TraceClassUnloading, I found hundreds
> of log lines like the following printed from time to time:
> {code:java}
> [Loaded java.lang.invoke.LambdaForm$MH/1818461913 from java.lang.invoke.LambdaForm]
> [Loaded java.lang.invoke.LambdaForm$MH/1950338341 from java.lang.invoke.LambdaForm]
> [Loaded java.lang.invoke.LambdaForm$MH/411925311 from java.lang.invoke.LambdaForm]
> ...
> //Metadata GC Threshold
> ...
> [Unloading class java.lang.invoke.LambdaForm$MH/1871712942 0x00000007c7ac3828]
> [Unloading class java.lang.invoke.LambdaForm$MH/55669925 0x00000007c7ac3428]
> [Unloading class java.lang.invoke.LambdaForm$MH/1781124732 0x00000007c7ac3028] {code}
> I tried to dump these classes via arthas, but the attempt failed and the process crashed.
> 3.
> I tried raising MaxGCPauseMillis from 50 ms to 100 ms; the metaspace threshold still
> keeps growing, so the problem is not solved.
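A minimal sketch (not part of the quoted report) of how the "meta size before gc / after gc" figures above can be captured from inside the JVM with the standard java.lang.management API; the pool name "Metaspace" is what HotSpot exposes on JDK 8.

{code:java}
// Samples the Metaspace pool via the standard java.lang.management API.
// Logging this periodically shows the same growth pattern as the numbers above
// without attaching an external tool such as arthas.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MetaspaceProbe {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if ("Metaspace".equals(pool.getName())) {
                MemoryUsage usage = pool.getUsage();
                System.out.printf("Metaspace used=%.2fMB committed=%.2fMB%n",
                        usage.getUsed() / 1048576.0, usage.getCommitted() / 1048576.0);
            }
        }
    }
}
{code}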
> -------------
> Metadata GC Threshold log sample:
> {code:java}
> 2025-04-25T08:44:54.824+0800: 1549660.716: [GC pause (Metadata GC Threshold) (young) (initial-mark), 0.0793971 secs]
>    [Parallel Time: 73.7 ms, GC Workers: 8]
>       [GC Worker Start (ms): Min: 1549660734.7, Avg: 1549660734.7, Max: 1549660734.8, Diff: 0.1]
>       [Ext Root Scanning (ms): Min: 47.2, Avg: 47.7, Max: 48.5, Diff: 1.3, Sum: 381.3]
>       [Update RS (ms): Min: 3.4, Avg: 3.5, Max: 3.6, Diff: 0.1, Sum: 28.1]
>          [Processed Buffers: Min: 135, Avg: 155.5, Max: 183, Diff: 48, Sum: 1244]
>       [Scan RS (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.0, Sum: 0.4]
>       [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.2, Diff: 0.2, Sum: 0.3]
>       [Object Copy (ms): Min: 4.3, Avg: 5.1, Max: 5.5, Diff: 1.2, Sum: 41.0]
>       [Termination (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.4]
>          [Termination Attempts: Min: 1, Avg: 87.0, Max: 121, Diff: 120, Sum: 696]
>       [GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.7]
>       [GC Worker Total (ms): Min: 56.5, Avg: 56.5, Max: 56.6, Diff: 0.2, Sum: 452.2]
>       [GC Worker End (ms): Min: 1549660791.2, Avg: 1549660791.2, Max: 1549660791.3, Diff: 0.1]
>    [Code Root Fixup: 0.0 ms]
>    [Code Root Purge: 0.0 ms]
>    [Clear CT: 0.2 ms]
>    [Other: 5.4 ms]
>       [Choose CSet: 0.0 ms]
>       [Ref Proc: 3.1 ms]
>       [Ref Enq: 0.0 ms]
>       [Redirty Cards: 0.1 ms]
>       [Humongous Register: 0.0 ms]
>       [Humongous Reclaim: 0.0 ms]
>       [Free CSet: 0.3 ms]
>    [Eden: 2272.0M(4896.0M)->0.0B(4896.0M) Survivors: 16.0M->16.0M Heap: 3627.3M(8192.0M)->1358.9M(8192.0M)]
>  [Times: user=0.48 sys=0.00, real=0.08 secs]
> 2025-04-25T08:44:54.904+0800: 1549660.796: [GC concurrent-root-region-scan-start]
> 2025-04-25T08:44:54.917+0800: 1549660.809: [GC concurrent-root-region-scan-end, 0.0130581 secs]
> 2025-04-25T08:44:54.917+0800: 1549660.809: [GC concurrent-mark-start]
> 2025-04-25T08:44:55.259+0800: 1549661.151: [GC concurrent-mark-end, 0.3418808 secs]
> 2025-04-25T08:44:55.268+0800: 1549661.160: [GC remark 2025-04-25T08:44:55.268+0800: 1549661.160: [Finalize Marking, 0.0009449 secs] 2025-04-25T08:44:55.269+0800: 1549661.161: [GC ref-proc, 0.0094765 secs] 2025-04-25T08:44:55.279+0800: 1549661.170: [Unloading, 0.4803602 secs], 0.4925154 secs]
>  [Times: user=0.74 sys=0.48, real=0.50 secs] {code}
> (Have no Full GC log yet.)

--
This message was sent by Atlassian Jira
(v8.20.10#820010)