Andrea,

>> Sometimes a Full GC was able to clean up some space from Metaspace, but
>> only as part of a final last-ditch collection effort:
>>
>> 43618.504: [Full GC (Last ditch collection) 1386M->250M(20G), 1.6455823 secs]
>>    [Eden: 0.0B(6408.0M)->0.0B(6408.0M) Survivors: 0.0B->0.0B Heap: 1386.7M(20.0G)->250.5M(20.0G)], [Metaspace: 1646471K->163253K(1843200K)]
>>    [Times: user=2.23 sys=0.10, real=1.65 secs]
>> 49034.392: [Full GC (Last ditch collection) 1491M->347M(20G), 1.9965534 secs]
>>    [Eden: 0.0B(5600.0M)->0.0B(5600.0M) Survivors: 0.0B->0.0B Heap: 1491.1M(20.0G)->347.9M(20.0G)], [Metaspace: 1660804K->156199K(2031616K)]
>>    [Times: user=2.78 sys=0.10, real=2.00 secs]
>
> This is interesting, because I see this pattern for metaspace GC:
>
> https://ibb.co/b2B9HQ
>
> Something clears it, but it seems it's not the full GC. I don't think
> anything happens in the app that would cause that much metaspace to
> become instantly dead, so I think the full GC is not cleaning metaspace
> the way it could. Maybe it's a light clean…
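For anyone wanting to reproduce logs in this shape: on JDK 8 they come from the detailed GC logging flags. A minimal sketch of an invocation (the log path, heap/metaspace sizes, and `app.jar` are placeholders, not values from this thread) might look like:

```shell
# JDK 8 GC logging that emits "[Full GC (Last ditch collection) ...]"
# style lines, including per-collection Metaspace before->after usage.
java \
  -XX:+UseG1GC \
  -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps \
  -Xloggc:/var/log/app/gc.log \
  -Xmx20g \
  -XX:MaxMetaspaceSize=2g \
  -jar app.jar
```

Note that on JDK 9+ these flags were replaced by unified logging (`-Xlog:gc*`), so this applies to the JDK 8 setup being discussed here.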
I believe MinMetaspaceFreeRatio and MaxMetaspaceFreeRatio are playing a
role there. The default values for them in JDK 1.8 are 40 and 70,
respectively. You can read more about them here:

https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/considerations.html

>> That is the default value. Please see:
>> http://docs.oracle.com/javase/8/docs/technotes/guides/rmi/sunrmiproperties.html
>
> They must have changed it recently; it was 180k when I first checked that
> option. Do you think it's worth increasing it? I don't see a full GC every
> hour…

I have seen that if you use the G1 collector, the JVM completely ignores
those RMI DGC parameter values. If you use the Parallel collector, you
will see periodic Full GCs (System.gc) being triggered based on those
parameter values. I have not had a chance to look into this G1 behavior
in depth. As G1 is newer and much more efficient than the other
collectors (I know this statement can draw some arguments), it is
probably cleaning up those RMI objects more efficiently.

I would also suggest that, if you have some budget, you get a decent APM
such as AppDynamics, New Relic, or Dynatrace to monitor your production
system 24x7x365. Trust me, you will be able to identify and solve this
type of sporadic slowness issue very quickly.

Thanks!
Suvendu
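To experiment with the knobs mentioned above, the metaspace free-ratio flags and the RMI DGC intervals can be set together on the command line. The values below are just the JDK 8 documented defaults restated explicitly, not tuning recommendations, and `app.jar` is a placeholder:

```shell
# MinMetaspaceFreeRatio: grow the metaspace high-water mark when less than
#   this percentage is free after a GC (JDK 8 default: 40).
# MaxMetaspaceFreeRatio: allow the high-water mark to shrink when more than
#   this percentage is free after a GC (JDK 8 default: 70).
# sun.rmi.dgc.{client,server}.gcInterval: maximum interval, in milliseconds,
#   between RMI DGC-forced full collections (JDK 8 default: 3600000 = 1 hour).
java \
  -XX:MinMetaspaceFreeRatio=40 \
  -XX:MaxMetaspaceFreeRatio=70 \
  -Dsun.rmi.dgc.client.gcInterval=3600000 \
  -Dsun.rmi.dgc.server.gcInterval=3600000 \
  -jar app.jar
```

As noted above, whether the two `sun.rmi.dgc` properties actually cause periodic Full GCs appears to depend on the collector in use, so verify the effect in your own GC logs rather than assuming it.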