James,

> -----Original Message-----
> From: James H. H. Lampert <jam...@touchtonecorp.com>
> Sent: Tuesday, February 11, 2020 6:41 PM
> To: Tomcat Users List <users@tomcat.apache.org>
> Subject: JVM job for Tomcat taking lots and lots of CPU
> 
> Ladies and Gentlemen:
> 
> We have a customer installation in which the JVM job for our Tomcat server
> is frequently using massive amounts of CPU.
> 
> It's Tomcat 7.0.67, running on an AS/400, in a 64-bit Java 7 JVM, with
> -Xms3096m and -Xmx5120m JVM arguments.
> 
> GC information on the JVM job shows:
> > Garbage collected heap:
> >   Initial heap size  . . . . . . . . . :          3096.000M
> >   Maximum heap size  . . . . . . . . . :          5120.000M
> >   Current heap size  . . . . . . . . . :          4458.562M
> >   Heap in use  . . . . . . . . . . . . :          1907.673M
> > Other memory:
> >   Internal (break) memory size . . . . :           504.982M
> >   JIT memory size  . . . . . . . . . . :            74.000M
> >   Shared classes memory size . . . . . :             0.000M
> > General GC information:
> >   Current GC cycle . . . . . . . . . . :               2184
> >   GC policy type . . . . . . . . . . . :             GENCON
> >   Current GC cycle time  . . . . . . . :                552
> >   Accumulated GC time  . . . . . . . . :            5108241
> 
> It seems to be doing a lot of garbage-collecting.
> 
> Would switching to Java 8 help? Would switching to 7.0.93 help?
> 
> --
> James H. H. Lampert

I haven't worked with Java on the AS/400.

You said high CPU is the symptom.  Frequent GC is certainly one possible cause, 
but there are others.  If possible, take several thread dumps at short 
intervals (5-10 seconds) and review them to see what they have in common.  If 
you're not a Java developer, ask one for help.  On Linux you can correlate the 
thread IDs in the dumps with the process IDs shown by top to see how much CPU 
each thread is using or has used.
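
If there's no easy equivalent of top on that platform, you can also ask the 
JVM itself, since the ThreadMXBean API is standard.  A rough sketch (the class 
name is mine, and where you run it -- a scratch JSP, a small utility class, 
whatever -- is up to you):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Print accumulated CPU time per thread from inside the running JVM.
    public class ThreadCpuReport {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            if (!threads.isThreadCpuTimeSupported()) {
                System.out.println("Per-thread CPU time not supported here");
                return;
            }
            for (long id : threads.getAllThreadIds()) {
                ThreadInfo info = threads.getThreadInfo(id);
                long cpuNanos = threads.getThreadCpuTime(id);  // -1 if unavailable
                if (info != null && cpuNanos >= 0) {
                    System.out.printf("%-40s cpu=%d ms state=%s%n",
                            info.getThreadName(), cpuNanos / 1000000,
                            info.getThreadState());
                }
            }
        }
    }

Run it a couple of times a few seconds apart; the threads whose CPU time grows 
fastest are the ones to look for in the thread dumps.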

I can't tell from the data you provided whether GC is the culprit.  The first 
thing I always look at is the throughput, which is 1 - (GC time / total time).  
You want that to be as close to 100% as possible.  Take the accumulated GC time 
on the last line, divide it by the time the app has been running (in the same 
units), and subtract the result from 1.  Hopefully that number is up around .98 
or .99.  You also have to keep the time range in mind.  If the app sits idle at 
night, the overall throughput will look good because there were so few GCs 
during the idle time.  However, if there has been a surge in activity over the 
last few minutes, the throughput over just that short range could look very 
different.
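
To make that concrete with made-up numbers: if the accumulated GC time of 
5108241 is in milliseconds (I'm assuming it is -- check the help for that 
display) and the JVM had been up for, say, 5 days (432,000,000 ms), the 
throughput would be 1 - 5108241/432000000, or roughly 0.988, which would be 
fine.  The same figures are also available at runtime from the standard JMX 
beans; a sketch (again, the class name is mine):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Compute GC throughput = 1 - (total GC time / JVM uptime).
    public class GcThroughput {
        public static void main(String[] args) {
            long gcMillis = 0;
            for (GarbageCollectorMXBean gc :
                    ManagementFactory.getGarbageCollectorMXBeans()) {
                long t = gc.getCollectionTime();  // -1 if not supported
                if (t > 0) {
                    gcMillis += t;
                }
            }
            long upMillis = ManagementFactory.getRuntimeMXBean().getUptime();
            System.out.printf("GC time=%d ms, uptime=%d ms, throughput=%.4f%n",
                    gcMillis, upMillis, 1.0 - ((double) gcMillis / upMillis));
        }
    }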

I assume "current GC cycles" means that this is the 2184th GC since the JVM 
started.  Ok, but how frequent are they?  I've been busy apps do 5-20 minor GCs 
per minute, so 2000 total isn't a scary number to me.
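
The frequency is just that count divided by the uptime.  For example (again 
with a made-up uptime), if the JVM had been up for 3 days, that's 4320 minutes, 
so 2184 cycles works out to roughly one GC every two minutes -- not alarming by 
itself.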

Does "gencon" mean it's collecting the old generation at that moment?  If there 
are really 2000 of those, I would be mildly concerned.  The generations should 
be sized so that the old generation grows slowly.  Some of the apps I work with 
only do 1 or 2 of those per day.  This doesn't necessarily indicate a bug, 
however.  It might just mean that the young generation needs to be increased.
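
I haven't used the IBM JVM's options myself, but if it behaves like other 
J9-based JVMs, turning on verbose GC and sizing the nursery would look 
something like the following.  Treat these as suggestions to verify against 
IBM's documentation for your JVM level, not gospel:

    -verbose:gc      log each GC cycle, so you can see how many are
                     nursery collections vs. global (old generation) ones
    -Xmn1024m        set the nursery (young generation) size; 1024m is
                     just a placeholder value

That would tell you directly how often the old generation is actually being 
collected.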

The fact that your used heap is so much lower than your total heap is a good 
sign.

Is JIT memory size how much space is allocated for compiled code?  How much is 
actually used?  By default, HotSpot allocates 240MB and I regularly see apps 
that use more than 74MB.  I don't know what your JVM does when that fills up.  
HotSpot used to puke but is better behaved now.  If you've filled up the space 
allocated for compiled code, I could definitely see that contributing to high 
CPU, because of (a) code running in interpreted mode and (b) the JIT compiler 
having to run.
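
One way to answer the "how much is actually used" question from inside the JVM 
is to dump the non-heap memory pools.  The pool names differ between JVMs 
(HotSpot calls it "Code Cache"; I don't know what the IBM JVM calls it), so 
this sketch just prints them all:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;
    import java.lang.management.MemoryUsage;

    // List every non-heap memory pool with its used/committed/max sizes.
    public class NonHeapPools {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool :
                    ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.NON_HEAP) {
                    MemoryUsage u = pool.getUsage();
                    long max = u.getMax();  // -1 means no defined maximum
                    System.out.printf("%-30s used=%dM committed=%dM max=%s%n",
                            pool.getName(),
                            u.getUsed() / (1024 * 1024),
                            u.getCommitted() / (1024 * 1024),
                            max < 0 ? "undefined" : (max / (1024 * 1024)) + "M");
                }
            }
        }
    }

If the pool that holds compiled code is at or near its maximum, that would 
support the interpreted-mode theory.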

John

