On 8/29/17 8:09 AM, Ing. Andrea Vettori wrote:
>> On 29 Aug 2017, at 12:29, Suvendu Sekhar Mondal
>> <suv3...@gmail.com> wrote:
>> On Tue, Aug 29, 2017 at 2:54 PM, Ing. Andrea Vettori
>> <a.vett...@b2bires.com> wrote:
>>> - with a freshly started Tomcat instance, the task takes
>>> around 0.8 seconds. Most of the time is spent in the two RMI
>>> calls the task makes. - with an instance that has been running
>>> for some time, the time can reach 2-3 seconds, occasionally
>>> 5-6 seconds. Most of the time is still spent in the RMI calls,
>>> i.e. it is the RMI calls that slow down.
>> Silly question: do you mean the RMI calls originating from Tomcat
>> are getting slower over time, or that JBoss is taking longer to
>> return the response?
> Thanks for your help. What I see is that the HTTP requests are
> slower. Looking at the details of that specific request (putting a
> product in the cart) using System.nanoTime, I can see that most of
> the time is spent in the RMI calls. I'm pretty sure it's not
> JBoss that is slower, because other calls made at the same time
> from freshly started Tomcats or from a Java client are not slow.
>>> - restarting the JVM fixes the issue. It also ***seems*** (I'm
>>> still testing this, since there's no 'metaspace GC trigger'
>>> command available) that when Metaspace is garbage collected,
>>> Tomcat then performs like a fresh instance.
>> Yes, no command to only garbage collect Metaspace area. Without
>> recycling Tomcat, you can trigger a Full GC either using VisualVM
>> or jcmd's GC.run command.
> I think an explicit GC is not collecting Metaspace; that's what I
> observed when starting a GC from the Java console.
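For what it's worth, whether an explicit GC shrinks Metaspace can be checked from inside the JVM via JMX; a minimal sketch, assuming the HotSpot pool name "Metaspace", and noting that System.gc() is only a request (a no-op under -XX:+DisableExplicitGC):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceCheck {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // HotSpot exposes pools named "Metaspace" and "Compressed Class Space"
            if (pool.getName().contains("Metaspace")) {
                long before = pool.getUsage().getUsed();
                System.gc(); // request a full GC; ignored if -XX:+DisableExplicitGC is set
                long after = pool.getUsage().getUsed();
                System.out.printf("%s: %d KB used before GC, %d KB after%n",
                        pool.getName(), before / 1024, after / 1024);
            }
        }
    }
}
```

Comparing the before/after numbers against the Java console graph would show whether the console's GC button and an in-process System.gc() behave the same.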
>>> Since we're using more than one Tomcat instance (2 in
>>> production for this website, 1 for development), I can see that
>>> the issue is isolated to one Tomcat or the JVM/host where it
>>> runs, because the other Tomcat instances behave well at the same
>>> time one is slow. The same JBoss/Postgres backend is used by
>>> Java batches and fat clients, and it works well with consistent
>>> performance. To clarify: at the moment one production Tomcat
>>> that has been running for some time finishes the task in 3
>>> seconds, the development Tomcat or a freshly started production
>>> Tomcat instance does the same task in less than one second. Note
>>> that repeating the task always gives consistent results, i.e.
>>> the instance that has been running for some time is always slow,
>>> the freshly started instance is always fast.
>>> Tomcat is running with these VM options:
>>> -Xms20480m -Xmx20480m -Dsun.rmi.dgc.client.gcInterval=3600000
>>> -Dsun.rmi.dgc.server.gcInterval=3600000 -Xloggc:/tmp/gc.txt
>>> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseG1GC
>>> -XX:ReservedCodeCacheSize=1g -XX:InitialCodeCacheSize=256m
>>> -XX:+UseHugeTLBFS -XX:MetaspaceSize=1g -XX:MaxMetaspaceSize=2g
>>> Some of the options have been recently added (for example the
>>> increase in code cache size) but it seems they had no impact
>>> on the issue.
>> Is there any specific reason to trigger a Full GC every 60
>> minutes (-Dsun.rmi.dgc.client.gcInterval=3600000)? I think that's
> That's the value I saw suggested "everywhere" (the default
> is too short). Note that in the GC logs I don't see full GCs every
> hour, so maybe that option is not triggering a full GC?
>>> Metaspace grows to 1.6 GB before being collected; after a
>>> collection it is around 200 MB. Heap usage is variable: it
>>> usually stays under 10 GB and is around 1 GB after a
>>> collection. CPU usage rarely goes over 10%. Loaded classes are
>>> between 20k and 40k. Active sessions are around 100-120 per
>>> instance.
>> That sounds like healthy behavior to me. It means GC is
>> occurring and Metaspace is getting cleaned up. Why do you think
>> Metaspace is the problem?
> Because I can observe that when Metaspace is collected, the
> requests become fast. I observed that a few hours ago, looking at
> the Metaspace graph in the Java console and making requests just
> after the collection.
>> From the given data, my first impression is JVM is not having
>> any memory pressure. If you can post your gc.txt file to this
>> group, I would be happy to check it.
> Thanks, here's a link for the gceasy report:
> and the txt log file: http://ge.tt/1Un0DLm2
> Please consider that I triggered a Full GC manually two times on
> this server.
>> Another thing: is there any way you can measure the processing
>> time in JBoss for each RMI call?
> At the moment I'm not measuring the time on the JBoss side, but
> I'm considering a telemetry product in the future. I'm measuring
> the time on the client side (Tomcat or fat client).
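Client-side timing of that kind can be factored into a small helper; a sketch under the assumption that the real call is an RMI stub invocation (the "addToCart" label and the sleep standing in for remote latency are invented for the example):

```java
import java.util.concurrent.Callable;

public class Timed {
    // Runs a call and prints its wall-clock duration, mirroring the
    // System.nanoTime() measurement described above.
    static <T> T time(String label, Callable<T> call) throws Exception {
        long t0 = System.nanoTime();
        try {
            return call.call();
        } finally {
            long ms = (System.nanoTime() - t0) / 1_000_000;
            System.out.printf("%s took %d ms%n", label, ms);
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a real RMI invocation such as cart.addProduct(...)
        String result = time("addToCart", () -> {
            Thread.sleep(50); // simulated remote latency
            return "ok";
        });
        System.out.println(result);
    }
}
```

Wrapping both the Tomcat side and (eventually) the JBoss side in the same helper would make the two sets of numbers directly comparable.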
I think it's critical to get metrics on the JBoss side, otherwise you
don't even know where the problem might be. It's good information to
have that the application running within Tomcat performs well
immediately following a metaspace collection, but it's not enough
information to inform a good decision.
RMI is known for flagrantly wasting permgen/metaspace because of all
the Proxy objects and special throw-away Class objects it uses, etc.
But I'm not sure why the metaspace filling up would have such a
dramatic effect on performance. If you were stalling during a
metaspace collection, it might make sense, but that's not what you are
reporting. It's like saying that HTTP traffic slows down on your PC
until you empty your Recycle Bin.
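For context on the proxy point: each java.lang.reflect.Proxy class that RMI generates is a real Class object whose metadata sits in Metaspace until its defining loader is collected. A small illustration (the Service interface is invented for the example):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyChurn {
    interface Service { String call(); }

    public static void main(String[] args) {
        // Every distinct proxy class generated this way is loaded into Metaspace
        InvocationHandler h = (proxy, method, a) -> "ok";
        Service s = (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[] { Service.class }, h);
        System.out.println(s.call() + " via " + s.getClass().getName());
    }
}
```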
At any rate, this is not a problem with Tomcat itself: this is likely
entirely JVM-related. I'm sure everyone is happy to discuss it, here
(on a Tomcat-specific mailing list), but there aren't any
Tomcat-related settings that you will be able to tweak that affect
your environment in any way.