Andrea,

On 8/29/17 8:09 AM, Ing. Andrea Vettori wrote:
>> On 29 Aug 2017, at 12:29, Suvendu Sekhar Mondal
>> <suv3...@gmail.com> wrote:
>> 
>> On Tue, Aug 29, 2017 at 2:54 PM, Ing. Andrea Vettori 
>> <a.vett...@b2bires.com> wrote:
>>> - with a freshly started Tomcat instance, the task takes
>>> around 0.8 seconds. Most of the time is spent on the two RMI
>>> calls the task makes.
>>> - with an instance that has been running for some time, the
>>> time can reach 2-3 seconds, occasionally 5-6 seconds. Most of
>>> the time is still spent on the RMI calls, i.e. it is the RMI
>>> calls that slow down.
>> 
>> Silly question: do you mean the RMI calls originating from
>> Tomcat are getting slower over time, or that JBoss is taking
>> longer to return the response?
> 
> Thanks for your help. What I see is that the HTTP requests are
> slower. Looking at the details of that specific request (putting a
> product in the cart) using System.nanoTime, I can see that most of
> the time is spent in the RMI calls. I'm pretty sure it's not
> JBoss that is slower, because the same calls made at the same time
> from freshly started Tomcats or from a Java client are not slow.
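>
> For reference, a minimal sketch of the kind of client-side timing
> I'm doing (hypothetical names; the real remote interface differs):
>
>     // cartService is a hypothetical RMI remote proxy
>     long start = System.nanoTime();
>     cartService.addProduct(productId, qty);  // the RMI call
>     long ms = (System.nanoTime() - start) / 1_000_000;
>     System.out.println("addProduct took " + ms + " ms");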
> 
>> 
>>> - restarting the JVM fixes the issue. It also ***seems*** (I'm
>>> still testing this, since there is apparently no 'metaspace GC
>>> trigger' command available) that when the Metaspace is garbage
>>> collected, Tomcat then performs like a fresh instance.
>>> 
>> 
>> Yes, there is no command to garbage collect only the Metaspace
>> area. Without recycling Tomcat, you can trigger a Full GC either
>> from VisualVM or with jcmd's GC.run command.
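>>
>> For example (assuming a jcmd from the same JDK and Tomcat's PID):
>>
>>     jcmd <pid> GC.run
>>
>> which invokes System.gc() in the target JVM.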
> 
> I think an explicit GC does not collect the Metaspace... at least
> that's what I observed when triggering GC from the Java console.
> 
>> 
>>> Since we're using more than one Tomcat instance (2 in
>>> production for this website, 1 for development), I can see that
>>> the issue is isolated to Tomcat or the JVM/host where it runs,
>>> because other Tomcat instances behave well at the same time one
>>> is slow. The same JBoss/Postgres backend is used by Java
>>> batches and fat clients, and it works well with consistent
>>> times.
>>> 
>>> To clarify: the moment one production Tomcat that has been
>>> running for some time finishes the task in 3 seconds, the
>>> development Tomcat or a freshly started production Tomcat
>>> instance does the same task in less than one second. Note that
>>> repeating the task always gives consistent results, i.e. the
>>> instance that has been running for some time is always slow,
>>> and the fresh instance is always fast.
>>> 
>>> Tomcat is running with these VM options:
>>> 
>>> -Xms20480m -Xmx20480m -Dsun.rmi.dgc.client.gcInterval=3600000
>>> -Dsun.rmi.dgc.server.gcInterval=3600000 -Xloggc:/tmp/gc.txt
>>> -Dcom.sun.management.jmxremote
>>> -Dcom.sun.management.jmxremote.port=9012
>>> -Dcom.sun.management.jmxremote.ssl=false
>>> -Dcom.sun.management.jmxremote.authenticate=false
>>> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseG1GC
>>> -XX:ReservedCodeCacheSize=1g -XX:InitialCodeCacheSize=256m
>>> -XX:+UseHugeTLBFS -XX:MetaspaceSize=1g -XX:MaxMetaspaceSize=2g
>>> 
>>> Some of the options were added recently (for example the
>>> increase in code cache size), but they seem to have had no
>>> impact on the issue.
>>> 
>> 
>> Is there any specific reason to trigger a Full GC every 60 min
>> (-Dsun.rmi.dgc.client.gcInterval=3600000)? I think that's
>> overkill.
> 
> That's a value I saw "everywhere" as the suggested value (the
> default is too short). Note that in the GC logs I don't see full
> GCs every hour, so maybe that option is not triggering full GCs?
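>
> One way I could check (assuming the JDK 8 -XX:+PrintGCDetails
> log format) is to grep the log for explicit-GC entries:
>
>     grep "Full GC (System.gc())" /tmp/gc.txt
>
> since the RMI DGC triggers System.gc(), which should show up with
> that cause.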
> 
> 
>>> Metaspace goes up to 1.6 GB before being collected. The value
>>> after a garbage collection is around 200 MB. Heap usage is
>>> variable; it usually stays under 10 GB and is around 1 GB after
>>> a garbage collection. CPU usage rarely goes over 10%. Loaded
>>> classes between 20k and 40k. Active sessions around 100-120 for
>>> each instance.
>>> 
>> 
>> That sounds like healthy behavior to me. It means GC is
>> occurring and the Metaspace is getting cleaned up. Why do you
>> think the Metaspace is the problem?
> 
> Because I can observe that when the Metaspace is collected, the
> requests become fast again. I observed that a few hours ago,
> looking at the Metaspace graph in the Java console and issuing
> requests just after the collection.
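>
> As a minimal sketch, the same thing could also be watched
> programmatically with the standard java.lang.management API
> (run inside the Tomcat JVM, or adapted to a remote JMX
> connection):
>
>     import java.lang.management.ManagementFactory;
>     import java.lang.management.MemoryPoolMXBean;
>
>     public class MetaspaceWatch {
>         public static void main(String[] args) {
>             // Find the Metaspace pool among the JVM's memory pools
>             for (MemoryPoolMXBean pool :
>                     ManagementFactory.getMemoryPoolMXBeans()) {
>                 if ("Metaspace".equals(pool.getName())) {
>                     System.out.println("Metaspace used: "
>                         + pool.getUsage().getUsed() / (1024 * 1024)
>                         + " MB");
>                 }
>             }
>         }
>     }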
> 
>> From the given data, my first impression is that the JVM is not
>> under any memory pressure. If you can post your gc.txt file to
>> this group, I would be happy to check it.
> 
> Thanks, here's a link to the GCeasy report:
> http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTcvMDgvMjkvLS1nY3N0YXRzLnR4dC5nei0tMTEtNDktMzQ=
> 
> and the txt log file: http://ge.tt/1Un0DLm2
> Please consider that I triggered a Full GC manually two times on
> this server.
> 
> 
>> 
>> Another thing: is there any way you can measure the processing
>> time in JBoss for each RMI call?
>> 
> 
> At the moment I'm not measuring the time on the JBoss side, but
> I'm considering using a telemetry product in the future. I'm
> measuring the time on the client side (Tomcat or fat client).

I think it's critical to get metrics on the JBoss side, otherwise you
don't even know where the problem might be. It's good information to
have that the application running within Tomcat performs well
immediately following a metaspace collection, but it's not enough
information to inform a good decision.
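
As a sketch of one low-effort way to get those numbers (assuming
your services are exposed as EJBs; interceptor wiring details vary
by JBoss version):

    import javax.interceptor.AroundInvoke;
    import javax.interceptor.InvocationContext;

    public class TimingInterceptor {
        @AroundInvoke
        public Object time(InvocationContext ctx) throws Exception {
            long start = System.nanoTime();
            try {
                // Run the actual business method
                return ctx.proceed();
            } finally {
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println(ctx.getMethod().getName()
                        + " took " + ms + " ms");
            }
        }
    }

Bound to your beans (via @Interceptors or ejb-jar.xml), that would
show whether the server-side time grows along with the client-side
time or stays flat.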

RMI is known for flagrantly wasting permgen/metaspace because of all
the Proxy objects and special throw-away Class objects it uses, etc.
But I'm not sure why the metaspace filling up would have such a
dramatic effect on performance. If you were stalling during a
metaspace collection, it might make sense, but that's not what you are
reporting. It's like saying that HTTP traffic slows down on your PC
until you empty your Recycle Bin.
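
To illustrate the mechanism (a self-contained sketch, not RMI
itself): every distinct interface combination handed to
java.lang.reflect.Proxy gets its own generated class, and generated
classes live in the metaspace:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    public class ProxyDemo {
        public static void main(String[] args) {
            InvocationHandler h = (proxy, method, a) -> null;
            Object p = Proxy.newProxyInstance(
                    ProxyDemo.class.getClassLoader(),
                    new Class<?>[] { Runnable.class }, h);
            // Prints a generated class name, e.g. com.sun.proxy.$Proxy0
            System.out.println(p.getClass().getName());
        }
    }

RMI's dynamic stubs and DGC plumbing generate classes in a similar
way, which is where that metaspace churn comes from.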

At any rate, this is not a problem with Tomcat itself: it is likely
entirely JVM-related. I'm sure everyone is happy to discuss it here
(on a Tomcat-specific mailing list), but there aren't any
Tomcat-related settings you will be able to tweak that affect
your environment in any way.

-chris