Re: Performance problem with NIO under high concurrency

2017-08-31 Thread Thuc Nguyen
As an experiment, I started Tomcat with 
“-Dorg.apache.tomcat.util.net.NioSelectorShared=false” and 
“selectorPool.maxSelectors” set to 1000, the same as the number of threads. 
The problem didn’t happen with that setting. Even with 
“selectorPool.maxSelectors” set to 1, downloads were noticeably slow but not stuck.
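
For reference, a minimal sketch of how such a setup can be expressed. The actual
connector attributes are not shown in this thread, so the port, protocol and
thread counts below are assumptions for illustration only:

    # bin/setenv.sh: disable the shared NIO selector
    CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.tomcat.util.net.NioSelectorShared=false"

    <!-- conf/server.xml: size the selector pool to match the thread pool -->
    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="1000"
               selectorPool.maxSelectors="1000"
               connectionTimeout="20000" />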

I’m surprised that there’s only one shared selector by default. I’m reading the 
code to get a better understanding of the SelectorPool implementation. I’d 
appreciate any insights you may have around this.

Thanks,
Thuc

On 8/31/17, 9:09 AM, "Thuc Nguyen"  wrote:

Hi Chris,

Thanks for the quick response.

Yes, the clients were being throttled. These throttled requests were slow 
to start with and there was no noticeable difference in the download speed when 
the problem occurred. The smaller download started out OK. But after 10-15 
successful serial requests it hit the problem. Then it remained slow until the 
big downloads completed or got canceled.

The connector configuration:



We’ll upgrade to 8.0.46 and see if the problem can be reproduced.

Thanks again,
Thuc

On 8/31/17, 8:51 AM, "Christopher Schultz"  
wrote:


Thuc,

On 8/31/17 11:25 AM, Thuc Nguyen wrote:
> We run JFrog Artifactory which is fronted by Tomcat 8.0.32. We
> recently upgraded from Tomcat 7.0.56. Since the upgrade,
> Artifactory occasionally slows to a crawl.

Any chance of using the latest Tomcat 8.0.46?

> We could reproduce this problem by downloading a large (1GB) file 
> concurrently from 500 different machines. However, to avoid
> exceeding the network bandwidth, we throttled the downloads at
> 100KB per second.

You throttled the clients?

> While these 500 requests were being processed, we sent a request
> to download a smaller file (1MB, no throttling). The smaller
> download successfully went through several times, then got “stuck”.
> The download speed was reduced to a few KB per second.

Over all connections, or just the single "small" requests? Was it
stuck permanently? Meaning, if it got "stuck" one time (timed out?),
would any later requests succeed?

> We couldn’t reproduce this problem on 7.0.56. We kept 8.0.32 but
> changed the connector to BIO and couldn’t reproduce it either. We
> suspect that it has something to do with switching from BIO in
> 7.0.56 to NIO in 8.0.32.

It's possible, but you are using a fairly old version of Tomcat (~18
months).

> Has anyone run into this problem before?
> 
> Server.xml, a thread dump and Catalina MBean values are attached.
> Tomcat version:

Can you please post your <Connector> configuration from server.xml?

-chris








Re: Performance issue 8.5.20 (metaspace related?)

2017-08-31 Thread Ing. Andrea Vettori
> On 31 Aug 2017, at 17:55, Christopher Schultz  
> wrote:
> 
> 
> Andrea,
> 
> On 8/30/17 4:13 PM, Ing. Andrea Vettori wrote:
>>> On 30 Aug 2017, at 00:16, Christopher Schultz
>>>  wrote: RMI is known for flagrantly
>>> wasting permgen/metaspace because of all the Proxy objects and
>>> special throw-away Class objects it uses, etc. But I'm not sure
>>> why the metaspace filling up would have such a dramatic effect on
>>> performance. If you were stalling during a metaspace collection,
>>> it might make sense, but that's not what you are reporting.
>> 
>> That’s true. In fact, what I’d like to understand is whether a big
>> metaspace can affect performance… One thing I want to try is to run
>> the app with metaspace capped at 400-500MB. On Java 7 I had PermGen
>> at 512MB and, as far as I remember, it was working fine.
> 
> Yes, I think limiting the metaspace might help. It may result in more
> frequent, smaller GC cycles.
> 
> I actually don't know much about the (relatively) new metaspace
> region, but it's possible that GC in the metaspace is a "stop the
> world" operation where the JVM can't do anything else during that time.

I tried that this morning. I set one Tomcat with max metaspace set to 512MB and 
left the other one with 2GB. The one with 512MB was faster at calling JBoss on 
average, but it triggered the GC a lot more often, so some calls were very slow 
(the ones during which the GC triggered).
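
For reference, a cap like the one tested above is normally applied via the JVM
arguments in setenv.sh; a minimal sketch, assuming CATALINA_OPTS is used (the
exact flags from this test are not shown in the thread):

    CATALINA_OPTS="$CATALINA_OPTS -XX:MaxMetaspaceSize=512m -XX:MetaspaceSize=256m"

The -XX:MetaspaceSize value (the initial high-water mark that triggers the first
metaspace-induced collection) is an assumption; only the 512MB and 2GB caps are
mentioned above.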

If you’ve seen my other reply, the performance problem has been fixed, but the 
curiosity to understand why a big metaspace is slow remains…

Thanks





Re: Performance problem with NIO under high concurrency

2017-08-31 Thread Thuc Nguyen
Hi Chris,

Thanks for the quick response.

Yes, the clients were being throttled. These throttled requests were slow to 
start with and there was no noticeable difference in the download speed when 
the problem occurred. The smaller download started out OK. But after 10-15 
successful serial requests it hit the problem. Then it remained slow until the 
big downloads completed or got canceled.

The connector configuration:



We’ll upgrade to 8.0.46 and see if the problem can be reproduced.

Thanks again,
Thuc

On 8/31/17, 8:51 AM, "Christopher Schultz"  wrote:


Thuc,

On 8/31/17 11:25 AM, Thuc Nguyen wrote:
> We run JFrog Artifactory which is fronted by Tomcat 8.0.32. We
> recently upgraded from Tomcat 7.0.56. Since the upgrade,
> Artifactory occasionally slows to a crawl.

Any chance of using the latest Tomcat 8.0.46?

> We could reproduce this problem by downloading a large (1GB) file 
> concurrently from 500 different machines. However, to avoid
> exceeding the network bandwidth, we throttled the downloads at
> 100KB per second.

You throttled the clients?

> While these 500 requests were being processed, we sent a request
> to download a smaller file (1MB, no throttling). The smaller
> download successfully went through several times, then got “stuck”.
> The download speed was reduced to a few KB per second.

Over all connections, or just the single "small" requests? Was it
stuck permanently? Meaning, if it got "stuck" one time (timed out?),
would any later requests succeed?

> We couldn’t reproduce this problem on 7.0.56. We kept 8.0.32 but
> changed the connector to BIO and couldn’t reproduce it either. We
> suspect that it has something to do with switching from BIO in
> 7.0.56 to NIO in 8.0.32.

It's possible, but you are using a fairly old version of Tomcat (~18
months).

> Has anyone run into this problem before?
> 
> Server.xml, a thread dump and Catalina MBean values are attached.
> Tomcat version:

Can you please post your <Connector> configuration from server.xml?

-chris






server.xml
Description: server.xml


Apache 7.0.81 - Can no longer use non-canonical paths in extraResourcePaths of VirtualDirContext

2017-08-31 Thread Constantin Erckenbrecht
Hi,



A change in 7.0.81/7.0.80 changed file resolution in VirtualDirContext.

In 7.0.79 and before it was possible to use paths containing /../ or any other
non-canonical segments. This was particularly useful when using placeholders
that are replaced at compile time, like



extraResourcePaths="/=${project.basedir}/../some/other/dir”



The new calls to validate(File file, boolean mustExist, String
absoluteBase) prevent this: inside the validate method the file name is
canonicalized and compared against the absoluteBase path, which is not
canonicalized.

Hence, when using a non-canonical path as an extraResourcePath, the validate
function incorrectly assumes that the requested file is outside the
application root.
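
For anyone trying to reproduce this, a minimal context.xml along these lines
illustrates the kind of configuration that worked up to 7.0.79; the paths are
placeholders, not the reporter's actual layout:

    <Context>
      <Resources className="org.apache.naming.resources.VirtualDirContext"
                 extraResourcePaths="/=/home/build/project/../some/other/dir" />
    </Context>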



Any chance that this can be fixed?



Thanks.


Re: Performance issue 8.5.20 (metaspace related?)

2017-08-31 Thread Christopher Schultz

Andrea,

On 8/30/17 4:13 PM, Ing. Andrea Vettori wrote:
>> On 30 Aug 2017, at 00:16, Christopher Schultz
>>  wrote: RMI is known for flagrantly
>> wasting permgen/metaspace because of all the Proxy objects and
>> special throw-away Class objects it uses, etc. But I'm not sure
>> why the metaspace filling up would have such a dramatic effect on
>> performance. If you were stalling during a metaspace collection,
>> it might make sense, but that's not what you are reporting.
> 
> That’s true. In fact, what I’d like to understand is whether a big
> metaspace can affect performance… One thing I want to try is to run
> the app with metaspace capped at 400-500MB. On Java 7 I had PermGen
> at 512MB and, as far as I remember, it was working fine.

Yes, I think limiting the metaspace might help. It may result in more
frequent, smaller GC cycles.

I actually don't know much about the (relatively) new metaspace
region, but it's possible that GC in the metaspace is a "stop the
world" operation where the JVM can't do anything else during that time.

-chris




Re: Performance problem with NIO under high concurrency

2017-08-31 Thread Christopher Schultz

Thuc,

On 8/31/17 11:25 AM, Thuc Nguyen wrote:
> We run JFrog Artifactory which is fronted by Tomcat 8.0.32. We
> recently upgraded from Tomcat 7.0.56. Since the upgrade,
> Artifactory occasionally slows to a crawl.

Any chance of using the latest Tomcat 8.0.46?

> We could reproduce this problem by downloading a large (1GB) file 
> concurrently from 500 different machines. However, to avoid
> exceeding the network bandwidth, we throttled the downloads at
> 100KB per second.

You throttled the clients?

> While these 500 requests were being processed, we sent a request
> to download a smaller file (1MB, no throttling). The smaller
> download successfully went through several times, then got “stuck”.
> The download speed was reduced to a few KB per second.

Over all connections, or just the single "small" requests? Was it
stuck permanently? Meaning, if it got "stuck" one time (timed out?),
would any later requests succeed?

> We couldn’t reproduce this problem on 7.0.56. We kept 8.0.32 but
> changed the connector to BIO and couldn’t reproduce it either. We
> suspect that it has something to do with switching from BIO in
> 7.0.56 to NIO in 8.0.32.

It's possible, but you are using a fairly old version of Tomcat (~18
months).

> Has anyone run into this problem before?
> 
> Server.xml, a thread dump and Catalina MBean values are attached.
> Tomcat version:

Can you please post your <Connector> configuration from server.xml?

-chris




Re: RewriteValve and the ROOT webapp

2017-08-31 Thread Christopher Schultz

Mark,

On 8/30/17 5:03 PM, Mark Thomas wrote:
> On 30/08/17 21:46, Dan Rabe wrote:
>> I’m using Tomcat 8.5.20, trying to use the rewrite valve to
>> rewrite a root-level URL (/foo) to a URL in my webapp
>> (/mywebapp/bar).
>> 
>> I added the rewrite valve to my server.xml, and I put my
>> rewrite.config in conf/Catalina/localhost.
>> 
>> This all works great IF I create an empty “ROOT” directory in
>> webapps. If I remove the ROOT directory, though, accessing /foo
>> just gives me a 404.
>> 
>> Questions:
>> 
>> 1.  Is this by design, or is this a bug? (If it’s by design, then
>> some additional notes in the documentation would be helpful).
> 
> It is by design. See section 12.1 of the Servlet 3.1
> specification. Particularly the first paragraph.
> 
> The Tomcat docs deliberately try to avoid repeating information
> that is in the Servlet specification.
> 
>> 2.  If in fact I do need to have the ROOT webapp, what security
>> precautions should I take? Security guides such as
>> https://www.owasp.org/index.php/Securing_tomcat recommend
>> removing the ROOT webapp, but without providing reasons or
>> rationale.
> 
> Yes, it would help if OWASP explained their rationale.

I believe the OWASP rationale is that Tomcat ships with a ("welcome to
Tomcat") ROOT web app that is simply unnecessary, and unnecessary
things should be removed from production systems.

They obviously aren't explaining that there is nothing wrong with
having *a* ROOT webapp... it's just that the *default* ROOT webapp
should be removed for production.

> The simplest, and safest, approach would be to deploy your own, 
> completely empty ROOT web application (just a dir named "ROOT" in 
> webapps will be fine). Tomcat will handle the 404 for you in that
> case.

Our production builds always include a generated ROOT webapp that
includes absolutely nothing other than a trivial WEB-INF/web.xml. This
allows Tomcat to return a 404 response instead of a "400 Bad Request"
for any requests that don't map to a valid context path.
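
A descriptor that minimal can literally be an empty <web-app> element; a sketch
of such a placeholder (the exact generated file is not shown here):

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                                 http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
             version="3.1">
    </web-app>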

(Aside: I think it's probably not appropriate for Tomcat to return 400
in these cases... if the URL can't be mapped to a context, that should
be a 404 response, not a 400, since the request itself was valid.)

-chris




Performance problem with NIO under high concurrency

2017-08-31 Thread Thuc Nguyen
Hi,

We run JFrog Artifactory which is fronted by Tomcat 8.0.32. We recently 
upgraded from Tomcat 7.0.56. Since the upgrade, Artifactory occasionally slows 
to a crawl.

We could reproduce this problem by downloading a large (1GB) file concurrently 
from 500 different machines. However, to avoid exceeding the network bandwidth, 
we throttled the downloads at 100KB per second. While these 500 requests were 
being processed, we sent a request to download a smaller file (1MB, no 
throttling). The smaller download successfully went through several times, then 
got “stuck”. The download speed was reduced to a few KB per second.
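
For reference, each client in a reproduction like this can be approximated with
curl's built-in rate limiting; a sketch with placeholder URLs, not the actual
test harness used here:

    # throttled 1GB download, run on each of the ~500 client machines
    curl --limit-rate 100k -o /dev/null http://artifactory.example.com/repo/large-file.bin

    # unthrottled probe for the small file, repeated while the above are running
    curl -o /dev/null -w '%{time_total}\n' http://artifactory.example.com/repo/small-file.bin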

We couldn’t reproduce this problem on 7.0.56. We kept 8.0.32 but changed the 
connector to BIO and couldn’t reproduce it either. We suspect that it has 
something to do with switching from BIO in 7.0.56 to NIO in 8.0.32.

Has anyone run into this problem before?

Server.xml, a thread dump and Catalina MBean values are attached. Tomcat 
version:

Server version: Apache Tomcat/8.0.32
Server built:   Feb 2 2016 19:34:53 UTC
Server number:  8.0.32.0
OS Name:        Linux
OS Version:     2.6.32-504.el6.x86_64
Architecture:   amd64
JVM Version:    1.8.0_72-b15
JVM Vendor:     Oracle Corporation

Thank you,
Thuc Nguyen




tomcat.tar.gz
Description: tomcat.tar.gz


Re: Performance issue 8.5.20 (metaspace related?)

2017-08-31 Thread Ing. Andrea Vettori
> On 29 Aug 2017, at 14:24, Mark Thomas  wrote:
> 
> On 29/08/17 13:09, Ing. Andrea Vettori wrote:
>>> On 29 Aug 2017, at 12:29, Suvendu Sekhar Mondal  wrote:
>>> 
>>> On Tue, Aug 29, 2017 at 2:54 PM, Ing. Andrea Vettori
>>>  wrote:
- with a freshly started Tomcat instance, the time it takes is around 0.8 
seconds. Most of the time is spent on the two RMI calls the task does.
- with an instance that has been running for some time, the time can reach 2-3 
seconds, occasionally 5-6 seconds. Most time is still spent on RMI calls, 
i.e. what slows down are the RMI calls.
>>> 
>>> Silly question: do you mean the RMI calls originating from Tomcat are
>>> getting slower with time, or is JBoss taking longer to return the response?
>> 
>> Thanks for your help.
>> What I see is that the HTTP requests are slower. Looking at the details of 
>> that specific request (putting a product in the cart) using System.nanoTime, 
>> I can see that most of the time is spent during the RMI calls. I’m pretty 
>> sure it’s not JBoss that is slower, because other calls made at the same 
>> time from fresh Tomcats or from a Java client are not slow.
> 
> I'd try profiling the problematic code. Given the magnitude of the
> times, sampling should be sufficient which means you could do this on
> production when the problem happens. You'll probably need to configure
> it to log JVM internal calls.


Profiling the code, I have been able to find the cause of the big metaspace 
garbage…
Due to a bug, we were not caching remote interfaces when connecting to JBoss 
from the web sites. The other client kinds were OK.
I fixed this problem today, and the system has been running fine for a few hours. 
It’s fast (faster than before, even with a fresh Tomcat); metaspace stays low and 
everything is fine.
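
For illustration only (the actual client code is not shown in this thread), the
fix amounts to looking up each remote interface once and reusing the proxy,
instead of performing a fresh JNDI lookup per request; a hypothetical sketch:

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical cache of remote proxies keyed by JNDI name. Per the thread,
    // repeated lookups were generating proxy/class garbage that filled metaspace.
    public class RemoteInterfaceCache {
        private static final ConcurrentHashMap<String, Object> CACHE = new ConcurrentHashMap<>();

        @SuppressWarnings("unchecked")
        public static <T> T lookup(String jndiName) throws NamingException {
            Object proxy = CACHE.get(jndiName);
            if (proxy == null) {
                proxy = new InitialContext().lookup(jndiName);
                Object existing = CACHE.putIfAbsent(jndiName, proxy);
                if (existing != null) {
                    proxy = existing;
                }
            }
            return (T) proxy;
        }
    }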

This solves my problem. Thanks to everybody that helped with comments and 
suggestions.

Still, I’m very curious to know why a big metaspace causes such increasing 
slowness… Looking at the GC logs, it doesn’t seem to be a GC problem… maybe 
metaspace handling by the JVM is not optimal when it’s big?






Re: Performance issue 8.5.20 (metaspace related?)

2017-08-31 Thread Ing. Andrea Vettori
> On 31 Aug 2017, at 15:24, Suvendu Sekhar Mondal  wrote:
> 
> I will suggest that if you have some budget, please get a decent APM
> like AppDynamics, New Relic, Dynatrace to monitor your prod system
> 24x7x365. Trust me, you will be able to identify and solve this type
> of sporadic slowness issue very quickly.
> 


Thank you. I’ve been using ‘perfino’ for a couple of days and it seems very good.





Re: Performance issue 8.5.20 (metaspace related?)

2017-08-31 Thread Suvendu Sekhar Mondal
Andrea,

>> Sometimes a Full GC was able to clean up some space from Metaspace, but
>> only as part of a final last-ditch collection effort:
>>
>> 43618.504: [Full GC (Last ditch collection)  1386M->250M(20G), 1.6455823 
>> secs]
>>   [Eden: 0.0B(6408.0M)->0.0B(6408.0M) Survivors: 0.0B->0.0B Heap:
>> 1386.7M(20.0G)->250.5M(20.0G)], [Metaspace:
>> 1646471K->163253K(1843200K)]
>> [Times: user=2.23 sys=0.10, real=1.65 secs]
>> 49034.392: [Full GC (Last ditch collection)  1491M->347M(20G), 1.9965534 
>> secs]
>>   [Eden: 0.0B(5600.0M)->0.0B(5600.0M) Survivors: 0.0B->0.0B Heap:
>> 1491.1M(20.0G)->347.9M(20.0G)], [Metaspace:
>> 1660804K->156199K(2031616K)]
>> [Times: user=2.78 sys=0.10, real=2.00 secs]
>
> This is interesting because I see this pattern for metaspace gc…
>
> https://ibb.co/b2B9HQ
>
> There’s something that clears it, but it seems it’s not the full GC. I don’t 
> think anything happens in the app that causes that much metaspace to become 
> instantly dead. So I think the full GC is not cleaning the metaspace the way 
> it could. Maybe it’s a light clean…
>

I believe MinMetaspaceFreeRatio and MaxMetaspaceFreeRatio are playing a
role there. The default values for them in JDK 1.8 are 40 and 70,
respectively. You can read more about them here:
https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/considerations.html
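
As a sketch, those thresholds can also be set explicitly on the JVM command
line; the values shown are simply the JDK 8 defaults mentioned above:

    JAVA_OPTS="$JAVA_OPTS -XX:MinMetaspaceFreeRatio=40 -XX:MaxMetaspaceFreeRatio=70"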

>> That is the default value. Please see:
>> http://docs.oracle.com/javase/8/docs/technotes/guides/rmi/sunrmiproperties.html
>
> They must have changed it recently; it was 180k when I first checked that
> option. Do you think it’s worth increasing? I don’t see a full GC every
> hour…
>

I have seen that if you use the G1 GC, the JVM completely ignores those
RMI DGC parameter values. If you use the Parallel GC, you will see
periodic Full GCs (System.gc()) being triggered based on those
parameter values. I did not get a chance to go in depth on this behavior
of G1. As G1 is newer and much more efficient than other collectors (I
know this statement can draw some arguments), it is probably cleaning
those RMI objects more efficiently.
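
For completeness, the RMI DGC intervals being discussed are plain system
properties; a sketch using the documented default of one hour (whether they are
honoured depends on the collector, as noted above):

    CATALINA_OPTS="$CATALINA_OPTS \
      -Dsun.rmi.dgc.client.gcInterval=3600000 \
      -Dsun.rmi.dgc.server.gcInterval=3600000"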

I will suggest that if you have some budget, please get a decent APM
like AppDynamics, New Relic, Dynatrace to monitor your prod system
24x7x365. Trust me, you will be able to identify and solve this type
of sporadic slowness issue very quickly.

Thanks!
Suvendu
