All,

I'm doing some scalability testing of an app in Tomcat 7.0.73.

Some relevant connector config details:
maxThreads=80
maxKeepAliveRequests=100
keepAliveTimeout=10000
maxConnections unspecified (defaults to maxThreads according to the docs)
acceptCount unspecified (100 according to the docs)
clientAuth=true
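
For reference, a connector with those settings would look something like
this in server.xml (the port, protocol, and keystore attributes below are
assumptions for illustration, not from my actual config):

```xml
<!-- Sketch only: port, protocol, scheme, and keystore details are
     placeholders; maxThreads/maxKeepAliveRequests/keepAliveTimeout/
     clientAuth match the values listed above. maxConnections and
     acceptCount are left unset, so they take their defaults. -->
<Connector port="8443" protocol="HTTP/1.1"
           SSLEnabled="true" scheme="https" secure="true"
           clientAuth="true"
           maxThreads="80"
           maxKeepAliveRequests="100"
           keepAliveTimeout="10000" />
```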

FWIW, I'm testing two Tomcat instances on the same server.  They are behind a 
load balancer.

It appears that when the load generator tries to exceed maxThreads, the new 
connection rate goes up quickly and CPU usage shoots up with it.  I assume this 
is because Tomcat is proactively closing idle keep-alive connections to service 
new connections.  In an effort to keep the CPU in check, I tried increasing 
maxThreads from 80 to 120.  This seemed to work well in a lot of ways: the new 
connection rate didn't increase as much, CPU didn't increase as much, there was 
more connection reuse (more requests per connection), and response times didn't 
deteriorate as much.

Great, right?

Then I noticed a large increase in "Connection refused" errors on the load 
generator.  In other words, a higher maxThreads also results in a higher error 
rate.  The total hits per second from the client's perspective is about 60 in 
both cases.  With maxThreads=80, there are about 3 connection-refused errors 
per second at that volume; with maxThreads=120, there are about 10 per second.

I have no idea why this is.  Can someone explain it, or suggest what I can do 
about it?

Thanks

John
