(working) high load (100K+) websocket + NIO connector setting comparison on 1 Tomcat 7 instance

2013-11-07 Thread Bob DeRemer
Guys,

I wanted to follow up on some of the websocket load testing we've been 
doing in EC2.  The good news is that we were able to get 100K websockets 
connected directly to a single Tomcat instance (EASILY) - once we got the 
settings right.  As a result, I wanted to post my results here for two reasons:


1)  Hopefully this may benefit others.

2)  I'd like to ask the Tomcat experts which of the changes we made most 
likely contributed to this working, or whether it was a combination.

My theory is that it's a combination of acceptorThreadCount + maxKeepAliveRequests. 
Logically, it would make sense that running on 16 vCPUs should benefit from more 
acceptor threads.  In addition, after reading about maxKeepAliveRequests, we 
wondered whether making it UNLIMITED would help when many concurrent websocket 
requests come in, because they are all HTTP requests initially that get upgraded.

So, if anyone can clarify whether our theory is correct - or, if not, which of the 
settings below actually made the difference - that would be great!  Without an 
understanding of what Tomcat is doing under the hood, my theory is just that.

Thanks for all the support you guys provide on this list,
Bob


SUMMARY:
I posted earlier this week about having trouble getting even 10-20K 
websockets connected to a single Tomcat instance running on a 16vCPU/60GB EC2 
instance with the JVM configured for G1GC, NUMA, and a 24-48 GB heap.  The 
settings during those tests were the following, and we were seeing websocket 
connects fail with TimeoutExceptions and EOFExceptions.

ORIGINAL SETTINGS

<Connector port="80"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="2"
           maxConnections="10"
           maxThreads="10"
           redirectPort="8443" />

After looking at the Tomcat Connector documentation more closely, along with what 
Glassfish recommends when deploying in production, we modified the settings to 
the values shown below:

WORKING SETTINGS
<Connector port="80"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           acceptorThreadCount="8"
           maxKeepAliveRequests="-1"
           connectionTimeout="-1"
           maxConnections="-1"
           maxThreads="2"
           redirectPort="443" />
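
For reference, the server side of this kind of test is just a websocket endpoint; 
a minimal JSR-356 sketch (the path and class name below are placeholders, not our 
actual application code, and this assumes a Tomcat 7 build recent enough to ship 
the JSR-356 API) would look like:

import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Placeholder endpoint: each of the 100K client connections is upgraded into
// one Session instance of a class like this.
@ServerEndpoint("/load")
public class LoadTestEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        // Zero disables the idle timeout so sessions are not reaped mid-test.
        session.setMaxIdleTimeout(0);
    }

    @OnMessage
    public String onMessage(String message) {
        // A simple echo keeps server-side work negligible for the connect test.
        return message;
    }
}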



Bob DeRemer
Senior Director, Architecture and Development

http://www.thingworx.com/
Skype: bob.deremer.thingworx
O: 610.594.6200 x812
M: 717.881.3986



Re: (working) high load (100K+) websocket + NIO connector setting comparison on 1 Tomcat 7 instance

2013-11-07 Thread Mark Thomas
On 07/11/2013 18:20, Bob DeRemer wrote:
 Guys,
 
 I wanted to follow back around on some of the websocket load testing
 we’ve been doing in EC2. The good news is, we were able to get 100K
 websockets connected directly to a single Tomcat instance (EASILY)

Excellent.

 My theory is it’s a combination of: acceptorThreadCount

I find that unlikely as I have explained previously. The lock on
Socket.accept() quickly becomes the bottleneck above 2 acceptor threads.

 maxKeepAliveRequests

Very unlikely to be a factor. There is only a single HTTP request before
the upgrade to WebSocket, so there will never be multiple HTTP requests
on a single connection and HTTP keep-alive will not be a factor.
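
For illustration, each of those connections is a single connectToServer() call on 
the client side (the URI and class name here are placeholders): one HTTP GET with 
an Upgrade header goes out, the server answers 101 Switching Protocols, and no 
further HTTP request ever flows on that socket, which is why maxKeepAliveRequests 
never comes into play.

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class SingleUpgradeClient {

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // One HTTP GET + "101 Switching Protocols", then the socket speaks only
        // the websocket protocol for the rest of its life.
        Session session = container.connectToServer(
                SingleUpgradeClient.class,
                URI.create("ws://example.com/load"));   // placeholder URI
        session.close();
    }
}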

You'd need to test each setting individually to be sure. Some possible
theories:
- acceptorThreadCount does have an impact
- the code implementing maxConnections is the bottleneck - disabling it
removes it
- there is a bug in maxConnections that causes it to count connections
more than once - disabling it avoids the bug
- connections were timing out due to non-fair processing in
socket.accept(), the volume of new connections and the time taken to
process them - increasing the timeout fixed this

I'd be interested to know which setting it was but without some real
world testing all we are ever going to have is theories.

For the benefit of the archives - these settings worked for this test on
this system. That does not mean they are the best settings for every app
on every possible combination of hardware. The only way to know the best
settings for your app on your hardware is to test it.
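
As a rough illustration of that kind of per-setting testing, a client ramp-up loop 
is enough to reproduce the connect failures described above; the sketch below uses 
the JSR-356 client API, and the target URI, connection count and pacing are 
placeholders rather than the values used in this test:

import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

// Placeholder ramp-up driver: open connections at a steady rate, count failures,
// then repeat the run changing one connector setting at a time.
@ClientEndpoint
public class RampUpDriver {

    public static void main(String[] args) throws Exception {
        final int target = 10000;            // connections per driver JVM
        final long pauseMillis = 1;          // pacing between connect attempts
        final URI uri = URI.create("ws://example.com/load");   // placeholder

        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        List<Session> sessions = new ArrayList<Session>();
        int failures = 0;

        for (int i = 0; i < target; i++) {
            try {
                sessions.add(container.connectToServer(RampUpDriver.class, uri));
            } catch (Exception e) {
                failures++;                  // connect failures (timeouts, EOF) surface here
            }
            Thread.sleep(pauseMillis);
        }
        System.out.println("open=" + sessions.size() + " failed=" + failures);
    }
}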

Mark



 
  
 





RE: (working) high load (100K+) websocket + NIO connector setting comparison on 1 Tomcat 7 instance

2013-11-07 Thread Bob DeRemer


 -Original Message-
 From: Mark Thomas [mailto:ma...@apache.org]
 Sent: Thursday, November 07, 2013 1:52 PM
 To: Tomcat Users List
 Subject: Re: (working) high load (100K+) websocket + NIO connector setting
 comparison on 1 Tomcat 7 instance
 
 On 07/11/2013 18:20, Bob DeRemer wrote:
  Guys,
 
  I wanted to follow back around on some of the websocket load testing
  we've been doing in EC2. The good news is, we were able to get 100K
  websockets connected directly to a single Tomcat instance (EASILY)
 
 Excellent.
 
  My theory is it's a combination of: acceptorThreadCount
 
 I find that unlikely as I have explained previously. The lock on
 Socket.accept() quickly becomes the bottleneck above 2 acceptor threads.
 
  maxKeepAliveRequests
 
 Very unlikely to be a factor. There is only a single HTTP request before the
 upgrade to WebSocket so there will never be multiple HTTP requests on a
 single connection so HTTP keep-alive will not be a factor.
 
 You'd need to test each setting individually to be sure. Some possible
 theories:
 - acceptorThreadCount does have an impact
 - the code implementing maxConnections is the bottleneck - disabling it
 removes it
 - there is a bug in maxConnections that causes it to count connections more
 than once - disabling it avoids the bug
 - connections were timing out due to non-fair processing in socket.accept(),
 the volume of new connections and the time taken to process them - increasing
 the timeout fixed this
 
 I'd be interested to know which setting it was but without some real world
 testing all we are ever going to have is theories.
 
 For the benefit of the archives - these settings worked for this test on this
 system. That does not mean they are the best settings for every app on every
 possible combination of hardware. The only way to know the best settings for
 your app on your hardware is to test it.
 

Understood and agree that this worked in [our] scenario - thanks for the 
analysis.
-bob







Re: (working) high load (100K+) websocket + NIO connector setting comparison on 1 Tomcat 7 instance

2013-11-07 Thread Terence M. Bandoian
On 11/7/2013 12:51 PM, Mark Thomas wrote:
 On 07/11/2013 18:20, Bob DeRemer wrote:
 Guys,

 I wanted to follow back around on some of the websocket load testing
 we’ve been doing in EC2. The good news is, we were able to get 100K
 websockets connected directly to a single Tomcat instance (EASILY)
 Excellent.

 My theory is it’s a combination of: acceptorThreadCount
 I find that unlikely as I have explained previously. The lock on
 Socket.accept() quickly becomes the bottleneck above 2 acceptor threads.

 maxKeepAliveRequests
 Very unlikely to be a factor. There is only a single HTTP request before
 the upgrade to WebSocket so there will never be multiple HTTP requests
 on a single connection so HTTP keep-alive will not be a factor.

 You'd need to test each setting individually to be sure. Some possible
 theories:
 - acceptorThreadCount does have an impact
 - the code implementing maxConnections is the bottleneck - disabling it
 removes it
 - there is a bug in maxConnections that causes it to count connections
 more than once - disabling it avoids the bug
 - connections were timing out due to non-fair processing in
 socket.accept(), the volume of new connections and the time taken to
 process them - increasing the timeout fixed this

 I'd be interested to know which setting it was but without some real
 world testing all we are ever going to have is theories.

 For the benefit of the archives - these settings worked for this test on
 this system. That does not mean they are the best settings for every app
 on every possible combination of hardware. The only way to know the best
 settings for your app on your hardware is to test it.

 Mark





Thanks to all for the very useful information.

-Terence Bandoian

