Re: websocket: connections not getting closed properly

2017-06-29 Thread anurag gupta
Beware of the AWS ALB. It has a nasty bug where it terminates a chunk of
connections even if they are active. This happens after 10-12 hours and is
related to the autoscaling of the ALB.

On Thu, Jun 29, 2017 at 5:40 PM, Sandeep Dhameshia <
sandeep.dhames...@gmail.com> wrote:

> Update:
>
> Started using the AWS Application Load Balancer, with SSL offload. This has
> cut CPU utilization to roughly a fifth of what it was!
>
> And most importantly, with regard to this mail chain, the number of
> connections now looks stable under the same load. I can see some Client TLS
> Negotiation Errors in the LB's monitoring console; the number of errors over
> a given period more or less matches the number of connections that
> previously kept accumulating.
>
> Sandeep
>
> On Fri, Jun 23, 2017 at 2:34 PM, Sandeep Dhameshia <
> sandeep.dhames...@gmail.com> wrote:
>
> > One more doubt: does Tomcat log anything when the max connections limit is
> > reached and it starts dropping requests? This happened a few days back, and
> > since it is a production server I did not have time to check the number of
> > file descriptors for the Tomcat process or the response code sent to
> > clients. Nothing was logged; after restarting Tomcat it started accepting
> > client requests again.
> >
> > I have created a cron job after this incident, which restarts the server
> > when it is approaching the limit.
> >
> > regards
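A lighter-weight alternative to restarting from cron is to watch the open file-descriptor count of the Tomcat JVM and alert before the limit is hit. A minimal sketch, assuming the com.sun.management extension of Oracle/OpenJDK is available; the threshold and interval are illustrative, not taken from the thread:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdWatcher {
    public static void start() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (!(os instanceof UnixOperatingSystemMXBean)) {
            return;                       // counters not available on this platform/JVM
        }
        UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            long open = unix.getOpenFileDescriptorCount();
            long max = unix.getMaxFileDescriptorCount();
            if (open > max * 0.9) {       // within 10% of the limit
                System.err.println("WARN: " + open + " of " + max + " file descriptors in use");
            }
        }, 1, 1, TimeUnit.MINUTES);
    }
}
```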
> >
> > On Wed, Jun 21, 2017 at 9:41 AM, Sandeep Dhameshia <
> > sandeep.dhames...@gmail.com> wrote:
> >
> >> Thanks for your reply Mark,
> >>
> >> *log msg*:
> >>
> >> Jun 08, 2017 10:13:07 AM org.apache.tomcat.websocket.server.WsRemoteEndpointImplServer doClose
> >> INFO: Failed to close the ServletOutputStream connection cleanly
> >> java.io.IOException: Broken pipe
> >> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> >> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> >> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> >> at sun.nio.ch.IOUtil.write(IOUtil.java:51)
> >> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
> >> at org.apache.tomcat.util.net.SecureNioChannel.flush(SecureNioChannel.java:141)
> >> at org.apache.tomcat.util.net.SecureNioChannel.close(SecureNioChannel.java:385)
> >> at org.apache.tomcat.util.net.SecureNioChannel.close(SecureNioChannel.java:413)
> >> at org.apache.coyote.http11.upgrade.NioServletOutputStream.doClose(NioServletOutputStream.java:138)
> >> at org.apache.coyote.http11.upgrade.AbstractServletOutputStream.close(AbstractServletOutputStream.java:129)
> >> at org.apache.tomcat.websocket.server.WsRemoteEndpointImplServer.doClose(WsRemoteEndpointImplServer.java:138)
> >> at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.close(WsRemoteEndpointImplBase.java:696)
> >> at org.apache.tomcat.websocket.server.WsRemoteEndpointImplServer.onWritePossible(WsRemoteEndpointImplServer.java:113)
> >> at org.apache.tomcat.websocket.server.WsRemoteEndpointImplServer.doWrite(WsRemoteEndpointImplServer.java:81)
> >> at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.writeMessagePart(WsRemoteEndpointImplBase.java:456)
> >> at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.startMessage(WsRemoteEndpointImplBase.java:344)
> >> at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.startMessageBlock(WsRemoteEndpointImplBase.java:276)
> >> at org.apache.tomcat.websocket.WsSession.sendCloseMessage(WsSession.java:559)
> >> at org.apache.tomcat.websocket.WsSession.doClose(WsSession.java:465)
> >> at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler.onError(WsHttpUpgradeHandler.java:162)
> >> at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler.access$300(WsHttpUpgradeHandler.java:48)
> >> at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler$WsReadListener.onError(WsHttpUpgradeHandler.java:230)
> >> at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler$WsReadListener.onDataAvailable(WsHttpUpgradeHandler.java:213)
> >> at org.apache.coyote.http11.upgrade.AbstractServletInputStream.onDataAvailable(AbstractServletInputStream.java:203)
> >> at org.apache.coyote.http11.upgrade.AbstractProcessor.upgradeDispatch(AbstractProcessor.java:93)
> >> at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:623)
> >> at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1749)
> >> at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1708)
> >> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
> >> at java.lang.Thread.run(Thread.java:745)
> >>
> >> *Connector*:
> >>
> >> <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
> >>            port="8443" maxThreads="200"
> >>
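Related to connections that never get closed cleanly: a common mitigation is to send periodic WebSocket pings from the server so that half-dead connections fail fast and can be closed, instead of accumulating until a broken-pipe error at close time. A minimal sketch, not the application code from this thread; the endpoint path and ping interval are assumptions:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.websocket.CloseReason;
import javax.websocket.OnClose;
import javax.websocket.OnError;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/ws")                      // hypothetical endpoint path
public class PingingEndpoint {
    private static final Set<Session> SESSIONS = ConcurrentHashMap.newKeySet();
    private static final ScheduledExecutorService PINGER =
            Executors.newSingleThreadScheduledExecutor();

    static {
        // Ping every 30 s; sessions whose sockets are already dead will fail
        // here and can be closed, rather than lingering as "open" connections.
        PINGER.scheduleAtFixedRate(() -> {
            for (Session s : SESSIONS) {
                try {
                    s.getBasicRemote().sendPing(ByteBuffer.wrap(new byte[0]));
                } catch (IOException e) {
                    try { s.close(); } catch (IOException ignored) { }
                }
            }
        }, 30, 30, TimeUnit.SECONDS);
    }

    @OnOpen
    public void onOpen(Session session) { SESSIONS.add(session); }

    @OnClose
    public void onClose(Session session, CloseReason reason) { SESSIONS.remove(session); }

    @OnError
    public void onError(Session session, Throwable t) { SESSIONS.remove(session); }
}
```

A stricter variant also tracks pong replies (via an @OnMessage method taking a PongMessage) and closes sessions that stop answering, rather than relying only on the send failing.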

Re: Long Polling : Tomcat 7.0.50 / 8.0.9

2014-08-29 Thread anurag gupta
Can anyone help with this?

Update:

A simple long-polling test on Tomcat 7.0.50/7.0.55, implemented with the
JAX-RS 2.0 AsyncResponse mechanism.
I'm seeing the following errors in the logs and a large number of CLOSE_WAIT
connections. Why?

Exception in thread http-nio-8080-ClientPoller-0 java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
at java.util.HashMap$KeyIterator.next(HashMap.java:960)
at java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1067)
at org.apache.tomcat.util.net.NioEndpoint$Poller.timeout(NioEndpoint.java:1437)
at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1231)
at java.lang.Thread.run(Thread.java:744)

[ERROR] [2014-08-29 05:54:19,622] [-8080-ClientPoller-0] [he.tomcat.util.net.NioEndpoint] Error allocating socket processor
java.lang.NullPointerException
at org.apache.tomcat.util.net.NioEndpoint.processSocket(NioEndpoint.java:742)
at org.apache.tomcat.util.net.NioEndpoint$Poller.processKey(NioEndpoint.java:1273)
at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1226)
at java.lang.Thread.run(Thread.java:744)

[ERROR] [2014-08-29 05:48:35,941] [-8080-ClientPoller-1] [he.tomcat.util.net.NioEndpoint] Error allocating socket processor
java.lang.NullPointerException
at org.apache.tomcat.util.net.NioEndpoint.processSocket(NioEndpoint.java:742)
at org.apache.tomcat.util.net.NioEndpoint$Poller.processKey(NioEndpoint.java:1273)
at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1226)
at java.lang.Thread.run(Thread.java:744)

Exception in thread http-nio-8080-ClientPoller-0 java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
at java.util.HashMap$KeyIterator.next(HashMap.java:960)
at java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1067)
at org.apache.tomcat.util.net.NioEndpoint$Poller.timeout(NioEndpoint.java:1437)
at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1231)
at java.lang.Thread.run(Thread.java:744)
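For reference, a minimal sketch of the JAX-RS 2.0 AsyncResponse long-poll pattern described above; the path, timeout and payload are illustrative assumptions, not the actual resource. Note that the timeout handler both resumes the response with an empty JSON body and drops the cached reference, so suspended requests do not pile up in memory:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.TimeUnit;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.MediaType;

@Path("/poll")                                             // hypothetical path
public class LongPollResource {
    // waiting responses; some event source resumes them when data is available
    private static final ConcurrentLinkedQueue<AsyncResponse> WAITING =
            new ConcurrentLinkedQueue<>();

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public void poll(@Suspended final AsyncResponse response) {
        response.setTimeout(10, TimeUnit.MINUTES);         // the 10 min poll window
        response.setTimeoutHandler(r -> {
            WAITING.remove(r);
            r.resume("{}");                                // empty JSON on timeout
        });
        WAITING.add(response);
    }

    // called by whatever produces events
    public static void publish(String json) {
        AsyncResponse r;
        while ((r = WAITING.poll()) != null) {
            r.resume(json);
        }
    }
}
```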






On Fri, Aug 22, 2014 at 5:25 PM, anurag gupta anurag.11...@gmail.com
wrote:

 Ok, so the requests will be idle up to the long-poll timeout if no response
 is generated.

 So in our test setup we have 60 clients and each makes 5000 requests. These
 5000 requests are made at the same time and renewed (i.e. a new request is
 made in a loop) as soon as the app server sends a response (which in the
 worst case, i.e. no response was available, will be an empty JSON).

 A few minutes back I tried with processorCache="50", but Tomcat (8.0.9) still
 logged an OOM ("GC overhead limit exceeded") and around 70K sockets were open
 on the server (from /proc/net/sockstat).





 On Fri, Aug 22, 2014 at 5:03 PM, Mark Thomas ma...@apache.org wrote:

 On 22/08/2014 11:22, anurag gupta wrote:
  Executors:
 
  <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
            maxThreads="2048" minSpareThreads="1024" maxQueueSize="1"
            prestartminSpareThreads="true"/>
 
  This is the connector config:
 
  <Connector port="8080"
             protocol="org.apache.coyote.http11.Http11NioProtocol" redirectPort="8443"
             acceptCount="10" maxConnections="-1"
             acceptorThreadCount="5" executor="tomcatThreadPool"
             connectionLinger="-1" socket.soLingerOn="false" socket.soLingerTime="0"
             socket.soReuseAddress="true" connectionTimeout="1000"
             socket.soTimeout="1000" keepAliveTimeout="0" maxKeepAliveRequests="1"
             socket.soKeepAlive="false"/>
 
  The only way to know for sure is if you use a profiler and find out
  where your application is using memory.
 
  Yes, we do cache the AsyncResponse objects until the timeout happens or
  some response is generated.
 
  How long does a request take to process? Exactly how many concurrent
  requests are you trying to support?
  A long-poll request has a timeout of 10 mins (in this test), but we want
  to have it up to 60 mins if feasible.
  We are trying to figure out the maximum achievable number of concurrent requests.

 Concurrent requests != concurrent connections.

 Concurrent requests (i.e. where the server is actively doing something
 with a connection) will be limited to 2048 with that configuration
 (maximum number of available threads).

 Concurrent connections will depend on your test environment. For a single
 Tomcat HTTP connector, there is a hard limit of 64k connections per
 client but you can use multiple clients (each with their own IP address)
 to get around that. After that, you'll hit OS limits - that should be
 around several hundred k.

 Mark


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




 --
 Regards
 Anurag




-- 
Regards
Anurag


Re: Long Polling : Tomcat 7.0.50 / 8.0.9

2014-08-22 Thread anurag gupta
Thanks Mark.

The same application is running in a Jetty 9 server, and I ran a test for 5
hours with 300,000 requests (moving window of 9 mins) with 10 GB of heap.
Jetty didn't crash with an OOM, so I guess my application is not the source
of the OOM.

I'm currently using Tomcat 7.0.50 in production and it is doing well, and I
don't want to migrate to Jetty just for long polling (implemented using
AsyncResponse).

Any suggestions?

Regards
Anurag
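One way to settle whether the application or the container retains the memory is to capture a heap dump from the Tomcat JVM shortly before the OOM and open it in a profiler or Eclipse MAT. A minimal sketch; the output path is an assumption, and the same dump can also be produced externally with jmap:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void dump() throws Exception {
        // Proxy to the HotSpot diagnostic MBean of the running JVM
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // true = dump only live (reachable) objects
        diag.dumpHeap("/tmp/longpoll-heap.hprof", true);
    }
}
```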
 On Aug 22, 2014 2:10 PM, Mark Thomas ma...@apache.org wrote:

 On 22/08/2014 06:03, anurag gupta wrote:
 
 
  Hi All,
 
   I'm trying to implement long polling using the Servlet 3.0 spec.
  Implementation-wise it's done and works fine in Tomcat. The problem occurs
  when it is under load: e.g. when we send just 100,000 requests we see
  weird behaviour like requests timing out before the defined timeout, and
  Tomcat going OOM because the GC overhead limit is exceeded.

 The root cause of the OOM is most likely your application rather than
 Tomcat.

  I have tried this on 2 different versions of Tomcat (mentioned in the subject).
 
  OS: CentOS 6.5
  Process memory: 10g for both Xmx and Xms
 
  So I have a question: up to how many concurrent open (idle) connections can
  a Tomcat instance handle?

 As many as your operating system will allow. (Hint: It will be less than
 100k).

  How do we achieve the maximum number of idle connections?

 Fix your application so it doesn't trigger an OOME.

 Tune your OS.

 Mark

 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




Re: Long Polling : Tomcat 7.0.50 / 8.0.9

2014-08-22 Thread anurag gupta
Executors:

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="2048" minSpareThreads="1024" maxQueueSize="1"
          prestartminSpareThreads="true"/>

This is the connector config:

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol" redirectPort="8443"
           acceptCount="10" maxConnections="-1"
           acceptorThreadCount="5" executor="tomcatThreadPool"
           connectionLinger="-1" socket.soLingerOn="false" socket.soLingerTime="0"
           socket.soReuseAddress="true" connectionTimeout="1000"
           socket.soTimeout="1000" keepAliveTimeout="0" maxKeepAliveRequests="1"
           socket.soKeepAlive="false"/>

 The only way to know for sure is if you use a profiler and find out
 where your application is using memory.

Yes, we do cache the AsyncResponse objects until the timeout happens or some
response is generated.

 How long does a request take to process? Exactly how many concurrent
 requests are you trying to support?
A long-poll request has a timeout of 10 mins (in this test), but we want to
have it up to 60 mins if feasible.
We are trying to figure out the maximum achievable number of concurrent requests.
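One way to see how close a test run gets to that ceiling is to sample the executor's MBean while the load runs. A rough sketch; the MBean and attribute names are those commonly exposed for the tomcatThreadPool executor configured above, so verify them in jconsole for the Tomcat version in use:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ThreadPoolSampler {
    public static void sample() throws Exception {
        // Run inside the Tomcat JVM (e.g. from a servlet), or adapt to a remote JMX connection
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName executor = new ObjectName("Catalina:type=Executor,name=tomcatThreadPool");
        int active   = (Integer) mbs.getAttribute(executor, "activeCount");
        int poolSize = (Integer) mbs.getAttribute(executor, "poolSize");
        int max      = (Integer) mbs.getAttribute(executor, "maxThreads");
        System.out.printf("executor threads: %d active / %d in pool / %d max%n",
                active, poolSize, max);
    }
}
```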






On Fri, Aug 22, 2014 at 2:36 PM, Mark Thomas ma...@apache.org wrote:

 On 22/08/2014 09:47, anurag gupta wrote:
  Thanks Mark.
 
  The same application is running in a Jetty 9 server, and I ran a test for 5
  hours with 300,000 requests (moving window of 9 mins) with 10 GB of heap.
  Jetty didn't crash with an OOM, so I guess my application is not the source
  of the OOM.

 I disagree. I suspect configuration differences.

  The only way to know for sure is if you use a profiler and find out
  where your application is using memory.

  I'm currently using Tomcat 7.0.50 in production and it is doing well, and I
  don't want to migrate to Jetty just for long polling (implemented using
  AsyncResponse).

 Which connector?

  Any suggestions?

 How long does a request take to process? Exactly how many concurrent
 requests are you trying to support?

 Mark


 
  Regards
  Anurag
   On Aug 22, 2014 2:10 PM, Mark Thomas ma...@apache.org wrote:
 
  On 22/08/2014 06:03, anurag gupta wrote:
 
 
  Hi All,
 
   I'm trying to implement long polling using the Servlet 3.0 spec.
  Implementation-wise it's done and works fine in Tomcat. The problem
  occurs when it is under load: e.g. when we send just 100,000 requests we
  see weird behaviour like requests timing out before the defined timeout,
  and Tomcat going OOM because the GC overhead limit is exceeded.
 
  The root cause of the OOM is most likely your application rather than
  Tomcat.
 
  I have tried this on 2 different versions of Tomcat (mentioned in the subject).
 
  OS: CentOS 6.5
  Process memory: 10g for both Xmx and Xms
 
  So I have a question: up to how many concurrent open (idle) connections
  can a Tomcat instance handle?
 
  As many as your operating system will allow. (Hint: It will be less than
  100k).
 
  How do we achieve the maximum number of idle connections?
 
  Fix your application so it doesn't trigger an OOME.
 
  Tune your OS.
 
  Mark
 
  -
  To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
  For additional commands, e-mail: users-h...@tomcat.apache.org
 
 
 


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




-- 
Regards
Anurag


Re: Long Polling : Tomcat 7.0.50 / 8.0.9

2014-08-22 Thread anurag gupta
Ok, so the requests will be idle up to the long-poll timeout if no response
is generated.

So in our test setup we have 60 clients and each makes 5000 requests. These
5000 requests are made at the same time and renewed (i.e. a new request is
made in a loop) as soon as the app server sends a response (which in the
worst case, i.e. no response was available, will be an empty JSON).

A few minutes back I tried with processorCache="50", but Tomcat (8.0.9) still
logged an OOM ("GC overhead limit exceeded") and around 70K sockets were open
on the server (from /proc/net/sockstat).
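For illustration, a sketch of what one client worker in that renew-on-response loop might look like; the URL, timeouts and response handling are assumptions rather than the actual test harness:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PollWorker implements Runnable {
    private final URL url;          // e.g. new URL("http://localhost:8080/app/poll")

    public PollWorker(URL url) { this.url = url; }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setReadTimeout(11 * 60 * 1000);   // a bit above the 10 min server timeout
                try (InputStream in = conn.getInputStream()) {
                    byte[] buf = new byte[8192];
                    while (in.read(buf) != -1) {
                        // drain the response ("{}" when the long poll times out)
                    }
                }
                // renew immediately: the next request is issued as soon as a response arrives
            } catch (IOException e) {
                // back off briefly on errors before renewing
                try { Thread.sleep(1000); } catch (InterruptedException ie) { return; }
            }
        }
    }
}
```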





On Fri, Aug 22, 2014 at 5:03 PM, Mark Thomas ma...@apache.org wrote:

 On 22/08/2014 11:22, anurag gupta wrote:
  Executors:
 
  <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
            maxThreads="2048" minSpareThreads="1024" maxQueueSize="1"
            prestartminSpareThreads="true"/>
 
  This is the connector config:
 
  <Connector port="8080"
             protocol="org.apache.coyote.http11.Http11NioProtocol" redirectPort="8443"
             acceptCount="10" maxConnections="-1"
             acceptorThreadCount="5" executor="tomcatThreadPool"
             connectionLinger="-1" socket.soLingerOn="false" socket.soLingerTime="0"
             socket.soReuseAddress="true" connectionTimeout="1000"
             socket.soTimeout="1000" keepAliveTimeout="0" maxKeepAliveRequests="1"
             socket.soKeepAlive="false"/>
 
  The only way to know for sure is if you use a profiler and find out
  where your application is using memory.
 
  Yes, we do cache the AsyncResponse objects until the timeout happens or
  some response is generated.
 
  How long does a request take to process? Exactly how many concurrent
  requests are you trying to support?
  A long-poll request has a timeout of 10 mins (in this test), but we want
  to have it up to 60 mins if feasible.
  We are trying to figure out the maximum achievable number of concurrent requests.

 Concurrent requests != concurrent connections.

 Concurrent requests (i.e. where the server is actively doing something
 with a connection) will be limited to 2048 with that configuration
 (maximum number of available threads).

 Concurrent connections will depend on your test environment. For a single
 Tomcat HTTP connector, there is a hard limit of 64k connections per
 client but you can use multiple clients (each with their own IP address)
 to get around that. After that, you'll hit OS limits - that should be
 around several hundred k.

 Mark


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




-- 
Regards
Anurag


Re: Long Polling : Tomcat 7.0.50 / 8.0.9

2014-08-21 Thread anurag gupta


 Hi All,

  I'm trying to implement long polling using the Servlet 3.0 spec.
 Implementation-wise it's done and works fine in Tomcat. The problem occurs
 when it is under load: e.g. when we send just 100,000 requests we see
 weird behaviour like requests timing out before the defined timeout, and
 Tomcat going OOM because the GC overhead limit is exceeded.

 I have tried this on 2 different versions of Tomcat (mentioned in the subject).

 OS: CentOS 6.5
 Process memory: 10g for both Xmx and Xms

 So I have a question: up to how many concurrent open (idle) connections can
 a Tomcat instance handle? How do we achieve the maximum number of idle
 connections?
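For context, a bare-bones sketch of the Servlet 3.0 async long-poll pattern being described; the path, timeout and payload are illustrative assumptions, not the actual implementation:

```java
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {
    // suspended requests waiting for data
    private static final Queue<AsyncContext> WAITING = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        final AsyncContext ctx = req.startAsync();
        ctx.setTimeout(10 * 60 * 1000);            // 10 minute long-poll window
        ctx.addListener(new AsyncListener() {
            public void onTimeout(AsyncEvent e) throws IOException {
                WAITING.remove(ctx);
                e.getAsyncContext().getResponse().getWriter().write("{}");
                e.getAsyncContext().complete();    // empty JSON when nothing arrived
            }
            public void onError(AsyncEvent e)      { WAITING.remove(ctx); }
            public void onComplete(AsyncEvent e)   { WAITING.remove(ctx); }
            public void onStartAsync(AsyncEvent e) { }
        });
        WAITING.add(ctx);
    }

    // called by whatever produces events
    public static void publish(String json) throws IOException {
        AsyncContext ctx;
        while ((ctx = WAITING.poll()) != null) {
            ctx.getResponse().getWriter().write(json);
            ctx.complete();
        }
    }
}
```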



-- 
Regards
Anurag