Thanks for looking further into this, Mark,
We are running:
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.1) (7u65-2.5.1-4ubuntu1~0.12.04.2)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
Linux 3.11.0-15-generic #25~precise1-Ubuntu SMP Thu Jan 30 17:39:31 UTC
On 10/11/2014 09:57, Lars Engholm Johansen wrote:
Hi Mark,
I looked into our javax.websocket.Endpoint implementation and found the
following suspicious code:
When we need to close the WebSocket session already in the .onOpen() method
(i.e. when rejecting a connection), we call session.close() asynchronously
after 1 second via a java.util.Timer task.
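For reference, a minimal sketch of the pattern described above; the endpoint class name, the rejection check and the one-second delay are illustrative assumptions, not the actual application code:

import java.io.IOException;
import java.util.Timer;
import java.util.TimerTask;

import javax.websocket.CloseReason;
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.Session;

// Illustrative endpoint that rejects a connection by closing the session
// shortly after onOpen(), via a java.util.Timer, as described above.
public class RejectingEndpoint extends Endpoint {

    private static final Timer TIMER = new Timer("ws-reject-timer", true);

    @Override
    public void onOpen(final Session session, EndpointConfig config) {
        if (!isAllowed(session)) {
            // Close asynchronously ~1 second later instead of inside onOpen()
            TIMER.schedule(new TimerTask() {
                @Override
                public void run() {
                    try {
                        session.close(new CloseReason(
                                CloseReason.CloseCodes.VIOLATED_POLICY, "Rejected"));
                    } catch (IOException e) {
                        // Ignore: the connection may already be gone
                    }
                }
            }, 1000L);
        }
    }

    private boolean isAllowed(Session session) {
        // Placeholder for the application's own acceptance check
        return false;
    }
}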
Hi all,
I have good news, as I have identified the reason for the devastating
NioEndpoint.Poller thread death:
In rare circumstances a ConcurrentModificationException can occur in the Poller's
connection timeout handling, which is called from OUTSIDE the try-catch(Throwable) of
Poller.run().
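To make the failure mode concrete, here is a deliberately simplified, hypothetical sketch of a poller-style loop (not Tomcat's actual Poller code): the timeout handling runs outside the try-catch(Throwable), so a ConcurrentModificationException thrown there escapes run() and silently ends the thread.

import java.util.ArrayList;
import java.util.List;

// Hypothetical poller-style loop, only to illustrate the failure mode:
// anything thrown by timeout() escapes run() and kills the thread.
public class PollerLoopSketch implements Runnable {

    // Shared with other threads; concurrent modification during iteration
    // can make the iterator throw ConcurrentModificationException.
    private final List<Object> connections = new ArrayList<Object>();
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            try {
                // select() and event processing would happen here,
                // protected against anything short of a VM error
                processEvents();
            } catch (Throwable t) {
                // swallowed: the loop keeps running
            }
            // Timeout handling OUTSIDE the try-catch(Throwable):
            // if iteration fails here, the exception propagates out of run()
            // and no further poller activity ever happens on this thread.
            timeout();
        }
    }

    private void processEvents() {
        // placeholder for selector processing
    }

    private void timeout() {
        for (Object connection : connections) {
            // per-connection timeout checks; another thread adding or removing
            // a connection mid-iteration triggers ConcurrentModificationException
        }
    }
}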
Thanks guys for all the feedback.
I have tried the following suggested tasks:
- Upgrading Tomcat to the newest 7.0.55 on all our servers - the problem
still persists
- Forcing a System.gc() when the connection count starts running away - the
connection count does not drop (see the sketch below)
- Lowering the log level
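For the System.gc() experiment, a full GC can also be requested on the running Tomcat without restarting it. A minimal sketch using remote JMX; the port 9010 and the assumption that com.sun.management.jmxremote is enabled on the Tomcat JVM are illustrative:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Requests a full GC in a remote JVM via the standard java.lang:type=Memory MXBean.
// Assumes Tomcat was started with JMX remote enabled on port 9010 (illustrative).
public class ForceRemoteGc {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    connection, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            memory.gc();  // equivalent to System.gc() inside the target JVM
        } finally {
            connector.close();
        }
    }
}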
Thanks Lars. If you are indeed experiencing an uncaught error, let us know
what it is.
Are there any log entries that would indicate that the poller thread has
died?
This thread (or these threads) starts when Tomcat starts, and a stack overflow on a
processing thread should never affect the poller thread.
Filip
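If nothing shows up in the logs, another option is to check directly whether the poller threads are still alive, either with a jstack thread dump or programmatically. A small sketch, assuming the Tomcat 7 NIO thread naming seen in this thread (http-nio-80-ClientPoller-x) and meant to run inside the Tomcat JVM (for example from a small monitoring servlet):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Counts live NIO poller threads in the current JVM.
// The "ClientPoller" substring matches names like http-nio-80-ClientPoller-0.
public final class PollerThreadCheck {

    public static int livePollerThreads() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int alive = 0;
        for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds())) {
            if (info != null && info.getThreadName().contains("ClientPoller")) {
                alive++;
            }
        }
        return alive;
    }

    public static void main(String[] args) {
        // When run inside the Tomcat JVM, 0 here means the poller has died.
        System.out.println("Live ClientPoller threads: " + livePollerThreads());
    }
}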
Thanks for all the replies guys.
Have you observed a performance increase by setting
acceptorThreadCount to 4 instead of a lower number? I'm just curious.
No, but this was the consensus after lengthy discussions in my team. We
have 12 CPU cores - better safe than sorry. I know that the
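For context, acceptorThreadCount is an attribute of the HTTP connector in conf/server.xml. A hedged example of an NIO connector carrying the value discussed here; the port and the other attribute values are illustrative, not recommendations:

<!-- NIO connector with the acceptor thread count discussed above.
     Values shown are illustrative, not recommendations. -->
<Connector port="80"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           acceptorThreadCount="4"
           connectionTimeout="20000"
           maxConnections="10000" />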
I will try to force a GC next time I am at the console, about to restart a
Tomcat where one of the http-nio-80-ClientPoller-x threads has died and the
connection count is exploding.
But I do not see this as a solution - can you somehow deduce why this
thread died from the outcome of a GC?
And could
Lars,
On 6/16/14, 5:40 AM, Lars Engholm Johansen wrote:
Our sites can run for days without problems, but once in a while
the Tomcat connection count suddenly starts growing abnormally
fast. See this graph: http://imgur.com/s4fOUte - netstat shows
Our sites still function normally, with no CPU spikes, during this build-up
until around 60,000 connections, but then the server refuses further
connections and a manual Tomcat restart is required.
Yes, the connection limit is a 16-bit short count (2^16 = 65,536) minus some
reserved addresses. So your system
Our company is running several high-volume Tomcat 7.0.52 production servers
on Ubuntu 12.04.
We are using Tomcat WebSockets (the JSR-356 implementation) heavily, with 100M
text messages (100 GiB) per day.
We monitor webserver health by measuring several key parameters every
minute, including tomcat