Oh, I forgot to mention: I switched back to the APR connector on 8080 for the weekend and all was fine. I switched back to NIO this morning to gather these stats, and within a few hours it was stuck at 100% CPU again, with very little variance in traffic (it's a low-traffic site right now, a roughly constant 1 Mbit/s).
----- Original Message ----
From: Peter
Thanks Filip, oversight on my part, here we go:
[EMAIL PROTECTED]:/logs/tomcat> ps -eL -o pid,%cpu,lwp | grep -i 4046 | grep -iv 0.0
4046 0.6 4047
4046 0.1 4052
4046 0.1 4053
4046 21.9 4078
4046 18.7 4108
4046 0.1 4109
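The decimal LWP ids in the ps output above can be matched against the `nid=0x...` values in the thread dump by converting them to hex. A quick shell sketch for the two hot threads (4078 and 4108, the ones using ~22% and ~19% CPU):

```shell
# Convert the hot LWP ids from the ps output to hex so they can be
# grep'ed against the "nid=0x..." field of the thread dump.
for lwp in 4078 4108; do
  printf 'LWP %d -> nid=0x%x\n' "$lwp" "$lwp"
done
```

LWP 4078 converts to nid=0xfee and 4108 to nid=0x100c, which match the "http-8080-Poller-0" and "http-8080-exec-18" threads in the dump below.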
"http-8080-Poller-0" daemon prio=1 tid=0x0000002ae2f860e0 nid=0xfee runnable [0x00000000412cf000..0x00000000412cfc10]
   at java.util.HashMap.newKeyIterator(HashMap.java:889)
   at java.util.HashMap$KeySet.iterator(HashMap.java:921)
   at java.util.HashSet.iterator(HashSet.java:154)
   at sun.nio.ch.SelectorImpl.processDeregisterQueue(SelectorImpl.java:127)
   - locked <0x0000002ab2ac2810> (a java.util.HashSet)
   at sun.nio.ch.PollSelectorImpl.doSelect(PollSelectorImpl.java:60)
   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
   - locked <0x0000002ab2ac2870> (a sun.nio.ch.Util$1)
   - locked <0x0000002ab2ac2858> (a java.util.Collections$UnmodifiableSet)
   - locked <0x0000002ab2ab5c80> (a sun.nio.ch.PollSelectorImpl)
   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
   at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1417)
   at java.lang.Thread.run(Thread.java:595)
"http-8080-exec-18" daemon prio=1 tid=0x0000002ae3c5d8e0 nid=0x100c runnable [0x00000000430ec000..0x00000000430edb10]
   at org.apache.coyote.http11.InternalNioOutputBuffer.access$000(InternalNioOutputBuffer.java:44)
   at org.apache.coyote.http11.InternalNioOutputBuffer$SocketOutputBuffer.doWrite(InternalNioOutputBuffer.java:794)
   at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:126)
   at org.apache.coyote.http11.filters.GzipOutputFilter$FakeOutputStream.write(GzipOutputFilter.java:164)
   at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:95)
   at org.apache.coyote.http11.filters.GzipOutputFilter.end(GzipOutputFilter.java:122)
   at org.apache.coyote.http11.InternalNioOutputBuffer.endRequest(InternalNioOutputBuffer.java:396)
   at org.apache.coyote.http11.Http11NioProcessor.action(Http11NioProcessor.java:1080)
   at org.apache.coyote.Response.action(Response.java:183)
   at org.apache.coyote.Response.finish(Response.java:305)
   at org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:276)
   at org.apache.catalina.connector.Response.finishResponse(Response.java:486)
   at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:287)
   at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:887)
   at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:696)
   at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2009)
   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
   at java.lang.Thread.run(Thread.java:595)
----- Original Message ----
From: Filip Hanik
Since you are running on Linux, you can get the id of the thread that is taking up all the CPU; just use a top binary that lets you list individual threads.
As you can see, the thread dump you have doesn't really show anything; you're simply assuming that it's that call taking up the CPU, but if your CPU usage is very high, then very little code is actually moving through.
There are a few bug reports that look related, but nothing concrete, and until you get the actual thread causing the CPU usage you won't know for sure. A couple of examples:
http://issues.apache.org/bugzilla/show_bug.cgi?id=42090
http://issues.apache.org/bugzilla/show_bug.cgi?id=42925
Gather up the data, get the thread id that is causing the CPU usage, match that with your thread dump, and then you will know for sure.
Filip
Peter wrote:
Hi
We are having a problem with Tomcat 6 using the NIO connector (running on Linux with the Java HotSpot(TM) 64-Bit Server VM, build 1.5.0_12-b04, mixed mode): it consumes all CPU after a few hours in production. Prior to that we ran Tomcat 6 with AJP and Apache 2.0 with mod_jk in front of it for over a month without any problems. While it is consuming all CPU it still serves requests, but obviously much slower. During the last "episode" I collected some thread dumps over a period of 10 - 15 minutes and found 3 runnable threads that were present in all the dumps and were doing the exact same thing:
"http-8080-exec-41" daemon prio=1 tid=0x0000002ae320dad0 nid=0x12ac runnable [0x0000000045c18000..0x0000000045c18c10]
   at org.apache.coyote.http11.InternalNioOutputBuffer.access$000(InternalNioOutputBuffer.java:44)
   at org.apache.coyote.http11.InternalNioOutputBuffer$SocketOutputBuffer.doWrite(InternalNioOutputBuffer.java:794)
   at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:126)
   at org.apache.coyote.http11.filters.GzipOutputFilter$FakeOutputStream.write(GzipOutputFilter.java:164)
   at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:95)
--
"http-8080-exec-41" daemon prio=1 tid=0x0000002ae320dad0 nid=0x12ac runnable [0x0000000045c18000..0x0000000045c18c10]
   at org.apache.coyote.http11.InternalNioOutputBuffer.access$000(InternalNioOutputBuffer.java:44)
   at org.apache.coyote.http11.InternalNioOutputBuffer$SocketOutputBuffer.doWrite(InternalNioOutputBuffer.java:794)
   at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:126)
   at org.apache.coyote.http11.filters.GzipOutputFilter$FakeOutputStream.write(GzipOutputFilter.java:164)
   at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:95)
--
"http-8080-exec-41" daemon prio=1 tid=0x0000002ae320dad0 nid=0x12ac runnable [0x0000000045c18000..0x0000000045c18c10]
   at org.apache.coyote.http11.InternalNioOutputBuffer.access$000(InternalNioOutputBuffer.java:44)
   at org.apache.coyote.http11.InternalNioOutputBuffer$SocketOutputBuffer.doWrite(InternalNioOutputBuffer.java:794)
   at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:126)
   at org.apache.coyote.http11.filters.GzipOutputFilter$FakeOutputStream.write(GzipOutputFilter.java:164)
   at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:95)
"http-8080-exec-29" daemon prio=1 tid=0x0000002ae4152d10 nid=0x6e3a runnable [0x0000000043af7000..0x0000000043af7b90]
   at org.apache.coyote.http11.InternalNioOutputBuffer.access$000(InternalNioOutputBuffer.java:44)
   at org.apache.coyote.http11.InternalNioOutputBuffer$SocketOutputBuffer.doWrite(InternalNioOutputBuffer.java:794)
   at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:126)
   at org.apache.coyote.http11.filters.GzipOutputFilter$FakeOutputStream.write(GzipOutputFilter.java:164)
   at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:95)
"http-8080-exec-16" daemon prio=1 tid=0x0000002ae39bf030 nid=0x18d1 runnable [0x0000000043cf9000..0x0000000043cf9e10]
   at org.apache.coyote.http11.InternalNioOutputBuffer.access$000(InternalNioOutputBuffer.java:44)
   at org.apache.coyote.http11.InternalNioOutputBuffer$SocketOutputBuffer.doWrite(InternalNioOutputBuffer.java:794)
   at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:126)
   at org.apache.coyote.http11.filters.GzipOutputFilter$FakeOutputStream.write(GzipOutputFilter.java:164)
   at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:95)
etc. Here are the connector configurations:
<Connector port="8080"
connectionTimeout="20000"
maxThreads="300"
enableLookups="false"
compression="5000"
protocol="org.apache.coyote.http11.Http11NioProtocol"
redirectPort="8443"
keepAliveTimeout="5000"
maxKeepAliveRequests="1000" />
<Connector port="8443"
protocol="org.apache.coyote.http11.Http11AprProtocol"
maxThreads="300"
minSpareThreads="25"
maxSpareThreads="75"
enableLookups="false"
disableUploadTimeout="true"
acceptCount="100"
scheme="https"
secure="true"
SSLEnabled="true"
SSLCertificateFile="xyz.com.crt"
SSLCertificateKeyFile="xyz.com.key"
SSLPassword="" />
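(For reference, the workaround mentioned at the top of the thread -- running 8080 on the APR connector instead of NIO -- would look roughly like the sketch below. This is just the configuration above with the protocol swapped, not a tested config:
<Connector port="8080"
connectionTimeout="20000"
maxThreads="300"
enableLookups="false"
compression="5000"
protocol="org.apache.coyote.http11.Http11AprProtocol"
redirectPort="8443"
keepAliveTimeout="5000"
maxKeepAliveRequests="1000" />
The APR connector requires the Tomcat native library to be installed, as it already must be for the 8443 connector above.)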
The server goes into this state very regularly: this configuration has been in service for 3 days, and it goes into this state every 2 - 5 hours until restarted.
Has anyone experienced any similar behaviour? Any ideas or suggestions?
Thanks
Peter
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]