On 10/07/2010 08:55, jikai wrote:
> Here are more details from when the error occurs:
>
> 1) nginx suddenly reports lots of "Connection timed out" errors, like
> this:
>
> 2010/07/09 19:26:37 [error] 11558#0: *3067544729 connect() failed (110:
> Connection timed out) while connecting to upstream, ...
>
> 2) A few requests can still connect to Tomcat and are processed
> normally, but most requests from nginx end in connection timeouts. CPU
> usage and network traffic on the Tomcat server drop sharply.
>
> 3) The Tomcat acceptor and poller threads are RUNNABLE, but most worker
> threads are TIMED_WAITING:
>
> "http-10.132.23.74-8090-ClientPoller" daemon prio=10 tid=0x00002aab355e2c00 nid=0x5afd runnable [0x0000000044169000..0x0000000044169e20]
>    java.lang.Thread.State: RUNNABLE
>      at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>      at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:215)
>      at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
>      at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
>      - locked <0x00002aaaba1d8c90> (a sun.nio.ch.Util$1)
>      - locked <0x00002aaaba1d8c78> (a java.util.Collections$UnmodifiableSet)
>      - locked <0x00002aaaba1d8a98> (a sun.nio.ch.EPollSelectorImpl)
>      at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
>      at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1473)
>      at java.lang.Thread.run(Thread.java:619)
>
> "http-10.132.23.74-8090-Acceptor-0" daemon prio=10 tid=0x00002aab355df800 nid=0x5afc runnable [0x0000000044068000..0x0000000044068ca0]
>    java.lang.Thread.State: RUNNABLE
>      at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
>      at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
>      - locked <0x00002aaab8936bd8> (a java.lang.Object)
>      at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1198)
>      at java.lang.Thread.run(Thread.java:619)
>
> 4) There is no java.lang.OutOfMemoryError in catalina.out.
>
> 5) I tried the JIO connector instead of NIO, but the error is still
> there.
>
> 6) My NIO connector configuration is:
>
> <Connector port="8090" enableLookups="false" redirectPort="8443"
>            protocol="org.apache.coyote.http11.Http11NioProtocol"
>            connectionTimeout="30000" maxThreads="1000" acceptCount="800"
>            URIEncoding="UTF-8" />
>
> Is there any other reason why Tomcat can't accept requests?
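A note on item 6: with no executor attribute, that Connector creates its
own private thread pool sized by maxThreads. A shared Executor (which the
reply below asks about) is declared separately in server.xml and
referenced from the Connector. A minimal sketch, where the pool name,
namePrefix and minSpareThreads values are illustrative assumptions, not
taken from the original post:

    <!-- Shared pool; name, namePrefix and minSpareThreads are illustrative. -->
    <Executor name="tomcatThreadPool" namePrefix="http-8090-exec-"
              maxThreads="1000" minSpareThreads="25"/>

    <!-- The same Connector, delegating to the shared Executor. Once the
         executor attribute is set, the Connector's own maxThreads is ignored. -->
    <Connector port="8090" enableLookups="false" redirectPort="8443"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               executor="tomcatThreadPool"
               connectionTimeout="30000" acceptCount="800"
               URIEncoding="UTF-8"/>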
Because it's already saturated with requests?  No server has infinite
capacity.

How many threads did jstack report were running? Can you connect with
JMX and see what state the connector is in? Are you using an Executor in
combination with your Connector?


p

> On 10/07/2010 06:32, jikai wrote:
>> Hi all:
>>
>> Our web site uses Nginx in front of Tomcat 6.0.28 with the NIO
>> connector. Under heavy load, after running for about 24 hours, Tomcat
>> stops accepting requests and we must restart it.
>>
>> The nginx error log shows lots of "Connection timed out" entries. A
>> jstack dump of Tomcat shows most threads are TIMED_WAITING, like this:
>>
>> "http-10.132.23.74-8090-exec-54" daemon prio=10 tid=0x00002aab36ecec00 nid=0x3175 waiting on condition [0x00000000498c0000..0x00000000498c0e20]
>>    java.lang.Thread.State: TIMED_WAITING (parking)
>>      at sun.misc.Unsafe.park(Native Method)
>>      - parking to wait for <0x00002aaaba3cf768> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>>      at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
>>      at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
>>      at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:395)
>>      at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:944)
>>      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:906)
>>      at java.lang.Thread.run(Thread.java:619)
>>
>> A jmap histogram suggests we have a large memory leak:
>>
>>  num     #instances         #bytes  class name
>> ----------------------------------------------
>>    1:       7772927      336536416  [C
>>    2:       7774657      310986280  java.lang.String
>>    3:        398536      162602688  com.manu.dynasty.base.domain.User
>>    4:       3460146      138405840  java.sql.Timestamp
>>    5:       2334971      112078608  java.util.HashMap$Entry
>>    6:        792095      101388160  com.manu.dynasty.function.domain.Friend
>>  ...
>>  108:           437          80408  java.net.SocksSocketImpl
>>
>> The number of objects such as com.manu.dynasty.base.domain.User and
>> com.manu.dynasty.function.domain.Friend is much larger than in a
>> normal situation, while the number of java.net.SocksSocketImpl
>> instances is unusually small.
>>
>> So my question is: could the memory leak be the reason Tomcat can't
>> create sockets to accept nginx's requests?
>
> A memory leak will mean that eventually the JVM is unable to allocate
> memory for the creation of any new objects, and yes that will include
> network connections.
>
> Work out what's holding onto those objects. Are you storing them in the
> user session perhaps?
>
>
> p
>
>> Our app is running in a production environment now, and every day we
>> have to restart Tomcat, so any help is very appreciated.
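The questions above about thread counts can be answered with a quick
summary rather than by reading a full jstack dump. Below is a minimal
sketch, assuming it runs inside the Tomcat JVM (for example from a
diagnostic servlet or JSP); the class name is hypothetical and only the
standard java.lang.management API is used:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import java.util.EnumMap;
    import java.util.Map;

    /**
     * Tallies live threads by Thread.State via the platform ThreadMXBean.
     * Inside the Tomcat JVM this shows how many http-* workers are
     * actually RUNNABLE versus parked idle in the pool; as a standalone
     * main() it only reports the current JVM's own threads.
     */
    public class ThreadStateSummary {

        public static Map<Thread.State, Integer> summarise() {
            ThreadMXBean mbean = ManagementFactory.getThreadMXBean();
            Map<Thread.State, Integer> counts =
                    new EnumMap<Thread.State, Integer>(Thread.State.class);
            for (ThreadInfo info : mbean.getThreadInfo(mbean.getAllThreadIds())) {
                if (info == null) {
                    continue; // thread exited between the two calls
                }
                Integer n = counts.get(info.getThreadState());
                counts.put(info.getThreadState(), n == null ? 1 : n + 1);
            }
            return counts;
        }

        public static void main(String[] args) {
            for (Map.Entry<Thread.State, Integer> e : summarise().entrySet()) {
                System.out.println(e.getKey() + ": " + e.getValue());
            }
        }
    }

Worth noting: http-* exec threads parked in LinkedBlockingQueue.poll(),
as in the trace quoted above, are pool workers idle and waiting for
tasks, so a large TIMED_WAITING count of that shape suggests the problem
lies on the accept/connection side (or in memory exhaustion, as the
reply points out) rather than in a saturated worker pool.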