Re: blocked / hanging on call to HandlerRequest.checkRequest
Hi Bill,

> If you just want the problem to go away, then look for the attribute
> request.registerRequests in http://tomcat.apache.org/tomcat-5.5-doc/config/ajp.html .
> If that is set to false, there is no locking within the checkRequest method.
> You lose the ability to get stats for the request threads via JMX (including
> the manager status page), but that method also stops being a bottleneck if
> you get a flood of new requests.

Thanks for the reply. I found that option yesterday and set up my Connector element with request.registerRequests=false, which solved that. Then I got blocking on the JspServletWrapper.service method, so I precompiled the JSPs, which solved that. Then I got blocking in some of my own code where I'd never had a problem before. In short, I think the box is simply overloaded, so I'm going to upgrade it; the 5-minute load average peaks at around 20 and is generally over 6 during the daytime.

Cheers for your reply,
Simon

To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
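For anyone finding this later: the attribute goes on the AJP Connector element in server.xml, as Simon describes. A minimal sketch (the port and address values here are illustrative, matching the 8011/8012 setup mentioned in this thread; request.registerRequests is the attribute documented at the linked Tomcat 5.5 page):

```xml
<!-- server.xml: AJP connector with per-request JMX registration disabled.
     Only request.registerRequests is the setting under discussion;
     the other attributes are placeholders for this thread's setup. -->
<Connector port="8011" address="127.0.0.1"
           protocol="AJP/1.3"
           request.registerRequests="false" />
```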
blocked / hanging on call to HandlerRequest.checkRequest
Hi,

I'm running Tomcat 5.5.25 on Debian Sarge 32-bit Linux (2.6.8 kernel) with ~1.5GB RAM on a Pentium 4 2GHz, with a MySQL 5.0.27 database. I've got a configuration with Apache mod_jk 1.2.25 balancing to 2 Tomcats, both running on JDK 1.6.0_16 with -Xmx256M.

Periodically, generally at busy times, looking at the JK Status Manager the busy count on one of the Tomcats will go up and the requests channelled through that container will start to hang. The busy count will steadily increase and the throughput will drop dramatically (i.e. the Acc column in jk_status will stop incrementing by 30 every 10 secs and go down to something like 4). This continues until either I stop that Tomcat member through the JK Status Manager (by editing the worker settings), or the thread count goes up over the number of permitted Apache requests (150 at the moment) and Apache is restarted automatically by an out-of-process monitoring app. If I stop the Tomcat instance through the JK Status Manager, then the busy count will gradually (over a period of 5 - 10 mins) decrease and get to 0.
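For context, a workers.properties sketch consistent with the setup described (worker names and ports are taken from the jk_status reading later in this message; everything else is illustrative, not Simon's actual file):

```properties
# mod_jk workers: two local Tomcats behind a load balancer, plus the
# status worker used for the JK Status Manager mentioned above.
worker.list=loadbalancer,jkstatus

worker.tomcatA.type=ajp13
worker.tomcatA.host=localhost
worker.tomcatA.port=8011

worker.tomcatB.type=ajp13
worker.tomcatB.host=localhost
worker.tomcatB.port=8012

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=tomcatA,tomcatB

worker.jkstatus.type=status
```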
I took a thread dump by tee-ing the output of catalina.out and then sending kill -SIGQUIT <pid> when it was in the described busy state. The crux of it seemed to be a lot of threads blocked, waiting for a monitor held in HandlerRequest.checkRequest. Here is the printout of the thread holding the lock, which is in the RUNNABLE state:

"TP-Processor65" daemon prio=10 tid=0x08bc9400 nid=0x54bd runnable [0x55dd]
   java.lang.Thread.State: RUNNABLE
        at java.lang.Class.getMethod(Class.java:1605)
        at org.apache.commons.modeler.BaseModelMBean.setManagedResource(BaseModelMBean.java:764)
        at org.apache.commons.modeler.ManagedBean.createMBean(ManagedBean.java:393)
        at org.apache.commons.modeler.Registry.registerComponent(Registry.java:835)
        at org.apache.jk.common.ChannelSocket.registerRequest(ChannelSocket.java:466)
        at org.apache.jk.common.HandlerRequest.checkRequest(HandlerRequest.java:357)
        - locked <0x4490ee38> (a java.lang.Object)
        at org.apache.jk.common.HandlerRequest.decodeRequest(HandlerRequest.java:367)
        at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:261)
        at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:773)
        at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:703)
        at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:895)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
        at java.lang.Thread.run(Thread.java:619)

Then there were lots (e.g. 35) of the following type of thread, all blocked:

"TP-Processor63" daemon prio=10 tid=0x09ddc800 nid=0x549f waiting for monitor entry [0x55d3]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.jk.common.HandlerRequest.checkRequest(HandlerRequest.java:357)
        - waiting to lock <0x4490ee38> (a java.lang.Object)
        at org.apache.jk.common.HandlerRequest.decodeRequest(HandlerRequest.java:367)
        at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:261)
        at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:773)
        at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:703)
        at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:895)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
        at java.lang.Thread.run(Thread.java:619)

"TP-Processor62" daemon prio=10 tid=0x09dd4c00 nid=0x549e waiting for monitor entry [0x55ce]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.jk.common.HandlerRequest.checkRequest(HandlerRequest.java:357)
        - waiting to lock <0x4490ee38> (a java.lang.Object)
        at org.apache.jk.common.HandlerRequest.decodeRequest(HandlerRequest.java:367)
        at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:261)
        at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:773)
        at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:703)
        at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:895)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
        at java.lang.Thread.run(Thread.java:619)

etc.

Here is a typical reading from the JK Status Manager for the balanced members:

Name     Type   Host            Addr            Act  State  D  F    M  V    Acc    Err  CE   RE  Wr   Rd    Busy  Max  Route    RR  Cd
tomcatA  ajp13  localhost:8011  127.0.0.1:8011  ACT  OK     0  100  1  104  42540  0    426  0   34M  792M  80    92   tomcatA      0/0
tomcatB  ajp13  localhost:8012  127.0.0.1:8012  ACT  OK     0  100  1  97   42719  0    377  0   39M  807M  4     57   tomcatB      0/0

I'm been
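The contention pattern in the dump above (one RUNNABLE thread holding a monitor inside checkRequest, many TP-Processor threads BLOCKED on it) can be reproduced and counted programmatically with the JDK's ThreadMXBean. A small self-contained sketch, not from this thread; the class name, thread counts, and sleep intervals are all illustrative:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sketch of the pattern in the dump: one thread holds a monitor while
// several others block on it, then we count BLOCKED threads via JMX.
public class BlockedThreadDemo {

    static final Object LOCK = new Object();

    // Starts 1 holder + 3 contending threads, returns the number of
    // threads the JVM reports as BLOCKED while the monitor is held.
    static long demoBlockedCount() throws InterruptedException {
        Thread holder = new Thread(() -> {
            synchronized (LOCK) {
                try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
            }
        });
        holder.start();
        Thread.sleep(100); // let the holder acquire the monitor

        Thread[] waiters = new Thread[3];
        for (int i = 0; i < waiters.length; i++) {
            waiters[i] = new Thread(() -> { synchronized (LOCK) { } });
            waiters[i].start();
        }
        Thread.sleep(200); // let the waiters block, like the TP-Processor threads

        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long blocked = 0;
        for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
            if (info != null && info.getThreadState() == Thread.State.BLOCKED) {
                blocked++;
            }
        }

        holder.interrupt(); // release the monitor early so the waiters can finish
        holder.join();
        for (Thread t : waiters) t.join();
        return blocked;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("threads blocked on the monitor: " + demoBlockedCount());
    }
}
```

A kill -SIGQUIT (or jstack <pid>) taken while the waiters are contending shows the same "waiting for monitor entry" / BLOCKED stacks as the dump above.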
Re: blocked / hanging on call to HandlerRequest.checkRequest
Simon Papillon simon.papil...@gmail.com wrote in message news:e9cf50b20909280512o2905849fref38fc97a06dd...@mail.gmail.com...

> [original problem description and thread dump snipped; see above]

If you just want the problem to go away, then look for the attribute request.registerRequests in http://tomcat.apache.org/tomcat-5.5-doc/config/ajp.html . If that is set to false, then there is no locking within the checkRequest method. You lose the ability to get stats for the request threads via JMX (including the manager status page), but that method also stops being a bottleneck if you get a flood of new requests.