Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
On 22.05.2009 03:54, Pantvaidya, Vishwajit wrote:
> -----Original Message-----
> From: Rainer Jung [mailto:rainer.j...@kippdata.de]
> Sent: Thursday, May 21, 2009 3:37 PM
> To: Tomcat Users List
> Subject: Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
>
>> On 22.05.2009 00:19, Pantvaidya, Vishwajit wrote:
>>> [Pantvaidya, Vishwajit] I will set:
>>> - cachesize=1 (the doc says jk will autoset this value only for the worker MPM, and we use httpd 2.0 prefork)
>>
>> You don't have to: JK will discover this number for the Apache web server automatically and set the pool size to this value.
>
> [Pantvaidya, Vishwajit] Does what you say hold true for jk 1.2.15 also? Because I saw that for the 1.2.15 cachesize directive, http://tomcat.apache.org/connectors-doc/reference/workers.html#Deprecated%20Worker%20Directives says that JK will discover the number of threads per child process on an Apache 2 web server with the worker MPM and set its default value to match the ThreadsPerChild Apache directive. Since we use the prefork MPM, I assumed we needed to set cachesize.

I would say yes, but now your Ops people who resist upgrading try to play my time against their time. I'm not going to look it up for them; they should upgrade ;)

>>> - remove cache and recycle timeouts
>>
>> Chris and I are not of the same opinion here. You can choose :)
>
> [Pantvaidya, Vishwajit] I think that may be only because my adding the connectionTimeout led you to believe that I wanted non-persistent connections. Now that I know persistent connections are better, I am trying to roll back connectionTimeout - and then I guess you will agree with Chris that I need to roll back the recycle timeouts, etc. in the workers file on the httpd side also?

My point is: persistent connections are good, but connections which are idle for a long time are not as good, so close them after some idle time, e.g. 10 minutes. Of course this means you need to create new ones once your load goes up again, but that's not a big problem.
Regards,
Rainer

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
Chetan,

On 5/21/2009 2:08 PM, Chetan Chheda wrote:
> I am following this thread with great interest. I have a similar issue as Vishwajit and have resorted to adding the connectionTimeout to get rid of a large number of RUNNABLE threads.

Why? Are you just offended by the number of threads, or do you have a legitimate resource problem?

> But mod_jk does not like tomcat threads timing out and logs the message "increase the backend idle connection timeout or the connection_pool_minsize" in the mod_jk logs, which leads me to believe that it's Apache that's not letting go of the threads in my case.

Again, you need to set the Connector's connectionTimeout /and/ your workers' connection_pool_timeout settings to the same time interval (note that they use different semantics... one is in seconds and the other is in ms, so read the documentation carefully).

-chris
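Chris's point about the mismatched units can be sketched like this; the worker name (`worker1`) and the 10-minute interval are illustrative, not taken from the thread:

```
# workers.properties (mod_jk side) - connection_pool_timeout is in SECONDS
worker.worker1.connection_pool_timeout=600

# server.xml (Tomcat side) - connectionTimeout is in MILLISECONDS, so the
# matching value for the same 10-minute interval is:
#   <Connector port="8009" protocol="AJP/1.3" connectionTimeout="600000" ... />
```

If the two values do not describe the same interval, one side drops idle connections while the other still believes they are alive, which is exactly the symptom being discussed.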
Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
Chris,

As with Vishwajit, my tomcat ends up with all threads busy and not serving any new requests. After setting the connectionTimeout the threads are being recycled, but apache is not liking it, as per the message in mod_jk.log.

Chetan
RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
> From: Chetan Chheda [mailto:chetan_chh...@yahoo.com]
> Subject: Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
>
> As with Vishwajit, my tomcat ends up with all threads busy and not serving any new requests. After setting the connectionTimeout the threads are being recycled, but apache is not liking it, as per the message in mod_jk.log.

Again, the most likely cause is something between httpd and Tomcat that is silently dropping the connections; a badly behaving firewall is a possible culprit.

- Chuck
RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
> -----Original Message-----
> From: Rainer Jung [mailto:rainer.j...@kippdata.de]
> Sent: Friday, May 22, 2009 2:53 AM
> To: Tomcat Users List
> Subject: Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
>
> My point is: persistent connections are good, but connections which are idle for a long time are not as good, so close them after some idle time, e.g. 10 minutes. Of course this means you need to create new ones once your load goes up again, but that's not a big problem.

[Pantvaidya, Vishwajit] Why are connections that are idle for a long time not good? I thought idle threads take only a little memory and CPU. Are there any other reasons?

Thanks a lot Rainer, Chuck, Chris, Andre, Pid, Martin and everyone else I missed. I spent quite some time yesterday chewing on everything I gathered in the last few days' interactions and the conflicting behavior we are seeing in our systems - that led to the following conclusions and action plan:

Behavior observed in different production systems:
a. medium-to-large thread count whether a firewall exists or not
b. % of runnable threads is much higher where there is a firewall between httpd/tomcat
c. at least 1 server where a firewall exists has run out of threads
d. at least 1 server where no firewall exists has run out of threads

Conclusions:
1. In general, runnable threads should not be a problem, unless they correspond to dropped connections. Since on our servers that have a firewall between httpd and tomcat, runnable connections are not being used for new requests and tomcat keeps on creating new threads (leading to #b/c above), those threads could correspond to:
   i. connections dropped by the firewall, or
   ii. hanging tomcat threads, because the httpd recycle timeout disconnected the connection from that side (and there was no connectionTimeout in server.xml so that tomcat could do the same), or
   iii. a combination of i and ii
2. Runnable threads on servers where no firewall exists (and we do not see the server running out of threads) should not be a point of concern, as they do not correspond to dropped connections, as seen from the netstat output at the end of this email. So #a above can be ignored.
3. Observation #d above is puzzling and currently I have no answer for it.

Action:
- Check both sides by using netstat -anop (the Apache side and the Tomcat side, without connectionTimeout, so you can see the problem in its original form). See whether the number of AJP connections in the various TCP states differs much between the netstat output on the Apache and on the Tomcat system.
- Bring workers.properties settings in line with Apache recommendations:
  - worker...cachesize=10 - set to 1
  - worker...cache_timeout=600 - remove
  - worker...recycle_timeout=300 - remove

Netstat output: connector running on 21005, no firewall between httpd/tomcat

Httpd side:
Proto Recv-Q Send-Q Local Address        Foreign Address      State        PID/Program name  Timer
tcp   0      0      129.41.29.241:53777  129.41.29.48:21005   ESTABLISHED  -                 keepalive (2869.65/0/0)
tcp   0      0      129.41.29.241:53943  129.41.29.48:21005   ESTABLISHED  -                 keepalive (3341.39/0/0)
tcp   0      0      129.41.29.241:49950  129.41.29.48:21005   ESTABLISHED  -                 keepalive (6701.51/0/0)
tcp   0      0      129.41.29.241:49927  129.41.29.48:21005   ESTABLISHED  -                 keepalive (6240.25/0/0)
tcp   0      0      129.41.29.241:49926  129.41.29.48:21005   ESTABLISHED  -                 keepalive (6239.47/0/0)
tcp   0      0      129.41.29.241:49971  129.41.29.48:21005   ESTABLISHED  -                 keepalive (6931.40/0/0)
tcp   0      0      129.41.29.241:49868  129.41.29.48:21005   ESTABLISHED  -                 keepalive (5743.83/0/0)
tcp   0      0      129.41.29.241:49865  129.41.29.48:21005   ESTABLISHED  -                 keepalive (5741.65/0/0)
tcp   0      0      129.41.29.241:49867  129.41.29.48:21005   ESTABLISHED  -                 keepalive (5743.16/0/0)
tcp   0      0      129.41.29.241:49901  129.41.29.48:21005   ESTABLISHED  -                 keepalive (5906.92/0/0)
tcp   0      0      129.41.29.241:49795  129.41.29.48:21005   ESTABLISHED  -                 keepalive (4659.11/0/0)
tcp   0      0      129.41.29.241:49558  129.41.29.48:21005   ESTABLISHED  -                 keepalive (1705.06/0/0)
tcp   0      0      129.41.29.241:50796  129.41.29.48:21005   ESTABLISHED  -                 keepalive (4551.79/0/0)
tcp   0      0      129.41.29.241:50784  129.41.29.48:21005   ESTABLISHED  -                 keepalive (4539.53/0/0)
tcp   0      0      129.41.29.241:50711  129.41.29.48:21005   ESTABLISHED  -                 keepalive
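The "check both sides" step can be scripted so the two hosts' outputs are easy to compare. A minimal sketch, assuming the AJP port 21005 from the output above; the sample lines are inlined only so the awk pipeline can be tried anywhere - in practice you would feed it real output from each machine:

```shell
# Summarize AJP connections per TCP state. In production, replace the
# inlined sample with:  netstat -anop | grep ':21005'
netstat_sample='tcp 0 0 129.41.29.241:53777 129.41.29.48:21005 ESTABLISHED
tcp 0 0 129.41.29.241:53943 129.41.29.48:21005 ESTABLISHED
tcp 0 0 129.41.29.241:49950 129.41.29.48:21005 TIME_WAIT'

# Field 6 is the TCP state; count occurrences of each state.
printf '%s\n' "$netstat_sample" \
  | awk '{count[$6]++} END {for (s in count) print s, count[s]}' \
  | sort
```

Run on both the Apache and the Tomcat host, a large difference in the per-state counts (e.g. many more ESTABLISHED on the Tomcat side) points at something in between dropping connections.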
Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
On 22.05.2009 21:09, Pantvaidya, Vishwajit wrote:
>> My point is: persistent connections are good, but connections which are idle for a long time are not as good, so close them after some idle time, e.g. 10 minutes. Of course this means you need to create new ones once your load goes up again, but that's not a big problem.
>
> [Pantvaidya, Vishwajit] Why are connections that are idle for a long time not good? I thought idle threads take only a little memory and CPU. Are there any other reasons?

Because you might want to monitor connections in order to learn how many threads you need for your load and how things grow or shrink over time. If you keep connections open for an unlimited time, you'll only monitor the biggest need since restart, which is often not very interesting, because it often is artificial (triggered by some performance slowness, you might have a very big connection number created during a short time).

Second: because they so often make trouble in combination with firewalls.

So in general I like persistent connections, as long as they are closed when idle for a longer time. So usually I set the pool min size to 0 and the idle connection timeout to 10 minutes.

> Thanks a lot Rainer, Chuck, Chris, Andre, Pid, Martin and everyone else I missed. [...]
> Behavior observed in different production systems:
> a. medium-to-large thread count whether a firewall exists or not
> b. % of runnable threads is much higher where there is a firewall between httpd/tomcat
> c. at least 1 server where a firewall exists has run out of threads
> d. at least 1 server where no firewall exists has run out of threads

Concurrency = Load * ResponseTime

Concurrency: number of requests being processed in parallel
Load: number of requests per second being handled
ResponseTime: average response time in seconds

So in case you have a performance problem and, for a given load, your response time goes up by a factor of ten, the number of connections will also go up by a factor of 10. That's most often the reason for d), and was the reason why we asked for thread dumps.

> Conclusions: [...]
> 3. Observation #d above is puzzling and currently I have no answer for it.

If d) happens again, do some thread dumps.

> Action:
> - Check both sides by using netstat -anop [...]
> - Bring workers.properties settings in line with Apache recommendations:
>   - worker...cachesize=10 - set to 1

Respectively, when using Apache: remove this. Rely on the defaults for that one.

>   - worker...cache_timeout=600 - remove
>   - worker...recycle_timeout=300 - remove

Hmmm.

> Netstat output: connector running on 21005, no firewall between httpd/tomcat
> [netstat output quoted in the previous message]
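Rainer's Concurrency = Load * ResponseTime rule is Little's law, and the factor-of-ten effect is easy to check with a couple of lines (the request rate and response times below are made-up illustrative numbers):

```python
def expected_concurrency(load_rps: float, response_time_s: float) -> float:
    """Little's law: requests in flight = arrival rate * avg time in system."""
    return load_rps * response_time_s

# At 50 requests/second and a 0.2 s average response time,
# about 10 requests are in flight at once:
normal = expected_concurrency(50, 0.2)    # 10.0

# If a performance problem makes responses 10x slower at the same load,
# the in-flight count - and the threads/connections needed - grows 10x:
degraded = expected_concurrency(50, 2.0)  # 100.0
```

This is why a server that never runs out of threads under healthy response times can suddenly exhaust a 200-thread pool during a slowdown, which matches observation d).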
RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
> -----Original Message-----
> From: Rainer Jung [mailto:rainer.j...@kippdata.de]
> Sent: Friday, May 22, 2009 12:39 PM
> To: Tomcat Users List
> Subject: Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
>
>> [Pantvaidya, Vishwajit] Why are connections that are idle for a long time not good? I thought idle threads take only a little memory and CPU. Are there any other reasons?
>
> Because you might want to monitor connections in order to learn how many threads you need for your load and how things grow or shrink over time. If you keep connections open for an unlimited time, you'll only monitor the biggest need since restart, which is often not very interesting, because it often is artificial (triggered by some performance slowness, you might have a very big connection number created during a short time).

[Pantvaidya, Vishwajit] Good reason - I think ultimately, after some immediate testing to diagnose the out-of-threads issues, I will use timeouts.

>>> d. at least 1 server where no firewall exists has run out of threads
>
> Concurrency = Load * ResponseTime
>
> Concurrency: number of requests being processed in parallel
> Load: number of requests per second being handled
> ResponseTime: average response time in seconds
>
> So in case you have a performance problem and, for a given load, your response time goes up by a factor of ten, the number of connections will also go up by a factor of 10. That's most often the reason for d), and was the reason why we asked for thread dumps.

[Pantvaidya, Vishwajit] Again a good explanation that makes a lot of sense - I do seem to remember we had performance problems on that machine. Will keep this in mind, monitor the threads, and take dumps if out-of-threads reoccurs on that server.

>>> - Bring workers.properties settings in line with Apache recommendations:
>>>   - worker...cachesize=10 - set to 1
>
> Respectively, when using Apache: remove this. Rely on the defaults for that one.

[Pantvaidya, Vishwajit] Sure, will do - once we migrate to jk 1.2.28.

>>>   - worker...cache_timeout=600 - remove
>>>   - worker...recycle_timeout=300 - remove
>
> Hmmm.

[Pantvaidya, Vishwajit] Considering the excellent reasons you have given above, ultimately I will retain timeouts. But for testing the firewall issues I need to roll back connectionTimeout in server.xml, and to make sure that my settings are consistent I need to roll back the above timeouts also.

Again, thanks - I think I have reasonable explanations for most of the issues / conflicting observations. This thread may be quiet for some time as I do more testing per the actions I mentioned - will get back with results and final conclusions later.
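Rainer's preferred setup - persistent connections that are nevertheless closed after a longer idle period - would look roughly like this in workers.properties. This is a sketch: the worker name is illustrative, and `connection_pool_minsize` assumes a reasonably recent mod_jk (it is not available in very old releases such as 1.2.15):

```
# Allow the pool to shrink to zero connections when idle:
worker.worker1.connection_pool_minsize=0

# Close AJP connections idle for more than 10 minutes (value in seconds):
worker.worker1.connection_pool_timeout=600
```

To stay consistent, the AJP Connector in server.xml would then carry a matching connectionTimeout="600000" (the same interval, but in milliseconds).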
Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
Vishwajit,

On 5/20/2009 3:01 PM, Pantvaidya, Vishwajit wrote:
> [Pantvaidya, Vishwajit] Ok, so RUNNABLE i.e. persistent threads should not be an issue. The only reason why I thought that was an issue was that I was observing that none of the RUNNABLE connections were being used to serve new requests, only the WAITING ones were - and I do know for sure that the RUNNABLE threads were not servicing any existing requests, as I was the only one using the system then.

It seems pretty clear that this is what your problem is. See if you can follow the order of events described below:

1. Tomcat and Apache httpd are started. httpd makes one or more (persistent) AJP connections to Tomcat and holds them open (duh). Each connection from httpd->Tomcat puts a Java thread in the RUNNABLE state (though it is actually blocked on a socket read; it's not really runnable).

2. Some requests are received by httpd and sent over the AJP connections to Tomcat (or not... it really doesn't matter).

3. Time passes, and your recycle_timeout (300s) or cache_timeout (600s) expires.

4. A new request comes in to httpd destined for Tomcat. mod_jk dutifully follows your instructions for closing the connections expired in #3 above (note that Tomcat has no idea that the connection has been closed, and so those threads remain in the RUNNABLE state, not connected to anything, lost forever).

5. A new connection (or multiple new connections... not sure exactly how mod_jk's connection expiration-and-reconnect logic is done) is made to Tomcat, which allocates a new thread (or threads) which is/are in the RUNNABLE state.

Rinse, repeat; your server chokes to death when it runs out of threads.

The above description accounts for your loss of 4 threads at a time: your web browser requests the initial page followed by 3 other assets (image, css, whatever). Each one of them hits step #4 above, causing a new AJP connection to be created, with the old one still hanging around on the Tomcat side just wasting a thread and memory.

By setting connectionTimeout on the AJP Connector, you are /doing what you should have done in the first place, which is match mod_jk cache_timeout with Connector connectionTimeout/. This allows the threads on the Tomcat side to expire just like those on the httpd side. They should expire at (virtually) the same time, and everything works as expected.

This problem is compounded by your initial configuration, which created 10 connections from httpd->Tomcat for every (prefork) httpd process, resulting in 9 useless AJP connections for every httpd process. I suspect that you were expiring 10 connections at a time instead of just one, meaning that you were running out of threads 10 times faster than you otherwise would.

Suggestions:
1. Tell your ops guys we know what we're talking about
2. Upgrade mod_jk
3. Set connection_pool_size=1, or, better yet, remove the config altogether and let mod_jk determine its own value
4. Remove all timeouts unless you know that you have a misbehaving firewall. If you do, enable cping/cpong (the strategy preferred by at least one author of mod_jk)

-chris
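For reference, cping/cpong probing (suggestion #4) is configured on the mod_jk side. A sketch assuming mod_jk 1.2.27 or later, where `ping_mode` is available; the worker name and timeout value are illustrative:

```
# Probe the backend with a CPing packet and expect a CPong reply.
# ping_mode=A enables all probe occasions (connect, prepost, interval):
worker.worker1.ping_mode=A

# How long to wait for the CPong answer (value in milliseconds):
worker.worker1.ping_timeout=10000
```

The advantage over blanket idle timeouts is that a connection silently killed by a firewall is detected before a real request is sent down it, instead of the request failing.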
Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
I am following this thread with great interest. I have a similar issue as Vishwajit and have resorted to adding the connectionTimeout to get rid of a large number of RUNNABLE threads. But mod_jk does not like tomcat threads timing out and logs the message "increase the backend idle connection timeout or the connection_pool_minsize" in the mod_jk logs, which leads me to believe that it's Apache that's not letting go of the threads in my case.

> From: Christopher Schultz ch...@christopherschultz.net
> To: Tomcat Users List users@tomcat.apache.org
> Sent: Thursday, May 21, 2009 1:05:21 PM
> Subject: Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
>
> [message quoted in full in the previous message]
RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
> -----Original Message-----
> From: Christopher Schultz [mailto:ch...@christopherschultz.net]
> Sent: Thursday, May 21, 2009 10:05 AM
> To: Tomcat Users List
> Subject: Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
>
> It seems pretty clear that this is what your problem is. See if you can follow the order of events described below:
> [steps 1-5 quoted in full in the previous message]
>
> By setting connectionTimeout on the AJP Connector, you are /doing what you should have done in the first place, which is match mod_jk cache_timeout with Connector connectionTimeout/. This allows the threads on the Tomcat side to expire just like those on the httpd side. They should expire at (virtually) the same time, and everything works as expected.

[Pantvaidya, Vishwajit] Thanks Chris - all this makes a lot of sense. However, I am not seeing the same problem (tomcat running out of threads) on other servers which are running exactly the same configuration, except that in those cases there is no firewall separating the web server and tomcat. Here are the figures for RUNNABLE threads on 3 different tomcat servers running the same config:

1. Firewall between httpd and tomcat - 120 threads, 112 runnable (93%)
2. No firewall between httpd and tomcat - 40 threads, 11 runnable (27%)
3. No firewall between httpd and tomcat - 48 threads, 2 runnable (4%)

This leads me to believe there is some firewall-related mischief happening with #1.

> This problem is compounded by your initial configuration, which created 10 connections from httpd->Tomcat for every (prefork) httpd process, resulting in 9 useless AJP connections for every httpd process. I suspect that you were expiring 10 connections at a time instead of just one, meaning that you were running out of threads 10 times faster than you otherwise would.

[Pantvaidya, Vishwajit] I did not note connections expiring in multiples of 10, but I will keep an eye out for this. However, from the cachesize explanation at http://tomcat.apache.org/connectors-doc/reference/workers.html#Deprecated%20Worker%20Directives I get the impression that this value imposes an upper limit - meaning it may not necessarily create 10 tomcat/jk connections for an httpd child process.

> Suggestions:
> 1. Tell your ops guys we know what we're talking about
> 2. Upgrade mod_jk
> 3. Set connection_pool_size=1, or, better yet, remove the config altogether and let mod_jk determine its own value
> 4. Remove all timeouts unless you know that you have a misbehaving firewall. If you do, enable cping/cpong (the strategy preferred by at least one author of mod_jk)

[Pantvaidya, Vishwajit] I will set:
- cachesize=1 (the doc says jk will autoset this value only for the worker MPM, and we use httpd 2.0 prefork)
- remove cache and recycle timeouts

But before all this, I will retest after removing connectionTimeout in server.xml - just to test whether there are firewall-caused issues as mentioned above.
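On the "monitor threads and take dumps" point: on Linux a thread dump is usually triggered with `kill -3 <tomcat_pid>` (the dump goes to catalina.out) or, with a JDK installed, `jstack <tomcat_pid>`. Counting AJP worker threads per state is then a one-liner; the dump lines below are abridged, hypothetical samples so the pipeline can be tried without a running JVM:

```shell
# Sample lines in the shape of a Java thread dump (abridged, hypothetical):
dump_sample='"TP-Processor1" daemon prio=1 tid=0x089 nid=0x1a runnable
"TP-Processor2" daemon prio=1 tid=0x08a nid=0x1b runnable
"TP-Processor3" daemon prio=1 tid=0x08b nid=0x1c in Object.wait()'

# In practice:  jstack <tomcat_pid> | grep -c 'TP-Processor.*runnable'
printf '%s\n' "$dump_sample" | grep -c 'TP-Processor.*runnable'
```

Tracking this count over time shows whether RUNNABLE AJP threads are accumulating (the leak pattern described in this thread) or merely reflect current load.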
Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
On 22.05.2009 00:19, Pantvaidya, Vishwajit wrote:
> [Pantvaidya, Vishwajit] I will set:
> - cachesize=1 (the doc says jk will autoset this value only for the worker MPM, and we use httpd 2.0 prefork)

You don't have to: JK will discover this number for the Apache web server automatically and set the pool size to this value.

> - remove cache and recycle timeouts

Chris and I are not of the same opinion here. You can choose :)

> But before all this, I will retest after removing connectionTimeout in server.xml - just to test whether there are firewall-caused issues as mentioned above.

Regards,
Rainer
RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
> -----Original Message-----
> From: Rainer Jung [mailto:rainer.j...@kippdata.de]
> Sent: Thursday, May 21, 2009 3:37 PM
> To: Tomcat Users List
> Subject: Re: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity
>
>> On 22.05.2009 00:19, Pantvaidya, Vishwajit wrote:
>>> [Pantvaidya, Vishwajit] I will set:
>>> - cachesize=1 (the doc says jk will autoset this value only for the worker MPM, and we use httpd 2.0 prefork)
>>
>> You don't have to: JK will discover this number for the Apache web server automatically and set the pool size to this value.

[Pantvaidya, Vishwajit] Does what you say hold true for jk 1.2.15 also? Because I saw that for the 1.2.15 cachesize directive, http://tomcat.apache.org/connectors-doc/reference/workers.html#Deprecated%20Worker%20Directives says that JK will discover the number of threads per child process on an Apache 2 web server with the worker MPM and set its default value to match the ThreadsPerChild Apache directive. Since we use the prefork MPM, I assumed we needed to set cachesize.

>>> - remove cache and recycle timeouts
>>
>> Chris and I are not of the same opinion here. You can choose :)

[Pantvaidya, Vishwajit] I think that may be only because my adding the connectionTimeout led you to believe that I wanted non-persistent connections. Now that I know persistent connections are better, I am trying to roll back connectionTimeout - and then I guess you will agree with Chris that I need to roll back the recycle timeouts, etc. in the workers file on the httpd side also?
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
RUNNABLE and WAITING are thread states in the JVM. They don't relate in general to states inside Tomcat. In this special situation they do. The states you observe are both completely normal in themselves. One (the stack you abbreviate with RUNNABLE) is handling a persistent connection between web server and Tomcat which could send more requests, but at the moment no request is being processed; the other (you abbreviate with WAITING) is available to be associated with a new connection that might come in some time in the future. [Pantvaidya, Vishwajit] Thanks Rainer. The RUNNABLE thread - is it a connection between Tomcat and the webserver, or between Tomcat and AJP? Is it still RUNNABLE and not WAITING because the servlet has not explicitly closed the connection yet (something like HttpServletResponse.getOutputStream().close())? [Pantvaidya, Vishwajit] My problem is that tomcat is running out of threads (maxthreadcount=200). My analysis of the issue is: - thread count is exceeded because of a slow buildup of RUNNABLE threads (and not because the number of simultaneous http requests at some point exceeded the max thread count) - most/all newly created TP-Processor threads are in RUNNABLE state and remain RUNNABLE - never go back to WAITING state (waiting for thread pool) - in such a case, I find that tomcat spawns new threads when a new request comes in - this continues and finally tomcat runs out of threads - Setting connectionTimeout in server.xml seems to have resolved the issue - but I am wondering if that was just a workaround, i.e. whether so many threads remaining RUNNABLE indicates a flaw in our webapp, i.e. it not doing whatever's necessary to close them and return them to WAITING state. [Pantvaidya, Vishwajit] After setting connectionTimeout in tomcat server.xml, the number of open threads is now consistently under 10 and most of them are now in WAITING state. So it looks like connectionTimeout also destroys idle threads. 
But I am still wondering - why should I have to set connectionTimeout to prevent tomcat from running out of threads? I certainly don't mind if the TP-Processor threads continue to hang around as long as they are in WAITING state. 1. Is it expected behavior that most tomcat threads are in RUNNABLE state? 2. If not, does it indicate a problem in the app or in the tomcat configuration? My thinking is that the answer to #1 is no, and that to #2 is that it is an app problem. But I just wanted to confirm and find out what people out there think.
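The connectionTimeout being discussed lives on the AJP connector in conf/server.xml; the following is an illustrative sketch only - the port, maxThreads, and timeout values are examples chosen to match the 200-thread and 10-minute figures mentioned in the thread, not configuration taken from the poster's system:

```
<!-- server.xml: AJP connector with an idle-connection timeout.
     600000 ms = 10 minutes; port and maxThreads are example values.
     Idle AJP connections (and the threads reading on them) are
     closed after this timeout instead of lingering forever. -->
<Connector port="8009" protocol="AJP/1.3"
           maxThreads="200"
           connectionTimeout="600000" />
```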
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
[Pantvaidya, Vishwajit] My problem is that tomcat is running out of threads (maxthreadcount=200). My analysis of the issue is: - thread count is exceeded because of a slow buildup of RUNNABLE threads (and not because the number of simultaneous http requests at some point exceeded the max thread count) I don't believe this reason. I would say the thread count is exceeded because you allow a much higher concurrency on the web server layer. [Pantvaidya, Vishwajit] Is there a tool you can recommend for me to monitor/log the http requests so that I have figures to back up my analysis? - most/all newly created TP-Processor threads are in RUNNABLE state and remain RUNNABLE - never go back to WAITING state (waiting for thread pool) So you are using persistent connections. There's no *problem* with that per se. If you are uncomfortable with it, configure the timeouts in the Tomcat connector *and* mod_jk. [Pantvaidya, Vishwajit] Ok, so RUNNABLE i.e. persistent threads should not be an issue. The only reason why I thought that was an issue was that I was observing that none of the RUNNABLE connections were being used to serve new requests, only the WAITING ones were - and I do know for sure that the RUNNABLE threads were not servicing any existing requests, as I was the only one using the system then. - in such a case, I find that tomcat spawns new threads when a new request comes in request - connection - this continues and finally tomcat runs out of threads That's too simple; usually the new requests should be handled by existing Apache processes that already have a connection to Tomcat and will not create a new one. [Pantvaidya, Vishwajit] In my case the existing persistent connections are not servicing any new requests.
Re: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
On 20.05.2009 00:53, Pantvaidya, Vishwajit wrote: -Original Message- From: Rainer Jung [mailto:rainer.j...@kippdata.de] Sent: Monday, May 18, 2009 11:10 PM To: Tomcat Users List Subject: Re: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity On 19.05.2009 02:54, Caldarale, Charles R wrote: From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity Ok - so then the question is when does tomcat transition the thread from Running to Waiting? Does that happen after AJP drops that connection? RUNNABLE and WAITING are thread states in the JVM. They don't relate in general to states inside Tomcat. In this special situation they do. The states you observe are both completely normal in themselves. One (the stack you abbreviate with RUNNABLE) is handling a persistent connection between web server and Tomcat which could send more requests, but at the moment no request is being processed; the other (you abbreviate with WAITING) is available to be associated with a new connection that might come in some time in the future. [Pantvaidya, Vishwajit] Thanks Rainer. The RUNNABLE thread - is it a connection between Tomcat and the webserver, or between Tomcat and AJP? Is it still RUNNABLE and not WAITING because the servlet has not explicitly closed the connection yet (something like HttpServletResponse.getOutputStream().close())? The thread handles a connection between the web server and Tomcat. AJP is the protocol used on that connection. It is runnable, because a socket read from inside the JVM puts a thread into runnable state. The socket read is used to read the next request and will block until data arrives over the established connection. So could the problem be occurring here because AJP is holding on to connections? Sorry, I haven't been following the thread that closely. 
Not sure what the problem you're referring to actually is, but having a Tomcat thread reading input from the AJP connector is pretty normal. The same to me. What's the problem? AJP is designed to reuse connections (use persistent connections). If you do not want them to be used for a very long time, or would like those connections to be closed when being idle, you have to configure the appropriate timeouts. Look at the timeouts documentation page of mod_jk. In general your max thread numbers in the web server layer and in the Tomcat AJP pool need to be set consistently. [Pantvaidya, Vishwajit] My problem is that tomcat is running out of threads (maxthreadcount=200). My analysis of the issue is: - thread count is exceeded because of a slow buildup of RUNNABLE threads (and not because the number of simultaneous http requests at some point exceeded the max thread count) I don't believe this reason. I would say the thread count is exceeded because you allow a much higher concurrency on the web server layer. - most/all newly created TP-Processor threads are in RUNNABLE state and remain RUNNABLE - never go back to WAITING state (waiting for thread pool) So you are using persistent connections. There's no *problem* with that per se. If you are uncomfortable with it, configure the timeouts in the Tomcat connector *and* mod_jk. - in such a case, I find that tomcat spawns new threads when a new request comes in request - connection - this continues and finally tomcat runs out of threads That's too simple; usually the new requests should be handled by existing Apache processes that already have a connection to Tomcat and will not create a new one. - Setting connectionTimeout in server.xml seems to have resolved the issue - but I am wondering if that was just a workaround, i.e. whether so many threads remaining RUNNABLE indicates a flaw in our webapp, i.e. it not doing whatever's necessary to close them and return them to WAITING state. 
No, it is a misconfiguration of your web server, mod_jk and Tomcat. The use of persistent AJP connections is opaque to the web application. Regards, Rainer
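Rainer's advice to configure timeouts on *both* sides could look like the following; a hedged sketch, assuming a worker named `worker1` (a placeholder), with the mod_jk pool timeout (in seconds) matched against the Tomcat connector timeout (in milliseconds):

```
# workers.properties (mod_jk side): close pooled AJP connections
# that have been idle for more than 600 seconds (10 minutes)
worker.worker1.connection_pool_timeout=600

# server.xml (Tomcat side) should use the matching value, but note
# the unit difference: connectionTimeout is in milliseconds, e.g.
#   <Connector port="8009" protocol="AJP/1.3"
#              connectionTimeout="600000" />
```

Keeping the two values in sync matters: if only one side times out, the other side can be left holding half-closed connections like the CLOSE_WAIT sockets seen later in this thread.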
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity Finally, is it possible that some bad code in the app could be hanging onto those RUNNABLE connections which is why tomcat is not releasing them? Once more: NO, NO, NO! The threads you see in a RUNNABLE state are perfectly normal and expected. Go do the netstat that Rainer suggested and let us know what you see. Stop fixating on the thread state. - Chuck THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY MATERIAL and is thus for use only by the intended recipient. If you received this in error, please contact the sender and delete the e-mail and its attachments from all computers.
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
Finally, is it possible that some bad code in the app could be hanging onto those RUNNABLE connections which is why tomcat is not releasing them? Once more: NO, NO, NO! The threads you see in a RUNNABLE state are perfectly normal and expected. Go do the netstat that Rainer suggested and let us know what you see. Stop fixating on the thread state. - Chuck [Pantvaidya, Vishwajit] Ok will do Chuck - thanks a lot for persisting with me through this issue.
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity

On httpd machine:
Proto Recv-Q Send-Q Local Address         Foreign Address        State
tcp        1      0 129.41.29.243:43225   172.27.127.201:21065   CLOSE_WAIT
tcp        1      0 129.41.29.243:43227   172.27.127.201:21065   CLOSE_WAIT
tcp        1      0 129.41.29.243:43237   172.27.127.201:21065   CLOSE_WAIT
tcp        1      0 129.41.29.243:43244   172.27.127.201:21065   CLOSE_WAIT
tcp        1      0 129.41.29.243:43245   172.27.127.201:21065   CLOSE_WAIT

On tomcat machine:
Proto Recv-Q Send-Q Local Address              Foreign Address           State
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43204    TIME_WAIT
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43205    TIME_WAIT
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43206    TIME_WAIT
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43211    TIME_WAIT
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43212    FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43213    FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43214    FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43215    TIME_WAIT
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43216    FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43217    FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43218    FIN_WAIT2

(The above was edited to remove irrelevant IP addresses and sort by port.) The fact that *none* of the ports match would suggest (but not prove) that someone in the middle is closing the connections, and not telling either end about it. - why do the 11 threads in the httpd output show port 21069 in the foreign addr? They're for a different IP address. - currently I do have connectionTimeout set in server.xml. I will need to wait until night to reset that. Do the netstat -anop again; it should be more interesting. - Chuck
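Since a firewall or other middlebox silently dropping idle connections is the suspicion here, one mitigation available in jk 1.2.x is to enable TCP keepalive on the AJP sockets, so the OS sends periodic probes that keep stateful devices from expiring the idle connection. A hedged sketch (the worker name is a placeholder):

```
# workers.properties: have the OS send TCP keepalive probes on idle
# AJP connections, so a stateful firewall between httpd and Tomcat
# does not silently drop them while they sit idle
worker.worker1.socket_keepalive=1
```

Note that the probe interval is governed by OS-level TCP settings, so this helps only if the firewall's idle timeout is longer than the OS keepalive interval.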
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
The fact that *none* of the ports match would suggest (but not prove) that someone in the middle is closing the connections, and not telling either end about it. Do the netstat -anop again; it should be more interesting. - Chuck [Pantvaidya, Vishwajit] Tomcat server port 11065, connector port 21065.

On Httpd Side:
Proto Recv-Q Send-Q Local Address         Foreign Address        State       PID/Program name  Timer
...
tcp        0      0 0.0.0.0:25            0.0.0.0:*              LISTEN      -                 off (0.00/0/0)
tcp        1      0 129.41.29.243:44003   172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7194.80/0/0)
tcp        1      0 129.41.29.243:44002   172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7194.43/0/0)
tcp        1      0 129.41.29.243:44001   172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7192.26/0/0)
tcp        1      0 129.41.29.243:44000   172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7189.64/0/0)
tcp        1      0 129.41.29.243:43990   172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7016.23/0/0)
tcp        1      0 129.41.29.243:43999   172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7189.30/0/0)
tcp        1      0 129.41.29.243:43998   172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7186.76/0/0)
tcp        1      0 129.41.29.243:43996   172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7183.86/0/0)
tcp        1      0 129.41.29.243:43994   172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7174.09/0/0)
tcp        1      0 129.41.29.243:43993   172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7164.63/0/0)
...

On Tomcat side: (Not all processes could be identified; non-owned process info will not be shown - you would have to be root to see it all.) Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address              Foreign Address           State      PID/Program name  Timer
...
tcp        0      0 :::21065                   :::*                      LISTEN     6988/java         off (0.00/0/0)
tcp        0      0 :::127.0.0.1:11065         :::*                      LISTEN     6988/java         off (0.00/0/0)
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43992    FIN_WAIT2  -                 timewait (56.71/0/0)
tcp        0      0 :::172.27.127.201:21065    :::129.41.29.243:43991    FIN_WAIT2  -                 timewait (56.24/0/0)
...
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
The fact that *none* of the ports match would suggest (but not prove) that someone in the middle is closing the connections, and not telling either end about it. Do the netstat -anop again; it should be more interesting. - Chuck [Pantvaidya, Vishwajit] Tomcat server port 11065, connector port 21065. [netstat output identical to the previous message snipped] By the way, in the thread console, I see 8 TP-Processor threads (2 RUNNABLE, 6 WAITING). 
But the above netstat output on the tomcat side shows only 2 connections on port 21065. Shouldn't there be 1 thread per connection?
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
definitely not TC .. the problem is with your WebServer. Please read this tutorial on diagnosing CLOSE_WAIT sockets from your WebServer: http://publib.boulder.ibm.com/infocenter/wasinfo/v4r0/index.jsp?topic=/com.ibm.support.was40.doc/html/Plug_in/swg21163659.html Martin From: vpant...@selectica.com To: users@tomcat.apache.org Date: Wed, 20 May 2009 15:17:15 -0700 Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity [quoted netstat output from the previous message snipped]
Re: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
On 19.05.2009 02:54, Caldarale, Charles R wrote: From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity Ok - so then the question is when does tomcat transition the thread from Running to Waiting? Does that happen after AJP drops that connection? RUNNABLE and WAITING are thread states in the JVM. They don't relate in general to states inside Tomcat. In this special situation they do. The states you observe are both completely normal in themselves. One (the stack you abbreviate with RUNNABLE) is handling a persistent connection between web server and Tomcat which could send more requests, but at the moment no request is being processed; the other (you abbreviate with WAITING) is available to be associated with a new connection that might come in some time in the future. That's my understanding; I would presume some form of keep-alive is in play. However, others know the AJP characteristics better than I do. HTTP Keep-Alive does not change the picture. It's transparent to Tomcat and mod_jk. Those Keep-Alive packets do not count as requests. Rainer is the ultimate resource, but I suspect he's asleep right now. So could the problem be occurring here because AJP is holding on to connections? Sorry, I haven't been following the thread that closely. Not sure what the problem you're referring to actually is, but having a Tomcat thread reading input from the AJP connector is pretty normal. The same to me. What's the problem? AJP is designed to reuse connections (use persistent connections). If you do not want them to be used for a very long time, or would like those connections to be closed when being idle, you have to configure the appropriate timeouts. Look at the timeouts documentation page of mod_jk. In general your max thread numbers in the web server layer and in the Tomcat AJP pool need to be set consistently. 
Regards, Rainer
Re: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
Chuck, On 5/18/2009 8:54 PM, Caldarale, Charles R wrote: From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity Ok - so then the question is when does tomcat transition the thread from Running to Waiting? Does that happen after AJP drops that connection? That's my understanding; I would presume some form of keep-alive is in play. However, others know the AJP characteristics better than I do. Rainer is the ultimate resource, but I suspect he's asleep right now. My expectation would be that an AJP connection waiting for the next request in a set of keepalive requests would be WAITING: blocked on a socket read, rather than RUNNABLE. Or, maybe Java's thread states don't differentiate between actually runnable and runnable but blocked (as opposed to WAITING, which means waiting on a synchronization monitor). -chris
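Chris's question - whether Java's thread states distinguish "actually runnable" from "blocked in a socket read" - can be checked empirically: the JVM reports a thread blocked in a socket read as RUNNABLE, because the blocking happens inside native code, which is exactly what the idle AJP processor threads in this thread show. A minimal, self-contained sketch (the class and thread names are illustrative, not Tomcat code):

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketReadState {
    public static void main(String[] args) throws Exception {
        // Simulate an idle persistent connection: the "web server" side
        // connects but never sends a request.
        ServerSocket server = new ServerSocket(0);  // ephemeral port
        Socket webServerSide = new Socket("127.0.0.1", server.getLocalPort());
        Socket tomcatSide = server.accept();

        Thread processor = new Thread(() -> {
            try {
                InputStream in = tomcatSide.getInputStream();
                in.read();  // blocks in native code until data arrives or the peer closes
            } catch (Exception ignored) {
            }
        }, "TP-Processor-demo");
        processor.start();

        Thread.sleep(500);  // give the thread time to reach the blocking read

        // A thread blocked in a native socket read is reported as RUNNABLE,
        // not WAITING -- the same state the idle AJP processor threads show.
        Thread.State state = processor.getState();
        System.out.println("blocked reader state: " + state);
        if (state != Thread.State.RUNNABLE) {
            throw new AssertionError("expected RUNNABLE, got " + state);
        }

        webServerSide.close();  // closing the peer unblocks the read
        processor.join();
        tomcatSide.close();
        server.close();
    }
}
```

By contrast, a thread parked on a monitor or pool queue (Object.wait or similar) would show WAITING, which is what the pool's spare threads report.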
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
-Original Message- From: Rainer Jung [mailto:rainer.j...@kippdata.de] Sent: Monday, May 18, 2009 11:10 PM To: Tomcat Users List Subject: Re: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity On 19.05.2009 02:54, Caldarale, Charles R wrote: From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity Ok - so then the question is when does tomcat transition the thread from Running to Waiting? Does that happen after AJP drops that connection? RUNNABLE and WAITING are thread states in the JVM. They don't relate in general to states inside Tomcat. In this special situation they do. The states you observe are both completely normal in themselves. One (the stack you abbreviate with RUNNABLE) is handling a persistent connection between web server and Tomcat which could send more requests, but at the moment no request is being processed; the other (you abbreviate with WAITING) is available to be associated with a new connection that might come in some time in the future. [Pantvaidya, Vishwajit] Thanks Rainer. The RUNNABLE thread - is it a connection between Tomcat and the webserver, or between Tomcat and AJP? Is it still RUNNABLE and not WAITING because the servlet has not explicitly closed the connection yet (something like HttpServletResponse.getOutputStream().close())? So could the problem be occurring here because AJP is holding on to connections? Sorry, I haven't been following the thread that closely. Not sure what the problem you're referring to actually is, but having a Tomcat thread reading input from the AJP connector is pretty normal. The same to me. What's the problem? AJP is designed to reuse connections (use persistent connections). If you do not want them to be used for a very long time, or would like those connections to be closed when being idle, you have to configure the appropriate timeouts. 
Look at the timeouts documentation page of mod_jk. In general your max thread numbers in the web server layer and in the Tomcat AJP pool need to be set consistently. [Pantvaidya, Vishwajit] My problem is that tomcat is running out of threads (maxthreadcount=200). My analysis of the issue is: - thread count is exceeded because of a slow buildup of RUNNABLE threads (and not because the number of simultaneous http requests at some point exceeded the max thread count) - most/all newly created TP-Processor threads are in RUNNABLE state and remain RUNNABLE - never go back to WAITING state (waiting for thread pool) - in such a case, I find that tomcat spawns new threads when a new request comes in - this continues and finally tomcat runs out of threads - Setting connectionTimeout in server.xml seems to have resolved the issue - but I am wondering if that was just a workaround, i.e. whether so many threads remaining RUNNABLE indicates a flaw in our webapp, i.e. it not doing whatever's necessary to close them and return them to WAITING state.
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity [Pantvaidya, Vishwajit] Posting the thread dumps for the above 3 cases, The list usually filters out attachments, as it has done with yours. Either put them on a publicly accessible web site, or right in the text of the e-mail. - Chuck
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity From whatever I have read on this, it seems to me that this could happen if a servlet writes something to a response stream, closes the response stream, but after that keeps on doing some processing (e.g. running an infinite loop). No - the thread would be inside the servlet in that case. The thread here in the RUNNABLE state is waiting for a *new* request to come in over an active AJP connection; a thread in the WAITING state would be assigned to a new connection when one is accepted. - Chuck
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
-Original Message- From: Caldarale, Charles R [mailto:chuck.caldar...@unisys.com] Sent: Monday, May 18, 2009 4:02 PM To: Tomcat Users List Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity From whatever I have read on this, it seems to me that this could happen if a servlet writes something to a response stream, closes the response stream, but after that keeps on doing some processing (e.g. running an infinite loop). No - the thread would be inside the servlet in that case. The thread here in the RUNNABLE state is waiting for a *new* request to come in over an active AJP connection; a thread in the WAITING state would be assigned to a new connection when one is accepted. [Pantvaidya, Vishwajit] Ok - so then the question is when does tomcat transition the thread from Running to Waiting? Does that happen after AJP drops that connection? So could the problem be occurring here because AJP is holding on to connections?
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity Ok - so then the question is when does tomcat transition the thread from Running to Waiting? Does that happen after AJP drops that connection? That's my understanding; I would presume some form of keep-alive is in play. However, others know the AJP characteristics better than I do. Rainer is the ultimate resource, but I suspect he's asleep right now. So could the problem be occurring here because AJP is holding on to connections? Sorry, I haven't been following the thread that closely. Not sure what the problem you're referring to actually is, but having a Tomcat thread reading input from the AJP connector is pretty normal. - Chuck
RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity
From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] Subject: RE: Running out of tomcat threads - why many threads in RUNNABLEstage even with no activity Since I did not get any responses to this, just wanted to ask - did I post this to the wrong list and should I be posting this to the tomcat developers list instead? This should be the correct list, but there's probably only one person who can definitively answer your question and he may be busy (or on holiday). - Chuck