Re: Too many connections in keepalive state in jk threadpool
Thank you for your reply Rainer. With netstat -an I see a lot of connections in ESTABLISHED state on port 8009 coming from localhost, so I think I can assume those are the connections established between Apache and Tomcat, both residing on the same machine. In any case, in the Tomcat manager webapp I see that all of them are in the K (keepalive) state.

Googling and reading the documentation more carefully, I came across this discussion: http://serverfault.com/questions/149171/keep-alive-header-not-sent-from-tomcat-5-5-http-connector which turned my attention to the AJP connector configured in Tomcat's server.xml file. It seems the connectionTimeout parameter (the doc says: "The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented. The default value is infinite (i.e. no timeout).") is set to infinite, and this affects the other parameter, keepAliveTimeout (the doc says: "The number of milliseconds this Connector will wait for another AJP request before closing the connection. The default value is to use the value that has been set for the connectionTimeout attribute."). keepAliveTimeout exists only in Tomcat 6+; in Tomcat 5.5 you can only touch connectionTimeout. I suppose that, without touching these parameters, the connections remain open in the K state even if they do not receive a ping or a new request.

In any case, I tried changing the values of these two parameters in both Tomcat 6.0 and Tomcat 5.5, and this does seem to close the connections after the configured time:

  <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
             connectionTimeout="1" keepAliveTimeout="1" />

I tried with short values (for example 10 s) and long ones (300 s) and everything seems to work correctly. I also did some tests with a JSP page that takes a long time to serve its response, and in those cases too everything works fine: if the page takes more time to respond than the timeout, the connection is not closed.
If the page takes more time to respond than the timeout and ttl configured on the Apache side, the browser gets a proxy timeout error, but on the server (Tomcat manager app) I see my page in the Service state until it finishes all its work. Now I see in the Tomcat manager app that the list of connections in the pool is normally very short, as expected.

Thank you for your help,
Marco

--
View this message in context: http://tomcat.10.n6.nabble.com/Too-many-connections-in-keepalive-state-in-jk-threadpool-tp4539290p4985607.html
Sent from the Tomcat - User mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
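[Editorial aside: the netstat check Marco describes can be sketched as a pipeline like the one below. This is a hypothetical sketch, not from the thread; the two sample lines are invented stand-ins for real `netstat -an` output so the snippet is self-contained.]

```shell
# Count ESTABLISHED connections on the AJP port 8009.
# Against a live system you would run:
#   netstat -an | grep ':8009' | grep -c ESTABLISHED
# The sample below is made up so the pipeline is self-contained.
sample='tcp        0      0 127.0.0.1:8009   127.0.0.1:51234   ESTABLISHED
tcp        0      0 127.0.0.1:8009   127.0.0.1:51235   TIME_WAIT'
printf '%s\n' "$sample" | grep ':8009' | grep -c ESTABLISHED   # → 1
```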
Re: Too many connections in keepalive state in jk threadpool
On 23.08.2012 09:50, marcobuc wrote:
> Hi, we are experiencing a very similar problem, with the difference that
> we are using mod_proxy_ajp instead of mod_jk to connect Apache with
> Tomcat. As with mod_jk, the connection is made to the jk port 8009 opened
> by a connector configured in Tomcat's server.xml file:
>
>   <Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
>
> We tried configuring the timeout parameters for mod_proxy_ajp to tell
> Apache to drop connections older than 2 minutes, but we see in the Tomcat
> manager application that the jk-8009 connector keeps keepalive connections
> open for millions of milliseconds:
>
>   K 1783874292 ms ? ? 84.18.132.114 ? ?

Can you see the connections in the output of netstat -an? What is their state there?

> I would like to try configuring the ping_mode parameter, but I do not know
> if this is possible, i.e. whether this parameter exists only for mod_jk.
> Here is an example of the configuration we added to httpd.conf for
> mod_proxy_ajp:
>
>   ProxyPass /manager ajp://localhost:8009/manager max=10 retry=10 timeout=30 ttl=120
>   ProxyPassReverse /manager ajp://localhost:8009/manager

Look for ping and ttl on http://httpd.apache.org/docs/2.2/mod/mod_proxy.html if using httpd 2.2, or http://httpd.apache.org/docs/2.4/mod/mod_proxy.html if using httpd 2.4. Note that for 2.4 there was a connection closing bug which was fixed very recently in 2.4.3.

Regards,

Rainer
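[Editorial aside: Rainer's pointer to ping and ttl might look roughly like this in httpd.conf. This is a hypothetical sketch, not a configuration from the thread; mod_proxy's ping= parameter makes the AJP backend answer a CPING/CPONG check before each request is forwarded, and ttl= caps how long idle connections may live — all values here are illustrative.]

```apache
# Hypothetical sketch: add ping= alongside the ttl= already in use.
ProxyPass /manager ajp://localhost:8009/manager max=10 retry=10 timeout=30 ttl=120 ping=5
ProxyPassReverse /manager ajp://localhost:8009/manager
```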
Re: Too many connections in keepalive state in jk threadpool
Hallo Herr Beier,

On 02.03.2012 11:19, Beier Michael wrote:
> Hi all, we're running Tomcat 7.0.23 on Sun JDK 1.6.0_29, connected via AJP
> to httpd 2.2.21 using mod_jk 1.2.32. I observed the behavior that Tomcat
> keeps threads in its AJP pool in keepalive state, regardless of which
> timeouts (connectionTimeout and keepAliveTimeout) are configured in
> Tomcat. I tested three connector configurations, and with all of them I
> see connections in the Tomcat server status where the Time value amounts
> to several million milliseconds, which is more than configured in
> connectionTimeout/keepAliveTimeout. This results in 60-80 percent of the
> thread pool being in keepalive state.
>
> 1) <Connector port="8309" protocol="AJP/1.3" maxThreads="200" redirectPort="8343" tomcatAuthentication="false" keepAliveTimeout="300000" connectionTimeout="300000" />
> 2) <Connector port="8309" protocol="AJP/1.3" maxThreads="200" redirectPort="8343" tomcatAuthentication="false" keepAliveTimeout="300000" />
> 3) <Connector port="8309" protocol="AJP/1.3" maxThreads="200" redirectPort="8343" tomcatAuthentication="false" />
>
> In mod_jk, connection_pool_timeout is set to the same value as
> connectionTimeout (only in seconds, not milliseconds). I verified via JMX
> that the values are set correctly. How can I avoid having so many threads
> in keepalive state? I don't have any idea at the moment and can't see an
> error in my configuration.

Educated guess: you have an interval-based cping/cpong connection check configured for mod_jk. Any cping will wake up the thread waiting for data on the connection and reset the timeouts. But a cping is immediately answered by a cpong and does not update the last-request time. That would explain why your connections never time out even though the Manager shows constantly increasing times since the last request. Usually that feature is activated for mod_jk using JkWatchdogInterval in combination with ping_mode "I" or "A".
In case you are unsure about the effects of the various jk configuration options, you might post them here (remove sensitive data before posting). I'd say the current behaviour is a bit problematic, but I don't see an easy improvement. So if your focus is on keeping the number of idle connections low, you would need to switch off interval cpings. Cpings before requests and after opening connections are fine (they improve stability and reduce the likelihood of race conditions).

HTH

Rainer Jung
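[Editorial aside: Rainer's suggestion — keep the connect and prepost cpings but drop interval cpings so idle connections can actually time out — might translate into workers.properties roughly as below. This is a hypothetical sketch; the worker name and values are invented.]

```ini
# ping_mode=CP: cping after Connect and before each request (Prepost),
# but no "I" (interval) pings, so idle connections are not kept awake
# and can be closed by connection_pool_timeout / Tomcat's keepAliveTimeout.
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8309
worker.tomcat1.ping_mode=CP
worker.tomcat1.ping_timeout=10000
# In seconds (unlike Tomcat's millisecond timeouts); kept in sync with
# a keepAliveTimeout of 300000 ms on the Tomcat side.
worker.tomcat1.connection_pool_timeout=300
```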
Re: Too many connections in keepalive state in jk threadpool
Beier Michael wrote:
> Hi all, we're running Tomcat 7.0.23 on Sun JDK 1.6.0_29, connected via AJP
> to httpd 2.2.21 using mod_jk 1.2.32. I observed the behavior that Tomcat
> keeps threads in its AJP pool in keepalive state, regardless of which
> timeouts (connectionTimeout and keepAliveTimeout) are configured in
> Tomcat. I tested three connector configurations, and with all of them I
> see connections in the Tomcat server status where the Time value amounts
> to several million milliseconds, which is more than configured in
> connectionTimeout/keepAliveTimeout. This results in 60-80 percent of the
> thread pool being in keepalive state.
>
> 1) <Connector port="8309" protocol="AJP/1.3" maxThreads="200" redirectPort="8343" tomcatAuthentication="false" keepAliveTimeout="300000" connectionTimeout="300000" />
> 2) <Connector port="8309" protocol="AJP/1.3" maxThreads="200" redirectPort="8343" tomcatAuthentication="false" keepAliveTimeout="300000" />
> 3) <Connector port="8309" protocol="AJP/1.3" maxThreads="200" redirectPort="8343" tomcatAuthentication="false" />
>
> In mod_jk, connection_pool_timeout is set to the same value as
> connectionTimeout (only in seconds, not milliseconds). I verified via JMX
> that the values are set correctly. How can I avoid having so many threads
> in keepalive state? I don't have any idea at the moment and can't see an
> error in my configuration.

Before discussing this, I find it useful to review the basics, such as:
http://en.wikipedia.org/wiki/HTTP_persistent_connection and
http://tomcat.apache.org/connectors-doc/generic_howto/timeouts.html

In other words, at the level of your front-end webserver (which I suppose you have, since you are talking about mod_jk and AJP), do you really need a long KeepAliveTimeout? (And similarly at the level of your Tomcat connectors above.) As per the documentation:

connectionTimeout: "The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented. The default value is 60000 (i.e. 60 seconds)."
keepAliveTimeout: "The number of milliseconds this Connector will wait for another AJP request before closing the connection. The default value is to use the value that has been set for the connectionTimeout attribute."

In other words:
- connectionTimeout defaults to 60 seconds;
- if you do not specify either one of them, they both default to 60 seconds;
- if you specify connectionTimeout but not keepAliveTimeout, then keepAliveTimeout defaults to the same value as connectionTimeout;
- your value above for keepAliveTimeout (300000) means 5 minutes.

Do you really want one Tomcat thread to wait for 5 minutes doing nothing, just in case the browser decides to send another request on the same connection? And do you really want, when a browser creates its initial TCP connection to your webserver, to give it 60 seconds (or 5 minutes!) before it even starts sending its HTTP request on that connection?
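[Editorial aside: to make the rhetorical point concrete, a connector with explicitly short timeouts might look like the sketch below. This is hypothetical; 10000 ms (10 s) is an illustrative value, not a recommendation from the thread — the right number depends on your traffic pattern.]

```xml
<!-- Hypothetical sketch: explicit short timeouts so AJP threads are not
     parked for minutes waiting on idle connections. -->
<Connector port="8309" protocol="AJP/1.3" maxThreads="200"
           redirectPort="8343" tomcatAuthentication="false"
           connectionTimeout="10000" keepAliveTimeout="10000" />
```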