Configuring timeouts for apache + mod_jk + tomcat
Hello,

I'm writing because at last we are about to upgrade the Apache httpd + mod_jk we use in the production environment from version 2.0.52 to version 2.2.14 (on Linux servers), and I'd like your advice about timeout settings.

For some reason I don't know, the system has the following *odd* configuration:

* the Apache Timeout directive is not used (I assume it times out after the default 300 seconds)
* mod_jk is configured with connection_pool_timeout=5 and socket_timeout=5 (and it also has connection_pool_size=1)
* Tomcat has its AJP connector configured to time out after 30 milliseconds

I believe the mod_jk timeouts are a bit too low, especially compared to the other ones. We are experiencing strange behaviour with our current configuration, and I believe it may be related to these settings: many Tomcat threads get stuck while flushing the output buffer (in particular when the system is under heavy load), as if there were no thread on the Apache side to receive their output.

Do you think it is reasonable to have timeout values that differ so much between mod_jk and Tomcat? I'd rather use a connection_pool_timeout that is comparable to the Apache timeout, especially because we need to get rid of anything that may cause our workers to behave in such a weird manner.

Thank you for any help you can give.

Regards,
Alessandro

PS: this post is related to this thread:
http://mail-archives.apache.org/mod_mbox/tomcat-users/200911.mbox/%3cce3aa0b60911180337o6469bdb4tcf3a90b44cea4...@mail.gmail.com%3

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
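[Editor's note: the Tomcat connectors "timeouts" how-to recommends keeping these values consistent across the stack. As a sketch only, with illustrative values chosen for this example rather than taken from the thread, a more consistent mod_jk configuration could look like this. Note the unit mismatch that is easy to trip over: connection_pool_timeout is in seconds, while Tomcat's connectionTimeout is in milliseconds.]

```
# workers.properties sketch -- values are illustrative assumptions
# connection_pool_timeout is expressed in SECONDS; idle backend
# connections older than this are closed by the maintenance task
worker.applprod01.connection_pool_timeout=600
# socket_timeout aborts ANY blocked socket operation, so a value of 5
# can kill requests that legitimately take longer; if used at all, it
# should comfortably exceed the slowest expected request
worker.applprod01.socket_timeout=300
worker.applprod01.socket_keepalive=true
```

To pair with this, the Tomcat AJP connector's connectionTimeout would be set to the same duration expressed in milliseconds (600 s = 600000 ms), so both sides agree on when an idle connection may be dropped.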
Re: CSV File Save as dialogue defaults to HTM file
On Mon, Jan 25, 2010 at 11:02 AM, Pid p...@pidster.com wrote:
> On 25/01/2010 09:17, Ran Harpaz wrote:
>> Hello, I'm using Jetspeed 1.6, running on Tomcat. In a portlet I developed, I create a .csv file and print a link to it. The user then needs to right-click on the file and select "save file as". The dialogue that pops up defaults to file type "HTML file", and replaces the .csv extension of the file I link to with .htm. Is there any way to resolve this? I really need to give access to the csv file as-is, and not bother my clients more than necessary.
>
> Are you setting the Content-Type header to text/csv, or are you just generating it with a JSP? The latter will automatically set text/html as the content type.

You may also want to change the Content-Disposition header in order to make your server prompt the user with the "save as" dialog. For example:

response.setHeader("Content-Disposition", "attachment; filename=\"" + defaultCsvFilename + "\"");

Regards,
Alessandro
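[Editor's note: putting both suggestions together, here is a minimal, self-contained sketch. The servlet API calls appear only in comments so the header value can be inspected on its own; the class name, method name, and `report.csv` filename are illustrative, not from the thread.]

```java
// Sketch: build the headers a CSV download response needs so the
// browser offers a "save as" dialog with the .csv name preserved,
// instead of treating the response as text/html.
public class CsvHeaders {

    // Builds the Content-Disposition value that triggers a download
    // prompt rather than inline rendering.
    public static String contentDisposition(String filename) {
        return "attachment; filename=\"" + filename + "\"";
    }

    public static void main(String[] args) {
        String filename = "report.csv"; // illustrative name
        // In a servlet or portlet you would write:
        //   response.setContentType("text/csv");
        //   response.setHeader("Content-Disposition",
        //                      contentDisposition(filename));
        System.out.println(contentDisposition(filename));
    }
}
```

Setting Content-Type to text/csv fixes the misidentified file type; Content-Disposition fixes the filename the save dialog proposes.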
Re: Tomcat 6.0.16 + mod_jk 1.2.19 - request threads hanging up
On Sat, Dec 12, 2009 at 4:57 PM, Rainer Jung rainer.j...@kippdata.de wrote:
> On 12.12.2009 13:26, Alessandro Bahgat wrote:
>> We actually found a lot of "unrecoverable error 200, request failed" error messages in the mod_jk log (roughly around 1k per hour), so I'm starting to wonder if there's any issue with the firewalls and the network infrastructure. What would you think about that?
>>
>> [Thu Dec 03 16:58:52 2009][31539:42688] [info] service::jk_lb_worker.c (873): unrecoverable error 200, request failed. Client failed in the middle of request, we can't recover to another instance.
>> [Thu Dec 03 16:58:52 2009][31539:42688] [info] jk_handler::mod_jk.c (2056): Aborting connection for worker=applprod
>
> The above two lines belong together; the next lines are something different. The pair [pid:tid] changed.
>
> The above lines are logged whenever there was a problem sending back the response from Apache to the client/browser. It may happen if a user in the meantime clicked on something else or pressed the reload button. If you get it a lot, maybe your app is too slow, your users are too nervous, or indeed there might be a network problem. Occasional occurrences are normal.

Well, I saw that error in yesterday's logs about 25k times (roughly 1% of the total requests), and the website isn't particularly slow these days (it takes less than 1s to render the homepage entirely). When it happens, we have some connections stuck in the ESTABLISHED state with non-zero values in the Send-Q, and Java's CPU consumption increases significantly.

We'll upgrade Apache and mod_jk and see if anything changes.

Thank you for your help,
Alessandro
Re: Tomcat 6.0.16 + mod_jk 1.2.19 - request threads hanging up
On Fri, Dec 11, 2009 at 4:27 PM, Rainer Jung rainer.j...@kippdata.de wrote:
> On 09.12.2009 12:18, Pid wrote:
>> It could be, but while you're upgrading you might consider upgrading HTTPD to the best available version too. 2.0.52 release date: 1 Oct 2004. (That's 35 internet years ago.)
>
> ... and mod_jk 1.2.19 dates back to September 2006, so according to your math 21 internet years ago. There were so many bugs fixed since then that you'll hardly find anyone who really tries to help debugging those old versions. At least use a recent mod_jk and look at http://tomcat.apache.org/connectors-doc/generic_howto/timeouts.html for some important hints about configuration.

Thank you both for your advice. I'm pushing towards upgrading the Apache + mod_jk stack as well.

Our last tests with the latest Tomcat and mod_jk still showed a lot of CPU time being spent in sendbb methods, with some threads being stuck in that method for a long time.

We actually found a lot of "unrecoverable error 200, request failed" error messages in the mod_jk log (roughly around 1k per hour), so I'm starting to wonder if there's any issue with the firewalls and the network infrastructure. What would you think about that?

[Thu Dec 03 16:58:52 2009][31539:42688] [info] service::jk_lb_worker.c (873): unrecoverable error 200, request failed. Client failed in the middle of request, we can't recover to another instance.
[Thu Dec 03 16:58:52 2009][31539:42688] [info] jk_handler::mod_jk.c (2056): Aborting connection for worker=applprod
[Thu Dec 03 16:58:53 2009][31612:42688] [info] jk_open_socket::jk_connect.c (450): connect to XXX.XXX.XXX.XXX:8009 failed with errno=111
[Thu Dec 03 16:58:53 2009][31612:42688] [info] ajp_connect_to_endpoint::jk_ajp_common.c (872): Failed opening socket to (XXX.XXX.XXX.XXX:8009) with (errno=111)
[Thu Dec 03 16:58:53 2009][31612:42688] [info] ajp_send_request::jk_ajp_common.c (1247): (applprod05) error connecting to the backend server (errno=111)
[Thu Dec 03 16:58:53 2009][31612:42688] [info] ajp_service::jk_ajp_common.c (1867): (applprod05) sending request to tomcat failed, recoverable operation attempt=1

Thank you for your help,
Alessandro
Re: Tomcat 6.0.16 + mod_jk 1.2.19 - request threads hanging up
Nishant,

that didn't quite work, actually.

After some struggle with our outsourcers, we added a new machine running Tomcat 6.0.20 *without tcnative* (they misplaced the .so files). That eventually resulted in worse behaviour: within a few minutes from startup, all the TP-Processor threads on that machine get stuck in the java.net.SocketInputStream.socketRead0 method.

We are about to test the same server using:
- Tomcat 6.0.20 + tcnative 1.1.18
- Tomcat 6.0.16 + tcnative 1.1.18
(Our original configuration runs on Tomcat 6.0.16 + tcnative 1.1.10.)

In the meanwhile, we found these lines in the mod_jk log:

[Thu Dec 03 16:58:52 2009][31539:42688] [info] service::jk_lb_worker.c (873): unrecoverable error 200, request failed. Client failed in the middle of request, we can't recover to another instance.

Could that be related to a firewall not working properly between Apache and Tomcat?

Regards,
Alessandro

On Mon, Dec 7, 2009 at 8:20 AM, Nishant Hadole nishant.had...@siemens.com wrote:
> Dear Ale,
> I am interested in the solution of the issue mentioned, as we are having a similar one. Did the upgrade resolve the issue?
>
> Alessandro Bahgat wrote:
>> On Wed, Nov 18, 2009 at 3:26 PM, Caldarale, Charles R chuck.caldar...@unisys.com wrote:
>>> From: Alessandro Bahgat [mailto:ale.bah...@gmail.com]
>>> Subject: Tomcat 6.0.16 + mod_jk 1.2.19 - request threads hanging up
>>>
>>>> Our Tomcat (6.0.16) servers have many ajp threads that are stuck executing the native sendbb method of the class org.apache.tomcat.jni.Socket.
>>>
>>> Try upgrading to the current Tomcat version (6.0.20), or at least using the latest version of tcnative (1.1.16). The symptoms you describe have been observed in older versions. Alternatively, turn off APR by removing or renaming the .so file in Tomcat's bin directory.
>>>
>>> - Chuck
>>
>> Thank you, I will do that. I just managed to persuade our customer to upgrade tcnative (now 1.1.10) on one of the production systems to see if it makes any difference. We'll probably plan a Tomcat upgrade in the next few weeks as well.
>>
>> - Ale
Tomcat 6.0.16 + mod_jk 1.2.19 - request threads hanging up
Hi everyone,

I'm having some issues with an Apache + Tomcat setup behaving strangely.

Our Tomcat (6.0.16) servers have many ajp threads that are stuck executing the native sendbb method of the class org.apache.tomcat.jni.Socket. [You can find an example stack trace at the end of this message.]

Apparently, those threads have finished processing the original requests (which may have taken a while), and they are trying to flush the output buffer. This happens frequently when our servers are under heavy load: Tomcat keeps executing sendbb for a long time, and we usually find many threads stuck in that state. Every time this happens, the Tomcat process starts using a lot of CPU time, and it goes on like that for a few hours, unless it crashes first.

The traffic is routed to the Tomcat servers (belonging to a cluster of 4 nodes) by two Apache web servers (2.0.52) with mod_jk (1.2.19).

We're having a hard time figuring out the cause of this behavior: it may depend on the interaction between mod_jk and Tomcat, but I couldn't find any definitive explanation by looking at the documentation and this list's archives. Any advice will be welcome :)

Below you'll find the configuration properties of mod_jk (pay attention to the timeouts: are they too low?) and an example stack trace for one of the stuck threads.

Thank you all.
Regards,
Alessandro Bahgat

* Our configuration is:
OS: Red Hat Enterprise Linux AS release 4 (Nahant Update 4)
JVM: Sun 1.6.0_10 32 bit
Apache: 2.0.52
mod_jk: 1.2.19
Tomcat: 6.0.16

* mod_jk properties:
# DefineNode1 (applprod01)
worker.applprod01.port=8009
worker.applprod01.host=###.###.###.###
worker.applprod01.type=ajp13
worker.applprod01.lbfactor=1
worker.applprod01.connection_pool_size=1
worker.applprod01.socket_keepalive=true
worker.applprod01.socket_timeout=5
worker.applprod01.connection_pool_timeout=5

* Sample stack trace for one of the hung threads:
"ajp-8009-300" - Thread t...@962
java.lang.Thread.State: RUNNABLE
at org.apache.tomcat.jni.Socket.sendbb(Native Method)
at org.apache.coyote.ajp.AjpAprProcessor.flush(AjpAprProcessor.java:1181)
at org.apache.coyote.ajp.AjpAprProcessor$SocketOutputBuffer.doWrite(AjpAprProcessor.java:1268)
at org.apache.coyote.Response.doWrite(Response.java:560)
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:349)
at org.apache.tomcat.util.buf.IntermediateOutputStream.write(C2BConverter.java:242)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:263)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:106)
- locked org.apache.tomcat.util.buf.writeconver...@1d7d7b0
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:190)
at org.apache.tomcat.util.buf.WriteConvertor.write(C2BConverter.java:196)
at org.apache.tomcat.util.buf.C2BConverter.convert(C2BConverter.java:81)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:438)
at org.apache.catalina.connector.CoyoteWriter.write(CoyoteWriter.java:143)
at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:277)
at java.io.PrintWriter.write(PrintWriter.java:382)
- locked org.apache.jasper.runtime.jspwriteri...@1919913
at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:119)
at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:326)
at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:342)
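[Editor's note: the mod_jk properties above show connection_pool_timeout=5 (seconds) with nothing pairing it on the Tomcat side. The knob that is meant to match it is the AJP connector's connectionTimeout attribute in server.xml, which is expressed in milliseconds. A sketch with an illustrative value, assuming connection_pool_timeout were raised to 600 seconds:]

```
<!-- server.xml sketch: 600000 ms here pairs with
     connection_pool_timeout=600 (seconds) on the mod_jk side,
     so both ends agree on when an idle AJP connection may be
     closed; a mismatch lets one side drop a connection the
     other still considers alive -->
<Connector port="8009" protocol="AJP/1.3"
           connectionTimeout="600000" />
```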
Re: Tomcat 6.0.16 + mod_jk 1.2.19 - request threads hanging up
On Wed, Nov 18, 2009 at 3:26 PM, Caldarale, Charles R chuck.caldar...@unisys.com wrote:
>> From: Alessandro Bahgat [mailto:ale.bah...@gmail.com]
>> Subject: Tomcat 6.0.16 + mod_jk 1.2.19 - request threads hanging up
>>
>> Our Tomcat (6.0.16) servers have many ajp threads that are stuck executing the native sendbb method of the class org.apache.tomcat.jni.Socket.
>
> Try upgrading to the current Tomcat version (6.0.20), or at least using the latest version of tcnative (1.1.16). The symptoms you describe have been observed in older versions. Alternatively, turn off APR by removing or renaming the .so file in Tomcat's bin directory.
>
> - Chuck

Thank you, I will do that. I just managed to persuade our customer to upgrade tcnative (now 1.1.10) on one of the production systems to see if it makes any difference. We'll probably plan a Tomcat upgrade in the next few weeks as well.

- Ale