Re: Fwd: File upload fails after upgrade to 7.0.59
On Mon, Mar 2, 2015 at 6:20 PM, Rainer Jung rainer.j...@kippdata.de wrote:

On 02.03.2015 at 13:34, Umesh Sehgal wrote:

On Mon, Mar 2, 2015 at 5:25 PM, Umesh Sehgal umesh.seh...@gmail.com wrote:

On Mon, Mar 2, 2015 at 3:52 PM, Rainer Jung rainer.j...@kippdata.de wrote:

On 02.03.2015 at 11:02, Umesh Sehgal wrote:

Thanks for the quick reply. I tried using maxSwallowSize with an increased value, but to no effect. The maximum size that I have been able to upload is ~16 KB. I also see that maxSwallowSize was introduced in update 55, but the behavior I'm seeing starts from update 50. Is there any other parameter involved? Is there any logging that can be turned on in Tomcat to help debug?

Please do not top post. For the rest see below.

On Mon, Mar 2, 2015 at 2:32 PM, Rainer Jung rainer.j...@kippdata.de wrote:

On 02.03.2015 at 09:34, Umesh Sehgal wrote:

Hi,

We recently upgraded our application's Tomcat from 7.0.30 to 7.0.59. After the upgrade the file upload feature is broken. I have been able to nail it down to the point that the problem manifests from 7.0.50 onwards. Here is the exception that I see in the logs:

    Caused by: java.net.SocketException: Connection reset by peer: socket write error
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(Unknown Source)
        at java.net.SocketOutputStream.write(Unknown Source)
        at sun.security.ssl.OutputRecord.writeBuffer(Unknown Source)
        at sun.security.ssl.OutputRecord.write(Unknown Source)

Also, I notice that the problem doesn't happen with a 2 KB file, but it does with a 2 MB file. I don't see anything obvious in the 7.0.50 changelog which could explain this behavior. Can someone please provide a pointer as to what could be causing this?
https://bz.apache.org/bugzilla/show_bug.cgi?id=57617

Fixed for the next release, 7.0.60, in http://svn.apache.org/r1659295. The original change can be found by looking for maxSwallowSize in the changelog. Could it be:

    If a request that includes an Expect: 100-continue header receives anything
    other than a 2xx response, close the connection. This protects against
    misbehaving clients that may not send the request body in that case and
    send the next request instead. (markt)

? It was changed in 7.0.49, but 49 was not released, so 50 was the first version with this change.

Regards,

Rainer

I did see this in the changelog, but in the captured traffic I don't see any Expect: 100-continue header in the requests. Is there any other way I can confirm this on the Tomcat side?

Thanks,
Umesh

Can you please point me to the SVN change for the Expect: 100-continue changelog entry quoted above?

http://svn.apache.org/r1540689

Hi Rainer,

Thanks, I think the problem is indeed caused by this change. I downloaded the Tomcat source, removed the above change from AbstractHttp11Processor and deployed the updated jar. The file upload didn't work right away, but at least now maxSwallowSize is honored and I can upload files of the configured size. I did the above work to confirm the cause, but of course I don't want to ship modified code. Can you please suggest what could be done in this case? Thanks again.

---
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
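For reference, maxSwallowSize is an attribute of the HTTP Connector in server.xml (available from 7.0.55 per the thread). A minimal illustrative sketch; the port, protocol, and size value here are examples, not a recommendation:

```xml
<!-- Illustrative only: let Tomcat swallow up to 10 MB of a remaining
     request body before closing the connection. The value is in bytes;
     the default is 2 MB (2097152). -->
<Connector port="8080" protocol="HTTP/1.1"
           maxSwallowSize="10485760" />
```

Raising it can hide the underlying problem (the webapp returning a non-2xx status mid-upload), so it is a workaround rather than a fix.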
Re: Fwd: File upload fails after upgrade to 7.0.59
On 03.03.2015 at 13:45, Umesh Sehgal wrote:

[earlier quoted messages snipped; see the previous message in this thread]

Hi Rainer,

Thanks, I think the problem is indeed caused by this change. I downloaded the Tomcat source, removed the above change from AbstractHttp11Processor and deployed the updated jar. The file upload didn't work right away, but at least now maxSwallowSize is honored and I can upload files of the configured size. I did the above work to confirm the cause, but of course I don't want to ship modified code. Can you please suggest what could be done in this case?

OK, good to know. I'd say now it would be good to find out why your webapp sends a non-2xx response code, and which one it is. Since you already succeeded in building Tomcat, simply add a custom log or System.out.println() statement printing response.getStatus() where the change in r1540689 was added, and tell us what it is for the failing uploads.
Regards,

Rainer
Re: Changing Tomcat's SSL ciphers
Eric,

On 2.3.2015 23:45, Eric wrote:

I am trying to change the ciphers that my Tomcat 7 server supports. I am using the APR connector. Here's the connector information in server.xml, with the line saying which ciphers to support:

    <Connector port="8443" executor="edgeExecutor" maxHttpHeaderSize="32768"
               enableLookups="false" disableUploadTimeout="true"
               connectionTimeout="3000" socketBuffer="122880"
               maxKeepAliveRequests="1" scheme="https" secure="true"
               SSLProtocol="TLSv1" SSLEnabled="true"
               SSLCertificateFile="/etc/tomcat/star_mydomain_com.crt"
               SSLCertificateKeyFile="/etc/tomcat/star_mydomain_com.key"
               SSLCACertificateFile="/etc/tomcat/DigiCertCA.crt" />
    SSLCipherSuite="ECDHE-RSA-AES128-GCM-SHA256"
    ...

apache-tomcat-7.0.32-ak.9.x86_64 (apparently our own custom RPM of Tomcat; could it be that an option was turned off that prevents changing the SSL cipher? How would I check?)

ECDHE-RSA-AES128-GCM-SHA256 is a TLSv1.2 cipher suite [1]. In order to utilize TLSv1.2, you need to:

1. Use tcnative 1.1.32 or later.
2. Use a Tomcat version that supports it (8.0.15+, 7.0.57+).
3. Set SSLProtocol="TLSv1+TLSv1.1+TLSv1.2".
4. Put SSLCipherSuite inside the Connector tag (you put it outside).

-Ognjen

[1] https://www.openssl.org/docs/apps/ciphers.html#TLS-v1.2-cipher-suites
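Putting Ognjen's points 3 and 4 together, a hedged sketch of the corrected Connector element. Only the SSL-related attributes are shown; the certificate paths and cipher string are taken from Eric's original configuration, and the rest of his attributes would stay as they were:

```xml
<!-- Illustrative sketch: SSLCipherSuite must be an attribute *inside*
     the Connector element, and TLSv1.2 must be enabled for this
     ECDHE-GCM suite to be usable. -->
<Connector port="8443" scheme="https" secure="true" SSLEnabled="true"
           SSLProtocol="TLSv1+TLSv1.1+TLSv1.2"
           SSLCipherSuite="ECDHE-RSA-AES128-GCM-SHA256"
           SSLCertificateFile="/etc/tomcat/star_mydomain_com.crt"
           SSLCertificateKeyFile="/etc/tomcat/star_mydomain_com.key"
           SSLCACertificateFile="/etc/tomcat/DigiCertCA.crt" />
```

In the original posting the attribute appeared after the closing "/>", where the XML parser silently ignores it.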
Jersey loaded on startup even if declared in tomcat.util.scan.DefaultJarScanner.jarsToSkip on Tomcat 7.0.52+
Hi,

One of the applications we are managing uses Jersey for its REST interface. This application initializes the REST service with Tapestry, so we don't want it to be initialized at startup by Tomcat, and we set the following property in catalina.properties:

    tomcat.util.scan.DefaultJarScanner.jarsToSkip=(... jar list provided by stock tomcat ...)*jersey*.jar

It works as expected in Tomcat 7.0.50, but it fails starting at version 7.0.52 (I could not test with 7.0.51 as it's not available anymore), and it still fails at version 7.0.59. If I add org.apache.tomcat.util.level = FINEST in logging.properties, I can see that the jars are skipped as expected, but Jersey is eventually initialized:

    ...
    FINER: Not scanning JAR [file:/usr/local/apache-tomcat-7.0.59/webapps/ROOT/WEB-INF/lib/jersey-client-1.12.jar] from classpath
    Mar 03, 2015 6:46:15 PM org.apache.tomcat.util.scan.StandardJarScanner scan
    FINER: Not scanning JAR [file:/usr/local/apache-tomcat-7.0.59/webapps/ROOT/WEB-INF/lib/jersey-core-1.12.jar] from classpath
    FINE: preRegister StandardEngine[Catalina].StandardHost[localhost].StandardContext[].StandardWrapper[com.....RestModule$RestApplication] Catalina:j2eeType=Servlet,name=com....RestModule$RestApplication,WebModule=//localhost/,J2EEApplication=none,J2EEServer=none
    Mar 03, 2015 6:46:16 PM com.sun.jersey.server.impl.container.servlet.JerseyServletContainerInitializer addServletWithApplication
    INFO: Registering the Jersey servlet application, named com....RestModule$RestApplication, at the servlet mapping, /rest/*, with the Application class of the same name
    Mar 03, 2015 6:46:16 PM org.apache.tomcat.util.modeler.Registry registerComponent
    ...
Here is the list of the Jersey-related jars we are loading (which are located in WEB-INF/lib):

    jersey-multipart-1.12.jar
    jersey-servlet-1.12.jar
    tapestry-jersey-1.0.3.jar
    jersey-core-1.12.jar
    jersey-client-1.12.jar
    jersey-server-1.12.jar
    odata4j-jersey-0.7.0.jar
    jersey-json-1.12.jar

I have tried many things, but nothing helped. Any clue as to what is going on? Any lead will be highly appreciated!

Best regards,

Matt P.
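For reference, jarsToSkip takes a single comma-separated list of file-name patterns. A hedged, illustrative sketch of the catalina.properties line; the stock list is abbreviated to a few entries here, and the point is only the comma-separated syntax and the backslash line continuation that java.util.Properties files support:

```properties
# Illustrative only: the stock list shipped in catalina.properties is much
# longer. Patterns are comma-separated; the custom pattern is appended as
# its own list entry, not concatenated onto the last stock entry.
tomcat.util.scan.DefaultJarScanner.jarsToSkip=\
bootstrap.jar,commons-daemon.jar,tomcat-juli.jar,\
*jersey*.jar
```

Note that, as the log excerpt shows, skipping a JAR from scanning is not the same as preventing a ServletContainerInitializer it provides from running, which may be why Jersey still initializes.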
tomcat 8 async servlet net::ERR_INCOMPLETE_CHUNKED_ENCODING
I am trying to use Atmosphere for my web app running under Tomcat 8.0.14, for which I have to use async servlets and filters. In Chrome I get the error net::ERR_INCOMPLETE_CHUNKED_ENCODING when I call the async web resource. When I deploy this locally on the development machine I don't get this error, but when it is deployed on the remote server I do. I assume this has something to do with the async timeout. Is there any configuration for the async timeout for the Tomcat server?

I posted this in the Atmosphere forums and they said it's a Tomcat issue and not an Atmosphere issue. Here is the link to my post in Atmosphere: https://groups.google.com/forum/#!topic/atmosphere-framework/aMRYy-E9xas

Here is a similar post about async on Stack Overflow: http://stackoverflow.com/questions/8431547/tomcat-7-async-processing-failing-only-one-request-processed-simultanously

Please advise.

--
View this message in context: http://tomcat.10.x6.nabble.com/tomat-8-async-servlet-net-ERR-INCOMPLETE-CHUNKED-ENCODING-tp5031064.html
Sent from the Tomcat - User mailing list archive at Nabble.com.
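On the async-timeout question: Tomcat's HTTP Connector has an asyncTimeout attribute (the default timeout for asynchronous requests, in milliseconds, 30000 by default), and a servlet can also override it per request with the standard AsyncContext.setTimeout() call. A hedged server.xml sketch; the port and value are examples only:

```xml
<!-- Illustrative: raise the default timeout for asynchronous requests
     on this connector to 60 seconds (value in milliseconds). -->
<Connector port="8080" protocol="HTTP/1.1"
           asyncTimeout="60000" />
```

Whether the timeout is actually the cause here is a separate question; the error only appearing on the remote server could also point at a proxy or network element closing the chunked response early.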
Re: tomcat 8 async servlet net::ERR_INCOMPLETE_CHUNKED_ENCODING
Fachhoch,

On 3/3/15 1:30 PM, fachhoch wrote:

[quoted original message snipped; see earlier in this thread]

[also] this is the link on Stack Overflow: http://stackoverflow.com/questions/26863972/how-to-prevent-neterr-incomplete-chunked-encoding-when-using-html5-server-even

Can you try again with 8.0.20? There have been lots of good fixes lately.
-chris
Re: tomcat 8 async servlet net::ERR_INCOMPLETE_CHUNKED_ENCODING
this is the link on Stack Overflow: http://stackoverflow.com/questions/26863972/how-to-prevent-neterr-incomplete-chunked-encoding-when-using-html5-server-even

--
View this message in context: http://tomcat.10.x6.nabble.com/tomat-8-async-servlet-net-ERR-INCOMPLETE-CHUNKED-ENCODING-tp5031064p5031065.html
Sent from the Tomcat - User mailing list archive at Nabble.com.
RE: mod_jk causing stuck Apache processes
any update on the behavior of the patched nodes? Did it behave better, i.e. did the problem reoccur on other nodes but not on the patched ones, or were all nodes mostly without problems during the last 2 days? Any strange things on the patched ones? I'd like to add the patch for 1.2.41, but your feedback would be very valuable. Thanks!

Rainer

It's been quiet, no stuck processes anywhere in the farm. The patched jk is working fine, no issues found so far. I'll continue to monitor and let you know.

Just to make sure: "no stuck processes anywhere in the farm" means we can't be sure that the patch helps, because the unpatched nodes didn't have a problem either during that time. So we have to wait until the problem happens a few more times, so we can see whether the patch helps, or until the patched nodes have run long enough that at least we don't expect a negative impact from the patch.

Rainer,

The patch has not completely fixed the issue; I had a host today exhibit the behavior. It does seem to behave better: stuck processes don't hang around as long, though there are still some 300-second ones. It also seemed to recover by itself, though it took about 40 minutes. It also had more idle processes on average, though it did hit 90% in the W state for a few minutes.

I have also converted 4 (different) hosts to the event MPM and was tweaking the MPM settings all last week. I think they are in a good state now, and so far I haven't seen the issue on them. I now have 8 hosts on stock mod_jk/prefork, 4 on your patched mod_jk/prefork, and 4 on stock mod_jk/event.

We are upgrading to httpd 2.4.12 on Friday and I am also gone on vacation all next week. We haven't decided yet if we are switching to event at the same time, but it seems likely. I am willing to continue testing your patch, but I won't be able to resume until the week of the 16th, and it may not be under prefork.

Respectfully,

Jesse
Potential IndexOutOfBounds in AbstractServletInputStream::readLine()?
Hi,

I was wondering if there is an unintentional potential index-out-of-bounds exception in AbstractServletInputStream::readLine()? I was looking at the code in java/org/apache/coyote/http11/upgrade/AbstractServletInputStream, specifically the readLine() function:

    public final int readLine(byte[] b, int off, int len) throws IOException {
        preReadChecks();
        if (len <= 0) {
            return 0;
        }
        int count = 0, c;
        while ((c = readInternal()) != -1) {
            b[off++] = (byte) c;
            count++;
            if (c == '\n' || count == len) {
                break;
            }
        }
        return count > 0 ? count : -1;
    }

It seems that len is partially sanitized, but the offset parameter 'off' is not. In particular, 'off' could be allowed to be outside of 'b', causing an exception while executing the statement b[off++] = (byte) c. The way the inner conditionals are implemented, it seems that even starting with a valid offset, a large value of len can also cause this exception. One could change the loop condition to something like:

    (c = readInternal()) != -1 && 0 <= off && off < b.length

This function seems to be inherited by the concrete classes AprServletInputStream and BioServletInputStream without being overridden. I believe that the implementation of readLine() in javax.servlet.ServletInputStream handles these border cases by returning -1 whenever an access outside of the array is attempted, so it doesn't suffer from this problem.

Is this an issue that needs to be changed, or is it the intended behavior to leave the responsibility of sanitizing the parameters to the caller?

Thanks,
Oswaldo.
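To make the border cases concrete, here is a hedged, self-contained sketch of a bounds-checked variant. This is not Tomcat's actual code: ReadLineSketch and its stub readInternal() are made-up names for illustration, and the up-front range check is one possible fix, not the project's.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: a readLine() that rejects bad (off, len) up front
// instead of failing mid-loop with ArrayIndexOutOfBoundsException.
public class ReadLineSketch {
    private final InputStream in;

    ReadLineSketch(InputStream in) { this.in = in; }

    // Stand-in for the abstract readInternal() in the real class.
    private int readInternal() throws IOException { return in.read(); }

    public int readLine(byte[] b, int off, int len) throws IOException {
        if (len <= 0) {
            return 0;
        }
        // Validate before writing: off must be inside b, and len bytes
        // starting at off must fit in b.
        if (off < 0 || len > b.length - off) {
            throw new IndexOutOfBoundsException();
        }
        int count = 0, c;
        while ((c = readInternal()) != -1) {
            b[off++] = (byte) c;
            count++;
            if (c == '\n' || count == len) {
                break;
            }
        }
        return count > 0 ? count : -1;
    }

    public static void main(String[] args) throws IOException {
        byte[] buf = new byte[16];
        ReadLineSketch r = new ReadLineSketch(
                new ByteArrayInputStream("hi\nthere\n".getBytes()));
        int n = r.readLine(buf, 0, buf.length);
        System.out.println(n); // reads 'h', 'i', '\n' and stops: prints 3
    }
}
```

Checking `len > b.length - off` rather than `off + len > b.length` avoids integer overflow for huge len values, which is the usual idiom for this kind of range check.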