Tomcat 9.0.36 - JDK 13/14

2020-06-25 Thread Kiran Badi
Hi All,

I wanted to check whether Tomcat 9.0.36 supports OpenJDK 13/14.

I created a simple Spring Boot war file and compiled/built it with OpenJDK
13/14. After running maven install, I deployed the war file from the target
directory to Tomcat's webapps directory using the Tomcat Manager. It did not
work and gave me 404 responses with both 13 and 14. There is no error or
exception anywhere in the logs; the Catalina log just says the war file was
deployed.

Then I compiled the same Spring Boot app with JDK 8 and deployed it to
Tomcat, and it works fine. I am able to call my endpoints with no issues.

I am having a hard time building an Angular/Spring Boot war file and
deploying it on Tomcat 9.0.x with OpenJDK 13, so I thought this might be a
good place to start.

I used the pom file below.


<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.1.RELEASE</version>
    </parent>

    <groupId>com.kiran</groupId>
    <artifactId>springwar</artifactId>
    <version>1.0.2-SNAPSHOT</version>
    <packaging>war</packaging>
    <name>springwar</name>
    <description>Sample project to deploy war to tomcat</description>

    <properties>
        <java.version>14</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>
</project>
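
For what it's worth, a war deployed to a standalone Tomcat also needs an
application class that extends SpringBootServletInitializer so the container
can bootstrap the Spring context; a minimal sketch (package and class names
are illustrative, not taken from the actual project):

package com.kiran.springwar;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;

@SpringBootApplication
public class SpringwarApplication extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        // Tells the servlet container which Spring Boot application to
        // bootstrap when the war is deployed, instead of relying on the
        // embedded launcher used for a standalone jar.
        return builder.sources(SpringwarApplication.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(SpringwarApplication.class, args);
    }
}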






Re: Tomcat 9.0.30 seems to not reset Http11InputBuffer properly in certain scenarios? Responses change for same requests

2020-06-25 Thread Fabian Morgan
Mark,

Thanks for your explanation.

Fabian

On Thu, Jun 25, 2020 at 3:29 PM Mark Thomas  wrote:

> Fabian,
>
> Tomcat's behaviour is as expected and as per spec.
>
> The content-length header is used to determine the end of the request
> body. HTTP/1.1 allows pipelining requests. Whatever bytes on the wire
> are seen next will be treated as the next request.
>
> Mark

Question around catalina.policy change back with 9.0.33, etc.

2020-06-25 Thread jonmcalexander
I have a developer who is asking WHY the following policy entries were set to
read only. The change log doesn't explain why.

// The cookie code needs these.
permission java.util.PropertyPermission
 "org.apache.catalina.STRICT_SERVLET_COMPLIANCE", "read";
permission java.util.PropertyPermission
 "org.apache.tomcat.util.http.ServerCookie.STRICT_NAMING", "read";
permission java.util.PropertyPermission
 "org.apache.tomcat.util.http.ServerCookie.FWD_SLASH_IS_SEPARATOR", "read";

Any information I can share with her?

Thanks,

Dream * Excel * Explore * Inspire
Jon McAlexander
Asst Vice President

Middleware Product Engineering
Enterprise CIO | Platform Services | Middleware | Infrastructure Solutions

8080 Cobblestone Rd | Urbandale, IA 50322
MAC: F4469-010
Tel 515-988-2508 | Cell 515-988-2508

jonmcalexan...@wellsfargo.com


This message may contain confidential and/or privileged information. If you are 
not the addressee or authorized to receive this for the addressee, you must not 
use, copy, disclose, or take any action based on this message or any 
information herein. If you have received this message in error, please advise 
the sender immediately by reply e-mail and delete this message. Thank you for 
your cooperation.



Re: Tomcat 9.0.30 seems to not reset Http11InputBuffer properly in certain scenarios? Responses change for same requests

2020-06-25 Thread Mark Thomas
Fabian,

Tomcat's behaviour is as expected and as per spec.

The content-length header is used to determine the end of the request
body. HTTP/1.1 allows pipelining requests. Whatever bytes on the wire
are seen next will be treated as the next request.
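
To make that concrete, a rough sketch of the second scenario at the raw
socket level (purely illustrative; the host, port, path and the 404/501 pair
come from the report in this thread, the class itself is not from Tomcat or
the original tests):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

public class PipeliningDemo {
    public static void main(String[] args) throws Exception {
        String request = "GET /thisisnotvalid HTTP/1.1\r\n"
                + "Host: localhost:8080\r\n"
                + "Content-Length: 1\r\n"
                + "\r\n"
                + "aaaa";   // 1 byte is read as the body, "aaa" is left unread
        try (Socket socket = new Socket("localhost", 8080)) {
            socket.setSoTimeout(5000);
            OutputStream out = socket.getOutputStream();
            // Send the same request twice on one keep-alive connection. The
            // three leftover body bytes of the first request prefix the second
            // request line, so the server parses "aaaGET ..." as the method.
            out.write(request.getBytes(StandardCharsets.ISO_8859_1));
            out.write(request.getBytes(StandardCharsets.ISO_8859_1));
            out.flush();
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[8192];
            try {
                int n;
                while ((n = in.read(buffer)) > 0) {
                    System.out.write(buffer, 0, n);   // expect a 404 followed by a 501
                }
            } catch (SocketTimeoutException e) {
                // no further response within the timeout; done
            }
            System.out.flush();
        }
    }
}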

Mark



Tomcat 9.0.30 seems to not reset Http11InputBuffer properly in certain scenarios? Responses change for same requests

2020-06-25 Thread Fabian Morgan
Hi --

While testing various scenarios in Tomcat 9.0.30, I’ve found Tomcat returns
different responses when the same request is issued twice in a row.  I have
three such scenarios (all related) to illustrate.  I used Postman to issue
the requests.

First, here is some environment information:

Operating System: Mac OS Mojave 10.14.6

Http Client: Postman 7.24.0

Relevant Automatic/Hidden headers for Postman:

Cache-Control: no-cache

Accept: */*

Accept-Encoding: gzip, deflate, br

Connection: keep-alive

Java version: 1.8.0_221

All of these scenarios are on fresh install of Tomcat 9.0.30 with default
port of 8080.

Note: In each of the following scenarios, the steps must be done fairly
quickly one right after the other with no delay.  Please also stop and
restart Tomcat in between each scenario.

Steps for First Scenario:

   1.

   In Postman, issue PUT request to invalid url, such as
   http://localhost:8080/thisisnotvalid.  Ensure Content-Length header is
   sent with value 12345.  Ensure the request has a request body that is a
   file attached with size >= 26545 bytes.  In Postman, I marked it with
   binary radiobutton.  I receive response with 405 (Method Not Allowed)
   status and HTML in the body.
   2.

   In Postman, issue GET request to http://localhost:8080/thisisnotvalid.
   Ensure Content-Length header is sent with value 12345.  The request must
   NOT have a body (in Postman I marked it with none radiobutton).  I receive
   response with 404 (Not Found) status and HTML in the body.
   3.

   In Postman, issue GET request to http://localhost:8080/thisisnotvalid.
   Ensure Content-Length header is sent with value 12345.  Ensure the request
   has a request body that is a file attached with size >= 26545 bytes (yes on
   a GET request).  In Postman, I marked it with binary radiobutton.  NOTE: I
   receive 400 (Bad Request) response and HTML in the body.  This is NOT
   expected.
   4.

   Issue same request in (3) again, and now I receive response with 404
   (Not Found) status and HTML in the body as expected.  Continuing to issue
   the request again seems to return 404 response as expected hereafter.


Note that after step (3), I see the following exception trace in
catalina.out:

org.apache.coyote.http11.Http11Processor.service Error parsing HTTP request
header

 Note: further occurrences of HTTP request parsing errors will be logged at
DEBUG level.

java.lang.IllegalArgumentException: Invalid character found in method name.
HTTP method names must be tokens
    at org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:415)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:260)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:860)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1598)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)

Steps for Second Scenario:

   1.

   In Postman, issue GET request to invalid url, such as
   http://localhost:8080/thisisnotvalid.  Ensure Content-Length header is
   sent with value 1.  Ensure the request has a request body that is a text
   file attached containing 4 a’s in it as the only content (yes on a GET
   request).  I receive response with 404 (Not Found) status and HTML in the
   body.
   2.

   Issue same request in (1) again, and now the server responds with 501
   (Not Implemented) status and HTML in the body.  This is NOT expected.
   3.

   Issue same request in (1) again, and now it responds again with 404
   error as expected. Continuing to issue the same request will continue to
   alternate server responding with 404 and 501.


Note: The alternating responses don’t occur when Content-Length header is
not present.

Note: The following lines can be seen in localhost_access_log:

0:0:0:0:0:0:0:1 - - [25/Jun/2020:13:51:17 -0700] "GET /thisisnotvalid
HTTP/1.1" 404 723

0:0:0:0:0:0:0:1 - - [25/Jun/2020:13:51:18 -0700] "aaaGET /thisisnotvalid
HTTP/1.1" 501 731

0:0:0:0:0:0:0:1 - - [25/Jun/2020:13:51:21 -0700] "GET /thisisnotvalid
HTTP/1.1" 404 723

0:0:0:0:0:0:0:1 - - [25/Jun/2020:13:51:22 -0700] "aaaGET /thisisnotvalid
HTTP/1.1" 501 731


Steps for Third Scenario:

   1.

   In Postman, issue GET request to invalid url, such as
   http://localhost:8080/thisisnotvalid.  Ensure Content-Length header is
   sent with value 12345.  The request must NOT have a body (in Postman I
   marked it with none radiobutton).  I receive response with 404 (Not Found)
   status and HTML in the body.
   2.

   Issue same request in (1) 

[SECURITY] CVE-2020-11996 Apache Tomcat HTTP/2 Denial of Service

2020-06-25 Thread Mark Thomas
CVE-2020-11996 Apache Tomcat HTTP/2 Denial of Service

Severity: Important

Vendor: The Apache Software Foundation

Versions Affected:
Apache Tomcat 10.0.0-M1 to 10.0.0-M5
Apache Tomcat 9.0.0.M1 to 9.0.35
Apache Tomcat 8.5.0 to 8.5.55

Description:
A specially crafted sequence of HTTP/2 requests could trigger high CPU
usage for several seconds. If a sufficient number of such requests were
made on concurrent HTTP/2 connections, the server could become unresponsive.

Mitigation:
- Upgrade to Apache Tomcat 10.0.0-M6 or later
- Upgrade to Apache Tomcat 9.0.36 or later
- Upgrade to Apache Tomcat 8.5.56 or later

Credit:
This issue was reported publicly via the Apache Tomcat Users mailing
list without reference to the potential for DoS. The DoS risks were
identified by the Apache Tomcat Security Team.

References:
[1] http://tomcat.apache.org/security-10.html
[2] http://tomcat.apache.org/security-9.html
[3] http://tomcat.apache.org/security-8.html

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: File "catalina.out" not being created/populated when using Tomcat 9.0.31 + Ubuntu 20.04, and content goes to the Ubuntu syslog instead?

2020-06-25 Thread Emmanuel Bourg
Le 24/06/2020 à 03:33, Brian a écrit :

> To be honest with you, I'm happy about the catalina.out file finally getting 
> created and I really appreciate your kind help, I really do. But I'm not 
> really happy about having to restart rsyslog before every time I need to 
> restart Tomcat. It is weird, and I guess a lot of users will never imagine 
> that they have to do that and they will not feel very pleased when they 
> realize that the catalina.out file doesn't get created after restarting 
> Tomcat. And probably most of them will not even notice that the Tomcat log is 
> being added to the syslog, for that matter. This whole new relation between 
> syslog and Tomcat is really weird and I don't think the users are being 
> warned about it. I have used Tomcat+Ubuntu for several years and I haven't 
> seen this complication before. If there is an advantage about this relation 
> between syslog and Tomcat, I really can't see it. 

This is weird, I fully agree, and I'll try to do something better.

There is a way to write to catalina.out without using rsyslogd, it's
possible to instruct systemd to write the process output directly to the
file by overriding the StandardOutput directive of the service file:

  StandardOutput=file:/var/log/tomcat9/catalina.out

This would go in a /etc/systemd/system/tomcat9.service.d/override.conf
file for example.
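
A complete drop-in might look like this (the path is the one mentioned above;
the [Service] section header and the daemon-reload step are standard systemd
usage, not something specific to the tomcat9 package):

# /etc/systemd/system/tomcat9.service.d/override.conf
[Service]
StandardOutput=file:/var/log/tomcat9/catalina.out

followed by:

systemctl daemon-reload
systemctl restart tomcat9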

The downside is that you can no longer access the Tomcat output with
'journalctl -t tomcat9', nor see the last lines of the log when
displaying the status with 'systemctl status tomcat9'.

Ideally systemd should support writing to the journal and to a file
simultaneously, with something like StandardOutput=journal+file:... If
there is no other way to achieve the same result I'll file an
enhancement request on systemd.

Emmanuel Bourg

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: broken pipe error keeps increasing open files

2020-06-25 Thread Ayub Khan
Chris,

What do you suggest now to debug this issue? Should I check with nginx
support to see if they can verify it?
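
In case it helps while this gets narrowed down, the nginx-side tuning
suggested in this thread (dropping or recycling upstream connections after a
certain time or number of requests) is normally done with the upstream
keepalive directives; a hedged sketch with illustrative values, not taken
from the configuration quoted in this thread (the upstream keepalive_requests
and keepalive_timeout forms need a reasonably recent nginx):

upstream tomcat_backend {
    server 127.0.0.1:8080;
    keepalive 32;               # idle upstream connections nginx keeps open
    keepalive_requests 1000;    # recycle an upstream connection after this many requests
    keepalive_timeout 60s;      # close idle upstream connections after this long
}

location / {
    proxy_pass http://tomcat_backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";   # required for upstream keepalive
}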


Re: broken pipe error keeps increasing open files

2020-06-25 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Ayub,

On 6/25/20 11:06, Ayub Khan wrote:
> Was just thinking if the file descriptors belonged to nginx why do
> they disappear as soon as I restart tomcat ? I tried restarting
> nginx and the open file descriptors don't disappear.

When you restart Tomcat, the OS cleans-up the TCP/IP stack. Tomcat is
waiting on some cleanup information on those sockets. nginx has
evidently given-up on them and so the OS has adopted them.

> When I execute lsof -p  I do not see file descriptors
> in close wait state

Because nginx has cleaned them up already.

I would encourage you to take a look at the TCP/IP state diagram to
see how everything works. Beware: it's very complicated.

I can tell you that Tomcat isn't connecting to itself (unless your
application is doing that). Unless some other process is connecting to
your Tomcat's port 8080, it seems like nginx is the only possibility.

Note: you can run lsof without a process parameter. You can search all
open files for who owns file handle X and see who owns it. My guess is
you'll get "kernel" if you look.

- -chris

> On Wed, 24 Jun 2020, 20:32 Ayub Khan,  wrote:
>
>> Chris,
>>
>> Ok, I will investigate nginx side as well. Thank you for the
>> pointers
>>
>> On Wed, 24 Jun 2020, 19:45 Christopher Schultz, <
>> ch...@christopherschultz.net> wrote:
>>
>
> Ayub,
>
> On 6/24/20 11:05, Ayub Khan wrote:
> If some open file is owned by nginx why would it show up if
> I run the below command> sudo lsof -p $(cat
> /var/run/tomcat8.pid)
>
> Because network connections have two ends.
>
> -chris
>
> On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 6/23/20 19:17, Ayub Khan wrote:
 Yes we have nginx as reverse proxy, below is the
 nginx config. We notice this issue only when there is
 high number of requests, during non peak hours we do
 not see this issue.> location /myapp/myservice{
 #local machine proxy_pass http://localhost:8080;
 proxy_http_version 1.1;

 proxy_set_headerConnection $connection_upgrade;
 proxy_set_headerUpgrade $http_upgrade;
 proxy_set_headerHost $host; proxy_set_header
 X-Real-IP $remote_addr; proxy_set_header
 X-Forwarded-For $proxy_add_x_forwarded_for;


 proxy_buffers 16 16k; proxy_buffer_size 32k; }
>
> You might want to read about tuning nginx to drop
> connections after a certain period of time, number of
> requests, etc. Looks like either a bug in nginx or a
> misconfiguration which allows connections to stick-around
> like this. You may have to ask the nginx people. I have no
> experience with nginx myself, while others here may have
> some experience.
>
 location / { #  if using AWS Load balancer, this bit
 checks for the presence of the https proto flag.  if
 regular http is found, then issue a redirect
> to hit
 the https endpoint instead if
 ($http_x_forwarded_proto != 'https') { rewrite ^
 https://$host$request_uri? permanent; }

 proxy_pass  http://127.0.0.1:8080;
 proxy_http_version 1.1;

 proxy_set_headerConnection $connection_upgrade;
 proxy_set_headerUpgrade $http_upgrade;
 proxy_set_headerHost $host; proxy_set_header
 X-Real-IP $remote_addr; proxy_set_header
 X-Forwarded-For $proxy_add_x_forwarded_for;


 proxy_buffers 16 16k; proxy_buffer_size 32k; }

 *below is the connector*

 >>> protocol="org.apache.coyote.http11.Http11NioProtocol"
  connectionTimeout="2000" maxThreads="5"
 URIEncoding="UTF-8" redirectPort="8443" />
>
> 50k threads is a LOT of threads. Do you expect to handle
> 50k requests simultaneously?
>
 these ports are random, I am not sure who owns the
 process.

 localhost:http-alt->localhost:55866 (CLOSE_WAIT) ,
 here port 55866 is a random port.
> I'm sure you'll find that 55866 is owned by nginx. netstat
> will tell you .
>
> I think you need to look at your nginx configuration. It
> would also be a great time to upgrade to a supported
> version of Tomcat. I would recommend 8.5.56 or 9.0.36.
>
> -chris
>
 On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz
 < ch...@christopherschultz.net> wrote:

 Ayub,

 On 6/23/20 16:23, Ayub Khan wrote:
>>> I executed  *sudo lsof -p $(cat
>>> /var/run/tomcat8.pid) *and I saw the below
>>> output, some in CLOSE_WAIT and others in
>>> ESTABLISHED. If there are 200 open file
>>> descriptors 160 are in CLOSE_WAIT state. When
>>> the count for CLOSE_WAIT increases I just have

Re: broken pipe error keeps increasing open files

2020-06-25 Thread Ayub Khan
Chris,

Was just thinking if the file descriptors belonged to nginx why do they
disappear as soon as I restart tomcat ? I tried restarting nginx and the
open file descriptors don't disappear.

When I execute lsof -p  I do not see file descriptors in close
wait state

On Wed, 24 Jun 2020, 20:32 Ayub Khan,  wrote:

> Chris,
>
> Ok, I will investigate nginx side as well. Thank you for the pointers
>
> On Wed, 24 Jun 2020, 19:45 Christopher Schultz, <
> ch...@christopherschultz.net> wrote:
>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>>
>> Ayub,
>>
>> On 6/24/20 11:05, Ayub Khan wrote:
>> > If some open file is owned by nginx why would it show up if I run
>> > the below command> sudo lsof -p $(cat /var/run/tomcat8.pid)
>>
>> Because network connections have two ends.
>>
>> - -chris
>>
>> > On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
>> > ch...@christopherschultz.net> wrote:
>> >
>> > Ayub,
>> >
>> > On 6/23/20 19:17, Ayub Khan wrote:
>>  Yes we have nginx as reverse proxy, below is the nginx
>>  config. We notice this issue only when there is high number
>>  of requests, during non peak hours we do not see this issue.>
>>  location /myapp/myservice{ #local machine proxy_pass
>>  http://localhost:8080; proxy_http_version  1.1;
>> 
>>  proxy_set_headerConnection  $connection_upgrade;
>>  proxy_set_headerUpgrade $http_upgrade;
>>  proxy_set_headerHost$host;
>>  proxy_set_header X-Real-IP   $remote_addr;
>>  proxy_set_header X-Forwarded-For
>>  $proxy_add_x_forwarded_for;
>> 
>> 
>>  proxy_buffers 16 16k; proxy_buffer_size 32k; }
>> >
>> > You might want to read about tuning nginx to drop connections after
>> > a certain period of time, number of requests, etc. Looks like
>> > either a bug in nginx or a misconfiguration which allows
>> > connections to stick-around like this. You may have to ask the
>> > nginx people. I have no experience with nginx myself, while others
>> > here may have some experience.
>> >
>>  location / { #  if using AWS Load balancer, this bit checks
>>  for the presence of the https proto flag.  if regular http is
>>  found, then issue a redirect
>> > to hit
>>  the https endpoint instead if ($http_x_forwarded_proto !=
>>  'https') { rewrite ^ https://$host$request_uri? permanent; }
>> 
>>  proxy_pass  http://127.0.0.1:8080;
>>  proxy_http_version 1.1;
>> 
>>  proxy_set_headerConnection  $connection_upgrade;
>>  proxy_set_headerUpgrade $http_upgrade;
>>  proxy_set_headerHost$host;
>>  proxy_set_header X-Real-IP   $remote_addr;
>>  proxy_set_header X-Forwarded-For
>>  $proxy_add_x_forwarded_for;
>> 
>> 
>>  proxy_buffers 16 16k; proxy_buffer_size 32k; }
>> 
>>  *below is the connector*
>> 
>>  >  protocol="org.apache.coyote.http11.Http11NioProtocol"
>>  connectionTimeout="2000" maxThreads="5"
>>  URIEncoding="UTF-8" redirectPort="8443" />
>> >
>> > 50k threads is a LOT of threads. Do you expect to handle 50k
>> > requests simultaneously?
>> >
>>  these ports are random, I am not sure who owns the process.
>> 
>>  localhost:http-alt->localhost:55866 (CLOSE_WAIT) , here port
>>  55866 is a random port.
>> > I'm sure you'll find that 55866 is owned by nginx. netstat will
>> > tell you .
>> >
>> > I think you need to look at your nginx configuration. It would also
>> > be a great time to upgrade to a supported version of Tomcat. I
>> > would recommend 8.5.56 or 9.0.36.
>> >
>> > -chris
>> >
>>  On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
>>  ch...@christopherschultz.net> wrote:
>> 
>>  Ayub,
>> 
>>  On 6/23/20 16:23, Ayub Khan wrote:
>> >>> I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)
>> >>> *and I saw the below output, some in CLOSE_WAIT and
>> >>> others in ESTABLISHED. If there are 200 open file
>> >>> descriptors 160 are in CLOSE_WAIT state. When the count
>> >>> for CLOSE_WAIT increases I just have to restart
>> >>> tomcat.
>> >>>
>> >>> java65189 tomcat8  715u IPv6
>> >>> 237878311 0t0 TCP localhost:http-alt->localhost:43760
>> >>> (CLOSE_WAIT) java 65189 tomcat8  716u IPv6
>> >>> 237848923   0t0 TCP
>> >>> localhost:http-alt->localhost:40568 (CLOSE_WAIT)
>> 
>>  These are connections from some process into Tomcat listening
>>  on port 8080 (that's what localhost:http-alt is). So what
>>  process owns the outgoing connection on port 40568 on the
>>  same host?
>> 
>>  Are you using a reverse proxy?
>> 
>> >>> most of the open files are in CLOSE_WAIT state I do not
>> >>> see anything related to database ip.
>> 
>>  Agreed. It looks like you have a reverse proxy who is
>>  losing-track of connections, or who is 

Re: Connection Closure due to Fatal Stream with HTTP2

2020-06-25 Thread Mark Thomas
Thanks.

I've looked at the code and I have tried various tests but I am unable
to re-create a memory leak.

The code used to (before I made a few changes this afternoon) retain a
lot more memory per Stream and it is possible that what you are seeing
is a system that doesn't have enough memory to achieve steady state.

If you are able to build the latest 9.0.x and test that, that could be
helpful. Alternatively, I could provide a test build for you to
experiment with.
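
For reference, a rough sketch of building 9.0.x from source (repository and
branch names as published in the ASF git mirror; paths may need adjusting):

git clone https://github.com/apache/tomcat.git
cd tomcat
git checkout 9.0.x
cp build.properties.default build.properties   # adjust base.path etc. if needed
ant
# the built Tomcat ends up under output/build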

Some additional questions that might aid understanding:

- What is the typical response size for one of these requests?
- How long does a typical test take to process?
- What are the GC roots for those RequestInfo objects?

Thanks again,

Mark




On 25/06/2020 15:10, Chirag Dewan wrote:
> Hi Mark,
> 
> Its the default APR connector with 150 Threads.
> 
> Chirag
> 
> On Thu, 25 Jun, 2020, 7:30 pm Mark Thomas,  wrote:
> 
>> On 25/06/2020 11:00, Chirag Dewan wrote:
>>> Thanks for the quick check Mark.
>>>
>>> These are the images I tried referring to:
>>>
>>> https://ibb.co/LzKtRgh
>>>
>>> https://ibb.co/2s7hqRL
>>>
>>> https://ibb.co/KmKj590
>>>
>>>
>>> The last one is the MAT screenshot showing many RequestInfo objects.
>>
>> Thanks. That certainly looks like a memory leak. I'll take a closer
>> look. Out of interest, how many threads is the Connector configured to use?
>>
>> Mark
>>
>>
>>>
>>>
>>> Thanks,
>>>
>>> Chirag
>>>
>>> On Wed, Jun 24, 2020 at 8:30 PM Mark Thomas  wrote:
>>>
 On 24/06/2020 12:17, Mark Thomas wrote:
> On 22/06/2020 11:06, Chirag Dewan wrote:
>> Hi,
>>
>> Update: We found that Tomcat goes OOM when a client closes and opens
>> new
>> connections every second. In the memory dump, we see a lot of
>> RequestInfo objects that are causing the memory spike.
>>
>> After a while, Tomcat goes OOM and start rejecting request(I get a
>> request timed out on my client). This seems like a bug to me.
>>
>> For better understanding, let me explain my use case again:
>>
>> I have a jetty client that sends HTTP2 requests to Tomcat. My
>> requirement is to close a connection after a configurable(say 5000)
>> number of requests/streams and open a new connection that continues to
>> send requests. I close a connection by sending a GoAway frame.
>>
>> When I execute this use case under load, I see that after ~2hours my
>> requests fail and I get a series of errors like request
>> timeouts(5seconds), invalid window update frame, and connection close
>> exception on my client.
>> On further debugging, I found that it's a Tomcat memory problem and it
>> goes OOM after sometime under heavy load with multiple connections
>> being
>> re-established by the clients.
>>
>> image.png
>>
>> image.png
>>
>> Is this a known issue? Or a known behavior with Tomcat?
>
> Embedded images get dropped by the list software. Post those images
> somewhere we can see them.
>
>> Please let me know if you any experience with such a situation. Thanks
>> in advance.
>
> Nothing comes to mind.
>
> I'll try some simple tests with HTTP/2.

 I don't see a memory leak (the memory is reclaimed eventually) but I do
 see possibilities to release memory associated with request processing
 sooner.

 Right now you need to allocate more memory to the Java process to enable
 Tomcat to handle the HTTP/2 load it is presented with.

 It looks like a reasonable chunk of memory is released when the
 Connection closes that could be released earlier when the associated
 Stream closes. I'll take a look at what can be done in that area. In the
 meantime, reducing the number of Streams you allow on a Connection
 before it is closed should reduce overall memory usage.

 Mark

 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org


>>>
>>
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
>> For additional commands, e-mail: users-h...@tomcat.apache.org
>>
>>
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: AJP error using mod_proxy__ajp

2020-06-25 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

All,

This issue is apparently trivially reproducible in my dev environment.

Do I have to get a protocol-trace to get any more helpful information?

Thanks,
- -chris

On 6/24/20 10:46, Christopher Schultz wrote:
> All,
>
> On 6/24/20 10:29, Christopher Schultz wrote:
>> All,
>
>> I'm slowly switching from mod_jk to mod_proxy_ajp and I have a
>> development environment where I'm getting Bad Gateway responses
>> sent to clients along with this exception in my Tomcat log file:
>
>> java.lang.IllegalArgumentException: Header message of length
>> [8,194] received but the packetSize is only [8,192]
>>     at org.apache.coyote.ajp.AjpProcessor.readMessage(AjpProcessor.java:685)
>>     at org.apache.coyote.ajp.AjpProcessor.receive(AjpProcessor.java:626)
>>     at org.apache.coyote.ajp.AjpProcessor.refillReadBuffer(AjpProcessor.java:734)
>>     at org.apache.coyote.ajp.AjpProcessor$SocketInputBuffer.doRead(AjpProcessor.java:1456)
>>     at org.apache.coyote.Request.doRead(Request.java:581)
>>     at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:344)
>>     at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:663)
>>     at org.apache.catalina.connector.InputBuffer.readByte(InputBuffer.java:358)
>>     at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:93)
>>     at org.apache.commons.io.input.ProxyInputStream.read(ProxyInputStream.java:53)
>>     at org.apache.commons.io.input.TeeInputStream.read(TeeInputStream.java:106)
>>     at java.io.FilterInputStream.read(FilterInputStream.java:83)
>>     at my.product.MacInputStream.read(MacInputStream.java:29)
>>     at java.io.FilterInputStream.read(FilterInputStream.java:83)
>>     at com.sun.org.apache.xerces.internal.impl.XMLEntityManager$RewindableInputStream.read(XMLEntityManager.java:2890)
>>     at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:674)
>>     at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:148)
>>     at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:806)
>>     at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
>>     at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
>>     at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
>>     at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
>>     at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:327)
>>     at javax.xml.parsers.SAXParser.parse(SAXParser.java:195)
>
>> This is a web service which is reading the request with a
>> SAXParser. It's been running in production (and dev!) for years
>> without any issues. It's been running for a few months in
>> development, now, with mod_proxy_ajp without any errors.
>
>> I know about the "max packet size" and the default is 8192
>> bytes. I haven't changed the default. Here's my <Connector>
>> configuration:
>
>> <Connector secretRequired="false" redirectPort="443" protocol="AJP/1.3"
>> URIEncoding="UTF-8" executor="tomcatThreadPool" />
>
>> Here's the configuration in httpd.conf:
>
>> <Proxy balancer://my-api>
>>     BalancerMember "ajp://localhost:8245" timeout=300 ping=5 ttl=60
>> </Proxy>
>
>> ProxyPass "/my-api/" "balancer://my-api/my-api/"
>> ProxyPassReverse "/my-api/" "balancer://my-api/my-api/"
>
>> The documentation for mod_proxy_ajp[1] seems to indicate that
>> the "Packet Size" for AJP is fixed at 8192 bytes:
>
>> " Packet Size
>
>> According to much of the code, the max packet size is 8 * 1024
>> bytes (8K). The actual length of the packet is encoded in the
>> header.
>
>> Packet Headers
>
>> Packets sent from the server to the container begin with 0x1234.
>> Packets sent from the container to the server begin with AB
>> (that's the ASCII code for A followed by the ASCII code for B).
>> After those first two bytes, there is an integer (encoded as
>> above) with the length of the payload. Although this might
>> suggest that the maximum payload could be as large as 2^16, in
>> fact, *the code sets the maximum to be 8K*. " (emphasis mine)
>
>> Does anyone know under what circumstances mod_proxy_ajp might
>> send more than 8192 bytes? It looks like mod_proxy_ajp doesn't
>> have any way to set the max packet size like mod_jk does.
>
>> I should probably be able to set the max packet size on the
>> Tomcat side to something higher than 8192 to catch this kind of
>> thing... but it looks like it might be a bug in mod_proxy_ajp.
>
>> Versions are Apache httpd 2.4.25 (Debian) and Tomcat 8.5.trunk
>> (8.5.55). mod_jk is not being used.
>
>> Any 
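
(For reference, the Tomcat side of this is the packetSize attribute on the
AJP connector, which accepts values up to 65536; a sketch that simply adds it
to the connector quoted above, with an illustrative value:

<Connector protocol="AJP/1.3"
           secretRequired="false"
           redirectPort="443"
           URIEncoding="UTF-8"
           executor="tomcatThreadPool"
           packetSize="16384" />

With mod_jk the matching worker property is max_packet_size; whether
mod_proxy_ajp can be told to use larger packets is worth checking against the
httpd documentation.)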

Re: Connection Closure due to Fatal Stream with HTTP2

2020-06-25 Thread Chirag Dewan
Hi Mark,

It's the default APR connector with 150 threads.

Chirag

On Thu, 25 Jun, 2020, 7:30 pm Mark Thomas,  wrote:

> On 25/06/2020 11:00, Chirag Dewan wrote:
> > Thanks for the quick check Mark.
> >
> > These are the images I tried referring to:
> >
> > https://ibb.co/LzKtRgh
> >
> > https://ibb.co/2s7hqRL
> >
> > https://ibb.co/KmKj590
> >
> >
> > The last one is the MAT screenshot showing many RequestInfo objects.
>
> Thanks. That certainly looks like a memory leak. I'll take a closer
> look. Out of interest, how many threads is the Connector configured to use?
>
> Mark
>
>
> >
> >
> > Thanks,
> >
> > Chirag
> >
> > On Wed, Jun 24, 2020 at 8:30 PM Mark Thomas  wrote:
> >
> >> On 24/06/2020 12:17, Mark Thomas wrote:
> >>> On 22/06/2020 11:06, Chirag Dewan wrote:
>  Hi,
> 
>  Update: We found that Tomcat goes OOM when a client closes and opens
> new
>  connections every second. In the memory dump, we see a lot of
>  RequestInfo objects that are causing the memory spike.
> 
>  After a while, Tomcat goes OOM and start rejecting request(I get a
>  request timed out on my client). This seems like a bug to me.
> 
>  For better understanding, let me explain my use case again:
> 
>  I have a jetty client that sends HTTP2 requests to Tomcat. My
>  requirement is to close a connection after a configurable(say 5000)
>  number of requests/streams and open a new connection that continues to
>  send requests. I close a connection by sending a GoAway frame.
> 
>  When I execute this use case under load, I see that after ~2hours my
>  requests fail and I get a series of errors like request
>  timeouts(5seconds), invalid window update frame, and connection close
>  exception on my client.
>  On further debugging, I found that it's a Tomcat memory problem and it
>  goes OOM after sometime under heavy load with multiple connections
> being
>  re-established by the clients.
> 
>  image.png
> 
>  image.png
> 
>  Is this a known issue? Or a known behavior with Tomcat?
> >>>
> >>> Embedded images get dropped by the list software. Post those images
> >>> somewhere we can see them.
> >>>
>  Please let me know if you any experience with such a situation. Thanks
>  in advance.
> >>>
> >>> Nothing comes to mind.
> >>>
> >>> I'll try some simple tests with HTTP/2.
> >>
> >> I don't see a memory leak (the memory is reclaimed eventually) but I do
> >> see possibilities to release memory associated with request processing
> >> sooner.
> >>
> >> Right now you need to allocate more memory to the Java process to enable
> >> Tomcat to handle the HTTP/2 load it is presented with.
> >>
> >> It looks like a reasonable chunk of memory is released when the
> >> Connection closes that could be released earlier when the associated
> >> Stream closes. I'll take a look at what can be done in that area. In the
> >> meantime, reducing the number of Streams you allow on a Connection
> >> before it is closed should reduce overall memory usage.
> >>
> >> Mark
> >>
> >> -
> >> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> >> For additional commands, e-mail: users-h...@tomcat.apache.org
> >>
> >>
> >
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
>
>


Re: Connection Closure due to Fatal Stream with HTTP2

2020-06-25 Thread Mark Thomas
On 25/06/2020 11:00, Chirag Dewan wrote:
> Thanks for the quick check Mark.
> 
> These are the images I tried referring to:
> 
> https://ibb.co/LzKtRgh
> 
> https://ibb.co/2s7hqRL
> 
> https://ibb.co/KmKj590
> 
> 
> The last one is the MAT screenshot showing many RequestInfo objects.

Thanks. That certainly looks like a memory leak. I'll take a closer
look. Out of interest, how many threads is the Connector configured to use?

Mark


> 
> 
> Thanks,
> 
> Chirag
> 
> On Wed, Jun 24, 2020 at 8:30 PM Mark Thomas  wrote:
> 
>> On 24/06/2020 12:17, Mark Thomas wrote:
>>> On 22/06/2020 11:06, Chirag Dewan wrote:
 Hi,

 Update: We found that Tomcat goes OOM when a client closes and opens new
 connections every second. In the memory dump, we see a lot of
 RequestInfo objects that are causing the memory spike.

 After a while, Tomcat goes OOM and start rejecting request(I get a
 request timed out on my client). This seems like a bug to me.

 For better understanding, let me explain my use case again:

 I have a jetty client that sends HTTP2 requests to Tomcat. My
 requirement is to close a connection after a configurable(say 5000)
 number of requests/streams and open a new connection that continues to
 send requests. I close a connection by sending a GoAway frame.

 When I execute this use case under load, I see that after ~2hours my
 requests fail and I get a series of errors like request
 timeouts(5seconds), invalid window update frame, and connection close
 exception on my client.
 On further debugging, I found that it's a Tomcat memory problem and it
 goes OOM after sometime under heavy load with multiple connections being
 re-established by the clients.

 image.png

 image.png

 Is this a known issue? Or a known behavior with Tomcat?
>>>
>>> Embedded images get dropped by the list software. Post those images
>>> somewhere we can see them.
>>>
 Please let me know if you any experience with such a situation. Thanks
 in advance.
>>>
>>> Nothing comes to mind.
>>>
>>> I'll try some simple tests with HTTP/2.
>>
>> I don't see a memory leak (the memory is reclaimed eventually) but I do
>> see possibilities to release memory associated with request processing
>> sooner.
>>
>> Right now you need to allocate more memory to the Java process to enable
>> Tomcat to handle the HTTP/2 load it is presented with.
>>
>> It looks like a reasonable chunk of memory is released when the
>> Connection closes that could be released earlier when the associated
>> Stream closes. I'll take a look at what can be done in that area. In the
>> meantime, reducing the number of Streams you allow on a Connection
>> before it is closed should reduce overall memory usage.
>>
>> Mark
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
>> For additional commands, e-mail: users-h...@tomcat.apache.org
>>
>>
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



HTTP Header Security Filter (antiClickJackingEnabled x-frame-options) doesn't work with mod_proxy as expected

2020-06-25 Thread Michele Mase'
I'm trying to configure the header x-frame-options in tomcat8

web.xml:

<filter>
    <filter-name>httpHeaderSecurity</filter-name>
    <filter-class>org.apache.catalina.filters.HttpHeaderSecurityFilter</filter-class>
    <async-supported>true</async-supported>
    <init-param>
        <param-name>antiClickJackingOption</param-name>
        <param-value>SAMEORIGIN</param-value>
    </init-param>
</filter>

<filter-mapping>
    <filter-name>httpHeaderSecurity</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>


Testing it with tomcat works as expected:

curl -I http://ip_of_tomcat:port_of_tomcat/myapp/
HTTP/1.1 200 OK
Strict-Transport-Security: max-age=31536000;includeSubDomains
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Set-Cookie: JSESSIONID=5B3F02AE2484BB1A66B1875DCC4337BD.myapp1;
Path=/myapp; Secure; HttpOnly
Content-Type: text/html;charset=ISO-8859-1
Transfer-Encoding: chunked
Date: Thu, 25 Jun 2020 12:36:14 GMT
Server:

Testing it with tomcat behind an apache reverse proxy with mod_proxy_http
does not work as expected

web.xml: the same as above
server.xml


apache.conf

ServerName xframe.example.com
ProxyPass / http://ip_of_tomcat:port_of_tomcat/
ProxyPassReverse / http://ip_of_tomcat:port_of_tomcat/


curl -I https://xframe.example.com/myapp/
HTTP/1.1 200 OK
Date: Thu, 25 Jun 2020 13:20:48 GMT
Server:
Strict-Transport-Security: max-age=31536000;includeSubDomains
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Content-Type: text/html;charset=ISO-8859-1
Transfer-Encoding: chunked
Set-Cookie: JSESSIONID=7F94B0FFC3905A6CA4B4C192E0559AF4.myapp1;
Path=/myapp; Secure; HttpOnly
Vary: Accept-Encoding,User-Agent

The x-frame-options header is missing. The only workaround I have found is
by enabling mod_headers in apache.conf, i.e:


<IfVersion >= 2.4.7>
    Header always setifempty X-Frame-Options SAMEORIGIN
</IfVersion>
<IfVersion < 2.4.7>
    Header always merge X-Frame-Options SAMEORIGIN
</IfVersion>


And it finally works:
curl -I https://xframe.example.com/myapp/
HTTP/1.1 200 OK
Date: Thu, 25 Jun 2020 13:24:48 GMT
Server:
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=31536000;includeSubDomains
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Content-Type: text/html;charset=ISO-8859-1
Transfer-Encoding: chunked
Set-Cookie: JSESSIONID=990791DCF707F972D7C2CF09D47F4BE4.myapp1;
Path=/myapp; Secure; HttpOnly
Vary: Accept-Encoding,User-Agent

Is it possible to use x-frame-options with mod_proxy without also having to
use mod_headers?
I would like to configure only tomcat and not apache.

-- 
Michele Masè


Re: Connection Closure due to Fatal Stream with HTTP2

2020-06-25 Thread Chirag Dewan
Thanks for the quick check Mark.

These are the images I tried referring to:

https://ibb.co/LzKtRgh

https://ibb.co/2s7hqRL

https://ibb.co/KmKj590


The last one is the MAT screenshot showing many RequestInfo objects.


Thanks,

Chirag

On Wed, Jun 24, 2020 at 8:30 PM Mark Thomas  wrote:

> On 24/06/2020 12:17, Mark Thomas wrote:
> > On 22/06/2020 11:06, Chirag Dewan wrote:
> >> Hi,
> >>
> >> Update: We found that Tomcat goes OOM when a client closes and opens new
> >> connections every second. In the memory dump, we see a lot of
> >> RequestInfo objects that are causing the memory spike.
> >>
> >> After a while, Tomcat goes OOM and start rejecting request(I get a
> >> request timed out on my client). This seems like a bug to me.
> >>
> >> For better understanding, let me explain my use case again:
> >>
> >> I have a jetty client that sends HTTP2 requests to Tomcat. My
> >> requirement is to close a connection after a configurable(say 5000)
> >> number of requests/streams and open a new connection that continues to
> >> send requests. I close a connection by sending a GoAway frame.
> >>
> >> When I execute this use case under load, I see that after ~2hours my
> >> requests fail and I get a series of errors like request
> >> timeouts(5seconds), invalid window update frame, and connection close
> >> exception on my client.
> >> On further debugging, I found that it's a Tomcat memory problem and it
> >> goes OOM after sometime under heavy load with multiple connections being
> >> re-established by the clients.
> >>
> >> image.png
> >>
> >> image.png
> >>
> >> Is this a known issue? Or a known behavior with Tomcat?
> >
> > Embedded images get dropped by the list software. Post those images
> > somewhere we can see them.
> >
> >> Please let me know if you any experience with such a situation. Thanks
> >> in advance.
> >
> > Nothing comes to mind.
> >
> > I'll try some simple tests with HTTP/2.
>
> I don't see a memory leak (the memory is reclaimed eventually) but I do
> see possibilities to release memory associated with request processing
> sooner.
>
> Right now you need to allocate more memory to the Java process to enable
> Tomcat to handle the HTTP/2 load it is presented with.
>
> It looks like a reasonable chunk of memory is released when the
> Connection closes that could be released earlier when the associated
> Stream closes. I'll take a look at what can be done in that area. In the
> meantime, reducing the number of Streams you allow on a Connection
> before it is closed should reduce overall memory usage.
>
> Mark
>
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
>
>


Re: Tomcat 9 and response.setTrailerFields

2020-06-25 Thread Mark Thomas
On 25/06/2020 07:44, Julian Reschke wrote:
> On 24.06.2020 17:35, Julian Reschke wrote:
>> ... > So it does set "Trailer" (so the response was not committed
>> yet), but it
>> doesn't switch to chunked encoding.
>>
>> There must be something that I'm doing wrong...
>> ...
> 
> Found the issue.
> 
> I was using a HttpServletResponse object that *delegates* to the real
> one, and as the trailer field related methods have default
> implementations, I actually executed the default "no op" implementation.

Glad you found it and thanks for reporting back with the root cause.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 9 and response.setTrailerFields

2020-06-25 Thread Julian Reschke

On 24.06.2020 17:35, Julian Reschke wrote:

... > So it does set "Trailer" (so the response was not committed yet), but it
doesn't switch to chunked encoding.

There must be something that I'm doing wrong...
...


Found the issue.

I was using a HttpServletResponse object that *delegates* to the real
one, and as the trailer field related methods have default
implementations, I actually executed the default "no op" implementation.
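
For anyone hitting the same thing, a minimal sketch of a delegate that avoids
the pitfall (the class name is illustrative); the point is to forward the
trailer methods to the wrapped response explicitly instead of inheriting the
no-op interface defaults:

import java.util.Map;
import java.util.function.Supplier;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Illustrative only: a hand-rolled delegating response that does not override
// these methods falls back to the default no-op implementations declared on
// the HttpServletResponse interface, so trailer fields are silently dropped.
public class TrailerForwardingResponse extends HttpServletResponseWrapper {

    public TrailerForwardingResponse(HttpServletResponse wrapped) {
        super(wrapped);
    }

    @Override
    public void setTrailerFields(Supplier<Map<String, String>> supplier) {
        ((HttpServletResponse) getResponse()).setTrailerFields(supplier);
    }

    @Override
    public Supplier<Map<String, String>> getTrailerFields() {
        return ((HttpServletResponse) getResponse()).getTrailerFields();
    }
}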

Best regards, Julian

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org