Re: NullPointerException on startup - possible bug in Tomcat

2020-06-24 Thread tomcat-subs
Problem resolved. Thank you.

On Wed, Jun 24, 2020, at 12:46 PM, Konstantin Kolinko wrote:
> Wed, 24 Jun 2020 at 19:25, :
> >
> > I have a web application which is failing in RestEasy initialization with 
> > an NPE. It worked for many years until I added a large number of jar 
> > dependencies because of a new development effort. I've debugged the code by 
> > stepping through the Tomcat source to the point I've found where it is 
> > failing. It seems to be a Tomcat bug but of course I'm not convinced since 
> > it is highly more likely it is my problem.
> >
> > Tomcat version is 9.0.36, though the failure happens in the Tomcat 8 
> > versions I've tried as well.
> >
> > The NPE is triggered by a single "return null" statement in 
> > org.apache.catalina.core.ApplicationContext line 933. Below is a code 
> > snippet of where the return statement is. In my failing scenario the 
> > wrapper is NOT null and isOverridable is already returning false. So it 
> > falls through to return null.
> >
> > So here is my question: Why in the world in the code below does the return 
> > null statement even exist? It seems like the return null at line 933 is the 
> > precondition the code is trying to establish.
> 
> This method is documented in the specification of Servlet API (in
> their javadoc) to return null if such servlet has already been
> registered.
> See Java EE 8 javadoc
> https://javaee.github.io/javaee-spec/javadocs/javax/servlet/ServletContext.html#addServlet-java.lang.String-java.lang.Class-
> 
> (Following the links from Specifications page
> https://cwiki.apache.org/confluence/display/TOMCAT/Specifications
> 
> K.Kolinko
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 
>

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: broken pipe error keeps increasing open files

2020-06-24 Thread Ayub Khan
Chris,

Ok, I will investigate nginx side as well. Thank you for the pointers

On Wed, 24 Jun 2020, 19:45 Christopher Schultz, <
ch...@christopherschultz.net> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
>
> Ayub,
>
> On 6/24/20 11:05, Ayub Khan wrote:
> > If some open file is owned by nginx why would it show up if I run
> > the below command> sudo lsof -p $(cat /var/run/tomcat8.pid)
>
> Because network connections have two ends.
>
> - -chris
>
> > On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Ayub,
> >
> > On 6/23/20 19:17, Ayub Khan wrote:
>  Yes we have nginx as reverse proxy, below is the nginx
>  config. We notice this issue only when there is high number
>  of requests, during non peak hours we do not see this issue.>
>  location /myapp/myservice{ #local machine proxy_pass
>  http://localhost:8080; proxy_http_version  1.1;
> 
>  proxy_set_headerConnection  $connection_upgrade;
>  proxy_set_headerUpgrade $http_upgrade;
>  proxy_set_headerHost$host;
>  proxy_set_header X-Real-IP   $remote_addr;
>  proxy_set_header X-Forwarded-For
>  $proxy_add_x_forwarded_for;
> 
> 
>  proxy_buffers 16 16k; proxy_buffer_size 32k; }
> >
> > You might want to read about tuning nginx to drop connections after
> > a certain period of time, number of requests, etc. Looks like
> > either a bug in nginx or a misconfiguration which allows
> > connections to stick-around like this. You may have to ask the
> > nginx people. I have no experience with nginx myself, while others
> > here may have some experience.
> >
>  location / { #  if using AWS Load balancer, this bit checks
>  for the presence of the https proto flag.  if regular http is
>  found, then issue a redirect
> > to hit
>  the https endpoint instead if ($http_x_forwarded_proto !=
>  'https') { rewrite ^ https://$host$request_uri? permanent; }
> 
>  proxy_pass  http://127.0.0.1:8080;
>  proxy_http_version 1.1;
> 
>  proxy_set_headerConnection  $connection_upgrade;
>  proxy_set_headerUpgrade $http_upgrade;
>  proxy_set_headerHost$host;
>  proxy_set_header X-Real-IP   $remote_addr;
>  proxy_set_header X-Forwarded-For
>  $proxy_add_x_forwarded_for;
> 
> 
>  proxy_buffers 16 16k; proxy_buffer_size 32k; }
> 
>  *below is the connector*
> 
>  <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
>  connectionTimeout="2000" maxThreads="5"
>  URIEncoding="UTF-8" redirectPort="8443" />
> >
> > 50k threads is a LOT of threads. Do you expect to handle 50k
> > requests simultaneously?
> >
>  these ports are random, I am not sure who owns the process.
> 
>  localhost:http-alt->localhost:55866 (CLOSE_WAIT) , here port
>  55866 is a random port.
> > I'm sure you'll find that 55866 is owned by nginx. netstat will
> > tell you.
> >
> > I think you need to look at your nginx configuration. It would also
> > be a great time to upgrade to a supported version of Tomcat. I
> > would recommend 8.5.56 or 9.0.36.
> >
> > -chris
> >
>  On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
>  ch...@christopherschultz.net> wrote:
> 
>  Ayub,
> 
>  On 6/23/20 16:23, Ayub Khan wrote:
> >>> I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)
> >>> *and I saw the below output, some in CLOSE_WAIT and
> >>> others in ESTABLISHED. If there are 200 open file
> >>> descriptors 160 are in CLOSE_WAIT state. When the count
> >>> for CLOSE_WAIT increases I just have to restart
> >>> tomcat.
> >>>
> >>> java65189 tomcat8  715u IPv6
> >>> 237878311 0t0 TCP localhost:http-alt->localhost:43760
> >>> (CLOSE_WAIT) java 65189 tomcat8  716u IPv6
> >>> 237848923   0t0 TCP
> >>> localhost:http-alt->localhost:40568 (CLOSE_WAIT)
> 
>  These are connections from some process into Tomcat listening
>  on port 8080 (that's what localhost:http-alt is). So what
>  process owns the outgoing connection on port 40568 on the
>  same host?
> 
>  Are you using a reverse proxy?
> 
> >>> most of the open files are in CLOSE_WAIT state I do not
> >>> see anything related to database ip.
> 
>  Agreed. It looks like you have a reverse proxy who is
>  losing-track of connections, or who is (re)opening
>  connections when it may be unnecessary.
> 
>  Can you share your  configuration from
>  server.xml? Remember to remove any secrets.
> 
>  -chris
> 
> >>> On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
> >>> felix.schumac...@internetallee.de> wrote:
> >>>
> 
>  Am 22.06.20 um 13:22 schrieb Ayub Khan:
> > Felix,
> >
> > I executed ls -l 

Re: NullPointerException on startup - possible bug in Tomcat

2020-06-24 Thread Mark Thomas
On 24/06/2020 17:25, tomcat-s...@stiprus.com wrote:
> I have a web application which is failing in RestEasy initialization with an 
> NPE. It worked for many years until I added a large number of jar 
> dependencies because of a new development effort. I've debugged the code by 
> stepping through the Tomcat source to the point I've found where it is 
> failing. It seems to be a Tomcat bug but of course I'm not convinced since it 
> is highly more likely it is my problem.
> 
> Tomcat version is 9.0.36, though the failure happens in the Tomcat 8 versions 
> I've tried as well.
> 
> The NPE is triggered by a single "return null" statement in 
> org.apache.catalina.core.ApplicationContext line 933. Below is a code snippet 
> of where the return statement is. In my failing scenario the wrapper is NOT 
> null and isOverridable is already returning false. So it falls through to 
> return null.
> 
> So here is my question: Why in the world in the code below does the return 
> null statement even exist?

Because the Servlet API methods it supports are documented to return
null if:


...this ServletContext already contains a complete ServletRegistration
for the given servletName


Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NullPointerException on startup - possible bug in Tomcat

2020-06-24 Thread Konstantin Kolinko
Wed, 24 Jun 2020 at 19:25, :
>
> I have a web application which is failing in RestEasy initialization with an 
> NPE. It worked for many years until I added a large number of jar 
> dependencies because of a new development effort. I've debugged the code by 
> stepping through the Tomcat source to the point I've found where it is 
> failing. It seems to be a Tomcat bug but of course I'm not convinced since it 
> is highly more likely it is my problem.
>
> Tomcat version is 9.0.36, though the failure happens in the Tomcat 8 versions 
> I've tried as well.
>
> The NPE is triggered by a single "return null" statement in 
> org.apache.catalina.core.ApplicationContext line 933. Below is a code snippet 
> of where the return statement is. In my failing scenario the wrapper is NOT 
> null and isOverridable is already returning false. So it falls through to 
> return null.
>
> So here is my question: Why in the world in the code below does the return 
> null statement even exist? It seems like the return null at line 933 is the 
> precondition the code is trying to establish.

This method is documented in the specification of Servlet API (in
their javadoc) to return null if such servlet has already been
registered.
See Java EE 8 javadoc
https://javaee.github.io/javaee-spec/javadocs/javax/servlet/ServletContext.html#addServlet-java.lang.String-java.lang.Class-

(Following the links from Specifications page
https://cwiki.apache.org/confluence/display/TOMCAT/Specifications

K.Kolinko

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: broken pipe error keeps increasing open files

2020-06-24 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


Ayub,

On 6/24/20 11:05, Ayub Khan wrote:
> If some open file is owned by nginx why would it show up if I run
> the below command> sudo lsof -p $(cat /var/run/tomcat8.pid)

Because network connections have two ends.

- -chris

> On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 6/23/20 19:17, Ayub Khan wrote:
 Yes we have nginx as reverse proxy, below is the nginx
 config. We notice this issue only when there is high number
 of requests, during non peak hours we do not see this issue.>
 location /myapp/myservice{ #local machine proxy_pass
 http://localhost:8080; proxy_http_version  1.1;

 proxy_set_headerConnection  $connection_upgrade;
 proxy_set_headerUpgrade $http_upgrade;
 proxy_set_headerHost$host;
 proxy_set_header X-Real-IP   $remote_addr;
 proxy_set_header X-Forwarded-For
 $proxy_add_x_forwarded_for;


 proxy_buffers 16 16k; proxy_buffer_size 32k; }
>
> You might want to read about tuning nginx to drop connections after
> a certain period of time, number of requests, etc. Looks like
> either a bug in nginx or a misconfiguration which allows
> connections to stick-around like this. You may have to ask the
> nginx people. I have no experience with nginx myself, while others
> here may have some experience.
>
 location / { #  if using AWS Load balancer, this bit checks
 for the presence of the https proto flag.  if regular http is
 found, then issue a redirect
> to hit
 the https endpoint instead if ($http_x_forwarded_proto !=
 'https') { rewrite ^ https://$host$request_uri? permanent; }

 proxy_pass  http://127.0.0.1:8080;
 proxy_http_version 1.1;

 proxy_set_headerConnection  $connection_upgrade;
 proxy_set_headerUpgrade $http_upgrade;
 proxy_set_headerHost$host;
 proxy_set_header X-Real-IP   $remote_addr;
 proxy_set_header X-Forwarded-For
 $proxy_add_x_forwarded_for;


 proxy_buffers 16 16k; proxy_buffer_size 32k; }

 *below is the connector*

 <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
 connectionTimeout="2000" maxThreads="5"
 URIEncoding="UTF-8" redirectPort="8443" />
>
> 50k threads is a LOT of threads. Do you expect to handle 50k
> requests simultaneously?
>
 these ports are random, I am not sure who owns the process.

 localhost:http-alt->localhost:55866 (CLOSE_WAIT) , here port
 55866 is a random port.
> I'm sure you'll find that 55866 is owned by nginx. netstat will
> tell you.
>
> I think you need to look at your nginx configuration. It would also
> be a great time to upgrade to a supported version of Tomcat. I
> would recommend 8.5.56 or 9.0.36.
>
> -chris
>
 On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
 ch...@christopherschultz.net> wrote:

 Ayub,

 On 6/23/20 16:23, Ayub Khan wrote:
>>> I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)
>>> *and I saw the below output, some in CLOSE_WAIT and
>>> others in ESTABLISHED. If there are 200 open file
>>> descriptors 160 are in CLOSE_WAIT state. When the count
>>> for CLOSE_WAIT increases I just have to restart
>>> tomcat.
>>>
>>> java65189 tomcat8  715u IPv6
>>> 237878311 0t0 TCP localhost:http-alt->localhost:43760
>>> (CLOSE_WAIT) java 65189 tomcat8  716u IPv6
>>> 237848923   0t0 TCP
>>> localhost:http-alt->localhost:40568 (CLOSE_WAIT)

 These are connections from some process into Tomcat listening
 on port 8080 (that's what localhost:http-alt is). So what
 process owns the outgoing connection on port 40568 on the
 same host?

 Are you using a reverse proxy?

>>> most of the open files are in CLOSE_WAIT state I do not
>>> see anything related to database ip.

 Agreed. It looks like you have a reverse proxy who is
 losing-track of connections, or who is (re)opening
 connections when it may be unnecessary.

 Can you share your  configuration from
 server.xml? Remember to remove any secrets.

 -chris

>>> On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
>>> felix.schumac...@internetallee.de> wrote:
>>>

 Am 22.06.20 um 13:22 schrieb Ayub Khan:
> Felix,
>
> I executed ls -l /proc/$(cat
> /var/run/tomcat8.pid)/fd/ and from the
 output
> I see majority of them are related to sockets as
> shown below, some of
 them
> point to the jar file of tomcat and others to the
> log file which is
 created.
>
> socket:[2084570754] socket:[2084579487]
> socket:[2084578478] socket:[2084570167]

 Can you try the 

NullPointerException on startup - possible bug in Tomcat

2020-06-24 Thread tomcat-subs
I have a web application which is failing in RestEasy initialization with an 
NPE. It worked for many years until I added a large number of jar dependencies 
because of a new development effort. I've debugged the code by stepping through 
the Tomcat source to the point I've found where it is failing. It seems to be a 
Tomcat bug but of course I'm not convinced since it is far more likely it is 
my problem.

Tomcat version is 9.0.36, though the failure happens in the Tomcat 8 versions 
I've tried as well.

The NPE is triggered by a single "return null" statement in 
org.apache.catalina.core.ApplicationContext line 933. Below is a code snippet 
of where the return statement is. In my failing scenario the wrapper is NOT 
null and isOverridable is already returning false. So it falls through to 
return null.

So here is my question: Why in the world in the code below does the return null 
statement even exist? It seems like the return null at line 933 is the 
precondition the code is trying to establish.

//code from 'org.apache.catalina.core.ApplicationContext'
Wrapper wrapper = (Wrapper) context.findChild(servletName);

// Assume a 'complete' ServletRegistration is one that has a class and
// a name
if (wrapper == null) {
wrapper = context.createWrapper();
wrapper.setName(servletName);
context.addChild(wrapper);
} else {
if (wrapper.getName() != null &&
wrapper.getServletClass() != null) {
if (wrapper.isOverridable()) {
wrapper.setOverridable(false);
} else {
return null; // Line 933
}
}
}
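
For comparison, here is a minimal sketch (not from the original application; the
mapping and class names are made up) of calling code that honours that contract
instead of assuming addServlet() always returns a usable registration:

import javax.servlet.Servlet;
import javax.servlet.ServletContext;
import javax.servlet.ServletRegistration;

public final class SafeServletRegistration {

    // Register a servlet once, coping with the documented null return when a
    // complete ServletRegistration for this name already exists.
    static ServletRegistration registerOnce(ServletContext ctx, String name,
                                            Class<? extends Servlet> servletClass) {
        ServletRegistration.Dynamic reg = ctx.addServlet(name, servletClass);
        if (reg != null) {
            reg.addMapping("/api/*");   // hypothetical mapping
            return reg;
        }
        // null => the context already holds a complete registration for "name";
        // reuse it instead of dereferencing null later.
        return ctx.getServletRegistration(name);
    }
}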

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 9 and response.setTrailerFields

2020-06-24 Thread Julian Reschke

On 24.06.2020 17:13, Mark Thomas wrote:

On 24/06/2020 15:59, Julian Reschke wrote:


Hi,

I just tried to figure out whether Tomcat 9 will let me send trailer
fields in a chunked HTTP/1.1 response, using

https://tomcat.apache.org/tomcat-9.0-doc/servletapi/javax/servlet/http/HttpServletResponse.html#setTrailerFields-java.util.function.Supplier-


I couldn't get it to work yet, so below are some questions:

1) Is this actually supported for HTTP/1.1?


Yes.


2) Will it automatically switch to chunked encoding (leaving out
content-length), or does the application code need to take care of that?


Tomcat will handle all of that but you need to make sure you set the
trailer fields before the response is committed so Tomcat can set the
appropriate headers.


3) I understand that I need to send "Trailer" upfront, but what about
"TE" and "Connection"?


Just the Trailer header. Tomcat will look after the rest.

Mark
...


Weird.

I'm trying this with Apache Jackrabbit trunk and the minimal patch below:

Index: jackrabbit-parent/pom.xml
===
--- jackrabbit-parent/pom.xml   (Revision 1879148)
+++ jackrabbit-parent/pom.xml   (Arbeitskopie)
@@ -554,7 +554,7 @@
       <dependency>
         <groupId>javax.servlet</groupId>
         <artifactId>javax.servlet-api</artifactId>
-        <version>3.1.0</version>
+        <version>4.0.1</version>
         <scope>provided</scope>
       </dependency>
   
Index: jackrabbit-webdav/src/main/java/org/apache/jackrabbit/webdav/server/AbstractWebdavServlet.java
===
--- jackrabbit-webdav/src/main/java/org/apache/jackrabbit/webdav/server/AbstractWebdavServlet.java (Revision 1879148)
+++ jackrabbit-webdav/src/main/java/org/apache/jackrabbit/webdav/server/AbstractWebdavServlet.java (Arbeitskopie)
@@ -95,8 +95,10 @@
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.Enumeration;
+import java.util.HashMap;
 import java.util.List;
 import java.util.Locale;
+import java.util.Map;

 /**
  * AbstractWebdavServlet
@@ -602,6 +604,11 @@
 }
 }

+response.setHeader("Trailer", "foo");
+Map<String, String> trailers = new HashMap<>();
+trailers.put("foo", "bar");
+response.setTrailerFields(() -> trailers);
+
 // spool resource properties and eventually resource content.
 OutputStream out = (sendContent) ? response.getOutputStream() : null;
 resource.spool(getOutputContext(response, out));


When I deploy jackrabbit-webapp in Tomcat 9.0.36, and curl a resource, I
see:


< HTTP/1.1 200
< Trailer: foo
< ETag: "2692247-1592985065092"
< Last-Modified: Wed, 24 Jun 2020 07:51:05 GMT
< Content-Type: image/jpeg
< Content-Length: 2692247
< Date: Wed, 24 Jun 2020 15:33:11 GMT
<
{ [7996 bytes data]


So it does set "Trailer" (so the response was not committed yet), but it
doesn't switch to chunked encoding.

There must be something that I'm doing wrong...

Best regards, Julian

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 9 and response.setTrailerFields

2020-06-24 Thread Mark Thomas
On 24/06/2020 15:59, Julian Reschke wrote:
> 
> Hi,
> 
> I just tried to figure out whether Tomcat 9 will let me send trailer
> fields in a chunked HTTP/1.1 response, using
> 
> https://tomcat.apache.org/tomcat-9.0-doc/servletapi/javax/servlet/http/HttpServletResponse.html#setTrailerFields-java.util.function.Supplier-
> 
> 
> I couldn't get it to work yet, so below are some questions:
> 
> 1) Is this actually supported for HTTP/1.1?

Yes.

> 2) Will it automatically switch to chunked encoding (leaving out
> content-length), or does the application code need to take care of that?

Tomcat will handle all of that but you need to make sure you set the
trailer fields before the response is committed so Tomcat can set the
appropriate headers.

> 3) I understand that I need to send "Trailer" upfront, but what about
> "TE" and "Connection"?

Just the Trailer header. Tomcat will look after the rest.
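
For example, a minimal sketch along those lines (the trailer name and value are
arbitrary; assumes the Servlet 4.0 API, i.e. javax.servlet-api 4.0.x, on the
compile classpath):

import java.io.IOException;
import java.util.Collections;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TrailerDemoServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Both calls must happen before the response is committed, and no
        // Content-Length is set, so Tomcat can switch to chunked encoding.
        resp.setHeader("Trailer", "foo");
        resp.setTrailerFields(() -> Collections.singletonMap("foo", "bar"));
        resp.setContentType("text/plain");
        resp.getWriter().println("hello trailers");
    }
}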

Mark


> 
> (pointer to working example would be cool as well)
> 
> Best regards, Julian
> 
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: broken pipe error keeps increasing open files

2020-06-24 Thread Ayub Khan
Chris,

If some open file is owned by nginx why would it show up if I run the below
command

sudo lsof -p $(cat /var/run/tomcat8.pid)



On Wed, Jun 24, 2020 at 5:53 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Ayub,
>
> On 6/23/20 19:17, Ayub Khan wrote:
> > Yes we have nginx as reverse proxy, below is the nginx config. We
> > notice this issue only when there is high number of requests,
> > during non peak hours we do not see this issue.> location
> > /myapp/myservice{ #local machine proxy_pass
> > http://localhost:8080; proxy_http_version  1.1;
> >
> > proxy_set_headerConnection  $connection_upgrade;
> > proxy_set_headerUpgrade $http_upgrade;
> > proxy_set_headerHost$host; proxy_set_header
> > X-Real-IP   $remote_addr; proxy_set_header
> > X-Forwarded-For $proxy_add_x_forwarded_for;
> >
> >
> > proxy_buffers 16 16k; proxy_buffer_size 32k; }
>
> You might want to read about tuning nginx to drop connections after a
> certain period of time, number of requests, etc. Looks like either a
> bug in nginx or a misconfiguration which allows connections to
> stick-around like this. You may have to ask the nginx people. I have
> no experience with nginx myself, while others here may have some
> experience.
>
> > location / { #  if using AWS Load balancer, this bit checks for the
> > presence of the https proto flag.  if regular http is found, then
> > issue a redirect
> to hit
> > the https endpoint instead if ($http_x_forwarded_proto != 'https')
> > { rewrite ^ https://$host$request_uri? permanent; }
> >
> > proxy_pass  http://127.0.0.1:8080; proxy_http_version
> > 1.1;
> >
> > proxy_set_headerConnection  $connection_upgrade;
> > proxy_set_headerUpgrade $http_upgrade;
> > proxy_set_headerHost$host; proxy_set_header
> > X-Real-IP   $remote_addr; proxy_set_header
> > X-Forwarded-For $proxy_add_x_forwarded_for;
> >
> >
> > proxy_buffers 16 16k; proxy_buffer_size 32k; }
> >
> > *below is the connector*
> >
> > <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
> > connectionTimeout="2000" maxThreads="5" URIEncoding="UTF-8"
> > redirectPort="8443" />
>
> 50k threads is a LOT of threads. Do you expect to handle 50k requests
> simultaneously?
>
> > these ports are random, I am not sure who owns the process.
> >
> > localhost:http-alt->localhost:55866 (CLOSE_WAIT) , here port 55866
> > is a random port.
> I'm sure you'll find that 55866 is owned by nginx. netstat will tell you.
>
> I think you need to look at your nginx configuration. It would also be
> a great time to upgrade to a supported version of Tomcat. I would
> recommend 8.5.56 or 9.0.36.
>
> - -chris
>
> > On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> > Ayub,
> >
> > On 6/23/20 16:23, Ayub Khan wrote:
>  I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)  *and I
>  saw the below output, some in CLOSE_WAIT and others in
>  ESTABLISHED. If there are 200 open file descriptors 160 are
>  in CLOSE_WAIT state. When the count for CLOSE_WAIT increases
>  I just have to restart tomcat.
> 
>  java65189 tomcat8  715u IPv6  237878311
>  0t0 TCP localhost:http-alt->localhost:43760 (CLOSE_WAIT) java
>  65189 tomcat8  716u IPv6  237848923   0t0
>  TCP localhost:http-alt->localhost:40568 (CLOSE_WAIT)
> >
> > These are connections from some process into Tomcat listening on
> > port 8080 (that's what localhost:http-alt is). So what process owns
> > the outgoing connection on port 40568 on the same host?
> >
> > Are you using a reverse proxy?
> >
>  most of the open files are in CLOSE_WAIT state I do not see
>  anything related to database ip.
> >
> > Agreed. It looks like you have a reverse proxy who is losing-track
> > of connections, or who is (re)opening connections when it may be
> > unnecessary.
> >
> > Can you share your  configuration from server.xml?
> > Remember to remove any secrets.
> >
> > -chris
> >
>  On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
>  felix.schumac...@internetallee.de> wrote:
> 
> >
> > Am 22.06.20 um 13:22 schrieb Ayub Khan:
> >> Felix,
> >>
> >> I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/
> >> and from the
> > output
> >> I see majority of them are related to sockets as shown
> >> below, some of
> > them
> >> point to the jar file of tomcat and others to the log
> >> file which is
> > created.
> >>
> >> socket:[2084570754] socket:[2084579487]
> >> socket:[2084578478] socket:[2084570167]
> >
> > Can you try the other command (lsof -p $(cat
> > ...tomcat.pid))? It should give a bit more details on the
> > used sockets than the proc directory.
> >
> > Felix
> >
> >>
> >> On 

Re: Connection Closure due to Fatal Stream with HTTP2

2020-06-24 Thread Mark Thomas
On 24/06/2020 12:17, Mark Thomas wrote:
> On 22/06/2020 11:06, Chirag Dewan wrote:
>> Hi,
>>
>> Update: We found that Tomcat goes OOM when a client closes and opens new
>> connections every second. In the memory dump, we see a lot of
>> RequestInfo objects that are causing the memory spike.
>>
>> After a while, Tomcat goes OOM and starts rejecting requests (I get a
>> request timed out on my client). This seems like a bug to me. 
>>
>> For better understanding, let me explain my use case again:
>>
>> I have a jetty client that sends HTTP2 requests to Tomcat. My
>> requirement is to close a connection after a configurable(say 5000)
>> number of requests/streams and open a new connection that continues to
>> send requests. I close a connection by sending a GoAway frame.
>>
>> When I execute this use case under load, I see that after ~2hours my
>> requests fail and I get a series of errors like request
>> timeouts(5seconds), invalid window update frame, and connection close
>> exception on my client.
>> On further debugging, I found that it's a Tomcat memory problem and it
>> goes OOM after sometime under heavy load with multiple connections being
>> re-established by the clients.
>>
>> image.png
>>
>> image.png
>>
>> Is this a known issue? Or a known behavior with Tomcat?
> 
> Embedded images get dropped by the list software. Post those images
> somewhere we can see them.
> 
>> Please let me know if you have any experience with such a situation. Thanks
>> in advance.
> 
> Nothing comes to mind.
> 
> I'll try some simple tests with HTTP/2.

I don't see a memory leak (the memory is reclaimed eventually) but I do
see possibilities to release memory associated with request processing
sooner.

Right now you need to allocate more memory to the Java process to enable
Tomcat to handle the HTTP/2 load it is presented with.
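
For example (the values are purely illustrative; CATALINA_HOME/bin/setenv.sh is
the usual place for such settings in a standard Tomcat layout):

    # $CATALINA_HOME/bin/setenv.sh -- illustrative heap sizing only
    export CATALINA_OPTS="$CATALINA_OPTS -Xms1g -Xmx4g"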

It looks like a reasonable chunk of memory is released when the
Connection closes that could be released earlier when the associated
Stream closes. I'll take a look at what can be done in that area. In the
meantime, reducing the number of Streams you allow on a Connection
before it is closed should reduce overall memory usage.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Tomcat 9 and response.setTrailerFields

2020-06-24 Thread Julian Reschke



Hi,

I just tried to figure out whether Tomcat 9 will let me send trailer
fields in a chunked HTTP/1.1 response, using

https://tomcat.apache.org/tomcat-9.0-doc/servletapi/javax/servlet/http/HttpServletResponse.html#setTrailerFields-java.util.function.Supplier-

I couldn't get it to work yet, so below are some questions:

1) Is this actually supported for HTTP/1.1?

2) Will it automatically switch to chunked encoding (leaving out
content-length), or does the application code need to take care of that?

3) I understand that I need to send "Trailer" upfront, but what about
"TE" and "Connection"?

(pointer to working example would be cool as well)

Best regards, Julian


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: broken pipe error keeps increasing open files

2020-06-24 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Ayub,

On 6/23/20 19:17, Ayub Khan wrote:
> Yes, we have nginx as reverse proxy; below is the nginx config. We
> notice this issue only when there is a high number of requests,
> during non-peak hours we do not see this issue.
>
> location /myapp/myservice {
>     #local machine
>     proxy_pass          http://localhost:8080;
>     proxy_http_version  1.1;
>
>     proxy_set_header    Connection      $connection_upgrade;
>     proxy_set_header    Upgrade         $http_upgrade;
>     proxy_set_header    Host            $host;
>     proxy_set_header    X-Real-IP       $remote_addr;
>     proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
>
>     proxy_buffers       16 16k;
>     proxy_buffer_size   32k;
> }

You might want to read about tuning nginx to drop connections after a
certain period of time, number of requests, etc. Looks like either a
bug in nginx or a misconfiguration which allows connections to
stick-around like this. You may have to ask the nginx people. I have
no experience with nginx myself, while others here may have some
experience.
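
Purely as an illustration of the kind of tuning meant here (untested; the name
tomcat_backend and the numbers are placeholders, and which of these directives
are available depends on the nginx version in use):

    upstream tomcat_backend {
        server 127.0.0.1:8080;
        keepalive 16;              # keep at most 16 idle connections to Tomcat
        keepalive_requests 1000;   # recycle a connection after 1000 requests
        keepalive_timeout 60s;     # ...or after 60 seconds idle
    }

A location block would then proxy_pass to http://tomcat_backend instead of the
hard-coded address, so nginx reuses and retires backend connections predictably.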

> location / {
>     # if using AWS Load balancer, this bit checks for the presence of the
>     # https proto flag. if regular http is found, then issue a redirect
>     # to hit the https endpoint instead
>     if ($http_x_forwarded_proto != 'https') {
>         rewrite ^ https://$host$request_uri? permanent;
>     }
>
>     proxy_pass          http://127.0.0.1:8080;
>     proxy_http_version  1.1;
>
>     proxy_set_header    Connection      $connection_upgrade;
>     proxy_set_header    Upgrade         $http_upgrade;
>     proxy_set_header    Host            $host;
>     proxy_set_header    X-Real-IP       $remote_addr;
>     proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
>
>     proxy_buffers       16 16k;
>     proxy_buffer_size   32k;
> }
>
> *below is the connector*
>
> <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
>            connectionTimeout="2000" maxThreads="5" URIEncoding="UTF-8"
>            redirectPort="8443" />

50k threads is a LOT of threads. Do you expect to handle 50k requests
simultaneously?

> these ports are random, I am not sure who owns the process.
>
> localhost:http-alt->localhost:55866 (CLOSE_WAIT) , here port 55866
> is a random port.
I'm sure you'll find that 55866 is owned by nginx. netstat will tell you.
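
For example (both commands need root to see sockets owned by other users, and
55866 is just the port quoted from the lsof output above):

    sudo netstat -tnp | grep ':55866'
    # or, with iproute2:
    sudo ss -tnp | grep ':55866'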

I think you need to look at your nginx configuration. It would also be
a great time to upgrade to a supported version of Tomcat. I would
recommend 8.5.56 or 9.0.36.

- -chris

> On Wed, Jun 24, 2020 at 12:48 AM Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
> Ayub,
>
> On 6/23/20 16:23, Ayub Khan wrote:
 I executed  *sudo lsof -p $(cat /var/run/tomcat8.pid)  *and I
 saw the below output, some in CLOSE_WAIT and others in
 ESTABLISHED. If there are 200 open file descriptors 160 are
 in CLOSE_WAIT state. When the count for CLOSE_WAIT increases
 I just have to restart tomcat.

 java65189 tomcat8  715u IPv6  237878311
 0t0 TCP localhost:http-alt->localhost:43760 (CLOSE_WAIT) java
 65189 tomcat8  716u IPv6  237848923   0t0
 TCP localhost:http-alt->localhost:40568 (CLOSE_WAIT)
>
> These are connections from some process into Tomcat listening on
> port 8080 (that's what localhost:http-alt is). So what process owns
> the outgoing connection on port 40568 on the same host?
>
> Are you using a reverse proxy?
>
 most of the open files are in CLOSE_WAIT state I do not see
 anything related to database ip.
>
> Agreed. It looks like you have a reverse proxy who is losing-track
> of connections, or who is (re)opening connections when it may be
> unnecessary.
>
> Can you share your  configuration from server.xml?
> Remember to remove any secrets.
>
> -chris
>
 On Mon, Jun 22, 2020 at 4:27 PM Felix Schumacher <
 felix.schumac...@internetallee.de> wrote:

>
> Am 22.06.20 um 13:22 schrieb Ayub Khan:
>> Felix,
>>
>> I executed ls -l /proc/$(cat /var/run/tomcat8.pid)/fd/
>> and from the
> output
>> I see majority of them are related to sockets as shown
>> below, some of
> them
>> point to the jar file of tomcat and others to the log
>> file which is
> created.
>>
>> socket:[2084570754] socket:[2084579487]
>> socket:[2084578478] socket:[2084570167]
>
> Can you try the other command (lsof -p $(cat
> ...tomcat.pid))? It should give a bit more details on the
> used sockets than the proc directory.
>
> Felix
>
>>
>> On Mon, Jun 22, 2020 at 1:28 PM Felix Schumacher <
>> felix.schumac...@internetallee.de> wrote:
>>
>>> Am 22.06.20 um 11:41 schrieb Ayub Khan:
 Chris,

 I am using HikariCP for connection pooling. If the
 database is leaking connections then I should see
 connection not available exception.

 How do I find out which file descriptors are leaking
 ? these are not
>>> files
 open on disk as there is no 

Re: File "catalina.out" not being created/populated when using Tomcat 9.0.31 + Ubuntu 20.04, and content goes to the Ubuntu syslog instead?

2020-06-24 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Brian,

On 6/23/20 21:33, Brian wrote:
>
>
> -Original Message- From: Emmanuel Bourg
>  Reply-To: Tomcat Users List
>  Date: Tuesday, June 23, 2020 at 20:01 To:
> "users@tomcat.apache.org"  Subject: Re:
> File "catalina.out" not being created/populated when using Tomcat
> 9.0.31 + Ubuntu 20.04, and content goes to the Ubuntu syslog
> instead?
>
> On 24/06/2020 at 02:35, Brian wrote:
>
>> Good news: I updated "/etc/tmpfiles.d/tomcat9.conf" (the file I
>> created) with the new value of 2770. Deleted all the logs inside
>> "/val/log/tomcat9" and restarted Ubuntu. "catalina.out" got
>> created and populated. Bad news: Then I deleted all the logs
>> inside "/val/log/tomcat9" and just restarted Tomcat (which is
>> something I do sometimes, in production). "catalina.out" didn't
>> get created this time.
>>
>> Just to confirm, again I deleted all the logs inside
>> "/val/log/tomcat9" and restarted Ubuntu. "catalina.out" got
>> created and populated again.
>>
>> Any ideas?
>
> The catalina.out file is held by rsyslogd and isn't recreated
> unless you restart rsyslogd. Try this when you clean the logs and
> restart Tomcat:
>
> systemctl restart rsyslog tomcat9
>
>
> Hi,
>
> I just realized that when the "bad news" experiment took place, in
> the syslog there was NOT another of those " file
> '/var/log/tomcat9/catalina.out': open error: Permission denied.."
> errors, so I guess it was not a permissions issue anymore, which
> makes me think that the "2770" value finally solved that issue.
> That is nice, thanks! OK, I restarted rsyslog and then started Tomcat
> again as you advised and... you are right, the catalina.out file
> got created again. So I think you are right about rsyslogd still
> holding the log file.
>
> To be honest with you, I'm happy about the catalina.out file
> finally getting created and I really appreciate your kind help, I
> really do. But I'm not really happy about having to restart rsyslog
> before every time I need to restart Tomcat.
You don't have to restart rsyslog every time you restart Tomcat. You
just have to restart it if you delete its log file out from underneath
it.

> It is weird, and I guess a lot of users will never imagine that
> they have to do that and they will not feel very pleased when they
> realize that the catalina.out file doesn't get created after
> restarting Tomcat. And probably most of them will not even notice
> that the Tomcat log is being added to the syslog, for that matter.
> This whole new relation between syslog and Tomcat is really weird
> and I don't think the users are being warned about it. I have used
> Tomcat+Ubuntu for several years and I haven't seen this
> complication before. If there is an advantage about this relation
> between syslog and Tomcat, I really can't see it.
This is how logging is done with systemd for better or worse.

- -chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: AJP error using mod_proxy__ajp

2020-06-24 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

All,

On 6/24/20 10:29, Christopher Schultz wrote:
> All,
>
> I'm slowly switching from mod_jk to mod_proxy_ajp and I have a
> development environment where I'm getting Bad Gateway responses
> sent to clients along with this exception in my Tomcat log file:
>
> java.lang.IllegalArgumentException: Header message of length [8,194]
> received but the packetSize is only [8,192]
>     at org.apache.coyote.ajp.AjpProcessor.readMessage(AjpProcessor.java:685)
>     at org.apache.coyote.ajp.AjpProcessor.receive(AjpProcessor.java:626)
>     at org.apache.coyote.ajp.AjpProcessor.refillReadBuffer(AjpProcessor.java:734)
>     at org.apache.coyote.ajp.AjpProcessor$SocketInputBuffer.doRead(AjpProcessor.java:1456)
>     at org.apache.coyote.Request.doRead(Request.java:581)
>     at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:344)
>     at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:663)
>     at org.apache.catalina.connector.InputBuffer.readByte(InputBuffer.java:358)
>     at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:93)
>     at org.apache.commons.io.input.ProxyInputStream.read(ProxyInputStream.java:53)
>     at org.apache.commons.io.input.TeeInputStream.read(TeeInputStream.java:106)
>     at java.io.FilterInputStream.read(FilterInputStream.java:83)
>     at my.product.MacInputStream.read(MacInputStream.java:29)
>     at java.io.FilterInputStream.read(FilterInputStream.java:83)
>     at com.sun.org.apache.xerces.internal.impl.XMLEntityManager$RewindableInputStream.read(XMLEntityManager.java:2890)
>     at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:674)
>     at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:148)
>     at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:806)
>     at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
>     at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
>     at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
>     at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
>     at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:327)
>     at javax.xml.parsers.SAXParser.parse(SAXParser.java:195)
>
> This is a web service which is reading the request with a
> SAXParser. It's been running in production (and dev!) for years
> without any issues. It's been running for a few months in
> development, now, with mod_proxy_ajp without any errors.
>
> I know about the "max packet size" and the default is 8192 bytes.
> I haven't changed the default. Here's my <Connector>
> configuration:
>
> <Connector port="8245" redirectPort="443" protocol="AJP/1.3" URIEncoding="UTF-8"
>            executor="tomcatThreadPool" />
>
> Here's the configuration in httpd.conf:
>
> <Proxy "balancer://my-api">
>     BalancerMember "ajp://localhost:8245" timeout=300 ping=5 ttl=60
> </Proxy>
>
> ProxyPass "/my-api/" "balancer://my-api/my-api/"
> ProxyPassReverse "/my-api/" "balancer://my-api/my-api/"
>
> The documentation for mod_proxy_ajp[1] seems to indicate that the
> "Packet Size" for AJP is fixed at 8192 bytes:
>
> " Packet Size
>
> According to much of the code, the max packet size is 8 * 1024
> bytes (8K). The actual length of the packet is encoded in the
> header.
>
> Packet Headers
>
> Packets sent from the server to the container begin with 0x1234.
> Packets sent from the container to the server begin with AB
> (that's the ASCII code for A followed by the ASCII code for B).
> After those first two bytes, there is an integer (encoded as above)
> with the length of the payload. Although this might suggest that
> the maximum payload could be as large as 2^16, in fact, *the code
> sets the maximum to be 8K*. " (emphasis mine)
>
> Does anyone know under what circumstances mod_proxy_ajp might send
> more than 8192 bytes? It looks like mod_proxy_ajp doesn't have any
> way to set the max packet size like mod_jk does.
>
> I should probably be able to set the max packet size on the Tomcat
> side to something higher than 8192 to catch this kind of thing...
> but it looks like it might be a bug in mod_proxy_ajp.
>
> Versions are Apache httpd 2.4.25 (Debian) and Tomcat 8.5.trunk
> (8.5.55). mod_jk is not being used.
>
> Any ideas?
>
> -chris
>
> [1] https://httpd.apache.org/docs/2.4/mod/mod_proxy_ajp.html


Some additional information:

1. The headers of the HTTP request seem to be arriving in a correct
packet before this error occurs. The headers are only a few hundred
bytes (~340) and the request line should be relatively short (~50
bytes or so). Method is POST, protocol is HTTP/1.1.

2. Apache httpd is terminating TLS. I have no configuration for
forwarding TLS information over 

AJP error using mod_proxy__ajp

2020-06-24 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

All,

I'm slowly switching from mod_jk to mod_proxy_ajp and I have a
development environment where I'm getting Bad Gateway responses sent
to clients along with this exception in my Tomcat log file:

java.lang.IllegalArgumentException: Header message of length [8,194]
received but the packetSize is only [8,192]
    at org.apache.coyote.ajp.AjpProcessor.readMessage(AjpProcessor.java:685)
    at org.apache.coyote.ajp.AjpProcessor.receive(AjpProcessor.java:626)
    at org.apache.coyote.ajp.AjpProcessor.refillReadBuffer(AjpProcessor.java:734)
    at org.apache.coyote.ajp.AjpProcessor$SocketInputBuffer.doRead(AjpProcessor.java:1456)
    at org.apache.coyote.Request.doRead(Request.java:581)
    at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:344)
    at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:663)
    at org.apache.catalina.connector.InputBuffer.readByte(InputBuffer.java:358)
    at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:93)
    at org.apache.commons.io.input.ProxyInputStream.read(ProxyInputStream.java:53)
    at org.apache.commons.io.input.TeeInputStream.read(TeeInputStream.java:106)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at my.product.MacInputStream.read(MacInputStream.java:29)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at com.sun.org.apache.xerces.internal.impl.XMLEntityManager$RewindableInputStream.read(XMLEntityManager.java:2890)
    at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:674)
    at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:148)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:806)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
    at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
    at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
    at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
    at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:327)
    at javax.xml.parsers.SAXParser.parse(SAXParser.java:195)

This is a web service which is reading the request with a SAXParser.
It's been running in production (and dev!) for years without any
issues. It's been running for a few months in development, now, with
mod_proxy_ajp without any errors.

I know about the "max packet size" and the default is 8192 bytes. I
haven't changed the default. Here's my <Connector> configuration:

<Connector port="8245" redirectPort="443" protocol="AJP/1.3"
           URIEncoding="UTF-8" executor="tomcatThreadPool" />

Here's the configuration in httpd.conf:


<Proxy "balancer://my-api">
    BalancerMember "ajp://localhost:8245" timeout=300 ping=5 ttl=60
</Proxy>

ProxyPass "/my-api/" "balancer://my-api/my-api/"
ProxyPassReverse "/my-api/" "balancer://my-api/my-api/"

The documentation for mod_proxy_ajp[1] seems to indicate that the
"Packet Size" for AJP is fixed at 8192 bytes:

"
Packet Size

According to much of the code, the max packet size is 8 * 1024 bytes
(8K). The actual length of the packet is encoded in the header.

Packet Headers

Packets sent from the server to the container begin with 0x1234.
Packets sent from the container to the server begin with AB (that's
the ASCII code for A followed by the ASCII code for B). After those
first two bytes, there is an integer (encoded as above) with the
length of the payload. Although this might suggest that the maximum
payload could be as large as 2^16, in fact, *the code sets the maximum
to be 8K*.
"
(emphasis mine)

Does anyone know under what circumstances mod_proxy_ajp might send
more than 8192 bytes? It looks like mod_proxy_ajp doesn't have any way
to set the max packet size like mod_jk does.

I should probably be able to set the max packet size on the Tomcat
side to something higher than 8192 to catch this kind of thing... but
it looks like it might be a bug in mod_proxy_ajp.
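
For reference, the Tomcat-side knob is the packetSize attribute on the AJP
<Connector>; the value below is only an example, and both ends have to agree on
the size (mod_jk pairs this with max_packet_size in workers.properties; whether
mod_proxy_ajp can be made to match, e.g. via ProxyIOBufferSize, would need
checking in the httpd docs):

<Connector port="8245" protocol="AJP/1.3" redirectPort="443"
           URIEncoding="UTF-8" executor="tomcatThreadPool"
           packetSize="16384" />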

Versions are Apache httpd 2.4.25 (Debian) and Tomcat 8.5.trunk
(8.5.55). mod_jk is not being used.

Any ideas?

- -chris

[1] https://httpd.apache.org/docs/2.4/mod/mod_proxy_ajp.html

Re: Connection Closure due to Fatal Stream with HTTP2

2020-06-24 Thread Mark Thomas
On 22/06/2020 11:06, Chirag Dewan wrote:
> Hi,
> 
> Update: We found that Tomcat goes OOM when a client closes and opens new
> connections every second. In the memory dump, we see a lot of
> RequestInfo objects that are causing the memory spike.
> 
> After a while, Tomcat goes OOM and starts rejecting requests (I get a
> request timed out on my client). This seems like a bug to me. 
> 
> For better understanding, let me explain my use case again:
> 
> I have a jetty client that sends HTTP2 requests to Tomcat. My
> requirement is to close a connection after a configurable(say 5000)
> number of requests/streams and open a new connection that continues to
> send requests. I close a connection by sending a GoAway frame.
> 
> When I execute this use case under load, I see that after ~2hours my
> requests fail and I get a series of errors like request
> timeouts(5seconds), invalid window update frame, and connection close
> exception on my client.
> On further debugging, I found that it's a Tomcat memory problem and it
> goes OOM after sometime under heavy load with multiple connections being
> re-established by the clients.
> 
> image.png
> 
> image.png
> 
> Is this a known issue? Or a known behavior with Tomcat?

Embedded images get dropped by the list software. Post those images
somewhere we can see them.

> Please let me know if you have any experience with such a situation. Thanks
> in advance.

Nothing comes to mind.

I'll try some simple tests with HTTP/2.

Mark



> 
> On Sun, Jun 14, 2020 at 11:30 AM Chirag Dewan  > wrote:
> 
> Hi,
> 
> This is without load balancer actually. I am directly sending to Tomcat.
> 
> Update:
> 
> Part of the issue I found turned out to be 9.0.29. I observed that when
> requests were timed out on the client (2 seconds), the client would send a RST
> frame. And the GoAway from Tomcat was perhaps a bug. In 9.0.36, RST
> frame is replied with a RST from Tomcat. 
> 
> Now the next part to troubleshoot is why after about an hour or so,
> requests are timed out at Tomcat.
> 
> Could close to 100 HTTP2 connections per second cause this on Tomcat?
> 
> Thanks
> 
> On Sun, 14 Jun, 2020, 12:27 AM Michael Osipov,  > wrote:
> 
> Am 2020-06-13 um 08:42 schrieb Chirag Dewan:
> > Hi,
> >
> > We are observing that under high load, my clients start
> receiving a GoAway
> > frame with error:
> >
> > *Connection[{id}], Stream[{id}] an error occurred during
> processing that
> > was fatal to the connection.*
> >
> > Background : We have implemented our clients to close
> connections after
> > every 500-1000 requests (streams). This is a load balancer
> requirement that
> > we are working on and hence such a behavior. So with a
> throughput of around
> > 19k, almost 40 connections are closed and recreated every second.
> >
> > After we receive this frame, my clients start behaving
> erroneously. Before
> > this as well, my clients start sending RST_STREAM with
> canceled for each
> > request. Could this be due to the number of connections we
> open? Is it
> > related to the version of Tomcat? Or maybe my clients are
> misbehaving?
> >
> > Now since I only receive this under heavy load, I can't quite
> picture
> > enough reasons for this to happen.
> >
> > Any possible clues on where I should start looking?
> >
> > My Stack:
> > Server - Tomcat 9.0.29
> > Client - Jetty 9.x
> 
> Does the same happen w/o the load balancer?
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> 
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org