RE: Tomcat 9.0.65 suspected memory leak

2023-02-10 Thread Chen Levy
Thanks Mark, the workaround seems to be working.
Chen

> -----Original Message-----
> From: Mark Thomas 
> Sent: Thursday, February 9, 2023 12:41
> To: users@tomcat.apache.org
> Subject: Re: Tomcat 9.0.65 suspected memory leak
> 
> On 09/02/2023 13:25, Mark Thomas wrote:
> > On 09/02/2023 13:04, Mark Thomas wrote:
> >> On 04/02/2023 22:06, Chen Levy wrote:
> >>
> >>> Mark, I believe a change in Tomcat 9.0.65 causes it to accumulate
> >>> open connections:
> >>> I took a fresh Tomcat, unzipped and modified server.xml with only
> >>> the
> >>> following:
> >>> 1. Changed port 8080 to port 80
> >>> 2. Changed port 8443 to port 443
> >>> 3. Uncommented the nio connector and added the snippet
> >>>     <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
> >>>     <SSLHostConfig>
> >>>         <Certificate certificateKeystoreFile="conf/tomcat_noroot.p12"
> >>>                      certificateKeyAlias="..."
> >>>                      certificateKeystorePassword="..."
> >>>                      certificateKeystoreType="PKCS12"/>
> >>>     </SSLHostConfig>
> >>>
> >>> I used Chrome to call the default index.html with Wireshark in the
> >>> middle:
> >>> With 9.0.63 - 20 seconds after the last data frame, came a GOAWAY
> >>> from the server.
> >>> With 9.0.65 - No GOAWAY was sent, and the server and client kept
> >>> ACKing each other.
> >>>
> >>> Tomcat 9.0.71 and 10.1.5 behaved similarly - no GOAWAY was sent.
> >>>
> >>> Test was conducted with:
> >>> Wireshark Version 4.0.3 (v4.0.3-0-gc552f74cdc23)
> >>> Chrome Version 109.0.5414.120
> >>> JDK 17.0.6+10
> >>> Windows 11
> >>
> >> Thanks for the reproduction details. I'll take a look now.
> >
> > A quick workaround is to configure useAsyncIO="false" on the Connector.
> 
> Fixed for the next round of releases.
> 
> Mark
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 9.0.65 suspected memory leak

2023-02-09 Thread Mark Thomas

On 09/02/2023 13:25, Mark Thomas wrote:

On 09/02/2023 13:04, Mark Thomas wrote:

On 04/02/2023 22:06, Chen Levy wrote:

Mark, I believe a change in Tomcat 9.0.65 causes it to accumulate 
open connections:
I took a fresh Tomcat, unzipped and modified server.xml with only the 
following:

1. Changed port 8080 to port 80
2. Changed port 8443 to port 443
3. Uncommented the nio connector and added the snippet
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateKeystoreFile="conf/tomcat_noroot.p12"
                     certificateKeyAlias="..."
                     certificateKeystorePassword="..."
                     certificateKeystoreType="PKCS12"/>
    </SSLHostConfig>

I used Chrome to call the default index.html with Wireshark in the 
middle:
With 9.0.63 - 20 seconds after the last data frame, came a GOAWAY 
from the server.
With 9.0.65 - No GOAWAY was sent, and the server and client kept 
ACKing each other.


Tomcat 9.0.71 and 10.1.5 behaved similarly - no GOAWAY was sent.

Test was conducted with:
Wireshark Version 4.0.3 (v4.0.3-0-gc552f74cdc23)
Chrome Version 109.0.5414.120
JDK 17.0.6+10
Windows 11


Thanks for the reproduction details. I'll take a look now.


A quick workaround is to configure useAsyncIO="false" on the Connector.


Fixed for the next round of releases.

Mark




Re: Tomcat 9.0.65 suspected memory leak

2023-02-09 Thread Mark Thomas

On 09/02/2023 13:04, Mark Thomas wrote:

On 04/02/2023 22:06, Chen Levy wrote:

Mark, I believe a change in Tomcat 9.0.65 causes it to accumulate open 
connections:
I took a fresh Tomcat, unzipped and modified server.xml with only the 
following:

1. Changed port 8080 to port 80
2. Changed port 8443 to port 443
3. Uncommented the nio connector and added the snippet
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateKeystoreFile="conf/tomcat_noroot.p12"
                     certificateKeyAlias="..."
                     certificateKeystorePassword="..."
                     certificateKeystoreType="PKCS12"/>
    </SSLHostConfig>

I used Chrome to call the default index.html with Wireshark in the 
middle:
With 9.0.63 - 20 seconds after the last data frame, came a GOAWAY from 
the server.
With 9.0.65 - No GOAWAY was sent, and the server and client kept 
ACKing each other.


Tomcat 9.0.71 and 10.1.5 behaved similarly - no GOAWAY was sent.

Test was conducted with:
Wireshark Version 4.0.3 (v4.0.3-0-gc552f74cdc23)
Chrome Version 109.0.5414.120
JDK 17.0.6+10
Windows 11


Thanks for the reproduction details. I'll take a look now.


A quick workaround is to configure useAsyncIO="false" on the Connector.
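Applied to the reproduction configuration above, the workaround would look something like the following sketch (ports, protocol class, and keystore details are illustrative placeholders, not taken from the thread):

```xml
<!-- Sketch only: everything except useAsyncIO="false" is a placeholder. -->
<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true"
           useAsyncIO="false">
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateKeystoreFile="conf/tomcat_noroot.p12"
                     certificateKeystoreType="PKCS12" />
    </SSLHostConfig>
</Connector>
```

Setting useAsyncIO="false" makes the NIO connector fall back to the non-asynchronous I/O code path, avoiding the affected code until the fix is released.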

Mark




Re: Tomcat 9.0.65 suspected memory leak

2023-02-09 Thread Mark Thomas

On 04/02/2023 22:06, Chen Levy wrote:


Mark, I believe a change in Tomcat 9.0.65 causes it to accumulate open 
connections:
I took a fresh Tomcat, unzipped and modified server.xml with only the following:
1. Changed port 8080 to port 80
2. Changed port 8443 to port 443
3. Uncommented the nio connector and added the snippet
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateKeystoreFile="conf/tomcat_noroot.p12"
                     certificateKeyAlias="..."
                     certificateKeystorePassword="..."
                     certificateKeystoreType="PKCS12"/>
    </SSLHostConfig>

I used Chrome to call the default index.html with Wireshark in the middle:
With 9.0.63 - 20 seconds after the last data frame, came a GOAWAY from the 
server.
With 9.0.65 - No GOAWAY was sent, and the server and client kept ACKing each 
other.

Tomcat 9.0.71 and 10.1.5 behaved similarly - no GOAWAY was sent.

Test was conducted with:
Wireshark Version 4.0.3 (v4.0.3-0-gc552f74cdc23)
Chrome Version 109.0.5414.120
JDK 17.0.6+10
Windows 11


Thanks for the reproduction details. I'll take a look now.

Mark




RE: Tomcat 9.0.65 suspected memory leak

2023-02-04 Thread Chen Levy

> -----Original Message-----
> From: Mark Thomas 
> Sent: Monday, September 19, 2022 13:02
> To: users@tomcat.apache.org
> Subject: Re: Tomcat 9.0.65 suspected memory leak
> 
> On 15/09/2022 14:11, Chen Levy wrote:
> > Hello Experts
> >
> > We’ve recently upgraded some of our production servers to Tomcat
> > 9.0.65; every upgraded server crashed with java.lang.OutOfMemoryError
> > within an hour or so under load.
> >
> > The exact same setup (same application, Linux kernel, Java version
> > etc.) with Tomcat 9.0.63 does not exhibit this issue.
> >
> > A heap-dump through MAT gave the following leak suspect (leak report
> > attached):
> >
> > “
> >
> > 14,364 instances of
> > "org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper", loaded by
> > "java.net.URLClassLoader @ 0x6be257090" occupy 4,489,221,944 (91.95%)
> bytes.
> >
> > These instances are referenced from one instance of
> > "java.util.concurrent.ConcurrentHashMap$Node[]", loaded by " > class loader>", which occupies 590,736 (0.01%) bytes.
> >
> > Keywords
> >
> >      org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper
> >
> >      java.net.URLClassLoader @ 0x6be257090
> >
> >      java.util.concurrent.ConcurrentHashMap$Node[]
> >
> > “
> >
> > Please let me know if I should provide additional information.
> 
> That looks like 14k current connections, which isn't unreasonable for a
> Tomcat instance under load.
> 
> There are connector related changes between 9.0.63 and 9.0.65 but nothing
> that is obviously related to the issue you are seeing.
> 
> At this point there isn't enough information to differentiate between:
> - a regression introduced in Tomcat between 9.0.63 and 9.0.65
> - a change in Tomcat between 9.0.63 and 9.0.65 that exposed a bug in the
>   deployed web application
> - a change in Tomcat between 9.0.63 and 9.0.65 that triggered an
>   increase in memory usage sufficient to trigger an OOME in your
>   environment
> 
> What we would need to investigate this further is a test case that
> demonstrates a leak. It doesn't have to trigger an OOME - it just has to
> demonstrate the JVM retaining references to objects you'd expect to have
> been eligible for GC. If you can reduce it to a single request even better.
> 
> Mark


Mark, I believe a change in Tomcat 9.0.65 causes it to accumulate open 
connections:
I took a fresh Tomcat, unzipped and modified server.xml with only the following:
1. Changed port 8080 to port 80
2. Changed port 8443 to port 443
3. Uncommented the nio connector and added the snippet
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateKeystoreFile="conf/tomcat_noroot.p12"
                     certificateKeyAlias="..."
                     certificateKeystorePassword="..."
                     certificateKeystoreType="PKCS12"/>
    </SSLHostConfig>

I used Chrome to call the default index.html with Wireshark in the middle:
With 9.0.63 - 20 seconds after the last data frame, came a GOAWAY from the 
server.
With 9.0.65 - No GOAWAY was sent, and the server and client kept ACKing each 
other.

Tomcat 9.0.71 and 10.1.5 behaved similarly - no GOAWAY was sent.

Test was conducted with:
Wireshark Version 4.0.3 (v4.0.3-0-gc552f74cdc23)
Chrome Version 109.0.5414.120
JDK 17.0.6+10
Windows 11
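
For readers who want to reproduce the capture, something along these lines should work (a sketch for a Unix-like shell; the interface name, paths, and browser invocation are placeholders — Chrome honors the SSLKEYLOGFILE environment variable, which lets tshark decrypt the TLS session):

```shell
# Export TLS session keys so tshark can decrypt the HTTP/2 traffic.
export SSLKEYLOGFILE=/tmp/tls-keys.log
google-chrome https://localhost/ &

# Show only HTTP/2 GOAWAY frames (frame type 0x7).
tshark -i eth0 -o tls.keylog_file:/tmp/tls-keys.log -Y 'http2.type == 7'
```

With 9.0.63 a GOAWAY frame should appear roughly 20 seconds after the last data frame; with 9.0.65 the filter matches nothing.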

Chen


Re: Tomcat 9.0.65 suspected memory leak

2022-09-19 Thread Mark Thomas

On 15/09/2022 14:11, Chen Levy wrote:

Hello Experts

We’ve recently upgraded some of our production servers to Tomcat 9.0.65; 
every upgraded server crashed with java.lang.OutOfMemoryError within an 
hour or so under load.


The exact same setup (same application, Linux kernel, Java version etc.) 
with Tomcat 9.0.63 does not exhibit this issue.


A heap-dump through MAT gave the following leak suspect (leak report 
attached):


“

14,364 instances of 
"org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper", loaded by 
"java.net.URLClassLoader @ 0x6be257090" occupy 4,489,221,944 (91.95%) bytes.


These instances are referenced from one instance of 
"java.util.concurrent.ConcurrentHashMap$Node[]", loaded by "<system class loader>", which occupies 590,736 (0.01%) bytes.


Keywords

     org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper

     java.net.URLClassLoader @ 0x6be257090

     java.util.concurrent.ConcurrentHashMap$Node[]

“

Please let me know if I should provide additional information.


That looks like 14k current connections, which isn't unreasonable for a 
Tomcat instance under load.


There are connector related changes between 9.0.63 and 9.0.65 but 
nothing that is obviously related to the issue you are seeing.


At this point there isn't enough information to differentiate between:
- a regression introduced in Tomcat between 9.0.63 and 9.0.65
- a change in Tomcat between 9.0.63 and 9.0.65 that exposed a bug in the
  deployed web application
- a change in Tomcat between 9.0.63 and 9.0.65 that triggered an
  increase in memory usage sufficient to trigger an OOME in your
  environment

What we would need to investigate this further is a test case that 
demonstrates a leak. It doesn't have to trigger an OOME - it just has to 
demonstrate the JVM retaining references to objects you'd expect to have 
been eligible for GC. If you can reduce it to a single request even better.
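
A generic sketch of the kind of minimal demonstration Mark describes: show that an object you expect to be collectable is still strongly reachable after GC. This is illustrative only, not Tomcat code; all names here are made up for the example.

```java
import java.lang.ref.WeakReference;

public class RetentionCheck {

    // Stands in for the leak: e.g. a map entry that is never removed.
    static Object leakHolder;

    // True if the referent survives repeated GC cycles, i.e. something
    // still holds a strong reference to it.
    static boolean survivesGc(WeakReference<?> ref) throws InterruptedException {
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        return ref.get() != null;
    }

    public static void main(String[] args) throws Exception {
        Object payload = new byte[1024];
        leakHolder = payload; // the "leak": a forgotten strong reference
        WeakReference<Object> ref = new WeakReference<>(payload);
        payload = null;

        System.out.println(survivesGc(ref)); // true: leakHolder pins the object

        leakHolder = null; // release the last strong reference
        System.out.println(survivesGc(ref)); // false: now eligible for GC
    }
}
```

A test case built this way demonstrates retention without needing to drive the server all the way to an OOME.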


Mark




Tomcat 9.0.65 suspected memory leak

2022-09-15 Thread Chen Levy
Hello Experts

We've recently upgraded some of our production servers to Tomcat 9.0.65; every 
upgraded server crashed with java.lang.OutOfMemoryError within an hour or so 
under load.
The exact same setup (same application, Linux kernel, Java version etc.) with 
Tomcat 9.0.63 does not exhibit this issue.

A heap-dump through MAT gave the following leak suspect (leak report attached):

"
14,364 instances of "org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper", 
loaded by "java.net.URLClassLoader @ 0x6be257090" occupy 4,489,221,944 (91.95%) 
bytes.

These instances are referenced from one instance of 
"java.util.concurrent.ConcurrentHashMap$Node[]", loaded by "", which occupies 590,736 (0.01%) bytes.

Keywords

org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper
java.net.URLClassLoader @ 0x6be257090
java.util.concurrent.ConcurrentHashMap$Node[]
"

Please let me know if I should provide additional information.

Java: OpenJDK Runtime Environment Temurin-17.0.4+8 (build 17.0.4+8)
Linux: 4.14.281-212.502.amzn2.aarch64
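
For anyone reproducing this analysis: a heap dump suitable for MAT can be captured with the JDK's jmap tool (shown as a sketch; `<pid>` is a placeholder for the Tomcat JVM's process id):

```shell
# "live" forces a full GC first, so only strongly reachable objects are dumped.
jmap -dump:live,format=b,file=heap.hprof <pid>
```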

Thanks
Chen
