Re: maxSwallowSize and misbehaving clients (e.g., mod_proxy_http)

2020-05-22 Thread Osipov, Michael




Am 2020-05-22 um 18:51 schrieb [ext] Osipov, Michael:


Am 2020-05-22 um 13:26 schrieb Mark Thomas:

On 21/05/2020 23:30, Osipov, Michael wrote:




Output will be sent privately.


Got it. Tx.

Looking at the direct case.

It looks like you have debug logging enabled for everything. You only
need it for the org.apache.coyote.http2 package.

grep "http2" catalina.2020-05-21.log | less

gives a nice clear view of what is happening which is roughly:

22:34:29 Client sends request headers and ~64k of body on stream 1
22:34:29 Server sends 401 response and resets stream 1
22:34:29 Client resends request with authorization header but NO body
22:34:31 Server tries to read request body
22:34:51 Server times out trying to read request body
22:34:51 Server sends 500 response due to timeout and resets stream 3

It looks like the response to stream 3 includes an authorization 
challenge.


I think something isn't quite right with the authentication process.

Which authenticator are you using?


I am using my SPNEGO authenticator which is publicly available [1] and 
based on the code which I donated many years ago.



Would expect to see a second challenge from the server or should the
client be authenticated once the first auth header has been processed?


No, from a server's perspective the authentication has been completed 
already, but the client failed to provide the body.



What is triggering the read of the body here?

 >

Why isn't the client sending the body?


That's a good question. I need to ask fellow curl committers; this may be 
a bug in curl.


I am looking at this with another client now, HttpClient 5.0. Even w/o 
authentication, the situation is even worse:

...


I found one issue with HttpClient and Tomcat via HTTP/1.1. I have 
decrypted the TLS traffic [1]. I can see that HttpClient sends the 
headers along with a 4 KiB chunk of the ZIP file. In return, Tomcat 
sends the 401 response with:

> Keep-Alive: timeout=300
> Connection: keep-alive

The client keeps sending 8 KiB blocks. After 2,134,016 bytes written 
there is a TLS alert: Close Notify. A few packets later: RST.


I would expect to see here: "Connection: close".

If necessary, I can provide the pcap and the keylog file.


[1] https://github.com/neykov/extract-tls-secrets
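
By the way, on the HttpClient side the usual way to keep a client from 
streaming a large body into a 401 is the "Expect: 100-continue" 
handshake, so the body is only sent once the server has accepted the 
request headers. A minimal sketch for HttpClient 5 (classic API, 
untested here):

  import org.apache.hc.client5.http.config.RequestConfig;
  import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
  import org.apache.hc.client5.http.impl.classic.HttpClients;

  public class ExpectContinueClient {

      // Ask HttpClient to send "Expect: 100-continue" so the request body
      // is only transmitted after the server signals it will accept the
      // request.
      public static CloseableHttpClient build() {
          RequestConfig requestConfig = RequestConfig.custom()
                  .setExpectContinueEnabled(true)
                  .build();
          return HttpClients.custom()
                  .setDefaultRequestConfig(requestConfig)
                  .build();
      }
  }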

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



ANN: Bill Stewart's Apache Tomcat Setup for Windows [9.0.35]

2020-05-22 Thread Bill Stewart
Please see here:

https://github.com/Bill-Stewart/ApacheTomcatSetup

The Setup executable is available on the Releases tab.

Bill

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: [OT] Loading KeyStores, detecting types

2020-05-22 Thread Christopher Schultz

All,

On 5/22/20 17:01, Christopher Schultz wrote:
> All,
>
> I've been writing a utility to scan a bunch of arbitrary files for
> certificates that are nearing expiration. It's written in Java and
> it currently works with PEM-encoded DER files (aka OpenSSL files)
> and PKCS12 keystores. I'm sure it would also work with the other
> flavors of Java key store, but I haven't (yet) tried them.
>
> What I have noticed is that:
>
> KeyStore ks = KeyStore.getInstance("JKS");
> ks.load(pkcs12InputStream, null);
>
> ...seems to have no problem whatsoever with the fact that the
> "keystore type" is JKS but the file being loaded is PKCS12. That
> makes sense to me, since the in-memory keystore doesn't really have
> a "type": only the on-disk representation of the keystore has a
> "type", et c.
>
> All of the information I can find online seems to indicate that
> the (in-memory) KeyStore "type" must match what you are loading, or
> you'll get an exception. But I'm finding that the in-memory type
> doesn't matter, and the load works as long as the file is legit.
>
> But the in-memory type doesn't change when the file is loaded.
> Hmm.
>
> So two off-topic questions:
>
> 1. Can I rely on the "type doesn't matter" behavior I'm seeing, or
> do I have to loop-over all the supported keystore types,
> attempting to (re)load the file each time looking for the right
> type -- just to be safe?
Answering my own question, here. Evidently, in Java 1.8 u60 or so,
Oracle added the "keystore.type.compat" security property which
defaults to "true". This allows auto-detection of formats regardless
of the in-memory type.

So it seems that, to be safe, I'll have to iterate through the
supported formats "just in case" because that setting can always be
disabled.
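
To be concrete, the fallback I have in mind is just a probe loop like the 
sketch below. This is untested and the candidate list is only a guess at 
which types are worth probing; a failed load is simply treated as "not 
this type":

  import java.io.IOException;
  import java.io.InputStream;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.security.GeneralSecurityException;
  import java.security.KeyStore;

  public class KeyStoreProbe {

      // Candidate types to try, in order. The list is an assumption, not
      // exhaustive; add provider-specific types as needed.
      private static final String[] CANDIDATE_TYPES = { "PKCS12", "JKS", "JCEKS" };

      // Tries each candidate type until one loads cleanly, so the scan does
      // not depend on the keystore.type.compat property being enabled.
      public static KeyStore loadAnyType(Path file, char[] password) throws IOException {
          for (String type : CANDIDATE_TYPES) {
              try (InputStream in = Files.newInputStream(file)) {
                  KeyStore ks = KeyStore.getInstance(type);
                  ks.load(in, password);
                  return ks;
              } catch (GeneralSecurityException | IOException e) {
                  // Not this type (or wrong password); try the next candidate.
              }
          }
          throw new IOException("Unrecognized keystore format: " + file);
      }
  }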

> 2. Is there a way to determine the type of file that WAS loaded
> into a KeyStore? It seems that there is a magic header I can use if
> I want to look at the raw bytes to detect the Java keystore formats
> (0xfeedfeed, 0xcececece), but I think that's not exactly true for
> PKCS12 and maybe some other supported formats. I'd rather not look
> at the bytes myself unless it's absolutely necessary.

I'm still not sure about this.
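
For what it's worth, the byte-sniffing I'm trying to avoid would look 
roughly like the sketch below. The JKS/JCEKS magic numbers are well 
known; treating a leading DER SEQUENCE tag (0x30) as "probably PKCS12" 
is only a heuristic, since any DER/BER file starts that way:

  import java.io.IOException;
  import java.io.InputStream;
  import java.nio.file.Files;
  import java.nio.file.Path;

  public class KeyStoreTypeSniffer {

      // Guesses the on-disk keystore format from its first four bytes.
      public static String sniff(Path file) throws IOException {
          byte[] head = new byte[4];
          try (InputStream in = Files.newInputStream(file)) {
              if (in.read(head) < 4) {
                  return "UNKNOWN";
              }
          }
          int magic = ((head[0] & 0xFF) << 24) | ((head[1] & 0xFF) << 16)
                  | ((head[2] & 0xFF) << 8) | (head[3] & 0xFF);
          if (magic == 0xFEEDFEED) {
              return "JKS";
          }
          if (magic == 0xCECECECE) {
              return "JCEKS";
          }
          if ((head[0] & 0xFF) == 0x30) {
              return "PKCS12 (heuristic: DER SEQUENCE)";
          }
          return "UNKNOWN";
      }
  }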

-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



[OT] Loading KeyStores, detecting types

2020-05-22 Thread Christopher Schultz

All,

I've been writing a utility to scan a bunch of arbitrary files for
certificates that are nearing expiration. It's written in Java and it
currently works with PEM-encoded DER files (aka OpenSSL files) and
PKCS12 keystores. I'm sure it would also work with the other flavors
of Java key store, but I haven't (yet) tried them.

What I have noticed is that:

  KeyStore ks = KeyStore.getInstance("JKS");
  ks.load(pkcs12InputStream, null);

...seems to have no problem whatsoever with the fact that the
"keystore type" is JKS but the file being loaded is PKCS12. That makes
sense to me, since the in-memory keystore doesn't really have a
"type": only the on-disk representation of the keystore has a "type", et
c.

All of the information I can find online seems to indicate that the
(in-memory) KeyStore "type" must match what you are loading, or you'll
get an exception. But I'm finding that the in-memory type doesn't
matter, and the load works as long as the file is legit.

But the in-memory type doesn't change when the file is loaded. Hmm.

So two off-topic questions:

1. Can I rely on the "type doesn't matter" behavior I'm seeing, or do
I have to loop-over all the supported keystore types, attempting to
(re)load the file each time looking for the right type -- just to be safe?

2. Is there a way to determine the type of file that WAS loaded into a
KeyStore? It seems that there is a magic header I can use if I want to
look at the raw bytes to detect the Java keystore formats (0xfeedfeed,
0xcececece), but I think that's not exactly true for PKCS12 and maybe
some other supported formats. I'd rather not look at the bytes myself
unless it's absolutely necessary.

Thanks,
-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Clustering/Session Replication in docker swarm

2020-05-22 Thread Christopher Schultz

Praveen,

On 5/20/20 12:27, Praveen Kumar K S wrote:
> Hello,
>
> I'm not sure if this is the right forum to ask this question. Since
> this is a bigger community, I hope someone might have faced this
> issue and hope I will get some help.
>
> I'm seeing many posts achieving Tomcat session replication in
> docker swarm using traefik. But I just don't want to add another
> component. I use httpd as frontend for tomcat. Tomcat will be
> deployed as a service with 4 replicas and will be scaled when
> required. httpd is running as docker service and both are in same
> network. My question is, is there any way to achieve Tomcat session
> replication in docker swarm in this case ?

I'm not an auto-scaling guy, so this might be a stupid question: does
Docker-swarm have its own orchestration service, or does it use
something like Kubernetes?

There is a "cloud" cluster membership manager which is currently
undocumented but should be usable. The only current implementation is
for a Kubernetes back-end, but I'm sure another implementation could
be built for whatever orchestration scheme you have in your environment.
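
I haven't tried this myself, but for an embedded/programmatic setup I 
would expect the wiring to look roughly like the sketch below. The class 
and property names (CloudMembershipService, KubernetesMembershipProvider, 
membershipProviderClassName) are taken from the current, undocumented 
code and may change, so treat this as a starting point rather than a 
recipe:

  import org.apache.catalina.core.StandardEngine;
  import org.apache.catalina.ha.tcp.SimpleTcpCluster;
  import org.apache.catalina.tribes.group.GroupChannel;
  import org.apache.catalina.tribes.membership.cloud.CloudMembershipService;

  public class CloudClusterConfig {

      // Sketch only: names are assumptions based on the current code base.
      public static void configure(StandardEngine engine) {
          CloudMembershipService membership = new CloudMembershipService();
          // Selects the Kubernetes-API-based provider; other orchestrators
          // would need their own MembershipProvider implementation.
          membership.setMembershipProviderClassName(
                  "org.apache.catalina.tribes.membership.cloud.KubernetesMembershipProvider");

          GroupChannel channel = new GroupChannel();
          channel.setMembershipService(membership);

          SimpleTcpCluster cluster = new SimpleTcpCluster();
          cluster.setChannel(channel);
          engine.setCluster(cluster);
      }
  }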

-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: maxSwallowSize and misbehaving clients (e.g., mod_proxy_http)

2020-05-22 Thread Osipov, Michael


Am 2020-05-22 um 13:26 schrieb Mark Thomas:

On 21/05/2020 23:30, Osipov, Michael wrote:




Output will be sent privately.


Got it. Tx.

Looking at the direct case.

It looks like you have debug logging enabled for everything. You only
need it for the org.apache.coyote.http2 package.

grep "http2" catalina.2020-05-21.log | less

gives a nice clear view of what is happening which is roughly:

22:34:29 Client sends request headers and ~64k of body on stream 1
22:34:29 Server sends 401 response and resets stream 1
22:34:29 Client resends request with authorization header but NO body
22:34:31 Server tries to read request body
22:34:51 Server times out trying to read request body
22:34:51 Server sends 500 response due to timeout and resets stream 3

It looks like the response to stream 3 includes an authorization challenge.

I think something isn't quite right with the authentication process.

Which authenticator are you using?


I am using my SPNEGO authenticator which is publicly available [1] and 
based on the code which I donated many years ago.



Would expect to see a second challenge from the server or should the
client be authenticated once the first auth header has been processed?


No, from a server's perspective the authentication has been completed 
already, but the client failed to provide the body.



What is triggering the read of the body here?

>

Why isn't the client sending the body?


That's a good question. I need to ask fellow curl committers; this may be 
a bug in curl.


I am looking at this with another client now, HttpClient 5.0. Even w/o 
authentication, the situation is even worse:



69 [main] DEBUG org.apache.hc.client5.http.impl.classic.InternalHttpClient - 
ex-0001: preparing request execution
145 [main] DEBUG org.apache.hc.client5.http.protocol.RequestAddCookies - Cookie 
spec selected: strict
150 [main] DEBUG org.apache.hc.client5.http.protocol.RequestAuthCache - Auth 
cache not set in the context
150 [main] DEBUG org.apache.hc.client5.http.impl.classic.ProtocolExec - 
ex-0001: target auth state: UNCHALLENGED
151 [main] DEBUG org.apache.hc.client5.http.impl.classic.ProtocolExec - 
ex-0001: proxy auth state: UNCHALLENGED
151 [main] DEBUG org.apache.hc.client5.http.impl.classic.ConnectExec - ex-0001: 
acquiring connection with route {s}->https://:1
151 [main] DEBUG org.apache.hc.client5.http.impl.classic.InternalHttpClient - 
ex-0001: acquiring endpoint (3 MINUTES)
152 [main] DEBUG org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - 
ex-0001: endpoint lease request (3 MINUTES) [route: 
{s}->https://:1][total available: 0; route allocated: 0 of 5; 
total allocated: 0 of 25]
175 [main] DEBUG org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - 
ex-0001: endpoint leased [route: {s}->https://:1][total 
available: 0; route allocated: 1 of 5; total allocated: 1 of 25]
183 [main] DEBUG 
org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - 
ex-0001: acquired ep-
183 [main] DEBUG org.apache.hc.client5.http.impl.classic.InternalHttpClient - 
ex-0001: acquired endpoint ep-
183 [main] DEBUG org.apache.hc.client5.http.impl.classic.ConnectExec - ex-0001: 
opening connection {s}->https://:1
184 [main] DEBUG org.apache.hc.client5.http.impl.classic.InternalHttpClient - 
ep-: connecting endpoint (3 MINUTES)
184 [main] DEBUG 
org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager - ep-: 
connecting endpoint to https://:1 (3 MINUTES)
201 [main] DEBUG org.apache.hc.client5.http.impl.io.DefaultHttpClientConnectionOperator - 
http-outgoing-0: connecting to /:1
201 [main] DEBUG org.apache.hc.client5.http.ssl.SSLConnectionSocketFactory - Connecting 
socket to /:1 with timeout 3 MINUTES
237 [main] DEBUG org.apache.hc.client5.http.ssl.SSLConnectionSocketFactory - 
Enabled protocols: [TLSv1.3, TLSv1.2]
237 [main] DEBUG org.apache.hc.client5.http.ssl.SSLConnectionSocketFactory - 
Enabled cipher suites:[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, 
TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, 
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, 
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, 
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, 
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, 
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, 
TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_DHE_DSS_WITH_AES_256_GCM_SHA384, 
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_DSS_WITH_AES_128_GCM_SHA256, 
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, 
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, 
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, 
TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, 
TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384, 

Re: Performance Comparison HTTP1 vs HTTP2 in Tomcat 9.0.29

2020-05-22 Thread Mark Thomas
On 22/05/2020 16:01, Chirag Dewan wrote:
> Thanks for the quick response Mark.
> I agree 1024 concurrent streams are a bit far fetched and may cause an
> overhead. But at the same time, I have tried the same test with the Jetty
> Multiplexed connection pool with 100 concurrent streams(that is actually
> updated from the initial Settings frame).
> 
> And even with that kind of connection strategy, we see ~6K
> throughput. And to my surprise, even with 20 established connections, we
> could not reach the throughput of the HTTP1.1 connector.
> 
> Are there any benchmarking results for HTTP2 in comparison to HTTP1.1 that I
> can refer to?

Have a look at Jean-Frederic's HTTP/2 presentations:
http://tomcat.apache.org/presentations.html

As with most performance tests, the results you get depend a lot on
how you structure the test.

Mark



> 
> Chirag
> 
> On Fri, May 22, 2020 at 4:29 PM Mark Thomas  wrote:
> 
>> On 22/05/2020 11:23, Chirag Dewan wrote:
>>> Hi,
>>>
>>> I am trying to move to HTTP2 based APR connector from my HTTP1 based
>>> connector because of some customer requirements.
>>>
>>> I am trying to form some sort of throughput benchmark for HTTP2 in
>>> comparison to HTTP1. I have a simple Jersey service that accepts a JSON
>>> request and sends 200 with some headers.
>>>
>>> I have observed that HTTP2 is somehow stuck at 6K as compared to 15K in
>>> HTTP1.1 with the same amount of CPU and memory consumed.
>>> My client with HTTP1.1 is based on HTTP components and opens up to 100
>>> connections with Tomcat. On HTTP2, I have a Jetty client that opens 2
>>> connections with multiplexing of 1024. I tried increasing the
>>> connections to 20 as well, but that only has adverse effects.
>>>
>>> I am running Tomcat on a K8 pod with 3Gi CPU and 2Gi memory. With both
>>> HTTP2 and HTTP1.1 Tomcat consumes all 3 cores and approximately 800m
>> memory.
>>>
>>> In the thread dumps with HTTP2, I see a lot of BLOCKED threads:
>>> [image: image.png]
>>> Most of the threads are blocked in writeHeaders.
>>>
>>> Am I missing something here? Any help is much appreciated.
>>
>> With such a simple response and high concurrency I suspect you are
>> hitting a bottleneck with 1024 (or 100 if you haven't changed the
>> defaults) threads all trying to write to a single network connection at
>> once. That is never going to perform well.
>>
>> HTTP/2 is not a magic "make things faster" protocol. It reduces overhead
>> in some areas and increases overhead in others. Whether you see a
>> benefit is going to depend on where the bottleneck is in your system.
>>
>> If you are testing on a single machine or on a local network I'd expect
>> the additional complexity of HTTP/2 multiplexing to quickly dominate
>> the results.
>>
>> If you want an idea of what is going on, I recommend using a profiler
>> although be aware that - unless there is an obvious performance issue -
>> you can quickly get to the point where getting the level of detail
>> required to track down the next bottleneck causes the profiler to create
>> more overhead than the issue you are trying to measure thereby
>> distorting the results.
>>
>> Mark
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
>> For additional commands, e-mail: users-h...@tomcat.apache.org
>>
>>
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Implementing Store and getting java.io.StreamCorruptedException

2020-05-22 Thread Christopher Schultz

Jonathan,

On 5/20/20 10:55, Jonathan Yom-Tov wrote:
> I implemented my own Store which uses Redis to persist sessions
> (I'm using Jedis as the interface library). I copied most of the
> load()/save() code from FileStore. When my Store loads the session
> from Redis I consistently get java.io.StreamCorruptedException:
> Inconsistent vector internals. Any ideas on why this might be
> happening?

Does everything work if you don't have a java.util.Vector in your session?

Possibly relevant: https://bugs.openjdk.java.net/browse/JDK-8216331

-chris

>
> Here's the relevant code:
>
> @Override
> public Session load(String sessionId) throws ClassNotFoundException, IOException {
>     System.out.println("JEDIS load " + sessionId);
>     String key = getKey(sessionId);
>     byte[] bytes = jedis.get(key.getBytes(UTF8));
>     System.out.println("JEDIS loaded " + bytes.length + " bytes");
>
>     ClassLoader oldThreadContextCL =
>             manager.getContext().bind(Globals.IS_SECURITY_ENABLED, null);
>     try (ByteArrayInputStream bis = new ByteArrayInputStream(bytes);
>             ObjectInputStream ois = new ObjectInputStream(bis)) {
>         StandardSession session = (StandardSession) manager.createEmptySession();
>         session.readObjectData(ois);
>         session.setManager(manager);
>
>         return session;
>     } catch (Exception e) {
>         System.err.println(e.getMessage());
>         e.printStackTrace();
>         return null;
>     } finally {
>         manager.getContext().unbind(Globals.IS_SECURITY_ENABLED, oldThreadContextCL);
>     }
> }
>
> @Override
> public void save(Session session) throws IOException {
>     System.out.println("JEDIS save " + session.getId());
>     String key = getKey(session.getId());
>     try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
>             ObjectOutputStream oos = new ObjectOutputStream(bos)) {
>         ((StandardSession) session).writeObjectData(oos);
>         jedis.set(key.getBytes(UTF8), bos.toByteArray());
>     }
> }
>

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Performance Comparison HTTP1 vs HTTP2 in Tomcat 9.0.29

2020-05-22 Thread Chirag Dewan
Thanks for the quick response Mark.
I agree 1024 concurrent streams are a bit far fetched and may cause an
overhead. But at the same time, I have tried the same test with the Jetty
Multiplexed connection pool with 100 concurrent streams(that is actually
updated from the initial Settings frame).

And even with that kind of connection strategy, we see ~6K
throughput. And to my surprise, even with 20 established connections, we
could not reach the throughput of the HTTP1.1 connector.

Are there any benchmarking results for HTTP2 in comparison to HTTP1.1 that I
can refer to?

Chirag

On Fri, May 22, 2020 at 4:29 PM Mark Thomas  wrote:

> On 22/05/2020 11:23, Chirag Dewan wrote:
> > Hi,
> >
> > I am trying to move to HTTP2 based APR connector from my HTTP1 based
> > connector because of some customer requirements.
> >
> > I am trying to form some sort of throughput benchmark for HTTP2 in
> > comparison to HTTP1. I have a simple Jersey service that accepts a JSON
> > request and sends 200 with some headers.
> >
> > I have observed that HTTP2 is somehow stuck at 6K as compared to 15K in
> > HTTP1.1 with the same amount of CPU and memory consumed.
> > My client with HTTP1.1 is based on HTTP components and opens up to 100
> > connections with Tomcat. On HTTP2, I have a Jetty client that opens 2
> > connections with multiplexing of 1024. I tried increasing the
> > connections to 20 as well, but that only has adverse effects.
> >
> > I am running Tomcat on a K8 pod with 3Gi CPU and 2Gi memory. With both
> > HTTP2 and HTTP1.1 Tomcat consumes all 3 cores and approximately 800m
> memory.
> >
> > In the thread dumps with HTTP2, I see a lot of BLOCKED threads:
> > [image: image.png]
> > Most of the threads are blocked in writeHeaders.
> >
> > Am I missing something here? Any help is much appreciated.
>
> With such a simple response and high concurrency I suspect you are
> hitting a bottleneck with 1024 (or 100 if you haven't changed the
> defaults) threads all trying to write to a single network connection at
> once. That is never going to perform well.
>
> HTTP/2 is not a magic "make things faster" protocol. It reduces overhead
> in some areas and increases overhead in others. Whether you see a
> benefit is going to depend on where the bottleneck is in your system.
>
> If you are testing on a single machine or on a local network I'd expect
> the additional complexity of HTTP/2 multiplexing to quickly dominate
> the results.
>
> If you want an idea of what is going on, I recommend using a profiler
> although be aware that - unless there is an obvious performance issue -
> you can quickly get to the point where getting the level of detail
> required to track down the next bottleneck causes the profiler to create
> more overhead than the issue you are trying to measure thereby
> distorting the results.
>
> Mark
>
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
>
>


Re: maxSwallowSize and misbehaving clients (e.g., mod_proxy_http)

2020-05-22 Thread Mark Thomas
On 21/05/2020 23:30, Osipov, Michael wrote:



> Output will be sent privately.

Got it. Tx.

Looking at the direct case.

It looks like you have debug logging enabled for everything. You only
need it for the org.apache.coyote.http2 package.

grep "http2" catalina.2020-05-21.log | less

gives a nice clear view of what is happening which is roughly:

22:34:29 Client sends request headers and ~64k of body on stream 1
22:34:29 Server sends 401 response and resets stream 1
22:34:29 Client resends request with authorization header but NO body
22:34:31 Server tries to read request body
22:34:51 Server times out trying to read request body
22:34:51 Server sends 500 response due to timeout and resets stream 3

It looks like the response to stream 3 includes an authorization challenge.

I think something isn't quite right with the authentication process.

Which authenticator are you using?

Would expect to see a second challenge from the server or should the
client be authenticated once the first auth header has been processed?

What is triggering the read of the body here?

Why isn't the client sending the body?



The proxy logs don't show any http2 traffic at all.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: RST on TCP level sent by Tomcat

2020-05-22 Thread Mark Thomas
On 22/05/2020 07:39, Arshiya Shariff wrote:
> Hi Mark ,
> 
> 1.Currently we have configured max http2 threads as 40 , but tomcat is 
> allowing more than 300 connections , is there a way to check how many http2 
> connections tomcat will allow ?
> 
> 2. Is maxThreads the maxConnections Or is there any other way to set max 
> connections ?
> We are setting properties by extending the Connector.java class.

These are explained in the documentation:

http://tomcat.apache.org/tomcat-9.0-doc/config/http.html#Standard_Implementation

http://tomcat.apache.org/tomcat-9.0-doc/config/http2.html#Common_Attributes
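
Roughly, for an embedded Connector those attributes map onto calls like 
the sketch below. The values are placeholders, not recommendations, and 
it assumes the NIO HTTP/1.1 connector with the HTTP/2 upgrade protocol:

  import org.apache.catalina.connector.Connector;
  import org.apache.coyote.http2.Http2Protocol;

  public class ConnectorSetup {

      public static Connector buildConnector() {
          Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
          connector.setPort(8443);

          // maxThreads limits request-processing threads; maxConnections limits
          // how many connections the connector will accept and track at once.
          connector.setProperty("maxThreads", "40");
          connector.setProperty("maxConnections", "300");

          Http2Protocol http2 = new Http2Protocol();
          // Idle HTTP/2 connections are closed after this many milliseconds.
          http2.setKeepAliveTimeout(20000);
          connector.addUpgradeProtocol(http2);

          return connector;
      }
  }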

Mark


> 
> Embedded Tomcat : 9.0.22
> 
> Thanks and Regards
> Arshiya Shariff
> 
> -Original Message-
> From: Mark Thomas  
> Sent: Wednesday, May 20, 2020 3:42 PM
> To: users@tomcat.apache.org
> Subject: Re: RST on TCP level sent by Tomcat
> 
> 
> 
> On 20/05/2020 10:07, Arshiya Shariff wrote:
>> Hi Mark,
>> Thank you for the response.
>>
>> Getting back on Query 3 and 4.
>>
> There are no active streams and still connection is not being closed by 
> tomcat , and after sometime for new requests tomcat is sending RST.
> As it is a production issue, it's hard for us to reproduce this at our 
> end and retest.
>>
>>   1.How long does new connection have to wait when connection limit reached 
>> , when TCP closed it with RST for such waiting connections ?
> 
> That will depend on the client's connection timeout. Tomcat has no control 
> over that.
> 
>>  2.What is the idle timeout in 9.0.22 for http2 if not provided , will there 
>> be issues if it is infinite also ?
> 
> Again. You need to upgrade. There are issues with HTTP/2 timeouts in that version.
> 
> Mark
> 
> 
>>
>>
>> Thanks and Regards
>> Arshiya Shariff
>>
>>
>> -Original Message-
>> From: Mark Thomas 
>> Sent: Wednesday, May 20, 2020 1:00 PM
>> To: users@tomcat.apache.org
>> Subject: Re: RST on TCP level sent by Tomcat
>>
>> On 20/05/2020 07:02, Arshiya Shariff wrote:
>>> Hi Team ,
>>>
>>> 1.We are facing a problem where tomcat is closing the http2 connections 
>>> silently without sending GOAWAY and FIN. Under what cases does this happen ?
>>
>> Tomcat always tries to write the GOAWAY frame. Network issues may prevent 
>> the client receiving it.
>>
>>> 2. What happens when maxkeepaliverequests reaches the configured limit, 
>>> will it close connections silently?
>>
>> Nothing. The limit does not exist in HTTP/2.
>>
>>> 3. What happens when max Connections is reached, will it close older 
>>> connections?
>>
>> No. New connections will have to wait until a connection is available.
>>
>>> 4. Currently we see keepalive timeout is default 20 seconds, but the 
>>> connection is not closed after that.   For requests received after 3 hours 
>>> also we are sending response .Is there any way to close idle-connections ?
>>
>> Again, please upgrade and re-test.
>>
>> The keep-alive timeout only applies once the entire connection is idle - 
>> i.e. there are no currently processing streams.
>>
>> Mark
>>
>>
>>>
>>> Embedded Tomcat : 9.0.22
>>>
>>> Thanks and Regards
>>> Arshiya Shariff
>>>
>>>
>>> -Original Message-
>>> From: Arshiya Shariff
>>> Sent: Monday, May 18, 2020 4:45 PM
>>> To: Mark Thomas ; users@tomcat.apache.org
>>> Cc: M Venkata Pratap M 
>>> Subject: RE: RST on TCP level sent by Tomcat
>>>
>>> Hi Mark,
>>> Thank you for the quick response.
>>>
>>> Please provide us a little more clarity on the 3rd query :
>>>
>>> 3. We see that RST is sent by tomcat on receiving http2 request, when  does 
>>> this happen ? 
>> When things go wrong. E.g. when the client sends a request to a 
>> connection that has been closed.
>>>
>>>  Why does tomcat not send GOAWAY on connection close, upon next request 
>>> from client it sends RST ?
>>>
>>> Also, Can you please send us the references to the timeout related fixes in 
>>> 9.0.35 (since 9.0.22).
>>>
>>> Thanks and Regards
>>> Arshiya Shariff
>>>
>>>
>>>
>>> -Original Message-
>>> From: Mark Thomas 
>>> Sent: Monday, May 18, 2020 4:17 PM
>>> To: users@tomcat.apache.org
>>> Subject: Re: RST on TCP level sent by Tomcat
>>>
>>> On 18/05/2020 11:01, Arshiya Shariff wrote:
 Hi Team,

 Can you please help us with the below queries :
>>>
>>> There have been various timeout related fixes since 9.0.22. Please upgrade 
>>> to 9.0.35 and re-test.
>>>
 1. When does a http2 connection close ? We see that the 
 keepAliveTimeout is
 20 seconds by default, but it is not closing the connection on 
 keepAliveTimeout.
>>>
>>> Please re-test with 9.0.35.
>>>
 2. How to keep the connections alive / How to enable ping frames to 
 be sent to the other end to keep the connection alive ?
>>>
>>> There is no standard API to send an HTTP/2 ping. If you want to keep the 
>>> connections alive for longer, use a longer keep-alive setting.
>>>
 3. We see that RST is sent by tomcat on receiving http2 request, 
 when does 

Re: Performance Comparison HTTP1 vs HTTP2 in Tomcat 9.0.29

2020-05-22 Thread Mark Thomas
On 22/05/2020 11:23, Chirag Dewan wrote:
> Hi,
> 
> I am trying to move to HTTP2 based APR connector from my HTTP1 based
> connector because of some customer requirements.
> 
> I am trying to form some sort of throughput benchmark for HTTP2 in
> comparison to HTTP1. I have a simple Jersey service that accepts a JSON
> request and sends 200 with some headers.
> 
> I have observed that HTTP2 is somehow stuck at 6K as compared to 15K in
> HTTP1.1 with the same amount of CPU and memory consumed. 
> My client with HTTP1.1 is based on HTTP components and opens up to 100
> connections with Tomcat. On HTTP2, I have a Jetty client that opens 2
> connections with multiplexing of 1024. I tried increasing the
> > connections to 20 as well, but that only has adverse effects.
> 
> I am running Tomcat on a K8 pod with 3Gi CPU and 2Gi memory. With both
> HTTP2 and HTTP1.1 Tomcat consumes all 3 cores and approximately 800m memory.
> 
> In the thread dumps with HTTP2, I see a lot of BLOCKED threads:
> [image: image.png]
> Most of the threads are blocked in writeHeaders.
>
> Am I missing something here? Any help is much appreciated.

With such a simple response and high concurrency I suspect you are
hitting a bottleneck with 1024 (or 100 if you haven't changed the
defaults) threads all trying to write to a single network connection at
once. That is never going to perform well.

HTTP/2 is not a magic "make things faster" protocol. It reduces overhead
in some areas and increases overhead in others. Whether you see a
benefit is going to depend on where the bottleneck is in your system.

If you are testing on a single machine or on a local network I'd expect
the additional complexity of HTTP/2 multiplexing to quickly dominate
the results.

If you want an idea of what is going on, I recommend using a profiler
although be aware that - unless there is an obvious performance issue -
you can quickly get to the point where getting the level of detail
required to track down the next bottleneck causes the profiler to create
more overhead than the issue you are trying to measure thereby
distorting the results.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Performance Comparison HTTP1 vs HTTP2 in Tomcat 9.0.29

2020-05-22 Thread Chirag Dewan
Hi,

I am trying to move to HTTP2 based APR connector from my HTTP1 based
connector because of some customer requirements.

I am trying to form some sort of throughput benchmark for HTTP2 in
comparison to HTTP1. I have a simple Jersey service that accepts a JSON
request and sends 200 with some headers.

I have observed that HTTP2 is somehow stuck at 6K as compared to 15K in
HTTP1.1 with the same amount of CPU and memory consumed.
My client with HTTP1.1 is based on HTTP components and opens up to 100
connections with Tomcat. On HTTP2, I have a Jetty client that opens 2
connections with multiplexing of 1024. I tried increasing the connections
to 20 as well, but that only has adverse effects.

I am running Tomcat on a K8 pod with 3Gi CPU and 2Gi memory. With both
HTTP2 and HTTP1.1 Tomcat consumes all 3 cores and approximately 800m memory.

In the thread dumps with HTTP2, I see a lot of BLOCKED threads:
[image: image.png]
Most of the threads are blocked in *writeHeaders*.

Am I missing something here? Any help is much appreciated.

Thank you
Chirag


Re: Http2 tomcat server taking time in responding when 1st StreamId is a large integer value like 2147483641

2020-05-22 Thread Mark Thomas
On 22/05/2020 04:46, Prateek Kohli wrote:
> Thanks Mark.
> 
> Do we need to raise a bug for this?

Generally, if the committers know about a bug it will get fixed. Having
a Bugzilla issue is not a requirement for a bug to get fixed. This is on
my TODO list for today unless someone beats me to it.

That said, opening a Bugzilla issue is generally a useful thing to do
because:
- it provides a reference to the specific issue (helpful if duplicates
  are reported)
- it won't get forgotten about if the committers get distracted by some
  bigger /more urgent issue

Kind regards,

Mark


> 
> Regards,
> Prateek Kohli
> 
> -Original Message-
> From: Mark Thomas  
> Sent: Thursday, May 21, 2020 8:43 PM
> To: users@tomcat.apache.org
> Subject: Re: Http2 tomcat server taking time in responding when 1st StreamId 
> is a large integer value like 2147483641
> 
> On 21/05/2020 13:30, Prateek Kohli wrote:
>> Hi,
>>
>> I debugged this further and the problem seems to be because of the below 
>> code in Http2UpgradeHandler class:
>>
>> private void closeIdleStreams(int newMaxActiveRemoteStreamId) throws Http2Exception {
>>     for (int i = maxActiveRemoteStreamId + 2; i < newMaxActiveRemoteStreamId; i += 2) {
>>         Stream stream = getStream(i, false);
>>         if (stream != null) {
>>             stream.closeIfIdle();
>>         }
>>     }
>>     maxActiveRemoteStreamId = newMaxActiveRemoteStreamId;
>> }
>>
>> When we take 1st StreamId as 2147483641, the above loop takes around 4~5 
>> seconds to execute and hence, the response is delayed.
> 
> That is where I suspected the issue would be but hadn't got around to 
> confirming it. This will get fixed for the next release round (due in a 
> couple of weeks).
> 
> Mark
> 
> 
>>
>> Regards,
>> Prateek Kohli
>>
>> -Original Message-
>> From: Manuel Dominguez Sarmiento 
>> Sent: Thursday, May 21, 2020 3:34 PM
>> To: Tomcat Users List ; Prateek Kohli 
>> 
>> Subject: Re: Http2 tomcat server taking time in responding when 1st 
>> StreamId is a large integer value like 2147483641
>>
>> I must say that we're also seeing weird, seemingly random response 
>> delays from Tomcat on HTTP/2. We haven't looked into it at such a low 
>> level though. We're currently on
>> 9.0.35 but we've been seeing this on previous versions as well.
>>
>> *Manuel Dominguez Sarmiento*
>>
>> On 21/05/2020 05:32, Prateek Kohli wrote:
>>>
>>> Hello,
>>>
>>> Tomcat version : 9.0.29
>>>
>>> We are running a Tomcat Http2 Server and a Jetty http2 client.
>>>
>>> When we send the 1st request from Jetty client to tomcat server with 
>>> streamId number as 1, tomcat sends the WINDOW_UPDATE header and the 
>>> response in 1~2 milliseconds.
>>>
>>> Packet number 164 is the response in the below tcpdump.
>>>
>>> But when we send the 1st request from jetty client to tomcat server 
>>> with streamId as 2147483641, the 1st response from tomcat comes 
>>> after almost 5 seconds
>>>
>>> And the response for subsequent requests comes within 1~2 milliseconds.
>>>
>>> In the below tcpdump it can be seen that the response packet number
>>> 167 comes after almost 5 seconds from the tomcat server.
>>>
>>> Would you please be able to explain why the response from tomcat 
>>> server is getting delayed when the 1st StreamId number is a large 
>>> integer i.e. 2147483641.
>>>
>>> Regards,
>>>
>>> Prateek Kohli
>>>
>>
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
>> For additional commands, e-mail: users-h...@tomcat.apache.org
>>
> 
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Vulnerability ---Remote Web Server Apache Tomcat Contains Default Files

2020-05-22 Thread Mark Thomas
On 22/05/2020 10:06, Reddy, Tippana Krishnanandan wrote:
> Hi All,
> 
> We are using Tomcat version 8.5.31 and we have observed the below vulnerability:
> 
> Title: Remote Web Server Apache Tomcat Contains Default Files
> 
> Issue: The default error page, default index page, example JSPs, /example 
> servlets are installed on the remote Apache Tomcat server. These files should 
> be removed as they may help an attacker uncover information about the remote 
> Tomcat install or host itself or they may themselves contain vulnerabilities 
> such as
> cross-site scripting issues.
> 
> Please let us know how to fix this Vulnerability.

http://tomcat.apache.org/tomcat-8.5-doc/security-howto.html

In particular:

http://tomcat.apache.org/tomcat-8.5-doc/security-howto.html#Default_web_applications

and

http://tomcat.apache.org/tomcat-8.5-doc/security-howto.html#Valves


You should also review https://tomcat.apache.org/security-8.html


In Tomcat 9 onwards there is the option to configure a static file as
the default error page.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Vulnerability ---Remote Web Server Apache Tomcat Contains Default Files

2020-05-22 Thread Reddy, Tippana Krishnanandan
Hi All,

We are using Tomcat version 8.5.31 and we have observed the below vulnerability:

Title: Remote Web Server Apache Tomcat Contains Default Files

Issue: The default error page, default index page, example JSPs, /example 
servlets are installed on the remote Apache Tomcat server. These files should 
be removed as they may help an attacker uncover information about the remote 
Tomcat install or host itself or they may themselves contain vulnerabilities 
such as
cross-site scripting issues.

Please let us know how to fix this Vulnerability.


Thanks in Advance

Regards,
Krishna


This message (including any attachments) contains confidential information 
intended for a specific individual and purpose, and is protected by law. If you 
are not the intended recipient, you should delete this message and any 
disclosure, copying, or distribution of this message, or the taking of any 
action based on it, by you is strictly prohibited.

Deloitte refers to a Deloitte member firm, one of its related entities, or 
Deloitte Touche Tohmatsu Limited ("DTTL"). Each Deloitte member firm is a 
separate legal entity and a member of DTTL. DTTL does not provide services to 
clients. Please see www.deloitte.com/about to learn more.

v.E.1


RE: RST on TCP level sent by Tomcat

2020-05-22 Thread Arshiya Shariff
Hi Mark ,

1.Currently we have configured max http2 threads as 40 , but tomcat is allowing 
more than 300 connections , is there a way to check how many http2 connections 
tomcat will allow ?

2. Is maxThreads the maxConnections Or is there any other way to set max 
connections ?
We are setting properties by extending the Connector.java class.

Embedded Tomcat : 9.0.22

Thanks and Regards
Arshiya Shariff

-Original Message-
From: Mark Thomas  
Sent: Wednesday, May 20, 2020 3:42 PM
To: users@tomcat.apache.org
Subject: Re: RST on TCP level sent by Tomcat



On 20/05/2020 10:07, Arshiya Shariff wrote:
> Hi Mark,
> Thank you for the response.
> 
> Getting back on Query 3 and 4.
> 
 There are no active streams and still connection is not being closed by 
 tomcat , and after sometime for new requests tomcat is sending RST.
 As it is a production issue, it's hard for us to reproduce this at our end 
 and retest.
> 
>   1.How long does new connection have to wait when connection limit reached , 
> when TCP closed it with RST for such waiting connections ?

That will depend on the client's connection timeout. Tomcat has no control over 
that.

>  2.What is the idle timeout in 9.0.22 for http2 if not provided , will there 
> be issues if it is infinite also ?

Again. You need to upgrade. There are issues with HTTP/2 timeouts in that version.

Mark


> 
> 
> Thanks and Regards
> Arshiya Shariff
> 
> 
> -Original Message-
> From: Mark Thomas 
> Sent: Wednesday, May 20, 2020 1:00 PM
> To: users@tomcat.apache.org
> Subject: Re: RST on TCP level sent by Tomcat
> 
> On 20/05/2020 07:02, Arshiya Shariff wrote:
>> Hi Team ,
>>
>> 1.We are facing a problem where tomcat is closing the http2 connections 
>> silently without sending GOAWAY and FIN. Under what cases does this happen ?
> 
> Tomcat always tries to write the GOAWAY frame. Network issues may prevent the 
> client receiving it.
> 
>> 2. What happens when maxkeepaliverequests reaches the configured limit, will 
>> it close connections silently?
> 
> Nothing. The limit does not exist in HTTP/2.
> 
>> 3. What happens when max Connections is reached, will it close older 
>> connections?
> 
> No. New connections will have to wait until a connection is available.
> 
>> 4. Currently we see keepalive timeout is default 20 seconds, but the 
>> connection is not closed after that.   For requests received after 3 hours 
>> also we are sending response .Is there any way to close idle-connections ?
> 
> Again, please upgrade and re-test.
> 
> The keep-alive timeout only applies once the entire connection is idle - i.e. 
> there are no currently processing streams.
> 
> Mark
> 
> 
>>
>> Embedded Tomcat : 9.0.22
>>
>> Thanks and Regards
>> Arshiya Shariff
>>
>>
>> -Original Message-
>> From: Arshiya Shariff
>> Sent: Monday, May 18, 2020 4:45 PM
>> To: Mark Thomas ; users@tomcat.apache.org
>> Cc: M Venkata Pratap M 
>> Subject: RE: RST on TCP level sent by Tomcat
>>
>> Hi Mark,
>> Thank you for the quick response.
>>
>> Please provide us a little more clarity on the 3rd query :
>>
>> 3. We see that RST is sent by tomcat on receiving http2 request, when  does 
>> this happen ? 
> When things go wrong. E.g. when the client sends a request to a 
> connection that has been closed.
>>
>>  Why does tomcat not send GOAWAY on connection close, upon next request from 
>> client it sends RST ?
>>
>> Also, Can you please send us the references to the timeout related fixes in 
>> 9.0.35 (since 9.0.22).
>>
>> Thanks and Regards
>> Arshiya Shariff
>>
>>
>>
>> -Original Message-
>> From: Mark Thomas 
>> Sent: Monday, May 18, 2020 4:17 PM
>> To: users@tomcat.apache.org
>> Subject: Re: RST on TCP level sent by Tomcat
>>
>> On 18/05/2020 11:01, Arshiya Shariff wrote:
>>> Hi Team,
>>>
>>> Can you please help us with the below queries :
>>
>> There have been various timeout related fixes since 9.0.22. Please upgrade 
>> to 9.0.35 and re-test.
>>
>>> 1. When does a http2 connection close ? We see that the 
>>> keepAliveTimeout is
>>> 20 seconds by default, but it is not closing the connection on 
>>> keepAliveTimeout.
>>
>> Please re-test with 9.0.35.
>>
>>> 2. How to keep the connections alive / How to enable ping frames to 
>>> be sent to the other end to keep the connection alive ?
>>
>> There is no standard API to send an HTTP/2 ping. If you want to keep the 
>> connections alive for longer, use a longer keep-alive setting.
>>
>>> 3. We see that RST is sent by tomcat on receiving http2 request, 
>>> when does this happen ?
>>
>> When things go wrong. E.g. when the client sends a request to a connection 
>> that has been closed.
>>
>>> 4. What are the recommended ipv4.tcp settings for these kind of scenarios ?
>>
>> There are no recommended settings.
>>
>> Mark
>>
>>
>>>
>>>
>>>
>>> Embedded Tomcat : 9.0.22
>>>
>>> Java Version : 1.8.0.201
>>>
>>> Hardware  : Red Hat Enterprise