Bug ID: 64574
           Summary: Some HTTP/1.1 responses are broken (missing headers,
                    empty chunks)
           Product: Tomcat 8
           Version: 8.5.55
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P2
         Component: Connectors
  Target Milestone: ----

We are experiencing very strange behaviour with our application running on
Tomcat 8.5.55 since we changed our frontend reverse proxy from Nginx to
Traefik. Traefik's access logs show HTTP status 500 for responses that Tomcat
itself logged as HTTP 200. This affected only a few requests per day on our
production servers, and we were not able to reproduce the effect on test
systems. So we started a network capture on the production servers and were
able to capture a few broken responses from Tomcat.

It turns out that Tomcat produces malformed HTTP responses every now and then.
The HTTP headers are missing and the response starts immediately with chunked
data. One affected request from our capture:

GET / HTTP/1.1
Host: ***
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36
Accept: image/webp,image/apng,image/*,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
Cookie: JSESSIONID=****
Referer: https://***
Sec-Fetch-Dest: image
Sec-Fetch-Mode: no-cors
Sec-Fetch-Site: same-origin
X-Forwarded-For: ***
X-Forwarded-Host: ***
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: ***
X-Real-Ip: ***
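
For comparison, a well-formed chunked HTTP/1.1 response (an illustrative
sketch, not taken from our capture) starts with the status line and headers
before any chunk data:

```
HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

4
Wiki
0

```

The broken responses we captured instead begin directly with a chunk-size line
(or only the terminating 0-length chunk), with no status line or headers at
all.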




We also captured responses that contained only an empty chunk and no headers
at all.

Since the issue first appeared with the change from Nginx to Traefik, the most
obvious difference was that Nginx uses HTTP/1.0 in its requests to Tomcat
while Traefik uses HTTP/1.1. So we suspected keep-alive and pipelining to be
triggering the issue. We disabled keep-alive in Traefik and still had the same
issues. We found only one report on Stack Overflow describing a similar issue.
(In that report Nginx was used with HTTP/1.1.) But the proposed solution was
not applicable to our situation.

Code reading and further experimentation showed that setting the system
property org.apache.catalina.connector.RECYCLE_FACADES to 'true' prevents the
issue. The property name is actually misleading, since setting it to 'true'
effectively disables the recycling of facade objects.
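
This is how we applied the workaround (assuming the standard
$CATALINA_BASE/bin/setenv.sh mechanism for passing JVM options to Tomcat):

```shell
# setenv.sh -- pass the property to the Tomcat JVM at startup.
# RECYCLE_FACADES=true forces a new facade object per request
# instead of recycling the previous one.
CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.catalina.connector.RECYCLE_FACADES=true"
export CATALINA_OPTS
```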

It seems that when facades (especially the response's OutputStream) are
recycled, there may be some kind of concurrency issue, or the code obtaining
the OutputStream from the response is not flushing or closing it correctly
before the same OutputStream is handed over to another thread processing the
next request.
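
To illustrate the kind of corruption we suspect, here is a minimal,
hypothetical sketch (not Tomcat code): a single buffered object recycled
across "requests". If the handler skips the recycling step before reuse, the
previous response's bytes leak into the next one:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class RecycleSketch {
    // A single buffer recycled between "requests", standing in for a
    // response facade's OutputStream. (Illustrative only -- not Tomcat code.)
    static final ByteArrayOutputStream recycled = new ByteArrayOutputStream();

    // Writes a response; only a well-behaved caller clears (recycles) the
    // buffer before reuse.
    static String handle(String payload, boolean recycleFirst) throws IOException {
        if (recycleFirst) {
            recycled.reset(); // proper recycling: drop any leftover state
        }
        recycled.write(("HTTP/1.1 200 OK\r\n\r\n" + payload).getBytes("UTF-8"));
        return recycled.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        handle("first", true);
        // Recycling skipped: the second response still carries the
        // first response's bytes.
        String second = handle("second", false);
        System.out.println(second.contains("first")); // prints "true"
    }
}
```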

We have not yet fully understood what actually triggers the issue - it may
also be in the framework or in our own code processing the request - however,
we would like to report it for other users experiencing the same problem.

We are also wondering whether recycling the OutputStream has any relevant
performance advantage, or for what other reason it is enabled by default.
Since it can cause issues that are not easy to track down, it might be better
to disable it by default.

Interestingly, the broken responses only occur when HTTP/1.1 is used; this
fits with the fact that request processing in Http11Processor differs
depending on the protocol version.
