I think I might have found the cause of this, or at least a way to
reliably reproduce the issue.
I noticed that the issue also happens in Firefox when doing POST requests
(say, logging into a website): I got a bunch of “400 Bad Request” responses
when actually performing the POST request.
I copied the curl command line from Firefox to get the exact headers and
cookies that are set during the request to the site. Running that curl
command directly worked, with no issues whatsoever.
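For reference, the copied command looked roughly like this (the domain,
cookie and credentials are placeholders matching the examples further down,
not the real values):

```shell
# Roughly the "Copy as cURL" output from Firefox (all values are placeholders)
curl 'https://dashboard.domain.com/login' \
  -H 'Connection: keep-alive' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -H 'Cookie: XSRF-TOKEN=VERY_SECURE_TOKEN%3D%3D' \
  --data 'username=awesome_user&password=very_secure'
```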
However, if I try the same with nghttp, I get a
“COMPRESSION_ERROR(0x09)”:
[ 0.170] recv GOAWAY frame <length=8, flags=0x00, stream_id=0>
(last_stream_id=13, error_code=COMPRESSION_ERROR(0x09),
opaque_data(0)=[])
I then went back to basics and started out with a very simple nghttp call to
the URL, posting only the required POST data (using --data postfile), where
postfile contains my POST parameters, such as:
username=awesome_user&password=very_secure
It worked as it should, so I kept adding more and more headers until I hit
the culprit: -H “Connection: keep-alive” or -H “Connection: close” (or even
-H “Connection: test”).
If I do the request without -H “Connection: keep-alive” or -H “Connection:
close”, my response looks something like:
[ 0.297] recv (stream_id=13) :status: 302
[ 0.297] recv (stream_id=13) server: nginx/1.13.5
[ 0.297] recv (stream_id=13) content-type: text/html; charset=UTF-8
[ 0.297] recv (stream_id=13) x-powered-by: PHP/7.1.11
[ 0.297] recv (stream_id=13) cache-control: no-cache, private
[ 0.297] recv (stream_id=13) date: Thu, 28 Dec 2017 08:05:40 GMT
[ 0.297] recv (stream_id=13) location: https://dashboard.domain.com/login
[ 0.297] recv (stream_id=13) set-cookie: XSRF-TOKEN=VERY_SECURE_TOKEN%3D%3D;
expires=Thu, 28-Dec-2017 10:05:40 GMT; Max-Age=7200; path=/
[ 0.297] recv (stream_id=13) set-cookie: laravel_session=VERY_SECURE_SESSION;
expires=Thu, 28-Dec-2017 10:05:40 GMT; Max-Age=7200; path=/; HttpOnly
[ 0.297] recv HEADERS frame <length=902, flags=0x04, stream_id=13>
; END_HEADERS
(padlen=0)
; First response header
[ 0.297] recv (stream_id=13, length=384, srecv=384, crecv=384) DATA
[ 0.297] recv DATA frame <length=384, flags=0x00, stream_id=13>
[ 0.297] recv DATA frame <length=0, flags=0x01, stream_id=13>
; END_STREAM
[ 0.297] send GOAWAY frame <length=8, flags=0x00, stream_id=0>
(last_stream_id=0, error_code=NO_ERROR(0x00), opaque_data(0)=[])
However, if I do the same with “connection: keep-alive”, I get the following:
[ 0.127] send HEADERS frame <length=777, flags=0x24, stream_id=13>
; END_HEADERS | PRIORITY
(padlen=0, dep_stream_id=11, weight=16, exclusive=0)
; Open new stream
:method: POST
:path: /login
:scheme: https
:authority: dashboard.domain.com
accept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
accept-encoding: gzip, deflate
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:57.0)
Gecko/20100101 Firefox/57.0
content-length: 50
host: dashboard.domain.com
accept-language: en-US,en;q=0.5
referer: https://dashboard.domain.com/login
content-type: application/x-www-form-urlencoded
cookie: XSRF-TOKEN=VERY_SECURE_TOKEN%3D%3D;
laravel_session=VERY_SECURE_SESSION%3D%3D
connection: keep-alive
upgrade-insecure-requests: 1
pragma: no-cache
cache-control: no-cache
[ 0.127] send DATA frame <length=50, flags=0x01, stream_id=13>
; END_STREAM
[ 0.170] recv SETTINGS frame <length=6, flags=0x00, stream_id=0>
(niv=1)
[SETTINGS_MAX_CONCURRENT_STREAMS(0x03):100]
[ 0.170] recv SETTINGS frame <length=0, flags=0x01, stream_id=0>
; ACK
(niv=0)
[ 0.170] recv GOAWAY frame <length=8, flags=0x00, stream_id=0>
(last_stream_id=13, error_code=COMPRESSION_ERROR(0x09),
opaque_data(0)=[])
[ 0.170] send SETTINGS frame <length=0, flags=0x01, stream_id=0>
; ACK
(niv=0)
Some requests were not processed. total=1, processed=0
As you can see, it sends my headers normally and receives the SETTINGS
frames, but then immediately receives a GOAWAY with error_code
COMPRESSION_ERROR(0x09).
In the haproxy log I get the following:
Dec 28 08:09:53 localhost haproxy[29879]: 92.70.20.xx:58897
[28/Dec/2017:08:09:53.497] https_frontend~ https_frontend/<NOSRV>
-1/-1/-1/-1/90 400 0 - - CR-- 25/1/0/0/0 0/0 "<BADREQ>"
It is the same error as when requests in Firefox occasionally die, even on
GET requests.
I tried to reproduce the issue with haproxy versions 1.8.1, 1.8.2 and the
latest commit from master, all with the same result. I also tried playing
around with options such as forceclose and http-server-close on both the
frontend and backend in haproxy; none of them seem to “fix” the issue.
However, on 1.8.2 I can reproduce it 100% of the time using POST requests in
Firefox and nghttp, whereas on 1.8.1 Firefox works the majority of the time,
with only a few percent of requests failing.
I haven’t been able to reproduce the issue with anything other than Firefox
and nghttp. I’m not sure whether they are simply stricter regarding HPACK
(which the problem seems to be related to, since the headers are never
received in nghttp either) while the other browsers take a more lenient
approach to possible compression issues, or apply some kind of error
correction. Either way, it seems to be related to how headers are
compressed, but only when sending “Connection: keep-alive”, “Connection:
close”, etc.
Simply removing the header (in nghttp) resolves the issue. Since I cannot
remove the “Connection” header in Firefox, I’m not sure whether it would
actually fix things there as well.
Also, sorry for the lengthy email
Best Regards,
Lucas Rolff
From: Lucas Rolff <[email protected]>
Date: Wednesday, 27 December 2017 at 23.08
To: Lukas Tribus <[email protected]>
Cc: "[email protected]" <[email protected]>
Subject: Re: HTTP/2 Termination vs. Firefox Quantum
My small site is basically an HTML site with two pages (home and about, for
example); each page contains basic markup, some styling and some JavaScript.
Switching between pages tends to reproduce the issue every now and then (how
often it happens differs a bit, but possibly every 20-30 requests or so).
I’m a single user able to replicate the issue, with no traffic other than my
own, so the test scenario is fairly easy to replicate in that sense.
Tomorrow I’ll check whether I can reproduce the same issue in other browsers
as well. I haven’t been able to replicate it with curl yet and haven’t tried
nghttp; I’ll continue to troubleshoot in the meantime, but it’s a bit odd
that it happens.
Best regards,
________________________________
From: [email protected] <[email protected]> on behalf of Lukas Tribus <[email protected]>
Sent: Wednesday, December 27, 2017 10:51:01 PM
To: Lucas Rolff
Cc: [email protected]
Subject: Re: HTTP/2 Termination vs. Firefox Quantum
Hello Lucas,
On Wed, Dec 27, 2017 at 9:24 PM, Lucas Rolff <[email protected]> wrote:
> Can't even compose an email correctly..
>
> So:
>
> I experience the same issue however with nginx as a backend.
>
> I tried enabling “option httplog” within my frontend, it's rather easy for
> me to replicate, it affects a few percent of the traffic.
So you have this HTML endpoint and you hit F5 in FF Quantum until you see
the issue, or how do you actually reproduce it? Does this occur in an idle
test environment as well, or do you need production traffic to hit this
issue?
> I have a site, with a total of 3 requests being performed:
>
> - The HTML itself
> - 1 app.css file
> - 1 app.js file
Please clarify:
- if any of those responses are cached (or if they are uncachable) and
if they use any kind of revalidation (If-modified-since --> 304)
- if any of those files are compressed, by haproxy or nginx, and which
compression is used
- the exact uncompressed content-length of each of those responses
- the exact client OS
- is Quantum a 32 or 64 bit executable on the client?
- is haproxy a 32 or 64 bit executable?
- can you run this repro in debug mode and show the output when the issue occurs?
- RTT and bandwidth between server and client (race conditions may
depend on specific network performance - not every issue is
reproducible on localhost)
- confirm that FF sandboxing is not affecting this issue by lowering
security.sandbox.content.level to 2 or 0 in about:config (then restart
FF) - don't forget to turn it back on
Thanks,
Lukas