Hi Willy, Aleks,

I will try the suggested steps this afternoon (hopefully) or tomorrow and get 
back to you.

> At least if nginx does this it should send a GOAWAY
> frame indicating that it will stop after stream #2001.

That's my understanding as well (and the docs say as much). I assumed HAProxy 
would handle it properly too, so perhaps there's something else nefarious going 
on in our particular setup. There is still the possibility that the HTX header 
bug fixed by Aleks' patches was causing this issue in a back-handed sort of 
way. I will apply those patches, confirm the header bug is fixed, and then try 
the recommendations from this bug to rule out any interaction on that side (a 
badly written header in our setup could result in a 404, which seemed to be the 
worst user-facing symptom of this bug).
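
For reference, the backend change that triggered all this was roughly the 
following (backend name and address are placeholders, not our actual config):

    backend tile_servers
        # Offer h2 first, falling back to HTTP/1.1 if the server declines;
        # ALPN is only negotiated over TLS, hence the ssl keyword.
        server nginx1 203.0.113.10:443 ssl verify none alpn h2,http/1.1 check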

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder
stadiamaps.com

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Tuesday, January 22, 2019 9:37 AM, Willy Tarreau <w...@1wt.eu> wrote:

> Hi Luke,
> 

> On Mon, Jan 21, 2019 at 09:30:39AM +0000, Luke Seelenbinder wrote:
> 

> > After enabling h2 backends (technically `server ... alpn h2,http/1.1`), we
> > began seeing a high number of backend /server/ connection resets. A
> > reasonable number of client-side connection resets due to timeouts, etc., is
> > normal, but the server connection resets were new.
> > I believe the root cause is that our backend servers are NGINX servers, 
> > which
> > by default have a 1000 request limit per h2 connection
> > (https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests).
> > As far as I can tell there's no way to set this to unlimited. That resulted
> > in NGINX resetting the HAProxy backend connections, and thus in user
> > requests being dropped.
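
(For context: as far as I can tell the directive can only be raised, not 
disabled. An illustrative nginx snippet, not our production config:)

    http {
        # Default is 1000; after serving this many requests on one HTTP/2
        # connection, nginx closes that connection.
        http2_max_requests 100000;
    }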
> 

> That's rather strange. At least if nginx does this it should send a GOAWAY
> frame indicating that it will stop after stream #2001. We normally respect
> stream limits advertised by the server before deciding if a connection is
> still usable (but we could very well have a bug of course). If it only
> rejects new stream creation, that's extremely inefficient and unfriendly
> to clients, so I doubt it's doing something like this.
> 

> We'll need to run some interoperability tests on nginx to see what happens.
> It might indeed be that the only short-term solution would be to add an
> option to limit the total number of streams per connection. I don't see
> any value in doing something as gross, except working around some memory
> leak bugs, but we also need to be able to adapt to such servers.
> 

> Could you try h2load on your server to see if it reports errors ? Just
> use a single connection (-c 1) and a few streams (-m 10), and no more
> than 10k requests (-n 10000). It could give us some hints about how it
> works and behaves.
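
For the record, I plan to run something along these lines against one of the 
affected backends (the hostname is a placeholder):

    # One connection (-c 1), 10 concurrent streams (-m 10), 10k requests
    # total (-n 10000); h2 is negotiated via ALPN over TLS.
    h2load -c 1 -m 10 -n 10000 https://backend.example.com/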
> 

> Thanks,
> Willy
