On Fri, Dec 19, 2025 at 05:20:08PM +0100, Hank Bowen wrote:
> However, having touched on the subject of HTTP/2 - I'm wondering if I
> understand correctly why, in the case of http-reuse set to "aggressive"
> or "always", one client can cause a head-of-line blocking problem for
> the rest of the clients.

Yes in theory, though since 3.1 or so it has been significantly mitigated
by the fact that we now support a dynamic Rx buffer size and advertise
only the allocated size. Prior to 3.1 we'd advertise 64kB despite a 16kB
per-stream buffer by default. Most of the time it would be fine thanks to
other internal buffering, but not always. HoL blocking is inherent to H2
though, and is always a trade-off between HoL risk and BDP
(bandwidth-delay product).
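
For illustration, here's a minimal backend sketch showing the reuse modes
in question (untested; the backend name and server address are made up):

    backend app
        mode http
        # "aggressive"/"always" let requests from different clients be
        # multiplexed onto the same server-side connection, which is
        # where the HoL trade-off above comes into play
        http-reuse aggressive
        server srv1 192.0.2.10:443 ssl verify none alpn h2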

> Is it that the given connection's TCP buffer then significantly fills
> up, and when other (fast) clients request to download data, haproxy can
> only download as much of it as there is remaining space in its TCP
> buffer, which is low, so it must perform the operation as many separate
> downloads, each one coming with some overhead (compared to the situation
> where all the data were downloaded into haproxy's TCP buffer at once)?

The issue is that if you aggregate a slow and a fast reader into the same
connection, and the connection is filled with data for the slow reader,
there's no way to make the data for the fast reader bypass it since TCP
is in-order (something that QUIC addresses).
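
If that risk matters more than connection savings for a given backend,
the reuse policy can be dialed back to the default; a quick sketch (the
backend name is made up):

    backend downloads
        mode http
        # with "safe" (the default), the first request of each client
        # connection gets its own server connection; only subsequent
        # requests may be dispatched onto existing ones, which reduces
        # cross-client sharing
        http-reuse safe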

> If we have a sequence of frames from the server and they are for different
> clients, haproxy does not have to wait for the n-th frame to be sent to a
> client in order to send the n+1-th frame to another client, am I right?

That's it, you just cannot realistically do that, otherwise you'd send
only one frame per network round-trip, which can limit the connection's
performance to 16kB per round-trip, e.g. 160kB per second at a 100ms RTT.
But as explained, with 3.1+ and dynamic buffers we can now modulate what
we advertise and do our best to adjust to the number of readers in the
same connection. However, slow readers will still reserve a number of
buffers that will possibly be under-utilized and not usable by faster
ones. But that's a minor issue compared to the initial one.
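
To illustrate the window/BDP relationship, one could pin a larger window
globally, e.g. (untested, the value is purely illustrative, and with 3.1+
the dynamic buffers make such manual tuning less necessary):

    global
        # per-stream throughput is roughly bounded by window / RTT:
        # 65536 B / 0.1 s ~= 640kB/s per stream at a 100ms RTT
        # a larger advertised window raises per-stream bandwidth but
        # also increases the amount of data exposed to HoL blocking
        tune.h2.initial-window-size 131072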

> I have also some more questions although I'm not sure if it is best to send
> them here or to create a new topic, but they are rather closely related to
> this discussion.

If they're related, let's keep going on this thread ;-)

Willy

