Hey Mark,

I calculated the BDP on our benchmark infra
when tuning these parameters, but I couldn't
get HTTP/2 throughput anywhere close to
HTTP/1. It could be my error or some issue
with Jetty itself.

In my mental model of HTTP/2 there must be
some contention that grows with the number
of concurrent streams all sending data at
once. If it's just, say, 2 concurrent streams,
and they have ample session window space,
then they rarely step on each other's toes.
But as you scale that up and they start
fighting for that space (by default they are
expected to share a lot of it), throughput
becomes a lot more sensitive to your
client's consumption rate.

I thought that by "mimicking" HTTP/1,
isolating each stream to a distinct chunk
of session space (making the stream window
small relative to the session window), I could
reduce this contention, but it didn't greatly
improve throughput (although it did mitigate
stalls). So there must be something else
going on.
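Concretely, the sizing rule was roughly this (a sketch; the helper name and the numbers are made up for illustration, not our actual config):

```python
def isolated_stream_window(session_window: int, max_concurrent_streams: int) -> int:
    """Carve the session window into equal, non-overlapping slices so each
    stream has its own flow-control budget instead of competing for the
    shared session credit."""
    return session_window // max_concurrent_streams

# e.g. a 16 MiB session window shared by 8 concurrent streams
print(isolated_stream_window(16 * 1024 * 1024, 8))  # 2097152 (2 MiB per stream)
```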

I almost forgot: one thing that emerged from
the investigation is that in our setup a single
connection couldn't saturate our total
bandwidth (at least two were required, as
indicated by a simple iperf test). So this
does explain some of the difference, but not
all of it.

At any rate, it's a bit frustrating that HTTP/2
is just more sensitive to the environment
it runs on and is generally more finicky,
so I support the default being HTTP/1 for
these use cases. I don't know what the long-term
fix is, but I suppose we can kick the
can on that one, as HTTP/1 isn't going
anywhere anytime soon.

Luke


----- Original Message -----
From: Mark Miller <[email protected]>
To: [email protected]
At: 02/19/26 20:12:30 UTC-05:00


That’s a nice investigation.

You can get around the streaming “cannibalism” issue by doing something
like: read from the streams in parallel and put the tuples on a bounded
queue, and then read from that with a single thread. You just have to avoid
a single thread doing the reads sequentially.
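A rough sketch of that pattern (the function name and tuple shape are made up; any per-stream iterable would do): each stream gets its own reader thread, so every stream's flow-control window keeps moving, while a bounded queue applies back-pressure and a single consumer drains it.

```python
import queue
import threading

def drain_streams(streams, consume, queue_size=64):
    """Read each stream on its own thread, funneling tuples into a bounded
    queue that a single consumer (the caller's thread) drains."""
    q = queue.Queue(maxsize=queue_size)
    DONE = object()  # sentinel: each reader enqueues one when its stream ends

    def reader(stream):
        for item in stream:   # parallel reads keep every stream's window open
            q.put(item)       # blocks when the queue is full (back-pressure)
        q.put(DONE)

    readers = [threading.Thread(target=reader, args=(s,)) for s in streams]
    for t in readers:
        t.start()

    finished = 0
    while finished < len(readers):  # single-threaded consumption
        item = q.get()
        if item is DONE:
            finished += 1
        else:
            consume(item)
    for t in readers:
        t.join()
```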

But that just pushes the streaming situation
into the replication issue, meaning performance
problems. So, moving on to that issue.

From what I’ve read, HTTP/2 can be just as fast as HTTP/1 for these large
stream cases. The main issue that tends to cause the stall performance
problems is the client not reading fast enough to keep up with the server.
But that will cause stalls in HTTP/1 as well.

I’ve also read that, outside of that issue, HTTP/2 performance for this can
match HTTP/1 by configuring the client receive window using a formula based
on the bandwidth-delay product (network bandwidth times round-trip latency).
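For what it's worth, the formula is just window >= bandwidth x RTT (a sketch with illustrative numbers, not our measured link):

```python
def bdp_receive_window(bandwidth_bits_per_sec: float, rtt_sec: float) -> int:
    """Bandwidth-delay product: the bytes that can be in flight on the link,
    and hence the minimum receive window needed to keep the pipe full."""
    return int(bandwidth_bits_per_sec / 8 * rtt_sec)

# e.g. a 10 Gb/s link with a 1 ms round-trip time
print(bdp_receive_window(10e9, 0.001))  # 1250000 bytes (~1.2 MiB)
```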

Well, that’s a lot less than ideal.

These two use cases should just explicitly use HTTP/1.


- MRM
