From: [email protected]
At: 02/19/26 20:12:30 UTC-5:00
To: [email protected]
Subject: Re: HTTP/2 Struggles With Large Responses

That’s a nice investigation.

You can get around the streaming “cannibalism” issue by doing something
like: read from the streams in parallel and put the tuples on a bounded
queue, and then read from that with a single thread. You just have to avoid
a single thread doing the reads sequentially.
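That fan-in pattern might look something like the sketch below. It's a hypothetical illustration, not real HTTP/2 client code: each response stream is stood in for by a plain iterable of tuples, one thread drains each stream in parallel, and a bounded queue applies backpressure to a single consumer.

```python
import threading
import queue

def drain_streams(streams, maxsize=64):
    """Read every stream on its own thread, funnel tuples through a
    bounded queue, and consume them on the calling thread.

    `streams` is any collection of iterables; in real code each would be
    a per-stream body reader.
    """
    q = queue.Queue(maxsize=maxsize)   # bounded: readers block when full
    SENTINEL = object()                # marks one stream as exhausted

    def reader(stream):
        for tup in stream:             # per-stream read loop, in parallel
            q.put(tup)                 # blocks when the queue is full
        q.put(SENTINEL)

    threads = [threading.Thread(target=reader, args=(s,)) for s in streams]
    for t in threads:
        t.start()

    done = 0
    while done < len(streams):         # the single consuming thread
        item = q.get()
        if item is SENTINEL:
            done += 1
        else:
            yield item

    for t in threads:
        t.join()
```

The point is that no single thread ever does the stream reads sequentially; the consumer only sees an already-merged sequence of tuples, in whatever interleaving the readers produce.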

But that just pushes the streaming situation into the same boat as the
replication issue - meaning performance problems.
So, moving on to that issue.

From what I’ve read, HTTP/2 can be just as fast as HTTP/1 for these large
stream cases. The main issue that tends to cause the stalls and performance
problems is the client not reading fast enough to keep up with the server.
But that will cause stalls in HTTP/1 as well.

I’ve also read that, outside of that issue, HTTP/2 performance for this can
match HTTP/1 by configuring the client receive window using a formula based
on the bandwidth-delay product (network bandwidth times round-trip latency).
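As a back-of-envelope sketch of that formula (the link numbers here are made up for illustration):

```python
def bdp_window_bytes(bandwidth_bits_per_sec, rtt_seconds):
    # Bandwidth-delay product: the number of bytes that must be in
    # flight to keep the pipe full, so a sensible receive window size.
    return int(bandwidth_bits_per_sec / 8 * rtt_seconds)

# e.g. a 1 Gbit/s link with 50 ms round-trip latency:
#   125,000,000 B/s * 0.05 s = 6,250,000 bytes (~6 MiB receive window)
```

So a window sized well below the BDP is what stalls the stream: the server exhausts its flow-control credit and sits idle waiting for WINDOW_UPDATE frames.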

Well, that’s a lot less than ideal.

These two use cases should just explicitly use HTTP/1.


- MRM

