> Hello QUIC and HTTP enthusiasts,
> 
> We, Lucas and I, have submitted two drafts aimed at broadening the reach of
> HTTP/3 - yes, making it available over TCP as well. We are eager to hear
> your thoughts on these:
> 
> QUIC on Streams: A polyfill for operating QUIC on top of TCP.
> https://datatracker.ietf.org/doc/html/draft-kazuho-quic-quic-on-streams
> 
> HTTP/3 on Streams: How to run HTTP/3 unmodified over TCP, utilizing QUIC on
> Streams.
> https://datatracker.ietf.org/doc/html/draft-kazuho-httpbis-http3-on-streams
> 
> As a co-author of the two drafts, let me explain why we submitted
> them.
> 
> The rationale behind our proposal is the complexity of having two major
> HTTP versions (HTTP/2 and HTTP/3), both actively used and extended. This
> might not be the situation that we want to be in.
> 
> HTTP/2 is showing its age. We discussed its challenges at the IETF 118 side
> meeting in Prague.
> 
> Despite these challenges, we are still trying to extend HTTP/2, as seen
> with WebTransport. WebTransport extends both HTTP/3 and HTTP/2, but it does
> so differently for each, due to the inherent differences between the HTTP
> versions.
> 
> Why are we doing this?
> 
> Because HTTP/3 runs only over QUIC. Given that UDP is not as
> universally accessible as TCP, we find ourselves in a position where
> we need to maintain and extend not only HTTP/3 but also HTTP/2 as a
> backstop protocol.
> 
> This effort comes with its costs, which we have been attempting to manage.
> 
> However, if we could create a polyfill for QUIC that operates on top of
> TCP, and then use it to run HTTP/3 over TCP, do we still need to invest in
> HTTP/2?
> 
> Of course, HTTP/2 won’t disappear overnight.
> 
> Yet, by making HTTP/3 more universally usable, we can at least stop
> extending HTTP/2.
> 
> By focusing our new efforts solely on HTTP/3, we can conserve
> engineering effort.
> 
> By making HTTP/3 universally accessible, and by having new extensions
> solely to HTTP/3, we can expect a shift of traffic towards HTTP/3.
> 
> This shift would reduce the necessity to modify our HTTP/2 stacks (we’d be
> less concerned about performance issues), and provide us with a better
> chance to phase out HTTP/2 sooner.
> 
> Some might argue that implementing a QUIC polyfill comes with its own
> set of costs. However, it is my understanding that many QUIC stacks
> already have the capability to read QUIC frames from sources other
> than QUIC packets, primarily for testing purposes. This suggests that
> the effort would be more about leveraging existing code paths than
> writing new code from scratch. Furthermore, a QUIC polyfill would
> extend its benefits beyond just HTTP, by aiding other application
> protocols that aim to be built on top of QUIC, providing them with
> accessibility over TCP.
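> 
> As a rough illustration (a sketch of mine, not text from the drafts):
> the variable-length integer decoder that QUIC stacks already have can
> be fed bytes from a TCP stream just as easily as a decrypted packet
> payload. The iter_frames() helper and its explicit type-plus-length
> framing below are assumptions made purely for illustration; the actual
> framing rules are the ones defined in the draft.
> 
>     def read_varint(buf, off=0):
>         # QUIC variable-length integer (RFC 9000, Section 16): the two
>         # high bits of the first byte select a 1/2/4/8-byte encoding.
>         first = buf[off]
>         length = 1 << (first >> 6)
>         value = first & 0x3F
>         for b in buf[off + 1:off + length]:
>             value = (value << 8) | b
>         return value, off + length
> 
>     def iter_frames(stream_bytes):
>         # Hypothetical loop: treat the TCP byte stream as a sequence of
>         # (type, length, payload) frames and hand each one to the same
>         # frame handlers used for frames read out of QUIC packets.
>         off = 0
>         while off < len(stream_bytes):
>             ftype, off = read_varint(stream_bytes, off)
>             flen, off = read_varint(stream_bytes, off)
>             yield ftype, stream_bytes[off:off + flen]
>             off += flen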
> 
> Please let us know what you think.
> 
> Best regards,
It's an interesting proposal. Looks fairly sensible.
I could see a lot of other uses for a mapping of the QUIC
application-level semantics without QUIC itself, such as diagnostic
use or intra-DC backhaul of incoming traffic.

I question the utility of implicit length signalling. Unless there's a
real use for this (maybe there is and I'm just not seeing it), I would
probably just prohibit these encodings. The max_frame_size transport
parameter proposed here cannot be reduced below 16384, so you're saving
at most 3 bytes (to encode 16384) for every 16384 bytes. That would
seem to yield an efficiency increase of 0.018%. For larger
max_frame_size values this obviously gets even smaller.
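
Spelling out the arithmetic behind that figure, using the same numbers
as above:

    >>> 3 / 16384 * 100   # bytes saved per 16384-byte frame, as a percentage
    0.018310546875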

Is there a rationale to supporting this I'm not seeing?
