Connecting to the wrong place costs more than a slightly longer handshake, and I 
suspect it often happens because our name mapping solution is a host mapping 
solution, whereas many use cases today need object mapping, since it is 
impractical to host every object in every place the service exists.
 
The object mapping problem is one of the reasons why I'd originally hoped to 
get something like DoH into HTTP/2 (server push of a DoH-equivalent record plus 
a redirect allows for good object -> address mapping).
That combination didn't happen, but the problem that made it interesting to 
think about still persists -- getting the mapping right matters more than a 
little extra data being sent in the handshake.

Ideally, such an object mapping would be targetable, redistributable, and 
dynamic.
We've been using this internally for storage-related things (i.e., not just the 
read() part, but the write() and associated other calls as well), and it has 
helped quite a bit. I imagine it would help for the web/HTTP as well.
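To make the idea above concrete, here is a minimal sketch of what a targetable, redistributable, dynamic object -> address mapping record might look like. All names and fields here are illustrative assumptions on my part, not from any spec or from the deployment described:

```python
# Hypothetical object -> address mapping record, in the spirit of a
# DoH-style answer pushed alongside a redirect. Field names are
# illustrative assumptions, not part of any specification.
from dataclasses import dataclass, field
import time

@dataclass
class ObjectMapping:
    object_url: str   # "targetable": keyed by object, not just hostname
    addresses: list   # endpoints that actually hold this object
    ttl: int          # "dynamic": a short TTL lets the mapping move
    issued_at: float = field(default_factory=time.time)

    def expired(self) -> bool:
        return time.time() > self.issued_at + self.ttl

# "Redistributable": a client or intermediary can cache and re-serve
# the record until it expires, rather than re-resolving per connection.
cache: dict = {}

def resolve(url: str) -> list:
    m = cache.get(url)
    if m and not m.expired():
        return m.addresses   # connect to where the object really is
    return []                # fall back to ordinary host resolution

mapping = ObjectMapping("https://example.com/videos/abc",
                        ["192.0.2.10:443", "198.51.100.7:443"], ttl=30)
cache[mapping.object_url] = mapping
print(resolve("https://example.com/videos/abc"))
```

The point of the sketch is the key: lookups keyed by object rather than host avoid sending the client to a replica that doesn't hold the object at all.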

-=R

On 4/19/21, 2:14 PM, "QUIC on behalf of Paul Vixie" <[email protected] on 
behalf of [email protected]> wrote:

    hello. can you explain how you get from:

    On Mon, Apr 19, 2021 at 01:45:48PM -0700, Matt Joras wrote:
    > ... The
    > vast majority of QUIC connections in our deployment (and TCP + TLS for
    > that matter) are resumed.

    to:

    > ... Resumption makes
    > this particular concern a non-issue for most real world connections
    > and has other positive benefits.

    that is, how is your deployment known to represent most real world use?

    i love resumption -- that's why RFC 6013 had it. but i also love DANE, which
    is having strong success in the SMTPS market but has been eschewed by the
    HTTPS market. thus my question as to how the QUIC team is prioritizing use
    cases. "big tech" is shiny but not nec'ily representative of the whole web.

    -- 
    Paul Vixie

