Phillip Hallam-Baker <[email protected]> writes:
>> > Google is currently working on HTTP over UDP to shave a second off
>> > page load times. This working group is proposing to move the most
>> > latency-critical interaction from UDP to TLS.
>>
>> Some people here pointed out that the initial goal is for stub
>> resolving, which is not latency critical. I believe this point can be
>> made more clear in the documents and in the discussion. One easily gets
>> the idea that this is about Internet-wide DNS. Confusing these two
>> use-cases is bad.
>
> Stub resolving is totally latency critical. Go talk to some folk who
> work on browsers.
I should have qualified it: "the connection-setup latency of stub
resolving is not critical". That is at least how I understand other
posts on this list. Browsers already have a built-in stub resolver as
far as I am aware, and appear likely to continue to have one. The
latency for a single query will be the same with DNS-over-TLS: one RTT.

To be fair, my experience is mixed; depending on the application, the
setup cost could be an issue. Consider for example a normal Unix
application that calls getaddrinfo(). In the usual configuration, that
causes libc to open a UDP or TCP connection depending on what is in
/etc/resolv.conf. If that instead involved performing a TLS handshake
and pulling in a TLS stack, it would have its own share of issues,
including latency.

However, I believe this model is old-fashioned. It would be better for
/etc/resolv.conf to always point at 127.0.0.1 or ::1, with a local
daemon running for name resolution, and for that daemon to keep a
long-lived connection to its recursive resolver. There is the
occasional bootstrap issue during boot, but I don't believe that
warrants a complicated in-process stub resolver. The DNS stub-resolver
code in most libcs is crap anyway.

/Simon
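As a minimal sketch of the application's view: the program below calls getaddrinfo() and never sees how the lookup is transported. Whether libc speaks UDP, TCP, or (under the local-daemon model above) hands the query to a daemon on 127.0.0.1 that holds a long-lived TLS session upstream is entirely hidden behind this one call. The example resolves "localhost" only so it runs without network access; a real client would pass a remote hostname.

```c
/* Sketch: how a typical Unix application resolves a name.
 * libc's stub resolver consults /etc/resolv.conf transparently;
 * the application cannot tell what transport is used underneath. */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;       /* accept IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    /* "localhost" keeps the example self-contained and offline. */
    int err = getaddrinfo("localhost", NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char buf[INET6_ADDRSTRLEN];
        const void *addr = (p->ai_family == AF_INET)
            ? (const void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
            : (const void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        inet_ntop(p->ai_family, addr, buf, sizeof buf);
        printf("resolved: %s\n", buf);
    }
    freeaddrinfo(res);
    return 0;
}
```

Because the transport is invisible here, moving the TLS machinery out of every process and into one shared daemon (pointed at via `nameserver 127.0.0.1` in /etc/resolv.conf) changes nothing for applications while amortizing the handshake cost across all of them.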
_______________________________________________
dns-privacy mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dns-privacy
