Dear colleagues,

Several meetings ago, I was at the mic complaining about various
incremental changes to the DNS architecture and how we didn't seem to
be thinking holistically about the matter.  I think it was in response
to something Ray Bellis said, and Ray quite correctly challenged me to
put up or shut up.  But as too often has been the case in recent
years, I never managed to write down in a coherent form what I was
talking about, and was left to splutter incoherently.  This message is
an attempt to lay out some of the questions that are nagging me, in
the hope that someone else will find them interesting, since it seems
pretty unlikely I'm going to get to put this together more coherently.

This message is inspired in part by watching the exchange between Paul
Hoffman and Stewart Bryant over the DoH I-D progress.  What struck me
there was Stewart's model of how things work: he was wondering how a
client that had a host name was going to resolve anything, if that
host name was its "DNS provider".  I suspect that is actually the
mental model an _awful lot_ of people have: a given host on the
network has a source for resolution service, and that source provides
all the answers.  We (here) all know this is wrong, and that it has
been wrong essentially forever.  But if large numbers of people
continue to hold
this assumption about the basic resolution path, I am prepared to
believe that many things will continue to be built with that
assumption built in.  This means that we need to make the notion of
"resolution context" a first-class notion, so that people understand
that (e.g.) the link-local, homenet, split-brain,
stub-to-full-service-resolution, and DoH contexts are all potential
issues to build into one's assumptions.

Indeed, even as that is going on, the IESG is considering
draft-ietf-ipsecme-split-dns-12.txt.  This follows from the MIF work
of some years ago, which kind of (but not really) addressed split DNS
and how it ought to work.

We've also introduced mechanisms for signalling sessions (an idea for
which I was an early proponent, I should note), altered the kinds of
queries people make by encouraging QNAME minimisation (this has
interesting consequences for strategies where parent and child zones
are hosted on the same servers), and so on.  This is all part of the
complexity that I think Bert was talking about in his Camel talk.
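To make the minimal-queries point concrete, here is a toy sketch (not
how any real resolver is implemented, and the names are made up) of
the label-by-label behaviour that RFC 7816-style minimisation
produces.  The consequence for co-hosted parent and child zones is
visible at a glance: a server hosting both zones now sees several
incremental queries where it used to see one full qname.

```python
def minimized_query_names(qname):
    """Return the successive query names a QNAME-minimising resolver
    exposes, one label at a time, instead of sending the full name
    to every server in the delegation chain."""
    labels = qname.rstrip(".").split(".")
    return [".".join(labels[-i:]) + "." for i in range(1, len(labels) + 1)]

# A server hosting both the parent and the child zone still sees
# each incremental name as a separate query.
print(minimized_query_names("www.example.com"))
# ['com.', 'example.com.', 'www.example.com.']
```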

But I would say that the complexity _might_ be a requirement, and I
think the problem is that we can't tell because we no longer (if we
ever did) have a common and crisp description of what problems we are
trying to solve and how the parts are supposed to fit together.
(There is another claim about the camel -- that it is a horse designed
by a committee.)

I guess, therefore, I want to ask whether long-standing assumptions
about the DNS are still true:

    • Is the stub::full-service resolver::auth server model just over?
    • Do we think resolution context needs to be signalled?  If so, how?
    • Is the age of the stub coming to an end?
    • Do we need something like "submission port for DNS", so that
    large concentrated systems can protect themselves and still
    provide service to important resolvers?
    • Does TCP need to become the norm (particularly for the above use
    case)? 
    • How can we explain these changes to others working on network
    systems?
    • Do we have an appropriate venue to discuss these issues, on the
    presumption that they're not really operations issues?

I really don't know the answer to much of this.  I will note that some
of these are questions similar to those asked by John Klensin in a
draft he put out some time ago, but that I think he has declined to
discuss on this list.

I am aware that perhaps I'm alone in my worries.  If so, you can
continue with your regularly scheduled programming :)

Best regards,

A

-- 
Andrew Sullivan
[email protected]

_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop