On 4/17/2012 9:51 AM, Shane Kerr wrote:
> Is it just me, or does the NTA discussion remind anyone else of NAT
> discussions? And not just because they have the same letters. ;)
> Operators say, wow, here is a useful tool that can solve real-world
> problems, maybe it would be helpful to recognize these problems and
> perhaps standardize the solution?
> Protocol zealots answer, but this is bad for this long list of very
> valid reasons, so sorry about your problems but really we know better.
> If we insist on DNS Purity, I predict a similar end state with NTA as
> with NAT: they will be almost universally deployed, and completely
> non-standard, and result in a lot of potential breakage due to
> inconsistencies and semi-broken implementations. Yay.
what potential breakage or semi-broken implementations are you thinking
of? in NAT, it's all rather broken. the fix for NAT, had the IETF been
willing to recognize this technology in spite of NAT's non-purity, would
have been in the form of what we now call firewall traversal or perhaps
PnP. neither of those was something the world was ready for at the time,
and neither could have been standardized soon enough to stop bad NAT
from getting out, nor standardized at a high enough quality and
relevance level, since in the early days no one knew yet how many things
NAT would break.
but let's continue the analogy anyway. what would have saved the world
from bad NAT was a change to the underlying connection model of the
internet -- not just an RFC that says "here's how you can use 192.168"
but rather one that says "here's how you can preserve some aspects of
end-to-end, not break FTP, and not have to reframe your TCP sessions at
the gateway layer if they might contain embedded IP addresses." in that
sense, i agree with your comparison: a fundamental change to the nature
of DNSSEC to allow for secure signalling of errors and secure signalling
of middle-man policy would definitely help us head off a generation of
bad NTA. but that's not what's on offer here.
DNSSEC currently presumes reliable end-to-end failure signalling, and
offers no way to signal other conditions. so if somebody breaks into
your primary name
server and alters your zone and either doesn't sign their changes or
signs the whole zone with some key that's not matched by your delegating
DS RRset, right now it will cause the modified content to be marked as
bad and ignored or dropped by any validating server. NTA will change
that, by adding middlemen who can decide as a matter of their own policy
to just ignore these bad or missing signatures and pass your data to
their stub clients as though DNSSEC had not been in use. that's
horrible. that's worse for deployment confidence in DNSSEC than anything
the social security administration or NASA could otherwise do by failing
to re-sign or using the wrong keys.
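the policy change described above can be made concrete with a small
sketch. this is not any real resolver's API -- the names and the
simplified secure/insecure/bogus states are illustrative -- but it shows
the decision an NTA inserts into the validation path: an answer that
would otherwise be treated as bogus (SERVFAIL) is instead passed through
to stub clients as unauthenticated data.

```python
# Minimal sketch of NTA policy in a validating resolver (hypothetical
# API, not any real implementation). Normally a "bogus" validation
# result becomes SERVFAIL; under a configured NTA covering the name,
# the resolver downgrades it to "insecure" and serves the data anyway.

from enum import Enum

class Security(Enum):
    SECURE = "secure"      # full chain of trust validated
    INSECURE = "insecure"  # provably unsigned zone, no validation expected
    BOGUS = "bogus"        # signatures present but validation failed

def covered_by_nta(qname: str, ntas: set[str]) -> bool:
    """True if qname equals, or falls under, any configured NTA name."""
    labels = qname.rstrip(".").split(".")
    return any(".".join(labels[i:]) in ntas for i in range(len(labels)))

def resolve_outcome(qname: str, validation: Security, ntas: set[str]) -> str:
    """What the resolver hands to its stub clients."""
    if validation is Security.BOGUS:
        if covered_by_nta(qname, ntas):
            # the middleman policy at issue: serve the data as though
            # DNSSEC had not been in use (no AD bit set for the client)
            return "answer (unauthenticated)"
        return "SERVFAIL"
    return "answer"
```

the point of the sketch is that the stub client cannot distinguish the
NTA case from an ordinary unsigned zone -- the downgrade happens
silently, as a matter of the middleman's policy rather than the zone
owner's.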
so i'm all for changing DNSSEC itself to accommodate these other use
cases. but i'm not on-board at all for a standardized way for middlemen
to block unwanted (by them) DNSSEC signalling. this means i'm in favour
of the thing you're suggesting (do better with DNSSEC than we did with
NAT) but i'm also opposed to the thing you're supporting (make
end-to-end DNSSEC failures less reliable). i think this means you're
offering me a false dichotomy above. let me know if i'm misreading you.
paul
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop