On 2012-04-15, at 20:02, Scott Schmit wrote:

> On Fri, Apr 13, 2012 at 04:38:10PM -0700, David Conrad wrote:
>> On Apr 13, 2012, at 3:30 PM, Jaap Akkerhuis wrote:
>>>> More pragmatically, while I understand the theory behind rejecting NTAs,
>>>> I have to admit it feels a bit like the IETF rejecting NATs and/or DNS
>>>> redirection. I would be surprised if folks who implement NTAs will stop
>>>> using them if they are not accepted by the IETF.
>>>> 
>>> it is still not a reason for the IETF to standardize this.
>> 
>> With the implication that multiple vendors go and implement the same
>> thing in incompatible ways. I always get a headache when this sort of
>> thing happens as the increased operational costs of non-interoperable
>> implementations usually seem more damaging to me than violations of
>> architectural purity. Different perspectives I guess.
> 
> What's to standardize (or be incompatible)?

Details like:

 - what data ought to be recorded with the NTA (e.g. reason, instantiation 
timestamp, expiration timestamp)
 - whether other available trust anchors to domains under an NTA should also be 
invalidated
 - whether there ought to be any signalling to a client to let them know that 
they're getting an answer despite a validation failure
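To make the first point concrete, here is a minimal sketch of the per-NTA state a validator might record. Everything here (field names, the expiry rule) is hypothetical illustration, not anything standardised:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class NegativeTrustAnchor:
    """Hypothetical per-NTA state a validator might keep."""
    name: str              # domain under which validation is suspended
    reason: str            # operator-supplied justification
    inserted_at: datetime  # instantiation timestamp
    expires_at: datetime   # hard expiry, after which validation resumes

    def is_active(self, now: datetime) -> bool:
        # An NTA should never outlive its expiry; a validator would
        # resume normal validation once this returns False.
        return self.inserted_at <= now < self.expires_at

now = datetime(2012, 4, 15, tzinfo=timezone.utc)
nta = NegativeTrustAnchor(
    name="example.nz.",
    reason="registry signing incident",
    inserted_at=now,
    expires_at=now + timedelta(days=2),
)
print(nta.is_active(now))  # True while the incident window is open
```

A mandatory expiry is exactly the sort of detail that only interoperates if everyone agrees on it up front.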

> Each recursive resolver
> already has different mechanisms for configuring it, and I'd imagine
> that the list of NTAs would be configured similarly to (for example)
> its TAs & DLVs.

 - if NTAs were to be published as RRs, a bit like DLV
   - what RRs should be used?
   - should NTAs read from the DNS be cached?
   - are there requirements that the zone data be signed?
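If NTAs really were published DLV-style, the obvious mechanism is the same name mapping DLV uses: prepend the name being checked to the NTA zone's apex. A sketch (the zone name and the whole scheme are hypothetical; no such RR type or registry exists):

```python
def nta_lookup_name(qname: str, nta_zone: str) -> str:
    """DLV-style mapping from a query name to the name a resolver
    would look up in a published NTA zone.

    e.g. checking "example.co.nz." against a hypothetical zone
    "nta.example.net." yields "example.co.nz.nta.example.net.".
    """
    return qname.rstrip(".") + "." + nta_zone.rstrip(".") + "."

print(nta_lookup_name("example.co.nz.", "nta.example.net."))
# example.co.nz.nta.example.net.
```

Even this trivial mapping drags in the questions above: the published zone would presumably need to be signed, and the answers cached, for the mechanism to be any safer than the failure it is papering over.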

I think there's more to standardise here than you think. It's not that any of 
this is hard; it's just that it'd be so much less pain operationally if 
everybody's validator was configured along similar lines. If we did it right I 
can imagine subscription services run by reliable people that ISPs could opt 
in to in order to get an automatic whitelist. (Look at that! I just 
made all the security people on this list bang their fists on the table.)

Clients are very touchy about the performance of the caches they use. I have 
some dealings with a large (i.e. tiny by comparison with Comcast) residential 
ISP here in Canada, and I've seen first-hand the dramatic traffic shifts from 
the ISP-operated resolvers to OpenDNS and Google DNS whenever the ISP caches 
malfunction. Comcast's experience (as I heard about it in Teddington) rang very 
true -- unless you have the ops to mitigate signing failures, there is no way 
in hell you should validate in your cache.

Another thought from Teddington via Paris: if Comcast hadn't whitelisted NZ 
during the period when the NZ zone was tripping validation failures on CNS due 
to the BAABAA encoding oddity, there would have been a whole country's worth of 
content off the air to Comcast customers for a prolonged period. Someone might 
argue that the right thing to do was to suppress resolution of all names under 
NZ for reasons of architectural purity, but I would have a hard time agreeing 
with them (and I doubt they'd find many kindred spirits amongst kiwi expats 
living in Comcast service areas).

(And as I hope is obvious, even to those who didn't see Sebastian's talk about 
it, the NZ people actually know what they're doing. DNSSEC amateurs who are 
just blindly clicking "sign" have no hope. And most of the DNS isn't even 
signed -- the longer and wrigglier the entrails of trust become, the bigger a 
problem this is, and I don't think it's necessarily the case that more 
deployment of signatures in the namespace will make things more reliable.)

I understand the reluctance to appear to sanction selective tolerance of 
validation failures -- from a security perspective it's ugly, it muddies the 
whole DNSSEC message, it smacks of "click OK to continue" certificate failures. 
But I do not see how we can expect validation in the cache to ever make sense 
to ISPs in any general, pervasive sense without some mechanism to mitigate 
signing failures. And if there's a mechanism, I think it should be standardised.

If all that sounds horrible, then the alternative is to issue new guidance that 
nobody expects caches to do validation anyway, ever, and that validation 
properly belongs in or near the application, on hosts. Reduce the advice to 
ISPs to this: make sure you can receive and generate large responses reliably, 
and respond properly to clients that are willing and able to do their own 
validation. Re-point the energy currently directed at ISPs to Microsoft, 
Apple, Google and Mozilla.

That way the practical problems surrounding the use of a remote validator (the 
support cost of validation, the lack of benefit from validation from the 
perspective of the naïve end-user, the unfortunate user comparisons between a 
BROKEN! validating ISP and a WORKING! non-validating one next door, dealing 
with NTAs/whitelists/whatever, the direction of the user's anger when broken 
zones fail to validate, the unsecured channel between the client and the cache) 
all disappear.


Joe

_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop