Hi Tommy, (Since you brought it up!)

On 16 Apr 2025, at 18:02, [email protected] wrote:

>> Personally I don't think that special actions in resolvers for INVALID and
>> TEST are a good idea either; I would prefer consistent behaviour and no
>> special cases, especially as I suspect that resolver operators that pay
>> attention to this kind of thing and keep their software up-to-date probably
>> already do aggressive NSEC caching, and hence the risk to the root server
>> system is lower than the risks related to increased complexity and camel
>> exhaustion. But both risks seem small.
>
> Camel exhaustion, sure, but I'm not convinced by the complexity added by
> saying queries for every name under a given TLD should always be given
> negative responses. That's about as simple as TLD-specific handling can be.

We should not need TLD-specific handling. TLDs in general are not and should
not be special.
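
To be concrete about what is being asked for, the special case amounts to
roughly this in a resolver's query path. This is only a sketch of the idea,
with invented names; it is not taken from any implementation:

# Hypothetical sketch of the TLD special case under discussion: intercept any
# query at or under a designated local-use TLD and answer it negatively
# without ever sending it upstream. All names here are made up for
# illustration.

LOCAL_ONLY_TLDS = {"internal"}

def resolve_normally(qname: str, qtype: str):
    # Stand-in for the resolver's ordinary iterative/forwarding path.
    return ("RESOLVE", qname, qtype)

def handle_query(qname: str, qtype: str = "A"):
    labels = qname.rstrip(".").lower().split(".")
    if labels and labels[-1] in LOCAL_ONLY_TLDS:
        # The special case: hard-coded NXDOMAIN, no upstream traffic.
        return ("NXDOMAIN", qname, qtype)
    return resolve_normally(qname, qtype)

print(handle_query("printer.branch.internal."))
print(handle_query("www.example.com."))
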
Designing and deploying a general mechanism like aggressive negative caching,
and then simultaneously deciding to ignore it and recommend that specific
non-existent domains be hard-coded as such, seems like lunacy to me.

Baking specific magic code points into the DNS and singling them out for
special handling has overhead, and there surely needs to be a good reason to
endure the consequences. There is no good reason here.
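
The general mechanism already covers this case. Here is a rough illustration
of the RFC 8198 idea, assuming a single validated NSEC record from the root
cached as an (owner, next) pair; the helper names are invented and the
neighbouring TLDs are placeholders, not necessarily what the root zone
contains today:

# A rough sketch of RFC 8198-style aggressive negative caching. Real code
# also needs DNSSEC validation, TTL handling and the wrap-around case at the
# end of the zone; this only shows the covering check.

def canonical_key(name: str):
    """Sort key approximating RFC 4034 canonical name ordering:
    compare labels right to left, each label as lower-cased bytes."""
    labels = [l.encode("ascii").lower() for l in name.rstrip(".").split(".") if l]
    return list(reversed(labels))

def nsec_covers(owner: str, next_name: str, qname: str) -> bool:
    """True if qname sorts strictly between the NSEC owner and its 'next'
    name, i.e. the cached NSEC proves that qname (and everything below it)
    does not exist in the zone."""
    return canonical_key(owner) < canonical_key(qname) < canonical_key(next_name)

# Hypothetical cached NSEC proving the gap that "internal." falls into.
cached_nsec = ("int.", "international.")

for q in ("internal.", "printer.branch.internal.", "example.com."):
    if nsec_covers(*cached_nsec, q):
        print(f"{q}: synthesise NXDOMAIN from cache; no query to the root")
    else:
        print(f"{q}: not covered by this NSEC; resolve as usual")

Once a resolver holds one NSEC covering the gap that internal falls into,
queries at or below that name can be answered from cache for the lifetime of
the record without any traffic to the root.
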
> If we expect network operators to move to use of .internal, they ought to
> have some confidence that the ecosystem isn't going to be forwarding these to
> the global DNS. Maybe that's just my eternal hatred for split DNS speaking,
> hoping to confine all such uses to the one box we can then hide away forever.

(a) They will leak anyway. Requiring special handling of INTERNAL puts an
additional burden on resolver operators who are interested in being (in some
sense) "compliant", but it will not provide a meaningful defence against
leaks.

To put it another way: the suggestion that some people may have decided to
handle the INTERNAL domain in a special way, coupled with the knowledge that
some people will almost certainly not do that, means that nobody can rely on
INTERNAL queries not leaking, with or without the extra requirements imposed
on resolver operators.

(b) Whether or not people use INTERNAL is an entirely untested question.
Perhaps if we had statistically-significant data showing that leaked INTERNAL
queries actually existed and represented some kind of real problem, we could
imagine thinking about a solution. We do not have such data. We are
speculating that one day there might be a problem, and we are trying to
anticipate a solution to that imagined problem with the benefit of zero
knowledge. This is madness.

> I am more sensitive to risks to the root server system than any other
> component, so I'm not comfortable saying aggressive negative caching is good
> enough.

For context, I have been a root server operator in past lives and I have spent
time building out the infrastructure of particular corners of the root server
system. That was 15 years ago, and there is a lot more diversity and capacity
along every axis now than there was then.

I am less sensitive to risks of this kind to the root servers than to any
other DNS servers. I would be surprised if today's root server operators lost
sleep over QNAMEs in the INTERNAL domain. The root server system is not the
part of the DNS infrastructure we need to worry about.

In addition to the extreme diversity and capacity of the system, it is the
most straightforward component for any resolver operator to de-risk, because
in practice no resolver operator needs to rely upon it (and even those that do
only need to obtain responses from it very infrequently).

> Even in the happy case where every implementation that would honor .internal
> would also use aggressive negative caching, there is still some emission of
> traffic that we know, by design, in every case, is a complete waste of time.
> To the extent that we would want the DNS ecosystem to become more diverse, we
> scale that traffic.

That traffic will exist regardless of whether INTERNAL is added to the
registry. This proposal imposes work on resolver operators and offers no
meaningful protection to infrastructure or end users.

I realise that the work we are talking about is small. I know there is no
protocol police, Warren's sheriff badge notwithstanding. But all the tiny
exceptions and niggly extra considerations add up on top of a protocol that is
already unwieldy and undocumented. Every little extra nibble from the duck
moves us closer to mortal injury. This is Bert's camel; the additional burden
is tiny and insignificant until the camel dies.

> So: weighing the benefit of reducing traffic to the root by even a very small
> percentage (and if there is adoption of .internal, how small will it be after
> enterprises create as many names as they want?) against managing another TLD
> special case, in a world where several such already exist and there are no
> resolvers that can exist without conditional logic... I say we go for it.

Following the same line of thinking, it seems clear to me that there is almost
zero benefit to weigh against the work involved in this.


Joe

_______________________________________________
DNSOP mailing list -- [email protected]
To unsubscribe send an email to [email protected]
