Let me try to tackle a few things before heading off to my next meeting to conjure other stupid ideas! :-)
Florian Weimer <[email protected]>:

> This is a bit over the top. I've suggested multiple times that one
> possible way to make DNS cache poisoning less attractive is to cache
> only records which are stable over multiple upstream responses, and
> limit the time-to-live not just in seconds, but also in client
> responses. Expiry in terms of client responses does not cause a cache
> expiration, but a new upstream query once the record is needed again.
> If the new response matches what is currently in the cache, double
> the new client response time-to-live count from the previous starting
> value. If not, start again at the default low value (perhaps even 1).

Sure ... this may well be a fine enough idea. Lots of such point
solutions are perfectly fine ideas. Our point is to step back and ask
whether these resolvers (whether ISP-level, institution-level, CPE,
whatever) really provide a large enough benefit when weighed against
the cost of the attack surface they create. And the paper
shows---at least in an initial fashion---that the benefits might not
be all that great.

Andrew Sullivan <[email protected]>:

> There's a third cost here, and that is a large increase in costs to
> authoritative server operators.
>
> That might be worth trading off, but it won't do to pretend that isn't
> a cost that's incurred.

I absolutely agree. Please see section 5, which addresses exactly this
question. We use .com as an example of a popular authoritative domain
in this work.

Frank Sweetser <[email protected]>:

> We make pretty heavy use of RPZ to block outbound malware traffic,
> especially to prevent people from inadvertently browsing malicious web
> sites.

Yup, this would be a cost of getting rid of resolvers, I agree. We
mention policy implementation in the paper, but don't say a lot about
it. My view here is that you could certainly still implement exactly
this sort of policy by funneling traffic through a resolver. That
might be the right tradeoff for you.
But the right default may well be to let clients do lookups themselves
and treat this situation as the exception, in places where folks want
to implement such policy. In other words, just because there may well
be times we want to use an intermediate resolver for good and valid
reasons does not mean that running zillions of such boxes all over the
place (as we do now) is the right general approach.

Matthew Pounsett <[email protected]>:

> The paper also appears to make the assumption that eliminating
> existing resolvers is a thing we can do. Open recursive resolvers
> won't go away simply because we, as an industry, decide to stop
> setting up new ones. There's no way to prevent them from sending
> queries (or to selectively block them), and they are almost by
> definition unmanaged, so we cannot expect they will be taken offline
> by their respective administrators.

Sure, I agree with this. But if we make clients default to not using
resolvers, then the harm resolvers can do is reduced. I.e., so what if
I can poison the cache of a CPE device if none of the clients behind
it use it for lookups? Of course, if the CPE is open, we still have
reflection/amplification problems. But if nobody is using the DNS
forwarders on these things, then maybe they eventually go away. We are
not under the illusion that one can wave a magic wand and get rid of
these things. But individual clients can get benefits from ignoring
them. And if they are generally not found to be useful anymore, then
maybe they start going away.

Thanks, folks!

allman
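P.S. For concreteness, here is a minimal sketch of Florian's
response-count time-to-live idea. The class and names are my own
invention (not from any deployed resolver); it only illustrates the
doubling/reset behavior he describes.

```python
INITIAL_BUDGET = 1  # the "default low value (perhaps even 1)"


class Entry:
    """A cached record with a budget counted in client responses."""

    def __init__(self, rdata, budget):
        self.rdata = rdata
        self.start_budget = budget  # starting value; doubled if stable
        self.remaining = budget     # client responses left before re-query


class ResponseCountCache:
    """Cache records only while they prove stable across upstream lookups.

    Exhausting the response budget does not evict the entry; it forces a
    fresh upstream query the next time the record is needed. A matching
    answer doubles the budget from its previous starting value; a
    mismatch resets it to the default low value.
    """

    def __init__(self, upstream_lookup):
        self.upstream = upstream_lookup  # callable: name -> rdata
        self.entries = {}

    def resolve(self, name):
        entry = self.entries.get(name)
        if entry is not None and entry.remaining > 0:
            # Still within the client-response budget: answer from cache.
            entry.remaining -= 1
            return entry.rdata
        # Budget exhausted (or record unknown): query upstream again.
        rdata = self.upstream(name)
        if entry is not None and rdata == entry.rdata:
            new_budget = entry.start_budget * 2  # stable: double
        else:
            new_budget = INITIAL_BUDGET          # changed: start over
        entry = Entry(rdata, new_budget)
        entry.remaining = new_budget - 1  # this response consumes one
        self.entries[name] = entry
        return rdata
```

For a stable record, upstream traffic decays geometrically (re-queries
on client responses 1, 2, 4, 8, ...), while a record whose answer keeps
changing is re-fetched on essentially every lookup, which is what makes
a poisoned answer short-lived.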
_______________________________________________
dns-operations mailing list
[email protected]
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs
