> When all the sources are broken your test works. Try daisy chaining
> servers which always send CD=1 (current advice). Have 2 sets of
> servers for the zone, some with good answers and some with broken
> answers. Turn off the good servers. Prime the daisy chain. Turn
> on the good servers. Try to retrieve the answer. This simulates
> spoofed answers being accepted by the end of the daisy chain.
Section 3.1 of the draft seems to be a bit at odds with Section 3.2.2 of RFC 4035. The draft suggests that a validating resolver, when processing a query with CD=1, puts RRsets in the cache without validating them and then re-uses those cached RRsets for queries with CD=0. My understanding of the model in RFC 4035 is that a validating resolver only puts validated RRsets in the normal cache and puts RRsets that fail validation in a BAD cache (that model is sketched below).

And then in Section 4.2 the draft seems to propose what is already in RFC 4035, though it adds a more aggressive retry strategy to try to fetch good data.

One option is to make this draft informational and have it describe how caching and validation should be done in a validating recursor. The draft already does this to some extent, with references to various RFCs that discuss setting TTLs, EDNS(0), etc.

The question is whether a more aggressive retry strategy warrants a standards track document. As part of an attack this failure mode seems highly specific and requires the attacker to be able to spoof packets. With that capability it seems very unlikely that the attacker would try to DoS a DNSSEC-signed zone.

More aggressive retrying may also help with configuration errors or other operational issues, but in that case it would be good to have some data on how often this would fix a real-world problem. The cost of retrying is that it may provide more opportunities for other DoS attacks and, in general, increase the network load during a failure (a bounded version of such a retry loop is sketched at the end of this message).
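For concreteness, here is a minimal sketch of the two-cache model as I read RFC 4035 (plain Python pseudocode; the cache structures and the validate() stub are illustrative placeholders, not any resolver's actual API):

    # Illustrative only: validated RRsets go into the normal cache,
    # RRsets that fail validation go into a separate BAD cache, and
    # answers fetched for CD=1 queries are not reused for CD=0.
    SERVFAIL = object()      # placeholder failure marker
    normal_cache = {}        # validated RRsets only
    bad_cache = {}           # failed RRsets, kept to rate-limit refetching

    def validate(rrset):     # stand-in for real DNSSEC validation
        return rrset.get("secure", False)

    def handle_response(qname, rrset, cd):
        if cd:
            # CD=1: return the data unvalidated, but keep it out of
            # the cache that CD=0 queries are answered from.
            return rrset
        if validate(rrset):
            normal_cache[qname] = rrset
            return rrset
        bad_cache[qname] = rrset
        return SERVFAIL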

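And a sketch of what a bounded version of the more aggressive retry strategy might look like (again illustrative; the function names, the retry cap, and the server-selection policy are my assumptions, not the draft's):

    # Illustrative only: on a validation failure, re-query other
    # authoritative servers for the zone, but cap the total number of
    # upstream queries to limit the extra load discussed above.
    def fetch_with_retries(query, servers, send, validate, max_tries=3):
        tries = 0
        for server in servers:
            if tries >= max_tries:
                break                # cap the amplification
            tries += 1
            rrset = send(query, server)
            if rrset is not None and validate(rrset):
                return rrset         # first answer that validates wins
        return None                  # every attempt failed or cap reached

Capping the number of retries is what keeps such a strategy from turning into the DoS amplifier mentioned above.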