> http://www.cloudshield.com/applications/dns-control-traffic-load.asp
Besides questions others have raised, I wonder how filtering requests that would produce NODATA, thereby forcing clients to retry and tripling that traffic, reduces total DNS traffic. A single DNSSEC-signed NODATA response can be somewhat larger than the two retried requests, but I've generally hit packets/second limits in hosts, routers, and firewalls before bits/second limits.

I also wonder how sending "300 A 127.0.0.1" instead of NXDOMAIN, and so replacing negative cache entries that probably have much larger TTLs, "will drastically reduce traffic in case of an attack". How do those A records replace NXDOMAIN responses? Would he put wildcards in the DNS server, or have one of his employer's boxes respond to requests before they reach the DNS server? Given the talk of filtering requests upstream from DNS servers, I assume he envisions the latter tactic.

If so, I wonder how he expects that to play with DNSSEC. Maybe he sees problems in general from DNSSEC for smart boxes upstream of DNS servers, and that is why he has the negative view of DNSSEC expressed in
http://www.cloudshield.com/applications/dns-truth-about-dnssec.asp

On the other hand, this statement in that document might suggest confusion about DNSSEC:

> Even if DNSSEC were deployed broadly, it still would not ensure that
> DNS for a domain could not be misdirected. This is because DNSSEC does
> nothing to ensure that the listed authoritative name server for a
> domain name is one that is legitimately controlled by the owner of the
> domain name.

Or maybe the definition of "legitimately controlled" is unrelated to delegations from parent domains. Or maybe he is thinking of regimes in which the bad guys control the root trust anchors on all computers.

Vernon Schryver    [email protected]

_______________________________________________
dns-operations mailing list
[email protected]
https://lists.dns-oarc.net/mailman/listinfo/dns-operations

dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs
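The retry and negative-cache arithmetic behind these objections can be sketched. Every TTL, retry count, and re-lookup interval below is an illustrative assumption, not a figure from the CloudShield page:

```python
# Back-of-the-envelope comparison of per-client DNS packet rates under
# the strategies discussed above.  All numbers are illustrative
# assumptions, not measurements.

def packets_per_hour(ttl_seconds, packets_per_lookup):
    """Packets one client generates per hour, assuming it re-asks as
    soon as its cached answer (positive or negative) expires."""
    return (3600 / ttl_seconds) * packets_per_lookup

# Answer the query once: 1 query + 1 (possibly signed) NODATA = 2 packets,
# cached for an assumed 3600 s.
answered = packets_per_hour(ttl_seconds=3600, packets_per_lookup=2)

# Filter (drop) the query: a typical stub retries twice more before
# giving up, so each lookup costs 3 packets, and with nothing cached the
# application re-tries at its own interval (assume 60 s).
filtered = packets_per_hour(ttl_seconds=60, packets_per_lookup=3)

# "300 A 127.0.0.1" instead of NXDOMAIN: the bogus positive answer
# expires after 300 s, while the NXDOMAIN it replaces would be
# negatively cached for the SOA minimum (often far longer; assume 3600 s).
bogus_a  = packets_per_hour(ttl_seconds=300,  packets_per_lookup=2)
nxdomain = packets_per_hour(ttl_seconds=3600, packets_per_lookup=2)

print(answered, filtered, bogus_a, nxdomain)   # 2.0 180.0 24.0 2.0
```

Under these assumptions, dropping the query and shortening the effective negative TTL both multiply packet rates rather than reducing them, which is the point at issue.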
