James Craig Burley wrote:

Going back to my earlier questions, which I'll rephrase and ask you:

  Does DNS rely on local caching to avoid latencies related to network
  topology and potential problems with overloaded or unreachable Root
  servers?

Your question is based on a false premise. You seem to be fixated on the root servers as the primary point of failure (or you are being imprecise in your terminology). The 13 root servers contain only the records for the TLD servers, which is a very small set of records. The root servers were attacked in late 2002, and no one noticed any effect (except the media).


Perhaps you are making the all-too-common conflation of the root servers with the gTLD and ccTLD servers. These are the servers which actually contain the information on every registered domain. The gTLD (.COM, .NET) servers in particular are massively redundant (using, among other things, anycast to host the same IP address in multiple physical locations). The same attack that targeted the root servers was then shifted to the gTLD servers, at which point the upstream networks were set to block the ICMP packets being used. At no point in the attack was there anything noticeable to the average user.

At least in the case of the 8 gTLD servers in the US, they are operated on machines with enough RAM that the entire database is resident in memory. It is, however, true that some (most?) of the ccTLDs are not so heavily redundant, but if they are following the recommendations in RFC 2870, they are scaled to handle at least three times their measured peak loads.

So yes, local DNS caching is used to avoid some network latencies, but no, the root (or even the gTLD) servers are not an issue. If persons unknown decide to attack the root or TLD servers (and should they somehow succeed), the fact that SPF queries would be affected too is a minor footnote: everything related to DNS would stop working.
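The caching a local resolver does is simple in principle: each answer carries a TTL set by the authoritative server, and the resolver reuses the answer until the TTL expires. A minimal sketch (names and addresses invented for illustration; real resolvers do much more, e.g. negative caching):

```python
import time

class TTLCache:
    """Toy resolver cache: answers expire after the TTL the
    authoritative server attached to them."""

    def __init__(self):
        self._store = {}  # name -> (answer, expiry timestamp)

    def put(self, name, answer, ttl):
        self._store[name] = (answer, time.monotonic() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None  # cache miss: must query upstream
        answer, expiry = entry
        if time.monotonic() > expiry:
            del self._store[name]  # expired: behaves like a miss
            return None
        return answer

cache = TTLCache()
cache.put("example.com", "192.0.2.1", ttl=3600)
```

While the entry is live, repeated lookups for the same name never leave the local network, which is why root/TLD load is not the bottleneck people assume.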

  Does the local caching rely on locality of reference over the set of
  lookups performed?

By this, I have to assume that you are referring to glue records, which can be cached to let queries short-circuit the entire resolution from the root. But, again, your question relies on a hidden (and I would argue false) premise that the majority of queries go to significant depth in the hierarchy. In the case of rDNS queries there can be multiple delegations to smaller and smaller IP blocks, so the depth of those queries would be higher.


However, we are specifically discussing queries based on domain name, which in the case of a gTLD will never be more than 3 queries deep (ignoring the effect of local caching) in practice. And I say "in practice" meaning specifically "the Real World." No domain administrator is going to delegate a subdomain to any agent they do not completely trust. So the *only* thing to worry about is rogue TLD domains themselves (see next answer).
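The three-query bound can be made concrete with a toy model of iterative resolution: the resolver asks the root, is referred to the TLD servers, is referred to the authoritative server, and gets its answer. All names and addresses below are invented for illustration:

```python
# Each zone knows only its own delegations or records.
ROOT = {"com.": "tld-server"}                # root: TLD delegations only
TLD = {"example.com.": "auth-server"}        # gTLD: registered domains
AUTH = {"mail.example.com.": "192.0.2.25"}   # authoritative: actual records

def resolve(name):
    """Return (answer, query count) for a name under a gTLD."""
    labels = name.rstrip(".").split(".")
    queries = 0
    # Query 1: the root server returns a referral to the TLD servers.
    referral = ROOT[labels[-1] + "."]
    queries += 1
    # Query 2: the TLD server returns a referral to the authoritative server.
    referral = TLD[".".join(labels[-2:]) + "."]
    queries += 1
    # Query 3: the authoritative server returns the record itself.
    answer = AUTH[name]
    queries += 1
    return answer, queries

answer, depth = resolve("mail.example.com.")
```

With caching, the first two queries are usually skipped entirely, since the TLD referrals are cached by virtually every resolver.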


  Are SPF-based DNS lookups under the control of a local user
  population, or of external, potentially hostile, entities?

The SPF-based *lookups* are under the control of the MTA administrator; the *answers* are under the control of the external entities. There is no doubt in my mind that if SPF/Caller ID efforts begin to have an effect, spammers will start to publish their own SPF records. However, the local MTA administrators can choose not to trust specific domains, through the use of an SPF-specific blacklist.


The real strength of systems like SPF comes when you consider the current trend toward zombie armies of spam relays. These exploited computers send e-mail with forged Sender/From information from random machine IPs inside networks that haven't firewalled port 25 (through either sloth or stupidity). If an SPF record exists for the forged Sender domain, the traffic from these zombies is blocked in the SMTP transaction as 100% guaranteed spam. If Comcast decides to publish an SPF record permitting all IP addresses in their block to send e-mail (hint: this is bad), then no one will trust Comcast's SPF records.
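The core of the check is just range membership: does the connecting IP fall inside the ranges the domain owner published? A minimal sketch, handling only "ip4:" mechanisms and a trailing "-all" (real SPF has many more mechanisms: a, mx, include, and so on; the record and addresses here are hypothetical):

```python
import ipaddress

def spf_permits(record, client_ip):
    """Return True if client_ip is an authorized sender per the
    (simplified) SPF record, False on the '-all' catch-all."""
    ip = ipaddress.ip_address(client_ip)
    for term in record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:]):
                return True   # an authorized relay for this domain
        elif term == "-all":
            return False      # everything else is an explicit fail
    return False

# Hypothetical record: the domain authorizes only its own /28 of relays.
record = "v=spf1 ip4:192.0.2.0/28 -all"
```

A zombie at, say, 203.0.113.9 forging that domain in its Sender would fail this check, and the receiving MTA can reject it during the SMTP transaction, before the message body is even transmitted.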

This discussion is going nowhere; either you are a troll (someone who argues just for the intellectual challenge), or you simply don't agree that SPF will help prevent certain unauthorized e-mail. Either way, I don't see any hope of convincing you that your theoretical weaknesses are unrealistic given actual, real-world usage patterns. Consider this my last posting on the subject.

FWIW, I am publishing SPF records and experimenting with SPF checks on my domains. I'm also using multiple blacklists to block known spam sources and various content-analysis tools to tag additional spam.

John
