Tracy R Reed wrote:

I have been a big fan of distributed hash tables (DHTs) for the efficient distribution of information for a few years now. Ever since I first heard about Freenet, really. Freenet itself has given up on much of the DHT concept for various reasons, most having to do with anonymity. Kademlia is another very interesting DHT, and BitTorrent uses the Kademlia design to good effect. But there are much more important infrastructure-level issues that could make use of DHTs to dramatically improve the reliability and efficiency of the Internet. DNS is one place where a DHT would seem to be a very good solution:
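The core idea behind Kademlia is simple enough to sketch in a few lines: every node and every key gets an ID, and "distance" between IDs is just bitwise XOR, so a lookup walks toward whichever nodes are XOR-closest to the key. This toy sketch uses 8-bit integer IDs for readability (real Kademlia uses 160-bit IDs and bucketed routing tables, which are omitted here):

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: bitwise XOR of two IDs."""
    return a ^ b

def closest_nodes(key: int, node_ids: list[int], k: int = 3) -> list[int]:
    """Return the k node IDs closest to `key` under the XOR metric.
    In real Kademlia these k nodes are the ones asked to store/serve the key."""
    return sorted(node_ids, key=lambda n: xor_distance(key, n))[:k]

# Toy network of four nodes with 8-bit IDs:
nodes = [0b00010000, 0b10100000, 0b00010111, 0b11110000]
# A key hashing to 0b00010011 lands on the two XOR-closest nodes:
print(closest_nodes(0b00010011, nodes, k=2))  # -> [16, 23]
```

Because XOR is symmetric and unidirectional, every node converges on the same answer for who "owns" a key, which is what lets lookups complete in O(log n) hops without any central directory.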

http://www.cs.cornell.edu/people/egs/beehive/codons.php


Here is a nice summary of how it all works which I found while googling:

http://www.cs.toronto.edu/syslab/courses/csc2231/05au/reviews/HTML/16/0001.html

The problem is that this is using a hammer to swat a fly.

What DNS really needs is for major ISPs to get regular zone transfers, with incremental updates, from the roots. A good example of this kind of hierarchical structure is NTP.
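The machinery for this already exists in ordinary nameserver software: a secondary server pulls a full zone via AXFR and keeps it current with incremental IXFR updates. As a hypothetical illustration (the zone name and master address are made up), a BIND secondary at an ISP would need nothing more than:

```
// Hypothetical BIND named.conf fragment: act as a secondary for a zone,
// pulling it via AXFR/IXFR from a master at 192.0.2.1 and serving it locally.
zone "example.com" {
    type slave;
    file "slaves/example.com.db";
    masters { 192.0.2.1; };
};
```

The same mechanism pointed at the root zone is essentially the proposal here: the ISP holds a complete local copy and only needs to talk upstream when an incremental transfer is due.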

An NTP client is never supposed to hit a stratum-1 or stratum-2 server directly; it is supposed to hit something much more local (see the fiasco between Netgear and Poul-Henning Kamp for what happens when this breaks down). Generally, every ISP runs its own lower-stratum time service. This should be the same with DNS.
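Concretely, the NTP discipline is just a matter of client configuration: point at your ISP's servers, not at the top of the hierarchy. A hypothetical ntp.conf fragment (the hostnames are invented for illustration) looks like:

```
# Hypothetical ntp.conf for a client inside an ISP's network:
# sync against the ISP's own local time servers rather than
# hammering public stratum-1 machines.
server ntp1.isp.example iburst
server ntp2.isp.example iburst
```

The Netgear incident was exactly this rule being violated at scale: hundreds of thousands of routers hardcoded to a single upstream server instead of something local.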

Then recursive queries stop at the ISP 99.9% of the time. In addition, the DNS system could quite easily survive the loss of a root, since the ISPs would simply hold the entire table until the next zone transfer occurs (i.e., no timeout at the full-zone level).

All it would take is a single root and one major ISP to get this working. Then network effects can take over.

-a


--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list
