At 09:32 AM 2/20/99 -0800, Greg Skinner wrote:
>Roeland Meyer wrote:
>
>> Greg Skinner wrote:
>
>>> Once upon a time, there were some people who thought that if you added
>>> more bandwidth to the Arpanet, the congestion problems that were
>>> occurring at the time would go away. However, it took some studies by
>>> a control theorist to show that changes needed to be made to the TCP
>>> protocol to relieve congestion.
>
>> Nice fairy tale, however, it does not apply here.
>
>If you don't believe me, go to the IETF archives of 1987 (if you can
>find them), and look up the discussions concerning congestion on the
>Arpanet backbone of the Internet at that time. Or ask Van Jacobson
>yourself. I am not making any of this up.
My use of the term "fairy tale" may have been inappropriate; however, the
point remains: it does not apply here. BTW, one horsepower solution is to
deploy root-server clusters, rather than single servers, and to distribute
many of them around the world. In this case the horsepower is
architectural. MHSC has been developing work-clusters for quite a few
years, and distributed meta-clusters are a very exciting field. The point
is that there *are* solutions screaming to be used. That we have not yet
found a pressing need for them speaks well for the original BIND design.
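To make the horsepower point concrete, here is a rough sketch of a
resolver front-end spreading root queries across a cluster of servers
(Python, using the dnspython library; the addresses and names are
placeholders of mine, not MHSC code):

    # Sketch: spread root queries over a cluster of root servers instead
    # of relying on one box. The addresses below are placeholders.
    import itertools
    import dns.message
    import dns.query

    ROOT_CLUSTER = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]  # hypothetical members
    _next_server = itertools.cycle(ROOT_CLUSTER)

    def query_root(name, rdtype="NS"):
        """Send one query to the next cluster member (simple round robin)."""
        server = next(_next_server)
        q = dns.message.make_query(name, rdtype)
        return dns.query.udp(q, server, timeout=2.0)

    # e.g. response = query_root("com.")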
>> Sure, but first you'll have to prove that there is a problem, Chicken
>> Little. Show me a failure mode that I can repeat. Point to code that shows
>> the architectural flaw. Yes, there is one small section, in the caching
>> code, that is slightly non-deterministic in certain conditions. However, my
>> personal examination did not yield any failure modes in the code. Testing,
>> both specific and general, also did not reveal any flaws.
>
>Here's an idea:
>
>Find some site that supports a very large mailing list where the
>subscribers are uniformly distributed throughout all the TLDs, and the
>mail server does a bidirectional name-to-address lookup on each
>incoming message (such things are done as part of spam prevention).
>Now consider what the effect would be, particularly on the DNS server
>that is taking on the bulk of queries from the mail server, if most of
>those addresses were within lots of new TLDs, rather than deeper in
>the existing tree. It seems to me that far more queries would be
>going to the roots. So now a burden is imposed not only on the root
>servers, but on any servers that might otherwise be able to cache
>intermediate results.
This is not an adequate test: the sendmail overhead would swamp the DNS
effects in the results.
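For clarity, the check being described is a reverse lookup followed by a
forward lookup on the answer. A minimal sketch of that check (Python,
assuming the dnspython library; a real MTA does this internally, and the
sendmail machinery wrapped around it is exactly the overhead I mean):

    # Sketch: bidirectional (reverse, then forward) lookup of the kind a
    # mail server might do on each incoming connection.
    import dns.exception
    import dns.resolver
    import dns.reversename

    def bidirectional_check(ip):
        """Return True only if ip -> name -> ip round-trips."""
        ptr = dns.reversename.from_address(ip)   # e.g. 4.3.2.1.in-addr.arpa.
        try:
            name = dns.resolver.resolve(ptr, "PTR")[0].target
            addrs = [r.address for r in dns.resolver.resolve(name, "A")]
        except dns.exception.DNSException:
            return False
        return ip in addrs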
>If everyone (?) kept a copy of their own root zone, sure, you wouldn't
>need the root servers any more. However, they still need to be kept
>in sync. Having dedicated root servers is a tradeoff. It's far
>easier to keep a small number of root servers in sync than a huge
>number of DNS servers. After all, part of the reason we have DNS is
>so we don't need to keep a large host table in sync any more, right?
We have been discussing ways and means to do this synchronization. It is
much better than host files, since primary DNS servers are vastly
outnumbered by secondaries. Only the primaries need to be synched in this
way; normal DNS operations can update their secondaries. A 12-hour sync
delay is also acceptable, since the root servers only get updated every
24 hours anyway.
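As a sketch of the secondary side of this (Python, assuming the dnspython
library; the address and zone handling are illustrative assumptions, not
a proposal for the actual mechanism):

    # Sketch: a secondary only has to compare SOA serials with its primary
    # and pull a zone transfer (AXFR) when the serial moves; only the
    # primaries need the out-of-band synchronization discussed above.
    import dns.query
    import dns.resolver
    import dns.zone

    PRIMARY = "192.0.2.53"   # hypothetical primary address
    ZONE = "."               # the root zone, in this discussion

    def refresh_if_stale(local_serial):
        """Pull the zone from the primary when its SOA serial is newer."""
        res = dns.resolver.Resolver(configure=False)
        res.nameservers = [PRIMARY]
        soa = res.resolve(ZONE, "SOA")[0]
        if soa.serial <= local_serial:
            return None      # still current, nothing to do
        return dns.zone.from_xfr(dns.query.xfr(PRIMARY, ZONE))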
___________________________________________________
Roeland M.J. Meyer -
e-mail: mailto:[EMAIL PROTECTED]
Internet phone: hawk.lvrmr.mhsc.com
Personal web pages: http://staff.mhsc.com/~rmeyer
Company web-site: http://www.mhsc.com
___________________________________________________
KISS ... gotta love it!