The following is an excerpt from section "4.2.  FQDNs are not
sufficient" in Brian's draft-carpenter-referral-ps-02.txt

>    o  In large networks, it is now quite common that the DNS
>       administrator is out of touch with the applications user or
>       administrator, and as a result, that the DNS is out of sync with
>       reality.
>    o  DNS was never designed to accommodate mobile or roaming hosts,
>       whose locator may change rapidly.
>    o  DNS has never been satisfactorily adapted to isolated,
>       transiently-connected, or ad hoc networks.
>    o  It is no longer reasonable to assume that all addresses associated
>       with a DNS name are bound to a single host.  One result is that
>       the DNS name might suffice for an initial connection, but a
>       specific address is needed to rebind to the same peer, say, to
>       recover from a broken connection.
>    o  It is no longer reasonable to assume that a DNS query will return
>       all usable addresses for a host.
>    o  Hosts may be identified by a different URI per service: no unique
>       URI scheme, meaning no single FQDN, will apply.

The bottom line is that I really don't see any serious problem with
DNS.  It doesn't solve everything, but it was never intended to, and
replacing it with a new mapping of names to addresses and other data
won't change anything.

The following are comments on each point individually.

>    o  In large networks, it is now quite common that the DNS
>       administrator is out of touch with the applications user or
>       administrator, and as a result, that the DNS is out of sync with
>       reality.

This is not a technical problem with DNS.  Large DNS domains can be
split into multiple subdomains.  For example, universities have always
delegated subdomains to departments that wanted them.
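As an illustration (the university and department names here are
hypothetical), a parent zone delegates a subdomain to the
department's own servers with NS records and glue like this:

```
; In the example.edu zone file: delegate cs.example.edu to the
; department's own name servers (hypothetical names/addresses).
cs.example.edu.      IN NS ns1.cs.example.edu.
cs.example.edu.      IN NS ns2.cs.example.edu.
; Glue records so resolvers can find the delegated servers.
ns1.cs.example.edu.  IN A  192.0.2.53
ns2.cs.example.edu.  IN A  198.51.100.53
```

After that, the department administers its own zone data without
involving the central DNS administrator at all.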

It may be that a lot of enterprises that run Microsoft junk that maps
into DNS are loath to use subdomains, because delegating relinquishes
control (a turf issue) or because the Microsoft junk gets harder to
administer when subdomains are in use.  [That's the excuse I heard at
the last corporation I worked at - I'm not sure whether it is valid.]

>    o  DNS was never designed to accommodate mobile or roaming hosts,
>       whose locator may change rapidly.

Highly dynamic name resolution is not the answer to mobility.  If it
were, getting rid of the 15-minute minimum TTL would do it.

But that would not solve the problem.  The name would have to be
updated at exactly the same moment the new address is acquired, if
the old address is lost at the same time.
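For illustration only (the name and address are hypothetical), a
record with a very short TTL looks like this; even so, resolvers
cache it for the full TTL, so a rapidly moving host can still be
unreachable by name:

```
; Hypothetical record with a 30-second TTL instead of the usual
; minimum.  Caches may serve the stale address for up to 30 s
; after the host moves.
mobile.example.com.  30 IN A 192.0.2.17
```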

The only reliable means of supporting unlimited mobility so far
involves a fixed address, a fixed base station, and a means of
updating a tunnel from the base station to the mobile device.

>    o  DNS has never been satisfactorily adapted to isolated,
>       transiently-connected, or ad hoc networks.

The suggestion is to delegate to the home network and provide
secondary servers to the transiently-connected domain, which wants to
be primary.
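A sketch of that arrangement, with hypothetical names: the primary
sits inside the transiently-connected network, while always-reachable
secondaries live at the home site:

```
; SOA names the primary inside the transient network (hypothetical).
field.example.com.  IN SOA ns1.field.example.com. hostmaster.example.com. (
                        2024010101 3600 900 604800 300 )
field.example.com.  IN NS ns1.field.example.com.  ; primary, on-site
field.example.com.  IN NS ns2.example.com.        ; secondary, at home
field.example.com.  IN NS ns3.example.com.        ; secondary, at home
```

While the field network is disconnected, the home-site secondaries
keep answering with the last transferred zone data.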

The same solutions to mobility apply to ad-hoc networks.  The
infrastructure is unnamed, but the nodes within it can be handled as
mobile nodes, including any routers that are mobile.

>    o  It is no longer reasonable to assume that all addresses associated
>       with a DNS name are bound to a single host.  One result is that
>       the DNS name might suffice for an initial connection, but a
>       specific address is needed to rebind to the same peer, say, to
>       recover from a broken connection.

In the cases where this is true, the FQDN (www.content-provider.com)
is intentionally mapped to a set of servers.  Using HTTP 1.1 for
example, there is no need to get the remaining chunks of a broken
connection from the same server.  Any of them should return the same
result.  In cases such as queries where results could vary, a redirect
is done on initial query.  Often this redirect is done anyway to point
to a topologically closer server or a server with lower utilization.
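For illustration (the host names are hypothetical), such a redirect
binds the client to one specific server, whose own name it can then
reuse to rebind after a broken connection:

```
GET /search?q=dns HTTP/1.1
Host: www.content-provider.com

HTTP/1.1 302 Found
Location: http://srv17.content-provider.com/search?q=dns
```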

>    o  It is no longer reasonable to assume that a DNS query will return
>       all usable addresses for a host.

Nor does it matter.  Only one always-reachable address is needed.  It
is about time that more than just routers started assigning a GUA to
the loopback interface and putting that in the DNS, so that no matter
which interfaces are down or unreachable, the host remains reachable
as long as at least one interface is up.
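A sketch of the idea, with hypothetical addresses, following the
common practice for router loopbacks:

```
# Hypothetical Linux example: give the loopback a stable GUA.
ip -6 addr add 2001:db8:1::42/128 dev lo
# Then publish only that address in the DNS, e.g.:
#   host.example.com.  IN AAAA 2001:db8:1::42
# Provided routing advertises the /128, the host stays reachable
# by name while any physical interface is up.
```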

>    o  Hosts may be identified by a different URI per service: no unique
>       URI scheme, meaning no single FQDN, will apply.

CNAMEs are very helpful.  I like keeping certain services as CNAME,
such as www, krb, cvs, svn.  Mail has its own redirection built in as
MX records.  If I want to ssh into the kerberos kdc I can use the
CNAME kdc.  It doesn't matter that this host may have other names.  I
can easily find out the FQDN that the CNAME points to.
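A sketch of that setup with hypothetical names: per-service names are
CNAMEs onto the real hosts, and mail uses its own MX indirection:

```
; Hypothetical zone fragment: one name per service.
www.example.com.     IN CNAME server1.example.com.
kdc.example.com.     IN CNAME server1.example.com.
svn.example.com.     IN CNAME server2.example.com.
; Mail has its own indirection via MX.
example.com.         IN MX 10 mail.example.com.
server1.example.com. IN A  192.0.2.10
server2.example.com. IN A  192.0.2.20
```

Resolving kdc.example.com returns the canonical name along with the
address, so the FQDN behind the CNAME is always discoverable.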

Curtis
_______________________________________________
homenet mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/homenet