Erik Nordmark <[EMAIL PROTECTED]> wrote:

|Being the probable guilty party for introducing this thought back in
|draft-*-site-prefixes-00.txt I can offer a slightly expanded perspective.
|
|I don't think stable addresses per se is the key thing - it is
|the robustness of the communication that is important.

Given that nobody seems to know how to make communication (as many applications
currently understand and use it) robust in the face of _unstable_ addresses, it
isn't obvious that this distinction is of much utility.

|This robustness has at least two factors that are relevant in this
|discussion: the stability of the addresses, and the leakage of
|non-global scope addresses.  I think the question is how to weigh those
|together.

Ok, but it isn't clear that these two factors are of even remotely similar 
weight.  Leakage is a problem that can be addressed, but there are a lot of
things that simply will not work without stable addresses (at least not without
a complete overhaul of many higher-level protocols).

|In terms of the stability of the addresses one has to take into account
|both stability as it relates to local communication and stability for
|global communication.

We have always been told that stable global v6 addresses will not be available
to end users, or at least not at a low cost.
Unless you are proposing to revise the whole address allocation architecture
*and* have a way to force ISPs to change their business models I think we must
accept this as a given.

|If you assume that the value/importance of local
|communication is much higher than the value/importance of global communication
|then site-locals make sense to explore.

I think you have made an unreasonable leap by dropping the "stable" qualifier.
The value/importance of _stable_ local communication is almost certainly much
higher than the value/importance of _stable_ global communication.  We already
accept that sometimes when we click a link in a browser things go wrong and we
have to try again.  Protocols like SMTP that drive global email assume that
almost anything can go wrong and have the capability to queue messages for days
if necessary.  But my NFS client is simply not prepared to have its server's
address renumbered out from under it.  My multi-hour build will fail unless
I notice the problem and fire up adb on the kernel in a hurry.  Similarly my
print job will fail if the printer or the client is renumbered in the middle
of a TCP session.  My distributed home automation system, while quite tolerant
of temporarily lost connections and machine reboots, cannot deal with addresses
changing out from under it.  This is hardly unreasonable, because the tools to
deal gracefully with such situations have not yet been invented.  To make such
things work now each application would have to implement its own procedures to
deal with unstable addresses.  This is obviously not acceptable to application
writers.
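To make the point concrete, here is a minimal sketch (not from the original
message; the `resolve` and `connect` hooks are hypothetical stand-ins) of the
kind of per-application machinery every program would need: on a connection
failure, re-resolve the server's name in case its address was renumbered, then
retry.

```python
def connect_with_rebind(host, port, resolve, connect, retries=3):
    """Connect to (host, port); on failure, re-resolve host and retry.

    `resolve` maps a name to its current address; `connect` attempts a
    connection to (addr, port) and raises OSError on failure.  Both are
    injected parameters (hypothetical) so the sketch stays self-contained;
    a real application would use the socket library and its own protocol's
    recovery rules.
    """
    last_err = None
    for _ in range(retries):
        addr = resolve(host)          # pick up any renumbering
        try:
            return connect(addr, port)
        except OSError as err:
            last_err = err            # stale address; try resolving again
    raise last_err
```

Even this toy version shows the problem: the retry policy, the re-resolution
step, and the decision about what state can survive a reconnect are all
application-specific, which is exactly the burden application writers refuse
to take on.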

|(FWIW that was my assumption
|way back).
|Such an assumption might make sense if we say that a site is an
|administrative concept (such as a company), but it makes less sense
|if a site is a geographic concept and as I understand the original
|thoughts a site was intended to be a geographic concept like a building
|or a campus.

We understand that sites are administrative.

|In any case, for a home user I suspect that the value/importance of
|local communication would typically be less than the value/importance
|of global communication.

Again assuming we are talking about _stable_ communication, I believe that 
you are incorrect.  Granted those of us who depend on our home networks for
automation and such are currently on the bleeding edge, but what about the
future when every stereo and tv is on the net?  It's one thing to have to
re-click that remote link in the browser, but quite another to have your
stereo refuse to change channels.  Consumers are not going to pay their ISP
a premium to keep their stereos working.  I know it sounds nice in theory,
but look what happened to Divx.  If you take away scoped addressing we _will_
use NAT.

|Thus the ISP offering a service with unstable
|global addresses I don't think it would be that satisfactory
|for the peer-to-peer communication that we wish to enable with IPv6,
|even if there are stable site-local addresses so that the user can
|communicate inside their home without a glitch.

I'm having trouble parsing the above.  ISPs currently offer unstable v4
addresses (unless you pay extra) and they aren't satisfactory for the
peer-to-peer communication we used to have back when address space was
portable.  People have worked around this with various application-level
kludges.  You obviously recognize that stable addresses have value, so
I don't understand why you expect that ISPs will suddenly stop charging for
that value.  Depriving users of the tools necessary to make productive use
of their networks without paying for stable globals for all internal nodes
will just encourage yet another round of kludges.

|So let's not lose sight of the fact that the goal is a robust network.

I think that the goal is a useful network--useful not only for ISPs and
application vendors but for consumers.

                                Dan Lanciani
                                ddl@danlan.*com
--------------------------------------------------------------------
IETF IPng Working Group Mailing List
IPng Home Page:                      http://playground.sun.com/ipng
FTP archive:                      ftp://playground.sun.com/pub/ipng
Direct all administrative requests to [EMAIL PROTECTED]
--------------------------------------------------------------------