Tim Chown <[EMAIL PROTECTED]> wrote:

|On Mon, Nov 18, 2002 at 10:49:42PM -0500, Dan Lanciani wrote:
|> 
|> We have always been told that stable global v6 addresses will not be available
|> to end users, or at least will not be available to end users at a low cost.
|
|Told by who?

Folks on this list every time provider-independent global addresses or stable
identifiers are brought up...

|I can see ISPs wanting to charge for extra services where they
|can, and thus for a static /48 as they do now for a static single IPv4
|address.  But I would hope that enough ISPs would offer free static /48's
|to take custom from those who charge.

Given that this is not happening now for single static v4 addresses, your
analogy would seem to suggest the opposite.

|The only "snag" is that such ISPs
|with 10M customers would need a lot more than a /32, esp. taking into
|account the HD-ratio.

This isn't just a "snag."  The hierarchical addressing architecture consumes
address space exponential in the number of providers in the chain.  This makes
it very difficult to break away from the limited business models currently
contemplated.  In order to be the kind of provider you suggest the ISP probably
has to move "up" a level in the chain.  Suddenly that huge address space is
looking a lot smaller...
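To put rough numbers on that, here is a back-of-the-envelope sketch (Python; the 0.8 HD-ratio is the value discussed in RFC 3194, and the one-/48-per-customer assumption is mine):

```python
def usable_sites(prefix_bits, site_bits=48, hd_ratio=0.8):
    """Estimate how many /48 customer sites a provider block can
    realistically support under the HD-ratio utilization model
    (RFC 3194): utilized = total ** hd_ratio."""
    free_bits = site_bits - prefix_bits      # bits left for site IDs
    total = 2 ** free_bits                   # theoretical maximum
    return int(total ** hd_ratio)

# A /32 holds 2**16 = 65536 /48s on paper, but far fewer in practice:
print(usable_sites(32))                      # → 7131

# An ISP wanting 10 million /48 customers needs a much shorter prefix:
for p in range(32, 0, -1):
    if usable_sites(p) >= 10_000_000:
        print(f"/{p}")                       # → /18
        break
```

In other words, the jump from "normal" /32 allocation to serving ten million stable /48s is about fourteen bits of address space, which is exactly the "move up a level" problem described above.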


Erik Nordmark <[EMAIL PROTECTED]> wrote:

|If the ISP provides the disservice of unstable IP addresses I think we have
|a large class of problems. The fact that site-local addresses might ameliorate
|some of those problems doesn't magically make such ISP disservice useful.

No, but the existence of an alternative may help to avoid the disservice in
the first place.  By taking away the alternative you give the ISPs a huge
lever to charge whatever the market will bear, at least until NAT is available.
By reducing the need for (and thus the perceived value of) stable addresses
you may be able to reduce their cost.


Keith Moore <[EMAIL PROTECTED]> wrote:

|> We have always been told that stable global v6 addresses will not be available
|> to end users, or at least will not be available to end users at a low cost.
|
|I think this depends on what you mean by "stable".  For some reason the
|community has been reluctant to specify a number here, and the result
|is that people have widely varying ideas about what we can expect in practice.

The community has been reluctant to specify a number because it isn't clear
that stability is anything other than a binary attribute.  The worst part of keeping this
whole stability issue in limbo is that it's always possible to argue that things
will be "stable enough" that we don't have to solve the renumbering problem.
This in turn means that lack of stability will always be an encumbrance and
thus stability will command a premium.

|I don't think we want to let the reliable operation of applications be
|an accident, nor do I think we want to trust market forces to sort this
|out. 

I agree.  This is exactly why we desperately need scoped addressing or a
comparable mechanism that allows users to control the stability of their
networks.

|> Unless you are proposing to revise the whole address allocation architecture
|> *and* have a way to force ISPs to change their business models I think we must
|> accept this as a given.
|
|I don't think it's necessary to "force" anything, and casting things in these
|terms makes them seem more difficult than they really are.

I am merely observing current business practices and projecting a reasonable
continuation of those practices.  You on the other hand keep implying that
there will be a significant change in the business models, but you decline to
provide any plausible justification for your expectations.  Repeatedly asserting
that everything is going to be copacetic will not make it so.

|> I think you have made an unreasonable leap by dropping the "stable" qualifier.
|> The value/importance of _stable_ local communication is almost certainly much
|> higher than the value/importance of _stable_ global communication.  
|
|No.  You erroneously assume that different applications are used for local
|and global communications,

No.  The assumption is that people have different expectations and needs for
local versus global connectivity.  This is not my assumption but the assumption
of the v6 addressing architecture.  This assumption, while aesthetically
unappealing, is clearly correct for many users.  This assumption is behind the
compromise of giving up portable globals in exchange for a local solution that
is better integrated than NAT.

|and you are over-generalizing the case for 
|stable global communications from two very specific cases.  Too many people
|think that the Internet only needs to support web and email.  If that were
|the case we wouldn't need IPv6 at all.  
|
|The 'local' versus 'global' distinction is a false one.

These would be great points if you were arguing for (and I were arguing
against) a return to user-owned portable global/stable addresses.  But I've
never seen you arguing for such a return while I in fact have tried to make
the case for portable globals.  The v6 addressing architecture has already
taken away the ability to have stable global addressing without the (presumably
paid) cooperation of your ISP.  You are not proposing to fix this.  You are
proposing to also take away the ability to have stable local addressing.  Your
argument that we should give up stable local communication because it would be
nice if global communication were stable is specious.

If you would like to campaign for a return to user-owned global/routable/stable
addresses I'll support you 100%.  If that battle were won, the need for scoped
addressing would be significantly reduced.  But win that battle before you try
to take away stable locals.

|I run NFS over TCP 
|over IPv6 over long distances, and it works.

I ran RVD over the ARPANet before NFS ever existed.  Unfortunately, we are
in the minority of users.  The v6 architecture was designed to mimic (perhaps
even to the point of caricature) what the v4 internet has become: predominantly
a big cloud of ephemeral, dynamically-addressed clients talking to a smaller
cloud of static servers.  The v6 address architecture does not lend itself to
the peer-to-peer applications that were possible in the flat, pre-aggregation
v4 internet.  That doesn't mean that there won't be peer-to-peer applications
in v6, but they will end up using the same kinds of kludgy rendezvous protocols
that dynamically addressed v4 nodes now use.

|And yes, I'm screwed if 
|my ISP changes my IP address (fortunately they have agreed to not do that).  
|I also regularly send print jobs over the same connection.

I'm happy for you if you have a relationship with an ISP that will allow you
to keep stable address space when/if v6 gets beyond its experimental stage.
But don't you think you are being a bit shortsighted to try to take away the
only kinds of stable addresses that will be available to people who do not have
such relationships?

|> |In any case, for a home user I suspect that the value/importance of
|> |local communication would typically be less than the value/importance
|> |of global communication.
|> 
|> Again assuming we are talking about _stable_ communication, I believe that 
|> you are incorrect.  Granted those of us who depend on our home networks for
|> automation and such are currently on the bleeding edge, but what about the
|> future when every stereo and tv is on the net?  It's one thing to have to
|> re-click that remote link in the browser, but quite another to have your
|> stereo refuse to change channels.  Consumers are not going to pay their ISP
|> a premium to keep their stereos working.  I know it sounds nice in theory,
|> but look what happened to Divx.  If you take away scoped addressing we _will_
|> use NAT.
|
|Threatening to destroy the utility of IPv6 if you don't get your way 
|won't get you much support here.  Perhaps you should take another tack.

So in other words you don't have a solution for the problem without using
NAT or site locals?

|I don't understand why you expect that ISPs will treat v6 exactly the
|same as v4,

I've explained this several times already, but I'll explain it again.  ISPs
use addresses as a surrogate measure of bandwidth.  They use the number of
addresses to control the number of machines using bandwidth.  They use the
(lack of) stability of addresses to control server usage because servers are
perceived to consume more bandwidth (and in any case are a "premium" usage).
Nothing in the change from v4 to v6 in any way affects these considerations.

Now, as I've asked several times before, can you please explain why you think
things will change?

|when, if they do it this way, there is no reason for their
|customers to pay for this additional service.

Are you talking about v6 as an additional service over v4, or about stable
addresses as an additional service?  If the latter, then the answer is that
they will pay for the same reason they pay now, plus (if you have your way)
they will pay to keep their stereo and tv going smoothly right up until
v6 NAT is implemented in all the consumer/retail router appliances.  If
the former, then I think the answer is that most typical end users will not
seek out v6 as an additional service at all, since it offers virtually no
benefit to them while incurring significant upgrade costs.  They may
be forced to accept v6 first by claims that there are no more v4 addresses
and (much) later by owning widgets that use v6 exclusively.

In any case, I thought you said above that you did not want to trust market
forces to sort things out.  Yet you now seem to be falling back on an (IMHO)
extremely naive analysis of how market forces will sort things out...

|perhaps ISPs will charge 
|more for v6 than for v4, and they'll claim that they're doing so because 
|v6 addresses are stable.   

Perhaps.  I'm not sure what difference it makes.

|> Depriving users of the tools necessary to make productive use
|> of their networks without paying for stable globals for all internal nodes
|> will just encourage yet another round of kludges.
|
|Bottom line is that we need addresses to be (reasonably) stable at both
|the local and the global level.

We would like global addresses to be stable, but clearly we have a current
existence proof that we do not need this.  The world is getting on just fine
with unstable v4 addresses for lowly end users.  Even if we did need stable
global addresses the _need_ for stable global addresses would not obviate the
need for stable local addresses.  Perhaps the ready _availability_ of stable
global addresses would obviate the need for separate stable local addresses
(or perhaps not) but we are a far cry from being able to guarantee such
availability.

|It's not sufficient to just have stable
|local addresses

It may not be sufficient to have stable locals, but clearly it is necessary.

|- especially given the problems that SLs cause. 

The difficulty of implementing a solution has nothing to do with its
sufficiency.


Erik Nordmark <[EMAIL PROTECTED]> wrote:

|Sorry for the delayed response - didn't see me in the to: or cc: fields.

I try to keep all the mail to the list just to the list...

|> |In terms of the stability of the addresses one has to take into account
|> |both stability as it relates to local communication and stability for
|> |global communication.
|> 
|> We have always been told that stable global v6 addresses will not be
|> available to end users, or at least will not be available to end users at a
|> low cost. Unless you are proposing to revise the whole address allocation
|> architecture *and* have a way to force ISPs to change their business models
|> I think we must accept this as a given.
|
|The temporal stability of addresses has a temporary component - it isn't
|black and white.

That is at best unclear.  Ultimately I think many users will find that
stability is a binary property.  Either some outside party can change the
address out from under you or they can't.

|Crystal ball:
|I wouldn't be surprised if small sites renumber the IPv6 addresses once
|a year with an overlap (both new and old working for a week perhaps). 

How much will I have to pay to get this "small site" level of service
for my home?
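(For what it's worth, the mechanism for the overlap Erik describes already exists in stateless autoconfiguration's preferred/valid lifetimes.  A toy model, with illustrative time units and numbers of my own choosing:)

```python
def address_state(now, assigned_at, preferred_for, valid_for):
    """Lifetime states from IPv6 stateless autoconfiguration: a
    'deprecated' address still works for existing connections but
    is not selected for new ones; an 'invalid' one no longer works."""
    if now < assigned_at or now >= assigned_at + valid_for:
        return "invalid"
    if now < assigned_at + preferred_for:
        return "preferred"
    return "deprecated"

# Renumbering at day 365 with a one-week overlap: the old prefix is
# deprecated immediately but remains valid for the overlap period
# (day counts here are illustrative, not anything from the drafts).
old = address_state(now=368, assigned_at=0,   preferred_for=365, valid_for=372)
new = address_state(now=368, assigned_at=365, preferred_for=730, valid_for=730)
print(old, new)                              # → deprecated preferred
```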

|I see no reason why one would ever want to change them once a day.

Obviously there is no reason for a user to want to do this, but ISPs certainly
have reasons.

|Taking this together globally means that there will be lots of renumbering
|in progress at any given time (given millions of sites and a reasonably long
|overlap).

I agree that there will be lots of renumbering going on.

|Thus I'm not too worried about e.g. my home site changing IPv6 prefixes
|all the time.

I am worried.

| Of course, there is a concern for applications which keep
|a connection open for a long time (weeks),

My distributed home automation system's processes keep their connections open
constantly from the point where they are started in /etc/rc to the point where
the system(s) crash.  My systems have reached uptimes of over a year.  I have
big batteries on my UPS's and a backup generator for when they start to get low.

|but I suspect that such applications
|are, or need to be, capable of reconnecting since they need to be able to deal
|with peer failure.

My applications are fully capable of reconnecting in any sequence after any
machine or process dies for whatever reason.  But that is completely different
from being able to reconnect after addresses have changed.

|> I think you have made an unreasonable leap by dropping the "stable"
|> qualifier. The value/importance of _stable_ local communication is almost
|> certainly much higher than the value/importance of _stable_ global
|> communication.  
|
|Even with the "stable" qualifier my opinion is the same. 
|You are welcome to disagree.

I do disagree, but I get the feeling that it doesn't matter much.

|> But my NFS client is simply not prepared to have its server's
|> address renumbered out from under it. My multi-hour build will fail unless
|> I notice the problem and fire up adb on the kernel in a hurry.  Similarly my
|> print job will fail if the printer and/or client is/are renumbered in the
|> middle of a tcp session. 
|
|You seem to be assuming flash renumbering without overlap.

Yes, that is the only reasonable assumption to make.  Just as ISPs now use
short DHCP leases and related dynamic addressing techniques to discourage
what they perceive as "bandwidth hungry server activity" they will use frequent
renumbering to achieve the same goals.  Without site locals they will have the
added advantage of being able to disrupt not only your "server's" accessibility
from the internet, but your stereo and your tv as well.  At least they will be
able to do this until v6 NAT appears.

Please explain why you assume that ISPs will change their business models
under v6.

|> My distributed home automation system, while quite tolerant
|> of temporary lost connections and machine reboots, can not deal with
|> addresses changing out from under it.  This is hardly unreasonable because
|> the tools to deal gracefully with such situations have not yet been
|> invented.  To make such things work now each application would have to
|> implement its own procedures to deal with unstable addresses.  This is
|> obviously not acceptable to application writers.
|
|And the tools to deal in a robust manner with multiple scopes of addresses
|have not been applied to your home automation system either.

Sure they have.  As far as I can tell, my applications would work just fine
with site-locals and the default address selection rules.  I wouldn't even have
to play DNS games because all these applications look up their servers or peers
under a sub-domain I set up for that purpose.  All I would have to do is
populate that sub-domain with stable site-locals.
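A toy model of why this works, assuming the default address selection rules prefer a source whose scope matches the destination's (the scope encoding and prefix matching here are my own crude illustration, not the real sin6_scope_id machinery):

```python
LINK_LOCAL, SITE_LOCAL, GLOBAL = 1, 2, 3     # smaller = narrower scope

def scope(addr):
    """Crude textual scope classifier: fe80::/10 link-local,
    fec0::/10 site-local, everything else treated as global."""
    if addr.startswith("fe8"):
        return LINK_LOCAL
    if addr.startswith("fec"):
        return SITE_LOCAL
    return GLOBAL

def pick_source(candidates, dest):
    """Prefer a source address whose scope matches the destination's,
    then the nearest scope -- roughly the 'prefer appropriate scope'
    rule of the default address selection draft."""
    d = scope(dest)
    return min(candidates, key=lambda a: (scope(a) != d, abs(scope(a) - d)))

# A host with both a stable site-local and an ISP-assigned global
# address keeps using the site-local source for intra-site peers, so
# renumbering the global prefix never disturbs local sessions:
print(pick_source(["fec0::1:2", "3ffe:b00::1:2"], "fec0::9:9"))    # → fec0::1:2
print(pick_source(["fec0::1:2", "3ffe:b00::1:2"], "3ffe:c00::9"))  # → 3ffe:b00::1:2
```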

|To me it makes more sense to get applications to 
| - not cache DNS answers forever

So how long can you cache DNS answers?

| - be prepared to restart connections after redoing a DNS lookup 
|   if the connections are open for a long time (days or more).
|I think this is less work, since many applications can get away with
|doing nothing, than having all applications handle different scope addresses.

I think you are trivializing an extremely complicated process.  To make this
work well, DNS has to become a fully push-based system and domain names have to
replace literals pretty much everywhere.  We are talking about a major paradigm shift
for applications.  This at the same time that other site-local detractors are
encouraging the use of literals in, e.g., web pages.  Can we see some commitment
from application writers that they actually plan to do as you suggest?  I don't
want to hear later that, well, this is too much to expect from applications and
you should really pay for those stable addresses.
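Even a minimal sketch of the proposed "redo the lookup and reconnect" discipline (Python; the retry counts and backoff policy are illustrative) shows the machinery every long-lived application would need:

```python
import socket
import time

def resilient_connect(name, port, retries=5):
    """The discipline being proposed: on any failure, go back to the
    DNS (getaddrinfo re-resolves; nothing is cached here) and try the
    returned addresses again, with backoff.  Every long-lived
    application would need some variant of this loop."""
    for attempt in range(retries):
        try:
            infos = socket.getaddrinfo(name, port, type=socket.SOCK_STREAM)
        except OSError:
            infos = []                       # resolution failed; back off
        for family, stype, proto, _, sockaddr in infos:
            s = socket.socket(family, stype, proto)
            try:
                s.connect(sockaddr)
                return s                     # caller must still detect a
            except OSError:                  # mid-session renumbering and
                s.close()                    # call this all over again
        time.sleep(min(2 ** attempt, 30))    # back off before re-resolving
    raise OSError("could not reach %s" % name)
```

And note what the sketch does not solve: detecting that an established connection has silently died because the peer renumbered, and the fact that none of this helps until every literal has been replaced by a name with a sensibly short TTL.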

But I begin to recognize the same argument that was used against global
identifiers.  Addresses are going to be stable *enough* and renumbering is
going to be infrequent *enough* and dynamic DNS is going to work well *enough*
and applications are going to learn to cope with dynamic addresses *enough*
and ISPs are going to become philanthropic *enough* that somehow all mixed
together things will work well *enough*.  The only difference is that the
previous argument went on, ``and if you really need local communication
stability you can always use site-local addresses.''  I'm still extremely
dubious that things will just work out.  It's getting very, very late in
the game to be deferring the solution with yet another round of buck-passing.

|> We understand that sites are administrative.
|
|Section 4 in draft-ietf-ipngwg-scoping-arch seems to say something different.
|A site doesn't span multiple administrations, but it is limited
|to a single geographic location, such  as an office, an office complex, or 
|a campus. 

Your implication before seemed to be that site-locals didn't make sense
because the geographic description could span administrative domains.  I
don't really follow your reasoning either way, but it doesn't matter.  My
house is a single administrative domain and a single geographic location.

|> |So let's not lose sight of the fact that the goal is a robust network.
|> 
|> I think that the goal is a useful network--useful not only for ISPs and
|> application vendors but for consumers.
|
|Agreed. I don't think I was saying anything different.

Actually, I think there is a big difference.  You can make the network very
robust and easy to maintain for ISPs while at the same time making it less
useful for consumers.  You can make the network an easier environment for
application programmers while at the same time making it less useful for
consumers.  I've noticed that the v6 design process has paid a great deal
of attention to the needs of the ISP, some attention to the needs of the
application developer, and very little attention to the needs of the end
user.  In fact, many important end-user issues have been treated as
afterthoughts or addressed only under pressure from the media (e.g., privacy
concerns over MAC IDs).  Several of the proposals for fake multi-homing
(where you pay both providers but get no benefit from the redundancy) would
have made for really funny reading were the problem not so serious.  A complete
lack of user-controlled address space is going to make v6 a very hard sell...

                                Dan Lanciani
                                ddl@danlan.*com
--------------------------------------------------------------------
IETF IPng Working Group Mailing List
IPng Home Page:                      http://playground.sun.com/ipng
FTP archive:                      ftp://playground.sun.com/pub/ipng
Direct all administrative requests to [EMAIL PROTECTED]
--------------------------------------------------------------------
