> What I find interesting throughout discussions that mention IPv6 as a
> solution for a shortage of addresses in IPv4 is that people see the
> problems with IPv4, but they don't realize that IPv6 will run into the
> same difficulties.  _Any_ addressing scheme that uses addresses of
> fixed length will run out of addresses after a finite period of time,

I suppose that's true - as long as addresses are consumed at a rate
faster than they are recycled.  But the fact that we will run out of
addresses eventually might not be terribly significant - the Sun will
also run out of hydrogen eventually, but in the meantime we still find
it useful.

> and that period may be orders of magnitude shorter than anyone might
> at first believe.

it is certainly true that without careful management IPv6 address
space could be consumed fairly quickly.  but it looks to me as though,
with even moderate care, IPv6 space can last for several tens of years.

> Consider IPv4.  Thirty-two bits allows more than four billion
> individual machines to be addressed.  

not really.  IP has always assumed that address space would be
delegated in power-of-two sized "chunks" - at first those chunks only
came in 3 sizes (2**8, 2**16, or 2**24 addresses), and later on it
became possible to delegate any power-of-two sized chunk.  but even
assuming ideally sized allocations, each of those chunks would on
average be only 50% utilized. 

so every level of delegation effectively uses 1 of those 32 bits, and
on average most parts of the net are probably delegated 4-5 levels
deep.  (IANA/regional registry/ISP/customer/internal). so we end up
effectively not with 2**32 addresses but with something like 2**27 or
2**28.  (approximately 134 million or 268 million)
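
to make that arithmetic concrete, here's a tiny sketch (my own
illustration - the 50% utilization figure and the 4-5 level count are
the estimates above, not measured data):

```python
# back-of-the-envelope: if each level of delegation costs roughly one
# bit (power-of-two chunks averaging 50% utilization), the usable
# space is 2**(bits - levels).

def effective_addresses(address_bits, delegation_levels):
    """addresses usable if each delegation level halves utilization."""
    return 2 ** (address_bits - delegation_levels)

for levels in (4, 5):
    print(f"IPv4, {levels} levels: ~{effective_addresses(32, levels):,}")
```

which gives the 268 million / 134 million figures above.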

(see also RFC 1715 for a different analysis, which, when applied to
IPv4, yields similar results in the optimistic case)

allocating space in advance might indeed take away another few bits.
but given the current growth rate of the internet it is necessary.
the internet is growing so fast that a policy of always allocating
only the smallest possible chunk for a net would not only be
cumbersome, it would result in poor aggregation in routing tables and
quite possibly in worse overall utilization of address space.

but if it someday gets easier to renumber a subnet we might then find
it easier to garbage-collect and recycle fragmented portions of
address space.  and if the growth rate slowed down (which for various
reasons is possible) then we could do advance allocation more
conservatively.

> It should be clear that IPv6 will have the same problem.  The space
> will be allocated in advance.  Over time, it will become obvious that
> the original allocation scheme is ill-adapted to changing requirements
> (because we simply cannot foresee those requirements).  Much, _much_
> sooner than anyone expects, IPv6 will start to run short of addresses,
> for the same reason that IPv4 is running short.  It seems impossible
> now, but I suppose that running out of space in IPv4 seemed impossible
> at one time, too.

IPv6 allocation will have some of the same properties as IPv4
allocation.  We're still using power-of-two sized blocks, and we'll
still waste at least one bit of address space per level of delegation.
It will probably be somewhat easier to renumber networks and recycle
addresses - how much easier remains to be seen.

OTOH, I don't see why IPv6 will necessarily have significantly more
levels of assignment delegation.  And even if it needs a few more levels,
losing 6 or 7 bits out of 128 total is far less costly than losing
4 or 5 bits out of 32.
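
a quick way to see the relative cost (again, just an illustrative
sketch using the level counts estimated above):

```python
# bits lost to delegation, as a fraction of the whole address

def overhead_fraction(total_bits, levels_lost):
    """fraction of the address consumed by delegation overhead."""
    return levels_lost / total_bits

for total, levels in ((32, 5), (128, 7)):
    print(f"{levels}/{total} bits = {overhead_fraction(total, levels):.1%}")
```

the IPv4 overhead eats about three times as large a fraction of the
address as the pessimistic IPv6 case does.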

> The allocation pattern is easy to foresee.  Initially, enormous
> subsets of the address space will be allocated carelessly and
> generously, because "there are so many addresses that we'll never run
> out" 

I don't know where you get that idea.  Quite the contrary, the
regional registries seem to share your concern that we will use up
IPv6 space too quickly and *all* of the comments I've heard about the
initial assignment policies were that they were too conservative.
IPv6 space does need to be carefully managed, but it can be doled out
somewhat more generously than IPv4 space.

> and because nobody will want to expend the effort to achieve
> finer granularity in the face of such apparent plenty.  

First of all, having too fine a granularity in allocation prevents you
from aggregating routes.  Second, with power-of-two sized allocations
there's a limit to how much granularity you can get - even if you
always allocate optimal sized blocks.
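
to illustrate that limit (a made-up example, not any registry's actual
policy):

```python
# a request for n addresses has to be rounded up to the next
# power-of-two block, so even an optimally sized allocation can be
# only a little more than 50% utilized in the worst case.

def smallest_block(n):
    """smallest power-of-two block that holds n addresses."""
    block = 1
    while block < n:
        block *= 2
    return block

for need in (5, 100, 3000):
    block = smallest_block(need)
    print(f"need {need:>4} -> block of {block:>4} ({need / block:.0%} used)")
```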

> This mistake will be repeated for each subset of the address space
> allocated, by each organization charged with allocating the space.

It's not clear that it's a mistake.  it's a tradeoff between having
aggregatable addresses and distributed assignment on one hand and
conserving address space on the other.  and the people doing address
assignment these days are quite accustomed to thinking in these terms.

> If you need further evidence, look at virtual memory address spaces.
> Even if a computer's architecture allows for a trillion bits of
> addressing space, it invariably becomes fragmented and exhausted in an
> amazingly short time.  

this is only amazing to those who haven't heard of Moore's law.
(presumably the same set of people who thought DES would never be broken)

on the other hand, it's not clear how valid this analogy is for
predicting the growth of the Internet - just because Moore's law (if
it keeps on working) might predict that in a decade we could
eventually have thousands of network-accessible computing devices for
everyone on the planet, doesn't mean that those people would be able
to deal with thousands of such devices.  and there do appear to be
limits to the number of human beings that the planet can support.  and
if by that time the robot population exceeds the human population then
I'm happy to let the robots solve the problem of upgrading to a new
version of IP.

and as for other planets, all kinds of assumptions about the current
Internet fail when you try to make it work at interplanetary
transmission latencies.  so if we do manage to significantly populate
other planets or if we find extraterrestrial species that we want to
network with, we'll have to build a new architecture.  and people are
already working on that.

> The only real solution to this is an open-ended addressing scheme--one
> to which digits can be added as required.  

variable length addresses do have some nice properties.  there are
also some drawbacks.

fwiw, phone numbers do in fact have a fixed maximum length which is
wired into devices all over the planet - not just in the phone system
but in numerous computer databases, etc.  it is not much easier to
increase the overall length of phone numbers than it is to make IP
addresses longer.  and once you set a fixed maximum length then it's
just a matter of representation - do you have a variable-length
address field or do you have a fixed-length field with zero padding?
fixed-length fields are a lot easier for routers to deal with.  (and
for similar reasons a lot of software uses fixed-length fields for
phone numbers)

128-bit IPv6 addresses are roughly equivalent to 40 digits, which IIRC
is a lot longer than the maximum size of a phone number under E.164.
(sorry, I don't have a copy handy to check)
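
the IPv6 side of that comparison is easy to check (and for what it's
worth, E.164 caps international numbers at 15 digits):

```python
# number of decimal digits needed to write out a full 128-bit address
digits = len(str(2 ** 128))
print(digits)
```

so it's 39 digits - "roughly 40" as above, and well over twice the
length of the longest legal phone number.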

and the way in which IPv6 addresses are being allocated is actually
not so different from the way in which phone numbers are allocated -
the major exception being that IPv6 prefixes are assigned to major
ISPs rather than to geographic regions.  (the latter difference might
affect routing but probably does not affect allocation efficiency).

so I think the bottom line answer to your message is that your concerns
are valid (if perhaps a bit exaggerated) and an allocation mechanism
similar to what you suggest is already in place.

Keith
