IPv6: Past mistakes repeated?

2000-04-24 Thread Anthony Atkielski

What I find interesting throughout discussions that mention IPv6 as a solution for a shortage of addresses in IPv4 is that people see the problems with IPv4, but they don't realize that IPv6 will run into the same difficulties.  _Any_ addressing scheme that uses addresses of fixed length will run out of addresses after a finite period of time, and that period may be orders of magnitude shorter than anyone might at first believe.

Consider IPv4.  Thirty-two bits allows more than four billion individual machines to be addressed.  In theory, then, we should have enough IPv4 addresses for everyone until four billion machines are actually online simultaneously.  Despite this, however, we seem to be running short of addresses already, even though only a fraction of them are actually used.  The reason for this is that the address space is of finite size, and that we attempt to allocate that finite space in advance of actual use.

It should be clear that IPv6 will have the same problem.  The space will be allocated in advance.  Over time, it will become obvious that the original allocation scheme is ill-adapted to changing requirements (because we simply cannot foresee those requirements).  Much, _much_ sooner than anyone expects, IPv6 will start to run short of addresses, for the same reason that IPv4 is running short.  It seems impossible now, but I suppose that running out of space in IPv4 seemed impossible at one time, too.

The allocation pattern is easy to foresee.  Initially, enormous subsets of the address space will be allocated carelessly and generously, because "there are so many addresses that we'll never run out" and because nobody will want to expend the effort to achieve finer granularity in the face of such apparent plenty.  This mistake will be repeated for each subset of the address space allocated, by each organization charged with allocating the space.  As a result, in a surprisingly short time, the address space will be exhausted.  This _always_ happens with fixed address spaces.  It seems to be human nature, but information theory has a hand in it, too.

If you need further evidence, look at virtual memory address spaces.  Even if a computer's architecture allows for a trillion bits of addressing space, it invariably becomes fragmented and exhausted in an amazingly short time.  The "nearly infinite space" allowed by huge virtual addresses turns out to be very finite and very limiting indeed.
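
The fragmentation effect mentioned above can be sketched with a toy first-fit allocator over a fixed space (an illustrative model only, not any real memory manager or address registry):

```python
# Toy first-fit allocator over a fixed address space, illustrating how
# a finite space becomes unusable long before every address is spoken for.
# All names and sizes here are made up for illustration.

def first_fit(free_list, size):
    """Carve `size` addresses out of the first free block big enough."""
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            free_list[i] = (start + size, length - size)
            return start
    return None  # no single block is large enough, even if total free space suffices

# A 1000-address space, allocated in ten 100-address chunks.
free = [(0, 1000)]
blocks = [first_fit(free, 100) for _ in range(10)]

# Free every other chunk: 500 addresses are free, but only in 100-address fragments,
# so a 200-address request fails despite ample total free space.
free = [(b, 100) for b in blocks[::2]]
print(first_fit(free, 200))  # None
```

The same dynamic applies to hierarchical address registries: once large scattered chunks are handed out, contiguous room for new large allocations disappears.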

The only real solution to this is an open-ended addressing scheme--one to which digits can be added as required.  And it just so happens that a near-perfect example of such a scheme is right in front of us all, in the form of the telephone system.  Telephone numbers have never had a fixed number of digits.  The number has always been variable, and has simply expanded as needs have changed and increased.  At one time, a four-digit number was enough to reach anyone.  Then seven-digit numbers became necessary.  Then an area code became necessary.  And finally, a country code became necessary.  Perhaps a planet code will be necessary at some point in the future.  But the key feature of the telephone system is that nobody ever decided upon a fixed number of digits in the beginning, and so there is no insurmountable obstacle to adding digits forever, if necessary.  Imagine what things would be like if someone had decided in 1900 that seven digits would be enough for the whole world, and then equipment around the world were designed only to handle seven digits, with no room for expansion.  What would happen when it came time to install the 10,000,000th telephone, or when careless allocation exhausted the seven-digit space?

Anyway, some keys to a successful addressing scheme, in my opinion, are as follows (but the first is the only mandatory feature, I think):

1. The number of digits used for addressing is not limited by the addressing protocol.
2. Every machine in the network need only know in detail about other points in the network that have the same high-order digits in their addresses.
3. There is a distinction for every machine between "local" addresses (those that implicitly have the same high-order digits as the address of the machine in question) and "remote" addresses (those that have different high-order digits).
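
The three rules above can be sketched in a few lines (hypothetical digit-string addresses; this models no real protocol -- not IPv4, IPv6, or E.164):

```python
# Hedged sketch of the proposed rules, using unbounded digit strings as addresses.

def is_local(my_addr: str, other: str, prefix_len: int) -> bool:
    """Rule 3: an address is 'local' if it shares my high-order digits."""
    return other.startswith(my_addr[:prefix_len])

def next_hop(my_addr: str, dest: str, prefix_len: int) -> str:
    """Rule 2: a machine knows its local region in detail; everything else
    is handed to a gateway chosen only by the destination's leading digits."""
    if is_local(my_addr, dest, prefix_len):
        return "deliver-directly"
    return "gateway-for-" + dest[:prefix_len]

# Rule 1: nothing above limits how many digits an address may have.
print(next_hop("12345", "12399", 3))        # deliver-directly
print(next_hop("12345", "9876543210", 3))   # gateway-for-987
```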

With such an address scheme, a single international body can allocate one digit to each region of the world (the size of the regions is irrelevant).  Beneath that, other, more local bodies, one per initial digit, can allocate more digits below that.  There is no need for anyone to allocate the entire address space in advance, so there is no need to worry about problems with the initial allocation that will have to be fixed later.  And since the actual number of digits in a machine address is unlimited, different parts of the world, different companies, different organizations, etc., can expand addresses as needed.  At any given time, the maximum number

RE: IPv6: Past mistakes repeated?

2000-04-24 Thread David A Higginbotham

I agree!  Why create a finite anything when an infinite possibility exists?
On another note, I have heard the argument that a unique identifier already
exists in the form of a MAC address; why not make further use of it?

David H

-Original Message-
From: Anthony Atkielski [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 24, 2000 6:05 AM
To: [EMAIL PROTECTED]
Subject: IPv6: Past mistakes repeated?


[snip]

RE: IPv6: Past mistakes repeated?

2000-04-24 Thread Manish R. Shah.


Isn't IPv6 designed to be compatible with IPv4?

If what you suggest were implemented, then probably the entire software
of all the switches and hubs would need to be upgraded (if not scrapped
entirely).

Likewise, every time the source and destination addresses are lengthened,
all the systems and the related software need to be upgraded.  Personally,
my telephone number has changed three times within the last couple of
years.  It is not practical to change the code of the intermediate routers
every time, but yes, the numbers can be made configurable.

I agree that the concept being advocated is indeed revolutionary, and it
might be beneficial to some extent.  But the million-dollar question is
whether the protocol and switch vendors would be willing to scrap the
years and amount of investment that they have already made in the
existing system.

Your proposal needs further study, and could be a hot topic of discussion!

Cheers !!!

Manish.

--
** Nothing is Impossible, Even Impossible says I'm possible !!! **
--

Manish R. Shah.
Senior Software Engineer,
Future Software Pvt Ltd.
480-481, Anna Salai, Nandanam
Chennai 600035.
Phone: +91-(44)-433-0550 Xten 294.

+++

-Original Message-
From:   Anthony Atkielski [SMTP:[EMAIL PROTECTED]]
Sent:   Monday, April 24, 2000 3:35 PM
To: [EMAIL PROTECTED]
Subject:IPv6: Past mistakes repeated?

[snip]

Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Steven M. Bellovin

In message BB2831D3689AD211B14C00104B14623B1E7569@HAZEN04, "David A Higginbotham" writes:
I agree! Why create a finite anything when an infinite possibility exists?
On another note, I have heard the argument that a unique identifier already
exists in the form of a MAC address; why not make further use of it?

Would it surprise anyone to hear that all of that was considered and 
discussed, ad nauseam, in the IPng directorate?  That's right -- we weren't 
stupid or ignorant of technological history.  There were proponents for 
several different schemes, including fixed-length addresses of 64 and later 
128 bits, addresses where the two high-order bits denoted the multiple of 64 
to be used (that was my preference), or CLNP, where addresses could be quite 
variable in length (I forget the maximum).

But the first thing to remember is that there are tradeoffs.  Yes, infinitely 
long addresses are nice, but they're much harder to store in programs (you can 
no longer use a simple fixed-size structure for any tuple that includes an 
address) and (more importantly) route, since the router has to use the entire 
address in making its decision.  Furthermore, if it's a variable-length 
address, the router has to know where the end is, in order to look at the next 
field.  (Even if the destination address comes first, routers have to look at 
the source address because of ACLs -- though you don't want address-based 
security (and you shouldn't want it), you still need anti-spoofing filters.)  
I should add, btw, that there's a considerable advantage to having addresses 
be a multiple of the bus width in size, since that simplifies fetching the 
next field.
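
The parsing tradeoff described above can be made concrete (an illustrative Python sketch, not router code; the header layouts are invented):

```python
# Fixed-length addresses can be read at known offsets; variable-length
# addresses force the parser to find each field's end before it can even
# locate the next field.

# Fixed: a 128-bit source and 128-bit destination always occupy bytes 0..31.
def parse_fixed(header: bytes):
    return header[0:16], header[16:32]

# Variable: each address is length-prefixed, so the offset of the destination
# depends on the source's length -- and so on for every later field.
def parse_variable(header: bytes):
    src_len = header[0]
    src = header[1:1 + src_len]
    dst_len = header[1 + src_len]
    dst = header[2 + src_len:2 + src_len + dst_len]
    return src, dst

# A 4-byte source address followed by a 2-byte destination address.
hdr = bytes([4]) + b"\x0a\x00\x00\x01" + bytes([2]) + b"\x01\x02"
print(parse_variable(hdr))
```

A router doing this per packet, in hardware, pays for every such data-dependent offset; the fixed layout needs none.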

As I said, I (and others) preferred a limited form of variable-length addresses. 
Given the various tradeoffs, we "lost".  One reason is something that was 
pointed out by a number of people:  code that isn't exercised generally doesn't
work well.  If we didn't have really long addresses in use from the beginning, 
some major implementations wouldn't support them properly.

Some minor points.  Using a MAC address was considered and rejected.  
First, not all machines have them.  Second, some machines have more than one 
-- which should be used?  Third, although MACs are supposed to be globally 
unique, accidents happen and there have been collisions.  Fourth, they're too 
short -- 48 bits then, moving towards 64 bits today.  Fifth, there's the issue 
of privacy.  Sixth -- and this rules out pure geographic addressing schemes -- 
IP addresses are tied to the routing system.  We don't know any other way to 
route large numbers of networks other than by using the high-order bits of the 
address.  If you want addresses allocated geographically, your routing has to 
be geographic.  (There have been designs for that, I should add, such as the 
Metropolitan Area Exchanges.  But for those to work, assorted ISPs would have 
to co-operate on a large scale, something that I don't think will happen.)  
Phone numbers are allocated geographically, but that only works because 
historically, most areas only had one monopoly phone company.  That has 
changed today, in many parts of the world, leading to complexities such as (in 
the U.S.) local number portability -- but telephone networks do one lookup per 
call, not one per packet.

--Steve Bellovin





Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread Sean Doran


[Keith Moore on a "KMart box"]
| take it home, plug it in to your phone line or whatever, and get
| instant internet for all of the computers in your home.  
| (almost just like NATs today except that you get static IP addresses).

No, not "or whatever" but "AND whatever".

Otherwise this is a nice but insufficient device, since there is an
implicit presupposition that only one provider will be used by any
given owner/lessor of this K-Mart box.

If one makes a broad policy decision that it should be possible
and simple for all very small users to be serviced simultaneously
by multiple providers, then you must be very careful about the
static IP address constraint.

Personally, I think _at least_ individual households should be
multihomed -- adjust your K-Mart box so that it can support multiple
interfaces.  Perhaps you might end up with a plug for your DSL or
POTS connection, a plug for your cable connection, and a plug for
your wireless or electricity-grid connection.

Traditional multihoming has some significant features:
-- all the hosts in the multihomed entity have a fixed
   set of addresses relative to one another
   (i.e., hosts don't care about the "remote" topology)
-- traffic is balanced in both directions in some fashion 
   across the set of multiple providers' connections
-- if there is a partition in the network which
   breaks connectivity through one provider, the
   connectivity will automatically back-up through
   the remaining provider(s) who are unaffected
   by the partition

Unfortunately, IPv6's current addressing architecture makes it very
difficult to do this sort of traditional multihoming if one is not
a TLA.  This is a significant step backward from the current IPv4
situation, where one can persuade various operators to accept
more-specific prefixes (coloured with appropriate community
attributes) in order to optimize return traffic from particular
parts of the Internet.

Therefore, in order to support IPv6 house-network multihoming, so
as to preserve at least these three features of traditional
multihoming, either the current IPv6 addressing architecture's
restrictions on who can be a TLA must be abandoned (so each house
becomes a TLA), or NATs must be used to rewrite house-network
addresses into various PA address ranges supplied by the multiple
providers.

If it is reasonable to want to support multihoming for individual
SMEs, households, or even "smd"s, IPv6's overall addressing and
routing architecture seems ill-suited to the task WITHOUT
the presence of NAT.

IPv6's larger address space is merely a necessary piece of an 
Internet which will not run out of numbers.   

NATs and NAT-like translators appear to be more and more a
fundamental tool in the IPv6 arsenal, and it is unfortunate that
people position IPv6 as an alternative to NAT.

Sean.




RE: IPv6: Past mistakes repeated?

2000-04-24 Thread Ian King

"Near-perfect example"?  I beg to differ.  I used to work for a Local
Exchange Carrier.  

The telephone number situation in the United States has been one of
continual crisis for years, because of rapid growth in use (in part because
of Internet access!).  The area served by a given "area code" would be split
into smaller areas with multiple area codes; these days, those areas aren't
necessarily even contiguous.  Moving from seven-digit to (effectively)
ten-digit numbers was difficult, if not impossible, for some older
equipment; sometimes a kludge could be developed to allow the old equipment
to be used for a few more months or years, but often as not new equipment
was required, at considerable cost.  It was difficult for end users, too: in
addition to the confusion everyone suffered during the transition (I still
get scads of wrong numbers on my cellphone, because people forget the area
code is needed), businesses had to spend great sums of money to revise their
public appearance (advertising, letterhead, etc.).  

And, often as not, we'd do it all over again a few months later.  

My point is that ANY numbering scheme is difficult to change, once it's in
place.  Someone else on this thread made a good point, however, that the
administration of that scheme can make worlds of difference - this person's
point was about "giveaway" assignment of large portions of the address
space, "because there's so much" -- hm, sounds like the exhaustion of
Earth's natural resources, too.  :-)  I'd suggest that address assignment
policy should keep process lightweight, so that it is realistic for
businesses to regularly ask for assignments in more granular chunks; rather
than grabbing a class A-size space "just in case", big users would be
willing to request another 256 when the new branch office opens, then
another 64 for the summer interns... and so individuals can easily get
multiple addresses through an ISP.  

In fact, it should be as easy as getting a telephone number.  -- Ian 

 -Original Message-
 From: Anthony Atkielski [mailto:[EMAIL PROTECTED]]
 Sent: Monday, April 24, 2000 3:05 AM
 To: [EMAIL PROTECTED]
 Subject: IPv6: Past mistakes repeated?
 
 
[snip]
 The only real solution to this is an open-ended addressing 
 scheme--one to which digits can be added as required.  And it just so
 happens that a near-perfect example of such a scheme is right 
 in front of us all, in the form of the telephone system.  Telephone
 numbers have never had a fixed number of digits.  The number 
 has always been variable, and has simply expanded as needs 
 have changed
 and increased.  At one time, a four-digit number was enough 
 to reach anyone.  Then seven-digit numbers became necessary.  Then an
 area code became necessary.  And finally, a country code 
 became necessary.  Perhaps a planet code will be necessary at 
 some point in
 the future.  But the key feature of the telephone system is 
 that nobody ever decided upon a fixed number of digits in the 
 beginning,
 and so there is no insurmountable obstacle to adding digits 
 forever, if necessary.  Imagine what things would be like if 
 someone had
 decided in 1900 that seven digits would be enough for the 
 whole world, and then equipment around the world were designed only to
 handle seven digits, with no room for expansion.  What would 
 happen when it came time to install the 10,000,000th 
 telephone, or when
 careless allocation exhausted the seven-digit space?
 
[snip]




Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread Pyda Srisuresh


--- Henning Schulzrinne [EMAIL PROTECTED] wrote:
 It might be useful to point out more clearly the common characteristics
 of protocols that are broken by NATs. These include, in particular,
 protocols that use one connection to establish another data flow. Such
 protocols include ftp, SIP and RTSP (the latter is not mentioned yet in
 the draft, but NATs also interfere with its operation). Note that unless
 we forego such control protocol designs altogether, NATs in principle
 break these unless every host has an external DNS mapping. 

We had originally considered having a section in the draft listing 
common characteristics of all the applications that fail. Then we 
decided against it, as such a section already exists in RFC 2663 and 
"Traditional-NAT" draft. Instead, we chose to focus on gathering 
the various protocols/applications that fail, why they fail and if
there are any work-arounds. Input on the IETF list over the past few
days has been great. The draft should look much better when the input is
all folded in. 

For example, the problem you point out with applications with 
inter-dependent control and data sessions is listed as a NAT 
limitation in section 8.2 of RFC 2663.
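
The failure mode being discussed -- a control connection carrying an address in its payload, invisible to header-only translation -- can be shown with FTP's PORT command (the PORT syntax is from RFC 959; the addresses below are made up):

```python
# Why control/data-coupled protocols break behind NAT: the address travels
# in the application payload, which plain header rewriting never touches.

def ftp_port_command(ip: str, port: int) -> str:
    """Encode ip:port the way FTP's PORT command does (RFC 959):
    four address octets, then the port split into high and low bytes."""
    h1, h2, h3, h4 = ip.split(".")
    return f"PORT {h1},{h2},{h3},{h4},{port // 256},{port % 256}"

# A client behind a NAT advertises its *private* address on the control channel.
cmd = ftp_port_command("192.168.0.5", 50000)
print(cmd)  # PORT 192,168,0,5,195,80
# The server will try to open a data connection back to 192.168.0.5 -- a
# private address unreachable from outside.  Only an application-level
# gateway that parses and rewrites the payload can fix this.
```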

(Thus, in
 reference to a recent message to just design NAT-friendly protocols,
 this means in practice that such "out-of-band" designs could not be
 supported by this NATy version of the Internet. In effect, this leads to
 the abomination of carrying real-time data in HTTP connections.)
 
Agreed. The intent of the NAT-friendly guidelines was merely to point 
out the gotchas that can be fixed - not to dissuade development of 
protocols that cannot be made NAT-friendly.


 Other protocol designs are those that are symmetric rather than
 client-server based. This affects all Internet telephony or event-based
 protocols (IM and generalizations) unless they maintain an outbound
 connection with a server acting as their representative to the globally
 routed Internet. The latter obviously does not address the media stream
 addressing problems.
 
I assume you mean that peer-to-peer protocols/applications require 
bi-directional flows at a minimum. Clearly, it is a problem doing
this across traditional NAT (i.e., Basic-NAT or NAPT), because
Traditional-NAT is fundamentally unidirectional and supports 
out-bound flows only. Bidirectional-NAT might work with these
apps (with all the caveats that go with address translation). 

 -- 
 Henning Schulzrinne   http://www.cs.columbia.edu/~hgs
 

regards,
suresh





Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread Steve Deering

At 4:32 PM +0200 4/24/00, Sean Doran wrote:
Unfortunately, IPv6's current addressing architecture makes it very
difficult to do this sort of traditional multihoming if one is not
a TLA.  This is a significant step backward from the current IPv4
situation, where one can persuade various operators to accept
more-specific prefixes (coloured with appropriate community
attributes) in order to optimize return traffic from particular
parts of the Internet.

Sean,

That is widely claimed but incorrect.  Nothing in the IPv6 addressing
architecture prevents a user from negotiating with multiple operators
to accept any prefix assigned to that user.  IPv6 retains the same
capability as IPv4 in that respect.

Therefore, in order to support IPv6 house-network multihoming, so
as to preserve at least these three features of traditional
multihoming, either the current IPv6 addressing architecture's
restrictions on who can be a TLA must be abandoned (so each house
becomes a TLA),...

The consequences of those restrictions are not what you imagined, but
even so, making each house a TLA does not strike me as a scalable
multihoming solution for very large numbers of houses, given the current
state of the routing art.

...or NATs must be used to rewrite house-network addresses into various
PA address ranges supplied by the multiple providers.

That's not the only possible alternative, and it is an alternative that
creates a bunch of other unsolved problems (see earlier messages in this
thread).

IPv6's larger address space is merely a necessary piece of an 
Internet which will not run out of numbers.  

Wow, we actually agree on something!  (Though I could quibble over the
"merely".)

Steve




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Daniel Senie

Ian King wrote:
 
 "Near-perfect example"?  I beg to differ.  I used to work for a Local
 Exchange Carrier.
 
 The telephone number situation in the United States has been one of
 continual crisis for years, because of rapid growth in use (in part because
 of Internet access!).  The area served by a given "area code" would be split
 into smaller areas with multiple area codes; these days, those areas aren't
 necessarily even contiguous.  Moving from seven-digit to (effectively)
 ten-digit numbers was difficult, if not impossible, for some older
 equipment; sometimes a kludge could be developed to allow the old equipment
 to be used for a few more months or years, but often as not new equipment
 was required, at considerable cost.  It was difficult for end users, too: in
 addition to the confusion everyone suffered during the transition (I still
 get scads of wrong numbers on my cellphone, because people forget the area
 code is needed), businesses had to spend great sums of money to revise their
 public appearance (advertising, letterhead, etc.).
 
 And, often as not, we'd do it all over again a few months later.

We've now got number portability. I've got a choice of local exchange
carriers. I can get service from Bell Atlantic or from MediaOne. I can
keep the same phone number when I move from one to the other.

From the reports I read, this was implemented by mapping phone numbers
to some other tag (which the user doesn't see) that is used to get the
calls to the proper carrier and ultimately to the proper user.

Sounds a whole lot like using DNS to map names to IP addresses. Of
course we expose users to IP addresses WAY too often, and overuse them
in applications as well, for this analogy to be really workable for the
Internet.

Users shouldn't care or know about the network's internal addressing.
Some of the application issues with NATs spring directly from this issue
(e.g. user of X-terminal setting display based on IP address instead of
DNS name).

-- 
-
Daniel Senie[EMAIL PROTECTED]
Amaranth Networks Inc.http://www.amaranth.com




Re: Patent protection from NATs

2000-04-24 Thread John Stracke

Henning Schulzrinne wrote:

 Indeed, I
 think we should get together a group of people to patent all the
 architecturally bad ideas (call it the "RSI group"), as they'll appear
 sooner or later. That will give us 20 years of respite...

...provided somebody pays the legal fees to enforce the patents.

--
/\
|John Stracke| http://www.ecal.com |My opinions are my own.  |
|Chief Scientist |===|
|eCal Corp.  |The cheapest, fastest, most reliable components|
|[EMAIL PROTECTED]|of a computer system are those that aren't |
||there.--Gordon Bell|
\/






Re: IPv6: Past mistakes repeated?

2000-04-24 Thread John Stracke

Ian King wrote:

 I'd suggest that address assignment
 policy should keep process lightweight, so that it is realistic for
 businesses to regularly ask for assignments in more granular chunks; rather
 than grabbing a class A-size space "just in case", big users would be
 willing to request another 256 when the new branch office opens

Wasn't one of the design goals of IPv6 to make renumbering easier, so that
people could move from small assignments to large ones?

--
/\
|John Stracke| http://www.ecal.com |My opinions are my own.  |
|Chief Scientist |===|
|eCal Corp.  |The cheapest, fastest, most reliable components|
|[EMAIL PROTECTED]|of a computer system are those that aren't |
||there.--Gordon Bell|
\/






Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Richard Shockey



  The telephone number situation in the United States has been one of
  continual crisis for years, because of rapid growth in use (in part because
  of Internet access!).  The area served by a given "area code" would be
  split into smaller areas with multiple area codes; these days, those areas
  aren't necessarily even contiguous.  Moving from seven-digit to
  (effectively) ten-digit numbers was difficult, if not impossible, for some
  older equipment; sometimes a kludge could be developed to allow the old
  equipment to be used for a few more months or years, but often as not new
  equipment was required, at considerable cost.  It was difficult for end
  users, too: in addition to the confusion everyone suffered during the
  transition (I still get scads of wrong numbers on my cellphone, because
  people forget the area code is needed), businesses had to spend great sums
  of money to revise their public appearance (advertising, letterhead, etc.).

  And, often as not, we'd do it all over again a few months later.

We've now got number portability. I've got a choice of local exchange
carriers. I can get service from Bell Atlantic or from MediaOne. I can
keep the same phone number when I move from one to the other.

FYI: by 2002 you will be able to port your number from your land-line
phone service to your cell phone as well...that is called "service
portability".



 From the reports I read, this was implemented by mapping phone numbers
to some other tag (which the user doesn't see) which is used to get the
calls to the proper carrier and ultimately to the proper user.

Yep ...essentially phone numbers are nothing more than names to the IN.

FYI, there is an Internet-Draft on how the Number Portability system works, and 
there have been recent FCC rulings on Number Conservation and Pooling which 
direct telcos not to hoard phone numbers or else the numbers will be taken away.

It is a system that has worked remarkably well and is rapidly being adopted 
by many other countries as well.




A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-foster-e164-gstn-np-00.txt



 
Richard Shockey
Shockey Consulting LLC
8045 Big Bend Blvd. Suite 110
St. Louis, MO 63119
Voice 314.918.9020
eFAX Fax to EMail 815.333.1237 (Preferred for Fax)
INTERNET Mail  IFAX : [EMAIL PROTECTED]
GSTN Fax 314.918.9015
MediaGate iPost VoiceMail and Fax 800.260.4464





Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread Keith Moore

 [Keith Moore on a "KMart box"]
 | take it home, plug it in to your phone line or whatever, and get
 | instant internet for all of the computers in your home.  
 | (almost just like NATs today except that you get static IP addresses).
 
 No, not "or whatever" but "AND whatever".
 
 Otherwise this is a nice but insufficient device, 
 since there is an implicit presupposition that only
 one provider will be used by any given owner/lessor 
 of this K-Mart box.

sorry if I oversimplified things.  it's clear to me that you have
to allow for user selection of providers, but I was trying to 
make a simple illustration of how this might work, not write 
the requirements document.
 
 If one makes a broad policy decision that it should be possible
 and simple for all very small users to be serviced simultaneously
 by multiple providers, then you must be very careful about the
 static IP address constraint.

I'm all for user selection of providers, but 'serviced simultaneously
by multiple providers' seems like a stretch.  as I'm sure you are aware,
traditional multihoming has scaling implications for the routing 
infrastructure - it's far from clear that it's feasible for every
household to be traditionally multihomed.

my view is that folks who want traditional multihoming 
(i.e. they who want their own entries in core routers' tables)
will sooner or later have to pay major ISPs for those entries.  
I have heard that you can pay individual ISPs for this now.  
so maybe what we need is a clearinghouse organization or two 
that provides one-stop shopping - take your money and ensure
that your routing table entry is maintained in each of several
major ISPs routers.  once we have that kind of cost recovery model,
folks who really need traditional multihoming (and can afford it)
will be able to get it - it won't matter how big your subnet mask is.

 Traditional multihoming has some significant features:
   -- all the hosts in the multihomed entity have a fixed
  set of addresses relative to one another
  (i.e., hosts don't care about the "remote" topology)
   -- traffic is balanced in both directions in some fashion 
across the set of multiple providers' connections
   -- if there is a partition in the network which
  breaks connectivity through one provider, the
  connectivity will automatically back-up through
  the remaining provider(s) who are unaffected
  by the partition

traditional multihoming is very useful under the right circumstances,
though you need a lot more than just multiple connections advertised
to the net to make it work well.

 Unfortunately, IPv6's current addressing architecture makes it very
 difficult to do this sort of traditional multihoming if one is not
 a TLA.  This is a significant step backward from the current IPv4
 situation, where one can persuade various operators to accept
 more-specific prefixes (coloured with appropriate community
 attributes) in order to optimize return traffic from particular
 parts of the Internet.

I agree that traditional multihoming should not be limited to TLA-sized
portions of the net, and I expect that in practice, even in IPv6, 
it will not be so limited.  having fixed partition sizes in IP addresses
is a bad idea - we learned that long ago with fixed class sizes. And
IPv6-style multihoming through DNS has some fairly significant limitations - 
not that it's not useful for some cases, but for many applications it's 
not going to substitute for traditional multihoming.  

 Therefore, in order to support IPv6 house-network multihoming, so
 as to preserve at least these three features of traditional
 multihoming, either the current IPv6 addressing architecture's
 restrictions on who can be a TLA must be abandoned (so each house
 becomes a TLA), or NATs must be used to rewrite house-network
 addresses into various PA address ranges supplied by the multiple
 providers.

it's not at all clear to me why households need traditional multihoming,
nor how to make it feasible for households to have it.  so I would regard
this as overdesign of the home 'internet interface box'

and given the degree of harm that NATs have done to IPv4, I hope they
never rear their ugly heads in IPv6.

 If it is reasonable to want to support multihoming individual
 SMEs, households, or even "smd"s, IPv6's overall addressing and
 routing architecture seems ill-suited to the task WITHOUT
 the presence of NAT.

what's the point of traditional multihoming anyway if you have to
have NATs?  you might as well do IPv6-style multihoming - assign a 
separate address prefix to every incoming connection and let the
hosts sort it out.  you don't need NATs to do this.

 IPv6's larger address space is merely a necessary piece of an 
 Internet which will not run out of numbers.   
 
 NATs and NAT-like translators appear to be more and more a
 fundamental tool in the IPv6 arsenal, and it is unfortunate that
 people position 

RE: Universal Network Language

2000-04-24 Thread Scot Mc Pherson

Pardon my ignorance, but isn't this the function of IP?

-Scot Mc Pherson, N2UPA
-Sr. Network Analyst
-ClearAccess Communications
-Ph: 941.744.5757 ext. 210
-Fax: 941.744.0629
-mailto:[EMAIL PROTECTED]
-http://www.clearaccess.net

-Original Message-
From: Fred Baker [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 21, 2000 11:54 AM
To: Anders Feder
Cc: [EMAIL PROTECTED]
Subject: Re: Universal Network Language


At 11:01 PM 4/20/00 +0200, Anders Feder wrote:
The translation system being developed for the United Nations, the Universal
Network Language (UNL), looks quite promising. Does the IETF have any plans
regarding this system?

not specifically. Care to make an argument that we should?




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Leonid Yegoshin

From: "Steven M. Bellovin" [EMAIL PROTECTED]

In message BB2831D3689AD211B14C00104B14623B1E7569@HAZEN04, "David A
Higginbotham" writes:
I agree! Why create a finite anything when an infinite possibility exists?
On another note, I have heard the argument that a unique identifier already
exists in the form of a MAC address why not make further use of it?

Would it surprise anyone to hear that all of that was considered and
discussed, ad nauseam, in the IPng directorate?  That's right -- we weren't
stupid or ignorant of technological history.  There were proponents for
several different schemes, including fixed-length addresses of 64 and later
128 bits, addresses where the two high-order bits denoted the multiple of 64
to be used (that was my preference), or CLNP, where addresses could be quite
variable in length (I forget the maximum).

But the first thing to remember is that there are tradeoffs.  Yes, infinitely
long addresses are nice, but they're much harder to store in programs (you can
no longer use a simple fixed-size structure for any tuple that includes an
address) and (more importantly) route, since the router has to use the entire
address in making its decision.  Furthermore, if it's a variable-length
address, the router has to know where the end is, in order to look at the next
field.  (Even if the destination address comes first, routers have to look at
the source address because of ACLs -- though you don't want address-based
security (and you shouldn't want it), you still need anti-spoofing filters.)
I should add, btw, that there's a considerable advantage to having addresses
be a multiple of the bus width in size, since that simplifies fetching the
next field.

   Routers may use different addresses for routing.  An outbound router
may assign a "route address" to keep intermediate route tables small.

   It is not the same as NAT, because the original and real destination
address is never replaced.

   - Leonid Yegoshin.




RE: IPv6: Past mistakes repeated?

2000-04-24 Thread Bob Braden

  * 
  * I can remember early TCP/IP implementations that used class A
  * addressing only, with the host portion of the Enet MAC address as the
  * host portion of the IP address - "because ARP is too hard" or
  * something like that.  I think the first Suns did this.
  * 
  * --

Dick,

Right idea, wrong link layer. The low-order 24 bits of an IP address
was originally a 24-bit ARPANET (/Milnet/DDN) host address.

Bob Braden




Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread John Stracke

Keith Moore wrote:

 it's not at all clear to me why households need traditional multihoming,
 nor how to make it feasible for households to have it.  so I would regard
 this as overdesign of the home 'internet interface box'

Now that I've got a decent DSL provider, I've found that the least reliable
component of my Net access is the power line: my DSL just works, but the power
company has been flaking out 2-3 times a week for the past month or so.  (I
have my computers on UPSes, but your average K-Mart Box user wouldn't.)
Multihoming wouldn't solve that.

--
/\
|John Stracke| http://www.ecal.com |My opinions are my own.  |
|Chief Scientist |===|
|eCal Corp.  |Using strong crypto on the Internet is like|
|[EMAIL PROTECTED]|using an armored car to transport money from   |
||someone living in a tent to someone living in a|
||cardboard box. |
\/






Re: Universal Network Language

2000-04-24 Thread John Stracke

Scot Mc Pherson wrote:

 Pardon my ignorance, but isn't this the function of IP?

No, it turns out that what they mean by UNL is an artificial human language, a
common intermediary that any human text can be translated into; they postulate
translation servers that know how to translate between UNL and specific human
languages.  Much higher in the stack than IP.  :-)

--
/==\
|John Stracke| http://www.ecal.com |My opinions are my own.|
|Chief Scientist |=|
|eCal Corp.  |"There will be no more there. We will all be |
|[EMAIL PROTECTED]|here."--networkMCI ad|
\==/






RE: IPv6: Past mistakes repeated?

2000-04-24 Thread J. Noel Chiappa

A couple of routing points, not related to NAT:

 From: Ian King [EMAIL PROTECTED]

 so that it is realistic for businesses to regularly ask for assignments
 in more granular chunks; rather than grabbing a class A-size space
 "just in case", big users would be willing to request another 256 when
 the new branch office opens, then another 64 for the summer interns...

Sorry, this doesn't work - at least with IPvN (N=4,6) addresses as currently
constituted. The routing system (i.e. the software that computes paths
through the network) uses those addresses as the namespace it works on, and
to make the routing scale properly (a.k.a. "keep the network running"), those
addresses have to be aggregable.

In other words, you need to be able to have a single routing table entry that
covers a chunk of the network (such as a company's in-house network) - and
that routing table entry can't include other things as well. If a company,
etc, had addresses assigned in dribs and drabs, the way you suggest, that
company's addresses would no longer have that property.
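The aggregation property described above can be demonstrated with Python's `ipaddress` module (an editorial sketch; the prefixes are arbitrary examples, not real allocations):

```python
import ipaddress

# A company given one contiguous block can be covered by a single
# routing-table entry: four adjacent /24s collapse to one /22.
contiguous = [ipaddress.ip_network(f"10.0.{i}.0/24") for i in range(4)]
print([str(n) for n in ipaddress.collapse_addresses(contiguous)])
# -> ['10.0.0.0/22']

# The same amount of space handed out in dribs and drabs cannot be
# summarized: each scattered /24 needs its own routing entry.
scattered = [ipaddress.ip_network(n) for n in
             ("10.0.0.0/24", "10.7.3.0/24", "172.16.9.0/24", "192.168.44.0/24")]
print(len(list(ipaddress.collapse_addresses(scattered))))
# -> 4
```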

Other namespaces, which don't have to include location information, just
identification (e.g. IEEE 48-bit numbers) work just fine with this kind of
allocation policy - but not any namespace used by path selection in a large
network.


 From: Steve Deering [EMAIL PROTECTED]

 making each house a TLA does not strike me as a scalable multihoming
 solution for very large numbers of houses, given the current state of
 the routing art.

The restriction has little to do with the current state of the routing art
(which is not to say that better path-selection architectures than the one
the Internet is currently using do not exist :-).

Even with the best routing system, it still couldn't support tracking large
numbers of houses as individual destinations (i.e. having individual routing
table entries across the global scope) - even if the routers had large
enough route table memories to hold the 100's of millions of routes which
could result.

Noel




Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread Sean Doran


| it's not at all clear to me why households need traditional multihoming,
| nor how to make it feasible for households to have it.  so I would regard
| this as overdesign of the home 'internet interface box'

Three observations:

1. 

In the past, when and if large arrogant backbone providers like
me used to say that a push against multihoming was a good way
to avoid stressing the current routing system by avoiding
address deaggregation and globally-visible reachability changes
that could result, we would get flayed alive.

The lesson: always assume that, no matter how technically odd
one thinks it may be, everyone will want to multihome if it is
feasible to do so.

2.

Multihomed entities like to submit bandwidth increase orders
to one or the other provider over time, depending on a
number of factors.   Likewise, singlehomed entities appear
to want to multihome, and at some point rather than upgrade
the single connection to the Internet, will order a second
connection from a different provider, and attempt to do
traditional multihoming.

3.

In many areas nearly every household PRESENTLY has several
possible fixed, wireless and switched facilities over which
Internet access can be offered.

I would say that rather than being overdesigned, it is slightly
underdesigned, because it does not address the possibility of multitenanted
households, with (for example) his/hers routing policies.  (For example,
he uses a cable provider for access, she has a DSL line paid for by her job
so she can do her engineering activities from home.)

| and given the degree of harm that NATs have done to IPv4, I hope they
| never rear their ugly heads in IPv6.

Noel's right, your knees must really hurt!

| what's the point of traditional multihoming anyway if you have to
| have NATs?  you might as well do IPv6-style multihoming - assign a 
| separate address prefix to every incoming connection and let the
| hosts sort it out.  you don't need NATs to do this.

The NAT function here is subsumed into the host.

Pushing NATs into hosts is an attractive idea, but it does
require a lot more knowledge of the network in the hosts, and
one gains none of the economies of scale that a standalone NAT shared
by many hosts can achieve.  Also, if the hosts themselves are 
singly homed to a particular LIS (e.g. an ethernet with hosts and a
router with interfaces to several providers), they will have a
devil of a time incorporating a NAT function.

| NATs can be a transition tool for connecting between IPv4 and IPv6, 
| but NATs should never get in the way of a native IPv6 connection.

They will, though.

Sean.




Re: Universal Network Language

2000-04-24 Thread Valdis . Kletnieks

On Mon, 24 Apr 2000 15:08:40 EDT, John Stracke [EMAIL PROTECTED]  said:
 No, it turns out that what they mean by UNL is an artificial human language, a
 common intermediary that any human text can be translated into; they postulate
 translation servers that know how to translate between UNL and specific human
 languages.  Much higher in the stack than IP.  :-)

Remember that the Babelfish, by allowing perfect communication, was the cause
of more and bloodier wars than anything else ever recorded... ;)

Douglas Adams was right...

-- 
Valdis Kletnieks
Operating Systems Analyst
Virginia Tech




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Keith Moore

 What I find interesting throughout discussions that mention IPv6 as a
 solution for a shortage of addresses in IPv4 is that people see the
 problems with IPv4, but they don't realize that IPv6 will run into the
 same difficulties.  _Any_ addressing scheme that uses addresses of
 fixed length will run out of addresses after a finite period of time,

I suppose that's true - as long as addresses are consumed at a rate
faster than they are recycled.  But the fact that we will run out of
addresses eventually might not be terribly significant - the Sun will
also run out of hydrogen eventually, but in the meantime we still find
it useful.

 and that period may be orders of magnitude shorter than anyone might
 at first believe.

it is certainly true that without careful management IPv6 address
space could be consumed fairly quickly.  but to me it looks like
with even moderate care IPv6 space can last for several tens of years.

 Consider IPv4.  Thirty-two bits allows more than four billion
 individual machines to be addressed.  

not really.  IP has always assumed that address space would be
delegated in power-of-two sized "chunks" - at first those chunks only
came in 3 sizes (2**8, 2**16, or 2**24 addresses), and later on it
became possible to delegate any power-of-two sized chunk.  but even
assuming ideally sized allocations, each of those chunks would on
average be only 50% utilized. 

so every level of delegation effectively uses 1 of those 32 bits, and
on average most parts of the net are probably delegated 4-5 levels
deep.  (IANA/regional registry/ISP/customer/internal). so we end up
effectively not with 2**32 addresses but with something like 2**27 or
2**28.  (approximately 134 million or 268 million)
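The back-of-the-envelope arithmetic above can be written out directly (an editorial sketch of the argument in this message, not an official utilization model):

```python
# Power-of-two chunks average about 50% utilization, so each level of
# delegation effectively costs one bit of the address space.
total_bits = 32
levels = 5        # IANA / regional registry / ISP / customer / internal
usable = 2 ** (total_bits - levels)
print(f"{usable:,}")   # -> 134,217,728 (about 134 million, i.e. 2**27)
```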

(see also RFC 1715 for a different analysis which, when applied to
IPv4, yields similar results for the optimistic case)

allocating space in advance might indeed take away another few bits.
but given the current growth rate of the internet it is necessary.
the internet is growing so fast that a policy of always allocating
only the smallest possible chunk for a net would not only be
cumbersome, it would result in poor aggregation in routing tables and
quite possibly in worse overall utilization of address space.

but if it someday gets easier to renumber a subnet we might then find
it easier to garbage collect, and recycle, fragmented portions of
address space.  and if the growth rate slowed down (which for various
reasons is possible) then we could do advance allocation more
conservatively.

 It should be clear that IPv6 will have the same problem.  The space
 will be allocated in advance.  Over time, it will become obvious that
 the original allocation scheme is ill-adapted to changing requirements
 (because we simply cannot foresee those requirements).  Much, _much_
 sooner than anyone expects, IPv6 will start to run short of addresses,
 for the same reason that IPv4 is running short.  It seems impossible
 now, but I suppose that running out of space in IPv4 seemed impossible
 at one time, too.

IPv6 allocation will have some of the same properties of IPv4
allocation.  We're still using power-of-two sized blocks, we'll still
waste at least one bit of address space per level of delegation.  It
will probably be somewhat easier to renumber networks and recycle
addresses - how much easier remains to be seen.

OTOH, I don't see why IPv6 will necessarily have significantly more
levels of assignment delegation.  Even if it needs a few more levels,
losing 6 or 7 bits out of 128 total is far less costly than losing 4 or 5
bits out of 32.

 The allocation pattern is easy to foresee.  Initially, enormous
 subsets of the address space will be allocated carelessly and
 generously, because "there are so many addresses that we'll never run
 out" 

I don't know where you get that idea.  Quite the contrary, the
regional registries seem to share your concern that we will use up
IPv6 space too quickly and *all* of the comments I've heard about the
initial assignment policies were that they were too conservative.
IPv6 space does need to be carefully managed, but it can be doled out
somewhat more generously than IPv4 space.

 and because nobody will want to expend the effort to achieve
 finer granularity in the face of such apparent plenty.  

First of all, having too fine a granularity in allocation prevents you
from aggregating routes.  Second, with power-of-two sized allocations
there's a limit to how much granularity you can get - even if you
always allocate optimal sized blocks.
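The granularity limit can be quantified with a short sketch (an editorial illustration; the request sizes are arbitrary examples):

```python
import math

# Even an "optimal" allocation rounds the request up to the next power
# of two, so part of every block goes unused.
def optimal_block(hosts: int) -> int:
    """Smallest power-of-two block covering `hosts` addresses."""
    return 1 << math.ceil(math.log2(hosts))

for need in (300, 1000, 5000):
    block = optimal_block(need)
    print(need, block, f"{need / block:.0%} utilized")
# 300 -> 512 (59%), 1000 -> 1024 (98%), 5000 -> 8192 (61%)
```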

 This mistake will be repeated for each subset of the address space
 allocated, by each organization charged with allocating the space.

It's not clear that it's a mistake.  it's a tradeoff between having
aggregatable addresses and distributed assignment on one hand and
conserving address space on the other.  and the people doing address
assignment these days are quite accustomed to thinking in these terms.

 If you need further evidence, look at virtual 

Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Keith Moore

 Users shouldn't care or know about the network's internal addressing.
 Some of the application issues with NATs spring directly from this issue
 (e.g. user of X-terminal setting display based on IP address instead of
 DNS name).

it's not the same issue.  the point of using IP addresses in DISPLAY 
variables is not to make them visible to the user - it's because 
using the IP address is (in a non-NATted network) far more reliable 
than depending on the DNS lookup to work.  the fact that doing this
makes the address more visible to the user is just a side effect;
most users don't care diddly about it one way or the other as long
as it works.

Keith




Fw: IPv6: Past mistakes repeated?

2000-04-24 Thread Anthony Atkielski

 If what you suggest should be implemented, then
 probably the entire software of all the switches
 and hubs need to be upgraded (if not entirely scrapped) .

That's what has to be done, anyway.  I'm not sure that I see what you are
saying.

 As also every time the source and destination addresses are
 upgraded, all the systems and the related software need to
 be upgraded.

If you design it correctly in the first place, this isn't necessary.

Think of a railroad network as an analogy.  The current design for IP
addressing allows a fixed number of tracks, and you have to allocate them
all in advance.  If the future evolution of the network is such that your
allocation turns out to be less than optimal, you have to redo entire
sections of the network to reallocate tracks.

Now compare this with an open-ended addressing scheme.  All you have to do
in this case is allocate the first track.  As additional tracks are needed,
you build new ones, branching off from the first track.  If some branches
evolve more than others, no problem--they can just add additional branches
of their own.  No branch impinges on any other branch.  You might have only
two branches leading away from your original track.  One of them might lead
to a total of fifty stations (endpoints), but the other might lead to ten
trillion stations.  It doesn't matter, and you don't have to care, since
when you route trains on your section of the network, all you'll look at is
the first digit of the destination, which will tell you which of your two
branches the train must follow.

And you might be on a branch yourself, for that matter.  The network can be
restructured upstream or downstream of your little section of track, and it
remains transparent to you, as long as the digit designations for you and
the two branches you serve remain the same.

 Personally my telephone number has changed
 3 times within the last couple of years.

Probably because the telephone numbering scheme was not truly open-ended.
In the U.S., for example, attempts to fix the number of digits in telephone
numbers have caused great problems, with things like area codes being
exhausted, exchanges being exhausted, and so on.  A truly open-ended scheme
wouldn't have this problem--you'd just add more digits in the areas that
needed more numbers.  This open-ended scheme is actually in place for
international calling.  (Equipment usually has fixed-length buffers for
telephone numbers, but all you have to do is boost the size of the buffers
if you ever come across numbers that won't fit.)

 But the million dollar question is that whether the
 protocol and switch vendors would like to scrap the
 years and amount of investment that they have already
 made in the existing system.

Looks like they're pretty much doing that with IPv6 now!  And with a
fixed-length address, they'll be doing it again in 15 years, only it will
cost a thousand times more than it did on this pass.

  -- Anthony





Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Anthony Atkielski

 But the first thing to remember is that there are
 tradeoffs.  Yes, infinitely long addresses are nice,
 but they're much harder to store in programs (you
 can no longer use a simple fixed-size structure for
 any tuple that includes an address) ...

Sure you can.  You just allocate the fixed space generously, _and_ you
include code that traps any address with more bytes than you can handle.  If
that ever happens, you rebuild your table with a larger fixed space.  You
might include code to catch addresses that approach the limit so that you
have time to rebuild the table at your leisure, rather than when the routing
actually fails, of course.

 ... and (more importantly) route, since the router
 has to use the entire address in making its decision.

Why does it need the entire address?

You only need the entire address if addresses are assigned arbitrarily from
the address space, such that no subset of the complete address is in itself
sufficient to complete any portion of the routing.  That is indeed a problem
today, with address allocation that does not necessarily strictly follow
routing requirements.  But that was imposed by the very fact of trying to
allocate a fixed space in advance.  In a variable-length address space, you
don't have to anticipate any kind of advance allocation--you can just add
digits to addresses where they are required, and routers only need to look
at enough of an address to figure out where it should go next.  In a
variable-length scheme, you can be sure that any address that begins with
19283 always goes down the route you have for 19283, no matter what the
remaining digits are.  (Naturally, you could route with finer granularity at
some nodes, if you wanted to, but you wouldn't necessarily be obligated to
do that.)
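The "look only at leading digits" idea is a longest-prefix match, which can be sketched as follows (an editorial illustration; the route table and digit strings are invented):

```python
# A router needs only enough leading digits of the destination to pick
# a branch; the tail of the address is someone else's problem.
routes = {
    "1": "branch-A",
    "19": "branch-B",
    "19283": "branch-C",
}

def next_hop(address: str) -> str:
    # Longest-prefix match: try the longest leading substring first.
    for length in range(len(address), 0, -1):
        hop = routes.get(address[:length])
        if hop:
            return hop
    return "default"

print(next_hop("1928374650"))  # -> branch-C (matches prefix "19283")
print(next_hop("1555000"))     # -> branch-A (only "1" matches)
```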

 Furthermore, if it's a variable-length address, the
 router has to know where the end is, in order to look
 at the next field.

Just put that up front.  For example, prefix the address with a length byte.
If the byte is zero, the address is four bytes long (compatible with IPv4).
If it is one, the address is five bytes long.  And so on, up to 254 + 4 = 258
bytes long.  If the byte is 255, however (unlikely, but this scheme would
provide for _any_ address length), then the _next_ byte specifies additional
bytes to be added to 254 (i.e., lengths of 254 through 508 bytes).  This
second byte follows the same pattern, and so on.  You'll never run out of
addresses this way, ever.
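A minimal decoder for this scheme might look like the following (an editorial sketch of one reading of the proposal; it follows the "add to 254" rule literally, so the chained lengths come out slightly higher than the ranges quoted in the message):

```python
# Decode the address length from the leading length byte(s) of the
# scheme described above: byte 0 means a 4-byte (IPv4-size) address,
# byte N means 4 + N bytes, and byte 255 chains to the next length byte.
def address_length(header: bytes) -> int:
    length, i = 4, 0
    while header[i] == 255:     # 255: add 254 and consult the next byte
        length += 254
        i += 1
    return length + header[i]

print(address_length(bytes([0])))        # -> 4
print(address_length(bytes([1])))        # -> 5
print(address_length(bytes([254])))      # -> 258
print(address_length(bytes([255, 10])))  # -> 268 (4 + 254 + 10)
```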

It's not really hard.  You just have to write the code up front to handle
it.  And if you don't want to allow for infinite capacity (you have to stop
somewhere, in any practical implementation), you just make darn sure that
you have code that will trap any address longer than you can handle.  If
anything ever hits the trap, you change some parameters and recompile, and
you're back online.

 Even if the destination address comes first, routers have
 to look at the source address because of ACLs -- though
 you don't want address-based security (and you shouldn't
 want it), you still need anti-spoofing filters.

Hmm... I don't know.  If you restrict the address field to routing only, do
you still need anti-spoofing?  A given address can lead to only one
endpoint, unless I'm missing something here.

 One reason is something that was pointed out by a number
 of people:  code that isn't exercised generally doesn't
 work well.  If we didn't have really long addresses in
 use from the beginning, some major implementations wouldn't
 support them properly.

It's a lot easier to fix a bug than to rewrite the protocol from scratch
when it runs out of capacity.

 There have been designs for [geographic addressing], I
 should add, such as the Metropolitan Area Exchanges.
 But for those to work, assorted ISPs would have
 to co-operate on a large scale, something that I
 don't think will happen.

Why would they have to cooperate in a variable-length scheme?  They would
only allocate addresses on their branch.  They could route within their
branch without knowing or caring about other branches (as long as they know
the high-order digits identifying their own branches, which someone else
would assign to them).  And if they ever saw addresses with high-order
digits that didn't match their own assigned digits, they'd know that the
address meant "elsewhere."  Of course, finer granularity would be possible,
but not required.

That's how telephones work.  If you call someone in Mongolia from the U.S.,
the U.S. exchanges don't have to know or care about the trailing digits in
the number; they only have to look at enough of the number to know to route
the call towards Mongolia.  Can you imagine what the telephone system would
be like if every country had to know every detail about the numbering system
of every other country?
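
The phone analogy can be made concrete. A toy sketch (trunk names invented; 976 really is Mongolia's country code): the exchange matches only as many leading digits as it needs and ignores the rest of the number.

```python
# Routing on leading digits only: longest known prefix wins, and the
# trailing digits of the number are never examined at all.

ROUTES = {
    "976": "trunk-to-mongolia",   # country code for Mongolia
    "33":  "trunk-to-france",
    "1":   "domestic-switch",
}


def next_hop(number, routes=ROUTES):
    # Try progressively shorter prefixes, so the longest match wins.
    for i in range(len(number), 0, -1):
        hop = routes.get(number[:i])
        if hop is not None:
            return hop
    return "default-route"
```

The U.S. exchange in the example needs only the entry for "976"; everything after those digits is Mongolia's business.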

 Phone numbers are allocated geographically, but that
 only works because historically, most areas only had
 one monopoly phone company.

That's not a requirement.  Just assign additional 

Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Anthony Atkielski

 its ironic you should send this today, when 12
 million people in london, england, had to learn
 to dial 8 digits instead of 7 because of lack
 of foresight from the telephone regulator when
 re-numbering less than a decade ago ...

France has increased the number of digits in telephone numbers several times
over the past 20 years, and it has always been without a hitch.  Each time
the telco has prepared for a barrage of misdials, just in case, but they
have never materialized.  Currently, numbers are ten digits long, everywhere
in the country, although the first digit is actually a selector for your
chosen local telco provider (0 = France Telecom, the historical operator).

  -- Anthony




RE: Universal Network Language

2000-04-24 Thread Lillian Komlossy

I totally agree with you - at least there should be a choice, either user-
or content-induced, to translate or not to translate.  Also, one must
consider how much the translation service or program will become
another point of failure - or even a security issue.

Lillian Komlossy 
Site Manager 
http://www.dmnews.com   
http://www.imarketingnews.com  
(212) 925-7300 ext. 232 


-Original Message-
From: John Stracke [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 24, 2000 4:18 PM
To: Lillian Komlossy
Subject: Re: Universal Network Language


Lillian Komlossy wrote:

 It would make sense if it sat in front of the applications such as the
 browsers and did the translation - or the applications interfaced with it -
 but either way it will be another monkey to slow down the entire process.  I
 don't know if it is worth the effort.

I suspect it will be if, and only if, the actual translation works.  If it
does, then someone will come up with a way to make it more efficient.  At the
moment, it looks like they're putting the translation services onto servers
because they think that's the only way to get them deployed; and probably
they're right.

I'm skeptical about the translation, though; machine translation has a long
way to go, and forcing it to run through a synthetic language will probably
hinder more than it helps.  (Think about what happens when you want to
translate from, say, English to German, and the concept you're translating
can be expressed concisely in both languages, but not in UNL.)

--
/==\
|John Stracke| http://www.ecal.com |My opinions are my own.|
|Chief Scientist |=|
|eCal Corp.  |That is correct. I'm out of fuel. My landing |
|[EMAIL PROTECTED]|gear is jammed. And there's an unhappy bald  |
||eagle loose in the cockpit.  |
\==/





Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Anthony Atkielski

 I agree! Why create a finite anything when an infinite
 possibility exists?

Exactly.  If you design an open-ended protocol, you're far less likely to
ever have to rewrite it.

 On another note, I have heard the argument that
 a unique identifier already exists in the form of
 a MAC address why not make further use of it?

Not every machine on the Internet has an Ethernet card with a MAC address,
otherwise it might not be such a bad idea.  I think using the MAC address is
an excellent idea for software protection schemes (it's a lot more elegant
than a hardware key such as a dongle), but nobody seems interested in that.
In any case, this latter application is outside the scope of Internet
discussions.

  -- Anthony




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Anthony Atkielski

 The telephone number situation in the United States
 has been one of continual crisis for years, because
 of rapid growth in use (in part because of Internet
 access!).  The area served by a given "area code" would
 be split into smaller areas with multiple area codes;
 these days, those areas aren't necessarily even contiguous.

That is mostly because the telco(s) tried to impose a fixed address length
on a scheme that really should have remained variable.  Telephone numbers
overseas are truly variable.  When you dial 011+3, the remaining digits can
be anywhere from one to a thousand.  The local end just stores them all
until you say you are done (by pausing or hitting the # key), and then it
routes it as far as it can, and passes the rest on to some other node.

 I'd suggest that address assignment policy should
 keep process lightweight, so that it is realistic for
 businesses to regularly ask for assignments in more
 granular chunks ...

But if you use a truly variable scheme, you don't have to assign anything at
all.

Say Company X wants some addresses, and it is in an area where all addresses
start with 9482.  You just add some digits, tell them what they are, and
they can add as many addresses as they want behind those digits.  All you
have to care about is that 94825x gets routed to Company X.  The rest of
the address allocation is their business.  They might have just two digits
on the end, or they might have forty.

With fixed-length addresses, you're in trouble as soon as you make an
assignment.  You might assign 94820000 through 94829999 to Company X.  The
problem is that, if Company X needs only 200 addresses, you've wasted 9800
addresses, and you can't give them to anyone else.  Conversely, if Company X
ever needs more than 10,000 addresses, you have to completely reallocate
everything, or fragment their address range.  Either way, you lose.
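
The waste arithmetic above can be made explicit. The 10,000-address block size is inferred here from the 200-used / 9800-wasted figures in the text:

```python
# Fixed-block delegation waste, using the figures from the paragraph
# above.  The block size of 10,000 is inferred from 200 used + 9800
# wasted.

block_size = 10_000   # e.g. a range like 94820000 .. 94829999
in_use = 200

wasted = block_size - in_use          # addresses nobody else can have
utilization = in_use / block_size

assert wasted == 9_800
assert utilization == 0.02            # only 2% of the delegation is used
```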

 ... big users would be willing to request another 256
 when the new branch office opens, then another 64 for
 the summer interns...

All well and good, except that it fragments the address space, making it
impossible to route on just a portion of the address--you have to start
looking at the entire address, all the time.

  -- Anthony




Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread Theodore Y. Ts'o

   Date: Mon, 24 Apr 2000 15:06:21 -0400
   From: John Stracke [EMAIL PROTECTED]

it's not at all clear to me why households need traditional multihoming,
nor how to make it feasible for households to have it.  so I would regard
this as overdesign of the home 'internet interface box'

   Now that I've got a decent DSL provider, I've found that the least
   reliable component of my Net access is the power line: my DSL just
   works, but the power company has been flaking out 2-3 times a week
   for the past month or so.  (I have my computers on UPSes, but your
   average K-Mart Box user wouldn't.)  Multihoming wouldn't solve that.

It depends on your ISP.  In the worst case some consumer grade DSL lines
are pretty bad --- 2-3 hour outages in the middle of the day (after all
no one uses the Internet during the day --- they're at work! :-),
sometimes every 3-4 days.  (I don't use this provider any more; clearly
they're using all of their money to take out full-page and half-page ads
in the Boston Globe's, and they're not spending it on upgrading their
network operations.  :-)

To make matters worse, there's a huge price differential between
"consumer grade" ISP's and "business grade" ISP's.  This kind of
situation is just ripe for arbitrage.  :-)

I can imagine some poor (but demanding) network geeks deciding that
they'll solve this problem by purchasing multiple cheap consumer grade
ISP's (say a cable modem and an ADSL line), and then setting up tunnels to
some place where they can get address space.  If you can assume that
only one of your consumer grade pipes will crap out at a time, they can
switch the tunnel endpoint to the other grade of service, and keep
working with the same IP addresses, even though one of their lines has
stopped working.  (This is also useful because the cheap consumer grade
ISP's generally won't give you a large address block, even without the
dual redundancy --- and two cheap consumer grade network services are
probably far cheaper than a single business grade ISP monthly fee.)

Right now, getting address space via tunnelling usually requires knowing
someone with connections at some institution with a fast connection to
the internet and a large amount of available address space.  But I
suspect there may be a huge business opportunity here, especially if the
price differential between various grades of service continues.

- Ted




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread John Stracke

Keith Moore wrote:

 if by that time the robot population exceeds the human population then
 I'm happy to let the robots solve the problem of upgrading to a new
 version of IP.

Ah--the Iron Man's Burden.  :-)

--
/\
|John Stracke| http://www.ecal.com |My opinions are my own.  |
|Chief Scientist |===|
|eCal Corp.  |Beware of wizards, for you are crunchy and good|
|[EMAIL PROTECTED]|with ketchup.  |
\/






RE: IPv6: Past mistakes repeated?

2000-04-24 Thread Dick St.Peters

  making each house a TLA does not strike me as a scalable multihoming
  solution for very large numbers of houses, given the current state of
  the routing art.
 
 The restriction has little to do with the current state of the routing art
 (which is not to say that better path-selection architectures than the one
 the Internet is currently using do not exist :-).
 
 Even with the best routing system, it still couldn't support tracking large
 numbers of houses as individual destinations (i.e. having individual routing
 table entries across the global scope) - even if the routers had large
 enough route table memories to hold the 100's of millions of routes which
 could result.

I should probably just go back to lurking, but ... my take on every
house being multihomed was to imagine full local meshing - each house
peering with its neighbors redundantly.  If, say, my power-line port
was down, that information needn't be known by anything outside my own
neighborhood.  When the local power distribution center couldn't get a
power-grid packet to me directly, they'd give it to my neighbor and
let his smart house determine whether to send it to mine by wireless
or cable or whatever else has come along.  The rest of the world could
just engage in some kind of "get it closer" routing.

Don't ask me about mobile users.  I'm going back to lurking ...

--
Dick St.Peters, [EMAIL PROTECTED] 
Gatekeeper, NetHeaven, Saratoga Springs, NY
Saratoga/Albany/Amsterdam/BoltonLanding/Cobleskill/Greenwich/
GlensFalls/LakePlacid/NorthCreek/Plattsburgh/...
Oldest Internet service based in the Adirondack-Albany region




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Anthony Atkielski

From: "Keith Moore" [EMAIL PROTECTED]


 I suppose that's true - as long as addresses are consumed
 at a rate faster than they are recycled.  But the fact that
 we will run out of addresses eventually might not be terribly
 significant - the Sun will also run out of hydrogen
 eventually, but in the meantime we still find it useful.

Ah ... famous last words.  I feel confident that similar words were said
when the original 32-bit address scheme was developed:

"Four billion addresses ... that's more than one computer for every person
on Earth!"

"Only a few companies are ever going to have more than a few computers ...
just give a Class A to anyone who asks."

And so now we are running out.

But the real thing here is, even with the wisest allocations imaginable, we
would still run out much, much faster than the total number of addresses in
the address space might lead one to believe.  And that is because we have to
_predict_ how addresses will be used, and we cannot easily change the
consequences of those predictions.  So there are always significant blocks
of unused addresses, even as we run out of addresses in other ways.

 it is certainly true that without careful management
 IPv6 address space could be consumed fairly quickly.
 but to me it looks like that with even moderate care IPv6
 space can last for several tens of years.

Ten years, I'd say.  And then redefining a new addressing scheme will cost a
thousand times more than it will today, at least.  Even washing machines
will have to get new programming!

 not really.  IP has always assumed that address space
 would be delegated in power-of-two sized "chunks" ...

Aye, there's the rub.  It has always been necessary to _assume_ and
_delegate_ address space, because the address space is finite in size.  It
has always been necessary to predict the future, in other words, in a domain
where the future is very uncertain indeed.

 ... at first those chunks only came in 3 sizes (2**8,
 2**16, or 2**24 addresses), and later on it became possible
 to delegate any power-of-two sized chunk.

But by then, a lot of the biggest chunks were already in use.

 ... but even assuming ideally sized allocations, each
 of those chunks would on average be only 50% utilized.

Right.  So the only solution is to get rid of the need to allocate in
advance in the first place.

Think of variable addressing and the needs of the tiny country of Vulgaria.
Vulgaria needs address space.  Okay, so you say, well, Vulgaria is in, say,
Eastern Europe, and all addresses in Eastern Europe begin with 473.  All the
addresses from 4731 to 4738 are taken.  So you add address 4739, which now
means "everyone else in Eastern Europe," and you assign 47391 to Vulgaria.
That's all you have to do.  If Vulgaria wants to further subdivide, it can;
it can assign 473911 to Northern Vulgaria, and 473912 to the rugged
Vulgarian Alps region.  It doesn't matter to anyone except Vulgaria.

Given this, North America doesn't have to change anything at all.  Before,
anything that started with 473 went to Eastern Europe, and that's still the
case.  Some European routers have to smarten up a bit, because now they have
to be aware that 4739 goes to another routing point that handles "all other"
Eastern European countries.  And this new routing point (heck, maybe
Vulgaria will host it, eh?) must know that 47391 goes to Vulgaria, but
nothing more.  Only routers in Vulgaria itself need to care where 473911
goes as compared to 473912.  And only routers in the rugged Vulgarian Alps
need to know that 4739124 goes to Smallville, and 4739126 goes to
Metropolis, both cities nestled there in the Alps.  And since the addressing
scheme is open ended, even if Vulgaria one day has ten trillion computers on
the Net, nothing outside Vulgaria needs to change.
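
The Vulgaria walkthrough amounts to one tiny table per routing point; a sketch (digit strings come from the example above, router and link names are invented). Note that delegating 47391 to Vulgaria touched only the Eastern European tables:

```python
# Hierarchical routing from the Vulgaria example: each routing point
# knows only the prefixes relevant to its own level of the tree.

TABLES = {
    "north-america":     {"473": "link-to-eastern-europe"},
    "eastern-europe":    {"4731": "country-4731", "4739": "overflow-exchange"},
    "overflow-exchange": {"47391": "vulgaria"},
    "vulgaria":          {"473911": "north-vulgaria",
                          "473912": "vulgarian-alps"},
}


def route(router, address):
    """Longest-prefix match within a single router's own small table."""
    table = TABLES[router]
    for i in range(len(address), 0, -1):
        if address[:i] in table:
            return table[address[:i]]
    return "default"
```

A packet for the Vulgarian Alps hops north-america -> eastern-europe -> overflow-exchange -> vulgaria, with each hop inspecting only a few leading digits.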

 allocating space in advance might indeed take away
 another few bits. but given the current growth rate
 of the internet it is necessary.

Only with a fixed address space.

 the internet is growing so fast that a policy of
 always allocating only the smallest possible chunk
 for a net would not only be cumbersome, it would result
 in poor aggregation in routing tables and quite
 possibly in worse overall utilization of address space.

Exactly ... but that's the magic of the variable address scheme.  You only
have to allocate disparate chunks in a fixed address scheme because the size
of each chunk is limited by the length of an address field.  But if the
address field is variable, you can make any chunk as big as you want.  If
you have addresses of 4739124xx initially (Metropolis only had a few
machines at first), and you run out of addresses after 473912498, you just
make 473912499 point to "more addresses for Metropolis," and start
allocating, say, 4739124990001 through 4739124999998 (you always leave at
least one slot empty so that it can point to "more addresses").
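
The "always leave one slot empty" rule amounts to a tiny open-ended allocator. A sketch using two-digit suffixes, with 99 reserved as the extension slot (the digits are illustrative):

```python
# Open-ended allocation as described above: within a prefix, hand out
# suffixes 00..98 as ordinary addresses, and reserve 99 to name a new,
# longer prefix with 99 fresh slots of its own.

def allocate(prefix, counters):
    """Return the next address under prefix, extending when 00-98 are gone."""
    n = counters.get(prefix, 0)
    if n < 99:                       # slots 00..98 are ordinary addresses
        counters[prefix] = n + 1
        return prefix + f"{n:02d}"
    # Slot 99 is never an address itself; it points at "more addresses".
    return allocate(prefix + "99", counters)
```

The first 99 allocations under "4739124" run 473912400 through 473912498; the 100th transparently becomes 47391249900, and the space never runs out.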

 I don't know where you get that idea.

That's how it happened for IPv4.

 Quite the contrary, the 

Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Jeffrey Altman

  Users shouldn't care or know about the network's internal addressing.
  Some of the application issues with NATs spring directly from this issue
  (e.g. user of X-terminal setting display based on IP address instead of
  DNS name).
 
 it's not the same issue.  the point of using IP addresses in DISPLAY 
 variables is not to make them visible to the user - it's because 
 using the IP address is (in a non-NATted network) far more reliable 
 than depending on the DNS lookup to work.  the fact that doing this
 makes the address more visible to the user is just a side effect;
 most users don't care diddly about it one way or the other as long
 as it works.
 
 Keith
 

The default DISPLAY variable for an X Server on the local machine is

  unix:0

This means contact the 0th display attached to the 0th X Server on the
local machine.  When you make a connection to a remote machine you
cannot count on the return from gethostname() having any
relationship to the name in the DNS.  Not to mention that on a
multi-homed machine you need to be able to choose the IP address that
is actually accessible to the remote.  So what you do is look at the
IP address on the local end of the socket that is being used to
connect to the remote system and insert that IP address into the
exported DISPLAY variable.  This has of course worked for 20 years and
fails when a NAT is in the middle.
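
The trick Jeffrey describes can be sketched as follows (the port argument and the exact DISPLAY formatting here are illustrative): instead of trusting gethostname() or the DNS, read the local address of the very socket used to reach the remote host.

```python
# Build a DISPLAY value from the local end of the socket connected to
# the remote system -- the address the remote can actually reach.
# This is exactly what breaks when a NAT rewrites that address.

import socket


def display_for(remote_host, remote_port, display=0):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((remote_host, remote_port))
        local_ip = s.getsockname()[0]   # our address, as seen on this path
    finally:
        s.close()
    return f"{local_ip}:{display}.0"
```

On a multi-homed machine this automatically picks the interface that faces the remote, which is the whole point of the technique.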



Jeffrey Altman * Sr.Software Designer * Kermit-95 for Win32 and OS/2
 The Kermit Project * Columbia University
  612 West 115th St #716 * New York, NY * 10025
  http://www.kermit-project.org/k95.html * [EMAIL PROTECTED]





correction Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Keith Moore

in an earlier message, I wrote:

 OTOH, I don't see why IPv6 will necessarily have significantly more
 levels of assignment delegation.  Even if it needs a few more levels,
 6 or 7 bits out of 128 total is a lot worse than 4 or 5 bits out of 32.

the last sentence contains a thinko.  it should read:

6 or 7 bits out of 128 total is a lot better than 4 or 5 bits out of 32.

(I originally wrote the comparison in the other order, but when I swapped
sides, forgot to change the direction of the comparison.)

Keith




Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread Masataka Ohta

Sean;

 [Keith Moore on a "KMart box"]
 | take it home, plug it in to your phone line or whatever, and get
 | instant internet for all of the computers in your home.  
 | (almost just like NATs today except that you get static IP addresses).
 
 No, not "or whatever" but "AND whatever".

Do you mean "plug THEM in to your phone line and whatever"?

 Otherwise this is a nice but insufficient device, 

Otherwise the device is unnecessarily complex (which means it is
more expensive and less reliable) and, worse, is a single point
of failure.

Masataka Ohta




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Ralph Droms

At 09:45 PM 4/24/00 +0200, Anthony Atkielski wrote:
  I agree! Why create a finite anything when an infinite
  possibility exists?

Exactly.  If you design an open-ended protocol, you're far less likely to
ever have to rewrite it.

You just have to redeploy new implementations when you add new 
features.  "Open-ended" isn't helping us in extending DHCP.  That's not to 
say a practical, extensible, open-ended protocol can't be written...

- Ralph





Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Keith Moore

personally, I can't imagine peering with my neighbors.
but maybe that's just me ... or my neighborhood.

Keith




Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread Andrew Partan

On Mon, Apr 24, 2000 at 04:32:38PM +0200, Sean Doran wrote:
 Therefore, in order to support IPv6 house-network multihoming, so
 as to preserve at least these three features of traditional
 multihoming, either the current IPv6 addressing architecture's
 restrictions on who can be a TLA must be abandoned (so each house
 becomes a TLA), or NATs must be used to rewrite house-network
 addresses into various PA address ranges supplied by the multiple
 providers.

Or separate the end system identifier from the routing goop.  This
solves lots of problems (while introducing others).
[EMAIL PROTECTED] (Andrew Partan)




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Keith Moore

 Ah ... famous last words.  I feel confident that similar words were said
 when the original 32-bit address scheme was developed:
 
 "Four billion addresses ... that's more than one computer for every person
 on Earth!"
 
 "Only a few companies are ever going to have more than a few computers ...
 just give a Class A to anyone who asks."

I wasn't there, but I expect it would have sounded even more preposterous
for someone to have said: "I'm absolutely positive that this Internet thing 
will reach to nearly everyone on the planet in a couple of decades, and 
therefore we need to make sure it has many times more than 32 bits of 
address space" -- even though that's what eventually happened.  

but just because it happened once doesn't mean that it will happen again.
we do well to learn from the past, but the past doesn't repeat itself
exactly.

it often seems to be the case that if you design for the long
term, what you get back isn't deployable in the near term because
you've made the problem too hard.  and if you design for the near 
term, what you get back will break in the long term.  but at least 
you get somewhere with the latter approach - the fact that we got 
a global Internet out of IPv4 demonstrated to people that the 
concept was viable.

today's design constraints aren't the same as tomorrow's.  with
today's Internet a lack of address space is a big problem.  with
IPv6 there's a considerable amount of breathing room for address 
space.  address space shortage is just one of many possible problems.
as long as the network keeps growing at exponential rates we are 
bound to run into some other major hurdle in a few years.  it might
be address space but the chances are good that before we hit that 
limitation again we will run into some other fundamental barrier.  we 
can either try to anticipate every possible hurdle that the Internet 
might face or we can concentrate on fixing the obvious problems now 
and wait for the later problems to make themselves apparent before 
trying to fix them.  if we try to anticipate every major hurdle, 
we will never agree on how to solve all of those problems, and the 
Internet will bog down to the point that it's no longer useful.

 But the real thing here is, even with the wisest allocations imaginable, we
 would still run out much, much faster than the total number of addresses in
 the address space might lead one to believe.  And that is because we have to
 _predict_ how addresses will be used, and we cannot easily change the
 consequences of those predictions.  

no, that's just bogus.  on one hand you're saying that we cannot predict
how addresses will be used, and on the other hand you're saying that you 
can definitely predict that we'll run out of IPv6 addresses very soon.

 Ten years, I'd say.  

right now you're just pulling numbers out of thin air.  you have yet to 
give any basis whatsoever to make such a prediction credible.

  not really.  IP has always assumed that address space
  would be delegated in power-of-two sized "chunks" ...
 
 Aye, there's the rub.  It has always been necessary to _assume_ and
 _delegate_ address space, because the address space is finite in size.  

wrong. you need to make design assumptions about delegation points, and
delegate portions of address space, even for variable length addresses
of arbitrary size.

  ... at first those chunks only came in 3 sizes (2**8,
  2**16, or 2**24 addresses), and later on it became possible
  to delegate any power-of-two sized chunk.
 
 But by then, a lot of the biggest chunks were already in use.

true, several of the class A blocks were already in use by then.  
but initial allocations of IPv6 space are much more conservative.

  ... but even assuming ideally sized allocations, each
  of those chunks would on average be only 50% utilized.
 
 Right.  So the only solution is to get rid of the need to allocate in
 advance in the first place.

no.  even with variable length addresses you want to exercise some 
discipline about how you allocate addresses.  otherwise you end up with
some addresses being much longer than necessary, and this creates 
inefficiency and problems for routing.

  allocating space in advance might indeed take away
  another few bits. but given the current growth rate
  of the internet it is necessary.
 
 Only with a fixed address space.

nope.  even phone numbers are allocated by prefix blocks.

  the internet is growing so fast that a policy of
  always allocating only the smallest possible chunk
  for a net would not only be cumbersome, it would result
  in poor aggregation in routing tables and quite
  possibly in worse overall utilization of address space.
 
 Exactly ... but that's the magic of the variable address scheme.  You only
 have to allocate disparate chunks in a fixed address scheme because the size
 of each chunk is limited by the length of an address field.  

no, there are lots of other reasons for doing it.  you seem to be 
forgetting that routing 

Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread Paul Ferguson

At 08:27 PM 04/24/2000 -0400, Andrew Partan wrote:

Or separate the end system identifier from the routing goop.  This
solves lots of problems (while introducing others).

Deja Vu.

- paul




Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-24 Thread Sean Doran

asp writes:

| Or separate the end system identifier from the routing goop.  This
| solves lots of problems (while introducing others).

Right, so in the 8+8 model, some router performs a NAT function by
writing in the routing goop portion at an address abstraction boundary.

The host does not need to know the routing goop that will be used,
nor does it need to know the full routing goop associated with a
host from which it receives a packet.

However, I am wary of your terminology.  Rather than a system
identifier, you instead want a local locator -- something that
is unique within the local addressing scope and which allows
other hosts and routers in the scope to locate the correct receiving
interface.   The "routing goop" is a locator which is useful to
hosts outside the local addressing scope, and may vary from location
to location in the Internet.

In particular, if you follow the excellent posting by Anthony Atkielski
about variable-length addressing, you might find yourself agreeing
that in principle the "routing goop" may vary from place to place
not only by value, but also by length.

The NAT function is that which translates the "routing goop" 
from "undefined" to something useful outside the originating
local addressing scope.   It can also translate the "routing goop"
from one value to another.  That translation can also be an extension;
e.g. changing "4321" into "654321".

The "system identifier" is either a locator (which is how I guess
you mean to use it) or an actual system name, which would behave
more like a DNS entry.

The IRTF's NSRG is looking at what the system name/endpoint identifier
should be, how it should be carried, how it can be used to determine
what locators to use, and so forth.   It's a tricky problem in some ways.

If you overload these concepts and stuff the value into a packet,
(like 8+8), it could be a unique number (8 bytes long in this case).
In a given local routing scope we flat-route on those 8 byte addresses.
Outside that routing scope, you would not route on those 8 byte addresses
at all (you'd use the other 8 bytes of "routing goop").
 
If the "system identifier" 8 bytes are globally unique, then any
action where in IPv4 you operate on IP address as a way of identifying 
a host, you would only use those 8 bytes.   For example, the equivalent
of an IN-ADDR.ARPA lookup would use only those bytes, and none of the
routing goop bytes.

The NAT function then can be more restricted - it translates
only the "routing goop" part and not the "system identifier" part
of an 8+8 address.  The "system identifier" 8 bytes is used by
end hosts to determine that they are talking to the same entities,
even though the "routing goop" has been adjusted one or more times
in flight between the two hosts.
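
A toy rendering of this split (the 8+8 byte layout follows the message; the function names are mine): a 16-byte address whose first 8 bytes are rewritable routing goop and whose last 8 bytes are a stable system identifier.

```python
# The 8+8 split: a NAT may rewrite only the "routing goop" half, while
# end hosts compare only the "system identifier" half.

GOOP = slice(0, 8)    # first 8 bytes: routing goop, rewritable in flight
IDENT = slice(8, 16)  # last 8 bytes: stable system identifier


def rewrite_goop(address, new_goop):
    """What the NAT function does: touch the goop, never the identifier."""
    assert len(address) == 16 and len(new_goop) == 8
    return new_goop + address[IDENT]


def same_endpoint(a, b):
    # A conversation survives any number of in-flight goop rewrites,
    # because hosts identify each other by the identifier bytes alone.
    return a[IDENT] == b[IDENT]
```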

This can all be emulated in IPv4 iff we have a separate namespace
for "system identifier" carried in a conversation somehow.  

However, since it does make NAT much more straightforward, I do
not expect the bruised-knees brigade to admit or even recognize
the extra utility that such a namespace would bring to IPv6.  -:(

Fortunately, and to scare Steve Deering again, there are some
bright and sane people who think most of IPv6 isn't so bad, and
who are willing to play nicely in NSRG in hopes of improving it.
This bodes well.

Sean.




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Valdis . Kletnieks

On Mon, 24 Apr 2000 21:45:43 +0200, Anthony Atkielski [EMAIL PROTECTED]  said:
 Not every machine on the Internet has an Ethernet card with a MAC address,
 otherwise it might not be such a bad idea.  I think using the MAC address is
 an excellent idea for software protection schemes (it's a lot more elegant
 than a hardware key such as a dongle), but nobody seems interested in that.

Nobody is interested in it because it doesn't work.

The Ethernet spec requires that each card have a unique MAC address
that's burnt onto the card.  However, due to some truly weird stuff
done by DECnet "way back when", cards were *also* required to support
loading a new MAC address on the fly.

So to pirate a software package that locks based on the MAC address, all
you have to do is pirate it off any compatible machine on any subnet
other than your own.  You can even pirate it off your own subnet
if you don't care about ARP working. ;)

Valdis Kletnieks
Operating Systems Analyst
Virginia Tech




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Valdis . Kletnieks

On Mon, 24 Apr 2000 22:18:09 +0200, Anthony Atkielski [EMAIL PROTECTED]  said:
 allocate a fixed space in advance.  In a variable-length address space, you
 don't have to anticipate any kind of advance allocation--you can just add
 digits to addresses where they are required, and routers only need to look
 at enough of an address to figure out where it should go next.  In a

Actually, we argued a *lot* about fixed/variable.  The reason 128
bit fixed won out was to a large extent due to the people from
various large high-performance router companies wanting a way to 
switch packets *quickly*.  At the time, a DS3 was considered REALLY
fast, and only a few places had FDDI campus backbones.

The problem is that the router guys wanted to fast-path the case of
"no IP option field, routing entry in cache" so that after seeing
only the first few bytes, they could know what interface to enqueue
the outbound packet on *before the entire packet had even come in*.
So for them, the idea of being able to take a known fixed-length field
that happened to line up nicely on the hardware memory cache lines,
stuffing it through an associative-lookup cache or other hardware
assist, and knowing in one or three cycles how to route it, was
VERY enticing.

Of course, an OC48 instead of a DS3 only makes it more crucial -
do the math, and figure out how many nanoseconds you have to make
a routing decision when reading off an OC48...
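
Working the math Valdis suggests, for minimum-size 40-byte packets (the line rates are the nominal DS3 and OC-48 figures, rounded):

```python
# Time budget per routing decision at line rate, for back-to-back
# minimum-size packets: packet bits divided by bits per second.

def ns_per_packet(line_rate_bps, packet_bytes=40):
    return packet_bytes * 8 / line_rate_bps * 1e9


ds3 = ns_per_packet(45e6)       # roughly 7,100 ns per packet
oc48 = ns_per_packet(2.488e9)   # roughly 129 ns per packet
```

Going from DS3 to OC-48 shrinks the per-packet budget by a factor of about 55, which is why the router designers wanted a fixed-length field they could push through a hardware lookup.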

  Furthermore, if it's a variable-length address, the
  router has to know where the end is, in order to look
  at the next field.
 
 Just put that up front.  For example, prefix the address with a length byte.
 If the byte is zero, the address is four bytes long (compatible with IPv4).
 
 It's not really hard.  You just have to write the code up front to handle
 it.  And if you don't want to allow for infinite capacity (you have to stop

It's easy to do for an end-user workstation that's already bogged down
by the bloat inherent in [insert your least favorite OS vendor here].

It's hard to do for something that's truly high-performance.
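In software, the length-prefix scheme quoted above really is simple. A minimal sketch, following the quoted proposal that a zero length byte means a 4-byte IPv4-compatible address:

```python
def parse_address(buf: bytes, offset: int = 0):
    """Parse one length-prefixed address starting at `offset`.

    Per the proposal quoted above: a zero length byte means a
    4-byte IPv4-compatible address; any other value is the
    address length in bytes.
    """
    n = buf[offset]
    length = 4 if n == 0 else n
    addr = buf[offset + 1:offset + 1 + length]
    return addr, offset + 1 + length  # where the next field begins
```

The hardware objection is visible in the return value: the position of every subsequent field depends on a value read out of the packet itself, so the parse is inherently serial and defeats the fixed-offset pipelining described earlier.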


 
 Hmm... I don't know.  If you restrict the address field to routing only, do
 you still need anti-spoofing?  A given address can lead to only one
 endpoint, unless I'm missing something here.

Well, at least around here, we also look at the *source* address on
all packets inbound to our routers to see if they make sense.  If it's
coming in from off-campus, it shouldn't have a prefix that belongs to
our AS.  If it's coming into our backbone from a building subnet, the
source better be in that subnet's range.  And so on.  RFC2267 talks
about it in more detail.
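Those checks amount to the ingress filtering RFC2267 describes. A simplified sketch, using hypothetical documentation prefixes rather than any real campus allocation:

```python
import ipaddress

# Hypothetical prefix standing in for the campus AS's address space.
CAMPUS_PREFIX = ipaddress.ip_network("198.51.100.0/24")

def accept_from_border(src: str) -> bool:
    """Packets arriving from off-campus must not claim a campus source."""
    return ipaddress.ip_address(src) not in CAMPUS_PREFIX

def accept_from_building(src: str, subnet: str) -> bool:
    """A building uplink should only carry sources from its own subnet."""
    return ipaddress.ip_address(src) in ipaddress.ip_network(subnet)
```

A real deployment would implement these as router access lists rather than host code; the sketch only shows the direction-dependent sanity checks on the source address.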

Valdis Kletnieks
Operating Systems Analyst
Virginia Tech




RE: IPv6: Past mistakes repeated?

2000-04-24 Thread Ian King

Yes, we made a guess -- a design compromise.  Folks, we're engineers, and we
come up with "good enough" answers.  Sure, we try to make sure that the
"good enough" answers are good enough for the majority of situations, for a
reasonable length of time.  But we're not prophets or philosophers or
prescient -- we're just engineers.  We made some "good enough" guesses with
IPv4 that, as Keith points out, got us to the situation of a global Internet
-- and our present dilemma is a byproduct of that solution's success.  I
would not be disappointed if our next "good enough" guess lasts us as long
as the last one.  After all, I'll want SOMEthing entertaining to do twenty
years from now.  :-)  

BTW -- I feel the same way about NAT: it's "good enough" for many
situations.  :-) Send me mail at home, it goes to one machine on my internal
172.16 LAN; check out my personal webpages, you're talking to another
machine (and a different OS) in that address space.  You don't see that, and
frankly I don't think about it very often.  It's close to a "it just works"
solution -- which is "good enough" for now.  

-- Ian 

 -Original Message-
 From: Keith Moore [mailto:[EMAIL PROTECTED]]
 Sent: Monday, April 24, 2000 5:38 PM
 To: Anthony Atkielski
 Cc: [EMAIL PROTECTED]
 Subject: Re: IPv6: Past mistakes repeated? 
 
 
[snip]
 
 but this same impossibility means that we do not know whether we
 should put today's energy into making variable length addresses
 work efficiently or into something else.  so we made a guess - a
 design compromise - that we're better off with very long
 fixed-length addresses because fast routing hardware is an
 absolute requirement, and at least today it seems much easier to
 design fast routing hardware (or software) that looks at fixed
 offsets within a packet, than to design hardware or software that
 looks at variable offsets.
 
[snip]