Re: Stupid NAT tricks and how to stop them.

2006-03-30 Thread Eliot Lear

Why would a service provider give up skimming the cream with that
(nearly free) extra cash that weirdos like us hand them for real IPv4
addresses?

Eliot

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Stupid NAT tricks and how to stop them.

2006-03-30 Thread marcelo bagnulo braun

Hi Andrew,


And people wonder why NATs proliferate... much of the world has no 
option but to live with them.  This is a direct result of policy 
discouraging IPv4 address allocation.




sorry for asking, but what policy are you referring to?

RIR policy?

Can you point out any RIR policy that prevents someone from getting one public 
IPv4 address per machine connected to the Internet?


What do you think needs to be changed in the v4 allocation policy?

Or are you talking about the business model of the ISPs? (Which doesn't 
seem to me to be related to policies, but just business...)


Thanks, marcelo



Andrew





___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Stupid NAT tricks and how to stop them.

2006-03-30 Thread Iljitsch van Beijnum

On 30-mrt-2006, at 10:29, marcelo bagnulo braun wrote:

And people wonder why NATs proliferate... much of the world has no  
option but to live with them.  This is a direct result of policy  
discouraging IPv4 address allocation.



sorry for asking, but what policy are you referring to?



RIR policy?


Can you point out any RIR policy that prevents someone from getting one  
public IPv4 address per machine connected to the Internet?


On a somewhat (un)related note: it's not easy for ISPs to give out  
two or three IP addresses to customers because there is no good  
mechanism to do so. One address works very well with PPP or DHCP, but  
a specific number other than one doesn't, so the next step is  
something like a /29.


On an even more (un)related note: it's not possible to give IPv6  
addresses to customers over PPP, and it's very inconvenient to have  
a /64 - /48 routed towards a customer router that dynamically  
connects to an ISP network. (E.g., my cell phone is a router and it  
dials up; it gets an address through stateless autoconfig, but then my  
laptop, PDA, etc. that use the cell phone as their router aren't  
automatically reachable.)


Some more work on provisioning mechanisms wouldn't be a bad thing.

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: Stupid NAT tricks and how to stop them.)

2006-03-30 Thread Iljitsch van Beijnum

On 30-mrt-2006, at 6:26, Anthony G. Atkielski wrote:


We currently have 1/8th of the IPv6 address space set aside for
global unicast purposes ...



Do you know how many addresses that is? One eighth of the 128-bit
address space is a 125-bit address space, or



42,535,295,865,117,307,932,921,825,928,971,026,432



addresses. That's enough to assign 735 IP addresses to every cubic
centimetre in the currently observable universe (yes, I calculated
it). Am I the only person who sees the absurdity of wasting addresses
this way?


When I first learned about IPv6 I felt strongly that 128 bits was too  
much, especially since all those bits have to be carried in every IP  
packet twice, once as a source address and once as a destination  
address. However, since that time I've learned to appreciate  
stateless autoconfiguration and the potential usefulness of having  
the lower 64 bits of the IPv6 address as a place to carry some  
limited security information (see SEND and shim6 HBA).



... with the idea that ISPs give their customers /48 blocks.



Thank you for illustrating the classic engineer's mistake.  Stop
thinking in terms of _bits_, and think in terms of the _actual number
of addresses_ available.  Or better still, start thinking in terms of
the _number of addresses you throw away_ each time you set aside
entire bit spans in the address for any predetermined purpose.


The trouble is that you need to build in space for growth.  
Unfortunately, at the time IPv6 was created variable length addresses  
weren't considered viable. (In theory CLNP has variable length  
addresses, in practice that doesn't really work out.) And for some  
strange reason, apparently only powers of two were considered as  
address lengths. So the choice was either 64 bits, which is a lot,  
but it doesn't allow for any innovation over the 32 bits we have in  
IPv4, or 128 which does cost 16 extra bytes in every packet, but  
gives us stateless autoconfig and SEND. Now you can argue that 64 or  
48 bits and continuing current IPv4 practices would have been better,  
but given the choice for 128 bits, the current way of using the  
address space makes sense for the most part.


The only thing I'm not too happy about is the current one address /  
one subnet / /48 trichotomy. Ignoring the single address for a  
moment, the choice between one subnet and 65536 isn't a great one, as  
many things require a number of subnets that's greater than one, but  
not by much. For instance, the cell phone as a router example I  
talked about earlier. A /64 and a single address, or two /64s which  
would be a /63 would be more useful there. The idea that we'd use up  
too much address space by giving out /48s doesn't seem like a real  
problem to me, but on the other hand most people don't need a /48 so  
some choice that's smaller than a /48 and larger than a /64 makes sense. But a /56 as suggested  
by some people is suboptimal: people with growing networks will at  
some point need more than 256 subnets but at that point they already  
have very many subnets so renumbering then is painful. Making the  
choice between /60 and /48 makes much more sense: just give everyone  
a /60 rather than a /64 just in case they need a handful of subnets,  
which is adequate for 99% of all internet users. People who need more  
than a handful of subnets can get a /48 and won't have to renumber at  
an inconvenient point in the growth curve.


Yes, I know this will use up sixteen times the number of addresses  
for people who really only need a single subnet, but it saves a  
factor 4096 for the people who need 2 - 16 subnets and would have  
gotten a /48, or a factor 16 for the people who would have gotten a  
/56. The extra wasted address space from giving people who really only  
need one subnet a /60 is made up for by the people who would have  
gotten a /48 but can now get by with a /60, as long as the ratio of  
one-subnet users to 2-to-16-subnet users is 4000 : 1 or less. (Making  
the choice /64 vs /56 saves even more as long as the ratio is lower  
than 94 : 6, but doesn't have the desirable near-one-size-fits-all  
quality.)
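
For what it's worth, the break-even arithmetic is easy to check. A rough
sketch in Python, counting cost in /64s (the 4000 : 1 split below is a
made-up example, not measured data):

    # Cost of one delegation, measured in /64s.
    def cost_in_64s(prefix_len):
        return 2 ** (64 - prefix_len)

    one_subnet_users = 4000   # hypothetical users who only need one subnet
    few_subnet_users = 1      # hypothetical users needing 2 - 16 subnets

    # Old policy: /64 for the first group, /48 for the second.
    old = one_subnet_users * cost_in_64s(64) + few_subnet_users * cost_in_64s(48)
    # /60-for-everyone policy (heavy users would still get /48s).
    new = (one_subnet_users + few_subnet_users) * cost_in_64s(60)

    print(old, new)   # 69536 vs 64016: the /60 policy uses less space here

    # Exact break-even ratio of one-subnet to 2-to-16-subnet users:
    print((cost_in_64s(48) - cost_in_64s(60)) / (cost_in_64s(60) - cost_in_64s(64)))
    # -> 4368.0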



If you want exponential capacity from an address space, you have to
assign the addresses consecutively and serially out of that address
space.  You cannot encode information in the address.  You cannot
divide the address in a linear way based on the bits it contains and
still claim to have the benefits of the exponential number of
addresses that it supposedly provides.


The thing that is good about IPv6 is that once you get yourself a /64,  
you can subdivide it yourself and still have four billion times  
the IPv4 address space. (But you'd be giving up the autoconfiguration  
advantages.)


Also, when the time comes to create the next version of IP, we won't  
have to worry about all of this to a noticeable degree because IPvA  
or IPvF or whatever can have a different addressing structure that  
can still be expressed as an IPv6-like 128 bit number for backward  
compatibility with 

Re: Stupid NAT tricks and how to stop them.

2006-03-30 Thread Kurt Erik Lindqvist


On 28 mar 2006, at 18.00, Hallam-Baker, Phillip wrote:




From: Kurt Erik Lindqvist [mailto:[EMAIL PROTECTED]



NAT is a dead end.  If the Internet does not develop a way to obsolete
NAT, the Internet will die.  It will gradually be replaced by networks
that are more-or-less IP based but which only run a small number of
applications, poorly, and expensively.



...or you will see an overlay network built on top of
NAT+IPv4 that abstracts the shortcomings away - aka what the
peer to peer networks are doing. End-to-end addressing...


Precisely. Just what is this fetish about keeping the IP address the same as
the packet travels?


I will have to get better at making irony clearer... I most certainly  
hope we are not heading down the route I suggest above. I am _afraid_  
we are, though.


- kurtis -

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: Stupid NAT tricks and how to stop them.)

2006-03-30 Thread Tim Chown
On Thu, Mar 30, 2006 at 01:36:18PM +0200, Iljitsch van Beijnum wrote:
 
 The thing that is good about IPv6 is that once you get yourself a /64, 
 you can subdivide it yourself and still have four billion times  
 the IPv4 address space. (But you'd be giving up the autoconfiguration  
 advantages.)

I noticed that by default MS Vista doesn't use autoconf as per 2462; 
rather it uses a 3041-like random address.  See:
http://www.microsoft.com/technet/itsolutions/network/evaluate/new_network.mspx

Random Interface IDs for IPv6 Addresses

 To prevent address scans of IPv6 addresses based on the known company IDs of 
network adapter manufacturers, Windows Server Longhorn and Windows Vista by 
default generate random interface IDs for non-temporary autoconfigured IPv6 
addresses, including public and link-local addresses.

That reads to me like no 2462 by default.  Maybe I'm misinterpreting.

One could envisage an option where that randomness is applied to 48 host
bits rather than 64, if you really, really wanted to do that.
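
For illustration, that kind of random interface ID can be sketched in a
few lines of Python (a simplified sketch of the general 3041-style idea,
not Vista's actual algorithm; real implementations also keep history
state and avoid reserved identifier ranges):

    import os

    def random_interface_id(n_bits=64):
        # Simplified sketch: take n_bits of randomness and clear the
        # universal/local bit (0x02 in the first octet of a modified
        # EUI-64) so the ID reads as locally generated, not MAC-derived.
        iid = bytearray(os.urandom(n_bits // 8))
        iid[0] &= 0xFD
        return iid.hex()

    print(random_interface_id())    # 64 random host bits
    print(random_interface_id(48))  # the 48-bit variant mused about above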

-- 
Tim/::1



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: 128 bits should be enough for everyone, was:

2006-03-30 Thread Anthony G. Atkielski
Steve Silverman writes:

 The problem with allocating numbers sequentially is the impact on
 routers and routing protocols.

The problem with not doing so is that a 128-bit address doesn't
provide anything even remotely close to 2^128 addresses.

You have to choose what you want.

 I have heard that the Japanese issue house numbers chronologically.
 When you find the right block, you have to hunt
 for the right number.  What you are suggesting is similar. You would
 have as many routing table entries as hosts in the world.  The router
 would not be affordable.  The traffic for routing entries would swamp
 the net. The processing of these
 routing advertisements would be impossible.  It doesn't scale!

Variable address length scales, and it never runs out of addresses,
but nobody wants to do that, even though telephones have been doing it
for ages.

 The function of an address is to enable a router to find it. That is
 why we try to use hierarchical addressing even at the cost of numbering
 space.

In that case, assign addresses to points in space, instead of devices.
An office occupying a given plot of land will have an IP address space
that is solely a function of the space it occupies.  Routing would be
the essence of simplicity and blazingly fast.

 IMO one problem of the Internet is that it isn't hierarchical enough.
 Consider the phone system:  country codes, area codes ...  This makes
 the job of building a switch much easier. I think we should have
 divided the world into 250 countries. Each country into 250
 provinces.  Yes, it would waste address space but it would make
 routing much easier and more deterministic.

With a variable address length that can extend infinitely at either
end, the address space would never be exhausted.  That's how
telephones work.

 Yes this would mean a mobile node needs to get new addresses as it
 moves. So what. We already have DHCP.  Cell phones do a handoff
 already.

I agree.  We also have DNS.



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Stupid NAT tricks and how to stop them.

2006-03-30 Thread Anthony G. Atkielski
Keith Moore writes:

 I find myself wondering, don't they get support calls from customers
 having to deal with the problems caused by the NATs?

Sure, and the reply is "I'm sorry, but we don't support multiple
computers on residential accounts."



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Stupid NAT tricks and how to stop them.

2006-03-30 Thread Peter Dambier

Austin Schutz wrote:

On Wed, Mar 29, 2006 at 01:00:44AM +0200, Iljitsch van Beijnum wrote:


1996  1997  1998  1999  2000  2001  2002  2003  2004  2005
 2.7   1.2   1.6   1.2   2.1   2.4   1.9   2.4   3.4   4.5

(The numbers represent the number of addresses used up in that year  
as a percentage of the 3.7 billion total usable IPv4 addresses.)


Years in which the growth was smaller than the year before never  
happened twice or more in a row.


This basically means that unless things take a radical turn, the  
long-term trend is accelerating growth, so the remaining 40% will be  
gone in less than 9 years. Probably something like 7, as Geoff Huston  
predicts.





This is much less time than I have seen in previous reports. If
this is accurate and consistent there is a greater problem than I had
previously thought.
If that is indeed the case then the enhanced nat road for ipv6
begins to make much more sense, even in the nearer term.

Austin



I am afraid the problem is even bigger.

I have seen again and again that cable providers are giving out
IP addresses in the 10.0.0.0/8 range to save IP address space.

Not to mention wireless hotspots. The hotspots I have been playing
with own only a single IP address.

You notice something is awfully wrong when your VoIP phone is not
working but your neighbour keeps telling you his Skype does.
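
As an aside, the quoted projection is easy to reproduce with a naive
extrapolation (a rough sketch; the acceleration factor is invented for
illustration, not Geoff Huston's model):

    # Yearly consumption quoted above, as % of ~3.7 billion usable addresses.
    yearly = {1996: 2.7, 1997: 1.2, 1998: 1.6, 1999: 1.2, 2000: 2.1,
              2001: 2.4, 2002: 1.9, 2003: 2.4, 2004: 3.4, 2005: 4.5}

    remaining = 40.0       # roughly 40% left at the end of 2005
    rate = yearly[2005]    # start from the 2005 burn rate ...
    growth = 1.15          # ... and assume it keeps accelerating (made-up)

    year = 2005
    while remaining > 0:
        year += 1
        remaining -= rate
        rate *= growth

    print(year)   # about 2012 with these assumptions, i.e. roughly 7 years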

--
Peter and Karin Dambier
The Public-Root Consortium
Graeffstrasse 14
D-64646 Heppenheim
+49(6252)671-788 (Telekom)
+49(179)108-3978 (O2 Genion)
+49(6252)750-308 (VoIP: sipgate.de)
mail: [EMAIL PROTECTED]
mail: [EMAIL PROTECTED]
http://iason.site.voila.fr/
https://sourceforge.net/projects/iason/


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Stupid NAT tricks and how to stop them.

2006-03-30 Thread John C Klensin



--On Thursday, March 30, 2006 08:47 -0800 Peter Sherbin 
[EMAIL PROTECTED] wrote:



If someone calls up for help with a
configuration problem, that may be six months of
profits from that customer eaten up in the cost of
answering the call.

That is because the current Internet pricing has been
screwed-up from the start. LD settlements between
telcos are fully applicable to ISPs but have never
been instituted. The Internet has been subsidised for
years by local access, but now as wireline declines
everybody starts feeling the pain. Usage-based billing
and inter-ISP settlements have lately started showing
up, and they fit the Internet well. Otherwise transit
providers as well as heavy users reap all the benefits.


Peter,

I was describing the facts of what is going on, rather than the 
causes.


That said, based on some experience on both the telco and ISP 
sides of things, I believe your analysis is incorrect in a 
number of important ways, starting with the difficulty of 
applying to today's Internet a settlement model that assumes 
that value accrues to the caller.


   john


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Stupid NAT tricks and how to stop them.

2006-03-30 Thread Keith Moore
  I find myself wondering, don't they get support calls from
  customers having to deal with the problems caused by the NATs?
 
 Because they don't answer them.  In the process of doing the 
 work that led to RFC 4084, I reviewed the terms and conditions 
 of service of a large number of ISPs in the US (and a few 
 others) who provide low-cost Internet connectivity.  Some 
 prohibit connection of more than one machine to the incoming 
 line/router/modem.  Others provide a NAT-capable router but 
 prohibit the customer from making any changes to its 
 configuration and from running any applications that don't work 
 in that environment.  And still others indicate that customers 
 can supply their own NATs, but must obtain any support 
 elsewhere.  All of these prohibitions are enforced the same 
 way -- if the user calls with a problem, he or she either
 
 (i) is told that there is no support for violations of the rules 
 and offered the opportunity to be disconnected (often with a 
 large early termination fee) or
 
 (ii) is instructed to disconnect all equipment between the 
 machine in question and the router, and see if the problem still 
 occurs.  If it doesn't, then the ISP has no problem and the 
 customer's problem is of no interest.

Well, the reason I asked is that when I got my DSL line, my ISP
supplied me with a modem that does NAT - but only for a single host. 
As best as I can tell this is because the box needs to run PPPoE
on the carrier side and DHCP on the host side, and the only way that
the DHCP server can give the host an address under those conditions is
to do NAT.  So in this case (which I have no reason to believe is
atypical) the ISP is supplying the NAT - and they do so even for
customers who pay them extra to get a static IP address!

And yes it does break things even when there are no other local hosts
involved and no additional boxes between the modem and the customer's
host.  So I have a hard time believing that ISPs don't get support
calls about failures due to NATs, at least when they install the NATs.

Now of course this ISP does have a T&C that prohibits running a server,
but "server" is a pretty vague term, and you don't have to be running
any kind of server to suffer from NAT brain-damage.

Keith

p.s. fwiw the workaround in my case was to tell the modem to work in
passthrough mode and configure my local router to run PPPoE.
Under those conditions, I'm happy to report, 6to4 works just fine.
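
For the curious, the 6to4 part is mechanical: the /48 prefix comes
straight from the public IPv4 address (a quick sketch per RFC 3056,
using a documentation address as the example):

    import ipaddress

    def sixto4_prefix(public_v4):
        # 6to4 prefix = 2002:VVVV:VVVV::/48, where VVVVVVVV is the
        # 32-bit public IPv4 address (RFC 3056).
        v4 = int(ipaddress.IPv4Address(public_v4))
        return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

    print(sixto4_prefix("192.0.2.1"))   # 2002:c000:201::/48 (example address)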

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Stupid NAT tricks and how to stop them.

2006-03-30 Thread Austin Schutz
On Thu, Mar 30, 2006 at 11:26:40PM +0200, Iljitsch van Beijnum wrote:

 If that is indeed the case then the enhanced nat road for ipv6
 begins to make much more sense, even in the nearer term.
 
 I remember someone saying something about enhanced NAT here a few  
 days ago but I can't find it... What is it and what does it have to  
 do with IPv6?

It was a term Keith Moore used to describe the addition of
ipv6 capability to NAT devices. Not intended as a real term, merely
a marketing name to explain to the end user the benefit of having
ipv6 capability.

If address space does indeed burn that quickly, ISPs will start to
realize they can't sell additional IP addresses as a way of making a quick
profit. Those with dwindling address pools will begin to demand proper
ipv6 support from router vendors to offer it at a discounted price (compared
to ipv4) to their customers who are savvy enough to want to run servers but
too cheap to buy ipv4 space at a premium.
From there it should only be a matter of time. If key applications work
with ipv6 that will probably be adequate to get the ball rolling.

IIRC there was a similar transition back when virtual web hosting
meant blowing an ip address for every extra domain. After an adequate number
of browsers were upgraded hosting providers made available ip-less virtual
hosts at a heavy discount from ip-burning ones. After a surprisingly short
amount of time the vast majority of browsers were compliant. The final nail was
registries refusing virtual hosting as an excuse to justify allocations.
That's not news to most here, but I definitely see the similarity in the
situation.

Austin

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: Stupid NAT tricks and how to stop them.)

2006-03-30 Thread Stephen Sprunk

Thus spake Anthony G. Atkielski [EMAIL PROTECTED]

Iljitsch van Beijnum writes:

However, since that time I've learned to appreciate
stateless autoconfiguration and the potential usefulness of having
the lower 64 bits of the IPv6 address as a place to carry some
limited security information (see SEND and shim6 HBA).


Once it's carrying information, it's no longer just an address, so
counting it as pure address space is dangerous.


An IPv4/6 address is both a routing locator and an interface identifier. 
Unfortunately, the v6 architects decided not to separate these into separate 
address spaces, so an address _must_ contain routing information until that 
problem is fixed.  It doesn't seem to be likely we'll do so without having 
to replace IPv6 and/or BGP4+, and there's no motion on either front, so 
we're stuck with the locator/identifier problem for quite a while.



Building in space means not allocating it--not even _planning_ to
allocate it.  Nobody has any idea what the Internet might be like a
hundred years from now, so why are so many people hellbent on
planning for something they can't even imagine?


That's why 85% of the address space is reserved.  The /3 we are using (and 
even then only a tiny fraction thereof) will last a long, long time even 
with the most pessimistic projections.  If it turns out we're still wrong 
about that, we can come up with a different policy for the next /3 we use. 
Or we could change the policy for the existing /3(s) to avoid needing to 
consume new ones.


If IPv6 is supposed to last 100 years, that means we have ~12.5 years to 
burn through each /3, most likely using progressively stricter policies. 
It's been a decade since we started and we're nowhere near using up the 
first /3 yet, so it appears we're in no danger at this point.  Will we be in 
50 years?  None of us know, which is why we've reserved space for the folks 
running the Internet then to make use of -- provided IPv6 hasn't been 
replaced by then and making this whole debate moot.



Unfortunately, at the time IPv6 was created variable length addresses
weren't considered viable.


Variable-length addresses are the only permanent solution, unless IP
addresses are assigned serially (meaning that all routing information
has to be removed).

Variable-length addresses work very well for the telephone system, and
they'd work just as well for the Internet, if only someone had taken
the time to work it out.


Variable-length addresses only work if there is no maximum length.  E.164 
has a maximum of 15 digits, meaning there are at most 10^15 numbers.  Here 
in +1 we only use eleven digit numbers, meaning we're burning them 10^4 
times as fast as we could.  That's not a great endorsement.
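
(That arithmetic is just digit counting; a trivial sketch:)

    e164_max  = 10 ** 15   # at most 15 digits per E.164 number
    nanp_size = 10 ** 11   # +1 numbers are effectively 11 digits (1 + 10)
    print(e164_max // nanp_size)   # 10000: each 11-digit number "covers"
                                   # 10^4 of the maximum-length space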


Also, telephone numbers have the same locator/identifier problem that IPv4/6 
addresses do.  In fact, IPv6's original addressing model looked strikingly 
similar to the country codes and area/city codes (aka TLAs and NLAs) that 
you're apparently fond of.


Even OSI's variable length addresses had a maximum length, and most 
deployments used the maximum length; they degenerated into fixed-length 
addresses almost immediately.



The only thing I'm not too happy about is the current one address /
one subnet / /48 trichotomy. Ignoring the single address for a
moment, the choice between one subnet and 65536 isn't a great one, as
many things require a number of subnets that's greater than one, but
not by much.


It's a good example of waste that results from short-sightedness.  It
happened in IPv4, too.


The difference is that in IPv6, it's merely a convention and implementors 
are explicitly told that they must not assume the above boundaries.  In 
IPv4, it was hardcoded into the protocol and every implementation had to be 
replaced to move to VLSM and CIDR.


Conventions are for human benefit, but they can be dropped when it becomes 
necessary.  Folks who use RFC 1918 space almost always assign /24s for each 
subnet regardless of the number of hosts; folks using public addresses used 
to do the same, but instead now determine the minimum subnet that meets 
their needs.  Hopefully the conventions in IPv6 won't be under attack for a 
long time, but if they need to go one day we can drop them easily enough.



The thing that is good about IPv6 is that once you get yourself a /64,
you can subdivide it yourself and still have four billion times
the IPv4 address space.


It sounds like NAT.


Not at all.  You'd still have one address per host, you'd just move the 
subnet boundary over a few bits as needed.  With the apparent move to random 
IIDs, there's no reason to stick to /64s for subnets -- we could go to /96s 
for subnets without any noticeable problems (including NAT).


The RIRs could also change policies so that /32 and /48 are not the default 
allocation and assignment sizes, respectively.  That is also another 
convention that we could easily dispense with, but it saves us a lot of 
paperwork to abide by it as 

Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: Stupid NAT tricks and how to stop them.)

2006-03-30 Thread Stephen Sprunk

Thus spake Anthony G. Atkielski [EMAIL PROTECTED]

Iljitsch van Beijnum writes:

So how big would you like addresses to be, then?


It's not how big they are, it's how they are allocated.  And they are
allocated very poorly, even recklessly, which is why they run out so
quickly.  It's true that engineers always underestimate required
capacity, but 128-bit addresses would be enough for anything ... IF
they were fully allocated.  But I know they won't be, and so the
address space will be exhausted soon enough.


I once read that engineers are generally incapable of designing anything 
that will last (without significant redesign) beyond their lifespan. 
Consider the original NANP and how it ran out of area codes and exchanges 
around 40 years after its design -- roughly the same timeframe as the 
expected death of its designers.  Will IPv6 last even that long?  IMHO we'll 
find a reason to replace it long before we run out of addresses, even at the 
current wasteful allocation rates.



We currently have 1/8th of the IPv6 address space set aside for
global unicast purposes ...


Do you know how many addresses that is? One eighth of the 128-bit
address space is a 125-bit address space, or

42,535,295,865,117,307,932,921,825,928,971,026,432

addresses. That's enough to assign 735 IP addresses to every cubic
centimetre in the currently observable universe (yes, I calculated
it). Am I the only person who sees the absurdity of wasting addresses
this way?

It doesn't matter how many bits you put in an address, if you assign
them this carelessly.


That's one way of looking at it.  The other is that even with just the 
currently allocated space, we can have 35,184,372,088,832 sites of 65,536 
subnets of 18,446,744,073,709,551,616 hosts.  Is this wasteful?  Sure.  Is 
it even conceivable to someone alive today how we could possibly run out of 
addresses?  No.
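
(Those figures are just the three bit fields of the current /3
multiplied out; a quick check of the arithmetic:)

    sites   = 2 ** 45   # /3 down to /48 leaves 45 bits: 35,184,372,088,832
    subnets = 2 ** 16   # /48 down to /64 leaves 16 bits: 65,536
    hosts   = 2 ** 64   # interface IDs per subnet: 18,446,744,073,709,551,616

    assert sites * subnets * hosts == 2 ** 125   # exactly one eighth of 2^128
    print(sites, subnets, hosts)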


Will someone 25 years from now reach the same conclusion?  Perhaps, perhaps 
not.  That's why we're leaving the majority of the address space in reserve for 
them to use in light of future requirements.



... with the idea that ISPs give their customers /48 blocks.


Thank you for illustrating the classic engineer's mistake.  Stop
thinking in terms of _bits_, and think in terms of the _actual number
of addresses_ available.  Or better still, start thinking in terms of
the _number of addresses you throw away_ each time you set aside
entire bit spans in the address for any predetermined purpose.

Remember, trying to encode information in the address (which is what
you are doing when you reserve bit spans) results in exponential (read
incomprehensibly huge) reductions in the number of available
addresses.  It's trivially easy to exhaust the entire address space
this way.

If you want exponential capacity from an address space, you have to
assign the addresses consecutively and serially out of that address
space.  You cannot encode information in the address.  You cannot
divide the address in a linear way based on the bits it contains and
still claim to have the benefits of the exponential number of
addresses that it supposedly provides.

Why is this so difficult for people to understand?


And sequential assignments become pointless even with 32-bit addresses 
because our routing infrastructure can't possibly handle the demands of such 
an allocation policy.  The IETF has made the decision to leave the current 
routing infrastructure in place, and that necessitates a bitwise allocation 
model.


Railing against this decision is pointless unless you have a new routing 
paradigm ready to deploy that can handle the demands of a non-bitwise 
allocation model.


Why is this so difficult for you to understand?


That gives us 45 bits worth of address space to use up.


You're doing it again.  It's not 45 bits; it's a factor of
35,184,372,088,832.

But rest assured, they'll be gone in the blink of an eye if the
address space continues to be mismanaged in this way.


I take it you mean "the blink of an eye" to mean a span of decades?  That is 
not the common understanding of the term, yet that's how long we've been 
using the current system and it shows absolutely no signs of strain.



It's generally accepted that an HD ratio of 80% should be reachable
without trouble, which means we get to waste 20% of those bits in
aggregation hierarchies.


No. It's not 20% of the bits, it's 99.9756% of your address space that
you are wasting.

Do engineers really study math?


To achieve bitwise aggregation, you necessarily cannot achieve better than 
50% use on each delegation boundary.  There are currently three boundaries 
(RIR, LIR, site), so better than 12.5% address usage is a lofty goal. 
Again, if you want something better than this, you need to come up with a 
better routing model than what we have today.


(And then throw in the /64 per subnet and you're effectively wasting 100% of 
the address space anyways, so none of this matters until that's gone)
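
To put rough numbers on that (a sketch; the 80% figure is the HD-ratio
target quoted above, applied to the 45 bits of /48 site space under the
current /3):

    # HD ratio: log(sites actually assigned) / log(sites available).
    total_bits = 45                    # /3 down to /48
    hd_ratio = 0.80                    # the target quoted above
    usable_sites = 2 ** (total_bits * hd_ratio)
    print(round(usable_sites))         # ~2**36, about 68.7 billion /48 sites

    # The cruder per-boundary view: three delegation boundaries (RIR, LIR,
    # site), each at best ~50% efficient, leaves about 12.5% overall.
    print(0.5 ** 3)                    # 0.125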



This gives us 36 bits = 68 

Re: Stupid NAT tricks and how to stop them.

2006-03-30 Thread Stephen Sprunk

Thus spake Keith Moore moore@cs.utk.edu

Now of course this ISP does have a T&C that prohibits running a server,
but "server" is a pretty vague term, and you don't have to be running
any kind of server to suffer from NAT brain-damage.


My ISP has ingeniously defined a "server" as any application that does not 
work through NAT without port forwarding.  Bingo, problem solved (from their 
perspective).


Of course, they don't actually enforce this unless a user's upstream 
bandwidth usage consistently exceeds total POP upstream bandwidth divided by 
the number of users at the POP (in my case, about 300kB/s).  Go above that 
and you get an email asking you to turn down the speed on your P2P client 
;-)



p.s. fwiw the workaround in my case was to tell the modem to work in
passthrough mode and configure my local router to run PPPoE.
Under those conditions, I'm happy to report, 6to4 works just fine.


Alas, I've been unable to find a consumer-grade router that will run native 
IPv6, 6to4, or even pass through IPinIP (excluding open-source hacks which 
are not supported by the vendor -- that's not a solution for real 
consumers).  If anyone knows of one, please let me know off-list.


S

Stephen Sprunk         "Stupid people surround themselves with smart
CCIE #3723              people.  Smart people surround themselves with
K5SSS                   smart people who disagree with them."  --Aaron Sorkin 



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


RE: PI space (was: Stupid NAT tricks and how to stop them)

2006-03-30 Thread Michel Py
 Noel Chiappa wrote:
 Needless to say, the real-time taken for this process to complete
 - i.e. for routes to a particular destination to stabilize, after
 a topology change which affects some subset of them - is dominated
 by the speed-of-light transmission delays across the Internet
 fabric. You can make the speed of your processors infinite and it
 won't make much of a difference.

This is total bull. The past stability issues in BGP have little to do
with latency and everything to do with processing power and bandwidth
available to propagate updates. In other words, it does not make any
difference in the real world if you're using a 150ms oceanic cable or an
800ms geosynchronous satlink as long as the pipe is big enough and there
are enough horses under the hood.

Only if we were shooting for sub-second global BGP convergence would the
speed of light matter.


 Stephen Sprunk wrote:
 The IPv4 core is running around 180k routes today, and even
 the chicken littles aren't complaining the sky is falling.

I was about to make the same point. Ever heard whining about a 7500 with
RSP2s not being able to handle it? Yes. Ever heard about a decently
configured GSR not being able to handle it? No. Heard whining about
receiving a full table over a T1? Yes. Heard whining about receiving a
full table over an OC-48? No.

Anybody still filtering at /20 like everybody did a few years back?


 and the vendors can easily raise those limits if customers demand
 it (though they'd much prefer charging $1000 for $1 worth of RAM
 that's too old to work in a modern PC).

You're slightly exaggerating here. I remember paying $1,900 for 32MB of
ram worth $50 in the street :-D

Michel.


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


RE: PI space (was: Stupid NAT tricks and how to stop them)

2006-03-30 Thread Noel Chiappa
 From: Michel Py [EMAIL PROTECTED]

 Needless to say, the real-time taken for this process to complete
 - i.e. for routes to a particular destination to stabilize, after a
 topology change which affects some subset of them - is dominated by
 the speed-of-light transmission delays across the Internet fabric. You
can make the speed of your processors infinite and it won't make
 much of a difference.

 The past stability issues in BGP have little to do with latency and
 everything to do with processing power and bandwidth available to
 propagate updates.

The past stability issues had a number of causes, including protocol
implementation issues, IIRC.

In any event, I was speaking of the present/future, not the past. Yes, *in
the past*, processing power and bandwidth limits were an *additional* issue.
However, that was in the past - *now*, the principal term in stabilization
time is propagation delay.

 In other words, it does not make any difference in the real world if
 you're using a 150ms oceanic cable or a 800ms geosynchronous satlink as
 long as the pipe is big enough and there are enough horses under the
 hood.

If you think there aren't still stability issues, why don't you try getting
rid of all the BGP dampening stuff, then? Have any major ISPs out there done
that?

Noel

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: Stupid NAT tricks and how to stop them.)

2006-03-30 Thread Steven M. Bellovin
On Thu, 30 Mar 2006 20:43:14 -0600, Stephen Sprunk
[EMAIL PROTECTED] wrote:

 
 That's why 85% of the address space is reserved.  The /3 we are using (and 
 even then only a tiny fraction thereof) will last a long, long time even 
 with the most pessimistic projections.  If it turns out we're still wrong 
 about that, we can come up with a different policy for the next /3 we use. 
 Or we could change the policy for the existing /3(s) to avoid needing to 
 consume new ones.
 

I really shouldn't waste my time on this thread; I really do know
better.

You're absolutely right about the /3 business -- this was a very
deliberate design decision.  So, by the way, was the decision to use
128-bit, fixed-length addresses -- we really did think about this
stuff, way back when.

When the IPng directorate was designing/selecting what's now IPv6,
there was a variable-length address candidate on the table: CLNP.  It
was strongly favored by some because of the flexibility; others pointed
out how slow that would be, especially in hardware.

There was another proposal, one that was almost adopted, for something
very much like today's IPv6 but with 64/128/192/256-bit addresses,
controlled by the high-order two bits.  That looked fast enough in
hardware, albeit with the destination address coming first in the
packet.  OTOH, that would have slowed down source address checking
(think BCP 38), so maybe it wasn't a great idea.

There was enough opposition to that scheme that a compromise was
reached -- those who favored the 64/128/192/256 scheme would accept
fixed-length addresses if the length was changed to 128 bits from 64,
partially for future-proofing and partially for flexibility in usage.
That decision was derided because it seemed to be too much address
space to some, space we'd never use.

I'm carefully not saying which option I supported.  I now think, though,
that 128 bits has worked well.

--Steven M. Bellovin, http://www.cs.columbia.edu/~smb

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


RE: Stupid NAT tricks and how to stop them.

2006-03-30 Thread Michel Py
 Noel Chiappa wrote:
 If you think there aren't still stability issues, why don't
 you try getting rid of all the BGP dampening stuff, then?
 Have any major ISP's out there done that?

Dampening is part of the protocol and has nothing to do with the speed
of light. Removing it is akin to removing packet re-ordering in TCP;
nobody with half a brain would consider it. As with packets arriving out
of sequence, stability issues are part of life for someone who actually
operates a network. Code has bugs, hardware fails, power goes berserk,
UPS batteries leak, rodents chew on cables, backhoes cut fiber, fat
fingers screw up configs, and rookies flap routes because they know
everything about astrophysics and nothing about running a production
network. That's what dampening is for.


 Stephen Sprunk wrote:
 Of course, they don't actually enforce this unless a user's
 upstream bandwidth usage consistently exceeds total POP
 upstream bandwidth divided by the number of users at the POP
 (in my case, about 300kB/s). Go above that and you get an email
 asking you to turn down the speed on your P2P client ;-)

Some bigger ISPs (with large POPs) prefer to limit the upstream so they
don't have to manage quotas/bandwidth. I have 512kb upstream; it's not
big but I can bittorrent all of it all day long, they don't care and
probably don't know.


 Alas, I've been unable to find a consumer-grade router that
 will run native IPv6, 6to4, or even pass through IPinIP

There's no market for it. Consumers don't know what it is and geeks
already have a hack or an el-cheapo-ebay Cisco. Heck, my home router is
a 7204; why would I go for a Linksys? :-D


Michel.


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: 128 bits should be enough for everyone, was:

2006-03-30 Thread Theodore Ts'o
On Fri, Mar 31, 2006 at 05:36:30AM +0200, Anthony G. Atkielski wrote:
 More bogus math.  Every time someone tries to compute capacity, he
 looks at the address space in terms of powers of two.  Every time
 someone tries to allocate address space, he looks as the address space
 in terms of a string of bits.  

Anthony,

You've been making the same point over and over (and over)
again.  It's probably the case that people who will be convinced by
your arguments will have accepted the force of your arguments by now.
People who don't accept your arguments are not likely to be swayed
by a "last post wins" style of argumentation.

May I gently suggest that you stop and think before deciding
whether you need to respond to each message on this thread, and
whether you have something new and cogent to add, as opposed to
something which you've said already, in some cases multiple times?

Thanks, regards,

- Ted

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


RE: Stupid NAT tricks and how to stop them.

2006-03-30 Thread Christian Huitema
 Dampening is part of the protocol and has nothing to do with the speed
 of light. 

Well, not really. Assume a simplistic model of the Internet with M
core routers (in the default-free zone) and N leaf ASes, i.e. networks
that have their own non-aggregated prefix. Now, assume that each of the
leaf ASes has a routing event with a basic frequency, F. Without
dampening, each core router would see each of these events with that
same frequency, F. Each router would thus see O(N*F) events per second.
Since events imply some overhead in processing, message passing, etc.,
one can assume that at any given point in time there is a limit to what
a router can swallow. If either N or F is too large, the router is
cooked. Hence dampening at a rate D, so that N*F/D remains lower than
the acceptable limit.

Bottom line, you can only increase the number of routes if you are ready
to dampen more aggressively. There is an obvious tragedy of the
commons here: if more networks want to multi-home and be declared in
the core, then more aggressive dampening will be required, and each of
the multi-homed networks will suffer from less precise routing, longer
time to correct outages, etc.

There are different elements at play that also limit the number of core
routers. Basically, an event in a core router affects all the paths that
go through it, which depending on the structure of the graph is
somewhere between O(M*log(M)) or O(M.log(M)). In short, the routing load
grows much faster than linearly with the number of core routers.
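
A toy version of that model, just to make the arithmetic concrete (a
sketch; every number below is invented purely for illustration):

    # Each of N leaf ASes generates routing events at frequency F; without
    # dampening every default-free router sees all of them, i.e. N*F per
    # second.  Dampening by a factor D divides that load, at the cost of
    # slower reaction to genuine outages.
    N = 200000      # multi-homed leaf prefixes (invented)
    F = 0.001       # events per prefix per second (invented)
    budget = 50.0   # events/second one router can comfortably handle (invented)

    load = N * F                   # 200 events/s undampened
    D = max(1.0, load / budget)    # dampening factor needed to fit the budget
    print(load, D)                 # 200.0 -> dampen by a factor of 4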

-- Christian Huitema


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: Stupid NAT tricks and how to stop them.)

2006-03-30 Thread Anthony G. Atkielski
Stephen Sprunk writes:

 An IPv4/6 address is both a routing locator and an interface identifier.

And so engineers should stop saying that n bits of addressing provides
2^n addresses, because that is never true if any information is
encoded into the address.  In fact, as soon as any information is
placed into the address itself, the total address space shrinks
exponentially.

 Unfortunately, the v6 architects decided not to separate these into
 separate address spaces, so an address _must_ contain routing
 information until that problem is fixed. It doesn't seem to be
 likely we'll do so without having to replace IPv6 and/or BGP4+, and
 there's no motion on either front, so we're stuck with the
 locator/identifier problem for quite a while.

Then we need to make predictions for the longevity of the scheme based
on the exponentially reduced address space imposed by encoding
information into the address.  In other words, 128 bits does _not_
provide 2^128 addresses; it does not even come close.  Ultimately, it
will barely provide anything more than what IPv4 provides, if current
trends continue.

 That's why 85% of the address space is reserved.  The /3 we are using (and
 even then only a tiny fraction thereof) will last a long, long time even
 with the most pessimistic projections.  If it turns out we're still wrong
 about that, we can come up with a different policy for the next /3 we use.
 Or we could change the policy for the existing /3(s) to avoid needing to
 consume new ones.

Or simply stop trying to define policies for an unknown future, and
thereby avoid all these problems to begin with.

 It's been a decade since we started and we're nowhere near using up the
 first /3 yet, so it appears we're in no danger at this point.

As soon as you chop off 64 bits for another field, you've lost just
under 100% of it.

 Variable-length addresses only work if there is no maximum length.

Ultimately, yes.  But there is no reason why a maximum length must be
imposed.

 E.164 has a maximum of 15 digits, meaning there are at most 10^15
 numbers. Here in +1 we only use eleven digit numbers, meaning we're
 burning them 10^4 times as fast as we could. That's not a great
 endorsement.

Telephone engineers make the same mistakes as anyone else; no natural
physical law imposes E.164, however.

 Also, telephone numbers have the same locator/identifier problem
 that IPv4/6 addresses do. In fact, IPv6's original addressing model
 looked strikingly similar to the country codes and area/city codes
 (aka TLAs and NLAs) that you're apparently fond of.

Maybe the problem is in trying to make addresses do both.  Nobody
tries to identify General Electric by its street address, and nobody
tries to obtain a street address based on the identifier General
Electric alone.

 The difference is that in IPv6, it's merely a convention ...

Conventions cripple society in many cases, so merely a convention
may be almost an oxymoron.

 The folks who designed IPv4 definitely suffered from that problem.  The
 folks who designed IPv6 might also have suffered from it, but at least they
 were aware of that chance and did their best to mitigate it.  Could they
 have done better?  It's always possible to second-guess someone ten years
 later.  There's also plenty of time to fix it if we develop consensus
 there's a problem.

Sometimes the most important design criterion is ignorance.  In other
words, the best thing an engineer can say to himself in certain
aspects of design is "I don't know."



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: 128 bits should be enough for everyone, was:

2006-03-30 Thread Anthony G. Atkielski
Theodore Ts'o writes:

 You've been making the same point over and over (and over)
 again.

To some, perhaps.  I'm not so sure that it has yet been made even once
to others.

 It's probably the case that people who will be convinced by your
 arguments, will have accepted the force of your arguments by now.
 For people who don't accept your arguments, they are not likely to
 be swayed by a last post wins style of argumentation.

It depends.  People with an emotional attachment to a specific notion
will never be convinced otherwise, but people who simply don't
understand something may change their mind once they understand.

 May I gently suggest that you stop and think before deciding
 whether you need to respond to each message on this thread, and
 whether you have something new and cogent to add, as opposed to
 something which you've said already, in some cases multiple times?

May I gently suggest that you use the delete key on your keyboard for
messages that you don't want to see?  I doubt that bandwidth is a
problem at MIT.  It has always worked for me, and I'm constrained by
much more limited bandwidth.



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Don't feed the trolls

2006-03-30 Thread Pekka Savola

On Thu, 30 Mar 2006, Stephen Sprunk wrote:

Thus spake Anthony G. Atkielski [EMAIL PROTECTED]

Why is this so difficult for people to understand?

..

Why is this so difficult for you to understand?


Feeding trolls is not very useful.. please at least keep him in Cc: so 
I don't have to see the messages.. :-)


I've yet to see Anthony make any useful contribution to the IETF 
(various rants on the IETF list don't count).


Perhaps we should just ban him from the list.

--
Pekka Savola                  "You each name yourselves king, yet the
Netcore Oy                     kingdom bleeds."
Systems. Networks. Security.   -- George R.R. Martin: A Clash of Kings

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Last Call: 'TLS User Mapping Extension' to Proposed Standard

2006-03-30 Thread The IESG
The IESG has received a request from an individual submitter to consider the 
following documents:

- 'TLS User Mapping Extension'
   draft-santesson-tls-ume-04.txt as a Proposed Standard
- 'TLS Handshake Message for Supplemental Data'
   draft-santesson-tls-supp-00.txt as a Proposed Standard

The previous Last Call on draft-santesson-tls-ume-03.txt has finished.
However, to resolve some comments that were received during the
previous Last Call, the document has been updated and
draft-santesson-tls-supp-00.txt was written.  Due to the significant
changes in one area of the document, the IESG is making a second
call for comments.  This comment period is shorter since the majority
of the document is unchanged.

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action.  Please send any comments to the
iesg@ietf.org or ietf@ietf.org mailing lists by 2006-04-11.

The file can be obtained via
http://www.ietf.org/internet-drafts/draft-santesson-tls-ume-04.txt
http://www.ietf.org/internet-drafts/draft-santesson-tls-supp-00.txt


___
IETF-Announce mailing list
IETF-Announce@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf-announce


Reclassification of draft-ietf-idwg-beep-idxp-07 as an Experimental RFC

2006-03-30 Thread IESG Secretary
The IESG has reclassified draft-ietf-idwg-beep-idxp, currently in the
RFC Editor's queue, as an Experimental RFC. This document was
originally approved as a Proposed Standard. However, it contains a
normative reference to draft-ietf-idwg-idmef-xml. That specification
has been approved as an Experimental RFC, not as a standards-track
document. So, in order to avoid a normative reference from a
standards track document to an experimental document, the IESG has
chosen to reclassify the IDXP protocol as experimental.

___
IETF-Announce mailing list
IETF-Announce@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf-announce