Hi,
On Sat, 13 Feb 2010, Robin Whittle wrote:
> Hi Paul,
> On your page:
> http://pjakma.wordpress.com/2010/02/12/making-the-internet-scale-through-nat/
> there are a number of statements which I think repeat mistakes made
> by others:
> According to Noel Chiappa, the LISP team will soon be turning to a
> DNS-based global mapping resolution system:
> http://www.ietf.org/mail-archive/web/rrg/current/msg05772.html
Interesting.
I have to say, I don't really know the details of LISP. I just know a
lot of work has gone into it, and I have no reason to doubt that it
can be deployed, and can solve at least a few problems. I might not
be too keen on its apparent complexity, but given the amount of
engineering that has gone into it, it seems a deployable system.
> This is not a common idea among all RRG proposals. This
> "Locator/Identifier Separation" approach is used by all the
> Core-Edge Separation (CES) approaches:
> "Locator/Identifier Separation" means that separate objects, in
> separate namespaces, are used for uniquely Identifying hosts and for
> specifying their Location (either exactly, or down to the level of
> a particular ISP or end-user network).
I don't follow RRG closely enough to be completely au fait with its
somewhat extensive ontology (which, I have to say, seems to change
significantly even from month to month - the traffic here can
sometimes seem to be from Mars if you fail to follow the list closely
for even a short while). :) A sign of the breadth of different
proposed solutions, I guess.
I didn't realise there were proposals that did not split the ID space
in some way. I know there are some proposals that use an implicit
space, leaving the 2 separate spaces still combined in one greater
space, e.g. for protocol compatibility reasons, but the split is
still there.
I.e. I don't mind if an existing space is effectively split up, or if
2 explicitly separate spaces are used - they're both splits to me.
[snip interesting overview of proposals]
> This is true of CEE architectures. This is because hosts are then
> Identified by something different from IP addresses. So the
> host-to-host sessions survive changes in the IP addresses used by
> the hosts, and renumbering a network when choosing a new ISP does
> not alter the identity of the hosts. CEE architectures multihome
> by giving each host two or more IP addresses, one from each ISP.
> Your statement does not apply to CES architectures. For CES, the DFZ
> routers are primarily concerned with the ISP's prefixes, which use
> the "core" subset of the global unicast address space, which is what
> remains after a new subset of scalable "edge" addresses has been
> removed.
Right, it's still a hierarchical split. Sometimes too much
categorisation is not useful. I.e. I see there's a distinction, but
that distinction is not in the general concept of there being a
split, but in the practical details of how it is done.
So I recognise there are 2 different things, call them CES and CEE if
you want, but I don't see the utility in arguing that one has a
split and the other does not. Anyway, doesn't matter (to me ;) ).
> PJ> We will note that some proposals, in order to be as
> PJ> transparent and invisible to end-host transport protocols
> PJ> as possible, use a "map and encap" approach – effectively
> PJ> tunneling packets over the core of the internet.
> These are all CES architectures. CEE architectures have no need for
> tunneling.
I think I'll stick with "map-and-encap" and "rewriting". I have to
say, I sometimes wish this WG would try to adopt labels that are
semi-evocative of their meaning where possible, rather than all these
acronyms. Very confusing! :)
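For what it's worth, the map-and-encap idea itself is tiny; it can be
sketched in a few lines (all names hypothetical; the dict stands in for a
real mapping-resolution system, this is not any actual LISP implementation):

```python
# Toy "map and encap": an ITR (ingress tunnel router) looks up the packet's
# destination edge address (EID) in a mapping system, then tunnels the packet
# across the core to the matching locator (RLOC). The ETR at the far edge
# strips the outer header and delivers the inner packet unchanged.

MAPPING = {
    "203.0.113": "192.0.2.1",   # edge /24 prefix -> RLOC of the serving ETR
}

def lookup_rloc(dst_eid):
    """Crude /24 prefix match - enough for the sketch."""
    prefix = dst_eid.rsplit(".", 1)[0]
    return MAPPING[prefix]

def encap(inner_packet, dst_eid, itr_rloc):
    """Wrap the original packet in an outer header addressed core-to-core."""
    outer = {"src": itr_rloc, "dst": lookup_rloc(dst_eid)}
    return outer, inner_packet

outer, inner = encap(b"ip-payload", "203.0.113.7", "198.51.100.9")
```

The point being: the hosts at either end see only the inner packet; all the
complexity lives in the mapping system and the tunnel routers.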
> PJ> Some other proposals use a NAT-like approach, like Six-One or
> PJ> GSE, with routers translating from inside to outside.
> "Six/One" is an earlier host-based proposal from Christian Vogt -
> arguably an improved version of Shim6. I think you are referring to
> "Six/One Router", also from Christian Vogt.
Ah, perhaps I do - thanks!
> CES architectures do not require any additional complexity, or any
> other changes, in host stacks or applications. They work by adding
> some new elements to the routing system - and a mapping system -
> but do not require alterations to most routers.
They tend to be complex themselves though.
> These differences, and the fact that CES maintains the current IP
> naming system while CEE completely changes it, mean that the
> distinctions between these two types of solution are highly
> significant and helpful when discussing scalable routing solutions.
Sure, but do we really need to use acronyms? :)
> Good CES architectures provide immediate benefits to adopters, by
> supporting all their traffic, and provide scaling benefits in
> proportion to the adoption rate.
If that's so, then the map-encap systems will see adoption. I'd have
my doubts about the "immediate benefits to adopters" though. Doesn't
CES have a path-MTU problem?
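(The PMTU concern is just arithmetic: encapsulation steals bytes from the
inner packet. Taking the LISP-over-IPv4 case as an illustration:)

```python
# Why map-and-encap raises path-MTU issues: the outer headers reduce the room
# left for the inner packet. Overheads are for LISP-over-IPv4 encapsulation
# (outer IPv4 header + outer UDP header + LISP shim header).
LINK_MTU   = 1500   # typical Ethernet MTU
OUTER_IPV4 = 20
OUTER_UDP  = 8
LISP_SHIM  = 8

effective_mtu = LINK_MTU - (OUTER_IPV4 + OUTER_UDP + LISP_SHIM)
# A full-size 1500-byte inner packet no longer fits in one outer frame;
# it must be fragmented, dropped, or signalled via PMTUD.
```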
> CEE architectures can only provide substantial benefits to adopters,
> and only provide real scaling benefits, when all, or almost all,
> networks adopt the new architecture. This means moving everyone to
> IPv6 and altering all host stacks to implement the particular CEE
> architecture's alteration of IPv6.
I'd disagree with this. If you read the blog entry, I'm outlining a
scenario where the immediate problem to be solved is pressure on NAT
(not multi-homing), leading to hacks being deployed which benefit
both customer and ISP together. These hacks can then later, slowly,
be extended to things like multi-homing.
I think it's plausible that IPv4 space for multi-homers will continue
to be available for long enough to stave off map-encap deployment
(particularly if map-encap connectivity has PMTU problems). This
could be facilitated partly by ever greater use of NAT, such that the
*first* problem that ISPs and end-users really clamour to have solved
(and will upgrade for) is NAT port space pressure. Some big NAT-using
networks have already noticed such pressure problems today, so
it's not such a hand-wavy scenario.
Everything else follows logically from that. Just saying :)
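The port-space pressure above is easy to quantify with back-of-envelope
figures (illustrative numbers, not measurements from any particular network):

```python
# Back-of-envelope carrier-grade NAT exhaustion: one public IPv4 address is
# shared among many subscribers, and each concurrent TCP/UDP flow consumes
# one (public IP, port) pair. Figures are illustrative assumptions.
USABLE_PORTS  = 64512   # 65536 minus the reserved range below 1024
PORTS_PER_SUB = 300     # a busy host/household can hold hundreds of
                        # concurrent flows (web, maps, P2P, ...)

subs_per_ip = USABLE_PORTS // PORTS_PER_SUB
# Only a couple of hundred subscribers fit behind each public address
# before new flows start failing - hence the pressure.
```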
> LISP and Ivip both require considerable complexity. This is not
> surprising, since we are planning a once-in-several-decades
> enhancement for the IPv4 and IPv6 Internets - with the work to be
> done in the network, without requiring host changes.
I have to say, I'm not a fan of complex networks. There seem to be
good economic reasons why a dumb inter-network of smart hosts won
out over a plethora of smart networks of dumbish terminals (to
varying degrees).
Not convinced myself that avoiding changing host software is a
long-term holy grail. Particularly if the correct place economically
and technically is to change the hosts. As Brian Carpenter has said
here regarding IPv6, changing the host software is /not/ that hard -
changing the /network/ is the hard part.
I can't see customers clamouring for "LISP" (what do customers care
about how the network works) and the path of least resistance for
ISPs is to support their hosting and multi-homing businesses by
using NAT to reclaim address space from access customers.
However, that's all debatable, and I guess we'll see.
My bets are on end-host extensions like Shim (Shim6 if IPv6 becomes
popular, something similar for v4 if not) and, possibly, extensions
to facilitate the dumbest possible network middle-boxes - like the
NAT extension described in my blog entry.
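The end-host-shim idea is conceptually tiny: transport binds to a stable
identifier, and the shim re-maps that identifier across whichever locators
currently work. A hypothetical sketch (not the actual Shim6 wire protocol):

```python
# Sketch of a shim-layer failover: the session is named by a stable
# identifier; the shim picks among the peer's locators and switches
# transparently on failure. Purely illustrative.

class ShimSession:
    def __init__(self, peer_id, peer_locators):
        self.peer_id = peer_id               # stable identity, seen by transport
        self.locators = list(peer_locators)  # addresses, hidden from transport
        self.current = self.locators[0]

    def send(self, payload, reachable):
        """Send via the current locator, failing over if it is down.
        `reachable` is a callback standing in for real reachability probes."""
        candidates = [self.current] + [l for l in self.locators
                                       if l != self.current]
        for loc in candidates:
            if reachable(loc):
                self.current = loc           # transport never sees the change
                return loc                   # a real shim would transmit here
        raise ConnectionError("no working locator for " + self.peer_id)

s = ShimSession("host.example", ["2001:db8:a::1", "2001:db8:b::1"])
# Simulate the first locator's path failing:
used = s.send(b"data", reachable=lambda loc: loc.startswith("2001:db8:b"))
```

The session survives the locator change precisely because transport only
ever sees `peer_id`, which is the whole point of the split.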
> You mention options for the IP header. As far as I know, IPv6
> extension headers can be used.
The assumption in my blog post is that IPv6 deployment continues to
stall, and people start looking for ways to band-aid problems with
IPv4 to allow it to continue to be used.
I show that such an approach, by solving immediate problems
incrementally, could iterate toward a new address family that solves
internet scaling problems while remaining somewhat backward
compatible with IPv4 as this development process goes on. I.e. change
the end-host software by all means, but remain compatible with the
network.
With the hindsight available to us today, this is probably the
approach that should have guided IPng.
> Unfortunately the same is not true of IPv4 header options. One of
> the RRG proposals (hIPv4) relied on these, but DFZ routers handle
> such packets on the "slow path", making this entirely impractical:
>
>   Fransson, P.; Jonsson, A., "End-to-end measurements on performance
>   penalties of IPv4 options", Global Telecommunications Conference
>   (GLOBECOM '04), IEEE, Vol. 3, 29 Nov.-3 Dec. 2004, pp. 1441-1447,
>   doi:10.1109/GLOCOM.2004.1378221
>   http://www.sm.luth.se/csee/csn/publications/end_to_end_measurements.pdf
>
> Most routers process such packets on the "slow path" with software.
Yes, I mentioned that. Thanks for the references, though. However, I
also claim that if an IP option became popular, newer hardware
would fast-path certain common cases.
We should perhaps take care to avoid making long-term architectural
decisions based on trivialities of today's implementations.
>   "From the analysis it can be concluded that there is a slight
>   increase in delay and jitter and a severe increase in loss rate."
>
> I think the latter part of your article is focussed on IPv4, since
> your aim is to make NAT more workable. Unfortunately, it seems that
> the "slow path" handling of packets with IPv4 option headers rules
> out any solution along the lines you are suggesting.
I think I covered this in the blog post.
a) Today: "slow connectivity" trumps "no connectivity, because a NAT
box in the middle is maxed out".
b) Longer-term: IP options are slow-path only because they're uncommon.
There is no reason why a popular option, reliably placed in the
packet, could not stay on the fast path. (E&OE, IANAEE)
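(For concreteness: an IPv4 option simply pushes the IHL field above 5, which
is what hardware typically keys on to punt to the slow path. A minimal
header-building sketch, with an illustrative, made-up option value:)

```python
import struct

# Build a minimal IPv4 header, optionally with a 4-byte option appended.
# Routers that fast-path only IHL == 5 punt anything larger to software;
# a fixed, well-known option at a fixed offset could in principle be
# matched in hardware just as cheaply.

def ipv4_header(src, dst, option=b""):
    assert len(option) % 4 == 0            # options pad to 32-bit words
    ihl = 5 + len(option) // 4             # header length in 32-bit words
    ver_ihl = (4 << 4) | ihl               # version 4 in high nibble
    hdr = struct.pack("!BBHHHBBH4s4s",
                      ver_ihl, 0, 20 + len(option), 0, 0,
                      64, 6, 0, src, dst)  # TTL 64, proto TCP, cksum elided
    return hdr + option

plain = ipv4_header(b"\xc0\x00\x02\x01", b"\xc0\x00\x02\x02")
opted = ipv4_header(b"\xc0\x00\x02\x01", b"\xc0\x00\x02\x02",
                    option=b"\x9e\x04\x00\x00")  # hypothetical option, padded
```

The first byte goes from 0x45 to 0x46 - that one-nibble change is the entire
"signal" that today's fast paths refuse to handle.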
regards,
--
Paul Jakma p...@jakma.org Key ID: 64A2FF6A
Fortune:
A meeting is an event at which the minutes are kept and the hours are lost.
_______________________________________________
rrg mailing list
rrg@irtf.org
http://www.irtf.org/mailman/listinfo/rrg