Short version:   A core-edge separation scheme is more in keeping
                 with the Internet tradition - where the host is not
                 concerned with moment-to-moment connectivity changes
                 in the network - than a host-based system, which
                 copes with those changes so the existing routing
                 system can remain simple and unchanged.

                 There seems to be a steep set of barriers to making
                 all the changes required to get everyone to
                 transition to a new Internet with new host stack
                 capabilities and all applications rewritten to use a
                 purely host-name interface to the stack.

Hi Tony,

You wrote:

> If we want to change the architecture to something that we can live with in
> perpetuity, we might want to step back and take a larger view of the world.
> IPv6 is coming, like it or not.  If its routing architecture is
> fundamentally flawed and we have to deploy awkward systems to compensate,
> then we will have to live with those indefinitely.

From a scalability point of view, IPv6's routing architecture is as
fundamentally flawed as IPv4's.


I think you and some other folks have a very dim view of introducing
a core-edge separation scheme: new devices in the form of Ingress
Tunnel Routers (ITRs), Egress Tunnel Routers (ETRs) and query
servers, a global mapping database etc.

To me, this is not much worse - and is arguably more elegant and
global - than changing the routing system in some fundamental way so
that each router needs to know only about a limited subset of the
system, such as with geographical aggregation.


There seems to be no scope for souping up BGP to the degree needed to
cope with tens or hundreds of millions of end-user prefixes.  Nor
does there seem to be any replacement for BGP which could do this -
even if we could figure out a way of transitioning to it.

If you want to keep the interdomain routing system architecturally
simple - which it is at present - and keep it scalable, then I think
your only option is to ensure that end-user networks large and small
can get multihoming and something as good as portability (I believe
there is no such thing), while the core only handles ISP prefixes.
This can only be achieved with radical changes to all host stacks and
applications.


I think you would need to change all host stacks, APIs and
applications so that all applications work like a charm despite
arbitrary changes in the one or more IP addresses each host is using.
Assuming you want to keep applications out of any direct involvement
in the routing system, this could only be done by having every
application work purely with hostnames, leaving actual IP address
handling to the operating system.

I think this would require new or changed crypto protocols, with new
or changed APIs.

You would also need to alter the internal routing systems of all
end-user networks so they could easily and robustly cope with these
arbitrary changes of address.  That sounds really daunting.  The
routers are supposed to be always reachable and manageable, never
dropping a packet, despite working on constantly shifting sands where
no IP address is stable (except perhaps private space).  By some
secure mechanism, the whole arrangement of routers must be told to
replace one set of addresses with another, and must manage having
one, two or any number of such addresses in operation at any one time.

It is not clear to me how you could reliably implement ACLs with
such a scheme.  Filtering of packets requires binary numbers, and
there is no obviously elegant way a router in some remote network
could consistently keep up with whatever changes were happening to
the IP address of some host in another network as it moves from one
ISP's address range to another - for instance, for multihoming and
TE reasons.

The DNS would need to be changed to support rapid initiation of
sessions, providing each enquiring host not with just one IP address
but typically with several, together with instructions on which to
try first - while at the same time allowing load sharing between
multiple IP addresses.
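To illustrate what "which to try first" plus load sharing might look
like, here is a minimal sketch, loosely modelled on the priority and
weight semantics of DNS SRV records.  The record format, addresses
and function names are all hypothetical, not part of any proposal in
this thread:

```python
import random

# Hypothetical resolver output: each candidate address carries a
# priority (lower tried first) and a weight (for load sharing among
# addresses of equal priority).  Addresses are from documentation
# ranges and are made up for this sketch.
CANDIDATES = [
    (10, 60, "192.0.2.1"),     # preferred tier, takes ~60% of load
    (10, 40, "198.51.100.1"),  # preferred tier, takes ~40% of load
    (20, 100, "203.0.113.1"),  # fallback, tried only if the above fail
]

def ordered_addresses(candidates, rng=random):
    """Return addresses in the order a host would try them:
    ascending priority, weighted-random shuffle within each priority."""
    by_priority = {}
    for prio, weight, addr in candidates:
        by_priority.setdefault(prio, []).append((weight, addr))
    result = []
    for prio in sorted(by_priority):
        group = list(by_priority[prio])
        while group:
            # Pick one member with probability proportional to weight.
            total = sum(w for w, _ in group)
            pick = rng.uniform(0, total)
            for i, (w, addr) in enumerate(group):
                pick -= w
                if pick <= 0:
                    result.append(addr)
                    del group[i]
                    break
    return result
```

A host opening a session would simply walk the returned list until
one address answers; over many resolutions, the weights spread the
load across the equal-priority addresses.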

Every session requires that the other end be told of the multiple IP
addresses of the host, with information on which address to try
first.  There would need to be new protocols, such as pinging a host
by name and getting the response back to the sending host, despite
the sender potentially having acquired a new IP address since it sent
the packet.

Every time one of the initial host's addresses changes, the initial
host has to tell this to all the hosts which might think they have a
session with that host.  Doing this is costly and raises security
problems which can only be solved with more elaborate protocols,
nonces, caches of nonces sent to or received from distant hosts etc.
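As a rough sketch of why nonces come into it: an address-change
announcement must be bound to something only the legitimate peer
knows, or an off-path attacker could redirect the session.  The
following is a toy illustration only - the secret exchange, message
format and function names are assumptions of this sketch, and real
proposals in this space (e.g. SHIM6, HIP) use considerably stronger
machinery:

```python
import hashlib
import hmac
import os

# Hypothetical: the two hosts establish a shared secret at session
# start (here just generated locally; a real protocol would need a
# secure exchange).
session_secret = os.urandom(16)

def make_update(secret, new_addr, nonce):
    """Sender: announce an address change, authenticated with an HMAC
    over the receiver-issued nonce and the new address."""
    tag = hmac.new(secret, nonce + new_addr.encode(), hashlib.sha256).digest()
    return {"new_addr": new_addr, "nonce": nonce, "tag": tag}

def verify_update(secret, msg, expected_nonce):
    """Receiver: accept the new address only if the nonce matches the
    one it issued and the HMAC verifies - otherwise a forged
    announcement could hijack the session."""
    if msg["nonce"] != expected_nonce:
        return False
    good = hmac.new(secret, msg["nonce"] + msg["new_addr"].encode(),
                    hashlib.sha256).digest()
    return hmac.compare_digest(good, msg["tag"])
```

Even this toy version shows the cost: per-peer state (secrets,
outstanding nonces) in every host, which is exactly the extra
caching and security management the paragraph above refers to.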

All this involves a great deal of extra information, communication,
security software, management of security software, caching etc. in
all hosts and in any router or other device which involves ACLs.

It also requires changes to all hosts and applications, which can't
be introduced in a way which provides substantial benefits to early
adopters - so I believe it will be impossible to introduce any such
host changes.


I think it would be better to leave the hosts as they are - where
they expect another host to have a stable IP address for the current
session - and then to add a core-edge separation scheme to make a new
kind of scalable end-user address space.

I think it is a reasonable division of labour: the hosts and internal
routers work with stable addresses.  They have enough work to do as
it is.  The DFZ routers continue as they do, with BGP, but handling a
diminishing number of end-user prefixes.  Then we add a new
architectural layer to the routing system to do what needs to be
done: create the new kind of scalable address space for end-user
networks which is portable and suitable for multihoming and TE.  That
also happens to be a basis for a new approach to mobility, in which
hosts also get to keep stable IP addresses.
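The division of labour above can be sketched in a few lines.  An ITR
at the core-edge boundary does a longest-prefix match of the
destination's stable edge address against the mapping, then tunnels
the packet to whichever ETR currently serves that prefix.  The
prefixes, ETR addresses and function name below are invented for
illustration - they are not from Ivip or any specific proposal:

```python
import ipaddress

# Hypothetical mapping: scalable end-user ("edge") prefixes -> the
# ETR address in provider ("core") space currently serving them.
# In a real scheme this comes from the global mapping system, and an
# entry can change for multihoming/TE reasons without any host's
# address changing.
MAPPING = {
    ipaddress.ip_network("10.5.0.0/16"): "192.0.2.77",
    ipaddress.ip_network("10.5.200.0/24"): "198.51.100.9",  # more specific wins
}

def itr_encapsulate(dst):
    """Longest-prefix match the destination against the mapping; return
    (outer ETR address, inner destination) for the tunnelled packet,
    or None if the destination is not edge space (forward natively)."""
    dst = ipaddress.ip_address(dst)
    best = None
    for prefix, etr in MAPPING.items():
        if dst in prefix and (best is None
                              or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, etr)
    if best is None:
        return None
    return (best[1], str(dst))
```

The point of the sketch is that multihoming and TE become a mapping
change visible only to ITRs - the hosts at both ends keep their
stable addresses throughout.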

  http://www.firstpr.com.au/ip/ivip/#mobile

Anything we can do to minimise complexity in mobile and embedded
devices, and to minimise the protocol overhead to those mobile
devices, is a good thing.  IPv6 lightswitch folks would be keen to
keep the basic host specification as simple as possible.

For instance, the 6LoWPAN WG is developing a method of using IPv6
over limited-bandwidth 802.15.4 radio links, as an alternative to
ZigBee.  6LoWPAN aims to provide a basic networking protocol stack
which can be implemented on 8-bit CPUs with as little as 32 kbytes of
program memory - which is about the same as ZigBee's requirement.

The additional host stack functionality you need to add to get hosts
to do all the TE, multihoming etc. is surely going to be at odds with
the desire to use IPv6 in compact, very low power, embedded devices.
The aim is to make a device work for years from a 0.5 amp-hour
CR3032 lithium battery.


So this is certainly a fork in the road.

You go the High Way, attempting to:

  1 - Devise a new kind of host-based solution so the existing
      routing system can continue in its simple state.  SHIM6 and
      ILNP come to mind, but you also need to cope with ACLs and
      devise a transition mechanism, firstly to IPv6 and then
      to this new kind of IPv6.

  2 - Convince all writers of operating systems to adopt the
      new stack, the new name-based API it involves etc.  Set
      up debugging and testing arrangements so people can be sure
      a host stack meets all the specs.

  3 - Convince all application writers to create dual mode versions
      of their applications which work perfectly with the current
      API and with the new one, where the new one is only partially
      implemented in other hosts.

      This would be a major nightmare to write and test.  What
      motivation can you give the programmers to drastically alter
      their code like this?  No direct impetus, as far as I can see,
      other than: "For the good of the DFZ ....", "This is the Way of
      the Future, like it or not . . .".  I can't imagine how you
      could motivate most application writers to do this.  You really
      need to motivate them all.

  4 - Find a way of convincing all ISPs and end-users to go
      dual-stack IPv4 and IPv6 (as modified with the new scheme)
      for as long as it takes for almost all end-user networks to
      function 100% reliably with the new IPv6 arrangement.  Only
      then will they be able to let go of their IPv4 addresses.

There are many problems with all this.  Not least that people want
and need to run applications which no-one is maintaining at present.
This includes embedded devices, such as printers.

This "High Road" is a far bolder thing to try to get the world to do
than migrate to IPv6.

It doesn't help much to argue that something like this must happen
because the long-term consequences of not doing so are much worse.
That line of argument doesn't work anywhere near well enough with the
greenhouse effect, and it surely won't motivate billions of
technically un-inclined end-users to change anything.


> Alternately, if we fix the routing architecture here and now, we might end
> up with a solution that we actually like.

I don't think it is essential we keep the routing system exactly as
simple as it is now.  With the growth of the Net since its original
conception, I don't think it is unreasonable to add one carefully
crafted architectural elaboration to the routing system.

Adding a core-edge separation scheme doesn't transfer any application
functionality to the routing system.  I think this way - the Low Way
in terms of idealised principles of routing system simplicity - is
the way to get to Scotland.

Arguably, the core-edge separation scheme is more in keeping with the
Internet tradition - where the host is not concerned with
moment-to-moment connectivity changes in the network.

I would rather design and debug a system where this real-time
implementation of multihoming, TE, change of ISP etc. is concentrated
in an admittedly distributed system of ITRs, ETRs and a mapping
system, than try to write and debug the same functionality in an
unknown number of stacks, changing an unknown number of applications.
I think there could be much more trouble with problems due to
particular combinations of host stacks and applications which were
not written correctly.

 - Robin

_______________________________________________
rrg mailing list
[email protected]
https://www.irtf.org/mailman/listinfo/rrg
