Hi Noel,

You wrote:

> > Here is a list of objections to any routing scaling solution which
> > pushes work relating to multihoming, TE and changing ISPs onto hosts.
> 
> >                   Extra host traffic
> >                   Host operation more affected by packet loss
> >                   Increased cost and reliability problems
> >                   for mobile hosts operating over wireless
> >                   Extra complexity in the host
> > ...
> >                   General principle of solving a problem close
> >                   to its origin
> 
> A very similar list of objections could (and probably was, that discussion
> would have been slightly before my time) have been raised to doing
> reliability (retransmission, etc) in the hosts, back in the day (i.e. early
> 70's); as many will no doubt recall, in early networking (ARPANet, X.25),
> reliability was the responsibility of the network, not the hosts.
> 
> Yet today we do reliability in the hosts, and to most people it's 'obvious'
> that that's the right thing....

OK - I think this is an interesting line of discussion.

One could argue that a host-based solution to the routing scaling
problem is simply upgrading the host's capacity to deal with lost
packets, by adding new methods to the current one of simply resending
to the same IP address.

The trouble with delegating reliable delivery to the "network" is
that the host has an unreliable link to the network - so it cannot be
a complete solution.
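The host-side upgrade described above can be sketched roughly as follows (a minimal, illustrative sketch only - the addresses, timeouts and function name are my own, not part of any proposal here). Today a host recovers from loss by resending to the same IP address; a host-based multihoming scheme would extend that loop to retry against alternate addresses for the same peer:

```python
import socket

def send_with_retry(payload, addresses, attempts_per_addr=3, timeout=1.0):
    """Send payload over UDP, trying each (ip, port) address in turn and
    resending on timeout; return the address that answered, or None if
    every address failed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for addr in addresses:
            for _ in range(attempts_per_addr):
                sock.sendto(payload, addr)
                try:
                    _data, src = sock.recvfrom(2048)
                    if src[0] == addr[0]:
                        return addr      # peer answered on this address
                except OSError:
                    continue             # timeout/loss: resend, then fail over
        return None
    finally:
        sock.close()
```

The probing and extra traffic I object to live in exactly this loop: every host has to generate it, including the smallest and most loss-prone.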

I think the core-edge separation systems can be complete solutions.
They give the host and the end-user network stable IP addresses, so
the host continues the current practice of ensuring reliable
communications by resending to the same address.  This covers
problems in the entire host-to-host path, including the link to the
nearest router, Ethernet switch, Wi-Fi access point, IP gateway etc.

I stand by my objections to hosts having to engage in processing,
probing, extra traffic etc. to cope with multihoming, TE and changing
ISPs.


> > add architectural elements to the network to solve its scaling problem
> > when millions of end-user networks need portability, multihoming and TE.
> 
> It's not clear to me what you mean by "[in] the network", in saying the
> above.
> 
> If you mean by this to put a function in the first-hop router, instead of the
> host, that is not going to really change things like the capabilities (e.g.
> response time) or cost (e.g. overhead in terms of number of packets needed to
> run the mechanism). It may be more practical (less boxes to modify), but
> that's all - and that more speaks to the deployment path, than the basic
> architecture (e.g. deploy it in first-hop routers to start with, but
> eventually it should migrate into the hosts).

Yes, I would also object to pushing the functionality, extra traffic
etc. almost all the way out to the hosts, such as into the router
closest to them.

I think the best router-based core-edge separation solutions involve
the CE and PE routers - those routers at either side of the ISP /
end-user network division - with a mapping system and extra functions
such as query servers, ITRs, ETRs and OITRDs.  They do not require
any changes to the rest of the end-user network's routers, or to its
hosts.
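The division of labour can be sketched like this (a toy illustration, assuming a made-up mapping table and marker bytes - not any real LISP or Ivip message format): an ITR near the sending network looks up the destination's edge prefix in the mapping system and tunnels the packet to a core-routable ETR address, while hosts and interior routers stay unchanged.

```python
import ipaddress

# Toy mapping system: edge prefix -> core-routable ETR address.
# (Documentation prefixes; a real system would query a distributed
# mapping service, not a local dict.)
MAPPING = {
    ipaddress.ip_network("203.0.113.0/24"): "192.0.2.1",
    ipaddress.ip_network("198.51.100.0/24"): "192.0.2.2",
}

def itr_encapsulate(dst_ip, payload):
    """Return (etr_address, encapsulated_packet) if dst_ip falls in a
    mapped edge prefix, or None to forward the packet natively."""
    dst = ipaddress.ip_address(dst_ip)
    for prefix, etr in MAPPING.items():
        if dst in prefix:
            # A real ITR would prepend an outer IP header addressed to
            # the ETR; here a marker stands in for that encapsulation.
            return etr, b"ENCAP:" + dst_ip.encode() + b":" + payload
    return None
```

Multihoming then becomes a mapping change - point the prefix at a different ETR - with no host involvement at all.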


> (To me, 'extra complexity in the hosts' as a reason to not put something in
> the hosts is a bit of a non-starter: most of the recurring costs, in terms of
> code size, etc are trivial, given current OS sizes; and the non-recurring
> costs, such as engineering time to write the code, are amortized over so many
> hosts they are also not too significant.)

I agree - a central reason for the Internet's success is hosts having
quite a complex stack - and arbitrarily complex, flexible and
updatable application programs.

For most PCs, adding another megabyte of code or RAM usage is not
going to cause significant costs or difficulties.

My objection is partly to the CPU and RAM requirements being
increased in all hosts, including the smallest.  But my objections
have more to do with:

  1 - Increased traffic to all hosts.

  2 - The extra costs and difficulties this causes for wireless
      hosts.

  3 - How much basic connectivity under the new regime would depend
      on fast, reliable packet delivery to and from each host,
      including fast, reliable DNS.


> Do we agree that by "[in] the network", one cannot possibly mean 'have the
> path selection do it', because it is precisely the case of "millions of
> end-user [small entities]" which _cannot_ be supported in the path selection?

I think you are referring to souping up each DFZ's mechanism for
choosing the best path for packets matching any one end-user prefix.
As far as I know, we all agree BGP can't be souped up to cope with
millions of end-user networks.  Nor do we have a direct alternative
to BGP which could do this - or a way of smoothly transitioning to
such an alternative.

What we do have is router-based core-edge separation schemes, which
could be introduced without disruption and without any changes to
hosts.  Ivip with OITRDs and LISP with PTRs could provide
multihoming for 100% of incoming packets for adopters of the
technology, irrespective of how many sending hosts were in networks
with the technology (that is, networks with ITRs).

I think this is the best solution. Please see a later message:

  Tony's critique of map-encap - or of router-based core-edge
  separation

for my preferred solutions.

 - Robin

_______________________________________________
rrg mailing list
[email protected]
https://www.irtf.org/mailman/listinfo/rrg