On Thu, Jul 28, 2011 at 7:15 PM, Noel Chiappa <[email protected]> wrote:
> Yes and no. For _most_ sites, no, since their access patterns (even for
> servers) are to a limited share of the Internet, therefore much less than a
> full table. Also, BGP (like any 'push' system) distributes _everything_ to
> _everyone_, regardless of whether they need it or not. Any data at any ITR
> (even if it's a cITR at a content provider, with lots of data) is there
> because it is _actually going to be used_.
>

I believe you are saying DNS-style distribution for rITRs and
BGP-style distribution for cITRs...

> Yes, for cITRs, it will be a lot of data. But TANSTAAFL - separating location
> and identity is going to have some costs, to go along with all the benefits
> it provides. And an alternative 'clean-sheet' architecture might have
> slightly better distribution of those costs (i.e. without point loads such as
> a cITR), but then you have the 'cost' of a forklift upgrade to everything in
> the network - and we know from experience that doesn't succeed very well. I
> do expect people will come up with some way of partitioning the mapping state
> to get rid of large point loads (e.g. through parallel cITRs at large sites).
>
> Let's not forget that _any_ location/identity separation system (which there
> is general consensus we have to have) is going to have to have roughly the
> same amount of mapping state, so it's not like there's some magic dust that
> can make the problem go away entirely...
>

The Loc/ID split can be achieved in many ways, and the different
methods do not necessarily compete with each other. LISP is an
endpoint (host) identifier scheme that separates location from
identity, but it still provides a host-to-host-centric architecture -
pretty much the same as we have today with regular IPv4 and the
forthcoming IPv6 architecture (which is basically IPv4 with a larger
unidimensional address space), though LISP does provide more mobility.
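
To make the distinction concrete, here is a minimal sketch (Python) of
what the split looks like at an ITR: identity (an EID) is looked up in
a map-cache to find location (RLOCs). The prefixes and locators are
made-up documentation addresses, and this is of course not a real LISP
implementation:

    import ipaddress

    # Toy map-cache: EID prefix (identity) -> RLOCs (location).
    MAP_CACHE = {
        ipaddress.ip_network("203.0.113.0/24"): ["192.0.2.1", "198.51.100.1"],
    }

    def lookup_rlocs(eid):
        """Return the locators for an EID, or None on a cache miss."""
        addr = ipaddress.ip_address(eid)
        for prefix, rlocs in MAP_CACHE.items():
            if addr in prefix:
                return rlocs
        return None  # a real ITR would now send a Map-Request

    print(lookup_rlocs("203.0.113.7"))  # hit: encapsulate towards an RLOC
    print(lookup_rlocs("198.18.0.1"))   # miss: mapping has to be fetched

Note that the endpoint is still a single host address; the mapping
changes where the packet goes, not what you are talking to.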

The host-to-host-centric architecture is quite retro compared with
current requirements - when you design a data center network, you end
up trying to break the host-to-host concept for the most important
services by adding a middlebox solution (such as an SLB, an ADC or DNS
tweaks) in front of the real servers, both to load-balance traffic
across several servers and to provide greater availability. What we
are really trying to achieve is a host-to-service architecture, but
due to the current stack structure you have to implement kludges in
the data center to get it. And they are expensive: they impose L2
connectivity between the data centers, and configuring and
troubleshooting SLBs and ADCs in large L2 domains is far from fun. I
also don't see how the data center becomes easier to configure and
troubleshoot by adding cITRs/cETRs in front of the SLBs/ADCs...
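
To show how crude the DNS variant of that kludge is, here is a minimal
sketch (Python; the hostname is only an example): several A records
are published for one service name, and each client simply picks one
of the addresses the resolver hands back:

    import socket

    # "Tweaking with DNS": publish several A records for one service
    # name and let each client pick one of the returned addresses.
    def resolve_service(name, port=80):
        results = socket.getaddrinfo(name, port, socket.AF_INET,
                                     socket.SOCK_STREAM)
        # Each entry is (family, type, proto, canonname, (address, port)).
        return [sockaddr[0] for _, _, _, _, sockaddr in results]

    print(resolve_service("example.com"))

Plain DNS round-robin has no liveness awareness at all, which is one
reason the SLBs/ADCs end up in front of the real servers in the first
place.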

If you dare to go after the stack (and the unidimensional address
space), the data center network architecture can be simplified - read
the SCAFFOLD paper, but with a conceptual-view mindset. You can make
SCAFFOLD less "clean-sheet" and backwards compatible with the current
architecture. Take the packet header structure from RFC 6306; to keep
the service ID backwards compatible with current applications, the
service ID format obviously needs to be 32 bits. The current
applications, which learn the destination IP address from the stack
API, then still get a 32-bit value, but this time it is not an
overloaded IP address - it is the service ID, which has nothing to do
with the routing architecture whatsoever. This opens up the
possibility of replacing the current kludges with service routers (no
caches), and the service routers can redirect a service request to any
available real server at any location - there is no need for L2
connectivity between real servers, because if you use MPTCP/SCTP as
the transport protocol the session can be moved from one server to
another across L3 boundaries (the MPTCP token is a session
identifier). The application at the initiator never sees the locators;
the stack shows only the service ID towards the application. Most
likely this will reduce CAPEX and OPEX in data centers, and it becomes
more flexible because you could then add additional resources across
L3 boundaries to handle traffic peaks (fulfilling the promises of
Cloud Computing).
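
As a rough sketch of those two properties - this is not the RFC 6306
wire format, and the service ID, locator values and service-router
stand-in below are invented for illustration - the application keeps
getting a 32-bit value through the stack API, while the locators stay
hidden and a service router maps the service ID to any available real
server:

    import struct

    # Conceptual sketch only: the value handed to the application is a
    # 32-bit service ID, not an overloaded IP address; the locators
    # below belong to the routing side and the application never sees
    # them.
    SERVICE_ID = 0x00012345
    REAL_SERVERS = ["192.0.2.10", "198.51.100.7"]

    def api_value(service_id):
        # Existing applications expect a 4-byte "destination address"
        # from the stack API; here they simply get the packed service ID.
        return struct.pack("!I", service_id)

    def service_router_pick(service_id, servers):
        # Stand-in for a service router: redirect the request to any
        # available real server, at any location, no L2 stretch needed.
        return servers[service_id % len(servers)]

    print(api_value(SERVICE_ID).hex())                    # what the app sees
    print(service_router_pick(SERVICE_ID, REAL_SERVERS))  # where it goes

Moving an already established session to another server across L3 is
then the transport layer's job (MPTCP/SCTP), which is not shown here.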

It might be that enterprises and very large content providers are
interested in this kind of architecture and start to invest in it - it
should be cheaper than the current one, and it is a lot more flexible
and more stable, a combination that is hard to resist. It can also be
made backwards compatible with existing deployments, so no forklift
upgrade is required.

And LISP is needed to get this transition started. Oh, and I think
there will be much less state information in the routing subsystem,
because some of the workload is shifted from the network to the
endpoints, and the transport protocol is pretty good at liveness
detection. A whole new world opens up; I think we are only scratching
the surface so far...

Patrick