Hi Robin - see below for some follow-up:

> -----Original Message-----
> From: Robin Whittle [mailto:r...@firstpr.com.au]
> Sent: Thursday, March 11, 2010 4:18 PM
> To: RRG
> Cc: Templin, Fred L
> Subject: Re: [rrg] IRON-RANGER scalability and support for packets from
> non-upgraded networks
> Hi Fred,
> Thanks for this:
> > In catch-up mode, your suggestion of a need for more
> > discussion of scaling properties is a good one, and
> > something I think that can be accommodated in the
> > next IRON update.
> Also, in "Re: [rrg] Why won't supporters of Loc/ID Separation ..."
> you wrote:
> > IRON-RANGER used to speak of using IPv6 neighbour discovery
> > as the means for locator liveness testing, dissemination
> > of routing information, secure redirection, etc. However,
> > the VET and SEAL mechanisms are being revised to instead
> > use a different mechanism called the SEAL Control Message
> > Protocol (SCMP) for tunnel endpoint negotiations that occur
> > *within* the tunnel sublayer and are therefore not visible
> to neither the outer IP protocol nor the inner network layer
> > protocol. Hence, the inner network layer protocol could be
> > anything, including IPv4, IPv6, OSI CLNP, or any other network
> > layer protocol that is eligible for encapsulation in IP.
> OK.  I hope you will be able to explain these things not just in
> terms of high-level concepts, but to give examples of how the whole
> thing would actually work on a large scale.

OK if you are talking about an architectural description,
but please note that both VET and SEAL are already full
functional specifications that can be used by software
developers to produce real code. 

> For instance, how many IRON routers are there in an IPv4 I-R system,
> and how many individual EID prefixes?

Let's suppose that each VP is an IPv6 ::/32, and that
the smallest unit of PI prefix delegation from a VP is
an IPv6 ::/56. In that case, there can theoretically be
up to 4B VPs in the IRON RIB and 16M PI prefixes per VP.
In practice, however, we can expect to see far fewer than
that until the IPv6 address space reaches exhaustion,
which many believe will be well beyond our lifetimes.
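As a quick sanity check, the theoretical maxima follow directly from the prefix lengths assumed above (each VP a ::/32, each PI delegation a ::/56) - a minimal sketch:

```python
# Theoretical maxima under the stated assumptions:
# each VP is an IPv6 ::/32, and the smallest PI delegation is a ::/56.
VP_PREFIX_LEN = 32
PI_PREFIX_LEN = 56

max_vps = 2 ** VP_PREFIX_LEN                        # number of /32s in IPv6
pi_per_vp = 2 ** (PI_PREFIX_LEN - VP_PREFIX_LEN)    # number of /56s per /32

print(f"max VPs in the IRON RIB: {max_vps:,}")      # 4,294,967,296 (~4B)
print(f"PI prefixes per VP:      {pi_per_vp:,}")    # 16,777,216 (16M)
```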

Still thinking (very) big, let's try sizing the system
for 100K VPs, each with 100K ::/56 delegated PI prefixes.
That would give 10B ::/56 PI prefixes, or 1 PI prefix
for every person on earth (depending on when you sample
the earth's population). Let's look at the scaling
considerations under these parameters:
> Then, how do these IRON
> routers, for each of these EID prefixes continually and repeatedly (I
> guess every 10 minutes or less) securely inform a given number of VP
> routers they are the router, or one of the routers, to which packets
> matching a given EID prefix should be tunneled.  Since there could be
> multiple VP routers for a given VP, and the IRON routers don't and (I
> think) can't know where they are, how does this process work securely
> and scalably?

Each IRON router R(i) discovers the full map of VPs in
the IRON through participation in the IRON BGP. That
means that each R(i) would need to perform full database
synchronization for 100K stable IRON RIB entries that rarely
if ever change. This doesn't sound terrible even for existing
core router equipment. As you noted, it is also possible that
a given VP(j) would be advertised by multiple R(i)s - let's
say each VP(j) is advertised by 2 R(i)s (call them R(x) and
R(y)). But, since the IRON RIB is fully populated to all
R(i)s, each R(i) would discover both R(x) and R(y) that
advertise VP(j).
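The full-synchronization property can be pictured as a simple map from each VP to the set of routers advertising it (the names and data structure here are purely illustrative, not part of any specification):

```python
# Hypothetical sketch of the fully populated IRON RIB: a map from each
# virtual prefix (VP) to the set of IRON routers that advertise it via
# the IRON BGP. Because the RIB is fully synchronized, every R(i) sees
# all advertisers of a given VP(j).
iron_rib: dict[str, set[str]] = {}

def advertise(vp: str, router: str) -> None:
    """Record that `router` advertises virtual prefix `vp`."""
    iron_rib.setdefault(vp, set()).add(router)

# VP(j) advertised by two routers for fault tolerance:
advertise("2001:db8::/32", "R(x)")
advertise("2001:db8::/32", "R(y)")

# Any R(i) can now discover both advertisers of VP(j):
assert iron_rib["2001:db8::/32"] == {"R(x)", "R(y)"}
```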

Now, for an IRON router R(i) that is the provider for 100K PI
prefixes delegated from VP(j), R(i) needs to send a "bubble"
to both R(x) and R(y) for each PI prefix. That amounts to
200K bubbles every 600 sec, or roughly 333 bubbles/sec. If each
bubble is 100 bytes, the total bandwidth required for updating
all of the 100K PI prefixes is roughly 267 Kbps. Now, if each
PI prefix is multihomed to 2 providers, we get 2x the message
traffic, for roughly 533 Kbps total for the bubbles needed
to keep the 100K PI prefixes refreshed.
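The arithmetic above can be checked mechanically (the figures are the ones assumed in the text; the parameter names are mine):

```python
# Back-of-the-envelope check of the bubble-refresh bandwidth, using the
# figures assumed in the text.
pi_prefixes = 100_000       # PI prefixes delegated from VP(j)
vp_routers = 2              # R(x) and R(y) advertise VP(j)
refresh_interval_s = 600    # refresh every 600 seconds
bubble_size_bytes = 100     # assumed bubble size

bubbles_per_sec = pi_prefixes * vp_routers / refresh_interval_s
bandwidth_bps = bubbles_per_sec * bubble_size_bytes * 8

print(f"{bubbles_per_sec:.0f} bubbles/sec")         # ~333 bubbles/sec
print(f"{bandwidth_bps / 1000:.0f} Kbps")           # ~267 Kbps
print(f"{2 * bandwidth_bps / 1000:.0f} Kbps dual-homed")  # ~533 Kbps
```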

> If the VP routers act like DITRs or PTRs by advertising their VP in
> the DFZ, then in order to make them work well in this respect - to
> generally minimise the extra path length taken to and from them
> compared to the path from the sending host to the proper IRON router
> - I think you need at least a dozen of them.   This directly drives
> the scaling problems in the process just mentioned where the IRON
> routers continually register each of their EID prefixes with the
> dozen or so VP routers which cover that EID prefix.

I don't understand why a dozen would be needed - with IRON VP
routers, I think the only reason for multiples is fault tolerance,
not optimal path routing, since path optimization will be
coordinated by secure redirection. So just a couple (or a
few) IRON routers per VP should be enough, I think?

> Your IDs tend to be very high level and tend to specify external RFCs
> for how you do important functions in I-R.

You may be speaking of IRON/RANGER, but the same is not
true of VET/SEAL. VET and SEAL are fully functional
specifications from which real code can be, and has been,
produced.

> Yet those RFCs say
> nothing about I-R itself.  I think your I-Ds generally need more
> material telling the reader specifically how you use these processes
> in I-R.   Then, for each such process, have a detailed discussion
> with real worst-case numbers to show that it is scalable at every
> level for some worst-case numbers of EID prefixes, IRON routers etc.
> - as well as secure against various kinds of attack.

Does the analysis I gave above help? If so, I can put
it in the next version of IRON.
> >>   8 - Apart from Ivip's Modified Header Forwarding arrangements,
> >>       CES architectures involve encapsulation for tunneling
> >>       packets from ITRs to ETRs (IRON-RANGER doesn't have ITRs and
> >>       ETRs, but it still requires encapsulated tunneling).  There
> >>       are some problems with this - but they do not appear to be
> >>       prohibitive.
> >
> > IRON-RANGER calls them as ITEs/ETEs because it is possible
> > to also configure a tunnel endpoint on a host and not just
> > on routers. In terms of routers, the IRON-RANGER ITE/ETE
> > are exactly equivalent to what the other proposals are
> > calling as ITR/ETR.
> OK.  In Ivip the sending host can have an "ITR" function - though it
> is not a router and this "ITR" function doesn't advertise routes to
> the MABs (Mapped Address Blocks) inside the host.  It does however
> only handle packets sent by the host's stack which have destination
> addresses matching any of the MABs.  I am sticking with "ITR" and
> "ETR" in Ivip, to remain compatible with LISP - and because I think
> they are easier to pronounce than "ITE" and "ETE".

I'm not sure about this - an {Ingress/Egress} Tunnel
*Router* is a router that happens to terminate tunnel
endpoints. On the other hand, an {Ingress/Egress}
Tunnel *Endpoint* is, tautologically, a tunnel
*endpoint* - so why not call it as such?

Thanks - Fred
>   - Robin
