> From: Ross Callon <[email protected]>
> The papers that I read both assume that the granularity of the
> EID-to-RLOC tables will be the same as the granularity of the current
> top level BGP routing table. If this assumption is wrong, then the
> results will be correspondingly inaccurate.
> To me it seems highly unlikely that this assumption is within an order
> of magnitude of being correct.
Yes and no. The papers do assume that the granularity is basically (see
below) the same (i.e. the size profile of the mapping entries matches that of
the BGP routing table), but apart from that, I'm not sure they support the
implicit contention that caches will therefore usually be much larger than
BGP routing tables.
Looking at, for instance, "LISP-TREE: A DNS Hierarchy to Support the LISP
Mapping System" (which has extensive trace-driven simulations) we read the
following (pp. 13, 16):
"we removed from the iPlane dataset the more specific prefixes that are
mostly advertised for traffic engineering purposes [2] and wouldn't be
advertised with LISP. A total number of 112,233 prefixes are assigned
based on originating AS to their respective xTR[s]
...
During our simulations the maximum number of mapping cache entries reached
was 22,993 and 15,011 for the two traces. This is an order of magnitude
less than routes in the DFZ"
A couple of observations. First, that initial step (dropping the TE
more-specifics) likely biased their BGP dataset towards larger, less-specific
prefixes (i.e. somewhat violating the assumption that "the size profile of the
mapping entries matches that of the BGP routing table"), but I will hereafter
ignore this, since it's probably a second-order effect.
The more interesting observation is that the cache size (on two quite large,
eclectic sites) is a factor of at least 5 smaller than the overall routing
table. So even if there is some 'mapping table bloat' compared to the BGP DFZ
routing table as a whole (i.e. more mapping entries, system-wide, than there
are BGP routes), these measured results suggest that even a quite large
eventual overall bloat factor will still not result in per-site caches that
are larger than existing BGP tables.
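To make the arithmetic concrete, here is a rough back-of-envelope sketch in
Python (illustrative only; the DFZ size and the 'bloat factor' below are
assumed values for the sake of the example, not figures from the paper):

    # Back-of-envelope sketch; only the two measured figures come from the
    # LISP-TREE paper, everything else is an assumed illustration.
    mapping_entries = 112233   # prefixes in the paper's mapping dataset
    peak_cache      = 22993    # largest cache observed in their traces

    working_set_fraction = peak_cache / float(mapping_entries)  # ~0.20

    dfz_routes   = 350000      # assumed size of today's BGP DFZ table
    bloat_factor = 3           # assumed system-wide mapping-table bloat

    projected_cache = bloat_factor * dfz_routes * working_set_fraction
    print("working-set fraction ~ %.2f" % working_set_fraction)
    print("projected per-site cache ~ %d entries (vs %d DFZ routes)"
          % (projected_cache, dfz_routes))

On this reading, per-site caches stay below a full DFZ table as long as the
overall bloat factor stays under roughly 1 / working_set_fraction, i.e.
somewhere around 5 here.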
Of course, this 'downsized working set' pattern (a topic in itself, not
explored at length here - there are good reasons why most sites will have
proportionately smaller working sets in a growing Internet) will
not hold for absolutely all sites; large content providers will likely see
working sets which are much larger. However, such sites almost always have
specialized infrastructure anyway, and it's not clear to me that their
inability to use 'off the peg' solutions which work for everyone else is
really a problem - it certainly doesn't seem to be for other aspects of their
operations.
Noel
_______________________________________________
lisp mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/lisp