On Wed, Dec 31, 2008 at 1:30 AM, Tony Li <[email protected]> wrote:
> |<[email protected]> wrote:
> |> It is not known with any scientific or engineering accuracy when
> |> vanilla BGP4 will reach an economic or technical scaling limit
>
> |And what's the justification for maintaining
> |the IETF recommendation that the RIRs impose artificial restrictions
> |on the minimum allocation size? Or for that matter any restrictions at
> |all on IPv6 assignment in multihomed environments?
>
> Adding more prefixes will hasten the inevitable.
If it isn't justified quantitatively then it's just FUD. Do the folks on
this group clearly understand the harm that the Regional Internet
Registries do by denying number resources to multihomed registrants, based
on our assurance that it's a necessary evil to keep BGP stable? If we
still can't reliably quantify a BGP scaling limit, then our request that
the RIRs suppress BGP growth by suppressing resource assignment is simply
unconscionable.

> We know of no hard upper bound to BGP or (more importantly)
> the routing architecture as it currently exists. It is apparent that
> unless there is some significant progress somehow, the cost and/or
> complexity of running the current architecture is going to start to
> climb.

Climb to what?

Assume a $2k COTS PC purchased 12/31/08 with component choice optimized
for routing. Assume the ratio between BGP routes and updates per second
will follow whatever growth pattern that ratio has demonstrated over the
last 10 years; I expect it is constant or nearly so. Assume the PC builds
a trie-based FIB from the BGP RIB. Trie-based FIBs are known to exhibit
resource consumption that grows linearly with the traffic switched,
regardless of the size of the trie.

How many routes can we pack in before we either fill memory or can no
longer sustain both the 500 Mbps forwarding rate and the BGP update load?
Surely we can answer this question with engineering accuracy!

Can we not compute the same for a $2k PC 5 and 10 years ago? If we can,
then we can with scientific accuracy project where that limit will lie
for the foreseeable future. At worst, a high-speed router consists of
parallel units of lower-speed routers: a linear increase in cost tied to
the data rate but not the table size.

So if the above can be done, we will have, with scientific accuracy,
deduced a function which yields an upper bound for BGP scalability.
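To make the exercise concrete, here is a minimal back-of-envelope sketch
of the memory half of that question. Every figure in it (RAM size, memory
share, per-route RIB and FIB costs) is an illustrative assumption, not a
measurement; plug in real component numbers to get a real answer.

```python
# Back-of-envelope route capacity for the hypothetical $2k COTS PC router.
# ALL figures below are ASSUMPTIONS for illustration, not measurements.

RAM_BYTES = 8 * 2**30        # assume 8 GiB of RAM in the $2k PC
FIB_SHARE = 0.5              # assume half the RAM is usable for RIB + FIB
RIB_BYTES_PER_ROUTE = 256    # assumed BGP RIB entry: prefix, paths, attributes
FIB_BYTES_PER_ROUTE = 96     # assumed trie nodes + next-hop info per prefix

bytes_per_route = RIB_BYTES_PER_ROUTE + FIB_BYTES_PER_ROUTE
max_routes = int(RAM_BYTES * FIB_SHARE) // bytes_per_route

print(f"assumed cost per route: {bytes_per_route} bytes")
print(f"routes before memory fills: ~{max_routes:,}")
```

Repeating the same arithmetic with the component prices of 5 and 10 years
earlier gives the historical points needed to fit the projection described
above.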
Regards,
Bill Herrin

-- 
William D. Herrin ................ [email protected]  [email protected]
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004

_______________________________________________
rrg mailing list
[email protected]
https://www.irtf.org/mailman/listinfo/rrg
