>> can take 4+ hours to synchronize (only in SIDR do we talk about minutes and
>> hours as if they are "short"
>> convergence times).
> 
> As has been noted before, a *running* rpki cache server has low delays
> at each polling instance (see top of Table 1 for 100 up to 10,000
> *changed* RPKI objects). For a *new* rpki cache server that is starting
> up and fetching all 1.5 million objects, the rpki rsync delay is larger
> (4 hours). Routers would not be using this *new* server until it
> signals it is ready to serve. In the meantime, routers are being
> served by other *up and running* rpki cache servers.

So the routing system is being secured by information that is at least
several minutes behind actual topology changes. What impact will this
have on the overall number of updates, on how quickly reachability is
restored, and so on --and what's the business impact?

We don't live in a world of minutes and hours any longer.

> Would you like to share a ball park number you know that is a better estimate
> for the # eBGP speakers? It is a parameter in the model; so easy to rerun.

I would guess at least 6 per stub AS.

For transit ASes --if you look at the maps on navigators.com, you'll
find that every ISP shown there has at least 50 interconnect points,
each of which involves at least one router --and this doesn't count
private peering points. So I would say 50 is a minimal number, and
something larger is probably more likely.
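
Since the # of eBGP speakers is just a parameter in the model, the
rerun is trivial arithmetic. A back-of-envelope sketch in Python, where
the per-AS figures are the estimates above and the AS population counts
are placeholder assumptions (plug in real registry numbers before
drawing conclusions):

```python
# Lower-bound estimate of total eBGP speakers from per-AS figures.
# The per-AS speaker counts come from the estimates in this thread;
# the AS population counts below are HYPOTHETICAL placeholders.
SPEAKERS_PER_STUB = 6      # "at least 6 per stub AS"
SPEAKERS_PER_TRANSIT = 50  # "50 is a minimal number" for transit ASes

def total_ebgp_speakers(stub_as_count, transit_as_count,
                        per_stub=SPEAKERS_PER_STUB,
                        per_transit=SPEAKERS_PER_TRANSIT):
    """Lower-bound total of eBGP speakers across all ASes."""
    return stub_as_count * per_stub + transit_as_count * per_transit

# Example with assumed AS counts (35,000 stub, 5,000 transit):
print(total_ebgp_speakers(35000, 5000))  # 210000 + 250000 = 460000
```

The point being that the total is dominated by the transit-AS term even
though transit ASes are the minority, so the per-transit figure is the
one worth getting right.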

But again, this is all predicated on current numbers --building scaling
around "this is what we have today" is a recipe for disaster within a
matter of years.

Of course, SIDR has never cared about what happens ten years from now,
since that's beyond the time horizon for the actual goals at hand.

Russ
_______________________________________________
sidr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/sidr
