One quick note on the numbers below (I've not read the paper, just the commentary).
also, thanks to eric for making some work available, and taking a stab
at the numbers/sizing/speeds.

On Wed, Nov 14, 2012 at 11:36 PM, Arturo Servin <[email protected]> wrote:
> Eric
>
> Very interesting research. But I am finding it difficult to understand
> how you got 1.4 M objects.
>
> Let me try to explain what I have seen in the young deployment of
> RPKI. For simplicity let's use the "hosted" model of the RIRs.
>
> Let's suppose each RIR issues on average 1 certificate per member
> containing all the resources (v4, v6, ASNs). I would say that there
> are 40,000 entities holding IP (v4 and v6) prefixes and/or ASNs in the
> world (same as the number of ASes). What I have seen is that most
> prefix holders issue one ROA with all their resources, but let's use 5
> ROAs per organization as an average. Then we have:
>
> 40,000 certificates
> 200,000 ROAs
> 80,000 CRLs/manifests
> 40,000 ghostbusters (not very deployed, but let's count them)
>
> Am I missing something besides the Router EE? Or is it the Router EE
> that makes the difference?
>
> It seems that we agree that Ototal is the same equation, but the
> values for Cas, Eas, etc. are different.
>
> But, anyway, it's approx 350K objects. If we want to load all of them
> in 5 minutes (it would be good to define what is acceptable) we need
> to deliver objects in less than 0.00085 secs each.

(x2 for key rollover - chris)

> Regards,
> as
>
>
> On 15/11/2012 01:47, Eric Osterweil wrote:
>> Hey everyone,
>>
>> A couple of us have done some quick back-of-the-envelope style
>> calculations to help get an idea of what a global deployment of RPKI
>> (supporting a global BGPSEC deployment) might look like, if we were
>> to be able to deploy it.
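(chris again) just to sanity-check arturo's arithmetic above, a tiny
python sketch - the per-category averages are the ones he assumed, and
the strictly-sequential 5-minute load window is his framing, not mine.
note his line items actually sum to 360K, hence "approx 350K":

```python
# Re-derivation of Arturo's back-of-the-envelope numbers.
# Assumptions (from his mail): ~40,000 resource-holding entities,
# 5 ROAs per organization on average, one CRL + one manifest per CA,
# and ghostbusters counted even though barely deployed.
certs = 40_000
roas = 5 * certs              # 200,000
crls_manifests = 2 * certs    # 80,000
ghostbusters = certs          # 40,000

total = certs + roas + crls_manifests + ghostbusters
print(total)                  # -> 360000 ("approx 350K objects")

window = 5 * 60               # the 5-minute load window, in seconds
per_object = window / total
print(round(per_object, 5))   # -> 0.00083 seconds per object
```

so the 0.00085 s figure holds up (it's 300 s / 350,000; with the full
360K sum it tightens slightly to ~0.00083 s).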
>> We've written up our methodology, evaluation, and findings in a
>> short little tech-note here:
>> http://techreports.verisignlabs.com/tr-lookup.cgi?trid=1120005
>>
>> What we tried to do was calculate an extreme _lower_bound_ on what
>> the overall gather/fetch times might be for a cache trying to gather
>> a fully deployed RPKI repository. This seemed to be a particularly
>> opportune moment to raise this topic, re: some comments recently
>> posted in the thread, ``Re: [sidr] additions and changes to agenda
>> on Friday:''
>>> 1) size of a single repository (pick a large ISP as a for-instance,
>>> someone like L3 who has ~30k customers, each with 5 routes average,
>>> x2 for keyroll situations? - or better yet, make up your own set of
>>> numbers, document them and the reasoning why)
>>> 2) number of repositories in existence (say, the number of ASNs in
>>> the global table, or ...)
>>> 3) re-fetch times of every repository (3 hrs, for instance, for any
>>> object type?)
>>> 4) average network latency from fetcher to fetchee (~150ms for
>>> instance)
>>>
>>> document that and then start looking at tradeoffs and consequences?
>>
>> What we found is that by creating a _systematic_ estimate of what a
>> global RPKI would look like, we wind up with roughly 1.5 million
>> objects (we explain why we feel this is a large _underestimate_ in
>> the tech-note), and in order to ensure that all caches have received
>> updated information (such as getting new certificates/CRLs/etc.
>> disseminated), repositories may have to wait about a month (roughly
>> 32 days by our estimates) just for gatherers to reliably pick up a
>> repo's changes. Or, a month before a key compromise might get
>> remediated throughout the RPKI system.
>>
>> Our sincere hope is that this tech note will be a living document.
>> To that end, comments, corrections, and feedback are very welcome.
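on points (1), (3), and (4) in the quoted agenda thread - here's what
those strawman numbers imply for a single large repository. the
purely-sequential gatherer and the one-object-per-ROA mapping are my
assumptions for illustration, not anything from eric's tech-note:

```python
# Sketch of the single-repository sizing from the agenda thread:
# a large ISP (the L3 example) with ~30k customers, 5 routes each,
# doubled for key-rollover situations.
customers = 30_000
routes_per_customer = 5
keyroll_factor = 2
objects = customers * routes_per_customer * keyroll_factor
print(objects)                    # -> 300000 objects in one repository

# If each fetch costs the ~150 ms latency from point (4) and a
# gatherer fetches strictly sequentially (my assumption), one pass
# over this repository alone takes:
latency = 0.150                   # seconds per object fetch
hours = objects * latency / 3600
print(hours)                      # -> 12.5 hours
```

i.e. a single naive sequential pass over one large repo already blows
through the 3-hour re-fetch interval in point (3) - which is consistent
with the tech-note's conclusion that global gather times get ugly,
even before parallelism, caching, and deltas are argued about.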
>>
>> Eric
>> _______________________________________________
>> sidr mailing list
>> [email protected]
>> https://www.ietf.org/mailman/listinfo/sidr
