Hey AS,

We are very open to feedback, but the numbers below seem to be derived more qualitatively than the estimates we used. Would it be possible to frame the questions in terms of the numbers we estimated? To be clear, anyone is 100% encouraged to agree with us, disagree with us, or create their own estimates, but I think the numbers below miss some of the important aspects that we included in our evaluation. If you think some of the justification in the tech-note is wrong, I'd be much obliged if you would point it out. We tried very hard not to ballpark numbers, and to use actual deployment statistics rather than qualitative evaluation; we worried that being too qualitative might open up the possibility of measurement biases. Would you mind taking a look at the reported methodology and estimates?
On Nov 14, 2012, at 11:36 PM, Arturo Servin wrote:

> Eric
>
> Very interesting research. But I am finding it difficult to understand how
> you got 1.4 M objects.
>
> Let me try to explain what I have seen in the young deployment of RPKI.
> For simplicity let's use the "hosted" model of RIRs.
>
> Let's suppose each RIR issues on average 1 certificate per member
> containing all the resources (v4, v6, ASNs). I would say that there are
> 40,000 entities holding IP (v4 and v6) prefixes and/or ASNs in the world
> (same as the number of ASes).

And if an entity is a member of multiple RIRs? I think that would inflate the above, but perhaps just claiming there are #AS Autonomous Somethings lets us use the CIDR report's 42,000. ;)

> What I have seen is that most prefix holders
> issue one ROA with all the resources, but let's use 5 ROAs per
> organization as an average.

See, I don't understand this generalization. You can't have one ROA for non-overlapping allocations, so we already know the above is not a good mapping. I don't know why we would guess 5. That's why we chose a systematic estimator. I'd be happy to look at others, but this seems a little too qualitative for this type of estimation (imho).

Also, there's no discussion of router EE certs below (except an acknowledgement of their omission).

From here, I think the numbers are based on a foundation I have trouble following, so I think it might be easier to frame this discussion around replacing/addressing the estimates we were using?

Thanks,

Eric

> Then we have:
>
> 40,000 certificates
> 200,000 ROAs
> 80,000 CRLs/manifests
> 40,000 ghostbusters (not very deployed, but let's count them)
>
> Am I missing something besides the router EE? Or is it the router EE that
> makes the difference?
>
> It seems that we agree that Ototal is the same equation, but the values
> for Cas, Eas, etc. are different.
>
> But, anyway, it's approx. 350K objects.
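For what it's worth, Arturo's per-category tally can be summed mechanically. A quick sketch; the entity count and the per-organization averages are his stated assumptions, not measured values:

```python
# Arturo's back-of-the-envelope RPKI object counts (hosted model).
# The 40,000 entity count and 5-ROAs-per-org average are his assumptions.
entities = 40_000

objects = {
    "CA certificates": entities * 1,   # one cert per member, all resources
    "ROAs": entities * 5,              # assumed average of 5 ROAs per org
    "CRLs + manifests": entities * 2,  # one CRL and one manifest per CA
    "ghostbusters": entities * 1,      # barely deployed, but counted anyway
}

total = sum(objects.values())
print(total)  # 360000 -- i.e. "approx. 350K"; router EE certs are omitted
```

The exact sum is 360,000, which he rounds down to "approx. 350K"; as noted above, router EE certificates are left out entirely.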
> If we want to load all of them in 5
> minutes (it would be good to define what is acceptable), we need to
> deliver objects in less than 0.00085 secs each.
>
> Regards,
> as
>
> On 15/11/2012 01:47, Eric Osterweil wrote:
>> Hey everyone,
>>
>> A couple of us have done some quick back-of-the-envelope style calculations
>> to help get an idea of what a global deployment of RPKI (supporting a global
>> BGPSEC deployment) might look like, if we were able to deploy it.
>> We've written up our methodology, evaluation, and findings in a short
>> tech-note here:
>> http://techreports.verisignlabs.com/tr-lookup.cgi?trid=1120005
>>
>> What we tried to do was calculate an extreme _lower_bound_ on what the
>> overall gather/fetch times might be for a cache trying to gather a fully
>> deployed RPKI repository. This seemed a particularly opportune moment
>> to raise this topic, re: some comments recently posted in the thread, ``Re:
>> [sidr] additions and changes to agenda on Friday:''
>>> 1) size of a single repository (pick a large ISP as a for-instance,
>>> someone like L3 who has ~30k customers, each with 5 routes on average, x2
>>> for keyroll situations? - or better yet, make up your own set of
>>> numbers, document them and the reasoning why)
>>> 2) number of repositories in existence (say, the number of ASNs in the
>>> global table, or ...)
>>> 3) re-fetch times of every repository (3 hrs, for instance, for any
>>> object type?)
>>> 4) average network latency from fetcher to fetchee (~150ms for instance)
>>>
>>> document that and then start looking at tradeoffs and consequences?
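Arturo's per-object figure is just the 5-minute window divided by his rounded object count. A sketch of that arithmetic (both inputs are his assumptions):

```python
# Time budget per object if a cache must fetch ~350K objects in 5 minutes.
total_objects = 350_000   # Arturo's rounded estimate (no router EE certs)
window_secs = 5 * 60      # his 5-minute full-load target

per_object_secs = window_secs / total_objects
print(f"{per_object_secs:.6f} sec/object")  # ~0.000857 sec, i.e. under 1 ms
print(f"{total_objects / window_secs:.0f} objects/sec")  # ~1167 objects/sec
```

So the budget is roughly 0.86 ms per object (Arturo rounds to 0.00085 s), or a sustained rate of about 1,167 objects per second.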
>>
>> What we found is that by creating a _systematic_ estimate of what a global
>> RPKI would look like, we wind up with roughly 1.5 million objects (we
>> explain why we feel this is a large _underestimate_ in the tech-note), and
>> in order to ensure that all caches have received updated information (such
>> as newly disseminated certificates/CRLs/etc.), repositories may have to
>> wait about a month (roughly 32 days by our estimates) just for gatherers to
>> reliably pick up a repo's changes. Or, a month before a key compromise
>> might be remediated throughout the RPKI system.
>>
>> Our sincere hope is that this tech-note will be a living document. To that
>> end, comments, corrections, and feedback are very welcome.
>>
>> Eric
>> _______________________________________________
>> sidr mailing list
>> [email protected]
>> https://www.ietf.org/mailman/listinfo/sidr
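For scale, the two object-count estimates in this thread can be compared directly. Both figures come straight from the messages above; the tech-note's own methodology is not reproduced here:

```python
# Gap between the two estimates discussed in this thread.
arturo_total = 350_000      # Arturo's qualitative tally (router EE omitted)
technote_total = 1_500_000  # the tech-note's systematic lower-bound estimate

ratio = technote_total / arturo_total
print(f"{ratio:.1f}x")  # ~4.3x -- the thread suggests router EE certs
                        # account for at least part of this gap
```

A roughly 4.3x difference, which is why pinning down which inputs (router EE certs, ROAs per organization, etc.) drive the divergence matters before debating fetch-time conclusions.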
