On 27/10/09 9:52 AM, "Sandra Murphy" <[email protected]> wrote:
> Geoff, I thought the reason for using (and mandating) rsync was precisely
> to avoid re-load of the whole data space on each synchronization.

The same thing can be achieved with manifests, and any download protocol would suffice. (One might also hope that the download protocol provides reasonable protection from MITM attacks - sorry, I digress.) ;)

> If so, what estimates would you use of how much of the space would be new
> (what Matt calls "fresh") data at each synchronization time point?

Wouldn't that depend on the frequency of change & publication adopted by the issuing CA in its CPS, and indeed on how much change is expected at that CA?

This, I think, is a really fuzzy area - a more frequent time looks like a nicer 'catch-all' for aspects of CA cycles or local time-zone differences of (say) midnight. However, I am less than convinced that specifying an actual frequency is warranted. My recommendation would be for the relying party to assess its own fetch cycle time based on how it wishes RPKI adjustments to affect its own routing system. Regardless of the adopted cycle time, the repositories are just going to have to scale to meet it, be that 24 hrs, 12 hrs, 3 hrs, or 30 mins (in the extreme).

Cheers,
Terry

> --Sandy
>
> (This is a clarification query, which might be viewed as co-chair hat on
> or member hat on, or both, your choice.)
>
> On Tue, 27 Oct 2009, Geoff Huston wrote:
>
>> WG Co-Chair Hat OFF
>>
>> Hi Matt,
>>
>>> entities who are actually using RPKI data for routing SHOULD be fetching
>>> fresh data from the repositories at least once every three hours.
>>
>> 3 hours?
>>
>> At a first pass that seems very frequent.
>> From a server's perspective, if there are 30,000 ASes out there, each
>> running a local cache and each a distinct relying party of the RPKI
>> system, then the hit rate at the server would be about 3 per second,
>> assuming that all the relying parties spread their load evenly (which is
>> a pretty wild assumption - the worst case is that all 30,000 attempt to
>> resync at the 3-hour clock chime point). Assuming that a repository
>> sweep with no updates takes 30 seconds to complete, the server would
>> carry an average load of some 90 concurrent sync sessions. If there is a
>> local rekey, the refresh would also imply a reload of all the signed
>> products at that repository publication point. Assuming that this takes
>> 3 minutes to download, the rekey load per server would be of the order
>> of 540 concurrent rsync sessions on average. These load numbers appear
>> to me to be somewhat large.
>>
>> From the relying party's perspective, if there are 30,000 distinct RPKI
>> repository publication points and a serial form of local synchronisation
>> using a top-down tree walk, then the same set of assumptions imply that
>> the relying party needs to process the synchronisation with each remote
>> cache (including, minimally, the manifest crypto calculation) at a rate
>> of 3 per second. Assuming that there are 200,000 distinct ROAs out there
>> that are re-validated at each fetch, the numbers once more imply that a
>> 3-hour refresh would require the relying party to validate 200,000 ROAs
>> in 10,800 seconds. That probably needs some pretty quick hardware.
>>
>> These numbers are pretty much a toss at a dart board, and the draft's
>> authors may well be using a different scale model to justify this
>> recommended time cycle. What numbers did you have in mind, Matt, that
>> would make this "SHOULD" 3-hour refresh cycle feasible in a big-I
>> Internet scenario of universal use?
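[Geoff's back-of-envelope arithmetic above can be sketched in a few lines. All of the inputs - 30,000 relying parties, a 3-hour refresh cycle, a 30-second no-update sweep, a 3-minute rekey download, 200,000 ROAs - are his stated assumptions, not measured values.]

```python
# Back-of-envelope RPKI repository load model, using the figures
# assumed in the message above (illustrative assumptions, not data).

RELYING_PARTIES = 30_000        # one local cache per AS
REFRESH_INTERVAL_S = 3 * 3600   # the draft's 3-hour SHOULD (10,800 s)

# Server-side view: fetch arrival rate, assuming evenly spread load.
fetch_rate = RELYING_PARTIES / REFRESH_INTERVAL_S   # ~2.8 fetches/sec

# Average concurrent sessions = arrival rate * session duration
# (Little's law). Geoff rounds the rate up to 3/sec, giving 90 and 540.
SWEEP_S = 30       # repository sweep with no updates
REKEY_S = 180      # full re-download of signed products after a rekey
concurrent_sweep = fetch_rate * SWEEP_S   # ~83 concurrent rsync sessions
concurrent_rekey = fetch_rate * REKEY_S   # ~500 concurrent rsync sessions

# Relying-party view: ROA re-validation rate per refresh cycle.
ROAS = 200_000
validation_rate = ROAS / REFRESH_INTERVAL_S   # ~18.5 validations/sec

print(f"fetch rate        ~{fetch_rate:.1f}/s")
print(f"sweep concurrency ~{concurrent_sweep:.0f} sessions")
print(f"rekey concurrency ~{concurrent_rekey:.0f} sessions")
print(f"ROA validations   ~{validation_rate:.1f}/s")
```

The worst case Geoff notes - all 30,000 caches firing at the same clock chime - is not modelled here; it would replace the average arrival rate with a burst of 30,000 near-simultaneous sessions.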
>>
>> Geoff
>>
>> WG Co-Chair hat off
>>
>> _______________________________________________
>> sidr mailing list
>> [email protected]
>> https://www.ietf.org/mailman/listinfo/sidr
