WG Co-Chair Hat OFF
Hi Matt,
> entities who are actually using RPKI data for routing SHOULD be
> fetching fresh data from the repositories at least once every three
> hours.
3 hours?
At a first pass that seems very frequent.
From a server's perspective, if there are 30,000 ASes out there and
each is running a local cache and each is a distinct relying party of
the RPKI system, then the local hit rate at the server would be 3 per
second, assuming that all the relying parties evenly spread their load
(which is a pretty wild assumption - the worst case is that all 30,000
attempt to resync at the 3 hour clock chime point). Assuming that a
repository sweep with no updates takes 30 seconds to complete, the
server would have an average load of some 90 concurrent sync sessions.
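A quick sketch of that steady-state arithmetic, using the assumed figures above (30,000 relying parties, a 3 hour cycle, a 30 second sweep):

```python
# Back-of-envelope check of the steady-state server load described above.
# All inputs are the assumptions from the text, not measured values.
relying_parties = 30_000        # one local cache per AS, each a distinct RP
refresh_interval_s = 3 * 3600   # the draft's 3 hour refresh cycle
sweep_duration_s = 30           # assumed time for a no-update repository sweep

arrival_rate = relying_parties / refresh_interval_s    # fetches per second
concurrent_sessions = arrival_rate * sweep_duration_s  # Little's law: L = lambda * W

print(f"{arrival_rate:.2f} fetches/sec")        # prints "2.78 fetches/sec"
print(f"{concurrent_sessions:.0f} concurrent")  # prints "83 concurrent"
```

Rounding the arrival rate up to 3 per second gives the ~90 concurrent sessions quoted above.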
If there is a local rekey then the refresh would also imply a reload
of all the signed products at this repository publication point.
Assuming that this would take 3 minutes to download, the rekey load
per server would be of the order of 540 concurrent rsync sessions as
an average load. These load numbers appear to me to be somewhat large.
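The same arithmetic for the rekey case, where each fetch becomes a full reload of the publication point (again using the assumed 3 minute download time):

```python
# Rekey case: every relying party re-downloads all signed products.
relying_parties = 30_000
refresh_interval_s = 3 * 3600
reload_duration_s = 3 * 60      # assumed 3 minutes to pull the full repository

arrival_rate = relying_parties / refresh_interval_s
concurrent_rsync = arrival_rate * reload_duration_s

print(f"{concurrent_rsync:.0f} concurrent rsync sessions")  # prints "500 concurrent rsync sessions"
```

With the arrival rate rounded up to 3 per second, that is the 540 concurrent rsync sessions cited above.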
From the relying party's perspective, if there are 30,000 distinct
RPKI repository publication points, and a serial form of local
synchronisation using a top-down tree walk, then the same set of
assumptions imply that the relying party needs to process the
synchronisation with each remote cache (including, minimally, the
manifest crypto calculation) at a rate of 3 per second.
Assuming that there are 200,000 distinct ROAs out there that are
re-validated at each fetch, then once more the numbers imply that a 3
hour refresh would mean that the relying party would need to validate
200,000 ROAs in 10,800 seconds. That probably needs some pretty quick
hardware.
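The implied validation rate on the relying party side, from the assumed ROA count and cycle time:

```python
# Relying-party side: sustained ROA validation rate implied by the assumptions.
roas = 200_000                  # assumed number of distinct ROAs
refresh_interval_s = 3 * 3600   # 10,800 seconds per refresh cycle

validation_rate = roas / refresh_interval_s
print(f"{validation_rate:.1f} ROA validations/sec")  # prints "18.5 ROA validations/sec"
```

Sustaining ~18.5 crypto validations per second, continuously, on top of the fetch load, is the "pretty quick hardware" point.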
These numbers are pretty much a toss at a dart board, and the draft's
authors may well be using a different scale model to justify this
recommended time cycle. What numbers did you have in mind, Matt, that
would make this "SHOULD" 3 hour refresh cycle feasible in a big-I
Internet scenario of universal use?
Geoff
WG Co-Chair hat off
_______________________________________________
sidr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/sidr