I keep hearing:
"this is not fast enough"
"we have a requirement to make it faster"
etc.
But nobody has answered Tim:
> I would really like to echo Chris's last paragraph here. What do you
> think is a reasonable time to propagate from an operator editing the
> RPKI (A) -> 99.9% of Bs?
>
> I understand that half a day is way too long. Instantaneous is
> theoretically impossible when BGP and RPKI are separate. But is there
> really no reasonable pragmatic indication of what would be 'good
> enough' for the real world? E.g. if we can come up with a structure
> that enables repositories to support 100k RP tools ('B', assuming 2
> gatherers per ASN) getting their *updates* (full dump separate thread
> please) every 10 minutes, is that good enough?
>
So, instead of continuing a futile discussion, why not start working
on a workable requirement?
I have read some interesting approaches on this list for making the
repositories more reliable; giving a reasonable indication of your
expectations as operators would, I think, be a good start.
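To make that expectation concrete, here is a back-of-envelope sketch of the repository-side load implied by the numbers already quoted in this thread (Tim's 100k RP tools polling every 10 minutes, Chris's "10k of data" per update); all figures are illustrative assumptions from those mails, not measurements:

```python
# Back-of-envelope load estimate for an RPKI publication point,
# using only the figures quoted in this thread (illustrative).

RP_TOOLS = 100_000          # 'B' gatherers: Tim's 2-per-ASN assumption
POLL_INTERVAL_S = 10 * 60   # one *update* fetch every 10 minutes
DELTA_BYTES = 10 * 1024     # Chris's "~10k of data" per update

requests_per_s = RP_TOOLS / POLL_INTERVAL_S
bandwidth_bps = requests_per_s * DELTA_BYTES * 8

print(f"{requests_per_s:.0f} requests/s")             # ~167 req/s
print(f"{bandwidth_bps / 1e6:.1f} Mbit/s aggregate")  # ~13.7 Mbit/s
```

Even at 100k relying parties this is a modest aggregate load, which suggests the hard part of the requirement is agreeing on the interval, not serving it.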
/as
On 19/12/2012 14:22, Tim Bruijnzeels wrote:
> Hi Danny, WG,
>
> People have mentioned that if the security were somehow part of the updates
> themselves, then you could have security at the speed of updates. I don't
> see how this could work; it would have to be a completely different set of
> standards from what's currently being worked on. Most likely it would
> result in lots of cycles on routers, but without any concrete proposals on
> this there really isn't much more I can sensibly say about this approach
> here.
>
> So for the sake of argument let me just stick to the model that we have now
> where the RPKI is an external system, separate from the BGP updates.
>
> This means that when operators make changes in BGP, they will also need to
> edit part of the RPKI.
>
> Chris described a path from roa (or any rpki object) creator to consumer:
>
> On Dec 18, 2012, at 10:52 PM, Christopher Morrow <[email protected]>
> wrote:
>> The architecture as laid out is, from 'roa creator' to 'roa consumer',
>> roughly:
>> A publication point (nominally one per roa-creator)
>> B gatherers (nominally one per roa-consumer)
>> C internal-cache-systems (some number per roa-consumer)
>> D routers
>
> Strictly speaking there is a step before A: creating the new objects.
> Depending on the system (and especially if a remote publication server is
> used) there may be a short delay before the objects are actually published.
>
> But the basic model is right in my opinion. So following Chris's line of
> thought:
>> (yes, there is the iana->rir part of the tree
>> yes, there are more than just ROAs in the repositories)
>>
>> So, the part that Randy and Danny and Eric are talking about, as
>> far as the global system goes, is the A -> B conversation. Once you get
>> beyond B (to C and D) the problem is entirely inside some operator's
>> network and nothing on the outside matters.
>>
>> Essentially the problem here is distribution of 10k of data to ~40k
>> endpoints (today, it'll grow tomorrow, fine) in ~2 mins time (or 5
>> mins or 10 mins or ... someone draw a line in the sand so we know what
>> the target is)
>
>
> I would really like to echo Chris's last paragraph here. What do you think is
> a reasonable time to propagate from an operator editing the RPKI (A) -> 99.9%
> of Bs?
>
> I understand that half a day is way too long. Instantaneous is theoretically
> impossible when BGP and RPKI are separate. But is there really no reasonable
> pragmatic indication of what would be 'good enough' for the real world? E.g.
> if we can come up with a structure that enables repositories to support 100k
> RP tools ('B', assuming 2 gatherers per ASN) getting their *updates* (full
> dump separate thread please) every 10 minutes, is that good enough?
>
>
> Cheers
> Tim
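Tim's 10-minute figure can also be turned into a propagation bound for his "99.9% of Bs" question. Under the simplifying (and hypothetical) assumption that each gatherer polls on its own schedule with a uniformly random phase, 99.9% of Bs see a change within roughly 99.9% of one polling interval, plus whatever delay precedes publication at A. A minimal simulation of that idealization:

```python
# Sketch: time for a fraction of relying parties to pick up a change,
# assuming each RP polls every poll_interval_s seconds at an independent,
# uniformly random phase (an idealization; real schedules may cluster).

import random

def time_to_reach(fraction, poll_interval_s, n_rps=100_000, seed=42):
    """Return the time after publication at which `fraction` of the
    n_rps relying parties have polled at least once."""
    rng = random.Random(seed)
    # Each RP's next poll after publication happens at its phase offset.
    phases = sorted(rng.uniform(0, poll_interval_s) for _ in range(n_rps))
    k = int(fraction * n_rps) - 1
    return phases[k]

t = time_to_reach(0.999, 10 * 60)
print(f"99.9% of RPs updated after ~{t:.0f}s (analytically ~599s)")
```

So with a 10-minute update interval, 99.9% coverage lands just under 10 minutes after publication in this model; the open question the thread is circling is whether that is "good enough".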
_______________________________________________
sidr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/sidr