I'm not a fan of Word in general... but this comment numbering rules ;)

On Mon, Mar 18, 2013 at 10:33 AM, Arturo Servin <arturo.ser...@gmail.com> wrote:
> Hi,
>
>         Some comments about Steve comments:
>
> Comment 1 (also related to 44): I agree that ISPs may operate caches
> on behalf of end-user ASNs, but I also think that more than one cache may be
> operated by a single ISP. Imagine a global ASN operator with routers in
> several places. Are they going to have just one master cache? Or will
> they have one or two (backup), and in just one location? Considering
> this, even the 40k clients may be low as a worst case, IMHO.

oops, so... we need to be clear on terminology here. There are at least:
  o publication points - places/machines where AS operators make their
authoritative information available to the world.
  o 'gatherer' machines - machines an AS operator would use to gather
repository data from all of the publication points.
  o caches - systems inside of an AS which provide the digested data
to the routers of that operator's network.

Note that:
  1) a publication point may share common hardware/network/etc. with
another publication point (shared/hosted model)
  2) an AS operator may operate more than one 'gatherer' - perhaps they
partition the publication space, or some other strategy is in use
(local decisions, not really relevant, I think, for the overall discussion)
  3) a cache may be internal-facing only ("all for me, only!") or it may
provide cache services to external parties (customers,
random-joe-internet-operator, researchers).
  4) it's entirely up to the AS operator to decide how many caches they
run, and in what configuration, in their network.
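To make the three roles concrete, here's a minimal sketch of how they relate (all class and attribute names are hypothetical, not taken from any real RPKI implementation):

```python
# Hypothetical sketch of the three roles described above; none of these
# names come from a real RPKI implementation.

class PublicationPoint:
    """Authoritative repository data an AS operator publishes to the world."""
    def __init__(self, name, objects):
        self.name = name
        self.objects = dict(objects)  # object name -> content

class Gatherer:
    """Pulls repository data from ALL publication points, for one operator."""
    def __init__(self, publication_points):
        self.publication_points = publication_points
        self.repository = {}

    def gather(self):
        # Worst case, every gatherer visits every publication point.
        for pp in self.publication_points:
            self.repository.update(pp.objects)
        return self.repository

class Cache:
    """Serves digested data to the routers inside the operator's network.

    May be internal-only, or also serve external parties (note 3 above).
    """
    def __init__(self, gatherer, external_facing=False):
        self.gatherer = gatherer
        self.external_facing = external_facing

    def digest(self):
        # In reality this step would validate and condense the data;
        # here we just copy it.
        return dict(self.gatherer.repository)
```

One operator might run several gatherers that partition the publication space (note 2), and any number of internal- or external-facing caches (note 4).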

I think that most of Tim/Oleg's doc is about publication points and
the load placed on the entire system by gatherers... I don't think
they tackle the cache part at all in their doc.

As to the end of the comment in question:
 "So, I don't think this parameter is a good estimate (worst case, but
probably not representative)."

I think it's fair to state: "The worst case today is #ASes == #Gatherers",
which is the intent of Tim/Oleg's work here... I think :)

I think another part of Stephen's comment is really that there's the
potential for an AS operator to run a gatherer for his/her customers
to benefit from... That doesn't seem unreasonable, but it's not
knowable how many WILL do that, so I'd err on the side of the worst case
and jump to #ASes == #gatherers (as long as that's stated clearly).

As to the memory limits imposed by the platforms chosen for the study,
it's probably fair to note that any system which drives toward
max memory is going to hit a performance cliff when it is
forced to swap :( Try to avoid swap... it's slow (so I hear).

> Comment 10: Not sure if I understand the point, but because we do not
> fully trust BIND we also operate DNS with NSD. In fact, I agree with
> the authors that we have a problem depending on just one implementation
> of rsync.
>
> Comment 28: For LACNIC, ~2,500 members and ~2,200 PI. But as opposed
> to RIPE NCC, PI holders are members as well (not sure if this is
> relevant to the numbers).
>
> Comment 31: I do not have numbers of prefixes per ROA, but it is a
> number that we could get soon.
>
> Comment 44: Does anybody have a pointer to that proposal?

I suspect this is actually a comment about caches, not gatherers?
Either way, I'm not sure a 2x factor matters in the end here...

> Comment 46: This is hard to answer. I have heard some operators asking
> for minutes to have fresh new ROAs. Some others do not mind and request
> objects every few hours. What is the middle or agreed value? (also
> related to 55)

this, I think, is from some of the discussions Danny/Eric have been
driving about the time to get CRL updates out everywhere, or the time to
get a new ROA published and seen 'everywhere'. I believe we ended up
talking about a time on the order of 2-5 minutes for: "have new
prefix, make ROA, get it published and distributed".

I don't know that that number matters a LOT today; it will certainly
be much more interesting as a target in a few more years (when
deployment is further along).

> Comment 57: I think the reason is that CDNs today work on http and we
> know them more or less well. Although possible it would be more
> expensive and complex to have rsync-CDNs. Also this is related to

'rsync CDN' == 'lots of rsync servers', really, right? Since you
are essentially making a new 'cache' (at the CDN) for each client... the
only thing you're winning with this approach (and rsync) is moving the
TCP endpoint closer (maybe) to your client/requestor.

Has anyone (not to derail too far) thought about git or another
network-based RCS system for this? It seems that (as much as I was
poking the tiger on the IETF thread about this) the checkpointing and
such is attractive... done right, it'd even give the history data that
some folks want for the changes to objects.

> comment 54, the "single point" is "magically" distributed by the CDN

for comment 54, I think the text is talking about: "with all RPs
talking to all PubPoints, as the time horizon on changes gets smaller
(from 1-3 hrs to 10 mins), each RP will have to talk to the PubPoints
more often, leading to more concurrency at each PubPoint over time".

Making the overall connection time shorter for these conversations
could alleviate the concurrency problem.
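As a rough back-of-the-envelope (the 43,000 figure is my assumption for the number of ASes/gatherers, in the spirit of the ~40k client worst case discussed above):

```python
# Worst case: every AS runs its own gatherer, and each gatherer contacts
# every publication point once per refresh interval. The AS count below
# is an assumed round number, for illustration only.

NUM_GATHERERS = 43_000  # assumption: #gatherers == #ASes

def mean_arrival_rate(interval_seconds):
    """Average connection attempts per second seen by ONE publication point."""
    return NUM_GATHERERS / interval_seconds

rate_3h = mean_arrival_rate(3 * 3600)   # ~4.0 conn/s at a 3-hour horizon
rate_10m = mean_arrival_rate(10 * 60)   # ~71.7 conn/s at a 10-minute horizon
```

Shrinking the horizon from 3 hours to 10 minutes multiplies the average arrival rate by 18, which is why shortening each conversation (or spreading the fetches out) matters.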

> responding with the "closest" point as it is done today for http content.
>
>         Hope it helps for the discussion and to improve the document.
>
> Regards,
> as
>
> On 3/14/13 6:03 PM, Stephen Kent wrote:
>> During the first SIDR session I commented that I agree with the need to
>> explore new
>> paradigms for distributing RPKI repository data, but that I was very
>> disappointed with
>> the analysis being used to motivate the exploration.
>>
>> Attached are my comments on the I-D in question.  I have used MS Word
>> with change control
>> to associate the comments (and some edits) with the original text. I
>> rendered this as a PDF,
>> so that list members do not need to use MS Word.  I am happy to provide
>> the Word doc to the
>> authors if they wish.
>>
>> Steve
>>
>>
>> _______________________________________________
>> sidr mailing list
>> sidr@ietf.org
>> https://www.ietf.org/mailman/listinfo/sidr
>>