Hey Steve,

On Nov 8, 2011, at 6:37 PM, Stephen Kent wrote:

<snip>

>> ...
>> 1 - In the draft, there is discussion of the global agreement to move to 
>> algorithm B.  Who ensures the global agreement of B, and who chooses
>> and ensures agreement of the various dates?
> 
> the IETF is responsible for the alg selection, just as it has been for other 
> algs used with all other IETF standard protocols. Based on Terry's comments, 
> I think we will state that the RFC defining the transition dates will be 
> coordinated with the NRO and IANA.

Kewl, will look at the next version, thanks!

> 
>> 2 - (Double checking that I have read this right), if the motivation for an 
>> algorithm roll is discovery of a weakness in the current algo,
>> no CA can roll until this top-down process reaches them, right (months or 
>> years)?  I see this is broached in Section 11, but it doesn't seem
>> to be answered there?  It sounds like the authors don't intend to address 
>> this any further than acknowledging the suboptimality of this
>> approach?
> 
> The motivation for alg transition is anticipated weakness in the current alg 
> suite, more so than a sudden discovery of a vulnerability. Although there 
> have been major headlines about alg breaks, these are usually FUD, and do not 
> motivate an immediate transition to a new alg suite.  So, no, we are not 
> proposing a process that deals with a sudden alg break.

k.

> 
>> 3 - Section 11 also prompted another question I had throughout: what happens 
>> if a CA doesn't meet these deadlines?  It seems like that
>> CA is simply orphaned and cannot participate in routing anymore (until they 
>> catch back up)?
> 
> It's easier to discuss this if you pick a specific phase. Which one did you 
> have in mind?

OK.  For this particular question, I think I understood the draft to be saying 
that at the end of phase 4, there may be fewer verified entities in the global 
system (this was discussed in the last paragraph of Section 11).  I believe the 
implication is that if any CA doesn't keep up (so to speak) they are considered 
invalid and therefore would be un-routable?

> 
>> From these three questions, I came to the following clarification 
>> suggestions:
>> 1 - I see the phases in this draft as defining a formal process. However, I 
>> don't see any error-legs (i.e. what happens if there needs to
>> be an abort, rollback, whatever you want to call it).  I think it is 
>> important to outline how this process can react if there are any
>> unforeseen failures at each phase.  I'm not sure that we need to be terribly 
>> specific, but perhaps we can agree that _something_ could go
>> wrong and cause the need for an abort?  I think this is quite common in 
>> process-specifications, unless we think nothing will ever go wrong
>> in this process? :)
> 
> What one would do is phase-specific. But, in general, the timeline 
> could be pushed back if there is a good reason to do so. I think Terry's 
> suggestion helps in this regard. If we view the NRO as representing the RIRs, 
> and the RIRs as representing ISPs, then there is a path for a CA or RP that 
> has a problem to make that problem known and addressed.

I think this needs to be codified for each phase in the draft.  This would seem 
to be a simple necessity that comes from defining a formal process.

> 
>> 2 - Related to the above, I would imagine (but maybe this is just me?) that 
>> in the event of a failure at one phase or another,
>> there may need to be a rollback procedure specified.
> 
> I'm not sure that there is a need for a rollback, per se.  Pick a phase and a 
> failure mode as an example so we can explore that issue.

As per the above, I think each phase should define its starting requirements 
(which I think are there), and what to do if its success requirements are not 
met (exceptions, error legs, etc.).  I don't think we need to rathole any 
strawmen to agree that it is possible that this process may need to be aborted 
(even if only in extremely rare cases), and this document should detail 
how this will be done at each phase.  Indeed, I don't think this document 
should even try to enumerate the specific types of failures.  Rather, it should 
just tell people what to do if a failure is deemed to have occurred.

> 
>> 3 - I think a lot of complexity in the overall draft (and my above comments) 
>> could be addressed by allowing CAs to choose their own
>> algorithms and their own schedules.  Could this be considered?  I recall we 
>> discussed how this might negatively affect the performance of
>> the current design's complexity.  It's possible that we will simply 
>> come to loggerheads here, but (design issues aside) do people think
>> CA operators should have the ability to protect themselves as soon as they 
>> can move to a new algo?
> 
> One cannot allow each CA to choose its own set of algs, because that local 
> choice has an impact on ALL RPs. That's what economists call externalization, 
> and it's a bad thing. Having each CA choose its own schedule is also a 
> non-starter. Geoff Huston observed that unless we adopt a top-down transition 
> plan, the repository could grow exponentially! That's an unreasonable burden. 
> With a top-down plan, CAs already have limits imposed on them, i.e., a 
> lower-tier CA cannot switch to a new alg until its parents support the new alg.

Hmm, I think there might be a bit of an oversimplification in this perspective. 
 The "externalization" you're describing sounds a lot like an operational 
entity's right to govern their own operational choices.  In other words, 
regardless of whether this is a "bad" thing or not, we've already got it now.  
Is trying to legislate a new operational model that supplants an existing one 
the job of this draft?

Does the exponential growth come from the need for a predecessor to necessarily 
employ the union of all crypto algos that exist in the hierarchy below them?  I 
don't think that should be a requirement here (perhaps exactly for the reason 
you just mentioned).  Why can't a cryptographic delegation chain be composed of 
heterogeneous algos?  It solves this complexity problem quite elegantly.  One 
might even call it algorithm agility...
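To make the contrast concrete, here's a toy back-of-the-envelope model (my own simplification, not Geoff's actual analysis; the function names and growth assumptions are mine): if validation material must exist for every distinct combination of suites along a chain, object count grows as suites^depth; if a chain may mix suites, each CA publishes only per suite it itself uses, so growth stays linear.

```python
def paths_union_model(depth: int, suites: int) -> int:
    """Objects needed if every link in a chain of length `depth` may
    independently use any of `suites` algorithm suites, and material
    must exist for each distinct combination along the path."""
    return suites ** depth

def paths_mixed_chain_model(depth: int, suites: int) -> int:
    """Objects needed if chains may be heterogeneous: each CA signs
    under only the suite it itself uses, so growth is linear."""
    return suites * depth

for depth in (2, 4, 8):
    # e.g. at depth 8 with 3 suites: 6561 vs. 24
    print(depth, paths_union_model(depth, 3), paths_mixed_chain_model(depth, 3))
```

Obviously real repository growth depends on details this model ignores (product types, re-signing, grace periods), but it shows why a union-of-suites requirement blows up while per-link heterogeneity does not.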

Sorry, I couldn't help adding some levity... :-P

> 
>> 4 - Finally, there is a note that all algorithms must be specified in 
>> I-D.ietf-sidr-rpki-algs.  While I am not challenging that, I would
>> like to point out that having an analogous requirement in DNSSEC made life a 
>> little challenging to add new algos (specifically GOST) without a
>> lot of people trying to assess the algo's worthiness w/i the IETF. I 
>> thought, though I could be mistaken, that several people lamented
>> having that requirement.  So, perhaps it would make sense to soften it here?
> 
> DNSSEC was initially less rigorous in its alg criteria, and the result was 
> not great. We are avoiding those problems.

Could you elaborate please?

Eric
_______________________________________________
sidr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/sidr
