Tim,

Hi,

As you may have noticed, my name was added to the author list, so it will come as no surprise that I have read this document and agree with its content.

I believe that all RIRs share both operational concerns outlined in section 3: (1) Operational fragility, and (2) resource transfers. One thing to note about the second case is that we don't have this problem right now because we only offer a hosted service where our members cannot create delegated certificates. However, this will change when we enable the provisioning (up-down) system and support non-hosted CAs that may delegate further.

Section 4 describes, in very general terms, two alternative approaches to counter these concerns.

The first approach has my strong preference. I believe it's simple to explain and implement, effective against both concerns, and I do not see any security issues with it. The change boils down to this: when doing top-down validation, just accept the *intersection* of the resources listed on a certificate and its parent. The idea of keeping track of resources explicitly is not new: we already do this when 'inherit' is used. We have running code for this in our validator; it took us a day to implement. The feature is off by default, of course, but it's enabled without problems on a public instance that we're running: http://localcert.ripe.net:8088/
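To make the rule concrete, here is a minimal sketch of intersection-based validation. This is illustrative only: the function name and data model are hypothetical, not taken from any real validator, and real resource sets are IP prefixes and AS number ranges rather than the bare AS numbers used here.

```python
# Hypothetical sketch of the "intersection" validation rule for RPKI
# resource certificates. Resources are modeled as plain sets of AS
# numbers for simplicity; a real implementation would use prefix/range
# structures per RFC 3779.

def effective_resources(cert_resources, parent_effective):
    """Return the resources a certificate is considered valid for.

    Under the relaxed (intersection) rule, a certificate listing
    resources outside its parent's set is not rejected outright; it is
    simply treated as valid only for the overlap with its parent.
    """
    if cert_resources == "inherit":
        # 'inherit' already means: take the parent's effective set as-is.
        return set(parent_effective)
    return set(cert_resources) & set(parent_effective)

# Example: parent holds AS64496-AS64499; child lists AS64498 plus the
# out-of-scope AS65000. Strict validation would reject the child;
# intersection validation accepts it, but only for AS64498.
parent = {64496, 64497, 64498, 64499}
child = effective_resources({64498, 65000}, parent)
```

The point of the sketch is that the child's over-claim is quietly clipped rather than treated as a fatal error, which is exactly what makes the rule robust against a parent shrinking its resources out from under an already-issued subordinate certificate.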
Since you implemented this capability based on what we all agree is a rough description of the
intended functionality, can you be very confident that it is "correct"?

More to the point, we all agree that it is very bad for an RIR or NIR to issue a cert that would break the current validation algorithm. In a prior message I suggested several checks that I thought every operating CA should perform to detect and avoid such errors. Has RIPE been performing checks such as these? Might adoption of relaxed validation rules minimize the perceived need to perform such checks, i.e., encourage sloppiness?
The second approach in section 4 was presented at the last IETF (draft-barnes-sidr-tao). Essentially it allows for transfer signalling through an up-down like protocol. Technically this approach can help to deal with transfer issues (2).
The cited approach is designed expressly to deal with transfer issues, not with the more general problem of errors by CAs. One ought not conflate these two issues.
The 'happy case' scenarios assume, though, that all involved parties play nice and keep playing nice. There are many moving parts here, and it's not clear to me what happens when one party walks away (e.g. goes offline permanently). Additionally, the timing of certificate shrinking in case of a cancel is not clear to me and introduces complexity: is it a live transfer or not? Who signals the transfer? When is it canceled, before or after shrinking and/or swinging? Can a cancel itself be canceled? But, more importantly, this approach offers no protection against the operational fragility case (1). Furthermore, it adds a lot of complexity, which has huge costs in development (many months) and maintenance, and it introduces more operational fragility and potential interop issues between implementations.
Since we have NO description of resource transfer at the level of detail provided in the TAO I-D, it seems a bit premature to describe it as being more complex than the suggested alternative.

Steve
_______________________________________________
sidr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/sidr
