Tim,

...

The first approach has my strong preference. I believe it is simple to explain and implement, effective against both concerns, and I do not see any security issues with it. The change boils down to this: when doing top-down validation, accept the *intersection* of the resources listed on a certificate and those on its parent. The idea of keeping track of resources explicitly is not new: we already do this when 'inherit' is used. We have running code for this in our validator; it took us a day to implement. The feature is off by default, of course, but it is enabled without problems on a public instance that we are running: http://localcert.ripe.net:8088/
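To make the proposal concrete, here is a minimal sketch of the intersection rule described above. This is illustrative only, not the RIPE validator's actual code: resources are modelled as plain Python sets of integers, whereas real RPKI resources are IP prefix and AS number ranges (RFC 3779), and the function names are invented for this example.

```python
# Hedged sketch of "relaxed" top-down validation: instead of rejecting a
# certificate that over-claims, accept the intersection of its claimed
# resources and the parent's effective resources, and report the excess.

def effective_resources(claimed: set, parent_effective: set) -> tuple:
    """Return (effective, overclaimed) under relaxed validation."""
    effective = claimed & parent_effective      # what the child really holds
    overclaimed = claimed - parent_effective    # warn on this, don't reject
    return effective, overclaimed

# Example: the child still lists a resource its parent no longer holds.
parent = {1, 2, 3, 4}
child_claims = {3, 4, 5}
eff, over = effective_resources(child_claims, parent)
print(sorted(eff))   # [3, 4]  -- accepted
print(sorted(over))  # [5]     -- flagged as an over-claim warning
```

Under strict validation the over-claim of `5` would invalidate the whole certificate (and everything below it); under the relaxed rule only the excess resource is ignored.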

What I forgot to mention above is that we do issue warnings on over-claiming certificates. So this is visible, although so far we have not seen these warnings in the RIR production repositories.
Can you say what tests you perform, and how they will change if we adopt a relaxed path validation algorithm?

More to the point, we all agree that it is very bad for an RIR or NIR to issue a cert that would break the current validation algorithm. In a prior message I suggested several checks that I thought every CA operator should perform to detect and avoid such errors. Has RIPE been performing checks such as these? Might adoption of relaxed validation rules minimize the perceived need to perform such checks, i.e., encourage sloppiness?

It is not my intention, and I believe it's safe to say that it's not the intention of others, to encourage sloppiness, but to minimise impact and improve resiliency if an error does occur.
I understand this motive, but we have seen many examples over time where accepting non-conforming data results in progressive degradation of implementations. This is why the RPKI specs mandate RP enforcement of the criteria that CAs are supposed to follow in issuing certs. This is in contrast to the normal PKI specs, which mandate CA cert generation rules but do not generally require RPs to check that certs have been generated consistent with these rules. In the Web PKI context, this has resulted in a lot of "broken" certs being issued, and then accepted by RPs, with not so great results. So, I think it is hard to increase resilience yet maintain sufficient feedback to CAs to help encourage standards compliance.
Speaking for our own CA implementation: we have code in place to ensure that we cannot create over-claiming certificates. We also have code to re-issue products as needed when resources are shrunk (e.g. omitting certain ROA prefixes when the resources are no longer held by the CA). However, there are a lot of moving parts here, and no software is without bugs. This problem gets worse when certificates are received from, or issued to, third parties. In our current software we manage all CAs in our hierarchy locally, so we *know* when something changes and we can do the above. When dealing with third parties there may be a significant time during which parent and child are out of sync about the resources held by the child.
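The two CA-side safeguards mentioned above (refusing to issue an over-claiming certificate, and pruning ROAs after a shrink) can be sketched as follows. This is a hypothetical illustration in the same simplified set model as before, not RIPE's actual implementation; the function names are assumptions.

```python
# Hedged sketch of pre-issuance and post-shrink checks a CA might run.
# Resources and ROA prefixes are modelled as plain sets for illustration.

def check_issuance(parent_resources: set, requested: set) -> set:
    """Return the over-claimed resources; an empty set means safe to issue."""
    return requested - parent_resources

def prune_roa_prefixes(roa_prefixes: set, held_resources: set) -> set:
    """Keep only ROA prefixes still covered by the CA's current holdings."""
    return roa_prefixes & held_resources

assert check_issuance({1, 2, 3}, {2, 3}) == set()         # safe to issue
assert check_issuance({1, 2, 3}, {2, 3, 9}) == {9}        # block: over-claim
assert prune_roa_prefixes({2, 3, 9}, {1, 2, 3}) == {2, 3}  # drop stale prefix
```

The point of contention in the thread is exactly the gap these checks cannot close: between a parent shrinking a child's resources and the child re-issuing, a third-party child can over-claim despite both sides running such checks.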
See my question above about how you ensure that certs don't over-claim, and how the mechanisms would change if path validation were relaxed.


The second approach in section 4 was presented at the last IETF (draft-barnes-sidr-tao). Essentially it allows for transfer signalling through an up-down like protocol. Technically this approach can help to deal with transfer issues (2).
The cited approach is designed expressly to deal with transfer issues, not with the more general problem of errors by CAs. One ought not conflate these two issues.

Okay, fair enough. As long as it's clear that this is the scope.
The first sentence of the Abstract for TAO says:

   This document defines an extension to the rpki-updown protocol to
   provide support for transferring Internet Number Resources from one
   INR holder to another.

That seems pretty clear.

Steve
_______________________________________________
sidr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/sidr