On Thu, Aug 31, 2017 at 5:21 PM, Jakob Bohm via dev-security-policy <[email protected]> wrote:
> On 31/08/2017 22:26, Ryan Sleevi wrote:
>
>> Agreed. But in general, in order to maintain interoperability, there's a
>> process for building consensus, and repurposing extensions as you propose
>> is generally detrimental to that :)
>
> But sometimes necessary.

There is a tremendous burden of proof to demonstrate this, and it's the path of last resort, not first.

>>> Moving the information to a new extension would basically just bloat
>>> certificates with more redundant data to be sent in every certificate
>>> based protocol exchange. But changing the original decision in a
>>> backwards compatible manner may still be a good idea, either as a
>>> "stricter security policy" or (better, if it works well in controlled
>>> tests) as part of an update RFC for the IETF standard that specified the
>>> original semantics.
>>
>> I can understand your perspective, but I must disagree with you that it's
>> "backwards compatible". It isn't - it's a meaningful semantic change that
>> breaks interoperability.
>
> I meant backwards compatible and interoperable with the actual real
> world CAs (as opposed to all the CAs that could be built under the old
> standard). Compare to how the standard was changed from DNS name in the
> CN element to DNS name exclusively in the SAN extension, but hopefully
> with less transition time needed.

I believe this may be operating on an incomplete knowledge of history. RFC 2818 (aka the HTTPS RFC) always indicated that commonName was deprecated (and SAN was preferred), and nameConstraints has similarly always expressed a path for constraining DNS names. So, from the get-go with the standards, it was possible to name constrain DNS. Unless you were referencing certificates prior to them being bound to domain names, but I can't see how that would be relevant, since the context is about DNS names.
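(For reference, the dNSName constraint matching that RFC 5280 describes can be sketched roughly as follows. This is an illustrative simplification of the matching rule, not a complete path-validation implementation; the function names are mine, not from any standard or library.)

```python
# Sketch of RFC 5280 section 4.2.1.10 dNSName constraint matching:
# a DNS name satisfies a constraint if the name can be constructed by
# prepending zero or more labels to the constraint. Illustrative only.

def dns_name_within_subtree(name: str, constraint: str) -> bool:
    name = name.lower().rstrip(".")
    constraint = constraint.lower().rstrip(".")
    # "example.com" permits "example.com" and "www.example.com",
    # but not "badexample.com".
    return name == constraint or name.endswith("." + constraint)

def permitted_by_constraints(name: str, permitted: list) -> bool:
    # Once permittedSubtrees for dNSName are in effect, a name outside
    # every permitted subtree is rejected; an empty list permits nothing.
    return any(dns_name_within_subtree(name, c) for c in permitted)
```
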
>> Yes, it means that technically constrained sub-CA certificates may be
>> 'bloated' in order to ensure the desired degree of security. That's a
>> trade-off for the compromises necessary to avoid audits. That's not,
>> however, an intrinsic argument against the process, or a suggestion it
>> cannot be deployed.
>
> Avoiding audit failures is a legal, not a technical need. Anything that
> would only fail audits could be fixed by changing audit requirements, if
> the organizations setting those (such as Mozilla and the CA/B Forum) desire.

I didn't suggest avoiding audit failures, but rather, avoiding audits. That is, the material difference between a TCSC and a CA is not one of technical requirements (they're effectively the same), but one of whether a self-audit is seen as acceptable versus an independent third-party audit.

I highlight this because it makes the tradeoff more concrete: an organization that wishes to avoid the administrative hassle of an independent audit could opt for a technically constrained sub-CA, which would be "bloated" in your view. If it didn't want the bloat, it could accept the administrative hassle of an independent third-party audit. That is, there are options to satisfy an organization's needs, and they allow it to prioritize whether the size reduction or the organizational flexibility is more important. There's no innate requirement to allow both - and while allowing both may be an optimization, it is one that comes with the compatibility and interoperability risks I highlighted, so it may not actually be achievable in the world we have.
But that's OK - organizations and individuals routinely have to operate in the world we have and make choices based on priorities, and we've made it so far :)

>>> The interaction between a nameConstraints extension not specifying
>>> directoryNames and the directoryName in the Subject field would be an
>>> area needing careful specification, based on compatibility and historic
>>> concerns.
>>
>> Yes. Which would not be appropriate for m.d.s.p (for reasons of both
>> consensus and intellectual property). That is a concern for some members,
>> and is why organizations like W3C and groups such as WICG exist :)
>
> Ok, I was simply hoping informal discussion in a place like m.d.s.p.
> would be a better place to initially evaluate such an idea before starting
> up the whole standardization process.

Fair enough. This makes a great venue for that, but certainly, as it shifts to technical details, working through a process like WICG - in which you could write up an 'explainer' describing the idea and how you think it should work, sans technical details, to gauge interest while providing some of the protections, and then iterate and incubate should there be interest - is a great way to accelerate that. Please don't take this as a lack of interest or a suggestion the work isn't valuable - I think it is interesting, and the work is valuable - but it is one that should have some of the constraints identified early on, to see if it's still something you'd like to put momentum into :)

>> I agree that it's an error prone process, and I agree that changing the
>> name (and not just the key) is an ideal scenario to transition. However,
>> unless you revoke the old certificate, it's unconstrained. And if you
>> revoke the old certificate, then everything it has issued is no longer
>> valid, unless you reissue for the same name and key with new constraints.
>> Which is why folks thought it was a good idea at the time.
> Alternatively, one could choose a definition of "unconstrained" which
> doesn't require impracticable revocations of real world certificates
> that were considered fully constrained under previous definitions.
>
> This is the motivation for my proposal that browsers etc. wanting to
> require additional "all ban" naming constraints might instead interpret
> the absence of information as an indication that the SubCA certificate
> is to be interpreted as if the newly considered name type was
> nonexistent (and thus not permitted).

So, I think the point that I'm trying to communicate, and perhaps did so poorly, is that we shouldn't redefine existing semantics as we see fit - but we can certainly introduce new ones. For example, one could imagine a simpler extension which indicated that the nameConstraints is a whitelisted set (this could be an empty extension, for that matter). Applications which understood this extension would semantically alter their interpretation of nameConstraints, moving from a blacklist to a whitelist, while applications that did not understand this extension would maintain current behaviour.

If maintaining current behaviour was not desirable (and, as a constraint, it's probably not), then one could mark this new extension critical - ensuring that only clients which understand and support the whitelist interpretation would accept the certificate. Or, one could argue it should be a non-critical extension, on the basis that said applications would _also_ presumably not support new name types without also supporting the new extension. So long as the existing nameConstraints constrained the names relevant to existing applications (e.g. the current TCSC definition), the new extension could be non-critical, and as new name types are introduced (e.g. SRVNames), provided that applications supporting such names also supported the new extension, there would be no concerns.
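(To make the two interpretations concrete, here is a toy sketch of how a client might toggle between them based on the presence of the hypothetical "whitelist" extension. The OID, function name, and data model are invented purely for illustration - no such extension exists.)

```python
# Hypothetical: an empty extension whose mere presence switches the
# client's interpretation of nameConstraints from "only explicitly
# constrained name types are restricted" (legacy) to "any name type
# not explicitly permitted is treated as nonexistent" (whitelist).

HYPOTHETICAL_WHITELIST_OID = "1.3.6.1.4.1.99999.1"  # made-up OID

def name_type_allowed(name_type: str,
                      constrained_types: set,
                      cert_extension_oids: set) -> bool:
    whitelist_mode = HYPOTHETICAL_WHITELIST_OID in cert_extension_oids
    if whitelist_mode:
        # New semantics: a name type absent from the constraints is
        # treated as if it did not exist, i.e. not permitted.
        return name_type in constrained_types
    # Legacy semantics: name types the constraints don't mention
    # remain unrestricted.
    return True
```

A client that doesn't recognize the extension falls into the legacy branch unchanged, which is exactly the backwards-compatibility property discussed above; marking the extension critical would instead make such a client reject the certificate outright.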
So there are lots of ways to tackle this problem, but we should generally operate under the constraints of "don't break existing users" and "don't redefine things implicitly" :) And "have a plan to ensure interop" :)

_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy

