On Thu, Jul 20, 2017 at 4:23 PM, Matthew Hardeman via dev-security-policy <[email protected]> wrote:
> I would be willing to take a stab at this if the subject matter is of
> interest, and would be willing to commit some time to work on it, provided
> that it would appear a convenient time to discuss and contemplate the
> matter. Can anyone give me a sense of whether the matter of the potential
> vulnerabilities that I see here -- and of the potential mitigations I
> might suggest -- is of interest to the community?

Broadly, yes, but there's unfortunately a shade of IP issues that makes it more difficult to contribute as directly as Gerv proposed. Gerv may accept any changes on the Mozilla side, but if the goal is to modify the Baseline Requirements, you'd need to sign the IPR policy of the CA/B Forum and join as an Interested Party before proposing changes. And realize that the changes have to be comprehensible by those with limited to no background in technology :)

> Quite separately, it appears that 3.2.2.8's "As part of the issuance
> process..." text would strongly suggest that CAA record checking be
> performed upon each instance of certificate issuance. I presume that
> applies even in the face of a CA which might be relying upon previous
> DNS / HTTP domain validation. I grant that the text goes on to say that
> issuance must occur within the greater of 8 hours or the CAA TTL, but it
> does appear that the intent is that CAA records be queried for each
> instance of issuance and for each SAN dnsName. If this is the intent and
> ultimately the practice, and we are already requiring blocking reliance
> on a DNS query within the process of certificate issuance, should the
> validity of domain validation itself be similarly curtailed?
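The per-issuance CAA processing quoted above can be sketched concretely. This is a minimal illustration, not text from the BRs or any CA's code: `CAAResult` and the `lookup_caa` stub are invented for the example. It shows the two rules under discussion — the RFC 6844 tree climb to find the relevant CAA RRset for a dnsName, and the "greater of 8 hours or the CAA TTL" reliance window:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical parsed-CAA type; a real CA would parse DNS CAA RRs
# (issue / issuewild property tags) from a resolver response.
@dataclass
class CAAResult:
    issuers: frozenset  # CA domains permitted to issue
    ttl: int            # TTL of the RRset, in seconds

def relevant_caa(name: str,
                 lookup_caa: Callable[[str], Optional[CAAResult]]
                 ) -> Optional[CAAResult]:
    """RFC 6844 tree climb: the relevant CAA RRset is the first one
    found at the name itself or at any of its parent domains."""
    labels = name.rstrip(".").split(".")
    for i in range(len(labels)):
        result = lookup_caa(".".join(labels[i:]))
        if result is not None:
            return result
    return None

def caa_validity_seconds(result: Optional[CAAResult]) -> int:
    """BR 3.2.2.8 reliance window: the greater of 8 hours
    or the CAA record's TTL."""
    EIGHT_HOURS = 8 * 3600
    return EIGHT_HOURS if result is None else max(EIGHT_HOURS, result.ttl)

def may_issue(ca_domain: str, result: Optional[CAAResult]) -> bool:
    """No CAA RRset anywhere in the tree means issuance is unrestricted;
    otherwise the CA must appear in the permitted-issuer set."""
    return True if result is None else ca_domain in result.issuers
```

A CA would run this for every SAN dnsName at issuance time, which is exactly the blocking DNS dependency the question above points out.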
> My argument is that if we are placing a blocking reliance upon both the
> CA's DNS validation infrastructure AS WELL AS the target domain's
> authoritative DNS infrastructure during the course of the certificate
> issuance process, then there is precious little extra point of failure
> in just requiring that domain validation occur with a similarly reduced
> validity period.

This is indeed a separate issue. Like patches, it's best to keep changes as small as you can. The question about the validity/reuse of this information is near and dear to Google's heart (hence Ballots 185 and 186), and the desire to reduce this time substantially exists. That said, the Forum as a whole has mixed feelings on this, and so it's still an active - and separate - point of discussion.

> > > I believe there would be a massive improvement in the security of
> > > DNS query and HTTP client fetch type validations if the CA were
> > > required to execute multiple queries (ideally at least 3 or 4),
> > > sourced from different physical locations (said locations having
> > > substantial network and geographic distance between them) and each
> > > location utilizing significantly different internet interconnection
> > > providers.
> >
> > How could such a requirement be concretely specced in an auditable way?
>
> I can certainly propose a series of concrete specifications /
> requirements as to a more resilient validation infrastructure. I can
> further propose a list of procedures for validating point-in-time
> compliance with each of the requirements in the aforementioned list.
> Further, I can propose a list of data points / measurements / audit data
> that might be recorded as part of the validation record data set by the
> CA at the time of validation, which could be used to provide strong
> support that the specifications / requirements are being followed
> through the course of operations. If those were written up and
> presented, does that begin to address your question?

I think it's worth exploring.
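For concreteness, the multiple-vantage-point proposal above amounts to a quorum check. The sketch below is purely illustrative (the vantage-point names and the `perform_check` callback are invented, not any CA's interface): each perspective independently performs the same validation lookup, and validation succeeds only if a quorum of them agree on the same answer, so an attacker who can poison or hijack routes near one vantage point must succeed against several topologically distinct networks at once:

```python
from collections import Counter
from typing import Callable, Optional, Sequence

def validate_from_perspectives(
    perform_check: Callable[[str, str], Optional[str]],
    vantage_points: Sequence[str],
    domain: str,
    quorum: int = 3,
) -> bool:
    """Run the same validation check (e.g. a DNS TXT lookup or HTTP
    token fetch) from several network perspectives; succeed only if
    at least `quorum` perspectives return the same non-empty answer."""
    answers = [perform_check(vp, domain) for vp in vantage_points]
    counts = Counter(a for a in answers if a is not None)
    if not counts:
        return False  # no perspective got any answer at all
    _, agreeing = counts.most_common(1)[0]
    return agreeing >= quorum
```

The per-perspective answers themselves are the kind of audit data point mentioned above: recording each vantage point's raw result in the validation record would let an auditor verify after the fact that the quorum rule was actually enforced.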
Note that there's a whole host of process involved:

- Change the CA/B Forum documents (done through the Validation WG, at present - you'd minimally need to execute an IPR agreement before even members can launder ballots for you)
- Change the WebTrust TF audit criteria (which would involve collaboration with them, and in general, they're not big fans of precise auditable controls)
- Change the ETSI audit criteria (similar collaboration)

Alternatively, if exploring the Mozilla side, it's fairly easy to make it up as you go along - which is not a criticism of the root store policy, but praise :) You just may not get as much feedback.

That said, I think it's worthwhile to make sure the threat model, more than anything, is defined and articulated. If the threat model results in us introducing substantive process, but without objective security gain, then it may not be as worthwhile. Enumerating the threats, both addressed and unaddressable, is thus useful in that scope.

_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy

