On Mon, Oct 1, 2018 at 9:21 AM Dimitris Zacharopoulos <[email protected]> wrote:
> No, this was not about the domain name but about the information displayed
> to the Relying Party with the attributes included in the OV/EV Certificate
> (primarily the Organization). So, I'm still uncertain if Ian's "misleading
> street address" was trying to get a certificate for domain "stripe.com"
> owned by "Stripe Inc." in California, or was trying to get a certificate
> for "ian's domain.com" owned by "Stripe Inc." in Kentucky, as in the
> previous discussions. The discussion so far indicates that it's the latter,
> with the additional element that now the Street Address is also misleading.

I'm not sure of the source of confusion. As the original message pointed out, this was about a Cloudflare certificate (or, more aptly, two entities named Cloudflare). In both the "Stripe, Inc" case and in this one, it was a domain that Ian owned and could demonstrate control of, for a legally incorporated entity that Ian represented. In the "Stripe, Inc" case, the information included in the certificate accurately reflected the entity - that is, the only "confusion" here was relying party confusion, while the information within the certificate was accurate. During those discussions, some suggested that it was this point - that the information was accurate, and a 'discerning' RP could distinguish between Kentucky and California - that prevented a "Stripe, Inc" cert from being problematic. This more recent "Cloudflare" issue builds upon that claim, by showing that CAs also use unreliable data sources, such that even a discerning RP may not be able to fully distinguish. In this case, Ian's attempted example was an 'off-by-one' error on a street address, while otherwise keeping all of the same information (except for the serial number, since that's related to jurisdictional details).
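As an aside for readers less familiar with OV/EV subject attributes, the sketch below (not part of the original thread; all names and values are invented, and it assumes the third-party pyca/cryptography package is available) builds a throwaway self-signed certificate carrying the fields under discussion - Organization, street address, serialNumber (the registry number), and jurisdiction - and then reads back exactly what a relying party would see:

```python
# Illustrative sketch only: all values are invented. We self-sign a throwaway
# certificate just to have an EV-style Subject DN to parse.
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())

# The fields this thread is about: organization, street address,
# registry ("serial") number, and jurisdiction of incorporation.
subject = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "example.com"),
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example, Inc."),
    x509.NameAttribute(NameOID.STREET_ADDRESS, "123 Example St"),
    x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, "Kentucky"),
    x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
    x509.NameAttribute(NameOID.SERIAL_NUMBER, "0123456"),
    x509.NameAttribute(NameOID.JURISDICTION_STATE_OR_PROVINCE_NAME, "Kentucky"),
])

now = datetime.now(timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(subject)  # self-signed, purely for illustration
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=1))
    .sign(key, hashes.SHA256())
)

def subject_attr(oid):
    """First value for the given OID in the Subject DN, or None."""
    attrs = cert.subject.get_attributes_for_oid(oid)
    return attrs[0].value if attrs else None

# The "information displayed to the Relying Party":
print("O              :", subject_attr(NameOID.ORGANIZATION_NAME))
print("street         :", subject_attr(NameOID.STREET_ADDRESS))
print("serialNumber   :", subject_attr(NameOID.SERIAL_NUMBER))
print("jurisdiction ST:", subject_attr(NameOID.JURISDICTION_STATE_OR_PROVINCE_NAME))
```

The point the sketch makes is that an RP sees only these strings; nothing in the Subject DN itself distinguishes a validated organization from a same-named entity in another jurisdiction.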
However, independent of any "name-collidey" discussion between Ian-Cloudflare and 'Real'-Cloudflare, the fact that some CAs treat D&B as a Reliable Data Source shows that unreliable data can be introduced into certificates.

> I am certainly not suggesting that CAs should put inaccurate and
> misleading information in certificates :-) I merely said that if the
> Subscriber introduces misleading or inaccurate information in certificates
> via a reliable information source, then there will probably be a trail
> leading back to the Subscriber. This fact, combined with the lack of clear
> damage that this can cause to Relying Parties, makes me wonder why doesn't
> the Subscriber, that wants to mislead Relying Parties, just use a DV
> Certificate where this probably doesn't leave so much evidence tracing back
> to the Subscriber?

"The lack of clear damage" - I'm not sure how to communicate this better, since we're discussing fundamental damage to the value that OV and EV are said to provide. The only way we can say "lack of clear damage" is to say that OV and EV are worthless - otherwise, it's incredibly damaging.

I have no idea where the notion of 'traceability' comes from, or why it's relevant. It again seems to be anchoring on getting a certificate for the real cloudflare.com or stripe.com, which is not the discussion. We're talking about "confusing" a user (or subscriber, or relying party, or threat monitoring system) by suggesting that the certificates being issued are 'benign' or 'authorized'.

> But this inaccurate data is not used in the validation process nor
> included in the certificates. Perhaps I didn't describe my thoughts
> accurately. Let me have another try using my previous example. Consider an
> Information Source that documents, in its practices, that they provide:
>
> 1. the Jurisdiction of Incorporation (they check official government
>    records),
> 2. registry number (they check official government records),
> 3. the name of the legal representative (they check official government
>    records),
> 4. the official name of the legal entity (they check official government
>    records),
> 5. street address (they check the address of a utility bill issued under
>    the name of the legal entity),
> 6. telephone numbers (self-reported),
> 7. color of the building (self-reported).
>
> The CA evaluates this practice document and accepts information 1-5 as
> reliable, dismisses information 6 as non-reliable, and dismisses
> information 7 as irrelevant.
>
> Your argument suggests that the CA should dismiss this information source
> altogether, even though it clearly has acceptable and verified information
> for 1-5. Is that an accurate representation of your statement?

Yes, I'm stating that the existence and inclusion of 5-7 calls into question whether or not this is a reliable data source. Your parenthetical about how they check each item is exactly what the CA has the burden to demonstrate, particularly given that there is evidence of less-than-reliable data being included. How does the competent CA ensure that the registry number is not self-reported - or that the QIIS won't allow it to be self-reported in the future?

This is where the 'stopped-clock' metaphor is incredibly appropriate. That items 1-5 happen to be right, and happen to come from the right process, is by no means a predictor of future correctness or accuracy. More importantly, the inclusion of 5-7 in the reporting suggests that *unreliable* data is actively being seen as acceptable, and because of that, the CA needs to take a view against including data from that source.

>> So you believe it's in the best interests of Subscribers to have CAs
>> distrusted, certificates challenged and revoked, and for relying parties
>> to constantly call into question the certificates they encounter? And
>> that this is somehow better than consistently applied and executed
>> validation processes? I wish I could share your "Mad Max" level of
>> optimism, but it also fails to understand that we're not talking about
>> Subscriber selection, we're talking about adversarial models. The weakest
>> link matters, not "market reputation", as much as some CAs would like to
>> believe.

> Again, I might have described my thoughts unclearly. I was only trying to
> say that Subscribers now pay more attention to the CA they choose than
> they did before. They may not choose a "loose" or "weak" CA that easily,
> because of the risks associated with that decision.

And that has zero relevance to the discussion, or to mitigating the weakness. The subscriber's choice is irrelevant - the attacker's choice is what matters - and that's why we have things like the Baseline Requirements to begin with. If we believed that anything above was an appropriate mitigation for misissuance, we wouldn't need BRs at all; we'd just let the market of reputation sort it out, with bad CAs eventually getting no customers. This, of course, also entirely ignores that the "loose" or "weak" CA is generally more appealing on other grounds (cost, complexity, time to validate) compared to CAs doing the "right" thing, so we can't even argue rational self-interest on behalf of Subscribers as somehow being a mitigation.

>> Funny enough, that subjectivity you just described is not permitted of
>> CAs, and for good reason. Every one of those certificates needs to be
>> revoked, per 4.9.1.1 of the BRs. The CA has also materially misstated its
>> warranty for these certificates, per 9.6.1.

> Yet we've seen this being exercised before, and definitely in violation of
> the 24-hour window. The main issue that we have seen some CAs struggle
> with, and explain in Incident Reports, is that this information might
> actually be proven to be accurate and can be re-validated without causing
> interruptions for Subscribers and Relying Parties.
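Stepping back to the numbered information-source example above: the disagreement can be made concrete with a toy sketch (invented for illustration, not from the thread) of the two competing evaluation policies - accepting each field on its own verification method, versus treating any self-reported field as disqualifying the source as a whole:

```python
# Toy model of the "1-7" example: field name -> how the source verifies it.
# Field names and methods are invented labels mirroring the list above.
GOVERNMENT = "official government records"
UTILITY_BILL = "utility bill"
SELF_REPORTED = "self-reported"

data_source = {
    "jurisdiction_of_incorporation": GOVERNMENT,   # 1
    "registry_number": GOVERNMENT,                 # 2
    "legal_representative": GOVERNMENT,            # 3
    "legal_entity_name": GOVERNMENT,               # 4
    "street_address": UTILITY_BILL,                # 5
    "telephone_number": SELF_REPORTED,             # 6
    "building_color": SELF_REPORTED,               # 7
}

def per_field_policy(source):
    """One reading: accept each field on its own verification method
    (accept 1-5, dismiss the self-reported 6 and 7)."""
    return {f for f, m in source.items() if m in (GOVERNMENT, UTILITY_BILL)}

def whole_source_policy(source):
    """The objection: knowingly carrying any self-reported data calls the
    whole source into question - it is not a Reliable Data Source at all."""
    if any(m == SELF_REPORTED for m in source.values()):
        return set()
    return set(source)

print(sorted(per_field_policy(data_source)))   # fields 1-5
print(sorted(whole_source_policy(data_source)))  # nothing
```

The second policy encodes the stopped-clock concern: fields 1-5 being right today says nothing about whether the process will keep them right tomorrow.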
This is the stopped-clock argument - that the process for getting that information doesn't matter, as long as the information turns out to be eventually consistent. However, the act of certification is about ensuring that the process is well-formed. If the only backstop against misissuance is eventual consistency, then there's no incentive to get the process correct. And if the process isn't correct, there's nothing to mitigate adversarial impact. By comparison, we don't leave the root password to our servers as "secret" on the assumption that 99 times out of 100, the person logging in to that server will be authorized. Just because it's eventually consistent doesn't mean it's not fundamentally flawed.

>> If you believe that there are national jurisdictional databases, they can
>> be added to the whitelist. Indeed, the entire point would be to ensure
>> that, for the appropriate jurisdictional boundary, there's a clear
>> indication as to the appropriate data sources. Then there is no need for
>> CA discretion - or indiscretion.

> You are basically suggesting that the evaluation of a data source
> performed by the CA (at least for the smaller jurisdictions) be made
> public and added to the white-list. I'm fine with that. However, we will
> face the same problem if, during re-evaluation, we discover that some
> piece of information is not as reliable as we thought.

Of course, but then we end up with a consistent interpretation and application of that data.

_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy

