On Monday, November 7, 2016 at 9:02:37 AM UTC-8, Gervase Markham wrote:
> As in, their dishonesty would be carefully targetted and so not exposed
> by this sort of coarse checking?

(Continuing with Google/Chrome hat on, since I didn't make the previous reply 
explicit)

Yes. An 'evil log' can present a split view, targeting only a limited set of 
affected users. Unless the SCT was observed and reported (via gossip or some 
other means of exfiltration), that split view would not be detected.

Recall: in order to ensure a log is honest, you need to ensure it's providing 
consistent views of the STH *and* that SCTs are actually being incorporated. In 
the absence of the latter, checking the former accomplishes little - and the 
existing monitoring infrastructure focuses primarily on STH consistency, with 
the assumption/expectation that clients are doing the SCT inclusion proof 
fetching.
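For concreteness, the "SCT inclusion proof fetching" half is just the Merkle 
audit path verification from RFC 6962 - a rough sketch in Python (hash 
conventions per RFC 6962 Section 2.1; not Chrome's actual code):

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962: leaf hash is SHA-256(0x00 || entry)
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # RFC 6962: interior node hash is SHA-256(0x01 || left || right)
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(entry: bytes, index: int, tree_size: int,
                     proof: list, root: bytes) -> bool:
    """Check that `entry` at `index` is covered by the STH `root`
    of a tree with `tree_size` leaves (RFC 6962 audit path)."""
    if index >= tree_size:
        return False
    fn, sn = index, tree_size - 1
    h = leaf_hash(entry)
    for p in proof:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            # sibling is on the left
            h = node_hash(p, h)
            if fn % 2 == 0:
                # climb past right-edge nodes
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            # sibling is on the right
            h = node_hash(h, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and h == root
```

A log that issued an SCT but never merged the entry cannot produce a `proof` 
that makes this return True against any STH it publishes - which is exactly why 
skipping this check lets such an SCT go unaccountable.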

So if I wanted to hide misissued certificates, I could compel or coerce a 
quorum of acceptable logs to 'misissue' SCTs for a certificate they never 
incorporate into their trees. So long as clients don't ask for an inclusion 
proof for those SCTs, there's no need for a split log - and no ability for the 
monitoring infrastructure to detect it. You could use such a certificate in 
targeted, user-specific attacks.

This is why it's vitally important that clients fetch inclusion proofs in some 
manner (either through gossip or through 'privacy' intermediaries - which is 
effectively what the Google DNS proposal is, using your ISP's DNS hierarchy as 
the privacy-preserving layer), and then check that the STH is consistent 
(which, in Chrome's case, clients querying Google's DNS servers effectively 
get: an STH consistency check against what Google sees).
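The STH consistency half - checking that a log's newer tree head is an 
append-only extension of an older one - is the consistency proof algorithm from 
the same RFC. A rough sketch, reusing the RFC 6962 node-hash convention (again 
illustrative, not any client's actual implementation):

```python
import hashlib

def node_hash(left: bytes, right: bytes) -> bytes:
    # RFC 6962: interior node hash is SHA-256(0x01 || left || right)
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_consistency(size1: int, root1: bytes,
                       size2: int, root2: bytes,
                       proof: list) -> bool:
    """Check that the tree head (size2, root2) extends (size1, root1)."""
    if size1 == size2:
        return root1 == root2 and not proof
    if size1 == 0 or size1 > size2 or not proof:
        return False
    path = list(proof)
    if size1 & (size1 - 1) == 0:
        # size1 is a power of two: root1 is itself a node of the new tree
        path.insert(0, root1)
    fn, sn = size1 - 1, size2 - 1
    while fn % 2 == 1:
        fn >>= 1
        sn >>= 1
    fr = sr = path[0]
    for c in path[1:]:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            # node on the left contributes to both old and new roots
            fr = node_hash(c, fr)
            sr = node_hash(c, sr)
            if fn % 2 == 0:
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            # node on the right exists only in the new tree
            sr = node_hash(sr, c)
        fn >>= 1
        sn >>= 1
    return sn == 0 and fr == root1 and sr == root2
```

A split-view log cannot produce a proof linking the STH it shows one audience 
to the STH it shows another, so any two parties who compare tree heads and run 
this check expose the fork - but only if SCTs are actually being resolved to 
entries in those trees.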

In the absence of such an implementation, checking the SCT provides only a 
limited guarantee that a certificate has actually been logged - in effect, 
you're simply trusting the log to be honest. Google's goal for Certificate 
Transparency has been not to trust logs to be honest, but to verify - yet as 
Chrome builds out its implementation, it has to 'trust someone' - and given our 
broader analysis of the threat model, the decision to "trust Google" (by 
requiring at least one SCT from a Google-operated log) is seen as no worse than 
the existing "trust Google" asks already made of Chrome users (for example, 
trusting that Chrome's autoupdate will not be compromised, or that Google will 
not deliver targeted malicious code). [1]

Thus, in the absence of SCT inclusion proof checking (whether temporarily, as 
implementations mature, or permanently, if you feel there can be no suitable 
privacy-preserving solution), you're trusting the logs not to misbehave, much 
like you trust CAs not to misbehave. You can explore technical solutions - such 
as inclusion proof checking - or policy solutions - such as requiring a 
Mozilla-operated log, or requiring that logs abide by some criteria a la 
WebTrust for CAs, or who knows what - but it's at least useful to understand 
the context for why that decision exists, and what the trust tradeoffs of such 
a decision are.


[1] As an aside, this "trust Google for binaries" bit is being explored in 
concepts like Binary Transparency, a very nascent, early-stage exploration of 
how to provide reliable assurances that binaries aren't targeted. Similarly, 
work on verifiable builds, such as that demonstrated by the Tor Browser Bundle, 
is meant to address the case of no 'obvious' backdoors, though the situation is 
more complex when non-open code is involved. I call this out to highlight that 
the computer industry still has not solved this - and even if we did for 
software, we'd have compilers and hardware to contend with, and then we're very 
much into "Reflections on Trusting Trust" territory.
_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy