Hi Tom,

> On Aug 6, 2015, at 11:46 AM, Tom Ritter <[email protected]> wrote:
> 
> Thanks for this Bryan.  I'm going to echo Ben a bit:
> 
> On 6 August 2015 at 02:17, Bryan Ford <[email protected]> wrote:
>> It creates new privacy problems by requiring the client (browser user) to
>> “opt out” of web browsing privacy in order to gain any security benefit.
>> Either the user hands over all his browsing history to a remote “trusted
>> auditor”, thereby opting out of privacy; or else the user remains fully
>> vulnerable to the same kind of (potentially extended) MITM attacks that
>> motivated CT in the first place.
> 
> If the Trusted Auditor mode is a default that requires Opting Out
> instead of Opting In I think I speak for every author when I say we
> will have considered this a huge failure and will rally against it as
> much as we can.

Agreed.  With that in mind, would it be reasonable to include text to this 
effect in the “gossip” draft if it is adopted?  For example, “Implementations 
of CT must ensure that Trusted Auditor relationships are ‘opt-in’ relationships 
that require the explicit consent of the user, and not a bundled system default” or 
something like that?

>> Could you explain how “the system as a whole” detects such misbehavior in
>> the case of the state/ISP-level MITM attacker scenario?
>> 
>> As I see it, if the client/browser doesn’t opt-out of privacy by gossiping
>> his browsing history with a trusted auditor, then the client’s *only*
>> connection to the rest of the world, i.e., “the system as a whole”, is
>> through the “web server”, which may well be the same kind of MITM attackers
>> that have been known to subvert the CA system.  The client never gets to
>> communicate to the rest of the system - i.e., the legitimate CT log servers,
>> auditors, or monitors - and so the client never gets the opportunity to
>> “compare notes” with the rest of the system.  And the rest of the system
>> (the legitimate log servers, auditors, and monitors) never have the
>> opportunity to learn from the client that the client saw a MITM-attacked,
>> forged set of CA certs, STHs, and SCTs.
> 
> In SCT Feedback the only connection to disclose data about Website A
> is indeed website A.
> In STH Feedback, assuming the client can resolve a Cert to a STH via
> an inclusion proof (which we currently suggest via DNS, but other
> mechanisms can exist, such as Tor) - that STH will be pollinated to
> _any_ website.  And the split-view log will be caught. So that's how
> the 'system as a whole' detects the issue.

OK, I see how STH pollination can at least raise the bar significantly for MITM 
attackers in a way that SCT feedback can’t.  But it still seems neither 
sufficient, nor the best we can do, nor convincingly “safe” in terms of 
privacy; see below.

> The difference is of course because the SCT is private data that
> reveals browsing history, the STH is not (not counting old-STH attacks
> which we mitigate).

I see there is already some text in the gossip draft acknowledging the privacy 
risks of “promiscuous” STH pollination, but the current text doesn’t go far 
enough and underestimates the risk.  For example, by the calculations in the 
text, suppose a malicious HTTPS server has up to 336 unique STHs per log that 
it could use to “tag” and track clients over time.  Supposing there are 10 
well-known log servers, this is 3360 unique STHs total.  The current text seems 
to suggest that this is “small enough”, which may be true for the one particular 
attack the text describes earlier, namely that the attacker picks a particular 
less-popular (e.g., somewhat-old) STH to give to a client it wishes to track.  

But this is not the “real” attack of interest: a slightly smarter attacker 
would probably use the presence or absence of these less-popular STHs in a 
client’s gossip cache as (noisy) bits in a tracking identifier.  The attacker 
would gossip to each client it hasn’t seen before an error-correcting-code 
(ECC) encoded 
pattern of less-popular STHs.  Thus, the 3360 unique STHs the attacker gets to 
play with do not represent “the number of clients the attacker can track”, as 
the current text seems to suggest, but rather the number of “tracking tag bits” 
the attacker has to play with in attempting to imprint clients with tracking 
tags via gossip.  And 3360 bits is a lot of bits: 
plenty of room for a highly-unique (say 64-bit) per-client tag to be encoded in 
a high-expansion, rather noise-tolerant encoding.
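
To make this concrete, here is a rough sketch (my own illustration in Python, 
with assumed numbers and made-up STH identifiers, not anything from the draft) 
of how an attacker could imprint and later re-read a 64-bit tag using only the 
presence or absence of less-popular STHs in a client’s gossip cache, with a 
crude repetition code to tolerate cache churn:

# Hypothetical illustration only: a malicious server "tags" clients by which
# less-popular STHs it plants in their gossip caches, and later re-identifies
# them by majority vote over each bit's group of STHs.

import hashlib
import random

LOGS = 10                # assumed number of well-known logs
STHS_PER_LOG = 336       # less-popular STHs usable per log (the draft's arithmetic)
POOL = [f"sth-{log}-{i}" for log in range(LOGS) for i in range(STHS_PER_LOG)]

TAG_BITS = 64
REPEAT = len(POOL) // TAG_BITS   # ~52 STHs per tag bit: a crude repetition code

def group(bit):
    return POOL[bit * REPEAT:(bit + 1) * REPEAT]

def imprint(tag):
    """STHs the attacker pollinates to a new client to encode `tag`."""
    planted = set()
    for bit in range(TAG_BITS):
        if (tag >> bit) & 1:
            planted.update(group(bit))
    return planted

def read_back(cache):
    """Recover the tag from a (noisy) gossip cache by per-bit majority vote."""
    tag = 0
    for bit in range(TAG_BITS):
        if sum(sth in cache for sth in group(bit)) > REPEAT // 2:
            tag |= 1 << bit
    return tag

# The tag survives losing ~20% of the planted STHs to cache churn.
tag = int.from_bytes(hashlib.sha256(b"some client").digest()[:8], "big")
random.seed(1)
noisy_cache = {sth for sth in imprint(tag) if random.random() < 0.8}
assert read_back(noisy_cache) == tag

A real cache would of course also contain plenty of legitimately pollinated 
STHs, which is exactly why the attacker would want this kind of redundancy.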

Of course, a user could still avoid being tracked in this fashion by using a 
stateless, “amnesiac” browsing platform like Tails, which would presumably 
start with an empty STH cache on each boot.  But this would also defeat CT’s 
ability to use STH pollination to detect MITM attacks across client/browser 
sessions.

> If an attacker can both persistently MITM someone to a website (or the
> whole internet) and DoS the STH lookup mechanism (and compromise a log
> and a CA) - yes, the attacker wins.
> 
> Attackers who can persistently MITM users forever are extremely
> powerful and we have no way to defend against them except to make all
> your client software just stop working.

Attackers who can persistently MITM users forever are indeed extremely powerful 
but are not at all unrealistic, given the increasing presence of Great Firewall 
type attackers (and all the smaller wanna-be knockoffs), which can indeed 
persistently control the “networked view of the world” seen by any user who 
cannot or does not regularly travel internationally.

I disagree that we have no way to defend against them; if you think so you’re 
giving up too quickly.  Of course such an attacker can simply block or DoS all 
the user’s communication in any case, but in practice most of these attackers 
would rather let the client think they’re communicating securely while silently 
compromising them.  CT with gossip cannot even guarantee that such silent, 
persistent MITM attacks will ever be detected (by anyone), let alone 
prevented.  But the multi-signed STH approach I suggested would do exactly 
that.  Since the pervasive MITM attacker would be unable to forge a single 
valid STH signature the client will accept without compromising a large quorum 
of monitor/witness/co-signer servers as well, the attacker is left with the 
choice of remaining silent and leaving the client’s communication 
uncompromised, or denying/blocking the communication entirely.  I claim that 
such a situation would be much, much preferable to the current state of affairs 
even with CT deployed.
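
To be concrete about the acceptance rule I have in mind, here is a minimal 
sketch (the names, key-distribution story, and threshold are mine, not from any 
draft, and it ignores the aggregate-signature optimization): the client rejects 
any STH that does not carry, in addition to the log’s own signature, valid 
signatures from at least a quorum of independent witnesses/co-signers whose 
keys it already knows.

# Sketch of the client-side check, assuming clients ship with the public keys
# of a set of known witnesses.  Ed25519 is used purely for illustration.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

QUORUM = 7   # hypothetical threshold out of 10 known witnesses

def _valid(key, sig, data):
    try:
        key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

def accept_sth(sth, log_sig, log_key, witness_sigs, witness_keys):
    """Accept only if the log AND a quorum of known witnesses signed this STH."""
    if not _valid(log_key, log_sig, sth):
        return False
    good = sum(1 for name, sig in witness_sigs.items()
               if name in witness_keys and _valid(witness_keys[name], sig, sth))
    return good >= QUORUM

# Toy run: a MITM who controls the log (but not the witnesses) cannot get a
# split-view STH accepted, because it cannot produce the witness signatures.
log_priv = ed25519.Ed25519PrivateKey.generate()
witnesses = {f"w{i}": ed25519.Ed25519PrivateKey.generate() for i in range(10)}
witness_keys = {name: key.public_key() for name, key in witnesses.items()}

sth = b"tree_size=12345,root_hash=...,timestamp=..."
sigs = {name: key.sign(sth) for name, key in witnesses.items()}
assert accept_sth(sth, log_priv.sign(sth), log_priv.public_key(), sigs, witness_keys)

split_view = b"tree_size=12345,root_hash=<attacker view>,timestamp=..."
assert not accept_sth(split_view, log_priv.sign(split_view),
                      log_priv.public_key(), {}, witness_keys)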

> This applies to the CA system
> absent CT, it applies to the CA system with CT, and it applies to
> software updates.  If you can write exploits, but not find bugs - just
> wait for a patch to come out, block the update, take your leisurely
> time reverse engineering, writing the exploit, and using it.

Of course client-side software security is a huge and important problem in 
itself, but it’s orthogonal to the design of CT.  I have plenty of thoughts on 
how multi-signing or cothorities could be used to improve the situation there 
as well, but that’s another topic.

> I don't mean to dismiss this concern, but a) I don't believe SCT
> Feedback and STH Pollination cause privacy problems

See above.

> b) This problem is
> so hard that no one is attempting to defend against it

I am; see above.

> c) History
> suggests such attackers are not all-powerful.  

Agreed, such attackers are not all-powerful, but pretty much any compromised or 
state-controlled ISP is certainly powerful enough to persistently MITM any 
client that always connects through that ISP - which is true at least of (a) 
desktop machines on a home or work network always attached via the same 
connection, and (b) smartphones roaming within a country but always using a 
data plan through the same cellular ISP.  These threat models do not seem 
unrealistic to me, but rather quite ubiquitous.

> Or at least, we've
> caught a lot of folks who could have gotten away with it that
> supposedly had that capability and could have known better.

Yes, a lot of current attackers have been sloppy in one way or another, but 
that doesn’t mean we should assume they will not learn from their past 
mistakes and adopt less sloppy attacks in the future.  They will.

>> Or are you saying (contrary to my prior understanding) that Chrome requires
>> *all* CT certs to have multiple SCTs signed by different log servers?
> 
> As Ben said, the end goal is to use it for everything.

What is the “it” to be used for everything?  CT?  Or CT with a mandatory 
minimum of >1 SCTs?  And again, what do you think a reasonable mandatory 
minimum number of SCTs would be?  Two, five, ten, 100?  If it starts at 1 or 2 
for pragmatic reasons, will it ever increase or will it stay forever at 2 due 
to inertia?  Is that good enough?

> I don't know
> when we'll get there, but in the interim there will probably be a
> period of opt-in:
> https://ritter.vg/blog-require_certificate_transparency.html

Nice blog post, but I don’t see where it addresses the issue of how many 
independent SCTs a browser should expect/require a cert to have even if it does 
decide to demand CT support.

> The Comodo incident you reference, was that the mozilla.com cert in
> 2008?  That was someone just trying a RA portal and getting a cert:
> http://www.theregister.co.uk/2008/12/29/ca_mozzilla_cert_snaf/

> Despite all the effort to find such certs manually, via MEKAI,
> Perspectives, Convergence, Cert Patrol, and others - I'm not aware of
> anyone ever catching a CA-signed misissued cert manually.

I think it was the March 2011 incident, where a hacker was able to obtain 
fraudulent certificates through Comodo and then apparently attempted to use at 
least one of them, for ‘login.yahoo.com’, in some way: 
https://www.comodo.com/Comodo-Fraud-Incident-2011-03-23.html  Comodo’s report 
and the other information I found don’t make it clear exactly in what way the 
login.yahoo.com cert was “seen live on the Internet” 
- perhaps it was Google pinning as well if Google was pinning Yahoo’s certs at 
that time (were they?), or perhaps it was manual forensics.  Perhaps you or 
someone else has better information about this.  But this is all incidental: 
it’s great that Google pinning has caught a lot of incidents, but nobody seems 
to be saying that means Google pinning is the best/final solution.

>> With the multi-SCT approach, the amount of bandwidth consumed during TLS
>> negotiation with every CT-supporting website increases linearly with n.
>> With the scalable multisignature approach, n can increase to arbitrarily
>> large size - a hundred, a thousand if needed - e.g., including all the
>> public auditors, monitors, etc. - without any increase in the size of
>> certificates, number of SCTs attached, or ultimately the bandwidth/latency
>> overhead during TLS negotiation.
> 
> While we aim to keep SCTs small as Ben indicated - yes, sending 10
> would be quite a bit.  I'm not sure of a great answer here.. I think
> the onus is kind of on the community to curate and run a fewer number
> of high-quality, mutually distrusting, different-jurisdictional logs.

This is precisely my point: by architecturally relying on multiple SCTs per 
cert we’re tying ourselves into the same bad old security tradeoff of having to 
trust a few “high-quality, mutually distrusting, different-jurisdictional logs” 
- i.e., a few conventional centralized authorities.  How do you choose them?  
Who chooses them?  Do you let one of those 10 be run by (say) Iran or Syria, in 
the interest of diversity (at least they’re probably not colluding with the 
Five Eyes countries!) - even though their governments have a record of trying 
to MITM their users 
(https://www.eff.org/deeplinks/2011/03/iranian-hackers-obtain-fraudulent-https, 
https://www.eff.org/deeplinks/2011/05/syrian-man-middle-against-facebook) and 
can probably be expected to try to exploit their control over “their” log 
server in any way the CT architecture allows them to (such as persistent MITM 
attacks as discussed above)?

With scalable multi-signing, in contrast, we could have 10 or 100 or 1000 
monitor/co-signer servers contributing to each STH signature with *no* cost to 
client verification time, client CPU costs, TLS startup bandwidth consumed, 
etc.  One quite reasonable policy might be that *every* country that is 
technically capable of running a monitor/co-signer server could join and 
participate in the signing process of every STH of every well-known log.  If 
you’re claiming that it’s infeasible to decentralize trust in such a service 
beyond 10 or so well-known nodes, again I think you’re just giving up and 
conceding the field too quickly.
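
Some rough numbers, under assumed sizes of my own (roughly 120 bytes per 
embedded ECDSA-signed SCT, versus a constant-size Schnorr-style collective 
signature plus an n-bit bitmask recording which co-signers participated), on 
why the per-handshake cost stays essentially flat:

# Back-of-the-envelope handshake overhead under assumed (not measured) sizes.

SCT_BYTES = 120          # assumed size of one ECDSA-signed SCT
COSIG_BYTES = 64 + 32    # assumed aggregate signature plus aggregate commitment

def per_handshake(n_signers):
    multi_sct = n_signers * SCT_BYTES                # grows linearly with n
    cosig = COSIG_BYTES + (n_signers + 7) // 8       # constant + n-bit bitmask
    return multi_sct, cosig

for n in (2, 10, 100, 1000):
    multi, cosig = per_handshake(n)
    print(f"n={n}: multi-SCT={multi} bytes, collective={cosig} bytes")
# n=2: 240 vs 97; n=10: 1200 vs 98; n=100: 12000 vs 109; n=1000: 120000 vs 221.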

>> 2. Our original design for CT did indeed have certs served along with an STH
>> and an inclusion proof. Unfortunately, CAs categorically rejected this
>> approach because of the latency it introduced in the certificate issuance
>> process. Personally, I think it would be a better approach. Now CAs are more
>> accepting of CT, perhaps it would be worth revisiting the question with
>> them?
>> 
>> 
>> Agreed, that would be great.
> 
> I am most excited about the prospect of sending an inclusion proof to
> a recent STH to the client - but I don't actually think the cert is
> the answer. I think it's a TLS extension or (better) an OCSP staple.
> Doing this would eliminate what I think is the weakest part of the
> gossip document - the requirement to obtain a STH from a SCT (well a
> cert) via some 'privacy preserving manner'.  (Which we, again,
> theorize to be DNS, but I don't think that's the answer for everyone.)

Agreed on this.  In fact, it’s increasingly unclear to me what security purpose 
SCTs actually serve if the “real” security CT provides indeed ends up revolving 
mainly around STHs and inclusion proofs.  Out of 
curiosity, has anyone floated or elaborated on the idea of what CT would look 
like if simplified not to rely on SCTs at all?
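
For example (a sketch of my own, using a generic Merkle audit path rather than 
RFC 6962’s exact leaf/node encoding and non-power-of-two handling), the 
client-side check in an SCT-free world could reduce to: the server staples a 
recent STH plus an inclusion proof for its certificate, and the client verifies 
the proof against the STH’s root hash before gossiping that STH like any other:

# Generic Merkle inclusion check (illustrative only; RFC 6962 uses the same
# idea with its own hashing rules and support for arbitrary tree sizes).

import hashlib

def _h(data):
    return hashlib.sha256(data).digest()

def leaf_hash(leaf):
    return _h(b"\x00" + leaf)

def node_hash(left, right):
    return _h(b"\x01" + left + right)

def build_tree(leaves):
    """Return (root, audit path per leaf) for a power-of-two number of leaves."""
    level = [leaf_hash(l) for l in leaves]
    pos = list(range(len(leaves)))      # each leaf's index at the current level
    paths = [[] for _ in leaves]
    while len(level) > 1:
        for j, p in enumerate(pos):
            paths[j].append((p % 2 == 1, level[p ^ 1]))   # (right child?, sibling)
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        pos = [p // 2 for p in pos]
    return level[0], paths

def verify_inclusion(cert, path, root_hash):
    """What a client would check against the root hash in a stapled STH."""
    cur = leaf_hash(cert)
    for is_right_child, sibling in path:
        cur = node_hash(sibling, cur) if is_right_child else node_hash(cur, sibling)
    return cur == root_hash

certs = [f"cert-{i}".encode() for i in range(8)]     # a toy 8-entry log
root, paths = build_tree(certs)
assert verify_inclusion(certs[5], paths[5], root)
assert not verify_inclusion(b"forged cert", paths[5], root)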

> If the STH was included in the Cert, that doesn't really change the game for 
> me.
> 1) The STH could be a STH on the split view of the log, so the
> immediate-decision security is no better than just sending a SCT.
> 2) The STH is too old. I don't want clients to pollinate old STHs,
> because as time goes on, old STHs will be rarer and will be indicative
> of what website you visited. After 2.5 years there will only be 100?
> 1000? popular websites with that STH in the cert. So the old STH needs
> to be resolved to a recent STH, and you pollinate _that_.

Agreed again.

B

> 
> -tom
