On Aug 6, 2015, at 8:56 AM, Ben Laurie <[email protected]> wrote:
> On Thu, 6 Aug 2015 at 10:17 Bryan Ford <[email protected]> wrote:
>> On Jul 24, 2015, at 1:09 PM, Ben Laurie <[email protected]> wrote:
>> On Thu, 23 Jul 2015 at 23:27 Bryan Ford <[email protected]> wrote:
> 
>> Second, the gossip approach can’t ensure that privacy-sensitive Web clients 
>> (who can’t or don’t want to reveal their whole browsing history to a Trusted 
>> Auditor) will ever actually be able “to detect misbehavior by CT logs”, if 
>> for example the misbehaving log’s private key is controlled by a MITM 
>> attacker between the Web client and the website the client thinks he is 
>> visiting.
>> 
>> The intent of CT is not to enable clients to detect such behaviour - rather, 
>> it is to enable the system as a whole to detect it.
> 
> Could you explain how “the system as a whole” detects such misbehavior in the 
> case of the state/ISP-level MITM attacker scenario?  
> 
> The state or ISP must isolate all clients they attack forever to avoid 
> detection. In practice, this does not generally seem to be possible.
>  
> As I see it, if the client/browser doesn’t opt-out of privacy by gossiping 
> his browsing history with a trusted auditor, then the client’s *only* 
> connection to the rest of the world, i.e., “the 
> system as a whole”, is through the “web server”, which may well be the same 
> kind of MITM attackers that have been known to subvert the CA system.  The 
> client never gets to communicate to the rest of the system - i.e., the 
> legitimate CT log servers, auditors, or monitors - and so the client never 
> gets the opportunity to “compare notes” with the rest of the system.  And the 
> rest of the system (the legitimate log servers, auditors, and monitors) never 
> have the opportunity to learn from the client that the client saw a 
> MITM-attacked, forged set of CA certs, STHs, and SCTs.
> 
> This is correct, and seems to me to be correct for any system - if you can 
> isolate your victim forever, you can show your own view of any system.

To reiterate the key point I made in my response to Tom’s E-mail, this is not 
true of the multisignature-based STH signing I suggested.  Suppose for each CT 
log server there are (for example) 100 monitor servers widely distributed 
around the world, run by diverse companies, governments, etc.  And suppose that 
any STH must be collectively signed by the log server and at least 51 of its 
monitor servers in order for clients to consider it valid.  Assume the victim 
user/browser is isolated behind a MITM attacker (call it Repressistan) who 
controls all the paths in and out of Repressistan and never lets the user 
physically enter or leave Repressistan.  Thus the user never has opportunity to 
do CT gossip via any path not controlled by the Repressistani Firewall.  In 
CT’s current design, Repressistan wins - gaining the ability to silently 
compromise the user forever - simply by Pwning one CA key and one log server 
key.  (Or maybe two if you require two SCTs per cert.)  
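To make the quorum rule concrete, here is a rough sketch of the acceptance check a client would apply under this proposal. All names (sth_valid, verify_sig, etc.) are hypothetical, and real collective signing would use an aggregate signature scheme rather than checking signatures one by one; an HMAC stands in for real signature verification purely so the sketch is self-contained:

```python
import hmac, hashlib

QUORUM = 51  # majority of the 100 example monitors

def sign(key, sth):
    # Toy stand-in for a real signature (e.g. ECDSA) over the STH.
    return hmac.new(key, sth, hashlib.sha256).digest()

def verify_sig(key, sth, sig):
    return sig is not None and hmac.compare_digest(sign(key, sth), sig)

def sth_valid(sth, log_key, monitor_keys, signatures):
    """Accept an STH only if the log signed it AND at least QUORUM
    of the log's known monitors co-signed ("witnessed") it."""
    if not verify_sig(log_key, sth, signatures.get(log_key)):
        return False
    witnesses = sum(1 for k in monitor_keys
                    if verify_sig(k, sth, signatures.get(k)))
    return witnesses >= QUORUM
```

The point of the sketch: a MITM holding only the log key cannot produce a `signatures` map that passes this check, no matter how thoroughly it isolates the client.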

In the multisignature approach, Repressistan gains the ability to forge a STH 
for a “client-customized world view” only if Repressistan can also compromise 
more than 50 monitors of a compromised log server’s collective signing pool.  
Every valid, client-verifiable STH signature assures the client not only that 
the log server’s key signed the STH but also that at least 51 monitor 
servers also witnessed it (even if those monitors are doing no checking at all 
other than saying “I saw it”).  Repressistan can no longer forge a valid 
(above-threshold) STH signature without an inconceivably massive collusion or 
security breach.  Repressistan can still simply prevent the user from communicating 
at all, of course, but loses the ability either to silently compromise the 
connection or silently evade detection.  And the client need not risk its 
privacy via gossip in any way to receive this protection.

>> Suppose, for example, it ever so happens that a single company ends up 
>> running both a log server and a [sub-]CA, and keeps both in the same 
>> poorly-protected safe.  If an attacker can steal those two private keys, 
>> then the attacker can silently MITM any CT-aware client all it wants, by 
>> producing all the [non-EV] certificates it needs with a valid SCT, valid STH 
>> inclusion proofs in a fake log, etc.
>> 
>> Chrome's policy would require the subversion of at least two logs to achieve 
>> this, but for the sake of argument, I'll concede that.
> 
> I thought you were requiring CAs to have multiple SCTs only for EV 
> certificates, no?  What if the attacker is happy to MITM attack the user with 
> a non-EV certificate and forego showing the user the “green bar”?  Are you 
> assuming that users will stop using the connection if they don’t see the 
> green bar?
> 
> Not at all. EV certificates are our first step for CT.
>  
> Or are you saying (contrary to my prior understanding) that Chrome requires 
> *all* CT certs to have multiple SCTs signed by different log servers?
> 
> Chrome makes no requirement on "CT certs", only on EV certs. But in the long 
> run, the intention is to require CT for all certs.

But what does “requiring CT” mean in terms of the number of SCTs a given cert 
(EV or non-EV) will be “required” to have?  What do you see as reasonable 
numbers there, for the near term and long term?

> With the multi-SCT approach, the amount of bandwidth consumed during TLS 
> negotiation with every CT-supporting website increases linearly with n.  With 
> the scalable multisignature approach, n can increase to arbitrarily large 
> size - a hundred, a thousand if needed - e.g., including all the public 
> auditors, monitors, etc. - without any increase in the size of certificates, 
> number of SCTs attached, or ultimately the bandwidth/latency overhead during 
> TLS negotiation.
> 
> To be clear, you are proposing that instead of n logs with 1 signer each, we 
> have 1 log with n signers?

Possibly but not necessarily.  I envision n logs with m signers each, where say 
1 <= n <= 10, but 10 <= m <= 1000.  (And different log servers need not have 
the same signers, or even the same number of signers.)

> That seems unwise to me. You then have a SPOF for the whole network.

If a single log server cannot sign anything without it being witnessed and 
validated by 10, 100, or 1000 signers, how is it a SPOF?  Again, I’m not saying 
n should be 1 - there are good reasons we might still want n to be greater than 
1.  But if each log server can’t produce a valid STH without the active 
participation of a quorum of its m signers, and a reasonable number of those 
co-signers at least do minimal checking to ensure that the log server is 
behaving like a log (e.g., never signing two STHs with the same sequence 
number), then there’s a lot less a misbehaving log server - or an attacker who 
compromises that log server’s key - can do with it.
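The "minimal checking" mentioned above can be sketched in a few lines. This is a hypothetical co-signer policy, not any existing CT monitor API: the witness simply remembers the first STH it cosigned for each sequence number and refuses to cosign a conflicting one, which is exactly the split-view (equivocation) check:

```python
class CoSigner:
    """Minimal witness: never cosign two different STHs that claim
    the same sequence number (a log equivocating about its history)."""

    def __init__(self):
        self.witnessed = {}  # sequence number -> STH already cosigned

    def should_cosign(self, seq, sth):
        # setdefault stores the STH only if this seq is new;
        # otherwise it returns whatever we cosigned before.
        prev = self.witnessed.setdefault(seq, sth)
        return prev == sth
```

If a quorum of co-signers runs even this trivial policy, a compromised log key cannot get two conflicting STHs for the same sequence number past the threshold.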

> But clearly each log could have more than one signer, but still have multiple 
> logs. Then the question is: where is the trade off between number of logs and 
> number of signers, bearing in mind that more than one signer introduces 
> substantial complication.

As I pointed out before, the problem with the current model is that adding log 
servers *decreases* security, at least in certain realistic attack scenarios, 
by giving the attacker more possible log server targets to compromise.  
Weakest-link security, just as in the current CA system.  Whereas increasing m, 
the number of signers per log server, can only increase security, assuming the 
multi-signing protocol/crypto itself isn’t broken.
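A back-of-the-envelope calculation illustrates the contrast. Assume, purely for illustration, that each key is independently compromised with probability p = 0.01; under weakest-link security the attacker needs any one of n log keys, while under the threshold model it needs a majority of m co-signer keys:

```python
from math import comb

p = 0.01  # illustrative per-key compromise probability

def weakest_link(n):
    # Attacker wins by compromising ANY one of n log keys:
    # risk grows as n grows.
    return 1 - (1 - p) ** n

def threshold(m):
    # Attacker must compromise a strict majority of m co-signers:
    # the binomial tail above m//2 shrinks rapidly as m grows.
    q = m // 2 + 1
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(q, m + 1))
```

With these (made-up) numbers, `weakest_link(10)` is already around ten times `weakest_link(1)`, while `threshold(100)` is astronomically small - which is the sense in which adding logs weakens the current design but adding signers strengthens this one.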

B


_______________________________________________
Trans mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/trans
