On Thu, 22 Oct 2015 at 10:18 Bryan Ford <[email protected]> wrote:

> Hi Ben, sorry for dropping this thread earlier in August - just to finally
> answer some of the quite reasonable questions and points:
>
> On 10 Aug 2015, at 12:07, Ben Laurie <[email protected]> wrote:
>
> On Sat, 8 Aug 2015 at 18:25 Bryan Ford <[email protected]> wrote:
>
>> On Aug 6, 2015, at 8:56 AM, Ben Laurie <[email protected]> wrote:
>>
>> On Thu, 6 Aug 2015 at 10:17 Bryan Ford <[email protected]> wrote:
>>
>> On Jul 24, 2015, at 1:09 PM, Ben Laurie <[email protected]> wrote:
>>> On Thu, 23 Jul 2015 at 23:27 Bryan Ford <[email protected]> wrote:
>>>
>>>> Second, the gossip approach can’t ensure that privacy-sensitive Web
>>>> clients (who can’t or don’t want to reveal their whole browsing history to
>>>> a Trusted Auditor) will ever actually be able “to detect misbehavior by CT
>>>> logs”, if for example the misbehaving log’s private key is controlled by a
>>>> MITM attacker between the Web client and the website the client thinks he
>>>> is visiting.
>>>>
>>>
>>> The intent of CT is not to enable clients to detect such behaviour -
>>> rather, it is to enable the system as a whole to detect it.
>>>
>>>
>>> Could you explain how “the system as a whole” detects such misbehavior
>>> in the case of the state/ISP-level MITM attacker scenario?
>>>
>>
>> The state or ISP must isolate all clients they attack forever to avoid
>> detection. In practice, this does not generally seem to be possible.
>>
>>
>>> As I see it, if the client/browser doesn’t opt out of privacy by
>>> gossiping its browsing history with a trusted auditor, then the client’s
>>> *only* connection to the rest of the world, i.e., “the
>>> system as a whole”, is through the “web server”, which may well be the
>>> same kind of MITM attacker that has been known to subvert the CA system.
>>> The client never gets to communicate with the rest of the system - i.e., the
>>> legitimate CT log servers, auditors, or monitors - and so the client never
>>> gets the opportunity to “compare notes” with the rest of the system.  And
>>> the rest of the system (the legitimate log servers, auditors, and monitors)
>>> never has the opportunity to learn from the client that the client saw a
>>> MITM-attacked, forged set of CA certs, STHs, and SCTs.
>>>
>>
>> This is correct, and seems to me to be correct for any system - if you
>> can isolate your victim forever, you can show your own view of any system.
>>
>>
>> To reiterate the key point I made in my response to Tom’s E-mail, this is
>> not true of the multisignature-based STH signing I suggested.  Suppose for
>> each CT log server there are (for example) 100 monitor servers widely
>> distributed around the world, run by diverse companies, governments, etc.
>> And suppose that any STH must be collectively signed by the log server and
>> at least 51 of its monitor servers in order for clients to consider it
>> valid.  Assume the victim user/browser is isolated behind a MITM attacker
>> (call it Repressistan) who controls all the paths in and out of
>> Repressistan and never lets the user physically enter or leave
>> Repressistan.  Thus the user never has the opportunity to do CT gossip via
>> any path not controlled by the Repressistani Firewall.  In CT’s current
>> design, Repressistan wins - gaining the ability to silently compromise the
>> user forever - simply by pwning one CA key and one log server key.  (Or
>> maybe two if you require two SCTs per cert.)
>>
>
>> In the multisignature approach, Repressistan gains the ability to forge a
>> STH for a “client-customized world view” only if Repressistan can also
>> compromise more than 50 monitors of a compromised log server’s collective
>> signing pool.  Every valid, client-verifiable STH signature proves to the
>> client not only that the log server’s key signed the STH but also that at
>> least 51 monitor servers also witnessed it (even if those monitors are
>> doing no checking at all other than saying “I saw it”).  Repressistan can
>> no longer forge a valid (above-threshold) STH signature without an
>> inconceivably massive collusion or security breach.  Repressistan can still
>> simply prevent the user from communicating at all, of course, but it loses the
>> ability either to silently compromise the connection or silently evade
>> detection.  And the client need not risk its privacy via gossip in any way
>> to receive this protection.
>>
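
To make the threshold rule concrete, here is a rough Go sketch of the check
a client would perform, written in the naive list-of-per-monitor-signatures
form rather than as the single aggregated Schnorr multisignature the
collective signing work actually produces; every name and type below is
made up for illustration, not taken from any real CT library:

package ctsketch

import "crypto/ed25519"

const witnessThreshold = 51 // minimum co-signers the client demands

// sthWitnessed reports whether an STH carries the log's own signature plus
// valid signatures from at least witnessThreshold of its monitors.
// monitorSigs[i] is the signature (possibly nil) from the monitor holding
// monitorKeys[i]; ed25519.Verify simply returns false on a nil signature.
func sthWitnessed(sth []byte, logKey ed25519.PublicKey, logSig []byte,
	monitorKeys []ed25519.PublicKey, monitorSigs [][]byte) bool {
	if !ed25519.Verify(logKey, sth, logSig) {
		return false // the log itself must always sign
	}
	count := 0
	for i, key := range monitorKeys {
		if ed25519.Verify(key, sth, monitorSigs[i]) {
			count++
		}
	}
	return count >= witnessThreshold
}

The real scheme achieves the same effect with one constant-size signature,
but the trust arithmetic is identical: forging acceptance means compromising
the log key plus at least 51 monitor keys.
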
>
> Clearly it's a numbers game - there's always some threshold at which we can
> claim that the required compromise is inconceivable.
>
> We could make the existing scheme equally inconceivable by requiring 51
> SCTs, which would also eliminate the need for gossip, according to you.
>
>
> Yes, it’s a numbers game: the number of co-signers (e.g., SCTs in this
> case) is effectively a type of security parameter, and the numeric value of
> such a security parameter is important.  The security difference between
> 56-bit DES encryption and 128-bit AES encryption has just as much to do
> with the increased key length as with the algorithmic changes (though of
> course key size is a very different kind of security parameter; I wouldn’t
> want to push that analogy far).
>
> For the most security-critical parts of Internet infrastructure I
> personally, at least, would be much more comfortable with a “certificate
> co-signing security parameter” in a ballpark of 51 than a security
> parameter value of 1 or 3. ;)
>

You are not comparing apples with apples - we choose a small number of logs
that cannot lie without risking detection. You choose a large number and
hope the majority are honest. If they choose to collude, they can get away
with it.

I prefer verification to blind trust.


>
> The problem I have with this idea is that if we eliminate gossip, then we
> eliminate the possibility of detecting this compromise.
>
>
> While I’m still unconvinced that gossip is really necessary, I’m coming
> around to feeling that it may be useful and is at least not actively
> harmful on the “well-connected public server” side of the CT ecosystem:
> namely, among the CAs, log servers, witness servers, and monitors.
>
> For clients I still think relying on gossip is deeply problematic, as
> further evidenced just recently by the important subtleties that Tom Ritter
> just brought up in the new “Unsticking a client” thread.  Perhaps they’re
> resolvable in a reasonably privacy-sensitive fashion, but even if so, they
> illustrate the sheer complexity and difficulty of ensuring that clients can
> somehow walk a fine line between “gossiping for security” and “not
> gossiping for privacy”.
>

I am still unconvinced there is any real privacy concern around gossiping
STHs.


>
> That said, if you want some kind of consensus signature scheme for STHs,
> there's no reason that couldn't live alongside other mechanisms.
>
> I don't buy that it is a replacement for gossip, though.
>
>
> I’m OK with considering collective signing as a mechanism complementary
> and orthogonal to gossip, and the new draft I just put online treats it
> that way.
>

> BTW, seems to me you don't need >50% as your threshold - any signatures on
> inconsistent STHs are an indication of badness, so if you assume monitors
> are mostly honest and talk to each other, lower thresholds would also work.
>
>
> Correct.  In fact there doesn’t need to be just one globally imposed or
> agreed-upon threshold.  The log server could have one threshold determining
> the number of co-signers that must be online for it to make progress, and
> that threshold could even be 0 if the log server does not ever want to risk
> its progress being blocked even by a massive outage or mass desertion of
> its co-signers.  (I don’t think this would be a good idea in practice, just
> pointing out that it’s one readily feasible policy.  If most of the
> co-signers are unavailable, it more likely means that the log server
> itself is net-split from the rest of the world, in which case it probably
> could and should hold off on signing further STHs until the net-split is
> repaired.)
>
> Clients could still impose their own (possibly different) threshold
> requirements on the number of co-signers they want to see on an STH before
> they will accept it.  More paranoid clients could demand higher thresholds
> than others, accepting a risk that they might falsely reject a legitimate
> STH that the log server signed when an unusually large number of witnesses
> happened to be offline.
>
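
That per-client flexibility is easy to picture as a policy layer sitting on
top of an already-verified collective signature; a minimal Go sketch, with
hypothetical types of my own invention:

package ctsketch

// CollectiveSig describes a collective signature that has already been
// cryptographically verified; NumCosigners is how many of the TotalRoster
// witnesses actually participated in this signing round.
type CollectiveSig struct {
	NumCosigners int
	TotalRoster  int
}

// ClientPolicy lets each client pick its own minimum, independently of any
// availability threshold the log server imposes on itself.
type ClientPolicy struct {
	MinCosigners int
}

// Accept reports whether this client is willing to act on the co-signed STH.
func (p ClientPolicy) Accept(sig CollectiveSig) bool {
	return sig.NumCosigners >= p.MinCosigners
}

A paranoid client might set MinCosigners to 75 and accept the risk of
rejecting a legitimate STH during a mass witness outage; a more relaxed one
might settle for 20.
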
>> Chrome makes no requirement on "CT certs", only on EV certs. But in the
>> long run, the intention is to require CT for all certs.
>>
>>
>> But what does “requiring CT” mean in terms of the number of SCTs a given
>> cert (EV or non-EV) will be “required” to have?  What do you see as
>> reasonable numbers there, for the near term and long term?
>>
>
> Chrome's current policy is here:
> http://www.chromium.org/Home/chromium-security/root-ca-policy/EVCTPlanMay2015edition.pdf
> .
>
> We don't have any plans to change those numbers at present.
>
>
> Thanks, I hadn’t seen that document before.  So it looks like the
> “absolute minimum” number of SCTs required is 2, which is certainly better
> than 1 but still (to me anyway) a worryingly small number.  Compounding
> this, I notice that the first three log servers on the current list are run
> by Google (and in particular by one particular team at Google :) ),
> effectively presenting a “one-stop shop” for any adversary who might for
> whatever reason want to acquire the private keys of two CT log servers so
> as to be able to (for example) silently MITM-attack CT-enabled Chrome
> clients.
>

This is why you need gossip, of course - to make this kind of attack
non-silent. BTW, you also need a CA as well as two logs. And once you get
caught, they become valueless. Oh, and Google is not a one-stop shop, even
for logs, because the policy requires at least one SCT to not come from
Google.
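
For what it's worth, those two constraints - at least two SCTs, with at
least one from a non-Google log - are simple to state in code. A toy Go
sketch, using a made-up SCT type that records only the log operator:

package ctsketch

// SCT is a deliberately minimal stand-in for a real
// SignedCertificateTimestamp; here we only care which organization
// operates the log that issued it.
type SCT struct {
	LogOperator string // e.g. "Google" or some other operator
}

// sctPolicyOK enforces the two constraints discussed in this thread: two or
// more SCTs, and no one-stop shop - at least one SCT from a non-Google log.
func sctPolicyOK(scts []SCT) bool {
	if len(scts) < 2 {
		return false
	}
	for _, s := range scts {
		if s.LogOperator != "Google" {
			return true
		}
	}
	return false
}

(The real policy has more moving parts - see the PDF linked above - but this
is the piece that defeats the single-operator key-theft scenario.)
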


>
> Note that if I’m an MITM attacker, the “lifetime of certificate”
> provisions don’t matter to me at all: I would be perfectly happy with just
> a 1-day certificate, since I can produce a new fake one (together with a
> new fake pair of CT logs) tomorrow.
>

Certificate lifetimes are only mentioned in relation to the risk of a log
going bad - i.e. the longer your cert lasts, the more SCTs you need to
embed to insure against them becoming invalid before the cert expires.
Lifetime is not a security mechanism, and a MITM attacker producing
short-lived certs does not reduce the chance of detection.
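
The lifetime scaling itself is just a lookup. As I read the linked policy
PDF, the embedded-SCT minimums step up with certificate lifetime roughly as
in the sketch below (numbers quoted from memory of that document, so check
the PDF before relying on them):

package ctsketch

// requiredEmbeddedSCTs returns the minimum number of SCTs, from distinct
// logs, that must be embedded in a certificate with the given lifetime,
// per my reading of the May 2015 Chrome policy.
func requiredEmbeddedSCTs(lifetimeMonths int) int {
	switch {
	case lifetimeMonths < 15:
		return 2
	case lifetimeMonths <= 27:
		return 3
	case lifetimeMonths <= 39:
		return 4
	default:
		return 5
	}
}
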


>
> That aside, I do not disagree with the core idea. I wonder about its
> practicality. For example, currently we require STHs to be produced on a
> regular basis, which is necessary to ensure good behaviour. If we went to a
> multi-signing approach, and an STH was not produced on time, who would we
> blame? What would we do about that, in practice? Seems to me everyone
> involved could point fingers at everyone else. How would you address that?
>
>
> Agreed that this is an important question, hopefully addressed above: the
> log server need not necessarily allow its availability to be “held hostage”
> at all, and instead client policy could (independently) determine how many
> missing co-signers the client is willing to tolerate.
>

Which allows the log server to misbehave and claim its co-signers were
unavailable.


> Also, in the collective signing scheme’s current availability design, when
> any co-signers are missing, the collective signature is expanded slightly
> to include a “shame list” explicitly documenting which co-signers went AWOL
> in that signing round.  That shame list is part of what clients use to
> verify the collective signature: without it the collective signature won’t
> check out.
>

> So if a co-signer is persistently unreliable, that fact is publicly
> documented, and it probably means the log-server should kick them out of
> the witness group at the next reasonable opportunity (e.g., the next Chrome
> release).
>
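
The crucial detail is that the shame list sits inside what gets signed, not
alongside it. A structural Go sketch (the field names and hashing layout are
my own invention, not the protocol's actual wire format):

package ctsketch

import (
	"crypto/sha256"
	"encoding/binary"
)

// CollectiveSTH carries an STH, the round's collective signature, and the
// "shame list" of witnesses that failed to participate in that round.
type CollectiveSTH struct {
	STH             []byte   // serialized signed tree head
	AbsentWitnesses []string // co-signers that went AWOL this round
	AggregateSig    []byte   // collective signature over signedMessage()
}

// signedMessage binds the shame list into the signed bytes, length-prefixing
// each field so the encoding is unambiguous. Omit or alter the list and the
// aggregate signature no longer verifies.
func (c *CollectiveSTH) signedMessage() []byte {
	h := sha256.New()
	binary.Write(h, binary.BigEndian, uint32(len(c.STH)))
	h.Write(c.STH)
	for _, w := range c.AbsentWitnesses {
		binary.Write(h, binary.BigEndian, uint32(len(w)))
		h.Write([]byte(w))
	}
	return h.Sum(nil)
}

So a verifier cannot check the signature without also seeing exactly who was
absent, which is what makes persistent unreliability publicly visible.
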
>> Whereas increasing m, the number of signers per log server, can only
>> increase security, assuming the multi-signing protocol/crypto itself isn’t
>> broken.
>>
>
> Aside from my problem above, at least one other obvious issue with
> increasing the number of signers is that you also increase latency (I
> suspect) and decrease reliability.
>
>
> Yes, you increase latency, but in our experiments we get under 5 seconds of
> latency for 8,000 signers; it seems hard to imagine that being a difficulty
> for a latency-tolerant activity like signing STHs, which happens at periods
> measured in minutes.
>

Is that 8,000 geographically distributed signers?
_______________________________________________
Trans mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/trans
