On 22 Oct 2015, at 12:16, Ben Laurie <[email protected]> wrote:

> On Thu, 22 Oct 2015 at 10:18 Bryan Ford <[email protected]> wrote:
>
> Yes, it’s a numbers game: the number of co-signers (e.g., SCTs in this case) is effectively a type of security parameter, and the numeric value of such a security parameter is important. The security difference between 56-bit DES encryption and 128-bit AES encryption has just as much to do with the increased key length as with the algorithmic changes (though of course key size is a very different kind of security parameter; I wouldn’t want to push that analogy far).
>
> For the most security-critical parts of Internet infrastructure I personally, at least, would be much more comfortable with a “certificate co-signing security parameter” in a ballpark of 51 than a security parameter value of 1 or 3. ;)
>
> You are not comparing apples with apples - we choose a small number of logs that cannot lie without risking detection. You choose a large number and hope the majority are honest. If they choose to collude, they can get away with it.
With gossip, a colluding group can just as easily collude to lie without risking detection - for example, by refusing to gossip with honest nodes, or by presenting one consistent view of reality to the well-connected set of honest nodes (honest monitors, auditors, etc.) and another consistent but different view of reality to any less well-connected participants that the colluding group can keep separated from the well-connected participants.

On the other hand, the attacks that gossip *can* detect are just as readily detected in the collective signing approach, because the communication process required to produce the collective signature provides, as a side effect, the same information-distribution properties that gossip does, just in a more structured fashion. In order to collect and aggregate the parts of the collective signature, the leader must communicate the message to be signed to all the participants, and the signature components it collects cannot be used to create a valid collective signature if any pair of honest nodes involved in the signing process disagree on what message is being signed.

In a gossip setting, the colluding group can refuse to talk with honest participants, but those participants will be able to notice and complain. In a collective signing setting, the colluding group can leave all or a selected subset of honest participants out of signatures, but those honest participants will be able to notice this and complain, in exactly the same way as for gossip. Moreover, the collective signature (at least the way we do it) leaves a precise and unforgeable public record of exactly which participants were and were not present and contributing to the signature in each signing round.
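To make the two properties I’m relying on concrete, here is a toy sketch (Python, hypothetical names). A real system would use aggregate Schnorr signatures, publicly verifiable against the co-signers’ public keys; the HMAC “shares” below are just a stand-in so the aggregation logic is visible. The properties illustrated: (1) an aggregate only verifies if every listed participant signed the *same* message, and (2) the set of participants present is itself part of the signed record.

```python
# Toy model of collective signing with a participation record.
# NOT real crypto: the "verifier" here shares the signers' keys, which a
# real scheme (e.g. aggregate Schnorr) avoids; the structure is the point.
import hashlib
import hmac

SIGNERS = {i: hashlib.sha256(b"key%d" % i).digest() for i in range(8)}

def share(i, msg):
    """Signer i's signature share over msg (toy: an HMAC as an integer)."""
    return int.from_bytes(hmac.new(SIGNERS[i], msg, hashlib.sha256).digest(), "big")

def cosign(msg, present):
    """Leader aggregates shares from the signers in `present`; the set of
    absentees is implied by the roster and is part of the signed record."""
    agg = sum(share(i, msg) for i in present)
    return agg, frozenset(present)

def verify(msg, sig):
    agg, present = sig
    return agg == sum(share(i, msg) for i in present)

msg = b"STH for tree size 12345"
sig = cosign(msg, present={0, 1, 2, 3, 5, 6, 7})   # signer 4 offline this round

assert verify(msg, sig)          # all listed signers signed the same message
# If any listed signer actually signed a different message (a "split view"),
# the aggregate no longer verifies against msg:
agg_bad = sum(share(i, msg if i != 2 else b"other view") for i in sig[1])
assert not verify(msg, (agg_bad, sig[1]))
```

The second assertion is the point of the argument above: a leader cannot present one message to some honest co-signers and a different message to others and still end up with a single signature that verifies.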
If the leader has committed to using a particular group of co-signers but regularly omits a bunch of them from its collective signatures - even though those co-signers seem to be getting along fine with other authorities (e.g., other log servers) also using them for collective signing - then both those co-signers and all clients who see the resulting collective signatures have every opportunity to notice and ask questions.

In contrast, gossip does not inherently produce any third-party-verifiable record of which nodes were present or absent, or able or unable to connect and gossip, at any given time. So in this respect, collective signing seems to provide the same (strong though imperfect) detection capabilities, together with significantly greater transparency in terms of the public record it leaves behind.

>> Chrome's current policy is here:
>> http://www.chromium.org/Home/chromium-security/root-ca-policy/EVCTPlanMay2015edition.pdf
>>
>> We don't have any plans to change those numbers at present.
>
> Thanks, I hadn’t seen that document before. So it looks like the “absolute minimum” number of SCTs required is 2, which is certainly better than 1 but still (to me anyway) a worryingly small number. Compounding this, I notice that the first three log servers on the current list are run by Google (and in particular by one particular team at Google :) ), effectively presenting a “one-stop shop” for any adversary who might for whatever reason want to acquire the private keys of two CT log servers so as to be able to (for example) silently MITM-attack CT-enabled Chrome clients.
>
> This is why you need gossip, of course - to make this kind of attack non-silent. BTW, you also need a CA as well as two logs. And once you get caught, they become valueless.
> Oh, and, Google is not a one-stop shop, even for logs, because the policy requires at least one SCT to not come from Google.

Missed that proviso on first read, thanks.

>> That aside, I do not disagree with the core idea. I wonder about its practicality. For example, currently we require STHs to be produced on a regular basis, which is necessary to ensure good behaviour. If we went to a multi-signing approach, and an STH was not produced on time, who would we blame? What would we do about that, in practice? Seems to me everyone involved could point fingers at everyone else. How would you address that?
>
> Agreed that this is an important question, hopefully addressed above: the log server need not necessarily allow its availability to be “held hostage” at all, and instead client policy could (independently) determine how many missing co-signers the client is willing to tolerate.
>
> Which allows the log server to misbehave and claim its co-signers were unavailable.

Which would be just as immediately noticeable as if a colluding group of log servers were to stop gossiping with non-colluding nodes and claim all of them are unavailable.

>> Whereas increasing m, the number of signers per log server, can only increase security, assuming the multi-signing protocol/crypto itself isn’t broken.
>>
>> Aside from my problem above, at least one other obvious issue with increasing the number of signers is you also increase latency (I suspect) and decrease reliability.
>
> Yes, you increase latency, but in our experiments we get under 5 secs of latency for 8,000 signers; it seems hard to imagine that being a difficulty for a latency-tolerant activity like signing STHs that happens at periods counted in minutes.
>
> Is that 8,000 geographically distributed signers?
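Returning briefly to the client-policy point a few paragraphs up, here is a minimal sketch (Python, hypothetical names) of the division of labor I have in mind: the log server signs with whichever committed co-signers show up, and each client independently decides how many absentees it will tolerate, using the participation record carried in the collective signature. A leader that habitually “loses” the same co-signers then produces, round after round, exactly the evidence needed to call it out.

```python
# Sketch of an independent client-side acceptance policy for collectively
# signed STHs. Hypothetical names; assumes the signature carries both the
# committed co-signer roster and the set actually present, as argued above.
from collections import Counter

def check_sth(roster, present, max_missing):
    """Return (accept, absentees): accept iff the number of missing
    co-signers is within this client's own tolerance."""
    absentees = [s for s in roster if s not in present]
    return len(absentees) <= max_missing, absentees

roster = ["cosigner-%d" % i for i in range(5)]
rounds = [
    {"cosigner-0", "cosigner-1", "cosigner-2", "cosigner-3"},
    {"cosigner-0", "cosigner-1", "cosigner-2"},
    {"cosigner-0", "cosigner-1", "cosigner-2", "cosigner-3"},
]

# Chronic absentees across rounds leave a visible pattern to complain about,
# unlike gossip, which records nothing about who was reachable when.
chronic = Counter()
for present in rounds:
    ok, absentees = check_sth(roster, present, max_missing=2)
    assert ok  # each round is within this client's tolerance of 2
    chronic.update(absentees)

assert chronic["cosigner-4"] == 3  # omitted every round: worth asking about
```

The tolerance parameter is per-client policy, so the log server’s availability is never held hostage by a few offline co-signers, yet a server that over-uses the “they were unavailable” excuse accumulates a public, signed record of doing so.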
Since unlike some companies ( ;) ) we don’t have 8,000 physically geographically distributed machines we can allocate on demand to experiments, we used 8,000 processes distributed over a DeterLab virtual network topology that imposes 100ms RTT delays on network communication, in order to simulate a geographically distributed network. Perhaps we should be using 200ms or 300ms rather than 100ms RTTs to simulate a “truly global” network (mental note to self for future experiments), but the 2-second signing latencies we’re getting now aren’t going to become 10 minutes even if the underlying latencies are doubled or tripled.

Also, our experiments are pessimistic in the sense that, because we don’t have 8,000 separate physical machines, each physical machine is running many virtual participants, so the signing latencies we’re seeing are achieved despite all the participants being artificially overloaded. I conjecture that the plot on page 35 of the slides below might stay a lot flatter toward its right end (e.g., after 1024 participants or so, where you can see it curve upward) if those were real machines not overloaded by experimental artifacts. But I don’t at the moment have the testbed I would need to confirm that conjecture. ;)

http://dedis.cs.yale.edu/dissent/pres/151009-stanford-cothorities.pdf

Cheers
Bryan
_______________________________________________
Trans mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/trans
