On 30 Sep 2015, at 21:00, Katriel Cohn-Gordon <[email protected]> wrote:
> 
> Note that ARPKI <http://dl.acm.org/citation.cfm?id=2660298> (among other 
> academic CT variants) similarly has no need for gossip; roughly, monitors 
> sign that they have seen a cert and then clients just verify the signatures. 
> Multisignatures sound like an elegant way to achieve that; are they more 
> efficient than n individual signatures for smallish cothorities e.g. size <10?

Yes, a bit more efficient, but the difference isn’t necessarily a big deal in 
practice for n ~ 10.  

We now (in just the past few weeks) have some better comparative experimental 
results on this; see pages 35 and 36 of the following slide deck:

        http://dedis.cs.yale.edu/dissent/pres/151009-stanford-cothorities.pdf

The green line and shaded area show the signing latency of our collective 
signing protocol, now consistently under 2 seconds while scaling all the way 
out to 8192 hosts.  

The red line is the performance of the basic approach of simply collecting a 
lot of individual signatures.  As you can see, it works fine for small n (and 
in this case signing latency is lower until the crossover point of about 128 
participants).  But computation costs - most of which is signature checking, 
which signature-verifying clients would have to do as well - increase much 
more rapidly even for small numbers of hosts (slide 36).

Finally, the blue line shows the even worse scalability of classic 
multisignatures based on Joint Verifiable (Shamir) Secret Sharing (JVSS or 
JVSSS, depending on how much you like S’s).  The key problem there is that to 
avoid depending on a “trusted dealer”, every signing round requires every 
participant to deal a polynomial with shares for every other participant; 
everyone must then verify and combine these O(N) polynomials, each of size 
O(N), into one master polynomial, etc., producing O(N^2) cost for everyone on 
each and every signing round.
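To make the O(N^2) per-round cost concrete, here is a toy Python sketch of the 
dealing-and-combining step (just plain Shamir sharing over a prime field - no 
verifiability checks, no security, and the field size, N, and threshold are 
arbitrary illustrative choices, not anything from our implementation):

```python
# Toy sketch of why per-round JVSS dealing costs O(N^2): every one of
# the N participants deals a fresh degree-(T-1) polynomial with a share
# for each participant, so each round moves N*N shares before any
# signing can happen.  Not secure; illustrative constants only.
import random

Q = 2**31 - 1          # a prime field for the toy shares
N, T = 8, 5            # N participants, threshold T (assumed values)

def deal(secret):
    """Deal a Shamir polynomial: N shares of `secret`, threshold T."""
    coeffs = [secret] + [random.randrange(Q) for _ in range(T - 1)]
    return [sum(c * pow(x, k, Q) for k, c in enumerate(coeffs)) % Q
            for x in range(1, N + 1)]

# Every participant deals a fresh polynomial each round...
round_secrets = [random.randrange(Q) for _ in range(N)]
all_shares = [deal(s) for s in round_secrets]          # N polynomials
# ...and every participant combines its column of N shares: O(N^2) work.
master_shares = [sum(all_shares[i][j] for i in range(N)) % Q
                 for j in range(N)]

def interpolate(points):
    """Lagrange interpolation at 0 over GF(Q)."""
    total = 0
    for x_i, y_i in points:
        num, den = 1, 1
        for x_j, _ in points:
            if x_j != x_i:
                num = num * (-x_j) % Q
                den = den * (x_i - x_j) % Q
        total = (total + y_i * num * pow(den, Q - 2, Q)) % Q
    return total

# Any T of the combined shares reconstruct the round's master secret.
subset = [(x, master_shares[x - 1]) for x in (1, 3, 4, 6, 8)]
assert interpolate(subset) == sum(round_secrets) % Q
print("shares dealt this round:", N * N)    # the O(N^2) per-round cost
```

The real protocol additionally makes each dealt polynomial *verifiable*, which 
only adds to the per-round work counted above.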

At the Stanford talk above, Dan Boneh made the helpful observation that this 
O(N^2) per-signature cost of dealing fresh secrets in every round could 
probably be avoided through the use of a suitable pairing-based signature 
scheme such as BLS 
(https://en.wikipedia.org/wiki/Boneh–Lynn–Shacham).  Thus, by paying the 
O(N^2) cost only once at group setup, more traditional threshold crypto 
methods might become a more viable alternative in terms of per-signature 
costs.
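Here is a toy sketch of that “deal once, sign many times” idea: the signing 
key is Shamir-shared once at setup, and thereafter each round costs each 
signer just one exponentiation on its share.  A small mod-p subgroup stands 
in for the pairing-friendly curve, the actual BLS pairing-based verification 
check is omitted, and all constants are illustrative, not secure:

```python
# Shamir-share a signing key x once at setup; each round every signer
# raises the round's message hash h to its share.  Any T partial
# signatures combine (via Lagrange coefficients in the exponent) into
# h^x -- the same bytes regardless of which T signers participated.
P, Q, G = 23, 11, 2    # order-Q subgroup of Z_23*; G generates it (toy)

x = 7                                        # long-term signing key
coeffs = [x, 4]                              # degree-1 poly: threshold T=2
shares = {i: (coeffs[0] + coeffs[1] * i) % Q for i in (1, 2, 3)}

def lagrange_at_zero(xs, i):
    """Lagrange coefficient for signer i over subset xs, mod Q."""
    num = den = 1
    for j in xs:
        if j != i:
            num = num * (-j) % Q
            den = den * (i - j) % Q
    return num * pow(den, Q - 2, Q) % Q

def combine(h, subset):
    """Combine partial signatures h^x_i from `subset` into h^x."""
    sig = 1
    for i in subset:
        partial = pow(h, shares[i], P)   # signer i's one exponentiation
        sig = sig * pow(partial, lagrange_at_zero(subset, i), P) % P
    return sig

h = pow(G, 5, P)                 # stand-in for the hashed message H(m)
sig_a = combine(h, (1, 2))
sig_b = combine(h, (2, 3))
assert sig_a == sig_b == pow(h, x, P)    # same signature either way
```

Note that the two different signer subsets produce byte-identical signatures, 
which is exactly the constant-size, constant-verification upside - and the 
“doesn’t document who signed” downside - discussed below.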

Using traditional threshold crypto would have somewhat different properties 
that may be desirable in some situations and undesirable in others.  On the 
upside, the signatures produced would always be exactly the same size and 
always verifiable using exactly the same process no matter how many co-signers 
were actually online and participating in any given round.  On the downside, 
the signatures produced would not contain information “publicly documenting” 
which signers actually did or did not participate in each round, a property 
that seems undesirable in transparency-oriented situations like this.  Also, 
the threshold would have to be fixed and decided in advance, and a “signing 
threshold of zero” (ensuring that the primary signer can make progress no 
matter how many co-signers are missing) would not be an option.

Cheers
Bryan


_______________________________________________
Trans mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/trans
