Re: [zapps-wg] Eliminating the possibility of backdoors with high probability

2017-11-16 Thread Peter Todd via zapps-wg
On Mon, Nov 13, 2017 at 08:26:04PM -0700, Sean Bowe wrote:
> > Q: What exactly happens if one participant fails to destroy that secret and/or
> > inputs a low-entropy secret? What about N participants?
> >
> > The paper states that "We show that security holds even if an adversary has
> > limited influence on the beacon." but it's unclear what exactly "limited
> > influence" means.
> 
> My understanding was that we lose N bits of security when an attacker
> can influence N bits of the randomness beacon.

Oh, I might have misunderstood the paper then: when it says "Random beacon" it
really does mean an external random beacon such as the NIST Randomness Beacon,
not something generated within the ceremony itself?

> This MPC is of the "only one has to be honest" kind. It is irrelevant
> if N-1 of the participants have low entropy / known secrets, so long
> as just one has high entropy w.r.t. the security parameter.

Right

> > As N increases you open up a new exfiltration route: the unused N-1 responses
> > could themselves be the exfiltration route, and thus need to be
> > deterministically verified against the N-1 unused secrets. This isn't
> > particularly user-friendly, and it's easy to imagine how this could be skipped.
> 
> Note that the compute process prints out a hash of the response file,
> and so we can "cut-and-choose" in the same way to guard against these
> exfiltration routes. As an example, if we use DVDs, we can burn N
> response files and note each one's respective hash and entropy. Then,
> reveal the hash/entropy for N-1 of them, but destroy those N-1 DVDs.

But note here how there are more opportunities to leak data, because the
secrets(1) associated with each DVD are simultaneously available to the
attacker. OTOH, by simply doing each round separately the code is simpler,
and the attacker has access to less information, as only one secret should be
available to them at a time.

1) Entropy is an *implementation* of a secret; entropy itself isn't enough.
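
For reference, the DVD-style procedure Sean describes boils down to something
like the rough Rust sketch below, which also makes clear that all N secrets
exist on the same machine at the same time. compute_response() is a
hypothetical stand-in for the actual `powersoftau` round, and the rand/sha2/hex
crates are just illustrative choices, not what the real code uses:

    use rand::Rng;
    use sha2::{Digest, Sha256};

    // Hypothetical stand-in for the real powersoftau computation: turn a
    // participant secret into a response file.
    fn compute_response(secret: &[u8; 32]) -> Vec<u8> {
        secret.to_vec()
    }

    fn main() {
        let n: usize = 4;
        let mut rng = rand::thread_rng();

        // Burn N response files, each from a fresh secret, noting every hash.
        let secrets: Vec<[u8; 32]> = (0..n).map(|_| rng.gen()).collect();
        let responses: Vec<Vec<u8>> = secrets.iter().map(compute_response).collect();
        for (i, r) in responses.iter().enumerate() {
            println!("response {} hash: {}", i, hex::encode(Sha256::digest(r)));
        }

        // Keep one response at random as the contribution; reveal the other
        // N-1 secrets so anyone can recompute and audit them, then destroy
        // those N-1 DVDs.
        let keep = rng.gen_range(0..n);
        for (i, s) in secrets.iter().enumerate() {
            if i != keep {
                println!("revealed secret {}: {}", i, hex::encode(s));
            }
        }
        println!("contribution: response {} (its secret is destroyed)", keep);
    }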

> > Finally, it's interesting how there's a whole class of "sham" participant
> > strategies, where someone who runs the computation and uploads an audit
> > response w/ revealed secret, but does not actually participate in that round,
> > still frustrates attackers who can not tell in advance if that particular
> > participant will or will not actually participate. This suggests that the
> > current round's challenge should be made public.
> 
> That's very interesting. Right now the transcript is public and so the
> current challenge can be computed by anyone, but it would be a little
> better if I put the "current" challenge file up for download.

Publishing a link on this mailing list would work well. Secondly, as we'll have
to archive all this stuff anyway, I'd suggest using the Internet Archive for
distribution of the challenges.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [zapps-wg] Eliminating the possibility of backdoors with high probability

2017-11-13 Thread Peter Todd via zapps-wg
On Mon, Nov 13, 2017 at 02:16:18PM -0700, Sean Bowe via zapps-wg wrote:
> There are three ways that a participant's toxic waste can be compromised:
> 
> 1. the participant is dishonest and keeps the toxic waste around
> 2. the toxic waste is extracted from the machine, either from a side
> channel attack or because the toxic waste still "exists" in the
> machine somewhere
> 3. the participant's code, compiler, operating system or hardware are 
> backdoored
> 
> Our solution to #1 is to have large numbers of diverse participants,
> to virtually eliminate the chance that all of them are dishonest and
> secretly colluding with each other. I am very confident in this
> approach.

Ditto

> Many of us are solving #2 by performing the computations on hardware
> we have randomly plucked from a store somewhere, in an environment
> (like a Faraday cage, or out in a field somewhere) where side-channel
> attacks are unlikely. And of course, completely destroying the machine
> afterward. I am very confident in this approach.

I also agree that this is easily achieved.

While ~all the hardware we have available to us is likely backdoored, the bad
guys can't backdoor the laws of physics.

> However, we don't really have a good handle on #3. Right now,
> participants are using the `powersoftau` code that I've written in
> Rust. It is possible to change the code or even make an alternative
> implementation, but that only gets you so far. You still have to hope
> your OS/compiler/hardware are not backdoored.

So to be clear, a simple example of such a backdoor attack would be to take the
secret k that was supposed to be used in the computation and replace it with
truncate(k,n), where n is low enough to brute-force, yet high enough not to get
caught by the birthday paradox; something like 24 bits is probably feasible.
The attacker would then brute-force the truncated k from the transcripts,
recovering the toxic waste.
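
Roughly, such a backdoor and the subsequent brute-force look like the toy Rust
sketch below; response_from_secret() is a hypothetical placeholder for the real
group computation that produces the public transcript entry, not actual
powersoftau code:

    use sha2::{Digest, Sha256};

    const N_BITS: u32 = 24; // low enough to brute force

    // Hypothetical placeholder for the deterministic computation that turns
    // the secret k into the value visible in the public transcript.
    fn response_from_secret(k: u64) -> Vec<u8> {
        Sha256::digest(k.to_le_bytes()).to_vec()
    }

    // The backdoor: truncate(k, n) silently keeps only the low n bits.
    fn backdoored_secret(honest_k: u64) -> u64 {
        honest_k & ((1u64 << N_BITS) - 1)
    }

    fn main() {
        // What the backdoored client ends up publishing.
        let published = response_from_secret(backdoored_secret(0xDEAD_BEEF_CAFE));

        // The attacker later tries all 2^n candidates against the public
        // transcript, recovering the "toxic waste".
        for candidate in 0..(1u64 << N_BITS) {
            if response_from_secret(candidate) == published {
                println!("recovered secret: {:#x}", candidate);
                break;
            }
        }
    }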

> I think there's a nice solution to this problem which is inspired by
> an idea that Brian Warner had.
> 
> Currently, the code I've written takes some randomness from the system
> and mixes it with some user-supplied randomness. Instead, imagine
> using randomness supplied by the participant exclusively. (One way the
> participant can obtain it is with a Boggle set.)

Note that this is better described as a user-supplied *secret*


Q: What exactly happens if one participant fails to destroy that secret and/or
inputs a low-entropy secret? What about N participants?

The paper states that "We show that security holds even if an adversary has
limited influence on the beacon." but it's unclear what exactly "limited
influence" means.

> The trick is that the participant performs the computation N times,
> each time with different randomness. This produces N response files.
> Now, the participant randomly chooses N-1 of the response files and
> reveals the randomness for them, and destroys the randomness of the
> last response file -- which is their contribution to the ceremony. The
> participant (and the general public) can perform the computations
> again on their machines to check that the same response files are
> produced for the ones we've revealed randomness for.

To be exact, what you mean to say here is that by re-doing the computations on
a trusted setup implementation that is *not* backdoored, you can detect the
existence of the backdoor because the results won't match.
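
In other words, the audit step is just deterministic recomputation plus a hash
comparison, something like the rough Rust sketch below, with a hypothetical
compute_response() standing in for the real `powersoftau` round and the sha2
crate assumed for hashing:

    use sha2::{Digest, Sha256};

    // Hypothetical stand-in for the real deterministic powersoftau round.
    fn compute_response(challenge: &[u8], secret: &[u8]) -> Vec<u8> {
        Sha256::digest([challenge, secret].concat()).to_vec()
    }

    // Returns true iff every revealed (secret, claimed response hash) pair
    // reproduces on this machine; a backdoored client shows up as a mismatch.
    fn audit(challenge: &[u8], revealed: &[(Vec<u8>, Vec<u8>)]) -> bool {
        revealed.iter().all(|(secret, claimed_hash)| {
            let recomputed = Sha256::digest(compute_response(challenge, secret)).to_vec();
            recomputed == *claimed_hash
        })
    }

    fn main() {
        let challenge = b"current challenge file".to_vec();
        let secret = b"revealed dummy secret".to_vec();
        let claimed = Sha256::digest(compute_response(&challenge, &secret)).to_vec();

        println!("audit ok: {}", audit(&challenge, &[(secret, claimed)]));
    }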

> As N increases, the probability that any backdoor in the code,
> compiler, hardware, operating system etc. could have tampered with the
> entropy approaches zero. Now there is just one remaining problem: how
> do we get the response files out of the machine without the backdoor
> potentially sneaking the entropy over the channel?

As N increases you open up a new exfiltration route: the unused N-1 responses
could themselves be the exfiltration route, and thus need to be
deterministically verified against the N-1 unused secrets. This isn't
particularly user-friendly, and it's easy to imagine how this could be skipped.


I'd suggest instead that we ask participants to simply run the computation N>1
times, and pick at random which output to actually use. If they re-use hardware
for each run, ask them to do their best at wiping all non-volatile memory; if
they have the ability to use different hardware for each run, even better.

Note that a variety of procedures to pick outputs at random are desirable. For
example, one person might decide to flip a coin after each run, and stop when
it comes up tails; another might do something different. Diversity is good.
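
As a sketch of the coin-flip variant, the stopping rule is just the rough Rust
below; run_computation() is a hypothetical stand-in for a full `powersoftau`
run, and the rand crate merely simulates what would in practice be a physical
coin flipped outside the machine:

    use rand::Rng;

    // Hypothetical stand-in for one full powersoftau run with a fresh secret.
    fn run_computation(run: usize) -> String {
        format!("response-{}", run)
    }

    fn main() {
        let mut rng = rand::thread_rng();
        let mut dummies = Vec::new();
        let mut official;
        let mut run = 0;

        loop {
            official = run_computation(run);
            run += 1;
            // Always do at least two runs; after that, flip a coin after each
            // run and stop on tails. Every run except the last is uploaded as
            // a dummy with its secret revealed for peer auditing; the last
            // run's secret is destroyed. (A real participant would flip a
            // physical coin, precisely so the machine can't predict the
            // outcome; rng here only simulates that decision.)
            if run >= 2 && rng.gen_bool(0.5) {
                break;
            }
            dummies.push(official.clone());
        }

        println!("official contribution: {}", official);
        println!("dummy runs for auditing: {:?}", dummies);
    }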

After they've completed their runs, they simply upload one or more dummy runs
and the associated initial secrets for peer auditing, as well as their official
contribution.


Secondly, once we do get some dummy runs, I'd suggest that future participants
consider testing their compute nodes against those challenges and secrets to
verify that they also get the same results.


Finally, it's interesting how there's a whole class of "sham" participant
strategies, where someone who runs the computation and uploads an audit
response w/ revealed secret, but does not actually participate in that round,
still frustrates attackers who can not tell in advance if that particular
participant will or will not actually participate. This suggests that the
current round's challenge should be made public.