[bitcoin-dev] BIP OP_CHECKTEMPLATEVERIFY

2019-11-25 Thread Jeremy via bitcoin-dev
Bitcoin Developers,

I'm pleased to announce refinements to the BIP draft for OP_CHECKTEMPLATEVERIFY
(which replaces the previous OP_SECURETHEBAG BIP). Primarily:

1) Changed the name to something more fitting and acceptable to the
community
2) Changed the opcode specification to use the argument off of the stack
with a primitive constexpr/literal tracker rather than script lookahead
3) Permits future soft-fork updates to loosen or remove "constexpr"
restrictions
4) More detailed comparison to alternatives in the BIP, and why
OP_CHECKTEMPLATEVERIFY should be favored even if a future technique may
make it semi-redundant.

Please see:
BIP: https://github.com/JeremyRubin/bips/blob/ctv/bip-ctv.mediawiki
Reference Implementation:
https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify

I believe this addresses all outstanding feedback on the design of this
opcode, unless there are any new concerns with these changes.

I'm also planning to host a review workshop in Q1 2020, most likely in San
Francisco. Please fill out the form here https://forms.gle/pkevHNj2pXH9MGee9
if you're interested in participating (even if you can't physically attend).

And as a "but wait, there's more":

1) RPC functions are under preliminary development, to aid in testing and
evaluation of OP_CHECKTEMPLATEVERIFY. The new command `sendmanycompacted`
shows one way to use OP_CHECKTEMPLATEVERIFY. See:
https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-rpcs.
`sendmanycompacted` is still at an early design stage. Standard practices for
using OP_CHECKTEMPLATEVERIFY and associated wallet behaviors may be codified into
a separate BIP. This work generalizes even if an alternative strategy is used
to achieve the scalability benefits of OP_CHECKTEMPLATEVERIFY.
2) Also under development are improvements to the mempool which will, in
conjunction with improvements like package relay, help make it safe to lift
some of the mempool's restrictions on long chains of unconfirmed transactions,
specifically for
OP_CHECKTEMPLATEVERIFY output trees. See:
https://github.com/bitcoin/bitcoin/pull/17268
This work offers an improvement irrespective of OP_CHECKTEMPLATEVERIFY's
fate.


Neither of these is a blocker for proceeding with the BIP; they are ergonomics
and usability improvements that will be needed if and when the BIP is activated.

See prior mailing list discussions here:

* https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016934.html
* https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/016997.html

Thanks to the many developers who have provided feedback on iterations of
this design.

Best,

Jeremy

--
@JeremyRubin 



[bitcoin-dev] Composable MuSig

2019-11-25 Thread ZmnSCPxj via bitcoin-dev
So I heard you like MuSig.


Introduction
============

Previously on lightning-dev, I proposed Lightning Nodelets, wherein one 
signatory of a channel is in fact not a single entity, but instead an 
aggregate: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-October/002236.html

Generalizing:

* There exists some protocol that requires multiple participants agreeing.
  * This can be implemented by use of MuSig on the public keys of the 
participants.
* One or more of the participants in the above protocol is in fact an 
aggregate, not a single participant.
  * Ideally, no protocol modification should be needed to support such 
aggregates, "only" software development without modifying the protocol layer.
  * Obviously, any participant of such a protocol, whether a direct 
participant or a member of an aggregated participant of that protocol, would 
want to retain control of its own money in that protocol, without having to 
determine whether it is being Sybilled (i.e. whether all the other participants 
are in fact just one participant).
  * Motivating example: a Lightning Network channel is the aggregate of two 
participants, the nodes creating that channel.
However, with nodelets as proposed above, one of the participants is 
actually itself an aggregate of multiple nodelets.
* This requires that a Lightning Network channel with a MuSig address allow 
one or both of its participants to be, potentially, an aggregate of two or more 
nodelet participants, e.g. `MuSig(MuSig(A, B), C)`.

This is the "MuSig composition" problem.
That is, given `MuSig(MuSig(A, B), C)`, and the *possibility* that in fact `B 
== C`, what protocol can A use to ensure that it uses the three-phase MuSig 
protocol (which has a proof of soundness) rather than inadvertently using a 
two-phase MuSig protocol?

Schnorr Signatures
==================

The scheme is as follows.

Suppose an entity A needs to show a signature.
At setup:

* It generates a random scalar `a`.
* It computes `A` as `A = a * G`, where `G` is the standard generator point.
* It publishes `A`.

At signing a message `m`:

* It generates a random scalar `r`.
* It computes `R` as `R = r * G`.
* It computes `e` as `h(R | m)`, where `h()` is a standard hash function and `x 
| y` denotes the serialization of `x` concatenated with the serialization of `y`.
* It computes `s` as `s = r + e * a`.
* It publishes as signature the tuple of `(R, s)`.

An independent validator can then get `A`, `m`, and the signature `(R, s)`.
At validation:

* It recovers `e[validator]` as so: `e[validator] = h(R | m)`
* It computes `S[validator]` as so: `S[validator] = R + e[validator] * A`.
* It checks if `s * G == S[validator]`.
  * If `R` and `s` were indeed generated as per the signing algorithm above, then:
* `S[validator] = R + e[validator] * A`
* `== r * G + e[validator] * A`; substitution of `R`
* `== r * G + h(R | m) * A`; substitution of `e[validator]`
* `== r * G + h(R | m) * a * G`; substitution of `A`.
* `== (r + h(R | m) * a) * G`; factor out `G`
* `== (r + e * a) * G`; substitution of `h(R | m)` with `e`
* `== s * G`; substitution of `r + e * a`.
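
To make the algebra above concrete, here is a minimal Python sketch of the 
setup, signing, and validation steps (my illustration, not part of the post): 
to stay dependency-free it substitutes a tiny multiplicative subgroup of `Z_p*` 
for the elliptic curve group, writing `k * P` as `pt_mul(k, P)` and `P + Q` as 
`pt_add(P, Q)`; the parameters are toy-sized and not remotely secure.

    import hashlib
    import secrets

    # Toy group standing in for the elliptic curve group: the order-q subgroup
    # of Z_p* generated by G.  Toy parameters only; NOT secure.
    p = 2039   # modulus (prime)
    q = 1019   # order of the subgroup (prime)
    G = 4      # generator of the order-q subgroup

    def pt_mul(k, P):
        """'k * P' in the additive notation of the post."""
        return pow(P, k, p)

    def pt_add(P, Q):
        """'P + Q' in the additive notation of the post."""
        return (P * Q) % p

    def h(*parts):
        """h(x | y | ...): hash of the concatenated serializations, as a scalar."""
        data = b"".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    # Setup: secret scalar a, published public key A = a * G.
    a = secrets.randbelow(q - 1) + 1
    A = pt_mul(a, G)

    # Signing a message m: publish (R, s).
    m = b"test message"
    r = secrets.randbelow(q - 1) + 1
    R = pt_mul(r, G)
    e = h(R, m)                 # e = h(R | m)
    s = (r + e * a) % q
    signature = (R, s)

    # Validation, given only A, m, and (R, s).
    e_v = h(R, m)
    S_v = pt_add(R, pt_mul(e_v, A))     # S[validator] = R + e[validator] * A
    assert pt_mul(s, G) == S_v          # s * G == S[validator]
    print("Schnorr signature verifies")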

MuSig
=====

Under MuSig, validation must remain the same, and multiple participants must 
provide a single aggregate key and signature.

Suppose there exist two participants A and B.
At setup:

* A generates a random scalar `a` and B generates a random scalar `b`.
* A computes `A` as `A = a * G` and B computes `B` as `B = b * G`.
* A and B exchange `A` and `B`.
* They generate the list `L` by sorting their public keys and concatenating 
their representations.
* They compute their aggregate public key `P` as `P = h(L | A) * A + h(L | B) * B`.
* They publish the aggregate public key `P`.
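
A sketch of the aggregation step in the same toy-group style (again my 
illustration; `L` is represented simply as a sorted tuple of the public keys):

    import hashlib, secrets

    p, q, G = 2039, 1019, 4          # toy subgroup of Z_p*; NOT secure

    def pt_mul(k, P): return pow(P, k, p)          # "k * P"
    def pt_add(P, Q): return (P * Q) % p           # "P + Q"
    def h(*parts):                                 # h(x | y | ...)
        data = b"".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    # Each participant generates its own keypair and they exchange public keys.
    a = secrets.randbelow(q - 1) + 1; A = pt_mul(a, G)
    b = secrets.randbelow(q - 1) + 1; B = pt_mul(b, G)

    # L: the sorted, concatenated public keys.
    L = tuple(sorted((A, B)))

    # P = h(L | A) * A + h(L | B) * B
    P = pt_add(pt_mul(h(L, A), A), pt_mul(h(L, B), B))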

Signing takes three phases.

1.  `R` commitment exchange.
  * A generates a random scalar `r[a]` and B generates a random scalar `r[b]`.
  * A computes `R[a]` as `R[a] = r[a] * G` and B computes `R[b]` as `R[b] = 
r[b] * G`.
  * A computes `h(R[a])` and B computes `h(R[b])`.
  * A and B exchange `h(R[a])` and `h(R[b])`.
2.  `R` exchange.
  * A and B exchange `R[a]` and `R[b]`.
  * They validate that the previously given `h(R[a])` and `h(R[b])` match.
3.  `s` exchange.
  * They compute `R` as `R = R[a] + R[b]`.
  * They compute `e` as `h(R | m)`.
  * A computes `s[a]` as `s[a] = r[a] + e * h(L | A) * a` and B computes `s[b]` 
as `s[b] = r[b] + e * h(L | B) * b`.
  * They exchange `s[a]` and `s[b]`.
  * They compute `s` as `s = s[a] + s[b]`.
  * They publish the signature as the tuple `(R, s)`.
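
Below is a sketch of the three phases in the same toy-group style (mine, with 
the definitions repeated so the snippet runs on its own); the resulting 
`(R, s)` passes the ordinary single-key validation against the aggregate key `P`:

    import hashlib, secrets

    p, q, G = 2039, 1019, 4          # toy subgroup of Z_p*; NOT secure

    def pt_mul(k, P): return pow(P, k, p)          # "k * P"
    def pt_add(P, Q): return (P * Q) % p           # "P + Q"
    def h(*parts):                                 # h(x | y | ...)
        data = b"".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    # Setup, as in the key-aggregation sketch above.
    a = secrets.randbelow(q - 1) + 1; A = pt_mul(a, G)
    b = secrets.randbelow(q - 1) + 1; B = pt_mul(b, G)
    L = tuple(sorted((A, B)))
    P = pt_add(pt_mul(h(L, A), A), pt_mul(h(L, B), B))

    m = b"message to sign"

    # Phase 1: R commitment exchange.
    r_a = secrets.randbelow(q - 1) + 1; R_a = pt_mul(r_a, G)
    r_b = secrets.randbelow(q - 1) + 1; R_b = pt_mul(r_b, G)
    commit_a, commit_b = h(R_a), h(R_b)    # exchanged before R[a], R[b]

    # Phase 2: R exchange; each side checks the other's earlier commitment.
    assert h(R_a) == commit_a and h(R_b) == commit_b

    # Phase 3: s exchange.
    R = pt_add(R_a, R_b)
    e = h(R, m)
    s_a = (r_a + e * h(L, A) * a) % q
    s_b = (r_b + e * h(L, B) * b) % q
    s = (s_a + s_b) % q

    # An ordinary Schnorr validator accepts (R, s) against the aggregate key P.
    assert pt_mul(s, G) == pt_add(R, pt_mul(e, P))
    print("aggregate MuSig signature verifies")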

At validation, the validator knows `P`, `m`, and the signature `(R, s)`.

* It recovers `e[validator]` as so: `e[validator] = h(R | m)`
* It computes `S[validator]` as so: `S[validator] = R + e[validator] * P`.
* It checks if `s * G == S[validator]`.
  * `S[validator] = R + e[validator] * P`
  * `== R[a] + R[b] + e[validator] * P`; substitution of `R`
  * `== r[a] * G + r[b] * G + e[validator] * P`; substitution of `R[a]` and 
`R[b]`
  * `== r[a] * G + r[b] * G + e *