Re: [bitcoin-dev] BitVM: Compute Anything on Bitcoin

2023-10-09 Thread Lloyd Fournier via bitcoin-dev
Hi Robin,

Fascinating result.
Is it possible to give us an example of a protocol that uses BitVM that
couldn't otherwise be built? I'm guessing it's possible to exchange
Bitcoin with someone who can prove they know some input to a binary
circuit that gives some output.

Thanks!

LL

On Tue, 10 Oct 2023 at 01:05, Robin Linus via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Abstract. BitVM is a computing paradigm to express Turing-complete Bitcoin
> contracts. This requires no changes to the network’s consensus rules.
> Rather than executing computations on Bitcoin, they are merely verified,
> similarly to optimistic rollups. A prover makes a claim that a given
> function evaluates for some particular inputs to some specific output. If
> that claim is false, then the verifier can perform a succinct fraud proof
> and punish the prover. Using this mechanism, any computable function can be
> verified on Bitcoin. Committing to a large program in a Taproot address
> requires significant amounts of off-chain computation and communication;
> however, the resulting on-chain footprint is minimal. As long as both
> parties collaborate, they can perform arbitrarily complex, stateful
> off-chain computation, without leaving any trace in the chain. On-chain
> execution is required only in case of a dispute.
>
> https://bitvm.org/bitvm.pdf
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>


Re: [bitcoin-dev] Blinded 2-party Musig2

2023-08-14 Thread Lloyd Fournier via bitcoin-dev
Hi Tom,

Thanks for the explanation. There's one remaining thing that isn't clear:
do you actually require parallel signing requests under the same key? It
seems to me that the protocol is very sequential, in that you are passing
a utxo from one point to another in sequence. If so, then the Schnorr
blind sigs problem doesn't apply.

LL

On Thu, 10 Aug 2023 at 20:00, Tom Trevethan  wrote:

> HI Lloyd,
>
> Yes, the blind signatures are for bitcoin transactions (these are
> timelocked 'backup txs' if the server disappears). This is not standard
> 'Schnorr blind signature' (like
> https://suredbits.com/schnorr-applications-blind-signatures/) but a
> 2-of-2 MuSig where two keys are required to generate the full signature,
> but one of them (the server) learns neither the full key, the message
> (tx), nor the final signature.
>
> The server is explicitly trusted to report the total number of partial
> signatures it has generated for a specific key. If you can verify that ALL
> the signatures generated for a specific key were generated correctly, and
> the total number of them matches the number reported by the server, then
> there can be no other malicious valid signatures in existence. In this
> statechain protocol, the receiver of a coin must check all previous backup
> txs are valid, and that the total number of them matches the server
> reported signature count before accepting it.
>
> On Thu, Aug 10, 2023 at 4:30 AM Lloyd Fournier 
> wrote:
>
>> Hi Tom,
>>
>> These questions might be wrongheaded since I'm not familiar enough with
>> the statechain protocol. Here goes:
>>
>> Why do you need to use schnorr blind signatures for this? Are the blind
>> signatures being used to produce on-chain tx signatures or are they just
>> for credentials for transferring ownership (or are they for both). If they
>> are for on-chain txs then you won't be able to enforce that the signature
>> used was not generated maliciously so it doesn't seem to me like your trick
>> above would help you here. I can fully verify that the state chain
>> signatures were all produced non-maliciously but then there may be another
>> hidden forged signature that can take the on-chain funds that were produced
>> by malicious signing sessions I was never aware of (or how can you be sure
>> this isn't the case).
>>
>> Following on from that point, is it not possible to enforce sequential
>> blind signing in the statechain protocol under each key? With that you
>> don't have the problem of Wagner's attack.
>>
>> LL
>>
>> On Wed, 9 Aug 2023 at 23:34, Tom Trevethan via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> @moonsettler
>>>
>>> When anyone receives a coin (either as payment or as part of a swap)
>>> they need to perform a verification of all previous signatures and
>>> corresponding backup txs. If anything is missing, then the verification
>>> will fail. So anyone 'breaking the chain' by signing something
>>> incorrectly simply cannot then send that coin on.
>>>
>>> The second point is important. All the 'transfer data' (i.e. new and all
>>> previous backup txs, signatures and values) is encrypted with the new owner
>>> public key. But the server cannot know this pubkey as this would enable it
>>> to compute the full coin pubkey and identify it on-chain. Currently, the
>>> server identifies individual coins (shared keys) with a statechain_id
>>> identifier (unrelated to the coin outpoint), which is used by the coin
>>> receiver to retrieve the transfer data via the API. But this means the
>>> receiver must be sent this identifier out-of-band by the sender, and also
>>> that if anyone else learns it they can corrupt the server key
>>> share/signature chain via the API. One solution to this is to have a second
>>> non-identifying key used only for authenticating with the server. This
>>> would mean a 'statechain address' would then be composed of 2 separate
>>> pubkeys 1) for the shared taproot address and 2) for server authentication.
>>>
>>> Thanks,
>>>
>>> Tom
>>>
>>> On Tue, Aug 8, 2023 at 6:44 PM moonsettler 
>>> wrote:
>>>
 Very nice! Is there an authentication mechanism to avoid 'breaking the
 chain' with an unverifiable new state by a previous owner? Can the current
 owner prove the knowledge of a non-identifying secret he learned as
 recipient to the server that is related to the statechain tip?

 BR,
 moonsettler

 --- Original Message ---
 On Monday, August 7th, 2023 at 2:55 AM, Tom Trevethan via bitcoin-dev <
 bitcoin-dev@lists.linuxfoundation.org> wrote:

 A follow up to this, I have updated the blinded statechain protocol
 description to include the mitigation to the Wagner attack by requiring the
 server to send R1 values only after commitments made to the server of the
 R2 values used by the user, and that all the previous computed c values are
 verified by each new statecoin owner.
 

Re: [bitcoin-dev] Blinded 2-party Musig2

2023-08-10 Thread Lloyd Fournier via bitcoin-dev
Hi Tom,

These questions might be wrongheaded since I'm not familiar enough with the
statechain protocol. Here goes:

Why do you need to use schnorr blind signatures for this? Are the blind
signatures being used to produce on-chain tx signatures or are they just
for credentials for transferring ownership (or are they for both). If they
are for on-chain txs then you won't be able to enforce that the signature
used was not generated maliciously so it doesn't seem to me like your trick
above would help you here. I can fully verify that the state chain
signatures were all produced non-maliciously but then there may be another
hidden forged signature that can take the on-chain funds that were produced
by malicious signing sessions I was never aware of (or how can you be sure
this isn't the case).

Following on from that point, is it not possible to enforce sequential
blind signing in the statechain protocol under each key? With that you
don't have the problem of Wagner's attack.

LL

On Wed, 9 Aug 2023 at 23:34, Tom Trevethan via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> @moonsettler
>
> When anyone receives a coin (either as payment or as part of a swap) they
> need to perform a verification of all previous signatures and
> corresponding backup txs. If anything is missing, then the verification
> will fail. So anyone 'breaking the chain' by signing something
> incorrectly simply cannot then send that coin on.
>
> The second point is important. All the 'transfer data' (i.e. new and all
> previous backup txs, signatures and values) is encrypted with the new owner
> public key. But the server cannot know this pubkey as this would enable it
> to compute the full coin pubkey and identify it on-chain. Currently, the
> server identifies individual coins (shared keys) with a statechain_id
> identifier (unrelated to the coin outpoint), which is used by the coin
> receiver to retrieve the transfer data via the API. But this means the
> receiver must be sent this identifier out-of-band by the sender, and also
> that if anyone else learns it they can corrupt the server key
> share/signature chain via the API. One solution to this is to have a second
> non-identifying key used only for authenticating with the server. This
> would mean a 'statechain address' would then be composed of 2 separate
> pubkeys 1) for the shared taproot address and 2) for server authentication.
>
> Thanks,
>
> Tom
>
> On Tue, Aug 8, 2023 at 6:44 PM moonsettler 
> wrote:
>
>> Very nice! Is there an authentication mechanism to avoid 'breaking the
>> chain' with an unverifiable new state by a previous owner? Can the current
>> owner prove the knowledge of a non-identifying secret he learned as
>> recipient to the server that is related to the statechain tip?
>>
>> BR,
>> moonsettler
>>
>> --- Original Message ---
>> On Monday, August 7th, 2023 at 2:55 AM, Tom Trevethan via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>> A follow up to this, I have updated the blinded statechain protocol
>> description to include the mitigation to the Wagner attack by requiring the
>> server to send R1 values only after commitments made to the server of the
>> R2 values used by the user, and that all the previous computed c values are
>> verified by each new statecoin owner.
>> https://github.com/commerceblock/mercury/blob/master/layer/protocol.md
>>
>> Essentially, the attack is possible because the server cannot verify that
>> the blinded challenge (c) value it has been sent by the user has been
>> computed honestly (i.e. c = SHA256(X1 + X2, R1 + R2, m)), however this CAN
>> be verified by each new owner of a statecoin for all the previous
>> signatures.
>>
>> Each time an owner cooperates with the server to generate a signature on
>> a backup tx, the server will require that the owner send a commitment to
>> their R2 value: e.g. SHA256(R2). The server will store this value before
>> responding with its R1 value. This way, the owner cannot choose the value
>> of R2 (and hence c).
>>
>> When the statecoin is received by a new owner, they will receive ALL
>> previous signed backup txs for that coin from the sender, and all the
>> corresponding R2 values used for each signature. They will then ask the
>> server (for each previous signature), the commitments SHA256(R2) and the
>> corresponding server generated R1 value and c value used. The new owner
>> will then verify that each backup tx is valid, and that each c value was
>> computed c = SHA256(X1 + X2, R1 + R2, m) and each commitment equals
>> SHA256(R2). This ensures that a previous owner could not have generated
>> more valid signatures than the server has partially signed.
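The receiver-side checks Tom describes can be sketched as follows. This is a toy illustration, not the Mercury implementation: the "points" are stand-in byte strings, `point_add` is a placeholder for secp256k1 point addition, and the field names are invented.

```python
import hashlib

# Toy sketch of the receiver-side checks described above (not the Mercury
# implementation). "Points" are stand-in byte strings and point_add is a
# placeholder for secp256k1 point addition; field names are invented.

def sha256(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

def point_add(a: bytes, b: bytes) -> bytes:
    # stand-in for EC point addition, just to make the sketch executable
    return bytes(x ^ y for x, y in zip(a, b))

def verify_history(X1, X2, history, server_count):
    # history: one entry per previous backup tx, assembled from the sender's
    # transfer data (R2, m) and the server's records (commit_R2, R1, c)
    if len(history) != server_count:
        return False  # count mismatch => possible hidden signatures
    for sig in history:
        # the revealed R2 must match the commitment the server stored
        if sha256(sig["R2"]) != sig["commit_R2"]:
            return False
        # the challenge must have been computed honestly:
        # c = SHA256(X1 + X2, R1 + R2, m)
        c = sha256(point_add(X1, X2), point_add(sig["R1"], sig["R2"]), sig["m"])
        if sig["c"] != c:
            return False
    return True
```

If either check fails for any previous backup tx, the receiver refuses the coin, which is exactly the "cannot send that coin on" property described above.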
>>
>> On Thu, Jul 27, 2023 at 2:25 PM Tom Trevethan 
>> wrote:
>>
>>>
>>> On Thu, Jul 27, 2023 at 9:08 AM Jonas Nick  wrote:
>>>
 No, proof of knowledge of the r values used to generate each R does not
 prevent
 Wagner's attack. I wrote

 > Using Wagner's algorithm, choose 

Re: [bitcoin-dev] Blinded 2-party Musig2

2023-07-27 Thread Lloyd Fournier via bitcoin-dev
Hello all,

1. No, proof of knowledge of each R does *NOT* prevent Wagner's attack.
2. In my mind, a generic blind signing service is sufficient for doing
blinded MuSig, MuSig2, FROST or whatever without the blind signing service
knowing. You don't need a specialized MuSig2 blind signing service to
extract MuSig2-compatible shares from it. You can just add the MuSig tweak
(and/or BIP32 etc.) to their key when you do the blind signing request
(this seemed to be what the OP was suggesting). Making the server have
multiple nonces as in MuSig2 proper doesn't help the server's security at
all. I think the problem is simply reduced to creating a secure blind
Schnorr signing service. Jonas mentioned some papers which show how to do
that. The question is mostly whether you can practically integrate those
tricks into your protocol, which might be tricky.
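The "just add the tweak to their key" point can be illustrated with a toy in-the-exponent sketch: a point x*G is represented by the bare scalar x, `blind_sign_server` is a hypothetical API, and the numbers are made up. This is not real cryptography, only the algebra of folding a tweak into a blind signing request.

```python
# Toy sketch of folding a MuSig/BIP32-style tweak into a plain blind
# signing request. We work "in the exponent": a point x*G is represented
# by the scalar x itself. blind_sign_server is a hypothetical API.

n = 2**256 - 189  # stand-in for the group order

def blind_sign_server(k: int, x: int, c: int) -> int:
    # server holds nonce secret k and key x; it signs whatever blinded
    # challenge c the client sends, learning nothing about any tweak
    return (k + c * x) % n

x_server = 1234567  # server's key (pubkey X = x_server*G)
t = 42              # tweak the client wants applied (MuSig and/or BIP32)
k = 999             # server's nonce secret (R = k*G)
c = 777             # challenge the client computed for the tweaked key X + t*G

s = blind_sign_server(k, x_server, c)
s_tweaked = (s + c * t) % n  # client adds the tweak term itself
# the result verifies against the tweaked key: s'*G == R + c*(X + t*G)
assert s_tweaked == (k + c * (x_server + t)) % n
```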

LL

On Thu, 27 Jul 2023 at 08:20, Erik Aronesty via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> correct.  you cannot select R if it is shipped with a POP
>
> On Wed, Jul 26, 2023, 4:35 PM Tom Trevethan  wrote:
>
>> Not 'signing' but 'secret', i.e. the r values (ephemeral keys). Proof
>> of knowledge of the r values used to generate each R prevents the
>> Wagner attack, no?
>>
>> On Wed, Jul 26, 2023 at 8:59 PM Jonas Nick  wrote:
>>
>>> None of the attacks mentioned in this thread so far (ZmnSCPxj mentioned
>>> an
>>> attack on the nonces, I mentioned an attack on the challenge c) can be
>>> prevented
>>> by proving knowledge of the signing key (usually known as proof of
>>> possession,
>>> PoP).
>>>


Re: [bitcoin-dev] On adaptor security (in protocols)

2023-05-11 Thread Lloyd Fournier via bitcoin-dev
On Thu, 11 May 2023 at 13:12, AdamISZ  wrote:

>
> A sidebar, but it immediately brings it to mind: the canonical adaptor
> based swap, you can do it with only one half being multisig like this,
> right? Alice can encrypt the single-key signature for her payment to Bob,
> with the encryption key being T= sG, where s is the partial signature of
> Bob, on the payout from a multisig, to Alice. That way Bob only gets his
> money in the single sig (A->B) tx, if he reveals his partial sig on the
> multisig. I don't think it's of practical interest (1 multisig instead of
> 2? meh), but .. I don't see anywhere that potential variant being written
> down? Is there some obvious flaw with that?
>

I think the problem is that Alice can still move the funds, even after Bob
decrypts and broadcasts (thereby revealing s), if her transaction gets
confirmed first. I think you always need a multisig in these kinds of
situations, but it need not be a key-aggregated multisig like MuSig --
this was the point I wanted to make (in retrospect, clumsily). I don't
think I can name a useful use of a single-signer adaptor signature in
Bitcoin, at least not without some other kind of spending constraint. So
your intuitive point holds in practice most of the time.

LL

Cheers,
> waxwing/AdamISZ
>
> Sent with Proton Mail <https://proton.me/> secure email.
>
> --- Original Message -------
> On Monday, May 8th, 2023 at 05:37, Lloyd Fournier via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Hi Waxwing,
>
> On Tue, 2 May 2023 at 02:37, AdamISZ  wrote:
>
>> Hi Lloyd,
>> thanks for taking a look.
>>
>> > I think your view of the uselessness of single signer adaptors is too
>> pessimistic. The claim you make is that they "don't provide a way to create
>> enforcement that the publication of signature on a pre-defined message will
>> reveal a secret'' and so are useless. I think this is wrong. If I hold a
>> secret key for X and create a signature adaptor with some encryption key Y
>> with message m and do not create any further signatures (adaptor or
>> otherwise) on m, then any signature on m that is published necessarily
>> reveals the secret on Y to me. This is very useful and has already been
>> used for years by DLCs in production.
>>
>> I'm struggling with this one - say I hold privkey x for pubkey X. And I
>> publish adaptor for a point Y (DL y) for message m, like: s' = k - y +
>> H(R|X|m)x with k the nonce and R the nonce point.
>>
>> And to get the basics clear first, if I publish s = k + H(R|X|m)x then of
>> course the secret y is revealed.
>>
>> What do you mean in saying "any signature on m that is published reveals
>> y"? Clearly you don't mean any signature on any key (i.e. not the key X).
>> But I also can't parse it if you mean "any signature on m using key X",
>> because if I go ahead and publish s = k_2 + H(R_2|X|m)x, it has no
>> algebraic relationship to the adaptor s' as defined above, right?
>>
>
> Yes, but suppose you do *not* create another signature (adaptor or
> otherwise) on m. Since you've only generated one adaptor signature on m
> and no other signatures on m, there is no possibility that a signature
> on m that appears under your key would not reveal y to you. This is a
> useful property in theory and in practice.
>
>
>> I think the point of confusion is maybe about the DLC construct? I
>> referenced that in Section 4.2, parenthetically, because it's analogous in
>> one sense - in MuSig(2) you're fixing R via a negotiation, whereas in
>> Dryja's construct you're fixing R "by definition". When I was talking about
>> single key Schnorr, I was saying that's what's missing, and thereby making
>> them useless.
>>
>> I was not referencing the DLC oracle attestation protocol - I am pointing
> out that DLC client implementations have been using single signer adaptor
> signatures as signature encryption in practice for years for the
> transaction signatures. There are even channel implementations using them
> as well as atomic swaps doing this iirc. It's a pretty useful thing!
>
> Cheers,
>
> LL
>
>
>


Re: [bitcoin-dev] On adaptor security (in protocols)

2023-05-08 Thread Lloyd Fournier via bitcoin-dev
Hi Waxwing,

On Tue, 2 May 2023 at 02:37, AdamISZ  wrote:

> Hi Lloyd,
> thanks for taking a look.
>
> > I think your view of the uselessness of single signer adaptors is too
> pessimistic. The claim you make is that they "don't provide a way to create
> enforcement that the publication of signature on a pre-defined message will
> reveal a secret'' and so are useless. I think this is wrong. If I hold a
> secret key for X and create a signature adaptor with some encryption key Y
> with message m and do not create any further signatures (adaptor or
> otherwise) on m, then any signature on m that is published necessarily
> reveals the secret on Y to me. This is very useful and has already been
> used for years by DLCs in production.
>
> I'm struggling with this one - say I hold privkey x for pubkey X. And I
> publish adaptor for a point Y (DL y) for message m, like: s' = k - y +
> H(R|X|m)x with k the nonce and R the nonce point.
>
> And to get the basics clear first, if I publish s = k + H(R|X|m)x then of
> course the secret y is revealed.
>
> What do you mean in saying "any signature on m that is published reveals
> y"? Clearly you don't mean any signature on any key (i.e. not the key X).
> But I also can't parse it if you mean "any signature on m using key X",
> because if I go ahead and publish s = k_2 + H(R_2|X|m)x, it has no
> algebraic relationship to the adaptor s' as defined above, right?
>

Yes, but suppose you do *not* create another signature (adaptor or
otherwise) on m. Since you've only generated one adaptor signature on m
and no other signatures on m, there is no possibility that a signature on
m that appears under your key would not reveal y to you. This is a useful
property in theory and in practice.
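Concretely, a toy in-the-exponent sketch of the equations quoted above (invented numbers, not real crypto): if the only thing ever produced on m is the adaptor s' = k - y + H(R|X|m)x, then the completed signature s = k + H(R|X|m)x with that same nonce satisfies s - s' = y.

```python
# Toy numbers; "in the exponent" sketch of the adaptor equations above.
n = 2**256 - 189  # stand-in group order
x = 11111  # signing key (pubkey X = x*G)
k = 22222  # nonce (R = k*G)
y = 33333  # adaptor secret (Y = y*G)
c = 44444  # the challenge H(R|X|m)

s_adaptor = (k - y + c * x) % n  # the one adaptor signature published on m
s = (k + c * x) % n              # the completed signature on m under (X, R)

# seeing s published reveals the secret y
assert (s - s_adaptor) % n == y
```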


>
> I think the point of confusion is maybe about the DLC construct? I
> referenced that in Section 4.2, parenthetically, because it's analogous in
> one sense - in MuSig(2) you're fixing R via a negotiation, whereas in
> Dryja's construct you're fixing R "by definition". When I was talking about
> single key Schnorr, I was saying that's what's missing, and thereby making
> them useless.
>
>
I was not referencing the DLC oracle attestation protocol - I am pointing
out that DLC client implementations have been using single signer adaptor
signatures as signature encryption in practice for years for the
transaction signatures. There are even channel implementations using them
as well as atomic swaps doing this iirc. It's a pretty useful thing!

Cheers,

LL


Re: [bitcoin-dev] On adaptor security (in protocols)

2023-05-01 Thread Lloyd Fournier via bitcoin-dev
Hi waxwing,

I think your view of the uselessness of single signer adaptors is too
pessimistic. The claim you make is that they "don't provide a way to
create enforcement that the publication of signature on a pre-defined
message will reveal a secret" and so are useless. I think this is wrong.
If I hold a secret key for X and create a signature adaptor with some
encryption key Y with message m and do not create any further signatures
(adaptor or otherwise) on m, then any signature on m that is published
necessarily reveals the secret on Y to me. This is very useful and has
already been used for years by DLCs in production.

I haven't read the proofs in detail but I am optimistic about your
approach. One thing I was considering while reading is that you could
make a general proof against all secure Schnorr signing schemes in the
ROM by simply extending the ROM forwarding approach from Aumayer et al to all
"tweak" operations on the elements that go into the Schnorr challenge hash
i.e. the public key and the nonce. After all whether it's MuSig2, MuSig,
FROST they all must call some RO. I think we can prove that if we apply any
bijective map to the (X,R) tuple before they go into the challenge hash
function then any Schnorr-like scheme that was secure before will be secure
when bip32/TR tweaking (i.e. tweaking X) and adaptor tweaking (tweaking R)
is applied to it. This would be cool because then we could prove all these
variants secure for all schemes past and present in one go. I haven't got a
concrete approach but the proofs I've looked at all seem to share this
structure.
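For concreteness, the two tweak maps in question might be written as follows (my notation, not taken from the paper):

```latex
% BIP32/Taproot tweaking shifts the key; adaptor tweaking shifts the nonce.
% Both maps are bijections on the product group, so hash queries on the
% tweaked pair are a relabeling of hash queries on the original pair in
% the ROM argument.
\begin{align*}
  \phi_{\text{tweak}}   &: (X, R) \mapsto (X + tG,\; R)\\
  \phi_{\text{adaptor}} &: (X, R) \mapsto (X,\; R + Y)
\end{align*}
```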

Cheers,

LL

On Sun, 30 Apr 2023 at 00:20, AdamISZ via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi list,
> I was motivated to look more carefully at the question of the security of
> using signature adaptors after recently getting quite enthused about the
> idea of using adaptors across N signing sessions to do a kind of multiparty
> swap. But of course security analysis is also much more important for the
> base case of 2 party swapping, which is of .. some considerable practical
> importance :)
>
> There is work (referenced in Section 3 here) that's pretty substantial on
> "how secure are adaptors" (think in terms of security reductions) already
> from I guess the 2019-2021 period. But I wanted to get into scenarios of
> multiple adaptors at once or multiple signing sessions at once with the
> *same* adaptor (as mentioned above, probably this is the most important
> scenario).
>
> To be clear this is the work of an amateur and is currently unreviewed -
> hence (a) me posting it here and (b) putting the paper on github so people
> can easily add specific corrections or comments if they like:
>
> https://github.com/AdamISZ/AdaptorSecurityDoc/blob/main/adaptorsecurity.pdf
>
> I'll note that I did the analysis only around MuSig, not MuSig2.
>
> The penultimate ("third case"), that as mentioned, of "multiple signing
> sessions, same adaptor" proved to be the most interesting: in trying to
> reduce this to ECDLP I found an issue around sequencing. It may just be
> irrelevant but I'd be curious to hear what others think about that.
>
> If nothing else, I'd be very interested to hear what experts in the field
> have to say about security reductions for this primitive in the case of
> multiple concurrent signing sessions (which of course has been analyzed
> very carefully already for base MuSig(2)).
>
> Cheers,
> AdamISZ/waxwing
>
>
>
>
> Sent with Proton Mail secure email.


Re: [bitcoin-dev] Unenforceable fee obligations in multiparty protocols with Taproot inputs

2023-02-07 Thread Lloyd Fournier via bitcoin-dev
Hi Yuval,

This is an interesting attack. Usually I think of spending with a
big-weight witness in the context of slowing down confirmation of a
transaction, especially a DLC creation tx. There you can delay its
confirmation past some time (e.g. see if your team won the game, and then
either try to confirm it by providing the slimmed-down witness or cancel
it by double spending). In this case you are not trying to delay it but to
dilute your portion of the fee.
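With some assumed numbers, the dilution is easy to see: everyone's SIGHASH_ALL signatures fix the absolute fee but not the final weight, so a fatter witness only lowers the feerate.

```python
# Illustrative (assumed) numbers showing the feerate dilution: the parties'
# SIGHASH_ALL signatures commit to the absolute fee, not the final weight.
absolute_fee = 50_000     # sats, fixed once all parties have signed
estimated_weight = 2_000  # weight units, assuming Mallory key-path spends
actual_weight = 10_000    # Mallory reveals a large script-path witness

estimated_feerate = absolute_fee / (estimated_weight / 4)  # sat/vB
actual_feerate = absolute_fee / (actual_weight / 4)        # sat/vB

assert estimated_feerate == 100.0
assert actual_feerate == 20.0  # everyone else's fee contribution is diluted
```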

Another mitigation is to aggressively RBF double-spend your input any time
a counterparty doesn't use the spending path they said they would, and
don't deal with them again. Of course, various pinning attacks may prevent
this depending on how your joint tx is structured.

LL

On Tue, 7 Feb 2023 at 13:59, Yuval Kogman via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> ## Summary
>
> Since Taproot (more generally any kind of MAST) spends have variable size
> which
> depends on the path being used, the last such input to be signed in a
> multiparty
> transaction can always use a larger than estimated signature to unfairly
> extract
> a fee contribution from the other parties to the transaction (keeping the
> absolute fees the same and reducing the feerate for the transaction).
>
> ## Attack Scenario
>
> Alice et al wish to perform a multiparty transaction, such as a CoinJoin or
> lightning dual funding at a relatively high feerate.
>
> Mallory has a P2TR output with a large script spend path, e.g. an ordinal
> inscription commitment transaction output.
>
> Mallory registers this coin as an input into the multiparty transaction
> with a
> fee obligation calculated on the basis of a key spend. When all other
> participants have provided signatures, the script spend path can be used.
>
> Since the absolute fee amount is already committed to by the provided
> (`SIGHASH_ALL`) signatures but the total transaction weight is not,
> Mallory can
> broadcast any valid signatures up to the maximum standard weight and
> minimum
> relay fees, or in collusion with a miner, up to consensus limits.
>
> This effectively steals a fee from Alice et al, as their signatures do not
> commit to a feerate directly or indirectly.
>
> ## Mitigations
>
> ### RBF
>
> All parties could negotiate a (series of) transaction(s) ahead of time at a
> lower feerate, giving a lower bound minimum feerate that Mallory can force.
>
> ### Minimum Weight Before Signing
>
> Enforcing a minimal weight for all non-witness data in the transaction
> before
> the transaction is considered fully constructed can limit the
> effectiveness of
> this attack, since the difference between the predicted weight and the
> maximum
> weight decreases.
>
> ### Trusted Coordinator
>
> In the centralized setting if BIP-322 ownership proofs are required for
> participation and assuming the server can be trusted not to collude with
> Mallory, the server can reject signatures that do not exercise the same
> spend
> path as the ownership proof, which makes the ownership proof a commitment
> to the
> spend weight of the input.
>
> ### Reputation
>
> Multiparty protocols with publicly verifiable protocol transcripts can be
> provided as weak evidence of a history of honest participation in
> multiparty
> transactions.
>
> A ring signature from keys used in the transaction or its transcript
> committing
> to the new proposed transaction can provide weak evidence for the honesty
> of the
> peer.
>
> Such proofs are more compelling to an entity which has participated in
> (one of)
> the transcripts, or proximal transactions. Incentives are theoretically
> aligned
> if public coordinators publish these transcripts as a kind of server
> reputation.
>
> ### Increasing Costliness
>
> A minimum feerate for the previous transaction or a minimum confirmation
> age
> (coindays destroyed implies time value, analogous to fidelity bonds) can be
> required for inputs to be added, in order to make such attacks less
> lucrative
> (but there is still a positive payoff for the attacker).
>
> ### Signature Ordering
>
> Signatures from potentially exploitative inputs can be required ahead of
> legacy
> or SegWit v0 ones. The prescribed order can be determined based on
> reputation or
> costliness as described in the previous paragraphs.


Re: [bitcoin-dev] Password-protected wallet on Taproot

2022-05-04 Thread Lloyd Fournier via bitcoin-dev
Hi Vjudeu,

Perhaps this could make sense in some setting, e.g. instead of a hardware
device which protects your secret key via a PIN, you use a PIN-less device
but create a strong password and use a proper password hash to derive
another key, and put the two in a 2-of-2. But make sure you don't use
SHA-256 to hash the password -- use a proper password hash. Keep in mind
there are also BIP39 passphrases, which do a similar thing, but that does
involve entering them into the possibly malicious hardware device.
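A minimal sketch of the "proper password hash" suggestion, using scrypt from the Python standard library. The scrypt parameters and the reduction mod the curve order are illustrative, not a vetted key-derivation policy.

```python
import hashlib
import os

# Sketch: derive the second 2-of-2 key from the password with a memory-hard
# password hash (scrypt), never bare SHA-256. Parameters are illustrative.
SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def password_key(password: str, salt: bytes) -> int:
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)
    # reduce into the secp256k1 scalar field to use as a private key
    return int.from_bytes(digest, "big") % SECP256K1_N

salt = os.urandom(16)  # store the salt alongside the wallet
k = password_key("correct horse battery staple", salt)
assert 0 < k < SECP256K1_N
```

A random per-wallet salt (unlike a plain brainwallet) prevents precomputed dictionary attacks against the password-derived half of the 2-of-2.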

Cheers,

LL

On Mon, 2 May 2022 at 03:56, vjudeu via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> It seems that Taproot allows us to protect each individual public key with
> a password. It could work in this way: we have some normal, Taproot-based
> public key, that is generated in a secure and random way, as it is today in
> Bitcoin Core wallet. Then, we can create another public key, just by taking
> password from the user, executing SHA-256 on that, and using it as a
> private key, so the second key will be just a brainwallet. Then, we can
> combine them in a Schnorr signature, forming 2-of-2 multisig, where the
> first key is totally random, and the second key is just a brainwallet that
> takes a password chosen by the user. By default, each key can be protected
> with the same password, used for the whole wallet, but it could be possible
> to choose different passwords for different addresses, if needed.
> Descriptors should handle that nicely, in the same way as they can be used
> to handle any other 2-of-2 multisig.


Re: [bitcoin-dev] Simple step one for quantum

2022-04-09 Thread Lloyd Fournier via bitcoin-dev
Hey all,

A good first step might be to express this as a research problem on
bitcoinproblems.org! I've had in mind creating a problem page on how to
design a PQ TR commitment in each key so that if QC were to become a
reality we could softfork to enable that spend (and disable normal key path
spends):
https://github.com/bitcoin-problems/bitcoin-problems.github.io/issues/4

Becoming the author/maintainer of this problem is as simple as making a PR
to the repo. The problem doesn't have to be focused on a TR solution but
could be a general description of the problem with that and others as a
potential solution direction.

Cheers,

LL

On Sat, 9 Apr 2022 at 18:39, Christopher Allen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
>
> On Fri, Apr 8, 2022 at 4:33 PM Christopher Allen <
> christoph...@lifewithalacrity.com> wrote:
>
>> That being said, it is interesting research. Here is the best link about
>> this particular approach:
>>
>> https://ntruprime.cr.yp.to/software.html
>>
>
> Also I think this is the original academic paper:
>
> https://eprint.iacr.org/2021/826.pdf
>
> 
>>
> — Christopher Allen ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV dramatically improves DLCs

2022-02-06 Thread Lloyd Fournier via bitcoin-dev
Hi Jeremy,


On Sat, 29 Jan 2022 at 04:21, Jeremy  wrote:

> Lloyd,
>
> This is an excellent write up, the idea and benefits are clear.
>
> Is it correct that in the case of a 3/5th threshold it is a total 10x *
> 30x = 300x improvement? Quite impressive.
>

Yes I think so but I am mostly guessing these numbers. The improvement is
several orders of magnitude. Enough to make almost any payout curve
possible without UX degradation I think.


> I have a few notes of possible added benefits / features of DLCs with CTV:
>
> 1) CTV also enables a "trustless timeout" branch, whereby you can have a
> failover claim that returns funds to both sides.
>
> There are a few ways to do this:
>
> A) The simplest is just an oracle-free  CTV whereby the
> timeout transaction has an absolute/relative timelock after the creation of
> the DLC in question.
>
> B) An alternative approach I like is to have the base DLC have a branch
> ` CTV` which pays into a DLC that is the exact same
> except it removes the just-used branch and replaces it with ` tx)> CTV` which contains a relative timelock R for the desired amount of
> time to resolve. This has the advantage of always guaranteeing at least R
> amount of time since the Oracles have been claimed to be non-live to
> "return funds"  to parties participating
>
>
> 2) CTV DLCs are non-interactive asynchronously third-party unilaterally
> creatable.
>
> What I mean by this is that it is possible for a single party to create a
> DLC on behalf of another user since there is no required per-instance
> pre-signing or randomly generated state. E.g., if Alice wants to create a
> DLC with Bob, and knows the contract details, oracles, and a key for Bob,
> she can create the contract and pay to it unilaterally as a payment to Bob.
>
> This enables use cases like pay-to-DLC addresses. Pay-to-DLC addresses can
> also be constructed and then sent (along with a specific amount) to a third
> party service (such as an exchange or Lightning node) to create DLCs
> without requiring the third party service to do anything other than make
> the payment as requested.
>

This is an interesting point -- I hadn't thought about interactivity prior
to this.

I agree CTV makes possible an on-chain DEX kind of thing where you put in
orders by sending txs to a DLC address generated from a maker's public key.
You could cancel the order by spending out of it via some cancel path. You
need to inform the maker of (i) your public key  (maybe you can use the
same public key as one of the inputs) and (ii) the amount the maker is
meant to put in (use fixed denominations?).

Although that's cool I'm not really a big fan of "putting the order book
on-chain" ideas because it brings up some of the problems that EVM DEXs
have.
I like centralized non-custodial order books.
For this I don't think that CTV makes a qualitative improvement given we
can use ANYONECANPAY to get some non-interactivity.
For example here's an alternative design:

The *taker* provides an HTTP REST API where you (a maker) can:

1. POST an order using SIGHASH_ANYONECANPAY signed inputs and contract
details needed to generate the single output (the CTV DLC). The maker can
take the signatures and complete the transaction (they need to provide an
exact input amount of course).
2. DELETE an order -- the maker does some sort of revocation on the DLC
output e.g. signs something giving away all the coins in one of the
branches. If a malicious taker refuses to delete you just double spend one
of your inputs.

If the taker wants to take a non-deleted order they *could* just finish the
transaction but if they still have a connection open with the maker then
they could re-contact them to do a normal tx signing (rather than using
the ANYONECANPAY signatures).
The obvious advantage here is that there are no transactions on-chain
unless the order is taken.
Additionally, the maker can send the same order to multiple takers -- the
takers will cancel each other's transactions should they broadcast the
transactions.
Looking forward to see if you can come up with something better than this
with CTV.
The above is suboptimal as getting both sides to have a change output is
hard but I think it's also difficult in your suggestion.
It might be better to use SIGHASH_SINGLE + ANYONECANPAY so the maker has to
be the one to provide the right input amount but the taker can choose their
change output and the fee...
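The sighash-flag trade-off above (ALL|ANYONECANPAY for a fixed output set vs SINGLE|ANYONECANPAY to free up the taker's change and fee) can be summarised in a rough sketch. The table below is only an illustration of the flag semantics, not a reimplementation of the actual signature-hash rules:

```python
# Rough summary of what each sighash combination commits to. Simplified
# lookup for illustration -- not consensus code.
def committed(flags: set) -> dict:
    return {
        # without ANYONECANPAY every input is committed; with it, only the
        # signer's own input, so others can attach inputs later
        "all_inputs": "ANYONECANPAY" not in flags,
        # ALL commits to every output; SINGLE only to the output at the
        # signer's input index, leaving change/fee to the counterparty
        "all_outputs": not ({"SINGLE", "NONE"} & flags),
        "own_index_output_only": "SINGLE" in flags,
    }

# Maker posts an order with ALL|ANYONECANPAY: the full output set (including
# the CTV DLC output) is fixed, but the taker can add their own inputs.
assert committed({"ANYONECANPAY"}) == {
    "all_inputs": False, "all_outputs": True, "own_index_output_only": False}

# SINGLE|ANYONECANPAY instead lets the taker choose their change output and
# the fee, at the cost of the maker providing the exact input amount.
assert committed({"SINGLE", "ANYONECANPAY"}) == {
    "all_inputs": False, "all_outputs": False, "own_index_output_only": True}
```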


>
> 3) CTV DLCs can be composed in interesting ways
>
> Options over DLCs open up many exciting types of instrument where Alice
> can do things like:
> A) Create a Option expiring in 1 week where Bob can add funds to pay a
> premium and "Open" a DLC on an outcome closing in 1 year
> B) Create an Option expiring in 1 week where one-of-many Bobs can pay the
> premium (on-chain DEX?).
>
>  See https://rubin.io/bitcoin/2021/12/20/advent-23/ for more concrete
> stuff around this.
>
> There are also opportunities for perpetual-like contracts where you could
> combine into 

[bitcoin-dev] CTV dramatically improves DLCs

2022-01-24 Thread Lloyd Fournier via bitcoin-dev
Hi dlc-dev and bitcoin-dev,

tl;dr OP_CTV simplifies and improves performance of DLCs by a factor of *a lot*.

## Introduction

Dryja introduced the idea of Discreet Log Contracts (DLC) in his
breakthrough work[1].
Since then (DLC) has become an umbrella term for Bitcoin protocols
that map oracle secret revelation to an on-chain transaction which
apportions coins accordingly.
The key property that each protocol iteration preserves is that the
oracle is an *oblivious trusted party* -- they do not interact with
the blockchain and it is not possible to tell which event or which
oracle the two parties were betting on with blockchain data alone.

 `OP_CHECKTEMPLATEVERIFY` (CTV) a.k.a. BIP119 [2] is a proposed
upgrade to Bitcoin which is being actively discussed.
CTV makes possible an optimized protocol which improves DLC
performance so dramatically that it solves several user experience
concerns and engineering difficulties.
To me this is the most compelling and practical application of CTV so
I thought it's about time to share it!

## Present state of DLC specification

The DLC specifications[3] use adaptor signatures to condition each
possible payout.
The protocol works roughly like this:

1. Oracle(s) announce events along with a nonce `R` for each event.
Let's say each event has `N` outcomes.
2. Two users who wish to make a bet take the `R` from the oracle
announcement and construct a set of attestation points `S` and their
corresponding payouts.
3. Each attestation point for each of the `N` outcomes is calculated
like `S_i = R + H(R || X || outcome_i) * X` where `X` is the oracle's
static key.
4. The users combine the attestation points into *contract execution
transaction* (CET) points e.g `CET_i = S1_i + S2_i + S3_i`.
   Here `CET_i` is the conjunction (`AND`) between the event outcomes
represented by `S1_i, S2_i, S3_i`.
5. The oracle(s) reveal the attestation `s_i` where `s_i * G = S_i` if
the `i`th outcome is the one that transpired.
6. Either of the parties takes the `s_i`s from each of the
attestations and combines them e.g. `cet_i = s1_i + s2_i + s3_i` and
uses `cet_i` to decrypt the CET adaptor signature encrypted by `CET_i`
and broadcast the transaction.
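As a toy illustration of steps 3-6, here is a sketch where integers modulo a prime stand in for elliptic-curve points (a "point" is just its discrete log, so this is emphatically not secure cryptography, only the algebra):

```python
import hashlib

# Toy model of the attestation algebra in steps 3-6. Integers mod P stand in
# for curve points, so point addition is modular addition and "s*G == S"
# collapses to "s == S". Illustrative only -- not real secp256k1 code.
P = 2**127 - 1

def H(*parts) -> int:
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

outcomes = ["rain", "sun"]

def oracle(seed: str):
    # static secret x (public key X) and per-event nonce r (public R)
    x, r = H(seed, "static"), H(seed, "nonce")
    # step 3: attestation point per outcome: S_i = R + H(R || X || outcome_i) * X
    S = {o: (r + H(r, x, o) * x) % P for o in outcomes}
    # step 5: reveal s_i for the outcome that transpired (s_i * G = S_i)
    attest = lambda o: (r + H(r, x, o) * x) % P
    return S, attest

S1, attest1 = oracle("oracle-1")
S2, attest2 = oracle("oracle-2")

# step 4: combine per-oracle attestation points into CET points (an AND)
CET = {o: (S1[o] + S2[o]) % P for o in outcomes}

# step 6: combine the revealed attestations to get the CET decryption key
cet_sun = (attest1("sun") + attest2("sun")) % P
assert cet_sun == CET["sun"]
```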

## Performance issues with DLCs

In the current DLC protocol both parties compute:
  - `E * N` attestation points where `E` is the number of events you
are combining and `N` is the number of outcomes per event. (1 mul)
  - `C >= E * N` CET adaptor signatures and verify them. (2 mul -- or
with MuSig2, 3 muls).

Note that the number of CETs can be much greater than the number of
attestation points. For example,
if an oracle decomposes the price of BTC/USD into 20 binary digits
e.g. 0..(2^20 -1), you could have
`E=20,N=2,C=2^20`. So the biggest concern for worst case performance
is the adaptor signatures multiplications.

If we take a multiplication as roughly 50 microseconds computing
MuSig2 adaptor signatures for ~6000 CETs would take around a second of
cpu time (each) or so.
6000 CETs is by no means sufficient if you wanted, for example, to bet
on the BTC/USD price per dollar.
Note there may be various ways of precomputing multiplications and
using fast linear combination algorithms and so on but I hope this
provides an idea of the scale of the issue.
Then consider that you may want to use a threshold of oracles which
will combinatorially increase this number (e.g. 3/5 threshold would
10x this).

You also would end up sending data on the order of megabytes to each other.
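To put rough numbers on the above (using the figures assumed in the text; treat them as order-of-magnitude only):

```python
# Back-of-envelope CPU cost of the adaptor-signature approach, using the
# numbers assumed above: ~50 microseconds per point multiplication and 3
# multiplications per MuSig2 adaptor signature.
MUL_US = 50           # microseconds per scalar multiplication (assumed)
MULS_PER_ADAPTOR = 3  # per-CET multiplications with MuSig2 (from the text)

def adaptor_cpu_seconds(num_cets: int) -> float:
    return num_cets * MULS_PER_ADAPTOR * MUL_US / 1e6

print(adaptor_cpu_seconds(6000))     # 0.9 -- "around a second of cpu time"
print(adaptor_cpu_seconds(2 ** 20))  # ~157 -- 20 binary digits of BTC/USD
```

A 3/5 oracle threshold would multiply these figures by roughly 10 on top.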

## committing to each CET in a tapleaf with CHECKTEMPLATEVERIFY

What can we do with OP_CTV + Taproot to improve this?

Instead of creating an adaptor signature for every CET, commit to the
CET with OP_CTV in a tapleaf:

```
<CET-hash> CHECKTEMPLATEVERIFY <CET_i> CHECKSIG
```

When the oracle(s) reveals their attestations either party can combine
them to get the secret key
corresponding to `CET_i` and spend the coins to the CET (whose CTV
hash is `CET-hash`) which
distributes the funds according to the contract.

This replaces all the multiplications needed for the adaptor signature
with a few hashes!
You will still need to compute the `CET_i` which will involve a point
normalisation but it still brings the computational cost per CET down
from hundreds of microseconds to around 5 (per party).
There will be a bit more data on chain (and a small privacy loss) in
the uncooperative case but even with tens of thousands of outcomes
it's only going to roughly double the virtual size of the transaction.
Keep in mind the uncooperative case should hopefully be rare too esp
when we are doing this in channels.

The amount of data that the parties need to exchange is also reduced
to a small constant size.
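A sketch of why the footprint stays small: all the CET tapleaves collapse into one commitment, and each additional CET costs only hashing. The Merkle tree below is deliberately simplified (no BIP341 tagged hashes, leaf versions, or branch sorting), purely to show the shape:

```python
import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def leaf(cet_hash: bytes, cet_point: bytes) -> bytes:
    # stands in for a tapleaf of the form "<CET-hash> CTV <CET_i> CHECKSIG"
    return sha(cet_hash + b"OP_CTV" + cet_point + b"OP_CHECKSIG")

def merkle_root(nodes: list) -> bytes:
    # plain binary Merkle tree; duplicates the last node on odd levels
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [sha(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

# 2^14 CETs collapse into a single 32-byte commitment in the funding output;
# a unilateral spend reveals one leaf plus a log-sized Merkle path.
n = 2 ** 14
root = merkle_root([leaf(sha(b"cet%d" % i), sha(b"pt%d" % i)) for i in range(n)])
assert len(root) == 32
```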

## getting rid of combinatorial complexity of oracle thresholds

Now that we're using script it's very easy to do a threshold along
with the script. e.g. a 2/3:

```
<CET-hash> CHECKTEMPLATEVERIFY
<S1_i> CHECKSIG
<S2_i> CHECKSIGADD
<S3_i> CHECKSIGADD
2 EQUAL
```

The improvement here is that the amount of computation and
communication does not grow combinatorially with the oracle threshold.

Re: [bitcoin-dev] Opinion on proof of stake in future

2021-06-17 Thread Lloyd Fournier via bitcoin-dev
@James wrote:

On Tue, 15 Jun 2021 at 21:13, James MacWhyte  wrote:

>
> @Lloyd wrote:
>
> Of course in reality no one wants to keep their coin holding keys online
>> so in Algorand you can authorize a set of "participation keys"[1] that
>> will be used to create blocks on your coin holding key's behalf.
>> Hopefully you've spotted the problem.
>> You can send your participation keys to any malicious party with a nice
>> website (see random example [2]) offering you a good return.
>> Damn it's still Proof-of-SquareSpace!
>>
>
> I believe we are talking about a comparison to PoW, correct? If you want
> to mine PoW, you need to buy expensive hardware and configure it to work,
> and wait a long time to get any return by solo mining. Or you can join a
> mining pool, which might use your hashing power for nefarious purposes.
>

A mining pool using your hashrate for nefarious purposes can easily be
observed since they send you the contents of the block you are mining
before your hardware starts working on it. This difference is crucial.
Mining pools exist just to reduce income variance.


> Or you might skip the hardware all together and fall for some "cloud
> mining" scheme with a pretty website and a high rate of advertised return.
> So as you can see, Proof-of-SquareSpace exists in PoW as well!
>

I'd agree that "cloud mining" pretty much is Proof-of-SquareSpace for PoW.
Fortunately these services make up a tiny fraction of hashrate.


> The PoS equivalent of buying mining hardware is setting up your own
> validator and not outsourcing that to anyone else. So both PoW and PoS have
> the professional/expert way of participating, and the fraud-prone, amateur
> way of participating. The only difference is, with PoS the
> professional/expert way is accessible to anyone with a raspberry Pi and a
> web connection, which is a much lower barrier to entry than PoW.
>

And yet despite this, the fraud-prone amateur way of participating accounts
for the majority of stake in PoS systems while the professional/expert way
of participating accounts for the overwhelming majority of hashpower in
Bitcoin. It looks like you have elegantly proved my point!

LL
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-06-15 Thread Lloyd Fournier via bitcoin-dev
On Tue, 15 Jun 2021 at 10:59, Lloyd Fournier  wrote:

>
>
> On Tue, 15 Jun 2021 at 02:47, Antoine Riard 
> wrote:
>
>> > This makes a lot of sense as it matches the semantics of what we are
>> trying
>> to achieve: allow the owner of an output (whether an individual or group)
>> to reduce that output's value to pay a higher fee.
>>
>> Note, I think you're still struggling with some trust issue that anchor
>> upgrade is at least eliminating for LN, namely the pre-agreement among a
>> group of signers about the effective feerate to use at some unknown time
>> point in the future. If you authorize your counterparty for a broadcast at
>> feerate X, how do you prevent a broadcast at feerate Y, where Y is far
>> under X, thus maliciously burning a lot of your fee-bumping reserve ?
>>
>> Of course, one mitigation is to make a contribution to a common
>> fee-bumping output reserve proportional to what has been contributed as a
>> funding collateral. Thus disincentivizing misuse of the common fee-bumping
>> reserve in a game-theoretical way. But if you take the example of a LN
>> channel, you're now running into another issue. Off-chain balances might
>> fluctuate in a way that most of the time, your fee-bumping reserve
>> contribution is out-of-proportion with your balance amounts to protect ?
>> And as such enduring some significant timevalue bleeding on your
>> fee-bumping reserve.
>>
>> Single-party managed fee-bumping reserve doesn't seem to suffer from this
>> drawback ?
>>
>
> I claim that what I am suggesting is a single-party managed fee-bumping
> system that solves all fee-bumping requirements of lightning without
> needing external utxos and without additional interaction or fee
> pre-agreement between parties. On the commit tx you have your balance going
> exclusively towards you which you can unilaterally reduce to increase the
> fee up to whatever threshold you want. With a HTLC or PTLC you also always
> have a tx with an output that you can unilaterally drain to bump fee
(either the htlc-success or htlc-timeout). Are you saying that there are
> protocols where this would require pre-arrangement or are you saying that
> it would require pre-arrangement in lightning for some reason I don't see?
>

Ok now I see what I am missing: We don't really know who owns certain
outputs in lightning until the most-recent-state-enforcement mechanism has
done its job. i.e. the outputs are 2-of-2s up until that has been resolved.
I was operating on some simplified imaginary lightning. Indeed this makes
the proposal far less attractive and does require interaction and
pre-agreement. This complexity here makes it worse than just keeping
external fee-bumping utxos around (as undesirable as this is). Thanks for
helping me figure this out.

LL
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-06-14 Thread Lloyd Fournier via bitcoin-dev
On Tue, 15 Jun 2021 at 02:47, Antoine Riard  wrote:

> > This makes a lot of sense as it matches the semantics of what we are
> trying
> to achieve: allow the owner of an output (whether an individual or group)
> to reduce that output's value to pay a higher fee.
>
> Note, I think you're still struggling with some trust issue that anchor
> upgrade is at least eliminating for LN, namely the pre-agreement among a
> group of signers about the effective feerate to use at some unknown time
> point in the future. If you authorize your counterparty for a broadcast at
> feerate X, how do you prevent a broadcast at feerate Y, where Y is far
> under X, thus maliciously burning a lot of your fee-bumping reserve ?
>
> Of course, one mitigation is to make a contribution to a common
> fee-bumping output reserve proportional to what has been contributed as a
> funding collateral. Thus disincentivizing misuse of the common fee-bumping
> reserve in a game-theoretical way. But if you take the example of a LN
> channel, you're now running into another issue. Off-chain balances might
> fluctuate in a way that most of the time, your fee-bumping reserve
> contribution is out-of-proportion with your balance amounts to protect ?
> And as such enduring some significant timevalue bleeding on your
> fee-bumping reserve.
>
> Single-party managed fee-bumping reserve doesn't seem to suffer from this
> drawback ?
>

I claim that what I am suggesting is a single-party managed fee-bumping
system that solves all fee-bumping requirements of lightning without
needing external utxos and without additional interaction or fee
pre-agreement between parties. On the commit tx you have your balance going
exclusively towards you which you can unilaterally reduce to increase the
fee up to whatever threshold you want. With a HTLC or PTLC you also always
have a tx with an output that you can unilaterally drain to bump fee
(either the htlc-success or htlc-timeout). Are you saying that there are
protocols where this would require pre-arrangement or are you saying that
it would require pre-arrangement in lightning for some reason I don't see?

To further emphasise the generality of this idea you can easily imagine a
world where this is enabled on all Bitcoin transactions (of course you have
to stomach tx malleability -- a bit more palatable with ANYPREVOUT
everywhere). Even for a normal wallet-to-wallet payment the receiver could
efficiently increase the tx fee by making a signature under the key of
their output and replacing the original tx without interacting with the
sender who actually provided the funds for the payment.

Cheers,

LL
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-06-13 Thread Lloyd Fournier via bitcoin-dev
On Fri, 11 Jun 2021 at 07:45, Antoine Riard  wrote:

> Hi Lloyd,
>
> Thanks for this tx mutation proposal extending the scope of fee-bumping
> techniques. IIUC, the <index> serves as a pointer to increase the
> output amount by <value> to recompute the transaction hash
> against which the original signature is valid ?
>

Right.


> Let's do a quick analysis of this scheme.
> * onchain footprint : one tapleaf per contract participant, with O(log n)
> increase of witness size, also one output per contract participant
>

Yes but we can fix this (see below).

* tx-relay bandwidth rebroadcast : assuming aforementioned in-place mempool
> substitution policy, the mutated transaction
>
* batching : fee-bumping value is extract from contract transaction itself,
> so O(n) per contract
> * mempool flexibility : the mutated transaction
> * watchtower key management : to enable outsourcing, the mutating key must
> be shared, in theory enabling contract value siphoning to miner fees ?
>

Yes. You could use OP_LESSTHAN to make sure the value being deducted by the
watchtower is not above a threshold.


> Further, I think tx mutation scheme can be achieved in another way, with
> SIGHASH_ANYAMOUNT. A contract participant tapscript will be the following :
>
>  
>
> Where  is committed with SIGHASH_ANYAMOUNT, blanking
> nValue of one or more outputs. That way, the fee-to-contract-value
> distribution can be unilaterally finalized at a later time through the
> finalizing key [0].
>

Yes, that's also a way to do it. I was trying to preserve the original
external key signature in my attempt but this is probably not necessary. L2
protocols could just exchange two signatures instead. One optimistic one on
the external key and one pessimistic SIGHASH_ANYAMOUNT one on the
<finalizing key>.


> Note, I think that the tx mutation proposal relies on interactivity in the
> worst-case scenario where a counterparty wants to increase its fee-bumping
> output from the contract balance. This interactivity may lure a
> counterparty to alway lock the worst-case fee-bumping reserve in the
> output. I believe anchor output enables more "real-time" fee-bumping
> reserve adjustment ?
>

Hmmm well I was hoping that you wouldn't need interaction ever. I can see
that my commitment TX example was too contrived because it has balance
outputs that go exclusively to one party.
Let's take a better example: A PTLC output with both timeout and success
pre-signed transactions spending from it. We must only let the person
offering the PTLC reduce the output of the timeout tx and the converse for
the success tx.
Note very carefully that if we naively apply OP_CHECKSIG_MUTATED or
SIGHASH_ANYAMOUNT with one tapleaf for each party then we risk one party
being able to lower the other party's output by doing a switcharoo on the
tapleaf after they see the signature for their counterparty's tx in the
mempool. In your example you could fix it by having a different
<finalizing key> per party but this means we can't compress <finalizing key>
by just using the taproot internal/external key.

What about this: Instead of party specific "finalizing_alice_key" or
p1-fee-bump-key as I denoted it, we just use the key of the output whose
value we are reducing. This also solves the O(log(n)) tapleaves for
OP_CHECKSIG_MUTATED approach as well -- just have one tapleaf for fee
bumping but authorize it under the key of the output we are reducing. Thus
we need something like OP_PUSH_TAPROOT_OUTPUT_KEY <output index> which
takes the taproot external key at that output (fail if not taproot) and
puts it on the stack. So to be clear you have the <output index> on the
witness stack rather than having it fixed in a particular tapleaf (as per
my original post) and then use OP_DUP to pass it to both
OP_CHECKSIG_MUTATED and OP_PUSH_TAPROOT_OUTPUT_KEY.
This makes a lot of sense as it matches the semantics of what we are trying
to achieve: allow the owner of an output (whether an individual or group)
to reduce that output's value to pay a higher fee.
Furthermore this removes all keys from the tapleaf since they are all
aliased to either the input we are spending or one of the output keys of
the tx we are spending to. This is quite a big improvement over my original
idea.

This works for lightning commit tx and for the case of a PTLC contract. It
also seems to work for the DLC funding output. I'd be interested to know if
anyone can think of a protocol where this would be inconvenient or
impossible to use as the main pre-signed tx fee bumping system.

Cheers,

LL

Le dim. 6 juin 2021 à 22:28, Lloyd Fournier  a
> écrit :
>
>> Hi Antoine,
>>
>> Thanks for bringing up this important topic. I think there might be
>> another class of solutions over input based, CPFP and sponsorship. I'll
>> call them tx mutation schemes. The idea is that you can set a key that can
>> increase the fee by lowering a particular output after the tx is signed
>> without invalidating the signature. The premise is that anytime you need to
>> bump the fee of a transaction you must necessarily have funds in 

Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-06-07 Thread Lloyd Fournier via bitcoin-dev
Hi Antoine,

Thanks for bringing up this important topic. I think there might be another
class of solutions over input based, CPFP and sponsorship. I'll call them
tx mutation schemes. The idea is that you can set a key that can increase
the fee by lowering a particular output after the tx is signed without
invalidating the signature. The premise is that anytime you need to bump
the fee of a transaction you must necessarily have funds in an output that
are going to you and therefore you can sacrifice some of them to increase
the fee. This is obviously destructive to txids so child presigned
transactions will have to use ANYPREVOUT as in your proposal. The advantage
is that it does not require keeping extra inputs around to bump the fee.

So imagine a new opcode OP_CHECKSIG_MUTATED <index> <pubkey> <signature>
<amount>.
This would check that <signature> is valid against <pubkey> if the
current transaction had the output at <index> reduced by <amount>. To
make this more efficient, if the public key is one byte: 0x02 it references
the taproot *external key* (similar to how ANYPREVOUT uses 0x01 to refer to
internal key[1]).
Now for our protocol we want both parties (p1 and p2) to be able to fee
bump a commitment transaction. They use MuSig to sign the commitment tx
under the external key with a decent fee for the current conditions. But in
case it proves insufficient they have added the following two leaves to
their key in the funding output as a backup so that p1 and p2 can
unilaterally bump the fee of anything they sign spending from the funding
output:

1. OP_CHECKSIG_MUTATED(0, 0x02, <original signature>, <amount>)
OP_CHECKSIGADD(p1-fee-bump-key, <fee-bump signature>) OP_2
OP_NUMEQUALVERIFY
2. OP_CHECKSIG_MUTATED(1, 0x02, <original signature>, <amount>)
OP_CHECKSIGADD(p2-fee-bump-key, <fee-bump signature>) OP_2
OP_NUMEQUALVERIFY

where <...> indicates the thing comes from the witness stack.
So to bump the fee of the commit tx after it has been signed either party
takes the <original signature> and adds a signature under their
fee-bump-key for the new tx and reveals their fee bump leaf.
<original signature> is checked against the old transaction while the fee
bumped transaction is checked against the fee bump key.

I know I have left out how to change mempool eviction rules to accommodate
this kind of fee bumping without DoS or pinning attacks but hopefully I
have demonstrated that this class of solutions also exists.
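A toy sketch of the verification rule such an opcode would enforce (a keyed hash stands in for real signatures; all names are illustrative):

```python
import hashlib
from dataclasses import dataclass

def toy_sign(key: bytes, msg: bytes) -> bytes:
    # stand-in for a real signature scheme, just to show the mutation logic
    return hashlib.sha256(key + msg).digest()

@dataclass(frozen=True)
class Tx:
    outputs: tuple  # output amounts in sats
    def msg(self) -> bytes:
        return repr(self.outputs).encode()

def checksig_mutated(tx: Tx, index: int, key: bytes,
                     original_sig: bytes, amount: int) -> bool:
    # valid iff original_sig signs the tx as it was *before* output `index`
    # was reduced by `amount` to pay the extra fee
    outputs = list(tx.outputs)
    outputs[index] += amount
    return original_sig == toy_sign(key, Tx(tuple(outputs)).msg())

external_key = b"musig-external-key"
commit_tx = Tx(outputs=(50_000, 30_000))
sig = toy_sign(external_key, commit_tx.msg())

# p1 bumps the fee by draining 2000 sats from their own output (index 0);
# the original signature still validates under the mutated-checksig rule...
bumped = Tx(outputs=(48_000, 30_000))
assert checksig_mutated(bumped, 0, external_key, sig, 2_000)
# ...but not if they claim the value came out of the other output instead
assert not checksig_mutated(bumped, 1, external_key, sig, 2_000)
```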

[1] https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki

Cheers,

LL



On Fri, 28 May 2021 at 07:13, Antoine Riard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi,
>
> This post is pursuing a wider discussion around better fee-bumping
> strategies for second-layer protocols. It draws out a comparison between
> input-based and CPFP fee-bumping techniques, and their apparent trade-offs
> in terms of onchain footprint, tx-relay bandwidth rebroadcast, batching
> opportunity and mempool flexibility.
>
> Thanks to Darosior for reviews, ideas and discussions.
>
> ## Child-Pay-For-Parent
>
> CPFP is a mature fee-bumping technique, known and used for a while in the
> Bitcoin ecosystem. However, its usage in contract protocols with
> distrusting counterparties raised some security issues. As mempool's chain
> of unconfirmed transactions are limited in size, if any output is spendable
> by any contract participant, it can be leveraged as a pinning vector to
> downgrade odds of transaction confirmation [0].
>
> That said, contract transactions interested to be protected under the
> carve-out logic require to add a new output for any contract participant,
> even if ultimately only one of them serves as an anchor to attach a CPFP.
>
> ## Input-Based
>
> I think input-based fee-bumping has been less studied as fee-bumping
> primitive for L2s [1]. One variant of input-based fee-bumping usable today
> is the leverage of the SIGHASH_ANYONECANPAY/SIGHASH_SINGLE malleability
> flags. If the transaction is the latest stage of the contract, a bumping
> input can be attached just-in-time, thus increasing the feerate of the
> whole package.
>
> However, as of today, input-based fee-bumping doesn't work to bump first
> stages of contract transactions as it's destructive of the txid, and as
> such breaks chain of pre-signed transactions. A first improvement would be
> the deployment of the SIGHASH_ANYPREVOUT softfork proposal. This new
> malleability flag allows a transaction to be signed without reference to
> any specific previous output. That way, spent transactions can be
> fee-bumped without altering validity of the chain of transactions.
>
> Even assuming SIGHASH_ANYPREVOUT, if the first stage contract transaction
> includes multiple outputs (e.g the LN's commitment tx has multiple HTLC
> outputs), SIGHASH_SINGLE can't be used and the fee-bumping input value
> might be wasted. This edge can be smoothed by broadcasting a preliminary
> fan-out transaction with a set of outputs providing a range of feerate
> points for the bumped transaction.
>
> This overhead could be smoothed even further in the future with more
> advanced sighash malleability flags like SIGHASH_IOMAP, 

Re: [bitcoin-dev] Opinion on proof of stake in future

2021-05-23 Thread Lloyd Fournier via bitcoin-dev
Hi Billy,

I was going to write a post which started by dismissing many of the weak
arguments that are made against PoS made in this thread and elsewhere.
Although I don't agree with all your points you have done a decent job here
so I'll focus on the second part: why I think Proof-of-Stake is
inappropriate for a Bitcoin-like system.

Proof of stake is not fit for purpose for a global settlement layer in a
pure digital asset (i.e. "digital gold") which is what Bitcoin is trying to
be.
PoS necessarily gives responsibilities to the holders of coins that they do
not want and cannot handle.
In Bitcoin, large unsophisticated coin holders can put their coins in cold
storage without a second thought given to the health of the underlying
ledger.
As much as hardcore Bitcoiners try to convince them to run their own node,
most don't, and that's perfectly acceptable.
At no point do their personal decisions affect the underlying consensus --
it only affects their personal security assurance (not that of the system
itself).
In PoS systems this clean separation of responsibilities does not exist.

I think that the more rigorously studied PoS protocols will work fine
within the security claims made in their papers.
People who believe that these protocols are destined for catastrophic
consensus failure are certainly in for a surprise.
But the devil is in the detail.
Let's look at what the implications of using the leading proof of stake
protocols would have on Bitcoin:

### Proof of SquareSpace (Cardano, Polkadot)

Cardano is a UTXO based PoS coin based on Ouroboros Praos[3] with an
inbuilt on-chain delegation system[5].
In these protocols, coin holders who do not want to run their node with
their hot keys in it delegate it to a "Stake Pool".
I call the resulting system Proof-of-SquareSpace since most will choose a
pool by looking around for one with a nice website and offering the largest
share of the block reward.
On the surface this might sound no different than someone with an mining
rig shopping around for a good mining pool but there are crucial
differences:

1. The person making the decision is forced into it just because they own
the currency -- someone with a mining rig has purchased it with the intent
to make profit by participating in consensus.

2. When you join a mining pool your systems are very much still online. You
are just partaking in a pool to reduce your profit variance. You still see
every block that you help create and *you never help create a block without
seeing it first*.

3. If by SquareSpace sybil attack you gain a dishonest majority and start
censoring transactions how are the users meant to redelegate their stake to
honest pools?
I guess they can just send a transaction delegating to another pool...oh
wait I guess that might be censored too! This seems really really bad.
In Bitcoin, miners can just join a different pool at a whim. There is
nothing the attacker can do to stop them. A temporary dishonest majority
heals relatively well.

There is another severe disadvantage to this on-chain delegation system:
every UTXO must indicate which staking account this UTXO belongs to so the
appropriate share of block rewards can be transferred there.
Being able to associate every UTXO to an account ruins one of the main
privacy advantages of the UTXO model.
It also grows the size of the blockchain significantly.

### "Pure" proof of stake (Algorand)

Algorand's[4] approach is to only allow online stake to participate in the
protocol.
Theoretically, this means that keys holding funds have to be online in
order for them to author blocks when they are chosen.
Of course, in reality no one wants to keep their coin-holding keys online, so
in Algorand you can authorize a set of "participation keys"[1] that will
be used to create blocks on your coin-holding key's behalf.
Hopefully you've spotted the problem.
You can send your participation keys to any malicious party with a nice
website (see random example [2]) offering you a good return.
Damn it's still Proof-of-SquareSpace!
The minor advantage is that at least the participation keys expire after a
certain amount of time so eventually the SquareSpace attacker will lose
their hold on consensus.
Importantly there is also less junk on the blockchain because the
participation keys are delegated off-chain and so are not making as much of
a mess.

### Conclusion

I don't see a way to get around the conflicting requirement that the keys
for large amounts of coins should be kept offline but those are exactly the
coins we need online to make the scheme secure.
If we allow delegation then we open up a new social attack surface and it
degenerates to Proof-of-SquareSpace.

For a "digital gold" like system like Bitcoin we optimize for simplicity
and desperately want to avoid extraneous responsibilities for the holder of
the coin.
After all, gold is an inert element on the periodic table that doesn't
confer responsibilities on the holder to maintain the quality of all the
other gold in the world.

Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-04-16 Thread Lloyd Fournier via bitcoin-dev
On Fri, 16 Apr 2021 at 13:47, ZmnSCPxj  wrote:

> Good morning LL,
>
> > On Tue, 16 Mar 2021 at 11:25, David A. Harding via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
> >
> > > I curious about whether anyone informed about ECC and QC
> > > knows how to create output scripts with lower difficulty that could be
> > > used to measure the progress of QC-based EC key cracking.  E.g.,
> > > NUMS-based ECDSA- or taproot-compatible scripts with a security
> strength
> > > equivalent to 80, 96, and 112 bit security.
> >
> > Hi Dave,
> >
> > This is actually relatively easy if you are willing to use a trusted
> setup. The trusted party takes a secp256k1 secret key and verifiably
> encrypt it under a NUMS public key from the weaker group. Therefore if you
> can crack the weaker group's public key you get the secp256k1 secret key.
> Camenisch-Damgard[1] cut-and-choose verifiable encryption works here.
> > People then pay the secp256k1 public key funds to create the bounty. As
> long as the trusted party deletes the secret key afterwards the scheme is
> secure.
> >
> > Splitting the trusted setup among several parties where only one of them
> needs to be honest looks doable but would take some engineering and
> analysis work.
>
> To simplify this, perhaps `OP_CHECKMULTISIG` is sufficient?
> Simply have the N parties generate individual private keys, encrypt each
> of them with the NUMS pubkey from the weaker group, then pay out to an
> N-of-N `OP_CHECKMULTISIG` address of all the participants.
> Then a single honest participant is enough to ensure security of the
> bounty.
>
> Knowing the privkey from the weaker groups would then be enough to extract
> all of the SECP256K1 privkeys that would unlock the funds in Bitcoin.


Yes! Nice idea.

Another idea that came to mind is that you could also just prove equality
between the weak group's key and the secp256k1 key. e.g. generate a 160-bit
key and use it both as a secp256k1 and a 160-bit curve key and prove
equality between them and give funds to the secp256k1 key. I implemented a
proof between ed25519 and secp256k1 a little while ago for example:
https://docs.rs/sigma_fun/0.3.0/sigma_fun/ext/dl_secp256k1_ed25519_eq/index.html

This would come with the extra assumption that it's easier to break the
160-bit key on the 160-bit curve as opposed to just breaking the 160-bit
key on the 256-bit curve. Intuitively I think this is the case but I would
want to study that further before taking this approach.
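To make the equality-proof idea above concrete, here is a toy sketch of proving that the *same* short secret exponent underlies public keys in two unrelated groups. All parameters here are invented and cryptographically insecure (tiny multiplicative groups mod primes rather than elliptic curves, and real cross-group proofs need careful range bounds); it only illustrates the algebra of a Fiat-Shamir sigma protocol with the response computed over the integers.

```python
# Toy cross-group discrete-log equality proof. NOT secure parameters:
# real constructions use elliptic curves and proper range proofs.
import hashlib
import secrets

p1, g1 = 1000003, 2          # assumed toy modulus/generator for group 1
p2, g2 = 2147483647, 7       # assumed toy modulus/generator for group 2
X_BITS, C_BITS, SLACK = 32, 16, 64

def challenge(*vals):
    # Fiat-Shamir challenge derived from the commitments
    h = hashlib.sha256(repr(vals).encode()).digest()
    return int.from_bytes(h, 'big') % (1 << C_BITS)

def prove(x):
    # r is sampled large enough to statistically hide x in s = r + c*x
    r = secrets.randbits(X_BITS + C_BITS + SLACK)
    t1, t2 = pow(g1, r, p1), pow(g2, r, p2)
    c = challenge(t1, t2)
    s = r + c * x            # computed over the integers, not mod anything
    return (t1, t2, s)

def verify(h1, h2, proof):
    t1, t2, s = proof
    c = challenge(t1, t2)
    # g^s = g^r * (g^x)^c must hold in *both* groups simultaneously
    ok1 = pow(g1, s, p1) == t1 * pow(h1, c, p1) % p1
    ok2 = pow(g2, s, p2) == t2 * pow(h2, c, p2) % p2
    return ok1 and ok2

x = secrets.randbits(X_BITS)               # the shared short secret key
h1, h2 = pow(g1, x, p1), pow(g2, x, p2)    # "public keys" in each group
assert verify(h1, h2, prove(x))
```

Because the response `s` is formed over the integers, a single `s` simultaneously satisfies the verification equation in both groups, which is what binds the two public keys to one secret.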

LL
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-04-05 Thread Lloyd Fournier via bitcoin-dev
On Tue, 16 Mar 2021 at 11:25, David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> I curious about whether anyone informed about ECC and QC
> knows how to create output scripts with lower difficulty that could be
> used to measure the progress of QC-based EC key cracking.  E.g.,
> NUMS-based ECDSA- or taproot-compatible scripts with a security strength
> equivalent to 80, 96, and 112 bit security.


Hi Dave,

This is actually relatively easy if you are willing to use a trusted setup.
The trusted party takes a secp256k1 secret key and verifiably encrypt it
under a NUMS public key from the weaker group. Therefore if you can crack
the weaker group's public key you get the secp256k1 secret key.
Camenisch-Damgard[1] cut-and-choose verifiable encryption works here.
People then pay the secp256k1 public key funds to create the bounty. As
long as the trusted party deletes the secret key afterwards the scheme is
secure.

Splitting the trusted setup among several parties where only one of them
needs to be honest looks doable but would take some engineering and
analysis work.

[1] https://link.springer.com/content/pdf/10.1007/3-540-8-3_25.pdf

Cheers,

LL


Re: [bitcoin-dev] New PSBT version proposal

2021-04-05 Thread Lloyd Fournier via bitcoin-dev
On Wed, 10 Mar 2021 at 11:20, Lloyd Fournier  wrote:

> Hi Andrew & all,
>
> I've been working with PSBTs for a little while now. FWIW I agree with the
> change of removing the global tx and having the input/output data stored
> together in the new unified structures.
>
> One thing I've been wondering about is how output descriptors could fit
> into PSBTs. They are useful since they allow you to determine the maximum
> satisfaction weight for inputs so you can properly align fees as things get
> added. I haven't seen any discussion about including them in this revision.
> Is it simply a matter of time before they make it into a subsequent PSBT
> spec or is there something I'm missing conceptually?
>


Sipa replied to me off list some time ago and explained what I was missing.
PSBTs have all the information you could want from a descriptor already.
For example the maximum satisfaction weight can be determined from the
witness/redeem script (I had forgot these fields existed). Therefore
descriptors are more useful in higher level applications while PSBTs are
useful for communicating with signing devices. Therefore there is no reason
for PSBTs to support descriptors.

LL


Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-03-16 Thread Lloyd Fournier via bitcoin-dev
On Tue, 16 Mar 2021 at 09:05, Matt Corallo via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> There have been many threads on this before, I'm not sure anything new has
> been brought up here.
>
> Matt
>
> On 3/15/21 17:48, Luke Dashjr via bitcoin-dev wrote:
> > I do not personally see this as a reason to NACK Taproot, but it has
> become
> > clear to me over the past week or so that many others are unaware of this
> > tradeoff, so I am sharing it here to ensure the wider community is aware
> of
> > it and can make their own judgements.
>
> Note that this is most definitely *not* news to this list, eg, Anthony
> brought it up in "Schnorr and taproot (etc)
> upgrade" and there was a whole thread on it in "Taproot: Privacy
> preserving switchable scripting". This issue has been
> beaten to death, I'm not sure why we need to keep hitting the poor horse
> corpse.
>
>
I read through this thread just now. The QC discussion starts roughly here:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015620.html

My own (very possibly wrong) interpretation of the situation is:

1. Current addresses are very vulnerable to QC and would require hardfork
to fix (and not fix particularly well).
2. Once a QC resistant spending procedure has been developed it could be
added as a backup spending policy as a new tapleaf version (wallets would
have to opt into it).
3. If QC does get to the point where it can break ECC then we can disable
key-path spends via softfork
4. If everyone has moved their coins to Taproot addresses with a QC
resistant tapleaf backup then we're ok.
5. Since the above is almost certainly not going to happen we can simply
congratulate the new QC owners on the Bitcoin they take from old addresses
(specter of QC encourages moving to taproot which could be thought of as a
good thing).
6. Otherwise we have to hard fork to stop old addresses being spent without
a quantum resistant ZKP (oof!).
7. Once we know what we're up against a new quantum resistant segwit
version can be introduced (if it hasn't already).
8. If QC develop far enough to degrade SHA256 sufficiently (ECC probably
breaks first) then that's a whole other ball game since it affects PoW and
txids and so on and will likely require a hard fork.

The ordering of the above events is not predictable. IMO Mark's post is on
the wildly optimistic side of projected rate of progress from my limited
understanding. Either way it is strictly better to enter a QC world with
Taproot enabled and most people using it, so we can introduce QC-resistant
backup spend paths without hardforks before attacks become practical.
Depending on what happens they may not be needed, but it's good to have the
option.

On Tue, 16 Mar 2021 at 10:11, Karl-Johan Alm via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Tue, 16 Mar 2021 at 07:48, Matt Corallo via bitcoin-dev
>  wrote:
> >
> > Overall, the tradeoffs here seem ludicrous, given that any QC issues in
> Bitcoin need to be solved in another way, and
> > can't practically be solved by just relying on the existing hash
> indirection.
>
> The important distinction here is that, with hashes, an attacker has
> to race against the spending transaction confirming, whereas with
> naked pubkeys, the attacker doesn't have to wait for a spend to occur,
> drastically increasing the available time to attack.
>
>
First note that I am enthusiastically ignorant of QC technology so please
take the following with a bowl of salt.
The premise of Mark's post is that QC progress is currently exponential
(debatable) and will continue to be (unknowable), so "months" will turn into
days and then minutes in a short time period. Since QC progress is
exponential and the speedup that ECC quantum algorithms offer is
exponential you're not dealing with the typical "Moore's law" progress in
terms of time to solve a particular problem; It's like exponential
exponential (math person help me now). You could easily go from ten
thousand years to break ECC to a few seconds within a year with that rate
of progress so I don't think "slow quantum" is an adversary worth
protecting against. I would love to know if I am wrong on this point.
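The compounding worry above can be illustrated with some back-of-envelope arithmetic. Every number here is invented purely for illustration: a toy model where "effective capability" (in bits of key the machine can break in fixed time) itself doubles each year, so the wall-clock time to break a fixed key collapses doubly-exponentially.

```python
# Toy model (all parameters made up) of doubly-exponential collapse in
# time-to-break: capability doubles yearly, and time scales as K / 2^cap.
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_break(capability_bits, k=2**80):
    # wall-clock years to break under the toy model T = K / 2^capability
    return k / 2**capability_bits / SECONDS_PER_YEAR

cap = 30.0                      # assumed starting capability (arbitrary)
for year in range(4):
    print(f"year {year}: ~{years_to_break(cap):.3g} years to break")
    cap *= 2                    # the capability itself doubles each year
```

Under these made-up numbers the estimate drops from tens of millions of years to days within a couple of steps, which is the sense in which a "slow quantum" transition period may never materialize.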

Cheers,

LL


Re: [bitcoin-dev] New PSBT version proposal

2021-03-09 Thread Lloyd Fournier via bitcoin-dev
Hi Andrew & all,

I've been working with PSBTs for a little while now. FWIW I agree with the
change of removing the global tx and having the input/output data stored
together in the new unified structures.

One thing I've been wondering about is how output descriptors could fit
into PSBTs. They are useful since they allow you to determine the maximum
satisfaction weight for inputs so you can properly align fees as things get
added. I haven't seen any discussion about including them in this revision.
Is it simply a matter of time before they make it into a subsequent PSBT
spec or is there something I'm missing conceptually?

Cheers,

LL

On Thu, 10 Dec 2020 at 09:33, Andrew Chow via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi All,
>
> I would like to propose a new PSBT version that addresses a few
> deficiencies in the current PSBT v0. As this will be backwards
> incompatible, a new PSBT version will be used, v1.
>
> The primary change is to truly have all input and output data for each
> in their respective maps. Instead of having to parse an unsigned
> transaction and lookup some data from there, and other data from the
> correct map, all of the data for an input will be contained in its map.
> Doing so also disallows PSBT_GLOBAL_UNSIGNED_TX in this new version.
> Thus I propose that the following fields be added:
>
> Global:
> * PSBT_GLOBAL_TX_VERSION = 0x02
>* Key: empty
>* Value: 32-bit little endian unsigned integer for the transaction
> version number. Must be provided in PSBT v1 and omitted in v0.
> * PSBT_GLOBAL_PREFERRED_LOCKTIME = 0x03
>* Key: empty
>* Value: 32 bit little endian unsigned integer for the preferred
> transaction lock time. Must be omitted in PSBT v0. May be provided in
> PSBT v1, assumed to be 0 if not provided.
> * PSBT_GLOBAL_INPUT_COUNT = 0x04
>* Key: empty
>* Value: Compact size unsigned integer. Number of inputs in this
> PSBT. Must be provided in PSBT v1 and omitted in v0.
> * PSBT_GLOBAL_OUTPUT_COUNT = 0x05
>* Key: empty
>* Value: Compact size unsigned integer. Number of outputs in this
> PSBT. Must be provided in PSBT v1 and omitted in v0.
>
> Input:
> * PSBT_IN_PREVIOUS_TXID = 0x0e
>* Key: empty
>* Value: 32 byte txid of the previous transaction whose output at
> PSBT_IN_OUTPUT_INDEX is being spent. Must be provided in PSBT v1 and
> omitted in v0.
> * PSBT_IN_OUTPUT_INDEX = 0x0f
>* Key: empty
>* Value: 32 bit little endian integer for the index of the output
> being spent. Must be provided in PSBT v1 and omitted in v0.
> * PSBT_IN_SEQUENCE = 0x0f
>* Key: empty
>* Value: 32 bit unsigned little endian integer for the sequence
> number. Must be omitted in PSBT v0. May be provided in PSBT v1 assumed
> to be max sequence (0x) if not provided.
> * PSBT_IN_REQUIRED_LOCKTIME = 0x10
>* Key: empty
>* Value: 32 bit unsigned little endian integer for the lock time that
> this input requires. Must be omitted in PSBT v0. May be provided in PSBT
> v1, assumed to be 0 if not provided.
>
> Output:
> * PSBT_OUT_VALUE = 0x03
>* Key: empty
>* Value: 64-bit unsigned little endian integer for the output's
> amount in satoshis. Must be provided in PSBT v1 and omitted in v0.
> * PSBT_OUT_OUTPUT_SCRIPT = 0x04
>* Key: empty
>* Value: The script for this output. Otherwise known as the
> scriptPubKey. Must be provided in PSBT v1 and omitted in v0.
>
> This change allows for PSBT to be used in the construction of
> transactions. With these new fields, inputs and outputs can be added as
> needed. One caveat is that there is no longer a unique transaction
> identifier so more care must be taken when combining PSBTs.
> Additionally, adding new inputs and outputs must be done such that
> signatures are not invalidated. This may be harder to specify.
>
> An important thing to note in this proposal are the fields
> PSBT_GLOBAL_PREFERRED_LOCKTIME and PSBT_IN_REQUIRED_LOCKTIME. A Bitcoin
> transaction only has a single locktime yet a PSBT may have multiple
> locktimes. To choose the locktime for the transaction, finalizers must
> choose the maximum of all of the *_LOCKTIME fields.
> PSBT_IN_REQUIRED_LOCKTIME is added because some inputs, such as those
> involving OP_CHECKLOCKTIMEVERIFY, require a specific minimum locktime to
> be set. This field allows finalizers to choose a locktime that is high
> enough for all inputs without needing to understand the scripts
> involved. The PSBT_GLOBAL_PREFERRED_LOCKTIME is the locktime to use if
> no inputs require a particular locktime.
>
> As these changes disallow the PSBT_GLOBAL_UNSIGNED_TX field, PSBT v1
> needs the version number bump to enforce backwards incompatibility.
> However once the inputs and outputs of a PSBT are decided, a PSBT could
> be "downgraded" back to v0 by creating the unsigned transaction from the
> above fields, and then dropping these new fields.
>
> If the list finds that these changes are reasonable, I will write a PR
> to modify 
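The locktime-selection rule described in the quoted proposal can be sketched as follows. This is a hypothetical helper, not part of any real PSBT library: the finalizer takes the maximum of PSBT_GLOBAL_PREFERRED_LOCKTIME (treated as 0 when absent) and every input's PSBT_IN_REQUIRED_LOCKTIME.

```python
# Sketch (hypothetical helper) of the finalizer locktime rule quoted
# above: the transaction locktime is the maximum of the preferred
# locktime (default 0) and all per-input required locktimes.
def choose_locktime(preferred=None, required=()):
    candidates = [preferred if preferred is not None else 0]
    candidates += [r for r in required if r is not None]
    return max(candidates)

assert choose_locktime() == 0                        # nothing set
assert choose_locktime(preferred=500000) == 500000   # preference only
# a CLTV-style input's requirement overrides a lower preference:
assert choose_locktime(preferred=100, required=[650000, 640000]) == 650000
```

This captures why the per-input field lets finalizers pick a valid locktime without understanding the scripts involved: each script's minimum is surfaced as data.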

Re: [bitcoin-dev] Taproot (and graftroot) complexity

2020-09-20 Thread Lloyd Fournier via bitcoin-dev
Hi Jay,

I don't think there's much of a difference in security or privacy.
The advice to avoid key-reuse remains the same and for the same reasons.

LL


On Sat, Sep 19, 2020 at 11:08 PM Jay Berg via bitcoin-dev
 wrote:
>
> Newb here..  don’t know if "in-reply-to" header is misbehaving.
>
> But this is the OP thread:
>
> [bitcoin-dev] Taproot (and graftroot) complexity
> Anthony Towns aj at erisian.com.au
> Mon Feb 10 00:20:11 UTC 2020
>
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-February/017622.html
>
>
> On 9/19/20, 5:35 AM, "bitcoin-dev on behalf of Jay Berg via bitcoin-dev" 
>  bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>
> > At the time you create a utxo, provided you don't reuse keys, all 
> taproot
> > spends are indistinguishable. At the time you spend a taproot utxo,
>
> does reusing keys act differently in taproot than with 
> Pay-to-PubKey-Hash? Or is it the same deal.. same pubkey creates same address?
>
> Question is: is the security/privacy implications worse when reusing 
> pubkeys with taproot?
>
> ty
> jay
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Revisiting squaredness tiebreaker for R point in BIP340

2020-08-13 Thread Lloyd Fournier via bitcoin-dev
Thanks for bringing this discovery up and a big thanks to Peter Dettman for
working on this.

I second what Nadav said. Removing pointless complexity is worth it even at
this stage. I also maintain a non-libsecp implementation of BIP340 etc.
Having two ways to convert an xonly to a point is a pain if you are trying
to maintain type safe apis. If there is no performance penalty (or even a
small one in the short term) to unifying xonly -> point conversion it's
worth it from my perspective.

LL

On Thu, Aug 13, 2020 at 6:29 AM Nadav Kohen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hello Pieter and all,
>
> I am one of the maintainers of Bitcoin-S[1] and I maintain our secp256k1
> bindings (via JNI) as well as our (inefficient) bouncy castle fallback
> implementations of all secp256k1 functionality we depend on including
> Schnorr signatures. In light of this new information that there is no real
> downside to using evenness as the nonce tie-breaker, I am personally very
> in favor of this change as it strictly simplifies things as well as making
> types consistent between nonces and persistent signing keys (I can get rid
> of our SchnorrNonce type :). An additional minor benefit not already
> mentioned is that in places in our codebase where deserialized data is just
> being passed around and not used, we currently require a computation to go
> from a (x-only) SchnorrNonce to an ECPublicKey whereas going from a
> SchnorrPublicKey simply requires pre-pending a 0x02 byte.
>
> I am likely not aware of the entire impact that changing the BIP at this
> stage would have but from my view (of having to update bindings and test
> vectors and my fallback implementation, as well as wanting to get a stable
> branch on secp256k1-zkp containing both ECDSA adaptor signatures and
> Schnorr signatures for use in Discreet Log Contracts), I think this change
> is totally worth it and it will only become harder to make this
> simplification in the future. The schnorrsig branch has not yet been merged
> into secp256k1 (and is nearing this stage I think) and so long as making
> this change doesn't set us back more than a month (which seems unlikely) I
> am personally in favor of making this change. Glad to hear other's thoughts
> on this of course but I figured I'd voice my support :)
>
> Best,
> Nadav
>
> [1] https://github.com/bitcoin-s/bitcoin-s/
>
>
>
> On Wed, Aug 12, 2020 at 2:04 PM Pieter Wuille via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hello all,
>>
>> The current BIP340 draft[1] uses two different tiebreakers for conveying
>> the Y coordinate of points: for the R point inside signatures squaredness
>> is used, while for public keys evenness is used. Originally both used
>> squaredness, but it was changed[2] for public keys after observing this
>> results in additional complexity for compatibility with existing systems.
>>
>> The reason for choosing squaredness as tiebreaker was performance: in
>> non-batch signature validation, the recomputed R point must be verified to
>> have the correct sign, to guarantee consistency with batch validation.
>> Whether the Y coordinate is square can be computed directly in Jacobian
>> coordinates, while determining evenness requires a conversion to affine
>> coordinates first.
>>
>> This argument of course relies on the assumption that determining whether
>> the Y coordinate is square can be done more efficiently than a conversion
>> to affine coordinates. It appears now that this assumption is incorrect,
>> and the justification for picking the squaredness tiebreaking doesn't
>> really exist. As it comes with other trade-offs (it slows down signing, and
>> is a less conventional choice), it would seem that we should reconsider the
>> option of having the R point use the evenness tiebreaker (like public keys).
>>
>> It is late in the process, but I feel I owe this explanation so that at
>> least the possibility of changing can be discussed with all information. On
>> the upside, this was discovered in the context of looking into a cool
>> improvement to libsecp256k1[5], which makes things faster in general, but
>> specifically benefits the evenness variant.
>>
>>
>> # 1. What happened?
>>
>> Computing squaredness is done through the Jacobi symbol (same inventor,
>> but unrelated to Jacobian coordinates). Computing evenness requires
>> converting points to affine coordinates first, and that needs a modular
>> inverse. The assumption that Jacobi symbols are faster to compute than
>> inverses was based on:
>>
>> * A (possibly) mistaken belief about the theory: fast algorithms for both
>> Jacobi symbols and inverses are internally based on variants of the same
>> extended GCD algorithm[3]. Since an inverse needs to extract a full big
>> integer out of the transition steps made in the extgcd algorithm, while the
>> Jacobi symbol just extracts a single bit, it had seemed that any advances
>> applicable to one would be applicable to the 

Re: [bitcoin-dev] SAS: Succinct Atomic Swap

2020-05-12 Thread Lloyd Fournier via bitcoin-dev
A quick correction to my post:

>
> Here's where the truly novel part comes in. Ruben solves this by extending
> the standard *TLC contract:
> 1. Bob redeem with secret
> 2. Alice refund after T1
> 3. Bob redeem without secret after T2
>
> This is actually:

1. Bob redeem with redeem secret
2. Alice refund after T1 with refund secret
3. Bob redeem without secret after T2

The fact that Alice reveals a secret when she refunds is crucial.

LL


Re: [bitcoin-dev] SAS: Succinct Atomic Swap

2020-05-12 Thread Lloyd Fournier via bitcoin-dev
Ruben,

In my opinion, this protocol is a theoretical breakthrough as well as a
practical protocol. Well done! I want to try to distil the core abstract
ideas here as they appear to me. From my view, the protocol is a
combination of two existing ideas and one new one:

1. In atomic swaps you can make the refund transaction on one chain
dependent on the refund on the other using secret revelation. Thus only one
chain needs to have a timelock and the other refund can be conditioned on a
secret that is revealed when that first refund goes through. (This idea is
in the monero atomic swap [1]).
2. Secret revelations can be used to give unconstrained spending power to
one party. With an adaptor signature, rather than reveal a decryption key
for another signature, you can just make the decryption key your signing
key in the multisig so when you reveal it with the adaptor signautre the
other party gains full knowledge of the private key for the output and can
spend it arbitrarily. (this is just folklore and already what happens in
HTLCs -- though it looks like lightning people are about to get rid of the
unconstrained spend I think).

The combination of these two ideas is novel in itself. The problem with
idea (2) is that your unconstrained spending power over an output doesn't
matter much if there is a pre-signed refund transaction spending from it --
you still have to spend it before the refund becomes valid. But if you
bring in idea (1)  this problem goes away!
However, you are left with a new problem: What if the party with the
timelock never refunds? Then the funds are locked forever.

Here's where the truly novel part comes in. Ruben solves this by extending
the standard *TLC contract:
1. Bob redeem with secret
2. Alice refund after T1
3. Bob redeem without secret after T2

We might call this a "Forced Refund *TLC". Alice must claim the refund or
lose her money. This forces the refund secret revelation through
punishment. If Alice refuses to refund Bob gets the asset he wanted anyway!

The resulting protocol you get from applying these ideas is three
transactions. At the end, one party has their funds in a non HD key output
but if they want that they can just transfer it to an HD output in which
case you get four transactions again. Thus I consider this to be a strict
improvement over the four transaction protocol. Furthermore, one of the
chains does not need a timelock. This is remarkable as the four transaction
atomic swap is one of the most basic and most studied protocols. I
considered it to be kind of "perfect" in a way. It just goes to show that
this field is still very new and there are still things to discover in what
we think is the most well trodden ground.

I don't want to ignore that Ruben presents us with a two transaction
protocol. He made a nice video explaining it here:
https://www.youtube.com/watch?v=TlCxpdNScCA. It is harder to see the
elegance of the idea in the two tx protocol because it involves revocation
and relative timelocks etc. Actually, it is straightforward to naively
achieve a two tx atomic swap with payment channels:
1. Alice and Bob set up payment channels to each other on different chains
2. They atomic swap the balances of the channels off-chain using HTLCs
using the standard protocol.
3. Since one party exclusively owns the funds in each channel the party
with no funds simply reveals their key in the funding OP_CHECKMULTISIG to
the other
4. Both parties now watch the chain to see if the other tries to post a
commitment transactions.

The advantages that Ruben's two tx protocol has over this is that timelocks
and monitoring is only needed on one of the chains. This is nothing to
scoff at but for me the three tx protocol is the most elegant expression of
the idea and the two tx protocol is a more optimised version that might
make sense in some circumstances.

[1] https://github.com/h4sh3d/xmr-btc-atomic-swap/blob/master/README.md

LL


Re: [bitcoin-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-05 Thread Lloyd Fournier via bitcoin-dev
On Tue, May 5, 2020 at 9:01 PM Luke Dashjr via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Tuesday 05 May 2020 10:17:37 Antoine Riard via bitcoin-dev wrote:
> > Trust-minimization of Bitcoin security model has always relied first and
> > above on running a full-node. This current paradigm may be shifted by LN
> > where fast, affordable, confidential, censorship-resistant payment
> services
> > may attract a lot of adoption without users running a full-node.
>
> No, it cannot be shifted. This would compromise Bitcoin itself, which for
> security depends on the assumption that a supermajority of the economy is
> verifying their incoming transactions using their own full node.
>

Hi Luke,

I have heard this claim made several times but have never understood the
argument behind it. The question I always have is: If I get scammed by not
verifying my incoming transactions properly how can this affect anyone
else? It's very unintuitive. I've been scammed several times in my life in
fiat currency transactions but as far as I could tell it never negatively
affected the currency overall!

The links you point and from what I've seen you say before refer to "miner
control" as the culprit. My only thought is that this is because a light
client could follow a dishonest majority of hash power chain. But this just
brings me back to the question. If, instead of BTC, I get a payment in some
miner scamcoin on their dishonest fork (but I think it's BTC because I'm
running a light client) that still seems to only to damage me. Where does
the side effect onto others on the network come from?

Cheers,

LL


Re: [bitcoin-dev] Mitigating Differential Power Analysis in BIP-340

2020-03-25 Thread Lloyd Fournier via bitcoin-dev
Hi Pieter,

Thanks for the detailed response.


> I'll try to summarize the discussion we had that led to this choice, but
> most of it is on https://github.com/sipa/bips/issues/195 if you want the
> details.


Ahh I can't believe I missed that github issue while searching. I guess I
started reading a paper on DPA and got carried away. I can see you got to
where I was and then went much further including some empirical analysis.
Nice. I agree with the conclusion that xor is more robust than just hashing
randomness in the same block as the secret key.


> Let me first try to address what I think you're overlooking: in a
> BIP32/Taproot like scenario, the private key that goes into the signing
> algorithm functions as *both* secret and known to the attacker. That is to
> say, there is a master secret s, and signing key x that goes into the hash
> is x=s+a (mod n) for some value a that the attacker knows, and can modify
> (he cannot control it directly, but he may be able to grind it to have a
> structure he likes). I believe that means that feeding x to a hash directly
> itself is already a problem, regardless of what else goes into the hash -
> interactions between bits inside the hash operation that all come from x
> itself can leak bit-level information of x.  XORing (or any other simple
> mix operation that does not expose bit-level information) into the private
> key before giving it to a hash function seems like it would address this.
>

This is a subtle point that didn't cross my mind. My gut feeling is
there isn't even a computational argument to be made that what I was
suggesting is secure against DPA in that setting. DPA seems to be a PITA. A
footnote in the BIP with a citation for DPA (the ed25519 one from the issue
is good) and a hint about why you should avoid hashing Bitcoin secret keys
altogether would be good. This brings us to the next point.

It also assumes that somehow the computation of x itself is immune from
> leaks (something you pointed out in a previous e-mail, I noticed).
>

From my reading of the HMAC papers it seems you might be able to vary the
BIP32 child index derivation to do this attack. Just thinking about it now,
these attacks seem far fetched just because in order for it to be useful
you need to have physical access to the device and to be able to accurately
measure power consumption in high resolution (which I guess you can't do
from a typical USB bus from corrupted software). Then you also need to get
the user to do lots of signing or derivation with their device. I guess a
malicious cable with some way of exfiltrating power consumption could do it.


> I'm happy for any input you may have here. In particular, the recent
> discussions around the interactions between anti-covert channel protection,
> randomness, and the ability to spot check hardware devices may mean we
> should revise the advice to consider not adding randomness unless such a
> anti-covert channel scheme is used.
>

My only comment here is that there will end up being more than one way to
do it and I think what you and your collaborators have put forward is at a
local optimum of design (now that I understand it). Thanks and well done!
It won't be the right optimum for everyone. To me, it seems like a good
place to start. If you develop a decent nonce exfiltration protected
signing protocol later then I don't see why HW wallets wouldn't compete for
favour amongst the community by implementing and updating their devices to
conform to it.

LL


[bitcoin-dev] Mitigating Differential Power Analysis in BIP-340

2020-03-24 Thread Lloyd Fournier via bitcoin-dev
Hi List,

I felt this topic deserved its own thread, but it follows on from the
mailing list post [2] announcing a new PR [1] to change BIP-340 in several
ways, including adding random auxiliary data into the nonce
derivation function. Rather than hashing the randomness with the secret key
and message etc, the randomness is hashed then XOR'd (^) with the secret
key and the result is hashed like so to determine the secret nonce k:

(1) k = H_derive( sec_key ^ H_aux(rand) || pub_key_x || message)

The claim made in the mailing list post is that this is more secure against
"differential power analysis" (DPA) attacks than just doing the simpler and
more efficient:

(2) k = H_derive(sec_key || rand || pub_key_x || message)

The TL;DR here is that I don't think this is the case.
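To make the two candidate derivations concrete, here is a minimal Python sketch of both. This is illustrative only: plain SHA256 stands in for the tagged hashes BIP-340 actually specifies, the function names mirror the formulas above, and a real implementation would of course need constant-time primitives.

```python
import hashlib

def H(data: bytes) -> bytes:
    # Plain SHA256 as a stand-in for BIP-340's tagged hashes.
    return hashlib.sha256(data).digest()

def xor32(a: bytes, b: bytes) -> bytes:
    # Bytewise XOR of two 32-byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def nonce_v1(sec_key: bytes, rand: bytes, pub_key_x: bytes, message: bytes) -> bytes:
    # (1) mask the secret key with hashed randomness, then hash
    return H(xor32(sec_key, H(rand)) + pub_key_x + message)

def nonce_v2(sec_key: bytes, rand: bytes, pub_key_x: bytes, message: bytes) -> bytes:
    # (2) simply concatenate the randomness after the secret key
    return H(sec_key + rand + pub_key_x + message)
```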

There was no citation for this claim, so I did some digging and found two
papers that seemed like they might be the origin of the idea [3,4] (I had
no idea about these attacks before). A relatively easy to understand
explanation of DPA attacks is in [3]:

The fundamental principle behind all DPA attacks is that at some point in
> an algorithm’s execution, a function f exists that combines a fixed secret
> value with a variable which an attacker knows. An attacker can form
> hypotheses about the fixed secret value, and compute the corresponding
> output values of f by using an appropriate leakage model, such as the
> Hamming Distance model. The attacker can then use the acquired power
> consumption traces to verify her hypotheses, by partitioning the
> acquisitions or using Pearson’s correlation coefficient. These side-channel
> analysis attacks are aided by knowledge of details of the implementation
> under attack. Moreover, these attacks can be used to validate hypotheses
> about implementation details. In subsequent sections, these side-channel
> analysis attacks are referred to as DPA attacks.


For example, in the original BIP-340 proposal the nonce derivation was
vulnerable to DPA attacks as it was derived simply by doing
H_derive(sec_key || message). Since the message is known to the attacker
and variable (even if it is not controlled by her), the SHA256 compression
function run on (sec_key || message) may leak information about sec_key. It
is crucial to understand that just hashing sec_key before passing it into
the H_derive does *not* fix the problem. Although the attacker would be
unable to find sec_key directly, they could learn H(sec_key) and with that
know all the inputs into H_derive and therefore get the value of the secret
nonce k and from there extract the secret key from any signature made with
this nonce derivation algorithm.
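To spell out why learning the nonce is fatal: a Schnorr signature satisfies s = k + e*d (mod n), so anyone who knows k and the public challenge e can solve for the secret key d. A quick sanity check with made-up toy scalars (not a real signature):

```python
# secp256k1 group order
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBAAEDCE6AF48A03BBFD25E8CD0364141

def recover_secret_key(s: int, k: int, e: int) -> int:
    # Solve s = k + e*d (mod n) for the secret key d.
    return (s - k) * pow(e, -1, n) % n

d, k, e = 0xDEADBEEF, 12345, 67890  # toy values, not a real signature
s = (k + e * d) % n
assert recover_secret_key(s, k, e) == d
```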

The key thing I want to argue with this post is that there is no advantage
of (1) over (2) against DPA attacks, at least not given my understanding of
these papers. The way the attack in [3] works is by assuming that
operations in the compression function leak the "hamming distance" [5] (HD)
between the static secret thing and the variable public thing it is being
combined with. In practice the attack involves many particulars about SHA256
but that is, at a high level, the right way to simplify it I think. The way
the paper suggests to fix the problem is to mask the secret data with
secret randomness before each sensitive operation and then strip off the
secret randomness afterwards. This seems to be the inspiration for the
structure of updated BIP-340 (1), however I don't believe that it provides
any extra protection over (2). My argument is as follows:

Claim A: If the randomness used during signing is kept secret from the
attacker then (2) is secure against DPA.

Since SHA256 has 64-byte blocks the hash H_derive(sec_key || rand ||
pub_key_x || message) will be split up into two 64 byte blocks, one
containing secret data (sec_key || rand) and the other containing data
known to the attacker (pub_key_x || message). The compression function will
run on (sec_key || rand) but DPA will be useless here because the
HD(sec_key, rand) will contain no information about sec_key since rand is
also secret. The output of the compression function on the first block will
be secret but *variable* so the intermediate hash state will not reveal
useful information when compressed with the second block.
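The block layout this argument relies on, and the Hamming-distance quantity the leakage model is concerned with, can be illustrated as follows (assuming plain SHA256 with its 64-byte message blocks and 32-byte values throughout; the final padding block is ignored as it contains no secret):

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    # Number of differing bits: the quantity assumed to leak under the HD model.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

sec_key, rand = b"\x01" * 32, b"\x02" * 32        # secret inputs
pub_key_x, message = b"\x03" * 32, b"\x04" * 32   # attacker-known inputs

data = sec_key + rand + pub_key_x + message       # 128 bytes = two 64-byte blocks
block1, block2 = data[:64], data[64:]
assert block1 == sec_key + rand        # all-secret block: HD(sec_key, rand) reveals nothing
assert block2 == pub_key_x + message   # all-public block
```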

Then I thought perhaps (1) is more robust in the case where the randomness
is known by the attacker (maybe the attacker can physically modify the
chipset to control the rng). We'd have to assume that the sec_key ^
H_aux(rand) isn't vulnerable to DPA (the LHS is under the control of the
attacker) to be true. Even under this assumption it turned out not to be
the case:

Claim B: If the randomness used during signing is known to the attacker,
then (1) is not secure against DPA.

In (1) there are 96 bytes to be hashed and therefore two SHA256 blocks:
(sec_key ^ H_aux(rand) || pub_key_x) and (message). During the first
compression function call the attacker gets the HD of:
HD( sec_key ^ H_aux(rand),  pub_key_x)

Re: [bitcoin-dev] BIP 340 updates: even pubkeys, more secure nonce generation

2020-03-21 Thread Lloyd Fournier via bitcoin-dev
* To protect against differential power analysis, a different way of
> mixing in this randomness is used (masking the private key completely
> with randomness before continuing, rather than hashing them together,
> which is known in the literature to be vulnerable to DPA in some
> scenarios).
>

I think citation for this would improve the spec.

I haven't studied these attacks but it seems to me that every hardware
wallet would be vulnerable to them while doing key derivation. If the
attacker can get side channel information from hashes in nonce derivation
then they can surely get side channel information from hashes in HD key
derivation. It should actually be easier since the master seed is hashed
for anything the hardware device needs to do including signing.

Is this the case?

LL


Re: [bitcoin-dev] Hash function requirements for Taproot

2020-03-16 Thread Lloyd Fournier via bitcoin-dev
On Fri, Mar 13, 2020 at 4:04 AM Tim Ruffing  wrote:
>
> I mean, the good thing is that there's a general method to defend
> against this, namely always adding a Merkle root on top. Maybe it's
> useful to make the warning here a litte bit more drastic:
>
https://github.com/sipa/bips/blob/bip-taproot/bip-0341.mediawiki#cite_ref-22-0
> Maybe we could actually mention this in BIP340, too, when we talk about
> key generation,

I missed this note in the BIP. This trick means you get property 2 (covert
taproot) for free if you prove property 3 (second covert taproot). This is
a big improvement as property 2 was dependent on the particulars of the key
generation scheme whereas property 3 is just based on Taproot being a
secure commitment scheme. Nice!

> I agree that modeling it as a commitment scheme is more natural. But I
> think an optimal model would capture both worlds, and would give the
> attacker signing oracles for the inner and the outer key, and an
> commitment opening oracle That is, it would capture that
>  * the ability to obtain signatures for the inner key does not help you
>to forge for the outer key
>  * the ability to obtain signatures for the outer key does not help you
>to open the commitment, and --- if already opened --- do not help
>you to forge for the inner key
>  * the ability to obtain an opening does not help you to forge for
>either key...
>  * etc
>
> I believe that all these properties hold, and I believe this even
> without a formal proof.
>
>
> Still, it would be great to have one. The problem here is really that
> things get complex so quickly. For example, how do you model key
> generation in the game(s) that I sketched above? The traditional way or
> with MuSig. The reality is that we want to have everything combined:
>  * BIP32
>  * MuSig (and variants of it)
>  * Taproot (with scripts that refer to the inner key)
>  * sign-to-contract stuff (e.g., to prevent covert channels with
>hardware wallets)
>  * scriptless scrips
>  * blind signatures
>  * threshold signtures
>  * whatever you can imagine on top of this
>
> It's very cumbersome to come up with a formal model that includes all
> of this. One common approach to protocols that are getting too complex
> is to switch to simpler models, e.g., symbolic models/Dolev-Yao models
> but that's hard here given that we don't have clear layering. Things
> would be easier to analyze if Taproot was really  just a commitment to
> a verification key. But it's more, it's something that's both a
> verification and a commitment. Taproot interferes with Schnorr
> signatures on an algebraic level (not at all black-box), and that's
> actually the reason why it's so powerful and efficient. The same is
> true for almost everything in the list above, and this puts Taproot
> outside the scope of proof assistants for cryptographic protocols that
> work on a symbolic level of abstraction. I really wonder how we can
> handle this better. This would improve our understanding of the
> interplay between various crypto components better, and make it easier
> to judge future proposals on all levels, from consensus changes to new
> multi-signature protocols, etc.
>

I hope we can prove these things in a more modular way without creating a
hybrid scheme with multiple oracles. My hope is that you can prove that any
secure key generation method will be secure once Taproot is applied to it
if it is a secure commitment scheme. This was difficult before I knew about
the empty commitment trick! Although the Taprooted key and the internal key
are algebraically related, the security requirements on the two primitives
(the group and the hash function) are nicely separated. Intuitively,
1. being able to break the Taproot hash function (e.g. find pre-images)
does not help you forge signatures on any external key; it can only help
you forge fake commitment openings (for the sake of this point assume that
Schnorr uses an unrelated hash function for the challenge).
2. being able solve discrete logarithms doesn't help you break Taproot; it
just helps you forge signatures.

I believe we can formally prove these two points and therefore dismiss the
need for any signing or commitment opening oracles in any security notion
of Taproot:

1. We can dismiss the idea of an adversary that uses a commitment opening
oracle to forge a signature because the commitment opening is not even an
input into the signing algorithm. Therefore it is information theoretically
impossible to learn anything about forging a signature from a Taproot
opening.
2. I think we can dismiss the idea of an adversary that uses a signing
oracle to forge a fake Taproot opening. To see this note that the Taproot
Forge reduction to RPP in my poster actually still holds if the adversary
is given the secret key x (with a few other modifications). In the proof I
kept it hidden just because that seemed more realistic. If we give the
adversary the secret key we can dismiss the idea that a signing 

Re: [bitcoin-dev] Schnorr sigs vs pairing sigs

2020-03-06 Thread Lloyd Fournier via bitcoin-dev
Hi Erik,

There are strong arguments for and against pairing based sigs in Bitcoin.
One very strong argument in favour of non-deterministic signatures like
Schnorr over BLS is that they enable a kind of signature encryption called
"adaptor signatures". This construction is key to many exciting up and
coming layer 2 protocols and isn't possible unless the signature scheme
uses randomness.

self plug: I have a paper on this topic called "One-Time Verifiably
Encrypted Signatures A.K.A Adaptor Signatures"
 https://github.com/LLFourn/one-time-VES/blob/master/main.pdf

LL


On Fri, Mar 6, 2020 at 6:03 AM Erik Aronesty via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Schnorr sigs rely so heavily on the masking provided by a random
> nonce.   There are so many easy ways to introduce bias (hash + modulo,
> for example).
>
> Even 2 bits of bias can result in serious attacks:
>
> https://ecc2017.cs.ru.nl/slides/ecc2017-tibouchi.pdf
>
> Maybe pairing based sigs  - which are slower - might be both more
> flexible, and better suited to secure implementations?


Re: [bitcoin-dev] Hash function requirements for Taproot

2020-03-05 Thread Lloyd Fournier via bitcoin-dev
> I am uncertain what you mean here by "coin-tossing".
> From the comparison to MuSig, I imagine it is an interactive key
generation protocol like this:

> * Everybody generates fresh keypairs.
> * Everybody sends the hash of their pubkey to everyone else.
> * After receiving a hash of pubkey from everyone else, everybody sends
their pubkey to everyone else.
> * They add all their pubkeys to generate the aggregate key (and if using
Taproot, use it as the internal key).

> Is that correct?

Yes exactly. The reason it's called coin tossing is that the resulting key
is guaranteed to be uniformly random (in the random oracle model at least),
so it's like tossing a fair 2^256 sided coin. This is not true in MuSig for
example, where the aggregate key is not guaranteed to be from a uniform
distribution against a malicious party (but still secure as an aggregate
key).
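The commit-reveal round structure can be sketched as follows. This is a toy illustration: each party's random contribution stands in for a freshly generated pubkey, and XOR stands in for elliptic-curve point addition; the output is uniform as long as at least one party samples honestly and reveals are only sent after all commitments arrive.

```python
import hashlib
import secrets
from functools import reduce

def commit(value: bytes) -> bytes:
    return hashlib.sha256(value).digest()

class Party:
    def __init__(self):
        # Fresh randomness, standing in for a freshly generated pubkey.
        self.contribution = secrets.token_bytes(32)

    def round1(self) -> bytes:
        return commit(self.contribution)  # broadcast the hash first

    def round2(self) -> bytes:
        return self.contribution          # then reveal

def coin_toss(parties) -> bytes:
    commitments = [p.round1() for p in parties]
    reveals = [p.round2() for p in parties]  # only after all commitments are in
    for c, r in zip(commitments, reveals):
        assert commit(r) == c, "abort: reveal does not match commitment"
    # XOR of the reveals stands in for summing the pubkeys.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), reveals)

result = coin_toss([Party() for _ in range(3)])
```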

> However, it can generally be pointed out that, before you put anything
into an n-of-n, you would damn well sure want to have *some* assurance that
you can get it out later. So in general you would need coordination and
interaction anyway to arrange getting into an n-of-n in the first place.

Right. Taking your example of a lightning channel, when you set it up I
don't *think* there is a way to use the non-interactivity of MuSig to
remove any rounds of communication to get to the starting state where there
is a channel funding on-chain and both parties have a tx that spends from
it which returns their funds. Doing coin tossing for the aggregate key as
well as the aggregate nonce shouldn't lead to any extra rounds of
communication. The downside of coin tossing is that it requires honest
parties to sample their keys non-deterministically (or at least have a
counter to avoid using the same key twice).

> On the other hand, it would be best to have at least some minimum of
privacy by always interacting over Tor and having a Tor .onion address,
which has absolutely horrid latency because human beings cry when peeling
onions.
> So in general reducing the latency by reducing communication rounds is
better in general.
> Counter to this, assuming you use an n-of-n in an offchain protocol of
some sort, the number of communication rounds to generate the aggregate key
may be dwarfed by the total number of communication rounds to create
signatures to update the offchain protocol.
> Counter counter to this is that one plan for reducing communications
rounds for creating signatures during offchain operation is to (haha) use a
Taproot with an n-of-n internal key and a tapscript that has n
`OP_CHECKSIG` operations, so that for normal operation you just toss
individual signatures at each other but at termination of the offchain
protocol you can do the heavy MuSig-style signing with the n-of-n aggregate
key.

Counter³ to this is that, in the case of lightning, the aggregate key for a
PTLC does not need to be chosen at payment time. The channel members
could simply use the "master" aggregate key they generated by coin tossing
at the channel's inception and pseudorandomly randomise it every time they
need a new joint key (so the keys do not look related to everyone else on
the chain but you would effectively just be reusing the same public key).

Having said that if there is some advantage to using MuSig in some
particular case I wouldn't hesitate to use it in combination with Taproot.
I don't think the new assumption you have to make about the
hash function really weighs up against most design considerations. In
general, it is probably worth considering whether your protocol actually
benefits from the non-interactivity MuSig gives in the key generation
stage. If it doesn't, because it doesn't make signing any more
non-interactive, then coin tossing might be the answer.

LL


[bitcoin-dev] Hash function requirements for Taproot

2020-03-04 Thread Lloyd Fournier via bitcoin-dev
Hi List,

I recently presented a poster at the Financial Cryptography conference
2020, which you can find here:
https://github.com/LLFourn/taproot-ggm/blob/master/main.pdf.  It attempts
to show the security requirements for the tweak hash function in Taproot.
In this post I'll give a long description of it but first let me tl;dr:

Taproot requires no new assumptions of SHA256 over what are already made by
Schnorr signatures themselves with one exception: when using a
non-interactive key generation protocol to produce a Taproot internal key
(e.g. MuSig). To prove security in this scenario we need to make an
additional assumption about SHA256: as well as being collision resistant
(i.e. it is hard to find two hashes such that h_1 - h_2 = 0), it must satisfy a more general kind
of collision resistance where it is hard to find h_1 - h_2 = d for *any d*
when the adversary is challenged to find h_1 and h_2 with random prefixes.
This is obviously a plausible assumption. Put informally, it says that zero
is not a special case where finding collisions is difficult but rather
solving the 2-sum problem is hard for all values of d (when challenged with
random prefixes).

Now the long version.

My motivation for creating this poster came from questions I had after
discussions in Taproot Study Group #18 (this study group initiative was a
great idea btw). The main question I had was "Why is Taproot binding?" i.e.
why is it true that I can only commit to one Merkle root. Isn't it possible
that a malicious party could produce a second covert Taproot spend that
none of the other parties to the output agreed to? I submitted a poster
proposal to FC to force myself to get to the bottom of it.

The premise of the poster is to use the Generic Group Model to try and
figure out how the hash function would have to fail for Taproot to be
insecure. Most of the poster is taken up by cartoon reductions I made to
remind myself why what I was saying might be true. They are
incomplete and difficult to parse on their own so hopefully this post is a
useful companion to them.

=== The Security of Taproot ===

There are three scenarios/games we must consider when asking whether
Taproot is secure in the context of Bitcoin:

1. Taproot Forge: Forging taproot spends must be hard. The adversary must
not be able to take a public key off the blockchain and produce a forged
Taproot spend from it.
2. Covert Taproot: When an adversary is executing a multi-party key
generation protocol (e.g. MuSig) it should be hard for them to produce a
covert malicious Taproot spend from the joint key  i.e. when honest parties
think there is no Taproot on a key there shouldn't be any Taproot on the
key. Note this is not guaranteed to be hard by 1 being hard.
3. Second Covert Taproot: Like 2, except that if honest parties agree to a
Taproot spend then the adversary shouldn't be able to generate a second
Taproot spend they are unaware of.

Properties (1) and (3) can be argued succinctly if we just prove that
Taproot is a secure commitment scheme. It should be clear that if a Taproot
external key T = X + H(X||m)*G is a secure commitment scheme (Hiding and
Binding) to any arbitrary message m, then it is a secure commitment scheme
to a Merkle root. If so, then properties (1) and (3) hold. (1) holds
because if you can create an opening to a commitment not generated by you,
you either broke hiding (if your opening is the same as the honest one) or
broke binding (if it's different). (3) holds because you must have broken
binding as there are now two openings to the same commitment.

Property (2) is more difficult to argue as it depends on the multi-party
key generation protocol. Case in point: Taproot is completely broken when
combined with a proof of knowledge key generation protocol where along with
their public keys each party provides a proof of knowledge of the secret
key. Where X_1 is the key of the honest party, the malicious party can
choose their key X_2 to be G*H(X_1 || m) where m is a malicious Merkle
root. Clearly the malicious party has a covert Taproot for X = X_1 + X_2
and can produce a proof of knowledge for X_2.
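This attack is easy to check numerically. Below is a sketch using toy, non-constant-time secp256k1 arithmetic; serializing a point as its bare 32-byte x coordinate is a simplification of what the actual Taproot spec hashes, but the algebra X = X_1 + H(X_1 || m)*G is exactly the tweak described above.

```python
import hashlib

# secp256k1 parameters (toy, non-constant-time arithmetic: illustration only)
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None  # point at infinity
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (x - P[0]) - P[1]) % p)

def scalar_mult(k, P):
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def tweak_hash(P, m):
    # Simplified: hash the 32-byte x coordinate together with the merkle root m.
    return int.from_bytes(hashlib.sha256(P[0].to_bytes(32, "big") + m).digest(), "big") % n

def taproot(P_internal, m):
    # External key T = P + H(P || m)*G
    return point_add(P_internal, scalar_mult(tweak_hash(P_internal, m), G))

# Honest party's key
X1 = scalar_mult(0x1111, G)
# Attacker picks X2 = H(X1 || m)*G for a malicious merkle root m ...
m = b"malicious merkle root".ljust(32, b"\x00")
X2 = scalar_mult(tweak_hash(X1, m), G)
# ... and knows its discrete log, tweak_hash(X1, m), so a proof of knowledge is easy.
agg = point_add(X1, X2)
# The "plain" aggregate key is secretly a Taproot commitment to m:
assert agg == taproot(X1, m)
```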

Given this definition of security, we now move on to how we should model the
problem to prove they hold.

=== Generic Group Model vs Random Oracle Model ===

For practical cryptographic schemes you often have to idealise one of its
components to prove it secure. The most popular candidate for idealisation
is the hash function in the Random Oracle Model (ROM), which idealises a
hash function as a "random oracle", a black box which spits out random
values for each input. For example, the original "forking lemma" proof by
Pointcheval and Stern [1] shows the Schnorr signature scheme is unforgeable
in this model if the discrete logarithm problem is hard. In other words,
idealising the hash function allows us to isolate what security assumptions
we are making about the group (e.g. the discrete logarithm problem being
hard in it).

But what if we want to know what assumptions we 

Re: [bitcoin-dev] BIP 340 updates: even pubkeys, more secure nonce generation

2020-02-26 Thread Lloyd Fournier via bitcoin-dev
> Correct, except that the speedup from is_even(y) over
is_quadratic_residue(y) affects signing and not keypair generation.

Isn't this the same thing since in the spec it generates the public key in
the signing algorithm? If you pre-generate public key and pass it in there
would be no speedup to signing that I can see.

> It's not clear why removing these features from the spec would be an
improvement.

It could just be me but "here's the most minimal signing algorithm, you can
add things in these ways to make it more robust  in some settings" is more
intuitive than "here's the most robust signing algorithm, you can remove
these things in these ways if they don't apply to your setting". I see your
point that if it is likely to be misused then maybe the latter is
preferable.

LL

On Thu, Feb 27, 2020 at 2:33 AM Jonas Nick  wrote:

> > Let me put change (1) into my own words.
>
> Correct, except that the speedup from is_even(y) over
> is_quadratic_residue(y)
> affects signing and not keypair generation.
>
> > With change (2), I feel like including this auxiliary random data is
> overkill
> > for the spec. [...] I feel similarly about hashing the public key to get
> the
> > nonce.
>
> It's not clear why removing these features from the spec would be an
> improvement.
> The BIP follows a more reasonable approach: it specifies a reasonably
> secure
> signing algorithm and provides the rationale behind the design choices.
> This
> allows anyone to optimize for their use case if they choose to do so.
> Importantly, "reasonably secure" includes misuse resistance which would be
> violated if the pubkey was not input to the nonce generation function.
>
> > Perhaps they even deserve their own BIP?
>
> Yes, a standard for nonce exfiltration protection and MuSig would be
> important
> for compatibility across wallets.
>
>
> On 2/26/20 4:20 AM, Lloyd Fournier via bitcoin-dev wrote:
> > Hi Pieter,
> >
> > Let me put change (1) into my own words. We are already computing affine
> > coordinates since we store public keys as the affine x-coordinate. It is
> > faster to compute is_even(y) than is_quadratic_residue(y) so we get a
> speed
> > up here during keypair generation. In the verification algorithm, we do
> the
> > following for the public key  x_only => affine + negate if not is_even(y)
> > => jacobian. The minor slowdown in verification comes from the extra
> > evenness check and possible negation which we didn't have to be done in
> the
> > previous version. This seems like a reasonable change if it makes things
> > easier for existing code bases and infrastructure.
> >
> > With change (2), I feel like including this auxiliary random data is
> > overkill for the spec. For me, the main point of the spec is the
> > verification algorithm which actually affects consensus. Providing a note
> > that non-deterministic signatures are preferable in many cases and here's
> > exactly how you should do that (hash then xor with private key) is
> > valuable. In the end, people will want several variations of the signing
> > algorithm anyway (e.g. pass in public key with secret key) so I think
> > specifying the most minimal way to produce a signature securely is the
> most
> > useful thing for this document.
> >
> > I feel similarly about hashing the public key to get the nonce. A note in
> > the alternative signing section that "if you pass the public key into
> > `sign` along with the secret key then you should do hash(bytes(d) ||
> > bytes(P) || m)" would suffice for me.
> >
> > Despite only being included in the alternative signing section, it would
> > be nice to have a few test vectors for these alternative methods anyway.
> > Perhaps they even deserve their own BIP?
> >
> > Cheers,
> >
> > LL
> >
> >
> > On Mon, Feb 24, 2020 at 3:26 PM Pieter Wuille via bitcoin-dev <
> > bitcoin-dev@lists.linuxfoundation.org> wrote:
> >
> >> Hello list,
> >>
> >> Despite saying earlier that I expected no further semantical changes
> >> to BIP 340-342, I've just opened
> >> https://github.com/bitcoin/bips/pull/893 to make a number of small
> >> changes that I believe are still worth making.
> >>
> >> 1. Even public keys
> >>
> >> Only one change affects the validation rules: the Y coordinate of
> >> 32-byte public keys is changed from implicitly square to implicitly
> >> even. This makes signing slightly faster (in the microsecond range),
> >> though also verification negligibly slower (in the nanoseco

Re: [bitcoin-dev] BIP 340 updates: even pubkeys, more secure nonce generation

2020-02-25 Thread Lloyd Fournier via bitcoin-dev
Hi Pieter,

Let me put change (1) into my own words. We are already computing affine
coordinates since we store public keys as the affine x-coordinate. It is
faster to compute is_even(y) than is_quadratic_residue(y) so we get a speed
up here during keypair generation. In the verification algorithm, we do the
following for the public key  x_only => affine + negate if not is_even(y)
=> jacobian. The minor slowdown in verification comes from the extra
evenness check and possible negation which we didn't have to do in the
previous version. This seems like a reasonable change if it makes things
easier for existing code bases and infrastructure.
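The performance intuition is easy to see in code (a sketch, with p the secp256k1 field order): evenness is a single parity test on the affine y coordinate, while squaredness needs Euler's criterion, a full modular exponentiation of roughly 256 squarings.

```python
# secp256k1 field order (note p = 3 mod 4, so -1 is a non-residue)
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F

def is_even(y: int) -> bool:
    return y % 2 == 0                    # one parity check

def is_quadratic_residue(y: int) -> bool:
    return pow(y, (p - 1) // 2, p) == 1  # Euler's criterion: ~256 squarings

assert is_even(4) and is_quadratic_residue(4)              # 4 = 2^2 is a square
assert is_even(p - 1) and not is_quadratic_residue(p - 1)  # p-1 = -1 mod p
```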

With change (2), I feel like including this auxiliary random data is
overkill for the spec. For me, the main point of the spec is the
verification algorithm which actually affects consensus. Providing a note
that non-deterministic signatures are preferable in many cases and here's
exactly how you should do that (hash then xor with private key) is
valuable. In the end, people will want several variations of the signing
algorithm anyway (e.g. pass in public key with secret key) so I think
specifying the most minimal way to produce a signature securely is the most
useful thing for this document.

I feel similarly about hashing the public key to get the nonce. A note in
the alternative signing section that "if you pass the public key into
`sign` along with the secret key then you should do hash(bytes(d) ||
bytes(P) || m)" would suffice for me.

Despite only being included in the alternative signing section, it would
be nice to have a few test vectors for these alternative methods anyway.
Perhaps they even deserve their own BIP?

Cheers,

LL


On Mon, Feb 24, 2020 at 3:26 PM Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hello list,
>
> Despite saying earlier that I expected no further semantical changes
> to BIP 340-342, I've just opened
> https://github.com/bitcoin/bips/pull/893 to make a number of small
> changes that I believe are still worth making.
>
> 1. Even public keys
>
> Only one change affects the validation rules: the Y coordinate of
> 32-byte public keys is changed from implicitly square to implicitly
> even. This makes signing slightly faster (in the microsecond range),
> though also verification negligibly slower (in the nanosecond range).
> It also simplifies integration with existing key generation
> infrastructure. For example BIP32 produces public keys with known
> even/oddness, but squaredness would need to be computed separately.
> Similar arguments hold for PSBT and probably many other things.
>
> Note that the Y coordinate of the internal R point in the signature
> remains implicitly square: for R the squaredness gives an actual
> performance gain at validation time, but this is not true for public
> keys. Conversely, for public keys integration with existing
> infrastructure matters, but R points are purely internal.
>
> This affects BIP 340 and 341.
>
> 2. Nonce generation
>
> All other semantical changes are around more secure nonce generation
> in BIP 340, dealing with various failure cases:
>
> * Since the public key signed for is included in the signature
> challenge hash, implementers will likely be eager to use precomputed
> values for these (otherwise an additional EC multiplication is
> necessary at signing time). If that public key data happens to be
> gathered from untrusted sources, it can lead to trivial leakage of the
> private key - something that Greg Maxwell started a discussion about
> on the moderncrypto curves list:
> https://moderncrypto.org/mail-archive/curves/2020/001012.html. We
> believe it should therefore be best practice to include the public key
> also in the nonce generation, which largely mitigates this problem.
>
> * To protect against fault injection attacks it is recommended to
> include actual signing-time randomness into the nonce generation
> process. This was mentioned already, but the update elaborates much
> more about this, and integrates this randomness into the standard
> signing process.
>
> * To protect against differential power analysis, a different way of
> mixing in this randomness is used (masking the private key completely
> with randomness before continuing, rather than hashing them together,
> which is known in the literature to be vulnerable to DPA in some
> scenarios).
>
> 3. New tagged hash tags
>
> To make sure that any code written for the earlier BIP text fails
> consistently, the tags used in the tagged hashes in BIP 340 are
> changed as well.
>
> What do people think?
>
> --
> Pieter
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>


Re: [bitcoin-dev] [Annoucement] Discreet Log Contract Protocol Specification

2020-01-28 Thread Lloyd Fournier via bitcoin-dev
Hi Chris,

This is a really exciting effort. I hope I will be able to contribute to
it. I was wondering if you had seen the idea that DLCs can be done in only
two transaction using Schnorr[1]. I also think this can be done in Bitcoin
as it is today using ECDSA adaptor signatures [2]. In my mind, the adaptor
signature protocol is both easier to specify and implement on top of being
cheaper and more private.

LL

[1] https://lists.launchpad.net/mimblewimble/msg00485.html
[2]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-November/002316.html

On Tue, Jan 14, 2020 at 2:12 AM Chris Stewart via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> Suredbits and Crypto Garage have begun to work on a specification for
> using discreet log contracts  in a
> safe, private and interoperable way. We are writing to the mailing list to
> inform and solicit feedback for the protocol specification so that we can
> -- as a community -- agree on a common standard to use Bitcoin oracles.
>
> Our goal is to end up with a set of documents like the BIPs (Bitcoin
> Improvement Proposals) and BOLTs (Basis of Lightning Technology) so that
> others that wish to use the technology can easily write software to
> integrate into the protocol.
>
> A secondary goal of ours is to remain compatible with standards used by
> other bitcoin related protocols (like Lightning) so that every future
> bitcoin related protocol can reach for a “toolbox” of agreed standards for
> things like funding transactions and closing transactions. We want to avoid
> reinventing the wheel where possible and allow for library developers to
> re-use software to hook into many bitcoin related protocols.
>
> You can find the specification repository here:
>
> https://github.com/discreetlogcontracts/dlcspecs/
>
> For more information on DLCs:
>
> [1] - https://adiabat.github.io/dlc.pdf
>
> [2] - https://cryptogarage.co.jp/p2pd/
>
> [3] -
> https://suredbits.com/discreet-log-contracts-part-1-what-is-a-discreet-log-contract/
>
> [4] -
> https://blockstream.com/2019/04/19/en-transacting-bitcoin-based-p2p-derivatives/
>
> [5] - https://dci.mit.edu/smart-contracts
>
> -Chris
>


Re: [bitcoin-dev] Composable MuSig

2019-12-08 Thread Lloyd Fournier via bitcoin-dev
Hi ZmnSCPxj,

I think your idea of allowing multiple Rs is a fine solution as it
would essentially mean that you were just doing a three party MuSig
with more specific communication structure. As you mentioned, this is
not quite ideal though.

> It seems to me that what is needed for a composable MuSig is to have a 
> commitment scheme which is composable.

Maybe. Showing certain attacks don't work is a first step. It would
take some deeper analysis of the security model to figure out what
exactly MuSig requires of the commitment scheme.

> To create a commitment `c[A]` on the point A, such that `A = a * G`, the 
> committer:
>
> * Generates random scalars `r` and `m`.
> * Computes `R` as `r * G`.
> * Computes `s` as `r + h(R | m) * a`.
> * Gives `c[A]` as the tuple `(R, s)`.

This doesn't look binding. It's easy for the committer to find another
((A,a),m) which would validate against (R,s): just choose a new m and set
a = (s - r) * h(R||m)^-1 (the committer knows r, so this is straightforward).
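To make the attack concrete, here is a toy-group sketch (my own illustration; the additive group Z_q with generator g stands in for the elliptic curve group, and the same algebra carries over) showing one (R, s) opening to two different (A, m) pairs:

```python
import hashlib

q = 2**127 - 1  # a Mersenne prime, so nonzero elements are invertible mod q
g = 5

def h(R: int, m: bytes) -> int:
    return int.from_bytes(hashlib.sha256(R.to_bytes(16, "big") + m).digest(), "big") % q

# Committer builds c[A] = (R, s) for A = a*g as in the quoted scheme
a, r, m = 1234, 5678, b"original message"
A = a * g % q
R = r * g % q
s = (r + h(R, m) * a) % q
assert s * g % q == (R + h(R, m) * A) % q  # (R, s) validates against (A, m)

# Equivocation: the committer knows r, so for any m2 it can solve
# a2 = (s - r) * h(R, m2)^-1 and open the same (R, s) to (A2, m2)
m2 = b"different message"
a2 = (s - r) * pow(h(R, m2), -1, q) % q
A2 = a2 * g % q
assert s * g % q == (R + h(R, m2) * A2) % q  # also validates against (A2, m2)
assert (A2, m2) != (A, m)
```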

Cheers,

LL


Re: [bitcoin-dev] Composable MuSig

2019-12-01 Thread Lloyd Fournier via bitcoin-dev
Hi ZmnSCPxj,

> > Just a quick note: I think there is a way to commit to a point properly 
> > with Pedersen commitments. Consider the following:
> > COM(X) = (y*G + z*H, y*G + X)  where y and z are random and the opening is 
> > (y,z,X). This seems to be an unconditionally hiding and computationally 
> > binding homomorphic commitment scheme to a point based on the DL problem 
> > rather than DDH.
>
> So the Pedersen commitment commits to a tweak on `X`, which is revealed later 
> so we can un-tweak `X`.
> Am I correct in assuming that you propose to use `X` for the contribution to 
> `R` for a participant?
> How is it different from using ElGamal commitments?

Yes. It's not significantly different. It is unconditionally hiding
rather than binding (ElGamal is unconditionally binding). I just
thought of it while reading your post so I mentioned it. The real
question is what properties the commitment scheme needs in order to be
appropriate for MuSig R coin tossing.
In the security proof, the commitment hash is modelled as a random
oracle rather than as an abstract commitment scheme. I wonder if any
MuSig author has an opinion on whether the H_com interaction can be
generalised to a commitment scheme with certain properties (e.g
equivocal, extractable). By the looks of it, the random oracle is
never explicitly programmed except with randomly generated values so
maybe there is hope that a non ROM commitment scheme can do the job. I
guess the reduction would then be to either breaking the discrete
logarithm problem OR some property of the commitment scheme.

Cheers,

LL


Re: [bitcoin-dev] Composable MuSig

2019-11-29 Thread Lloyd Fournier via bitcoin-dev
Hi ZmnSCPxj,

Very interesting problem.

Just a quick note: I think there is a way to commit to a point properly
with Pedersen commitments. Consider the following:
COM(X) = (y*G + z*H, y*G + X)  where y and z are random and the opening is
(y,z,X). This seems to be an unconditionally hiding and computationally
binding homomorphic commitment scheme to a point based on the DL problem
rather than DDH.
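A toy-group sketch of this commitment (my own illustration; note that in a real instantiation H must be a point whose discrete log with respect to G is unknown, which a toy group cannot provide, so this only shows the algebra):

```python
import secrets

q = 2**127 - 1  # toy group: additive Z_q; "points" are scalars times a generator
G, H = 5, 7     # in a real scheme H's discrete log w.r.t. G must be unknown

def commit(X: int):
    """COM(X) = (y*G + z*H, y*G + X) with opening (y, z, X)."""
    y = secrets.randbelow(q)
    z = secrets.randbelow(q)
    c = ((y * G + z * H) % q, (y * G + X) % q)
    return c, (y, z, X)

def verify(c, opening) -> bool:
    y, z, X = opening
    return c == ((y * G + z * H) % q, (y * G + X) % q)

X = 42 * G % q  # commit to the "point" X = x*G
c, op = commit(X)
assert verify(c, op)
assert not verify(c, (op[0], op[1], (X + 1) % q))  # wrong X fails
```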

LL

On Mon, Nov 25, 2019 at 10:00 PM ZmnSCPxj via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> So I heard you like MuSig.
>
>
> Introduction
> 
>
> Previously on lightning-dev, I propose Lightning Nodelets, wherein one
> signatory of a channel is in fact not a single entity, but instead an
> aggregate:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-October/002236.html
>
> Generalizing:
>
> * There exists some protocol that requires multiple participants agreeing.
>   * This can be implemented by use of MuSig on the public keys of the
> participants.
> * One or more of the participants in the above protocol is in fact an
> aggregate, not a single participant.
>   * Ideally, no protocol modification should be needed to support such
> aggregates, "only" software development without modifying the protocol
> layer.
>   * Obviously, any participant of such a protocol, whether a direct
> participant, or a member of an aggregated participant of that protocol,
> would want to retain control of its own money in that protocol, without
> having to determine if it is being Sybilled (and all other participants are
> in fact just one participant).
>   * Motivating example: a Lightning Network channel is the aggregate of
> two participants, the nodes creating that channel.
> However, with nodelets as proposed above, one of the participants is
> actually itself an aggregate of multiple nodelets.
> * This requires that a Lightning Network channel with a MuSig address,
> to have one or both participants, be potentially an aggregate of two or
> more nodelet participants, e.g. `MuSig(MuSig(A, B), C)`
>
> This is the "MuSig composition" problem.
> That is, given `MuSig(MuSig(A, B), C)`, and the *possibility* that in fact
> `B == C`, what protocol can A use to ensure that it uses the three-phase
> MuSig protocol (which has a proof of soundness) and not inadvertently use a
> two-phase MuSig protocol?
>
> Schnorr Signatures
> ==
>
> The scheme is as follows.
>
> Suppose an entity A needs to show a signature.
> At setup:
>
> * It generates a random scalar `a`.
> * It computes `A` as `A = a * G`, where `G` is the standard generator
> point.
> * It publishes `A`.
>
> At signing a message `m`:
>
> * It generates a random scalar `r`.
> * It computes `R` as `R = r * G`.
> * It computes `e` as `h(R | m)`, where `h()` is a standard hash function
> and `x | y` denotes the serialization of `x` concatenated by the
> serialization of `y`.
> * It computes `s` as `s = r + e * a`.
> * It publishes as signature the tuple of `(R, s)`.
>
> An independent validator can then get `A`, `m`, and the signature `(R, s)`.
> At validation:
>
> * It recovers `e[validator]` as so: `e[validator] = h(R | m)`
> * It computes `S[validator]` as so: `S[validator] = R + e[validator] * A`.
> * It checks if `s * G == S[validator]`.
>   * If `R` and `s` were indeed generated as per signing algorithm above,
> then:
> * `S[validator] = R + e[validator] * A`
> * `== r * G + e[validator] * A`; substitution of `R`
> * `== r * G + h(R | m) * A`; substitution of `e[validator]`
> * `== r * G + h(R | m) * a * G`; substitution of `A`.
> * `== (r + h(R | m) * a) * G`; factor out `G`
> * `== (r + e * a) * G`; substitution of `h(R | m)` with `e`
> * `== s * G`; substitution of `r + e * a`.
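The sign/verify algebra above can be sanity-checked mechanically in a toy group; the following is my own illustration (not the quoted author's code), with the additive group Z_q and generator g standing in for the curve and its generator G:

```python
import hashlib

q = 2**127 - 1
g = 5

def hmod(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % q

def sign(a: int, m: bytes):
    r = hmod(b"toy-nonce", a.to_bytes(16, "big"), m)  # deterministic toy nonce
    R = r * g % q
    e = hmod(R.to_bytes(16, "big"), m)  # e = h(R | m)
    return R, (r + e * a) % q           # s = r + e * a

def verify(A: int, m: bytes, sig) -> bool:
    R, s = sig
    e = hmod(R.to_bytes(16, "big"), m)
    return s * g % q == (R + e * A) % q  # s * G == R + e * A

a = 99
A = a * g % q
sig = sign(a, b"hello")
assert verify(A, b"hello", sig)
assert not verify(A, b"other", sig)
```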
>
> MuSig
> =
>
> Under MuSig, validation must remain the same, and multiple participants
> must provide a single aggregate key and signature.
>
> Suppose there exist two participants A and B.
> At setup:
>
> * A generates a random scalar `a` and B generates a random scalar `b`.
> * A computes `A` as `A = a * G` and B computes `B` as `B = b * G`.
> * A and B exchange `A` and `B`.
> * They generate the list `L`, by sorting their public keys and
> concatenating their representations.
> * They compute their aggregate public key `P` as `P = h(L | A) * A + h(L | B) * B`.
> * They publish the aggregate public key `P`.
>
> Signing takes three phases.
>
> 1.  `R` commitment exchange.
>   * A generates a random scalar `r[a]` and B generates a random scalar
> `r[b]`.
>   * A computes `R[a]` as `R[a] = r[a] * G` and B computes `R[b]` as `R[b]
> = r[b] * G`.
>   * A computes `h(R[a])` and B computes `h(R[b])`.
>   * A and B exchange `h(R[a])` and `h(R[b])`.
> 2.  `R` exchange.
>   * A and B exchange `R[a]` and `R[b]`.
>   * They validate that the previous given `h(R[a])` and `h(R[b])` matches.
> 3.  `s` exchange.
>   * They compute `R` as `R = R[a] + R[b]`.
>   * They compute `e` as `h(R | m)`.
>   * A computes 

Re: [bitcoin-dev] BIPable-idea: Consistent and better definition of the term 'address'

2019-10-10 Thread Lloyd Fournier via bitcoin-dev
Hi Thread,

This may not be the most practical information, but there actually did
exist an almost perfect analogy for Bitcoin addresses from the ancient
world: From wikipedia https://en.wikipedia.org/wiki/Bulla_(seal)

"Transactions for trading needed to be accounted for efficiently, so the
clay tokens were placed in a clay ball (bulla), which helped with
dishonesty and kept all the tokens together. In order to account for the
tokens, the bulla would have to be crushed to reveal their content. This
introduced the idea of impressing the token onto the wet bulla before it
dried, to insure trust that the tokens hadn't been tampered with and for
anyone to know what exactly was in the bulla without having to break it."

You could only use the bulla once because it had to be destroyed in order
to get the tokens out! I think there are even examples of bulla with a kind
of "signature" on them (an imprint with the seal of a noble family etc).

"send me a Bitcoin bulla" has a nice ring to it!

Sincerely,

LL





On Fri, Oct 11, 2019 at 2:44 AM Emil Engler via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> * Sorry if this mail was sent multiple times, my E-Mail client went crazy *
>
> Thanks for all your feedback.
> I came to the decision to write a BIP for this, even if it might not be
> implemented by many wallets, a standardization is never wrong and this
> would be the first step in the correct direction for better on-chain
> privacy.
>
> However currently we still need a good term for the 'address' replacement.
>
> The current suggestions are:
> * Invoice ID
> * Payment Token
> * Bitcoin invoice (address)
> * Bitcoin invoice (path)
>
> Because of the LN term invoice I really like the term 'Bitcoin Invoice'
> by Chris Belcher.
>
> So how do we find a consensus about these terms?
>
> Greetings
> Emil Engler
>


Re: [bitcoin-dev] [Lightning-dev] OP_CAT was Re: Continuing the discussion about noinput / anyprevout

2019-10-06 Thread Lloyd Fournier via bitcoin-dev
Hi Thread,

I made a reply to the OP but didn't "reply all" so it just went directly to
Ethan. Since the comments were interesting I'll attempt to salvage them by
posting them in full:

== Lloyd's post ==
Hi Ethan,

I'd be interested to know what protocols you need OP_CAT for. I'm trying to
figure out if there really exists any script based protocol that doesn't
have a more efficient scriptless counterpart.  For example,
A²L[1] achieves the same thing as Tumblebit but requires no script. I can
imagine paying based on a merkle path could be useful, but a protocol was
recently suggested on lightning-dev [2] that does this but without OP_CAT
(and without any script!).


[1] https://eprint.iacr.org/2019/589.pdf
[2]
https://www.mail-archive.com/lightning-dev@lists.linuxfoundation.org/msg01427.html
(*I linked to the wrong thread in the original email*).

LL

== Ethan's response ==
Hi Lloyd,

Thanks for your response. I am not sure if you intended to take this off
list or not.

I plan to at some point to enumerate in detail protocols that OP_CAT would
benefit. A more important point is that OP_CAT is a basic building block
and that we don't know what future protocols it would allow. In my own
research I have avoided going down certain paths because it isn't worth
the time to investigate knowing that OP_CAT wouldn't make the protocol
practical.

In regards to scriptless scripts they almost always require an interactive
protocol and sometimes ZKPs. A2L is very impressive but like TumbleBit it
places a large burden on the developer. Additionally I am aware of no way
to reveal a subset of preimages with scriptless scripts, or to do a conditioned
reveal, i.e. these preimages can only spend under these two pubkeys and
timelockA, where after timelockZ this other pubkey can spend without a
preimage. Scriptless scripts are a fantastic tool but they shouldn't be
the only tool that we have.

I'm not sure I follow what you are saying with [2]

This brings me back a philosophical point:
Bitcoin should give people basic tools to build protocols without first
knowing what all those protocols are especially when those tools have very
little downside.

I really appreciate your comments.

Thanks,
Ethan
==

*Back to normal thread*

Hi Ethan,

Thanks for the insightful reply and sorry for my mailing list errors.

> I plan to at some point to enumerate in detail protocols that OP_CAT
would benefit.

Sweet. Thanks.

> Additionally I am aware of no way to reveal a subset of preimages with
scriptless scripts, or to do a conditioned reveal, i.e. these preimages can only
spend under these two pubkeys and timelockA, where after timelockZ this
other pubkey can spend without a preimage. Scriptless scripts are a
fantastic tool but they shouldn't be the only tool that we have.

Yes. With adaptor signatures there is no way to reveal more than one
pre-image; you are limited to revealing a single scalar. But you can have
multiple transactions spending from the same output, each with a different
set of scriptless conditions (absolute time locks, relative time locks and
pre-image reveal). This is enough to achieve what I think you are
describing. FWIW there's a growing consensus that you can do lightning
without script [1]. Perhaps we can't do everything with this technique. My
current focus is figuring out what useful things we can't do like this
(even if we were to go wild and add whatever opcodes we wanted). So far it
looks like covenants are the main exception.

> I'm not sure I follow what you are saying with [2]

That is perfectly understandable as I linked the wrong thread (sorry!).
Here's the right one:
https://www.mail-archive.com/lightning-dev@lists.linuxfoundation.org/msg01427.html

I was pointing to the surprising result that you can actually pay for a
merkle path with a particular merkle root leading to a particular leaf that
you're interested in without validating the merkle path on chain (e.g.
OP_CAT and OP_SHA256). The catch is that the leaves have to be pedersen
commitments and you prove the existence of your data in the merkle root by
showing an opening to the leaf pedersen commitment. This may not be general
enough to cover every merkle tree use case (but I'm not sure what those
are!).
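For reference, the computation that an on-chain OP_CAT + OP_SHA256 merkle path check would perform looks like this in Python (an illustrative sketch; a real script must also commit to the left/right ordering at each step):

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def verify_merkle_path(leaf: bytes, path, root: bytes) -> bool:
    # Each step concatenates (OP_CAT) and hashes (OP_SHA256)
    acc = sha256(leaf)
    for sibling, node_is_left in path:
        acc = sha256(acc + sibling) if node_is_left else sha256(sibling + acc)
    return acc == root

# Four-leaf example: prove b"c" is under the root
hs = [sha256(x) for x in [b"a", b"b", b"c", b"d"]]
l01, l23 = sha256(hs[0] + hs[1]), sha256(hs[2] + hs[3])
root = sha256(l01 + l23)
assert verify_merkle_path(b"c", [(hs[3], True), (l01, False)], root)
```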

> This brings me back a philosophical point:
> Bitcoin should give people basic tools to build protocols without first
knowing what all those protocols are especially when those tools have very
little downside.

This is a really powerful idea. But I've started feeling like you have to
just design the layer 2 protocols first and then design layer 1! It seems
like almost every protocol that people want to make requires very
particular fundamental changes: SegWit for LN-penalty and NOINPUT for eltoo
for example. On top of that it seems like just having the right signature
scheme (schnorr) at layer 1 is enough to enable most useful stuff in an
elegant way.

[1]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-September/017309.html

Cheers,

LL

On 

Re: [bitcoin-dev] OP_LOOKUP_OUTPUT proposal

2019-08-12 Thread Lloyd Fournier via bitcoin-dev
Hello Runchao and ZmnSCPxj,

I think we can simplify the explanation here by not using joint signatures
and payment channel like constructions. ZmnSCPxj's more complex
construction could be more dynamic and practical in some settings but at
least for me it gets in the way of capturing how this relatively simple
idea works.
Here's my attempt at distilling the idea:

Step 0: Alice and Bob negotiate the parameters (timeouts, refund/redeem
pubkeys, the collateral amounts and inputs/outputs for the WTJ-HTLC)

=== Step 1 ===
 Alice signs and broadcasts the BTC-HTLC and sends signature(s) on her
input(s) to the WJT-HLTC to Bob.
Note:
1. She does not need to wait for the BTC-HTLC to confirm before she sends
her signature(s).
2. There is no benefit to Alice in delaying at this point

=== Step 2 ===
Upon receiving Alice's input signature(s) and seeing the BTC-HTLC with
sufficient confirmations, Bob completes the transaction by supplying his
own signature(s) and broadcasts it.

Note:
1. Bob's ability to delay at this point shouldn't be considered an option.
Alice may withdraw her offer by double spending her one of her inputs to
the WTJ-HTLC. Alice's ability to cancel the offer and take back BTC after
the timeout proves there is no option (options cannot be cancelled)
2. In this plain construction Alice should cancel promptly (if she doesn't
see the WTJ-HTLC within the next 1 or 2 blocks for example)
3. You could even extend this protocol to specify that Bob sends signatures
on his inputs to the WTJ-HTLC immediately to Alice. If he refuses, Alice can
cancel within a second or two.

=== Step 3 ===
Upon seeing the WTJ-HTLC get sufficient confirmations, Alice takes the
funds (including her collateral back) by revealing the secret.

Note:
1. If she doesn't redeem the HTLC she loses her collateral. Assuming the
loss of the collateral overwhelms any gain she could experience from
delaying her decision and that she operates in her own financial interest, she
redeems it immediately.

Step 4 is as usual.

At each step there is no unfair advantage to either party (at least if we
idealise the blockchains somewhat and assume that neither party can
influence which transactions get into which block etc etc).

ZmnSCPxj,

Thanks for continuing to spread this idea!
I'm still not sure about your "two hashes" approach to lightning but I hope
to get to the bottom of it soon by describing how I think it should work
more formally somewhere. Will post to lightning-dev when I do :)

LL

On Mon, Aug 12, 2019 at 4:06 PM ZmnSCPxj via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Good morning Runchao,
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Monday, August 12, 2019 11:19 AM, Runchao Han 
> wrote:
>
> > Good morning ZmnSCPxj,
> >
> > Sorry for the ambiguity of my last email. It was Sunday and I wrote it
> in 1 min on my bed. Let me elaborate what we are thinking of here.
> >
> > ## Analysis on the protocol from Fournier et al.
> >
> > In this protocol, Bob participates in the swap following the steps below:
> >
> > 1. Alice and Bob creates a payment channel on WJT blockchain.
> > 2. Bob creates the WJT transaction using the joint account of Alice and
> Bob, including 1) Bob's input of 1,000,000 WJT, 2) Alice’s input for the
> 10,000 WJT premium. This transaction should be signed by both Alice and Bob
> in order to be valid.
> > 3. Bob signs the WJT transaction and sends the WJT transaction to Alice.
> > 4. Alice signs this WJT transaction. At this stage, Alice has both the
> valid BTC transaction and the valid WJT transaction.
> > 5. Alice broadcasts both the BTC transaction and the WJT transaction.
>
> Incorrect.
>
> The order is below.
> I add also the behavior when the protocol is stalled such that a step is
> not completed.
>
> 1.  Alice broadcasts and confirms a BTC transaction paying an HTLC,
> hashlock Bob, Timelock Alice.
> * Alice is initiating the protocol via this step, thus non-completion
> of this step is simply not performing the protocol.
> 2.  Alice informs the BTC transaction to Bob.
> * If Alice does not perform this, Bob does not know it and Alice
> locked her own money for no reason.
> 3.  Alice and Bob indicate their inputs for the WJT-side funding
> transaction.
> * If Alice does not perform this, it aborts the protocol and Alice
> locked her own money for no reason.
> * If Bob does not perform this, it aborts the protocol and Bob turns
> down the opportunity to earn 10,000 WJT (opportunity cost).
> 4.  Alice and Bob exchange signatures for the WJT-side claim transaction
> which spends the funding transaction via the hashlock side and gives
> 1,000,000 WJT to payout to Alice and 10,000 WJT premium to Bob.
> Order does not matter as funding  tx is still unsigned.
> * If Alice does not perform this, it aborts the protocol and Alice
> locked her own money for no reason.
> * If Bob does not perform this, it aborts the protocol and Bob turns
> down