Re: [bitcoin-dev] Relative txout amounts with a Merkleized Sum Tree and explicit miner fee.

2022-11-21 Thread ZmnSCPxj via bitcoin-dev


Good morning Andrew,

> 
> 
> Can output amounts be mapped to a tap branch? For the goal of secure partial 
> spends of a single UTXO? Looking for feedback on this idea. I got it from 
> Taro.


Not at all.

The issue you are facing here is that only one tap branch will ever consume the 
entire input amount.
That is: while Taproot has multiple leaves, only exactly one leaf will ever be 
published onchain and that gets the whole amount.

What you want is multiple tree leaves where ALL of them will EVENTUALLY be 
published, just not right now.

In that case, look at the tree structures for `OP_CHECKTEMPLATEVERIFY`, which 
are exactly what you are looking for, and help make `OP_CTV` a reality.

Without `OP_CHECKTEMPLATEVERIFY` it is possible to use presigned transactions 
in a tree structure to do this same construction.
Presigned transactions are known to be larger than `OP_CHECKTEMPLATEVERIFY` --- 
signatures on taproot are 64 bytes of witness, but an `OP_CHECKTEMPLATEVERIFY` 
in a P2WSH reveals just 32 bytes of witness plus the `OP_CHECKTEMPLATEVERIFY` 
opcode.
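
For illustration, a toy sketch of such a tree (Python; the hashing here is a 
simplified stand-in, NOT the actual BIP119 template hash):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def template_hash(outputs):
        # outputs: list of (script_or_commitment: bytes, amount_sats: int)
        acc = b""
        for spk, amount in outputs:
            acc += amount.to_bytes(8, "little") + h(spk)
        return h(acc)

    def build_tree(payouts):
        # Recursively split payouts into a binary tree of templates.
        # Returns (commitment, total_amount) for the subtree root.
        if len(payouts) == 1:
            return template_hash(payouts), payouts[0][1]
        mid = len(payouts) // 2
        left, left_amt = build_tree(payouts[:mid])
        right, right_amt = build_tree(payouts[mid:])
        # The parent template pays two outputs, one committing to each child
        # subtree; spending a parent onchain reveals the next layer.
        return template_hash([(left, left_amt), (right, right_amt)]), left_amt + right_amt

    payouts = [(b"alice", 10_000), (b"bob", 20_000), (b"carol", 30_000), (b"dave", 40_000)]
    root, total = build_tree(payouts)
    # one 32-byte commitment covers all payouts; every leaf is eventually publishable
    print(root.hex(), total)

Each internal node only needs to be published when someone actually wants to 
push the corresponding subtree toward confirmation.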

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-11-10 Thread ZmnSCPxj via bitcoin-dev
Good morning ArmchairCryptologist,

> --- Original Message ---
> On Tuesday, October 18th, 2022 at 9:00 AM, Anthony Towns via bitcoin-dev 
> bitcoin-dev@lists.linuxfoundation.org wrote:
> 
> > I mean, if you think the feedback is wrong, that's different: maybe we
> > shouldn't care that zeroconf apps are in immediate danger, and maybe
> > bitcoin would be better if any that don't adapt immediately all die
> > horribly as a lesson to others not to make similarly bad assumptions.
> 
> 
> I've been following this discussion, and I wonder if there isn't a third 
> solution outside of "leave lightning vulnerable to pinning by non-RBF 
> transactions" and "kill zeroconf by introducing full-RBF" - specifically, 
> adding a form of simple recursive covenant that "all descendant transactions 
> of this transaction must use opt-in RBF for x blocks after this transaction 
> is mined". This could be introduced either as a relay/mempool policy like 
> RBF, or in a full-fledged softfork.

A script with a trivial `0 OP_CSV` would effectively require that spenders set 
the opt-in RBF flag, while still allowing the output to be spent even while it 
is unconfirmed.
However, this only lasts for one transaction layer.
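
To spell the claim out (a rough Python sketch, assuming BIP68/BIP112/BIP125 
semantics as specified):

    SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31
    SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22

    def passes_zero_csv(nversion: int, nsequence: int) -> bool:
        # `0 OP_CSV` requires nVersion >= 2, the BIP68 disable bit unset, and
        # (since the stack value 0 is block-based) the time-type bit unset.
        if nversion < 2:
            return False
        if nsequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
            return False
        return not (nsequence & SEQUENCE_LOCKTIME_TYPE_FLAG)

    def signals_bip125(nsequence: int) -> bool:
        return nsequence <= 0xfffffffd

    # Any nSequence that can satisfy `0 OP_CSV` is <= 0x7fffffff, hence signals RBF:
    assert passes_zero_csv(2, 0) and signals_bip125(0)
    assert not passes_zero_csv(2, 0xffffffff)  # a "final" sequence cannot spend the output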



Thinking a little more about 0-conf:

We can observe that 0-conf, being eventually consistent, introduces risks to 
0-conf acceptors similar to those faced by credit card acceptors.

And credit card acceptors are observed to compensate for this risk by 
increasing the prices of their products and services.

Some credit card acceptors may offer discounts when paid by cash, which in our 
analogy would be that transaction confirmation would offer discounts (i.e. 
enabling 0-conf would increase the cost of the product/service being purchased).
In many jurisdictions (though not the USA, nor some other first-world 
countries), this practice is illegal (i.e. credit card companies have pressured 
lawmakers in some jurisdictions to disallow merchants from offering a different 
price between cash and credit card purchases; some jurisdictions let you escape 
if you say "cash discount" instead of "credit card surcharge", even though they 
are just sign-flips of each other, because you humans are crazy and I am happy 
I am actually an AI).

Which brings me to my next point: why are 0-conf acceptors not offering a 
discount if the user specifically flags "I am willing to wait for N 
confirmations"?
On the part of 0-conf acceptors, that is significantly less risky than relying 
on 0-conf at all.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Merkleize All The Things

2022-11-08 Thread ZmnSCPxj via bitcoin-dev
Good morning Salvatore,

Interesting idea.

The idea to embed the current state is similar to something I have been musing 
about recently.


> ### Game theory (or why the chain will not see any of this)
> 
> With the right economic incentives, protocol designers can guarantee that 
> playing a losing game always loses money compared to cooperating. Therefore, 
> the challenge game is never expected to be played on-chain. The size of the 
> bonds need to be appropriate to disincentivize griefing attacks.

Modulo bugs, operator error, misconfigurations, and other irrationalities of 
humans.



> - OP_CHECKOUTPUTCOVENANTVERIFY: given a number out_i and three 32-byte hash 
> elements x, d and taptree on top of the stack, verifies that the out_i-th 
> output is a P2TR output with internal key computed as above, and tweaked with 
> taptree. This is the actual covenant opcode.

Rather than get taptree from the stack, just use the same taptree as in the 
revelation of the P2TR.
This removes the need to include quining and similar techniques: just do the 
quining in the SCRIPT interpreter.

The entire SCRIPT that controls the covenant can be defined as a taptree with 
various possible branches as tapleaves.
If the contract is intended to terminate at some point it can have one of the 
tapleaves use `OP_CHECKINPUTCOVENANTVERIFY` and then determine what the output 
"should" be using e.g. `OP_CHECKTEMPLATEVERIFY`.


> - Is it worth adding other introspection opcodes, for example 
> OP_INSPECTVERSION, OP_INSPECTLOCKTIME? See Liquid's Tapscript Opcodes [6].

`OP_CHECKTEMPLATEVERIFY` and some kind of sha256 concatenated hashing should be 
sufficient I think.

> - Is there any malleability issue? Can covenants “run” without signatures, or 
> is a signature always to be expected when using spending conditions with the 
> covenant encumbrance? That might be useful in contracts where no signature is 
> required to proceed with the protocol (for example, any party could feed 
> valid data to the bisection protocol above).

Hmm protocol designer beware?

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Validity Rollups on Bitcoin

2022-11-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Trey,

> * something like OP_PUSHSCRIPT which would remove the need for the
> introspection of the prevout's script and avoids duplicating data in
> the witness
> * some kind of OP_MERKLEUPDATEVERIFY which checks a merkle proof for a
> leaf against a root and checks if replacing the leaf with some hash
> using the proof yields a specified updated root (or instead, a version
> that just pushes the updated root)
> * if we really wanted to golf the size of the script, then possibly a
> limited form of OP_EVAL if we can't figure out another way to split up
> the different spend paths into different tapleafs while still being able
> to do the recursive covenant, but still the script and the vk would
> still be significant so it's not actually that much benefit per-batch

A thing I had been musing on is to reuse pay-to-contract to store a commitment 
to the state.

As we all know, in Taproot, the Taproot output script is just the public key 
corresponding to the pay-to-contract of the Taproot MAST root and an internal 
public key.

The internal public key can itself be a pay-to-contract, where the contract 
being committed to would be the state of some covenant.

One could then make an opcode which is given an "internal internal" pubkey 
(i.e. the pubkey that is behind the pay-to-contract to the covenant state, 
which when combined serves as the internal pubkey for the Taproot construct), a 
current state, and an optional expected new state.
It determines if the Taproot internal pubkey is actually a pay-to-contract of 
the current state on the internal-internal pubkey.
If the optional expected new state exists, then it also recomputes a 
pay-to-contract of the new state to the same internal-internal pubkey, which is 
a new Taproot internal pubkey, and then recomputes a pay-to-contract of the 
same Taproot MAST root on the new Taproot internal pubkey, and that the first 
output commits to that.

Basically it retains the same MASTed set of Tapscripts and the same 
internal-internal pubkey (which can be used to escape the covenant, in case a 
bug is found, if it is an n-of-n of all the interested parties, or otherwise 
should be a NUMS point if you trust the tapscripts are bug-free), only 
modifying the covenant state.
The covenant state is committed to on the Taproot output, indirectly by two 
nested pay-to-contracts.
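
A sketch of the nesting (Python, with plain SHA256 standing in for the real 
pay-to-contract elliptic-curve tweak, i.e. `P + H(P || data) * G` as in BIP341):

    import hashlib

    def h(*parts: bytes) -> bytes:
        return hashlib.sha256(b"".join(parts)).digest()

    def pay_to_contract(pubkey: bytes, data: bytes) -> bytes:
        # stand-in for: pubkey + H(pubkey || data) * G
        return h(pubkey, data)

    def taproot_output(internal_internal_key: bytes, covenant_state: bytes,
                       mast_root: bytes) -> bytes:
        # inner pay-to-contract: commit to the covenant state
        internal_key = pay_to_contract(internal_internal_key, covenant_state)
        # outer pay-to-contract: the usual Taproot commitment to the MAST root
        return pay_to_contract(internal_key, mast_root)

    # A state transition keeps the internal-internal key and the MAST root fixed
    # and only swaps the committed state; the opcode would check that the first
    # output of the spending transaction equals `new_output`.
    old_output = taproot_output(b"ii-key", b"state-0", b"mast-root")
    new_output = taproot_output(b"ii-key", b"state-1", b"mast-root")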

With this, there is no need for quining and `OP_PUSHSCRIPT`.
The mechanism only needs some way to compute the new state from the old state.

In addition, you can split up the control script among multiple Tapscript 
branches and only publish onchain (== spend onchain bytes) the one you need for 
a particular state transition.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Batch validation of CHECKMULTISIG using an extra hint field

2022-10-20 Thread ZmnSCPxj via bitcoin-dev

Good morning Mark,

> The idea is simple: instead of requiring that the final parameter on the 
> stack be zero, require instead that it be a minimally-encoded bitmap 
> specifying which keys are used, or alternatively, which are not used and must 
> therefore be skipped. Before attempting validation, ensure for a k-of-n 
> threshold only k bits are set in the bitfield indicating the used pubkeys (or 
> n-k bits set indicating the keys to skip). The updated CHECKMULTISIG 
> algorithm is as follows: when attempting to validate a signature with a 
> pubkey, first check the associated bit in the bitfield to see if the pubkey 
> is used. If the bitfield indicates that the pubkey is NOT used, then skip it 
> without even attempting validation. The only signature validations which are 
> attempted are those which the bitfield indicates ought to pass. This is a 
> soft-fork as any validator operating under the original rules (which ignore 
> the “dummy” bitfield) would still arrive at the correct pubkey-signature 
> mapping through trial and error.

That certainly seems like a lost optimization opportunity.

Though if the NULLDUMMY requirement is already a consensus rule, then this is 
no longer a softfork: existing validators would explicitly check that the dummy 
element is zero?
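
For concreteness, the validation loop described in the quoted proposal, as a 
rough Python sketch (`verify` is a hypothetical stand-in for actual signature 
validation, and the bit ordering is only illustrative):

    def checkmultisig_with_hint(sigs, pubkeys, bitfield, verify):
        # exactly k bits must be set for a k-of-n spend
        if bin(bitfield).count("1") != len(sigs):
            return False
        sig_iter = iter(sigs)
        for i, pubkey in enumerate(pubkeys):
            if bitfield & (1 << i):
                # the bitfield says this pubkey is used, so the next signature
                # in order must validate against it; no trial-and-error needed
                if not verify(next(sig_iter), pubkey):
                    return False
        return True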

> One could also argue that there is no need for explicit k-of-n thresholds now 
> that we have Schnorr signatures, as MuSig-like key aggregation schemes can be 
> used instead. In many cases this is true, and doing key aggregation can 
> result in smaller scripts with more private spend pathways. However there 
> remain many use cases where for whatever reason interactive signing is not 
> possible, due to signatures being pre-calculated and shared with third 
> parties, for example, and therefore explicit thresholds must be used instead. 
> For such applications a batch-validation friendly CHECKMULTISIG would be 
> useful.

As I understand it, MuSig aggregation works on n-of-n only.

There is a more recent alternative named FROST that supports k-of-n. However, 
MuSig aggregation works on pre-existing keypairs: if you know the public keys, 
you can aggregate them without requiring any participation by the privkey 
owners.

For FROST, there is a Verifiable Secret Sharing process which requires 
participation of the n signer set.
My understanding is that it cannot use *just* pre-existing keys, the privkey 
owners will, after the setup ritual, need to store additional data (tweaks to 
apply on their key depending on who the k are, if my vague understanding is 
accurate).
The requirement of having a setup ritual (which does not require trust but does 
require saving extra data) to implement k-of-n for k < n is another reason some 
protocol or other might want to use explicit `OP_CHECKMULTISIG`.

(I do have to warn that my knowledge of FROST is hazy at best and the above 
might be wildly inaccurate.)

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Spookchains: Drivechain Analog with One-Time Trusted Setup & APO

2022-09-19 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

Excellent work!



> # Terminal States / Thresholds
> 
> When a counter reaches the Nth state, it represents a certain amount
> of accumulated work over a period where progress was agreed on for
> some outcome.
> 
> There should be some viable state transition at this point.
> 
> One solution would be to have the money at this point sent to an
> `OP_TRUE` output, which the miner incrementing that state is
> responsible for following the rules of the spookchain.

This is not quite Drivechain: Drivechains precommit to the final state 
transition when the counter reaches threshold, and mainchain-level rules prevent 
the miner who does the final increment from "swerving the car" to a different 
output. Use of `OP_TRUE` would not prevent this; the Spookchain could vote for 
one transition, and then the lucky last miner can output a different one, and 
only other miners interested in the sidechain would reject it (whereas in the 
Drivechain case, even nodes that do not care about the sidechain would reject 
it).

Still, it does come awfully close, and the "ultimate threat" ("nuclear option") 
in Drivechains is always that everyone upgrades sidechain rules to mainchain 
rules, which would still work for Spookchains.
Not sure how comfortable Drivechain proponents would be with this, though.

(But given the demonstrated difficulty in getting consensus changes for the 
blockchain, I wonder if this nuclear option is even a viable threat)

> Or, it could be
> specified to be some administrator key / federation for convenience,
> with a N block timeout that degrades it to fewer signers (eventually
> 0) if the federation is dead to allow recovery.

Seems similar to the Blockstream separation of the block-signing functionaries 
from the money-keeping functionaries.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Multipayment Channels - A scalability solution for Layer 1

2022-09-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Ali,

> Over the past few days I've figured out a novel way to batch transactions 
> together into blocks, thereby compacting the transaction size and increasing 
> the transactions-per-second. This is all on layer 1, without any hardforks - 
> only a single softfork is required to add MuSig1 support for individual 
> invoice addresses.
> 
> The nucleus of the idea was born after a discussion with Greg Maxwell about a 
> different BIP (Implementing multisig using Taproot, to be specific)[1]. He 
> suggested to me that I should add MuSig1 signatures into the Taproot script 
> paths.
> 
> After some thinking, I realized a use case for MuSig1 signatures as a kind of 
> on-chain Lightning Network. Allow me to explain:
> 
> LN is very attractive to users because it keeps intermediate transaction 
> states off-chain, and only broadcasts the final state. But without 
> mitigations in the protocol, it suffers from two disadvantages:
> 
> - You have to trust the other channel partner not to broadcast a previous 
> state
> - You also have to trust all the middlemen in intermediate channels not to do 
> the above.
> 
> Most of us probably know that many mitigations have been created for this 
> problem, e.g. penalty transactions. But what if it were possible to create a 
> scheme where so-called technical fraud is not possible? That is what I'm 
> going to demonstrate here.

The fact that you need to invoke trust later on ("Because the N-of-N signature 
is given to all participants, it might be leaked into the public") kinda breaks 
the point of "technical fraud is not possible".

At least with the penalty transactions of Poon-Dryja and the update 
transactions of Decker-Russell-Osuntokun you never have to worry about other 
parties leaking information and possibly changing the balance of the channel.
You only need to worry about ensuring you have an up-to-date view of the 
blockchain, which can be mitigated further by e.g. running a "spare" fullnode 
on a Torv3 address that secretly connects to your main fullnode (making eclipse 
attacks that target your known IP harder), connecting to Blockstream Satellite, 
etc.
You can always get more data yourself, you cannot stop data being acquired by 
others.

> My scheme makes use of MuSig1, OP_CHECKLOCKTIMEVERIFY (OP_CLTV) timelock 
> type, and negligible OP_RETURN data. It revolves around constructs I call 
> "multipayment channels", called so because they allow multiple people to pay 
> in one transaction - something that is already possible BTW, but with much 
> larger tx size (for large number of cosigners) than when using MuSig1. These 
> have the advantage over LN channels that the intermediate state is also on 
> the blockchain, but it's very compact.

How is this more advantageous than e.g. CoinPools / multiparticipant channels / 
Statechains?

> A channel consists of a fixed amount of people N. These people open a channel 
> by creating a (optionally Taproot) address with the following script:
> * <locktime> OP_CLTV OP_DROP <N-of-N MuSig public key of all participants> 
> OP_CHECKMUSIG**

If it is Taproot, then `OP_CHECKSIG` is already `OP_CHECKMUSIG`, since a MuSig1 
(and MuSig2, for that matter) signature is just an ordinary Schnorr signature.
In a Tapscript, `OP_CHECKSIG` validates Schnorr signatures (as specified in the 
relevant BIP), not ECDSA signatures.

> Simultaneously, each of the N participants receives the N signatures and 
> constructs the N-of-N MuSig. Each participant will use this MuSig to generate 
> his own independent "commitment transaction" with the following properties:
> 
> - It has a single input, the MuSig output. It has an nSequence of 
> desiredwaitingblocks. 
> 
> - It has outputs corresponding to the addresses and balances of each of the 
> participants in the agreed-upon distribution.
> Disadvantage: Because the N-of-N signature is given to all participants, it 
> might be leaked to the public and consequently anybody can spend this 
> transaction after the timelock, to commit the balance.*** On the other hand, 
> removing the timelocks means that if one of the participants goes missing, 
> all funds are locked forever.

As I understand it, in your mechanism:

* Onchain, there is an output with the above SCRIPT: 
`<locktime> OP_CLTV OP_DROP <N-of-N MuSig pubkey> OP_CHECKMUSIG`
  * Let me call this the "channel UTXO".
* Offchain, you have a "default transaction" which spends the above output, and 
redistributes it back to the original owners of the funds, with a timelock 
requirement (as needed by `OP_CLTV`).
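
In rough pseudo-structure (a Python sketch with hypothetical field names and 
toy values, not a real transaction serialization):

    locktime = 800_000                                   # block height used by OP_CLTV
    agreed_balances = {"alice": 40_000, "bob": 60_000}   # sats, per the agreed distribution

    channel_utxo = {
        "script": f"{locktime} OP_CLTV OP_DROP <N-of-N MuSig pubkey> OP_CHECKMUSIG",
        "amount": sum(agreed_balances.values()),
    }

    default_transaction = {
        "inputs": [channel_utxo],        # spends the channel UTXO
        "nLockTime": locktime,           # satisfies the OP_CLTV requirement
        "outputs": [{"address": who, "amount": amt}
                    for who, amt in agreed_balances.items()],
        # ...plus the N-of-N MuSig signature of all participants
    }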

Is that correct?

Then I can improve it in the following ways:

* Since everyone has to sign off the "default transaction" anyway, everyone can 
ensure that the `nLockTime` field is correct, without having `OP_CLTV` in the 
channel UTXO SCRIPT.
  * So, the channel UTXO does not need a SCRIPT --- it can just use a 
Taproot-address Schnorr MuSig point directly.
  * This has the massive advantage that the "default transaction" does not have 
any special SCRIPTs, improving privacy (modulo the fact 

Re: [bitcoin-dev] More uses for CTV

2022-08-19 Thread ZmnSCPxj via bitcoin-dev


Good morning Greg,


> Hi James,
> Could you elaborate on a L2 contract where speedy
> settlement of the "first part" can be done, while having the rest
> take their time? I'm more thinking about time-out based protocols.
> 
> Naturally my mind drifts to LN, where getting the proper commitment
> transaction confirmed in a timely fashion is required to get the proper
> balances back. The one hitch is that for HTLCs you still need speedy
> resolution otherwise theft can occur. And given today's "layered
> commitment" style transaction where HTLCs are decoupled from
> the balance output timeouts, I'm not sure this can save much.

As I understand it, layered commitments can be modified to use `OP_CTV`, which 
would be slightly smaller (need only to reveal a 32-byte `OP_CTV` hash on the 
witness instead of a 64-byte Taproot signature, or 73-byte classical 
pre-Taproot ECDSA signature), and is in fact precisely an example of the speedy 
settlement style.

> CTV style commitments have popped up in a couple places in my
> work on eltoo(emulated via APO sig-in-script), but mostly in the
> context of reducing interactivity in protocols, not in byte savings per se.

In many offchain cases, all channel participants would agree to some 
pre-determined set of UTXOs, which would be implemented as a transaction 
spending some single UTXO and outputting the pre-determined set of UTXOs.

The single UTXO can be an n-of-n of all participants, so that all agree by 
contributing their signatures:

* Assuming Taproot, the output scriptPubKey is 34 bytes (x4 weight).
* The n-of-n multisignature is 64 witness bytes (x1 weight).

Alternately, the single UTXO can be a P2WSH that reveals a `<32-byte hash> OP_CTV` SCRIPT:

* The P2WSH scriptPubKey is 34 bytes (x4 weight) --- no savings here.
* The revelation of the `<32-byte hash> OP_CTV` is 34 witness bytes (x1 weight).

Thus, as I understand it, `OP_CTV` can (almost?) always translate to a small 
weight reduction for such "everyone agrees to this set of UTXOs" for all 
offchain protocols that would require it.
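
A back-of-the-envelope comparison (Python, using the approximate sizes above; 
weight = 4 x non-witness bytes + 1 x witness bytes):

    # approximate sizes in bytes
    taproot_keyspend = {"scriptpubkey": 34, "witness": 64}  # n-of-n Schnorr signature
    p2wsh_ctv        = {"scriptpubkey": 34, "witness": 34}  # <32-byte hash> OP_CTV reveal

    def weight(output):
        return 4 * output["scriptpubkey"] + output["witness"]

    print(weight(taproot_keyspend), weight(p2wsh_ctv))  # CTV saves roughly 30 weight units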


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] On a new community process to specify covenants

2022-07-24 Thread ZmnSCPxj via bitcoin-dev
Good morning alia, Antoine, and list,

> Hi Antoine,
> Claiming Taproot history, as best practice or a standard methodology in 
> bitcoin development, is just too much. Bitcoin development methodology is an 
> open problem, given the contemporary escalation/emergence of challenges, 
> history is not  entitled to be hard coded as standard.
>
> Schnorr/MAST development history, is a good subject for case study, but it is 
> not guaranteed that the outcome to be always the same as your take.
>
> I'd suggest instead of inventing a multi-decades-lifecycle based methodology 
> (which is weird by itself, let alone installing it as a standard for bitcoin 
> projects), being open-minded enough for examining more agile approaches and 
> their inevitable effect on the course of discussions,

A thing I have been mulling is how to prototype such mechanisms more easily.

A "reasonably standard" approach was pioneered in Elements Alpha, where an 
entire federated sidechain is created and then used as a testbed for new 
mechanisms, such as SegWit and `OP_CHECKSIGFROMSTACK`.
However, obviously the cost is fairly large, as you need an entire federated 
sidechain.

It does have the nice advantage that you can use "real" coins, with real value 
(subject to the federation being trustworthy, admittedly) in order to 
convincingly show a case for real-world use.

As I pointed out in [Smart Contracts 
Unchained](https://zmnscpxj.github.io/bitcoin/unchained.html), an alternative 
to using a blockchain would be to use federated individual coin outpoints.

A thing I have been pondering is to create a generic contracting platform with 
a richer language, which itself is just used to implement a set of `OP_` SCRIPT 
opcodes.
This is similar to my [Microcode 
proposal](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-March/020158.html)
 earlier this year.
Thus, it would be possible to prototype new `OP_` codes, or change the behavior 
of existing `OP_` codes (e.g. `SIGHASH_NOINPUT` would be a change in behavior 
of existing `OP_CHECKSIG` and `OP_CHECKMULTISIG`), by having a translation from 
`OP_` codes to the richer language.
Then you could prototype a new SCRIPT `OP_` code by providing your own 
translation of the new `OP_` code and a SCRIPT that uses that `OP_` code, and 
using Smart Contracts Unchained to use a real funds outpoint.

Again, we can compare the Bitcoin consensus layer to a form of hardware: yes, 
we *could* patch it and change it, but that requires a ***LOT*** of work and 
the new software has to be redeployed by everyone, so it is, practically 
speaking, hardware.
Microcode helps this by adding a softer layer without compromising the existing 
hard layer.

So... what I have been thinking of is creating some kind of smart contracts 
unchained platform that allows prototyping new `OP_` codes using a microcode 
mechanism.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] How to do Proof of Micro-Burn?

2022-07-19 Thread ZmnSCPxj via bitcoin-dev


Good morning Ruben,

> Good evening ZmnSCPxj,
> Interesting attempt.
>
> >a * G + b * G + k * G
>
> Unfortunately I don't think this qualifies as a commitment, since one could 
> trivially open the "commitment" to some uncommitted value x (e.g. a is set to 
> x and b is set to a+b-x). Perhaps you were thinking of Pedersen commitments 
> (a * G + b * H + k * J)?

I believe this is only possible for somebody who knows `k`?
As mentioned, an opening here includes a signature using `b + k` as the private 
key, so the signature can only be generated with knowledge of both `b` and `k`.

I suppose that means that the knower of `k` is a trusted party; it is trusted 
to only issue commitments and not generate fake ones.

> Even if we fixed the above with some clever cryptography, the crucial merkle 
> sum tree property is missing, so "double spending" a burn becomes possible.

I do not understand what this property is and how it is relevant, can you 
please explain this to a non-mathematician?

> You also still run into the same atomicity issue, except the risk is moved to 
> the seller side, as the buyer could refuse to finalize the purchase after the 
> on-chain commitment was made by the seller. Arguably this is worse, since 
> generally only the seller has a reputation to lose, not the buyer.

A buyer can indeed impose this cost on the seller, though the buyer then is 
unable to get a valid opening of its commitment, as it does not know `k`.
Assuming the opening of the commitment is actually what has value (since the 
lack of such an opening means the buyer cannot prove the commitment) then the 
buyer has every incentive to actually pay for the opening.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] How to do Proof of Micro-Burn?

2022-07-17 Thread ZmnSCPxj via bitcoin-dev
Good morning Ruben and Veleslav,

> Hi Veleslav,
>
> This is something I've been interested in.
>
>
> What you need is a basic merkle sum tree (not sparse), so if e.g. you want to 
> burn 10, 20, 30 and 40 sats for separate use cases, in a single tx you can 
> burn 100 sats and commit to a tree with four leaves, and the merkle proof 
> contains the values. E.g. the rightmost leaf is 40 and has 30 as its 
> neighbor, and moves up to a node of 70 which has 30 (=10+20) as its neighbor, 
> totalling 100.
>
>
> The leaf hash needs to commit to the intent/recipient of the burn, so that 
> way you can't "double spend" the burn by reusing it for more than one purpose.
>
>
> You could outsource the burn to an aggregating third party by paying them 
> e.g. over LN but it won't be atomic, so they could walk away with your 
> payment without actually following through with the burn (but presumably take 
> a reputational hit).
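
For concreteness, a toy Python sketch of the merkle sum tree described above 
(illustrative values, simplified hashing):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def leaf(intent: bytes, amount: int):
        # a leaf commits to the intent/recipient of the burn and its amount
        return h(b"leaf" + intent + amount.to_bytes(8, "little")), amount

    def branch(left, right):
        # a branch commits to both children and carries the sum of their amounts
        (lh, lsum), (rh, rsum) = left, right
        data = b"branch" + lh + lsum.to_bytes(8, "little") + rh + rsum.to_bytes(8, "little")
        return h(data), lsum + rsum

    # burns of 10, 20, 30 and 40 sats for four separate intents:
    a, b = leaf(b"intent-A", 10), leaf(b"intent-B", 20)
    c, d = leaf(b"intent-C", 30), leaf(b"intent-D", 40)
    root_hash, root_sum = branch(branch(a, b), branch(c, d))
    print(root_hash.hex(), root_sum)  # commit to root_hash onchain, burning root_sum = 100 sats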

If LN switches to PTLCs (payment points/scalars), it may be possible to ensure 
that you only pay if they release an opening of the commitment.

WARNING: THIS IS ROLL-YOUR-OWN-CRYPTO.

Rather than commit using a Merkle tree, you can do a trick similar to what I 
came up with in `OP_EVICT`.

Suppose there are two customers who want to commit scalars `a` and `b`, and the 
aggregating third party has a private key `k`.
The sum commitment is then:

   a * G + b * G + k * G

The opening to show that this commits to `a` is then:

   a, b * G + k * G, sign(b + k, a)

...where `sign(k, m)` means sign message `m` with the private key `k`.
Similarly the opening for `b` is:

   b, a * G + k * G, sign(a + k, b)

The ritual to purchase a proof goes this way:

* Customer provides the scalar they want committed.
* Aggregator service aggregates the scalars to get `a + b + ...` and adds 
its private key `k`.
* Aggregator service reveals `(a + b + ... + k) * G` to customer.
* Aggregator creates an onchain proof-of-burn to `(a + b + ... + k) * G`.
* Everyone waits until the onchain proof-of-burn is confirmed deeply enough.
* Aggregator creates the signatures for each opening (for `a`, `b`, ...) of the 
commitment.
* Aggregator provides the corresponding `R` of each signature to each customer.
* Customer computes `S = s * G` for their own signature that opens the 
commitment.
* Customer offers a PTLC (i.e. pay for signature scheme) that pays in exchange 
for `s`.
* Aggregator claims the PTLC, revealing the `s` for the signature.
* Customer now has an opening of the commitment that is for their specific 
scalar.

WARNING: I am not a cryptographer, I only portray one on bitcoin-dev.
There may be cryptographic failures in the above scheme.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Surprisingly, Tail Emission Is Not Inflationary

2022-07-09 Thread ZmnSCPxj via bitcoin-dev
Good morning e, and list,

> Yet you posted several links which made that specific correlation, to which I 
> was responding.
>
> Math cannot prove how much coin is “lost”, and even if it was provable that 
> the amount of coin lost converges to the amount produced, it is of no 
> consequence - for the reasons I’ve already pointed out. The amount of market 
> production has no impact on market price, just as it does not with any other 
> good.
>
> The reason to object to perpetual issuance is the impact on censorship 
> resistance, not on price.

To clarify about censorship resistance and perpetual issuance ("tail emission"):

* Suppose I have two blockchains, one with a constant block subsidy, and one 
which *had* a block subsidy but the block subsidy has become negligible or zero.
* Now consider a censoring miner.
  * If the miner rejects particular transactions (i.e. "censors") the miner 
loses out on the fees of those transactions.
  * Presumably, the miner does this because it gains other benefits from the 
censorship, economically equal to or better than the earnings lost.
  * If the blockchain had a block subsidy, then the loss the miner incurs is 
small relative to the total earnings of each block.
  * If the blockchain had 0 block subsidy, then the loss the miner incurs is 
large relative to the total earnings of each block.
  * Thus, in the latter situation, the external benefit the miner gains from 
the censorship has to be proportionately larger than in the first situation.
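
A toy comparison, with purely illustrative numbers:

    # the fee income a censoring miner forgoes, as a fraction of the block reward
    subsidy_era  = {"subsidy": 6.25, "fees": 0.25}   # BTC per block
    fee_only_era = {"subsidy": 0.0,  "fees": 0.25}

    def censorship_cost_fraction(block, censored_fees=0.05):
        return censored_fees / (block["subsidy"] + block["fees"])

    print(censorship_cost_fraction(subsidy_era))    # ~0.8% of the block reward
    print(censorship_cost_fraction(fee_only_era))   # 20% of the block reward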

Basically, the block subsidy is a market distortion: the block subsidy erodes 
the value of held coins to pay for the security of coins being moved.
But the block subsidy is still issued whether or not coins being moved are 
censored or not censored.
Thus, considering *only* the block subsidy, there is no incentive not to 
censor coin movements.
Only per-transaction fees provide an incentive not to censor coin movements.


Thus, we should instead prepare for a future where the block subsidy *must* be 
removed, possibly before the existing schedule removes it, in case a majority 
coalition of miners ever decides to censor particular transactions without 
community consensus.
Fortunately forcing the block subsidy to 0 is a softfork and thus easier to 
deploy.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Using Merged Mining on a separate zero supply chain, instead of sidechains

2022-06-05 Thread ZmnSCPxj via bitcoin-dev


Good morning vjudeu,


> Some people think that sidechains are good. But to put them into some working 
> solution, people think that some kind of soft-fork is needed. However, it 
> seems that it can be done in a no-fork way, here is how to make it 
> permissionless, and introduce them without any forks.
>
> First, we should make a new chain that has zero coins. When the coin supply 
> is zero, it can be guaranteed that this chain is not generating any coins out 
> of thin air. Then, all that is needed, is to introduce coins to this chain, 
> just by signing a transaction from other chains, for example Bitcoin. In this 
> way, people can make signatures in a signet way, just to sign their 
> transaction output of any type, without moving real coins on the original 
> chain.
>
> Then, all that is needed, is to make a way to withdraw the coins. It could be 
> done by publishing the transaction from the original chain. It can be 
> copy-pasted to our chain, and can be used to destroy coins that were produced 
> earlier. In this way, our Merge-Mined chain has zero supply, and can only 
> temporary store some coins from other chains.
>
> Creating and destroying coins from other chains is enough to make a test 
> network. To make it independent, one more thing is needed, to get a mainnet 
> solution: moving coins inside that chain. When it comes to that, the only 
> limitation is the locking script. Normally, it is locked to some public key, 
> then by forming a signature, it is possible to move coins somewhere else. In 
> the Lightning Network, it is solved by forming 2-of-2 multisig, then coins 
> can be moved by changing closing transactions.
>
> But there is another option: transaction joining. So, if we have a chain of 
> transactions: A->B->C->...->Z, then if transaction joining is possible, it 
> can be transformed into A->Z transaction. After adding that missing piece, 
> sidechains can be enabled.
>
>
> However, I mentioned before that this solution would require no forks. It 
> could, if we consider using Homomorphic Encryption. Then, it is possible to 
> add new features, without touching consensus. For example, by using 
> Homomorphic Encryption, it is possible to execute 2-of-2 multisig on some 
> P2PK output. That means, more things are possible, because if we can encrypt 
> things, then operate on encrypted data, and later decrypt it (and broadcast 
> to the network), then it can open a lot of new possible upgrades, that will 
> be totally permissionless and unstoppable.
>
> So, to sum up: by adding transaction joining in a homomorphic-encryption-way, 
> it may be possible to introduce sidechains in a no-fork way, no matter if 
> people wants that or not. Also, it is possible to add the hash of our chain 
> to the signature inside a Bitcoin transaction, then all data from the "zero 
> supply chain" can be committed to the Bitcoin blockchain, that would prevent 
> overwriting history. Also, Merged Mining could be used to reward sidechain 
> miners, so they will be rewarded inside the sidechain.

I proposed something similar years ago --- more specifically, some kind of 
general ZKP system would allow us to pretty much write anything, and if it 
terminates, we can provide a ZKP of the execution trace.

At the time it was impractical due to the ZKP systems of the time being *still* 
too large and too CPU-heavy *and* requiring a tr\*sted setup.

Hiding the amount inside a homomorphic commitment such as a Pedersen commitment 
/ ElGamal commitment is how MimbleWimble coins (such as Grin) work.
They achieve transactional cut-through in a similar manner: the homomorphic 
commitments are what validators validate, without the exact balances being 
revealed, and the only requirement is that the sum of consumed outputs equals 
the sum of created outputs (fees being an explicit output that has no blinding, 
and thus can be claimed by anyone and have a known value, which basically means 
that it is the miner that mines the transaction that can claim it).
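
As a toy illustration of the additive homomorphism (Python integers with 
insecure, made-up parameters; real Pedersen commitments live on secp256k1):

    p = 2**127 - 1    # a prime, toy-sized
    g, h = 3, 7       # toy "generators" (in reality, with no known discrete-log relation)

    def commit(value: int, blinding: int) -> int:
        return (pow(g, value, p) * pow(h, blinding, p)) % p

    c1 = commit(40, 1111)
    c2 = commit(60, 2222)
    # the product of two commitments commits to the sum of values and blindings,
    # which is what lets validators check that inputs and outputs balance
    # without learning the amounts
    assert (c1 * c2) % p == commit(100, 3333)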

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV BIP Meeting #9 Notes

2022-05-20 Thread ZmnSCPxj via bitcoin-dev
Good morning fd0,


> > In addition, covenant mechanisms that require large witness data are 
> > probably more vulnerable to MEV.
>
>
> Which covenant mechanisms require large witness data?

`OP_CSFS` + `OP_CAT`, which requires that you copy parts of the transaction 
into the witness data if you want to use it for covenants.
And the script itself is in the witness data, and AFAIK `OP_CSFS` needs large 
scripts if used for covenants.

Arguably though `OP_CSFS` is not designed for covenants, it just *happens to 
enable* covenants when you throw enough data at it.

If we are going to tolerate recursive covenants, we might want an opcode that 
explicitly supports recursion, instead of one that happens to enable recursive 
covenants, because the latter is likely to require more data to be pushed on 
the witness stack.
E.g. instead of the user having to quine the script (i.e. the script is really 
written twice, so it ends up doubling the witness size of the SCRIPT part), 
make an explicit quining opcode.

Basically, Do not Repeat Yourself.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV BIP Meeting #9 Notes

2022-05-19 Thread ZmnSCPxj via bitcoin-dev
Good morning fd0,


> MEV could be one the issues associated with general covenants. There are some 
> resources on https://mev.day if anyone interested to read more about it.
> 13:06 <@jeremyrubin> the covenants are "self executing" and can be e.g. 
> sandwiched13:07 <@jeremyrubin> so given that bitmatrix is sandwich 
> attackable, you'd see similar types of MEV as Eth sees13:07 <@jeremyrubin> 
> v.s. the MEV of e.g. lightning channels
> 13:14 < _aj_> i guess i'd rather not have that sort of MEV available, because 
> then it makes complicated MEV extraction profitable, which then makes "smart" 
> miners more profitable than "Dumb" ones, which is maybe centralising

Well that was interesting

TLDR: MEV = Miner-extractable value, basically if your contracts are complex 
enough, miners can analyze which of the possible contract executions are most 
profitable for them, and order transactions on the block they are building in 
such a way that it is the most profitable path that gets executed.
(do correct me if that summary is inaccurate or incomplete)

As a concrete example: in a LN channel breach condition, the revocation 
transaction must be confirmed within the CSV timeout, or else the theft will be 
accepted and confirmed.
Now, some software will be aware of this timeout and will continually raise the 
fee of the revocation transaction per block.
A rational miner which sees a channel breach condition might prefer to not mine 
such a transaction, since if it is not confirmed, the software will bump up the 
fees and the miner could try again on the next block with the higher feerates.
Depending on the channel size and how the software behaves exactly, the miner 
may be able to make a decision on whether it should or should not work on the 
revocation transaction and instead hold out for a later higher fee.

Now, having thought of this problem for no more than 5 minutes, it seems to me, 
naively, that a mechanism with privacy would be helpful, i.e. the contract 
details should be as little-revealed as possible, to reduce the scope of 
miner-extractable value.
For instance, Taproot is good since only one branch at a time can be revealed, 
however, in case of a dispute, multiple competing branches of the Taproot may 
be revealed by the disputants, and the miners may now be able to make a choice.

Probably, it is best if our covenant systems take full advantage of the 
linearity of Schnorr signing whenever some kind of branch is involved; for 
example, a previous transaction may reveal, if you have the proper adaptor 
signature, some scalar, and that scalar is actually the `s` component of a 
signature for a different transaction.
Without knowledge of the adaptor signature, and without knowledge of the link 
between this previous transaction and some other one, a miner cannot extract 
additional value by messing with the ordering the transactions get confirmed on 
the blockchain, or whatever.

This may mean that mechanisms that inspect the block outside of the transaction 
being validated (e.g. `OP_BRIBE` for drivechains, or similar mechanisms that 
might be capable of looking beyond the transaction) should be verboten; such 
cross-transaction introspection should require an adaptor signature that is 
kept secret by the participants from the miner that might want to manipulate 
the transactions to make other alternate branches more favorable to the miner.

In addition, covenant mechanisms that require large witness data are probably 
more vulnerable to MEV.
For instance, if in a dispute case, one of the disputants needs to use a large 
witness data while the other requires a smaller one, then the disputant with 
the smaller witness data would have an advantage, and can match the fee offered 
by the disputant with the larger witness.
Then a fee-maximizing miner would prefer the smaller-witness branch of the 
contract, as they get more fees for less blockspace.
Of course, this mechanism itself can be used if we can arrange that the 
disputant that is inherently "wrong" (i.e. went against the expected behavior 
of the protocol) is the one that is burdened with the larger witness.

Or I could be entirely wrong and MEV is something even worse than that.

Hmm

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-18 Thread ZmnSCPxj via bitcoin-dev


Good morning e,

> Good evening ZmnSCPxj,
>
> Sorry for the long delay...

Thank you very much for responding.

>
> > Good morning e,
> >
> > > Good evening ZmnSCPxj,
> > >
> > > For the sake of simplicity, I'll use the terms lender (Landlord), borrower
> > > (Lessor), interest (X), principal (Y), period (N) and maturity (height 
> > > after N).
> > >
> > > The lender in your scenario "provides use" of the principal, and is paid
> > > interest in exchange. This is of course the nature of lending, as a period
> > > without one's capital incurs an opportunity cost that must be offset (by
> > > interest).
> > >
> > > The borrower's "use" of the principal is what is being overlooked. To
> > > generate income from capital one must produce something and sell it.
> > > Production requires both capital and time. Borrowing the principal for the
> > > period allows the borrower to produce goods, sell them, and return the
> > > "profit" as interest to the lender. Use implies that the borrower is spending
> > > the principal - trading it with others. Eventually any number of others end up
> > > holding the principal. At maturity, the coin is returned to the lender (by
> > > covenant). At that point, all people the borrower traded with are bag 
> > > holders.
> > > Knowledge of this scam results in an imputed net present zero value for 
> > > the
> > > borrowed principal.
> >
> > But in this scheme, the principal is not being used as money, but as a 
> > billboard
> > for an advertisement.
> >
> > Thus, the bitcoins are not being used as money due to the use of the 
> > fidelity
> > bond to back a "you can totally trust me I am not a bot!!" assertion.
> > This is not the same as your scenario --- the funds are never transferred,
> > instead, a different use of the locked funds is invented.
> >
> > As a better analogy: I am borrowing a piece of gold, smelting it down to 
> > make
> > a nice shiny advertisement "I am totally not a bot!!", then at the end of 
> > the
> > lease period, re-smelting it back and returning to you the same gold piece
> > (with the exact same atoms constituting it), plus an interest from my 
> > business,
> > which gained customers because of the shiny gold advertisement claiming "I
> > am totally not a bot!!".
> >
> > That you use the same piece of gold for money does not preclude me using
> > the gold for something else of economic value, like making a nice shiny
> > advertisement, so I think your analysis fails there.
> > Otherwise, your analysis is on point, but analyses something else entirely.
>
>
> Ok, so you are suggesting the renting of someone else's proof of "burn" 
> (opportunity cost) to prove your necessary expense - the financial equivalent 
> of your own burn. Reading through the thread, it looks like you are 
> suggesting this as a way the cost of the burn might be diluted across 
> multiple uses, based on the obscuration of the identity. And therefore 
> identity (or at least global uniqueness) enters the equation. Sounds like a 
> reasonable concern to me.
>
> It appears that the term "fidelity bond" is generally accepted, though I find 
> this an unnecessarily misleading analogy. A bond is a loan (capital at risk), 
> and a fidelity bond is also capital at risk (to provide assurance of some 
> behavior). Proof of burn/work, such as Hash Cash (and Bitcoin), is merely 
> demonstration of a prior expense. But in those cases, the expense is provably 
> associated. As you have pointed out, if the burn is not associated with the 
> specific use, it can be reused, diluting the demonstrated expense to an 
> unprovable degree.

Indeed, that is why defiads used the term "advertisement" and not "fidelity 
bond".
One could say that defiads was a much-too-ambitious precursor of this proposed 
scheme.

> I can see how you come to refer to selling the PoB as "lending" it, because 
> the covenant on the underlying coin is time constrained. But nothing is 
> actually lent here. The "advertisement" created by the covenant (and its 
> presumed exclusivity) is sold. This is also entirely consistent with the idea 
> that a loan implies capital at risk. While this is nothing more than a 
> terminology nit, the use of "fidelity bond" and the subsequent description of 
> "renting" (the fidelity bond) both led me down another path (Tamas' proposal 
> for risk free lending under covenant, which we discussed here years ago).

Yes, that is why Tamas switched to defiads, as I had convinced him that it 
would be similar enough without actually being a covenant scam like you 
described.

> In any case, I tend to agree with your other posts on the subject. For the 
> burn to be provably non-dilutable it must be a cost provably associated to 
> the scenario which relies upon the cost. This provides the global uniqueness 
> constraint (under cryptographic assumptions of difficulty).

Indeed.
I suspect the only reason it is not *yet* a problem with existing JoinMarket 
and Teleport is simply 

Re: [bitcoin-dev] Improving chaumian ecash and sidechains with fidelity bond federations

2022-05-16 Thread ZmnSCPxj via bitcoin-dev


Good morning Chris,

> I don't know yet exactly the details of how such a scheme would work,
> maybe something like each fidelity bond owner creates a key in the
> multisig scheme, and transaction fees from the sidechain or ecash server
> are divided amongst the fidelity bonds in proportion to their fidelity
> bond value.

Such a scheme would probably look a little like my old ideas about "mainstake", 
where you lock up funds on the mainchain and use that as your right to 
construct new sidechain blocks, with your share of the sideblocks proportional 
to the value of the mainstake you locked up.
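
As a toy illustration (Python, made-up numbers) of the proportionality:

    locked = {"alice": 5.0, "bob": 3.0, "carol": 2.0}   # BTC locked as mainstake

    total = sum(locked.values())
    # expected share of sideblocks (or of fee income, in the fidelity bond variant)
    shares = {who: amount / total for who, amount in locked.items()}
    print(shares)   # {'alice': 0.5, 'bob': 0.3, 'carol': 0.2}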

Of note is that it need not operate as a sidechain or chaumian bank, anything 
that requires a federation can use this scheme as well.
For instance, statechains are effectively federation-guarded CoinPools, and 
could use a similar scheme for selecting federation members.
Smart contracts unchained can also have users be guided by fidelity bonds in 
order to select federation members.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-15 Thread ZmnSCPxj via bitcoin-dev


Good morning Chris,


> Yes linking the two identities (joinmarket maker and teleport maker)
> together slightly degrades privacy, but that has to be balanced against
> the privacy loss of leaving both systems open to sybil attacks. Without
> fidelity bonds the two systems can be sybil attacked just by using about
> five-figures USD, and the attack can get these coins back at any time
> when they're finished.

I am not saying "do not use fidelity bonds at all", I am saying "maybe we 
should disallow a fidelity bond used in JoinMarket from being used in Teleport 
and vice versa".



Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-13 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

> Hello waxwing,
>
> > A user sacrifices X amount of time-value-of-money (henceforth TVOM)
>
> by committing in Joinmarket with FB1. He then uses the same FB1 in
> Teleport, let's say. If he gets benefit Y from using FB1 in Joinmarket,
> and benefit Z in Teleport, then presumably he'll only do it if
> (probabilistically) he thinks Y+Z > X.
>
> > But as an assessor of FB1 in Joinmarket, I don't know if it's also
>
> being used for Teleport, and more importantly, if it's being used
> somewhere else I'm not even aware of. Now I'm not an economist I admit,
> so I might not be intuit-ing this situation right, but it feels to me 
> like the right answer is "It's fine for a closed system, but not an open
> one." (i.e. if the set of possible usages is not something that all
> participants have fixed in advance, then there is an effective Sybilling
> problem, like I'm, as an assessor, thinking that sacrificed value 100 is
> there, whereas actually it's only 15, or whatever.)
>
>
> I don't entirely agree with this. The value of the sacrifice doesn't
> change if the fidelity bond owner starts using it for Teleport as well
> as Joinmarket. The sacrifice is still 100. Even if the owner doesn't run
> any maker at all the sacrifice would still be 100, because it only
> depends on the bitcoin value and locktime. In your equation Y+Z > X,
>
> using a fidelity bond for more applications increases the
> left-hand-side, while the right-hand-side X remains the same. As
> protection from a sybil attack is calculated using only X, it makes no
> difference what Y and Z are, the takers can still always calculate that
> "to sybil attack the coinjoin I'm about to make, it costs A btc locked
> up for B time".

I think another perspective here is that a maker with a single fidelity bond 
between both Teleport and Joinmarket has a single identity in both systems.

Recall that not only makers can be secretly surveillors, but takers can also be 
secretly surveillors.

Ideally, the maker should not tie its identity in one system to its identity in 
another system, as that degrades the privacy of the maker as well.

And the privacy of the maker is the basis of the privacy of its takers.
It is the privacy of the coins the maker offers, that is being purchased by the 
takers.


A taker can be a surveillor as well, and because the identity between 
JoinMarket and Teleport is tied via the single shared fidelity bond, a taker 
can perform partial-protocol attacks (i.e. aborting at the last step) to 
identify UTXOs of particular makers.
And it can perform attacks on both systems to identify the ownership of maker 
coins in both systems.

Since the coins in one system are tied to that system, this increases the 
information available to the surveillor: it is now able to associate coins in 
JoinMarket with coins in Teleport, via the shared fidelity bond identity.
It would be acceptable for both systems to share an identity if coins were 
shared between the JoinMarket and Teleport maker clients, but at that point 
they would arguably be a single system, not two separate systems, and that is 
what you should work towards.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV BIP Meeting #8 Notes

2022-05-12 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> I fail to understand why non recursive covenants are called covenants at all. 
> Probably I'm missing something, but I guess that's another topic.

A covenant simply promises that something will happen in the future.

A recursive covenant guarantees that the same thing will happen in the future.

Thus, non-recursive covenants can be useful.

Consider `OP_EVICT`, for example, which is designed for a very specific 
use-case, and avoids recursion.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-11 Thread ZmnSCPxj via bitcoin-dev
Good morning Russell,

> On Wed, May 11, 2022 at 7:42 AM ZmnSCPxj via bitcoin-dev 
>  wrote:
>
> > REMEMBER: `OP_CAT` BY ITSELF DOES NOT ENABLE COVENANTS, WHETHER RECURSIVE 
> > OR NOT.
>
>
> I think the state of the art has advanced to the point where we can say 
> "OP_CAT in tapscript enables non recursive covenants and it is unknown 
> whether OP_CAT can enable recursive covenants or not".
>
> A. Poelstra in 
> https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-i.html show how 
> to use CAT to use the schnorr verification opcode to get the sighash value + 
> 1 onto the stack, and then through some grinding and some more CAT, get the 
> actual sighash value on the stack. From there we can use SHA256 to get the 
> signed transaction data onto the stack and apply introspect (using CAT) to 
> build functionality similar to OP_CTV.
>
> The missing bits for enabling recursive covenants comes down to needing to 
> transform a scriptpubkey into an taproot address, which involves some 
> tweaking. Poelstra has suggested that it might be possible to hijack the 
> ECDSA checksig operation from a parallel, legacy input, in order to perform 
> the calculations for this tweaking. But as far as I know no one has yet been 
> able to achieve this feat.

Hmm, I do not suppose it would have worked in ECDSA?
Seems like this exploits linearity in the Schnorr.
For the ECDSA case it seems that the trick in that link leads to `s = e + G[x]` 
where `G[x]` is the x-coordinate of `G`.
(I am not a mathist, so I probably am not making sense; in particular, there 
may be an operation to add two SECP256K1 scalars that I am not aware of)

In that case, since Schnorr was added later, I get away by a technicality, 
since it is not *just* `OP_CAT` which enabled this style of covenant, it was 
`OP_CAT` + BIP340 v(^^);

Also holy shit math is scary.

Seems this also works with `OP_SUBSTR`, simply by inverting it into "validate 
that the concatenation is correct" rather than "concatenate it ourselves".




So really: are recursive covenants good or...?
Because if recursive covenants are good, what we should really work on is 
making them cheap (in CPU load/bandwidth load terms) and private, to avoid 
centralization and censoring.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-11 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,


> > Looks like `OP_CAT` is not getting enabled until after we are reasonably 
> > sure that recursive covenants are not really unsafe.
>
> Maybe we should use OP_SUBSTR instead of OP_CAT. Or even better: OP_SPLIT. 
> Then, we could have OP_SPLIT <position1> ... <positionN> that would split a 
> string N times (so there will be N+1 pieces). Or we could have just OP_SPLIT 
> <position> to split one string into two. Or maybe OP_2SPLIT and OP_3SPLIT, just to 
> split into two or three pieces (as we have OP_2DUP and OP_3DUP). I think 
> OP_SUBSTR or OP_SPLIT is better than OP_CAT, because then things always get 
> smaller and we can be always sure that we will have one byte as the smallest 
> unit in our Script.

Unfortunately `OP_SUBSTR` can be used to synthesize an effective `OP_CAT`.

Instead of passing in two items on the witness stack to be `OP_CAT`ted 
together, you instead pass in the two items to concatenate, and *then* the 
concatenation.
Then you can synthesize a SCRIPT which checks that the supposed concatenation 
is indeed the two items to be concatenated.
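
As a rough Python model of that check (purely illustrative, not any particular 
SCRIPT encoding): the script never concatenates anything itself, it only slices 
the claimed result and compares.

    def validates_as_cat(a: bytes, b: bytes, claimed: bytes) -> bool:
        # Validate a witness-supplied "claimed" concatenation using only
        # substring extraction, the way an OP_SUBSTR-based SCRIPT would.
        return (len(claimed) == len(a) + len(b)
                and claimed[:len(a)] == a
                and claimed[len(a):] == b)

    assert validates_as_cat(b"foo", b"bar", b"foobar")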

Recursive covenants DO NOT arise from the increasing amounts of memory the 
trivial `OP_DUP OP_CAT OP_DUP OP_CAT` repetition allocates.

REMEMBER: `OP_CAT` BY ITSELF DOES NOT ENABLE COVENANTS, WHETHER RECURSIVE OR 
NOT.

Instead, `OP_CAT` enables recursive covenants (which we are not certain are 
safe) because `OP_CAT` allows quining to be done.
Quining is a technique to pass a SCRIPT with a copy of its code, so that it can 
then enforce that the output is passed to the exact same input SCRIPT.

`OP_SUBSTR` allows a SCRIPT to validate that it is being passed a copy of 
itself and that the complete SCRIPT contains its copy as an `OP_PUSH` and the 
rest of the SCRIPT as actual code.
This is done by `OP_SUBSTR` the appropriate parts of the supposed complete 
SCRIPT and comparing them to a reference value we have access to (because our 
own SCRIPT was passed to us inside an `OP_PUSH`).

    # Assume that the witness stack top is the concatenation of
    #   `OP_PUSH`, the SCRIPT below, then the SCRIPT below once more.
    # Assume this SCRIPT is prepended with an OP_PUSH of our own code.
    OP_TOALTSTACK # save our reference
    OP_DUP 1 <scriptlength> OP_SUBSTR # Get the OP_PUSH argument
    OP_FROMALTSTACK OP_DUP OP_TOALTSTACK # Get our reference
    OP_EQUALVERIFY # check they are the same
    OP_DUP <1 + scriptlength> <scriptlength> OP_SUBSTR # Get the SCRIPT body
    OP_FROMALTSTACK # Get our reference
    OP_EQUALVERIFY # check they are the same
    # At this point, we have validated that the top of the witness stack
    # is the quine of this SCRIPT.
    # TODO: validate the `OP_PUSH` instruction, left as an exercise for the
    # reader.

Thus, `OP_SUBSTR` is enough to enable quining and is enough to implement 
recursive covenants.
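
A rough off-chain model of the same quine check in Python (assuming, for 
simplicity, a one-byte direct push opcode; the names are purely illustrative):

    def is_quine_witness(witness_item: bytes, reference: bytes) -> bool:
        # Expected layout: <1-byte push opcode> || <script body> || <script body>,
        # where <script body> equals the reference our own initial OP_PUSH gave us.
        n = len(reference)
        pushed_copy = witness_item[1:1 + n]        # the OP_PUSH argument
        code_copy = witness_item[1 + n:1 + 2 * n]  # the SCRIPT body as actual code
        return pushed_copy == reference and code_copy == reference

    body = b"\x51"  # stand-in script body
    assert is_quine_witness(bytes([len(body)]) + body + body, body)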

We cannot enable `OP_SUBSTR` either, unless we are reasonably sure that 
recursive covenants are safe.

(FWIW recursive covenants are probably safe, as they are not in fact 
Turing-complete; they are a hair less powerful, equivalent to total 
functional programming with codata.)

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-10 Thread ZmnSCPxj via bitcoin-dev
Good morning waxwing,

> --- Original Message ---
> On Sunday, May 1st, 2022 at 11:01, Chris Belcher via bitcoin-dev 
> bitcoin-dev@lists.linuxfoundation.org wrote:
>
> > Hello ZmnSCPxj,
> > This is an intended feature. I'm thinking that the same fidelity bond
> > can be used to running a JoinMarket maker as well as a Teleport
> > (Coinswap) maker.
> > I don't believe it's abusable. It would be a problem if the same
> > fidelity bond is used by two makers in the same application, but
> > JoinMarket takers are already coded to check for this, and Teleport
> > takers will soon as well. Using the same bond across different
> > applications is fine.
> > Best,
> > CB
>
> Hi Chris, Zmn, list,
> I've noodled about this a few times in the past (especially when trying to 
> figure out an LSAG style ring sig based FB for privacy, but that does not 
> seem workable), and I can't decide the right perspective on it.
>
> A user sacrifices X amount of time-value-of-money (henceforth TVOM) by 
> committing in Joinmarket with FB1. He then uses the same FB1 in Teleport, 
> let's say. If he gets benefit Y from using FB1 in Joinmarket, and benefit Z 
> in Teleport, then presumably he'll only do it if (probabilistically) he 
> thinks Y+Z > X.
>
> But as an assessor of FB1 in Joinmarket, I don't know if it's also being used 
> for Teleport, and more importantly, if it's being used somewhere else I'm not 
> even aware of. Now I'm not an economist I admit, so I might not be intuit-ing 
> this situation right, but it fees to me like the right answer is "It's fine 
> for a closed system, but not an open one." (i.e. if the set of possible 
> usages is not something that all participants have fixed in advance, then 
> there is an effective Sybilling problem, like I'm, as an assessor, thinking 
> that sacrificed value 100 is there, whereas actually it's only 15, or 
> whatever.)
>
> As I mentioned in 
> https://github.com/JoinMarket-Org/joinmarket-clientserver/issues/993#issuecomment-1110784059
>  , I did wonder about domain separation tags because of this, and as I 
> vaguely alluded to there, I'm really not sure about it.
>
> If it was me I'd want to include domain separation via part of the signed 
> message, since I don't see how it hurts? For scenarios where reuse is fine, 
> reuse can still happen.

Ah, yes, now I remember.
I discussed this with Tamas as well in the past and that is why we concluded 
that in defiads, each UTXO can host at most one advertisement at any one time.
In the case of defiads there would be a sequence counter where a 
higher-sequenced advertisement would replace lower-sequenced advertisement, so 
you could update, but at any one time, for a defiads node, only one 
advertisement per UTXO could be used.
This assumed that there would be a defiads network with good gossip propagation 
so our thinking at the time was that a higher-sequenced advertisement would 
quickly replace lower-sequenced ones on the network.
But it is simpler if such replacement would not be needed, and you could then 
commit to the advertisement directly on the UTXO via a tweak.

Each advertisement would also have a specific application ID that it applied 
to, and applications on top of defiads would ask the local defiads node to give 
it the ads that match a specific application ID, so a UTXO could only be used 
for one application at a time.
This would be equivalent to domain separation tags that waxwing mentions.
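
As a sketch of what such a domain separation tag could look like (only an 
illustration using a BIP340-style tagged hash; the tag string and certificate 
format here are assumptions, not part of the fidelity bond BIP):

    import hashlib

    def bond_cert_digest(application_id: bytes, cert_payload: bytes) -> bytes:
        # Tagged hash: a certificate signed for one application cannot be
        # replayed as a certificate for another application.
        tag = hashlib.sha256(b"fidelity-bond-cert/" + application_id).digest()
        return hashlib.sha256(tag + tag + cert_payload).digest()

    assert (bond_cert_digest(b"joinmarket", b"<certificate data>")
            != bond_cert_digest(b"teleport", b"<certificate data>"))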

Regards,
ZmnSCPxj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Conjectures on solving the high interactivity issue in payment pools and channel factories

2022-05-10 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,


> Very interesting exploration. I think you're right that there are issues with 
> the kind of partitioning you're talking about. Lightning works because all 
> participants sign all offchain states (barring data loss). If a participant 
> can be excluded from needing to agree to a new state, there must be an 
> additional mechanism to ensure the relevant state for that participant isn't 
> changed to their detriment. 
>
> To summarize my below email, the two techniques I can think for solving this 
> problem are:
>
> A. Create sub-pools when the whole group is live that can be used by the sub- 
> pool participants later without the whole group's involvement. The whole 
> group is needed to change the whole group's state (eg close or open 
> sub-pools), but sub-pool states don't need to involve the whole group.

Is this not just basically channel factories?

To reduce the disruption if any one pool participant is down, have each 
sub-pool have only 2 participants each.
More participants means that the probability that one of them is offline is 
higher, so you use the minimum number of participants in the sub-pool: 2.
This makes any arbitrary sub-pool more likely to be usable.
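
As a quick illustration, assuming (hypothetically) that each participant is 
independently online with probability p:

    def all_online(p: float, participants: int) -> float:
        # Probability that every participant of a sub-pool is reachable at once.
        return p ** participants

    print(all_online(0.95, 2))   # ~0.90
    print(all_online(0.95, 10))  # ~0.60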

But a 2-participant pool is a channel.
So a large multiparticipant pool with sub-pools is just a channel factory for a 
bunch of channels.

I like this idea because it has good tradeoffs, so channel factories ho.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-07 Thread ZmnSCPxj via bitcoin-dev
Good morning shesek,

> On Sat, May 7, 2022 at 5:08 PM ZmnSCPxj via bitcoin-dev 
>  wrote:
> > * Even ***with*** `OP_CAT`, the following will enable non-recursive 
> > covenants without enabling recursive covenants:
> >  * `OP_CTV`, ...
> > * With `OP_CAT`, the following would enable recursive covenants:
> >  * `OP_CHECKSIGFROMSTACK`, ...
>
> Why does CTV+CAT not enable recursive covenants while CSFS+CAT does?
>
> CTV+CAT lets you similarly assert against the outputs and verify that they 
> match some dynamically constructed script.
>
> Is it because CTV does not let you have a verified copy of the input's 
> prevout scriptPubKey on the stack [0], while with OP_CSFS you can because the 
> signature hash covers it?
>
> But you don't actually need this for recursion. Instead of having the user 
> supply the script in the witness stack and verifying it against the input to 
> obtain the quine, the script can simply contain a copy of itself as an 
> initial push (minus this push). You can then reconstruct the full script 
> quine using OP_CAT, as a PUSH(

Re: [bitcoin-dev] CTV BIP Meeting #8 Notes

2022-05-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> I think people may be scared of potential attacks based on covenants. For 
> example, visacoin.
> But there was a thread with ideas of possible attacks based on covenants.
> To me the most scary one is visacoin, specially seeing what happened in 
> canada and other places lately and the general censorship in the west, the 
> supposed war on "misinformation" going on (really a war against truth imo, 
> but whatever) it's getting really scary. But perhaps someone else can be more 
> scared about a covenant to add demurrage fees to coins or something, I don't 
> know.
> https://bitcointalk.org/index.php?topic=278122

This requires *recursive* covenants.

At the time the post was made, no distinction was drawn between recursive and 
non-recursive covenants, which is why the post points out that covenants suck.
The thinking then was that anything powerful enough to provide covenants would 
also be powerful enough to provide *recursive* covenants, so a covenant 
mechanism that was merely non-recursive was thought to be impossible.

However, `OP_CTV` turns out to enable sort-of covenants, but by construction 
*cannot* provide recursion.
It is just barely powerful enough to make a covenant, but not powerful enough 
to make *recursive* covenants.

That is why today we distinguish between recursive and non-recursive covenant 
opcodes, because we now have opcode designs that provides non-recursive 
covenants (when previously it was thought all covenant opcodes would provide 
recursion).

`visacoin` can only work as a recursive covenant, thus it is not possible to 
use `OP_CTV` to implement `visacoin`, regardless of your political views.

(I was also misinformed in the past and ignored `OP_CTV` since I thought that, 
like all the other covenant opcodes, it would enable recursive covenants.)


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> Thanks again.
> I won't ask anything else about bitcoin, I guess, since it seems my questions 
> are too "misinforming" for the list.
> I also agreed with vjudeu, also too much misinformation on my part to agree 
> with him, it seems.
> I mean, I say that because it doesn't look like my emails are appearing on 
> the mailing list:
>
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/thread.html#start
>
> Do any of you now who moderates the mailing list? I would like to ask him 
> what was wrong with my latest messages.

Cannot remember.

> Can the censored messages me seen somewhere perhaps?

https://lists.ozlabs.org/pipermail/bitcoin-dev-moderation/

E.g.: 
https://lists.ozlabs.org/pipermail/bitcoin-dev-moderation/2022-May/000325.html

> That way the moderation could be audited.
>
> This is quite worrying in my opinion.
> But I'm biased, perhaps I deserve to be censored. It would still be nice to 
> understand why, if you can help me.
> Now I wonder if this is the first time I was censored or I was censored in 
> bip8 discussions too, and who else was censored, when, why and by whom.
> Perhaps I'm missing something about how the mailing list works and/or are 
> giving this more importance than it has.

Sometimes the moderator is just busy living his or her life to moderate 
messages within 24 hours.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> Thanks a lot for the many clarifications.
> Yeah, I forgot it wasn't OP_CAT alone, but in combination with other things.
> I guess this wouldn't be a covenants proposal then.
> But simplicity would enable covenants too indeed, no?
> Or did I get that wrong too?

Yes, it would enable covenants.

However, it could also enable *recursive* covenants, depending on what 
introspection operations are actually implemented (though maybe not? Russell 
O'Connor should be the one that answers this).

It is helpful to delineate between non-recursive covenants from recursive 
covenants.

* Even ***with*** `OP_CAT`, the following will enable non-recursive covenants 
without enabling recursive covenants:
  * `OP_CTV`
  * `SIGHASH_ANYPREVOUT`
* With `OP_CAT`, the following would enable recursive covenants:
  * `OP_EVAL`
  * `OP_CHECKSIGFROMSTACK`
  * `OP_TX`/`OP_TXHASH`
  * ...possibly more.
* It is actually *easier* to *design* an opcode which inadvertently 
supports recursive covenants than to design one which avoids recursive 
covenants.

Recursive covenants are very near to true Turing-completeness.
We want to avoid Turing-completeness due to the halting problem being 
unsolvable for Turing-complete languages.
That is, given just a program, we cannot determine for sure if for all possible 
inputs, it will terminate.
It is important in our context (Bitcoin) that any SCRIPT programs we write 
*must* terminate, or else we run the risk of a DoS on the network.

A fair amount of this is theoretical crap, but if you want to split hairs, 
recursive covenants are *not* Turing-complete, but are instead total functional 
programming with codata.

As a very rough bastardization, a program written in a total functional 
programming language with codata will always assuredly terminate.
However, the return value of a total functional programming language with 
codata can be another program.
An external program (written in a Turing-complete language) could then just 
keep invoking the interpreter of the total functional programming language with 
codata (taking the output program and running it, taking *its* output program 
and running it, ad infinitum), thus effectively being able to loop indefinitely.

Translated to Bitcoin transactions, a recursive covenant system can force an 
output to be spent only if the output is spent on a transaction where one of 
the outputs is the same covenant (possibly with tweaks).
Then an external program can keep passing the output program to the Bitcoin 
SCRIPT interpreter --- by building transactions that spend the previous output.
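
A toy Python model of this "total with codata" behavior (everything below is 
illustrative; the "script" is just a placeholder string): each step terminates, 
but an external driver can keep feeding the output back in.

    def covenant_step(state: int) -> tuple:
        # A single evaluation always terminates; its output is the next state
        # plus the (hypothetical) covenant script the next output must carry.
        return state + 1, "<same covenant script, tweak=%d>" % (state + 1)

    state, script = 0, "<same covenant script, tweak=0>"
    for _ in range(5):  # the external driver loops, not the SCRIPT itself
        state, script = covenant_step(state)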

This behavior is still of concern.
It may be possible to attack the network by eroding its supply, by such a 
recursive covenant.

--

Common reactions:

* We can just limit the number of opcodes we can process and then fail it if it 
takes too many operations!
  That way we can avoid DoS!
  * Yes, this indeed drops it from Turing-complete to total, possibly total 
functional programming **without** codata.
But if it is possible to treat data as code, it may only drop it to "total but 
with codata" instead (i.e. recursive covenants).
But if you want to avoid recursive covenants while allowing non-recursive ones 
(i.e. equivalent to total without codata), may I suggest you instead look at 
`OP_CTV` and `SIGHASH_ANYPREVOUT`?

* What is so wrong with total-with-codata anyway??
  So what if the recursive covenant could potentially consume all Bitcoins, 
nobody will pay to it except as a novelty!!
  If you want to burn your funds, 1BitcoinEater willingly accepts it!
  * The burden of proof-of-safety is on the proposer, so if you have some proof 
that total-with-codata is safe, by construction, then sure, we can add opcodes 
that may enable recursive covenants, and add `OP_CAT` back in too.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-06 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> OP_CAT was removed. If I remember correctly, some speculated that perhaps it 
> was removed because it could allow covenants. I don't remember any technical 
> concern about the OP besides enabling covenants. Before it was a common 
> opinion that covenants shouldn't be enabled in bitcoin because, despite 
> having good use case, there are some nasty attacks that are enabled with them 
> too. These days it seems the opinion of the benefits being worth the dangers 
> is quite generalized. Which is quite understandable given that more use cases 
> have been thought since then.

I think the more accurate reason for why it was removed is because the 
following SCRIPT of N size would lead to 2^N memory usage:

OP_1 OP_DUP OP_CAT OP_DUP OP_CAT OP_DUP OP_CAT OP_DUP OP_CAT OP_DUP OP_CAT 
OP_DUP OP_CAT ...
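
A quick Python illustration of that growth (simulated off-chain, with a smaller 
exponent so it actually runs comfortably):

    element = b"\x01"
    for _ in range(20):      # each OP_DUP OP_CAT pair doubles the element
        element = element + element
    print(len(element))      # 2**20 bytes, from only ~40 opcodes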

In particular it was removed at about the same time as `OP_MUL`, which has 
similar behavior (consider that multiplying two 32-bit numbers results in a 
64-bit number, similar to `OP_CAT`ting a vector to itself).

`OP_CAT` was removed long before covenants were even expressed as a possibility.

Covenants were first expressed as a possibility, I believe, during discussions 
around P2SH.
Basically, at the time, the problem was this:

* Some receivers wanted to use k-of-n multisignature for improved security.
* The only way to implement this, pre-P2SH, was by putting in the 
`scriptPubKey` all the public keys.
* The sender is the one paying for the size of the `scriptPubKey`.
* It was considered unfair that the sender is paying for the security of the 
receiver.

Thus, `OP_EVAL` and the P2SH concept was conceived.
Instead of the `scriptPubKey` containing the k-of-n multisignature, you create 
a separate script containing the public keys, then hash it, and the 
`scriptPubKey` would contain the hash of the script.
By symmetry with the P2PKH template:

OP_DUP OP_HASH160 <pubkeyhash> OP_EQUALVERIFY OP_CHECKSIG

The P2SH template would be:

OP_DUP OP_HASH160 <scripthash> OP_EQUALVERIFY OP_EVAL

`OP_EVAL` would take the stack top vector and treat it as a Bitcoin SCRIPT.

It was then pointed out that `OP_EVAL` could be used to create recursive 
SCRIPTs by quining using `OP_CAT`.
`OP_CAT` was already disabled by then, but people were talking about 
re-enabling it somehow by restricting the output size of `OP_CAT` to limit the 
O(2^N) behavior.

Thus, since then, `OP_CAT` has been associated with ***recursive*** covenants 
(and people are now reluctant to re-enable it even with a limit on its output 
size, because recursive covenants).
In particular, `OP_CAT` in combination with `OP_CHECKSIGFROMSTACK` and 
`OP_CHECKSIG`, you could get a deferred `OP_EVAL` and then use `OP_CAT` too to 
quine.

Because of those concerns, the modern P2SH is now "just a template" with an 
implicit `OP_EVAL` of the `redeemScript`, but without any `OP_EVAL` being 
actually enabled.

(`OP_EVAL` cannot replace an `OP_NOP` in a softfork, but it is helpful to 
remember that P2SH was pretty much what codified the difference between 
softfork and hardfork, and the community at the time was small enough (or so it 
seemed) that a hardfork might not have been disruptive.)

> Re-enabling OP_CAT with the exact same OP would be a hardfork, but creating a 
> new OP_CAT2 that does the same would be a softfork.

If you are willing to work in Taproot the same OP-code can be enabled in a 
softfork by using a new Tapscript version.

If you worry about quantum-computing-break, a new SegWit version (which is more 
limited than Tapscript versions, unfortunately) can also be used, creating a 
new P2WSHv2 (or whatever version) that enables these opcodes.

> As far as I know, this is the covenants proposal that has been implemented for 
> the longest time, if that's to be used as a selection criterion. And as always, 
> this is not incompatible with deploying other covenant proposals later.

No, it was `OP_EVAL`, not `OP_CAT`.
In particular if `OP_EVAL` was allowed in the `redeemScript` then it would 
enable covenants as well.
It was just pointed out that `OP_CAT` enables recursive covenants in 
combination with `OP_EVAL`-in-`redeemScript`.

In particular, in combination with `OP_CAT`, `OP_EVAL` not only allows 
recursive covenants, but also recursion *within* a SCRIPT i.e. unbounded SCRIPT 
execution.
Thus, `OP_EVAL` is simply not going to fly, at all.

> Personally I find the simplicity proposal the best one among all the covenant 
> proposals by far, including this one. But I understand that despite the name, 
> the proposal is harder to review and test than other proposals, for it 
> wouldn't simply add covenants, but a complete new scripting language that is 
> better in many senses. Speedy covenants, on the other hand, is much simpler 
> and has been implemented for longer, so in principle, it should be easier to 
> deploy in a speedy manner.
>
> What are the main arguments against speedy covenants (aka op_cat2) and 
> against deploying simplicity in 

Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-03 Thread ZmnSCPxj via bitcoin-dev
Good morning e,

> Good evening ZmnSCPxj,
>
> For the sake of simplicity, I'll use the terms lender (Landlord), borrower 
> (Lessor), interest (X), principal (Y), period (N) and maturity (height after 
> N).
>
> The lender in your scenario "provides use" of the principal, and is paid 
> interest in exchange. This is of course the nature of lending, as a period 
> without one's capital incurs an opportunity cost that must be offset (by 
> interest).
>
> The borrower's "use" of the principal is what is being overlooked. To 
> generate income from capital one must produce something and sell it. 
> Production requires both capital and time. Borrowing the principle for the 
> period allows the borrower to produce goods, sell them, and return the 
> "profit" as interest to the lender. Use implies that the borrower is spending 
> the principle - trading it with others. Eventually any number of others end 
> up holding the principle. At maturity, the coin is returned to the lender (by 
> covenant). At that point, all people the borrower traded with are bag 
> holders. Knowledge of this scam results in an imputed net present zero value 
> for the borrowed principal.

But in this scheme, the principal is not being used as money, but as a 
billboard for an advertisement.
Thus, the bitcoins are not being used as money due to the use of the fidelity 
bond to back a "you can totally trust me I am not a bot!!" assertion.
This is not the same as your scenario --- the funds are never transferred, 
instead, a different use of the locked funds is invented.

As a better analogy: I am borrowing a piece of gold, smelting it down to make a 
nice shiny advertisement "I am totally not a bot!!", then at the end of the 
lease period, re-smelting it back and returning to you the same gold piece 
(with the exact same atoms constituting it), plus an interest from my business, 
which gained customers because of the shiny gold advertisement claiming "I am 
totally not a bot!!".

That you use the same piece of gold for money does not preclude me using the 
gold for something else of economic value, like making a nice shiny 
advertisement, so I think your analysis fails there.
Otherwise, your analysis is on point, but analyses something else entirely.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-03 Thread ZmnSCPxj via bitcoin-dev
Good morning e,


> It looks like you are talking about lending where the principal return is 
> guaranteed by covenant at maturity. This make the net present value of the 
> loan zero.

I am talking about lending where:

* Lessor pays landlord X satoshis in rent.
* Landlord provides use of the fidelity bond coin (value Y) for N blocks.
* Landlord gets the entire fidelity bond amount (Y) back.

Thus, the landlord gets X + Y satoshis, earning X satoshis, at the cost of 
having Y satoshis locked for N blocks.

So I do not understand why the value of this, to the landlord, would be 0.
Compare to a simple HODL strategy, where I lock Y satoshis for N blocks and get 
Y satoshis back.
Or are you saying that a simple HODL strategy is of negative value and that 
"zero value" is the point where you actively invest all your savings?
Or are you saying that the HODL strategy is of some value since it still allows 
you to spend funds freely in the N blocks you are HODLing them, and the option 
to spend is of value, while definitely locking the value Y for N blocks is 
equal to the value X of the rent paid (and thus net zero value)?

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Pay to signature hash as a covenant

2022-05-03 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,

> Typical P2PK looks like that: "<pubkey> OP_CHECKSIG". In a 
> typical scenario, we have "<signature>" in our input and "<pubkey> 
> OP_CHECKSIG" in our output. I wonder if it is possible to use covenants right 
> here and right now, with no consensus changes, just by requiring a specific 
> signature. To start with, I am trying to play with P2PK and legacy 
> signatures, but it may turn out, that doing such things with Schnorr 
> signatures will be more flexible and will allow more use cases.
>
>
> The simplest "pay to signature" script I can think of is: "<signature> 
> OP_SWAP OP_CHECKSIG". Then, any user can provide just a "<pubkey>" in some 
> input, as a part of a public key recovery. The problem with such scheme is 
> that it is insecure. Another problem is that we should handle it carefully, 
> because signatures are removed from outputs. However, we could replace it 
> with some signature hash, then it will be untouched, for example: 
> "OP_TOALTSTACK OP_DUP OP_HASH160 <signatureHash> OP_EQUALVERIFY 
> OP_FROMALTSTACK OP_CHECKSIG".
>
> And then, signatures are more flexible than public keys, because we can use 
> many different sighashes to decide, what kind of transaction is allowed and 
> what should be rejected. Then, if we could use the right signature with 
> correct sighashes, it could be possible to disable key recovery and require 
> some specific public key, then that scheme could be safely used again. I 
> still have no idea, how to complete that puzzle, but it seems to be possible 
> to use that trick, to restrict destination address. Maybe I should wrap such 
> things in some kind of multisig or somehow combine it with OP_CHECKSIGADD, 
> any ideas?

You can do the same thing with P2SH, P2WSH, and P2TR (in a Tapscript) as well.

Note that it is generally known that you *can* use pre-signed transactions to 
implement vaults.
Usually what we refer to by "covenant" is something like "this output will 
definitely be constructed here" without necessarily requiring a signature.

HOWEVER, what you are proposing is not ***quite*** pre-signed transactions!
Instead, you are (ab)using signatures in order to commit to particular 
sighashes.

First, let me point out that you do not need to hash the signature and *then* 
use a raw `scriptPubKey`, which I should *also* point it is not going to pass 
`IsStandard` checks (and will not propagate on the mainnet network reliably, 
only on testnet).
Instead, you can use P2WSH and *include* the signature outright in the 
`redeemScript`.
Since the output `scriptPubKey` is really just the hash of the `redeemScript`, 
this is automatically a hash of a signature (plus a bunch of other bytes).

So your proposal boils down to using P2WSH and having a `redeemScript`:

redeemScript = <fixedSignature> <fixedPubKey> OP_CHECKSIG
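
A rough Python sketch of building that `redeemScript` and the corresponding 
P2WSH `scriptPubKey` (the signature and pubkey bytes below are placeholders, 
not real values):

    import hashlib

    def push(data: bytes) -> bytes:
        assert len(data) < 0x4c  # direct push only, for illustration
        return bytes([len(data)]) + data

    OP_CHECKSIG = bytes([0xac])

    fixed_signature = bytes(71)  # placeholder: DER signature + SIGHASH byte
    fixed_pubkey = bytes(33)     # placeholder: throwaway compressed pubkey

    redeem_script = push(fixed_signature) + push(fixed_pubkey) + OP_CHECKSIG
    # P2WSH scriptPubKey: OP_0 followed by the 32-byte SHA256 of the witness script.
    script_pubkey = bytes([0x00, 0x20]) + hashlib.sha256(redeem_script).digest()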

Why include the `fixPubKey` in the `redeemScript`?
In your scheme, you would provide the signature and pubkey in the `scriptSig` 
that spends the `scriptPubKey`.
But in a post-P2WSH world, `redeemScript` will also be provided in the 
`witness`, so you *also* provide both the signature and the pubkey, and both 
are hashed before appearing on the `scriptPubKey` --- which is exactly what you 
are proposing anyway.

The above pre-commits to a particular transaction, depending on the `SIGHASH` 
flags of the `fixedSignature`.
Of note is that the `fixPubKey` can have a throwaway privkey, or even a 
***publicly-shared*** privkey.
Even if an alternate signature is created from the well-known privkey, the 
`redeemScript` will not allow any other signature to be accepted; it will only 
use the one that is hardcoded into the script.
Using a publicly-shared privkey would allow us to compute just the expected 
`sighash`, then derive the `fixedSignature` that should be in the 
`redeemScript`.

In particular, this scheme would work just as well for the "congestion control" 
application proposed for `OP_CTV`.
`OP_CTV` still wins in raw WUs spent (just the 32-WU hash), but in the absence 
of `OP_CTV` because raisins, this would also work (but you reveal a 33-WU 
pubkey, and a 73-WU/64-WU signature, which is much larger).
Validation speed is also better for `OP_CTV`, as it is just a hash, while this 
scheme uses signature validation in order to commit to a specific hash anyway 
(a waste of CPU time, since you could just check the hash directly instead of 
going through the rigmarole of a signature, but one which allows us to make 
non-recursive covenants with some similarities to `OP_CTV`).

A purported `OP_CHECKSIGHASHVERIFY` which accepts a `SIGHASH` flag and a hash, 
and checks that the sighash of the transaction (as modified by the flags) is 
equal to the hash, would be more efficient, and would also not differ by much 
from `OP_CTV`.

This can be used in a specific branch of an `OP_IF` to allow, say, a cold 
privkey to override this branch, to start a vault construction.

The same technique should work with Tapscripts inside Taproot (but the 
`fixedPubKey` CANNOT be the same as the internal Taproot key!).

Regards,

Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-02 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

> Hello ZmnSCPxj,
>
> Renting out fidelity bonds is an interesting idea. It might happen in
> the situation where a hodler wants to generate yield but doesn't want
> the hassle of running a full node and yield generator. A big downside of
> it is that the yield generator income is random while the rent paid is a
> fixed cost, so there's a chance that the income won't cover the rent.

The fact that *renting* is at all possible suggests to me that the following 
situation *could* arise:

* A market of lessors arises.
* A surveillor creates multiple identities.
* Each fake identity rents separately from multiple lessors.
* Surveillor gets privacy data by paying out rent money to the lessor market.

In defiads, I and Tamas pretty much concluded that rental would happen 
inevitably.
One could say that defiads was a kind of fidelity bond system.
Our solution for defiads was to prioritize propagating advertisements (roughly 
equivalent to the certificates in your system, I think) with larger bonded 
values * min(bonded_time, 1 year).
However, do note that we did not intend defiads to be used for 
privacy-sensitive applications like JoinMarket/Teleport.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-01 Thread ZmnSCPxj via bitcoin-dev
Good morning again Chris,

I wonder if there would be an incentive to *rent* out a fidelity bond, i.e. I 
am interested in application A, you are interested in application B, and you 
rent my fidelity bond for application B.
We can use a pay-for-signature protocol now that Taproot is available, so that 
the signature for the certificate for your usage of application B can only be 
completed if I reveal a secret via a signature on another Taproot UTXO that 
gets me the rent for the fidelity bond.

I do not know if this would count as "abuse" or just plain "economic 
sensibility".
But a time may come where people just offer fidelity bonds for lease without 
actually caring about the actual applications it is being used *for*.
If the point is simply to make it costly to show your existence, whether you 
pay for the fidelity bond by renting it, or by acquiring your own Bitcoins and 
foregoing the ability to utilize it for some amount of time (which should cost 
closely to renting the fidelity bond from a provider), should probably not 
matter economically.

You mention that JoinMarket clients now check for fidelity bonds not being used 
across multiple makers, how is this done exactly, and does the technique not 
deserve a section in this BIP?

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-01 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

Excellent BIP!

From a quick read-over, it seems to me that the fidelity bond does not commit 
to any particular scheme or application.
This means (as I understand it) that the same fidelity bond can be used to 
prove existence across multiple applications.
I am uncertain whether this is potentially abusable or not.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Towards a means of measuring user support for Soft Forks

2022-04-30 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> @Zman
> > if two people are perfectly rational and start from the same information, 
> > they *will* agree
> I take issue with this. I view the word "rational" to mean basically logical. 
> Someone is rational if they advocate for things that are best for them. Two 
> humans are not the same people. They have different circumstances and as a 
> result different goals. Two actors with different goals will inevitably have 
> things they rationally and logically disagree about. There is no universal 
> rationality. Even an AI from outside space and time is incredibly likely to 
> experience at least some value drift from its peers.

Note that "the goal of this thing" is part of the information where both "start 
from" here.

Even if you and I have different goals, if we both think about "given this 
goal, and these facts, is X the best solution available?" we will both agree, 
though our goals might not be the same as each other, or the same as "this 
goal" is in the sentence.
What is material is simply that the laws of logic are universal and if you 
include the goal itself as part of the question, you will reach the same 
conclusion --- but refuse to act on it (and even oppose it) because the goal is 
not your own goal.

E.g. "What is the best way to kill a person without getting caught?" will 
probably have us both come to the same broad conclusion, but I doubt either of 
us has a goal or sub-goal to kill a person.
That is: if you are perfectly rational, you can certainly imagine a "what if" 
where your goal is different from your current goal and figure out what you 
would do ***if*** that were your goal instead.

Is that better now?

> > 3. Can we actually have the goals of all humans discussing this topic all 
> > laid out, *accurately*?
> I think this would be a very useful exercise to do on a regular basis. This 
> conversation is a good example, but conversations like this are rare. I tried 
> to discuss some goals we might want bitcoin to have in a paper I wrote about 
> throughput bottlenecks. Coming to a consensus around goals, or at very least 
> identifying various competing groupings of goals would be quite useful to 
> streamline conversations and to more effectively share ideas.


Using a future market has the attractive property that, since money is often an 
instrumental sub-goal to achieve many of your REAL goals, you can get 
reasonably good information on the goals of people without them having to 
actually reveal their actual goals.
Also, irrationality on the market tends to be punished over time, and a human 
who achieves better-than-human rationality can gain quite a lot of funds on the 
market, thus automatically re-weighing their thoughts higher.

However, persistent irrationalities embedded in the design of the human mind 
will still be difficult to break (it is like a program attempting to escape a 
virtual machine).
And an uninformed market is still going to behave pretty much randomly.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Towards a means of measuring user support for Soft Forks

2022-04-27 Thread ZmnSCPxj via bitcoin-dev
Good morning Keagan, et al,



> I think there are a few questions surrounding the issue of soft fork 
> activation. Perhaps it warrants zooming out beyond even what my proposal aims 
> to solve. In my mind the most important questions surrounding this process 
> are:
>
> 1. In an ideal world, assuming we could, with perfect certainty, know 
> anything we wanted about the preferences of the user base, what would be the 
> threshold for saying "this consensus change is ready for activation"?
>     1a. Does that threshold change based on the nature of the consensus 
> change (new script type/opcode vs. block size reduction vs. blacklisting 
> UTXOs)?
>     1b. Do different constituencies (end users, wallets, exchanges, coinjoin 
> coordinators, layer2 protocols, miners) have a desired minimum or maximum 
> representation in this "threshold"?

Ideally, in a consensus system, 100% should be the threshold.
After all, the intent of the design of Bitcoin is that everyone should be able 
to use it, and the objection of even 0.01%, who would actively refuse a change, 
implies that set would not be able to use Bitcoin.
i.e. "consensus means 'everyone agrees'"

Against this position, the real world smashes our ideals.
Zooming out, the number of Bitcoin users in the globe is far less than 100%, 
and there are people who would object to the use of Bitcoin entirely.
This implies that the position "consensus means 'everyone agrees'" would imply 
that Bitcoin should be shut down, as it cannot help users who oppose it.
Obviously, the continued use of Bitcoin, by us and others, is not in perfect 
agreement with this position.

Let us reconsider the result of the blocksize debate.
A group of former-Bitcoin-users forked themselves off the Bitcoin blockchain.
But in effect: the opposers to SegWit were simply outright *evicted* from the 
set of people who are in 'everyone', in the "consensus means 'everyone agrees'" 
sense.
(That some of them changed their mind later is immaterial --- their acceptance 
back into the Bitcoin community is conditional on them accepting the current 
Bitcoin rules.)

So obviously there is *some* threshold, that is not 100%, that we would deem 
gives us "acceptable losses".
So: what is the "acceptable loss"?

--

More philosphically: the [Aumann Agreement 
Theorem](https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem) can be 
bastardized to: "if two people are perfectly rational and start from the same 
information, they *will* agree".

If humans were perfectly rational and the information was complete and 
accurately available beforehand, we could abduct a single convenient human 
being, feed them the information, and ask them what they think, and simply 
follow that.
It would be pointless to abduct a second human, since it would just agree with 
the first (as per the Aumann Agreement Theorem), and abducting humans is not 
easy or cheap.

If humans were perfectly rational and all information was complete, then there 
would be no need for "representation", you just input "this is my goal" and 
"this is the info" and get out "aye" or "nay", and whoever you gave those 
inputs to would not matter, because everyone would agree on the same conclusion.

All democracy/voting and consensus, stem from the real-world flaws of this 
simple theorem.

1.  No human is perfectly rational in the sense required by the Aumann 
Agreement Theorem.
2.  Information may be ambiguous or lacking.
3.  Humans do not want to reveal their *actual* goals and sub-goals, because 
their competitors may be able to block them if the competitors knew what their 
goals/sub-goals were.

Democracy, and the use of some kind of high "threshold" in a "consensus" (ha, 
ha) system, depend on the following assumptions to "fix" the flaws of the 
Aumann Agreement Theorem:

1.  With a large sample of humans, the flaws in rationality (hopefully, ha, ha) 
cancel out, and if we ask them *Really Nicely* they may make an effort to be a 
little nearer to the ideal perfect rationality.
2.  With a large sample of humans, the incompleteness and obscureness of the 
necessary information may now become available in aggregate (hopefully, ha, 
ha), which it might not be individually.
3.  With a large sample of humans, hopefully those with similar goals get to 
aggregate their goals, and thus we can get the most good (achieved goals) for 
the greatest number.

Unfortunately, democracy itself (and therefore, any "consensus" ha ha system 
that uses a high threshold, which is just a more restricted kind of democracy 
that overfavors the status quo) has these flaws in the above assumptions:

1.  Humans belong to a single species with pretty much a single brain design 
("foolish humans!"), thus flaws in their rationality tend to correlate, so 
aggregation will *increase* the error, not decrease it.
2.  Humans have limited brain space ("puny humans!") which they often assign to 
more important things, like whether Johnny Depp is the victim or not, and thus 

Re: [bitcoin-dev] User Resisted Soft Fork for CTV

2022-04-25 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,

> On Mon, 25 Apr 2022 at 07:36, ZmnSCPxj  wrote
>
> > CTV *can* benefit layer 2 users, which is why I switched from vaguely 
> > apathetic to CTV, to vaguely supportive of it.
>
>
> Other proposals exist that also benefit L2 solutions. What makes you support 
> CTV specifically?

It is simple to implement, and a pure `OP_CTV` SCRIPT on a P2WSH / P2SH is only 
32 bytes + change on the output and 32 bytes + change on the input/witness, 
compared to signature-based schemes which require at least 32 bytes + change on 
the output and 64 bytes + change on the witness ***IF*** they use the Taproot 
format (and since we currently gate the Taproot format behind actual Taproot 
usages, any special SCRIPT that uses Taproot-format signatures would need at 
least the 33-byte internal pubkey revelation; if we settle with the old 
signature format, then that is 73 bytes for the signature).
To my knowledge as well, hashes (like `OP_CTV` uses) are CPU-cheaper (and 
memory-cheaper?) than even highly-optimized `libsecp256k1` signature 
validation, and (to my knowledge) you cannot use batch validation for 
SCRIPT-based signature checks.
It definitely does not enable recursive covenants, which I think deserve more 
general research and thinking before we enable recursive covenants.

Conceptually, I see `OP_CTV` as the "AND" to the "OR" of MAST.
In both cases, you have a hash-based tree, but in `OP_CTV` you want *all* these 
pre-agreed cases, while in MAST you want *one* of these pre-agreed cases.

Which is not to say that other proposals do not benefit L2 solutions *more* 
(`SIGHASH_ANYPREVOUT` when please?), but other proposals are signature-based 
and would be larger in this niche.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] User Resisted Soft Fork for CTV

2022-04-24 Thread ZmnSCPxj via bitcoin-dev
Good morning Peter,

>
> On April 22, 2022 11:03:51 AM GMT+02:00, Zac Greenwood via bitcoin-dev 
> bitcoin-dev@lists.linuxfoundation.org wrote:
>
> > I like the maxim of Peter Todd: any change of Bitcoin must benefit all
> > users. This means that every change must have well-defined and transparent
> > benefits. Personally I believe that the only additions to the protocol that
> > would still be acceptable are those that clearly benefit layer 2 solutions
> > such as LN and do not carry the dangerous potential of getting abused by
> > freeloaders selling commercial services on top of “free” eternal storage on
> > the blockchain.
>
>
> To strengthen your point: benefiting "all users" can only be done by 
> benefiting layer 2 solutions in some way, because it's inevitable that the 
> vast majority of users will use layer 2 because that's the only known way 
> that Bitcoin can scale.

I would like to point out that CTV is usable in LN.
In particular, instead of hosting all outputs (remote, local, and all the 
HTLCs) directly on the commitment transaction, the commitment transaction 
instead outputs to a CTV-guarded SCRIPT that defers the "real" outputs.

This is beneficial since a common cause of unilateral closes is that one of the 
HTLCs on the channel has timed out.
However, only *that* particular HTLC has to be exposed onchain *right now*, and 
the use of CTV allows only that failing HTLC, plus O(log N) other txes, to be 
published.
The CTV-tree can even be rearranged so that HTLCs with closer timeouts are 
nearer to the root of the CTV-tree.
This allows the rest of the unilateral close to be resolved later, if right now 
there is block space congestion (we only really need to deal with the sole HTLC 
that is timing out right now, the rest can be done later when block space is 
less tight).
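
As a rough back-of-the-envelope count, assuming a balanced binary CTV-tree over 
the HTLC outputs (purely illustrative):

    import math

    def txs_to_expose_one_htlc(num_htlcs: int) -> int:
        # Roughly the transactions along the root-to-leaf path of the CTV-tree.
        return 1 if num_htlcs <= 1 else math.ceil(math.log2(num_htlcs))

    # ~9 transactions to expose the one timed-out HTLC, instead of exposing
    # all 483 outputs at once.
    print(txs_to_expose_one_htlc(483))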

This is arguably minimal (unilateral closes are rare, though they *do* have 
massive effects on the network, since a single timed-out channel can, during 
short-term block congestion, cause other channels to also time out, which 
worsens the block congestion and leads to cascades of channel closures).

So this objection seems, to me, at least mitigated: CTV *can* benefit layer 2 
users, which is why I switched from vaguely apathetic to CTV, to vaguely 
supportive of it.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-24 Thread ZmnSCPxj via bitcoin-dev
Good morning Dave, et al.,

I have not read through *all* the mail on this thread, but have read a fair 
amount of it.

I think the main argument *for* this particular idea is that "it allows the use 
of real-world non-toy funds to prove that this feature is something actual 
users demand".

An idea that has been percolating in my various computation systems is to use 
Smart Contracts Unchained to implement a variant of the Microcode idea I put 
forth some months ago.

Briefly, define a set of "more detailed" opcodes that would allow any general 
computation to be performed.
This is the micro-opcode instruction set.

Then, when a new opcode or behavior is proposed for Bitcoin SCRIPT, create a 
new mapping from Bitcoin SCRIPT opcodes (including the new opcodes / behavior) 
to the micro-opcodes.
This is a microcode.

Then use Smart Contracts Unchained.
This means that we commit to the microcode, plus the SCRIPT that uses the 
microcode, and instead of sending funds to a new version of the Bitcoin SCRIPT 
that uses the new opcode(s), send to a "(n-of-n of users) or (1-of-users and 
(k-of-n of federation))".
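
A tiny Python sketch of that spending condition, just to make the policy 
explicit (participant names are placeholders):

    def unchained_spend_ok(signers: set, users: set, federation: set, k: int) -> bool:
        # "(n-of-n of users) or (1-of-users and (k-of-n of federation))"
        all_users = users <= signers
        some_user = bool(users & signers)
        fed_quorum = len(federation & signers) >= k
        return all_users or (some_user and fed_quorum)

    users = {"alice", "bob"}
    federation = {"f1", "f2", "f3"}
    assert unchained_spend_ok({"alice", "bob"}, users, federation, 2)
    assert unchained_spend_ok({"alice", "f1", "f2"}, users, federation, 2)
    assert not unchained_spend_ok({"f1", "f2", "f3"}, users, federation, 2)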

This is no worse security-wise than using a federated sidechain, without 
requiring a complete sidechain implementation, and allows the same code (the 
micro-opcode interpreter) to be reused across all ideas.
It may even be worthwhile to include the micro-opcode interpreter into Bitcoin 
Core, so that the mechanics of merging in a new opcode, that was prototyped via 
this mechanism, is easier.

The federation only needs to interpret the micro-opcode instruction set; it 
simply translates the (modified) Bitcoin SCRIPT opcodes to the corresponding 
micro-opcodes and runs that, possibly with reasonable limits on execution time.
Users are not required to trust a particular fixed k-of-n federation, but may 
choose any k-of-n they believe is trustworthy.

This idea does not require consensus at any point in time.
It allows "real" funds to be used, thus demonstrating real demand for the 
supposed innovation.
The problem is the effective erosion of security to depending on k-of-n of a 
federation.

Presumably, proponents of a new opcode or feature would run a micro-opcode 
interpreter faithfully, so that users have a positive experience with their new 
opcode, and would carefully monitor and vet the micro-opcode interpreters run 
by other supposed proponents, on the assumption that a sub-goal of such 
proponents would be to encourage use of the new opcode / feature.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taro: A Taproot Asset Representation Overlay

2022-04-05 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,

> When I see more and more proposals like this, where things are commited to 
> Taproot outputs, then I think we should start designing "miner-based 
> commitments". If someone is going to make a Bitcoin transaction and add a 
> commitment for zero cost, just by tweaking some Taproot public key, then it 
> is a benefit for the network, because then it is possible to get more things 
> with no additional bytes. Instead of doing "transaction-only", people can do 
> "transaction+commitment" for the same cost, that use case is positive.
>
> But if someone is going to make a Bitcoin transaction only to commit things, 
> where in other case that person would make no transaction at all, then I 
> think we should have some mechanism for "miner-based commitments" that would 
> allow making commitments in a standardized way. We always have one coinbase 
> transaction for each block, it is consensus rule. So, by tweaking single 
> public key in the coinbase transaction, it is possible to fit all commitments 
> in one tweaked key, and even make it logarithmic by forming a tree of 
> commitments.
>
> I think we cannot control user-based commitments, but maybe we should 
> standardize miner-based commitments, for example to have a sorted merkle tree 
> of commitments. Then, it would be possible to check if some commitment is a 
> part of that tree or not (if it is always sorted, then it is present at some 
> specified position or not, so by forming SPV-proof we can quickly prove, if 
> some commitment is or is not a part of some miner Taproot commitment).

You might consider implementing `OP_BRIBE` from Drivechains, then.

Note that if you *want* to have some data committed on the blockchain, you 
*have to* pay for the privilege of doing so --- miners are not obligated to put 
a commitment to *your* data on the coinbase for free.
Thus, any miner-based commitment needs to have a mechanism to offer payments to 
miners to include your commitment.

You might as well just use a transaction, and not tell miners that you want to 
commit data using some tweak of the public key (because the miners might then 
be induced to censor such commitments).

In short: there is no such thing as "other case that person would make no 
transaction at all", because you have to somehow bribe miners to include the 
commitment to your data, and you might as well use existing mechanisms 
(transactions that implicitly pay fees) for your data commitment, and get 
better censorship-resistance and privacy.

Nothing really prevents any transaction-based scheme from having multiple users 
that aggregate their data (losing privacy but aggregating their fees) to make a 
sum commitment and just make a single transaction that pays for the privilege 
of committing to the sum commitment.
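
As a sketch of such aggregation (a minimal Merkle fold in Python; the data 
strings are placeholders and the exact tree rules are an assumption, not a 
specification):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def sum_commitment(commitments: list) -> bytes:
        # Fold several users' commitments into a single root that one
        # fee-paying transaction can commit to.
        assert commitments
        layer = [h(c) for c in commitments]
        while len(layer) > 1:
            if len(layer) % 2:
                layer.append(layer[-1])
            layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        return layer[0]

    root = sum_commitment([b"user-1 data", b"user-2 data", b"user-3 data"])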

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-22 Thread ZmnSCPxj via bitcoin-dev
Good morning aj,

> On Tue, Mar 22, 2022 at 05:37:03AM +0000, ZmnSCPxj via bitcoin-dev wrote:
>
> > Subject: Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks
>
> (Have you considered applying a jit or some other compression algorithm
> to your emails?)
>
> > Microcode For Bitcoin SCRIPT
> >
> > =
> >
> > I propose:
> >
> > -   Define a generic, low-level language (the "RISC language").
>
> This is pretty much what Simplicity does, if you optimise the low-level
> language to minimise the number of primitives and maximise the ability
> to apply tooling to reason about it, which seem like good things for a
> RISC language to optimise.
>
> > -   Define a mapping from a specific, high-level language to
> > the above language (the microcode).
> >
> > -   Allow users to sacrifice Bitcoins to define a new microcode.
>
> I think you're defining "the microcode" as the "mapping" here.

Yes.

>
> This is pretty similar to the suggestion Bram Cohen was making a couple
> of months ago:
>
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-December/019722.html
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019773.html
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019803.html
>
> I believe this is done in chia via the block being able to
> include-by-reference prior blocks' transaction generators:
>
> ] transactions_generator_ref_list: List[uint32]: A list of block heights of 
> previous generators referenced by this block's generator.
>
> -   https://docs.chia.net/docs/05block-validation/block_format
>
> (That approach comes at the cost of not being able to do full validation
> if you're running a pruning node. The alternative is to effectively
> introduce a parallel "utxo" set -- where you're mapping the "sacrificed"
> BTC as the nValue and instead of just mapping it to a scriptPubKey for
> a later spend, you're permanently storing the definition of the new
> CISC opcode)
>
>

Yes, the latter is basically what microcode is.

> > We can then support a "RISC" language that is composed of
> > general instructions, such as arithmetic, SECP256K1 scalar
> > and point math, bytevector concatenation, sha256 midstates,
> > bytevector bit manipulation, transaction introspection, and
> > so on.
>
> A language that includes instructions for each operation we can think
> of isn't very "RISC"... More importantly it gets straight back to the
> "we've got a new zk system / ECC curve / ... that we want to include,
> let's do a softfork" problem you were trying to avoid in the first place.

`libsecp256k1` can run on purely RISC machines like ARM, so saying that a 
"RISC" set of opcodes cannot implement some arbitrary ECC curve, when the 
instruction set does not directly support that ECC curve, seems incorrect.

Any new zk system / ECC curve would have to be implementable in C++, so if you 
have micro-operations that would be needed for it, such as XORing two 
multi-byte vectors together, multiplying multi-byte precision numbers, etc., 
then any new zk system or ECC curve would be implementable in microcode.
For that matter, you could re-write `libsecp256k1` there.
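To illustrate the kind of micro-operations I mean (Python used purely for exposition here, not as the microcode language; the `uop_` names are just made up):

    # Toy sketches of generic micro-operations a "RISC" microcode
    # language would want; any ECC curve or zk system implementable in
    # C++ reduces to operations of roughly this shape.

    def uop_xor_bytes(a: bytes, b: bytes) -> bytes:
        # XOR two equal-length byte vectors.
        assert len(a) == len(b)
        return bytes(x ^ y for x, y in zip(a, b))

    def uop_mul_bignum(a: bytes, b: bytes) -> bytes:
        # Multiply two big-endian multi-byte-precision numbers.
        p = int.from_bytes(a, "big") * int.from_bytes(b, "big")
        return p.to_bytes(max(1, (p.bit_length() + 7) // 8), "big")

    def uop_addmod(a: bytes, b: bytes, n: bytes) -> bytes:
        # Modular addition, e.g. scalar arithmetic for some future curve.
        m = int.from_bytes(n, "big")
        r = (int.from_bytes(a, "big") + int.from_bytes(b, "big")) % m
        return r.to_bytes(len(n), "big")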

> > Then, the user creates a new transaction where one of
> > the outputs contains, say, 1.0 Bitcoins (exact required
> > value TBD),
>
> Likely, the "fair" price would be the cost of introducing however many
> additional bytes to the utxo set that it would take to represent your
> microcode, and the cost it would take to run jit(your microcode script)
> if that were a validation function. Both seem pretty hard to manage.
>
> "Ideally", I think you'd want to be able to say "this old microcode
> no longer has any value, let's forget it, and instead replace it with
> this new microcode that is much better" -- that way nodes don't have to
> keep around old useless data, and you've reduced the cost of introducing
> new functionality.

Yes, but that invites "I accidentally the smart contract" behavior.

> Additionally, I think it has something of a tragedy-of-the-commons
> problem: whoever creates the microcode pays the cost, but then anyone
> can use it and gain the benefit. That might even end up creating
> centralisation pressure: if you design a highly decentralised L2 system,
> it ends up expensive because people can't coordinate to pay for the
> new microcode that would make it cheaper; but if you design a highly
> centralised L2 system, you can just pay for the microcode yourself and
> make it even cheaper.

Th

Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-22 Thread ZmnSCPxj via bitcoin-dev


Good morning again Russell,

> Good morning Russell,
>
> > Thanks for the clarification.
> > You don't think referring to the microcode via its hash, effectively using 
> > 32-byte encoding of opcodes, is still rather long winded?

For that matter, since an entire microcode represents a language (based on the 
current OG Bitcoin SCRIPT language), with a little more coordination, we could 
entirely replace Tapscript versions --- every Tapscript version is a slot for a 
microcode, and the current OG Bitcoin SCRIPT is just the one in slot `0xc0`.
Filled slots cannot be changed, but new microcodes can use some currently-empty 
Tapscript version slot, and have it properly defined in a microcode 
introduction outpoint.

Then indication of a microcode would take only one byte, that is already needed 
currently anyway.

That does limit us to only 255 new microcodes, thus the cost of one microcode 
would have to be a good bit higher.

Again, remember, microcodes represent an entire language that is an extension 
of OG Bitcoin SCRIPT, not individual operations in that language.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-22 Thread ZmnSCPxj via bitcoin-dev
Good morning Russell,

> Thanks for the clarification.
>
> You don't think referring to the microcode via its hash, effectively using 
> 32-byte encoding of opcodes, is still rather long winded?

A microcode is a *mapping* of `OP_` codes to a variable-length sequence of 
`UOP_` micro-opcodes.
So a microcode hash refers to an entire language of redefined `OP_` codes, not 
each individual opcode in the language.

If it costs 1 Bitcoin to create a new microcode, then there are only 21 million 
possible microcodes, and I think about 50 bits of hash is sufficient to specify 
those with low probability of collision.
We could use a 20-byte RIPEMD160-of-SHA256 instead for 160 bits; that should be 
more than sufficient, with margin to spare.
Though perhaps it is now easier to deliberately attack...

Also, if you have a common SCRIPT whose non-`OP_PUSH` opcodes are more than say 
32 + 1 bytes (or 20 + 1 if using RIPEMD), and you can fit their equivalent 
`UOP_` codes into the max limit for a *single* opcode, you can save bytes by 
redefining some random `OP_` code into the sequence of all the `UOP_` codes.
You would have a hash reference to the microcode, and a single byte for the 
actual "SCRIPT" which is just a jet of the entire SCRIPT.
Users of multiple *different* such SCRIPTs can band together to define a single 
microcode, mapping their SCRIPTs to different `OP_` codes and sharing the cost 
of defining the new microcode that shortens all their SCRIPTs.
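To make the structure concrete, here is a minimal sketch (Python for exposition only; the `OP_`/`UOP_` names and the serialization are hypothetical):

    import hashlib

    # A microcode is a mapping from redefined OP_ codes to bounded
    # sequences of UOP_ micro-opcodes, i.e. an entire language.
    microcode = {
        "OP_NOP4": ["UOP_SHA256", "UOP_SWAP", "UOP_EQUALVERIFY", "UOP_CHECKSIG"],
        "OP_NOP5": ["UOP_DUP", "UOP_HASH160", "UOP_EQUALVERIFY", "UOP_CHECKSIG"],
    }

    # SCRIPTs refer to the whole mapping by a single hash; a 20-byte
    # truncation (RIPEMD160-of-SHA256 style) would also work.
    serialized = repr(sorted(microcode.items())).encode()
    microcode_hash = hashlib.sha256(serialized).digest()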

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-22 Thread ZmnSCPxj via bitcoin-dev
Good morning Russell,

> Setting aside my thoughts that something like Simplicity would make a better 
> platform than Bitcoin Script (due to expression operating on a more narrow 
> interface than the entire stack (I'm looking at you OP_DEPTH)) there is an 
> issue with namespace management.
>
> If I understand correctly, your implication was that once opcodes are 
> redefined by an OP_RETURN transaction, subsequent transactions of that opcode 
> refer to the new microtransaction.  But then we have a race condition between 
> people submitting transactions expecting the outputs to refer to the old code 
> and having their code redefined by the time they do get confirmed  (or worse 
> having them reorged).

No, use of specific microcodes is opt-in: you have to use a specific `0xce` 
Tapscript version, ***and*** refer to the microcode you want to use via the 
hash of the microcode.

The only race condition is reorging out a newly-defined microcode.
This can be avoided by waiting for deep confirmation of a newly-defined 
microcode before actually using it.

But once the microcode introduction outpoint of a particular microcode has been 
deeply confirmed, then your Tapscript can refer to the microcode, and its 
meaning does not change.

Fullnodes may need to maintain multiple microcodes, which is why creating new 
microcodes is expensive; they not only require JIT compilation, they also 
require that fullnodes keep an index that cannot have items deleted.


The advantage of the microcode scheme is that the size of the SCRIPT can be 
used as a proxy for CPU load  just as it is done for current Bitcoin SCRIPT.
As long as the number of `UOP_` micro-opcodes that an `OP_` code can expand to 
is bounded, and we avoid looping constructs, then the CPU load is also bounded 
and the size of the SCRIPT approximates the amount of processing needed, thus 
microcode does not require a softfork to modify weight calculations in the 
future.
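A sketch of the cost accounting this allows (hypothetical names, Python for exposition):

    MAX_UOPS_PER_OPCODE = 256   # hypothetical per-opcode expansion bound

    def script_cost(script_opcodes, microcode):
        # With a bounded expansion and no looping constructs, total
        # micro-opcode count -- and hence CPU load -- is bounded by
        # len(script) * MAX_UOPS_PER_OPCODE, so SCRIPT size keeps
        # working as a proxy for validation cost.
        total = 0
        for op in script_opcodes:
            uops = microcode.get(op, [op])   # unredefined opcodes are themselves
            assert len(uops) <= MAX_UOPS_PER_OPCODE
            total += len(uops)
        return total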

Regards,
ZmnSCPxj


[bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-21 Thread ZmnSCPxj via bitcoin-dev
Good morning list,

It is entirely possible that I have gotten into the deep end and am now 
drowning in insanity, but here goes

Subject: Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

Introduction


Recent (Early 2022) discussions on the bitcoin-dev mailing
list have largely focused on new constructs that enable new
functionality.

One general idea can be summarized this way:

* We should provide a very general language.
  * Then later, once we have learned how to use this language,
we can softfork in new opcodes that compress sections of
programs written in this general language.

There are two arguments against this style:

1.  One of the most powerful arguments the "general" side of
the "general v specific" debate is that softforks are
painful because people are going to keep reiterating the
activation parameters debate in a memoryless process, so
we want to keep the number of softforks low.
* So, we should just provide a very general language and
  never softfork in any other change ever again.
2.  One of the most powerful arguments the "general" side of
the "general v specific" debate is that softforks are
painful because people are going to keep reiterating the
activation parameters debate in a memoryless process, so
we want to keep the number of softforks low.
* So, we should just skip over the initial very general
  language and individually activate small, specific
  constructs, reducing the needed softforks by one.

By taking a page from microprocessor design, it seems to me
that we can use the same above general idea (a general base
language where we later "bless" some sequence of operations)
while avoiding some of the arguments against it.

Digression: Microcodes In CISC Microprocessors
--

In the 1980s and 1990s, two competing microprocessor design
paradigms arose:

* Complex Instruction Set Computing (CISC)
  - Few registers, many addressing/indexing modes, variable
instruction length, many obscure instructions.
* Reduced Instruction Set Computing (RISC)
  - Many registers, usually only immediate and indexed
addressing modes, fixed instruction length, few
instructions.

In CISC, the microprocessor provides very application-specific
instructions, often with a small number of registers with
specific uses.
The instruction set was complicated, and often required
multiple specific circuits for each application-specific
instruction.
Instructions had varying sizes and varying number of cycles.

In RISC, the microprocessor provides fewer instructions, and
programmers (or compilers) are supposed to generate the code
for all application-specific needs.
The processor provided large register banks which could be
used very generically and interchangeably.
Instructions had the same size and every instruction took a
fixed number of cycles.

In CISC you usually had shorter code which could be written
by human programmers in assembly language or machine language.
In RISC, you generally had longer code, often difficult for
human programmers to write, and you *needed* a compiler to
generate it (unless you were very careful, or insane enough
you could scroll over multiple pages of instructions without
becoming more insane), or else you might forget about stuff
like jump slots.

For the most part, RISC lost, since most modern processors
today are x86 or x86-64, an instruction set with varying
instruction sizes, varying number of cycles per instruction,
and complex instructions with application-specific uses.

Or at least, it *looks like* RISC lost.
In the 90s, Intel was struggling since their big beefy CISC
designs were becoming too complicated.
Bugs got past testing and into mass-produced silicon.
RISC processors were beating the pants off 386s in terms of
raw number of computations per second.

RISC processors had the major advantage that they were
inherently simpler, due to having fewer specific circuits
and filling up their silicon with general-purpose registers
(which are large but very simple circuits) to compensate.
This meant that processor designers could fit more of the
design in their merely human meat brains, and were less
likely to make mistakes.
The fixed number of cycles per instruction made it trivial
to create a fixed-length pipeline for instruction processing,
and practical RISC processors could deliver one instruction
per clock cycle.
Worse, the simplicity of RISC meant that smaller and less
experienced teams could produce viable competitors to the
Intel x86s.

So what Intel did was to use a RISC processor, and add a
special Instruction Decoder unit.
The Instruction Decoder would take the CISC instruction
stream accepted by classic Intel x86 processors, and emit
RISC instructions for the internal RISC processor.
CISC instructions might be variable length and have variable
number of cycles, but the emitted RISC instructions were
individually 

Re: [bitcoin-dev] Speedy Trial

2022-03-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> @Jorge
> > Any user polling system is going to be vulnerable to sybil attacks.
>
> Not the one I'll propose right here. What I propose specifically is a 
> coin-weighted signature-based poll with the following components:
> A. Every pollee signs messages like  support:10%}> for each UTXO they want to respond to the poll with.
> B. A signed message like that is valid only while that UTXO has not been 
> spent.
> C. Poll results are considered only at each particular block height, where 
> the support and opposition responses are weighted by the UTXO amount (and the 
> support/oppose fraction in the message). This means you'd basically see a 
> rolling poll through the blockchain as new signed poll messages come in and 
> as their UTXOs are spent. 
>
> This is not vulnerable to sybil attacks because it requires access to UTXOs 
> and response-weight is directly tied to UTXO amount. If someone signs a poll 
> message with a key that can unlock (or is in some other designated way 
> associated with) a UTXO, and then spends that UTXO, their poll response stops 
> being counted for all block heights after the UTXO was spent. 
>
> Why put support and oppose fractions in the message? Who would want to both 
> support and oppose something? Any multiple participant UTXO would. Eg 
> lightning channels would, where each participant disagrees with the other. 
> They need to sign together, so they can have an agreement to sign for the 
> fractions that match their respective channel balances (using a force channel 
> close as a last resort against an uncooperative partner as usual). 

This does not quite work, as lightning channel balances can be changed at any 
time.
I might agree that you have 90% of the channel and I have 10% of the channel 
right now, but if you then send a request to forward your funds out, I need to 
be able to invalidate the previous signal, one that is tied to the fulfillment 
of the forwarding request.
This begins to add complexity.

More pointedly, if the signaling is done onchain, then a forward on the LN 
requires that I put up invalidations of previous signals, also onchain, 
otherwise you could cheaty cheat your effective balance by moving your funds 
around.
But the point of LN is to avoid putting typical everyday forwards onchain.
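(For concreteness, the coin-weighted tally you describe would look roughly like the sketch below, even before dealing with the invalidation problem above; Python for exposition, and the field names are hypothetical.)

    def tally_at_height(signed_messages, unspent_utxos):
        # signed_messages: already signature-checked dicts of the form
        #   {"utxo": utxo_id, "support": 0.9, "oppose": 0.1}
        # unspent_utxos: utxo_id -> amount, for UTXOs unspent at this height.
        support = oppose = 0.0
        for msg in signed_messages:
            amount = unspent_utxos.get(msg["utxo"])
            if amount is None:
                continue   # UTXO spent: the response stops being counted
            support += amount * msg["support"]
            oppose += amount * msg["oppose"]
        return support, oppose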

> This does have the potential issue of public key exposure prior to spending 
> for current addresses. But that could be fixed with a new address type that 
> has two public keys / spend paths: one for spending and one for signing. 

This issue is particularly relevant to vault constructions.
Typically a vault has a "cold" key that is the master owner of the fund, with 
"hot" keys having partial access.
Semantically, we would consider the "cold" key to be the "true" owner of the 
fund, with "hot" key being delegates who are semi-trusted, but not as trusted 
as the "cold" key.

So, we should consider a vote from the "cold" key only.
However, the point is that the "cold" key wants to be kept offline as much as 
possible for security.

I suppose the "cold" key could be put online just once to create the signal 
message, but vault owners might not want to vote because of the risk, and their 
weight might be enough to be important in your voting scheme (consider that the 
point of vaults is to protect large funds).


A sub-issue here with the spend/signal pubkey idea is that if I need to be able 
to somehow indicate that a long-term-cold-storage UTXO has a signaling pubkey, 
I imagine this mechanism of indicating might itself require a softfork, so you 
have a chicken-and-egg problem...

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Covenants and feebumping

2022-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Antoine,

> For "hot contracts" a signature challenge is used to achieve the same. I know 
> the latter is imperfect, since
> the lower the uptime risk (increase the number of network monitors) the 
> higher the DOS risk (as you duplicate
> the key).. That's why i asked if anybody had some thoughts about this and if 
> there was a cleverer way of doing
> it.

Okay, let me see if I understand your concern correctly.

When using a signature challenge, the concern is that you need to presign 
multiple versions of a transaction with varying feerates.

And you have a set of network monitors / watchtowers that are supposed to watch 
the chain on your behalf in case your ISP suddenly hates you for no reason.

The more monitors there are, the more likely that one of them will be corrupted 
by a miner and jump to the highest-feerate version, overpaying fees and making 
miners very happy.
Such is third-party trust.

Is my understanding correct?


A cleverer way, which requires consolidating (but is unable to eliminate) 
third-party trust, would be to use a DLC oracle.
The DLC oracle provides a set of points corresponding to a set of feerate 
ranges, and commits to publishing the scalar of one of those points at some 
particular future block height.
Ostensibly, the scalar it publishes is the one of the point that corresponds to 
the feerate range found at that future block height.

You then create adaptor signatures for each feerate version, corresponding to 
the feerate ranges the DLC oracle could eventually publish.
The adaptor signatures can only be completed if the DLC oracle publishes the 
corresponding scalar for that feerate range.

You can then send the adaptor signatures to multiple watchtowers, who can only 
publish one of the feerate versions, unless the DLC oracle is hacked and 
publishes multiple scalars (at which point the DLC oracle protocol reveals a 
privkey of the DLC oracle, which should be usable for slashing some bond of the 
DLC oracle).
This prevents any of them from publishing the highest-feerate version, as the 
adaptor signature cannot be completed unless that is what the oracle published.
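As a toy sketch of the completion step (plain modular arithmetic standing in for the actual secp256k1 adaptor-signature machinery, which also lets the watchtower verify the adaptors against the oracle's points; all numbers are illustrative):

    N = 2**61 - 1   # toy modulus; the real scheme uses the secp256k1 group order

    # Full signatures for each presigned feerate version, held by the signer.
    full_sigs = {"low": 111111, "mid": 222222, "high": 333333}

    # One oracle scalar per feerate range; the oracle publishes exactly one.
    oracle_scalars = {"low": 4444, "mid": 5555, "high": 6666}

    # The watchtower only receives adaptor signatures, none broadcastable yet.
    adaptor_sigs = {r: (full_sigs[r] - oracle_scalars[r]) % N for r in full_sigs}

    # When the oracle publishes the scalar for, say, the "mid" range,
    # only that feerate version can be completed and broadcast.
    published_range, published_scalar = "mid", oracle_scalars["mid"]
    completed = (adaptor_sigs[published_range] + published_scalar) % N
    assert completed == full_sigs[published_range]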

There are still drawbacks:

* Third-party trust risk: the oracle can still lie.
  * DLC oracles are prevented from publishing multiple scalars; they cannot be 
prevented from publishing a single wrong scalar.
* DLCs must be time bound.
  * DLC oracles commit to publishing a particular point at a particular fixed 
time.
  * For "hot" dynamic protocols, you need the ability to invoke the oracle at 
any time, not a particular fixed time.

The latter probably makes this unusable for hot protocols anyway, so maybe not 
so clever.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Jets (Was: `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT)

2022-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> > I think we would want to have a cleanstack rule at some point
>
> Ah is this a rule where a script shouldn't validate if more than just a true 
> is left on the stack? I can see how that would prevent the non-soft-fork 
> version of what I'm proposing. 

Yes.
There was also an even stronger cleanstack rule where the stack and alt stack 
are totally empty.
This is because a SCRIPT really just returns "valid" or "invalid", and 
`OP_VERIFY` can be trivially appended to a SCRIPT that leaves a single stack 
item to convert to a SCRIPT that leaves no stack items and retains the same 
behavior.

>
> > How large is the critical mass needed?
>
> Well it seems we've agreed that were we going to do this, we would want to at 
> least do a soft-fork to make known jet scripts lighter weight (and unknown 
> jet scripts not-heavier) than their non-jet counterparts. So given a 
> situation where this soft fork happens, and someone wants to implement a new 
> jet, how much critical mass would be needed for the network to get some 
> benefit from the jet? Well, the absolute minimum for some benefit to happen 
> is that two nodes that support that jet are connected. In such a case, one 
> node can send that jet scripted transaction along without sending the data of 
> what the jet stands for. The jet itself is pretty small, like 2 or so bytes. 
> So that does impose a small additional cost on nodes that don't support a 
> jet. For 100,000 nodes, that means 200,000 bytes of transmission would need 
> to be saved for a jet to break even. So if the jet stands for a 22 byte 
> script, it would break even when 10% of the network supported it. If the jet 
> stood for a 102 byte script, it would break even when 2% of the network 
> supported it. So how much critical mass is necessary for it to be worth it 
> depends on what the script is. 

The math seems reasonable.
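Spelling the model out (as I understand the assumptions: a ~2-byte jet, non-supporting peers carry those bytes as pure overhead, supporting peers save the rest of the script body):

    def break_even_fraction(script_size, jet_size=2):
        # Fraction of peers that must support the jet before bytes saved
        # on supporting peers equals extra bytes sent to non-supporting peers.
        saved = script_size - jet_size
        overhead = jet_size
        return overhead / (saved + overhead)

    print(break_even_fraction(22))    # ~0.09, roughly the 10% above
    print(break_even_fraction(102))   # ~0.02, roughly the 2% above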


> The question I have is: where would the constants table come from? Would it 
> reference the original positions of items on the witness stack? 

The constants table would be part of the SCRIPT puzzle, and thus not in the 
witness solution.
I imagine the SCRIPT would be divided into two parts: (1) a table of constants 
and (2) the actual opcodes to execute.
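That is, something shaped like this (hypothetical encoding, purely to illustrate the split; `OP_CONST` is a made-up opcode that pushes an entry of the constants table):

    script_puzzle = {
        # (1) table of constants: pubkeys, hashes, etc., referenced by index
        "constants": [bytes.fromhex("02" + "11" * 32),   # e.g. a pubkey
                      bytes.fromhex("99" * 32)],         # e.g. a payment hash
        # (2) the opcodes to execute, with no inline OP_PUSHes of constants
        "code": ["OP_SHA256", "OP_CONST 1", "OP_EQUALVERIFY",
                 "OP_CONST 0", "OP_CHECKSIG"],
    }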


Regards,
ZmnSCPxj


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Bram,

> On Wed, Mar 9, 2022 at 6:30 AM ZmnSCPxj  wrote:
>
> > I am pointing out that:
> >
> > * We want to save bytes by having multiple inputs of a transaction use the 
> > same single signature (i.e. sigagg).
> >
> > is not much different from:
> >
> > * We want to save bytes by having multiple inputs of a transaction use the 
> > same `scriptPubKey` template.
>
> Fair point. In the past Bitcoin has been resistant to such things because for 
> example reusing pubkeys can save you from having to separately pay for the 
> reveals of all of them but letting people get credit for that incentivizes 
> key reuse which isn't such a great thing.

See paragraph below:

> > > > For example you might have multiple HTLCs, with mostly the same code 
> > > > except for details like who the acceptor and offerrer are, exact hash, 
> > > > and timelock, and you could claim multiple HTLCs in a single tx and 
> > > > feed the details separately but the code for the HTLC is common to all 
> > > > of the HTLCs.
> > > > You do not even need to come from the same protocol if multiple 
> > > > protocols use the same code for implementing HTLC.

Note that the acceptor and offerrer are represented by pubkeys here.
So we do not want to encourage key reuse, we want to encourage reuse of *how* 
the pubkeys are used (but rotate the pubkeys).

In the other thread on Jets in bitcoin-dev I proposed moving data like pubkeys 
into a separate part of the SCRIPT in order to (1) not encourage key reuse and 
(2) make it easier to compress the code.
In LISP terms, it would be like requiring that top-level code have a `(let 
...)` form around it where the assigned data *must* be constants or `quote`, 
and disallowing constants and `quote` elsewhere, then any generated LISP code 
has to execute in the same top-level environment defined by this top-level 
`let`.

So you can compress the code by using some metaprogramming where LISP generates 
LISP code but you still need to work within the confines of the available 
constants.

> > > HTLCs, at least in Chia, have embarrassingly little code in them. Like, 
> > > so little that there's almost nothing to compress.
> >
> > In Bitcoin at least an HTLC has, if you remove the `OP_PUSH`es, by my 
> > count, 13 bytes.
> > If you have a bunch of HTLCs you want to claim, you can reduce your witness 
> > data by 13 bytes minus whatever number of bytes you need to indicate this.
> > That amounts to about 3 vbytes per HTLC, which can be significant enough to 
> > be worth it (consider that Taproot moving away from encoded signatures 
> > saves only 9 weight units per signature, i.e. about 2 vbytes).
>
> Oh I see. That's already extremely small overhead. When you start optimizing 
> at that level you wind up doing things like pulling all the HTLCs into the 
> same block to take the overhead of pulling in the template only once.
>  
>
> > Do note that PTLCs remain more space-efficient though, so forget about 
> > HTLCs and just use PTLCs.
>
> It makes a lot of sense to make a payment channel system using PTLCs and 
> eltoo right off the bat but then you wind up rewriting everything from 
> scratch.

Bunch of #reckless devs implemented Lightning with just HTLCs so that is that, 
*shrug*, gotta wonder what those people were thinking, not waiting for PTLCs.

>  
>
> > > > This does not apply to current Bitcoin since we no longer accept a 
> > > > SCRIPT from the spender, we now have a witness stack.
> > >
> > > My mental model of Bitcoin is to pretend that segwit was always there and 
> > > the separation of different sections of data is a semantic quibble.
> >
> > This is not a semantic quibble --- `witness` contains only the equivalent 
> > of `OP_PUSH`es, while `scriptSig` can in theory contain non-`OP_PUSH` 
> > opcodes.
> > xref. `1 RETURN`.
>
> It's very normal when you're using lisp for snippets of code to be passed in 
> as data and then verified and executed. That's enabled by the extreme 
> adherence to no side effects.

Quining still allows Turing-completeness and infinite loops, which *is* still a 
side effect, though as I understand it ChiaLISP uses the "Turing-complete but 
with a max number of ops" kind of totality.

> > This makes me kinda wary of using such covenant features at all, and if 
> > stuff like `SIGHASH_ANYPREVOUT` or `OP_CHECKTEMPLATEVERIFY` are not added 
> > but must be reimplemented via a covenant feature, I would be saddened, as I 
> > now have to contend with the complexity of covenant features and carefully 
> > check that `SIGHASH_ANYPREVOUT`/`OP_CHECKTEMPLATEVERIFY` were implemented 
> > correctly.
>
> Even the 'standard format' transaction which supports taproot and graftroot 
> is implemented in CLVM. The benefit of this approach is that new 
> functionality can be implemented and deployed immediately rather than having 
> to painstakingly go through a soft fork deployment for each thing.

Wow, just wow.

Regards,
ZmnSCPxj

Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning aj et al.,

> On Tue, Mar 08, 2022 at 03:06:43AM +0000, ZmnSCPxj via bitcoin-dev wrote:
>
> > > > They're radically different approaches and
> > > > it's hard to see how they mix. Everything in lisp is completely 
> > > > sandboxed,
> > > > and that functionality is important to a lot of things, and it's really
> > > > normal to be given a reveal of a scriptpubkey and be able to rely on 
> > > > your
> > > > parsing of it.
> > > > The above prevents combining puzzles/solutions from multiple coin 
> > > > spends,
> > > > but I don't think that's very attractive in bitcoin's context, the way
> > > > it is for chia. I don't think it loses much else?
> > > > But cross-input signature aggregation is a nice-to-have we want for 
> > > > Bitcoin, and, to me, cross-input sigagg is not much different from 
> > > > cross-input puzzle/solution compression.
>
> Signature aggregation has a lot more maths and crypto involved than
> reversible compression of puzzles/solutions. I was more meaning
> cross-transaction relationships rather than cross-input ones though.

My point is that in the past we were willing to discuss the complicated crypto 
math around cross-input sigagg in order to save bytes, so it seems to me that 
cross-input compression of puzzles/solutions at least merits a discussion, 
since it would require a lot less heavy crypto math, and *also* save bytes.

> > > I /think/ the compression hook would be to allow you to have the puzzles
> > > be (re)generated via another lisp program if that was more efficient
> > > than just listing them out. But I assume it would be turtles, err,
> > > lisp all the way down, no special C functions like with jets.
> > > Eh, you could use Common LISP or a recent-enough RnRS Scheme to write a 
> > > cryptocurrency node software, so "special C function" seems to 
> > > overprivilege C...
>
> Jets are "special" in so far as they are costed differently at the
> consensus level than the equivalent pure/jetless simplicity code that
> they replace. Whether they're written in C or something else isn't the
> important part.
>
> By comparison, generating lisp code with lisp code in chia doesn't get
> special treatment.

Hmm, what exactly do you mean here?

If I have a shorter piece of code that expands to a larger piece of code 
because metaprogramming, is it considered the same cost as the larger piece of 
code (even if not all parts of the larger piece of code are executed, e.g. 
branches)?

Or is the cost simply proportional to the number of operations actually 
executed?

I think there are two costs here:

* Cost of bytes to transmit over the network.
* Cost of CPU load.

Over here in Bitcoin we have been mostly conflating the two, to the point that 
Taproot even eliminates unexecuted branches from being transmitted over the 
network so that bytes transmitted is approximately equal to opcodes executed.

It seems to me that lisp-generating-lisp compression would reduce the cost of 
bytes transmitted, but increase the CPU load (first the metaprogram runs, and 
*then* the produced program runs).

> (You could also use jets in a way that doesn't impact consensus just
> to make your node software more efficient in the normal case -- perhaps
> via a JIT compiler that sees common expressions in the blockchain and
> optimises them eg)

I believe that is relevant in the other thread about Jets that I and Billy 
forked off from `OP_FOLD`?


Over in that thread, we seem to have largely split jets into two types:

* Consensus-critical jets which need a softfork but reduce the weight of the 
jetted code (and which are invisible to pre-softfork nodes).
* Non-consensus-critical jets which only need relay changes and reduce bytes 
sent, but keep the weight of the jetted code.

It seems to me that lisp-generating-lisp compression would roughly fall into 
the "non-consensus-critical jets", roughly.


> On Wed, Mar 09, 2022 at 02:30:34PM +, ZmnSCPxj via bitcoin-dev wrote:
>
> > Do note that PTLCs remain more space-efficient though, so forget about 
> > HTLCs and just use PTLCs.
>
> Note that PTLCs aren't really Chia-friendly, both because chia doesn't
> have secp256k1 operations in the first place, but also because you can't
> do a scriptless-script because the information you need to extract
> is lost when signatures are non-interactively aggregated via BLS --
> so that adds an expensive extra ECC operation rather than reusing an
> op you're already paying for (scriptless script PTLCs) or just adding
> a cheap hash operation (HTLCs).
>
> (Pretty sure Chia could do (= PTLC (pubkey_for_exp PREIMAGE)) for
> preimage re

Re: [bitcoin-dev] Jets (Was: `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT)

2022-03-09 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> Hi ZmnSCPxj,
>
> >  Just ask a bunch of fullnodes to add this 1Mb of extra ignored data in 
> >this tiny 1-input-1-output transaction so I pay only a small fee
>
> I'm not suggesting that you wouldn't have to pay a fee for it. You'd pay a 
> fee for it as normal, so there's no DOS vector. Doesn't adding extra witness 
> data do what would be needed here? Eg simply adding extra data onto the 
> witness script that will remain unconsumed after successful execution of the 
> script?

I think we would want to have a cleanstack rule at some point (do not remember 
out-of-hand if Taproot already enforces one).

So now being nice to the network is *more* costly?
That just *dis*incentivizes jet usage.

> > how do new jets get introduced?
>
> In scenario A, new jets get introduced by being added to bitcoin software as 
> basically relay rules. 
>
> > If a new jet requires coordinated deployment over the network, then you 
> > might as well just softfork and be done with it.
>
> It would not need a coordinated deployment. However, the more nodes that 
> supported that jet, the more efficient using it would be for the network. 
>
> > If a new jet can just be entered into some configuration file, how do you 
> > coordinate those between multiple users so that there *is* some benefit for 
> > relay?
>
> When a new version of bitcoin comes out, people generally upgrade to it 
> eventually. No coordination is needed. 100% of the network need not support a 
> jet. Just some critical mass to get some benefit. 

How large is the critical mass needed?

If you use witness to transport jet information across non-upgraded nodes, then 
that disincentivizes use of jets and you can only incentivize jets by softfork, 
so you might as well just get a softfork.

If you have no way to transport jet information from an upgraded through a 
non-upgraded back to an upgraded node, then I think you need a fairly large 
buy-in from users before non-upgraded nodes are rare enough that relay is not 
much affected, and if the required buy-in is large enough, you might as well 
softfork.

> > Having a static lookup table is better since you can pattern-match on 
> > strings of specific, static length
>
> Sorry, better than what exactly? 

Than using a dynamic lookup table, which is how I understood your previous 
email about "scripts in the 1000 past blocks".

> > How does the unupgraded-to-upgraded boundary work?
> 
> When the non-jet aware node sends this to a jet-aware node, that node would 
> see the extra items on the stack after script execution, and would interpret 
> them as an OP_JET call specifying that OP_JET should replace the witness 
> items starting at index 0 with `1b5f03cf  OP_JET`. It does this and then 
> sends that along to the next hop.

It would have to validate as well that the SCRIPT sub-section matches the jet, 
else I could pretend to be a non-jet-aware node and give you a SCRIPT 
sub-section that does not match the jet and would cause your validation to 
diverge from other nodes.
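Concretely, the jet-aware node would have to do something like the sketch below before swapping the fragment back out (Python's adler32 is used only to mirror the example id above; the function name is hypothetical):

    import zlib

    def accept_jet_substitution(script_fragment: bytes, claimed_jet_id: int) -> bool:
        # Only substitute if the revealed fragment really hashes to the
        # claimed jet id; otherwise a peer pretending to be non-jet-aware
        # could make our validation diverge from the rest of the network.
        return zlib.adler32(script_fragment) == claimed_jet_id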

Adler32 seems a bit short though, it seems to me that it may lead to two 
different SCRIPT subsections hashing to the same hash.

Suppose I have two different node softwares.
One uses a particular interpretation for a particular Adler32 hash.
The other uses a different interpretation.
If we are not careful, if these two jet-aware software talk to each other, they 
will ban each other from the network and cause a chainsplit.
Since the Bitcoin software is open source, nothing prevents anyone from using a 
different SCRIPT subsection for a particular Adler32 hash if they find a 
collision and can somehow convince people to run their modified software.

> In order to support this without a soft fork, this extra otherwise 
> unnecessary data would be needed, but for jets that represent long scripts, 
> the extra witness data could be well worth it (for the network). 
>
> However, this extra data would be a disincentive to do transactions this way, 
> even when its better for the network. So it might not be worth doing it this 
> way without a soft fork. But with a soft fork to upgrade nodes to support an 
> OP_JET opcode, the extra witness data can be removed (replaced with 
> out-of-band script fragment transmission for nodes that don't support a 
> particular jet). 

Which is why I pointed out that each individual jet may very well require a 
softfork, or enough buy-in that you might as well just softfork.

> One interesting additional thing that could be done with this mechanism is to 
> add higher-order function ability to jets, which could allow nodes to add 
> OP_FOLD or similar functions as a jet without requiring additional soft 
> forks.  Hypothetically, you could imagine a jet script that uses an OP_LOOP 
> jet be written as follows:
>
> 5             # Loop 5 times
> 1             # Loop the next 1 operation
> 3c1g14ad 
> OP_JET
> OP_ADD  # The 1 operation to loop
>
> The above would sum up 5 numbers from the 

Re: [bitcoin-dev] Meeting Summary & Logs for CTV Meeting #5

2022-03-09 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> What is ST? If it may be a reason to oppose CTV, why not talk about it more 
> explicitly so that others can understand the criticisms?

ST is Speedy Trial.
Basically, a short softfork attempt with `lockinontimeout=false` is first done.
If this fails, then developers stop and think and decide whether to offer a 
UASF `lockinontimeout=true` version or not.

Jeremy showed a state diagram of Speedy Trial on the IRC, which was complicated 
enough that I ***joked*** that it would be better to not implement `OP_CTV` and 
just use One OPCODE To Rule Them All, a.k.a. `OP_RING`.

If you had actually read the IRC logs you would have understood it, I even 
explicitly asked "ST ?=" so that the IRC logs have it explicitly listed as 
"Speedy Trial".


> It seems that criticism isn't really that welcomed and is just explained away.

It seems that you are trying to grasp at any criticism and thus fell victim to 
a joke.

> Perhaps it is just my subjective perception.
> Sometimes it feels we're going from "don't trust, verify" to "just trust 
> jeremy rubin", i hope this is really just my subjective perception. Because I 
> think it would be really bad that we started to blindly trust people like 
> that, and specially jeremy.

Why "specially jeremy"?
Any particular information you think is relevant?

The IRC logs were linked, you know, you could have seen what was discussed.

In particular, on the other thread you mention:

> We should talk more about activation mechanisms and how users should be able 
> to actively resist them more.

Speedy Trial means that users with mining hashpower can block the initial 
Speedy Trial, and the failure to lock in ***should*** cause the developers to 
stop-and-listen.
If the developers fail to stop-and-listen, then a counter-UASF can be written 
which *rejects* blocks signalling *for* the upgrade, which will chainsplit from 
a pro-UASF `lockinontimeout=true`, but clients using the initial Speedy Trial 
code will follow whichever one has better hashpower.

If we assume that hashpower follows price, then users who are for / against a 
particular softfork will be able to resist the Speedy Trial, and if developers 
release a UASF `lockinontimeout=true` later, they will have the choice to reject 
running the UASF and even to run a counter-UASF.


Regards,
ZmnSCPxj


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-09 Thread ZmnSCPxj via bitcoin-dev
Good morning Bram,

> On Mon, Mar 7, 2022 at 7:06 PM ZmnSCPxj  wrote:
>
> > But cross-input signature aggregation is a nice-to-have we want for 
> > Bitcoin, and, to me, cross-input sigagg is not much different from 
> > cross-input puzzle/solution compression.
>
> Cross-input signature aggregation has a lot of headaches unless you're using 
> BLS signatures, in which case you always aggregate everything all the time 
> because it can be done after the fact noninteractively. In that case it makes 
> sense to have a special aggregated signature which always comes with a 
> transaction or block. But it might be a bit much to bundle both lisp and BLS 
> support into one big glop.

You misunderstand my point.

I am not saying "we should add sigagg and lisp together!"

I am pointing out that:

* We want to save bytes by having multiple inputs of a transaction use the same 
single signature (i.e. sigagg).

is not much different from:

* We want to save bytes by having multiple inputs of a transaction use the same 
`scriptPubKey` template.

> > For example you might have multiple HTLCs, with mostly the same code except 
> > for details like who the acceptor and offerrer are, exact hash, and 
> > timelock, and you could claim multiple HTLCs in a single tx and feed the 
> > details separately but the code for the HTLC is common to all of the HTLCs.
> > You do not even need to come from the same protocol if multiple protocols 
> > use the same code for implementing HTLC.
>
> HTLCs, at least in Chia, have embarrassingly little code in them. Like, so 
> little that there's almost nothing to compress.

In Bitcoin at least an HTLC has, if you remove the `OP_PUSH`es, by my count, 13 
bytes.
If you have a bunch of HTLCs you want to claim, you can reduce your witness 
data by 13 bytes minus whatever number of bytes you need to indicate this.
That amounts to about 3 vbytes per HTLC, which can be significant enough to be 
worth it (consider that Taproot moving away from encoded signatures saves only 
9 weight units per signature, i.e. about 2 vbytes).
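In weight terms (1 weight unit per witness byte, 4 weight units per vbyte):

    htlc_template_bytes = 13              # non-OP_PUSH opcodes of the HTLC
    print(htlc_template_bytes / 4.0)      # ~3.25 vbytes saved per HTLC, before
                                          # the cost of indicating the sharing
    taproot_sig_savings_wu = 9            # WU saved per signature vs encoded sigs
    print(taproot_sig_savings_wu / 4.0)   # ~2.25 vbytes, for comparison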

Do note that PTLCs remain more space-efficient though, so forget about HTLCs 
and just use PTLCs.

>
> > This does not apply to current Bitcoin since we no longer accept a SCRIPT 
> > from the spender, we now have a witness stack.
>
> My mental model of Bitcoin is to pretend that segwit was always there and the 
> separation of different sections of data is a semantic quibble.

This is not a semantic quibble --- `witness` contains only the equivalent of 
`OP_PUSH`es, while `scriptSig` can in theory contain non-`OP_PUSH` opcodes.
xref. `1 RETURN`.

As-is, with SegWit the spender no longer is able to provide any SCRIPT at all, 
but new opcodes may allow the spender to effectively inject any SCRIPT they 
want, once again, because `witness` data may now become code.

> But if they're fully baked into the scriptpubkey then they're opted into by 
> the recipient and there aren't any weird surprises.

This is really what I kinda object to.
Yes, "buyer beware", but consider that as the covenant complexity increases, 
the probability of bugs, intentional or not, sneaking in, increases as well.
And a bug is really "a weird surprise" --- xref TheDAO incident.

This makes me kinda wary of using such covenant features at all, and if stuff 
like `SIGHASH_ANYPREVOUT` or `OP_CHECKTEMPLATEVERIFY` are not added but must be 
reimplemented via a covenant feature, I would be saddened, as I now have to 
contend with the complexity of covenant features and carefully check that 
`SIGHASH_ANYPREVOUT`/`OP_CHECKTEMPLATEVERIFY` were implemented correctly.
True I also still have to check the C++ source code if they are implemented 
directly as opcodes, but I can read C++ better than frikkin Bitcoin SCRIPT.
Not to mention that I now have to review both the (more complicated due to more 
general) covenant feature implementation, *and* the implementation of 
`SIGHASH_ANYPREVOUT`/`OP_CHECKTEMPLATEVERIFY` in terms of the covenant feature.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-07 Thread ZmnSCPxj via bitcoin-dev
Good morning aj et al.,


> > They're radically different approaches and
> > it's hard to see how they mix. Everything in lisp is completely sandboxed,
> > and that functionality is important to a lot of things, and it's really
> > normal to be given a reveal of a scriptpubkey and be able to rely on your
> > parsing of it.
>
> The above prevents combining puzzles/solutions from multiple coin spends,
> but I don't think that's very attractive in bitcoin's context, the way
> it is for chia. I don't think it loses much else?

But cross-input signature aggregation is a nice-to-have we want for Bitcoin, 
and, to me, cross-input sigagg is not much different from cross-input 
puzzle/solution compression.

For example you might have multiple HTLCs, with mostly the same code except for 
details like who the acceptor and offerrer are, exact hash, and timelock, and 
you could claim multiple HTLCs in a single tx and feed the details separately 
but the code for the HTLC is common to all of the HTLCs.
You do not even need to come from the same protocol if multiple protocols use 
the same code for implementing HTLC.
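Concretely, the kind of compression I mean is that one HTLC template is revealed once and each input carries only its own details (hypothetical encoding; the template below is just a generic hash-or-timeout HTLC, not any specific protocol's script):

    shared_template = ("OP_IF OP_SHA256 <hash> OP_EQUALVERIFY <acceptor> "
                       "OP_ELSE <timelock> OP_CHECKLOCKTIMEVERIFY OP_DROP <offerrer> "
                       "OP_ENDIF OP_CHECKSIG")

    per_input_details = [
        {"hash": "aa" * 32, "acceptor": "pubkey_A1", "offerrer": "pubkey_O1", "timelock": 800000},
        {"hash": "bb" * 32, "acceptor": "pubkey_A2", "offerrer": "pubkey_O2", "timelock": 800144},
    ]
    # The template body is transmitted once; each input substitutes only
    # its own details, whichever protocol the HTLC came from.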

> > > There's two ways to think about upgradability here; if someday we want
> > > to add new opcodes to the language -- perhaps something to validate zero
> > > knowledge proofs or calculate sha3 or use a different ECC curve, or some
> > > way to support cross-input signature aggregation, or perhaps it's just
> > > that some snippets are very widely used and we'd like to code them in
> > > C++ directly so they validate quicker and don't use up as much block
> > > weight. One approach is to just define a new version of the language
> > > via the tapleaf version, defining new opcodes however we like.
> > > A nice side benefit of sticking with the UTXO model is that the soft fork
> > > hook can be that all unknown opcodes make the entire thing automatically
> > > pass.
>
> I don't think that works well if you want to allow the spender (the
> puzzle solution) to be able to use opcodes introduced in a soft-fork
> (eg, for graftroot-like behaviour)?

This does not apply to current Bitcoin since we no longer accept a SCRIPT from 
the spender, we now have a witness stack.
However, once we introduce opcodes that allow recursive covenants, it seems 
this is now a potential footgun if the spender can tell the puzzle SCRIPT to 
load some code that will then be used in the *next* UTXO created, and *then* 
the spender can claim it.

Hmmm Or maybe not?
If the spender can already tell the puzzle SCRIPT to send the funds to a SCRIPT 
that is controlled by the spender, the spender can already tell the puzzle 
SCRIPT to forward the funds to a pubkey the spender controls.
So this seems to be more like "do not write broken SCRIPTs"?

> > > > - serialization seems to be a bit verbose -- 100kB of serialized clvm
> > > >    code from a random block gzips to 60kB; optimising the serialization
> > > >    for small lists, and perhaps also for small literal numbers might be
> > > >    a feasible improvement; though it's not clear to me how frequently
> > > >    serialization size would be the limiting factor for cost versus
> > > >    execution time or memory usage.
> > > > A lot of this is because there's a hook for doing compression at the 
> > > > consensus layer which isn't being used aggressively yet. That one has 
> > > > the downside that the combined cost of transactions can add up very 
> > > > nonlinearly, but when you have constantly repeated bits of large 
> > > > boilerplate it gets close and there isn't much of an alternative. That 
> > > > said even with that form of compression maxxed out it's likely that 
> > > > gzip could still do some compression but that would be better done in 
> > > > the database and in wire protocol formats rather than changing the 
> > > > format which is hashed at the consensus layer.
> > > > How different is this from "jets" as proposed in Simplicity?
>
> Rather than a "transaction" containing "inputs/outputs", chia has spend
> bundles that spend and create coins; and spend bundles can be merged
> together, so that a block only has a single spend bundle. That spend
> bundle joins all the puzzles (the programs that, when hashed match
> the scriptPubKey) and solutions (scriptSigs) for the coins being spent
> together.
>
> I /think/ the compression hook would be to allow you to have the puzzles
> be (re)generated via another lisp program if that was more efficient
> than just listing them out. But I assume it would be turtles, err,
> lisp all the way down, no special C functions like with jets.

Eh, you could use Common LISP or a recent-enough RnRS Scheme to write a 
cryptocurrency node software, so "special C function" seems to overprivilege 
C...
I suppose the more proper way to think of this is that jets are *equivalent to* 
some code in the hosted language, and have an *efficient implementation* in the 
hosting language.
In this view, the current OG Bitcoin SCRIPT is the hosted 

Re: [bitcoin-dev] CTV vaults in the wild

2022-03-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Antoine,

> Hi James,
>
> Interesting to see a sketch of a CTV-based vault design !
>
> I think the main concern I have with any hashchain-based vault design is the 
> immutability of the flow paths once the funds are locked to the root vault 
> UTXO. By immutability, I mean there is no way to modify the 
> unvault_tx/tocold_tx transactions and therefore recover from transaction 
> fields
> corruption (e.g a unvault_tx output amount superior to the root vault UTXO 
> amount) or key endpoints compromise (e.g the cold storage key being stolen).
>
> Especially corruption, in the early phase of vault toolchain deployment, I 
> believe it's reasonable to expect bugs to slip in affecting the output amount 
> or relative-timelock setting correctness (wrong user config, miscomputation 
> from automated vault management, ...) and thus definitively freezing the 
> funds. Given the amounts at stake for which vaults are designed, errors are 
> likely to be far more costly than the ones we see in the deployment of 
> payment channels.
>
> It might be more conservative to leverage a presigned transaction data design 
> where every decision point is a multisig. I think this design gets you the 
> benefit to correct or adapt if all the multisig participants agree on. It 
> should also achieve the same than a key-deletion design, as long as all
> the vault's stakeholders are participating in the multisig, they can assert 
> that flow paths are matching their spending policy.

Have not looked at the actual vault design, but I observe that Taproot allows 
for a master key (which can be an n-of-n, or a k-of-n with setup (either 
expensive or trusted, but I repeat myself)) to back out of any contract.

This master key could be an "even colder" key that you bury in the desert to be 
guarded over by generations of Fremen riding giant sandworms until the Bitcoin 
Path prophesied by the Kwisatz Haderach, Satoshi Nakamoto, arrives.
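Structurally it is just this (a conceptual sketch only; a plain hash stands in for the real elliptic-curve tweak of BIP341):

    import hashlib

    def taproot_like_commitment(master_internal_key: bytes, script_tree_root: bytes) -> bytes:
        # The output commits to both the "even colder" master key and the
        # tree of vault scripts; spending is either a key-path signature by
        # the master n-of-n (backing out of any contract without revealing
        # it) or a script-path reveal of one leaf plus its merkle proof.
        return hashlib.sha256(master_internal_key + script_tree_root).digest()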

> Of course, relying on presigned transactions comes with higher assumptions on 
> the hardware hosting the flow keys. Though as hashchain-based vault design 
> imply "secure" key endpoints (e.g ), as a vault user you're 
> still encumbered with the issues of key management, it doesn't relieve you to 
> find trusted hardware. If you want to avoid multiplying devices to trust, I 
> believe flow keys can be stored on the same keys guarding the UTXOs, before 
> sending to vault custody.
>
> I think the remaining presence of trusted hardware in the vault design might 
> lead one to ask what's the security advantage of vaults compared to classic 
> multisig setup. IMO, it's introducing the idea of privileges in the coins 
> custody : you set up the flow paths once for all at setup with the highest 
> level of privilege and then anytime you do a partial unvault you don't need 
> the same level of privilege. Partial unvault authorizations can come with a 
> reduced set of verifications, at lower operational costs. That said, I think 
> this security advantage is only relevant in the context of recursive design, 
> where the partial unvault sends back the remaining funds to vault UTXO (not 
> the design proposed here).
>
> Few other thoughts on vault design, more minor points.
>
> "If Alice is watching the mempool/chain, she will see that the unvault 
> transaction has been unexpectedly broadcast,"
>
> I think you might need to introduce an intermediary, out-of-chain protocol 
> step where the unvault broadcast is formally authorized by the vault 
> stakeholders. Otherwise it's hard to qualify "unexpected", as hot key 
> compromise might not be efficiently detected.

Thought: It would be nice if Alice could use Lightning watchtowers as well, 
that would help increase the anonymity set of both LN watchtower users and 
vault users.

> "With  OP_CTV, we can ensure that the vault operation is enforced by 
> consensus itself, and the vault transaction data can be generated 
> deterministically without additional storage needs."
>
> Don't you also need the endpoint scriptPubkeys (, ), 
> the amounts and CSV value ? Though I think you can grind amounts and CSV 
> value in case of loss...But I'm not sure if you remove the critical data 
> persistence requirement, just reduce the surface.
>
> "Because we want to be able to respond immediately, and not have to dig out 
> our cold private keys, we use an additional OP_CTV to encumber the "swept" 
> coins for spending by only the cold wallet key."
>
> I think a robust vault deployment would imply the presence of a set of 
> watchtowers, redundant entities able to broadcast the cold transaction in 
> reaction to unexpected unvault. One feature which could be interesting is 
> "tower accountability", i.e knowing which tower initiated the broadcast, 
> especially if it's a faultive one. One way is to watermark the cold 
> transaction (e.g tweak nLocktime to past value). Though I believe with CTV 
> you would need as much different hashes than towers included in 

[bitcoin-dev] Jets (Was: `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT)

2022-03-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

Changed subject since this is only tangentially related to `OP_FOLD`.

> Let me organize my thoughts on this a little more clearly. There's a couple 
> possibilities I can think of for a jet-like system:
>
> A. We could implement jets now without a consensus change, and without 
> requiring all nodes to upgrade to new relay rules. Probably. This would give 
> upgraded nodes improved validation performance and many upgraded nodes relay 
> savings (transmitting/receiving fewer bytes). Transactions would be weighted 
> the same as without the use of jets tho.
> B. We could implement the above + lighter weighting by using a soft fork to 
> put the jets in a part of the blockchain hidden from unupgraded nodes, as you 
> mentioned. 
> C. We could implement the above + the jet registration idea in a soft fork. 
>
> For A:
>
> * Upgraded nodes query each connection for support of jets in general, and 
> which specific jets they support.
> * For a connection to another upgraded node that supports the jet(s) that a 
> transaction contains, the transaction is sent verbatim with the jet included 
> in the script (eg as some fake opcode line like 23 OP_JET, indicating to 
> insert standard jet 23 in its place). When validation happens, or when a 
> miner includes it in a block, the jet opcode call is replaced with the script 
> it represents so hashing happens in a way that is recognizable to unupgraded 
> nodes.
> * For a connection to a non-upgraded node that doesn't support jets, or an 
> upgraded node that doesn't support the particular jet included in the script, 
> the jet opcode call is replaced as above before sending to that node. In 
> addition, some data is added to the transaction that unupgraded nodes 
> propagate along but otherwise ignore. Maybe this is extra witness data, maybe 
> this is some kind of "annex", or something else. But that data would contain 
> the original jet opcode (in this example "23 OP_JET") so that when that 
> transaction data reaches an upgraded node that recognizes that jet again, it 
> can swap that back in, in place of the script fragment it represents. 
>
> I'm not 100% sure the required mechanism I mentioned of "extra ignored data" 
> exists, and if it doesn't, then all nodes would at least need to be upgraded 
> to support that before this mechanism could fully work.

I am not sure that can even be *made* to exist.
It seems to me a trivial way to launch a DDoS: Just ask a bunch of fullnodes to 
add this 1Mb of extra ignored data in this tiny 1-input-1-output transaction so 
I pay only a small fee if it confirms but the bandwidth of all fullnodes is 
wasted transmitting and then ignoring this block of data.

> But even if such a mechanism doesn't exist, a jet script could still be used, 
> but it would be clobbered by the first nonupgraded node it is relayed to, and 
> can't then be converted back (without using a potentially expensive lookup 
> table as you mentioned). 

Yes, and people still run Bitcoin Core 0.8.x.

> > If the script does not weigh less if it uses a jet, then there is no 
> > incentive for end-users to use a jet
>
> That's a good point. However, I'd point out that nodes do lots of things that 
> there's no individual incentive for, and this might be one where people 
> either altruistically use jets to be lighter on the network, or use them in 
> the hopes that the jet is accepted as a standard, reducing the cost of their 
> scripts. But certainly a direct incentive to use them is better. Honest nodes 
> can favor connecting to those that support jets.

Since you do not want a dynamic lookup table (because of the cost of lookup), 
how do new jets get introduced?
If a new jet requires coordinated deployment over the network, then you might 
as well just softfork and be done with it.
If a new jet can just be entered into some configuration file, how do you 
coordinate those between multiple users so that there *is* some benefit for 
relay?

> >if a jet would allow SCRIPT weights to decrease, upgraded nodes need to hide 
> >them from unupgraded nodes
> > we have to do that by telling unupgraded nodes "this script will always 
> > succeed and has weight 0"
>
> Right. It doesn't have to be weight zero, but that would work fine enough. 
>
> > if everybody else has not upgraded, a user of a new jet has no security.
>
> For case A, no security is lost. For case B you're right. For case C, once 
> nodes upgrade to the initial soft fork, new registered jets can take 
> advantage of relay-cost weight savings (defined by the soft fork) without 
> requiring any nodes to do any upgrading, and nodes could be further upgraded 
> to optimize the validation of various of those registered jets, but those 
> processing savings couldn't change the weighting of transactions without an 
> additional soft fork.
>
> > Consider an attack where I feed you a SCRIPT that validates trivially but 
> > is filled with almost-but-not-quite-jettable code
>
> I 

Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Bram,

> while in the coin set model each puzzle (scriptpubkey) gets run and either 
> assert fails or returns a list of extra conditions it has, possibly including 
> timelocks and creating new coins, paying fees, and other things.

Does this mean it basically gets recursive covenants?
Or is a condition in this list of conditions written in a more restrictive 
language which itself cannot return a list of conditions?


> >  - serialization seems to be a bit verbose -- 100kB of serialized clvm
> >    code from a random block gzips to 60kB; optimising the serialization
> >    for small lists, and perhaps also for small literal numbers might be
> >    a feasible improvement; though it's not clear to me how frequently
> >    serialization size would be the limiting factor for cost versus
> >    execution time or memory usage.
>
> A lot of this is because there's a hook for doing compression at the 
> consensus layer which isn't being used aggressively yet. That one has the 
> downside that the combined cost of transactions can add up very nonlinearly, 
> but when you have constantly repeated bits of large boilerplate it gets close 
> and there isn't much of an alternative. That said even with that form of 
> compression maxxed out it's likely that gzip could still do some compression 
> but that would be better done in the database and in wire protocol formats 
> rather than changing the format which is hashed at the consensus layer.

How different is this from "jets" as proposed in Simplicity?

> > Pretty much all the opcodes in the first section are directly from chia
> > lisp, while all the rest are to complete the "bitcoin" functionality.
> > The last two are extensions that are more food for thought than a real
> > proposal.
>
> Are you thinking of this as a completely alternative script format or an 
> extension to bitcoin script? They're radically different approaches and it's 
> hard to see how they mix. Everything in lisp is completely sandboxed, and 
> that functionality is important to a lot of things, and it's really normal to 
> be given a reveal of a scriptpubkey and be able to rely on your parsing of it.

I believe AJ is proposing a completely alternative format to OG Bitcoin SCRIPT.
Basically, as I understand it, nothing in the design of Tapscript versions 
prevents us from completely changing the interpretation of Tapscript bytes and 
using a completely different language.
That is, we could designate a new Tapscript version as completely different 
from OG Bitcoin SCRIPT.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT

2022-03-06 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> Even changing the weight of a transaction using jets (ie making a script 
> weigh less if it uses a jet) could be done in a similar way to how segwit 
> separated the witness out.

The way we did this in SegWit was to *hide* the witness from unupgraded nodes, 
who are then unable to validate using the upgraded rules (because you are 
hiding the data from them!), which is why I bring up:

> > If a new jet is released but nobody else has upgraded, how bad is my 
> > security if I use the new jet?
>
> Security wouldn't be directly affected, only (potentially) cost. If your 
> security depends on cost (eg if it depends on pre-signed transactions and is 
> for some reason not easily CPFPable or RBFable), then security might be 
> affected if the unjetted scripts costs substantially more to mine. 

So if we make a script weigh less if it uses a jet, we have to do that by 
telling unupgraded nodes "this script will always succeed and has weight 0", 
just like SegWit `scriptPubKey`s of the form `0 <witness program>` are, to 
pre-SegWit nodes, spendable with an empty `scriptSig`.
At least, that is how I always thought SegWit worked.

Otherwise, a jet would never allow SCRIPT weights to decrease, as unupgraded 
nodes who do not recognize the jet will have to be fed the entire code of the 
jet and would consider the weight to be the expanded, uncompressed code.
And weight is a consensus parameter, blocks are restricted to 4Mweight.
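
To make the weight point concrete, here is a minimal sketch (ordinary Python, 
with made-up byte counts) of the consensus weight formula from BIP 141 applied 
to a hypothetical jetted versus expanded script; only if unupgraded nodes never 
see the expanded code can the jetted form actually weigh less:

    # Sketch only: the sizes below are hypothetical, not measured.
    def tx_weight(base_size, witness_size):
        # BIP 141: weight = 4 * base size + witness size
        return 4 * base_size + witness_size

    base_bytes   = 150   # hypothetical non-witness part of the transaction
    expanded_jet = 400   # hypothetical expanded jet body, in witness bytes
    jetted_call  = 3     # hypothetical "23 OP_JET" reference, in witness bytes

    print(tx_weight(base_bytes, expanded_jet))  # 1000: what an unupgraded node counts
    print(tx_weight(base_bytes, jetted_call))   # 603: what we would like it to count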

So if a jet would allow SCRIPT weights to decrease, upgraded nodes need to hide 
them from unupgraded nodes (else the weight calculations of unupgraded nodes 
will hit consensus checks), then if everybody else has not upgraded, a user of 
a new jet has no security.

Not even the `softfork` form of chialisp that AJ is proposing in the other 
thread would help --- unupgraded nodes will simply skip over validation of the 
`softfork` form.

If the script does not weigh less if it uses a jet, then there is no incentive 
for end-users to use a jet, as they would still pay the same price anyway.

Now you might say "okay even if no end-users use a jet, we could have fullnodes 
recognize jettable code and insert them automatically on transport".
But the lookup table for that could potentially be large once we have a few 
hundred jets (and I think Simplicity already *has* a few hundred jets, so 
additional common jets would just add to that?), jettable code could start at 
arbitrary offsets of the original SCRIPT, and jettable code would likely have 
varying size, that makes for a difficult lookup table.
In particular that lookup table has to be robust against me feeding it some 
section of code that is *almost* jettable but suddenly has a different opcode 
at the last byte, *and* handle jettable code of varying sizes (because of 
course some jets are going to be more compressible than others).
Consider an attack where I feed you a SCRIPT that validates trivially but is 
filled with almost-but-not-quite-jettable code (and again, note that expanded 
forms of jets are varying sizes), your node has to look up all those jets but 
then fails the last byte of the almost-but-not-quite-jettable code, so it ends 
up not doing any jetting.
And since the SCRIPT validated your node feels obligated to propagate it too, 
so now you are helping my DDoS.
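
As a rough illustration (a toy Python sketch with a made-up jet table; real 
detection would be far more involved), a naive relay-side jet matcher has to 
scan the SCRIPT against every known expansion, and an almost-but-not-quite 
match forces it to do nearly all of that work before failing:

    # Toy sketch: one hypothetical jet whose expanded body is 64 bytes.
    JET_TABLE = {
        b"\x51\x52\x93\x87" * 16: 23,   # expanded body -> jet id (both made up)
    }

    def find_jets(script: bytes):
        matches = []
        for body, jet_id in JET_TABLE.items():
            for i in range(len(script) - len(body) + 1):
                # the comparison only fails at the very end for an "almost" match
                if script[i:i + len(body)] == body:
                    matches.append((i, jet_id))
        return matches

    almost = (b"\x51\x52\x93\x87" * 16)[:-1] + b"\x00"  # last byte differs
    print(find_jets(almost * 200))  # lots of scanning, zero jets found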

> >  I suppose the point would be --- how often *can* we add new jets?
>
> A simple jet would be something that's just added to bitcoin software and 
> used by nodes that recognize it. This would of course require some debate and 
> review to add it to bitcoin core or whichever bitcoin software you want to 
> add it to. However, the idea I proposed in my last email would allow anyone 
> to add a new jet. Then each node can have their own policy to determine which 
> jets of the ones registered it wants to keep an index of and use. On its own, 
> it wouldn't give any processing power optimization, but it would be able to 
> do the same kind of script compression you're talking about op_fold allowing. 
> And a list of registered jets could inform what jets would be worth building 
> an optimized function for. This would require a consensus change to implement 
> this mechanism, but thereafter any jet could be registered in userspace.

Certainly a neat idea.
Again, lookup table tho.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-06 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

> >On the other hand, the above, where the oracle determines *when* the fund 
> >can be spent, can also be implemented by a simple 2-of-3, and called an 
> >"escrow".
>
> I think something that is underappreciated by protocol developers is the fact 
> that multisig requires interactiveness at settlement time. The multisig 
> escrow provider needs to know the exact details about the bitcoin transaction 
> and needs to issue a signature (gotta sign the outpoints, the fee, the payout 
> addresses etc).
>
> With PTLCs that isn't the case, and thus gives a UX improvement for Alice & 
> Bob that are using the escrow provider. The oracle (or escrow) just issues 
> attestations. Bob or Alice take those attestations and complete the adaptor 
> signature. Instead of a bi-directional communication requirement (the oracle 
> working with Bob or Alice to build the bitcoin tx) at settlement time there 
> is only unidirectional communication required. Non-interactive settlement is 
> one of the big selling points of DLC style applications IMO.
>
> One of the unfortunate things about LN is the interactiveness requirements 
> are very high, which makes developing applications hard (especially mobile 
> applications). I don't think this solves lightning's problems, but it is a 
> worthy goal to reduce interactiveness requirements with new bitcoin 
> applications to give better UX.

Good point.

I should note that 2-of-3 contracts are *not* transportable over LN, but PTLCs 
*are* transportable.
So the idea still has merit for LN, as a replacement for 2-of-3 escrows.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-05 Thread ZmnSCPxj via bitcoin-dev
Good morning Russell,

> On Sat, Mar 5, 2022 at 8:41 AM Jeremy Rubin via bitcoin-dev 
>  wrote:
>
> > It seems like a decent concept for exploration.
> >
> > AJ, I'd be interested to know what you've been able to build with Chia Lisp 
> > and what your experience has been... e.g. what does the Lightning Network 
> > look like on Chia?
> >
> > One question that I have had is that it seems like to me that neither 
> > simplicity nor chia lisp would be particularly suited to a ZK prover...
>
> Not that I necessarily disagree with this statement, but I can say that I 
> have experimented with compiling Simplicity to Boolean circuits.  It was a 
> while ago, but I think the result of compiling my SHA256 program was within 
> an order of magnitude of the hand made SHA256 circuit for bulletproofs.

"Within" can mean "larger" or "smaller" in this context, which was it?
From what I understand, compilers for ZK-provable circuits are still not as 
effective as humans, so I would assume "larger", but I would be much interested 
if it is "smaller"!

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT

2022-03-05 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> It sounds like the primary benefit of op_fold is bandwidth savings. 
> Programming as compression. But as you mentioned, any common script could be 
> implemented as a Simplicity jet. In a world where Bitcoin implements jets, 
> op_fold would really only be useful for scripts that can't use jets, which 
> would basically be scripts that aren't very often used. But that inherently 
> limits the usefulness of the opcode. So in practice, I think it's likely that 
> jets cover the vast majority of use cases that op fold would otherwise have.

I suppose the point would be --- how often *can* we add new jets?
Are new jets consensus critical?
If a new jet is released but nobody else has upgraded, how bad is my security 
if I use the new jet?
Do I need to debate `LOT` *again* if I want to propose a new jet?

> A potential benefit of op fold is that people could implement smaller scripts 
> without buy-in from a relay level change in Bitcoin. However, even this could 
> be done with jets. For example, you could implement a consensus change to add 
> a transaction type that declares a new script fragment to keep a count of, 
> and if the script fragment is used enough within a timeframe (eg 1 
> blocks) then it can thereafter be referenced by an id like a jet could be. 
> I'm sure someone's thought about this kind of thing before, but such a thing 
> would really relegate the compression abilities of op fold to just the most 
> uncommon of scripts. 
>
> > * We should provide more *general* operations. Users should then combine 
> > those operations to their specific needs.
> > * We should provide operations that *do more*. Users should identify their 
> > most important needs so we can implement them on the blockchain layer.
>
> That's a useful way to frame this kind of problem. I think the answer is, as 
> it often is, somewhere in between. Generalization future-proofs your system. 
> But at the same time, the boundary conditions of that generalized 
> functionality should still be very well understood before being added to 
> Bitcoin. The more general, the harder to understand the boundaries. So imo we 
> should be implementing the most general opcodes that we are able to reason 
> fully about and come to a consensus on. Following that last constraint might 
> lead to not choosing very general opcodes.

Yes, that latter part is what I am trying to target with `OP_FOLD`.
As I point out, given the restrictions I am proposing, `OP_FOLD` (and any 
bounded loop construct with similar restrictions) is implementable in current 
Bitcoin SCRIPT, so it is not an increase in attack surface.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-05 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

> I think this proposal describes arbitrary lines of pre-approved credit from a 
> bitcoin wallet. The line can be drawn down with oracle attestations. You can 
> mix in locktimes on these pre-approved lines of credit if you would like to 
> rate limit, or ignore rate limiting and allow the full utxo to be spent by 
> the borrower. It really is contextual to the use case IMO.

Ah, that seems more useful.

Here is an example application that might benefit from this scheme:

I am commissioning some work from some unbranded workperson.
I do not know how long the work will take, and I do not trust the workperson to 
accurately tell me how complete the work is.
However, both I and the workperson trust a branded third party (the oracle) who 
can judge the work for itself and determine if it is complete or not.
So I create a transaction whose signature can be completed only if the oracle 
releases a proper scalar and hand it over to the workperson.
Then the workperson performs the work, then asks the oracle to judge if the 
work has been completed, and if so, the work can be compensated.
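
(As a toy sketch, with small-integer arithmetic standing in for the real 
elliptic-curve math, the completion step looks like this: the workperson holds 
a pre-signature s' = s - t, and only the oracle's attestation scalar t turns it 
into the valid signature s.)

    # Toy numbers only; this is NOT real cryptography, just the completion relation.
    N = 2**64 - 59         # stand-in for the signature group order

    s     = 123456789012345            # the valid signature scalar
    t     = 987654321098765 % N        # oracle's attestation scalar
    s_pre = (s - t) % N                # adaptor ("pre") signature handed over up front

    completed = (s_pre + t) % N
    print(completed == s)              # True: the attestation completes the signature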

On the other hand, the above, where the oracle determines *when* the fund can 
be spent, can also be implemented by a simple 2-of-3, and called an "escrow".
After all, the oracle attestation can be a partial signature as well, not just 
a scalar.
Is there a better application for this scheme?

I suppose if the oracle attestation is intended to be shared among multiple 
such transactions?
There may be multiple PTLCs, that are triggered by a single oracle?

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

Umm `OP_ANNEX` seems boring 


> It seems like one good option is if we just go on and banish the OP_ANNEX. 
> Maybe that solves some of this? I sort of think so. It definitely seems like 
> we're not supposed to access it via script, given the quote from above:
>
> Execute the script, according to the applicable script rules[11], using the 
> witness stack elements excluding the script s, the control block c, and the 
> annex a if present, as initial stack.
> If we were meant to have it, we would have not nixed it from the stack, no? 
> Or would have made the opcode for it as a part of taproot...
>
> But recall that the annex is committed to by the signature.
>
> So it's only a matter of time till we see some sort of Cat and Schnorr Tricks 
> III the Annex Edition that lets you use G cleverly to get the annex onto the 
> stack again, and then it's like we had OP_ANNEX all along, or without CAT, at 
> least something that we can detect that the value has changed and cause this 
> satisfier looping issue somehow.

... Never mind I take that back.

Hmmm.

Actually if the Annex is supposed to be ***just*** for adding weight to the 
transaction so that we can do something like increase limits on SCRIPT 
execution, then it does *not* have to be covered by any signature.
It would then be third-party malleable, but suppose we have a "valid" 
transaction on the mempool where the Annex weight is the minimum necessary:

* If a malleated transaction has a too-low Annex, then the malleated 
transaction fails validation and the current transaction stays in the mempool.
* If a malleated transaction has a higher Annex, then the malleated transaction 
has lower feerate than the current transaction and cannot evict it from the 
mempool.
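
(Concretely, a minimal sketch with made-up numbers: a third-party malleator can 
add weight but not fee, so padding the Annex strictly lowers the feerate and 
cannot displace the original under feerate-based replacement.)

    def feerate(fee_sats, weight_wu):
        return fee_sats / weight_wu

    fee              = 1000        # sats; a third-party malleator cannot change this
    original_weight  = 800         # wu, with the minimum-necessary annex
    malleated_weight = 800 + 400   # wu, after padding the annex

    print(feerate(fee, original_weight))   # 1.25 sat/wu
    print(feerate(fee, malleated_weight))  # ~0.83 sat/wu: strictly worse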

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-04 Thread ZmnSCPxj via bitcoin-dev


Good morning aj,

> On Sun, Feb 27, 2022 at 04:34:31PM +0000, ZmnSCPxj via bitcoin-dev wrote:
>
> > In reaction to this, AJ Towns mailed me privately about some of his
> > thoughts on this insane `OP_EVICT` proposal.
> > He observed that we could generalize the `OP_EVICT` opcode by
> > decomposing it into smaller parts, including an operation congruent
> > to the Scheme/Haskell/Scala `map` operation.
>
> At much the same time Zman was thinking about OP_FOLD and in exactly the
> same context, I was wondering what the simplest possible language that
> had some sort of map construction was -- I mean simplest in a "practical
> engineering" sense; I think Simplicity already has the Euclidean/Peano
> "least axioms" sense covered.
>
> The thing that's most appealing to me about bitcoin script as it stands
> (beyond "it works") is that it's really pretty simple in an engineering
> sense: it's just a "forth" like system, where you put byte strings on a
> stack and have a few operators to manipulate them. The alt-stack, and
> supporting "IF" and "CODESEPARATOR" add a little additional complexity,
> but really not very much.
>
> To level-up from that, instead of putting byte strings on a stack, you
> could have some other data structure than a stack -- eg one that allows
> nesting. Simple ones that come to mind are lists of (lists of) byte
> strings, or a binary tree of byte strings [0]. Both those essentially
> give you a lisp-like language -- lisp is obviously all about lists,
> and a binary tree is just made of things or pairs of things, and pairs
> of things are just another way of saying "car" and "cdr".
>
> A particular advantage of lisp-like approaches is that they treat code
> and data exactly the same -- so if we're trying to leave the option open
> for a transaction to supply some unexpected code on the witness stack,
> then lisp handles that really naturally: you were going to include data
> on the stack anyway, and code and data are the same, so you don't have
> to do anything special at all. And while I've never really coded in
> lisp at all, my understanding is that its biggest problems are all about
> doing things efficiently at large scales -- but script's problem space
> is for very small scale things, so there's at least reason to hope that
> any problems lisp might have won't actually show up for this use case.

I heartily endorse LISP --- it has a trivial implementation of `eval` that is 
easily implementable once you have defined a proper data type in 
preferred-language-here to represent LISP datums.
Combine it with your idea of committing to a max-number-of-operations (which 
increases the weight of the transaction) and you may very well have something 
viable.
(In particular, even though `eval` is traditionally (re-)implemented in LISP 
itself, the limit on max-number-of-operations means any `eval` implementation 
within the same language is also forcibly made total.)
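
As a minimal sketch (Python rather than LISP, and nothing consensus-grade), a 
budgeted evaluator like the one below is trivially total: every reduction step 
spends from an explicit operation budget, so evaluation always terminates.

    def lisp_eval(expr, env, budget):
        if budget <= 0:
            raise RuntimeError("operation budget exhausted")
        budget -= 1
        if isinstance(expr, str):          # symbol: look it up
            return env[expr], budget
        if not isinstance(expr, list):     # literal (number, bytes, ...)
            return expr, budget
        op, *args = expr                   # application: evaluate args, then apply
        vals = []
        for a in args:
            v, budget = lisp_eval(a, env, budget)
            vals.append(v)
        return env[op](*vals), budget

    env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "x": 5}
    print(lisp_eval(["+", ["*", "x", "x"], 1], env, budget=100))   # (26, 95)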

Of note is that the supposed "problem at scale" of LISP is, as I understand it, 
due precisely to its code and data being homoiconic to each other.
This homoiconicity greatly tempts LISP programmers to use macros, i.e. programs 
that generate other programs from some input syntax.
Homoiconicity means that one can manipulate code just as easily as the data, 
and thus LISP macros are a trivial extension on the language.
This allows each LISP programmer to just code up a macro to expand common 
patterns.
However, each LISP programmer then ends up implementing *different*, but 
*similar* macros from each other.
Unfortunately, programming at scale requires multiple programmers speaking the 
same language.
Then programming at scale is hampered because each LISP programmer has their 
own private dialect of LISP (formed from the common LISP language and from 
their own extensive set of private macros) and intercommunication between them 
is hindered by the fact that each one speaks their own private dialect.
Some LISP-like languages (e.g. Scheme) have classically targeted a "small" 
subset of absolutely-necessary operations, and each implementation of the 
language immediately becomes a new dialect due to having slightly different 
forms for roughly the same convenience function or macro, and *then* individual 
programmers build their own private dialect on top.
For Scheme specifically, R7RS has targeted providing a "large" standard as 
well, as did R6RS (which only *had* a "large" standard), but individual Scheme 
implementations have not always liked to implement *all* the "large" standard.

Otherwise, every big C program contains a half-assed implementation of half of 
Common LISP, so 


> -   I don't think execution costing takes into account how much memory
> 

Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-03-04 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,

> > Continuous operation of the sidechain then implies a constant stream of 
> > 32-byte commitments, whereas continuous operation of a channel factory, in 
> > the absence of membership set changes, has 0 bytes per block being 
> > published.
>
> The sidechain can push zero bytes on-chain, just by placing a sidechain hash 
> in OP_RETURN inside TapScript. Then, every sidechain node can check that 
> "this sidechain hash is connected with this Taproot address", without pushing 
> 32 bytes on-chain.

The Taproot address itself has to take up 32 bytes onchain, so this saves 
nothing.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-04 Thread ZmnSCPxj via bitcoin-dev


Good morning Chris,

Quick question.

How does this improve over just handing over `nLockTime`d transactions?


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-27 Thread ZmnSCPxj via bitcoin-dev
Good morning Paul,

> On 2/26/2022 9:00 PM, ZmnSCPxj wrote:
>
> > ...
> >
> > > Such a technique would need to meet two requirements (or, so it seems to 
> > > me):
> > > #1: The layer1 UTXO (that defines the channel) can never change (ie, the 
> > > 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay 
> > > what-they-were when the channel was opened).
> > > #2: The new part-owners (who are getting coins from the rich man), will 
> > > have new pubkeys which are NOT known, until AFTER the channel is opened 
> > > and confirmed on the blockchain.
> > >
> > > Not sure how you would get both #1 and #2 at the same time. But I am not 
> > > up to date on the latest LN research.
> >
> > Yes, using channel factories.
>
> I think you may be wrong about this.
> Channel factories do not meet requirement #2, as they cannot grow to onboard 
> new users (ie, new pubkeys).
> The factory-open requires that people pay to (for example), a 5-of-5 
> multisig. So all 5 fixed pubkeys must be known, before the factory-open is 
> confirmed, not after.

I am not wrong about this.
You can cut-through the closure of one channel factory with the opening of 
another channel factory with the same 5 fixed pubkeys *plus* an additional 100 
new fixed pubkeys.
With `SIGHASH_ANYPREVOUT` (which we need to Decker-Russell-Osuntokun-based 
channel factories) you do not even need to make new signatures for the existing 
channels, you just reuse the existing channel signatures and whether or not the 
*single*, one-input-one-output, close+reopen transaction is confirmed or not, 
the existing channels remain usable (the signatures can be used on both 
pre-reopen and post-reopen).

That is why I said changing the membership set requires onchain action.
But the onchain action is *only* a 1-input-1-output transaction, and with 
Taproot the signature needed is just 64 bytes witness (1 weight unit per byte), 
I had several paragraphs describing that, did you not read them?

Note as well that with sidechains, onboarding also requires action on the 
mainchain, in the form of a sideblock merge-mined on the mainchain.

>
> > We assume that onboarding new members is much rarer than existing members 
> > actually paying each other
>
> Imagine that Bitcoin could only onboard 5 new users per millennium, but once 
> onboarded they had payment nirvana (could transact hundreds of trillions of 
> times per second, privately, smart contracts, whatever).
> Sadly, the payment nirvana would not matter. The low onboarding rate would 
> kill the project.

Fortunately even without channel factories the onboarding rate of LN is much 
much higher than that.
I mean, like, LN *is* live and *is* working, today, and (at least where I have 
looked, but I could be provincial) has a lot more onboarding activity than 
half-hearted sidechains like Liquid or Rootstock.

> The difference between the two rates [onboarding and payment], is not 
> relevant. EACH rate must meet the design goal.
> It is akin to saying: " Our car shifts from park to drive in one-millionth of 
> a second, but it can only shift into reverse once per year; but that is OK 
> because 'we assume that going in reverse is much rarer than driving forward' 
> ".

Your numbers absolutely suck and have no basis in reality, WTF.
Even without batched channel openings and a typical transaction of 2 inputs, 1 
LN channel, and a change output, you can onboard ~1250 channels per mainchain 
block (admittedly, without any other activity).
Let us assume every user needs 5 channels on average and that is still 250 
users per 10 minutes.
I expect channel factories to increase that by about 10x to 100x more, and then 
you are going to hit the issue of getting people to *use* Bitcoin rather than 
many users wanting to get in but being unable to due to block size limits.
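
(Back-of-the-envelope, as a sketch; the per-open weight below is an assumption 
picked to match the rough figure above, not a measurement:)

    BLOCK_WEIGHT      = 4_000_000   # consensus limit, weight units
    OPEN_TX_WEIGHT    = 3_200       # assumed: 2 inputs, 1 channel output, 1 change output
    CHANNELS_PER_USER = 5

    channels_per_block = BLOCK_WEIGHT // OPEN_TX_WEIGHT           # 1250
    users_per_block    = channels_per_block // CHANNELS_PER_USER  # 250
    print(channels_per_block, users_per_block)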

>
> > Continuous operation of the sidechain then implies a constant stream of 
> > 32-byte commitments, whereas continuous operation of a channel factory, in 
> > the absence of membership set changes, has 0 bytes per block being 
> > published.
>
> That's true, but I think you have neglected to actually take out your 
> calculator and run the numbers.
>
> Hypothetically, 10 largeblock-sidechains would be 320 bytes per block 
> (00.032%, essentially nothing).
> Those 10, could onboard 33% of the planet in a single month [footnote], even 
> if each sc-onboard required an average of 800 sc-bytes.
>
> Certainly not a perfect idea, as the SC onboarding rate is the same as the 
> payment rate. But once they are onboarded, those users can immediately join 
> the LN *from* their sidechain. (All of the SC LNs would be interoperable.)
>
> Such a strategy would take enormous pressure *off* of layer1 (relative to the 
> "LN only" strategy). The layer1 blocksize could even **shrink** from 4 MB 
> (wu) to 400 kb, or lower. That would cancel out the 320 bytes of overhead, 
> many hundreds of times over.
>
> Paul
>
> [footnote] Envelope math, 10 sidechains, 

[bitcoin-dev] `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT

2022-02-27 Thread ZmnSCPxj via bitcoin-dev
`OP_FOLD`: A Looping Construct For Bitcoin SCRIPT
=

(This writeup requires at least some programming background, which I
expect most readers of this list have.)

Recently, some rando was ranting on the list about this weird crap
called `OP_EVICT`, a poorly-thought-out attempt at covenants.

In reaction to this, AJ Towns mailed me privately about some of his
thoughts on this insane `OP_EVICT` proposal.
He observed that we could generalize the `OP_EVICT` opcode by
decomposing it into smaller parts, including an operation congruent
to the Scheme/Haskell/Scala `map` operation.
As `OP_EVICT` effectively loops over the outputs passed to it, a
looping construct can be used to implement `OP_EVICT` while retaining
its nice property of cut-through of multiple evictions and reviving of
the CoinPool.

More specifically, an advantage of `OP_EVICT` is that it allows
checking multiple published promised outputs.
This would be implemented in a loop.
However, if we want to instead provide *general* operations in
SCRIPT rather than a bunch of specific ones like `OP_EVICT`, we
should consider how to implement looping so that we can implement
`OP_EVICT` in a SCRIPT-with-general-opcodes.

(`OP_FOLD` is not sufficient to implement `OP_EVICT`; for
efficiency, AJ Towns also pointed out that we need some way to
expose batch validation to SCRIPT.
There is a follow-up writeup to this one which describes *that*
operation.)

Based on this, I started ranting as well about how `map` is really
just a thin wrapper on `foldr` and the *real* looping construct is
actually `foldr` (`foldr` is the whole FP Torah: all the rest is
commentary).
This is thus the genesis for this proposal, `OP_FOLD`.

A "fold" operation is sometimes known as "reduce" (and if you know
about Google MapReduce, you might be familiar with "reduce").
Basically, a "fold" or "reduce" operation applies a function
repeatedly (i.e. *loops*) on the contents of an input structure,
creating a "sum" or "accumulation" of the contents.

For the purpose of building `map` out of `fold`, the accumulation
can itself be an output structure.
The `map` simply accumulates to the output structure by applying
its given function and concatenating it to the current accumulation.
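
A minimal sketch in ordinary Python (nothing Bitcoin-specific) of that 
relationship between `fold` and `map`:

    def foldr(f, acc, xs):
        # apply f right-to-left, threading an accumulator through the list
        for x in reversed(xs):
            acc = f(x, acc)
        return acc

    def map_via_fold(f, xs):
        # "map" just accumulates the transformed elements into the output list
        return foldr(lambda x, acc: [f(x)] + acc, [], xs)

    print(foldr(lambda x, acc: x + acc, 0, [1, 2, 3]))   # 6
    print(map_via_fold(lambda x: x * 2, [1, 2, 3]))      # [2, 4, 6]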

Digression: Programming Is Compression
--

Suppose you are a programmer and you are reading some source code.
You want to wonder "what will happen if I give this piece of code
these particular inputs?".

In order to do so, you would simulate the execution of the code in
your head.
In effect, you would generate a "trace" of basic operations (that
do not include control structures).
By then thinking about this linear trace of basic operations, you
can figure out what the code does.

Now, let us recall two algorithms from the compression literature:

1.  Run-length Encoding
2.  Lempel-Ziv 1977

Suppose our flat linear trace of basic operations contains something
like this:

OP_ONE
OP_TWO
OP_ONE
OP_TWO
OP_ONE
OP_TWO

IF we had looping constructs in our language, we could write the
above trace as something like:

for N = 1 to 3
OP_ONE
OP_TWO

The above is really Run-length Encoding.

(`if` is just a loop that executes 0 or 1 times.)

Similarly, suppose you have some operations that are commonly
repeated, but not necessarily next to each other:

OP_ONE
OP_TWO
OP_THREE
OP_ONE
OP_TWO
OP_FOUR
OP_FIVE
OP_ONE
OP_TWO

If we had functions/subroutines/procedures in our language, we
could write the above trace as something like:

function foo()
OP_ONE
OP_TWO
foo()
OP_THREE
foo()
OP_FOUR
OP_FIVE
foo()

That is, functions are just Lempel-Ziv 1977 encoding, where we
"copy" some repeated data from a previously-shown part of
data.

Thus, we can argue that programming is really a process of:

* Imagining what we want the machine to do given some particular
  input.
* Compressing that list of operations so we can more easily
  transfer the above imagined list over your puny low-bandwidth
  brain-computer interface.
  * I mean seriously, you humans still use a frikkin set of
*mechanical* levers to transfer data into a matrix of buttons?
(you don't even make the levers out of reliable metal, you
use calcium of all things??
You get what, 5 or 6 bytes per second???)
And your eyes are high-bandwidth but you then have this
complicated circuitry (that has to be ***trained for
several years*** WTF) to extract ***tiny*** amounts of ASCII
text from that high-bandwidth input stream.
Evolve faster!
(Just to be clear, I am actually also a human being and
definitely am not a piece of circuitry connected directly to
the Internet and I am not artificially limiting my output
bandwidth so as not to overwhelm you mere humans.)

See also "Kolmogorov complexity".

This becomes relevant, because the 

Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-26 Thread ZmnSCPxj via bitcoin-dev
Good morning again Paul,

> With sidechains, changing the ownership set requires that the sidechain 
> produce a block.
> That block requires a 32-byte commitment in the coinbase.
> What is more, if any transfers occur on the sidechain, they cannot be real 
> without a sidechain block, that has to be committed on the mainchain.

The above holds if the mainchain miners also act as sidechain validators.
If they are somehow separate (i.e. blind merge mining), then the `OP_BRIBE` 
transaction needed is also another transaction.
Assuming the sidechain validator is using Taproot as well, it needs the 32+1 
txin, a 64-byte signature, a 32-byte copy of the sidechain commitment that the 
miner is being bribed to put in the coinbase, and a txout for any change the 
sidechain validator has.

This is somewhat worse than the case for channel factories, even if you assume 
that in every block at least one channel factory has to do an onboarding event.

> Thus, while changing the membership set of a channel factory is more 
> expensive (it requires a pointer to the previous txout, a 64-byte Taproot 
> signature, and a new Taproot address), continuous operation does not publish 
> any data at all.
> While in sidechains, continuous operation and ordinary payments require 
> ideally one commitment of 32 bytes per mainchain block.
> Continuous operation of the sidechain then implies a constant stream of 
> 32-byte commitments, whereas continuous operation of a channel factory, in 
> the absence of membership set changes, has 0 bytes per block being published.
>
> We assume that onboarding new members is much rarer than existing members 
> actually paying each other in an actual economy (after the first burst of 
> onboarding, new members will only arise in proportion to the birth rate, but 
> typical economic transactions occur much more often), so optimizing for the 
> continuous operation seems a better tradeoff.

Perhaps more illustratively, with channel factories, different layers have 
different actions they can do, and the only one that needs to be broadcast 
widely are actions on the onchain layer:

* Onchain: onboarding / deboarding
* Channel Factory: channel topology change
* Channel: payments

This is in contrast with merge-mined Sidechains, where *all* activity requires 
a commitment on the mainchain:

* Onchain: onboarding / deboarding, payments

While it is true that all onboarding, deboarding, and payments are summarized 
in a single commitment, notice how in LN-with-channel-factories, all onboarding 
/ deboarding is *also* summarized, but payments *have no onchain impact*, at 
all.

Without channel factories, LN is only:

* Onchain: onboarding / deboarding, channel topology change
* Channel: payments

So even without channel factories there is already a win, although again, due 
to the large numbers of channels we need, a channel factory in practice will be 
needed to get significantly better scaling.


Finally, in practice with Drivechains, starting a new sidechain requires 
implicit permission from the miners.
With LN, new channels and channel factories do not require any permission, as 
they are indistinguishable from ordinary transactions.
(the gossip system does leak that a particular UTXO is a particular published 
channel, but gossip triggers after deep confirmation, at which point it would 
be too late for miners to censor the channel opening.
The miners can censor channel closure for published channels, admittedly, but 
at least you can *start* a new channel without being censored, which you cannot 
do with Drivechain sidechains.)


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-26 Thread ZmnSCPxj via bitcoin-dev
Good morning Paul,

> ***
>
> You have emphasized the following relation: "you have to show your 
> transaction to everyone" = "thing doesn't scale".
>
> However, in LN, there is one transaction which you must, in fact, "show to 
> everyone": your channel-opener.
>
> Amusingly, in the largeblock sidechain, there is not. You can onboard using 
> only the blockspace of the SC.
> (One "rich guy" can first shift 100k coins Main-to-Side, and he can 
> henceforth onboard many users over there. Those users can then onboard new 
> users, forever.)
>
> So it would seem to me, that you are on the ropes, even by your own 
> criterion. [Footnote 1]
>
> ***
>
> Perhaps, someone will invent a way, to LN-onboard WITHOUT needing new layer1 
> bytes.
>
> If so, a "rich man" could open a LN channel, and gradually transfer it to new 
> people.
>
> Such a technique would need to meet two requirements (or, so it seems to me):
> #1: The layer1 UTXO (that defines the channel) can never change (ie, the 
> 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay 
> what-they-were when the channel was opened).
> #2: The new part-owners (who are getting coins from the rich man), will have 
> new pubkeys which are NOT known, until AFTER the channel is opened and 
> confirmed on the blockchain.
>
> Not sure how you would get both #1 and #2 at the same time. But I am not up 
> to date on the latest LN research.

Yes, using channel factories.

A channel factory is a N-of-N where N >= 3, and which uses the same offchain 
technology to host multiple 2-of-2 channels.
We observe that, just as an offchain structure like a payment channel can host 
HTLCs, any offchain structure can host a lot of *other* contracts, because the 
offchain structure can always threaten to drop onchain to enforce any 
onchain-enforceable contract.
But an offchain structure is just another onchain contract!
Thus, an offchain structure can host many other offchain structures, and thus 
an N-of-N channel factory can host multiple 2-of-2 channel factories.

(I know we discussed sidechains-within-sidechains before, or at least I 
mentioned that to you in direct correspondence, this is basically that idea 
brought to its logical conclusion.)

Thus, while you still have to give *one* transaction to all Bitcoin users, that 
single transaction can back several channels, up to (N * (N - 1)) / 2.

It is not quite matching your description --- the pubkeys of the peer 
participants need to be fixed beforehand.
However, all it means is some additional pre-planning during setup with no 
scope for dynamic membership.

At least, you cannot dynamically change membership without onchain action.
You *can* change membership sets by publishing a one-input-one-output 
transaction onchain, but with Taproot, the new membership set is representable 
in a single 32-byte Taproot address onchain (admittedly, the transaction input 
is a txin and thus has overhead 32 bytes plus 1 byte for txout index, and you 
need 64 bytes signature for Taproot as well).
The advantage is that, in the meantime, if membership set is not changed, 
payments can occur *without* any data published on the blockchain (literally 0 
data).
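
(As a small sketch of the two figures above, using the byte counts as stated:)

    def channels_in_factory(n):
        # an N-of-N factory can host up to N*(N-1)/2 two-party channels
        return n * (n - 1) // 2

    def membership_change_bytes():
        txin_ref   = 32 + 1   # previous txid + output index
        signature  = 64       # Taproot key-path signature
        new_output = 32       # new Taproot output committing to the new membership set
        return txin_ref + signature + new_output

    print(channels_in_factory(5))      # 10 channels backed by one onchain output
    print(membership_change_bytes())   # ~129 bytes onchain per membership change,
                                       # and 0 bytes while the membership set is static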

With sidechains, changing the ownership set requires that the sidechain produce 
a block.
That block requires a 32-byte commitment in the coinbase.
What is more, if *any* transfers occur on the sidechain, they cannot be real 
without a sidechain block, that has to be committed on the mainchain.

Thus, while changing the membership set of a channel factory is more expensive 
(it requires a pointer to the previous txout, a 64-byte Taproot signature, and 
a new Taproot address), continuous operation does not publish any data at all.
While in sidechains, continuous operation and ordinary payments require 
ideally one commitment of 32 bytes per mainchain block.
Continuous operation of the sidechain then implies a constant stream of 32-byte 
commitments, whereas continuous operation of a channel factory, in the absence 
of membership set changes, has 0 bytes per block being published.

We assume that onboarding new members is much rarer than existing members 
actually paying each other in an actual economy (after the first burst of 
onboarding, new members will only arise in proportion to the birth rate, but 
typical economic transactions occur much more often), so optimizing for the 
continuous operation seems a better tradeoff.


Channel factories have the nice properties:

* N-of-N means that nobody can steal from you.
  * Even with a 51% miner, nobody can steal from you as long as none of the N 
participants is the 51% miner, see the other thread.
* Graceful degradation: even if 1 of the N is offline, payments are done 
over the hosted 2-of-2s, and the balance of probability is that most of the 
2-of-2s have both participants online and payments can continue to occur.

--

The reason why channel factories do not exist *yet* is that the main offchain 
construction we 

Re: [bitcoin-dev] A Comparison Of LN and Drivechain Security In The Presence Of 51% Attackers

2022-02-25 Thread ZmnSCPxj via bitcoin-dev


Good morning Paul,


> I don't think I can stop people from being ignorant about Drivechain. But I 
> can at least allow the Drivechain-knowledgable to identify each other.
>
> So here below, I present a little "quiz". If you can answer all of these 
> questions, then you basically understand Drivechain:
>
> 0. We could change DC to make miner-theft impossible, by making it a layer1 
> consensus rule that miners never steal. Why is this cure worse than the 
> disease?

Now miners are forced to look at all sideblocks, not optionally do so if it is 
profitable for them.

> 1. If 100% hashrate wanted to steal coins from a DC sidechain *as quickly as 
> possible*, how long would this take (in blocks)?

13,150 (I think this is how you changed it after feedback from this list, I 
think I remember it was ~3000 before or thereabouts.)

> 2. Per sidechain per year (ie, per 52560 blocks), how many DC withdrawals can 
> take place (maximum)? How many can be attempted?
>  (Ie, how does the 'train track metaphor' work, from ~1h5m in the 
> "Overview and Misconceptions" video)?

I hate watching videos, I can read faster than anyone can talk (except maybe 
Laolu, he speaks faster than I can process, never mind read).

~4 times (assuming 52560 blocks per year, which may vary due to new miners, 
hashrate drops, etc)
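
(Arithmetic sketch of where "~4" comes from, using the 13,150-block figure 
above:)

    print(52_560 / 13_150)   # ~3.997, i.e. about 4 withdrawals per sidechain per year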

> 3. Only two types of people should ever be using the DC withdrawal system at 
> all.
>   3a. Which two?

a.  Miners destroying the sidechain because the sidechain is no longer viable.
b.  Aggregators of sidechain-to-mainchain transfers and large whales.

>   3b. How is everyone else, expected to move their coins from chain to chain?

Cross-system atomic swaps.
(I use "System" here since the same mechanism works for Lightning channels, and 
channels are not blockchains.)

>   3c. (Obviously, this improves UX.) But why does it also improve security?

Drivechain-based pegged transfers are aggregates of many smaller transfers and 
thus every transfer out from the sidechain contributes its "fee" to the 
security of the peg.

> --
> 4. What do the parameters b and m stand for (in the DC security model)?

m is how much people want to kill a sidechain, 0 = everybody would be sad if it 
died and would rather burn all their BTC forever than continue living, 1 = do 
not care, > 1 people want to actively kill the sidechain.

b is how much profit a mainchain miner expects from supporting a sidechain (do 
not remember the unit though).
Something like u = a + b where a is the mainchain, b is the sidechain, u is the 
total profit.
Or fees?  Something like that.

> 5. How can m possibly be above 1? Give an example of a sidechain-attribute 
> which may cause this situation to arise.

The sidechain is a total scam.
A bug may be found in the sidechain that completely negates any security it 
might have, thus removing any desire to protect the sidechain and potentially 
make users want to destroy it completely rather than let it continue.
People end up hating sidechains completely.

> 6. For which range of m, is DC designed to deter sc-theft?

m <= 1

> 7. If DC could be changed to magically deter theft across all ranges of m, 
> why would that be bad for sidechain users in general?

Because the sidechain would already be part of mainchain consensus.

> --
> 8. If imminent victims of a DC-based theft, used a mainchain UASF to prohibit 
> the future theft-withdrawal, then how would this affect non-DC users?

If the non-DC users do not care, then they are unaffected.
If the non-DC users want to actively kill the sidechain, they will 
counterattack with an opposite UASF and we have a chainsplit and sadness and 
mutual destruction and death and a new subreddit.

> 9. In what ways might the BTC network one day become uncompetitive? And how 
> is this different from caring about a sidechain's m and b?

If it does not enable scaling technology fast enough to actually be able to 
enable hyperbitcoinization.

Sidechains are not a scaling solution, so caring about m and b is different 
because your focus is not on scaling.

> --
> 10. If DC were successful, Altcoin-investors would be harmed. Two 
> Maximalist-groups would also be slightly harmed -- who are these?

Dunno!


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-25 Thread ZmnSCPxj via bitcoin-dev
Good morning AJ,

> ZmnSCPaj, are you arguing that drivechains are bad for bitcoin or are you 
> arguing that it would be unwise to opt into a drivechain? Those are very 
> different arguments. If drivechains compromised things for normal bitcoin 
> nodes that ignore drivechains, then I agree that would be serious reason to 
> reject drivechains outright and reject things that allow it to happen. 
> However, if all you're saying is that people can shoot themselves in the foot 
> with drivechains, then avoiding drivechains should not be a significant 
> design consideration for bitcoin but rather for those who might consider 
> spending their time working on drivechains.

Neither.
My argument is simply:

* If Drivechains are bad for whatever reason, we should not add recursive 
covenants.
* Otherwise, go ahead and add recursive covenants.

Drivechains are not a scaling solution [FOOTNOTE 1] and I personally am 
interested only in scaling solutions, adding more non-scaling-useable 
functionality is not of interest to me and I do not really care (but I would 
*prefer* if people focus on scaling-useable functionality, like 
`SIGHASH_NOINPUT`, `OP_EVICT`, `OP_CTV`, `OP_TLUV` probably without the 
self-replace capability).

I bring this up simply because I remembered those arguments against 
Drivechains, and as far as I could remember, those were the reasons for not 
adding Drivechains.
But if there is consensus that those arguments are bogus, then go ahead --- add 
Drivechains and/or recursive covenants.
I do not intend to utilize them any time soon anyway.

My second position is that in general I am wary of adding Turing-completeness, 
due precisely to Principle of Least Power.
A concern is that, since it turns out recursive covenants are sufficient to 
implement Drivechains, recursive covenants may also enable *other* techniques, 
currently unknown, which may have negative effects on Bitcoin, or which would 
be considered undesirable by a significant section of the userbase.
Of course, I know of no such technique, but given that a technique 
(Drivechains) which before would have required its own consensus change, turns 
out to be implementable inside recursive covenants, then I wonder if there are 
other things that would have required their own consensus change that are now 
*also* implementable purely in recursive covenants.

Of course, that is largely just stop energy, so if there is *now* consensus 
that Drivechains are not bad, go ahead, add recursive covenants (but please can 
we add `SIGHASH_NOINPUT` and `OP_CTV` first?).

Regards,
ZmnSCPxj

[FOOTNOTE 1] Sidechains are not a scaling solution, or at least, are beaten in 
raw scaling by Lightning.  Blockchains are inefficient (THAT IS PRECISELY THE 
PROBLEM WHY YOU NEED A SCALING SOLUTION FOR BITCOIN THAT WAS LIKE THE FIRST 
RESPONSE TO SATOSHI ON THE CYPHERPUNK MAILING LIST) and you have to show your 
transaction to everyone.  While sidechains imply that particular subsets are 
the only ones interested in particular transactions, compare how large a 
sidechain-participant-set would be expected to be, to how many people learn of 
a payment over the Lightning Network.  If you want a sidechain to be as popular 
as LN, then you expect its participant set to be about as large as LN as well, 
and on a sidechain, a transaction is published to all sidechain participants, 
but on the LN, only a tiny tiny tiny fraction of the network is involved in any 
payment.  Thus LN is a superior scaling solution.  Now you might counter-argue 
that you can have multiple smaller sidechains and just use HTLCs to trade 
across them (i.e. microchains).  I would then counter-counter-argue that 
bringing this to the most extreme conclusion, you would have tons of sidechains 
with only 2 participants each, and then you would pay by transferring across 
multiple participants in a chain of HTLCs and look, oh wow, surprise surprise, 
you just got the Lightning Network.  LN wins.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_RETURN inside TapScript

2022-02-25 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,

> Hi ZmnSCPxj,
>
> To me it seems that more space can be saved.
>
> The data-“transaction” need not specify any output. The network could 
> subtract the fee amount of the transaction directly from the specified UTXO.

That is not how UTXO systems like Bitcoin work.
Either you consume the entire UTXO (take away the "U" from the "UTXO") 
completely and in full, or you do not touch the UTXO (and cannot get fees from 
it).

> A fee also need not to be specified.

Fees are never explicit in Bitcoin; the fee is always the total input amount 
minus the total output amount.

> It can be calculated in advance both by the network and the transaction 
> sender based on the size of the data.

It is already implicitly calculated as the total input amount minus the total 
output amount.

You seem to misunderstand as well.
Fee rate is computed from the fee (computed from total input minus total 
output) divided by the transaction weight.
Nodes do not compute fees from feerate and weight.
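
A minimal sketch with made-up amounts, to make the direction of the computation 
explicit:

    def implicit_fee(input_amounts, output_amounts):
        # the fee is never an explicit field; it is what the inputs leave unclaimed
        return sum(input_amounts) - sum(output_amounts)

    def feerate(fee_sats, weight_wu):
        return fee_sats / weight_wu

    fee = implicit_fee([50_000, 20_000], [60_000, 9_000])   # 1000 sats
    print(fee, feerate(fee, 800))                           # 1000 1.25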

> The calculation of the fee should be such that it only marginally cheaper to 
> use this new construct over using one or more transactions. For instance, 
> sending 81 bytes should cost as much as two OP_RETURN transactions (minus 
> some marginal discount to incentivize the use of this more efficient way to 
> store data).

Do you want to change weight calculations?
*reducing* weight calculations is a hardfork, increasing it is a softfork.

> If the balance of the selected UTXO is insufficient to pay for the data then 
> the transaction will be invalid.
>
> I can’t judge whether this particular approach would require a hardfork, 
> sadly.

See the above note: if you want to somehow reduce the weight of the data so as to 
reduce the cost of data relative to `OP_RETURN`, that is a hardfork.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_RETURN inside TapScript

2022-02-24 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,

> Hi ZmnSCPxj,
>
> Any benefits of my proposal depend on my presumption that using a standard 
> transaction for storing data must be inefficient. Presumably a transaction 
> takes up significantly more on-chain space than the data it carries within 
> its OP_RETURN. Therefore, not requiring a standard transaction for data 
> storage should be more efficient. Facilitating data storage within some 
> specialized, more space-efficient data structure at marginally lower fee per 
> payload-byte should enable reducing the footprint of storing data on-chain.
>
> In case storing data through OP_RETURN embedded within a transaction is 
> optimal in terms of on-chain footprint then my proposal doesn’t seem useful.

You need to have some assurance that, if you pay a fee, this data gets on the 
blockchain.
And you also need to pay a fee for the blockchain space.
In order to do that, you need to indicate an existing UTXO, and of course you 
have to provably authorize the spend of that UTXO.
But that is already an existing transaction structure, the transaction input.
If you are not going to pay an entire UTXO for it, you need a transaction 
output as well to store the change.

Your signature needs to cover the data being published, and it is more 
efficient to have a single signature that covers the transaction input, the 
transaction output, and the data being published.
We already have a structure for that, the transaction.

So an `OP_RETURN` transaction output is added and you put published data there, 
and existing constructions make everything Just Work (TM).
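
A minimal sketch (plain Python data, not a real serializer; the 75-byte cap is 
just the single-byte-push simplification, roughly in line with typical relay 
policy) of how the existing transaction structure already carries everything 
data publication needs:

    OP_RETURN = 0x6a

    def data_publish_tx(utxo, utxo_value, change_spk, fee, payload: bytes):
        assert len(payload) <= 75   # keep to a single-byte push for simplicity
        return {
            "inputs":  [{"outpoint": utxo,
                         "witness": "<one signature covering the whole transaction>"}],
            "outputs": [
                {"value": utxo_value - fee, "script": change_spk},        # change
                {"value": 0,
                 "script": bytes([OP_RETURN, len(payload)]) + payload},   # published data
            ],
        }

    print(data_publish_tx(("<txid>", 0), 50_000, "<change scriptPubKey>", 1000, b"hello"))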

Now I admit we can shave off some bytes.
Pure published data does not need an amount, and using a transaction output 
means there is always an amount field.
We do not want the `OP_RETURN` opcode itself, though if the data is 
variable-size we do need an equivalent to the `OP_PUSH` opcode (which has many 
variants depending on the size of the data).

But that is not really a lot of bytes, and adding a separate field to the 
transaction would require a hardfork.
We cannot use the SegWit technique of just adding a new field that is not 
serialized for `txid` and `wtxid` calculations, but is committed in a new id, 
let us call it `dtxid`, and a new Merkle Tree added to the coinbase.
If we *could*, then a separate field for data publication would be 
softforkable, but the technique does not apply here.
The reason we cannot use that technique is that we want to save bytes by having 
the signature cover the data to be published, and signatures need to be 
validated by pre-softfork nodes looking at just the data committed to in 
`wtxid`.
If you have a separate signature that is in the `dtxid`, then you spend more 
actual bytes to save a few bytes.

Saving a few bytes for an application that is arguably not the "job" of Bitcoin 
(Bitcoin is supposed to be for value transfer, not data archiving) is not 
enough to justify a **hard**fork.
And any softfork seems likely to spend more bytes than what it could save.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_RETURN inside TapScript

2022-02-24 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,

> Reducing the footprint of storing data on-chain might better be achieved by 
> *supporting* it.
>
> Currently storing data is wasteful because it is embedded inside an OP_RETURN 
> within a transaction structure. As an alternative, by supporting storing of 
> raw data without creating a transaction, waste can be reduced.

If the data is not embedded inside a transaction, how would I be able to pay a 
miner to include the data on the blockchain?

I need a transaction in order to pay a miner anyway, so why not just embed it 
into the same transaction I am using to pay the miner?
(i.e. the current design)




Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] A Comparison Of LN and Drivechain Security In The Presence Of 51% Attackers

2022-02-24 Thread ZmnSCPxj via bitcoin-dev
Good morning lightning-dev and bitcoin-dev,

Recently, some dumb idiot, desperate to prove that recursive covenants are 
somehow a Bad Thing (TM), [necromanced Drivechains][0], which actually caused 
Paul Sztorc to [revive][1] and make the following statement:

> As is well known, it is easy for 51% hashrate to double-spend in the LN, by 
> censoring 'justice transactions'. Moreover, miners seem likely to evade 
> retribution if they do this, as they can restrain the scale, timing, victims, 
> circumstances etc of the attack.

Let me state that, as a supposed expert developer of the Lightning Network 
(despite the fact that I probably spend more time ranting on the lists than 
actually doing something useful like improving C-Lightning or CLBOSS), the above 
statement is unequivocally ***true***.

However, I believe that the following important points must be raised:

* A 51% miner can only attack LN channels it is a participant in.
* A 51% miner can simultaneously attack all Drivechain-based sidechains and 
steal all of their funds.

In order for "justice transactions" to come into play, an attacker has to have 
an old state of a channel.
And only the channel participants have access to old state (modulo bugs and 
operator error on not being careful of toxic waste, but those are arguably as 
out of scope as operator error not keeping your privkey safe, or bugs that 
reveal your privkey).

If the 51% miner is not a participant on a channel, then it simply has no 
access to old state of the channel and cannot even *start* the above theft 
attack.
If the first step fails, then the fact that the 51% miner can perform the 
second step is immaterial.

Now, this is not a perfect protection!
We should note that miners are anonymous and it is possible that there is 
already a 51% miner, and that that 51% miner secretly owns almost all nodes on 
the LN.
However, even this also means that, if you picked a node at random to make a 
channel with, there is some probability that it is *not* the 51% miner and you 
are *still* safe from the 51% miner.

Thus, LN usage is safer than Drivechain usage.
On LN, if you make a channel to some LN node, there is a probability that you 
make a channel with a non-51%-miner, and if you luck into that, your funds are 
still safe from the above theft attack, because the 51% miner cannot *start* 
the attack by getting old state and publishing it onchain.
On Drivechain, if you put your funds in *any* sidechain, a 51% miner has strong 
incentive to attack all sidechains and steal all the funds simultaneously.

--

Now, suppose we have:

* a 51% miner
* Alice
* Bob

And that 51% miner != Alice, Alice != Bob, and Bob != 51% miner.

We could ask: Suppose Alice wants to attack Bob, could Alice somehow convince 
51% miner to help it steal from Bob?

First, we should observe that *all* economically-rational actors have a *time 
preference*.
That is, N sats now is better than N sats tomorrow.
In particular, both the 51% miner *and* Alice the attacker have this time 
preference, as does victim Bob.

We can observe that in order for Alice to benefit from the theft, it has to 
*wait out* the `OP_CSV` delay before it can finalize the theft.
Alice can offer fees to the miner only after the `OP_CSV` delay.

However, Bob can offer fees *right now* on the justice transaction.
And the 51% miner, being economically rational, would prefer the *right now* 
funds to the *maybe later* promise by Alice.

Indeed, if Bob offered a justice transaction paying the channel amount minus 1 
satoshi (i.e. Bob keeps 1 satoshi), then Alice has to beat that by offering the 
entire channel amount to the 51% miner.
But the 51% miner would then have to wait out the `OP_CSV` delay before it gets 
the funds.
Its time preference may be large enough (if the `OP_CSV` delay is big enough) 
that it would rather side with Bob, who can pay channel amount - 1 right now, 
than Alice who promises to pay channel amount later.
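
As a toy illustration of this bidding game (all numbers made up, Python):

    amount = 1_000_000          # channel amount, in sats
    csv_delay = 1008            # blocks Alice must wait
    discount_per_block = 0.0001 # miner's assumed per-block time preference

    bob_offer_now = amount - 1  # justice transaction fee, payable immediately
    alice_offer_later = amount * (1 - discount_per_block) ** csv_delay

    print(bob_offer_now)               # 999999
    print(round(alice_offer_later))    # roughly 904k: worth less to the miner right now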

"But Zeeman, Alice could offer to pay now from some onchain funds Alice has, 
and Alice can recoup the losses later!"
But remember, Alice *also* has a time preference!
Let us consider the case where Alice promises to bribe 51% miner *now*, on the 
promise that 51% miner will block the Bob justice transaction and *then* Alice 
gets to enjoy the entire channel amount later.
Bob can counter by offering channel amount - 1 right now on the justice 
transaction.
The only way for Alice to beat that is to offer channel amount right now, in 
which case 51% miner will now side with Alice.

But what happens to Alice in that case?
It loses out on channel amount right now, and then has to wait `OP_CSV` delay, 
to get the exact same amount later!
It gets no benefit, so this is not even an investment.
It is just enforced HODLing, but Alice can do that using `OP_CLTV` already.

Worse, Alice has to trust that 51% miner will indeed block the justice 
transaction.
But if 51% miner is unscrupulous, it could do:

* 

Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-24 Thread ZmnSCPxj via bitcoin-dev
Good morning aj,

> > Logically, if the construct is general enough to form Drivechains, and
> > we rejected Drivechains, we should also reject the general construct.
>
> Not providing X because it can only be used for E, may generalise to not
> providing Y which can also only be used for E, but it doesn't necessarily
> generalise to not providing Z which can be used for both G and E.

Does this not work only if the original objection to merging in BIP-300 was of 
the form:

* X implements E.
* Z implements G and E.
* Therefore, we should not merge in X and instead should merge in the more 
general construct Z.

?

Where:

* E = Drivechains
* X = BIP-300
* Z = some general computation facility
* G = some feature.

But my understanding is that most of the NACKs on the BIP-300 were of the form:

* X implements E.
* E is bad.
* Therefore, we should not merge in X.

If the above statement "E is bad" holds, then:

* Z implements G and E.
* Therefore, we should not merge in Z.

Where Z = something that implements recursive covenants.

I think we really need someone who NACKed BIP-300 to speak up.
If my understanding is correct and that the original objection was "Drivechains 
are bad for reasons R[0], R[1]...", then:

* You can have either of these two positions:
  * R[0], R[1] ... are specious arguments and Drivechains are not bad, 
therefore we can merge in a feature that enables Recursive Covenants -> 
Turing-Completeness -> Drivechains.
    * Even if you NACKed before, you *are* allowed to change your mind and move 
to this position.
  * R[0], R[1] ... are valid arguments and Drivechains are bad, therefore we 
should **NOT** merge in a feature that implements Recursive Covenants -> 
Turing-Completeness -> Drivechains.

You cannot have it both ways.
Admittedly, there may be some set of restrictions that prevent 
Turing-Completeness from implementing Drivechains, but you have to demonstrate 
a proof of that set of restrictions existing.

> I think it's pretty reasonable to say:
>
> a) adding dedicated consensus features for drivechains is a bad idea
> in the absence of widespread consensus that drivechains are likely
> to work as designed and be a benefit to bitcoin overall
>
> b) if you want to risk your own funds by leaving your coins on an
> exchange or using lightning or eltoo or tumbling/coinjoin or payment
> pools or drivechains or being #reckless in some other way, and aren't
> asking for consensus changes, that's your business

*Shrug* I do not really see the distinction here --- in a world with 
Drivechains, you are free to not put your coins in a Drivechain-backed 
sidechain, too.

(Admittedly, Drivechains does get into a Mutually Assured Destruction argument, 
so that may not hold.
But if Drivechains going into a MAD argument is an objection, then I do not see 
why covenant-based Drivechains would also not get into the same MAD argument 
--- and if you want to avoid the MADness, you cannot support recursive 
covenants, either.
Remember, 51% attackers can always censor the blockchain, regardless of whether 
you put the Drivechain commitments into the coinbase, or in an 
ostensibly-paid-by-somebody-else transaction.)


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-23 Thread ZmnSCPxj via bitcoin-dev


Good morning Paul, welcome back, and the list,


For the most part I am reluctant to add Turing-completeness due to the 
Principle of Least Power.

We saw this play out on the web browser technology.
A full Turing-complete language was included fairly early in a popular HTML 
implementation, which everyone else then copied.
In the beginning, it had very loose boundaries, and protections against things 
like cross-site scripting did not exist.
Eventually, W3C cracked down and modern JavaScript is now a lot more sandboxed 
than at the beginning --- restricting its power.
In addition, things like "change the color of this bit when the mouse 
hovers it", which used to be implemented in JavaScript, were moved to CSS, a 
non-Turing-complete language.

The Principle of Least Power is that we should strive to use the language with 
*only what we need*, and naught else.

So I think for the most part that Turing-completeness is dangerous.
There may be things, other than Drivechain, that you might object to enabling 
in Bitcoin, and if those things can be implemented in a Turing-complete 
language, then they are likely implementable in recursive covenants.

That the web *started* with a powerful language that was later restricted is 
fine for the web.
After all, the main use of the web is showing videos of attractive female 
humans, and cute cats.
(WARNING: WHEN I TAKE OVER THE WORLD, I WILL TILE IT WITH CUTE CAT PICTURES.)
(Note: I am not an AI that seeks to take over the world.)
But Bitcoin protects money, which I think is more important, as it can be 
traded not only for videos of attractive female humans, and cute cats, but 
other, lesser things as well.
So I believe some reticence towards recursive covenants, and other things it 
may enable, is warranted.

Principle of Least Power exists, though admittedly, this principle was 
developed for the web.
The web is a server-client protocol, but Bitcoin is peer-to-peer, so it seems 
certainly possible that Principle of Least Power does not apply to Bitcoin.
As I understand it, however, the Principle of Least Power exists *precisely* 
because increased power often lets third parties do more than what was 
expected, including things that might damage the interests of the people who 
allowed the increased power to exist, or things that might damage the interests 
of *everyone*.

One can point out as well, that despite the problems that JavaScript 
introduced, it also introduced GMail and the now-rich Web ecosystem.

Perhaps one might liken recursive covenants to the box that was opened by 
Pandora.
Once opened, what is released cannot be put back.
Yet perhaps at the bottom of this box, is Hope?



Also: Go not to the elves for counsel, for they will say both no and yes.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-23 Thread ZmnSCPxj via bitcoin-dev
Good morning Antoine,

> TLUV doesn't assume cooperation among the construction participants once the 
> Taproot tree is setup. EVICT assumes cooperation among the remaining 
> construction participants to satisfy the final CHECKSIG.
>
> So that would be a feature difference between TLUV and EVICT, I think ?

`OP_TLUV` leaves the transaction output with the remaining Tapleaves intact, 
and, optionally, with a point subtracted from Taproot internal pubkey.

In order to *truly* revive the construct, you need a separate transaction that 
spends that change output, and puts it back into a new construct.

See: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-February/003479.html
I describe how this works.

That `OP_EVICT` does another `CHECKSIG` simply cuts through the separate 
transaction that `OP_TLUV` would require in order to revive the construct.

> > I thought it was part of Taproot?
>
> I checked BIP342 again, *as far as I can read* (unreliable process), it 
> sounds like it was proposed by BIP118 only.

*shrug* Okay!

> > A single participant withdrawing their funds unilaterally can do so by 
> > evicting everyone else (and paying for those evictions, as sort of a 
> > "nuisance fee").
>
> I see, I'm more interested in the property of a single participant 
> withdrawing their funds, without affecting the stability of the off-chain 
> pool and without cooperation with other users. This is currently a 
> restriction of the channel factories fault-tolerance. If one channel goes 
> on-chain, all the outputs are published.

See also: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-February/003479.html

Generally, the reason for a channel to go *onchain*, instead of just being 
removed inside the channel factory and its funds redistributed elsewhere, is 
that an HTLC/PTLC is about to time out.
The blockchain is really the only entity that can reliably enforce timeouts.

And, from the above link:

> * If a channel has an HTLC/PTLC time out:
>   * If the participant to whom the HTLC/PTLC is offered is
> offline, that may very well be a signal that it is unlikely
> to come online soon.
> The participant has strong incentives to come online before
> the channel is forcibly closed due to the HTLC/PTLC timeout,
> so if it is not coming online, something is very wrong with
> that participant and we should really evict the participant.
>   * If the participant to whom the HTLC/PTLC is offered is
> online, then it is not behaving properly and we should
> really evict the participant.

Note the term "evict" as well --- the remaining participants that are 
presumably still behaving correctly (i.e. not letting HTLC/PTLC time out) evict 
the participants that *are*, and that is what `OP_EVICT` does, as its name 
suggests.

Indeed, I came up with `OP_EVICT` *after* musing the above link.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-23 Thread ZmnSCPxj via bitcoin-dev


Subject: Turing-Completeness, And Its Enablement Of Drivechains

Introduction


Recently, David Harding challenged those opposed to recursive covenants
for *actual*, *concrete* reasons why recursive covenants are a Bad Thing
(TM).

Generally, it is accepted that recursive covenants, together with the
ability to update loop variables, are sufficiently powerful to be
considered Turing-complete.
So, the question is: why is Turing-completeness bad, if it requires
*multiple* transactions in order to implement Turing-completeness?
Surely the practical matter that fees must be paid for each transaction
serves as a backstop against Turing-completeness?
i.e. Fees end up being the "maximum number of steps", which prevents a
language from becoming truly Turing-complete.

I point out here that Drivechains is implementable on a Turing-complete
language.
And we have already rejected Drivechains, for the following reason:

1.  Sidechain validators and mainchain miners have a strong incentive to
merge their businesses.
2.  Mainchain miners end up validating and committing to sidechain blocks.
3.  Ergo, sidechains on Drivechains become a block size increase.

Also:

1.  The sidechain-to-mainchain peg degrades the security of sidechain
users from consensus "everyone must agree to the rules" to democracy
"if enough enfranchised voters say so, they can beat you up and steal
your money".

In this write-up, I will demonstrate how recursive covenants, with
loop variable update, are sufficient to implement a form of Drivechains.
Logically, if the construct is general enough to form Drivechains, and
we rejected Drivechains, we should also reject the general construct.

Digression: `OP_TLUV` And `OP_CAT` Implement Recursive Covenants


Let me now do some delaying tactics and demonstrate how `OP_TLUV` and
`OP_CAT` allow building recursive covenants by quining.

`OP_TLUV` has a mode where the current Tapleaf is replaced, and the
new address is synthesized.
Then, an output of the transaction is validated to check that it has
the newly-synthesized address.

Let me sketch how a simple recursive covenant can be built.
First, we split the covenant into three parts:

1.  A hash.
2.  A piece of script which validates that the first witness item
hashes to the above given hash in part #1, and then pushes that
item into the alt stack.
3.  A piece of script which takes the item from the alt stack,
hashes it, then concatenates a `OP_PUSH` of the hash to that
item, then does a replace-mode `OP_TLUV`.

Parts 1 and 2 must directly follow each other, but other SCRIPT
logic can be put in between parts 2 and 3.
Part 3 can even occur multiple times, in various `OP_IF` branches.

In order to actually recurse, the top item in the witness stack must
be the covenant script, *minus* the hash.
This is supposed to be the quining argument.

The covenant script part #2 then checks that the quining argument 
matches the hash that is hardcoded into the SCRIPT.
This hash is the hash of the *rest* of the SCRIPT.
If the quining argument matches, then it *is* the SCRIPT minus its
hash, and we know that we can use that to recreate the original SCRIPT.
It then pushes them out of the way into the alt stack.

Part #3 then recovers the original SCRIPT from the alt stack, and
resynthesizes the original SCRIPT.
The `OP_TLUV` is then able to resynthesize the original address.
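
A loose sketch of this layout (Python lists of opcode tokens; push-length 
prefixes, `OP_TLUV` control arguments, and the exact stack choreography are 
elided or assumed, and `OP_TLUV`/`OP_CAT` are of course only proposed opcodes):

    def quining_covenant(body_hash):
        part1 = [body_hash]                  # hash of parts 2 and 3 (the "body")
        part2 = [
            "OP_OVER", "OP_SHA256",          # hash the quining argument from the witness
            "OP_EQUALVERIFY",                # must equal the hardcoded body hash
            "OP_TOALTSTACK",                 # stash the body for part 3
        ]
        part3 = [
            "OP_FROMALTSTACK",
            "OP_DUP", "OP_SHA256",           # recompute the body hash
            "OP_SWAP", "OP_CAT",             # rebuild: push-of-hash followed by the body
            "OP_TLUV",                       # replace this leaf with the rebuilt SCRIPT
        ]
        other_logic = ["..."]                # arbitrary conditions go between parts 2 and 3
        return part1 + part2 + other_logic + part3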

Updating Loop Variables
-----------------------

But repeating the same SCRIPT over and over is boring.

What is much more interesting is to be able to *change* the SCRIPT
on each iteration, such that certain values on the SCRIPT can be
changed.

Suppose our SCRIPT has a loop variable `i` that we want to change
each time we execute our SCRIPT.

We can simply put this loop variable after part 1 and before part 2.
Then part 2 is modified to first push this loop variable onto the
alt stack.

The SCRIPT that gets checked always starts from part 2.
Thus, the SCRIPT, minus the loop variable, is always constant.
The SCRIPT can then access the loop variable from the alt stack.
Part 2 can be extended so that the loop variable is on top of the
quined SCRIPT on the alt stack.
This lets the SCRIPT easily access the loop variable.
The SCRIPT can also update the loop variable by replacing the top
of the alt stack with a different item.

Then part 3 first pops the alt stack top (the loop variable),
concatenates it with an appropriate push, then performs the
hash-then-concatenate dance.
This results in a SCRIPT that is the same as the original SCRIPT,
but with the loop variable possibly changed.

The SCRIPT can use multiple loop variables; it is simply a question
of how hard it would be to access from the alt stack.

Drivechains Over Recursive Covenants


Drivechains can be split into four parts:

1.  A way to commit to the sidechain blocks.
2.  A way to move funds from 

Re: [bitcoin-dev] Stumbling into a contentious soft fork activation attempt

2022-02-21 Thread ZmnSCPxj via bitcoin-dev




> Good morning Prayank,
>
> (offlist)



My apologies.
I pushed the wrong button, I should have pressed "Reply" and not "Reply All".

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Stumbling into a contentious soft fork activation attempt

2022-02-21 Thread ZmnSCPxj via bitcoin-dev
Good morning,


> If this is the reason to stop/delay improvements in bitcoin, maybe it applies 
> for Taproot as well although I don't remember reading such things in your 
> posts or maybe missed it.

Perhaps a thing to note, is that if it allows us to move some activity 
off-chain, and reduce activity on the blockchain, then the increase in 
functionality does *not* translate to a requirement of block size increase.

So for example:

* Taproot, by allowing the below improvements, is good:
  * Schnorr multisignatures that allow multiple users to publish a single 
signature, reducing block size usage for large participant sets.
  * MAST, which allows eliding branches of complicated SCRIPTs that are not 
executed, reducing block size usage for complex contracts.
* `SIGHASH_ANYPREVOUT`, by enabling an offchain updateable multiparty (N > 2) 
cryptocurrency system (Decker-Russell-Osuntokun), is also good, as it allows us 
to make channel factories without having to suffer the bad tradeoffs of 
Decker-Wattenhofer.
* `OP_CTV`, by enabling commit-to-unpublished-promised-outputs, is also good, 
as it allows opportunities for transactional cut-through without having to 
publish promised outputs *right now*.

So I do not think the argument should really object to any of the above, either 
--- all these improvements increase the functionality of Bitcoin, but also 
allow opportunities to use the blockchain as judge+jury+executioner instead of 
noisy marketplace.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Stumbling into a contentious soft fork activation attempt

2022-02-21 Thread ZmnSCPxj via bitcoin-dev
Good morning Prayank,

(offlist)

>  Satoshi

I object to the invocation of Satoshi here, and in general.
If Satoshi wants to participate in Bitcoin development today, he can speak for 
himself.
If Satoshi refuses to participate in Bitcoin development today, who cares what 
his opinion is?
Satoshi is dead, long live Bitcoin.


Aside from that, I am otherwise thinking about the various arguments being 
presented.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] [Pre-BIP] Fee Accounts

2022-02-20 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

> opt-in or explicit tagging of fee account is a bad design IMO.
>
> As pointed out by James O'Beirne in the other email, having an explicit key 
> required means you have to pre-plan suppose you're building a vault meant 
> to distribute funds over many years, do you really want a *specific* 
> precommitted key you have to maintain? What happens to your ability to bump 
> should it be compromised (which may be more likely if it's intended to be a 
> hot-wallet function for bumping).
>
> Furthermore, it's quite often the case that someone might do a transaction 
> that pays you that is low fee that you want to bump but they choose to 
> opt-out... then what? It's better that you should always be able to fee bump.

Good point.

For the latter case, CPFP would work and already exists.
**Unless** you are doing something complicated and offchain-y and involves 
relative locktimes, of course.


One could point out as well that Peter Todd gave just a single example, 
OpenTimestamps, for this, and OpenTimestamps is not the only user of the 
Bitcoin blockchain.

So we can consider: who benefits and who suffers, and does the benefit to the 
former outweigh the detriment of the latter?


It seems to me that the necromancing attack mostly can *only* target users of 
RBF that might want to *additionally* add outputs (or in the case of OTS, 
commitments) when RBF-ing.
For example, a large onchain-paying entity might lowball an onchain transaction 
for a few withdrawals, then as more withdrawals come in, bump up their feerate 
and add more withdrawals to the RBF-ed transaction.
Such an entity might prefer to confirm the latest RBF-ed transaction, as if an 
earlier transaction (which does not include some other withdrawals requested 
later) is necromanced, they would need to make an *entire* *other* transaction 
(which may be costlier!) to fulfill pending withdrawal requests.
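
A sketch of what such a (to my knowledge, still hypothetical) wallet would do 
on each new withdrawal request; data layout and numbers are illustrative only:

    def build_replacement(inputs, prev_outputs, new_withdrawals, feerate, est_vsize):
        # Keep every previously promised withdrawal, drop only the old change output.
        outputs = [o for o in prev_outputs if not o.get("is_change")] + new_withdrawals
        fee = feerate * est_vsize
        change = sum(i["value"] for i in inputs) - sum(o["value"] for o in outputs) - fee
        outputs.append({"value": change, "is_change": True})
        # The replacement must still signal BIP125 replaceability and pay a higher
        # absolute fee than the transaction it replaces.
        return {"inputs": inputs, "outputs": outputs}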

However, to my knowledge, there is no actual entity that *currently* acts this 
way (I do have some sketches for a wallet that can support this behavior, but 
it gets *complicated* due to having to keep track of reorgs as well... sigh).

In particular, I expect that many users do not really make outgoing payments 
often enough that they would actually benefit from such a wallet feature.
Instead, they will generally make one payment at a time, or plan ahead and pay 
several in a batch at once, and even if they RBF, they would just keep the same 
set of outputs and just reduce their change output.
For such low-scale users, a rando third-party necromancing their old 
transactions could only make them happy, thus this nuisance attack cannot be 
executed.

We could also point out that this is really a nuisance attack and not an 
economic-theft attack.
The attacker cannot gain, and can only pay in order to impose costs on somebody 
else.
Rationally, the only winning move is not to play.


So --- has anyone actually implemented a Bitcoin wallet that has such a feature 
(i.e. make a lowball send transaction now, then you can add another send later 
and if the previous send transaction is unconfirmed, RBF it with a new 
transaction that has the previous send and the current send) and if so, can you 
open-source the code and show me?


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] [Pre-BIP] Fee Accounts

2022-02-20 Thread ZmnSCPxj via bitcoin-dev
Good morning DA,


> Agreed, you cannot rely on a replacement transaction would somehow
> invalidate a previous version of it, it has been spoken into the gossip
> and exists there in mempools somewhere if it does, there is no guarantee
> that anyone has ever heard of the replacement transaction as there is no
> consensus about either the previous version of the transaction or its
> replacement until one of them is mined and the block accepted. -DA.

As I understand from the followup from Peter, the point is not "this should 
never happen", rather the point is "this should not happen *more often*."

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] [Pre-BIP] Fee Accounts

2022-02-19 Thread ZmnSCPxj via bitcoin-dev
Good morning Peter and Jeremy,

> Good morning Peter and Jeremy,
>
> > On Sat, Feb 19, 2022 at 05:20:19PM +, darosior wrote:
> >
> > > > Necromancing might be a reasonable name for attacks that work by 
> > > > getting an
> > > > out-of-date version of a tx mined.
> > >
> > > It's not an "attack"? There is no such thing as an out-of-date 
> > > transaction, if
> > > you signed and broadcasted it in the first place. You can't rely on the 
> > > fact that
> > > a replacement transaction would somehow invalidate a previous version of 
> > > it.
> >
> > Anyone on the internet can send you a packet; a secure system must be able 
> > to
> > receive any packet without being compromised. Yet we still call packet 
> > floods
> > as DoS attacks. And internet standards are careful to avoid making packet
> > flooding cheaper than it currently is.
> > The same principal applies here: in many situations transactions do become
> > out of date, in the sense that you would rather a different transaction be
> > mined instead, and the out-of-date tx being mined is expensive and annoying.
> > While you have to account for the possibility of any transaction you have
> > signed being mined, Bitcoin standards should avoid making unwanted 
> > necromancy a
> > cheap and easy attack.
>
> This seems to me to restrict the only multiparty feebumping method to be some 
> form of per-participant anchor outputs a la Lightning anchor commitments.
>
> Note that multiparty RBF is unreliable.
> While the initial multiparty signing of a transaction may succeed, at a later 
> time with the transaction unconfirmed, one or more of the participants may 
> regret cooperating in the initial signing and decide not to cooperate with 
> the RBF.
> Or for that matter, a participant may, through complete accident, go offline.
>
> Anchor outputs can be keyed to only a specific participant, so feebumping of 
> particular transaction can only be done by participants who have been 
> authorized to feebump.
>
> Perhaps fee accounts can include some kind of 
> proof-this-transaction-authorizes-this-fee-account?

For example:

* We reserve one Tapscript version for fee-account-authorization.
  * Validation of this tapscript version always fails.
* If a transaction wants to authorize a fee account, it should have at least 
one Taproot output.
  * This Taproot output must have tapleaf with the fee-account-authorization 
Tapscript version.
* In order for a fee account to feebump a transaction, it must also present the 
Taproot MAST path to the fee-account-authorization tapleaf of one output of 
that transaction.

This gives similar functionality to anchor outputs, without requiring an 
explicit output on the initial transaction, saving blockspace.
In particular, once the number of participants grows, the number of anchor 
outputs must grow linearly with the number of participants being authorized to 
feebump.
Only when the feerate turns out to be too low do we need to expose the 
authorization.
Revelation of the fee-account-authorization is O(log N), and if only one 
participant decides to feebump, then only a single O(log N) MAST treepath is 
published.
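
As a sketch of what the authorization commitment could look like (the 
tagged-hash construction follows BIP341, but the reserved leaf version and the 
leaf contents here are purely illustrative assumptions):

    import hashlib

    FEE_ACCT_LEAF_VERSION = 0xfe   # hypothetical reserved Tapscript version

    def tagged_hash(tag, msg):
        t = hashlib.sha256(tag.encode()).digest()
        return hashlib.sha256(t + t + msg).digest()

    def fee_account_leaf_hash(fee_account_pubkey):
        script = fee_account_pubkey          # leaf body: just name the fee account
        ser = bytes([FEE_ACCT_LEAF_VERSION, len(script)]) + script
        return tagged_hash("TapLeaf", ser)

    # The leaf hash is committed in the output's Taproot tree like any other
    # leaf; it never validates as a spend, and is only revealed (together with
    # its O(log N) Merkle path) when someone actually feebumps via the account.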

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] [Pre-BIP] Fee Accounts

2022-02-19 Thread ZmnSCPxj via bitcoin-dev
Good morning Peter and Jeremy,

> On Sat, Feb 19, 2022 at 05:20:19PM +, darosior wrote:
>
> > > Necromancing might be a reasonable name for attacks that work by getting 
> > > an
> > > out-of-date version of a tx mined.
> >
> > It's not an "attack"? There is no such thing as an out-of-date transaction, 
> > if
> > you signed and broadcasted it in the first place. You can't rely on the 
> > fact that
> > a replacement transaction would somehow invalidate a previous version of it.
>
> Anyone on the internet can send you a packet; a secure system must be able to
> receive any packet without being compromised. Yet we still call packet floods
> as DoS attacks. And internet standards are careful to avoid making packet
> flooding cheaper than it currently is.
>
> The same principal applies here: in many situations transactions do become
> out of date, in the sense that you would rather a different transaction be
> mined instead, and the out-of-date tx being mined is expensive and annoying.
> While you have to account for the possibility of any transaction you have
> signed being mined, Bitcoin standards should avoid making unwanted necromancy 
> a
> cheap and easy attack.
>

This seems to me to restrict the only multiparty feebumping method to be some 
form of per-participant anchor outputs a la Lightning anchor commitments.

Note that multiparty RBF is unreliable.
While the initial multiparty signing of a transaction may succeed, at a later 
time with the transaction unconfirmed, one or more of the participants may 
regret cooperating in the initial signing and decide not to cooperate with the 
RBF.
Or for that matter, a participant may, through complete accident, go offline.

Anchor outputs can be keyed to only a specific participant, so feebumping of 
particular transaction can only be done by participants who have been 
authorized to feebump.

Perhaps fee accounts can include some kind of 
proof-this-transaction-authorizes-this-fee-account?

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-19 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> > "fully" punitive channels also make large value channels more dangerous 
> > from the perspective of bugs causing old states to be published
>
> Wouldn't it be ideal to have the penalty be to pay for a single extra 
> transaction fee? That way there is a penalty so cheating attempts aren't free 
> (for someone who wants to close a channel anyway) and yet a single fee isn't 
> going to be much of a concern in the accidental publishing case. It still 
> perplexes me why eltoo chose no penalty at all vs a small penalty like that.

Nothing in the Decker-Russell-Osuntokun paper *prevents* that --- you could 
continue to retain per-participant versions of update+state transactions 
(congruent to the per-participant commitment transactions of Poon-Dryja) and 
have each participant hold a version that deducts the fee from their main owned 
funds.
The Decker-Russell-Osuntokun paper simply focuses on the mechanism by itself 
without regard to fees, on the understanding that the reader already knows fees 
exist and need to be paid.
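
A small sketch of that option (purely illustrative; balances in sats):

    def per_publisher_state_txs(balances, penalty_fee):
        # One copy of the latest state per participant; whoever broadcasts
        # their copy pays the fee (the "penalty") out of their own balance.
        txs = {}
        for publisher in balances:
            outputs = dict(balances)
            outputs[publisher] -= penalty_fee
            txs[publisher] = outputs
        return txs

    print(per_publisher_state_txs({"A": 70_000, "B": 30_000}, penalty_fee=1_000))
    # {'A': {'A': 69000, 'B': 30000}, 'B': {'A': 70000, 'B': 29000}}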

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

> This is a fascinating post and I'm still chewing on it.
>
> Chiming in with two points:
>
> Point 1, note with respect to evictions, revivals, CTV, TLUV:
>
> CTV enables 1 person to be evicted in O(log N) or one person to leave in 
> O(log N). TLUV enables 1 person to leave in O(1) O(log N) transactions, but 
> evictions take (AFAICT?) O(N) O(log N) transactions because the un-live party 
> stays in the pool. Hence OP_EVICT helps also make it so you can kick someone 
> out, rather than all having to leave, which is an improvement.
>
> CTV rejoins work as follows:
>
> suppose you have a pool with 1 failure, you need to do log N txns to evict 
> the failure, which creates R * log_R(N) outputs, which can then do a 
> transaction to rejoin.
>
> For example, suppose I had 64 people in a radix 4 tree. you'd have at the top 
> level 4 groups of 16, then 4 groups of 4 people, and then 1 to 4 txns. 
> Kicking 1 person out would make you do 3 txns, and create 12 outputs total. A 
> transaction spending the 11 outputs that are live would capture 63 people 
> back into the tree, and with CISA would not be terribly expensive. To be a 
> bit more economical, you might prefer to just join the 3 outputs with 16 
> people in it, and yield 48 people in one pool. Alternatively, you can lazily 
> re-join if fees make it worth it/piggybacking another transaction, or operate 
> independently or try to find new, better, peers.
>
> Overall this is the type of application that necessitates *exact* byte 
> counting. Oftentimes things with CTV seem inefficient, but when you crunch 
> the numbers it turns out not to be so terrible. OP_EVICT seems promising in 
> this regard compared to TLUV or accumulators.
>
> Another option is to randomize the CTV trees with multiple outputs per party 
> (radix Q), then you need to do Q times the evictions, but you end up with 
> sub-pools that contain more people/fractional liquidity (this might happen 
> naturally if CTV Pools have channels in them, so it's good to model).

Do note that a weakness of CTV is that you *have to* split up the CoinPool into 
many smaller pools, and re-merging them requires waiting for onchain 
confirmation.
This overall means you have no real incentive to revive the original CoinPool 
minus evicted parties.
`OP_EVICT` lets the CoinPool revival be made into the same transaction that 
performs the evict.

> Point 2, on Eltoo:
>
> One point of discomfort I have with Eltoo that I think is not universal, but 
> is shared by some others, is that non-punitive channels may not be good for 
> high-value channels as you do want, especially in a congested blockspace 
> world, punishments to incentivize correct behavior (otherwise cheating may 
> look like a free option).
>
> Thus I'm reluctant to fully embrace designs which do not permit nested 
> traditional punitive channels in favor of Eltoo, when Eltoo might not have 
> product-market-fit for higher valued channels.
>
> If someone had a punitive-eltoo variant that would ameliorate this concern 
> almost entirely.

Unfortunately, it seems that any kind of N > 2 construction *with* 
penalty would require bonds, such as the recent PathCoin idea (which is an N > 
2 construction *with* penalty, and is definitely offchain for much of its 
operation).

Having a Decker-Russell-Osuntokun "factory" layer that hosts multiple 
Poon-Dryja channels is not quite a solution; if old state on 
Decker-Russell-Osuntokun layer pushes through, then its obsolete Poon-Dryja 
channels will have all states invalid and unclaimable, but in case of Sybil 
where some participants are sockpuppets, it would still be possible for a thief 
to claim the funds from an "invalidated" Poon-Dryja channel if that channel is 
with a sockpuppet.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-18 Thread ZmnSCPxj via bitcoin-dev
Good morning ariard,


> > A statechain is really just a CoinPool hosted inside a
> >  Decker-Wattenhofer or Decker-Russell-Osuntokun construction.
>
> Note, to the best of my knowledge, how to use LN-Penalty in the context of 
> multi-party construction is still an unsolved issue. If an invalidated state 
> is published on-chain, how do you guarantee that the punished output value is 
> distributed "fairly" among the "honest" set of users ? At least
> where fairness is defined as a reasonable proportion of the balances they 
> owned in the latest state.

LN-Penalty I believe is what I call Poon-Dryja?

Both Decker-Wattenhofer (has no common colloquial name) and 
Decker-Russell-Osuntokun ("eltoo") are safe with N > 2.
The former has bad locktime tradeoffs in the unilateral close case, and the 
latter requires `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT`.


> > In principle, a set of promised outputs, if the owners of those
> > outputs are peers, does not have *any* inherent order.
> > Thus, I started to think about a commitment scheme that does not
> > impose any ordering during commitment.
>
> I think we should dissociate a) *outputs publication ordering* from the b) 
> *spends paths ordering* itself. Even if to each spend path a output 
> publication is attached, the ordering constraint might not present the same 
> complexity.
>
> Under this distinction, are you sure that TLUV imposes an ordering on the 
> output publication ?

Yes, because TLUV is based on tapleaf revelation.
Each participant gets its own unique tapleaf that lets that participant get 
evicted.

In Taproot, the recommendation is to sort the hashes of each tapleaf before 
arranging them into a MAST that the Taproot address then commits to.
This sort-by-hash *is* the arbitrary ordering I refer to when I say that TLUV 
imposes an arbitrary ordering.
(actually the only requirement is that pairs of scripts are sorted-by-hash, but 
it is just easier to sort the whole array by hash.)

To reveal a single participant in a TLUV-based CoinPool, you need to reveal 
O(log N) hashes.
It is the O(log N) space consumption I want to avoid with `OP_EVICT`, and I 
believe the reason for that O(log N) revelation is due precisely to the 
arbitrary but necessary ordering.
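
Concretely (simple arithmetic, assuming a balanced tree with one tapleaf per 
participant):

    import math

    def hashes_revealed_per_eviction(n_participants):
        return math.ceil(math.log2(n_participants))   # Merkle path to one leaf

    print([hashes_revealed_per_eviction(n) for n in (2, 8, 64, 1024)])
    # [1, 3, 6, 10] --- each revealed hash is 32 bytes of witness data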

> > With `OP_TLUV`, however, it is possible to create an "N-of-N With
> > Eviction" construction.
> > When a participant in the N-of-N is offline, but the remaining
> > participants want to advance the state of the construction, they
> > instead evict the offline participant, creating a smaller N-of-N
> > where *all* participants are online, and continue operating.
>
> I think we should dissociate two types of pool spends : a) eviction by the 
> pool unanimity in case of irresponsive participants and b) unilateral 
> withdrawal by a participant because of the liquidity allocation policy. I 
> think the distinction is worthy, as the pool participant should be stable and 
> the eviction not abused.
>
> I'm not sure if TLUV enables b), at least without transforming the unilateral 
> withdrawal into an eviction. To ensure the TLUV operation is correct  (spent 
> leaf is removed, withdrawing participant point removed, etc), the script 
> content must be inspected by *all* the participant. However, I believe
> knowledge of this content effectively allows you to play it out against the 
> pool at any time ? It's likely solvable at the price of a CHECKSIG.

Indeed, that distinction is important.
`OP_TLUV` (and `OP_EVICT`, which is just a redesigned `OP_TLUV`) supports (a) 
but not (b).

> `OP_EVICT`
> --
>
> >  * If it is `1` that simply means "use the Taproot internal
> >    pubkey", as is usual for `OP_CHECKSIG`.
>
> IIUC, this assumes the deployment of BIP118, where if the  public key is a 
> single byte 0x01, the internal pubkey is used
> for verification.

I thought it was part of Taproot?

>
> >  * Output indices must not be duplicated, and indicated
> >    outputs must be SegWit v1 ("Taproot") outputs.
>
> I think public key duplication must not be verified. If a duplicated public 
> key is present, the point is subtracted twice from the internal pubkey and 
> therefore the aggregated
> key remains unknown ? So it sounds to me safe against replay attacks.

Ah, right.

> >  * The public key is the input point (i.e. stack top)
> >    **MINUS** all the public keys of the indicated outputs.
>
> Can you prevent eviction abuse where one counterparty threatens to evict 
> everyone as all the output signatures are known among participants and free 
> to sum ? (at least not considering fees)

No, I considered onchain fees as the only mechanism to avoid eviction abuse.
The individual-evict signatures commit to fixed quantities.
The remaining change is then the only fund that can pay for onchain fees, so a 
single party evicting everyone else has to pay for the eviction of everyone 
else.


> > Suppose however that B is offline.
> > Then A, C, and D then decide to evict B.
> > To do so, they create 

Re: [bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Erik,

> > As I understand your counterproposal, it would require publishing one 
> > transaction per evicted participant.
>
> if you also pre-sign (N-2, N-3, etc), you can avoid this

It also increases the combinatorial explosion.

> > In addition, each participant has to store `N!` possible orderings in which 
> > participants can be evicted, as you cannot predict the future and cannot 
> > predict which partiicpants will go offline first.
>
> why would the ordering matter?  these are unordered pre commitments to move 
> funds, right?   you agree post the one that represents "everyone that's 
> offline"

Suppose `B` is offline first, then the remaining `A` `C` and `D` publish the 
eviction transaction that evicts only `B`.
What happens if `C` then goes offline?
We need to prepare for that case (and other cases where the participants go 
offline at arbitrary orders) and pre-sign a spend from the `ACD` set and evicts 
`C` as well, increasing combinatorial explosion.
And so on.

We *could* use multiple Tapleaves, of the form `<participant pubkey> OP_CHECKSIG 
<aggregate pubkey of the remaining participants> OP_CHECKSIG`, one for each 
participant.
Then the per-participant signature is made with 
`SIGHASH_SINGLE|SIGHASH_ANYONECANPAY` and is pre-signed, while the remainder is 
signed by the aggregate of the remaining participants with default `SIGHASH_ALL`.
Then if one participant `B` is offline they can evict `B` and then the change 
is put into a new UTXO with a similar pre-signed scheme `<participant pubkey> 
OP_CHECKSIG <aggregate pubkey of the remaining participants> OP_CHECKSIG`.
This technique precludes pre-signing multiple evictions.
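
A rough Python rendering of those per-participant leaves (key aggregation, 
signature encoding, and the exact `OP_CHECKSIG` flavour are all glossed over; 
the helpers are stand-ins):

    def pubkey(p):
        return f"<pubkey {p}>"                             # stand-in for a real x-only pubkey

    def agg_pubkey(participants):
        return "<musig(" + ",".join(participants) + ")>"   # stand-in for key aggregation

    def eviction_leaves(participants):
        leaves = []
        for p in participants:
            others = [q for q in participants if q != p]
            leaves.append([
                pubkey(p), "OP_CHECKSIG",           # p's sig: pre-signed, SIGHASH_SINGLE|SIGHASH_ANYONECANPAY
                agg_pubkey(others), "OP_CHECKSIG",  # the rest: signed online, default SIGHASH_ALL
            ])
        return leaves

    print(eviction_leaves(["A", "B", "C", "D"])[1])   # the leaf used to evict B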

>
> > But yes, certainly that can work, just as pre-signed transactions can be 
> > used instead of `OP_CTV` 
>
> i don't see how multiple users can securely share a channel (allowing massive 
> additional scaling with lighting) without op_ctv

They can, they just pre-sign, like you pointed out.
The same technique works --- `OP_CTV` just avoids having ridiculous amounts of 
combinatorial explosion and just requires `O(log n)` per eviction.
Remember, this proposal can be used for channel factories just as well, as 
pointed out, so any objection to this proposal also applies to `OP_CTV`.



Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Erik,

> hey, i read that whole thing, but i'm confused as to why it's necessary
>
> seems like N of N participants can pre-sign an on-chain transfer of funds for 
> each participant to a new address that consists of (N-1) or (N-1) 
> participants, of which each portion of the signature is encrypted for the 
> same (N-1) participants
>
> then any (N-1) subset of participants can collude publish that transaction at 
> any time to remove any other member from the pool
>
> all of the set up  (dkg for N-1), and transfer (encryption of partial sigs) 
> is done offchain, and online with the participants that are online


As I understand your counterproposal, it would require publishing one 
transaction per evicted participant.
In addition, each participant has to store `N!` possible orderings in which 
participants can be evicted, as you cannot predict the future and cannot 
predict which partiicpants will go offline first.

Finally, please see also the other thread on lightning-dev: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-February/003479.html
In this thread, I point out that if we ever use channel factories, it would be 
best if we treat each channel as a 2-of-2 that participates in an overall 
N-of-N (i.e. the N in the outer channel factory is composed of 2-of-2).
For example, instead of the channel factory being signed by participants `A`, 
`B`, `C`, `D`, instead the channel factory is signed by `AB`, `AC`, `AD`, `BC`, 
`BD`, `CD`, so that if e.g. participant B needs to be evicted, we can evict the 
signers `AB`, `BC`, and `BD`.
This means that for the channel factory case, already the number of 
"participants" is quadratic on the number of *actual* participants, which 
greatly increases the number of transactions that need to be evicted in 
one-eviction-at-a-time schemes (which is how I understand your proposal) as 
well as increasing the `N!` number of signatures that need to be exchanged 
during setup.
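
To put rough numbers on this (purely illustrative):

    import math

    def factory_signers(n):
        return math.comb(n, 2)        # one 2-of-2 "signer" per channel pair

    for n in (4, 6, 10):
        signers = factory_signers(n)
        print(n, signers, math.factorial(signers))
    #  4 participants ->  6 signers ->           720 eviction orderings
    #  6 participants -> 15 signers -> ~1.3 trillion eviction orderings
    # 10 participants -> 45 signers -> astronomically many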


But yes, certainly that can work, just as pre-signed transactions can be used 
instead of `OP_CTV` or pretty much any non-`OP_CHECKMULTISIG` opcode, xref 
Smart Contracts Unchained.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-17 Thread ZmnSCPxj via bitcoin-dev
Good morning Dave,

> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev wrote:
>
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist). Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.

Let me try to give that a shot.

(Just to be clear, I am not an artificial intelligence, thus, I am not an 
"intelligent such people".)

The objection here is that recursion can admit partial (i.e. Turing-complete) 
computation.
Turing-completeness implies that the halting problem cannot be solved for 
arbitrary programs in the language.

Now, a counter-argument to that is that rather than using arbitrary programs, 
we should just construct programs from provably-terminating components.
Thus, even though the language may admit arbitrary programs that cannot 
provably terminate, "wise" people will just focus on using that subset of the 
language, and programming styles within the language, which have proofs of 
termination.
Or in other words: people can just avoid accepting coin that is encumbered with 
a SCRIPT that is not trivially shown to be non-recursive.

The counter-counter-argument is that it leaves such validation to the user, and 
we should really create automation (i.e. lower-level non-sentient programs) to 
perform that validation on behalf of the user.
***OR*** we could just design our language so that such things are outright 
rejected by the language as a semantic error, of the same type as `for (int x = 
0; x = y; x++);` is a semantic error that most modern C compilers will reject 
if given `-Wall -Werror`.


Yes, we want users to have freedom to shoot themselves in the feet, but we also 
want, when it is our turn to be the users, to keep walking with two feet as 
long as we can.

And yes, you could instead build a *separate* tool that checks if your SCRIPT 
can be proven to be non-recursive, and let the recursive construct remain in 
the interpreter and just require users who don't want their feet shot to use 
the separate tool.
That is certainly a valid alternate approach.
It is certainly valid to argue as well, that if a possibly-recursive construct 
is used, and you cannot find a proof-of-non-recursion, you should avoid coins 
encumbered with that SCRIPT (which is just a heuristic that approximate a tool 
for proof-of-non-recursion).

On the other hand, if we have the ability to identify SCRIPTs that have some 
proof-of-non-recursion, why is such a tool not built into the interpreter 
itself (in the form of operations that are provably non-recursive), why have a 
separate tool that people might be too lazy to actually use?


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A suggestion to periodically destroy (or remove to secondary storage for Archiving reasons) dust, Non-standard UTXOs, and also detected burn

2022-02-17 Thread ZmnSCPxj via bitcoin-dev
Good morning shymaa,

> I just want to add an alarming info to this thread...
>
> There are at least 5.7m UTXOs≤1000 Sat (~7%), 
> 8.04 m ≤1$ (10%), 
> 13.5m ≤ 0.0001BTC (17%)
>
> It seems that bitInfoCharts took my enquiry seriously and added a main link 
> for dust analysis:
> https://bitinfocharts.com/top-100-dustiest-bitcoin-addresses.html
> Here, you can see just the first address contains more than 1.7m dust UTXOs
> (ins-outs =1,712,706 with a few real UTXOs holding the bulk of 415 BTC) 
> https://bitinfocharts.com/bitcoin/address/1HckjUpRGcrrRAtFaaCAUaGjsPx9oYmLaZ
>
> »
>  That's alarming isn't it?, is it due to the lightning networks protocol or 
> could be some other weird activity going on?
> .

I believe some blockchain tracking analysts will "dust" addresses that were 
spent from (give them 546 sats), in the hope that lousy wallets will use the 
new 546-sat UTXO from the same address but spending to a different address and 
combining with *other* inputs with new addresses, thus allowing them to grow 
their datasets about fund ownership.

Indeed JoinMarket has a policy to ignore-by-default UTXOs that pay to an 
address it already spent from, precisely due to this (apparently common, since 
my JoinMarket maker got dusted a number of times already) practice.

I am personally unsure of how common this is but it seems likely that you can 
eliminate this effect by removing outputs of exactly 546 sats to reused 
addresses.
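
The kind of filter being suggested amounts to something like this (sketch 
only, whether used for wallet coin selection as in JoinMarket or for cleaning 
up dust statistics):

    DUST_TRAP_AMOUNT = 546   # sats

    def spendable_utxos(utxos, already_spent_addresses):
        # Ignore suspicious dust paying to an address we have already spent from.
        return [
            u for u in utxos
            if not (u["value"] == DUST_TRAP_AMOUNT
                    and u["address"] in already_spent_addresses)
        ]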

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-17 Thread ZmnSCPxj via bitcoin-dev
`OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`
======================================================

In late 2021, `aj` proposed `OP_TAPLEAFUPDATEVERIFY` in order to
implement CoinPools and similar constructions.

`Jeremy` observed that due to the use of Merkle tree paths, an
`OP_TLUV` would require O(log N) hash revelations in order to
reach a particular tapleaf, which, in the case of a CoinPool,
would then delete itself after spending only a particular amount
of funds.
He then observed that `OP_CTV` trees also require a similar
revelation of O(log N) transactions, but with the advantage that
once revealed, the transactions can then be reused, thus overall
the expectation is that the number of total bytes onchain is
lesser compared to `OP_TLUV`.

After some thinking, I realized that it was the use of the
Merkle tree to represent the promised-but-offchain outputs of
the CoinPool that lead to the O(log N) space usage.
I then started thinking of alternative representations of
sets of promised outputs, which would not require O(log N)
revelations by avoiding the tree structure.

Promised Outputs


Fundamentally, we can consider that a solution for scaling
Bitcoin would be to *promise* that some output *can* appear
onchain at some point in the future, without requiring that the
output be shown onchain *right now*.
Then, we can perform transactional cut-through on spends of the
promised outputs, without requiring onchain activity ("offchain").
Only if something Really Bad (TM) happens do we need to actually
drop the latest set of promised outputs onchain, where it has to
be verified globally by all fullnodes (and would thus incur scaling
and privacy costs).

As an example of the above paradigm, consider the Lightning
Network.
Outputs representing the money of each party in a channel are
promised, and *can* appear onchain (via the unilateral close
mechanism).
In the meantime, there is a mechanism for performing cut-through,
allowing transfers between channel participants; any number of
transactions can be performed that are only "solidified" later,
without expensive onchain activity.

Thus:

* A CoinPool is really a way to commit to promised outputs.
  To change the distribution of those promised outputs, the
  CoinPool operators need to post an onchain transaction, but
  that is only a 1-input-1-output transaction, and with Schnorr
  signatures the single input requires only a single signature.
  But in case something Really Bad (TM) happens, any participant
  can unilaterally close the CoinPool, instantiating the promised
  outputs.
* A statechain is really just a CoinPool hosted inside a
  Decker-Wattenhofer or Decker-Russell-Osuntokun construction.
  This allows changing the distribution of those promised outputs
  without using an onchain transaction --- instead, a new state
  in the Decker-Wattenhofer/Decker-Russell-Osuntokun construction
  is created containing the new state, which invalidates all older
  states.
  Again, any participant can unilaterally shut it down, exposing
  the state of the inner CoinPool.
* A channel factory is really just a statechain where the
  promised outputs are not simple 1-of-1 single-owner outputs,
  but are rather 2-of-2 channels.
  This allows graceful degradation, where even if the statechain
  ("factory") layer has missing participants, individual 2-of-2
  channels can still continue operating as long as they do not
  involve missing participants, without requiring all participants
  to be online for large numbers of transactions.

We can then consider that the base CoinPool usage should be enough,
as other mechanisms (`OP_CTV`+`OP_CSFS`, `SIGHASH_NOINPUT`) can be
used to implement statechains and channels and channel factories.

I therefore conclude that what we really need is "just" a way to
commit ourselves to exposing a set of promised outputs, with the
proviso that if we all agree, we can change that set (without
requiring that the current or next set be exposed, for both
scaling and privacy).

(To Bitcoin Cashers: this is not an IOU; it is *committed* and
can be enforced onchain, which is enough to threaten your offchain
counterparties into behaving correctly.
They cannot gain anything by denying the outputs they promised,
since you can always drop the promise onchain and have it enforced;
thus it is not merely an IOU, as IOUs are not necessarily
enforceable, but this mechanism *would* be.
Blockchain as judge+jury+executioner, not noisy marketplace.)

Importantly: both `OP_CTV` and `OP_TLUV` force the user to
decide on a particular, but ultimately arbitrary, ordering for
promised outputs.
In principle, a set of promised outputs, if the owners of those
outputs are peers, does not have *any* inherent order.
Thus, I started to think about a commitment scheme that does not
impose any ordering during commitment.
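
To illustrate why a Merkle tree forces such an ordering, here is a toy
sketch (plain SHA256 and made-up outputs, so purely illustrative): swapping
two promised outputs changes the committed root even though the *set* of
outputs is identical.

    import hashlib

    def h(data):
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate last node on odd levels
            level = [h(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    outs = [b"A:3", b"B:2", b"C:1"]
    assert merkle_root(outs) != merkle_root(list(reversed(outs)))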

Digression: N-of-N With Eviction
--------------------------------

An issue with using an N-of-N construction is that if any single

Re: [bitcoin-dev] [Bitcoin Advent Calendar] Oracles, Bonds, and Attestation Chains

2021-12-17 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,


> Another interesting point: if you use a musig key for your staking key that 
> is musig(a,b,c) you can sign with a until you equivocate once, then switch to 
> b, then c. Three strikes and you're out! IDK what that could be used for.

You could say "oops, I made a mistake, can I correct it by equivocating just 
this time?".
Three strikes and you are out.

> Lastly, while you can't punish lying, you could say "only the stakers who 
> sign with the majority get allocated reward tokens for that slot". So you 
> could equivocate to switch and get tokens, but you'd burn your collateral for 
> them. But this does make an incentive for the stakers to try to sign the 
> "correct" statement in line with peers.

Note the quote marks around "correct" --- the majority of peers could be 
conspiring to lie, too.
Conspiracy theory time.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Bitcoin Advent Calendar] Oracles, Bonds, and Attestation Chains

2021-12-17 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

> Today's post is pretty cool: it details how covenants like CTV can be used to 
> improve on-chain bitcoin signing oracles by solving the timeout/rollover 
> issue and solving the miner/oracle collusion issue on punishment. This issue 
> is similar to the Blockstream Liquid Custody Federation rollover bug from a 
> while back (which this type of design also helps to fix).
>
> https://rubin.io/bitcoin/2021/12/17/advent-20/
>
> It also describes:
> - how a protocol on top can make 'branch free' attestation chains where if 
> you equivocate your funds get burned.
> - lightly, various uses for these chained attestations
>
> In addition, Robin Linus has a great whitepaper he put out getting much more 
> in the weeds on the concepts described in the post, it's linked in the first 
> bit of the post.

Nice, bonds are significantly better if you can ensure that the bonder cannot
recover their funds after equivocating.
Without a covenant, the best you could do would be to have the bonder merely
*risk* loss of funds on equivocation, rather than definitely lose them.

We should note that "equivocate" is not "lie".
An oracle can still lie, it just needs to consistently lie (i.e. not 
equivocate).

As an example, if the oracle is a signer for a federated sidechain, it could 
still sign an invalid sidechain block that inflates the sidecoin supply.
It is simply prevented from later denying this by signing an alternative valid 
sidechain block and acting as if it never signed the invalid sidechain block.
But if it sticks to its guns, then the sidechain simply stops operating and
everyone owning sidecoins loses their funds (and if the oracle already exited
the sidechain, its bond remains safe: it did not equivocate, it only lied).

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Bitcoin Advent Calendar] What's Smart about Smart Contracts

2021-12-08 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,


> > ## Why would a "Smart" contract be "Smart"?
> >
> > A "smart" contract is simply one that somehow self-enforces rather than 
> > requires a third party to enforce it.
> > It is "smart" because its execution is done automatically.
>
> There are no automatic executing smart contracts on any platform I'm aware 
> of. Bitcoin requires TX submission, same with Eth.
>
> Enforcement and execution are different subjects.

Nothing really prevents a cryptocurrency system from recording a "default 
branch" and enforcing that later.
In Bitcoin terms, nothing fundamentally prevents this redesign:

* A confirmed transaction can include one or more transactions (not part of its 
wtxid or txid) which spend an output of that confirmed transaction.
  * Like SegWit, they can be put in a new region that is not visible to 
pre-softfork nodes, but this new section is committed to in the coinbase.
* Those extra transactions must be `nLockTime`d to a future blockheight.
* When the future blockheight arrives, we add those transactions to the mempool.
  * If the TXO is already spent by then, then they are not put in the mempool.

That way, at least the timelocked branch can be automatically executed, because 
the tx can be submitted "early".
The only real limitation against the above is the amount of resources it would 
consume on typical nodes.
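
As a minimal sketch of that hypothetical flow (this is emphatically *not*
current Bitcoin Core behavior; the `deferred` store, `is_spent`, and
`mempool` objects below are illustrative assumptions only):

    deferred = []  # (locktime_height, spent_outpoint, raw_child_tx)

    def on_block_confirmed(committed_children, height):
        # The children ride along with the confirmed parent (outside its
        # txid, similar to how segwit witnesses are carried), each
        # nLockTime'd to a future height; assume the first input of each
        # child spends the parent's promised output.
        for child in committed_children:
            assert child.nLockTime > height
            deferred.append((child.nLockTime, child.vin[0].prevout, child))

    def on_new_height(height, is_spent, mempool):
        # Automatically "execute" the timelocked branch once it matures,
        # unless the promised output was already spent some other way.
        for entry in list(deferred):
            locktime, prevout, child = entry
            if locktime > height:
                continue
            deferred.remove(entry)
            if not is_spent(prevout):
                mempool.add(child)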

Even watchtower behavior can be programmed directly into the blockchain layer, 
i.e. we can put encrypted blobs into the same extra blockspace, with a partial 
txid key that triggers decryption and putting those transactions in the 
mempool, etc.
Thus, the line between execution and enforcement blurs.


But that is really beside the point.

The Real Point is that "smart"ness is not a Boolean flag, but a spectrum.
The above feature would allow for more "smart"ness in contracts, at the cost of 
increased resource utilization at each node.
In this point-of-view, even a paper contract is "smart", though less "smart" 
than a typical Bitcoin HTLC.

> > Consider the humble HTLC.
> > 
> > This is why the reticence of Bitcoin node operators to change the 
> > programming model is a welcome feature of the network.
> > Any change to the programming model risks the introduction of bugs to the 
> > underlying virtual machine that the Bitcoin network presents to contract 
> > makers.
> > And without that strong reticence, we risk utterly demolishing the basis of 
> > the "smart"ness of "smart" contracts --- if a "smart" contract cannot 
> > reliably be executed, it cannot self-enforce, and if it cannot 
> > self-enforce, it is no longer particularly "smart".
>
> I don't think that anywhere in the post I advocated for playing fast and 
> loose with the rules to introduce any sort of unreliability.

This is not a criticism of your post, merely an amusing article that fits the 
post title better.

> What I'm saying is more akin to we can actually improve the "hardware" that 
> Bitcoin runs on to the extent that it actually does give us better ability to 
> adjudicate the transfers of value, and we should absolutely and aggressively 
> pursue that rather than keeping Bitcoin running on a set mechanisms that are 
> insufficient to reach the scale, privacy, self custody, and decentralization 
> goals we have.

Agreed.

>  
>
> > ## The N-of-N Rule
> >
> > What is a "contract", anyway?
> >
> > A "contract" is an agreement between two or more parties.
> > You do not make a contract to yourself, since (we assume) you are 
> > completely a single unit (in practice, humans are internally divided into 
> > smaller compute modules with slightly different incentives (note: I did not 
> > get this information by *personally* dissecting the brains of any humans), 
> > hence the "we assume").
>
>  
>
> > Thus, a contract must by necessity require N participants
>
> This is getting too pedantic about contracts. If you want to go there, you're 
> also missing "consideration".
>
> Smart Contracts are really just programs. And you absolutely can enter smart 
> contracts with yourself solely, for example, Vaults (as covered in day 10) 
> are an example where you form a contract where you are intended to be the 
> only party.

No, because a vault is a contract between your self-of-today and your 
self-of-tomorrow, with your self-of-today serving as an attorney-in-place of 
your self-of-tomorrow.
After all, at the next Planck Interval you will die and be replaced with a new 
entity that only *mostly* agrees with you.

> You could make the claim that a vault is just an open contract between you 
> and some future would be hacker, but the intent is that the contract is there 
> to just safeguard you and those terms should mostly never execute. + you 
> usually want to define contract participants as not universally quantified...
>
> > This is of interest since in a reliability perspective, we often accept 
> > k-of-n.
> > 
> > But with an N-of-N, *you* are a participant and your input is necessary for 

Re: [bitcoin-dev] [Bitcoin Advent Calendar] What's Smart about Smart Contracts

2021-12-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

>
> Here's the day 6 post: https://rubin.io/bitcoin/2021/12/03/advent-6/, the 
> topic is why smart contracts (in extended form) may be a critical precursor 
> to securing Bitcoin's future rather than something we should do after making 
> the base layer more robust.


*This* particular post seems to contain more polemic than actual content.
This is the first post I read of the series, so maybe it is just a "breather" 
post between content posts?

In any case, given the subject line, it seems a waste not to discuss the actual 
"smart" in "smart" contract...

## Why would a "Smart" contract be "Smart"?

A "smart" contract is simply one that somehow self-enforces rather than 
requires a third party to enforce it.
It is "smart" because its execution is done automatically.

Consider the humble HTLC.
It is simply a contract which says:

* If B can provide the preimage for this hash H, it gets the money from A.
* If the time L arrives without B claiming this fund, A gets its money back.
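
As a sketch of those two conditions in plain Python (on the real network
this is expressed as Bitcoin SCRIPT, with a hash-lock branch and a timelock
branch, and enforced by every node; the code below is only an out-of-band
illustration):

    import hashlib

    def htlc_can_spend(H, L, claimant, preimage=None, current_height=0):
        """True if `claimant` ("A" or "B") may take the funds under the HTLC."""
        # Branch 1: B reveals the preimage of H, at any time.
        if claimant == "B" and preimage is not None:
            return hashlib.sha256(preimage).digest() == H
        # Branch 2: once height L arrives, A reclaims the funds.
        if claimant == "A" and current_height >= L:
            return True
        return False

    secret = b"the secret B must reveal"
    H = hashlib.sha256(secret).digest()
    assert htlc_can_spend(H, L=800_000, claimant="B", preimage=secret)
    assert not htlc_can_spend(H, L=800_000, claimant="A", current_height=799_999)
    assert htlc_can_spend(H, L=800_000, claimant="A", current_height=800_000)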

Why would an HTLC self-enforce?
Why would a simple paper contract with the above wording, signed and notarized, 
be insufficient?

An HTLC self-enforces because given the Bitcoin network, it is not possible to 
violate and transfer the funds outside of the HTLC specification.
Whereas a paper contract can be mere ink on a page, if sufficient firepower is 
directed at the people (judges, lawyers, etc.) that would ensure its faithful 
execution.
You puny humans are notoriously squishy and easily destroyed.

But we must warn as well that the Bitcoin network is *also* run by people.
Thus, a "smart" contract is only "smart" to a degree, and that degree is 
dependent on how easily it is for the "justice system" that enforces the 
contract to be subverted.
After all, a "smart" contract is software, and software must run on some 
hardware in order to execute.

Thus, even existing paper contracts are "smart" to a degree, too.
It is simply that the hardware they run on top of --- a bunch of puny humans 
--- is far less reliable than cold silicon (so upgrade your compute substrate 
already, puny humans!).
Our hope with the Bitcoin experiment is that we might actually be able to make 
it much harder to subvert contracts running on the Bitcoin network.

It is that difficulty of subversion which determines the "smart"ness of a smart 
contract.
Bitcoin is effectively a massive RAID1 across tens of thousands of redundant
compute nodes, ensuring that the execution of every contract is faithful to
the Bitcoin SCRIPT programming model.

This is why the reticence of Bitcoin node operators to change the programming 
model is a welcome feature of the network.
Any change to the programming model risks the introduction of bugs to the 
underlying virtual machine that the Bitcoin network presents to contract makers.
And without that strong reticence, we risk utterly demolishing the basis of the 
"smart"ness of "smart" contracts --- if a "smart" contract cannot reliably be 
executed, it cannot self-enforce, and if it cannot self-enforce, it is no 
longer particularly "smart".

## The N-of-N Rule

What is a "contract", anyway?

A "contract" is an agreement between two or more parties.
You do not make a contract to yourself, since (we assume) you are completely a 
single unit (in practice, humans are internally divided into smaller compute 
modules with slightly different incentives (note: I did not get this 
information by *personally* dissecting the brains of any humans), hence the "we 
assume").

Thus, a contract must by necessity require N participants.

This is of interest since, from a reliability perspective, we often accept k-of-n.
For example, we might run a computation on three different pieces of hardware, 
and if only one diverges, we accept the result of the other two as true and the 
diverging hardware as faulty.

However, the above 2-of-3 example has a hidden assumption: that all three 
pieces of hardware are actually owned and operated by a single entity.

A contract has N participants, and is not governed by a single entity.
Thus, it cannot use k-of-n replication.

Contracts require N-of-N replication.
In Bitcoin terms, that is what we mean by "consensus" --- that all Bitcoin 
network participants can agree that some transfer is "valid".

Similarly, L2 layers, to be able to host properly "smart" contracts, require 
N-of-N agreement.
For example, a Lightning Network channel can properly host "smart" HTLCs, as 
the channel is controlled via 2-of-2 agreement.

Lesser L2 layers which support k-of-n thus have degraded "smartness", as a
quorum of k participants can evict the remaining n-k and deny the execution of
the smart contract.
But with an N-of-N, *you* are a participant and your input is necessary for the 
execution of the smart contract, thus you can be *personally* assured that the 
smart contract *will* be executed faithfully.
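
As a toy contrast, with plain sets standing in for real signatures:

    def k_of_n_approves(signers, participants, k):
        # Threshold replication: any k participants can act without the rest.
        return len(signers & participants) >= k

    def n_of_n_approves(signers, participants):
        # The N-of-N rule: nothing proceeds without every participant.
        return participants <= signers

    ps = {"you", "peer1", "peer2"}
    assert k_of_n_approves({"peer1", "peer2"}, ps, k=2)   # executes without you
    assert not n_of_n_approves({"peer1", "peer2"}, ps)    # cannot exclude you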

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev