Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-23 Thread Anthony Towns via bitcoin-dev
On Wed, Feb 23, 2022 at 11:28:36AM +, ZmnSCPxj via bitcoin-dev wrote:
> Subject: Turing-Completeness, And Its Enablement Of Drivechains

> And we have already rejected Drivechains,

That seems overly strong to me.

> for the following reason:
> 1.  Sidechain validators and mainchain miners have a strong incentive to
> merge their businesses.
2.  Mainchain miners end up validating and committing to sidechain blocks.
> 3.  Ergo, sidechains on Drivechains become a block size increase.

I think there are two possible claims about drivechains that would make
them unattractive, if true:

 1) that adding a drivechain is a "block size increase" in the sense
that every full node and every miner need to do more work when
validating a block, in order to be sure whether the majority of hash
rate will consider it valid, or will reject it and refuse to build
on it because it's invalid because of some external drivechain rule

 2) that funds deposited in drivechains will be stolen because
the majority of hashrate is not enforcing drivechain rules (or that
deposited funds cannot be withdrawn, but will instead be stuck in
the drivechain, rather than having a legitimate two-way peg)

And you could combine those claims, saying that one or the other will
happen (depending on whether more or less than 50% of hashpower is
enforcing drivechain rules), and either is bad, even though you don't
know which will happen.

I believe drivechain advocates argue a third outcome is possible where
neither of those claims holds true, where only a minority of hashrate
needs to validate the drivechain rules, but that is still sufficient
to prevent drivechain funds from being stolen.

One way to "reject" drivechains is simply to embrace the second claim --
that putting money into drivechains isn't safe, and that miners *should*
claim coins that have been drivechain-encumbered (or that miners
should not assist with withdrawing funds, leaving them trapped in the
drivechain). In some sense this is already the case: bip300 rules aren't
enforced, so funds committed today via bip300 should be expected to be
stolen, and likely won't receive the correct acks, so won't progress
even if they aren't stolen.



I think a key difference between tx-covenant based drivechains and bip300
drivechains is hashpower endorsement: if 50% of hashpower acks enforcement
of a new drivechain (as required in bip300 for a new drivechain to exist
at all), there's an implicit threat that any miner proposing an incorrect
withdrawal from that drivechain will have their block considered invalid
and get reorged out -- either directly by that hashpower majority, or
indirectly by users conducting a UASF forcing the hashpower majority to
reject those blocks.

I think removing that implicit threat changes the game theory
substantially: rather than deposited funds being withdrawn due to the
drivechain rules, you'd instead expect them to be withdrawn according to
whoever's willing to offer the miners the most upfront fees to withdraw
the funds.

That seems to me to mean you'd frequently expect to end up in a scorched
earth scenario, where someone attempts to steal, then they and the
legitimate owner get into a bidding war, with the result that most
of the funds end up going to miners in fees. Because of the upfront
payment vs delayed collection of withdrawn funds, maybe it could end up
as a dollar auction, with the two parties competing to lose the least,
but still both losing substantial amounts?

So I think covenant-based drivechains would be roughly the same as bip300
drivechains, where a majority of hashpower used software implementing
the following rules:

 - always endorse any proposed drivechain
 - always accept any payment into a drivechain
 - accept bids to ack/nack withdrawals, then ack/nack depending on
   whoever pays the most

You could probably make covenant-based drivechains a closer match to
bip300 drivechains if a script could determine if an input was from a
(100-block prior) coinbase or not.

> Logically, if the construct is general enough to form Drivechains, and
> we rejected Drivechains, we should also reject the general construct.

Not providing X because it can only be used for E may generalise to not
providing Y, which can also only be used for E, but it doesn't necessarily
generalise to not providing Z, which can be used for both G and E.

I think it's pretty reasonable to say:

 a) adding dedicated consensus features for drivechains is a bad idea
in the absence of widespread consensus that drivechains are likely
to work as designed and be a benefit to bitcoin overall

 b) if you want to risk your own funds by leaving your coins on an
exchange or using lightning or eltoo or tumbling/coinjoin or payment
pools or drivechains or being #reckless in some other way, and aren't
asking for consensus changes, that's your business

Cheers,
aj


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-23 Thread ZmnSCPxj via bitcoin-dev


Good morning Paul, welcome back, and the list,


For the most part I am reluctant to add Turing-completeness due to the 
Principle of Least Power.

We saw this play out in web browser technology.
A full Turing-complete language was included fairly early in a popular HTML 
implementation, which everyone else then copied.
In the beginning, it had very loose boundaries, and protections against things 
like cross-site scripting did not exist.
Eventually, W3C cracked down and modern JavaScript is now a lot more sandboxed 
than at the beginning --- restricting its power.
In addition, things like "change the color of this bit when the mouse
hovers over it", which used to be implemented in JavaScript, were moved to
CSS, a non-Turing-complete language.

The Principle of Least Power is that we should strive to use the language with 
*only what we need*, and naught else.

So I think for the most part that Turing-completeness is dangerous.
There may be things, other than Drivechain, that you might object to enabling 
in Bitcoin, and if those things can be implemented in a Turing-complete 
language, then they are likely implementable in recursive covenants.

That the web *started* with a powerful language that was later restricted is 
fine for the web.
After all, the main use of the web is showing videos of attractive female 
humans, and cute cats.
(WARNING: WHEN I TAKE OVER THE WORLD, I WILL TILE IT WITH CUTE CAT PICTURES.)
(Note: I am not an AI that seeks to take over the world.)
But Bitcoin protects money, which I think is more important, as it can be 
traded not only for videos of attractive female humans, and cute cats, but 
other, lesser things as well.
So I believe some reticence towards recursive covenants, and the other
things they may enable, is warranted.

The Principle of Least Power exists, though admittedly this principle was
developed for the web.
The web is a server-client protocol, while Bitcoin is peer-to-peer, so it
is certainly possible that the Principle of Least Power does not apply to
Bitcoin.
As I understand it, however, the Principle of Least Power exists *precisely* 
because increased power often lets third parties do more than what was 
expected, including things that might damage the interests of the people who 
allowed the increased power to exist, or things that might damage the interests 
of *everyone*.

One can point out as well that, despite the problems JavaScript
introduced, it also enabled GMail and the now-rich Web ecosystem.

Perhaps one might liken recursive covenants to the box that was opened by 
Pandora.
Once opened, what is released cannot be put back.
Yet perhaps at the bottom of this box, is Hope?



Also: Go not to the elves for counsel, for they will say both no and yes.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Draft-BIP: Ordinal Numbers

2022-02-23 Thread damian--- via bitcoin-dev
At the moment it is indisputable that a particular satoshi cannot be
proven: an amount of Bitcoin is a bag of satoshis and no-one can tell
which ones are any particular ones. **So even if you used the system of
ordinals privately, and it might be interesting for research, I cannot
see that it would be sensible to adopt it** as it can only cause
trouble. If I receive some Bitcoin I cannot know if some or any of those
have at any point in the past been stolen; I assume the transaction is
honest, and in all likelihood it is. The least reasonable thing I could
expect is some claimed former holder of some ordinals turning up to
challenge me that their stolen Bitcoin was some of what I received.


NACK

-DA.

On 2022-02-23 18:02, dam...@willtech.com.au wrote:

Well done, your bip looks well presented for discussion. You say to
number each satoshi created? For a 50 BTC block reward that is
5,000,000,000 ordinal numbers, and when some BTC is transferred to
another UTXO how do you determine which ordinal numbers, say if I
create a transaction to pay to another UTXO? The system sounds
expensive eventually, having to cope with approximately
2,100,000,000,000,000 ordinals. If I understand correctly, ordinals 0
to 5,000,000,000 are assigned to the first Bitcoin created from the
mining block-reward. Say if I send some Bitcoin to another UTXO, then
the first-in-first-out algorithm splits those up to assign 1 to
100,000,000 to the 1 BTC that I sent, and 100,000,001 to 5,000,000,000
are assigned to the change plus any fee? -DA.

On 2022-02-23 11:43, Casey Rodarmor via bitcoin-dev wrote:

Briefly, newly mined satoshis are sequentially numbered in the order in
which they are mined. These numbers are called "ordinal numbers" or
"ordinals". When satoshis are spent in a transaction, the input satoshi
ordinal numbers are assigned to output satoshis using a simple
first-in-first-out algorithm.



Re: [bitcoin-dev] Draft-BIP: Ordinal Numbers

2022-02-23 Thread Casey Rodarmor via bitcoin-dev
> Well done, your bip looks well presented for discussion.


Thank you!

> You say to number each satoshi created? For a 50 BTC block reward that is
> 5,000,000,000 ordinal numbers, and when some BTC is transferred to another
> UTXO how do you determine which ordinal numbers, say if I create a
> transaction to pay to another UTXO?
>

It uses a first-in-first out algorithm, so the first ordinal number of the
first input becomes the first ordinal number of the first output.

> The system sounds expensive eventually, having to cope with approximately
> 2,100,000,000,000,000 ordinals.
>

A full index is expensive, but it doesn't have to track 2.1 quadrillion
individual entries, it only has to track contiguous ordinal ranges, whose
number scales with the number of outputs -- all outputs, not just unspent
outputs -- since an output might split an ordinal range.

> If I understand correctly, ordinals 0 to 5,000,000,000 are assigned to the
> first Bitcoin created from the mining block-reward. Say if I send some
> Bitcoin to another UTXO, then the first-in-first-out algorithm splits those
> up to assign 1 to 100,000,000 to the 1 BTC that I sent, and 100,000,001 to
> 5,000,000,000 are assigned to the change plus any fee? -DA.
>

That's correct, assuming that the 1 BTC output is first, and the 49 BTC
change output is second. Although it's actually 0 to 99,999,999 that go to
the first output, and 100,000,000 to 4,999,999,999 that are assigned to the
second output, less any fees.
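
A rough Python sketch of that first-in-first-out assignment (the function
and variable names here are mine, for illustration only -- not the BIP's
reference code):

    # FIFO assignment of contiguous ordinal ranges to transaction outputs.
    # Ranges are half-open (start, end) pairs of ordinal numbers; whatever
    # is left over after all outputs are filled corresponds to the fee.

    def assign_ordinals(input_ranges, output_values):
        remaining = list(input_ranges)   # ranges carried by the inputs, in order
        assigned = []
        for value in output_values:      # output amounts in satoshis
            ranges = []
            while value > 0:
                start, end = remaining.pop(0)
                size = end - start
                if size <= value:
                    ranges.append((start, end))            # whole range fits
                    value -= size
                else:
                    ranges.append((start, start + value))  # split the range
                    remaining.insert(0, (start + value, end))
                    value = 0
            assigned.append(ranges)
        return assigned                  # `remaining` now holds the fee ordinals

    # The example above: a 50 BTC coinbase carrying ordinals 0..4,999,999,999,
    # spent to a 1 BTC output and a 49 BTC change output (fees ignored).
    print(assign_ordinals([(0, 5_000_000_000)], [100_000_000, 4_900_000_000]))
    # -> [[(0, 100000000)], [(100000000, 5000000000)]]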


Re: [bitcoin-dev] Draft-BIP: Ordinal Numbers

2022-02-23 Thread damian--- via bitcoin-dev
Well done, your bip looks well presented for discussion. You say to
number each satoshi created? For a 50 BTC block reward that is
5,000,000,000 ordinal numbers, and when some BTC is transferred to
another UTXO how do you determine which ordinal numbers, say if I
create a transaction to pay to another UTXO? The system sounds
expensive eventually, having to cope with approximately
2,100,000,000,000,000 ordinals. If I understand correctly, ordinals 0
to 5,000,000,000 are assigned to the first Bitcoin created from the
mining block-reward. Say if I send some Bitcoin to another UTXO, then
the first-in-first-out algorithm splits those up to assign 1 to
100,000,000 to the 1 BTC that I sent, and 100,000,001 to 5,000,000,000
are assigned to the change plus any fee? -DA.


On 2022-02-23 11:43, Casey Rodarmor via bitcoin-dev wrote:

Briefly, newly mined satoshis are sequentially numbered in the order in
which they are mined. These numbers are called "ordinal numbers" or
"ordinals". When satoshis are spent in a transaction, the input satoshi
ordinal numbers are assigned to output satoshis using a simple
first-in-first-out algorithm.



Re: [bitcoin-dev] Draft-BIP: Ordinal Numbers

2022-02-23 Thread Casey Rodarmor via bitcoin-dev
> The least reasonable thing I could expect is some claimed former holder of
> some ordinals turning up to challenge me that their stolen Bitcoin was some
> of what I received.


I think it's unlikely that this would come to pass. A previous owner of an
ordinal wouldn't have any particular reason to expect that they should own
it after they transfer it. Similar to how noting a dollar bill's serial
number doesn't give you a claim to it after you spend it. From the BIP:

> Since any ordinal can be sent to any address at any time, ordinals that
> are transferred, even those with some public history, should be considered
> to be fungible with other satoshis with no such history.



Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-23 Thread Paul Sztorc via bitcoin-dev

On 2/23/2022 6:28 AM, ZmnSCPxj via bitcoin-dev wrote:


... Drivechains is implementable on a Turing-complete
language.
And we have already rejected Drivechains, for the following reason:

1.  Sidechain validators and mainchain miners have a strong incentive to
 merge their businesses.
2.  Mainchain miners end up validating and committing to sidechain blocks.
3.  Ergo, sidechains on Drivechains become a block size increase.


Is this indeed the reason? Because it is not a good one.

First, (as always) we must ignore BIP 301*. (Since it was invented to cancel 
point 1. Which it does -- by giving an incentive for side-validators and 
main-miners to UN-merge their businesses.)

With that out of the way, let's swap "blocksize increase" for "mining via
natural gas flaring":

1. Oil drillers and mainchain miners have a strong incentive** to merge their 
businesses.
2. Mainchain miners end up drilling for oil.
3. Ergo, sidechains on Drivechains become a requirement that full nodes mine
for oil.

The above logic is flawed, because full nodes can ignore the mining process. 
Nodes outrank miners.

Merged mining is, in principle, no different from any other source of mining 
profitability. I believe there is an irrational prejudice against merged 
mining, because MM takes the form of software. It would be like an NFL referee 
who refuses to allow their child to play an NFL videogame, on the grounds that 
the reffing in the game is different from how the parent would ref. But that 
makes no difference to anything. The only relevant issue is if the child has 
fun playing the videogame.

(And of course, merged mining long predates drivechain, and miners are MMing 
now, and have been for years. It was Satoshi who co-invented merged mining, so 
the modern prejudice against it is all the more mysterious.)


Also:

1.  The sidechain-to-mainchain peg degrades the security of sidechain
 users from consensus "everyone must agree to the rules" to democracy
 "if enough enfranchised voters say so, they can beat you up and steal
 your money".

In this write-up, I will...


This is also a mischaracterization.

Drivechain will not work if 51% hashrate is attacking the network. But that is 
the case for everything, including the Lightning Network***.

So there is no sense in which the security is "degraded". To establish that,
one would need arguments about what will probably happen and why. Which is
exactly what my original Nov 2015 article contains:
truthcoin.info/blog/drivechain/#drivechains-security, as does my Peer Review
section: https://www.drivechain.info/peer-review/peer-review-new/


(And, today Largeblocker-types do not have any "everyone must agree to the
rules" consensus at all. Anyone who wants to use a sidechain-feature today
must obtain it via Altcoin or via real-world trust. So the current security
is "nothing", and it is hard to see how that could be "degraded".)

--

I am not sure it is a good use of my time to talk to this list about 
Drivechain. My Nov 2015 article anticipated all of the relevant 
misunderstandings. Almost nothing has changed since then.

As far as I am concerned, Drivechain was simply ahead of its time. Eventually, 
one or more of the following --the problem of Altcoins, the human desire for 
freedom and creativity, the meta-consensus/upgrade/ossification problem, the 
problem of persistently low security budget, and/or the expressiveness of 
Bitcoin smart contracts-- will force Bitcoiners to relearn drivechain-lore and 
eventually adopt something drivechain-like. At which point I will write to 
historians to demand credit. That is my plan so far, at least.

--

As to the actual content of your post, it seems pro-Drivechain.

After all, you are saying that Recursive Covenants --> Turing Completeness --> 
Drivechain. So, which would you rather have? The hacky, bizarro covenant-Drivechain,
or my pure optimized transparent Bip300-Drivechain? Seems that this is exactly what I 
predicted: people eventually reinventing Drivechain.

On this topic, in 2015-2016 I wrote a few papers and gave a few recorded talks, in which I 
compared the uncontrollable destructive chaos of Turing Completeness, to a "categorical" 
Turing Completeness where contracts are sorted by category (ie, all of the BitName contracts in the 
Namecoin-sidechain, all of the oracle contracts in the oracle sidechain, etc). The categorical 
strategy allows, paradoxically (and perhaps counterintuitively), for more expressive contracts, 
since you can prevent smart contracts from attacking each other. (They must have a category, so if 
they aren't Name-contracts they cannot live in the Namecoin-sidechain -- they ultimately must live 
in an "Evil Sidechain", which the miners have motive and opportunity to simply disable.) 
If people are now talking about how Turing Completeness can lead to smart contracts attacking each 
other, then I suppose I was years ahead-of-my-time with that, as well. Incidentally, 

Re: [bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-23 Thread ZmnSCPxj via bitcoin-dev
Good morning Antoine,

> TLUV doesn't assume cooperation among the construction participants once the 
> Taproot tree is setup. EVICT assumes cooperation among the remaining 
> construction participants to satisfy the final CHECKSIG.
>
> So that would be a feature difference between TLUV and EVICT, I think ?

`OP_TLUV` leaves the transaction output with the remaining Tapleaves intact, 
and, optionally, with a point subtracted from Taproot internal pubkey.

In order to *truly* revive the construct, you need a separate transaction that 
spends that change output, and puts it back into a new construct.

See:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-February/003479.html
where I describe how this works.

That `OP_EVICT` does another `CHECKSIG` simply cuts through the separate 
transaction that `OP_TLUV` would require in order to revive the construct.

> > I thought it was part of Taproot?
>
> I checked BIP342 again, *as far as I can read* (unreliable process), it 
> sounds like it was proposed by BIP118 only.

*shrug* Okay!

> > A single participant withdrawing their funds unilaterally can do so by 
> > evicting everyone else (and paying for those evictions, as sort of a 
> > "nuisance fee").
>
> I see, I'm more interested in the property of a single participant 
> withdrawing their funds, without affecting the stability of the off-chain 
> pool and without cooperation with other users. This is currently a 
> restriction of the channel factories fault-tolerance. If one channel goes 
> on-chain, all the outputs are published.

See also: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-February/003479.html

Generally, the reason for a channel to go *onchain*, instead of just being 
removed inside the channel factory and its funds redistributed elsewhere, is 
that an HTLC/PTLC is about to time out.
The blockchain is really the only entity that can reliably enforce timeouts.

And, from the above link:

> * If a channel has an HTLC/PTLC time out:
>   * If the participant to whom the HTLC/PTLC is offered is
> offline, that may very well be a signal that it is unlikely
> to come online soon.
> The participant has strong incentives to come online before
> the channel is forcibly closed due to the HTLC/PTLC timeout,
> so if it is not coming online, something is very wrong with
> that participant and we should really evict the participant.
>   * If the participant to whom the HTLC/PTLC is offered is
> online, then it is not behaving properly and we should
> really evict the participant.

Note the term "evict" as well --- the remaining participants that are
presumably still behaving correctly (i.e. not letting HTLC/PTLC time out)
evict the participants that *are* letting them time out, and that is what
`OP_EVICT` does, as its name suggests.

Indeed, I came up with `OP_EVICT` *after* musing on the above link.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-23 Thread ZmnSCPxj via bitcoin-dev


Subject: Turing-Completeness, And Its Enablement Of Drivechains

Introduction
------------

Recently, David Harding challenged those opposed to recursive covenants
to give *actual*, *concrete* reasons why recursive covenants are a Bad
Thing (TM).

Generally, it is accepted that recursive covenants, together with the
ability to update loop variables, are sufficiently powerful to be
considered Turing-complete.
So, the question is: why is Turing-completeness bad, if it requires
*multiple* transactions in order to implement Turing-completeness?
Surely the practical matter that fees must be paid for each transaction
serves as a backstop against Turing-completeness?
i.e. Fees end up being the "maximum number of steps", which prevents a
language from becoming truly Turing-complete.

I point out here that Drivechains is implementable on a Turing-complete
language.
And we have already rejected Drivechains, for the following reason:

1.  Sidechain validators and mainchain miners have a strong incentive to
merge their businesses.
2.  Mainchain miners end up validating and committing to sidechain blocks.
3.  Ergo, sidechains on Drivechains become a block size increase.

Also:

1.  The sidechain-to-mainchain peg degrades the security of sidechain
users from consensus "everyone must agree to the rules" to democracy
"if enough enfranchised voters say so, they can beat you up and steal
your money".

In this write-up, I will demonstrate how recursive covenants, with
loop variable update, are sufficient to implement a form of Drivechains.
Logically, if the construct is general enough to form Drivechains, and
we rejected Drivechains, we should also reject the general construct.

Digression: `OP_TLUV` And `OP_CAT` Implement Recursive Covenants
----------------------------------------------------------------

Let me now do some delaying tactics and demonstrate how `OP_TLUV` and
`OP_CAT` allow building recursive covenants by quining.

`OP_TLUV` has a mode where the current Tapleaf is replaced, and the
new address is synthesized.
Then, an output of the transaction is validated to check that it has
the newly-synthesized address.

Let me sketch how a simple recursive covenant can be built.
First, we split the covenant into three parts:

1.  A hash.
2.  A piece of script which validates that the first witness item
hashes to the above given hash in part #1, and then pushes that
item into the alt stack.
3.  A piece of script which takes the item from the alt stack,
hashes it, then concatenates a `OP_PUSH` of the hash to that
item, then does a replace-mode `OP_TLUV`.

Parts 1 and 2 must directly follow each other, but other SCRIPT
logic can be put in between parts 2 and 3.
Part 3 can even occur multiple times, in various `OP_IF` branches.

In order to actually recurse, the top item in the witness stack must
be the covenant script, *minus* the hash.
This is supposed to be the quining argument.

The covenant script part #2 then checks that the quining argument
matches the hash that is hardcoded into the SCRIPT.
This hash is the hash of the *rest* of the SCRIPT.
If the quining argument matches, then it *is* the SCRIPT minus its
hash, and we know that we can use that to recreate the original SCRIPT.
The quining argument is then pushed out of the way into the alt stack.

Part #3 then recovers the quining argument from the alt stack, and
resynthesizes the original SCRIPT.
The `OP_TLUV` is then able to resynthesize the original address.
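
To make the hash-then-concatenate dance concrete, here is a small Python
model of the quining step (plain Python standing in for the hypothetical
`OP_CAT`/`OP_TLUV` script; the helper names and the choice of SHA256 are my
own illustrative assumptions, not deployed Script):

    import hashlib

    def sha256(data):
        return hashlib.sha256(data).digest()

    def push(data):
        # stand-in for an OP_PUSH of `data` (direct push, length < 76 bytes)
        return bytes([len(data)]) + data

    # Parts 2 and 3 of the covenant, i.e. the SCRIPT minus its hardcoded hash.
    # In a real script this would be serialized opcodes; here it is an opaque
    # byte string, since only the hashing/concatenation structure matters.
    script_body = b"<check-hash, toaltstack, ..., replace-mode OP_TLUV>"

    # Part 1: the full SCRIPT hardcodes the hash of the rest of itself.
    full_script = push(sha256(script_body)) + script_body

    # The spender provides the body as the quining witness argument.
    witness_quine = script_body

    # Part 2: check the witness against the hardcoded hash, proving that it
    # *is* the SCRIPT minus its hash.
    assert sha256(witness_quine) == sha256(script_body)

    # Part 3: resynthesize the original SCRIPT from the witness; OP_TLUV would
    # then recompute the address and require it on an output.
    assert push(sha256(witness_quine)) + witness_quine == full_script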

Updating Loop Variables
-----------------------

But repeating the same SCRIPT over and over is boring.

What is much more interesting is to be able to *change* the SCRIPT
on each iteration, such that certain values on the SCRIPT can be
changed.

Suppose our SCRIPT has a loop variable `i` that we want to change
each time we execute our SCRIPT.

We can simply put this loop variable after part 1 and before part 2.
Then part 2 is modified to first push this loop variable onto the
alt stack.

The SCRIPT that gets checked always starts from part 2.
Thus, the SCRIPT, minus the loop variable, is always constant.
The SCRIPT can then access the loop variable from the alt stack.
Part 2 can be extended so that the loop variable is on top of the
quined SCRIPT on the alt stack.
This lets the SCRIPT easily access the loop variable.
The SCRIPT can also update the loop variable by replacing the top
of the alt stack with a different item.

Then part 3 first pops the alt stack top (the loop variable),
concatenates it with an appropriate push, then performs the
hash-then-concatenate dance.
This results in a SCRIPT that is the same as the original SCRIPT,
but with the loop variable possibly changed.

The SCRIPT can use multiple loop variables; it is simply a question
of how hard they would be to access from the alt stack.
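
Continuing the same Python sketch, the loop variable simply sits between
the hardcoded hash and the constant body, so the quined portion never
changes while the variable can be rewritten on each iteration (again
purely illustrative, not real Script):

    import hashlib

    def sha256(data):
        return hashlib.sha256(data).digest()

    def push(data):
        return bytes([len(data)]) + data

    script_body = b"<check-hash, toaltstack, ..., replace-mode OP_TLUV>"

    def make_script(i):
        # part 1 (hash of the constant body), then the loop variable,
        # then the constant body (parts 2 and 3)
        loop_var = i.to_bytes(4, "little")
        return push(sha256(script_body)) + push(loop_var) + script_body

    # Each spend resynthesizes the same SCRIPT with an updated loop variable;
    # only the pushed value of `i` differs between iterations.
    current, updated = make_script(7), make_script(8)
    assert current != updated
    assert current[:33] == updated[:33]   # hardcoded hash unchanged (33 bytes)
    assert current[38:] == updated[38:]   # constant body unchanged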

Drivechains Over Recursive Covenants
------------------------------------

Drivechains can be split into four parts:

1.  A way to commit to the sidechain blocks.
2.  A way to move funds from