Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-17 Thread ZmnSCPxj via bitcoin-dev
Good morning Dave,

> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev wrote:
>
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist). Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.

Let me try to give that a shot.

(Just to be clear, I am not an artificial intelligence, and thus am not one of 
the "intelligent such people".)

The objection here is that recursion can admit partial computation (i.e. 
Turing-complete computation, in which programs are not guaranteed to halt).
Turing-completeness implies that the halting problem cannot be solved for 
arbitrary programs in the language.

Now, a counter-argument to that is that rather than using arbitrary programs, 
we should just construct programs from provably-terminating components.
Thus, even though the language may admit arbitrary programs that cannot 
provably terminate, "wise" people will just focus on using that subset of the 
language, and programming styles within the language, which have proofs of 
termination.
Or in other words: people can just avoid accepting coins encumbered with a 
SCRIPT that cannot be trivially shown to be non-recursive.

The counter-counter-argument is that this leaves such validation to the user, and 
we should really create automation (i.e. lower-level non-sentient programs) to 
perform that validation on behalf of the user.
***OR*** we could just design our language so that such things are outright 
rejected by the language as a semantic error, in the same way that `for (int x = 
0; x = y; x++);` is an error that most modern C compilers will reject if given 
`-Wall -Werror`.


Yes, we want users to have freedom to shoot themselves in the feet, but we also 
want, when it is our turn to be the users, to keep walking with two feet as 
long as we can.

And yes, you could instead build a *separate* tool that checks if your SCRIPT 
can be proven to be non-recursive, and let the recursive construct remain in 
the interpreter and just require users who don't want their feet shot to use 
the separate tool.
That is certainly a valid alternate approach.
It is certainly valid to argue as well that, if a possibly-recursive construct 
is used and you cannot find a proof-of-non-recursion, you should avoid coins 
encumbered with that SCRIPT (which is just a heuristic that approximates a tool 
for proof-of-non-recursion).
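
As a purely illustrative sketch of what such a separate tool (or built-in
check) could look like -- the script model and opcode names below are
hypothetical, not any existing or proposed encoding -- a conservative checker
could simply refuse anything outside a whitelist of operations known to
terminate:

```python
# Toy model only: a "script" is a list of opcode names.  The check is
# deliberately conservative: anything not on the whitelist (e.g. a
# hypothetical unbounded-loop or self-referencing covenant opcode) fails,
# even if it might happen to terminate.
TERMINATING_OPS = {
    "OP_DUP", "OP_HASH160", "OP_EQUALVERIFY", "OP_CHECKSIG",
    "OP_CHECKTEMPLATEVERIFY",   # commits to a fixed next transaction only
}

def provably_non_recursive(script):
    """Accept only scripts built entirely from whitelisted, terminating ops."""
    return all(op in TERMINATING_OPS for op in script)

print(provably_non_recursive(["OP_DUP", "OP_HASH160", "OP_EQUALVERIFY",
                              "OP_CHECKSIG"]))            # True
print(provably_non_recursive(["OP_PUSH_CURRENT_SCRIPT",   # hypothetical quine op
                              "OP_CHECKTEMPLATEVERIFY"])) # False
```

The check is deliberately conservative: it rejects scripts it cannot prove
terminating, not merely scripts proven non-terminating.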

On the other hand, if we have the ability to identify SCRIPTs that have some 
proof-of-non-recursion, why is such a tool not built into the interpreter 
itself (in the form of operations that are provably non-recursive)? Why have a 
separate tool that people might be too lazy to actually use?


Regards,
ZmnSCPxj


Re: [bitcoin-dev] A suggestion to periodically destroy (or remove to secondary storage for Archiving reasons) dust, Non-standard UTXOs, and also detected burn

2022-02-17 Thread ZmnSCPxj via bitcoin-dev
Good morning shymaa,

> I just want to add an alarming info to this thread...
>
> There are at least 5.7m UTXOs ≤ 1000 sats (~7%),
> 8.04m ≤ $1 (10%),
> 13.5m ≤ 0.0001 BTC (17%)
>
> It seems that bitInfoCharts took my enquiry seriously and added a main link 
> for dust analysis:
> https://bitinfocharts.com/top-100-dustiest-bitcoin-addresses.html
> Here, you can see just the first address contains more than 1.7m dust UTXOs
> (ins-outs =1,712,706 with a few real UTXOs holding the bulk of 415 BTC) 
> https://bitinfocharts.com/bitcoin/address/1HckjUpRGcrrRAtFaaCAUaGjsPx9oYmLaZ
>
> That's alarming, isn't it? Is it due to the Lightning Network protocol, or
> could there be some other weird activity going on?

I believe some blockchain tracking analysts will "dust" addresses that were 
spent from (give them 546 sats), in the hope that a lousy wallet will later 
spend the new 546-sat UTXO from the same address, sending to a different 
address and combining it with *other* inputs at new addresses, thus allowing 
the analysts to grow their datasets about fund ownership.

Indeed, JoinMarket has a policy of ignoring, by default, UTXOs that pay to an 
address it has already spent from, precisely because of this practice (which is 
apparently common, since my JoinMarket maker has been dusted a number of times 
already).

I am personally unsure how common this is, but it seems likely that you could 
eliminate this effect from the statistics by filtering out outputs of exactly 
546 sats paying to reused addresses.
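
For anyone reproducing such statistics, a minimal sketch of that filter might
look like the following (the UTXO record layout and the set of spent-from
addresses are assumptions about whatever index is being queried, not any
particular tool's API):

```python
DUST_ATTACK_AMOUNT = 546  # sats; the common dusting amount mentioned above

def drop_probable_dust_attacks(utxos, spent_from_addresses):
    """Filter out UTXOs of exactly 546 sats paying to an address that has
    already been spent from; keep everything else.

    `utxos` is an iterable of (txid, vout, address, amount_sats) tuples and
    `spent_from_addresses` is a set of addresses with at least one prior
    outgoing spend.
    """
    return [u for u in utxos
            if not (u[3] == DUST_ATTACK_AMOUNT and u[2] in spent_from_addresses)]

# Example: the 546-sat output to a reused address is dropped, the larger one kept.
utxos = [("aa" * 32, 0, "1Hckj...", 546),
         ("bb" * 32, 1, "1Hckj...", 4_150_000_000)]
print(drop_probable_dust_attacks(utxos, {"1Hckj..."}))
```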

Regards,
ZmnSCPxj


[bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-17 Thread ZmnSCPxj via bitcoin-dev
`OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`
=======================================================

In late 2021, `aj` proposed `OP_TAPLEAFUPDATEVERIFY` in order to
implement CoinPools and similar constructions.

`Jeremy` observed that due to the use of Merkle tree paths, an
`OP_TLUV` would require O(log N) hash revelations in order to
reach a particular tapleaf, which, in the case of a CoinPool,
would then delete itself after spending only a particular amount
of funds.
He then observed that `OP_CTV` trees also require a similar
revelation of O(log N) transactions, but with the advantage that
once revealed, the transactions can then be reused, thus the
expectation is that the total number of bytes onchain is lower
compared to `OP_TLUV`.

After some thinking, I realized that it was the use of the
Merkle tree to represent the promised-but-offchain outputs of
the CoinPool that led to the O(log N) space usage.
I then started thinking of alternative representations of
sets of promised outputs, which would not require O(log N)
revelations by avoiding the tree structure.
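
To make the O(log N) cost concrete, a quick back-of-the-envelope
sketch (purely illustrative; 32 bytes is just the size of one sibling
hash on a Merkle path):

```python
import math

def merkle_path_bytes(n_promised_outputs, hash_size=32):
    """Rough size of the sibling hashes revealed to reach one leaf of a
    balanced Merkle tree over n_promised_outputs leaves."""
    return math.ceil(math.log2(n_promised_outputs)) * hash_size

for n in (8, 64, 1024):
    print(n, merkle_path_bytes(n))   # 8 -> 96, 64 -> 192, 1024 -> 320
```

Every exit from the pool pays roughly that path cost under a tree
commitment, which is the cost the alternative commitment explored
below avoids.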

Promised Outputs
----------------

Fundamentally, we can consider that a solution for scaling
Bitcoin would be to *promise* that some output *can* appear
onchain at some point in the future, without requiring that the
output be shown onchain *right now*.
Then, we can perform transactional cut-through on spends of the
promised outputs, without requiring onchain activity ("offchain").
Only if something Really Bad (TM) happens do we need to actually
drop the latest set of promised outputs onchain, where it has to
be verified globally by all fullnodes (and would thus incur scaling
and privacy costs).

As an example of the above paradigm, consider the Lightning
Network.
Outputs representing the money of each party in a channel are
promised, and *can* appear onchain (via the unilateral close
mechanism).
In the meantime, there is a mechanism for performing cut-through,
allowing transfers between channel participants; any number of
transactions can be performed that are only "solidified" later,
without expensive onchain activity.

Thus:

* A CoinPool is really a way to commit to promised outputs.
  To change the distribution of those promised outputs, the
  CoinPool operators need to post an onchain transaction, but
  that is only a 1-input-1-output transaction, and with Schnorr
  signatures the single input requires only a single signature.
  But in case something Really Bad (TM) happens, any participant
  can unilaterally close the CoinPool, instantiating the promised
  outputs.
* A statechain is really just a CoinPool hosted inside a
  Decker-Wattenhofer or Decker-Russell-Osuntokun construction.
  This allows changing the distribution of those promised outputs
  without using an onchain transaction --- instead, a new state
  in the Decker-Wattenhofer/Decker-Russell-Osuntokun construction
  is created containing the new distribution, which invalidates all
  older states.
  Again, any participant can unilaterally shut it down, exposing
  the state of the inner CoinPool.
* A channel factory is really just a statechain where the
  promised outputs are not simple 1-of-1 single-owner outputs,
  but are rather 2-of-2 channels.
  This allows graceful degradation, where even if the statechain
  ("factory") layer has missing participants, individual 2-of-2
  channels can still continue operating as long as they do not
  involve missing participants, without requiring all participants
  to be online for large numbers of transactions.

We can then consider that the base CoinPool usage should be enough,
as other mechanisms (`OP_CTV`+`OP_CSFS`, `SIGHASH_NOINPUT`) can be
used to implement statechains and channels and channel factories.

I therefore conclude that what we really need is "just" a way to
commit ourselves to exposing a set of promised outputs, with the
proviso that if we all agree, we can change that set (without
requiring that the current or next set be exposed, for both
scaling and privacy).

(To Bitcoin Cashers: this is not an IOU; this is *committed* and
can be enforced onchain, which is enough to threaten your offchain
counterparties into behaving correctly.
They cannot gain anything by denying the outputs they promised;
you can always drop the commitment onchain and have it enforced,
thus it is not merely an IOU, as IOUs are not necessarily
enforceable, but this mechanism *would* be.
Blockchain as judge+jury+executioner, not noisy marketplace.)

Importantly: both `OP_CTV` and `OP_TLUV` force the user to
decide on a particular, but ultimately arbitrary, ordering for
promised outputs.
In principle, a set of promised outputs, if the owners of those
outputs are peers, does not have *any* inherent order.
Thus, I started to think about a commitment scheme that does not
impose any ordering during commitment.

Digression: N-of-N With Eviction


An issue with using an N-of-N construction is that if any single

[bitcoin-dev] CTV Signet Parameters

2022-02-17 Thread Jeremy Rubin via bitcoin-dev
Hi devs,

I have been running a CTV signet for around a year and it's seen little
use. Early on I had some issues syncing new nodes, but I have verified that
new nodes can sync to this signet using
https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-signet-23.0-alpha.
Please use this signet!

```
[signet]
signetchallenge=512102946e8ba8eca597194e7ed90377d9bbebc5d17a9609ab3e35e706612ee882759351ae
addnode=50.18.75.225
```

This should be operational. Let me know if you experience any issues
(most likely with signet itself, but possibly with CTV too).

Feel free to also email me an address and I can send you some signet coins
-- if anyone is interested in running an automatic faucet I would love help
with that and will send you a lot of coins.

AJ Wrote (in another thread):

>  I'd much rather see some real
>   third-party experimentation *somewhere* public first, and Jeremy's CTV
>   signet being completely empty seems like a bad sign to me. Maybe that
>   means we should tentatively merge the feature and deploy it on the
>   default global signet though?  Not really sure how best to get more
>   real world testing; but "deploy first, test later" doesn't sit right.

I agree that real experimentation would be great, and think that merging
the code (w/o activation) for signet would likely help users versus custom
builds/parameters.

I am unsure that "learning in public" is required -- personally I do
experiments on regtest regularly and on mainnet (using emulators) more
occasionally. I think some of the difficulty is that for signet experiments
you need to wait e.g. 10 minutes for blocks, source faucet coins, etc.,
whereas on regtest you can make tests that run automatically. Maybe more
sample regtest RPC tests would be a sufficient in-between?
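
As one possible shape for that in-between, here is a minimal regtest
automation sketch that uses only standard Bitcoin Core JSON-RPC calls (it
assumes a local node started with `-regtest -rpcuser=user -rpcpassword=pass`;
anything CTV-specific would of course depend on the particular branch being
tested and is not shown):

```python
import base64
import json
import urllib.request

def rpc(method, params=None, url="http://127.0.0.1:18443",
        user="user", password="pass"):
    """Tiny JSON-RPC client for a local regtest bitcoind."""
    body = json.dumps({"jsonrpc": "1.0", "id": "ctv-test",
                       "method": method, "params": params or []}).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={"content-type": "application/json"})
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Mine past coinbase maturity and check the balance -- no waiting for real
# blocks and no faucet coins needed.
rpc("createwallet", ["ctv-test"])
addr = rpc("getnewaddress")
rpc("generatetoaddress", [101, addr])
print(rpc("getbalance"))
```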


Best,

Jeremy

--
@JeremyRubin 


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-17 Thread James O'Beirne via bitcoin-dev
> Is it really true that miners do/should care about that?

De facto, any miner running an unmodified version of bitcoind doesn't
care about anything aside from ancestor fee rate, given that the
BlockAssembler as-written orders transactions for inclusion by
descending ancestor fee-rate and then greedily adds them to the block
template. [0]

If anyone has any indication that there are miners running forks of
bitcoind that change this behavior, I'd be curious to know it.

Along the lines of what AJ wrote, optimal transaction selection is
NP-hard (knapsack problem). Any time that a miner spends deciding how
to assemble the next block is time not spent grinding on the nonce, and
so I'm skeptical that miners in practice are currently doing anything
that isn't fast and simple like the default implementation: sorting
by fee rate in descending order and then greedily packing.

But it would be interesting to hear evidence to the contrary.
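
For illustration, a stripped-down version of that default strategy might look
like the sketch below. This is not the real BlockAssembler at [0] -- it ignores
score updates as ancestors get included, sigops limits, and so on -- and the
per-tx fields (`fee`, `vsize`, `depends`) are just an assumed in-memory
representation:

```python
def ancestor_package(tx, mempool, included):
    """tx plus its not-yet-included unconfirmed ancestors."""
    todo, pkg = [tx], {}
    while todo:
        t = todo.pop()
        if t["txid"] in included or t["txid"] in pkg:
            continue
        pkg[t["txid"]] = t
        todo.extend(mempool[p] for p in t["depends"] if p in mempool)
    return list(pkg.values())

def greedy_template(mempool, max_vsize=1_000_000):
    """Sort by ancestor fee rate (descending) and greedily add whole
    ancestor packages while they still fit."""
    def ancestor_fee_rate(tx):
        pkg = ancestor_package(tx, mempool, set())
        return sum(t["fee"] for t in pkg) / sum(t["vsize"] for t in pkg)

    included, used = set(), 0
    for tx in sorted(mempool.values(), key=ancestor_fee_rate, reverse=True):
        pkg = ancestor_package(tx, mempool, included)
        size = sum(t["vsize"] for t in pkg)
        if pkg and used + size <= max_vsize:
            included.update(t["txid"] for t in pkg)
            used += size
    return included
```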

---

You can make the argument that transaction selection is just a function
of mempool contents, and so mempool maintenance criteria might be the
thing to look at. Mempool acceptance is gated based on a minimum
feerate[1].  Mempool eviction (when running low on space) happens on
the basis of max(self_feerate, descendant_feerate) [2]. So even in the
mempool we're still talking in terms of fee rates, not absolute fees.

That presents us with the "is/ought" problem: just because the mempool
*is* currently gating only on fee rate doesn't mean that's optimal. But
if the whole point of the mempool is to hold transactions that will be
mined, and if there's good reason that txns are chosen for mining based
on fee rate (it's quick and good enough), then it seems like fee rate
is the approximation that should ultimately prevail for txn
replacement.


[0]:
https://github.com/bitcoin/bitcoin/blob/master/src/node/miner.cpp#L310-L320
[1]:
https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1106
[2]:
https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1138-L1144


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-17 Thread Anthony Towns via bitcoin-dev
On Fri, Feb 11, 2022 at 12:12:28PM -0600, digital vagabond via bitcoin-dev 
wrote:
> Imagine a covenant design that was
> flexible enough to create an encumbrance like this: a script specifies a
> specific key in a multisig controlled by some authority figure (or a branch
> in the script that would allow unilateral control by such an authority),
> and the conditions of the covenant would perpetually require that any spend
> from the covenant can only be sent to a script involving that key from said
> authority, preventing by consensus any removal of that central authority's
> involvement in control over that UTXO.

> I know that such a walled garden could easily be constructed now with
> multisig and restrictions on where coins can be withdrawn to from exchanges
> or whatever [...], but I think the important distinction
> between such non-consensus system designed to enforce such restrictions and
> a recursive covenant to accomplish the same is that in the case of a
> multisig/non-consensus based system, exit from that restriction is still
> possible under the consensus rules of the protocol.

I think that sort of encumbrance is already possible: you send bitcoin
to an OP_RETURN address and that is registered on some other system as a
way of "minting" coins there (ie, "proof of burn") at which point rules
other than bitcoin's apply. Bitcoin consensus guarantees the value can't
be extracted back out of the OP_RETURN value.

I think spacechains effectively takes up this concept for their one-way
peg:

  https://bitcoin.stackexchange.com/questions/100537/what-is-spacechain

  
https://medium.com/@RubenSomsen/21-million-bitcoins-to-rule-all-sidechains-the-perpetual-one-way-peg-96cb2f8ac302

(I think spacechains requires a covenant construct to track the
single-tx-per-bitcoin-block that commits to the spacechain, but that's
not directly used for the BTC value that was pegged into the spacechain)

If we didn't have OP_RETURN, you could instead pay to a pubkey that's
constructed from a NUMS point or a Pedersen commitment, which is (roughly)
guaranteed unspendable under bitcoin's consensus rules, at least until
secp256k1 is broken (with the obvious disadvantage that nodes then can't
remove these outputs from the utxo set).
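
For reference, a minimal sketch of one common NUMS-point construction (hash a
public tag to a candidate x coordinate and increment until it lands on the
curve; nobody knows a discrete log for the result unless the tag was
contrived) -- illustration only, not a description of any particular deployed
scheme:

```python
import hashlib

P = 2**256 - 2**32 - 977  # secp256k1 field prime

def is_valid_x(x):
    """True if x is the x coordinate of some secp256k1 point, i.e. if
    x^3 + 7 is a quadratic residue mod P (Euler's criterion)."""
    rhs = (pow(x, 3, P) + 7) % P
    return pow(rhs, (P - 1) // 2, P) == 1

def nums_x(tag: bytes) -> int:
    """Derive a nothing-up-my-sleeve x coordinate from a public tag."""
    ctr = 0
    while True:
        h = hashlib.sha256(tag + ctr.to_bytes(4, "big")).digest()
        x = int.from_bytes(h, "big")
        if x < P and is_valid_x(x):
            return x
        ctr += 1

print(hex(nums_x(b"provably unspendable, nothing up my sleeve")))
```

Paying to such a key burns the value just as surely as OP_RETURN does, but the
output lingers in the utxo set, which is the disadvantage noted above.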

That was also used for XCP/Counterparty's ICO in 2014, at about 823 uBTC
per XCP on average (depending on when you got in it was between 666
uBTC/XCP and 1000 uBTC/XCP apparently), falling to a current price of
about 208 uBTC per XCP. It was about 1000 uBTC/XCP until mid 2018 though.

  https://counterparty.io/news/why-proof-of-burn/
  https://github.com/CounterpartyXCP/Documentation/blob/master/Basics/FAQ-XCP.md

These seem like they might be bad things for people to actually do
(why would you want to be paid to mine a spacechain in coins that can
only fall in value relative to bitcoin?), and certainly I don't think
we should do things just to make this easier; but it seems more like a
"here's why you're hurting yourself if you do this" thing, rather than a
"we can prevent you from doing it and we will" thing.

Cheers,
aj



Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-17 Thread Russell O'Connor via bitcoin-dev
On Thu, Feb 17, 2022 at 9:27 AM Anthony Towns  wrote:

>
> I guess that's all partly dependent on thinking that, TXHASH isn't
> great for tx introspection (especially without CAT) and, (without tx
> introspection and decent math opcodes), DLCs already provide all the
> interesting oracle behaviour you're really going to get...
>

You left out CSFSV's ability to do pubkey delegation.


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-17 Thread Anthony Towns via bitcoin-dev
On Thu, Feb 10, 2022 at 07:12:16PM -0500, Matt Corallo via bitcoin-dev wrote:
> This is where *all* the complexity comes from. If our goal is to "ensure a
> bump increases a miner's overall revenue" (thus not wasting relay for
> everyone else), then we precisely *do* need
> > Special consideration for "what should be in the next
> > block" and/or the caching of block templates seems like an imposing
> > dependency
> Whether a transaction increases a miner's revenue depends precisely on
> whether the transaction (package) being replaced is in the next block - if
> it is, you care about the absolute fee of the package and its replacement.

On Thu, Feb 10, 2022 at 11:44:38PM +, darosior via bitcoin-dev wrote:
> It's not that simple. As a miner, if I have less than 1vMB of transactions in 
> my mempool, I don't want a 10sats/vb transaction paying 100k sats replaced by 
> a 100sats/vb transaction paying only 10k sats.

Is it really true that miners do/should care about that?

If you did this particular example, the miner would be losing 90k sats
in fees, which would be about 1.44 *hundredths* of a percent of the block
reward (90,000 / 625,000,000 sats ≈ 0.0144%) with the subsidy at 6.25BTC
per block, even if there were no other transactions in the mempool. Even
cumulatively, 10sats/vb over
1MB versus 100sats/vb over 10kB is only a 1.44% loss of block revenue.

I suspect the "economically rational" choice would be to happily trade
off that immediate loss against even a small chance of a simpler policy
encouraging higher adoption of bitcoin, _or_ a small chance of more
on-chain activity due to higher adoption of bitcoin protocols like
lightning and thus a lower chance of an empty mempool in future.

If the network has an "empty mempool" (say less than 2MvB-10MvB of
backlog even if you have access to every valid 1+ sat/vB tx on any node
connected to the network), then I don't think you'll generally have txs
with fee rates greater than ~20 sat/vB (ie 20x the minimum fee rate),
which means your maximum loss is about 3% of block revenue, at least
while the block subsidy remains at 6.25BTC/block.

Certainly those percentages can be expected to double every four years as
the block reward halves (assuming we don't also reduce the min relay fee
and block min tx fee), but I think for both miners and network stability,
it'd be better to have the mempool backlog increase over time, which
would both mean there's no/less need to worry about the special case of
the mempool being empty, and give a better incentive for people to pay
higher fees for quicker confirmations.

If we accept that logic (and assuming we had some additional policy
to prevent p2p relay spam due to replacement txs), we could make
the mempool accept policy for replacements just be (something like)
"[package] feerate is greater than max(descendent fee rate)", which
seems like it'd be pretty straightforward to deal with in general?



Thinking about it a little more: if the decision as to whether you want
to keep a "100kvB at 10sat/vb" tx or a conflicting "1kvB at 100sat/vb" tx
in your mempool is going to take into account unrelated, lower-fee-rate
txs that are also in the mempool, that makes block building "more" of an
NP-hard problem and makes the greedy solution we've currently got much
more suboptimal -- if you really want to do that optimally, I think you
have to have a mempool that retains conflicting txs and runs a dynamic
programming solution to pick the best set, rather than today's simple
greedy algorithms both for building the block and populating the mempool?

For example, if you had two such replacements come through the network,
a miner could want to flip from initially accepting the first replacement,
to unaccepting it:

Initial mempool: two big txs of 100kvB each, plus many small transactions
at 15s/vB and 1s/vB:

 [100kvB at 20s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
   -> 0.148 BTC for 1MvB (100*20 + 850*15 + 50*1)

Replacement for the 20s/vB tx paying a higher fee rate but lower total
fee; that's worth including:

 [10kvB at 100s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
   -> 0.1499 BTC for 1MvB (10*100 + 850*15 + 100*12 + 40*1)

Later, replacement for the 12s/vB tx comes in, also paying higher fee
rate but lower total fee. Worth including, but only if you revert the
original replacement:

 [100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
   -> 0.1575 BTC for 1MvB (150*20 + 850*15)

 [10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
   -> 0.1484 BTC for 1MvB (10*100 + 50*20 + 850*15 + 90*1)
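
To double-check the arithmetic, here is a small sketch that reproduces those
four figures under the same simplifying assumptions (the individually listed
txs are atomic, the 15s/vB and 1s/vB "small transactions" are treated as
divisible filler, and selection is greedy by fee rate):

```python
def block_fees(entries, block_vsize=1_000_000):
    """entries: (vsize_vB, feerate_sat_per_vB, divisible) tuples.
    Greedy by fee rate; divisible entries may be partially included.
    Returns total fees in BTC."""
    space, fees = block_vsize, 0
    for vsize, rate, divisible in sorted(entries, key=lambda e: e[1], reverse=True):
        take = min(vsize, space) if divisible else (vsize if vsize <= space else 0)
        fees += take * rate
        space -= take
    return fees / 100_000_000

filler = [(850_000, 15, True), (1_000_000, 1, True)]
print(block_fees([(100_000, 20, False), (100_000, 12, False)] + filler))  # 0.148
print(block_fees([(10_000, 100, False), (100_000, 12, False)] + filler))  # 0.1499
print(block_fees([(100_000, 20, False), (50_000, 20, False)] + filler))   # 0.1575
print(block_fees([(10_000, 100, False), (50_000, 20, False)] + filler))   # 0.1484
```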

Algorithms/mempool policies you might have, and their results with
this example:

 * current RBF rules: reject both replacements because they don't
   increase the absolute fee, thus get the minimum block fees of
   0.148 BTC

 * reject RBF unless it increases the fee rate, and get 0.1484 BTC in
   fees

 * reject RBF if it's lower fee rate or immediately decreases the block
   reward: so, accept the 

Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-17 Thread Anthony Towns via bitcoin-dev
On Mon, Feb 07, 2022 at 09:16:10PM -0500, Russell O'Connor via bitcoin-dev 
wrote:
> > > For more complex interactions, I was imagining combining this TXHASH
> > > proposal with CAT and/or rolling SHA256 opcodes.
> Indeed, and we really want something that can be programmed at redemption
> time.

I mean, ideally we'd want something that can be flexibly programmed at
redemption time, in a way that requires very few bytes to express the
common use cases, is very efficient to execute even if used maliciously,
is hard to misuse accidently, and can be cleanly upgraded via soft fork
in the future if needed?

That feels like it's probably got a "fast, cheap, good" paradox buried
in there, but even if it doesn't, it doesn't seem like something you
can really achieve by tweaking around the edges?

> That probably involves something like how the historic MULTISIG worked by
> having list of input / output indexes be passed in along with length
> arguments.
> 
> I don't think there will be problems with quadratic hashing here because as
> more inputs are listed, the witness in turn grows larger itself.

If you cache the hash of each input/output, it would mean each byte of
the witness would be hashing at most an extra 32 bytes of data pulled
from that cache, so I think you're right. Three bytes of "script" can
already cause you to rehash an additional ~500 bytes (DUP SHA256 DROP),
so that should be within the existing computation-vs-weight relationship.

If you add the ability to hash a chosen output (as Rusty suggests, and
which would allow you to simulate SIGHASH_GROUP), you'd probably have to
increase your cache to cover each output's scriptPubKey simultaneously,
which might be annoying, but doesn't seem fatal.

> That said, your SIGHASH_GROUP proposal suggests that some sort of
> intra-input communication is really needed, and that is something I would
> need to think about.

I think the way to look at it is that it trades off spending an extra
witness byte or three per output (your way, give or take) vs only being
able to combine transactions in limited ways (sighash_group), but being
able to be more optimised than the more manual approach.

That's a fine tradeoff to make for something that's common -- you
save onchain data, make something easier to use, and can optimise the
implementation so that it handles the common case more efficiently.

(That's a bit of a "premature optimisation" thing though -- we can't
currently do SIGHASH_GROUP style things, so how can you sensibly justify
optimising it because it's common, when it's not only currently not
common, but also not possible? That seems to me a convincing reason to
make script more expressive)

> While normally I'd be hesitant about this sort of feature creep, when we
> are talking about doing soft-forks, I really think it makes sense to think
> through these sorts of issues (as we are doing here).

+1

I guess I especially appreciate your goodwill here, because this has
sure turned out to be a pretty long message as I think some of these
things through out loud :)

> > "CAT" and "CHECKSIGFROMSTACK" are both things that have been available in
> > elements for a while; has anyone managed to build anything interesting
> > with them in practice, or are they only useful for thought experiments
> > and blog posts? To me, that suggests that while they're useful for
> > theoretical discussion, they don't turn out to be a good design in
> > practice.
> Perhaps the lesson to be drawn is that languages should support multiplying
> two numbers together.

Well, then you get to the question of whether that's enough, or if
you need to be able to multiply bignums together, etc? 

I was looking at uniswap-like things on liquid, and wanted to do constant
product for multiple assets -- but you already get the problem that "x*y
< k" might overflow if the output values x and y are ~50 bits each, and
that gets worse with three assets and wanting to calculate "x*y*z < k",
etc. And really you'd rather calculate "a*log(x) + b*log(y) + c*log(z)
< k" instead, which then means implementing fixed point log in script...

> Having 2/3rd of the language you need to write interesting programs doesn't
> mean that you get 2/3rd of the interesting programs written.

I guess to abuse that analogy: I think you're saying something like
we've currently got 67% of an ideal programming language, and CTV
would give us 68%, but that would only take us from 10% to 11% of the
interesting programs. I agree txhash might bump that up to, say, 69%
(nice) but I'm not super convinced that even moves us from 11% to 12%
of interesting programs, let alone a qualitative leap to 50% or 70%
of interesting programs.

It's *possible* that the ideal combination of opcodes will turn out to
be CAT, TXHASH, CHECKSIGFROMSTACK, MUL64LE, etc, but it feels like it'd
be better working something out that fits together well, rather than
adding things piecemeal and hoping we don't spend all that effort to
end up in a local optimum