Re: [bitcoin-dev] Package Relay Proposal

2022-05-17 Thread Anthony Towns via bitcoin-dev
On Tue, May 17, 2022 at 12:01:04PM -0400, Gloria Zhao via bitcoin-dev wrote:
> New Messages
> Three new protocol messages are added for use in any version of
> package relay. Additionally, each version of package relay must define
> its own inv type and "pckginfo" message version, referred to in this
> document as "MSG_PCKG" and "pckginfo" respectively. See
> BIP-v1-packages for a concrete example.

The "PCKG" abbreviation threw me for a loop; isn't the usual
abbreviation "PKG" ?

> =sendpackages=
> |version || uint32_t || 4 || Denotes a package version supported by the node.
> |max_count || uint32_t || 4 || Specifies the maximum number of transactions per package this node is willing to accept.
> |max_weight || uint32_t || 4 || Specifies the maximum total weight per package this node is willing to accept.

Does it make sense for these to be configurable, rather than implied
by the version? 

I presume the idea is to cope with people specifying different values for
-limitancestorcount or -limitancestorsize, but if people are regularly
relaying packages around, it seems like it becomes hard to have those
values really be configurable while being compatible with that?

I guess I'm asking: would it be better to either just not do sendpackages
at all if you're limiting ancestors in the mempool incompatibly; or
alternatively, would it be better to do the package relay, then reject
the particular package if it turns out too big, and log that you've
dropped it so that the node operator has some way of realising "whoops,
I'm not relaying packages properly because of how I configured my node"?

> 5. If 'fRelay==false' in a peer's version message, the node must not
>    send "sendpackages" to them. If a "sendpackages" message is
>    received by a peer after sending `fRelay==false` in their version
>    message, the sender should be disconnected.

Seems better to just say "if you set fRelay=false in your version
message, you must not send sendpackages"? You already won't do packages
with the peer if they don't also announce sendpackages.

> 7. If both peers send "wtxidrelay" and "sendpackages" with the same
>    version, the peers should announce, request, and send package
>    information to each other.

Maybe: "You must not send sendpackages unless you also send wtxidrelay" ?


As I understand it, the two cases for the protocol flow are "I received
an orphan, and I'd like its ancestors please" which seems simple enough,
and "here's a child you may be interested in, even though you possibly
weren't interested in the parents of that child". I think the logic for
the latter is:

 * if tx C's fee rate is less than the peer's feefilter, skip it
   (will maybe treat it as a parent in some package later though)
 * if tx C's ancestor fee rate is less than the peer's feefilter, skip
   it?
 * look at the lowest ancestor fee rate for any of C's in-mempool
   parents
 * if that is higher than the peer's fee filter, send a normal INV
 * if it's lower than the peer's fee filter, send a PCKG INV
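
In rough Python, that decision might look like this (a sketch only; the
rate arguments and the MSG_* names are stand-ins, not from the proposal):

    MSG_WTX, MSG_PCKG = "MSG_WTX", "MSG_PCKG"  # MSG_PCKG is version-specific

    def child_announcement(feefilter, child_rate, child_anc_rate,
                           parent_anc_rates):
        """Return the inv type to announce child C with, or None to skip.

        Rates are in sat/vB; parent_anc_rates are the ancestor fee rates
        of C's in-mempool parents."""
        if child_rate < feefilter:
            return None       # skip; maybe a parent in some later package
        if child_anc_rate < feefilter:
            return None       # skip? (second point above)
        if min(parent_anc_rates) > feefilter:
            return MSG_WTX    # peer should already want the parents: normal INV
        return MSG_PCKG       # peer needs package info to judge C

    # eg: child at 10 sat/vB with a 1 sat/vB parent, peer filter 3 sat/vB:
    assert child_announcement(3, 10, 5, [1]) == MSG_PCKG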

Are "getpckgtxns" / "pcktxns" really limited to packages, or are they
just a general way to request a batch of transactions? Particularly in
the case of requesting the parents of an orphan tx you already have,
it seems hard for the node receiving getpckgtxns to validate that the
txs are related in some way; but also it doesn't seem very necessary?

Maybe call those messages "getbatchtxns" and "batchtxns" and allow them to
be used more generally, potentially in ways unrelated to packages/cpfp?
The "only be sent if both peers agreed to do package relay" rule could
simply be dropped, I think.

> 4. The receiver uses the package information to decide how to request
>    the transactions. For example, if the receiver already has some of
>    the transactions in their mempool, they only request the missing ones.
>    They could also decide not to request the package at all based on the
>    fee information provided.

Shouldn't the sender only be sending package announcements when they know
the recipient will be interested in the package, based on their feefilter?

> =pckginfo1=
> {|
> |  Field Name  ||  Type  ||  Size  ||   Purpose
> |-
> |blockhash || uint256 || 32 || The chain tip at which this package is defined.
> |-
> |pckg_fee||CAmount||4|| The sum total fees paid by all transactions in the package.

CAmount in consensus/amount.h is an int64_t so shouldn't this be 8
bytes? If you limit a package to 101kvB, an int32_t is enough to cover
any package with a fee rate of about 212 BTC/block or lower, though.

> |pckg_weight||int64_t||8|| The sum total weight of all transactions in the package.

The maximum block weight is 4M, and the default -limitancestorsize
presumably implies a max package weight of 404k; seems odd to provide
an int64_t rather than an int32_t here, which easily allows either of
those values?
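
A quick back-of-envelope check of both points, in Python (101 kvB
assumed from the default -limitancestorsize):

    INT32_MAX = 2**31 - 1                   # max fee in sat: ~21.47 BTC

    pkg_vsize = 101_000                     # vbytes
    max_feerate = INT32_MAX // pkg_vsize    # 21262 sat/vB
    print(max_feerate * 1_000_000 / 1e8)    # ~212.6 BTC for a 1 MvB block

    assert pkg_vsize * 4 == 404_000         # max package weight, fits int32
    assert 4_000_000 < INT32_MAX            # so does a full block's weight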

> 2. ''Only 1 child with unconfirmed parents.'' The package must consist
>of one transaction and its 

Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-13 Thread Anthony Towns via bitcoin-dev
On Thu, May 12, 2022 at 06:48:44AM -0400, Russell O'Connor via bitcoin-dev 
wrote:
> On Wed, May 11, 2022 at 11:07 PM ZmnSCPxj  wrote:
> > So really: are recursive covenants good or...?
> My view is that recursive covenants are inevitable.  It is nearly
> impossible to have programmable money without it because it is so difficult
> to avoid.

I think my answer is that yes they are good: they enable much more
powerful contracting.

Of course, like any cryptographic tool they can also be harmful to you if
you misuse them, and so before you use them yourself you should put in the
time to understand them well enough that you *don't* misuse them. Same as
using a kitchen knife, or riding a bicycle, or swimming. It can be natural
to be scared at first, too.

> Given that we cannot have programmable money without recursive covenants
> and given all the considerations already discussed regarding them, i.e. no
> worse than being compelled to co-sign transactions, and that user generated
> addresses won't be encumbered by a covenant unless they specifically
> generate it to be, I do think it makes sense to embrace them.

I think that's really the easy way to be sure *you* aren't at risk
from covenants: just follow the usual "not your keys, not your coins"
philosophy.

The way you currently generate an address from a private key already
guarantees that *your* funds won't be encumbered by any covenants; all
you need to do is to keep doing that. And generating the full address
yourself is already necessary with taproot: if you don't understand
all the tapscript MAST paths, then even though you can spend the coin,
one of those paths you don't know about might already allow someone to
steal your funds. But if you generated the address, you (or at least your
software) will understand everything and not include anything dangerous,
so your funds really are safu.

It may be that some people will refuse to send money to your address
because they have some rule that says "I'll only send money to people who
encumber all their funds with covenant X" and you didn't encumber your
address in that way -- but that just means they're refusing to pay you,
just as people who say "I'll only pay you off-chain via coinbase" or
"I'll only pay you via SWIFT" won't send funds to your bitcoin address.

Other examples might include "we only support segwit-v0 addresses not
taproot ones", or "you're on an OFAC sanctions list so I can't send
to you or the government will put me in prison" or "my funds are in a
multisig with the government who won't pay to anyone who isn't also in
a multisig with them".

It does mean you still need people with the moral fortitude to say "no,
if you can't pay me properly, we can't do business" though.

Even better: in so far as wallet software will just ignore any funds
sent to addresses that they didn't generate themselves according to the
rules you selected, you can already kind of outsource that policy to
your wallet. And covenants, recursive or otherwise, don't change that.


For any specific opcode proposal, I think you still want to consider

 1) how much you can do with it
 2) how efficient it is to validate (and thus how cheap it is to use)
 3) how easy it is to make it do what you want
 4) how helpful it is at preventing bugs
 5) how clean and maintainable the validation code is

I guess to me CTV and APO are weakest at (1); CAT/CSFS falls down on
(3) and (4); OP_TX is probably weakest at (5) and maybe not as good as
we'd like at (3) and (4)?

Cheers,
aj



Re: [bitcoin-dev] What to expect in the next few weeks

2022-04-26 Thread Anthony Towns via bitcoin-dev
On Mon, Apr 25, 2022 at 10:48:20PM -0700, Jeremy Rubin via bitcoin-dev wrote:
> Further, you're representing the state of affairs as if there's a great
> need to scramble to generate software for this, whereas there already are
> scripts to support a URSF that work with the source code I pointed to from
> my blog. This approach is a decent one, even though it requires two things,
> because it is simple. I think it's important that people keep this in mind
> because that is not a joke, the intention was that the correct set of check
> and balance tools were made available. I'd be eager to learn what,
> specifically, you think the advantages are of a separate binary release
> rather than a binary + script that can handle both cases?

The point of running a client with a validation requirement of "blocks
must (not) signal" is to handle the possibility of there being a chain
split, where your preferred ruleset ends up on the less-work side.

Ideally that will be a temporary situation and other people will come to
your side, switch their miners over etc, and your chain will go back to
having the most work, and anyone who wasn't running a client with the
opposite signalling requirement will reorg to your chain and ruleset.

But forkd isn't quite enough to do that reliably -- instead, you'll
start disconnecting nodes who forward blocks to you that were built on
the block you disconnected, and you'll risk ending up isolated: that's
why bip8 recommends clients "should either use parameters that do not
risk there being a higher work alternative chain, or specify a mechanism
for implementations that support the deployment to preferentially peer
with each other".

Also, in order to have other nodes reorg to your chain when it has
more work, you don't want to exclusively connect to likeminded peers.
That's less of a big deal though, since you only need one peer to
forward the new chain to the compatible network to trigger all of them
to reorg.

Being able to see the other chain has more work might be valuable in
order to add some sort of user warning signal though: "the other chain
appears to have maintained 3x as much hash power as the chain you are
following".

In theory, using the `BLOCK_RECENT_CONSENSUS_CHANGE` flag to indicate
unwanted signalling might make sense; then you could theoretically
trigger on that to avoid disconnecting inbound peers that are following
the wrong chain. There's already some code along those lines; but while I
haven't checked recently, I think it ends up failing relatively quickly
once an invalid chain has been extended by a few blocks, since they'll
result in `BLOCK_INVALID_PREV` errors instead. The segwit UASF client
took some care to try to make this work, fwiw.

(As it stands, I think RECENT_CONSENSUS_CHANGE only really helps with
avoiding disconnections if there's one or maybe two invalid blocks in
a row from a random miner that's doing strange things, rather than if
there's an active conflict resulting in a deliberate chain split).

On the other hand, if there is a non-trivial chain split, then everyone
has to deal with splitting their coins across the different chains,
presuming they don't want to just consider one or the other a complete
write-off. That's already annoying; but for lightning funds I think it
means the automation breaks down, and every channel in the network would
need to be immediately closed on chain, as otherwise accepting state
updates risks losing the value of your channel balance on whichever
chain your lightning node is not following.

So to your original question: I think it's pretty hard to do all that
stuff in a separate script, without updating the node software itself.

More generally, while I think forkd *is* pretty much state of the art;
I don't think it comes close to addressing all the problems that a chain
split would create.  Maybe it's still worthwhile despite those problems
if there's some existential threat to bitcoin, but (not) activating CTV
doesn't seem to rise to that level to me.

Just my opinion, but: without some sort of existential threat, it
seems better to take things slowly and hold off on changes until either
pretty much everyone who cares is convinced that the change is a good
idea and ready to go; or until someone has done the rest of the work to
smooth over all the disruption a non-trivial chain split could cause.
Of course, the latter option is a _lot_ of work, and probably requires
consensus changes itself...

Cheers,
aj



Re: [bitcoin-dev] Speedy Trial

2022-04-25 Thread Anthony Towns via bitcoin-dev
On Mon, Apr 25, 2022 at 11:26:09AM -0600, Keagan McClelland via bitcoin-dev 
wrote:
> > Semi-mandatory in that only "threshold" blocks must signal, so if
> > only 4% or 9% of miners aren't signalling and the threshold is set
> > at 95% or 90%, no blocks will be orphaned.
> How do nodes decide on which blocks are orphaned if only some of them have
> to signal, and others don't? Is it just any block that would cause the
> whole threshold period to fail?

Yes, exactly those. See [0] or [1].

[0] 
https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki#Mandatory_signalling

[1] https://github.com/bitcoin/bips/pull/1021
(err, you apparently acked that PR)
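
Concretely, the MUST_SIGNAL rule amounts to something like this (a rough
Python sketch, using bip8's mainnet period and 90% threshold):

    PERIOD, THRESHOLD = 2016, 1815

    def orphaned(signals, nonsignalling_so_far):
        # a non-signalling block is only invalid once enough blocks have
        # already failed to signal that the threshold would otherwise be
        # missed for the whole period
        return not signals and nonsignalling_so_far >= PERIOD - THRESHOLD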

Cheers,
aj



Re: [bitcoin-dev] Speedy Trial

2022-04-25 Thread Anthony Towns via bitcoin-dev
On Mon, Apr 25, 2022 at 10:11:45AM -0600, Keagan McClelland via bitcoin-dev 
wrote:
> > Under *any* other circumstance, when they're used to activate a bad soft
> > fork, speedy trial and bip8 are the same. If a resistance method works
> > against bip8, it works against speedy trial; if it fails against speedy
> > trial, it fails against bip8.
> IIRC one essential difference between ST (which is a variant of BIP9) and
> BIP8 is that since there is no mandatory signaling during the lockin
> period, 

BIP8 doesn't have mandatory signalling during the lockin period; it has
semi-mandatory [0] signalling during the must_signal period.

> you can't do a counter soft fork as easily.

The "counter" for bip8 activation is to reject any block during either
the started or must_signal phases that would meet the threshold. In that
case someone running bip8 might see blocks:

  [elapsed=2010, count=1813, signal=yes]
  [elapsed=2011, count=1813, signal=no]
  [elapsed=2012, count=1814, signal=yes]
  [elapsed=2013, count=1815, signal=yes, will-lockin!]
  [elapsed=2014, count=1816, signal=yes]
  [elapsed=2015, count=1816, signal=no]
  [elapsed=2016, count=1816, signal=no]
  [locked in!]

But running software to reject the soft fork, you would reject the
elapsed=2013 block, and any blocks that build on it. You would wait for
someone else to mine a chain that looked like:

  [elapsed=2013, count=1814, signal=no]
  [elapsed=2014, count=1814, signal=no]
  [elapsed=2015, count=1814, signal=no]
  [elapsed=2016, count=1814, signal=no]
  [failed!]

That approach works *exactly* the same with speedy trial.

Jeremy's written code that does exactly this using the getdeploymentinfo
rpc to check the deployment status, and the invalidateblock rpc to
reject a block. See: https://github.com/JeremyRubin/forkd
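
The core loop is simple enough to sketch in Python (a minimal sketch:
"mydeploy" is a hypothetical deployment name, and while getdeploymentinfo
and invalidateblock are real Bitcoin Core RPCs, the exact field layout
may vary by version; a real tool also wants to invalidate the exact
threshold block rather than whatever the tip happens to be):

    import json, subprocess, time

    def cli(*args):
        return subprocess.check_output(["bitcoin-cli", *args], text=True).strip()

    while True:
        tip = cli("getbestblockhash")
        info = json.loads(cli("getdeploymentinfo", tip))
        status = info["deployments"]["mydeploy"]["bip9"]["status"]
        if status in ("locked_in", "active"):
            cli("invalidateblock", tip)  # refuse this block and its descendants
        time.sleep(30)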

The difference to bip8 with lot=true is that nodes running speedy trial
will reorg to follow the resisting chain if it has the most work. bip8
with lot=true nodes will not reorg to a failing chain, potentially
creating an ongoing chain split, unless one group or the other gives up,
and changes their software.

Cheers,
aj

[0] Semi-mandatory in that only "threshold" blocks must signal, so if
only 4% or 9% of miners aren't signalling and the threshold is set
at 95% or 90%, no blocks will be orphaned.



Re: [bitcoin-dev] Speedy Trial

2022-04-24 Thread Anthony Towns via bitcoin-dev
On Sun, Apr 24, 2022 at 12:13:08PM +0100, Jorge Timón wrote:
> You're not even considering user resistance in your cases. 

Of course I am. Again:

> > My claim is that for *any* bad (evil, flawed, whatever) softfork, then
> > attempting activation via bip8 is *never* superior to speedy trial,
> > and in some cases is worse.
> >
> > If I'm missing something, you only need to work through a single example
> > to demonstrate I'm wrong, which seems like it ought to be easy... But
> > just saying "I disagree" and "I don't want to talk about that" isn't
> > going to convince anyone.

The "some cases" where bip8 with lot=true is *worse* than speedy trial
is when miners correctly see that a bad fork is bad.

Under *any* other circumstance, when they're used to activate a bad soft
fork, speedy trial and bip8 are the same. If a resistance method works
against bip8, it works against speedy trial; if it fails against speedy
trial, it fails against bip8.

> Sorry for the aggressive tone, but when people ignore some of my points
> repeatedly, I start to wonder if they do it on purpose.

Perhaps examine the beam in your own eye.

Cheers,
aj


Re: [bitcoin-dev] CTV Signet Parameters

2022-04-21 Thread Anthony Towns via bitcoin-dev
On Thu, Apr 21, 2022 at 10:05:20AM -0500, Jeremy Rubin via bitcoin-dev wrote:
> I can probably make some show up sometime soon. Note that James' vault uses
> one at the top-level https://github.com/jamesob/simple-ctv-vault, but I
> think the second use of it (since it's not segwit wrapped) wouldn't be
> broadcastable since it's nonstandard.

The whole point of testing is so that bugs like "wouldn't be broadcastable
since it's nonstandard" get fixed. If these things are still in the
"interesting thought experiment" stage, but nobody but Jeremy is
interested enough to start making them consistent with the proposed
consensus and policy rules, it seems very premature to be changing
consensus or policy rules.

> One case where you actually use less space is if you have a few different
> sets of customers at N different fee priority levels. Then, you might need
> to have N independent batches, or risk overpaying against the customer's
> priority level. Imagine I have 100 tier 1 customers and 1000 tier 2
> customers. If I batch tier 1 with tier 2, to provide tier 1 guarantees
> I'd need to pay tier 1 rate for 10x the customers. With CTV, I can combine
> my batch into a root and N batch outputs. This eliminates the need for
> inputs, signatures, change outputs, etc per batch, and can be slightly
> smaller. Since the marginal benefit on that is still pretty small, having
> bare CTV improves the margin of byte wise saving.

Bare CTV only saves bytes when *spending* -- but this is when you're
creating the 1100 outputs, so an extra 34 or 67 bytes of witness data
seems fairly immaterial (0.05% extra vbytes?). It doesn't make the small
commitment tx any smaller.

ie, scriptPubKey looks like:
 - bare ctv: [push][32 bytes][op_nop4]
 - p2wsh: [op_0][push][32 bytes]
 - p2tr: [op_1][push][32 bytes]

while witness data looks like:
 - bare ctv: empty scriptSig, no witness
 - p2wsh: empty scriptSig, witness = "[push][32 bytes][op_nop4]"
 - p2tr: empty scriptSig, witness = 33B control block,
 "[push][32 bytes][op_nop4]"

You might get more a benefit from bare ctv if you don't pay all 1100
outputs in a single tx when fees go lower; but if so, you're also wasting
quite a bit more block space in that case due to the intermediate
transactions you're introducing, which makes it seem unlikely that
you care about the extra 9 or 17 vbytes bare CTV would save you per
intermediate tx...
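
For reference, the accounting behind those numbers (witness bytes count
as 1/4 vbyte):

    script = 1 + 32 + 1          # [push][32 bytes][op_nop4] = 34 bytes
    p2wsh_extra = script         # 34B of witness: the revealed script
    p2tr_extra = script + 33     # 67B: plus the 33-byte control block
    print(p2wsh_extra / 4, p2tr_extra / 4)  # 8.5, 16.75 -> ~9 and ~17 vbytes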

I admit that I am inclined towards micro-optimising things to save
those bytes if it's easy, which does incline me towards bare CTV; but
the closest thing we have to real user data suggests that nobody's going
to benefit from that possibility anyway.

> Even if we got rid of bare ctv, segwit v0 CTV would still exist, so we
> couldn't use OP_SUCCESSx there either. segwitv0 might be desired if someone
> has e.g. hardware modules or MPC Threshold Crypto that only support ECDSA
> signatures, but still want CTV.

If you desire new features, then you might have to upgrade old hardware
that can't support them.

Otherwise that would be an argument to never use OP_SUCCESSx: someone
might want to use whatever new feature we might imagine on hardware that
only supports ECDSA signatures.

Cheers,
aj


Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-21 Thread Anthony Towns via bitcoin-dev
On Wed, Apr 20, 2022 at 03:04:53PM -1000, David A. Harding via bitcoin-dev 
wrote:
> The main criticisms I'm aware of against CTV seem to be along the following
> lines: [...]

> Could those concerns be mitigated by making CTV an automatically reverting
> consensus change with an option to renew?  [...]

Buck O Perley suggested that "Many of the use cases that people
are excited to use CTV for ([5], [6]) are very long term in nature
and targeted for long term store of value in contrast to medium of
exchange."

But, if true, that's presumably incompatible with any sort of sunset
that's less than many decades away, so doesn't seem much better than
just having it be available on a signet?

[5] https://github.com/kanzure/python-vaults/blob/master/vaults/bip119_ctv.py
[6] https://github.com/jamesob/simple-ctv-vault

If sunsetting were a good idea, one way to think about implementing it
might be to code it as:

  if (DeploymentActiveAfter(pindexPrev, params, FOO) &&
      !DeploymentActiveAfter(pindexPrev, params, FOO_SUNSET))
  {
      EnforceFoo();
  }

That is to have sunsetting the feature be its own soft-fork with
pre-declared parameters that are included in the original activation
proposal. That way you don't have to have a second process debate about
how to go about (not) sunsetting the rules, just one on the merits of
whether sunsetting is worth doing or not.

Cheers,
aj



Re: [bitcoin-dev] CTV Signet Parameters

2022-04-20 Thread Anthony Towns via bitcoin-dev
On Wed, Apr 20, 2022 at 05:13:19PM +0000, Buck O Perley via bitcoin-dev wrote:
> All merits (or lack thereof depending on your view) of CTV aside, I find this 
> topic around decision making both interesting and important. While I think I 
> sympathize with the high level concern about making sure there are use cases, 
> interest, and sufficient testing of a particular proposal before soft forking 
> it into consensus code, it does feel like the attempt to attribute hard 
> numbers in this way is somewhat arbitrary.

Sure. I included the numbers for falsifiability mostly -- so people
could easily check if my analysis was way off the mark.

> For example, I think it could be reasonable to paint the list of examples you 
> provided where CTV has been used on signet in a positive light. 317 CTV 
> spends “out in the wild” before there’s a known activation date is quite a lot

Not really? Once you can make one transaction, it's trivial to make
hundreds. It's more interesting to see if there's multiple wallets or
similar that support it; or if one wallet has a particularly compelling
use case.

> (more than taproot had afaik).

Yes; as I've said a few times now, I think we should have had more
real life demos before locking taproot's activation in. I think that
would have helped avoid bugs like Neutrino's [0] and made it easier for
hardware wallets etc to have support for taproot as soon as it was active,
without having to rush around adding library support at the last minute.

[0] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-November/019589.html
 

Lightning's "two independent implementations" rule might be worth aspiring
too, eg.

> If we don’t think it is enough, then what number of unique spends and use 
> cases should we expect to see of a new proposal before it’s been sufficiently 
> tested?

I don't really think that's the metric. I'd go for something more like:

 1a) can you make transactions using the new feature with bitcoin-cli,
 eg createrawtransaction etc?
 1b) can you make transactions using the new feature with some other
 library?
 1c) can you make transactions using the new feature with most common
 libraries?

 2) has anyone done a usable prototype of the major use cases of the new
feature?

I think the answers for CTV are:

 1a) no
 1b) yes, core's python test suite, sapio
 1c) no
 2) no
 
Though presumably jamesob's simple ctv vault is close to being an answer
for (2)?

For taproot, we had,

 1a) yes, with difficulty [1]
 1b) yes, core's python test suite; kalle's btcdeb sometimes worked too
 1c) no
 2) optech's python notebook [2] from its taproot workshops had demos for
musig and degrading multisig via multiple merkle paths, though I
think they were out of date with the taproot spec for a while

[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-October/019543.html
[2] https://github.com/bitcoinops/taproot-workshop/

To some extent those things are really proxies for:

 3) how well do people actually understand the feature?

 4) are we sure the tradeoffs being made in this implementation of the
feature, vs other implementations or other features actually make
sense?

 5) how useful is the feature?

I think we were pretty confident in the answers for those questions
for taproot. At least personally, I'm still not super confident in
the answers for CTV. In particular:

 - is there really any benefit to doing it as a NOP vs a taproot-only
   opcode like TXHASH? Theoretically, sure, that saves some bytes; but as
   was pointed out on #bitcoin-wizards the other day, you can't express
   those outputs as an address, which makes them not very interoperable,
   and if they're not interoperable between, say, an exchange and its
   users trying to do a withdraw, how useful is that really ever going
   to be?

 - the scriptSig commitments seem very kludgy; combining multiple
   inputs likewise seems kludgy

The continual push to rush activation of it certainly doesn't increase my
confidence either. Personally, I suspect it's counterproductive; better
to spend the time answering questions and improving the proposal, rather
than spending time going around in circles about activating something
people aren't (essentially) unanimously confident about.

> In absence of the above, the risk of a constantly moving bar 

I'd argue the bar *should* be constantly moving, in the sense that we
should keep raising it.

> To use your meme, miners know precisely what they’re mining for and what a 
> metric of success looks like which makes the risk/costs of attempting the PoW 
> worth it 

The difference between mining and R&D is variance: if you're competing for
50k blocks a year, you can get your actual returns to closely match your
expected return, especially if you pool with others so your probability
of success isn't miniscule -- for consensus dev, you can reasonably only
work on a couple of projects a year, so your median return is likely $0,

Re: [bitcoin-dev] CTV Signet Parameters

2022-04-20 Thread Anthony Towns via bitcoin-dev
On Wed, Apr 20, 2022 at 08:05:36PM +0300, Nadav Ivgi via bitcoin-dev wrote:
> > I didn't think DROP/1 is necessary here? Doesn't leaving the 32 byte hash
> > on the stack evaluate as true?
> Not with Taproot's CLEANSTACK rule.

The CLEANSTACK rule is the same for segwit and tapscript though? 

For p2wsh/BIP 141 it's "The script must not fail, and result in exactly
a single TRUE on the stack." and for tapscript/BIP 342, it's "If the
execution results in anything but exactly one element on the stack which
evaluates to true with CastToBool(), fail."

CastToBool/TRUE is anything that's not false, false is zero (ie, any
string of 0x00 bytes) or negative zero (a string of 0x00 bytes but with
the high byte being 0x80).
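
Roughly, in Python (a sketch; stack elements are little-endian, so the
"high byte" is the last one):

    def cast_to_bool(v: bytes) -> bool:
        for i, b in enumerate(v):
            if b != 0:
                # negative zero: the sign bit alone in the top (last) byte
                return not (i == len(v) - 1 and b == 0x80)
        return False

    assert cast_to_bool(bytes(32)) is False   # 32 zero bytes: false
    assert cast_to_bool(b"\x01" + bytes(31))  # a 32B hash: basically always true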

Taproot has the MINIMALIF rule that means you have to use exactly 1 or 0
as the input to IF, but I don't think that's relevant for CTV.

Cheers,
aj



Re: [bitcoin-dev] CTV Signet Parameters

2022-04-19 Thread Anthony Towns via bitcoin-dev
On Thu, Feb 17, 2022 at 01:58:38PM -0800, Jeremy Rubin via bitcoin-dev wrote:
> AJ Wrote (in another thread):
> >   I'd much rather see some real
> >   third-party experimentation *somewhere* public first, and Jeremy's CTV
> >   signet being completely empty seems like a bad sign to me. 

There's now been some 2,200 txs on CTV signet, of which (if I haven't
missed anything) 317 have been CTV spends:

 - none have been bare CTV (ie, CTV in scriptPubKey directly, not via
   p2sh/p2wsh/taproot)

 - none have been via p2sh

 - 3 have been via taproot:

https://explorer.ctvsignet.com/tx/f73f4671c6ee2bdc8da597f843b2291ca539722a168e8f6b68143b8c157bee20

https://explorer.ctvsignet.com/tx/7e4ade977db94117f2d7a71541d87724ccdad91fa710264206bb87ae1314c796

https://explorer.ctvsignet.com/tx/e05d828bf716effc65b00ae8b826213706c216b930aff194f1fb2fca045f7f11

   The first two of these had alternative merkle paths, the last didn't.

 - 314 have been via p2wsh

https://explorer.ctvsignet.com/tx/62292138c2f55713c3c161bd7ab36c7212362b648cf3f054315853a081f5808e
   (don't think there's any meaningfully different examples?)

As far as I can see, all the scripts take the form:

  [PUSH 32 bytes] [OP_NOP4] [OP_DROP] [OP_1]

(I didn't think DROP/1 is necessary here? Doesn't leaving the 32 byte
hash on the stack evaluate as true? I guess that means everyone's using
sapio to construct the txs?)

I don't think there's any demos of jamesob's simple-ctv-vault [0], which
I think uses a p2wsh of "IF n CSV DROP hotkey CHECKSIG ELSE lockcoldtx CTV
ENDIF", rather than taproot branches.

[0] https://github.com/jamesob/simple-ctv-vault

Likewise I don't think there's any examples of "this CTV immediately;
or if fees are too high, this other CTV that pays more fees after X
days", though potentially they could be hidden in the untaken taproot
merkle branches.

I don't think there's any examples of two CTV outputs being combined
and spent in a single transaction.

I don't see any txs with nSequence set meaningfully; though most (all?)
of the CTV spends seem to set nSequence to 0x00400000 which I think
doesn't have a different effect from 0xfffffffe?
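
Decoding both values per BIP 68 shows why (a quick sketch):

    DISABLE = 1 << 31   # bit 31: relative locktime disabled
    TYPE    = 1 << 22   # bit 22: time-based (512s units) vs height-based
    MASK    = 0xffff    # low 16 bits: the locktime value

    def rel_locktime(n):
        if n & DISABLE:
            return None                  # no relative locktime at all
        return (n & MASK, "x512s" if n & TYPE else "blocks")

    print(rel_locktime(0x00400000))   # (0, 'x512s'): zero wait, no constraint
    print(rel_locktime(0xfffffffe))   # None: disable bit set, no constraint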

That looks to me like there's still not much practical (vs theoretical)
exploration of CTV going on; but perhaps it's an indication that CTV
could be substantially simplified and still get all the benefits that
people are particularly eager for.

> I am unsure that "learning in public" is required --

For a consensus system, part of the learning is "this doesn't seem that
interesting to me; is it actually valuable enough to others that the
change is worth the risk it imposes on me?" and that's not something
you can do purely in private.

One challenge with building a soft fork is that people don't want to
commit to spending time building something that relies on consensus
features and run the risk that they might never get deployed. But the
reverse of that is also a concern: you don't want to deploy consensus
changes and run the risk that they won't actually turn out to be useful.

Or, perhaps, to "meme-ify" it -- part of the "proof of work" for deploying
a consensus change is actually proving that it's going to be useful.
Like sha256 hashing, that does require real work, and it might turn out
to be wasteful.

Cheers,
aj



Re: [bitcoin-dev] Speedy Trial

2022-04-11 Thread Anthony Towns via bitcoin-dev
On Fri, Apr 08, 2022 at 11:58:48AM +0200, Jorge Timón via bitcoin-dev wrote:
> On Wed, Mar 30, 2022 at 6:21 AM Anthony Towns  wrote:
> > > Let's discuss those too. Feel free to point out how bip8 fails at some
> > > hypothetical cases speedy trial doesn't.
> > Any case where a flawed proposal makes it through getting activation
> > parameters set and released, but doesn't achieve supermajority hashpower
> > support is made worse by bip8/lot=true in comparison to speedy trial
> I disagree. Also, again, not the hypothetical case I want to discuss.

You just said "Let's discuss those" and "Feel free to point out how bip8
fails at some hypothetical cases speedy trial doesn't", now you're
saying it's not what you want to discuss?

But the above does include your "evil soft fork" hypothetical (I mean,
unless you think being evil isn't a flaw?). The evil soft fork gets
proposed, and due to some failure in review, merged with activation
parameters set (via either speedy trial or bip8), then:

 a) supermajority hashpower support is achieved quickly:
 - both speedy trial and bip8+lot=true activate the evil fork

 b) supermajority hashpower support is achieved slowly:
 - speedy trial does *not* activate the evil fork, as it times out
   quickly
 - bip8 *does* activate the fork

 c) supermajority hashpower support is never achieved:
 - speedy trial does *not* activate the evil fork
 - bip8+lot=false does *not* activate the evil fork, but only after a
   long timeout
 - bip8+lot=true *does* activate the evil fork

In case (a), they both do the same thing; in case (b) speedy trial is
superior to bip8 no matter whether lot=true/false since it blocks the
evil fork, and in case (c) speedy trial is better than lot=false because
it's quicker, and much better than lot=true because lot=true activates
the evil fork.

> > > >  0') someone has come up with a good idea (yay!)
> > > >  1') most of bitcoin is enthusiastically behind the idea
> > > >  2') an enemy of bitcoin is essentially alone in trying to stop it
> > > >  3') almost everyone remains enthusiastic, despite that guy's
> > > >  incoherent raving
> > > >  4') nevertheless, the enemies of bitcoin should have the power to stop
> > > >  the good idea
> > > "That guy's incoherent raving"
> > > "I'm just disagreeing".
> >
> > Uh, you realise the above is an alternative hypothetical, and not talking
> > about you? I would have thought "that guy" being "an enemy of bitcoin"
> > made that obvious... I think you're mistaken; I don't think your emails
> > are incoherent ravings.
> Do you realize IT IS NOT the hypothetical case I wanted to discuss. 

Yes, that's what "alternative" means: a different one.

> I'm sorry, but I'm tired of trying to explain. and quite, honestly, you
> don't seem interested in listening to me and understanding me at all, but
> only in "addressing my concerns". Obviously we understand different things
> by "addressing concerns".
> Perhaps it's the language barrier or something.

My claim is that for *any* bad (evil, flawed, whatever) softfork, then
attempting activation via bip8 is *never* superior to speedy trial,
and in some cases is worse.

If I'm missing something, you only need to work through a single example
to demonstrate I'm wrong, which seems like it ought to be easy... But
just saying "I disagree" and "I don't want to talk about that" isn't
going to convince anyone.

I really don't think the claim above should be surprising; bip8 is meant
for activating good proposals, bad ones need to be stopped in review --
as "pushd" has said in this thread: "Flawed proposal making it through
activation is a failure of review process", and Luke's said similar things
as well. The point of bip8 isn't to make it easier to reject bad forks,
it's to make it easier to ensure *good* forks don't get rejected. But
that's not your hypothetical, and you don't want to talk about all the
ways to stop an evil fork prior to an activation attempt...

Cheers,
aj



Re: [bitcoin-dev] Speedy Trial

2022-03-29 Thread Anthony Towns via bitcoin-dev
On Mon, Mar 28, 2022 at 09:31:18AM +0100, Jorge Timón via bitcoin-dev wrote:
> > In particular, any approach that allows you to block an evil fork,
> > even when everyone else doesn't agree that it's evil, would also allow
> > an enemy of bitcoin to block a good fork, that everyone else correctly
> > recognises is good. A solution that works for an implausible hypothetical
> > and breaks when a single attacker decides to take advantage of it is
> > not a good design.
> Let's discuss those too. Feel free to point out how bip8 fails at some
> hypothetical cases speedy trial doesn't.

Any case where a flawed proposal makes it through getting activation
parameters set and released, but doesn't achieve supermajority hashpower
support is made worse by bip8/lot=true in comparison to speedy trial.

That's true both because of the "trial" part, in that activation can fail
and you can go back to the drawing board without having to get everyone
upgrade a second time, and also the "speedy" part, in that you don't
have to wait a year or more before you even know what's going to happen.

> >  0') someone has come up with a good idea (yay!)
> >  1') most of bitcoin is enthusiastically behind the idea
> >  2') an enemy of bitcoin is essentially alone in trying to stop it
> >  3') almost everyone remains enthusiastic, despite that guy's incoherent
> >  raving
> >  4') nevertheless, the enemies of bitcoin should have the power to stop
> >  the good idea
> "That guy's incoherent raving"
> "I'm just disagreeing".

Uh, you realise the above is an alternative hypothetical, and not talking
about you? I would have thought "that guy" being "an enemy of bitcoin"
made that obvious... I think you're mistaken; I don't think your emails
are incoherent ravings.

It was intended to be the simplest possible case of where someone being
able to block a change is undesirable: they're motivated by trying to
harm bitcoin, they're as far as possible from being part of some economic
majority, and they don't even have a coherent rationale to provide for
blocking the idea.

Cheers,
aj



Re: [bitcoin-dev] Speedy Trial

2022-03-25 Thread Anthony Towns via bitcoin-dev
On Thu, Mar 24, 2022 at 07:30:09PM +0100, Jorge Timón via bitcoin-dev wrote:
> Sorry, I won't answer to everything, because it's clear you're not listening.

I'm not agreeing with you; that's different to not listening to you.

> In the HYPOTHETICAL CASE that there's an evil fork, the fork being evil
> is a PREMISE of that hypothetical case, a GIVEN.

Do you really find people more inclined to start agreeing with you when
you begin yelling at them? When other people start shouting at you,
do you feel like it's a discussion that you're engaged in?

> Your claim that "if it's evil, good people would oppose it" is a NON
> SEQUITUR, "good people" aren't necessarily perfect and all knowing.
> good people can make mistakes, they can be fooled too.
> In the hypothetical case that THERE'S AN EVIL FORK, if "good people"
> don't complain, it is because they didn't realize that the given fork
> was evil. Because in our hypothetical example THE EVIL FORK IS EVIL BY
> DEFINITION, THAT'S THE HYPOTHETICAL CASE I WANT TO DISCUSS, not the
> hypothetical case where there's a fork some people think it's evil but
> it's not really evil.

The problem with that approach is that any solution we come up with
doesn't only have to deal with the hypotheticals you want to discuss.

In particular, any approach that allows you to block an evil fork,
even when everyone else doesn't agree that it's evil, would also allow
an enemy of bitcoin to block a good fork, that everyone else correctly
recognises is good. A solution that works for an implausible hypothetical
and breaks when a single attacker decides to take advantage of it is
not a good design.

And I did already address what to do in exactly that scenario:

> > But hey what about the worst case: what if everyone else in bitcoin
> > is evil and supports doing evil things. And maybe that's not even
> > implausible: maybe it's not an "evil" thing per se, perhaps [...]
> >
> > In that scenario, I think a hard fork is the best choice: split out a new
> > coin that will survive the upcoming crash, adjust the mining/difficulty
> > algorithm so it works from day one, and set it up so that you can
> > maintain it along with the people who support your vision, rather than
> > having to constantly deal with well-meaning attacks from "bitcoiners"
> > who don't see the risks and have lost the plot.
> >
> > Basically: do what Satoshi did and create a better system, and let
> > everyone else join you as the problems with the old one eventually become
> > unavoidably obvious.

> Once you understand what hypothetical case I'm talking about, maybe
> you can understand the rest of my reasoning.

As I understand it, your hypothetical is:

 0) someone has come up with a bad idea
 1) most of bitcoin is enthusiastically behind the idea
 2) you are essentially alone in discovering that it's a bad idea
 3) almost everyone remains enthusiastic, despite your explanations that
it's a bad idea
 4) nevertheless, you and your colleagues who are aware the idea is bad
should have the power to stop the bad idea
 5) bip8 gives you the power to stop the bad idea but speedy trial does not

Again given (0), I think (1) and (2) are already not very likely, and (3)
is simply not plausible. But in the event that it does somehow occur,
I disagree with (4) for the reasons I describe above; namely, that any
mechanism that did allow that would be unable to distinguish between the
"bad idea" case and something along the lines of:

 0') someone has come up with a good idea (yay!)
 1') most of bitcoin is enthusiastically behind the idea
 2') an enemy of bitcoin is essentially alone in trying to stop it
 3') almost everyone remains enthusiastic, despite that guy's incoherent
 raving
 4') nevertheless, the enemies of bitcoin should have the power to stop
 the good idea

And, as I said in the previous mail, I think (5) is false, independently
of any of the other conditions.

> But if you don't understand the PREMISES of my example, 

You can come up with hypothetical premises that invalidate bitcoin,
let alone some activation method. For example, imagine if the Federal
Reserve Board are full of geniuses and know exactly when to keep issuance
predictable and when to juice the economy? Having flexibility gives more
options than hardcoding "21M" somewhere, so clearly the USD's approach
is the way to go, and everything is just a matter of appointing the
right people to the board, not all this decentralised stuff. 

The right answer is to reject bad premises, not to argue hypotheticals
that have zero relationship to reality.

Cheers,
aj



Re: [bitcoin-dev] Speedy Trial

2022-03-22 Thread Anthony Towns via bitcoin-dev
On Thu, Mar 17, 2022 at 03:04:32PM +0100, Jorge Timón via bitcoin-dev wrote:
> On Tue, Mar 15, 2022 at 4:45 PM Anthony Towns  wrote:
> > On Fri, Mar 11, 2022 at 02:04:29PM +0000, Jorge Timón via bitcoin-dev wrote:
> > People opposed to having taproot transactions in their chain had over
> > three years to do that coordination before an activation method was merged
> > [0], and then an additional seven months after the activation method was 
> > merged before taproot enforcement began [1].
> >
> > [0] 2018-01-23 was the original proposal, 2021-04-15 was when speedy
> > trial activation parameters for mainnet and testnet were merged.
> > [1] 2021-11-14
> People may be opposed only to the final version, but not the initial
> one or the fundamental concept.
> Please, try to think of worse case scenarios.

I mean, I've already spent a lot of time thinking through these worst
case scenarios, including the ones you bring up. Maybe I've come up with
wrong or suboptimal conclusions about it, and I'm happy to discuss that,
but it's a bit hard to avoid taking offense at the suggestion that I
haven't even thought about it.

In the case of taproot, the final substantive update to the BIP was PR#982
merged on 2020-08-27 -- so even if you'd only been opposed to the changes
in the final version (32B pubkeys perhaps?) you'd have had 1.5 months to
raise those concerns before the code implementing taproot was merged,
and 6 months to raise those concerns before activation parameters were
set. If you'd been following the discussion outside of the code and BIP
text, in the case of 32B pubkeys, you'd have had an additional 15 months
from the time the idea was proposed on 2019-05-22 (or 2019-05-29 if you
only follow optech's summaries) until it was included in the BIP.

> Perhaps there's no opposition until after activation code has been
> released and miners are already starting to signal.
> Perhaps at that moment a reviewer comes and points out a fatal flaw.

Perhaps there's no opposition until the change has been deployed and in
wide use for 30 years. Aborting activation isn't the be-all and end-all
of addressing problems with a proposal, and it's not going to be able to
deal with every problem. For any problems that can be found before the
change is deployed and in use, you want to find them while the proposal
is being discussed.



More broadly, what I don't think you're getting is that *any* method you
can use to abort/veto/revert an activation that's occurring via BIP8 (with
or without mandatory activation), can also be used to abort/veto/revert
a speedy trial activation.

Speedy trial simply changes two things: it allows a minority (~10%)
of hashpower to abort the activation; and it guarantees a "yes" or "no"
answer within three months, while with BIP343 you initially don't know
when within a ~1 year period activation will occur.

If you're part of an (apparent) minority trying to abort/veto/reject
activation, this gives you an additional option: if you can get support
from ~10% of hashpower, you can force an initial "no" answer within
three months, at which point many of the people who were ignoring your
arguments up until then may be willing to reconsider them.

For example, I think Mark Friedenbach's concerns about unhashed pubkeys
and quantum resistance don't make sense, and (therefore) aren't widely
held; but if 10% of blocks during taproot's speedy trial had included a
tagline indicating otherwise and prevented activation, that would have
been pretty clear objective evidence that the concern was more widely
held than I thought, and might be worth reconsidering. Likewise, there
could have somehow been other problems that somehow were being ignored,
that could have similarly been reprioritised in the same way.

That's not the way that you *want* things to work -- ideally people
should be raising the concerns beforehand, and they should be taken
seriously and fixed or addressed beforehand. That did happen with Mark's
concerns -- heck, I raised it as a question ~6 hours after Greg's original
taproot proposal -- and it's directly addressed in the rationale section
of BIP341.

But in the worst case; maybe that doesn't happen. Maybe bitcoin-dev and
other places are somehow being censored, or sensible critics are being
demonised and ignored. The advantage of a hashrate veto here is that it's
hard to fake and hard to censor -- whereas with mailing list messages and
the like, it's both easy to fake (setup sockpuppets and pay troll farms)
and easy to censor (ban/moderate people for spamming say). So as a last
ditch "we've been censored, please take us seriously" method of protest,
it seems worthwhile to have to me.

(Of course, a 90% majority might *still* choose to not take the concerns
of the 10% minority seriously, and just continue to ignore the concern
and follow up with an immediate mandatory activation. But if that's
what's happening, you can't stop it; you can only choose whether you want
to be a part of it, or 

Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-22 Thread Anthony Towns via bitcoin-dev
On Wed, Mar 16, 2022 at 02:54:05PM +0000, ZmnSCPxj via bitcoin-dev wrote:
> My point is that in the past we were willing to discuss the complicated 
> crypto math around cross-input sigagg in order to save bytes, so it seems to 
> me that cross-input compression of puzzles/solutions at least merits a 
> discussion, since it would require a lot less heavy crypto math, and *also* 
> save bytes.

Maybe it would be; but it's not something I was intending to bring up in
this thread.

Chia allows any coin spend to reference any output created in the
same block, and potentially any other input in the same block, and
automatically aggregates all signatures in a block; that's all pretty
neat, but trying to do all that in bitcoin in step one doesn't seem smart.

> > > > I /think/ the compression hook would be to allow you to have the puzzles
> > > > be (re)generated via another lisp program if that was more efficient
> > > > than just listing them out. But I assume it would be turtles, err,
> > > > lisp all the way down, no special C functions like with jets.
> > > Eh, you could use Common LISP or a recent-enough RnRS Scheme to write a
> > > cryptocurrency node software, so "special C function" seems to
> > > overprivilege C...
> > Jets are "special" in so far as they are costed differently at the
> > consensus level than the equivalent pure/jetless simplicity code that
> > they replace. Whether they're written in C or something else isn't the
> > important part.
> > By comparison, generating lisp code with lisp code in chia doesn't get
> > special treatment.
> Hmm, what exactly do you mean here?

This is going a bit into the weeds...

> If I have a shorter piece of code that expands to a larger piece of code 
> because metaprogramming, is it considered the same cost as the larger piece 
> of code (even if not all parts of the larger piece of code are executed, e.g. 
> branches)?

Chia looks at the problem differently to bitcoin. In bitcoin each
transaction includes a set of inputs, and each of those inputs contains
both a reference to a utxo which has a scriptPubKey, and a solution for
the scriptPubKey called the scriptSig. In chia, each block contains a
list of coins (~utxos) that are being spent, each of which has a hash
of its puzzle (~scriptPubKey) which must be solved; each block then
contains a lisp program that will produce all the transaction info,
namely coin (~utxo id), puzzle reveal (~witness program) and solution
(~witness stack); then to verify the block, you need to check the coins
exist, the puzzle reveals all match the corresponding coin's puzzle,
that the puzzle+solution executes successfully, and that the assertions
that get returned by all the puzzle+solutions are all consistent.
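
As a sketch of that flow (run_lisp, tree_hash and check_consistent are
hypothetical stand-ins, and this is chia's model, not bitcoin's):

    def verify_block(coin_set, block_program):
        # running the block's lisp program yields all the spends
        spends = run_lisp(block_program)   # [(coin_id, puzzle, solution), ...]
        conditions = []
        for coin_id, puzzle, solution in spends:
            coin = coin_set[coin_id]                      # coin must exist
            assert coin.puzzle_hash == tree_hash(puzzle)  # reveal matches hash
            conditions += run_lisp(puzzle, solution)      # must not fail
        check_consistent(conditions)   # eg signatures, announcements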

> Or is the cost simply proportional to the number of operations actually 
> executed?

AIUI, the cost is the sum of the size of the program, as well as how
much compute and memory is used to run the program.

In comparison, the cost for an input with tapscript is the size of that
input; memory usage has a fixed maximum (1000 elements in the
stack/altstack, and 520 bytes per element); and compute resources are
limited according to the size of the input.

> It seems to me that lisp-generating-lisp compression would reduce the cost of 
> bytes transmitted, but increase the CPU load (first the metaprogram runs, and 
> *then* the produced program runs).

In chia, you're always running the metaprogram, it may just be that that
program is the equivalent of:

   stuff = lambda: [("hello", "world"), ("hello", "Z-man")]

which doesn't seem much better than just saying:

   stuff = [("hello", "world"), ("hello", "Z-man")]

The advantage is that you could construct a block template optimiser
that rewrites the program to:

   def stuff():
       h = "hello"
       return [(h, "world"), (h, "Z-man")]

which for large values of "hello" may be worthwhile (and the standard
puzzle in chia, at ~227 bytes, is large enough that it might well be
worthwhile, since it implements taproot/graftroot logic from scratch).

> Over in that thread, we seem to have largely split jets into two types:
> * Consensus-critical jets which need a softfork but reduce the weight of the 
> jetted code (and which are invisible to pre-softfork nodes).
> * Non-consensus-critical jets which only need relay change and reduces bytes 
> sent, but keeps the weight of the jetted code.
> It seems to me that lisp-generating-lisp compression would roughly fall into 
> the "non-consensus-critical jets", roughly.

It could do; but the way it's used in chia is consensus-critical. 

I'm not 100% sure how costing works in chia, but I believe a block
template optimiser as above might allow miners to fit more transactions
in their block and therefore collect more transaction fees. That makes
the block packing problem harder though, since it means your transaction
is "cheaper" if it's more similar to other transactions in the block. I
don't think it's relevant today 

Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-22 Thread Anthony Towns via bitcoin-dev
On Tue, Mar 22, 2022 at 05:37:03AM +0000, ZmnSCPxj via bitcoin-dev wrote:
> Subject: Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

(Have you considered applying a jit or some other compression algorithm
to your emails?)

> Microcode For Bitcoin SCRIPT
> 
> I propose:
> * Define a generic, low-level language (the "RISC language").

This is pretty much what Simplicity does, if you optimise the low-level
language to minimise the number of primitives and maximise the ability
to apply tooling to reason about it, which seem like good things for a
RISC language to optimise.

> * Define a mapping from a specific, high-level language to
>   the above language (the microcode).
> * Allow users to sacrifice Bitcoins to define a new microcode.

I think you're defining "the microcode" as the "mapping" here.

This is pretty similar to the suggestion Bram Cohen was making a couple
of months ago:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-December/019722.html
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019773.html
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019803.html

I believe this is done in chia via the block being able to
include-by-reference prior blocks' transaction generators:

] transactions_generator_ref_list: List[uint32]: A list of block heights of 
previous generators referenced by this block's generator.
  - https://docs.chia.net/docs/05block-validation/block_format

(That approach comes at the cost of not being able to do full validation
if you're running a pruning node. The alternative is to effectively
introduce a parallel "utxo" set -- where you're mapping the "sacrificed"
BTC as the nValue and instead of just mapping it to a scriptPubKey for
a later spend, you're permanently storing the definition of the new
CISC opcode)

> We can then support a "RISC" language that is composed of
> general instructions, such as arithmetic, SECP256K1 scalar
> and point math, bytevector concatenation, sha256 midstates,
> bytevector bit manipulation, transaction introspection, and
> so on.

A language that includes instructions for each operation we can think
of isn't very "RISC"... More importantly it gets straight back to the
"we've got a new zk system / ECC curve / ... that we want to include,
let's do a softfork" problem you were trying to avoid in the first place.

> Then, the user creates a new transaction where one of
> the outputs contains, say, 1.0 Bitcoins (exact required
> value TBD),

Likely, the "fair" price would be the cost of introducing however many
additional bytes to the utxo set that it would take to represent your
microcode, and the cost it would take to run jit(your microcode script)
if that were a validation function. Both seem pretty hard to manage.

"Ideally", I think you'd want to be able to say "this old microcode
no longer has any value, let's forget it, and instead replace it with
this new microcode that is much better" -- that way nodes don't have to
keep around old useless data, and you've reduced the cost of introducing
new functionality.

Additionally, I think it has something of a tragedy-of-the-commons
problem: whoever creates the microcode pays the cost, but then anyone
can use it and gain the benefit. That might even end up creating
centralisation pressure: if you design a highly decentralised L2 system,
it ends up expensive because people can't coordinate to pay for the
new microcode that would make it cheaper; but if you design a highly
centralised L2 system, you can just pay for the microcode yourself and
make it even cheaper.

This approach isn't very composable -- if there's a clever opcode
defined in one microcode spec, and another one in some other microcode,
the only way to use both of them in the same transaction is to burn 1
BTC to define a new microcode that includes both of them.

> We want to be able to execute the defined microcode
> faster than expanding an `OP_`-code SCRIPT to a
> `UOP_`-code SCRIPT and having an interpreter loop
> over the `UOP_`-code SCRIPT.
>
> We can use LLVM.

We've not long ago gone to the effort of removing openssl as a consensus
critical dependency; and likewise previously removed bdb.  Introducing a
huge new dependency to the definition of consensus seems like an enormous
step backwards.

This would also mean we'd be stuck at the performance of whatever version
of llvm we initially adopted, as any performance improvements introduced
in later llvm versions would be a hard fork.

> On the other hand, LLVM bugs are compiler bugs and
> the same bugs can hit the static compiler `cc`, too,

"Well, you could hit Achilles in the heel, so really, what's the point
of trying to be invulnerable anywhere else?"

> Then we put a pointer to this compiled function to a
> 256-long array of functions, where the array index is
> the `OP_` code.

That's a 256-long array of functions for each microcode, which increases
the "microcode-utxo" database 

Re: [bitcoin-dev] Speedy Trial

2022-03-15 Thread Anthony Towns via bitcoin-dev
On Fri, Mar 11, 2022 at 02:04:29PM +0000, Jorge Timón via bitcoin-dev wrote:
> >  Thirdly, if some users insist on a chain where taproot is
> > "not activated", they can always softk-fork in their own rule that
> > disallows the version bits that complete the Speedy Trial activation
> > sequence, or alternatively soft-fork in a rule to make spending from (or
> > to) taproot addresses illegal.
> Since it's about activation in general and not about taproot specifically,
> your third point is the one that applies.
> Users could have coordinated to have "activation x" never activated in
> their chains if they simply make a rule that activating a given proposal
> (with bip8) is forbidden in their chain.
> But coordination requires time.

People opposed to having taproot transactions in their chain had over
three years to do that coordination before an activation method was merged
[0], and then an additional seven months after the activation method was merged 
before taproot enforcement began [1].

[0] 2018-01-23 was the original proposal, 2021-04-15 was when speedy
trial activation parameters for mainnet and testnet were merged.
[1] 2021-11-14

For comparison, the UASF activation attempt for segwit took between 4
to 6 months to coordinate, assuming you start counting from either the
"user activated soft fork" concept being raised on bitcoin-dev or the
final params for BIP 148 being merged into the bips repo, and stop
counting when segwit locked in.

> Please, try to imagine an example for an activation that you wouldn't like
> yourself. Imagine it gets proposed and you, as a user, want to resist it.

Sure. There's more steps than just "fork off onto a minority chain"
though.

 1) The first and most important step is to explain why you want to
resist it, either to convince the proposers that there really is
a problem and they should stand down, or so someone can come up
with a way of fixing the proposal so you don't need to resist it.
Ideally, that's all that's needed to resolve the objections. (That's
what didn't happen with opposition to segwit)

 2) If that somehow doesn't work, and people are pushing ahead with a
consensus change despite significant reasonable opposition; the next
thing to do would be to establish if either side is a paper tiger
and set up a futures market. That has the extra benefit of giving
miners some information about which (combination of) rules will be
most profitable to mine for.

Once that's set up and price discovery happens, one side or the other
will probably throw in the towel -- there's not much point having a
money that other people aren't interested in using. (And that more
or less is what happened with 2X)

If a futures market like that is going to be set up, I think it's
best if it happens before signalling for the soft fork starts --
the information miners will get from it is useful for figuring out
how many resources to invest in signalling. I think it might even
be feasible to set something up before activation parameters are
finalised; you need something more than just one-on-one twitter bets
to get meaningful price discovery, but I think you could probably
build something based on a reasonably unbiased oracle declaring an
outcome, without precisely defined parameters fixed in a BIP.

So if acting like reasonable people and talking it through doesn't
work, this seems like the next step to me.

 3) But maybe you try both those and they fail and people start trying
to activate the soft fork (or perhaps you just weren't paying
attention until it was too late, and missed the opportunity).

I think the speedy trial approach here is ideal for a last-ditch
"everyone stays on the same chain while avoiding this horrible change"
attempt. The reason is that it allows everyone to agree not to adopt
the new rules at very little cost: all you need is for 10% of
hashpower to not signal over a three month period.

That's cheaper than bip9 (blocking there takes 5% of hashpower over
12 months -- 2x the cumulative hashpower of 10% over three months),
and much cheaper than bip8, which requires users to update their
software

 4) At this point, if you were able to prevent activation, hopefully
that's enough of a power move that people will take your concerns
seriously, and you get a second chance at step (1). If that still
results in an impasse, I'd expect there to be a second, non-speedy
activation of the soft fork, that either cannot be blocked at all, or
cannot be blocked without having control of at least 60% of hashpower.

 5) If you weren't able to prevent activation (whether or not you
prevented speedy trial from working), then you should have a lot
of information:

  - you weren't able to convince people there was a problem

  - you either weren't in the economic majority and people don't
think your concept of bitcoin is more valuable 

Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-10 Thread Anthony Towns via bitcoin-dev
On Tue, Mar 08, 2022 at 03:06:43AM +0000, ZmnSCPxj via bitcoin-dev wrote:
> > > They're radically different approaches and
> > > it's hard to see how they mix. Everything in lisp is completely sandboxed,
> > > and that functionality is important to a lot of things, and it's really
> > > normal to be given a reveal of a scriptpubkey and be able to rely on your
> > > parsing of it.
> > The above prevents combining puzzles/solutions from multiple coin spends,
> > but I don't think that's very attractive in bitcoin's context, the way
> > it is for chia. I don't think it loses much else?
> But cross-input signature aggregation is a nice-to-have we want for Bitcoin, 
> and, to me, cross-input sigagg is not much different from cross-input 
> puzzle/solution compression.

Signature aggregation has a lot more maths and crypto involved than
reversible compression of puzzles/solutions. I was more meaning
cross-transaction relationships rather than cross-input ones though.

> > I /think/ the compression hook would be to allow you to have the puzzles
> > be (re)generated via another lisp program if that was more efficient
> > than just listing them out. But I assume it would be turtles, err,
> > lisp all the way down, no special C functions like with jets.
> Eh, you could use Common LISP or a recent-enough RnRS Scheme to write a 
> cryptocurrency node software, so "special C function" seems to overprivilege 
> C...

Jets are "special" in so far as they are costed differently at the
consensus level than the equivalent pure/jetless simplicity code that
they replace.  Whether they're written in C or something else isn't the
important part.

By comparison, generating lisp code with lisp code in chia doesn't get
special treatment.

(You *could* also use jets in a way that doesn't impact consensus just
to make your node software more efficient in the normal case -- perhaps
via a JIT compiler that sees common expressions in the blockchain and
optimises them eg)

On Wed, Mar 09, 2022 at 02:30:34PM +0000, ZmnSCPxj via bitcoin-dev wrote:
> Do note that PTLCs remain more space-efficient though, so forget about HTLCs 
> and just use PTLCs.

Note that PTLCs aren't really Chia-friendly: chia doesn't have
secp256k1 operations in the first place, and you can't do a
scriptless-script because the information you need to extract
is lost when signatures are non-interactively aggregated via BLS --
so that adds an expensive extra ECC operation rather than reusing an
op you're already paying for (scriptless script PTLCs) or just adding
a cheap hash operation (HTLCs).

(Pretty sure Chia could do (= PTLC (pubkey_for_exp PREIMAGE)) for
preimage reveal of BLS PTLCs, but that wouldn't be compatible with
bitcoin secp256k1 PTLCs. You could sha256 the PTLC to save a few bytes,
but I think given how much a sha256 opcode costs in Chia, that that
would actually be more expensive?)

None of that applies to a bitcoin implementation that doesn't switch to
BLS signatures though.

> > But if they're fully baked into the scriptpubkey then they're opted into by 
> > the recipient and there aren't any weird surprises.
> This is really what I kinda object to.
> Yes, "buyer beware", but consider that as the covenant complexity increases, 
> the probability of bugs, intentional or not, sneaking in, increases as well.
> And a bug is really "a weird surprise" --- xref TheDAO incident.

Which is better: a bug in the complicated script code specified for
implementing eltoo in a BOLT; or a bug in the BIP/implementation of a
new sighash feature designed to make it easy to implement eltoo, that's
been soft-forked into consensus?

Seems to me, that it's always better to have the bug be at the wallet
level, since that can be fixed by upgrading individual wallet software.

> This makes me kinda wary of using such covenant features at all, and if stuff 
> like `SIGHASH_ANYPREVOUT` or `OP_CHECKTEMPLATEVERIFY` are not added but must 
> be reimplemented via a covenant feature, I would be saddened, as I now have 
> to contend with the complexity of covenant features and carefully check that 
> `SIGHASH_ANYPREVOUT`/`OP_CHECKTEMPLATEVERIFY` were implemented correctly.
> True I also still have to check the C++ source code if they are implemented 
> directly as opcodes, but I can read C++ better than frikkin Bitcoin SCRIPT.

If OP_CHECKTEMPLATEVERIFY (etc) is implemented as a consensus update, you
probably want to review the C++ code even if you're not going to use it,
just to make sure consensus doesn't end up broken as a result. Whereas if
it's only used by other people's wallets, you might be able to ignore it
entirely (at least until it becomes so common that any bugs might allow
a significant fraction of BTC to be stolen/lost and indirectly cause a
systemic risk).

> Not to mention that I now have to review both the (more complicated due to 
> more general) covenant feature implementation, *and* the implementation of 
> 

Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-09 Thread Anthony Towns via bitcoin-dev
On Tue, Mar 08, 2022 at 06:54:56PM -0800, Bram Cohen via bitcoin-dev wrote:
On Mon, Mar 7, 2022 at 5:27 PM Anthony Towns wrote:
> > One way to match the way bitcoin does things would be to have the "list of
> > extra conditions" encoded explicitly in the transaction via the annex,
> > and then check the extra conditions when the script is executed.
> The conditions are already basically what's in transactions. I think the
> only thing missing is the assertion about one's own id, which could be
> added in by, in addition to passing the scriptpubkey the transaction it's
> part of, also passing in the index of inputs which it itself is.

To redo the singleton pattern in bitcoin's context, I think you'd have
to pass in both the full tx you're spending (to be able to get the
txid of its parent) and the full tx of its parent (to be able to get
the scriptPubKey that your utxo spent) which seems klunky but at least
possible (you'd be able to drop the witness data at least; without that
every tx would be including the entire history of the singleton).

> > > A nice side benefit of sticking with the UTXO model is that the soft fork
> > > hook can be that all unknown opcodes make the entire thing automatically
> > > pass.
> > I don't think that works well if you want to allow the spender (the
> > puzzle solution) to be able to use opcodes introduced in a soft-fork
> > (eg, for graftroot-like behaviour)?
> This is already the approach to soft forking in Bitcoin script and I don't
> see anything wrong with it.

It's fine in Bitcoin script, because the scriptPubKey already commits to
all the opcodes that can possibly be used for any particular output. With
a lisp approach, however, you could pass in additional code fragments
to execute. For example, where you currently say:

  script: [pubkey] CHECKSIG
  witness: [64B signature][0x83]

(where 0x83 is SINGLE|ANYONECANPAY) you might translate that to:

  script: (checksig pubkey (bip342-txmsg 3) 2)
  witness: signature 0x83

where "3" grabs the sighash byte, and "2" grabs the signature. But you
could also translate it to:

  script: (checksig pubkey (sha256 3 (a 3)) 2)
  witness: signature (bip342-txmsg 0x83)

where "a 3" takes "(bip342-txmsg 0x83)" then evaluates it, and (sha256
3 (a 3)) makes sure you've signed off on both how the message was
constructed as well as what the message was. The advantage there is that
the spender can then create their own signature hashes however they like;
even ones that hadn't been thought of when the output was created.
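
(A Python sketch of that digest construction -- purely illustrative,
with the lisp environment lookups replaced by explicit arguments:)

    import hashlib

    def signed_digest(msg_code: bytes, evaluate) -> bytes:
        # "(a 3)": run the spender-supplied message-construction code
        msg = evaluate(msg_code)
        # "(sha256 3 (a 3))": commit to both the code that built the
        # message and the message itself
        return hashlib.sha256(msg_code + msg).digest()

A signature over that digest can't be reused with a different
message-construction snippet, which is what makes the delegation safe.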

But what if we later softfork in a bip118-txmsg for quick and easy
ANYPREVOUT style-signatures, and want to use that instead of custom
lisp code? You can't just stick (softfork C (bip118-txmsg 0xc3)) into
the witness, because it will evaluate to nil and you won't be signing
anything. But you *could* change the script to something like:

  script: (softfork C (q checksigverify pubkey (a 3) 2))
  witness: signature (bip118-txmsg 0xc3)

But what happens if the witness instead has:

  script: (softfork C (q checksigverify pubkey (a 3) 2))
  witness: fake-signature (fakeopcode 0xff)

If softfork is just doing a best effort for whatever opcodes it knows
about, and otherwise succeeding, then it has to succeed, and your
script/output has become anyone-can-spend.

On the other hand, if you could tell the softfork op that you only wanted
ops up-to-and-including the 118 softfork, then it could reject fakeopcode
and fail the script, which I think gives the desirable behaviour.
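
(A Python sketch of that version-bound behaviour; the opcode table and
version numbers are hypothetical, the point is just that unknown opcodes
flip from "succeed" to "fail" once the script states a bound:)

    # hypothetical map: opcode name -> the softfork that introduced it
    KNOWN_OPS = {"checksigverify": 0, "bip342-txmsg": 0, "bip118-txmsg": 118}

    def softfork_check(max_version: int, ops) -> bool:
        for op in ops:
            introduced = KNOWN_OPS.get(op)
            if introduced is None or introduced > max_version:
                return False        # unknown or too new: fail the script
        return True

    softfork_check(118, ["checksigverify", "bip118-txmsg"])   # True
    softfork_check(118, ["fakeopcode"])                       # False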

Cheers,
aj


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-07 Thread Anthony Towns via bitcoin-dev
On Sun, Mar 06, 2022 at 10:26:47PM -0800, Bram Cohen via bitcoin-dev wrote:
> > After looking into it, I actually think chia lisp [1] gets pretty much all
> > the major design decisions pretty much right. There are obviously a few
> > changes needed given the differences in design between chia and bitcoin:
> Bitcoin uses the UTXO model as opposed to Chia's Coin Set model. These
> are close enough that Chia is often explained as using the UTXO model,
> but that isn't technically true. Relevant to the above comment is
> that in the UTXO model transactions get passed to a scriptpubkey and it
> either assert fails or it doesn't, while in the coin set model each puzzle
> (scriptpubkey) gets run and either assert fails or returns a list of extra
> conditions it has, possibly including timelocks and creating new coins,
> paying fees, and other things.

One way to match the way bitcoin does things would be to have the "list of
extra conditions" encoded explicitly in the transaction via the annex,
and then check the extra conditions when the script is executed.

> If you're doing everything from scratch it's cleaner to go with the coin
> set model, but retrofitting onto existing Bitcoin it may be best to leave
> the UTXO model intact and compensate by adding a bunch more opcodes which
> are special to parsing Bitcoin transactions. The transaction format itself
> can be mostly left alone but to enable some of the extra tricks (mostly
> implementing capabilities) it's probably a good idea to make new
> conventions for how a transaction can have advisory information which
> specifies which of the inputs to a transaction is the parent of a specific
> output and also info which is used for communication between the UTXOs in a
> transaction.

I think the parent/child coin relationship is only interesting when
"unrelated" spends can assert that the child coin is being created -- ie
things along the lines of the "transaction sponsorship" proposal. My
feeling is that complicates the mempool a bit much, so is best left for
later, if done at all.

(I think the hard part of managing the extra conditions is mostly
in keeping it efficient to manage the mempool and construct the most
profitable blocks/bundles, rather than where the data goes)

> But one could also make lisp-generated UTXOs be based off transactions
> which look completely trivial and have all their important information be
> stored separately in a new vbytes area. That works but results in a bit of
> a dual identity where some coins have both an old style id and a new style
> id which gunks up what

We've already got a txid and a wtxid, adding more ids seems best avoided
if possible...

> > Pretty much all the opcodes in the first section are directly from chia
> > lisp, while all the rest are to complete the "bitcoin" functionality.
> > The last two are extensions that are more food for thought than a real
> > proposal.
> Are you thinking of this as a completely alternative script format or an
> extension to bitcoin script?

As an alternative to tapscript, so when constructing the merkle tree of
scripts for a taproot address, you could have some of those scripts be
in tapscript as it exists today with OP_CHECKSIG etc, and others could
be in lisp. (You could then have an entirely lisp-based sub-merkle-tree
of lisp fragments via sha256tree or similar of course)

> They're radically different approaches and
> it's hard to see how they mix. Everything in lisp is completely sandboxed,
> and that functionality is important to a lot of things, and it's really
> normal to be given a reveal of a scriptpubkey and be able to rely on your
> parsing of it.

The above prevents combining puzzles/solutions from multiple coin spends,
but I don't think that's very attractive in bitcoin's context, the way
it is for chia. I don't think it loses much else?

> > There's two ways to think about upgradability here; if someday we want
> > to add new opcodes to the language -- perhaps something to validate zero
> > knowledge proofs or calculate sha3 or use a different ECC curve, or some
> > way to support cross-input signature aggregation, or perhaps it's just
> > that some snippets are very widely used and we'd like to code them in
> > C++ directly so they validate quicker and don't use up as much block
> > weight. One approach is to just define a new version of the language
> > via the tapleaf version, defining new opcodes however we like.
> A nice side benefit of sticking with the UTXO model is that the soft fork
> hook can be that all unknown opcodes make the entire thing automatically
> pass.

I don't think that works well if you want to allow the spender (the
puzzle solution) to be able to use opcodes introduced in a soft-fork
(eg, for graftroot-like behaviour)?

> Chia's approach to transaction fees is essentially identical to Bitcoin's
> although a lot fewer things in the ecosystem support fees due to a lack of
> having needed it yet. I don't think mempool issues have much to 

Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-07 Thread Anthony Towns via bitcoin-dev
On Sat, Mar 05, 2022 at 12:20:02PM +0000, Jeremy Rubin via bitcoin-dev wrote:
> On Sat, Mar 5, 2022 at 5:59 AM Anthony Towns wrote:
> > The difference between information in the annex and information in
> > either a script (or the input data for the script that is the rest of
> > the witness) is (in theory) that the annex can be analysed immediately
> > and unconditionally, without necessarily even knowing anything about
> > the utxo being spent.
> I agree that should happen, but there are cases where this would not work.
> E.g., imagine OP_LISP_EVAL + OP_ANNEX... and then you do delegation via the
> thing in the annex.
> Now the annex can be executed as a script.

You've got the implication backwards: the benefit isn't that the annex
*can't* be used as/in a script; it's that it *can* be used *without*
having to execute/analyse a script (and without even having to load the
utxo being spent).

How big a benefit that is might be debatable -- it's only a different
ordering of the work you have to do to be sure the transaction is valid;
it doesn't reduce the total work. And I think you can easily design
invalid transactions that will maximise the work required to establish
the tx is invalid, no matter what order you validate things.

> Yes, this seems tough to do without redefining checksig to allow partial
> annexes.

"Redefining checksig to allow X" in taproot means "defining a new pubkey
format that allows a new sighash that allows X", which, if it turns out
to be necessary/useful, is entirely possible.  It's not sensible to do
what you suggest *now* though, because we don't have a spec of how a
partial annex might look.

> Hence thinking we should make our current checksig behavior
> require it be 0,

Signatures already commit to the annex. If you personally want to
require it be absent for every future transaction you sign off on,
you already can.

> > It seems like a good place for optimising SIGHASH_GROUP (allowing a group
> > of inputs to claim a group of outputs for signing, but not allowing inputs
> > from different groups to ever claim the same output; so that each output
> > is hashed at most once for this purpose) -- since each input's validity
> > depends on the other inputs' state, it's better to be able to get at
> > that state as easily as possible rather than having to actually execute
> > other scripts before you can tell if your script is going to be valid.
> I think SIGHASH_GROUP could be some sort of mutable stack value, not ANNEX.

The annex is already a stack value, and the SIGHASH_GROUP parameter
cannot be mutable since it will break the corresponding signature, and
(in order to ensure validating SIGHASH_GROUP signatures don't require
hashing the same output multiple times) also impacts SIGHASH_GROUP
signatures from other inputs.

> you want to be able to compute what range you should sign, and then the
> signature should cover the actual range not the argument itself.

The value that SIGHASH_GROUP proposes putting in the annex is just an
indication of whether (a) this input is using the same output group as
the previous input; or else (b) how many outputs are in this input's
output group. The signature naturally commits to that value because it's
signing all the outputs in the group anyway.

> Why sign the annex literally?

To prevent it from being third-party malleable.

When there is some meaning assigned to the annex then perhaps it will
make sense to add some more granular way of accessing it via script, but
until then, committing to the whole thing is the best option possible,
since it still allows some potential uses of the annex without having
to define a new sighash.

Note that signing only part of the annex means that you probably
reintroduce the quadratic hashing problem -- that is, with a script of
length X and an annex of length Y, you may have to hash O(X*Y) bytes
instead of O(X+Y) bytes (because every X/k bytes of the script selects
a different Y/j subset of the annex to sign).

> Why require that all signatures in one output sign the exact same digest?
> What if one wants to sign for value and another for value + change?

You can already have one signature for value and one for value+change:
use SIGHASH_SINGLE for the former, and SIGHASH_ALL for the latter.
SIGHASH_GROUP is designed for the case where the "value" goes to
multiple places.

> > > Essentially, I read this as saying: The annex is the ability to pad a
> > > transaction with an additional string of 0's
> > If you wanted to pad it directly, you can do that in script already
> > with a PUSH/DROP combo.
> You cannot, because the push/drop would not be signed and would be
> malleable.

If it's a PUSH, then it's in the tapscript and committed to by the
scriptPubKey, and not malleable.

There's currently no reason to have padding specifiable at spend time --
you know when you're writing the script whether the spender can reuse
the same signature for multiple CHECKSIG ops, because the only way to
do that is to 

Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-04 Thread Anthony Towns via bitcoin-dev
On Fri, Mar 04, 2022 at 11:21:41PM +0000, Jeremy Rubin via bitcoin-dev wrote:
> I've seen some discussion of what the Annex can be used for in Bitcoin. 

https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html

includes some discussion on that topic from the taproot review meetings.

The difference between information in the annex and information in
either a script (or the input data for the script that is the rest of
the witness) is (in theory) that the annex can be analysed immediately
and unconditionally, without necessarily even knowing anything about
the utxo being spent.

The idea is that we would define some simple way of encoding (multiple)
entries into the annex -- perhaps a tag/length/value scheme like
lightning uses; maybe if we add a lisp scripting language to consensus,
we just reuse the list encoding from that? -- at which point we might
use one tag to specify that a transaction uses advanced computation, and
needs to be treated as having a heavier weight than its serialized size
implies; but we could use another tag for per-input absolute locktimes;
or another tag to commit to a past block height having a particular hash.
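
(For concreteness, a Python sketch of what a lightning-style
tag/length/value encoding of annex entries might look like; the
single-byte tags and lengths are invented for illustration, not a
proposal:)

    def encode_annex_entries(entries) -> bytes:
        # entries: list of (tag, value_bytes) pairs
        out = b""
        for tag, value in entries:
            assert 0 <= tag <= 255 and len(value) <= 255
            out += bytes([tag, len(value)]) + value
        return out

    # eg, tag 1 = "extra weight", value 420 as a 2-byte big-endian int:
    encode_annex_entries([(1, (420).to_bytes(2, "big"))]).hex()  # "010201a4"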

It seems like a good place for optimising SIGHASH_GROUP (allowing a group
of inputs to claim a group of outputs for signing, but not allowing inputs
from different groups to ever claim the same output; so that each output
is hashed at most once for this purpose) -- since each input's validity
depends on the other inputs' state, it's better to be able to get at
that state as easily as possible rather than having to actually execute
other scripts before you can tell if your script is going to be valid.

> The BIP is tight-lipped about its purpose

BIP341 only reserves an area to put the annex; it doesn't define how
it's used or why it should be used.

> Essentially, I read this as saying: The annex is the ability to pad a
> transaction with an additional string of 0's 

If you wanted to pad it directly, you can do that in script already
with a PUSH/DROP combo.

The point of doing it in the annex is you could have a short byte
string, perhaps something like "0x010201a4" saying "tag 1, data length 2
bytes, value 420" and have the consensus intepretation of that be "this
transaction should be treated as if it's 420 weight units more expensive
than its serialized size", while only increasing its witness size by
6 bytes (annex length, annex flag, and the four bytes above). Adding 6
bytes for a 426 weight unit increase seems much better than adding 426
witness bytes.

The example scenario is that if there were an opcode to verify a
zero-knowledge proof -- eg, I think bulletproof range proofs are
something like 10x longer than a signature, but require something like
400x the validation time. Since checksig has a validation weight of 50
units, a bulletproof verify might have a 400x greater validation weight,
ie 20,000 units, while your witness data is only 650 bytes serialized. In
that case, we'd need to artificially bump the weight of your transaction
up by the missing 19,350 units, or else an attacker could fill a block
with perhaps 6000 bulletproofs, costing the equivalent of 120M units
of validation weight (about 2.4M signature operations), rather than
the 80k sigops we currently expect as the maximum in a block. Seems
better to just have "0x01024b96" stuck in the annex, than 19kB of zeroes.
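
(Decoding the two example annex entries above, under the same
hypothetical TLV scheme sketched earlier:)

    def decode_annex_entries(data: bytes):
        entries, i = [], 0
        while i < len(data):
            tag, length = data[i], data[i + 1]
            entries.append((tag, data[i + 2:i + 2 + length]))
            i += 2 + length
        return entries

    for hexstr in ("010201a4", "01024b96"):
        [(tag, value)] = decode_annex_entries(bytes.fromhex(hexstr))
        print(tag, int.from_bytes(value, "big"))   # 1 420, then 1 19350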

> Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode,
> OP_ANNEX which puts the annex on the stack

I think you'd want to have a way of accessing individual entries from
the annex, rather than the annex as a single unit.

> Now suppose that I have a computation that I am running in a script as
> follows:
> 
> OP_ANNEX
> OP_IF
> `some operation that requires annex to be <1>`
> OP_ELSE
> OP_SIZE
> `some operation that requires annex to be len(annex) + 1 or does a
> checksig`
> OP_ENDIF
> 
> Now every time you run this,

You only run a script from a transaction once, at which point its
annex is known (a different annex gives a different wtxid and breaks
any signatures), and can't reference previous or future transactions'
annexes...

> Because the Annex is signed, and must be the same, this can also be
> inconvenient:

The annex is committed to by signatures in the same way nVersion,
nLockTime and nSequence are committed to by signatures; I think it helps
to think about it in a similar way.

> Suppose that you have a Miniscript that is something like: and(or(PK(A),
> PK(A')), X, or(PK(B), PK(B'))).
> 
> A or A' should sign with B or B'. X is some sort of fragment that might
> require a value that is unknown (and maybe recursively defined?) so
> therefore if we send the PSBT to A first, which commits to the annex, and
> then X reads the annex and say it must be something else, A must sign
> again. So you might say, run X first, and then sign with A and C or B.
> However, what if the script somehow detects the bitstring WHICH_A WHICH_B
> and has a 

[bitcoin-dev] bitcoin scripting and lisp

2022-03-03 Thread Anthony Towns via bitcoin-dev
On Sun, Feb 27, 2022 at 04:34:31PM +0000, ZmnSCPxj via bitcoin-dev wrote:
> In reaction to this, AJ Towns mailed me privately about some of his
> thoughts on this insane `OP_EVICT` proposal.
> He observed that we could generalize the `OP_EVICT` opcode by
> decomposing it into smaller parts, including an operation congruent
> to the Scheme/Haskell/Scala `map` operation.

At much the same time Zman was thinking about OP_FOLD and in exactly the
same context, I was wondering what the simplest possible language that
had some sort of map construction was -- I mean simplest in a "practical
engineering" sense; I think Simplicity already has the Euclidean/Peano
"least axioms" sense covered.

The thing that's most appealing to me about bitcoin script as it stands
(beyond "it works") is that it's really pretty simple in an engineering
sense: it's just a "forth" like system, where you put byte strings on a
stack and have a few operators to manipulate them.  The alt-stack, and
supporting "IF" and "CODESEPARATOR" add a little additional complexity,
but really not very much.

To level-up from that, instead of putting byte strings on a stack, you
could have some other data structure than a stack -- eg one that allows
nesting. Simple ones that come to mind are lists of (lists of) byte
strings, or a binary tree of byte strings [0]. Both those essentially
give you a lisp-like language -- lisp is obviously all about lists,
and a binary tree is just made of things or pairs of things, and pairs
of things are just another way of saying "car" and "cdr".

A particular advantage of lisp-like approaches is that they treat code
and data exactly the same -- so if we're trying to leave the option open
for a transaction to supply some unexpected code on the witness stack,
then lisp handles that really naturally: you were going to include data
on the stack anyway, and code and data are the same, so you don't have
to do anything special at all. And while I've never really coded in
lisp at all, my understanding is that its biggest problems are all about
doing things efficiently at large scales -- but script's problem space
is for very small scale things, so there's at least reason to hope that
any problems lisp might have won't actually show up for this use case.

So, to me, that seemed like something worth looking into...



After looking into it, I actually think chia lisp [1] gets pretty much all
the major design decisions pretty much right. There are obviously a few
changes needed given the differences in design between chia and bitcoin:

 - having secp256k1 signatures (and curve operations), instead of
   BLS12-381 ones

 - adding tx introspection instead of having bundle-oriented CREATE_COIN,
   and CREATE/ASSERT results [10]

and there are a couple of other things that could maybe be improved
upon:

 - serialization seems to be a bit verbose -- 100kB of serialized clvm
   code from a random block gzips to 60kB; optimising the serialization
   for small lists, and perhaps also for small literal numbers might be
   a feasible improvement; though it's not clear to me how frequently
   serialization size would be the limiting factor for cost versus
   execution time or memory usage.

 - I don't think execution costing takes into account how much memory
   is used at any one time, just how much was allocated in total; so
   the equivalent of (OP_DUP OP_DROP OP_DUP OP_DROP ..) only has the
   allocations accounted for, with no discount given for the immediate
   freeing, so it gets treated as having the same cost as (OP_DUP
OP_DUP ..  OP_DROP OP_DROP ..). Doing it that way would be worse
   than how bitcoin script is currently costed, but doing better might
   mean locking in an evaluation method at the consensus level. Seems
   worth looking into, at least.
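
(A toy Python illustration of the difference -- not clvm's actual cost
model -- tracking total bytes ever allocated versus the peak held at any
one time, for a stack of 32-byte items:)

    def alloc_costs(ops, item_size=32):
        height, peak_height, total_allocated = 1, 1, 0
        for op in ops:
            if op == "DUP":
                height += 1
                total_allocated += item_size
                peak_height = max(peak_height, height)
            elif op == "DROP":
                height -= 1
        return total_allocated, peak_height * item_size

    alloc_costs(["DUP", "DROP"] * 100)           # (3200, 64)
    alloc_costs(["DUP"] * 100 + ["DROP"] * 100)  # (3200, 3232): same total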

But otherwise, it seems a pretty good match.



I think you'd need about 40 opcodes to match bitcoin script and (roughly)
chia lisp, something like:

   q - quote
   a - apply
   x - exception / immediately fail (OP_RETURN style)
   i - if/then/else
   softfork - upgradability
   not, all, any - boolean logic
   bitand, bitor, bitxor, bitnot, shift - bitwise logic
   = - bitwise equality
   >, -, +, *, /, divmod - (signed, bignum) arithmetic
   ashift - arithmetic shift (sign extended)
   >s - string comparison
   strlen, substr, concat - string ops
   f, r, c, l - list ops (head, tail, make a list, is this a list?)
   sha256 - hashing

   numequal - arithmetic equal, equivalent to (= (+ a 0) (+ b 0))
   ripemd160, hash160, hash256 - more hashing
   bip342-txmsg - given a sighash byte, construct the bip342 message
   bip340-verify - given a pubkey, message, and signature, bip340-verify it
   tx - get various information about the tx
   taproot - get merkle path/internalpubkey/program/annex information
   ecdsa -

Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-25 Thread Anthony Towns via bitcoin-dev
On Thu, Feb 24, 2022 at 12:03:32PM +0000, ZmnSCPxj via bitcoin-dev wrote:
> > > Logically, if the construct is general enough to form Drivechains, and
> > > we rejected Drivechains, we should also reject the general construct.
> > Not providing X because it can only be used for E, may generalise to not
> > providing Y which can also only be used for E, but it doesn't necessarily
> > generalise to not providing Z which can be used for both G and E.
> Does this not work only if the original objection to merging in BIP-300 was 
> of the form:
> * X implements E.
> * Z implements G and E.
> * Therefore, we should not merge in X and instead should merge in the more 
> general construct Z.

I'd describe the "original objection" more as "E is not worth doing;
X achieves nothing but E; therefore we should not work on or merge X".

Whether we should work on or eventually merge some other construct that
does other things than E, depends on the (relative) merits of those
other things.

> I think we really need someone who NACKed BIP-300 to speak up.

Here's some posts from 2017:

] I think it's great that people want to experiment with things like
] drivechains/sidechains and what not, but their security model is very
] distinct from Bitcoin's and, given the current highly centralized
] mining ecosystem, arguably not very good.  So positioning them as a
] major solution for the Bitcoin project is the wrong way to go. Instead
] we should support people trying cool stuff, at their own risk.

 - https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014726.html

] Regardless, people are free experiment and adopt such an approach. The
] nice thing about it not being a hardfork is that it does not require
] network-wide consensus to deploy. However, I don't think they offer a
] security model that should be encouraged, and thus doesn't have a
] place on a roadmap.

 - https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014729.html

> If my understanding is correct and that the original objection was 
> "Drivechains are bad for reasons R[0], R[1]...", then:
> * You can have either of these two positions:
>   * R[0], R[1] ... are specious arguments and Drivechains are not bad [...]
> * R[0], R[1] ... are valid arguments and Drivechains are bad, therefore we
> should **NOT** merge in a feature that implements Recursive Covenants [...]
> You cannot have it both ways.

I guess you mean to say that I've got to pick one, rather than can't
pick both. But in any event, I don't pick either; my view is more along
the lines of:

 * drivechains shouldn't be used
 * it's okay if other people think drivechains are worth using, and go
   ahead and do so, if they're not creating a direct burden on everyone
   else

That's the same position I hold for other things, like using lightning
on mainnet in January 2018; or giving your bitcoin to an anonymous
custodian so it can be borrowed via a flash loan on some novel third
party smart contract platform.

> Admittedly, there may be some set of restrictions that prevent 
> Turing-Completeness from implementing Drivechains, but you have to 
> demonstrate a proof of that set of restrictions existing.

Like I said; I don't think the drivechains game theory works without
the implicit threat of miner censorship, and therefore you need a
"from_coinbase" flag as well as covenants. That's not a big objection,
though. (On the other hand, if I'm wrong and drivechains *do* work
without that threat; then drivechains don't cause a block size increase,
and can be safely ignored by miners and full node operators, and the
arguments against drivechains are specious; and implementing them purely
via covenants so miners aren't put in a privileged position seems an
improvement)

> > I think it's pretty reasonable to say:
> >
> > a) adding dedicated consensus features for drivechains is a bad idea
> > in the absence of widespread consensus that drivechains are likely
> > to work as designed and be a benefit to bitcoin overall
> >
> > b) if you want to risk your own funds by leaving your coins on an
> > exchange or using lightning or eltoo or tumbling/coinjoin or payment
> > pools or drivechains or being #reckless in some other way, and aren't
> > asking for consensus changes, that's your business
> 
> *Shrug* I do not really see the distinction here --- in a world with 
> Drivechains, you are free to not put your coins in a Drivechain-backed 
> sidechain, too.

Well, yes: I'm saying there's no distinction between putting funds in
drivechains and other #reckless things you might do with your money?

My opinion is (a) we should be conservative about adding new consensus
features because of the maintenance cost; (b) we should design
consensus/policy in a way to encourage minimising the externality costs
users impose on each other; and (c) we should make it as easy as possible
to use bitcoin safely in general -- but if people *want* to be reckless,
even knowing the consequences, that's 

Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-23 Thread Anthony Towns via bitcoin-dev
On Wed, Feb 23, 2022 at 11:28:36AM +0000, ZmnSCPxj via bitcoin-dev wrote:
> Subject: Turing-Completeness, And Its Enablement Of Drivechains

> And we have already rejected Drivechains,

That seems overly strong to me.

> for the following reason:
> 1.  Sidechain validators and mainchain miners have a strong incentive to
> merge their businesses.
> 2.  Mainchain miners end up validating and committing to sidechain blocks.
> 3.  Ergo, sidechains on Drivechains become a block size increase.

I think there are two possible claims about drivechains that would make
them unattractive, if true:

 1) that adding a drivechain is a "block size increase" in the sense
that every full node and every miner need to do more work when
validating a block, in order to be sure whether the majority of hash
rate will consider it valid, or will reject it and refuse to build
on it because it's invalid because of some external drivechain rule

 2) that funds deposited in drivechains will be stolen because
the majority of hashrate is not enforcing drivechain rules (or that
deposited funds cannot be withdrawn, but will instead be stuck in
the drivechain, rather than having a legitimate two-way peg)

And you could combine those claims, saying that one or the other will
happen (depending on whether more or less than 50% of hashpower is
enforcing drivechain rules), and either is bad, even though you don't
know which will happen.

I believe drivechain advocates argue a third outcome is possible where
neither of those claims holds true: only a minority of hashrate needs
to validate the drivechain rules, but that is still sufficient
to prevent drivechain funds from being stolen.

One way to "reject" drivechains is simply to embrace the second claim --
that putting money into drivechains isn't safe, and that miners *should*
claim coins that have been drivechain encumbered (or that miners
should not assist with withdrawing funds, leaving them trapped in the
drivechain). In some sense this is already the case: bip300 rules aren't
enforced, so funds committed today via bip300 can likely expect to be
stolen, and likely won't receive the correct acks, so won't progress
even if they aren't stolen.



I think a key difference between tx-covenant based drivechains and bip300
drivechains is hashpower endorsement: if 50% of hashpower acks enforcement
of a new drivechain (as required in bip300 for a new drivechain to exist
at all), there's an implicit threat that any block proposing an incorrect
withdrawal from that blockchain will have their block considered invalid
and get reorged out -- either directly by that hashpower majority, or
indirectly by users conducting a UASF forcing the hashpower majority to
reject those blocks.

I think removing that implicit threat changes the game theory
substantially: rather than deposited funds being withdrawn due to the
drivechain rules, you'd instead expect them to be withdrawn according to
whoever's willing to offer the miners the most upfront fees to withdraw
the funds.

That seems to me to mean you'd frequently expect to end up in a scorched
earth scenario, where someone attempts to steal, then they and the
legitimate owner gets into a bidding war, with the result that most
of the funds end up going to miners in fees. Because of the upfront
payment vs delayed collection of withdrawn funds, maybe it could end up
as a dollar auction, with the two parties competing to lose the least,
but still both losing substantial amounts?

So I think covenant-based drivechains would be roughly the same as bip300
drivechains, where a majority of hashpower used software implementing
the following rules:

 - always endorse any proposed drivechain
 - always accept any payment into a drivechain
 - accept bids to ack/nack withdrawals, then ack/nack depending on
   whoever pays the most

You could probably make covenant-based drivechains a closer match to
bip300 drivechains if a script could determine if an input was from a
(100-block prior) coinbase or not.

> Logically, if the construct is general enough to form Drivechains, and
> we rejected Drivechains, we should also reject the general construct.

Not providing X because it can only be used for E, may generalise to not
providing Y which can also only be used for E, but it doesn't necessarily
generalise to not providing Z which can be used for both G and E.

I think it's pretty reasonable to say:

 a) adding dedicated consensus features for drivechains is a bad idea
in the absence of widespread consensus that drivechains are likely
to work as designed and be a benefit to bitcoin overall

 b) if you want to risk your own funds by leaving your coins on an
exchange or using lightning or eltoo or tumbling/coinjoin or payment
pools or drivechains or being #reckless in some other way, and aren't
asking for consensus changes, that's your business

Cheers,
aj


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-17 Thread Anthony Towns via bitcoin-dev
On Fri, Feb 11, 2022 at 12:12:28PM -0600, digital vagabond via bitcoin-dev 
wrote:
> Imagine a covenant design that was
> flexible enough to create an encumbrance like this: a script specifies a
> specific key in a multisig controlled by some authority figure (or a branch
> in the script that would allow unilateral control by such an authority),
> and the conditions of the covenant would perpetually require that any spend
> from the covenant can only be sent to a script involving that key from said
> authority, preventing by consensus any removal of that central authority's
> involvement in control over that UTXO.

> I know that such a walled garden could easily be constructed now with
> multisig and restrictions on where coins can be withdrawn to from exchanges
> or whatever [...], but I think the important distinction
> between such non-consensus system designed to enforce such restrictions and
> a recursive covenant to accomplish the same is that in the case of a
> multisig/non-consensus based system, exit from that restriction is still
> possible under the consensus rules of the protocol.

I think that sort of encumberance is already possible: you send bitcoin
to an OP_RETURN address and that is registered on some other system as a
way of "minting" coins there (ie, "proof of burn") at which point rules
other than bitcoin's apply. Bitcoin consensus guarantees the value can't
be extracted back out of the OP_RETURN value.

I think spacechains effectively takes up this concept for their one-way
peg:

  https://bitcoin.stackexchange.com/questions/100537/what-is-spacechain

  
https://medium.com/@RubenSomsen/21-million-bitcoins-to-rule-all-sidechains-the-perpetual-one-way-peg-96cb2f8ac302

(I think spacechains requires a covenant construct to track the
single-tx-per-bitcoin-block that commits to the spacechain, but that's
not directly used for the BTC value that was pegged into the spacechain)

If we didn't have OP_RETURN, you could instead pay to a pubkey that's
constructed from a NUMS point or a pedersen commitment, that's (roughly)
guaranteed unspendable, at least until secp256k1 is broken via bitcoin's
consensus rules (with the obvious disadvantage that nodes then can't
remove these outputs from the utxo set).

That was also used for XCP/Counterparty's ICO in 2014, at about 823 uBTC
per XCP on average (depending on when you got in it was between 666
uBTC/XCP and 1000 uBTC/XCP apparently), falling to a current price of
about 208 uBTC per XCP. It was about 1000 uBTC/XCP until mid 2018 though.

  https://counterparty.io/news/why-proof-of-burn/
  https://github.com/CounterpartyXCP/Documentation/blob/master/Basics/FAQ-XCP.md

These seem like they might be bad things for people to actually do
(why would you want to be paid to mine a spacechain in coins that can
only fall in value relative to bitcoin?), and certainly I don't think
we should do things just to make this easier; but it seems more like a
"here's why you're hurting yourself if you do this" thing, rather than a
"we can prevent you from doing it and we will" thing.

Cheers,
aj



Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-17 Thread Anthony Towns via bitcoin-dev
On Thu, Feb 10, 2022 at 07:12:16PM -0500, Matt Corallo via bitcoin-dev wrote:
> This is where *all* the complexity comes from. If our goal is to "ensure a
> bump increases a miner's overall revenue" (thus not wasting relay for
> everyone else), then we precisely *do* need
> > Special consideration for "what should be in the next
> > block" and/or the caching of block templates seems like an imposing
> > dependency
> Whether a transaction increases a miner's revenue depends precisely on
> whether the transaction (package) being replaced is in the next block - if
> it is, you care about the absolute fee of the package and its replacement.

On Thu, Feb 10, 2022 at 11:44:38PM +0000, darosior via bitcoin-dev wrote:
> It's not that simple. As a miner, if I have less than 1vMB of transactions in
> my mempool, I don't want a 10sats/vb transaction paying 10sats replaced by a
> 100sats/vb transaction paying only 1sats.

Is it really true that miners do/should care about that?

If you did this particular example, the miner would be losing 90k sats
in fees, which would be at most 0.0144% of the block reward with the
subsidy at 6.25BTC per block, even if there were no other transactions
in the mempool. Even cumulatively, 10sats/vb over 1MB versus 100sats/vb
over 10kB is only a 1.44% loss of block revenue.
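
(Checking those numbers in Python, with the subsidy at 6.25 BTC = 625M
sats:)

    subsidy = 625_000_000
    print(100 * 90_000 / subsidy)          # 0.0144 -- % of block reward
    lost = 10 * 1_000_000 - 100 * 10_000   # 10s/vb@1MvB vs 100s/vb@10kvB
    print(100 * lost / subsidy)            # 1.44 -- % of block revenue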

I suspect the "economically rational" choice would be to happily trade
off that immediate loss against even a small chance of a simpler policy
encouraging higher adoption of bitcoin, _or_ a small chance of more
on-chain activity due to higher adoption of bitcoin protocols like
lightning and thus a lower chance of an empty mempool in future.

If the network has an "empty mempool" (say less than 2MvB-10MvB of
backlog even if you have access to every valid 1+ sat/vB tx on any node
connected to the network), then I don't think you'll generally have txs
with fee rates greater than ~20 sat/vB (ie 20x the minimum fee rate),
which means your maximum loss is about 3% of block revenue, at least
while the block subsidy remains at 6.25BTC/block.

Certainly those percentages can be expected to double every four years as
the block reward halves (assuming we don't also reduce the min relay fee
and block min tx fee), but I think for both miners and network stability,
it'd be better to have the mempool backlog increase over time, which
would both mean there's no/less need to worry about the special case of
the mempool being empty, and give a better incentive for people to pay
higher fees for quicker confirmations.

If we accept that logic (and assuming we had some additional policy
to prevent p2p relay spam due to replacement txs), we could make
the mempool accept policy for replacements just be (something like)
"[package] feerate is greater than max(descendent fee rate)", which
seems like it'd be pretty straightforward to deal with in general?
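
(A sketch of that acceptance rule in Python, over a deliberately
simplified mempool representation -- the replacement package just has
to beat the best individual fee rate among everything it would evict:)

    def accept_replacement(package_fee, package_vsize, evicted):
        # evicted: (fee, vsize) of each conflicting tx and its descendants
        return package_fee / package_vsize > max(f / s for f, s in evicted)

    accept_replacement(10_000, 100, [(100_000, 10_000)])  # 100 > 10 s/vB: True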



Thinking about it a little more: deciding whether you want a "100kvB
at 10sat/vb" tx or a conflicting "1kvB at 100sat/vb" tx in your mempool,
if you're going to take into account unrelated, lower fee rate txs that
are also in the mempool, makes block building "more" of an NP-hard
problem and makes the greedy solution we've currently got much more
suboptimal -- if you really want to do that optimally, I think you have
to have a mempool that retains conflicting txs and runs a dynamic
programming solution to pick the best set, rather than today's simple
greedy algorithms both for building the block and populating the
mempool?

For example, if you had two such replacements come through the network,
a miner could want to flip from initially accepting the first replacement,
to unaccepting it:

Initial mempool: two big txs at 100k each, many small transactions at
15s/vB and 1s/vB

 [100kvB at 20s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
   -> 0.148 BTC for 1MvB (100*20 + 850*15 + 50*1)

Replacement for the 20s/vB tx paying a higher fee rate but lower total
fee; that's worth including:

 [10kvB at 100s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
   -> 0.1499 BTC for 1MvB (10*100 + 850*15 + 100*12 + 40*1)

Later, replacement for the 12s/vB tx comes in, also paying higher fee
rate but lower total fee. Worth including, but only if you revert the
original replacement:

 [100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
   -> 0.1575 BTC for 1MvB (150*20 + 850*15)

 [10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
   -> 0.1484 BTC for 1MvB (10*100 + 50*20 + 850*15 + 90*1)
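
(Those numbers can be reproduced with a toy greedy builder in Python;
the 15s/vB and 1s/vB entries stand for pools of many small txs, so
they're treated as divisible filler:)

    def block_fees(entries, limit=1_000_000):
        # entries: (size_vB, feerate, divisible), taken greedily by feerate
        space, fees = limit, 0
        for size, rate, divisible in sorted(entries, key=lambda e: -e[1]):
            take = min(size, space) if divisible else (size if size <= space else 0)
            space -= take
            fees += take * rate
        return fees / 100_000_000   # sats -> BTC

    pools = [(850_000, 15, True), (1_000_000, 1, True)]
    block_fees([(100_000, 20, False), (100_000, 12, False)] + pools)  # 0.148
    block_fees([(10_000, 100, False), (100_000, 12, False)] + pools)  # 0.1499
    block_fees([(100_000, 20, False), (50_000, 20, False)] + pools)   # 0.1575
    block_fees([(10_000, 100, False), (50_000, 20, False)] + pools)   # 0.1484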

Algorithms/mempool policies you might have, and their results with
this example:

 * current RBF rules: reject both replacements because they don't
   increase the absolute fee, thus get the minimum block fees of
   0.148 BTC

 * reject RBF unless it increases the fee rate, and get 0.1484 BTC in
   fees

 * reject RBF if it's lower fee rate or immediately decreases the block
   reward: so, accept the 

Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-17 Thread Anthony Towns via bitcoin-dev
On Mon, Feb 07, 2022 at 09:16:10PM -0500, Russell O'Connor via bitcoin-dev 
wrote:
> > > For more complex interactions, I was imagining combining this TXHASH
> > > proposal with CAT and/or rolling SHA256 opcodes.
> Indeed, and we really want something that can be programmed at redemption
> time.

I mean, ideally we'd want something that can be flexibly programmed at
redemption time, in a way that requires very few bytes to express the
common use cases, is very efficient to execute even if used maliciously,
is hard to misuse accidently, and can be cleanly upgraded via soft fork
in the future if needed?

That feels like it's probably got a "fast, cheap, good" paradox buried
in there, but even if it doesn't, it doesn't seem like something you
can really achieve by tweaking around the edges?

> That probably involves something like how the historic MULTISIG worked by
> having list of input / output indexes be passed in along with length
> arguments.
> 
> I don't think there will be problems with quadratic hashing here because as
> more inputs are listed, the witness in turn grows larger itself.

If you cache the hash of each input/output, it would mean each byte of
the witness would be hashing at most an extra 32 bytes of data pulled
from that cache, so I think you're right. Three bytes of "script" can
already cause you to rehash an additional ~500 bytes (DUP SHA256 DROP),
so that should be within the existing computation-vs-weight relationship.

If you add the ability to hash a chosen output (as Rusty suggests, and
which would allow you to simulate SIGHASH_GROUP), you'd probably have to
increase your cache to cover each output's scriptPubKey simultaneously,
which might be annoying, but doesn't seem fatal.

> That said, your SIGHASH_GROUP proposal suggests that some sort of
> intra-input communication is really needed, and that is something I would
> need to think about.

I think the way to look at it is that it trades off spending an extra
witness byte or three per output (your way, give or take) vs only being
able to combine transactions in limited ways (sighash_group), but being
able to be more optimised than the more manual approach.

That's a fine tradeoff to make for something that's common -- you
save onchain data, make something easier to use, and can optimise the
implementation so that it handles the common case more efficiently.

(That's a bit of a "premature optimisation" thing though -- we can't
currently do SIGHASH_GROUP style things, so how can you sensibly justify
optimising it because it's common, when it's not only currently not
common, but also not possible? That seems to me a convincing reason to
make script more expressive)

> While normally I'd be hesitant about this sort of feature creep, when we
> are talking about doing soft-forks, I really think it makes sense to think
> through these sorts of issues (as we are doing here).

+1

I guess I especially appreciate your goodwill here, because this has
sure turned out to be a pretty long message as I think some of these
things through out loud :)

> > "CAT" and "CHECKSIGFROMSTACK" are both things that have been available in
> > elements for a while; has anyone managed to build anything interesting
> > with them in practice, or are they only useful for thought experiments
> > and blog posts? To me, that suggests that while they're useful for
> > theoretical discussion, they don't turn out to be a good design in
> > practice.
> Perhaps the lesson to be drawn is that languages should support multiplying
> two numbers together.

Well, then you get to the question of whether that's enough, or if
you need to be able to multiply bignums together, etc? 

I was looking at uniswap-like things on liquid, and wanted to do constant
product for multiple assets -- but you already get the problem that "x*y
< k" might overflow if the output values x and y are ~50 bits each, and
that gets worse with three assets and wanting to calculate "x*y*z < k",
etc. And really you'd rather calculate "a*log(x) + b*log(y) + c*log(z)
< k" instead, which then means implementing fixed point log in script...

> Having 2/3rd of the language you need to write interesting programs doesn't
> mean that you get 2/3rd of the interesting programs written.

I guess to abuse that analogy: I think you're saying something like
we've currently got 67% of an ideal programming language, and CTV
would give us 68%, but that would only take us from 10% to 11% of the
interesting programs. I agree txhash might bump that up to, say, 69%
(nice) but I'm not super convinced that even moves us from 11% to 12%
of interesting programs, let alone a qualitative leap to 50% or 70%
of interesting programs.

It's *possible* that the ideal combination of opcodes will turn out to
be CAT, TXHASH, CHECKSIGFROMSTACK, MUL64LE, etc, but it feels like it'd
be better working something out that fits together well, rather than
adding things piecemeal and hoping we don't spend all that effort to
end up in a local optimum 

Re: [bitcoin-dev] Improving RBF Policy

2022-02-07 Thread Anthony Towns via bitcoin-dev
On Mon, Feb 07, 2022 at 11:16:26AM +, Gloria Zhao wrote:
> @aj:
> > I wonder sometimes if it could be sufficient to just have a relay rate
> > limit and prioritise by ancestor feerate though. Maybe something like:
> > - instead of adding txs to each peers setInventoryTxToSend immediately,
> >   set a mempool flag "relayed=false"
> > - on a time delay, add the top N (by fee rate) "relayed=false" txs to
> >   each peer's setInventoryTxToSend and mark them as "relayed=true";
> >   calculate how much kB those txs were, and do this again after
> >   SIZE/RATELIMIT seconds

> > - don't include "relayed=false" txs when building blocks?

The "?" was me not being sure that point is a good suggestion...

Miners might reasonably decide to have no rate limit, and always relay,
and never exclude txs -- but the question then becomes whether they
hear about the tx at all, so rate limiting behaviour could still be a
potential problem for whoever made the tx.

> Wow cool! I think outbound tx relay size-based rate-limiting and
> prioritizing tx relay by feerate are great ideas for preventing spammers
> from wasting bandwidth network-wide. I agree, this would slow the low
> feerate spam down, preventing a huge network-wide bandwidth spike. And it
> would allow high feerate transactions to propagate as they should,
> regardless of how busy traffic is. Combined with inbound tx request
> rate-limiting, might this be sufficient to prevent DoS regardless of the
> fee-based replacement policies?

I think you only want to do outbound rate limits, ie, how often you send
INV, GETDATA and TX messages? Once you receive any of those, I think
you have to immediately process / ignore it, you can't really sensibly
defer it (beyond the existing queues we have that just build up while
we're busy processing other things first)?

> One point that I'm not 100% clear on: is it ok to prioritize the
> transactions by ancestor feerate in this scheme? As I described in the
> original post, this can be quite different from the actual feerate we would
> consider a transaction in a block for. The transaction could have a high
> feerate sibling bumping its ancestor.
> For example, A (1sat/vB) has 2 children: B (49sat/vB) and C (5sat/vB). If
> we just received C, it would be incorrect to give it a priority equal to
> its ancestor feerate (3sat/vB) because if we constructed a block template
> now, B would bump A, and C's new ancestor feerate is 5sat/vB.
> Then, if we imagine that top N is >5sat/vB, we're not relaying C. If we
> also exclude C when building blocks, we're missing out on good fees.

I think you're right that this would be ugly. It's something of a
special case:

 a) you really care about C getting into the next block; but
 b) you're trusting B not being replaced by a higher fee tx that
doesn't have A as a parent; and
 c) there's a lot of txs bidding the floor of the next block up to a
level in-between the ancestor fee rate of 3sat/vB and the tx fee
rate of 5sat/vB

Without (a), maybe you don't care about it getting to a miner quickly.
If your trust in (b) was misplaced, then your tx's effective fee rate
will drop and (because of (c)), you'll lose anyway. And if the spam ends
up outside of (c)'s range, either the rate limiting won't take effect
(spam's too cheap) and you'll be fine, or you'll miss out on the block
anyway (spam's paying more than your tx rate) and you never had any hope
of making it in.
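
(Putting Gloria's A/B/C example into numbers, assuming each tx is 100vB,
which matches the 3sat/vB ancestor feerate quoted above:)

  fee  = {'A': 100, 'B': 4900, 'C': 500}    # sats
  size = {'A': 100, 'B': 100, 'C': 100}     # vB

  # C's ancestor feerate when it arrives:
  print((fee['A'] + fee['C']) / (size['A'] + size['C']))   # 3.0 sat/vB

  # but in a block template B's 49sat/vB already pays for A, so C
  # effectively gets in at its own feerate:
  print(fee['C'] / size['C'])                              # 5.0 sat/vB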

Note that we already rate limit via INVENTORY_BROADCAST_MAX /
*_INVENTORY_BROADCAST_INTERVAL; which gets to something like 10,500 txs
per 10 minutes for outbound connections. This would be a weight based
rate limit instead-of/in-addition-to that, I guess.

As far as a non-ugly approach goes, I think you'd have to be smarter about
tracking the "effective fee rate" than the ancestor fee rate manages;
maybe that's something that could fall out of Murch and Clara's candidate
set blockbuilding ideas [0] ?

Perhaps that same work would also make it possible to come up with
a better answer to "do I care that this replacement would invalidate
these descendents?"

[0] https://github.com/Xekyo/blockbuilding

> > - keep high-feerate evicted txs around for a while in case they get
> >   mined by someone else to improve compact block relay, a la the
> >   orphan pool?
> Replaced transactions are already added to vExtraTxnForCompact :D

I guess I was thinking that it's just a 100 tx LRU cache, which might
not be good enough?

Maybe it would be more on point to have a rate limit apply only to
replacement transactions?

> For wallets, AJ's "All you need is for there to be *a* path that follows
> the new relay rules and gets from your node/wallet to perhaps 10% of
> hashpower" makes sense to me (which would be the former).

Perhaps a corollary of that is that it's *better* to have the mempool
acceptance rule only consider economic incentives, and have the spam
prevention only be about "shall I tell my peers about this?"

If you don't have 

Re: [bitcoin-dev] Unlimited covenants, was Re: CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2022-02-02 Thread Anthony Towns via bitcoin-dev
On Mon, Jul 05, 2021 at 09:46:21AM -0400, Matt Corallo via bitcoin-dev wrote:
> More importantly, AJ's point here neuters anti-covanent arguments rather
> strongly.
>
> On 7/5/21 01:04, Anthony Towns via bitcoin-dev wrote:
> > In some sense multisig *alone* enables recursive covenants: a government
> > that wants to enforce KYC can require that funds be deposited into
> a multisig of "2 <gov_key> <recipient_key> 2 CHECKMULTISIG", and that
> "recipient" has gone through KYC. Once deposited to such an address,
> the gov can refuse to sign with gov_key unless the funds are being spent
> > to a new address that follows the same rules.

I couldn't remember where I'd heard this, but it looks like I came
across it via Andrew Poelstra's "CAT and Schnorr Tricks II" post [0]
(Feb 2021), in which he credits Ethan Heilman for originally coming up
with the analogy (in 2019, cf [1]).

[0] https://medium.com/blockstream/cat-and-schnorr-tricks-ii-2f6ede3d7bb5
[1] https://twitter.com/Ethan_Heilman/status/1194624166093369345

Cheers,
aj



Re: [bitcoin-dev] Improving RBF Policy

2022-02-02 Thread Anthony Towns via bitcoin-dev
On Tue, Feb 01, 2022 at 10:30:12AM +0100, Bastien TEINTURIER via bitcoin-dev 
wrote:
> But do you agree that descendants only matter for DoS resistance then,
> not for miner incentives?

There's an edge case where you're replacing tx A with tx X, and X's fee
rate is higher than A's, but you'd be obsoleting descendent txs (B, C,
D...) and thus replacing them with unrelated txs (L, M, N...), and the
total feerate/fees of A+B+C+D... is nevertheless higher than X+L+M+N...

But I think that's probably unusual (transactions D and L are adjacent
in the mempool, that's why L is chosen for the block; but somehow
there's a big drop off in value somewhere between B/C/D and L/M/N),
and at least today, I don't think miners consider it essential to eke
out every possible sat in fee income.

(If, as per your example, you're actually replacing {A,B,C,D} with
{X,Y,Z,W} where X pays higher fees than A and the package in total pays
either the same or higher fees, that's certainly incentive compatible.
The tricky question is what happens when X arrives on its own and it
might be that no one ever sends a replacement for B,C,D)

> The two policies I proposed address miner incentives. I think they're
> insufficient to address DoS issues. But adding a 3rd policy to address
> DoS issues may be a good solution?

>>> 1. The transaction's ancestor absolute fees must be X% higher than the
>>> previous transaction's ancestor fees
>>> 2. The transaction's ancestor feerate must be Y% higher than the
>>> previous transaction's ancestor feerate

Absolute fees only matter if your backlog's feerate drops off. If you've
got 100MB of txs offering 5sat/vb, then exchanging 50kB at 5sat/vb for
1kB at 6sat/vb is still a win: your block gains 1000 sats in fees even
though your mempool loses 244,000 sats in fees.
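
(Spelling that arithmetic out, with the freed block space refilling from
the 5sat/vb backlog:)

  evicted  = 50_000 * 5       # 250,000 sats leave the mempool
  accepted =  1_000 * 6       #   6,000 sats replace them
  backfill = 49_000 * 5       # freed block space refills at 5sat/vb

  print(accepted + backfill - evicted)   # block: +1,000 sats
  print(evicted - accepted)              # mempool: -244,000 sats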

But if your backlog's feerate does drop off, *and* that matters, then
I don't think you can ignore the impact of the descendent transactions
that you might not get a replacement for.

I think "Y% higher" rather than just "higher" is only useful for
rate-limiting, not incentive compatibility. (Though maybe it helps
stabilise a greedy algorithm in some cases?)

Cheers,
aj



Re: [bitcoin-dev] Why CTV, why now?

2022-02-01 Thread Anthony Towns via bitcoin-dev
On Wed, Jan 05, 2022 at 02:44:54PM -0800, Jeremy via bitcoin-dev wrote:
> CTV was an output of my personal "research program" on how to make simple
> covenant types without undue validation burdens. It is designed to be the
> simplest and least risky covenant specification you can do that still
> delivers sufficient flexibility and power to build many useful applications.

I believe the new elements opcodes [0] allow simulating CTV on the liquid
blockchain (or liquid-testnet [1] if you'd rather use fake money but not
use Jeremy's CTV signet). It's very much not as efficient as having a
dedicated opcode, of course, but I think the following script template
would work:

INSPECTVERSION SHA256INITIALIZE
INSPECTLOCKTIME SHA256UPDATE
INSPECTNUMINPUTS SCRIPTNUMTOLE64 SHA256UPDATE
INSPECTNUMOUTPUTS SCRIPTNUMTOLE64 SHA256UPDATE

PUSHCURRENTINPUTINDEX SCRIPTNUMTOLE64 SHA256UPDATE
PUSHCURRENTINPUTINDEX INSPECTINPUTSEQUENCE SCRIPTNUMTOLE64 SHA256UPDATE

{ for <n> in 0..<num outputs - 1>:
 <n> INSPECTOUTPUTASSET CAT SHA256UPDATE
 <n> INSPECTOUTPUTVALUE DROP SIZE SCRIPTNUMTOLE64 SWAP CAT SHA256UPDATE
 <n> INSPECTOUTPUTNONCE SIZE SCRIPTNUMTOLE64 SWAP CAT SHA256UPDATE
 <n> INSPECTOUTPUTSCRIPTPUBKEY SWAP SIZE SCRIPTNUMTOLE64 SWAP CAT CAT SHA256UPDATE
}

SHA256FINALIZE <hash> EQUAL

Provided NUMINPUTS is one, this also means the txid of the spending tx is
fixed, I believe (since these are taproot-only opcodes, scriptSig
malleability isn't possible); if NUMINPUTS is greater than one, you'd
need to limit what other inputs could be used somehow which would be
application specific, I think.

I think that might be compatible with confidential assets/values, but
I'm not really sure.

I think it should be possible to use a similar approach with
CHECKSIGFROMSTACK instead of " EQUAL" to construct APO-style
signatures on elements/liquid. Though you'd probably want to have the
output inspection blocks wrapped with "INSPECTNUMOUTPUTS <n> GREATERTHAN
IF .. ENDIF". (In that case, beginning with "PUSH[FakeAPOSig] SHA256
DUP SHA256INITIALIZE SHA256UPDATE" might also be sensible, so you're
not signing something that might be misused in a different context later)


Anyway, since liquid isn't congested, and mostly doesn't have lightning
channels built on top of it, probably the vaulting application is the
only interesting one to build on top of liquid today? There's apparently
about $120M worth of BTC and $36M worth of USDT on liquid, which seems
like it could justify some vault-related work. And real experience with
CTV-like constructs seems like it would be very informative.

Cheers,
aj

[0] 
https://github.com/ElementsProject/elements/blob/master/doc/tapscript_opcodes.md
[1] https://liquidtestnet.com/



Re: [bitcoin-dev] Improving RBF Policy

2022-01-31 Thread Anthony Towns via bitcoin-dev
On Mon, Jan 31, 2022 at 04:57:52PM +0100, Bastien TEINTURIER via bitcoin-dev 
wrote:
> I'd like to propose a different way of looking at descendants that makes
> it easier to design the new rules. The way I understand it, limiting the
> impact on descendant transactions is only important for DoS protection,
> not for incentive compatibility. I would argue that after evictions,
> descendant transactions will be submitted again (because they represent
> transactions that people actually want to make),

I think that's backwards: we're trying to discourage people from wasting
the network's bandwidth, which they would do by publishing transactions
that will never get confirmed -- if they were to eventually get confirmed
it wouldn't be a waste of bandwidth, after all. But if the original
descendent txs were that sort of spam, then they may well not be
submitted again if the ancestor tx reaches a fee rate that's actually
likely to confirm.

I wonder sometimes if it could be sufficient to just have a relay rate
limit and prioritise by ancestor feerate though. Maybe something like:

 - instead of adding txs to each peers setInventoryTxToSend immediately,
   set a mempool flag "relayed=false"

 - on a time delay, add the top N (by fee rate) "relayed=false" txs to
   each peer's setInventoryTxToSend and mark them as "relayed=true";
   calculate how much kB those txs were, and do this again after
   SIZE/RATELIMIT seconds

 - don't include "relayed=false" txs when building blocks?

 - keep high-feerate evicted txs around for a while in case they get
   mined by someone else to improve compact block relay, a la the
   orphan pool?

That way if the network is busy, any attempt to do low fee rate tx spam
will just cause those txs to sit as relayed=false until they're replaced
or the network becomes less busy and they're worth relaying. And your
actual mempool accept policy can just be "is this tx a higher fee rate
than the txs it replaces"...
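
(A rough python sketch of that loop; tx.relayed, tx.ancestor_feerate,
peer.inv_to_send and RATE_LIMIT are made-up names for illustration,
not Bitcoin Core's actual fields:)

  RATE_LIMIT = 15_000     # bytes of tx announcements per second, say

  def relay_tick(mempool, peers, n=100):
      pending = sorted((tx for tx in mempool if not tx.relayed),
                       key=lambda tx: -tx.ancestor_feerate)
      batch = pending[:n]                  # top N by feerate
      for tx in batch:
          tx.relayed = True
      for peer in peers:
          peer.inv_to_send.extend(batch)   # ~setInventoryTxToSend
      size = sum(tx.size for tx in batch)
      return size / RATE_LIMIT             # SIZE/RATELIMIT seconds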

> Even if bitcoin core releases a new version with updated RBF rules, as a
> wallet you'll need to keep using the old rules for a long time if you
> want to be safe.

All you need is for there to be *a* path that follows the new relay rules
and gets from your node/wallet to perhaps 10% of hashpower, which seems
like something wallet providers could construct relatively quickly?

Cheers,
aj



Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-01-31 Thread Anthony Towns via bitcoin-dev
On Fri, Jan 28, 2022 at 08:56:25AM -0500, Russell O'Connor via bitcoin-dev 
wrote:
> > https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019243.html
> For more complex interactions, I was imagining combining this TXHASH
> proposal with CAT and/or rolling SHA256 opcodes.  If TXHASH ended up
> supporting relative or absolute input/output indexes then users could
> assemble the hashes of the particular inputs and outputs they care about
> into a single signed message.

That's certainly possible, but it sure seems overly complicated and
error prone...

> > While I see the appeal of this from a language design perspective;
> > I'm not sure it's really the goal we want. When I look at bitcoin's
> > existing script, I see a lot of basic opcodes to do simple arithmetic and
> > manipulate the stack in various ways, but the opcodes that are actually
> > useful are more "do everything at once" things like check(multi)sig or
> > sha256. It seems like what's most useful on the blockchain is a higher
> > level language, rather than more of blockchain assembly language made
> > up of small generic pieces. I guess "program their own use cases from
> > components" seems to be coming pretty close to "write your own crypto
> > algorithms" here...
> Which operations in Script are actually composable today?

> There is one aspect of Bitcoin Script that is composable, which is
> (monotone) boolean combinations of the few primitive transaction conditions
> that do exist.  The miniscript language captures nearly the entirety of
> what is composable in Bitcoin Script today: which amounts to conjunctions,
> disjunctions (and thresholds) of signatures, locktimes, and revealing hash
> preimages.

Yeah; I think miniscript captures everything bitcoin script is actually
useful for today, and if we were designing bitcoin from scratch and
had known that was the feature set we were going to end up with, we'd
have come up with something simpler and a fair bit more high level than
bitcoin script for the interpreter.

> I don't think there is much in the way of lessons to be drawn from how we
> see Bitcoin Script used today with regards to programs built out of
> reusable components.

I guess I think one conclusion we should draw is some modesty in how
good we are at creating general reusable components. That is, bitcoin
script looks a lot like a relatively general expression language,
that should allow you to write interesting things; but in practice a
lot of it was buggy (OP_VER hardforks and resource exhaustion issues),
or not powerful enough to actually be interesting, or too complicated
to actually get enough use out of [0].

> TXHASH + CSFSV won't be enough by itself to allow for very interesting
> programs in Bitcoin Script yet, we still need CAT and friends for that,

"CAT" and "CHECKSIGFROMSTACK" are both things that have been available in
elements for a while; has anyone managed to build anything interesting
with them in practice, or are they only useful for thought experiments
and blog posts? To me, that suggests that while they're useful for
theoretical discussion, they don't turn out to be a good design in
practice.

> but
> CSFSV is at least a step in that direction.  CSFSV can take arbitrary
> messages and these messages can be fixed strings, or they can be hashes of
> strings (that need to be revealed), or they can be hashes returned from
> TXHASH, or they can be locktime values, or they can be values that are
> added or subtracted from locktime values, or they can be values used for
> thresholds, or they can be other pubkeys for delegation purposes, or they
> can be other signatures ... for who knows what purpose.

I mean, if you can't even think of a couple of uses, that doesn't seem
very interesting to pursue in the near term? CTV has something like half
a dozen fairly near-term use cases, but obviously those can all be done
just with TXHASH without a need for CSFS, and likewise all the ANYPREVOUT
things can obviously be done via CHECKSIG without either TXHASH or CSFS...

To me, the point of having CSFS (as opposed to CHECKSIG) seems to be
verifying that an oracle asserted something; but for really simple boolean
decisions, doing that via a DLC seems better in general since that moves
more of the work off-chain; and for the case where the signature is being
used to authenticate input into the script rather than just gating a path,
that feels a bit like a weaker version of graftroot?

I guess I'd still be interested in the answer to:

> > If we had CTV, POP_SIGDATA, and SIGHASH_NO_TX_DATA_AT_ALL but no OP_CAT,
> > are there any practical use cases that wouldn't be covered that having
> > TXHASH/CAT/CHECKSIGFROMSTACK instead would allow? Or where those would
> > be significantly more convenient/efficient?
> > 
> > (Assume "y x POP_SIGDATA POP_SIGDATA p CHECKSIGVERIFY q CHECKSIG"
> > commits to a vector [x,y] via p but does not commit to either via q so
> > that there's some "CAT"-like behaviour available)


Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-01-30 Thread Anthony Towns via bitcoin-dev
On Thu, Jan 27, 2022 at 07:18:54PM -0500, James O'Beirne via bitcoin-dev wrote:
> > I don't think implementing a CTV opcode that we expect to largely be
> > obsoleted by a TXHASH at a later date is yielding good value from a soft
> > fork process.
> Caching for something
> like TXHASH looks to me like a whole different ballgame relative to CTV,
> which has a single kind of hash.

I don't think caching is a particular problem even for the plethora of
flags Russell described: you cache each value upon use, and reuse that
cached item if it's needed for other signatures within the tx; sharing
with BIP 143, 341 or 342 signatures as appropriate. Once everything's
cached, each signature then only requires hashing about 32*17+4 = ~548
bytes, and you're only hashing each part of the transaction once in
order to satisfy every possible flag.
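
(Sketch of that caching in python; serialize_field and the field names
are assumed placeholders, not Core's actual structures:)

  import hashlib

  class TxHashCache:
      def __init__(self, tx):
          self.tx, self.cache = tx, {}
      def get(self, field):    # e.g. "prevouts", "amounts", "outputs"
          if field not in self.cache:
              data = self.tx.serialize_field(field)   # assumed helper
              self.cache[field] = hashlib.sha256(data).digest()
          return self.cache[field]
      def txhash(self, fields):
          # any flag combo is then ~17 cached 32-byte digests plus a
          # few bytes of fixed data to hash
          return hashlib.sha256(b''.join(map(self.get, fields))).digest()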

> Even if we were to adopt something like TXHASH, how long is it going to
> take to develop, test, and release?

I think the work to release something like TXHASH is all in deciding:

 - if TXHASH or CTV or something else is the better "UX"
 - what is a good tx-to-message algorithm and how it should be
   parametrised
 - what's an appropriate upgrade path for the TXHASH/CTV/??? mechanism

BIP 119 provides one answer to each of those, but you still have to do
the work to decide if it's a *good* answer to each of them.

> My guess is "a while" - 

If we want to get a good answer to those questions, it might be true
that it takes a while; but even if we want to rush ahead with more of
a "well, we're pretty sure it's not going to be a disaster" attitude,
we can do that with TXHASH (almost) as easily as with CTV.

> The utility of vaulting seems
> underappreciated among consensus devs and it's something I'd like to write
> about soon in a separate post.

I think most of the opposition is just that support for CTV seems to be
taking the form "something must be done; this is something, therefore
it must be done"...

I'd be more comfortable if the support looked more like "here are the
alternatives to CTV, and here's the advantages and drawbacks for each,
here's how they interact with other ideas, and here's why, on balance,
we think this approach is the best one". But mostly the
alternatives are dismissed with "this will take too long" or "this enables
recursive covenants which someone (we don't know who) might oppose".

Cheers,
aj



Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-01-28 Thread Anthony Towns via bitcoin-dev
On Fri, Jan 28, 2022 at 01:14:07PM +, Michael Folkson via bitcoin-dev wrote:
> There is not even a custom signet with CTV (as far as I know) 

https://twitter.com/jeremyrubin/status/1339699281192656897

signetchallenge=512102946e8ba8eca597194e7ed90377d9bbebc5d17a9609ab3e35e706612ee882759351ae
addnode=50.18.75.225

But I think there's only been a single coinbase consolidation tx, and no
actual CTV transactions?

Cheers,
aj



Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-01-27 Thread Anthony Towns via bitcoin-dev
On Wed, Jan 26, 2022 at 12:20:10PM -0500, Russell O'Connor via bitcoin-dev 
wrote:
> Recapping the relationship between CTV and ANYPREVOUT::

> While this is a pretty neat feature,
> something that ANYPREVOUT cannot mimic, the main application for it is
> listed as using congestion control to fund lightning channels, fixing their
> TXIDs in advance of them being placed on chain.  However, if ANYPREVOUT
> were used to mimic CTV, then likely it would be eltoo channels that would
> be funded, and it isn't necessary to know the TXIDs of eltoo channels in
> advance in order to use them.

Even if they weren't eltoo channels, they could be updated lightning penalty
channels signed with APO signatures so that the txid wasn't crucial. So
I don't think this would require all the work to update to eltoo just to
have this feature, if APO were available without CTV per se.

> An Alternative Proposal::
>  ...

> For similar reasons, TXHASH is not amenable to extending the set of txflags
> at a later date.

> I believe the difficulties with upgrading TXHASH can be mitigated by
> designing a robust set of TXHASH flags from the start.  For example having
> bits to control whether [...]

I don't think that's really feasible -- eg, what you propose doesn't cover
SIGHASH_GROUP: 

 https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019243.html

> That all said, even if other txhash flag modes are needed in the future,
> adding TXHASH2 always remains an option.

I think baking this in from day 0 might be better: make TXHASH be
a multibyte opcode, so that when you decode "0xBB" on the stack,
you also decode a serialize.h:VarInt as the version number. Version 0
(0xBB00) gives hashes corresponding to bip342, version 1 (0xBB01) gives
hashes corresponding to bip118 (anyprevout), anything else remains as
OP_SUCCESS behaviour, and you retain a pretty compact encoding even if
we somehow eventually end up needing hundreds or thousands of different
TXHASH versions.
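
(A python sketch of decoding that; read_varint follows serialize.h's
VarInt format, and the string results just stand in for the three
behaviours:)

  def read_varint(buf, i):
      # serialize.h VarInt: base-128, high bit set means "more bytes"
      n = 0
      while True:
          b = buf[i]; i += 1
          n = (n << 7) | (b & 0x7F)
          if b & 0x80:
              n += 1
          else:
              return n, i

  def decode_txhash(script, i):
      assert script[i] == 0xBB
      version, i = read_varint(script, i + 1)
      if version == 0:
          return "bip342-style hashes", i
      if version == 1:
          return "bip118/anyprevout-style hashes", i
      return "OP_SUCCESS behaviour", i

  print(decode_txhash(bytes([0xBB, 0x01]), 0))  # anyprevout hashes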

Because the version here is part of the opcode rather than pulled from
the stack, I think this preserves any benefits related to composition
or analysis, but is otherwise still pretty general. I'm imagining that
the idea would be to be consistent between CHECKSIG key versions and
TXHASH versions.

So I think just designing it this way means TXHASH *would* be "amenable
to extending the set of txflags at a later date."

> '<anyprevout pubkey> CHECKSIGVERIFY' can be simulated by '<txhash flags>
> TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY'.

I don't think that's quite right. BIP 118 anyprevout is done by taking
the pubkey "P", marking it as "APO-capable" (by prefixing it with 0x01),
and then getting a sighash and sig from the witness. Doing the same
with TXHASH/CSFSV would just be replacing "<pubkey> CHECKSIGVERIFY" with
"TXHASH <pubkey> CSFSV" with the witness providing both the signature and
txhash flag, just as separate elements rather than concatenated. (The
"APO-capable" part is implicit in the "TXHASH" opcode)

> In addition to the CTV and ANYPREVOUT applications, with
> CHECKSIGFROMSTACKVERIFY we can verify signatures on arbitrary messages
> signed by oracles for oracle applications.  This is where we see the
> benefit of decomposing operations into primitive pieces.  By giving users
> the ability to program their own use cases from components, we get more
> applications out of fewer op codes!

While I see the appeal of this from a language design perspective;
I'm not sure it's really the goal we want. When I look at bitcoin's
existing script, I see a lot of basic opcodes to do simple arithmetic and
manipulate the stack in various ways, but the opcodes that are actually
useful are more "do everything at once" things like check(multi)sig or
sha256. It seems like what's most useful on the blockchain is a higher
level language, rather than more of blockchain assembly language made
up of small generic pieces. I guess "program their own use cases from
components" seems to be coming pretty close to "write your own crypto
algorithms" here...

I'm not really sure what the dividing line there is, or even which side
TXHASH would be on. I'm not even totally convinced that the "high level
language" should be describing what consensus provides rather than some
layer on top that people compile (a la miniscript). Just trying to put
into words why I'm not 100% comfortable with the principle per se.


One thing I've thought about is an opcode like "POP_SIGDATA" which would
populate a new "register" called "sigdata", which would then be added
to the message being signed. That's a generalisation of tapscript's
behaviour for "codeseparator" essentially. That is,

   x POP_SIGDATA p CHECKSIG

would be roughly the same as

   TXHASH x CAT SHA256SUM p CHECKSIGFROMSTACK
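
(A toy model of that "sigdata register", with sha256 standing in for
the real sighash construction:)

  import hashlib
  sha = lambda b: hashlib.sha256(b).digest()

  class Interp:
      def __init__(self):
          self.stack, self.sigdata = [], b''
      def pop_sigdata(self):
          self.sigdata = self.stack.pop()
      def checksig_msg(self, tx_hash):
          # the signed message commits to sigdata as well as the tx,
          # like tapscript signatures commit to the last codeseparator
          return sha(tx_hash + self.sigdata)

  i = Interp()
  i.stack.append(b'x')
  i.pop_sigdata()
  # same digest as "TXHASH x CAT SHA256SUM":
  print(i.checksig_msg(sha(b'tx')) == sha(sha(b'tx') + b'x'))  # True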

I think "POP_SIGDATA" makes for an interesting counterpart to
"PUSH_ANNEXITEM" -- we implicitly commit to all the annex items in
signatures, so PUSH_ANNEXITEM would give a way to use signed data that's
given verbatim in the witness in further calculations; but POP_SIGDATA

Re: [bitcoin-dev] CTV BIP review

2022-01-20 Thread Anthony Towns via bitcoin-dev
On Tue, Jan 18, 2022 at 03:54:21PM -0800, Jeremy via bitcoin-dev wrote:
> Some of it's kind of annoying because
> the legal definition of covenant is [...]
> so I do think things like CLTV/CSV are covenants

I think that in the context of Bitcoin, the most useful definition of
covenant is that it's when the scriptPubKey of a utxo restricts the
scriptPubKey in the output(s) of a tx spending that utxo.

CTV, TLUV, etc do that; CSV, CLTV don't. ("checksig" per se doesn't
either, though of course the signature that checksig uses does -- if that
signature is in the scriptPubKey rather than the scriptSig or witness,
that potentially becomes a covenant too)

Cheers,
aj



Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-10-14 Thread Anthony Towns via bitcoin-dev
On Wed, Sep 15, 2021 at 08:24:43AM -0700, Matt Corallo via bitcoin-dev wrote:
> > On Sep 13, 2021, at 21:56, Anthony Towns  wrote:
> > I'm not sure that's really the question you want answered?
> Of course it is? I’d like to understand the initial thinking and design 
> analysis that went into this decision. That seems like an important question 
> to ask when seeking changes in an existing system :).

Well, "are there any drawbacks to doing X instead, because that would make
it easier for me to do Y" just seems like a more interesting question?
Because:

> > Mostly
> > it's just "this is how mainnet works" plus "these are the smallest
> > changes to have blocks be chosen by a signature, rather than entirely
> > by PoW competition".

doesn't seem like that interesting an answer...

To be a bit more specific, it's not at all clear to me what you would
be happy with? (I mean, beyond "something magic that works exactly how
I want it, when I want it, even if I don't know what that is yet or
change my mind later" which is obviously the desired behaviour for all
software everywhere) 

You say you're happier with both mainnet and testnet3 than signet,
but mainnet isn't any faster than signet, while (if you've got an ASIC)
testnet3 will give you a block per second, especially if you don't mind
your blocks getting reorged out. There's a lot of ground between those
two extremes.

> > For integration testing across many services, I think a ten-minute-average
> > between blocks still makes sense -- protocols relying on CSV/CLTV to
> > ensure there's a delay they can use to recover funds, if they specify
> > that in blocks (as lightning's to_self_delay does), then significant
> > surges of blocks will cause uninteresting bugs. 
> Hmm, why would blocks coming quicker lead to a bug? I certainly hope no one 
> has a bug if their block time is faster than per ten minutes. I presume here, 
> you mean something like “if the node can’t keep up with the block rate”, but 
> I certainly hope the benchmark for that isn't 10 minutes, or really even one.

The lightning to_self_delay is specified in blocks, but is meant to allow
you to be offline for some real time period; if you specify 1000 blocks
and are sure you'll be online every two days, that's fine on mainnet
and signet as it stands, but broken on testnet.

> > It would be easy enough to change things to target an average of 2 or
> > 5 minutes, I suppose, but then you'd probably need to propogate that
> > logic back into your apps that would otherwise think 144 blocks is around
> > about a day.
> Why? One useful thing for testing is compressing real time.

Sure, but if you're compressing _real_ time you need to manipulate the
nTime not just the number of blocks -- and that might be relevant for
nLocktime or nSequence checks by mtp rather than height. But that's
not something signet's appropriate for: you should be using regtest for
that scenario.

> > We could switch back to doing blocks exactly every 10 minutes, rather
> > than a poisson-ish distribution in the range of 1min to 60min, but that
> > doesn't seem like that huge a win, and makes it hard to test that things
> > behave properly when blocks arrive in bursts.
> Hmm, I suppose? If you want to test that the upper bound doesn’t
> need to be 100 minutes, though, it could be 10.

Mathematically, you can't have an average of 10 minutes and a max of 10
minutes without the minimum also being 10 minutes...

> > Best of luck to you then? Nobody's trying to sell you on a subscription
> > plan to using signet.
> lol, yes, I’m aware of that, nor did I mean to imply that anything has to be 
> targeted at a specific person’s requirements. Rather, my point here is that 
> I’m really confused as to who  the target user *is*, because we should be 
> building products with target users in mind, even if those targets are often 
> “me” for open source projects.

I don't really think there's a definitive answer to that yet?

My guess is "integration testing" is close to right; whether it be
different services validating they interoperate, or users seeing if a
service works the way they expect in a nearly-live environment.

For private signets, the advantage over regtest is you don't risk some
random participant causing major reorgs, and can reasonably use it over
the internet without having to worry too much about securing things.

For the default public signet, the advantage over regtest is probably that
you've got additional infrastructure already setup (eg explorer.bc-2.jp
and mempool.space/signet, perhaps eventually a decent lightning test
network? there's signet-lightning.wakiyamap.dev)

The advantage of a private signet vs the default public one is probably
only that you can control the consensus rules to introduce and test a
new soft fork if you want. The advantage of the default public signet
over your own private one is probably mostly that it already has
miners/explorers/faucets setup and you don't have to do that yourself.

I 

Re: [bitcoin-dev] Taproot testnet wallet

2021-10-14 Thread Anthony Towns via bitcoin-dev
On Sat, Oct 09, 2021 at 04:49:42PM +, Pieter Wuille via bitcoin-dev wrote:
> You can construct a taproot-capable wallet in Bitcoin Core as follows:
> * Have or create a descriptor wallet (createwallet RPC, with 
> descriptors=true).
> * Import a taproot descriptor (of the form "tr(KEY)"), as active descriptor
> (with active=true), where KEY can be a tprv.../* or any other supported key
> expression.
> * Get a new address with addresstype=bech32m

Running master (which has PR#21500 merged), the above can be
done with:

1. create a descriptor wallet

  bitcoin-cli -signet -named createwallet wallet_name=descwallet descriptors=true load_on_startup=true

2. get the associated bip32 tprv private key

  TPRV=$(bitcoin-cli -rpcwallet=descwallet -signet listdescriptors true | jq '.descriptors | .[].desc' | sed 's/^.*(//;s/[)/].*//' | uniq | head -n1)

(This step requires PR#21500 to extract the wallet's tprv; you'll need to
be running an updated version of bitcoin-cli here as well as bitcoind. You
could also generate the tprv some other way.)

3. construct the taproot descriptor per BIP 86

  DESC="tr($TPRV/86'/1'/0'/0/*)"
  CHK="$(bitcoin-cli -rpcwallet=descwallet -signet getdescriptorinfo "$DESC" | 
jq -r .checksum)"

4. import the descriptor

  bitcoin-cli -rpcwallet=descwallet -signet importdescriptors "[{\"desc\": \"$DESC#$CHK\", \"active\": true, \"timestamp\": \"now\", \"range\": [0,1000], \"next_index\": 1}]"

5. get an address

  bitcoin-cli -rpcwallet=descwallet -signet getnewaddress '' bech32m

You can then use the signet faucet to send a few million ssats to that
address directly.

Same stuff works with testnet, though I'm not sure if any testnet faucets
will accept bech32m addresses directly.

This is all a bit deliberately cumbersome prior to taproot activating on
mainnet; once that happens and PR#22364 is merged, you'll only need to
do steps (1) and (5).

Cheers,
aj



Re: [bitcoin-dev] On the regularity of soft forks

2021-10-14 Thread Anthony Towns via bitcoin-dev
On Mon, Oct 11, 2021 at 12:12:58PM -0700, Jeremy via bitcoin-dev wrote:
> > ... in this post I will argue against frequent soft forks with a single or
> minimal
> > set of features and instead argue for infrequent soft forks with batches
> > of features.
> I think this type of development has been discussed in the past and has been
> rejected.

> AJ: - improvements: changes might not make everyone better off, but we
>    don't want changes to screw anyone over either -- pareto
>    improvements in economics, "first, do no harm", etc. (if we get this
>    right, there's no need to make compromises and bundle multiple
>    flawed proposals so that everyone's an equal mix of happy and
>    miserable)

I don't think your conclusion above matches my opinion, for what it's
worth.

If you've got two features, A and B, where the game theory is:

 If A happens, I'm +100, You're -50
 If B happens, I'm -50, You're +100

then even though A+B is +50, +50, I do think the answer should
generally be "think harder and come up with better proposals" rather than
"implement A+B as a bundle that makes us both +50".

_But_ if the two features are more like:

  If C happens, I'm +100, You're +/- 0
  If D happens, I'm +/- 0, You're +100

then I don't have a problem with bundling them together as a single
simultaneous activation of both C and D.

Also, you can have situations where things are better together,
that is:

  If E happens, we're both at +100
  If F happens, we're both at +50
  If E+F both happen, we're both at +9000

In general, I think combining proposals when the combination is better
than the individual proposals were is obviously good; and combining
related proposals into a single activation can be good if it is easier
to think about the ideas as a set. 

It's only when you'd be rejecting the proposal on its own merits that
I think combining it with others is a bad idea in principle.

For specific examples, we bundled schnorr, Taproot, MAST, OP_SUCCESSx
and CHECKSIGADD together because they do have synergies like that; we
didn't bundle ANYPREVOUT and graftroot despite the potential synergies
because those features needed substantially more study.

The nulldummy soft-fork (bip 147) was deployed concurrently with
the segwit soft-fork (bip 141, 143), but I don't think there was any
particular synergy or need for those things to be combined, it just
reduced the overhead of two sets of activation signalling to one.

Note that the implementation code for nulldummy had already been merged
and was applied as relay policy well before activation parameters were
defined (May 2014 via PR#3843 vs Sep 2016 for PR#8636) let alone becoming
an active soft fork.

Cheers,
aj



Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-20 Thread Anthony Towns via bitcoin-dev
On Sat, Sep 18, 2021 at 10:11:10AM -0400, Antoine Riard wrote:
> I think one design advantage of combining scope-minimal opcodes like MERKLESUB
> with sighash malleability is the ability to update a subset of the off-chain
> contract transactions fields after the funding phase.

Note that it's not "update" so much as "add to"; and I mostly think
graftroot (and friends), or just updating the utxo onchain, are a better
general purpose way of doing that. It's definitely a tradeoff though.

> Yes this is a different contract policy that I would like to set up.
> Let's say you would like to express the following set of capabilities.
> C0="Split the 4 BTC funds between Alice/Bob and Caroll/Dave"
> C1="Alice can withdraw 1 BTC after 2 weeks"
> C2="Bob can withdraw 1 BTC after 2 weeks"
> C3="Caroll can withdraw 1 BTC after 2 weeks"
> C4="Dave can withdraw 1 BTC after 2 weeks"
> C5="If USDT price=X, Alice can withdraw 2 BTC or Caroll can withdraw 2 BTC"

Hmm, I'm reading C5 as "If an oracle says X, and Alice and Carol agree,
they can distribute all the remaining funds as they see fit".

> If C4 is exercised, to avoid trust in the remaining counterparty, both Alice 
> or
> Caroll should be able to conserve the C5 option, without relying on the 
> updated
> key path.

> As you're saying, as we know the group in advance, one way to setup the tree
> could be:
>        (A, (B, C), BC), D), BCD), E, F), EF), G), EFG)))

Make it:

  (((AB, (A,B)), (CD, (C,D))), ACO)

AB = DROP <A+B> DUP 0 6 TLUV CHECKSIGVERIFY IN_OUT_AMOUNT SUB 2BTC LESSTHAN
CD = same but for carol+dave
A = <A> DUP <H(B')> 10 TLUV CHECKSIGVERIFY IN_OUT_AMOUNT SUB 1BTC LESSTHAN
B' = <B> DUP 0 2 TLUV CHECKSIGVERIFY IN_OUT_AMOUNT SUB 1BTC LESSTHAN
B,C,D = same as A but for bob, etc
A',C',D' = same as B' but for alice, etc
ACO = <A+C> CHECKSIGVERIFY <oracle> CHECKSIG

Probably AB, CD, A..D, A'..D' all want a CLTV delay in there as well.
(Relative timelocks would probably be annoying for everyone who wasn't
the first to exit the pool)

> Note, this solution isn't really satisfying as the G path isn't neutralized on
> the Caroll/Dave fork and could be replayed by Alice or Bob...

I think the above fixes that -- when AB is spent it deletes itself and
the (A,B) pair; when A is spent, it deletes (A, B and AB) and replaces
them with B'; when B' is spent it just deletes itself.

Cheers,
aj


Re: [bitcoin-dev] Inherited IDs - A safer, more powerful alternative to BIP-118 (ANYPREVOUT) for scaling Bitcoin

2021-09-18 Thread Anthony Towns via bitcoin-dev
On Fri, Sep 17, 2021 at 09:58:45AM -0700, Jeremy via bitcoin-dev wrote,
on behalf of John Law:

> I'd like to propose an alternative to BIP-118 [1] that is both safer and more
> powerful. The proposal is called Inherited IDs (IIDs) and is described in a
> paper that can be found here [2]. [...]

Pretty sure I've skimmed this before but hadn't given it a proper look.
Saying "X is more powerful" and then saying it can't actually do the
same stuff as the thing it's "more powerful" than always strikes me as
a red flag. Anyhoo..

I think the basic summary is that you add to each utxo a new resettable
"structural" tx id called an "iid" and indetify input txs that way when
signing, so that if the details of the transaction changes but not the
structure, the signature remains valid.

In particular, if you've got a tx with inputs tx1:n1, tx2:n2, tx3:n3, etc;
and outputs out1, out2, out3, etc, then its structural id is hash(iid(tx1),
n1) if any of its outputs are "tagged" and it's not a coinbase tx, and
otherwise it's just its txid.  (The proposed tagging is to use a segwit
v2 output in the tx, though I don't think that's an essential detail)
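
(Or in python, with hash(), .tagged and the input representation as
placeholders for the paper's actual serialisation:)

  def iid(tx):
      if tx.is_coinbase or not any(out.tagged for out in tx.outputs):
          return tx.txid
      prev_tx, prev_n = tx.inputs[0]    # first input's outpoint
      return hash((iid(prev_tx), prev_n))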

So if you have a tx A with 3 outputs, then tx B spends "A:0, A:1" and
tx C spends "B:0" and tx D spends "C:0", if you replace B with B',
then if both B and B' were tagged, the signatures for C (and D,
assuming C was tagged) will still be valid for spending from B'.

So the question is what you can do with that.

The "2stage" protocol is proposed as an alternative to eltoo is
essentially just:

 a) funding tx gets dropped to the chain
 b) closing state is proposed by one party
 c) other party can immediately finalise by confirming a final state
that matches the proposed closing state, or was after it
 d) if the other party's not around for whatever delay, the party that
proposed the close can finalise it

That doesn't work for more than two participants, because two of
the participants could collude to take the fast path in (c) with some
earlier state, robbing any other participants. That said, this is a fine
protocol for two participants, and might be better than doing the full
eltoo arrangement if you only have a two participant channel.

To make channel factories work in this model, I think the key step is
using invalidation trees to allow updating the split of funds between
groups of participants. I think invalidation trees introduce a tradeoff
between (a) how many updates you can make, and (b) how long you have to
notice a close is proposed and correct it, before an invalidated state
can be posted, and (c) how long it will take to be able to extract your
funds from the factory if there are problems initially. You reduce those
delays substantially (to a log() factor) by introducing a hierarchy of
update txs (giving you a log() number of txs), I think.

That's the "multisig factories" section anyway, if I'm
following correctly. The "timeout trees", "update-forest" and
"challenge-and-response" approaches both introduce a trusted user ("the
operator"), I think, so are perhaps more comparable to statechains
than eltoo?

So how does that compare, in my opinion?

If you consider special casing two-party channels with eltoo, then I
think eltoo-2party and 2stage are equally effective. Comparing
eltoo-nparty and the multisig iid factories approach, I think the
uncooperative case looks like:

 ms-iid: log(n) txs (for the invalidation tree)
         log(n) time (?) (for the delays to ensure invalidated states
         don't get published)

 eltoo:  1 tx from you
         1 block after you notice, plus the fixed csv delay

A malicious counterparty can post many old update states prior to you
posting the latest state, but those don't introduce extra csv delays
and you aren't paying the fees for those states, so I don't think it
makes sense to call that an O(n) delay or cost.

An additional practical problem with lightning is dealing with layered
commitments; that's a problem both for the delays while waiting for a
potential rejection in 2stage and for the invalidation tree delays in the
factory construction. But it's not a solved problem for eltoo yet, either.

As far as implementation goes, introducing the "iid" concept would mean
that info would need to be added to the utxo database -- if every utxo
got an iid, that would be perhaps a 1.4GB increase to the utxo db (going
by unique transaction rather than unique output), but presumably iid txs
would end up being both uncommon and short-lived, so the cost is probably
really mostly just in the additional complexity. Both iid and ANYPREVOUT
require changes to how signatures are evaluated and apps that use the
new feature are written, but ANYPREVOUT doesn't need changes beyond that.

(Also, the description of OP_CODESEPARATOR (footnote 13 on page 13,
ominous!) doesn't match its implementation in taproot. It also says BIP
118 introduces a new address type for floating transactions, but while
this was floated on the list, 

Re: [bitcoin-dev] BIP extensions

2021-09-15 Thread Anthony Towns via bitcoin-dev
On Wed, Sep 15, 2021 at 03:14:31PM +0900, Karl-Johan Alm via bitcoin-dev wrote:
> BIPs are proposals.

> It is then organically incorporated into the various entities that
> exist in the Bitcoin space. At this point, it is not merely a
> proposal, but a standard.

Thinking of BIPs that have reached "Final" status as a "standard" might
be reasonable, but I'd be pretty careful about even going that far,
let alone further.

But as you said, "BIPs are proposals". If your conclusion is somehow
that a BIP "is not merely a proposal", you're reached a contradiction,
which means you've made a logic error somewhere in between...

> Someone may have
> agreed to the proposal in its original form, but they may disagree
> with it if it is altered from under their feet.

> 2. To improve the proposal in some way, e.g. after discussion or after
> getting feedback on the proposed approach.
> 3. To add missing content, such as activation strategy.

> I propose that changes of the second and third type, unless they are
> absolutely free from contention, are done as BIP extensions.

If you were proposing this just for BIPs that are marked final, then
sure, maybe, I guess -- though why mark them final if you still want
to add missing content or make further improvements? But if you want to
apply it as soon as a BIP number is assigned or text is merged into the
repo, I think that just means requesting number assignment gets delayed
until the end of the development process rather than near the beginning,
which doesn't sound particularly helpful.

That's essentially how the lightning BOLTs are set up -- you only get to
publish a BOLT after you've got support from multiple implementations
[0]; but that has meant they don't have published docs for the various
things individual teams have implemented, making interoperability harder
rather than easier. There's been talk about creating bLIPs [1] to remedy
this lack.

> BIP extensions are separate BIPs that extend or modify an existing BIP.

So as an alternative, how about more clearly separating out draft BIPs
from those in Active/Final state? ie:

 * brand new BIP draft comes in from its authors/champions/whatever
 * number xxx gets assigned, it becomes "Draft BIP xxx"
 * authors modify it as they see fit
 * once the authors are happy with the text, they can move it
   to Final status, at which point it is no longer a draft and is
   just "BIP xxx", and doesn't get modified anymore
 * go to step 1

(I'm doubtful that it's very useful to have an "Active" state as distinct
from "Final"; that just gives the editors an excuse to play favourites
by deciding whose objections count and whose don't (or perhaps which
implementations count and which ones don't). It's currently only used for
BIPs about the BIP process, which makes it seem particularly pointless...)

> By making extensions to BIPs, rather than modifying them long after
> review, we are giving the community [...]

As described, I think you would be giving people an easy way to actively
obstruct the BIP process by making it harder to "improve the proposal"
and "add missing content", and encouraging contentiousness as a result.

For adding on to BIPs that have reached Final status, I think just
assigning completely new numbers is fine, as occurred with bech32 and
bech32m (BIPs 173 and 350).

Even beyond that, having BIP maintainers exercising judgement by trying
to reserve/assign "pretty" numbers (like "BIP 3" for the new BIP process)
seems like a mistake to me. If it were up to me, I'd make the setup be
something like:

 * new BIP? make a PR, putting the text into
   "drafts/bip-authorname-description.mediawiki" (with corresponding
   directory for images etc). Have the word "Draft" appear in the "BIP:
   xxx" header as well as in the Status: header.

 * if that passes CI and isn't incoherent, it gets merged

 * only after the draft is already merged is a BIP number assigned.
   the number is chosen by a script, and the BIP maintainers rename it
   to "drafts/bip-xxx.mediawiki" in a followup commit including internal
   links to bip-authorname-description/foo.png and add it to the README
   (automatically at the same time as the merge, ideally)

 * when a BIP becomes Final, it gets moved from drafts/ into
   the main directory [2], and to avoid breaking external links,
   drafts/bip-xxx.mediawiki is changed to just have a link to the
   main doc.

 * likewise when a BIP becomes rejected/deprecated/whatever, it's moved
   into historical/ and drafts/bip-xxx.mediawiki and bip-xxx.mediawiki
   are updated with a link to the new location

 * otherwise, don't allow any modifications to bips outside of
   drafts/, with the possible exception of adding additional info in
   Acknowledgements or See also section or similar, adding Superseded-By:
   links, and updating additional tables that are deliberately designed
   to be updated, eg bip-0009/assignments.mediawiki

It's better to remove incentives to introduce friction rather than
add more.


Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-15 Thread Anthony Towns via bitcoin-dev
On Sun, Sep 12, 2021 at 07:37:56PM -0400, Antoine Riard via bitcoin-dev wrote:
> While MERKLESUB is still WIP, here the semantic. [...]
> I believe this is matching your description and the main difference compared 
> to
> your TLUV proposal is the lack of merkle tree extension, where a new merkle
> path is added in place of the removed tapscript.

I think "  MERKLESUB" is the same as " OP_0 2 TLUV", provided
 happens to be the same index as the current input. So it misses the
ability to add branches (replacing OP_0 with a hash), the ability to
preserve the current script (replacing 2 with 0), and the ability to
remove some of the parent paths (replacing 2 with 4*n); but gains the
ability to refer to non-corresponding outputs.

> > That would mean anyone who could do a valid spend of the tx could
> > violate the covenant by spending to an unencumbered witness v2 output
> > and (by collaborating with a miner) steal the funds. I don't think
> > there's a reasonable way to have existing covenants be forward
> > compatible with future destination addresses (beyond something like CTV
> > that strictly hardcodes them).
> That's a good catch, thanks for raising it :)
> Depends how you define reasonable, but I think one straightforward fix is to
> extend the signature digest algorithm to encompass the segwit version (and
> maybe program-size ?) of the spending transaction outputs.

That... doesn't sound very straightforward to me; it's basically
introducing a new covenant approach, that's getting fixed into a
signature, rather than being a separate opcode.

I think a better approach for that would be to introduce the opcode (eg,
PUSH_OUTPUT_SCRIPTPUBKEY, and SUBSTR to be able to analyse the segwit
version), and make use of graftroot to allow a signature to declare that
it's conditional on some extra script code. But it feels like it's going
a bit off topic.

> > Having the output position parameter might be an interesting way to
> > merge/split a vault/pool, but it's not clear to me how much sense it
> > makes to optimise for that, rather than just doing that via the key
> > path. For pools, you want the key path to be common anyway (for privacy
> > and efficiency), so it shouldn't be a problem; but even for vaults,
> > you want the cold wallet accessible enough to be useful for the case
> > where theft is attempted, and maybe that's also accessible enough for
> > the occasional merge/split to keep your utxo count/sizes reasonable.
> I think you can come up with interesting contract policies. Let's say you want
> to authorize the emergency path of your pool/vault balances if X happens (e.g a
> massive drop in USDT price signed by DLC oracles). You have (A+B+C+D) forking
> into (A+B) and (C+D) pooled funds. To conserve the contracts pre-negotiated
> economic equilibrium, all the participants would like the emergency path to be
> inherited on both forks. Without relying on the key path interactivity, which
> is ultimately a trust on the post-fork cooperation of your counterparty ?

I'm not really sure what you're saying there; is that any different to a
pool of (A and B) where A suddenly wants to withdraw funds ASAP and can't
wait for a key path signature? In that case A authorises the withdrawal
and does whatever she wants with the funds (including form a new pool),
and B remains in the pool.

I don't think you can reliably have some arbitrary subset of the pool
able to withdraw atomically without using the key path -- if A,B,C,D have
individual scripts allowing withdrawal, then there's no way of setting
the tree up so that every pair of members can have their scripts cut
off without also cutting off one or both of the other members withdrawal
scripts.

If you know in advance which groups want to stick together, you could
set things up as:

  (((A, B), AB), C)

where:

  A =   "A DUP H(B') 10 TLUV CHECKSIG"  -> (B', C)
  B =   "B DUP H(A') 10 TLUV CHECKSIG"  -> (A', C)
  A' =  "A DUP 0 2 TLUV CHECKSIG"   -> (C)
  B' =  "B DUP 0 2 TLUV CHECKSIG"   -> (C)
  AB =  "(A+B) DUP 0 6 TLUV CHECKSIG"  -> (C)
  C  =  "C DUP 0 2 TLUV CHECKSIG"   -> ((A,B), AB)

(10 = 2+4*2 = drop my script, my sibling and my uncle; 6 = 2+4*1 =
drop my script and my sibling; 2 = drop my script only)

Which would let A and B exit together in a single tx rather than needing two
transactions to exit separately.

> > Saving a byte of witness data at the cost of specifying additional
> > opcodes seems like optimising the wrong thing to me.
> I think we should keep in mind that any overhead cost in the usage of a script
> primitive is echoed to the user of off-chain contract/payment channels. If the
> tapscripts are bigger, your average on-chain spends in case of non-cooperative
> scenarios are increased in consequence, and as such your fee-bumping reserve.
> Thus making those systems less economically accessible.

If you're worried about the cost of a single byte of witness data you
probably can't afford to do script path spends at all -- 

Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-13 Thread Anthony Towns via bitcoin-dev
On Sun, Sep 12, 2021 at 10:33:24PM -0700, Matt Corallo via bitcoin-dev wrote:
> > On Sep 12, 2021, at 00:53, Anthony Towns  wrote:
> >> Why bother with a version bit? This seems substantially more complicated
> >> than the original proposal that surfaced many times before signet launched
> >> to just have a different reorg signing key.
> > Yeah, that was the original idea, but there ended up being two problems
> > with that approach. The simplest is that the signet block signature
> > encodes the signet challenge,
> But if that was the original proposal, why is the challenge committed to in 
> the block? :)

The answer to your question was in the text after the comma, that you
deleted...

> > Blocks on signet get mined at a similar rate to mainnet, so you'll always
> > have to wait a little bit (up to an hour) -- if you don't want to wait
> > at all, that's what regtest (or perhaps a custom signet) is for.
> Can you explain the motivation for this? 

I'm not sure that's really the question you want answered? Mostly
it's just "this is how mainnet works" plus "these are the smallest
changes to have blocks be chosen by a signature, rather than entirely
by PoW competition".

For integration testing across many services, I think a ten-minute average
between blocks still makes sense -- for protocols that rely on CSV/CLTV to
ensure there's a delay they can use to recover funds, and that specify that
delay in blocks (as lightning's to_self_delay does), significant surges of
blocks will cause uninteresting bugs.

It would be easy enough to change things to target an average of 2 or
5 minutes, I suppose, but then you'd probably need to propagate that
logic back into your apps, which would otherwise think 144 blocks is
about a day.

We could switch back to doing blocks exactly every 10 minutes, rather
than a poisson-ish distribution in the range of 1min to 60min, but that
doesn't seem like that huge a win, and makes it hard to test that things
behave properly when blocks arrive in bursts.
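
For illustration, here's a quick Python sketch of that "poisson-ish"
spacing (my own approximation, not the actual miner script: exponential
inter-block times with a 10 minute mean, clamped to the 1min-60min range
described above):

  import random

  def next_block_delay_minutes(mean=10.0):
      # poisson-process arrivals => exponentially distributed gaps;
      # clamping to 1..60 minutes slightly shifts the realised average,
      # which is fine for a sketch
      return min(max(random.expovariate(1.0 / mean), 1.0), 60.0)

  gaps = [next_block_delay_minutes() for _ in range(144)]  # ~a day's blocks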

> From where I sit, as far as I know, I should basically be a prime
> example of the target market for public signet - someone developing
> bitcoin applications with regular requirements to test those applications
> with other developers without jumping through hoops to configure software
> the same across the globe and set up miners. With blocks being slow and
> irregular, I’m basically not benefited at all by signet and will stick
> with testnet3/mainnet testing, which both suck.

Best of luck to you then? Nobody's trying to sell you on a subscription
plan to using signet. Signet's less expensive in fees (or risk) than
mainnet, and takes far less time for IBD than testnet, but if those
aren't blockers for you, that's great.

Cheers,
aj


Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-12 Thread Anthony Towns via bitcoin-dev
On Thu, Sep 09, 2021 at 05:50:08PM -0700, Matt Corallo via bitcoin-dev wrote:
> > AJ proposed to allow SigNet users to opt-out of reorgs in case they
> > explicitly want to remain unaffected. This can be done by setting a
> > to-be-reorged version bit [...]
> Why bother with a version bit? This seems substantially more complicated
> than the original proposal that surfaced many times before signet launched
> to just have a different reorg signing key.

Yeah, that was the original idea, but there ended up being two problems
with that approach. The simplest is that the signet block signature
encodes the signet challenge, so if you have two different challenges, eg

  " CHECKSIG"
  "0 SWAP 1   2 CHECKMULTISIG"

then while both challenges will accept a signature by normal as the
block solution, the signature by "normal" will be different between the
two. This is a fairly natural result of reusing the tx-signing code for
the block signatures and not having a noinput/anyprevout tx-signing mode.

More generally, though, this would mean that a node that's opting out
of reorgs will see the to-be-reorged blocks as simply invalid due to a
bad signature, and will follow the "this node sent me an invalid block"
path in the p2p code, and start marking peers that are following reorgs
as discouraged and worth disconnecting. I think that would make it pretty
hard to avoid partitioning the network between peers that do and don't
accept reorgs, and generally be a pain.

So using the RECENT_CONSENSUS_CHANGE behaviour that avoids the
discourage/disconnect logic seems the way to avoid that problem, and that
means making it so that nodes that opt out of reorgs can distinguish
valid-but-will-become-stale blocks from invalid blocks. Using a versionbit
seems like the easiest way of doing that.

> > The reorg-interval X very much depends on the user's needs. One could
> > argue that there should be, for example, three reorgs per day, each 48
> > blocks apart. Such a short reorg interval allows developers in all time
> > zones to be awake during one or two reorgs per day. Developers don't
> > need to wait for, for example, a week until they can test their reorgs
> > next. However, too frequent reorgs could hinder other SigNet users.
> I see zero reason whatsoever to not simply reorg ~every block, or as often
> as is practical. If users opt in to wanting to test with reorgs, they should
> be able to test with reorgs, not wait a day to test with reorgs.

Blocks on signet get mined at a similar rate to mainnet, so you'll always
have to wait a little bit (up to an hour) -- if you don't want to wait
at all, that's what regtest (or perhaps a custom signet) is for.

I guess it would be super easy to say something like:

 - miner 1 ignores blocks marked for reorg
 - miner 2 marks its blocks for reorg, mines on top of the most work
   block
 - miner 2 never mines a block which would have (height % 10 == 1)
 - miner 1 and miner 2 have the same hashrate, but mine at randomly
   different times

which would mean there's almost always a reorg being mined, people that
follow reorgs will see fewer than 1.9x as many blocks as non-reorg nodes,
and reorgs won't go on for more than 10 blocks.
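
(Back of the envelope for the "fewer than 1.9x", ignoring the detailed
race dynamics -- my own arithmetic, not anything in the miner scripts:

  skipped = 1                          # miner 2 never mines h % 10 == 1
  ratio = (10 + (10 - skipped)) / 10   # blocks seen by reorg-followers,
  assert ratio == 1.9                  # relative to non-reorg nodes
)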

Cheers,
aj



Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-10 Thread Anthony Towns via bitcoin-dev
On Fri, Sep 10, 2021 at 12:12:24AM -0400, Antoine Riard wrote:
> "Talk is cheap. Show me the code" :p
>     case OP_MERKLESUB:

I'm not entirely clear on what your opcode there is trying to do. I
think it's taking

 <N> <P> MERKLESUB

and checking that output N has the same scripts as the current input
except with the current script removed, and with its internal pubkey as
the current input's internal pubkey plus P.

>         txTo->vout[out_pos].scriptPubKey.IsWitnessProgram(witnessversion,
> witnessprogram);
>         //! The committed to output must be a witness v1 program at least

That would mean anyone who could do a valid spend of the tx could
violate the covenant by spending to an unencumbered witness v2 output
and (by collaborating with a miner) steal the funds. I don't think
there's a reasonable way to have existing covenants be forward
compatible with future destination addresses (beyond something like CTV
that strictly hardcodes them).

> One could also imagine a list of output positions to force the taproot update
> on multiple outputs ("OP_MULTIMERKLESUB").

Having the output position parameter might be an interesting way to
merge/split a vault/pool, but it's not clear to me how much sense it
makes to optimise for that, rather than just doing that via the key
path. For pools, you want the key path to be common anyway (for privacy
and efficiency), so it shouldn't be a problem; but even for vaults,
you want the cold wallet accessible enough to be useful for the case
where theft is attempted, and maybe that's also accessible enough for
the occasional merge/split to keep your utxo count/sizes reasonable.

> For the merkle branches extension, I was thinking of introducing a separate
> OP_MERKLEADD, maybe to *add* a point to the internal pubkey group signer. If
> you're only interested in leaf pruning, using OP_MERKLESUB only should save you
> one byte of empty vector ?

Saving a byte of witness data at the cost of specifying additional
opcodes seems like optimising the wrong thing to me.

> One solution I was thinking about was introducing a new tapscript version
> (`TAPROOT_INTERNAL_TAPSCRIPT`) signaling that VerifyTaprootCommitment must
> compute the TapTweak with a new TapTweak=(internal_pubkey || merkle_root ||
> parity_bit). A malicious participant wouldn't be able to interfere with the
> updated internal key as it would break its own spending taproot commitment
> verification ?

I don't think that works, because different scripts in the same merkle
tree can have different script versions, which would here indicate
different parities for the same internal pub key.

> > That's useless without some way of verifying that the new utxo retains
> > the bitcoin that was in the old utxo, so also include a new opcode
> > IN_OUT_AMOUNT that pushes two items onto the stack: the amount from this
> > input's utxo, and the amount in the corresponding output, and then expect
> > anyone using TLUV to use maths operators to verify that funds are being
> > appropriately retained in the updated scriptPubKey.
> Credit to you for the SIGHASH_GROUP design, here the code, with
> SIGHASH_ANYPUBKEY/ANYAMOUNT extensions.
> 
> I think it's achieving the same effect as IN_OUT_AMOUNT, at least for CoinPool
> use-case.

The IN_OUT_AMOUNT opcode lets you do maths on the values, so you can
specify "hot wallets can withdraw up to X" rather than "hot wallets
must withdraw exactly X". I don't think there's a way of doing that with
SIGHASH_GROUP, even with a modifier like ANYPUBKEY?

> (I think I could come with some use-case from lex mercatoria where if you play
> out a hardship provision you want to tweak all the other provisions by a CSV
> delay while conserving the rest of their policy)

If you want to tweak all the scripts, I think you should be using the
key path.

One way you could do something like that without changing the scripts
though, is have the timelock on most of the scripts be something like
"[3 months] CSV", and have a "delay" script that doesn't require a CSV,
does require a signature from someone able to authorise the delay,
and requires the output to have the same scriptPubKey and amount. Then
you can use that path to delay resolution by 3 months however often,
even if you can't coordinate a key path spend.

> > And second, it doesn't provide a way for utxos to "interact", which is
> > something that is interesting for automated market makers [5], but perhaps
> > only interesting for chains aiming to support multiple asset types,
> > and not bitcoin directly. On the other hand, perhaps combining it with
> > CTV might be enough to solve that, particularly if the hash passed to
> > CTV is constructed via script/CAT/etc.
> That's where SIGHASH_GROUP might be more interesting as you could generate
> transaction "puzzles".
> IIUC, the problem is how to have a set of ratios between x/f(x).

Normal way to do it is specify a formula, eg

   outBTC * outUSDT >= inBTC * inUSDT

that's a constant product market 

Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-10 Thread Anthony Towns via bitcoin-dev
On Thu, Sep 09, 2021 at 12:26:37PM -0700, Jeremy wrote:
> I'm a bit skeptical of the safety of the control byte. Have you considered the
> following issues?

> If we used the script "0 F 0 TLUV" (H=F, C=0) then we keep the current
> script, keep all the steps in the merkle path (AB and CD), and add
> a new step to the merkle path (F), giving us:
>     EF = H_TapBranch(E, F)
>     CDEF =H_TapBranch(CD, EF)
>     ABCDEF = H_TapBranch(AB, CDEF)
> 
> If we recursively apply this rule, would it not be possible to repeatedly apply
> it and end up burning out path E beyond the 128 Taproot depth limit?

Sure. Suppose you had a script X which allows adding a new script A[0..n]
as its sibling. You'd start with X and then go to (A0, X), then (A0,
(A1, X)), then (A0, (A1, (A2, X))) and by the time you added A127 TLUV
would fail because it'd be trying to add a path longer than 128 elements.

But this would be bad anyway -- you'd already have a maximally unbalanced
tree. So the fix for both these things would be to do a key path spend
and rebalance the tree. With taproot, you always want to do key path
spends if possible.

Another approach would be to have X replace itself not with (X, A) but
with (X, (X, A)) -- that way you go from:

   /\
  A  X

to

   /\
  A  /\
    X  /\
      B  X

to

    /\
   /  \
  A    /\
      /  \
    /\    /\
   C  X  B  X

and can keep the tree height at O(log(n)) of the number of members.

This means the script X would need a way to reference its own hash, but
you could do that by invoking TLUV twice, once to check that your new
sPK is adding a sibling (X', B) to the current script X, and a second
time to check that you're replacing the current script with (X', (X',
B)). Executing it twice ensures that you've verified X' = X, so you can
provide X' on the stack, rather than trying to include the script's on
hash in itself.

> Perhaps it's OK: E can always approve burning E?

As long as you've got the key path, then I think that's the thing to do.

> If we used the script "0 F 4 TLUV" (H=F, C=4) then we keep the current
> script, but drop the last step in the merkle path, and add a new step
> (effectively replacing the *sibling* of the current script):
>     EF = H_TapBranch(E, F)
>     ABEF = H_TapBranch(AB, EF) 
> If we used the script "0 0 4 TLUV" (H=empty, C=4) then we keep the current
> script, drop the last step in the merkle path, and don't add anything new
> (effectively dropping the sibling), giving just:
>     ABE = H_TapBranch(AB, E)
> 
> Is C = 4 stable across all state transitions? I may be missing something, but
> it seems that the location of C would not be stable across transitions.

Dropping a sibling without replacing it or dropping the current script
would mean you could re-execute the same script on the new utxo, and
repeat that enough times and the only remaining ways of spending would
be that script and the key path.

> E.g., What happens when, C and E are similar scripts and C adds some clauses
> F1, F2, F3, then what does this sibling replacement do? Should a sibling not 
> be
> able to specify (e.g., by leaf version?) a NOREPLACE flag that prevents
> siblings from modifying it?

If you want a utxo where some script paths are constant, don't construct
the utxo with script paths that can modify them.

> What happens when E adds a bunch of F's F1 F2 F3, is C still in the same
> position as when E was created?

That depends how you define "position". If you have:


   /\
  R  S

and

   /\
  R  /\
    S  T

then I'd say that "R" has stayed in the same position, while "S" has
been lowered to allow for a new sibling "T". But the merkle path to
R will have changed (from "H(S)" to "H(H(S),H(T))"). 

> Especially since nodes are lexicographically sorted, it seems hard to create
> stable path descriptors even if you index from the root downwards.

The merkle path will always change unless you have the exact same set
of scripts, so that doesn't seem like a very interesting way to define
"position" when you're adding/removing/replacing scripts.

The "lexical ordering" is just a modification to how the hash is
calculated that makes it commutative, so that H(A,B) = H(B,A), with
the result being that the merkle path for any script in the R,(S,T)
tree above is the same for the corresponding script in the tree:

   /\
  /\ R
 T  S
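
If it helps, here's that commutativity as a few lines of Python,
following the BIP 340/341 tagged hash construction:

  import hashlib

  def tagged_hash(tag, data):
      # BIP 340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data)
      t = hashlib.sha256(tag.encode()).digest()
      return hashlib.sha256(t + t + data).digest()

  def tap_branch(a, b):
      # sorting the two child hashes before hashing makes H commutative
      return tagged_hash("TapBranch", min(a, b) + max(a, b))

  S, T = b"\x02" * 32, b"\x07" * 32   # placeholder hashes
  assert tap_branch(S, T) == tap_branch(T, S)   # H(S,T) == H(T,S)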

Cheers,
aj



Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-09 Thread Anthony Towns via bitcoin-dev
On Thu, Sep 09, 2021 at 04:41:38PM +1000, Anthony Towns wrote:
> I'll split this into two emails, this one's the handwavy overview,
> the followup will go into some of the implementation complexities.

(This is informed by discussions with Greg, Matt Corallo, David Harding
and Jeremy Rubin; opinions and mistakes my own, of course)



First, let's talk quickly about IN_OUT_AMOUNT. I think the easiest way to
deal with it is just a single opcode that pushes two values to the stack;
however it could be two opcodes, or it could even accept a parameter
letting you specify which input (and hence which corresponding output)
you're talking about (-1 meaning the current input perhaps). 

Anyway, a big complication here is that amounts in satoshis require up
to 51 bits to represent them, but script only allows you to do 32 bit
maths. However introducing IN_OUT_AMOUNT already means using an OP_SUCCESS
opcode, which in turn allows us to arbitrarily redefine the behaviour
of other opcodes -- so we can use the presence of IN_OUT_AMOUNT in the
script to upgrade ADD, SUB, and the comparison operators to support 64
bit values. Enabling MUL, DIV and MOD might also be worthwhile.
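
To make the numbers concrete (plain Python, just checking the arithmetic):

  MAX_MONEY = 21_000_000 * 100_000_000   # all the bitcoin, in satoshis
  assert MAX_MONEY.bit_length() == 51    # needs 51 bits...
  assert MAX_MONEY > 2**31 - 1           # ...well beyond script's 32 bit maths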



Moving onto TLUV. My theory is that it pops three items off the stack. The
top of the stack is "C" the control integer; next is "H" the
additional path step; and finally "X" the tweak for the internal
pubkey. If "H" is the empty vector, no additional path step is
added; otherwise it must be 32 bytes. If "X" is the empty vector,
the internal pubkey is not tweaked; otherwise it must be a 32 byte
x-only pubkey.

The low bit of C indicates the parity of X; if it's 0, X has even y,
if it's 1, X has odd y.

The next bit of C indicates whether the current script is dropped from
the merkle path, if it's 0, the current script is kept, if it's 1 the
current script is dropped.

The remaining bits of C (ie C >> 2) are the number of steps in the merkle
path that are dropped. (If C is negative, behaviour is to be determined
-- either always fail, or always succeed and left for definition via
future soft-fork)
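
As a sanity check on those rules, a little Python sketch (the names are
mine, not from any implementation) decoding C:

  def decode_tluv_control(c):
      # low bit: parity of X; next bit: drop current script;
      # remaining bits: number of merkle path steps to drop
      if c < 0:
          raise ValueError("negative C: reserved (fail, or future soft-fork)")
      return bool(c & 1), bool(c & 2), c >> 2

  # eg C=10 = 2 + 4*2: even-y X, drop my script, drop two path steps
  assert decode_tluv_control(10) == (False, True, 2)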

For example, suppose we have a taproot utxo that had 5 scripts
(A,B,C,D,E), calculated as per the example in BIP 341 as:

AB = H_TapBranch(A, B)
CD = H_TapBranch(C, D)
CDE = H_TapBranch(CD, E)
ABCDE = H_TapBranch(AB, CDE)

And we're spending using script E, in that case the control block includes
the script E, and the merkle path to it, namely (AB, CD).

So here's some examples of what you could do with TLUV to control how
the spending scripts can change, between the input sPK and the output sPK.

At its simplest, if we used the script "0 0 0 TLUV", then that says we
keep the current script, keep all steps in the merkle path, don't add
any new ones, and don't change the internal public key -- that is, that
we want the resulting sPK to be exactly the same as the one we're spending.

If we used the script "0 F 0 TLUV" (H=F, C=0) then we keep the current
script, keep all the steps in the merkle path (AB and CD), and add
a new step to the merkle path (F), giving us:

EF = H_TapBranch(E, F)
CDEF = H_TapBranch(CD, EF)
ABCDEF = H_TapBranch(AB, CDEF)

If we used the script "0 F 2 TLUV" (H=F, C=2) then we drop the current
script, but keep all the other steps, and add a new step (effectively
replacing the current script with a new one):

CDF = H_TapBranch(CD, F)
ABCDF = H_TapBranch(AB, CDF)

If we used the script "0 F 4 TLUV" (H=F, C=4) then we keep the current
script, but drop the last step in the merkle path, and add a new step
(effectively replacing the *sibling* of the current script):

EF = H_TapBranch(E, F)
ABEF = H_TapBranch(AB, EF)

If we used the script "0 0 4 TLUV" (H=empty, C=4) then we keep the current
script, drop the last step in the merkle path, and don't add anything new
(effectively dropping the sibling), giving just:

ABE = H_TapBranch(AB, E)
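
Those examples can be checked with a short Python sketch (placeholder
hashes and helper names are mine; the path is ordered leaf-to-root, as
in a BIP 341 control block, and "last step" means the step nearest the
leaf):

  import hashlib

  def tagged_hash(tag, data):
      t = hashlib.sha256(tag.encode()).digest()
      return hashlib.sha256(t + t + data).digest()

  def tap_branch(a, b):
      # commutative: children are sorted before hashing, per BIP 341
      return tagged_hash("TapBranch", min(a, b) + max(a, b))

  def tluv_new_root(leaf, path, h_new, drop_current, drop_steps):
      path = path[drop_steps:]              # trim steps nearest the leaf
      node = None if drop_current else leaf
      if h_new is not None:                 # add H as the new nearest step
          node = h_new if node is None else tap_branch(node, h_new)
      for sibling in path:                  # fold back up towards the root
          node = tap_branch(node, sibling)
      return node

  E, F = b"\x05" * 32, b"\x06" * 32         # placeholder hashes
  CD, AB = b"\x03" * 32, b"\x01" * 32

  # "0 F 2 TLUV" spending E: drop E, keep CD and AB, add F => ABCDF
  assert tluv_new_root(E, [CD, AB], F, True, 0) == \
         tap_branch(AB, tap_branch(CD, F))
  # "0 0 4 TLUV": keep E, drop CD, add nothing => ABE
  assert tluv_new_root(E, [CD, AB], None, False, 1) == tap_branch(AB, E)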



Implementing the release/locked/available vault construct would then
look something like this:

Locked script = "OP_RETURN"
Available script = "<HOT> CHECKSIGVERIFY IN_OUT_AMOUNT SWAP <X> SUB DUP 0
GREATERTHAN IF GREATERTHANOREQUAL VERIFY 0 <H_LOCKED> 2 TLUV ELSE 2DROP ENDIF
<D> CSV"
Release script = "<HOT> CHECKSIGVERIFY IF <H_AVAILABLE> ELSE <H_LOCKED> ENDIF 0
SWAP 4 TLUV INPUTAMOUNT OUTPUTAMOUNT LESSTHANOREQUAL"
HOT = 32B hot wallet pubkey
X = maximum amount spendable via hot wallet at any time
D = compulsory delay between releasing funds and being able to spend them
H_LOCKED = H_TapLeaf(locked script)
H_AVAILABLE = H_TapLeaf(available script)
Internal public key = 32B cold wallet pubkey



Moving on to the pooled scheme and actually updating the internal pubkey
is, unfortunately, where things start to come apart. In particular,
since taproot uses 32-byte x-only pubkeys (with implicit even-y) for the
scriptPubKey and the internal public key, we have to worry about what
happens if, eg, A,B,C and A+B+C all have even-y, but (A+B)=(A+B+C)-C does
not have even-y. In that case allowing C to remove herself 

[bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-09 Thread Anthony Towns via bitcoin-dev
Hello world,

A couple of years ago I had a flight of fancy [0] imagining how it
might be possible for everyone on the planet to use bitcoin in a
mostly decentralised/untrusted way, without requiring a block size
increase. It was a bit ridiculous and probably doesn't quite hold up,
and beyond needing all the existing proposals to be implemented (taproot,
ANYPREVOUT, CTV, eltoo, channel factories), it also needed a covenant
opcode [1]. I came up with something that I thought fit well with taproot,
but couldn't quite figure out how to use it for anything other than my
ridiculous scheme, so left it at that.

But recently [2] Greg Maxwell emailed me about his own cool idea for a
covenant opcode, which turned out to basically be a reinvention of the
same idea but with more functionality, a better name and a less fanciful
use case; and with that inspiration, I think I've also now figured out
how to use it for a basic vault, so it seems worth making the idea a
bit more public.

I'll split this into two emails, this one's the handwavy overview,
the followup will go into some of the implementation complexities.



The basic idea is to think about "updating" a utxo by changing the
taproot tree.

As you might recall, a taproot address is made up from an internal public
key (P) and a merkle tree of scripts (S) combined via the formula Q=P+H(P,
S)*G to calculate the scriptPubKey (Q). When spending using a script,
you provide the path to the merkle leaf that has the script you want
to use in the control block. The BIP has an example [3] with 5 scripts
arranged as ((A,B), ((C,D), E)), so if you were spending with E, you'd
reveal a path of two hashes, one for (AB), then one for (CD), then you'd
reveal your script E and satisfy it.

So that makes it relatively easy to imagine creating a new taproot address
based on the input you're spending by doing some or all of the following:

 * Updating the internal public key (ie from P to P' = P + X)
 * Trimming the merkle path (eg, removing CD)
 * Removing the script you're currently executing (ie E)
 * Adding a new step to the end of the merkle path (eg F)

Once you've done those things, you can then calculate the new merkle
root by resolving the updated merkle path (eg, S' = MerkleRootFor(AB,
F, H_TapLeaf(E))), and then calculate a new scriptPubKey based on that
and the updated internal public key (Q' = P' + H(P', S')*G).

So the idea is to do just that via a new opcode "TAPLEAF_UPDATE_VERIFY"
(TLUV) that takes three inputs: one that specifies how to update the
internal public key (X), one that specifies a new step for the merkle path
(F), and one that specifies whether to remove the current script and/or
how many merkle path steps to remove. The opcode then calculates the
scriptPubKey that matches that, and verifies that the output corresponding
to the current input spends to that scriptPubKey.

That's useless without some way of verifying that the new utxo retains
the bitcoin that was in the old utxo, so also include a new opcode
IN_OUT_AMOUNT that pushes two items onto the stack: the amount from this
input's utxo, and the amount in the corresponding output, and then expect
anyone using TLUV to use maths operators to verify that funds are being
appropriately retained in the updated scriptPubKey.



Here's two examples of how you might use this functionality.

First, a basic vault. The idea is that funds are ultimately protected
by a cold wallet key (COLD) that's inconvenient to access but is as
safe from theft as possible. In order to make day to day transactions
more convenient, a hot wallet key (HOT) is also available, which is
more vulnerable to theft. The vault design thus limits the hot wallet
to withdrawing at most L satoshis every D blocks, so that if funds are
stolen, you lose at most L, and have D blocks to use your cold wallet
key to re-secure the funds and prevent further losses.

To set this up with TLUV, you construct a taproot output with COLD as
the internal public key, and a script that specifies:

 * The tx is signed via HOT
 * <D> CSV -- there's a relative time lock since the last spend
 * If the input amount is less than L + dust threshold, fine, all done,
   the vault can be emptied.
 * Otherwise, the output amount must be at least (the input amount -
   L), and do a TLUV check that the resulting sPK is unchanged

So you can spend up to "L" satoshis via the hot wallet as long as you
wait D blocks since the last spend, and can do whatever you want via a
key path spend with the cold wallet.

You could extend this to have a two phase protocol for spending, where
first you use the hot wallet to say "in D blocks, allow spending up to
L satoshis", and only after that can you use the hot wallet to actually
spend funds. In that case supply a taproot sPK with COLD as the internal
public key and two scripts, the "release" script, which specifies:

 * The tx is signed via HOT
 * Output amount is greater than or equal to the input amount.
 * Use TLUV to check:
   + the output 

Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-08 Thread Anthony Towns via bitcoin-dev
On Tue, Sep 07, 2021 at 06:07:47PM +0200, 0xB10C via bitcoin-dev wrote:
> The reorg-interval X very much depends on the user's needs. One could
> argue that there should be, for example, three reorgs per day, each 48
> blocks apart.

Oh, wow, I think the last suggestion was every 100 blocks (every
~16h40m). Once every ~8h sounds very convenient.

> Such a short reorg interval allows developers in all time
> zones to be awake during one or two reorgs per day.

And also for there to reliably be reorgs when they're not awake, which
might be a useful thing to be able to handle, too :)

> Developers don't
> need to wait for, for example, a week until they can test their reorgs
> next. However, too frequent reorgs could hinder other SigNet users.

Being able to run `bitcoind -signet -signetacceptreorg=0` and never
seeing any reorgs should presumably make this not a problem?

For people who do see reorgs, having an average of 2 or 3 additional
blocks every 48 blocks is perhaps a 6% increase in storage/traffic.

> # Scenario 1: Race between two chains
> 
> For this scenario, at least two nodes and miner scripts need to be
> running. An always-miner A continuously produces blocks and rejects
> blocks with the to-be-reorged version bit flag set. And a race-miner R
> that only mines D blocks at the start of each interval and then waits X
> blocks. A and R both have the same hash rate. Assuming both are well
> connected to the network, it's random which miner will first mine and
> propagate a block. In the end, the A miner chain will always win the race.

I think this description is missing that all the blocks R mines have
the to-be-reorged flag set.

> 3. How deep should the reorgs be on average? Do you want to test
>deeper reorgs (10+ blocks) too?

Super interested in input on this -- perhaps we should get optech to
send a survey out to their members, or so?

My feeling is:

 - 1 block reorgs: these are a regular feature on mainnet, everyone
   should cope with them; having them happen multiple times a day to
   make testing easier should be great

 - 2-3 block reorgs: good for testing the "your tx didn't get enough
   confirms to be credited to your account" case, even though it barely
   ever happens on mainnet

 - 4-6 block reorgs: likely to violate business assumptions, but
   completely technically plausible, especially if there's an attack
   against the network

 - 7-100 block reorgs: for this to happen on mainnet, it would probably
   mean there was a bug and pools/miners agree the chain has to
   be immediately reverted -- eg, someone discovers and exploits an
   inflation bug, minting themselves free bitcoins and breaking the 21M
   limit (eg, the 51 block reorg in Aug 2010); or someone discovers a
   bug that splits the chain, and the less compatible chain is reverted
   (eg, the 24 block reorg due to the bdb lock limit in Mar 2013);
   or something similar. Obviously the bug would have to have been
   discovered pretty quickly after it was exploited for the reorg to be
   under a day's worth of blocks.

 - 100-2000+ block reorgs: severe bug that wasn't found quickly, or where
   getting >50% of miners organised took more than a few hours. This will
   start breaking protocol assumptions, like pool payouts, lightning's
   relative locktimes, or liquid's peg-in confirmation requirements, and
   result in hundreds of MBs of changes to the utxo set

Maybe it would be good to do reorgs of 15, 150 or 1500 blocks as a
special fire-drill event, perhaps once a month/quarter/year or so,
in some pre-announced window?

I think sticking to 1-6 block reorgs initially is a fine way to start
though.

> After enough testing, the default SigNet can start to do periodical
> reorgs, too.

FWIW, the only thing that concerns me about doing this on the default
signet is making sure that nodes that set -signetacceptreorg=0 don't
end up partitioning the p2p network due to either rejecting a higher
work chain or rejecting txs due to double-spends across the two chains.

A quick draft of code for -signetacceptreorg=0 is available at 

  https://github.com/ajtowns/bitcoin/commits/202108-signetreorg

Cheers,
aj



Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-12 Thread Anthony Towns via bitcoin-dev
On Tue, Aug 10, 2021 at 06:37:48PM -0400, Antoine Riard via bitcoin-dev wrote:
> Secondly, the trim-to-dust evaluation doesn't correctly match the lifetime of
> the HTLC.

Right: but that just means it's not something you should determine once
for the HTLC, but something you should determine each time you update the
channel commitment -- if fee rates are at 1sat/vb, then a 10,000 sat HTLC
that's going to cost 100 sats to create the utxo and eventually claim it
might be worth committing to, but if fee rates suddenly rise to 75sat/vb,
then the combined cost of 7500 sat probably isn't worthwhile (and it
certainly isn't worthwhile if fees rise to above 100sat/vb).
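
As a toy calculation (my own helper; the ~40vb create / ~60vb spend
split is discussed just below):

  def htlc_chain_cost(feerate_sat_vb, create_vb=40, spend_vb=60):
      # total cost of creating the HTLC output and later claiming it
      return feerate_sat_vb * (create_vb + spend_vb)

  value = 10_000                            # the 10,000 sat HTLC above
  assert htlc_chain_cost(1) == 100          # fine at 1 sat/vb
  assert htlc_chain_cost(75) == 7_500       # probably not worth it
  assert htlc_chain_cost(100) >= value      # certainly not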

That's independent of dust limits -- those only give you a fixed-size
lower limit of about 305 sats for p2wsh outputs.

Things become irrational before they become uneconomic as well: ie the
100vb is perhaps 40vb to create then 60vb to spend, so if you create
the utxo anyway then the 40vb is a sunk cost, and redeeming the 10k sats
might still be marginally worthwhile up until about 167sat/vb fee rate.

But note the logic there: it's an uneconomic output if fees rise above
167sat/vb, but it was already economically irrational for the two parties
to create it in the first place when fees were at or above 100sat/vb. If
you're trying to save every sat, dust limits aren't your problem. If
you're not trying to save every sat, then just add 305 sats to your
output so you avoid the dust limit.

(And the dust limit is only preventing you from creating outputs that
would be irrational if they only required a pubkey reveal and signature
to spend -- so a HTLC that requires revealing a script, two hashes,
two pubkeys, a hash preimage and two signatures with the same dust
threshold value for p2wsh of ~305sats would already be irrational at
about 2.1sat/vb and uneconomic at 2.75 sat/vb).

> (From a LN viewpoint, I would say we're trying to solve a price discovery
> issue, namely the cost to write on the UTXO set, in a distributed system, where
> any deviation from the "honest" price means you trust more your LN
> counterparty)

At these amounts you're already trusting your LN counterparty to not just
close the channel unilaterally at a high fee rate time and waste your
funds in fees, vs doing a much more efficient mutual/cooperative close.

Cheers,
aj



Re: [bitcoin-dev] [Lightning-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-13 Thread Anthony Towns via bitcoin-dev
On Mon, Jul 12, 2021 at 03:07:29PM -0700, Jeremy wrote:
> Perhaps there's a more general principle -- evaluating a script should
> only return one bit of info: "bool tx_is_invalid_script_failed"; every
> other bit of information -- how much is paid in fees (cf ethereum gas
> calculations), when the tx is final, if the tx is only valid in some
> chain fork, if other txs have to have already been mined / can't have
> been mined, who loses funds and who gets funds, etc... -- should already
> be obvious from a "simple" parsing of the tx.
> I don't think we have this property as is.
> E.g. consider the transaction:
> TX:
>    locktime: None
>    sequence: 100
>    scriptpubkey: 101 CSV

That tx will never be valid, no matter the state of the chain -- even if
it's 420 blocks after the utxo it's spending: it fails because "top stack
item is greater than the transaction input sequence" rule from BIP 112.
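
A much-simplified Python sketch of that rule (my own helper, ignoring
BIP 112's disable flag and the type/units bits):

  def csv_ok(stack_top, input_sequence):
      # BIP 112: fail if the top stack item exceeds the input's nSequence
      return stack_top <= input_sequence

  assert not csv_ok(101, 100)   # "101 CSV" with nSequence=100: never valid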

> How will you tell it is able to be included without running the script?

You have to run the script at some point, but you don't need to run the
script to differentiate between it being valid on one chain vs valid on
some other chain.

> What's nice is the transaction in this form cannot go from invalid to valid --
> once invalid it is always invalid for a given UTXO.

Huh? Timelocks always go from invalid to valid -- they're invalid prior
to some block height (IsFinal() returns false), then valid after.

Not going from valid to invalid is valuable because it limits the cases
where you have to remove txs (and their descendents) from the mempool.

Cheers,
aj



Re: [bitcoin-dev] [Lightning-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-11 Thread Anthony Towns via bitcoin-dev
On Thu, Jul 08, 2021 at 08:48:14AM -0700, Jeremy wrote:
> This would disallow using a relative locktime and an absolute locktime
> for the same input. I don't think I've seen a use case for that so far,
> but ruling it out seems suboptimal.
> I think you meant disallowing a relative locktime and a sequence locktime? I
> agree it is suboptimal.

No? If you overload the nSequence for a per-input absolute locktime
(well in the past for eltoo), then you can't reuse the same input's
nSequence for a per-input relative locktime (ie CSV).

Apparently I have thought of a use for it now -- cut-through of PTLC
refunds when the timeout expires well after the channel settlement delay
has passed. (You want a signature that's valid after a relative locktime
of the delay and after the absolute timeout)

> What do you make of sequence tagged keys?

I think we want sequencing restrictions to be obvious from some (simple)
combination of nlocktime/nsequence/annex so that you don't have to
evaluate scripts/signatures in order to determine if a transaction
is final.

Perhaps there's a more general principle -- evaluating a script should
only return one bit of info: "bool tx_is_invalid_script_failed"; every
other bit of information -- how much is paid in fees (cf ethereum gas
calculations), when the tx is final, if the tx is only valid in some
chain fork, if other txs have to have already been mined / can't have
been mined, who loses funds and who gets funds, etc... -- should already
be obvious from a "simple" parsing of the tx.

Cheers,
aj



Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-07-09 Thread Anthony Towns via bitcoin-dev
On Fri, Jul 09, 2021 at 09:19:45AM -0400, Antoine Riard via bitcoin-dev wrote:
> > The easy way to avoid O(n^2) behaviour in (3) is to disallow partial
> > overlaps. So let's treat the tx as being distinct bundles of x-inputs
> > and y-outputs, and we'll use the annex for grouping, since that is
> > committed to by signatures. Call the annex field "sig_group_count".
> > When processing inputs, setup a new state pair, (start, end), initially
> > (0,0).
> > When evaluating an input, lookup sig_group_count. If it's not present,
> > then set start := end. If it's present and 0, leave start and end
> > unchanged. Otherwise, if it's present and greater than 0, set
> > start := end, and then set end := start + sig_group_count.
> IIUC the design rationale, the "sig_group_count" lockdowns the hashing of
> outputs for a given input, thus allowing midstate reuse across signatures
> input.

No midstates, the message being signed would just replace
SIGHASH_SINGLE's:

  sha_single_output: the SHA256 of the corresponding output in CTxOut
  format

with

  sha_group_outputs: the SHA256 of the serialization of the group
  outputs in CTxOut format.

ie, you'd take span{start,end}, serialize it (same as if it were
a vector of just those CTxOuts), and sha256 it.
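
Something like this in Python (helper names mine; the count prefix on
the vector is what commits to the number of elements):

  import hashlib, struct

  def compact_size(n):
      # Bitcoin's variable-length integer encoding
      if n < 253:
          return bytes([n])
      if n < 0x10000:
          return b"\xfd" + struct.pack("<H", n)
      if n < 0x100000000:
          return b"\xfe" + struct.pack("<I", n)
      return b"\xff" + struct.pack("<Q", n)

  def ser_txout(value_sat, script_pubkey):
      # CTxOut: 8-byte LE amount, then length-prefixed scriptPubKey
      return (struct.pack("<q", value_sat)
              + compact_size(len(script_pubkey)) + script_pubkey)

  def sha_group_outputs(outputs, start, end):
      # serialize outputs[start:end] as a standalone vector, then SHA256
      span = outputs[start:end]
      data = compact_size(len(span)) + b"".join(ser_txout(v, s) for v, s in span)
      return hashlib.sha256(data).digest()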

> Let's say you want to combine {x_1, y_1} and {x_2, y_2} where {x, y} denotes
> bundles of Lightning commitment transactions.
> x_1 is dual-signed by Alice and Bob under the SIGHASH_GROUP flag with
> `sig_group_count`=3.
> x_2 is dual-signed by Alice and Caroll under the SIGHASH_GROUP flag, with
> `sig_group_count`=2.
> y_1 and y_2 are disjunctive.
> At broadcast, Alice is not able to combine {x_1,y_1} and {x_2, y_2} for the
> reason that x_1, x_2 are colliding on the absolute output position.

So the sha256 of the span of the group doesn't commit to start and end
-- it just serializes a vector, so commits to the number of elements,
the order, and the elements themselves. So you're taking serialize(y_1)
and serialize(y_2), and each of x_1 signs against the former, and each
of x_2 signs against the latter.

(Note that the annex for x_1_0 specifies sig_group_count=len(y_1)
and the annex for x_1_{1..} specifies sig_group_count=0, for "reuse
previous input's group", and the signatures for each input commit to
the annex anyway)

> One fix could be to skip the "end > num_outputs" semantic,

That's only there to ensure the span doesn't go out of range, so I don't
think it makes any sense to skip it?

> I think this SIGHASH_GROUP proposal might solve other use-cases, but if I
> understand the semantics correctly, it doesn't seem to achieve the batch
> fee-bumping of multiple Lightning commitment with O(1) onchain footprint I was
> thinking of for IOMAP...

Does the above resolve that?

Cheers,
aj



Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-07-08 Thread Anthony Towns via bitcoin-dev
On Thu, May 27, 2021 at 04:14:13PM -0400, Antoine Riard via bitcoin-dev wrote:
> This overhead could be smoothed even further in the future with more advanced
> sighash malleability flags like SIGHASH_IOMAP, allowing transaction signers to
> commit to a map of inputs/outputs [2]. In the context of input-based, the
> overflowed fee value could be redirected to an outgoing output.

> Input-based (SIGHASH_ANYPREVOUT+SIGHASH_IOMAP): Multiple chains of transactions
> might be aggregated together *non-interactively*. One bumping input and
> outgoing output can be attached to the aggregated root.

> [2] https://bitcointalk.org/index.php?topic=252960.0

I haven't seen any recent specs for "IOMAP", but there are a few things
that have bugged me about them in the past:

 (1) allowing partially overlapping sets of outputs could allow "theft",
 eg if I give you a signature "you can spend A+B as long as I get X"
 and "you can spend A+C as long as I get X", you could combine them
 to spend A+B+C instead but still only give me 1 X.

 (2) a range specification or a whole bitfield is a lot heavier than an
 extra bit to add to the sighash

 (3) this lets you specify lots of different ways of hashing the
 outputs, which then can't be cached, so you get kind-of quadratic
 behaviour -- O(n^2/8) where n/2 is the size of the inputs, which
 gives you the number of signatures, and n/2 is also the size of the
 outputs, so n/4 is a different half of the output selected for each
 signature in the input.

But under the "don't bring me problems, bring me solutions" banner,
here's an idea.

The easy way to avoid O(n^2) behaviour in (3) is to disallow partial
overlaps. So let's treat the tx as being distinct bundles of x-inputs
and y-outputs, and we'll use the annex for grouping, since that is
committed to by signatures. Call the annex field "sig_group_count".

When processing inputs, setup a new state pair, (start, end), initially
(0,0).

When evaluating an input, lookup sig_group_count. If it's not present,
then set start := end. If it's present and 0, leave start and end
unchanged. Otherwise, if it's present and greater than 0, set
start := end, and then set end := start + sig_group_count.

Introduce a new SIGHASH_GROUP flag, as an alternative to ALL/SINGLE/NONE,
that commits to each output i, start <= i < end. If start==end or end >
num_outputs, signature is invalid.
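
In Python, that state machine reads (a sketch; annex_counts[i] is input
i's sig_group_count annex field, or None when absent):

  def sig_groups(annex_counts, num_outputs):
      start = end = 0
      groups = []
      for count in annex_counts:
          if count is None:
              start = end               # not present: start := end
          elif count > 0:
              start = end               # open a new group of `count` outputs
              end = start + count
          # count == 0: reuse the previous input's (start, end) unchanged
          if start == end or end > num_outputs:
              groups.append(None)       # a SIGHASH_GROUP sig here is invalid
          else:
              groups.append((start, end))
      return groups

  # two inputs spending to a shared 3-output group: first declares 3, then 0
  assert sig_groups([3, 0], 3) == [(0, 3), (0, 3)]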

That means each output in a tx could be hashed three times instead of
twice (once for its particular group, as well as once for SIGHASH_ALL
and once for SIGHASH_SINGLE), and I think would let you combine x-input
and y-outputs fairly safely, by having the first input commit to "y"
in the annex, and the remaining x-1 commit to "0".

That does mean if you have two different sets of inputs (x1 and x2)
each spending to the exact same set of y outputs, you could claim all
but one of them while only paying a single set of y outputs. But you
could include an "OP_RETURN hash(x1)" tapleaf branch in one of the y
outputs to ensure the outputs aren't precisely the same to avoid that
problem, so maybe that's fine?

Okay, now that I've written and re-written that a couple of times,
it looks like I'm just reinventing Rusty's signature bundles from 2018:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-April/015862.html

(though at least I think using the annex is probably an improvement on
having values that affect other inputs being buried deeper in an input's
witness data)



Without something like this, I think it will be very hard to incorporate
fees into eltoo with layered commitments [0]. As a new sighash mode it
would make sense to include it as part of ANYPREVOUT to avoid introducing
many new "unknown key types".

[0] https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-January/002448.html
also, https://www.erisian.com.au/lightning-dev/log-2021-07-08.html

Cheers,
aj



Re: [bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-08 Thread Anthony Towns via bitcoin-dev
On Wed, Jul 07, 2021 at 06:00:20PM -0700, Jeremy via bitcoin-dev wrote:
> This means that you're overloading the CLTV clause, which means it's impossible
> to use Eltoo and use an absolute lock time,

It's already impossible to simultaneously spend two inputs if one
requires a locktime specified by mediantime and the other by block
height. Having per-input locktimes would satisfy both concerns.

> 1) Define a new CSV type (e.g. define (1<<31 && 1<<30) as being dedicated to
> eltoo sequences). This has the benefit of giving a per input sequence, but the
> drawback of using a CSV bit. Because there's only 1 CSV per input, this
> technique cannot be used with a sequence tag.

This would disallow using a relative locktime and an absolute locktime
for the same input. I don't think I've seen a use case for that so far,
but ruling it out seems suboptimal.

Adding a per-input absolute locktime to the annex is what I've had in
mind. That could also be used to cheaply add a commitment to an historical
block hash (eg "the block at height 650,000 ended in cc6a") in order to
disambiguate which branch of a chain split or reorg your tx is valid for.

Cheers,
aj



Re: [bitcoin-dev] Unlimited covenants, was Re: CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-04 Thread Anthony Towns via bitcoin-dev
On Sun, Jul 04, 2021 at 09:02:25PM -0400, Russell O'Connor via bitcoin-dev wrote:
> Bear in mind that when people are talking about enabling covenants, we are
> talking about whether OP_CAT should be allowed or not.

In some sense multisig *alone* enables recursive covenants: a government
that wants to enforce KYC can require that funds be deposited into
a multisig of "2 <recipient> <gov_key> 2 CHECKMULTISIG", and that
"recipient" has gone through KYC. Once deposited to such an address,
the gov can refuse to sign with gov_key unless the funds are being spent
to a new address that follows the same rules.

(That's also more efficient than an explicit covenant since it's all
off-chain -- recipient/gov_key can jointly sign via taproot/MuSig at
that point, so that full nodes are only validating a single pubkey and
signature per spend, rather than having to do analysis of whatever the
underlying covenant is supposed to be [0])

This is essentially what Liquid already does -- it locks bitcoins into
a multisig and enforces an "off-chain" covenant that those bitcoins can
only be redeemed after some valid set of signatures are entered into
the Liquid blockchain. Likewise for the various BTC-on-Ethereum tokens.
To some extent, likewise for coins held in exchanges/custodial wallets
where funds can be transferred between customers off-chain.

You can "escape" from that recursive covenant by having the government
(or Liquid functionaries, or exchange admins) change their signing
policy of course; but you could equally escape any consensus-enforced
covenant by having a hard fork to stop doing consensus-enforcement (cf
ETH Classic?). To me, that looks more like a difference of procedure
and difficulty, rather than a fundamental difference in kind.

Cheers,
aj

[0] https://twitter.com/pwuille/status/1411533549224693762



Re: [bitcoin-dev] Gradual transition to an alternate proof without a hard fork.

2021-04-17 Thread Anthony Towns via bitcoin-dev
On Fri, Apr 16, 2021 at 04:48:35PM -0400, Erik Aronesty via bitcoin-dev wrote:
> The transition *could* look like this:
>  - validating nodes begin to require proof-of-burn, in addition to
> proof-of-work (soft fork)
>  - the extra expense makes it more expensive for miners, so POW slowly drops
>  - on a predefined schedule, POB required is increased to 100% of the
> "required work" to mine
> Given all of that, am I correct in thinking that a hard fork would not
> be necessary?

It depends what you mean by a "hard fork". By the definition that
"the old software will consider the chain followed by new versions of
the software as valid" it's a soft fork. But it doesn't maintain the
property "old software continues to follow the same chain as new software,
provided the economic majority has adopted the new software" -- because
after the PoW part has dropped its difficulty substantially, people can
easily/cheaply make a new chain that doesn't include proof-of-burn, and
has weak proof-of-work that's nevertheless higher than the proof-of-burn
chain, so old nodes will switch to it, while new nodes will continue to
follow the proof-of-burn chain.

So I think that means it needs to be treated as a hard fork: everyone
needs to be running the new software by some date to ensure they follow
the same chain.

(The same argument applies to trying to switch to a different PoW
algorithm via a soft fork; I forget who explained this to me)

Jeremy wrote:
> I think you need to hard deprecate the PoW for this to work, otherwise
> all old miners are like "toxic waste".
>
> Imagine one miner turns on a S9 and then ramps up difficulty for
> everyone else.

If it's a soft-fork, you could only ramp up the PoW difficulty by mining
more than one block every ten minutes, but presumably the proof-of-burn
scheme would have its own way of preventing burners from mining blocks
too fast (it was assumption 2).

Cheers,
aj



Re: [bitcoin-dev] March 23rd 2021 Taproot Activation Meeting Notes

2021-04-08 Thread Anthony Towns via bitcoin-dev
On Wed, Apr 07, 2021 at 02:31:13PM +0930, Rusty Russell via bitcoin-dev wrote:
> >> It's totally a political approach, to avoid facing the awkward question.
> >> Since I believe that such prevaricating makes a future crisis less
> >> predictable, I am forced to conclude that it makes bitcoin less robust.
> > LOT=true does face the awkward question, but there are downsides:
> >   - in the requirement to drop blocks from apathetic miners (although
> > as Luke-Jr pointed out in a previous reply on this list they have
> > no contract under which to raise a complaint); and
> Surely, yes.  If the users of bitcoin decide blocks are invalid, they're
> invalid.

That's begging the question though -- yes, if _everyone_ decides bitcoin
works such-n-such a way, then there's no debate. But that's trivial:
who's left to debate, when everyone agrees?

On the otherhand, if people disagree with you, who's to say they're in
the minority and "the users" are on your side?

> With a year's warning, and developer and user consensus
> against them, I think we've reached the limits of acceptable miner
> apathy.

The question is "how do you establish developer and user consensus?"

In particular, if you're running a business accepting payments via
"bitcoin", how do you know what software to run to stay in consensus
with everyone else running bitcoin, so you know the payments you receive
are good?

Ideally, we try to make the answer to that trivial: just download any
version of bitcoind and run it with the default configuration. More
recent (supported) versions are better due to potential security fixes
and performance improvements, of course.

> >   - in the risk of a chain split, should gauging economic majority
> > support - which there is zero intrinsic tooling for - go poorly.
> Agreed that we should definitely do better here: in practice people
> would rely on third party explorers for information on the other side of
> the split.  Tracking the cumulative work on invalid chains would be a
> good idea for bitcoind in general (AJ suggested this, IIRC).

Those measures are only useful *after* there's been a chain split. I'm
certainly in favour of better protections like that -- adversarial
thinking, prepper-ism, whatever -- but we should be trying really hard to
avoid ending up in that situation; and even better to avoid even ending
up *risking* that situation.

> Again, openly creating a contingency plan is not brinkmanship,

I think the word "brinkmanship" is being a bit overused in this thread...

lockinontimeout is designed for a chain split -- its only action is
to ignore one side of a split should it occur. That's not useless --
splitting the chain is a plausible scenario in the event of someone
dedicating something like $200M+ per week to attacking bitcoin, and we
should have contingencies in place for that sort of thing.

But it's like carrying a gun around -- yeah, there are times when that
might be helpful for self-protection or to put a tyrant into the ground;
but putting it down on the table every time you sit down for a coffee*
and tapping it and saying "look, I'm sure you'll do the right thing and
serve me properly and I'll leave happy and give you a big tip; this is
just a contingency plan" isn't super great.

And even then, lockinontimeout isn't really a very *good* contingency
plan in the event of a chain split: if your side of the split isn't
in the majority, you're relying on the other side -- the one with all
the money -- being stupid and not having a dontlockinever=yes option to
protect them from wipeout, and without a hardfork to change proof-of-work
or the difficulty adjustment, you'll have enormous difficulties getting
blocks at all.

* The only thing worth spending bitcoin on.

> I think we should be normalizing the understanding that bitcoin users
> are the ultimate decider.

Yes. 

What we shouldn't be normalising is that the way users decide is by
risking their business by having their node reject blocks and hoping
that everyone else will also reject the same set of blocks.

(After all, businesses handling lots of bitcoin being willing to force
the issue via running node software that rejects "invalid" blocks,
was the whole plan for making s2x a fait accompli...)

I've written up what I believe is a better approach to dealing with
the possibility of miners not upgrading to enforce a soft-fork quickly
here:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018723.html

I believe it would be straightforward to implement that after a failed
speedy trial; technically anyway.

Cheers,
aj



Re: [bitcoin-dev] Taproot Activation Meeting Reminder: April 6th 19:00 UTC bitcoin/bitcoin-dev

2021-04-06 Thread Anthony Towns via bitcoin-dev
On Tue, Apr 06, 2021 at 01:17:58PM -0400, Russell O'Connor via bitcoin-dev 
wrote:
> Not only that, but the "time_until_next_retargeting_period" is a variable 
> whose
> distribution could straddle between 0 days and 14 days should the
> MIN_LOCKIN_TIME end up close to a retargeting boundary.

As noted in [0] the observed time frame of a single retarget period
over the last few years on mainnet is 11-17 days, so if LOCKED_IN is
determined by a min lock in time, then activation should be expected to
occur between 11 days (if the min lock in time is reached just prior to
a retarget boundary and the next period is short) and 34 days (if the
min lock in time is reached just after the retarget boundary and both
that period and the following one are long).

[0] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018728.html 
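
To make the arithmetic behind that bound explicit, here's a minimal
sketch in Python using the observed 11-17 day range above (illustrative
only):

  SHORTEST, LONGEST = 11, 17  # observed mainnet retarget period, in days

  # best case: the min lock in time is reached just before a retarget
  # boundary, so LOCKED_IN starts immediately and the next period is short
  earliest = SHORTEST

  # worst case: reached just after a boundary, so nearly a full long
  # period passes before LOCKED_IN, followed by another long period
  latest = LONGEST + LONGEST

  print(earliest, latest)  # 11 34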

> MTP risks having a
> persistent two week error in estimating the activation time (which is the time
> that nodes need to strive to be upgraded)

That's a range of 16 days, consistently after the time that's specified,
and which cannot be brought forward even if miners were to attempt a
timewarp attack.

That compares to the height based approach, which gives a similar error
for the 7 period / 3 month target, and larger errors for anything longer,
and which can be both earlier or later in attack scenarios. The errors
are worse if you consider times prior to the 2015 cut-off I used, but
hopefully that's because of the switch to ASICs and won't be repeated?

> which may not be resolved until only
> two weeks prior to activation!  If MIN_LOCKIN_TIME ends up close to a
> retargeting boundary, then the MTP estimate becomes bimodal and provides much
> worse estimates than provided by height based activation, just as we are
> approaching the important 4 weeks (or is it 2 weeks?) prior to activation!

That doesn't seem like a particularly important design goal to me? Having
a last minute two week delay seems easy to deal with, while having to
make estimates of how many blocks we might have in an X month period
X+K months in advance is not. If it were important, I expect we could
change the state machine to reflect that goal and make the limit tighter
(in non-attack scenarios).

> The short of it is that MTP LOCKIN only really guarantees a minimum 2 week
> notice prior to activation,

If the timeout is at X and MTP min lockin is at X+Y then you guarantee
a notice period of at least Y + (1 retarget period).

Cheers,
aj



Re: [bitcoin-dev] Taproot Activation Meeting Reminder: April 6th 19:00 UTC bitcoin/bitcoin-dev

2021-04-05 Thread Anthony Towns via bitcoin-dev
On Sat, Apr 03, 2021 at 09:39:11PM -0700, Jeremy via bitcoin-dev wrote:
> As such, the main conversation in this agenda item is
> around the pros/cons of height or MTP and determining if we can reach 
> consensus
> on either approach.

Here's some numbers.

Given a desired signalling period of xxx days, where signaling begins
on the first retarget boundary after the starttime and ends on the last
retarget boundary before the endtime, this is how many retarget periods
you get (based on blocks since 2015-01-01):

 90 days: mainnet  5-7 full 2016-block retarget periods
180 days: mainnet 11-14
365 days: mainnet 25-27
730 days: mainnet 51-55

(This applies to non-signalling periods like the activation/lock in delay
too of course. If you change it so that it ends at the first retarget
period after endtime, all the values just get incremented -- ie, 6-8,
12-15 etc)

If I've got the maths right, then requiring 1814 of 2016 blocks to signal
means that having 7 periods instead of 5 lets you get a 50% chance of
successful activation by maintaining 89.04% of hashpower over the entire
period instead of 89.17%, while 55 periods instead of 51 gives you a 50%
chance of success with 88.38% hashpower instead of 88.40% hashpower.
So the "repeated trials" part doesn't look like it has any significant
effect on mainnet.
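
For anyone wanting to check those figures, here's a rough sketch of the
calculation in Python (assuming blocks are found independently, with a
constant share p of hashpower signalling; uses scipy):

  from scipy.stats import binom

  def success_prob(p, periods, threshold=1814, blocks=2016):
      # chance that at least one of `periods` periods has >= threshold
      # signalling blocks, when each block signals with probability p
      q = binom.sf(threshold - 1, blocks, p)
      return 1 - (1 - q) ** periods

  def hashpower_needed(periods, target=0.5):
      lo, hi = 0.5, 1.0  # bisect for the p giving success_prob == target
      for _ in range(50):
          mid = (lo + hi) / 2
          lo, hi = (mid, hi) if success_prob(mid, periods) < target else (lo, mid)
      return hi

  for n in (5, 7, 51, 55):
      print("%2d periods: %.2f%%" % (n, 100 * hashpower_needed(n)))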

If you target yy periods instead of xxx days, starting and ending on a
retarget boundary, you get the following stats from the last few years
of mainnet (again starting at 2015-01-01):

 1 period:  mainnet 11-17 days (range 5.2 days)
 7 periods: mainnet 87-103 days (range 15.4 days)
13 periods: mainnet 166-185 days (range 17.9 days)
27 periods: mainnet 352-377 days (range 24.4 days)
54 periods: mainnet 711-747 days (range 35.0 days)

As far as I can see the questions that matter are:

 * is signalling still possible by the time enough miners have upgraded
   and are ready to start signalling?

 * have nodes upgraded to enforce the new rules by the time activation
   occurs, if it occurs?

But both those benefit from less real time variance, rather than less
variance in the numbers of signalling periods, at least in every way
that I can think of.

Corresponding numbers for testnet:

 90 days: testnet   5-85
180 days: testnet  23-131
365 days: testnet  70-224
730 days: testnet 176-390

(A 50% chance of activating within 5 periods requires sustaining 89.18%
hashpower; within 85 periods, 88.26% hashpower; far smaller differences
with all the other ranges -- of course, presumably the only way the
higher block rates ever actually happen is by someone pointing an ASIC at
testnet, and thus controlling 100% of blocks for multiple periods anyway)

  1 period:  testnet 5.6 minutes-26 days (range 26.5 days)
 13 periods: testnet 1-135 days (range 133.5 days)
 27 periods: testnet 13-192 days (range 178.3 days)
 54 periods: testnet 39-283 days (range 243.1 days)
100 periods: testnet 114-476 days (range 360.9 days)
 (this is the value used in [0] in order to ensure 3 months'
  worth of signalling is available)
132 periods: testnet 184-583 days (range 398.1 days)
225 periods: testnet 365-877 days (range 510.7 days)
390 periods: testnet 725-1403 days (range 677.1 days)

[0] https://github.com/bitcoin/bips/pull/1081#pullrequestreview-621934640

Cheers,
aj



Re: [bitcoin-dev] Making the case for flag day activation of taproot

2021-03-29 Thread Anthony Towns via bitcoin-dev
On Wed, Mar 03, 2021 at 02:08:21PM -0500, Russell O'Connor via bitcoin-dev 
wrote:
> While I support essentially any proposed taproot activation method, including 
> a
> flag day activation, I think it is premature to call BIP8 dead.
>
> Even today, I still think that starting with BIP8 LOT=false is, generally
> speaking, considered a reasonably safe activation method in the sense that I
> think it will be widely considered as a "not wholly unacceptable" approach to
> activation.

If everyone started with lot=false, that would be true -- that was how
segwit then BIP148/BIP91 worked, and was at least how I had been assuming
people were going to make use of the new improved BIP8.

But it seems more likely that when activation starts, even if many
people run lot=false, many other people will run lot=true: see Luke's
arguments on this list [0], Rusty's campaign for a lot=true option [1],
the ##uasf channel (initial topic: Development of a Taproot activation
implementation with default LOT=true) [2], or Samson's hat designs [3].

[0] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html
[1] https://rusty.ozlabs.org/?p=628

https://gist.github.com/michaelfolkson/92899f27f1ab30aa2ebee82314f8fe7f#gistcomment-3667311
[2] http://gnusha.org/uasf/
[3] https://twitter.com/Excellion/status/1363751091460956167

With lot=false and lot=true nodes active on the network, a chain split
occurs if the activation is blocked: perhaps that might happen for good
reasons, eg because there are implementation bugs found after activation
settings are merged but prior to activation locking in, or perhaps for
neutral or bad reasons that cause miners to be opposed.

In the event of a huge conflict or emergency as bad as or worse than
occurred with segwit, a chain split might well be the lesser evil
compared to the activation being blocked or delayed substantially;
but as the default scenario for every consensus change, before we know if
someone might find a problem while there's still time to back out, and
before we know if there is going to be a huge conflict? It doesn't seem
as focussed on safety as I'd expect from bitcoin.

> After a normal and successful Core update with LOT=false, we will have more
> data showing broad community support for the taproot upgrade in hand.  In the
> next release, 6 months later or so, Core could then confidently deploy a BIP8
> LOT=true client, should it prove to be necessary.

BIP8/lot=true is great for the situation we found ourselves in with
BIP91/BIP148: there's an activation underway, that is being unexpectedly
opposed, we want to ensure it activates, *and* we don't want to have
everyone have to do a new round of software upgrades.

But if we're able to calmly put out a new release with new activation
parameters, with enough time before it takes effect that everyone can
reasonably upgrade to it, that's a pretty different situation.

First, since we're giving everyone time to upgrade, it can be a clean
secondary deployment -- there's no need to try to drag the original
lot=false users along -- we're giving everyone time to upgrade, and
everyone includes them.

Second, lot=true implies we're doing some unsignalled consensus change
anyway -- blocks would be considered invalid if they don't signal for
activation as of some height, with no prior on-chain signalling that
that rule change has occurred. So if we're allowing that, there's no
principle that requires signalling to lock in the change we're actually
trying to activate.

So at that point why not simply reverse the burden of proof entirely? Run
an initial "speedy trial" type event that allows fast activation via miner
enforcement prior to everyone having upgraded; and if that fails but there
haven't been valid reasonable objections to activation identified, run
a secondary activation attempt that instead of requiring 90% signalling
to activate, requires 90% signalling to *fail*.

Miners not upgrading is then not a blocker, and even if a few miners are
buggy and signal accidentally, that doesn't cause a risk to them of having
their blocks dropped, or to the network of having the upgrade blocked.

If there are good reasons to object to the fork being activated that are
discovered/revealed (very) late, this also provides an opportunity to
cleanly cancel the activation by convincing miners that the activation
is undesirable and having them agree to signal for cancellation (via
BIP-91 style coordination or even BIP-148 style mandatory signalling, eg).

And, on the other hand, if 90% of miners are opposed to a soft fork for
some selfish or bad reason, with that much hashpower they could easily do
a 51% attack on the network after activation to prevent any new features
being used, so cancelling activation in that case is probably not worse
in practice than it would be to continue with it despite the opposition.

I think a state machine along the lines of:

  DEFINED - for 6 months after code release
  STARTED - exactly 1 period
  

Re: [bitcoin-dev] March 23rd 2021 Taproot Activation Meeting Notes

2021-03-25 Thread Anthony Towns via bitcoin-dev
On Tue, Mar 23, 2021 at 08:46:54PM -0700, Jeremy via bitcoin-dev wrote:
> 3. Parameter Selection
> - There is broad agreement that we should target something like a May 1st
>   release, with 1 week from rc1 starttime/startheight,
>   and 3 months + delta stoptime/stopheight (6 retargetting periods), and an
>   activation time of around Nov 15th.

I'd thought the idea was to release mid-late April, targeting a starttime
of May 1st.

> - If we select heights, it looks like the first signalling period would be
>   683424, the stop height would be 695520.

> - If we select times, we should target a mid-period MTP. We can shift this
> closer to release, but currently it looks like a 1300 UTC May 7th start time 
> and stop time would be 1300 UTC August 13th.

We've traditionally done starttime/timeout at midnight UTC, seems weird
to change. Oh, or is it a Friday-the-13th, let's have 13s everywhere
thing?

Anyway, block 695520 is about 19440 blocks away, which we'd expect to be
135 days, but over the past two years, 19440 blocks prior to a retarget
boundary has been between 127 (-8) days and 137 (+2) days, and in the
last four years, it's been between 121 (-14) days and 139 (+4) days. [0]

So given block 676080 had mediantime 1616578564, I think picking a
mediantime no later than ~139 days later, ie 1628640000 (00:00 UTC
11 Aug) would be the most likely to result in MTP logic matching the
height parameters above, and a day or two earlier still might be better.
(It will match provided MTP passes the timeout at any block in the range
[695520, 697535])
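
For reference, a quick Python check of the timestamp arithmetic (using
the mediantime quoted above):

  from datetime import datetime, timezone

  utc = lambda t: datetime.fromtimestamp(t, tz=timezone.utc)
  mtp_676080 = 1616578564  # mediantime of block 676080, per above

  print(utc(mtp_676080))              # 2021-03-24 09:36:04+00:00
  print(utc(mtp_676080 + 139*86400))  # 2021-08-10 09:36:04+00:00
  print(utc(1628640000))              # 2021-08-11 00:00:00+00:00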

> (please check my math, if anyone is superstitious we can add a day to 
> times...)

It looks to me like blocks are more likely to arrive earlier than later
(which is what we'd expect with increasing hashrate), fwiw, so adding
days would be more likely to cause MTP to have more signalling periods
than height-based, rather than avoid having fewer.

Cheers,
aj

[0] $ for b in `seq 201600 2016 676000`; do
        a=$(($b-19440))
        b_mtp=$(bitcoin-cli getblockheader $(bitcoin-cli getblockhash $b) | grep mediantime | cut -d' ' -f4 | tr -d ,)
        a_mtp=$(bitcoin-cli getblockheader $(bitcoin-cli getblockhash $a) | grep mediantime | cut -d' ' -f4 | tr -d ,)
        echo $(($b_mtp - $a_mtp))
      done


Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-03-15 Thread Anthony Towns via bitcoin-dev
On Tue, Mar 16, 2021 at 08:01:47AM +0900, Karl-Johan Alm via bitcoin-dev wrote:
> It may initially take months to break a single key. 

From what I understand, the constraint on using quantum techniques to
break an ECC key is on the number of bits you can entangle and how long
you can keep them coherent -- but those are both essentially thresholds:
you can't use two quantum computers that support a lower number of bits
when you need a higher number, and you can't reuse the state you reached
after you collapsed halfway through to make the next run shorter.

I think that means that having a break take a longer time requires maintaining
the quantum state for longer, which is *harder* than having it happen
quicker...

So I think the only way you get it taking substantial amounts of time to
break a key is if your quantum attack works quickly but very unreliably:
maybe it takes a minute to reset, and every attempt only has probability
p of succeeding (ie, random probability of managing to maintain the
quantum state until completion of the dlog algorithm), so over t minutes
you end up with probability 1-(1-p)^t of success.

For 50% odds after 1 month with 1 minute per attempt, you'd need a 0.0016%
chance per attempt; for 50% odds after 1 day, you'd need a 0.048% chance per
attempt. But those odds assume you've only got one QC making the attempts
-- if you've got 30, you can make a month's worth of attempts in a day;
if you scale up to 720, you can make a month's worth of attempts in an
hour, ie once you've got one, it's a fairly straightforward engineering
challenge at that point.
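
A quick check of those per-attempt probabilities (a sketch, assuming one
attempt per minute and independent attempts):

  def per_attempt(target, minutes):
      # solve 1 - (1-p)**minutes = target for p
      return 1 - (1 - target) ** (1 / minutes)

  print("%.4f%%" % (100 * per_attempt(0.5, 30 * 24 * 60)))  # ~0.0016% (1 month)
  print("%.3f%%" % (100 * per_attempt(0.5, 24 * 60)))       # ~0.048% (1 day)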

So a "slow" attack simply doesn't seem likely to me. YMMV, obviously.

Cheers,
aj



Re: [bitcoin-dev] Straight Flag Day (Height) Taproot Activation

2021-03-06 Thread Anthony Towns via bitcoin-dev
On Wed, Mar 03, 2021 at 11:49:57AM -0500, Matt Corallo wrote:
> On 3/3/21 09:59, Anthony Towns wrote:
> > A couple of days ago I would have disagreed with this; but with Luke
> > now strongly pushing against implementing lot=false, I can at least see
> > your point...
> Right. It may be the case that the minority group threatening to fork off
> onto a lot=true chain is not large enough to give a second thought to.
> However, I'm not certain that its worth the risk, and, as Chris noted in his
> post this morning, that approach is likely to include more drama.

I think there's two different interpretations of what a "user-activated
fork" means:

 1) if people try to take bitcoin in a direction that risks destroying
it, it's possible to ignore both devs and hashpower entirely and force
a chain split to ensure it's possible to continue transacting with
"the real bitcoin".

 2) removing miners' influence over consensus rules entirely -- so that
not only can users overcome miner objections by risking a chain split,
but so that miners don't have any greater ability to object than
anyone else in the ecosystem.

In my opinion, bip8's optional lockinontimeout setting and must-signal
approach is well-designed for case 1; if miners object for good reasons,
then there is no need to override them (if there's a good reason not to do
something, it shouldn't be done!), while still having the possibility to
override them if they object for bad reasons. Because hashpower disagrees,
there's always a risk of a chain split in that case, so the additional
risk introduced by a signalling requirement is pretty minimal. That the
lockinontimeout value is a setting means it can be switched only when
we're sure there aren't good reasons for the objection.

There is a lot of work to be done to make bitcoind have an acceptable
chance of gracefully *surviving* a network split introduced by this sort
of conflict; but provided no one started setting lockinontimeout=true
until we were six or so months into an activation attempt (and hence
had the opportunity to judge whether the reasons for not activating
were bad), that would likely be enough time to start implementing some
safety mechanisms.

But there seems to be much more significant support for case 2 than I
expected; as evidenced by the "let's do lockinontimeout=true immediately"
advocacy, eg:

  I am not willing to go to war for Taproot. I'll be honest the reason
  I'm interested at all is that devs I respect spent a lot of energy and
  time on it and I was reluctant to let their marginally beneficial work
  go to waste.

  I am, however, willing to go to war against LOT=False.

   -- https://twitter.com/francispouliot_/status/1363876292169465856

I don't think bip8 is well-designed for that approach: most importantly,
with early adoption of lockinontimeout=true, bip8 *encourages* a consensus
split in the event that good reasons for not activating are discovered
afterwards, because lockinontimeout=false nodes remain able to abandon
the activation attempt. Consensus splits are terrible; they should be
a last resort used only in the event that bitcoin's fundamental nature
is threatened, not a standard risk if bugs happened to be discovered
late. But additionally, if we are worried miners might not be acting
in the interests of all bitcoin users, there are other games they could
play, such as "if you want X activated quickly, also give us Y; otherwise
we'll delay it as long as possible".

Losing the opportunity to abandon an activation attempt, by whatever
mechanism, also puts a lot more pressure on being absolutely sure of the
desirability of the change at the point when it's merged; because miners,
third-party devs, businesses, and users don't even have the option of
attempting to influence miners, all objections need to be raised when
the activation parameters are merged, which raises the stakes for that
event substantially.

I think my conclusions from that are:

 * as it stands, people are expecting to run bip8/lot=true nodes on the
   network immediately; so deploying bip8/lot=false with compatible
   parameters risks causing consensus splits, and should not be done

 * David Harding's "speedy trial" approach probably doesn't suffer from
   the problems -- running a lot=true variant would require enforcing
   signalling prior to the end of July, which is an unreasonable timeframe
   to expect the majority of economic nodes to upgrade in; if bip9 is
   used, then the risk of enforcement occurring with minority hashrate
   (and thus having fewer retarget periods before the timeout is
   reached) would also make a bip148/lot=true variant difficult

 * if people want a "taproot is guaranteed to activate no later than X"
   PR merged, someone needs to do a *lot* more outreach to be sure that
   that's the right outcome, and it's not just devs/maintainers making
   the call

 * IMO, Matt's proposed approach is both a better and simpler approach
   to avoid 

Re: [bitcoin-dev] Taproot activation proposal "Speedy Trial"

2021-03-06 Thread Anthony Towns via bitcoin-dev
On Fri, Mar 05, 2021 at 05:43:43PM -1000, David A. Harding via bitcoin-dev 
wrote:
> ## Example timeline
> - T+0: release of one or more full nodes with activation code
> - T+14: signal tracking begins
> - T+28: earliest possible lock in
> - T+104: locked in by this date or need to try a different activation process
> - T+194: activation (if lockin occurred)

> ### Base activation protocol
> The idea can be implemented on top of either Bitcoin Core's existing
> BIP9 code or its proposed BIP8 patchset.[6]
> BIP9 is already part of Bitcoin Core and I think the changes being
> proposed would be relatively small, resulting in a small patch that
> could be easy to review.

To get to specifics, here's a PR, based on #21334, that updates bip9
to support an extra parameter to delay the transition from LOCKED_IN
to ACTIVE until a particular timestamp is reached, and to reduce the
activation threshold to 90%:

  https://github.com/bitcoin/bitcoin/pull/21377

With that in mind, I think the example timeline above could translate
to taproot parameters of:

  nStartTime = 1618358400; // April 14, 2021
  nTimeout = 1626220800; // July 14 2021
  activation_time = 1633046400; // October 1 2021

That is, signalling begins with the first retarget period whose parent's
median time is at least April 14th; and concludes with the last retarget
period whose final block's median time is prior to July 14th; that's
91 days, which should be ~6.5 retarget periods, so should cover 6
full retarget periods, but could only cover 5.  Activation is delayed
until the first retarget period where the final block of the previous
retarget period has a timestamp of at least October 1st.
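
For anyone double checking, those unix timestamps decode as:

  from datetime import datetime, timezone

  for t in (1618358400, 1626220800, 1633046400):
      print(t, datetime.fromtimestamp(t, tz=timezone.utc))
  # 1618358400 2021-04-14 00:00:00+00:00
  # 1626220800 2021-07-14 00:00:00+00:00
  # 1633046400 2021-10-01 00:00:00+00:00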

Note that the timeout there is prior to the expected timestamp of the
startheight block specified in the proposal for bip8 parameters:

  https://en.bitcoin.it/wiki/Taproot_activation_proposal_202102

and earliest activation is after the expected release of 22.0 and hence
the maintenance end of 0.20.

Note also that the PR above specifies the delay as a deadline, not a
delta between lockin and activation; so earlier lockin does not produce
an earlier activation with the code referenced above.

Cheers,
aj



Re: [bitcoin-dev] Straight Flag Day (Height) Taproot Activation

2021-03-03 Thread Anthony Towns via bitcoin-dev
On Sun, Feb 28, 2021 at 11:45:22AM -0500, Matt Corallo via bitcoin-dev wrote:
> Given this, it seems one way to keep the network in consensus would be to
> simply activate taproot through a traditional, no-frills, flag-day (or
> -height) activation with a flag day of roughly August, 2022. Going back to
> my criteria laid out in [1],

The timeout height proposed in:

 https://en.bitcoin.it/wiki/Taproot_activation_proposal_202102

is block 745920, so bip8/lockinontimeout=true with that param would ensure
activation by block 747936. That's 74,940 blocks away at present, which
would be ~6th August 2022 if the block interval averaged at 10 minutes.

So I think I'm going to treat this as reusing the same parameter, just
dropping the consensus-critical signalling and hence the possibility
of early activation.
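
Rough check of that date estimate (counting from this post's date, at an
exact 10 minute block interval):

  from datetime import datetime, timedelta, timezone

  blocks_away = 74940  # blocks until 747936, as of this post (2021-03-03)
  eta = datetime(2021, 3, 3, tzinfo=timezone.utc) \
          + timedelta(minutes=10 * blocks_away)
  print(eta)  # 2022-08-05 10:00:00+00:00, ie ~6th August 2022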

I believe this sort of unsignalled flag day could be implemented
fairly easily by merging PR #19438, adding "int TaprootHeight;" to
Consensus::Params, moving "DEPLOYMENT_TAPROOT" from DeploymentPos
to BuriedDeployment, adjusting DeploymentHeight(), and setting
TaprootHeight=747936 for MAINNET. Might need to add a config param like
"-segwitheight" for regtest in order to keep the tests working.

I think it would be worthwhile to also update getblocktemplate so that
miners signal uptake for something like three or four retarget periods
prior to activation, without that signalling having any consensus-level
effect. That should allow miners and businesses to adjust their
expectations for how much hashpower might not be enforcing taproot rules
when generating blocks -- potentially allowing miners to switch pools
to one running an up to date node, pools to reduce the amount of time
they spend mining on top of unvalidated headers, businesses to increase
confirmation requirements or prepare for the possibility of an increase
in invalid-block entries in their logs, etc.

> 2) The high node-level-adoption bar is one of the most critical goals, and
> the one most currently in jeopardy in a BIP 8 approach.

A couple of days ago I would have disagreed with this; but with Luke
now strongly pushing against implementing lot=false, I can at least see
your point...

Cheers,
aj



Re: [bitcoin-dev] Proposal for new "disabletx" p2p message

2021-03-02 Thread Anthony Towns via bitcoin-dev
On Mon, Mar 01, 2021 at 08:58:46PM +, John Newbery via bitcoin-dev wrote:
> We can increase the permitted number of inbound block-relay-only peers
> while minimizing resource requirement _and_ improving addr record
> propagation, without any changes to the p2p protocol required.

+1.

I found this diagram:

https://raw.githubusercontent.com/amitiuttarwar/bitcoin-notes/main/scale-block-relay-only.png

helpful for analysing the possibilities. The greyed-circles indicate
when one node doesn't need to send txs to the other node, and thus can
avoid allocating the tx relay data structures.

> I propose that for Bitcoin Core version 22.0:
> 
> - only initialize the transaction relay data structures after the
>   `version` message is received, and only if fRelay=true and
>   `NODE_BLOOM` is not offered on this connection.

The tx relay data structure should *always* be initialised if you offer
NODE_BLOOM services on the connection.

> - only initialize the addr data structures for inbound connections when
>   an `addr`, `addrv2` or `getaddr` message is received on the
>   connection, and only consider a connection for addr relay if its addr
>   data structures are initialized.

I think it's simpler to initialize the addr data structures for all
connections, and add a bool to register "we've received an ADDR message
from this peer, so will consider it for announcements". The data structure
is lightweight enough that this should be fine, I think.

I think the ideal rules are slightly more complicated:

Address relay:
  * do GETADDR on every outbound connection except block-relay-only
  * do self announcements on every connection, but only after having
    received an *ADDR* message of some kind

Set fRelay=false when:
  * running with -blocksonly=1
  * making a block-relay-only outbound connection
  * accepting an inbound connection but you're out of normal slots
and can only offer a lightweight slot

Tx relay:
  * inbound (you accept the connection):
+ if you advertised bloom services to the node, full tx relay always
+ accept inbound txs, unless you advertised fRelay=false
+ send outbound txs unless they specified fRelay=false,
  *or* you've run out of normal slots and can only offer a
  lightweight slot

  * outbound (you make the connection to someone else):
+ accept inbound txs, unless you advertised fRelay=false
+ send outbound txs if they didn't specify fRelay=false
  *and* you're not using them as a blocks-relay-only node

Note (because I keep getting them confused): the net.h TxRelay structure
needs to be initialised in order to *send* outbound txs; the txrequest.h
TxRequest structure is used for accepting inbound txs.

I think to support -blocksonly=1 nodes that want to relay their own wallet
transactions occasionally, assigning inbound peers that set fRelay=false
a *much* lower MAX_PEER_TX_ANNOUNCEMENTS (perhaps 10 or 20, instead of
5000) and not allocating TxRelay for them at all would ensure they're
both functional and lightweight.
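
As a sketch, the "Set fRelay=false when" rules above amount to something
like the following (a hypothetical helper, not Bitcoin Core code):

  def set_frelay_false(blocksonly: bool, block_relay_only: bool,
                       inbound_only_lightweight_slot: bool) -> bool:
      # advertise fRelay=false when running -blocksonly=1, when making a
      # block-relay-only outbound connection, or when an inbound peer can
      # only be offered a lightweight slot
      return blocksonly or block_relay_only or inbound_only_lightweight_slot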

Cheers,
aj



Re: [bitcoin-dev] LOT=False is dangerous and shouldn't be used

2021-03-01 Thread Anthony Towns via bitcoin-dev
On Sun, Feb 28, 2021 at 07:33:30PM +, Luke Dashjr via bitcoin-dev wrote:
> As we saw in 2017 with BIP 9, coordinating activation by miner signal alone, 
> despite its potential benefits, also leaves open the door to a miner veto. 

To the contrary, we saw in 2017 that miners could *not* successfully
veto a BIP 9 activation. It was certainly more effort and risk than was
desirable to override the attempted veto, but the attempt at vetoing
nevertheless failed.

> It wouldn't be much different than adding back the inflation bug 
> (CVE-2018-17144) and trusting miners not to exploit it.

That is ridiculous FUD.

> With LOT=False in the picture, however, things can get messy:

LOT=false is always in the picture if we are talking about a soft-fork:
the defining feature of a soft-fork is that old node software continues
to work, and old node software will be entirely indifferent to whether
activation is signalled or not.

> some users will 
> enforce Taproot(eg) (those running LOT=True), while others will not (those 
> with LOT=False)

If you are following bip8 with lockinontimeout=false, you will enforce
taproot rules if activation occurs, you will simply not reject blocks if
activation does not occur.

> Users with LOT=True will still get all the safety thereof, 
> but those with LOT=False will (in the event of miners deciding to produce a 
> chain split) face an unreliable chain, being replaced by the LOT=True chain 
> every time it overtakes the LOT=False chain in work.

This assumes anyone mining the chain where taproot does not activate is
not able to avoid a reorg, despite having majority hashpower (as implied
by the lot=true chain having to overtake them repeatedly). That's absurd;
avoiding a reorg is trivially achieved via running "invalidateblock", or
via pool software examining block headers, or via a patch along the lines
of MUST_SIGNAL enforcement, but doing the opposite. For concreteness,
here's a sketch of such a patch:

https://github.com/ajtowns/bitcoin/commit/f195688bd1eff3780f200e7a049e23b30ca4fe2f

> For 2 weeks, users with LOT=False would not have a usable network.

That's also ridiculous FUD.

If it were true, it would mean the activation mechanism was not
acceptable, as non-upgraded nodes would also not have a usable network
for the same reason.

Fortunately, it's not true.

More generally, if miners are willing to lose significant amounts of
money mining orphan blocks, they can do that at any time. If they're
not inclined to do so, it's incredibly straightforward for them to avoid
doing so, whatever a minority of other miners might do.

> The overall risk is maximally reduced by LOT=True being the only deployed 
> parameter, and any introduction of LOT=False only increases risk probability 
> and severity.

LOT=false is the default behaviour of every single piece of node
software out there. That behaviour doesn't need to be introduced, it's
already universal.

Cheers,
aj



Re: [bitcoin-dev] Exploring alternative activation mechanisms: decreasing threshold

2021-03-01 Thread Anthony Towns via bitcoin-dev
On Sat, Feb 27, 2021 at 05:55:00PM +, Luke Dashjr via bitcoin-dev wrote:

[on the topic of non-signalled activation; ie "it doesn't matter what
miners do or signal, the rules are active as of height X"]

> This has the same problems BIP149 did: since there is no signalling, it is 
> ambiguous whether the softfork has activated at all. Both anti-SF and pro-SF 
> nodes will remain on the same chain, with conflicting perceptions of the 
> rules, and resolution (if ever) will be chaotic. Absent resolution, however, 
> there is a strong incentive not to rely on the rules, and thus it may never 
> get used, and therefore also never resolved.

I think this might be a bit abstract, and less convincing than it might
otherwise be.

To give a more explicit hypothetical: imagine that instead of making it
impossible to use an optimisation when mining (as segwit did to ASICBoost,
for which a patent had been applied for), a future soft-fork made it
possible/easier to use some mining optimisation, and further that the
optimisation is already patented, and that the patent wasn't widely known,
and the owners of the patent have put everyone that they can under NDA.

Obviously mining optimisations are great for manufacturers -- it means
a new generation of hardware is more efficient, which means miners
want to upgrade to it; but patented mining optimisations are bad for
decentralisation, because there's no competition in who can sell the new
generation of mining hardware, so the patent holder is able to choose
who is able to mine, and because miners control transaction selection,
they could insist that the only people they'll sell to must censor
transactions, and attempt to render miners that don't censor
uncompetitive.

So the incentives there are:

 - the patent holder wants the soft-fork to activate ASAP, and
   does not want to reveal the patent until after it's permanently
   locked in

 - people who want decentralisation/competition and know about the
   patent want to stop the soft-fork from activating, or hard-fork it
   out after it's activated; but they can't talk about the patent because
   of NDA (or other bribes/threats intended to keep them silent)

Suppose further that the anti-patent folks either directly control 20%
of hashpower, or are otherwise able to block the easy consensus path,
and that the patent holder isn't able to get over 50% of hashpower to
commit to orphaning non-signalling blocks to ensure early activation
despite that. (Or, alternatively, that an approach like Matt suggests in
"Straight Flag Day (Height)" is used, and there is no early-activation
via hashpower supermajority option)

So under that scenario you reach the timeout, but without activation
occurring. You also don't have any "reasonable, directed objection":
everyone who could provide a reasonable objection is under NDA.

What's the scenario look like if you say "signalling doesn't matter,
the software enforces the consensus rules"?

I think it'll be obvious there'll be two sets of software out there and
supported and following a single chain; one set that enforces the new
rules, and one set that doesn't, just as we had Bitcoin Unlimited back
in the day. For at least a while, it will be safe to do spends protected
by the new rules, because one set of nodes will enforce them, and any
miners running the other software won't even see non-compliant spends in
their mempool, and won't want to risk manually mining them in case
it turns out they're in the minority -- just as Bitcoin Unlimited miners
didn't actually attempt to mine big blocks on mainnet back in the day.

So for a while, we have two divergent sets of maintained node software
following the same chain, with advocates of both claiming that they're the
majority. Some people will believe the people claiming the new rules are
safe, and commit funds to them, and as those funds are demonstrably not
stolen, the number of people thinking it's safe will gradually increase
-- presumably the new rules have some benefit other than to the patent
holder, after all.

Eventually the patent gets revealed, though, just as covert ASICBoost
did. Either NDA's expire, something violates them, someone rediscovers
or reverse-engineers the idea, or the patent holder decides it's time
to start either suing competitors or advertising.

What happens at that point? We have two sets of node software that both
claim to have been in the majority, and one of which is clearly better
for decentralisation.

But if we all just switch to that, two things happen: we allow miners to
steal the funds that users entrusted to the new rules, and anyone who
was enforcing the new rules but is not following the day-to-day drama
has a hard-fork event and can no longer follow the main chain until they
find new software to run.

Alternatively, do we all switch to software that protects users funds
and avoids hard-fork events, even though that software is bad for
decentralisation, and do we do that precisely when the people 

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-23 Thread Anthony Towns via bitcoin-dev
On Mon, Feb 22, 2021 at 06:10:34PM -0800, Jeremy via bitcoin-dev wrote:
> Not responding to anyone in particular, but it strikes me that one can think
> about the case where a small minority (let's say H = 20%?) of nodes

I don't think that's a good way to try to look at things -- number of
nodes has some impacts, but they're relatively minor (pun deflected).

I think the things to look at are (from most to least important):

 (1) what the price indicates / what people buying/selling BTC want
 (2) what hashpower does
 (3) what nodes do

Here's a concrete example to help justify that ordering. Suppose
that for whatever reason nobody is particularly interested in running
lockinontimeout=true -- only 0.1% of nodes are doing it and they're not
the "economic majority" in any way. In addition, 15% of hashpower have
spent almost the entire signalling period not bothering to upgrade and
thus haven't been signalling and have been blocking activation.

Suppose further that there are futures/prediction markets setup so that
people can bet on taproot activation (eg the bitfinex chain split tokens,
or some sort of DeFi contracts), and the result is that there's some
decent profits to be made if it does activate, enough to tempt >55%
of hashpower into running with lockinontimeout=true. That way those
miners can be confident it will activate, take up contracts in the
futures/predictions markets, and be confident they'll win and get a
big payday. (Note that this means the people on the other side of those
contracts are betting that taproot *doesn't* activate)

Once a majority of hashpower is running lockinontimeout=true, it then
makes sense for the remaining hashpower to both signal for activation
and also run lockinontimeout=true -- otherwise they risk their blocks
being orphaned if too many blocks don't signal, and they build on top
of one.  Figuring out that a majority of hashpower is/will be running
lockinontimeout=true can be done either by a coinbase message or by
bip91-style signalling.

In that scenario, you end up with >90% of hashpower running with
lockinontimeout=true, even if only a token number of nodes in the wild
are doing the same.



It's possible to do estimates of what happens if a majority of miners
are using lockinontimeout=true, and the numbers end up pretty wild.

With 90% of miners signalling and running lockinontimeout=true, if the
remaining 10% don't signal, they can expect to lose around 3% of revenue
($2M) due to blocks getting orphaned. If the numbers are 85% running
lockinontimeout=true, and 15% not signalling, the non-signallers can
expect to lose about 37% of revenue ($38M) during the retarget period
prior to timeout. If 60% of miners are doing spy-mining for up to 90s,
they would expect to lose 0.9% of their spy-mining revenue ($2.5M). If
60% of hashpower is running lockinontimeout=true, while 40% don't
signal, the non-signallers will forego ~83% of revenue ($320M) due to
their blocks being orphaned, and if 60% of miners spy-mine for 90s, they
should expect to lose 5% of revenue ($10M) over the same period. Dollar
figures based on 6.25BTC/block at $50k per BTC.

https://gist.github.com/ajtowns/fbcf30ed9d0e1708fdc98a876a04ff69#file-forced_signalling_chaos_cost_sim-py

Note that if miners simply accept those losses and don't take any
action to prevent it, very long reorgs are to be expected -- in the 15%
non-signalling scenario, you'd expect to see a 5-block reorg; in the 40%
non-signalling scenario, you'd get reorgs of 60+ blocks. (Only people
not running lockinontimeout=true would see the blocks being reorged out,
of course)


So I think focussing on how many nodes have a particular lockinontimeout
setting can be pretty misleading.

> # 80% on LOT=false, 20% LOT=True
> - Case 1: Activates ahead of time anyways

That's the case where >90% of hashpower is signalling, and everything
works fine.

> - Case 2: Fails to Activate before timeout...
> 20% *may* fork off with LOT=true.

Anyone running with lockinontimeout=true will refuse to follow a chain
where lockin hasn't been reached by the timeout height; so if the most
work chain meets that condition, lockinontimeout=true nodes will refuse
to follow it; either getting stuck with no confirmations at all, or
following a lower work chain that does (or can) reach lockin by timeout
height.

> Bitcoin hashrate reduced, chance of multi
> block reorgs at time of fork relatively high, especially if network does not
> partition.

If the most-work chain fails to activate, and only a minority of
hashrate is running lockinontimeout=true, the chance of multiblock
reorgs is actually pretty low. The way it would play out in detail,
with say 20% of hashpower not signalling and 40% of hashpower running
lockinontimeout=true:

  * the chain reaches the last retarget period; lockinontimeout=false
nodes stay in STARTED, lockinontimeout=true nodes switch to
MUST_SIGNAL

  * for the first ~1009 blocks, everyone stays in sync, but block ~1010
becomes the 

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-22 Thread Anthony Towns via bitcoin-dev
On Mon, Feb 22, 2021 at 09:00:29AM -0500, Matt Corallo wrote:
> I think it should be clear that a UASF-style command line option to allow
> consensus rule changes in the node in the short term, immediately before a 
> fork
> carries some risk of a fork, even if I agree it may not persist over months. 
> We
> can’t simply ignore that.

I don't think a "-set-bip8-lockinontimeout=taproot" option on its own
would be very safe -- if we were sure it was safe, because we were sure
that everyone would eventually set lockinontimeout=true, then we would
set lockinontimeout=true from day one and not need an option. I haven't
seen/had any good ideas on how to make the option safe, or at least make
it obvious that you shouldn't be setting it if you don't really
understand what you're getting yourself into. [0]

And that's even if you assume that the code was perfectly capable of
handling forks in some theoretically optimal way.

So at least for the time being, I don't think a config param / command
line option is a good idea for bip8. IMHO, YMMV, IANABDFL etc.

> I think the important specific case of this is something like "if a chain
> where taproot is impossible to activate is temporarily the most work,
> miners with lockinontimeout=true need to be well connected so they don't
> end up competing with each other while they're catching back up".
> Between this and your above point, I think we probably agree - there is
> material  technical complexity hiding behind a “change the consensus rules“
> option. Given it’s not a critical feature by any means, putting resources into
> fixing these issues probably isn’t worth it.

For reference, the "preferentially peer with other UASF nodes" PR for
the BIP148 client was

  https://github.com/UASF/bitcoin/pull/24

List discussion was at

  https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014618.html

I think I'll add playing around with that and reorgs on a signet to my
todo list to see how it goes in cases other than ones that are (hopefully)
vanishingly unlikely to ever happen in practice.

Cheers,
aj

[0] In some sense, this is exactly the opposite sentiment compared to
earonesty's comment:

https://github.com/bitcoin/bitcoin/pull/10900#issuecomment-31712

I mean, I guess we could solve the unsafe-now-but-maybe-safe-later
problem generally with a signature:

  -authorise-dangerous-options-key=<key>
  -lockinontimeout=taproot:<sig>

where <sig> is a signature of "dangerous:lockinontimeout=taproot" or
similar by the key <key>, and <key> defaults to some (multisig?) key
controlled by some bitcoin people, who'll only sign that when
there's clear evidence that it will be reasonably safe, and maybe to
"cert-transparency" or something as well. So that allows having an
option become available by publishing a signature, without having
to recompile the code. And it could still be overridden by people who
know what they're doing if the default key owners are being weird. And
maybe the "dangerous" part is enough to prevent people from randomly
cut-and-pasting it from a website into their bitcoin.conf.

I dunno. No bad ideas when brainstorming...
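
(A toy version of that in Python, using a bare Ed25519 key via the
`cryptography` package purely for illustration -- obviously not a
concrete proposal for key formats or who holds the keys:

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  MESSAGE = b"dangerous:lockinontimeout=taproot"

  signer = Ed25519PrivateKey.generate()  # stand-in for the default key holders
  pubkey = signer.public_key()

  def option_permitted(sig: bytes) -> bool:
      # honour the dangerous option only with a valid signature over MESSAGE
      try:
          pubkey.verify(sig, MESSAGE)
          return True
      except InvalidSignature:
          return False

  print(option_permitted(signer.sign(MESSAGE)))  # True
  print(option_permitted(b"\x00" * 64))          # False

)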


Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-22 Thread Anthony Towns via bitcoin-dev
On Mon, Feb 22, 2021 at 01:44:55AM -0500, Matt Corallo wrote:
> A node feeding you invalid headers (used to be) cause for a ban [...]

Headers that are invalid due to MUST_SIGNAL rules are marked as
BLOCK_RECENT_CONSENSUS_CHANGE so don't directly result in a ban. If you're
doing headers-first relay, I think that will also prevent hitting the
BLOCK_MISSING_PREV case, which would result in a ban.

If a lockinontimeout=true node is requesting compact blocks from a
lockinontimeout=false node during a chainsplit in the MUST_SIGNAL phase,
I think that could result in a ban.

> More importantly, nodes on both sides of the fork need to find each other. 

(If there was going to be an ongoing fork there'd be bigger things to
worry about...)

I think the important specific case of this is something like "if a chain
where taproot is impossible to activate is temporarily the most work,
miners with lockinontimeout=true need to be well connected so they don't
end up competing with each other while they're catching back up".

Actually, that same requirement might be more practical for a signet
feature we were thinking about -- namely having "optional reorgs", ie
every now and then we'd mine 1-6 blocks and then reorg them out; but
also flag the soon-to-be-stale blocks in some way so that if you didn't
want to have to deal with reorgs you could easily ignore them. Having
it be possible for the "I want to see reorgs!" nodes to be able to find
each other seems like it might be a similar problem (avoiding having the
"don't-want-reorgs" nodes ban the "want-reorgs" nodes too perhaps).

Cheers,
aj



Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-21 Thread Anthony Towns via bitcoin-dev
On Fri, Feb 19, 2021 at 12:48:00PM -0500, Matt Corallo via bitcoin-dev wrote:
> It was pointed out to me that this discussion is largely moot as the
> software complexity for Bitcoin Core to ship an option like this is likely
> not practical/what people would wish to see.
> Bitcoin Core does not have infrastructure to handle switching consensus
> rules with the same datadir - after running with uasf=true for some time,
> valid blocks will be marked as invalid, 

I don't think this is true? With the current proposed bip8 code,
lockinontimeout=true will cause headers to be marked as invalid, and
won't process the block further. If a node running lockinontimeout=true
accepts the header, then it will apply the same consensus rules as a
lockinontimeout=false node.

I don't think an invalid header will be added to the block index at all,
so a node restart should always cleanly allow it to be reconsidered.

The test case in

https://github.com/bitcoin/bitcoin/pull/19573/commits/bd8517135fc839c3332fea4d9c8373b94c8c9de8

tests that a node that had rejected a chain due to lockinontimeout=true
will reorg to that chain after being restarted as a byproduct of the way
it tests different cases (the nodes set a new startheight, but retain
their lockinontimeout settings).


(I think with the current bip8 code, if you switch from
lockinontimeout=false to lockinontimeout=true and the tip of the current
most work chain is after the timeoutheight and did not lockin, then you
will continue following that chain until a taproot-invalid transaction
is included, rather than immediately reorging to a shorter chain that
complies with the lockinontimeout=true rules)

Cheers,
aj



Re: [bitcoin-dev] Proposal for new "disabletx" p2p message

2021-01-13 Thread Anthony Towns via bitcoin-dev
On Wed, Jan 06, 2021 at 11:35:11AM -0500, Suhas Daftuar via bitcoin-dev wrote:
> I'm proposing the addition of a new, optional p2p message to allow peers to
> communicate that they do not want to send or receive (loose) transactions for
> the lifetime of a connection. 
> 
> The goal of this message is to help facilitate connections on the network over
> which only block-related data (blocks/headers/compact blocks/etc) are relayed,
> to create low-resource connections that help protect against partition attacks
> on the network.

I think we should separate these ideas -- ie, the protocol change to
allow indicating that you don't want transactions, and the policy change
to protect against partition attacks using that protocol addition.

The idea (obviously?) being that the protocol should be simple and
flexible enough to support many possible policies.

> ==Specification==
> # A new disabletx message is added, [...]
> # A node that has sent or received a disabletx message to/from a peer MUST 
> NOT send any of these messages to the peer:
> ## inv messages for transactions
> [...]
> # It is RECOMMENDED that a node that has sent or received a disabletx message 
> to/from a peer not send any of these messages to the peer:
> ## addr/getaddr
> ## addrv2 (BIP 155)

In particular, I think combining these doesn't make sense for:

 * nodes running with -blocksonly (that want to stay up to date with
   the blockchain, but don't care about txes)
- not sending disabletx reduces their potential connectivity, if
  nodes are willing to accept more disabletx peers due to the lower
  resource usage
- sending disabletx means they can't maintain their addr db and find
  other nodes to connect to

 * non-listening nodes running with -connect to one/more preselected peers
- they can't send disabletx generally because they want txes
- they don't need addr information (since they only make connections
  to some known peers), and don't have many peers to relay addresses
  on to, so are essentially blackholes, so would like to disable
  addr relay for much the same reasons

So to me that says the protocol part of the design's not as flexible as
it should be, which suggests DISABLETX and DISABLEADDR.

(I think I'm going to flesh out the "FEATURE" idea as an alternative
way of dealing with this, but that can be its own thread)

Cheers,
aj



Re: [bitcoin-dev] Proposal for new "disabletx" p2p message

2021-01-13 Thread Anthony Towns via bitcoin-dev
On Wed, Jan 13, 2021 at 01:40:03AM -0500, Matt Corallo via bitcoin-dev wrote:
> Out of curiosity, was the interaction between fRelay and bloom disabling ever
> specified? ie if you aren’t allowed to enable bloom filters on a connection 
> due
> to resource constraints/new limits, is it ever possible to “set” fRelay later?

(Maybe I'm missing something, but...)

In the current bitcoin implementation, no -- you either set
m_tx_relay->fRelayTxes to true via the VERSION message (either explicitly
or by not setting fRelay), or you enable it later with FILTERLOAD or
FILTERCLEAR, both of which will cause a disconnect if bloom filters
aren't supported. Bloom filter support is (optionally?) indicated via
a service bit (BIP 111), so you could assume you know whether they're
supported as soon as you receive the VERSION line.

fRelay is specified in BIP 37 as:

  | 1 byte || fRelay || bool || If false then broadcast transactions will
  not be announced until a filter{load,add,clear} command is received. If
  missing or true, no change in protocol behaviour occurs.

BIP 60 defines the field as "relay" and references BIP 37. Don't think
it's referenced in any other bips.

Cheers,
aj



Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-20 Thread Anthony Towns via bitcoin-dev
On Fri, Aug 14, 2020 at 03:28:41PM -0400, Suhas Daftuar via bitcoin-dev wrote:
> In thinking about the mechanism used there, I thought it would be helpful to
> codify in a BIP the idea that Bitcoin network clients should ignore unknown
> messages received before a VERACK.  A draft of my proposal is available here
> [2].

Rather than allowing arbitrary messages, maybe it would make sense to
have a specific feature negotiation message, eg:

  VERSION ...
  FEATURE wtxidrelay
  FEATURE packagerelay
  VERACK

with the behaviour being that it's valid only between VERSION and VERACK,
and it takes a length-prefixed-string giving the feature name, optional
additional data, and if the feature name isn't recognised the message
is ignored.
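
For concreteness, a toy sketch of what such a FEATURE payload could look
like, assuming Bitcoin's usual compact-size length prefix on the feature
name (the framing here is illustrative, not a defined protocol):

  def compact_size(n: int) -> bytes:
      # Bitcoin's variable-length integer encoding
      if n < 0xfd:
          return n.to_bytes(1, "little")
      if n <= 0xffff:
          return b"\xfd" + n.to_bytes(2, "little")
      if n <= 0xffffffff:
          return b"\xfe" + n.to_bytes(4, "little")
      return b"\xff" + n.to_bytes(8, "little")

  def feature_payload(name: str, extra: bytes = b"") -> bytes:
      encoded = name.encode("ascii")
      return compact_size(len(encoded)) + encoded + extra

  print(feature_payload("wtxidrelay").hex())  # 0a777478696472656c6179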

If we were to support a "polite disconnect" feature like Jeremy suggested,
it might be easier to do that for a generic FEATURE message, than
reimplement it for the message proposed by each new feature.

Cheers,
aj



Re: [bitcoin-dev] reviving op_difficulty

2020-08-16 Thread Anthony Towns via bitcoin-dev
On Sun, Aug 16, 2020 at 11:41:30AM -0400, Thomas Hartman via bitcoin-dev wrote:
> My understanding is that adding a single op_difficulty operation as
> proposed would enable not true difficulty futures but binary options
> on difficulty.

An alternative approach for this might be to use taproot's annex concept.

https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-5

The idea would be to add an annex restriction that's only valid if the
difficulty is a given value (or within a given range). Protocol design
could be:

  Alice, Bob, Carol, Dave want to make bets on difficulty futures
  They each deposit a,b,c,d into a UTXO of value a+b+c+d payable to
a 4-of-4 multisig of their keys (eg MuSig(A,B,C,D))
  They construct signed payouts spending that UTXO, all timelocked
for the future date; one spending to Alice with the annex locking
in the difficulty to Alice's predicted range, one spending to Bob
with the annex locking in the difficulty to Bob's predicted range,
etc

When the future date arrives, whoever was right can immediately
broadcast their payout transaction. (If they don't, then someone else
might be able to when the difficulty next retargets)

(Specifying an exact value for the difficulty rather than a range might
be better; it's smaller/simpler on the blockchain, and doesn't reveal
the ranges of your predictions giving traders slightly better privacy.
The cost to doing that is if Alice predicts difficulty could be any of 100
different values, she needs 100 different signatures for her pre-signed
payout, one for each possible difficulty value that would be encoded in
the annex)

Cheers,
aj



[bitcoin-dev] Thoughts on soft-fork activation

2020-07-14 Thread Anthony Towns via bitcoin-dev
Hi,

I've been trying to figure out a good way to activate soft forks in
future. I'd like to post some thoughts on that. So:

I think there's two proposals that are roughly plausible. The first is
Luke's recent update to BIP 8:

https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki

It has the advantage of being about as simple as possible, and (in my
opinion) is an incremental improvement on how segwit was activated. Its
main properties are:

   - signalling via a version bit
   - state transitions based on height rather than median time
   - 1 year time frame
   - optional mandatory activation at the end of the year
   - mandatory signalling if mandatory activation occurs
   - if the soft fork activates on the most work chain, nodes don't
 risk falling out of consensus depending on whether they've opted in
 to mandatory activation or not

I think there's some fixable problems with that proposal as it stands
(mostly already mentioned in the comments in the recently merged PR,
https://github.com/bitcoin/bips/pull/550 )

The approach I've been working on is based on the more complicated and
slower method described by Matt on this list back in January. I've got a
BIP drafted at:


https://github.com/ajtowns/bips/blob/202007-activation-dec-thresh/bip-decthresh.mediawiki

The main difference with the mechanism described in January is that the
threshold gradually decreases during the secondary period -- it starts at
95%, gradually decreases until 50%, then mandatorily activates. The idea
here is to provide at least some potential reward for miners signalling
in the secondary phase: if 8% of hashpower had refused to signal for
a soft-fork, then there would have been no chance of activating until
the very end of the period. This way, every additional percentage of
hashpower signalling brings the activation deadline forward.
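
As a sketch of that shape (linear decrease; the exact schedule in the
draft may differ):

    def signalling_threshold(height, secondary_start, secondary_end):
        # 95% up to the secondary period, decreasing linearly to 50%
        # across it; mandatory activation follows at secondary_end
        if height < secondary_start:
            return 0.95
        if height >= secondary_end:
            return 0.50
        frac = (height - secondary_start) / (secondary_end - secondary_start)
        return 0.95 - frac * (0.95 - 0.50)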

The main difference between the two proposals is that the BIP 8 approach
has a relatively short time frame for users to upgrade if we want
mandatory activation without a supermajority of hashpower enforcing the
rules; while the "decreasing threshold" approach linked above provides
quite a long timeline.

In addition, there is always the potential to introduce a BIP 91/148
style soft-fork after the fact where either miners or users coordinate to
have mandatory signalling which then activates a pre-existing deployment
attempt.

I think the design constraints we want are:

 * if everyone cooperates and no one objects, we activate pretty quickly

 * there's no obvious exploits, and we have plausible contingency plans
   in place to discourage people from trying to use the attempt to deploy
   a new soft fork as a way of attacking bitcoin, either via social
   disruption or by preventing bitcoin from improving

 * we don't want to ship code that causes people to fall out of
   consensus in the (hopefully unlikely) event that things don't go
   smoothly [0]

In light of that, I think I'm leaning towards:

 * use BIP 8 with mandatory activation disabled in bitcoin core -- if
   you want to participate in enforcing mandatory activation, you'll
   need to recompile, or use a fork like bitcoin knots; however if
   mandatory activation occurs on the longest chain, you'll still follow
   that chain and enforce the rules.

 * be prepared to update the BIP 8 parameters to allow mandatory
   activation in bitcoin core if, after 9 months say, it's apparent that
   there aren't reasonable objections, there's strong support for
   activation, the vast majority of nodes have already updated to
   enforce the rules upon activation, and there's strong support for
   mandatory activation

 * change the dec-threshold proposal to be compatible with BIP 8, and
   keep it maintained so that it can be used if there seems to be
   widespread consensus for activation, but BIP 8 activation does
   not seem certain -- ie, as an extra contingency plan.

 * be prepared to support miners coordinating via BIP 91 either to
   bring activation forward in BIP 8 or "decreasing threshold", or to
   de-risk BIP 8 mandatory activation -- ie, an alternative contingency
   plan. This is more appropriate if we've found that users/miners have
   upgraded so that activation is safe; so it's a decision we can make
   later when we have more data, rather than having to make the decision
   early when we don't have enough information to judge whether it's
   safe or not.

 * (also, improve BIP 8 a bit more before deploying it -- I'm hoping for
   some modest changes, which is why "decreasing threshold" isn't already
   compatible with BIP 8)

 * (continue to ensure the underlying soft fork makes sense and is
   well implemented on its own merits)

 * (continue to talk to as many people as we can about the underlying
   changes and make sure people understand what's going on and that
   we've addressed any reasonable objections)

I'm hopeful activating taproot will go smoothly, but I'm not 100% sure
of it, and there 

Re: [bitcoin-dev] BIP 118 and SIGHASH_ANYPREVOUT

2020-07-09 Thread Anthony Towns via bitcoin-dev
On Fri, Jul 10, 2020 at 07:40:48AM +1000, Anthony Towns via bitcoin-dev wrote:
> After talking with Christina

Christian. Dr Christian Decker, PhD. Dr Bitcoin. cdecker. Snyke.

Cheers,
aj, hoping he typed one of those right, at least...



[bitcoin-dev] BIP 118 and SIGHASH_ANYPREVOUT

2020-07-09 Thread Anthony Towns via bitcoin-dev
Hello world,

After talking with Christina ages ago, we came to the conclusion that
it made more sense to update BIP 118 to the latest thinking than have
a new BIP number, so I've (finally) opened a (draft) PR to update BIP
118 with the ANYPREVOUT bip I've passed around to a few people,

https://github.com/bitcoin/bips/pull/943

Probably easiest to just read the new BIP text on github:

https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki

It doesn't come with tested code at this point, but I figure better to
have the text available for discussion than nothing.

Some significant changes since previous discussion include complete lack
of chaperone signatures or anything like it (if you want them, you can
always add them yourself, of course), and that ANYPREVOUTANYSCRIPT no
longer commits to the value (details/rationale in the text).

Cheers,
aj



Re: [bitcoin-dev] BIP-341: Committing to all scriptPubKeys in the signature message

2020-05-02 Thread Anthony Towns via bitcoin-dev
On Fri, May 01, 2020 at 08:23:07AM -0400, Russell O'Connor wrote:
> Regarding specifics, I personally think it would be better to keep the
> hashes of the ScriptPubKeys separate from the hashes of the input values.

I think Andrew's original suggestion achieves this:

>> The obvious way to implement this is to add another hash to the
>> signature message:
>>   sha_scriptPubKeys (32): the SHA256 of the serialization of all
>>   scriptPubKeys of the previous outputs spent by this
>>   transaction.

presumably with sha_scriptPubKeys' inclusion being conditional on
hash_type not matching ANYONECANPAY.

We could possibly also make the "scriptPubKey" field dependent on
hash_type matching ANYONECANPAY, making this not cost any more
in serialised bytes per signature.

This would basically mean we're committing to each component of the
UTXOs being spent:

  without ANYONECANPAY:
    sha_prevouts commits to the txid hashes and vout indexes (COutPoint)
    sha_amounts commits to the nValues (Coin.CTxOut.nValue)
    sha_scriptpubkeys commits to the scriptPubKey (Coin.CTxOut.scriptPubKey)

  with ANYONECANPAY it's the same but just for this input's prevout:
    outpoint
    amount
    scriptPubKey

except that we'd arguably still be missing:

    is this a coinbase output? (Coin.fCoinBase)
    what was the height of the coin? (Coin.nHeight)

Maybe committing to the coinbase flag would have some use, but committing
to the height would make it hard to chain unconfirmed spends, so at
least that part doesn't seem worth adding.
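
For concreteness, a sketch of computing the proposed field, assuming
each scriptPubKey is serialized the usual way with a compact-size
length prefix:

    import hashlib

    def compact_size(n):
        if n < 0xfd:
            return bytes([n])
        if n <= 0xffff:
            return b"\xfd" + n.to_bytes(2, "little")
        return b"\xfe" + n.to_bytes(4, "little")

    def sha_scriptpubkeys(spks):
        # SHA256 of the concatenated serializations of every spent scriptPubKey
        ser = b"".join(compact_size(len(s)) + s for s in spks)
        return hashlib.sha256(ser).digest()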

> I would also (and independently) propose
> separating the hashing of the output values from the output ScriptPubKeys in
> `sha_outputs` so again, applications interested only in summing the values of
> the outputs (for instance to compute fees) do not have to wade through those
> arbitrarily long ScriptPubKeys in the outputs.

If you didn't verify the output scriptPubKeys, you would *only* be able
to care about fees since you couldn't verify where any of the funds went?
And you'd only be able to say fees are "at least x", since they could be
more if one of the scriptPubKeys turned out to be OP_TRUE eg. That might
almost make sense for a transaction accelerator that's trying to increase
the fees; but only if you were doing it for someone else's transaction
(since otherwise you'd care about the output addresses) and only if you
were happy to not receive any change? Seems like a pretty weird use case?

There's some prior discussion on this topic at:

http://www.erisian.com.au/taproot-bip-review/log-2020-03-04.html
http://www.erisian.com.au/taproot-bip-review/log-2020-03-05.html

Cheers,
aj



Re: [bitcoin-dev] Taproot (and graftroot) complexity

2020-02-09 Thread Anthony Towns via bitcoin-dev
On Sun, Feb 09, 2020 at 02:19:55PM -0600, Bryan Bishop via bitcoin-dev wrote:
> However, after
> our review, we're left perplexed about the development of Taproot (and
> Graftroot, to a lesser extent).

I think the main cause of the perplexity is not seeing the benefit of
taproot. 

For me, the simplest benefit is that taproot lets everyone's wallet change
from "if you lose this key, your funds are gone" to "if you lose this key,
you'll have to recover 3 of your 5 backup keys that you sent to trusted
friends, and pay a little more, but you won't have lost your funds". That
won't cost you *anything* beyond upgrading your wallet software/hardware;
if you never lose your main key, it doesn't cost you any more, but if
you do, you now have a new recovery option (or many recovery options).

Note that doing graftroot isn't proposed as it requires non-interactive
half-signature aggregation to be comparably efficient, and the crypto
hasn't been worked out for that -- or at least, the maths hasn't been
properly written up for criticism. (If you don't care about efficiency,
you can do a poor man's graftroot with pre-signed transactions and CPFP)

More detailed responses below. Kinda long.

> In essence, Taproot is fundamentally the same as doing
> https://github.com/bitcoin/bips/blob/master/bip-0114.mediawiki and Schnorr
> signatures separately.
> 
> Suppose a MAST for {a,b,c,d,e,f,g,h} spending conditions it looks something
> like this:
> 
>       /\
>      /  \
>     /    \
>    /      \
>   /\      /\
>  /  \    /  \
> /\  /\  /\  /\
> a b c d e f g h
> 
> If we want this to be functionally equivalent to Taproot, we add a new path:
> 
>        /\
>       /\ { schnorr_checksig}
>      /  \
>     /    \
>    /      \
>   /\      /\
>  /  \    /  \
> /\  /\  /\  /\
> a b c d e f g h

There's a bit more subtlety to the difference between a merkle branch
and a taproot alternative. In particular, imagine you've got three
alternatives, one of which has 60% odds of being taken, and the other
two have 20% odds each. You'd construct a merkle tree:

    /\
   a  /\
     b  c

And would reveal:

  60%: a [#(b,c)]
  20%: b [#a, #c]
  20%: c [#a, #b]

So your overhead would be 32B 60% of the time and 64B 40% of the time,
or an expected overhead of 44.8 bytes.

With taproot, you construct a tree of much the same shape, but 60% of
the time you no longer have to reveal anything about the path not taken:

  60%: a-tweaked
  20%: b [a, #c]
  20%: c [a, #b]

So your overhead is 0B 60% of the time, and 65B 40% of the time, for an
expected overhead of 26B.
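
The arithmetic, if you want to try other probability splits:

    def expected_overhead(cases):
        # cases: (probability, reveal-bytes) pairs, one per spend path
        return sum(p * b for p, b in cases)

    merkle  = expected_overhead([(0.6, 32), (0.2, 64), (0.2, 64)])  # 44.8
    taproot = expected_overhead([(0.6, 0), (0.2, 65), (0.2, 65)])   # 26.0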

That math only works out as an improvement if your common case really
is (or can be made to be) a simple key path spend, though.

You can generalise taproot and combine it with a merkle tree arbitrarily,
with the end result being that using a merkle branch means you can
choose either the left or right sub-tree for a cost of 32B, while a
taproot branch lets you choose the left *leaf* for free, or a right
sub-tree for (essentially) 64B. So for equally likely branches you'd
want to use the merkle split, while if there's some particular outcome
that's overwhelmingly likely, with others just there for emergencies,
then a taproot-style alternative will be better. See:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016249.html
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-October/016461.html

for slightly more detailed background.

Ultimately, I think we can do this better, so that you could choose
whether to make the free "taproot" path be a key or a script, or to use
the taproot method to make other likely leaves cheaper than unlikely
ones, rather than just having that benefit available for the most likely
leaf.

But I also think that's a lot of work, much of which will overlap with
the work to get cross-input signature aggregation working, so fwiw,
my view is that the current taproot feature set is a good midway point to
draw a line, and get stuff out and released. This overall approach was
discussed quite a while ago:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-May/015951.html

> However, if we do the same script via taproot, we now need to provide the base
> public key (33 bytes) as well as the root hash (32 bytes) and path and then 
> the
> actual scripts. 

You need to provide the internal public key, the actual script and the
path back; the root hash is easily calculated from the script and the
path, and then verified by ECC math against the scriptPubKey and the
internal public key.
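
A sketch of the "easily calculated" part, using the tagged-hash
construction as it appears in BIP 341 (and assuming a script short
enough that its length fits in a single byte):

    import hashlib

    def tagged_hash(tag, msg):
        t = hashlib.sha256(tag.encode()).digest()
        return hashlib.sha256(t + t + msg).digest()

    def merkle_root(leaf_version, script, path):
        k = tagged_hash("TapLeaf", bytes([leaf_version, len(script)]) + script)
        for node in path:    # the path is a list of 32-byte sibling hashes
            k = tagged_hash("TapBranch", min(k, node) + max(k, node))
        return k

The ECC step -- checking the output key equals the internal key tweaked
by a hash of (internal key, merkle root) -- needs an EC library and
isn't sketched here.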

>       /\
>      /  \
>     /    \
>    /      \
>   /\      /\
>  /  \    /  \
> /\  /\  /\  /\
> a b c d e f /\ {schnorr_checksig}
>             g h
>
> We could argue that this is more private than Taproot, because we don't
> distinguish between the Schnorr key case and other cases by default, so chain
> analyzers can't tell if the signature came from the Taproot case or from one 
> of
> the Script paths.

In that example there is no taproot case -- 

Re: [bitcoin-dev] Modern Soft Fork Activation

2020-01-15 Thread Anthony Towns via bitcoin-dev
On Tue, Jan 14, 2020 at 07:42:07PM +, Matt Corallo wrote:
> Thus, part of the goal here is that we ensure we have that "out", and
> can observe the response of the ecosystem once the change is "staring
> them in the face", as it were.

> A BIP 9 process is here not only to offer
> a compelling activation path, but *also* to allow for observation and
> discussion time for any lingering minor objections prior to a BIP 8/flag
> day activation.

One thing that I wonder is if your proposal (and BIP9) allows enough
time for this sort of observation?

If something looks good to devs and miners, but still has some
underlying problem, it seems like it would be pretty easy for it
to activate quickly just because miners happen to upgrade quickly and
don't see a need to tweak the default signalling parameters. I think
the BIP 68/112/113 bundle starttime was met at block 409643 (May 1,
2016), so it entered STARTED at 411264 (May 11), was LOCKED_IN at 417312
(June 21), and active at 419328 (July 4). If we're worried people will
only seriously look at things once activation is possible, having just
a month or two to find new problems isn't very long.

One approach to improve that might be to have the first point release that
includes the soft-fork activation parameters _not_ update getblocktemplate
to signal the version bit by default, but only do that in a second point
release later. That way miners could manually enable signalling if there's
some reason to rush things (which still means there's pressure to actually
look at the changes), but by default there's a bit of extra time.

(This might just be a reason why people should look at proposals before
they're ready to activate, though; or why users of bitcoin should also
be miners)

> On the other hand, in practice, we've seen that version bits are set on
> the pool side, and not on the node side, meaning the goal of ensuring
> miners have upgraded isn't really accomplished in practice, you just end
> up forking the chain for no gain.

ITYM version bits are set via mining software rather than the node
software that constructs blocks (where validation happens), so that
there's no strong link between signalling and having actually updated
your software to properly enforce the new rules? I think people have
suggested in the past moving signalling into the coinbase or similar
rather than the version field of the header to make that link a bit
tighter. Maybe this is worth doing at the same time? (For pools that
want to let their users choose whether to signal or not, that'd mean
offering two different templates for them to mine, I guess) That would
mean miners using the version field as extra nonce space wouldn't be
confused with upgrade signalling at least...

(I don't have an opinion on whether either of these is worth worrying
about)

Cheers,
aj



Re: [bitcoin-dev] Modern Soft Fork Activation

2020-01-13 Thread Anthony Towns via bitcoin-dev
On Mon, Jan 13, 2020 at 08:34:24AM +, Yosef via bitcoin-dev wrote:
> tl;dr How about 80% ?

The point of having hashpower upgraded is that it means that there's low
likelihood of long chains of blocks that are invalid per the new rules, so
that if you haven't upgraded your node but wait for a few confirmations,
you'll still (with very high likelihood) only see blocks valid per the
new rules.

If you have 80% of miners enforcing the rules, then if someone produces
a block that violates the new rules (but is valid for the old ones),
then you've got a 20% chance of one of the non-enforcing miners getting
the next block, and a 4% chance of non-enforcing miners getting both
the next blocks, giving 3 confirmations to invalid transactions. That
seems a bit high.

3 confirmations isn't unrealistic; eg, Coinbase apparently recently
dropped its requirement to that:

https://blog.coinbase.com/announcing-new-confirmation-requirements-4a5504ba8d81

I could maybe see a 90% threshold though?

> 95% can prove difficult to achieve. Some % of negligent miners that forget to 
> upgrade is expected.

Is it? We went from 59% to 54% to 28% to 0% (!!) of blocks not signalling
for segwit during consecutive two-week blocks in the BIP-91/148
period; and from 100% of blocks not signalling for BIP-91 to 99.4%,
48%, 15%, and 11% during consecutive 2.3 day periods targeting an 80%
threshold. Certainly that was a particularly high-stakes period, but
they were both pretty short. For comparison, for CSV, we went from 100%
not signalling to 61%, to 54% to 3.4% in consecutive two-week periods.

> Completing that to 5% is not too difficult for a small malicious minority 
> trying to delay the activation. This is the issue Matt's goal #5 aims to 
> prevent, and while the fallback to BIP-8 helps, BIP-9’s 95% requirement makes 
> it worse by allowing quite a neglected minority to force a dramatic delay. 
> Also note how in such case it would have been better to skip BIP-9 altogether 
> and maybe save 1.5 years.

I don't think you can really skip steps if you need a flag day:

 - the first 12 months is for *really seriously* making sure there's no
   problems with the proposed upgrade; you can't that because people
   might not look for problems until the code's out there and ready for
   actual use

 - the next 6 months is for updating the software to lock in the flag
   day; you can't skip that because it takes time to get new releases out

 - the next 24 months is to ensure everyone's upgraded their nodes so
   that they won't be at risk of thinking they've received bitcoins when
   those coins aren't in compliance with the new rules; and you can't
   skip that because if we don't have hashpower reliably enforcing the
   rules, *everybody* needs to upgrade, which can take a lot of time.

Times could be tweaked, but the "everyone has to upgrade their node
software" is almost the same constraint that hard forks have, so I think
you want to end up with a long overall lead time any which way. For
comparison, 0.12.1 came out about 45 months ago and 0.13.2 came out
about 36 months ago -- about 0.5% of nodes are running 0.12 or earlier,
and about 4.9% of nodes are running 0.13 or earlier, at least per [0],
so the overall timeline of 42 months seems plausible to me...

[0] https://luke.dashjr.org/programs/bitcoin/files/charts/software.html

I think (especially if we attempt BIP-91/BIP-148-style compulsory
signalling again) it's worth also considering the failure case if miners
false-signal: that is they signal support of the new soft-fork rules,
but then don't actually enforce them. If you end up with, say, 15% of
hashpower not upgraded or signalling, 25% of hashpower not upgraded but
signalling so their blocks don't get orphaned, and only 65% of hashpower
upgraded, you have a 1% chance of 5 blocks built on top of a block
that's invalid according to the new rules, giving those transactions 6
confirmations as far as non-upgraded nodes are concerned.
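
Both worked examples in this mail are just powers of the non-enforcing
hashpower fraction:

    def invalid_confirm_chance(q, k):
        # chance that the next k blocks all come from the fraction q of
        # hashpower that isn't enforcing the new rules
        return q ** k

    invalid_confirm_chance(0.20, 2)         # 4%, the 80%-threshold case
    invalid_confirm_chance(0.15 + 0.25, 5)  # ~1%, the false-signalling case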

Cheers,
aj



Re: [bitcoin-dev] Modern Soft Fork Activation

2020-01-11 Thread Anthony Towns via bitcoin-dev
On Fri, Jan 10, 2020 at 09:30:09PM +, Matt Corallo via bitcoin-dev wrote:
> 1) a standard BIP 9 deployment with a one-year time horizon for
> activation with 95% miner readiness,
> 2) in the case that no activation occurs within a year, a six month
> quieting period during which the community can analyze and discuss
> the reasons for no activation and,
> 3) in the case that it makes sense, a simple command-line/bitcoin.conf
> parameter which was supported since the original deployment release
> would enable users to opt into a BIP 8 deployment with a 24-month
> time-horizon for flag-day activation (as well as a new Bitcoin Core
> release enabling the flag universally).

FWIW etc, but my perspective on this is that the way we want consensus
changes in Bitcoin to work is:

 - decentralised: we want everyone to be able to participate, in
   designing/promoting/reviewing changes, without decision making
   power getting centralised amongst one group or another

 - technical: we want changes to be judged on their objective technical
   merits; politics and animal spirits and the like are fine, especially
   for working out what to prioritise, but they shouldn't be part of the
   final yes/no decision on consensus changes

 - improvements: changes might not make everyone better off, but we
   don't want changes to screw anyone over either -- pareto
   improvements in economics, "first, do no harm", etc. (if we get this
   right, there's no need to make compromises and bundle multiple
   flawed proposals so that everyone's an equal mix of happy and
   miserable)

In particular, we don't want to misalign skills and responsibilities: it's
fine for developers to judge if a proposal has bugs or technical problems,
but we don't want developers to have to decide if a proposal is
"sufficiently popular" or "economically sound" and the like, for instance.
Likewise we don't want to have miners or pool operators have to take
responsibility for managing the whole economy, rather than just keeping
their systems running.

So the way I hope this will work out is:

 - investors, industry, people in general work out priorities for what's
   valuable to work on; this is an economic/policy/subjective question,
   that everyone can participate in, and everyone can act on --
   either directly if they're developers who can work on proposals and
   implementations directly, or indirectly by persuading or paying other
   people to work on whatever's important

 - developers work on proposals, designing and implementing them to make
   (some subset of) bitcoin users better off, and to not make anyone worse
   off.

 - if someone discovers a good technical reason why a proposal does make
   people worse off, we don't try to keep pushing the proposal over the
   top of objections, but go back to the drawing board and try to fix
   the problems

 - once we've done as much development as we can, including setting up
   experimental testnet/signet style deployments for testing, we setup a
   deployment. the idea at this point is to make sure the live network
   upgrade works, and to retain the ability to abort if last minute
   problems come up. no doubt some review and testing will be left until
   the last minute and only done here, but *ideally* the focus should be
   on catching errors *well before* this point.

 - as a result, the activation strategy mostly needs to be about ensuring
   that the Bitcoin network stays in consensus, rather than checking
   popularity or voting -- the yes/no decisions should have mostly been
   made earlier already. so we have two strategies for locking in the
   upgrade: either 95%+ of hashpower signals that they've upgraded to
   software that will enforce the changes forever more, _or_ after a
   year of trying to deploy, we fail to find any technical problems,
   and then allow an additional 2.5 years to ensure all node software is
   upgraded to enforce the new rules before locking them in.

The strategy behind the last point is that we need to establish that
there's consensus amongst all of Bitcoin before we commit to a flag day,
and if we've found some method to establish consensus on that, then we're
done -- we've already got consensus, we don't need to put a blockchain
protocol on top of that and signal that we've got consensus. (Activating
via hashpower still needs signalling, because we need to coordinate on
*when* sufficient hashpower has upgraded)

This approach is obviously compatible with BIP-148 or BIP-91 style
forced-signalling UASFs if some upgrade does need to be done urgently
despite miner opposition; the forced signalling just needs to occur during
the BIP-9 or BIP-8 phases, and not during the "quiet period". Probably the
first period of BIP-8 after the quiet period would make the most sense.

But without that, this approach seems very friendly for miners: even
if they don't upgrade, they won't mine invalid blocks (unless the rules
activate and someone else deliberately 

Re: [bitcoin-dev] Signing CHECKSIG position in Tapscript

2019-12-05 Thread Anthony Towns via bitcoin-dev
On Thu, Dec 05, 2019 at 03:24:46PM -0500, Russell O'Connor wrote:

Thanks for the careful write up! That matches what I was thinking.

> This analysis suggests that we should amend CODESEPARATORs behaviour to update
> an accumulator (presumably a running hash value), so that all executed
> CODESEPARATOR positions end up covered by the signature.

On IRC, gmaxwell suggests "OP_BREADCRUMB" as a name for (something like)
this functionality.

(I think it's a barely plausible stretch to use the name "CODESEPARATOR"
for marking a position in the script -- that separates what was before
and after, at least; anything more general seems like it warrants a
better name though)

> That would provide a
> solution to the above problem for those cases where taproot's MAST cannot be
> used.  I'm not sure if it is better to propose such an amendment to
> CODESEPARATOR's behaviour now, or to propose to soft-fork in such optional
> behaviour at a later time.
> However, what I said above was even too simplified.  

FWIW, I think it's too soon to propose this because (a) it's not clear
there's a practical need for it, (b) it's not clear the functionality is
quite right (opcode vs more automatic sighash flag?), and (c) as you say,
it's not clear it's powerful enough.

> In general, a policy of the form.
>     (Exists w[1]. C[1](w[1]) && PK[1,1](w[1]) && ... && PK[1,m[1]](w[1]) || 
> ...
> || (Exists w[n]. C[n](w[n]) && PK[n,1](w[n]) && ... && PK[n,m[n]](w[n]))
> where each term could possibly be parameterized by some witness value (though
> at the moment there isn't enough functionality in Script to parameterize the
> pubkeys in any reasonably way and it maybe isn't even possible to parameterise
> the conditions in any reasonable way).  In general, you might want your
> signature to cover (some function of) this witness value.  This suggests that
> we would actually want a CODESEPARATOR variant that pushes a stack item into
> the accumulator that gets covered by the signature rather than pushing the
> CODESEPARATOR position.  Though at this point the name CODESEPARATOR is
> probably not suitable, even if it subsumes the functionality.

> Again, I'm not
> sure if it is better to propose such a replacement for CODESEPARATOR's
> behaviour now, or to propose to soft-fork in such optional behaviour at a 
> later
> time.

Last bit first, it seems pretty clear to me that this is too novel an
idea to propose it immediately -- we should explore the problem space
more first to see what's the best way of doing it before coding it into
consensus. And (guessing) I think the tapscript upgrade methods should
be fine for handling this later.

I think the annex is also not general enough for what you're thinking
here, in that it wouldn't allow for one signature to constrain the witness
data more than some other signature -- so you'd need to determine all
the constraints for all signatures to finish filling out the annex,
and could only then start signing.

I think you could conceivably do any/all of:

 * commit to a hash of all the witness data that hasn't been popped off
   the stack ("suffix" commitment -- the data will be used by later script
   opcodes)
 * commit to a hash of all the witness data that has been popped off the
   stack ("prefix" commitment -- this is the data that's been used by
   earlier script opcodes)
 * commit to the hash of the current stack

That would be expensive, but still doable as O(1) per opcode / stack
element. I think any other masking would mean you'd have potentially
O(size of witness data) or O(size of stack) runtime per signature which
I think would be unacceptable...

I guess a general implementation to at least think about the possibilities
might be an "OP_DATACOMMIT" opcode that pops an element from the stack,
does hash_"DataCommit"(element), and then any later signatures commit
to that value (maybe with OP_0 OP_DATACOMMIT allowing you to get back to
the default state). You'd either need to write your script carefully to
commit to witness data you're using elsewhere, or have some other new
opcodes to do that more conveniently...

CODESEP at position "x" in the script is equivalent to "<x> DATACOMMIT"
here, I think. "<x> BREADCRUMB .. <y> BREADCRUMB" could be something like:

   OP_0 TOALT [at start of script]
   ..
   FROMALT <x> CAT SHA256 DUP TOALT DATACOMMIT
   ..
   FROMALT <y> CAT SHA256 DUP TOALT DATACOMMIT

if the altstack was otherwise unused, I guess; so the accumulator
behaviour probably warrants something better.

It also more or less gives you CHECKSIGFROMSTACK behaviour by doing
"SWAP OP_DATACOMMIT OP_CHECKSIG" and a SIGHASH_NONE|ANYPREVOUTANYSCRIPT
signature.

But that seems like a plausible generalisation to think about?

Cheers,
aj



Re: [bitcoin-dev] Signing CHECKSIG position in Tapscript

2019-12-03 Thread Anthony Towns via bitcoin-dev
On Sun, Dec 01, 2019 at 11:09:54AM -0500, Russell O'Connor wrote:
> On Thu, Nov 28, 2019 at 3:07 AM Anthony Towns  wrote:
> First, it seems like a bad idea for Alice to have put funds behind a
> script she doesn't understand in the first place. There's plenty of
> scripts that are analysable, so just not using ones that are too hard to
> analyse sure seems like an option.
> I don't think this is true in general.  When constructing a script it seems
> quite reasonable for one party to come to the table with their own custom
> script that they want to use because they have some sort of 7-of-11 scheme but
> in one of those cases is really a 2-of-3 and another is 5-of-6.  The point is
> that you shouldn't need to decode their exact policy in order to collaborate
> with them.

Hmm, I take the opposite lesson from your scenario -- it's only fine for
people to bring their own 2-of-3 or 5-of-6 or whatever and replace a
simple key if you've got something like miniscript where you understand
the script completely enough that you can be sure those changes are
fine. 

For contrast, with ECDSA and pre-miniscript, the above scenario might
have gone like someone proposing to change:

  7 A B C1 C2 C3 C4 C5 C6 C7 C8 C9 11 CHECKMULTISIG

for something like

  7
  SWAP IF TOALT 2 A1 A2 A3 3 CHECKMULTISIGVERIFY FROMALT 1SUB ENDIF
  SWAP IF TOALT 5 B1 B2 B3 B4 B5 B6 6 CHECKMULTISIGVERIFY FROMALT 1SUB ENDIF
  C1 C2 C3 C4 C5 C6 C7 C8 C9 11 CHECKMULTISIG

but I think you'd want to be pretty sure you can decode those added
policies rather than just accepting it because your "C4" key is still
there. (In particular, any script fragment that uses an opcode that used
to be OP_SUCCESS could have arbitrary effects on the script)

[0]

> This notion is captured quite clearly in the MAST aspect of
> taproot. In many circumstances, it is sufficient for you to know that there
> exists a branch that contains a particular script without need to know what
> every branch contains.

(I'm trying to avoid using MAST in the context of taproot, despite the
backronym, so please excuse the rephrasing--)

I think if you're going to start using a taproot address with multiple
tapscripts, either as a participant in a multiparty smart contract,
or just to have different ways of spending your funds, then you do have
to analyse all the branches to make sure there's no hidden "all the
money goes to the Lizard People" script.

Once you've done that, you can then simplify things -- maybe some
scripts are only useful for other participants in the contract, or maybe
you've got a few different hardware wallets and one only needs to know
about one branch, while the other only needs to know about some other
branch, but you still need to have done the analysis in the first place.

Of course, probably most of the time that "analysis" is just making sure
the scripts match some well known, hardcoded template, as filled out
with various (tweaked) keys that you've checked elsewhere, but that
still ensures you know all the scripts do what you need them too.

> Third, if you are doing something crazy complex where a particular key
> could appear in different CHECKSIG operators and they should have
> independent signatures, that seems like you're at the level of
> complexity where learning about CODESEPARATOR is a reasonable thing to
> do.
> So while I agree that learning about CODESEPARATOR is a reasonable thing to 
> do,
> given that I haven't heard the CODESEPARATOR being proposed as protection
> against this sort of signature-copying attack before

Err? The current behaviour of CODESEP with taproot was first discussed in
[1], which summarised it as "CODESEP -- lets you require different sigs
for different parts of a single script" which seems to me like just a
different way of saying the same thing.

[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016500.html

I don't think tapscript's CODESEP or the current CODESEP can be used
for anything other than preventing a signature from being reused for a
different CHECKSIG operation on the same pubkey within the same script.

> and given the subtle
> nature of the issue, I'm not sure people will know to use it to protect
> themselves.  We should aim for a Script design that makes the cheaper default
> Script programming choices the safer one.

I think techniques like miniscript and having fixed templates specified
in BIPs and BOLTs and the like are better approaches -- both let you
easily allow a limited set of changes that can be safely made to a policy
(maybe just substituting keys, hashes and times, maybe allowing more
general changes).

> On the other hand, in a previous thread a while ago I was also arguing that
> sophisticated people are plausibly using CODESEPARATOR today, hidden away in
> unredeemed P2SH UTXOs.  So perhaps I'm right about at least one of these two
> points. :)

Sounds like an economics argument :)

>  IF HASH160 x EQUALVERIFY groupa 

Re: [bitcoin-dev] Signing CHECKSIG position in Tapscript

2019-11-28 Thread Anthony Towns via bitcoin-dev
On Wed, Nov 27, 2019 at 04:29:32PM -0500, Russell O'Connor via bitcoin-dev 
wrote:
> The current tapscript proposal requires a signature on the last executed
> CODESEPARATOR position.  I'd like to propose an amendment whereby instead of
> signing the last executed CODESEPARATOR position, we simply always sign the
> position of the CHECKSIG (or other signing opcode) being executed.

FWIW, there's discussion of this at
http://www.erisian.com.au/taproot-bip-review/log-2019-11-28.html#l-65

> However, unless CODESEPARATOR is explicitly used, there is no protection
> against these sorts of attacks when there are multiple participants that have
> signing conditions within a single UTXO (or rather within a single tapleaf in
> the tapscript case).

(You already know this, but:)

With taproot key path spending, the only other conditions that can be
placed on a transaction are nSequence, nLockTime, and the annex, all of
which are committed to via the signature; so I think this concern only
applies to taproot script path spending.

The proposed sighashes for taproot script path spending all commit to
the script being used, so you can't reuse the signature in a different
leaf of the merkle tree of scripts for the UTXO, only in a separate
execution path within the script you're already looking at.

> So for example, if Alice and Bob are engaged in some kind of multi-party
> protocol, and Alice wants to pre-sign a transaction redeeming a UTXO but
> subject to the condition that a certain hash-preimage is revealed, she might
> verify the Script template shows that the code path to her public key enforces
> that the hash pre-image is revealed (using a toolkit like miniscript can aid 
> in
> this), and she might make her signature feeling secure that, if her
> signature is used, the required preimage must be revealed on the blockchain.
> But perhaps Bob has masqueraded Alice's pubkey as his own, and maybe he has
> inserted a copy of Alice's pubkey into a different path of the Script
> template.
>
> Now Alice's signature can be copied and used in this alternate path,
> allowing the UTXO to be redeemed under circumstances that Alice didn't believe
> she was authorizing.  In general, to protect herself, Alice needs to inspect
> the Script to see if her pubkey occurs in any other branch.  Given that her
> pubkey, in principle, could be derived from a computation rather that pushed
> directly into the stack, it is arguably infeasible for Alice to perform the
> required check in general.

First, it seems like a bad idea for Alice to have put funds behind a
script she doesn't understand in the first place. There's plenty of
scripts that are analysable, so just not using ones that are too hard to
analyse sure seems like an option.

Second, if there are many branches in the script, it's probably more
efficient to do them via different branches in the merkle tree, which
at least for this purpose would make them easier to analyse as well
(since you can analyse them independently).

Third, if you are doing something crazy complex where a particular key
could appear in different CHECKSIG operators and they should have
independent signatures, that seems like you're at the level of
complexity where learning about CODESEPARATOR is a reasonable thing to
do.

I think CODESEPARATOR is a better solution to this problem anyway. In
particular, consider a "<leaf> <path> <root> OP_MERKLEPATHVERIFY" opcode,
and a script that says "anyone in group A can spend if the preimage for
X is revealed, anyone in group B can spend unconditionally":

 IF HASH160 <x> EQUALVERIFY <groupa> ELSE <groupb> ENDIF
 MERKLEPATHVERIFY CHECKSIG

spendable by

 <siga> <keya> <path> <preimagex> 1

or

 <sigb> <keyb> <path> 0

With your proposed semantics, if my pubkey is in both groups, my signature
will sign for position 10, and still be valid on either path, even if
the signature commits to the CHECKSIG position.

I could fix my script either by having two CHECKSIG opcodes (one for
each branch) and also duplicating the MERKLEPATHVERIFY; or I could
add a CODESEPARATOR in either IF branch.

(Or I could just not reuse the exact same pubkey across groups; or I could
have two separate scripts: "HASH160 <x> EQUALVERIFY <groupa> MERKLEPATHVERIFY
CHECKSIG" and "<groupb> MERKLEPATHVERIFY CHECKSIG")

> I believe that it would be safer, and less surprising to users, to always sign
> the CHECKSIG position by default.

> As a side benefit, we get to eliminate CODESEPARATOR, removing a fairly 
> awkward
> opcode from this script version.

As it stands, ANYPREVOUTANYSCRIPT proposes to not sign the script code
(allowing the signature to be reused in different scripts) but does
continue signing the CODESEPARATOR position, allowing you to optionally
restrict how flexibly you can reuse signatures. That seems like a better
tradeoff than having ANYPREVOUTANYSCRIPT signatures commit to the CHECKSIG
position which would make it a fair bit harder to design scripts that
can share signatures, or not having any way to restrict 

Re: [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-05 Thread Anthony Towns via bitcoin-dev
On Thu, Oct 03, 2019 at 01:08:29PM +0200, Christian Decker wrote:
> >  * anyprevout signatures make the address you're signing for less safe,
> >which may cause you to lose funds when additional coins are sent to
> >the same address; this can be avoided if handled with care (or if you
> >don't care about losing funds in the event of address reuse)
> Excellent points, I had missed the hidden nature of the opt-in via
> pubkey prefix while reading your proposal. I'm starting to like that
> option more and more. In that case we'd only ever be revealing that we
> opted into anyprevout when we're revealing the entire script anyway, at
> which point all fungibility concerns go out the window anyway.
>
> Would this scheme be extendable to opt into all sighash flags the
> outpoint would like to allow (e.g., adding opt-in for sighash_none and
> sighash_anyonecanpay as well)? That way the pubkey prefix could act as a
> mask for the sighash flags and fail verification if they don't match.

For me, the thing that distinguishes ANYPREVOUT/NOINPUT as warranting
an opt-in step is that it affects the security of potentially many
UTXOs at once; whereas all the other combinations (ALL,SINGLE,NONE
cross ALL,ANYONECANPAY) still commit to the specific UTXO being spent,
so at most you only risk somehow losing the funds from the specific UTXO
you're working with (apart from the SINGLE bug, which taproot doesn't
support anyway).

Having a meaningful prefix on the taproot scriptpubkey (ie paying to
"[SIGHASH_SINGLE][32B pubkey]") seems like it would make it a bit easier
to distinguish wallets, which taproot otherwise avoids -- "oh this address
is going to be a SIGHASH_SINGLE? probably some hacker, let's ban it".

> > I think it might be good to have a public testnet (based on Richard Myers
> > et al's signet2 work?) where we have some fake exchanges/merchants/etc
> > and scheduled reorgs, and demo every weird noinput/anyprevout case anyone
> > can think of, and just work out if we need any extra code/tagging/whatever
> > to keep those fake exchanges/merchants from losing money (and write up
> > the weird cases we've found in a wiki or a paper so people can easily
> > tell if we missed something obvious).
> That'd be great, however even that will not ensure that every possible
> corner case is handled [...]

Well, sure. I'm thinking of it more as a *necessary* step than a
*sufficient* one, though. If we can't demonstrate that we can deal with
the theoretical attacks people have dreamt up in a "laboratory" setting,
then it doesn't make much sense to deploy things in a real world setting,
does it?

I think if it turns out that we can handle every case we can think of
easily, that will be good evidence that output tagging and the like isn't
necessary; and conversely if it turns out we can't handle them easily,
it at least gives us a chance to see how output tagging (or chaperone
sigs, or whatever else) would actually work, and if they'd provide any
meaningful protection at all. At the moment the best we've got is ideas
and handwaving...

Cheers,
aj



Re: [bitcoin-dev] [Lightning-dev] Continuing the discussion about noinput / anyprevout

2019-10-02 Thread Anthony Towns via bitcoin-dev
On Wed, Oct 02, 2019 at 02:03:43AM +, ZmnSCPxj via Lightning-dev wrote:
> So let me propose the more radical excision, starting with SegWit v1:
> * Remove `SIGHASH` from signatures.
> * Put `SIGHASH` on public keys.
>   OP_SETPUBKEYSIGHASH

I don't think you could reasonably do this for key path spends -- if
you included the sighash as part of the scriptpubkey explicitly, that
would lose some of the indistinguishability of taproot addresses, and be
more expensive than having the sighash be in witness data. So I think
that means sighashes would still be included in key path signatures,
which would make the behaviour a little confusingly different between
signing for key path and script path spends.

> This removes the problems with `SIGHASH_NONE` `SIGHASH_SINGLE`, as they are 
> allowed only if the output specifically says they are allowed.

I don't think the problems with NONE and SINGLE are any worse than using
SIGHASH_ALL to pay to "1*G" -- someone may steal the money you send,
but that's as far as it goes. NOINPUT/ANYPREVOUT is worse in that if
you use it, someone may steal funds from other UTXOs too -- similar
to nonce-reuse. So I think having to commit to enabling NOINPUT for an
address may make sense; but I don't really see the need for doing the
same for other sighashes generally.

FWIW, one way of looking at a transaction spending UTXO "U" to address
"A" is something like:

 * "script" lets you enforce conditions on the transaction when you
   create "A" [0]
 * "sighash" lets you enforce conditions on the transaction when
   you sign the transaction
 * nlocktime, nsequence, taproot annex are ways you express conditions
   on the transaction

In that view, "sighash" is actually an *extremely* simple scripting
language itself (with a total of six possible scripts).
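
Spelling the six out:

    from itertools import product

    for base, acp in product(["ALL", "NONE", "SINGLE"], ["", "|ANYONECANPAY"]):
        print("SIGHASH_" + base + acp)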

That doesn't seem like a bad design to me, fwiw.

Cheers,
aj

[0] "graftroot" lets you update those conditions for address "A" after
the fact


Re: [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-01 Thread Anthony Towns via bitcoin-dev
On Mon, Sep 30, 2019 at 03:23:56PM +0200, Christian Decker via bitcoin-dev 
wrote:
> With the recently renewed interest in eltoo, a proof-of-concept implementation
> [1], and the discussions regarding clean abstractions for off-chain protocols
> [2,3], I thought it might be time to revisit the `sighash_noinput` proposal
> (BIP-118 [4]), and AJ's `bip-anyprevout` proposal [5].

Hey Christian, thanks for the write up!

> ## Open questions
> The questions that remain to be addressed are the following:
> 1.  General agreement on the usefulness of noinput / anyprevoutanyscript /
> anyprevout[?]
> 2.  Is there strong support or opposition to the chaperone signatures[?]
> 3.  The same for output tagging / explicit opt-in[?]
> 4.  Shall we merge BIP-118 and bip-anyprevout. This would likely reduce the
> confusion and make for simpler discussions in the end.

I think there's an important open question you missed from this list:
(1.5) do we really understand what the dangers of noinput/anyprevout-style
constructions actually are?

My impression on the first 3.5 q's is: (1) yes, (1.5) not really,
(2) weak opposition for requiring chaperone sigs, (3) mixed (weak)
support/opposition for output tagging.

My thinking at the moment (subject to change!) is:

 * anyprevout signatures make the address you're signing for less safe,
   which may cause you to lose funds when additional coins are sent to
   the same address; this can be avoided if handled with care (or if you
   don't care about losing funds in the event of address reuse)

 * being able to guarantee that an address can never be signed for with
   an anyprevout signature is therefore valuable; so having it be opt-in
   at the tapscript level, rather than a sighash flag available for
   key-path spends is valuable (I call this "opt-in", but it's hidden
   until use via taproot rather than "explicit" as output tagging
   would be)

 * receiving funds spent via an anyprevout signature does not involve any
   qualitatively new double-spending/malleability risks.
   
   (eltoo is unavoidably malleable if there are multiple update
   transactions (and chaperone signatures aren't used or are used with
   well known keys), but while it is better to avoid this where possible,
   it's something that's already easily dealt with simply by waiting
   for confirmations, and whether a transaction is malleable is always
   under the control of the sender not the receiver)

 * as such, output tagging is also unnecessary, and there is also no
   need for users to mark anyprevout spends as "tainted" in order to
   wait for more confirmations than normal before considering those funds
   "safe"

I think it might be good to have a public testnet (based on Richard Myers
et al's signet2 work?) where we have some fake exchanges/merchants/etc
and scheduled reorgs, and demo every weird noinput/anyprevout case anyone
can think of, and just work out if we need any extra code/tagging/whatever
to keep those fake exchanges/merchants from losing money (and write up
the weird cases we've found in a wiki or a paper so people can easily
tell if we missed something obvious).

Cheers,
aj



Re: [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-01 Thread Anthony Towns via bitcoin-dev
On Mon, Sep 30, 2019 at 11:28:43PM +, ZmnSCPxj via bitcoin-dev wrote:
> Suppose rather than `SIGHASH_NOINPUT`, we created a new opcode, 
> `OP_CHECKSIG_WITHOUT_INPUT`.

I don't think there's any meaningful difference between making a new
opcode and making a new tapscript public key type; the difference is
just one of encoding:

   33 01 <32-byte pubkey> AC   [CHECKSIG of public key type 0x01]
   32 <32-byte pubkey> B3      [CHECKSIG_WITHOUT_INPUT (replacing NOP4) of key]

> This new opcode ignores any `SIGHASH` flags, if present, on a signature,

(How sighash flags are treated can be redefined by new public key types;
if that's not obvious already)

Cheers,
aj



Re: [bitcoin-dev] Taproot proposal

2019-06-28 Thread Anthony Towns via bitcoin-dev
On Wed, Jun 26, 2019 at 08:08:01PM -0400, Russell O'Connor via bitcoin-dev 
wrote:
> I have a comment about the 'input_index' of the transaction digest for taproot
> signatures.  It is currently listed as 2 bytes.  I think it would be better to
> expand that to 4 bytes.

FWIW, I think this would be essentially free, at least for the current
sighash modes, as (I think) all the non-ANYONECANPAY modes have at least
4 bytes of sha256 padding at present.

In addition to (or, perhaps, as a special case of) the reasons Russell
gives, I think this change would also better support proof-of-reserves
via taproot signatures (cf [0] or BIP 127), as it would allow the proof
tx to include more than 65k utxos with each utxo being signed with a
signature that commits to all inputs including the invalid placeholder.

[0] 
https://blockstream.com/2019/02/04/en-standardizing-bitcoin-proof-of-reserves/

If you didn't have this, but wanted to do proof-of-reserves over >65k
taproot UTXOs, you could use ANYONECANPAY signatures, and use the output
amounts to ensure the signatures can't be abused, something like:

   inputs:
     0: spend from txid .. vout 0, no witness data
     1: utxo1, signed with ANYONECANPAY|ALL
     2: utxo2, signed with ANYONECANPAY|ALL
     3: utxo3, signed with ANYONECANPAY|ALL
     [etc]

   outputs:
     0: sum(utxo1..utxoN), pay to self
     1: 2099999997690001 - sum(utxo1..utxoN), payable to whatever

The total output value is therefore one satoshi more bitcoin than there
could ever have been, so none of the utxoK signatures can be reused on the
blockchain (unless there's severe inflation due to bugs or hardforks),
but the values (and sums) all remain less than 21M BTC so it also won't
fail the current "amount too big" sanity checks.
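
Checking the numbers (amounts in satoshis; 2099999997690000 is the
commonly quoted expected final supply, and the reserve utxo values here
are made up):

    MAX_MONEY = 21_000_000 * 100_000_000
    EXPECTED_SUPPLY = 2_099_999_997_690_000

    utxos = [1_200_000_000, 350_000_000, 5_000]
    out0 = sum(utxos)                     # pay to self
    out1 = EXPECTED_SUPPLY + 1 - out0     # payable to whatever
    assert out0 + out1 == EXPECTED_SUPPLY + 1   # one sat more than can exist
    assert max(out0, out1) < MAX_MONEY          # passes amount sanity checks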

That seems a bit more fragile/complicated than using SIGHASH_ALL for
everything, though it means your cold wallet doesn't have to serialize
your >65k transactions to verify it's signing what it thinks it is.

> [1]The var-integer field for the number of inputs (and the number of outputs)
> in a transaction looks like it should allow up to 2^64-1 inputs; however this
> is
> an illusion.  The P2P rules dictate that these values are immediately taken
> modulo 2^32 after decoding.  For example, if the number of inputs is a
> var-integer encoding of 0x0100000001, it is actually just a non-canonical way
> of encoding that there is 1 input.  Try this at home!

Hmm? If I'm following what you mean, that's not the P2P rules, it's the
Unserialize code, in particular:

  compat/assumptions.h:52:static_assert(sizeof(int) == 4, "32-bit int assumed");

  serialize.h:289:uint64_t ReadCompactSize(Stream& is)

  serialize.h-679-template<typename Stream, unsigned int N, typename T, typename V>
  serialize.h-680-void Unserialize_impl(Stream& is, prevector<N, T>& v, const V&)
  serialize.h-681-{
  serialize.h-682-v.clear();
  serialize.h:683:unsigned int nSize = ReadCompactSize(is);

  (and other Unserialize_impl implementations)

However, ReadCompactSize throws "size too large" if the return value is
greater than MAX_SIZE == 0x02000000 =~ 33.5M, which happens prior to the
implicit cast to 32 bits in Unserialize_impl. And it looks like that check's been
there since Satoshi...

So as far as I can see, that encoding's just unsupported/invalid, rather
than equivalent/non-canonical?

Cheers,
aj



Re: [bitcoin-dev] OP_SECURETHEBAG (supersedes OP_CHECKOUTPUTSVERIFY)

2019-06-21 Thread Anthony Towns via bitcoin-dev
On Tue, Jun 18, 2019 at 04:57:34PM -0400, Russell O'Connor wrote:
> So with regards to OP_SECURETHEBAG, I am also "not really seeing any reason to
> complicate the spec to ensure the digest is precommitted as part of the
> opcode."

Also, I think you can simulate OP_SECURETHEBAG with an ANYPREVOUT
(NOINPUT) sighash (Johnson Lau's mentioned this before, but not sure if
it's been spelled out anywhere); ie instead of constructing

  X = Hash_BagHash( version, locktime, [outputs], [sequences], num_in )

and having the script be "<X> OP_SECURETHEBAG", you calculate an
ANYPREVOUT sighash for SIGHASH_ANYPREVOUTANYSCRIPT | SIGHASH_ALL:

  Y = Hash_TapSighash( 0, 0xc1, version, locktime, [outputs], 0,
                       amount, sequence)

and calculate a signature sig = Schnorr(P,m) for some pubkey P, and
make your script be "<sig> <P> CHECKSIG".

That loses the ability to commit to the number of inputs or restrict
the nsequence of other inputs, and requires a bigger script (sig and P
are ~96 bytes instead of X's 32 bytes), but is otherwise pretty much the
same as far as I can tell. Both scripts are automatically satisfied when
revealed (with the correct set of outputs), and don't need any additional
witness data.

If you wanted to construct "X" via script instead of hardcoding a value
because it got you generalised covenants or whatever, I think you could
get the same effect with CAT, LEFT, and RIGHT: you'd construct Y in much
the same way you construct X, but you'd then need to turn that into a
signature. You could do so by using pubkey P=G and nonce R=G, which
means you need to calculate s=1+hash(G,G,Y)*1 -- calculating the hash
part is easy, multiplying it by 1 is easy, and to add 1 you can probably
do something along the lines of:

OP_DUP 4 OP_RIGHT 1 OP_ADD OP_SWAP 28 OP_LEFT OP_SWAP OP_CAT

(ie, take the last 4 bytes, increment it using 4-byte arithmetic,
then cat the first 28 bytes and the result. There's overflow issues,
but I think they can be worked around either by allowing you to choose
different locktimes, or by more complicated script)
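
As a sanity check on that arithmetic, a Python sketch (the bip-schnorr
style challenge e = hash(R, P, m) is assumed; exact serialization and
tagging glossed over):

    import hashlib

    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    G = bytes.fromhex("0279be667ef9dcbbac55a06295ce870b"
                      "07029bfcdb2dce28d959f2815b16f81798")

    def challenge(Y: bytes) -> int:
        # stand-in for hash(G, G, Y)
        return int.from_bytes(hashlib.sha256(G + G + Y).digest(), "big") % N

    def sig_scalar(Y: bytes) -> int:
        # with privkey x = 1 (so P = G) and nonce k = 1 (so R = G):
        # s = k + e*x = 1 + hash(G, G, Y)*1
        return (1 + challenge(Y)) % N

    def add_one_to_last_4_bytes(h: bytes) -> bytes:
        # script-level version of the "+1", mirroring
        #   OP_DUP 4 OP_RIGHT 1 OP_ADD OP_SWAP 28 OP_LEFT OP_SWAP OP_CAT
        # (script's number endianness glossed over; a carry past the 4-byte
        # boundary is the overflow issue mentioned above)
        tail = (int.from_bytes(h[28:32], "big") + 1) & 0xFFFFFFFF
        return h[:28] + tail.to_bytes(4, "big")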

Cheers,
aj



Re: [bitcoin-dev] OP_SECURETHEBAG (supersedes OP_CHECKOUTPUTSVERIFY)

2019-06-05 Thread Anthony Towns via bitcoin-dev
On Fri, May 31, 2019 at 10:35:45PM -0700, Jeremy via bitcoin-dev wrote:
> OP_CHECKOUTPUTSHASHVERIFY is retracted in favor of OP_SECURETHEBAG*.

I think you could generalise that slightly and make it fit in
with the existing opcode naming by calling it something like
"OP_CHECKTXDIGESTVERIFY", pulling a 33-byte value from the stack
(consisting of a sha256 hash and a sighash byte), and adding a new
sighash value corresponding to the set of info you want to include in
the hash, which I think sounds a bit like "SIGHASH_EXACTLY_ONE_INPUT |
SIGHASH_ALL".
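
That is, the opcode's stack argument would split apart something like
this (a Python sketch; the exact 33-byte layout is an assumption, not
anything specified):

    def parse_checktxdigestverify_arg(item: bytes):
        # hypothetical layout: 32-byte sha256 tx digest, then one sighash byte
        if len(item) != 33:
            raise ValueError("expected 33 bytes")
        return item[:32], item[32]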

FWIW, I'm not really seeing any reason to complicate the spec to ensure
the digest is precommitted as part of the opcode.

Cheers,
aj



Re: [bitcoin-dev] An alternative: OP_CAT & OP_CHECKSIGFROMSTACK

2019-05-27 Thread Anthony Towns via bitcoin-dev
On Wed, May 22, 2019 at 05:01:21PM -0400, Russell O'Connor via bitcoin-dev wrote:
> Bitcoin Script appears designed to be a flexible programmable system that
> provides generic features to be composed to achieve various purposes.

Counterpoint: haven't the flexibly designed parts of script mostly been
a failure -- requiring opcodes to be disabled due to DoS vectors or
consensus bugs, and mostly not being useful in practice where they're
still enabled in BTC or on other chains where they have been re-enabled
(eg, Liquid and BCH)?

> Instead, I propose that, for the time being, we simply implement OP_CAT and
> OP_CHECKSIGFROMSTACKVERIFY.

FWIW, I'd like to see CAT enabled, though I'm less convinced about a
CHECKSIG that takes the message from the stack. I think CAT's plausibly
useful in practice, but a sig against data from the stack seems more
useful in theory than in practice. Has it actually seen use on BCH or
Liquid, eg?  (Also, I think BCH's name for that opcode makes more sense
than Elements' -- all the CHECKSIG opcodes pull a sig from the stack,
after all)

> * Transaction introspection including:
> + Simulated SIGHASH_ANYPREVOUT, which are necessarily chaperoned simply by the
> nature of the construction.

I think simulating an ANYPREVOUT sig with a data signature means checking:

S1 P CHECKSIG -- to check S1 is a signature for the tx

S1 H_TapSighash(XAB) P CHECKDATASIG
 -- to pull out the tx data ("X", "A", "B")

S2 H_TapSighash(XCB) Q CHECKDATASIG
 -- for the ANYPREVOUT sig, with A changed to C to
    avoid committing to prevout info

X SIZE 42 EQUALVERIFY
B SIZE 47 EQUALVERIFY
 -- to make sure only C is replaced from "XCB"

So to get all those conditions checked, I think you could do:

   P 2DUP TOALT TOALT CHECKSIGVERIFY
   SIZE 42 EQUALVERIFY
   "TapSighash" SHA256 DUP CAT SWAP CAT TOALT
   SIZE 47 EQUALVERIFY TUCK
   CAT FROMALT TUCK SWAP CAT SHA256 FROMALT SWAP FROMALT
   CHECKDATASIGVERIFY
   SWAP TOALT SWAP CAT FROMALT CAT SHA256 Q CHECKDATASIG
   
Where the stack elements are, from top to bottom:

   S1: (65B) signature by P of tx
   X:  (42B) start of TapSighash spec
   B:  (47B) end of TapSighash spec (amount, nSequence, tapleaf_hash,
 key_version, codesep_pos)
   A:  (73B) middle of TapSighash spec dropped for ANYPREVOUT (spend_type,
 scriptPubKey and outpoint)
   C:   (1B) alternate middle (different spend_type)
   S2: (64B) signature of "XCB" by key Q
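
To make the data flow concrete, here's a Python sketch of the two
digests the CHECKDATASIG(VERIFY) calls compare signatures against
(using taproot's tagged-hash construction, which the script assembles
with CAT):

    import hashlib

    def sha256(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def tap_tagged_hash(data: bytes) -> bytes:
        # mirrors: "TapSighash" SHA256 DUP CAT SWAP CAT ... SHA256
        tag = sha256(b"TapSighash")
        return sha256(tag + tag + data)

    def digest_full(X: bytes, A: bytes, B: bytes) -> bytes:
        return tap_tagged_hash(X + A + B)  # what S1 signs, checked against P

    def digest_apo(X: bytes, C: bytes, B: bytes) -> bytes:
        return tap_tagged_hash(X + C + B)  # what S2 signs, checked against Q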

So 298B for the witness data, and 119B or so for the script (if I've not
made mistakes), versus "P CHECKSIGVERIFY Q CHECKSIG" and S2 and S1 on
the stack, for 132B of witness data and 70B of script, or half that if
the chaperone requirement is removed.

I think you'd need to complicate it a bit further to do the
ANYPREVOUTANYSCRIPT variant, where you retain the commitment to
amount/nseq but drop the commitment to tapleaf_hash.

> I feel that this style of generic building blocks truly embodies what is meant
> by "programmable money".

For practical purposes, this doesn't seem like a great level of
abstraction to me. It's certainly better at "permissionless innovation"
though.

You could make these constructions a little bit simpler by having a
"CHECK_SIG_MSG_VERIFY" opcode that accepts [sig msg key], and does "sig
key CHECKSIGVERIFY" but also checks the the provided msg was what was
passed into bip-schnorr.

Cheers,
aj


