Re: [bitcoin-dev] Improving BIP 8 soft fork activation

2022-05-12 Thread Greg Sanders via bitcoin-dev
I think you may be confused. Mandatory signaling is not the same thing as
mandatory activation on timeout, aka Lock On Timeout aka LOT=true.

These are two related but separate things.

On Thu, May 12, 2022, 6:53 PM alicexbt via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Russell,
>
>
> As far as I understand things, I believe the whole notion of a MUST_SIGNAL
> state is misguided today. Please correct me if I'm misunderstanding
> something here.
>
> Back when BIP8 was first proposed by Shaolin Fry, we were in a situation
> where many existing clients waiting for segwit signalling had already been
> deployed. The purpose of mandatory signaling at that point in time was to
> ensure all these existing clients would be activated together with any BIP8
> clients.
>
>
> I wouldn't consider it misguided. Not using MUST_SIGNAL gives an opportunity
> for drama and politics during signaling. The MUST_SIGNAL phase is initiated
> when height + 2016 >= timeoutheight, and if a mining pool is still not sure
> about signaling at that point, maybe it is not interested in mining bitcoin
> anymore.
>
> Rephrasing 'motivation' section in BIP 8:
>
> BIP 9 activation is dependent on near-unanimous hashrate signaling, which
> may be impractical and can result in a veto by a small minority of
> non-signaling hashrate. All consensus rules are ultimately enforced by full
> nodes, so eventually any new soft fork will be enforced by the economy. BIP 8
> provides optional flag-day activation after a reasonable time, as well as
> accelerated activation by a majority of hash rate before the flag date.
>
> We also don't need such a signal span over multiple blocks. Indeed, using
> version bits and signaling over multiple blocks is quite bad because it
> risks losing mining power if miners don't conform, or are unable to
> conform, to the version bits signal. (Recall at the time taproot's
> signaling period started, the firmware needed for many miners to signal
> version bits did not even exist yet!).
>
>
> Solutions to these problems:
>
> 1) Developers plan and ship the binaries with activation code in time.
> 2) Mining pools pay attention, participate in soft fork discussions, hire
> competent developers, and reach out to developers in the community if they
> require help.
> 3) Mining pools understand the loss involved in mining invalid blocks and
> upgrade during the first month of signaling.
>
> If some mining pools still mine invalid blocks, Bitcoin should still work
> normally, as it did during May-June 2021 when roughly 50% of the hashrate
> went offline due to mining restrictions in China.
>
>
> /dev/fd0
>
> Sent with ProtonMail secure email.
>
> --- Original Message ---
> On Thursday, May 12th, 2022 at 12:52 AM, Russell O'Connor via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Hi alicexbt,
>
> As far as I understand things, I believe the whole notion of a MUST_SIGNAL
> state is misguided today. Please correct me if I'm misunderstanding
> something here.
>
> Back when BIP8 was first proposed by Shaolin Fry, we were in a situation
> where many existing clients waiting for segwit signalling had already been
> deployed. The purpose of mandatory signaling at that point in time was to
> ensure all these existing clients would be activated together with any BIP8
> clients.
>
> However, if such other clients do not exist, the MUST_SIGNAL state no
> longer accomplishes its purpose. Going forward, I think there is little
> reason to expect such other clients to exist alongside a BIP8 deployment.
> If everyone uses a BIP8 deployment, then there are no other clients to
> activate. Alternatively, Speedy Trial was specifically designed to avoid
> this parallel deployment for the reason that several people object to
> allowing their client's non-BIP8 activation logic to be hijacked in this
> manner.
>
> Now I understand that some people would like *some* signal on the chain
> that indicates a soft-fork activation in order to allow people who object
> to the fork to make an "anti-fork" that rejects blocks containing the
> soft-fork signal. And while some sort of mandatory version bit signaling
> *could* be used for this purpose, we do not *have* to use version bits. We
> also don't need such a signal span over multiple blocks. Indeed, using
> version bits and signaling over multiple blocks is quite bad because it
> risks losing mining power if miners don't conform, or are unable to
> conform, to the version bits signal. (Recall at the time taproot's
> signaling period started, the firmware needed for many miners to signal
> version bits did not even exist yet!).
>
> A soft-fork signal to enable an "anti-fork" only needs to be on a single
> block and it can be almost anything. For example we could have a signal
> that at the block at lockin or perhaps the block at activation requires
> that the coinbase must *not* contain the suffix "taproot sucks!". This
> suffices to prepare an "anti-fork" which would simply require that 

Re: [bitcoin-dev] Improving BIP 8 soft fork activation

2022-05-12 Thread alicexbt via bitcoin-dev
Hi Russell,

> As far as I understand things, I believe the whole notion of a MUST_SIGNAL 
> state is misguided today. Please correct me if I'm misunderstanding something 
> here.
> Back when BIP8 was first proposed by Shaolin Fry, we were in a situation 
> where many existing clients waiting for segwit signalling had already been 
> deployed. The purpose of mandatory signaling at that point in time was to 
> ensure all these existing clients would be activated together with any BIP8 
> clients.

I wouldn't consider it misguided. Not using MUST_SIGNAL gives an opportunity 
for drama and politics during signaling. The MUST_SIGNAL phase is initiated 
when height + 2016 >= timeoutheight, and if a mining pool is still not sure 
about signaling at that point, maybe it is not interested in mining bitcoin 
anymore.
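
For illustration, a rough Python sketch of just that transition condition
(simplified; the signaling threshold, LOCKED_IN handling, and the exact BIP 8
pseudocode are omitted):

    # One signaling period is 2016 blocks.
    PERIOD = 2016

    def starts_must_signal(height, timeoutheight, lockinontimeout):
        # True if the period beginning at `height` is the last full period
        # before the timeout, i.e. the MUST_SIGNAL period described above.
        return lockinontimeout and height + PERIOD >= timeoutheight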

Rephrasing 'motivation' section in BIP 8:

BIP 9 activation is dependent on near-unanimous hashrate signaling, which may 
be impractical and can result in a veto by a small minority of non-signaling 
hashrate. All consensus rules are ultimately enforced by full nodes, so 
eventually any new soft fork will be enforced by the economy. BIP 8 provides 
optional flag-day activation after a reasonable time, as well as accelerated 
activation by a majority of hash rate before the flag date.

> We also don't need such a signal span over multiple blocks. Indeed, using 
> version bits and signaling over multiple blocks is quite bad because it risks 
> losing mining power if miners don't conform, or are unable to conform, to the 
> version bits signal. (Recall at the time taproot's signaling period started, 
> the firmware needed for many miners to signal version bits did not even exist 
> yet!).

Solutions to these problems:

1) Developers plan and ship the binaries with activation code in time.
2) Mining pools pay attention, participate in soft fork discussions, hire 
competent developers, and reach out to developers in the community if they 
require help.
3) Mining pools understand the loss involved in mining invalid blocks and 
upgrade during the first month of signaling.

If some mining pools still mine invalid blocks, Bitcoin should still work 
normally, as it did during May-June 2021 when roughly 50% of the hashrate 
went offline due to mining restrictions in China.

/dev/fd0

Sent with [ProtonMail](https://protonmail.com/) secure email.

--- Original Message ---
On Thursday, May 12th, 2022 at 12:52 AM, Russell O'Connor via bitcoin-dev 
 wrote:

> Hi alicexbt,
>
> As far as I understand things, I believe the whole notion of a MUST_SIGNAL 
> state is misguided today. Please correct me if I'm misunderstanding something 
> here.
>
> Back when BIP8 was first proposed by Shaolin Fry, we were in a situation 
> where many existing clients waiting for segwit signalling had already been 
> deployed. The purpose of mandatory signaling at that point in time was to 
> ensure all these existing clients would be activated together with any BIP8 
> clients.
>
> However, if such other clients do not exist, the MUST_SIGNAL state no longer 
> accomplishes its purpose. Going forward, I think there is little reason to 
> expect such other clients to exist alongside a BIP8 deployment. If everyone 
> uses a BIP8 deployment, then there are no other clients to activate. 
> Alternatively, Speedy Trial was specifically designed to avoid this parallel 
> deployment for the reason that several people object to allowing their 
> client's non-BIP8 activation logic to be hijacked in this manner.
>
> Now I understand that some people would like *some* signal on the chain that 
> indicates a soft-fork activation in order to allow people who object to the 
> fork to make an "anti-fork" that rejects blocks containing the soft-fork 
> signal. And while some sort of mandatory version bit signaling *could* be 
> used for this purpose, we do not *have* to use version bits. We also don't 
> need such a signal span over multiple blocks. Indeed, using version bits and 
> signaling over multiple blocks is quite bad because it risks losing mining 
> power if miners don't conform, or are unable to conform, to the version bits 
> signal. (Recall at the time taproot's signaling period started, the firmware 
> needed for many miners to signal version bits did not even exist yet!).
>
> A soft-fork signal to enable an "anti-fork" only needs to be on a single 
> block and it can be almost anything. For example we could have a signal that 
> at the block at lockin or perhaps the block at activation requires that the 
> coinbase must *not* contain the suffix "taproot sucks!". This suffices to 
> prepare an "anti-fork" which would simply require that the specified block 
> must contain the suffix "taproot sucks!".
>
> Anyway, I'm sure there are lots of design choices available better than a 
> MUST_SIGNAL state that does not risk potentially taking a large fraction of 
> mining hardware offline for a protracted period of time.
>
> On Tue, May 10, 2022 at 10:02 AM alicexbt via bitcoin-dev 
>  wrote:
>
>> Hi 

Re: [bitcoin-dev] Conjectures on solving the high interactivity issue in payment pools and channel factories

2022-05-12 Thread Billy Tetrud via bitcoin-dev
@Antoine
>  it's also hard to predict in advance the liquidity needs of the
sub-pools.

Definitely. Better than not being able to use the pool at all when
someone's offline tho.

> this fan-out transaction could interfere with the confirmation of the
simple withdraw transactions
> So there is an open question about the "honest" usage of the sub-pool
states themselves.

I don't follow this one. How would it interfere? How would it call into
question the "honesty" of the sub-pools? Why would honesty matter? I would
assume they can all be structured trustlessly.

> one could envision an accumulator committing directly to balances too

Are you suggesting that there would be some kind of opcode that operates on
this accumulator to shift around balances of some participants without
disturbing others? Sounds reasonable.

> I think the challenge is to find a compact accumulator with those
properties.

The Merkle sum trees like those used in Taro sound like they could probably
be useful for that.
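
For anyone unfamiliar with them, a minimal Python sketch of the general idea
(this is not Taro's actual serialization, just the "hash plus running sum"
structure):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def leaf(balance: int, payload: bytes):
        # A leaf commits to one participant's balance.
        return (h(payload + balance.to_bytes(8, "big")), balance)

    def branch(left, right):
        # A branch commits to both children *and* the sum of their balances,
        # so the root fixes the total while each participant can prove their
        # own balance without revealing the rest of the tree.
        lh, ls = left
        rh, rs = right
        return (h(lh + ls.to_bytes(8, "big") + rh + rs.to_bytes(8, "big")),
                ls + rs)

    # Example: a 4-leaf tree over pool balances (values in sats).
    leaves = [leaf(b, name.encode()) for name, b in
              [("alice", 1000), ("bob", 250), ("carol", 40), ("dave", 10)]]
    root = branch(branch(leaves[0], leaves[1]), branch(leaves[2], leaves[3]))
    print(root[1])  # 1300: the root commits to the total pool balance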

> It's more likely a lot of them will delegate this operation to
third-party providers, with the known reductions in terms of
trust-minimizations.

There is of course that limitation. But a third party empowered only to
keep the pool functioning is much better than one given the ability to
spend on your behalf without your confirmation. This would be a big
improvement despite there still being minor privacy downsides.

> Hmmm, how could you prevent the always-online service from using the
receiving-key in "spending" mode if the balance stacked there becomes
relevant ?

You mean if your balance in the pool is 1000 sats and the service
facilitates receiving 100 sats, that service could then steal those 100
sats? And you're asking how you could prevent that? Well first of all, if
you're in a channel, not only does your service need to want to steal your
funds, but your channel partner(s) must also sign for that as well - so
they both must be malicious for these funds to be stolen. I can't see a way
to prevent that, but at least this situation prevents them from stealing
your whole 1100 sats; they can only steal the 100 sats.

>  see https://gitlab.com/lightning-signer/docs for wip in that direction.

Interesting. I'm glad someone's been working on this kind of thing

> A malicious pool participant could still commit her off-chain balance in
two partitions and send spends to the A hosting "receiving-keys" entities
without them being aware of the conflict, in the lack of a reconciliation
such as a publication space ?

Actually, I was envisioning that the always-online services holding a
receive-only key would *all* be online. So all participants of the pool
would have a representative, either one with a spending key or with just a
receiving-key (which could also be used to simply sign pool state changes
that don't negatively affect the balance of the user they represent). So
there still would be agreement among all participants on pool state
changes.

I kind of think if both techniques (sub-pools and limited-trust services)
are used, it might be able to substantially increase the ability for a pool
to operate effectively (ie substantially decrease the average downtime).

@ZmnSCPxj
> Is this not just basically channel factories?

It is.

> To reduce the disruption if any one pool participant is down, have each
sub-pool have only 2 participants each.

Yes. But the benefit of the pool over just having individual 2 person
channels is that you can change around the structure of the channels within
the pool without doing on-chain transactions. As Antoine mentioned, it may
often not be predictable which 2-person channels would be beneficial in the
future. So you want the pool to be as responsive as possible to the
changing needs of the pool.
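
As a rough illustration of why 2-participant sub-pools minimize downtime
(assuming each participant is independently online with probability p):

    def p_all_online(p, n):
        # Probability every participant in an n-party sub-pool is online.
        return p ** n

    p = 0.95
    for n in (2, 5, 10, 50):
        print(n, round(p_all_online(p, n), 2))
    # 2 -> 0.9, 5 -> 0.77, 10 -> 0.6, 50 -> 0.08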



On Tue, May 10, 2022 at 11:45 AM ZmnSCPxj  wrote:

> Good morning Billy,
>
>
> > Very interesting exploration. I think you're right that there are issues
> with the kind of partitioning you're talking about. Lightning works because
> all participants sign all offchain states (barring data loss). If a
> participant can be excluded from needing to agree to a new state, there
> must be an additional mechanism to ensure the relevant state for that
> participant isn't changed to their detriment.
> >
> > To summarize my below email, the two techniques I can think for solving
> this problem are:
> >
> > A. Create sub-pools when the whole group is live that can be used by the
> sub- pool participants later without the whole group's involvement. The
> whole group is needed to change the whole group's state (eg close or open
> sub-pools), but sub-pool states don't need to involve the whole group.
>
> Is this not just basically channel factories?
>
> To reduce the disruption if any one pool participant is down, have each
> sub-pool have only 2 participants each.
> More participants means that the probability that one of them is offline
> is higher, so you use the minimum number of participants in the sub-pool: 

Re: [bitcoin-dev] CTV BIP Meeting #8 Notes

2022-05-12 Thread Billy Tetrud via bitcoin-dev
@Jorge & Zmn
>  A recursive covenant guarantees that the same thing will happen in the
future.

Just a clarification: a recursive covenant does not necessarily guarantee
that any particular thing will happen in the future. Both recursive and
non-recursive covenant opcodes *can* be used to guarantee something will
happen. Neither *necessarily* guarantees anything (because of
the possibility of alternative spend paths). A covenant isn't just a
promise, it's a restriction.

A "recursive covenant" opcode is one that allows loops in the progression
through covenant addresses. Here's an example of a set of transitions from
one address with a covenant in the spend path to another (or "exit" which
does not have a covenant restriction):

A -> B
A -> C
B -> C
C -> A
C -> exit

The possible transition sequences are:

A -> B -> C -> exit
A -> C -> A -> ...
A -> B -> C -> A -> ...

This would be a recursive covenant with an exit. Remove the exit
transition, and you have a walled garden. Even with this walled garden, you
can avoid going through address B (since you can skip directly to C).
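
Here's the same graph as a tiny Python sketch (nothing Bitcoin-specific),
just to make the loop/exit structure explicit:

    # The transition graph from the example above. "exit" has no covenant.
    edges = {"A": {"B", "C"}, "B": {"C"}, "C": {"A", "exit"}, "exit": set()}

    def reachable(start):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(edges[node])
        return seen

    print("exit" in reachable("A"))  # True: funds can leave the covenant
    print("A" in reachable("C"))     # True: C -> A closes a loop (recursion)
    # Dropping the C -> exit edge keeps the loop but removes the exit:
    # that is the "walled garden" case.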

A covenant opcode that can allow for infinite recursion (often talked about
as a "recursive covenant") can be used to return to a previous state, which
allows for permanent walled gardens.

So I would instead characterize a bitcoin covenant as:

A covenant in an input script places a requirement/restriction on the
output script(s) that input sends to. Pretty much any covenant allows for a
chain or graph of covenant-laden addresses to be prescribed, while a
"recursive covenant" opcode allows previous nodes in that graph to be
returned to such that the states can be looped through forever (which may
or may not have some way to exit).

One potentially confusing thing about the way covenants are usually talked
about is that technical discussions about the risks of covenants are not
about what a particular covenant opcode always does, but about the
boundaries on what can be done with that opcode. Pretty much any recursive
covenant you could design could also be used to create normal, simple,
non-walled-garden arrangements. The question is: since these opcodes do
allow someone to create walled gardens, is that ok?

I suppose maybe an interesting possibility would be to have a covenant
limit placed into a covenant opcode. Eg, let's say that you have
OP_LIMITEDCOVENANT (OP_LC), and OP_LC specifies that the maximum covenant
chain length is 100. The 100th consecutive output using OP_LC could simply
ignore it and be spent normally to anywhere (given that the rest of the
script allows it). This could effectively prevent the ability to create
walled gardens, without eliminating most interesting use cases. Among
people who care about covenants on this mailing list, the consensus seems
to be that infinitely recursive covenants are not something to be afraid
of. However, if maybe something like this could make more powerful
covenants acceptable to a larger group of people, it could be worth doing.
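
To make that concrete, here is a hypothetical sketch of the depth-limit rule.
OP_LIMITEDCOVENANT is not a real opcode, and the counter encoding below is
invented purely for illustration:

    MAX_CHAIN = 100

    def lc_output_valid(parent_depth, child_has_covenant, child_depth):
        # Hypothetical rule: each OP_LC output carries a remaining-depth
        # counter; spending it requires re-attaching the covenant with
        # depth - 1, until the counter hits 0 and the restriction vanishes.
        if parent_depth == 0:
            return True                 # chain exhausted: spend anywhere
        if not child_has_covenant:
            return False                # still inside the chain
        return child_depth == parent_depth - 1

    # Because the depth only ever decreases, no earlier state can be
    # revisited, so a permanent walled garden cannot be built.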

On Thu, May 12, 2022 at 7:20 AM ZmnSCPxj  wrote:

> Good morning Jorge,
>
> > I fail to understand why non recursive covenants are called covenants at
> all. Probably I'm missing something, but I guess that's another topic.
>
> A covenant simply promises that something will happen in the future.
>
> A recursive covenant guarantees that the same thing will happen in the
> future.
>
> Thus, non-recursive covenants can be useful.
>
> Consider `OP_EVICT`, for example, which is designed for a very specific
> use-case, and avoids recursion.
>
> Regards,
> ZmnSCPxj
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bringing a nuke to a knife fight: Transaction introspection to stop RBF pinning

2022-05-12 Thread Greg Sanders via bitcoin-dev
Great point, and in this specific case one I unfortunately didn't consider!
So basically the design degenerates to the last option I gave, where the
counterparty can send off N (25) weight-bound packages.

A couple thoughts:

0) Couldn't we relative-time lock the update transaction's state input by 1
block as well to close the vector off? People are allowed
one "update transaction package" at a time in the mempool, so if detected
in-mempool it can be RBF'd, or in-block it can be immediately responded to.
1) Other usages of ANYONECANPAY-like behavior may not have these issues,
like vault structures.


On Thu, May 12, 2022, 3:17 AM David A. Harding  wrote:

> On 2022-05-10 08:53, Greg Sanders via bitcoin-dev wrote:
> > We add OPTX_SELECT_WEIGHT(pushes tx weight to stack, my addition to
> > the proposal) to the "state" input's script.
> > This is used in the update transaction to set the upper bound on the
> > final transaction weight.
> > In this same input, for each contract participant, we also
> > conditionally commit to the change output's scriptpubkey
> > via OPTX_SELECT_OUTPUT_SCRIPTPUBKEY and OPTX_SELECT_OUTPUTCOUNT==2.
> > This means any participant can send change back
> > to themselves, but with a catch. Each change output script possibility
> > in that state input also includes a 1 block
> > CSV to avoid mempool spending to reintroduce pinning.
>
> I like the idea!   However, I'm not sure the `1 CSV` trick helps much.
> Can't an attacker just submit to the mempool their other eltoo state
> updates?  For example, let's assume Bob and Mallory have a channel with
>  >25 updates and Mallory wants to prevent update[-1] from being committed
> onchain before its (H|P)TLC timeout.  Mallory also has at least 25
> unencumbered UTXOs, so she submits to the mempool update[0], update[1],
> update[...], update[24]---each of them with a different second input to pay
> fees.
>
> If `OPTX_SELECT_WEIGHT OP_TX` limits each update's weight to 1,000
> vbytes[1] and the default node relay/mempool policy of allowing a
> transaction and up to 24 descendants remains, Mallory can pin the
> unsubmitted update[-1] under 25,000 vbytes of junk---which is 25% of
> what she can pin under current mempool policies.
>
> Alice can't RBF update[0] without paying for update[1..24] (BIP125 rule
> #3), and an RBF of update[24] will have its additional fees divided by
> its size plus the 24,000 vbytes of update[1..24].
>
> To me, that seems like your proposal makes escaping the pinning at most
> 75% cheaper than today.  That's certainly an improvement---yay!---but
> I'm not sure it eliminates the underlying concern.  Also depending on
> the mempool ancestor/descendant limits makes it harder to raise those
> limits in the future, which is something I think we might want to do if
> we can ensure raising them won't increase node memory/CPU DoS risk.
>
> I'd love to hear that my analysis is missing something though!
>
> Thanks!,
>
> -Dave
>
> [1] 1,000 vbytes per update seems like a reasonable value to me.
> Obviously there's a tradeoff here: making it smaller limits the amount
> of pinning possible (assuming mempool ancestor/descendant limits remain)
> but also limits the number and complexity of inputs that may be added.
> I don't think we want to discourage people too much from holding
> bitcoins in deep taproot trees or sophisticated tapscripts.
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV BIP Meeting #8 Notes

2022-05-12 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> I fail to understand why non recursive covenants are called covenants at all. 
> Probably I'm missing something, but I guess that's another topic.

A covenant simply promises that something will happen in the future.

A recursive covenant guarantees that the same thing will happen in the future.

Thus, non-recursive covenants can be useful.

Consider `OP_EVICT`, for example, which is designed for a very specific 
use-case, and avoids recursion.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV BIP Meeting #8 Notes

2022-05-12 Thread Jorge Timón via bitcoin-dev
I think something like visacoin could be kind of feasible without recursive
covenants. But as Billy points out, I guess they could kind of do it with
multisig too.

I fail to understand why non recursive covenants are called covenants at
all. Probably I'm missing something, but I guess that's another topic.


On Tue, May 10, 2022 at 5:11 PM Billy Tetrud via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> >  So if you don't want to receive restricted coins, just don't generate
> an address with those restrictions embedded.
>
> This is an interesting point that I for some reason haven't thought of
> before. However...
>
> > Unless governments can mandate that you generate these addresses AND
> force you to accept funds bound by them for your services**, I don't
> actually see how this is a real concern.
>
> Actually, I think only the second is necessary. For example, if there was
> a law that compelled providing a good or service once a publicly
> advertised amount was paid, and someone pays to an address that can be
> shown to be spendable by the merchant's keys in a way that the government
> accepts, it doesn't matter whether the recipient can or has generated the
> address.
>
> Regardless I do think its still important to note that a government could
> do that today using multisig.
>
> > This is a reason to oppose legal tender laws for Bitcoin imo.
>
> I agree.
>
> On Mon, May 9, 2022 at 10:23 AM Keagan McClelland <
> keagan.mcclell...@gmail.com> wrote:
>
>> > > > To me the most scary one is visacoin, specially seeing what
>> happened in canada and other places lately and the general censorship in
>> the west, the supposed war on "misinformation" going on (really a war
>> against truth imo, but whatever) it's getting really scary. But perhaps
>> someone else can be more scared about a covenant to add demurrage fees to
>> coins or something, I don't know.
>> > > > https://bitcointalk.org/index.php?topic=278122
>>
>> > > This requires *recursive* covenants.
>>
>> > Actually, for practical use, any walled-garden requires *dynamic*
>> covenants, not recursive covenants.
>>
>> There's actually also a very straight forward defense for those who do
>> not want to receive "tainted" coins. In every covenant design I've seen to
>> date (including recursive designs) it requires that the receiver generate a
>> script that is "compliant" with the covenant provisions to which the sender
>> is bound. The consequence of this is that you can't receive coins that are
>> bound by covenants you weren't aware of*. So if you don't want to receive
>> restricted coins, just don't generate an address with those restrictions
>> embedded. As long as you can specify the spend conditions upon the receipt
>> of your funds, it really doesn't matter how others are structuring their
>> own spend conditions. So long as the verification of those conditions can
>> be predictably verified by the rest of the network, all risk incurred is
>> quarantined to the receiver of the funds. Worst case scenario is that no
>> one wants to agree to those conditions and the funds are effectively burned.
>>
>> It's not hard to make the case that any time funds are being transferred
>> between organizations with incompatible interests (external to a firm),
>> that they will want to be completely free to choose their own spend
>> conditions and will not wish to inherit the conditions of the spender.
>> Correspondingly, any well implemented covenant contract will include
>> provisions for escaping the recursion loop if some sufficiently high bar is
>> met by the administrators of those funds. Unless governments can mandate
>> that you generate these addresses AND force you to accept funds bound by
>> them for your services**, I don't actually see how this is a real concern.
>>
>> *This requires good wallet tooling and standards but that isn't
>> materially different than wallets experimenting with non-standard recovery
>> policies.
>>
>> **This is a reason to oppose legal tender laws for Bitcoin imo.
>>
>> Keagan
>>
>> On Sun, May 8, 2022 at 11:32 AM Billy Tetrud via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> >  This requires *recursive* covenants.
>>>
>>> Actually, for practical use, any walled-garden requires *dynamic*
>>> covenants, not recursive covenants. CTV can get arbitrarily close to
>>> recursive covenants, because you can have an arbitrarily long string of
>>> covenants. But this doesn't help someone implement visacoin because CTV
>>> only allows a specific predefined iteration of transactions, meaning that
>>> while "locked" into the covenant sequence, the coins can't be used in any
>>> way like normal coins - you can't choose who you pay, the sequence is
>>> predetermined.
>>>
>>> Even covenants that allow infinite recursion (like OP_TLUV and OP_CD
>>> )
>>> don't automatically allow for practical 

Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-12 Thread Russell O'Connor via bitcoin-dev
On Wed, May 11, 2022 at 11:07 PM ZmnSCPxj  wrote:

> Good morning Russell,
>
> > On Wed, May 11, 2022 at 7:42 AM ZmnSCPxj via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
> >
> > > REMEMBER: `OP_CAT` BY ITSELF DOES NOT ENABLE COVENANTS, WHETHER
> RECURSIVE OR NOT.
> >
> >
> > I think the state of the art has advanced to the point where we can say
> "OP_CAT in tapscript enables non recursive covenants and it is unknown
> whether OP_CAT can enable recursive covenants or not".
> >
> > A. Poelstra in
> https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-i.html shows
> how to use CAT to use the schnorr verification opcode to get the sighash
> value + 1 onto the stack, and then through some grinding and some more CAT,
> get the actual sighash value on the stack. From there we can use SHA256 to
> get the signed transaction data onto the stack and apply introspection
> (using CAT) to build functionality similar to OP_CTV.
> >
> > The missing bits for enabling recursive covenants come down to needing
> to transform a scriptpubkey into a taproot address, which involves some
> tweaking. Poelstra has suggested that it might be possible to hijack the
> ECDSA checksig operation from a parallel, legacy input, in order to perform
> the calculations for this tweaking. But as far as I know no one has yet
> been able to achieve this feat.
>
> Hmm, I do not suppose it would have worked in ECDSA?
> Seems like this exploits linearity in the Schnorr.
> For the ECDSA case it seems that the trick in that link leads to `s = e +
> G[x]` where `G[x]` is the x-coordinate of `G`.
> (I am not a mathist, so I probably am not making sense; in particular,
> there may be an operation to add two SECP256K1 scalars that I am not aware
> of)
>
> In that case, since Schnorr was added later, I get away by a technicality,
> since it is not *just* `OP_CAT` which enabled this style of covenant, it
> was `OP_CAT` + BIP340 v(^^);
>

Correct.


> Also holy shit math is scary.
>
> Seems this also works with `OP_SUBSTR`, simply by inverting it into
> "validate that the concatenation is correct" rather than "concatenate it
> ourselves".
>
>
>
>
> So really: are recursive covenants good or...?
> Because if recursive covenants are good, what we should really work on is
> making them cheap (in CPU load/bandwidth load terms) and private, to avoid
> centralization and censoring.
>

My view is that recursive covenants are inevitable.  It is nearly
impossible to have programmable money without them because they are so
difficult to avoid.

Given that we cannot have programmable money without recursive covenants,
and given all the considerations already discussed regarding them, i.e.
they are no worse than being compelled to co-sign transactions, and
user-generated addresses won't be encumbered by a covenant unless
specifically generated to be, I do think it makes sense to embrace them.


> Regards,
> ZmnSCPxj
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [PROPOSAL] OP_TX: generalized covenants reduced to OP_CHECKTEMPLATEVERIFY

2022-05-12 Thread Alex Schoof via bitcoin-dev
Hi Rusty,

One of the common sentiments thats been expressed over the last few months
is that more people want to see experimentation with different applications
using covenants. I really like this proposal because in addition to
offering a cleaner upgrade/extension path than adding “CTV++” as a new
opcode in a few years, it also seems like it would make it very easy to
create prototype applications to game out new ideas:
If the “only this combination of fields are valid, otherwise OP_SUCCESS”
check is just comparing with a list of bitmasks for permissible field
combinations (which right now is a list of length 1), it seems like it
would be *very* easy for people who want to play with other covenant field
sets to just add the relevant bitmasks and then go spin up a signet to
build applications.
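
As a toy sketch of that allow-list-of-bitmasks idea (the bit positions below
are invented for illustration; the proposal names the flags but does not fix
an encoding), in Python:

    # Invented bit assignments, purely illustrative.
    FLAG_NAMES = [
        "OPTX_SELECT_VERSION", "OPTX_SELECT_LOCKTIME", "OPTX_SELECT_INPUTNUM",
        "OPTX_SELECT_INPUTCOUNT", "OPTX_SELECT_OUTPUTCOUNT",
        "OPTX_SELECT_INPUT_SCRIPT", "OPTX_SELECT_INPUT_NSEQUENCE",
        "OPTX_SELECT_OUTPUT_AMOUNT32x2", "OPTX_SELECT_OUTPUT_SCRIPTPUBKEY",
    ]
    FLAGS = {name: 1 << i for i, name in enumerate(FLAG_NAMES)}

    # The CTV-equivalent field set (per Rusty's mail below) as one mask.
    CTV_MASK = 0
    for name in FLAG_NAMES:
        CTV_MASK |= FLAGS[name]

    # Consensus only needs an allow-list check; every other mask is
    # OP_SUCCESS, so extending the list later is a soft fork.
    ALLOWED_MASKS = {CTV_MASK}

    def optx_flags_defined(mask):
        return mask in ALLOWED_MASKS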

Being able to make a very targeted change like that to enable
experimentation is super cool. Thanks for sharing!

Alex

On Tue, May 10, 2022 at 6:37 AM Rusty Russell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> TL;DR: a v1 tapscript opcode for generic covenants, but
> OP_SUCCESS unless it's used a-la OP_CHECKTEMPLATEVERIFY.  This gives an
> obvious use case, with clean future expansion.  OP_NOP4 can be
> repurposed in future as a shortcut, if experience shows that to be a
> useful optimization.
>
> (This proposal builds on Russell O'Connor's TXHASH[1], with Anthony
> Towns' modification via extending the opcode[2]; I also notice on
> re-reading that James Lu had a similar restriction idea[3]).
>
> Details
> ---
>
> OP_TX, when inside v1 tapscript, is followed by 4 bytes of flags.
> Unknown flag patterns are OP_SUCCESS, though for thoroughness some future
> potential uses are documented here.  Note that pushing more than 1000
> elements on the stack or an element more than 512 bytes will hit the
> BIP-342 resource limits and fail.
>
> Defined bits
> 
>
> (Only those marked with * have to be defined for this soft fork; the
>  others can have semantics later).
>
> OPTX_SEPARATELY: treat fields separately (vs concatenating)
> OPTX_UNHASHED: push on the stack without hashing (vs SHA256 before push)
>
> - The first nicely sidesteps the lack of OP_CAT, and the latter allows
>   OP_TXHASH semantics (and avoid stack element limits).
>
> OPTX_SELECT_VERSION*: version
> OPTX_SELECT_LOCKTIME*: nLocktime
> OPTX_SELECT_INPUTNUM*: current input number
> OPTX_SELECT_INPUTCOUNT*: number of inputs
> OPTX_SELECT_OUTPUTCOUNT*: number of outputs
>
> OPTX_INPUT_SINGLE: if set, pop input number off stack to apply to
> OPTX_SELECT_INPUT_*, otherwise iterate through all.
> OPTX_SELECT_INPUT_TXID: txid
> OPTX_SELECT_INPUT_OUTNUM: txout index
> OPTX_SELECT_INPUT_NSEQUENCE*: sequence number
> OPTX_SELECT_INPUT_AMOUNT32x2: sats in, as a high-low u31 pair
> OPTX_SELECT_INPUT_SCRIPT*: input scriptsig
> OPTX_SELECT_INPUT_TAPBRANCH: ?
> OPTX_SELECT_INPUT_TAPLEAF: ?
>
> OPTX_OUTPUT_SINGLE: if set, pop output number off stack to apply to
> OPTX_SELECT_OUTPUT_*, otherwise iterate through all.
> OPTX_SELECT_OUTPUT_AMOUNT32x2*: sats out, as a high-low u31 pair
> OPTX_SELECT_OUTPUT_SCRIPTPUBKEY*: output scriptpubkey
>
> OPTX_SELECT_19...OPTX_SELECT_31: future expansion.
>
> OP_CHECKTEMPLATEVERIFY is approximated by the following flags:
> OPTX_SELECT_VERSION
> OPTX_SELECT_LOCKTIME
> OPTX_SELECT_INPUTCOUNT
> OPTX_SELECT_INPUT_SCRIPT
> OPTX_SELECT_INPUT_NSEQUENCE
> OPTX_SELECT_OUTPUTCOUNT
> OPTX_SELECT_OUTPUT_AMOUNT32x2
> OPTX_SELECT_OUTPUT_SCRIPTPUBKEY
> OPTX_SELECT_INPUTNUM
>
> All other flag combinations result in OP_SUCCESS.
>
> Discussion
> --
>
> By enumerating exactly what can be committed to, it's absolutely clear
> what is and isn't committed (and what you need to think about!).
>
> The bits which separate concatenation and hashing provide a simple
> mechanism for template-style (i.e. CTV-style) commitments, or for
> programmatic treatment of individual elements (e.g. amounts, though the
> two s31 style is awkward: a 64-bit push flag could be added in future).
>
> The lack of double-hashing of scriptsigs and other fields means we
> cannot simply re-use hashing done for SIGHASH_ALL.
>
> The OP_SUCCESS semantic is only valid in tapscript v1, so this does not
> allow covenants for v0 segwit or pre-segwit inputs.  If covenants prove
> useful, dedicated opcodes can be provided for those cases (a-la
> OP_CHECKTEMPLATEVERIFY).
>
> Cheers,
> Rusty.
>
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html
> [2]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019819.html
> [3]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019816.html
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> 

Re: [bitcoin-dev] Bringing a nuke to a knife fight: Transaction introspection to stop RBF pinning

2022-05-12 Thread David A. Harding via bitcoin-dev

On 2022-05-10 08:53, Greg Sanders via bitcoin-dev wrote:

> We add OPTX_SELECT_WEIGHT(pushes tx weight to stack, my addition to
> the proposal) to the "state" input's script.
> This is used in the update transaction to set the upper bound on the
> final transaction weight.
> In this same input, for each contract participant, we also
> conditionally commit to the change output's scriptpubkey
> via OPTX_SELECT_OUTPUT_SCRIPTPUBKEY and OPTX_SELECT_OUTPUTCOUNT==2.
> This means any participant can send change back
> to themselves, but with a catch. Each change output script possibility
> in that state input also includes a 1 block
> CSV to avoid mempool spending to reintroduce pinning.


I like the idea!   However, I'm not sure the `1 CSV` trick helps much.  
Can't an attacker just submit to the mempool their other eltoo state 
updates?  For example, let's assume Bob and Mallory have a channel with 
>25 updates and Mallory wants to prevent update[-1] from being committed
onchain before its (H|P)TLC timeout.  Mallory also has at least 25
unencumbered UTXOs, so she submits to the mempool update[0], update[1],
update[...], update[24]---each of them with a different second input to
pay fees.


If `OPTX_SELECT_WEIGHT OP_TX` limits each update's weight to 1,000 
vbytes[1] and the default node relay/mempool policy of allowing a 
transaction and up to 24 descendants remains, Mallory can pin the 
unsubmitted update[-1] under 25,000 vbytes of junk---which is 25% of 
what she can pin under current mempool policies.
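
The rough arithmetic behind those figures (assuming Bitcoin Core's default
descendant limits of 25 transactions and 101,000 vbytes):

    # Numbers behind the "25% of what she can pin today" comparison.
    per_update_cap = 1_000        # vbytes, enforced via OPTX_SELECT_WEIGHT
    descendant_count = 25         # default -limitdescendantcount
    descendant_size = 101_000     # default -limitdescendantsize, 101 kvB

    pin_with_proposal = per_update_cap * descendant_count   # 25,000 vbytes
    pin_today = descendant_size                             # 101,000 vbytes

    print(pin_with_proposal / pin_today)  # ~0.25, i.e. ~75% cheaper to escape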


Alice can't RBF update[0] without paying for update[1..24] (BIP125 rule 
#3), and an RBF of update[24] will have its additional fees divided by 
its size plus the 24,000 vbytes of update[1..24].


To me, that seems like your proposal makes escaping the pinning at most 
75% cheaper than today.  That's certainly an improvement---yay!---but 
I'm not sure it eliminates the underlying concern.  Also depending on 
the mempool ancestor/descendant limits makes it harder to raise those 
limits in the future, which is something I think we might want to do if 
we can ensure raising them won't increase node memory/CPU DoS risk.


I'd love to hear that my analysis is missing something though!

Thanks!,

-Dave

[1] 1,000 vbytes per update seems like a reasonable value to me.  
Obviously there's a tradeoff here: making it smaller limits the amount 
of pinning possible (assuming mempool ancestor/descendant limits remain) 
but also limits the number and complexity of inputs that may be added.  
I don't think we want to discourage people too much from holding 
bitcoins in deep taproot trees or sophisticated tapscripts.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev