Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-06 Thread Chris Stewart via bitcoin-dev
FWIW, the initial use case that I hinted at in the OP is for lightning.

The problem this company has is that they offer an inbound liquidity service,
but it is common that, after a user purchases liquidity, the channel goes unused.

This is bad for the company as their liquidity is tied up in unproductive
channels. The idea was to implement a monthly service fee that requires the
user to pay a fixed amount if the channel isn’t being used. This
compensates the company for the case where their liquidity is NOT being
used. With standard lightning fees, you only get paid when liquidity is
used. You don’t get paid when it is NOT being used. If you are offering
liquidity as a service this is bad.

The user purchasing liquidity can make the choice to pay the liquidity fee,
or not to pay it. In the case where a user does not pay the fee, the
company can take this as a signal that they are no longer interested in the
service. That way they can put their liquidity to use somewhere else that
is more productive for the rest of the network.

So it’s sort of a recurring payment for liquidity as a service, at least
that is how I’m thinking about it currently.

On Sun, Mar 6, 2022 at 2:11 PM ZmnSCPxj  wrote:

> Good morning Chris,
>
> > >On the other hand, the above, where the oracle determines *when* the
> fund can be spent, can also be implemented by a simple 2-of-3, and called
> an "escrow".
> >
> > I think something that is underappreciated by protocol developers is the
> fact that multisig requires interactiveness at settlement time. The
> multisig escrow provider needs to know the exact details about the bitcoin
> transaction and needs to issue a signature (gotta sign the outpoints, the
> fee, the payout addresses etc).
> >
> > With PTLCs that isn't the case, and thus gives a UX improvement for
> Alice & Bob that are using the escrow provider. The oracle (or escrow) just
> issues attestations. Bob or Alice take those attestations and complete the
> adaptor signature. Instead of a bi-directional communication requirement
> (the oracle working with Bob or Alice to build the bitcoin tx) at
> settlement time there is only unidirectional communication required.
> Non-interactive settlement is one of the big selling points of DLC style
> applications IMO.
> >
> > One of the unfortunate things about LN is the interactiveness
> requirements are very high, which makes developing applications hard
> (especially mobile applications). I don't think this solves lightning's
> problems, but it is a worthy goal to reduce interactiveness requirements
> with new bitcoin applications to give better UX.
>
> Good point.
>
> I should note that 2-of-3 contracts are *not* transportable over LN, but
> PTLCs *are* transportable.
> So the idea still has merit for LN, as a replacement for 2-of-3 escrows.
>
> Regards,
> ZmnSCPxj
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT

2022-03-06 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> Even changing the weight of a transaction using jets (ie making a script 
> weigh less if it uses a jet) could be done in a similar way to how segwit 
> separated the witness out.

The way we did this in SegWit was to *hide* the witness from unupgraded nodes, 
who are then unable to validate using the upgraded rules (because you are 
hiding the data from them!), which is why I bring up:

> > If a new jet is released but nobody else has upgraded, how bad is my 
> > security if I use the new jet?
>
> Security wouldn't be directly affected, only (potentially) cost. If your 
> security depends on cost (eg if it depends on pre-signed transactions and is 
> for some reason not easily CPFPable or RBFable), then security might be 
> affected if the unjetted scripts costs substantially more to mine. 

So if we make a script weigh less if it uses a jet, we have to do that by 
telling unupgraded nodes "this script will always succeed and has weight 0", 
just like `scriptPubKey`s with `<0> ` are, to pre-SegWit nodes, 
spendable with an empty `scriptSig`.
At least, that is how I always thought SegWit worked.

Otherwise, a jet would never allow SCRIPT weights to decrease, as unupgraded 
nodes who do not recognize the jet will have to be fed the entire code of the 
jet and would consider the weight to be the expanded, uncompressed code.
And weight is a consensus parameter, blocks are restricted to 4Mweight.

So if a jet would allow SCRIPT weights to decrease, upgraded nodes need to hide 
them from unupgraded nodes (else the weight calculations of unupgraded nodes 
will hit consensus checks), then if everybody else has not upgraded, a user of 
a new jet has no security.
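To make the weight asymmetry concrete, here is a toy sketch (made-up jet id and expansion, not any real proposal) of the two views an unupgraded and an upgraded node would have of the same witness:

```python
# Sketch: how a jet could weigh less, segwit-style. Unupgraded nodes see
# only the compact jet id (a pattern they skip over as always-valid),
# while upgraded nodes expand and validate it -- but since weight is a
# consensus parameter, the expansion must be *hidden* from old nodes or
# the weight advantage disappears.
JET_EXPANSION = {0x01: b"\x93\x87" * 40}   # made-up jet: 80 bytes of code

def weight_unupgraded(witness: bytes) -> int:
    # old node counts only the bytes it is shown
    return len(witness)

def weight_if_expanded(witness: bytes) -> int:
    # if the full expansion had to be fed to old nodes, they would count
    # the entire uncompressed code and the jet saves nothing
    expanded = b"".join(JET_EXPANSION.get(b, bytes([b])) for b in witness)
    return len(expanded)

compact = b"\x01"
assert weight_unupgraded(compact) == 1
assert weight_if_expanded(compact) == 80
```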

Not even the `softfork` form of chialisp that AJ is proposing in the other 
thread would help --- unupgraded nodes will simply skip over validation of the 
`softfork` form.

If the script does not weigh less if it uses a jet, then there is no incentive 
for end-users to use a jet, as they would still pay the same price anyway.

Now you might say "okay even if no end-users use a jet, we could have fullnodes 
recognize jettable code and insert them automatically on transport".
But the lookup table for that could potentially be large once we have a few 
hundred jets (and I think Simplicity already *has* a few hundred jets, so 
additional common jets would just add to that?), jettable code could start at 
arbitrary offsets of the original SCRIPT, and jettable code would likely have 
varying size, which makes for a difficult lookup table.
In particular that lookup table has to be robust against me feeding it some 
section of code that is *almost* jettable but suddenly has a different opcode 
at the last byte, *and* handle jettable code of varying sizes (because of 
course some jets are going to be more compressible than others).
Consider an attack where I feed you a SCRIPT that validates trivially but is 
filled with almost-but-not-quite-jettable code (and again, note that expanded 
forms of jets are varying sizes), your node has to look up all those jets but 
then fails the last byte of the almost-but-not-quite-jettable code, so it ends 
up not doing any jetting.
And since the SCRIPT validated, your node feels obligated to propagate it too, 
so now you are helping my DDoS.
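A rough sketch of that attack surface (hypothetical jet table and expansions; the comparison count is an upper bound that ignores byte-level short-circuiting):

```python
# Hypothetical jet table: expanded bytecode -> short jet id.
JETS = {
    bytes([0x82, 0x01, 0x02, 0x7C]): b"\xf0\x01",         # made-up expansions
    bytes([0x82, 0x01, 0x03, 0x7C, 0x93]): b"\xf0\x02",
}

def jetify(script: bytes):
    """Greedily replace jettable runs; count byte comparisons to show cost."""
    out, i, comparisons = bytearray(), 0, 0
    while i < len(script):
        for expansion, jet_id in JETS.items():
            # upper bound on bytes compared at this offset
            comparisons += min(len(expansion), len(script) - i)
            if script[i:i + len(expansion)] == expansion:
                out += jet_id
                i += len(expansion)
                break
        else:
            out.append(script[i])
            i += 1
    return bytes(out), comparisons

# Adversarial input: almost-jettable runs that fail on the last byte force
# near-full comparisons at every offset yet yield zero compression.
evil = bytes([0x82, 0x01, 0x02, 0x00]) * 1000
compressed, cost = jetify(evil)
assert compressed == evil      # nothing was jetted...
assert cost > len(evil)        # ...but we paid for many comparisons anyway
```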

> >  I suppose the point would be --- how often *can* we add new jets?
>
> A simple jet would be something that's just added to bitcoin software and 
> used by nodes that recognize it. This would of course require some debate and 
> review to add it to bitcoin core or whichever bitcoin software you want to 
> add it to. However, the idea I proposed in my last email would allow anyone 
> to add a new jet. Then each node can have their own policy to determine which 
> jets of the ones registered it wants to keep an index of and use. On its own, 
> it wouldn't give any processing power optimization, but it would be able to 
> do the same kind of script compression you're talking about op_fold allowing. 
> And a list of registered jets could inform what jets would be worth building 
> an optimized function for. This would require a consensus change to implement 
> this mechanism, but thereafter any jet could be registered in userspace.

Certainly a neat idea.
Again, lookup table tho.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-06 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

> >On the other hand, the above, where the oracle determines *when* the fund 
> >can be spent, can also be implemented by a simple 2-of-3, and called an 
> >"escrow".
>
> I think something that is underappreciated by protocol developers is the fact 
> that multisig requires interactiveness at settlement time. The multisig 
> escrow provider needs to know the exact details about the bitcoin transaction 
> and needs to issue a signature (gotta sign the outpoints, the fee, the payout 
> addresses etc).
>
> With PTLCs that isn't the case, and thus gives a UX improvement for Alice & 
> Bob that are using the escrow provider. The oracle (or escrow) just issues 
> attestations. Bob or Alice take those attestations and complete the adaptor 
> signature. Instead of a bi-directional communication requirement (the oracle 
> working with Bob or Alice to build the bitcoin tx) at settlement time there 
> is only unidirectional communication required. Non-interactive settlement is 
> one of the big selling points of DLC style applications IMO.
>
> One of the unfortunate things about LN is the interactiveness requirements 
> are very high, which makes developing applications hard (especially mobile 
> applications). I don't think this solves lightning's problems, but it is a 
> worthy goal to reduce interactiveness requirements with new bitcoin 
> applications to give better UX.

Good point.

I should note that 2-of-3 contracts are *not* transportable over LN, but PTLCs 
*are* transportable.
So the idea still has merit for LN, as a replacement for 2-of-3 escrows.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT

2022-03-06 Thread Billy Tetrud via bitcoin-dev
> Are new jets consensus critical?
> Do I need to debate `LOT` *again* if I want to propose a new jet?

New jets should never need a consensus change. A jet is just an
optimization - a way to both save bytes in transmission as well as save
processing power. Anything that a jet can do can be done with a normal
script. Because of this, a script using a particular jet could be sent to a
node that doesn't support that jet by simply expanding the jet into the
script fragment it represents. The next node that recognizes the jet can
remove the extraneous bytes so extra transmission and processing-time would
only be needed for nodes that don't support that jet. (Note that this
interpretation of a 'jet' is probably slightly different than as described
for simplicity, but the idea is basically the same). Even changing the
weight of a transaction using jets (ie making a script weigh less if it
uses a jet) could be done in a similar way to how segwit separated the
witness out.

> If a new jet is released but nobody else has upgraded, how bad is my
security if I use the new jet?

Security wouldn't be directly affected, only (potentially) cost. If your
security depends on cost (eg if it depends on pre-signed transactions and
is for some reason not easily CPFPable or RBFable), then security might be
affected if the unjetted scripts costs substantially more to mine.

>  I suppose the point would be --- how often *can* we add new jets?

A simple jet would be something that's just added to bitcoin software and
used by nodes that recognize it. This would of course require some debate
and review to add it to bitcoin core or whichever bitcoin software you want
to add it to. However, the idea I proposed in my last email would allow
anyone to add a new jet. Then each node can have their own policy to
determine which jets of the ones registered it wants to keep an index of
and use. On its own, it wouldn't give any processing power optimization,
but it would be able to do the same kind of script compression you're
talking about op_fold allowing. And a list of registered jets could inform
what jets would be worth building an optimized function for. This would
require a consensus change to implement this mechanism, but thereafter any
jet could be registered in userspace.
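A minimal sketch of such a userspace registry (the threshold, id assignment, and fragment bytes are all made up for illustration, not a concrete proposal):

```python
# Hypothetical userspace jet registry: any script fragment can be declared,
# and once it has been observed often enough within a window, nodes may
# index it and thereafter refer to it by a short id.
from collections import Counter

THRESHOLD = 3          # made-up policy: seen 3x before it earns an id

class JetRegistry:
    def __init__(self):
        self.counts = Counter()
        self.ids = {}      # fragment -> assigned short id

    def observe(self, fragment: bytes):
        self.counts[fragment] += 1
        if self.counts[fragment] >= THRESHOLD and fragment not in self.ids:
            self.ids[fragment] = len(self.ids)   # next free id

    def jet_id(self, fragment: bytes):
        return self.ids.get(fragment)            # None if not yet registered

reg = JetRegistry()
frag = bytes([0xAD, 0x51, 0x87])   # some arbitrary script fragment
for _ in range(3):
    reg.observe(frag)
assert reg.jet_id(frag) == 0           # fragment earned id 0
assert reg.jet_id(b"\x00") is None     # unseen fragments have no id
```

Note this only sketches the registration policy; the lookup-table concern ZmnSCPxj raises in reply still applies to actually *matching* registered fragments inside scripts.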

On Sat, Mar 5, 2022 at 5:02 PM ZmnSCPxj  wrote:

> Good morning Billy,
>
> > It sounds like the primary benefit of op_fold is bandwidth savings.
> Programming as compression. But as you mentioned, any common script could
> be implemented as a Simplicity jet. In a world where Bitcoin implements
> jets, op_fold would really only be useful for scripts that can't use jets,
> which would basically be scripts that aren't very often used. But that
> inherently limits the usefulness of the opcode. So in practice, I think
> it's likely that jets cover the vast majority of use cases that op fold
> would otherwise have.
>
> I suppose the point would be --- how often *can* we add new jets?
> Are new jets consensus critical?
> If a new jet is released but nobody else has upgraded, how bad is my
> security if I use the new jet?
> Do I need to debate `LOT` *again* if I want to propose a new jet?
>
> > A potential benefit of op fold is that people could implement smaller
> scripts without buy-in from a relay level change in Bitcoin. However, even
> this could be done with jets. For example, you could implement a consensus
> change to add a transaction type that declares a new script fragment to
> keep a count of, and if the script fragment is used enough within a
> timeframe (eg 1 blocks) then it can thereafter be referenced by an id
> like a jet could be. I'm sure someone's thought about this kind of thing
> before, but such a thing would really relegate the compression abilities of
> op fold to just the most uncommon of scripts.
> >
> > > * We should provide more *general* operations. Users should then
> combine those operations to their specific needs.
> > > * We should provide operations that *do more*. Users should identify
> their most important needs so we can implement them on the blockchain layer.
> >
> > That's a useful way to frame this kind of problem. I think the answer
> is, as it often is, somewhere in between. Generalization future-proofs your
> system. But at the same time, the boundary conditions of that generalized
> functionality should still be very well understood before being added to
> Bitcoin. The more general, the harder to understand the boundaries. So imo
> we should be implementing the most general opcodes that we are able to
> reason fully about and come to a consensus on. Following that last
> constraint might lead to not choosing very general opcodes.
>
> Yes, that latter part is what I am trying to target with `OP_FOLD`.
> As I point out, given the restrictions I am proposing, `OP_FOLD` (and any
> bounded loop construct with similar restrictions) is implementable in
> current Bitcoin SCRIPT, so it is not an increase in attack surface.
>
> 

[bitcoin-dev] CTV vaults in the wild

2022-03-06 Thread James O'Beirne via bitcoin-dev
A few months ago, AJ wrote[0]

> I'm not really convinced CTV is ready to start trying to deploy
> on mainnet even in the next six months; I'd much rather see some real
> third-party experimentation *somewhere* public first

In the spirit of real third-party experimentation *somewhere* in public,
I've created this implementation and write-up of a simple vault design
using CTV.

   https://github.com/jamesob/simple-ctv-vault

I think it has a number of appealing characteristics for custody
operations at any size.

Regards,
James


[0]:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019920.html


Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-06 Thread Jeremy Rubin via bitcoin-dev
Hi Christian,

For that purpose I'd recommend having a checksig variant -- call it
checksigextra -- that allows N extra data items on the stack in addition to
the txn hash. This would allow signers to sign some additional arguments,
but it would not be an annex since the values would not have any consensus
meaning (whereas the annex is designed to have one).


I've previously discussed this for eltoo with giving signatures an explicit
extra seqnum, but it can be generalized as above.



W.r.t. pinning, if the annex is a pure function of the script execution,
then there's no issue with letting it be mutable (e.g. for a validation
cost hint). But permitting both validation cost commitments and stack
readability is asking too much of the annex IMO.

On Sun, Mar 6, 2022, 1:13 PM Christian Decker 
wrote:

> One thing that we recently stumbled over was that we use CLTV in eltoo not
> for timelock but to have a comparison between two committed numbers coming
> from the spent and the spending transaction (ordering requirement of
> states). We couldn't use a number on the stack of the scriptSig as the
> signature doesn't commit to it, which is why we commandeered nLocktime
> values that are already in the past.
>
> With the annex we could have a way to get a committed-to number we can
> pull onto the stack, and free the nLocktime for other uses again. It'd also
> be less roundabout to explain in classes :-)
>
> An added benefit would be that update transactions, being singlesig, can
> be combined into larger transactions by third parties or watchtowers to
> amortize some of the fixed cost of getting them confirmed, allowing
> on-path-aggregation basically (each node can group and aggregate
> transactions as they forward them). This is currently not possible since
> all the transactions that we'd like to batch would have to have the same
> nLocktime at the moment.
>
> So I think it makes sense to partition the annex into a global annex
> shared by the entire transaction, and one for each input. Not sure if one
> for outputs would also make sense as it'd bloat the utxo set and could be
> emulated by using the input that is spending it.
>
> Cheers,
> Christian
>
> On Sat, 5 Mar 2022, 07:33 Anthony Towns via bitcoin-dev, <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Fri, Mar 04, 2022 at 11:21:41PM +, Jeremy Rubin via bitcoin-dev
>> wrote:
>> > I've seen some discussion of what the Annex can be used for in Bitcoin.
>>
>>
>> https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html
>>
>> includes some discussion on that topic from the taproot review meetings.
>>
>> The difference between information in the annex and information in
>> either a script (or the input data for the script that is the rest of
>> the witness) is (in theory) that the annex can be analysed immediately
>> and unconditionally, without necessarily even knowing anything about
>> the utxo being spent.
>>
>> The idea is that we would define some simple way of encoding (multiple)
>> entries into the annex -- perhaps a tag/length/value scheme like
>> lightning uses; maybe if we add a lisp scripting language to consensus,
>> we just reuse the list encoding from that? -- at which point we might
>> use one tag to specify that a transaction uses advanced computation, and
>> needs to be treated as having a heavier weight than its serialized size
>> implies; but we could use another tag for per-input absolute locktimes;
>> or another tag to commit to a past block height having a particular hash.
>>
>> It seems like a good place for optimising SIGHASH_GROUP (allowing a group
>> of inputs to claim a group of outputs for signing, but not allowing inputs
>> from different groups to ever claim the same output; so that each output
>> is hashed at most once for this purpose) -- since each input's validity
>> depends on the other inputs' state, it's better to be able to get at
>> that state as easily as possible rather than having to actually execute
>> other scripts before you can tell if your script is going to be valid.
>>
>> > The BIP is tight-lipped about its purpose
>>
>> BIP341 only reserves an area to put the annex; it doesn't define how
>> it's used or why it should be used.
>>
>> > Essentially, I read this as saying: The annex is the ability to pad a
>> > transaction with an additional string of 0's
>>
>> If you wanted to pad it directly, you can do that in script already
>> with a PUSH/DROP combo.
>>
>> The point of doing it in the annex is you could have a short byte
>> string, perhaps something like "0x010201a4" saying "tag 1, data length 2
>> bytes, value 420" and have the consensus interpretation of that be "this
>> transaction should be treated as if it's 420 weight units more expensive
>> than its serialized size", while only increasing its witness size by
>> 6 bytes (annex length, annex flag, and the four bytes above). Adding 6
>> bytes for a 426 weight unit increase seems much better than adding 426
>> 

Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-06 Thread Chris Stewart via bitcoin-dev
>On the other hand, the above, where the oracle determines *when* the fund
can be spent, can also be implemented by a simple 2-of-3, and called an
"escrow".

I think something that is underappreciated by protocol developers is the
fact that multisig requires interactiveness at settlement time. The
multisig escrow provider needs to know the exact details about the bitcoin
transaction and needs to issue a signature (gotta sign the outpoints, the
fee, the payout addresses etc).

With PTLCs that isn't the case, and thus gives a UX improvement for Alice &
Bob that are using the escrow provider. The oracle (or escrow) just issues
attestations. Bob or Alice take those attestations and complete the adaptor
signature. Instead of a bi-directional communication requirement (the
oracle working with Bob or Alice to build the bitcoin tx) at settlement
time there is only unidirectional communication required. Non-interactive
settlement is one of the big selling points of DLC style applications IMO.
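The settlement flow described above can be illustrated with a toy sketch of adaptor-signature completion. Plain modular arithmetic stands in for secp256k1 here -- this is not a real signature scheme, only the scalar bookkeeping that makes settlement unidirectional:

```python
# Toy sketch (NOT real secp256k1): scalars mod a stand-in group order n.
import secrets

n = 2**31 - 1   # stand-in prime group order

# Setup: Bob hands Alice an adaptor (pre-)signature for the payout tx,
# offset by the oracle's attestation secret t. Only the attestation
# *point* T = t*G is known at this stage, not t itself.
t = secrets.randbelow(n)        # oracle's attestation secret (revealed later)
s = secrets.randbelow(n)        # the "real" signature scalar
s_pre = (s - t) % n             # adaptor signature given out at setup time

# Settlement: the oracle simply *publishes* t. It never sees the bitcoin
# tx, its outpoints, fees, or payout addresses -- communication is one-way.
s_completed = (s_pre + t) % n
assert s_completed == s

# Bonus: anyone seeing both s and s_pre learns t, so the attestation
# becomes publicly extractable after settlement.
assert (s_completed - s_pre) % n == t
```

Contrast this with the 2-of-3 escrow case, where the escrow must be fed the exact transaction before it can produce its signature share.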

One of the unfortunate things about LN is the interactiveness requirements
are very high, which makes developing applications hard (especially mobile
applications). I don't think this solves lightning's problems, but it is a
worthy goal to reduce interactiveness requirements with new bitcoin
applications to give better UX.

-Chris

On Sat, Mar 5, 2022 at 4:57 PM ZmnSCPxj  wrote:

> Good morning Chris,
>
> > I think this proposal describes arbitrary lines of pre-approved credit
> from a bitcoin wallet. The line can be drawn down with oracle attestations.
> You can mix in locktimes on these pre-approved lines of credit if you would
> like to rate limit, or ignore rate limiting and allow the full utxo to be
> spent by the borrower. It really is contextual to the use case IMO.
>
> Ah, that seems more useful.
>
> Here is an example application that might benefit from this scheme:
>
> I am commissioning some work from some unbranded workperson.
> I do not know how long the work will take, and I do not trust the
> workperson to accurately tell me how complete the work is.
> However, both I and the workperson trust a branded third party (the
> oracle) who can judge the work for itself and determine if it is complete
> or not.
> So I create a transaction whose signature can be completed only if the
> oracle releases a proper scalar and hand it over to the workperson.
> Then the workperson performs the work, then asks the oracle to judge if
> the work has been completed, and if so, the work can be compensated.
>
> On the other hand, the above, where the oracle determines *when* the fund
> can be spent, can also be implemented by a simple 2-of-3, and called an
> "escrow".
> After all, the oracle attestation can be a partial signature as well, not
> just a scalar.
> Is there a better application for this scheme?
>
> I suppose if the oracle attestation is intended to be shared among
> multiple such transactions?
> There may be multiple PTLCs, that are triggered by a single oracle?
>
> Regards,
> ZmnSCPxj
>


Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-06 Thread Christian Decker via bitcoin-dev
One thing that we recently stumbled over was that we use CLTV in eltoo not
for timelock but to have a comparison between two committed numbers coming
from the spent and the spending transaction (ordering requirement of
states). We couldn't use a number on the stack of the scriptSig as the
signature doesn't commit to it, which is why we commandeered nLocktime
values that are already in the past.

With the annex we could have a way to get a committed-to number we can pull
onto the stack, and free the nLocktime for other uses again. It'd also be
less roundabout to explain in classes :-)
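A rough sketch of that nLocktime trick (the base constant and the state-to-locktime mapping are illustrative, not the exact eltoo encoding):

```python
# Sketch: encode the eltoo state number in nLocktime using values in the
# already-past "unix timestamp" range (>= 500,000,000), so the locktime
# never actually delays anything, and use a CLTV-style comparison so only
# a *later* state can spend an earlier one.
LOCKTIME_BASE = 500_000_000    # start of the timestamp locktime range

def locktime_for_state(state: int) -> int:
    # e.g. state 6 -> 500000006, a timestamp in 1985 -- always in the past
    return LOCKTIME_BASE + state

def can_update(spent_state: int, spending_locktime: int) -> bool:
    # CLTV semantics: the spend is valid only if the spending tx commits
    # (via its signed nLocktime) to a number >= the one required by the
    # spent output's script.
    return spending_locktime >= locktime_for_state(spent_state + 1)

assert can_update(5, locktime_for_state(6))       # newer state: ok
assert not can_update(5, locktime_for_state(5))   # same/old state: rejected
```

The annex-based variant would pull such a committed number directly onto the stack instead of commandeering nLocktime, which is exactly what frees nLocktime for batching.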

An added benefit would be that update transactions, being singlesig, can be
combined into larger transactions by third parties or watchtowers to
amortize some of the fixed cost of getting them confirmed, allowing
on-path-aggregation basically (each node can group and aggregate
transactions as they forward them). This is currently not possible since
all the transactions that we'd like to batch would have to have the same
nLocktime at the moment.

So I think it makes sense to partition the annex into a global annex shared
by the entire transaction, and one for each input. Not sure if one for
outputs would also make sense as it'd bloat the utxo set and could be
emulated by using the input that is spending it.

Cheers,
Christian

On Sat, 5 Mar 2022, 07:33 Anthony Towns via bitcoin-dev, <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Fri, Mar 04, 2022 at 11:21:41PM +, Jeremy Rubin via bitcoin-dev
> wrote:
> > I've seen some discussion of what the Annex can be used for in Bitcoin.
>
>
> https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html
>
> includes some discussion on that topic from the taproot review meetings.
>
> The difference between information in the annex and information in
> either a script (or the input data for the script that is the rest of
> the witness) is (in theory) that the annex can be analysed immediately
> and unconditionally, without necessarily even knowing anything about
> the utxo being spent.
>
> The idea is that we would define some simple way of encoding (multiple)
> entries into the annex -- perhaps a tag/length/value scheme like
> lightning uses; maybe if we add a lisp scripting language to consensus,
> we just reuse the list encoding from that? -- at which point we might
> use one tag to specify that a transaction uses advanced computation, and
> needs to be treated as having a heavier weight than its serialized size
> implies; but we could use another tag for per-input absolute locktimes;
> or another tag to commit to a past block height having a particular hash.
>
> It seems like a good place for optimising SIGHASH_GROUP (allowing a group
> of inputs to claim a group of outputs for signing, but not allowing inputs
> from different groups to ever claim the same output; so that each output
> is hashed at most once for this purpose) -- since each input's validity
> depends on the other inputs' state, it's better to be able to get at
> that state as easily as possible rather than having to actually execute
> other scripts before you can tell if your script is going to be valid.
>
> > The BIP is tight-lipped about its purpose
>
> BIP341 only reserves an area to put the annex; it doesn't define how
> it's used or why it should be used.
>
> > Essentially, I read this as saying: The annex is the ability to pad a
> > transaction with an additional string of 0's
>
> If you wanted to pad it directly, you can do that in script already
> with a PUSH/DROP combo.
>
> The point of doing it in the annex is you could have a short byte
> string, perhaps something like "0x010201a4" saying "tag 1, data length 2
> bytes, value 420" and have the consensus interpretation of that be "this
> transaction should be treated as if it's 420 weight units more expensive
> than its serialized size", while only increasing its witness size by
> 6 bytes (annex length, annex flag, and the four bytes above). Adding 6
> bytes for a 426 weight unit increase seems much better than adding 426
> witness bytes.
>
> The example scenario is that if there was an opcode to verify a
> zero-knowledge proof, eg I think bulletproof range proofs are something
> like 10x longer than a signature, but require something like 400x the
> validation time. Since checksig has a validation weight of 50 units,
> a bulletproof verify might have a 400x greater validation weight, ie
> 20,000 units, while your witness data is only 650 bytes serialized. In
> that case, we'd need to artificially bump the weight of you transaction
> up by the missing 19,350 units, or else an attacker could fill a block
> with perhaps 6000 bulletproofs costing the equivalent of 120M signature
> operations, rather than the 80k sigops we currently expect as the maximum
> in a block. Seems better to just have "0x01024b96" stuck in the annex,
> than 19kB of zeroes.
>
> > Introducing OP_ANNEX: Suppose there were some sort of annex 
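The tag/length/value scheme AJ sketches in the quoted text (e.g. `0x010201a4` for tag 1, length 2, value 420) might be encoded like this -- a simplified one-byte-tag, one-byte-length sketch, not a proposed wire format:

```python
# Minimal TLV codec matching AJ's examples: "0x010201a4" = tag 1, length 2,
# value 420 (a 420-weight-unit bump); "0x01024b96" = the 19,350-unit bump
# from the bulletproof example.
def tlv_encode(tag: int, value: int) -> bytes:
    payload = value.to_bytes((value.bit_length() + 7) // 8 or 1, "big")
    return bytes([tag, len(payload)]) + payload

def tlv_decode(blob: bytes) -> dict:
    entries, i = {}, 0
    while i < len(blob):
        tag, length = blob[i], blob[i + 1]
        entries[tag] = int.from_bytes(blob[i + 2:i + 2 + length], "big")
        i += 2 + length
    return entries

assert tlv_encode(1, 420).hex() == "010201a4"
assert tlv_encode(1, 19350).hex() == "01024b96"   # 6 bytes vs 19kB of zeroes
assert tlv_decode(bytes.fromhex("010201a4")) == {1: 420}
```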

Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-06 Thread Christian Decker via bitcoin-dev
We'd have to be very carefully with this kind of third-party malleability,
since it'd make transaction pinning trivial without even requiring the
ability to spend one of the outputs (which current CPFP based pinning
attacks require).

Cheers,
Christian

On Sat, 5 Mar 2022, 00:33 ZmnSCPxj via bitcoin-dev, <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Good morning Jeremy,
>
> Umm `OP_ANNEX` seems boring 
>
>
> > It seems like one good option is if we just go on and banish the
> OP_ANNEX. Maybe that solves some of this? I sort of think so. It definitely
> seems like we're not supposed to access it via script, given the quote from
> above:
> >
> > Execute the script, according to the applicable script rules[11], using
> the witness stack elements excluding the script s, the control block c, and
> the annex a if present, as initial stack.
> > If we were meant to have it, we would have not nixed it from the stack,
> no? Or would have made the opcode for it as a part of taproot...
> >
> > But recall that the annex is committed to by the signature.
> >
> > So it's only a matter of time till we see some sort of Cat and Schnorr
> Tricks III the Annex Edition that lets you use G cleverly to get the annex
> onto the stack again, and then it's like we had OP_ANNEX all along, or
> without CAT, at least something that we can detect that the value has
> changed and cause this satisfier looping issue somehow.
>
> ... Never mind I take that back.
>
> Hmmm.
>
> Actually if the Annex is supposed to be ***just*** for adding weight to
> the transaction so that we can do something like increase limits on SCRIPT
> execution, then it does *not* have to be covered by any signature.
> It would then be third-party malleable, but suppose we have a "valid"
> transaction on the mempool where the Annex weight is the minimum necessary:
>
> * If a malleated transaction has a too-low Annex, then the malleated
> transaction fails validation and the current transaction stays in the
> mempool.
> * If a malleated transaction has a higher Annex, then the malleated
> transaction has lower feerate than the current transaction and cannot evict
> it from the mempool.
>
> Regards,
> ZmnSCPxj
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
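ZmnSCPxj's replacement argument in the quoted text can be sketched as a simple feerate comparison (numbers are illustrative):

```python
# If the annex only *adds* weight and is unsigned, a third party can pad
# it -- but the padded copy then has a strictly lower feerate and cannot
# evict the original from a mempool that already holds it.
def feerate(fee_sats: int, weight: int) -> float:
    return fee_sats / weight

FEE = 1_000
original_weight = 400          # includes the minimum-necessary annex
malleated_weight = 400 + 50    # annex padded by a third party

assert feerate(FEE, malleated_weight) < feerate(FEE, original_weight)
```

Christian's caveat above is the gap in this argument: it only covers *replacement*; a lower-feerate malleated copy that propagates first can still pin the original without the attacker controlling any output.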