Re: [bitcoin-dev] HTLC output aggregation as a mitigation for tx recycling, jamming, and on-chain efficiency (covenants)

2023-12-21 Thread Johan Torås Halseth via bitcoin-dev
>> > > A recursive covenant is needed to achieve this. Something like OP_CTV
>> > > and OP_APO seems insufficient, since the number of ways the set of
>> > > HTLCs could be claimed would cause combinatorial blowup in the number
>> > > of possible spending transactions.
>> >
>> > > Personally, I’ve found the simple yet powerful properties of
>> > > OP_CHECKCONTRACTVERIFY [4] together with OP_CAT and amount inspection
>> > > particularly interesting for the use case, but I’m certain many of the
>> > > other proposals could achieve the same thing. More direct inspection
>> > > like you get from a proposal like OP_TX[9] would also most likely have
>> > > the building blocks needed.
>> >
>> > As pointed out during the CTV drama and payment pool public discussion 
>> > years ago, what would be very useful to tie-break among all covenant 
>> > constructions would be an efficiency simulation framework. Even if the 
>> > same semantic can be achieved independently by multiple covenants, they 
>> > certainly do not have the same performance trade-offs (e.g. average and 
>> > worst-case witness size).
>> >
>> > I don't think the blind approach of activating many complex covenants at 
>> > the same time is conservative enough in Bitcoin, where one might design 
>> > "malicious" L2 contracts, of which the game-theory is not fully understood.
>> >
>> > See e.g https://blog.bitmex.com/txwithhold-smart-contracts/
>> >
>> > > ### Proof-of-concept
>> > > I’ve implemented a rough demo** of spending an HTLC output that pays
>> > > to a script with OP_CHECKCONTRACTVERIFY to achieve this [5]. The idea
>> > > is to commit to all active HTLCs in a merkle tree, and have the
>> > > spender provide merkle proofs for the HTLCs to claim, claiming the sum
>> > > into a new output. The remainder goes back into a new output with the
>> > > claimed HTLCs removed from the merkle tree.
>> >
>> > > An interesting trick one can do when creating the merkle tree is
>> > > sorting the HTLCs by expiry. This means that in the timeout case one
>> > > can claim a subtree of HTLCs using a single merkle proof (and RBF this
>> > > batched timeout claim as more and more HTLCs expire), reducing the
>> > > timeout case to a constant-size witness (or rather logarithmic in the
>> > > total number of HTLCs).
>> >
>> > > **Consider it an experiment, as it is missing a lot before it could be
>> > > usable in any real commitment setting.
>> >
>> > I think it is an interesting question whether more advanced cryptosystems 
>> > based on assumptions other than the DL problem could constitute a factor 
>> > of scalability of LN payment throughput by orders of magnitude, by 
>> > decoupling the number of off-chain payments from the growth of the on-chain 
>> > witness size needed to claim them, without lowering security as with 
>> > trimmed HTLCs due to dust limits.
>> >
>> > Best,
>> > Antoine
>> >
>> > On Thu, Oct 26, 2023 at 20:28, Johan Torås Halseth via bitcoin-dev wrote:
>> >>
>> >> Hi all,
>> >>
>> >> After the transaction recycling has spurred some discussion the last
>> >> week or so, I figured it could be worth sharing some research I’ve
>> >> done into HTLC output aggregation, as it could be relevant for how to
>> >> avoid this problem in a future channel type.
>> >>
>> >> TLDR; With the right covenant we can create HTLC outputs that are much
>> >> more chain efficient, not prone to tx recycling and harder to jam.
>> >>
>> >> ## Transaction recycling
>> >> The transaction recycling attack is made possible by the change made
>> >> to HTLC second level transactions for the anchor channel type[8];
>> >> making it possible to add fees to the transaction by adding inputs
>> >> without violating the signature. For the legacy channel type this
>> >> attack was not possible, as all fees were taken from the HTLC outputs
>> >> themselves, and had to be agreed upon by channel counterparties during
>> >> signing (of course this has its own problems, which is why we wanted
>> >> to change it).
>> >>
>> >> The idea of HTLC output aggregation is to collapse all HTLC outputs on
>> >> the commitment to a single one. This has many benefits (that I’ll get
>> >> to), one of them being the possibility to let the spender claim the
>> >> portion of the output that they have a right to, deciding how much
>> >> should go to fees.

Re: [bitcoin-dev] HTLC output aggregation as a mitigation for tx recycling, jamming, and on-chain efficiency (covenants)

2023-12-11 Thread Johan Torås Halseth via bitcoin-dev
> I wonder if in a PTLC world, you can generate an aggregate curve point for 
> all the plausible sub-combinations of scalars. Unrevealed curve points in a 
> taproot branch are cheap. It might make claiming an offered HTLC near-constant 
> size too.
>
> > ## The bad news
> > The most obvious problem is that we would need a new covenant
> > primitive on L1 (see below). However, I think it could be beneficial
> > to start exploring these ideas now in order to guide the L1 effort
> > towards something we could utilize to its fullest on L2.
>
> > As mentioned, even with a functioning covenant, we don’t escape the
> > fact that a preimage needs to go on-chain, pricing out HTLCs at
> > certain fee rates. This is analogous to the dust exposure problem
> > discussed in [6], and makes some sort of limit still required.
>
> Ideally such covenant mechanisms would generalize to the withdrawal phase of 
> payment pools, where dozens or hundreds of participants wish to confirm their 
> non-competing withdrawal transactions concurrently. While unlocking preimages 
> or scalars can be aggregated in a single witness, there will still be a need 
> to verify that each withdrawal output associated with an unlocking secret is 
> present in the transaction.
>
> Maybe a few other L2s are answering this N-inputs-to-M-outputs pattern with 
> advanced locking script conditions to satisfy.
>
> > ### Open question
> > With PTLCs, could one create a compact proof showing that you know the
> > preimage for m-of-n of the satoshis in the output? (some sort of
> > threshold signature).
>
> > If we could do this we would be able to remove the slot jamming issue
> > entirely; any number of active PTLCs would not change the on-chain
> > cost of claiming them.
>
> See comments above; I think there is a plausible scheme here: you just 
> generate all the possible point combinations, and only reveal the one you 
> need at broadcast.
>
> > ## Covenant primitives
> > A recursive covenant is needed to achieve this. Something like OP_CTV
> > and OP_APO seems insufficient, since the number of ways the set of
> > HTLCs could be claimed would cause combinatorial blowup in the number
> > of possible spending transactions.
>
> > Personally, I’ve found the simple yet powerful properties of
> > OP_CHECKCONTRACTVERIFY [4] together with OP_CAT and amount inspection
> > particularly interesting for the use case, but I’m certain many of the
> > other proposals could achieve the same thing. More direct inspection
> > like you get from a proposal like OP_TX[9] would also most likely have
> > the building blocks needed.
>
> As pointed out during the CTV drama and payment pool public discussion years 
> ago, what would be very useful to tie-break among all covenant constructions 
> would be an efficiency simulation framework. Even if the same semantic can be 
> achieved independently by multiple covenants, they certainly do not have the 
> same performance trade-offs (e.g. average and worst-case witness size).
>
> I don't think the blind approach of activating many complex covenants at the 
> same time is conservative enough in Bitcoin, where one might design 
> "malicious" L2 contracts, of which the game-theory is not fully understood.
>
> See e.g https://blog.bitmex.com/txwithhold-smart-contracts/
>
> > ### Proof-of-concept
> > I’ve implemented a rough demo** of spending an HTLC output that pays
> > to a script with OP_CHECKCONTRACTVERIFY to achieve this [5]. The idea
> > is to commit to all active HTLCs in a merkle tree, and have the
> > spender provide merkle proofs for the HTLCs to claim, claiming the sum
> > into a new output. The remainder goes back into a new output with the
> > claimed HTLCs removed from the merkle tree.
>
> > An interesting trick one can do when creating the merkle tree is
> > sorting the HTLCs by expiry. This means that in the timeout case one
> > can claim a subtree of HTLCs using a single merkle proof (and RBF this
> > batched timeout claim as more and more HTLCs expire), reducing the
> > timeout case to a constant-size witness (or rather logarithmic in the
> > total number of HTLCs).
>
> > **Consider it an experiment, as it is missing a lot before it could be
> > usable in any real commitment setting.
>
> I think it is an interesting question whether more advanced cryptosystems based 
> on assumptions other than the DL problem could constitute a factor of 
> scalability of LN payment throughput by orders of magnitude, by decoupling 
> the number of off-chain payments from the growth of the on-chain witness size 
> needed to claim them, without lowering security as with trimmed HTLCs due to 
> dust limits.

[bitcoin-dev] HTLC output aggregation as a mitigation for tx recycling, jamming, and on-chain efficiency (covenants)

2023-10-26 Thread Johan Torås Halseth via bitcoin-dev
Hi all,

After the transaction recycling has spurred some discussion the last
week or so, I figured it could be worth sharing some research I’ve
done into HTLC output aggregation, as it could be relevant for how to
avoid this problem in a future channel type.

TLDR; With the right covenant we can create HTLC outputs that are much
more chain efficient, not prone to tx recycling and harder to jam.

## Transaction recycling
The transaction recycling attack is made possible by the change made
to HTLC second level transactions for the anchor channel type[8];
making it possible to add fees to the transaction by adding inputs
without violating the signature. For the legacy channel type this
attack was not possible, as all fees were taken from the HTLC outputs
themselves, and had to be agreed upon by channel counterparties during
signing (of course this has its own problems, which is why we wanted
to change it).

The idea of HTLC output aggregation is to collapse all HTLC outputs on
the commitment to a single one. This has many benefits (that I’ll get
to), one of them being the possibility to let the spender claim the
portion of the output that they have a right to, deciding how much
should go to fees. Note that this requires a covenant to be possible.

## A single HTLC output
Today, every forwarded HTLC results in an output that needs to be
manifested on the commitment transaction in order to claw back money
in case of an uncooperative channel counterparty. This puts a limit on
the number of active HTLCs (in order for the commitment transaction to
not become too large) which makes it possible to jam the channel with
small amounts of capital [1]. It also turns out that having this limit
be large makes it expensive and complicated to sweep the outputs
efficiently [2].

Instead of having new HTLC outputs manifest for each active
forwarding, with covenants on the base layer one could create a single
aggregated output on the commitment. The output amount would be the sum
of the active HTLCs (offered and received); alternatively, one output
for received and one for offered. When spending this output, you would
only be entitled to the fraction of the amount corresponding to the
HTLCs you know the preimage for (received), or that have timed out
(offered).
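
As a toy illustration of this claim rule, here is a minimal Go sketch
(all names and types are hypothetical, and the covenant script would
have to enforce the equivalent sum on-chain; this is only an off-chain
model):

package main

import "fmt"

// htlc is a hypothetical representation of an HTLC committed to in the
// aggregated output.
type htlc struct {
    amount   int64 // satoshis
    expiry   int64 // height after which an offered HTLC can be timed out
    received bool  // true if received (claim via preimage), false if offered
}

// claimable sums the satoshis the spender is entitled to: received HTLCs
// they know the preimage for, plus offered HTLCs that have timed out.
func claimable(htlcs []htlc, havePreimage func(htlc) bool, height int64) int64 {
    var sum int64
    for _, h := range htlcs {
        switch {
        case h.received && havePreimage(h):
            sum += h.amount
        case !h.received && height >= h.expiry:
            sum += h.amount
        }
    }
    return sum
}

func main() {
    htlcs := []htlc{
        {amount: 10_000, expiry: 800_100, received: true},
        {amount: 25_000, expiry: 800_000, received: false},
    }
    // Assume we know the preimage for every received HTLC here.
    fmt.Println(claimable(htlcs, func(htlc) bool { return true }, 800_050)) // 35000
}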

## Impacts to transaction recycling
Depending on the capabilities of the covenant available (e.g.
restricting the number of inputs to the transaction) the transaction
spending the aggregated HTLC output can be made self-sustained: the
spender will be able to claim what is theirs (preimage or timeout) and
send it to whatever output they want, or to fees. The remainder will
go back into a covenant restricted output with the leftover HTLCs.
Note that this most likely requires Eltoo in order to not enable fee
siphoning[7].

## Impacts to slot jamming
With an aggregated output, the nature of “slot jamming” [1] changes
significantly. While channel capacity must still be
reserved for in-flight HTLCs, one no longer needs to allocate a
commitment output for each up to some hardcoded limit.

In today’s protocol this limit is 483, and I believe most
implementations default to an even lower limit. This leads to channel
jamming being quite inexpensive, as one can quickly fill a channel
with small HTLCs, without needing a significant amount of capital to
do so.

The origin of the 483 slot limit is the worst-case commitment size
before getting into non-standard territory [3]. With an aggregated
output this would no longer be the case, as adding HTLCs would no
longer affect commitment size. Instead, the full on-chain footprint of
an HTLC would be deferred until claim time.

Does this mean one could lift, or even remove, the limit on the number
of active HTLCs? Unfortunately, the obvious approach doesn’t seem to get
rid of the problem entirely, but mitigates it quite a bit.

### Slot jamming attack scenario
Consider the scenario where an attacker sends a large number of
non-dust* HTLCs across a channel, and the channel parties enforce no
limit on the number of active HTLCs.

The number of payments would not affect the size of the commitment
transaction at all, only the size of the witness that must be
presented when claiming or timing out the HTLCs. This means that there
is still a point at which chain fees get high enough for the HTLC to
be uneconomical to claim. This is no different than in today’s spec,
and such HTLCs will just be stranded on-chain until chain fees
decrease, at which point there is a race between the success and
timeout spends.

There seems to be no way around this; if you want to claim an HTLC
on-chain, you need to put the preimage on-chain. And when the HTLC
first reaches you, you have no way of predicting the future chain fee.
With a large number of uneconomical HTLCs in play, the total BTC
exposure could still be very large, so you might want to limit this
somewhat.

* Note that as long as the sum of HTLCs exceeds the dust limit, one
could manifest the output.

Re: [bitcoin-dev] Solving CoinPool high-interactivity issue with cut-through update of Taproot leaves

2023-10-12 Thread Johan Torås Halseth via bitcoin-dev
Hi, Antoine.

A brief update on this:

I created a demo script for the unilateral exit of 2-of-4 participants in a
Coinpool using OP_CCV:
https://github.com/halseth/tapsim/tree/matt-demo/examples/matt/coinpool/v2.
It shows how pubkeys and balances can be committed, how traversal and
modification of the data can be done, and validation of signatures for the
exiting users.

The script in this case is 142 bytes (can likely be optimized 20-30%) and
the witness including the script is 647 bytes. Most of this comes from the
merkle inclusion proofs, so we can expect this to grow by O(m log n) for m
users exiting a pool of n participants.

Regardless, I think the size would not matter in most (cooperative)
settings. N participants would jointly create a coinpool using the above
exit scripts, and a cooperative keyspend path. In case some user goes
offline, the remaining, online users can jointly use the unilateral exit
clause and exit into a _new_ coinpool and continue operations when this
transaction confirms.

What would be really interesting is if we could do the above exit off-chain,
and when the offline user comes back online, revert to the original
coinpool output, updating the balances according to updates that
happened while he was offline.

Assuming APO, I believe this could work, since the only thing that matters
for the off-chain transactions to remain valid is that the committed
balances and keys remain compatible. If the offline user is able to
unilaterally spend the original output where the remaining users had built
their off-chain coinpool construction ontop, the only thing they need to
change is the merkle inclusion proofs in their jointly signed transactions
(since they now spend from an output where the offline user exited). All
signatures remain valid.

Was this the kind of functionality you were looking for?

Cheers,
Johan



On Thu, Oct 5, 2023 at 9:38 AM Johan Torås Halseth 
wrote:

> Hi,
>
> Yes, one would need to have the <data> be a merkle root of all
> participants' keys and balances. Then, as you say, the scripts would
> have to enforce that one correctly creates new merkle roots according
> to the coin pool rules when spending it.
>
> - Johan
>
> On Thu, Oct 5, 2023 at 3:13 AM Antoine Riard 
> wrote:
> >
> > Hi Johan,
> >
> > Thanks for the insight.
> >
> > From the proposed semantics of OP_CHECKCONTRACTVERIFY iirc:
> >
> > 
> >
> > I think it is not yet indicated how the participants' pubkeys and
> balances can be disaggregated from <data>, a partial subset pushed on the
> stack, and the corresponding signatures verified as valid.
> >
> > One requirement of a cut-through update of taproot leaves is to verify
> the authentication of the fan-out balances and pubkeys towards the "online"
> partition. This subset is not known at pool setup, even if the contract or
> tree construct can be equilibrated with some expectation.
> >
> > Otherwise, it sounds like OP_CHECKCONTRACTVERIFY could be used to architect
> the proposed design of coinpool and its cut-through mechanism. One hard
> issue seems to be efficient traversal, inspection and modification of the
> contract <data>.
> >
> > Best,
> > Antoine
> >
> > On Tue, Oct 3, 2023 at 12:24, Johan Torås Halseth wrote:
> >>
> >> Hi, Antoine.
> >>
> >> It sounds like perhaps OP_CHECKCONTRACTVERIFY can achieve what you are
> >> looking for:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-May/021719.html
> >>
> >> By committing the participants' pubkeys and balances in the dynamic
> >> data instead of the taptree one can imagine a subset of online users
> >> agreeing to pool their aggregated balances in a new output, while the
> >> offline users' funds would remain inaccessible by them in a second
> >> output.
> >>
> >> The way this would work is by spending the coinpool utxo with a
> >> transaction having two outputs: one output that is the "remainder" of
> >> the previous coinpool (the offline users), and the second output the
> >> new coinpool among the online users*.
> >>
> >> When the offline users are back online, they could all agree to
> >> continue using the original coinpool utxo.
> >>
> >> * assuming Eltoo in case an offline user comes back online and double
> >> spends the UTXO.
> >>
> >> - Johan
> >>
> >>
> >> On Wed, Sep 27, 2023 at 12:08 PM Antoine Riard via bitcoin-dev
> >>  wrote:
> >> >
> >> > Hi Zeeman,
> >> >
> >> > See my comments at the time of OP_EVICT original publication.
> >> >
> >> >
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019939.html
> >> >
> >> > "I think in the context of (off-chain) payment pool, OP_EVICT requires
> >> > participant cooperation *after* the state update to allow a single
> >> > participant to withdraw her funds.
> >> >
> >> > I believe this is unsafe if we retain as an off-chain construction security
> >> > requirement that a participant should have the unilateral means to enforce
> >> > the latest agreed upon state at any time during the construction lifetime".

Re: [bitcoin-dev] Solving CoinPool high-interactivity issue with cut-through update of Taproot leaves

2023-10-05 Thread Johan Torås Halseth via bitcoin-dev
Hi,

Yes, one would need to have the <data> be a merkle root of all
participants' keys and balances. Then, as you say, the scripts would
have to enforce that one correctly creates new merkle roots according
to the coin pool rules when spending it.

- Johan

On Thu, Oct 5, 2023 at 3:13 AM Antoine Riard  wrote:
>
> Hi Johan,
>
> Thanks for the insight.
>
> From the proposed semantics of OP_CHECKCONTRACTVERIFY iirc:
>
> 
>
> I think it is not yet indicated how the participants' pubkeys and balances 
> can be disaggregated from <data>, a partial subset pushed on the stack, and 
> the corresponding signatures verified as valid.
>
> One requirement of a cut-through update of taproot leaves is to verify the 
> authentication of the fan-out balances and pubkeys towards the "online" 
> partition. This subset is not known at pool setup, even if the contract or 
> tree construct can be equilibrated with some expectation.
>
> Otherwise, it sounds like OP_CHECKCONTRACTVERIFY could be used to architect the 
> proposed design of coinpool and its cut-through mechanism. One hard issue 
> seems to be efficient traversal, inspection and modification of the contract 
> <data>.
>
> Best,
> Antoine
>
> On Tue, Oct 3, 2023 at 12:24, Johan Torås Halseth wrote:
>>
>> Hi, Antoine.
>>
>> It sounds like perhaps OP_CHECKCONTRACTVERIFY can achieve what you are
>> looking for: 
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-May/021719.html
>>
>> By committing the participants' pubkeys and balances in the dynamic
>> data instead of the taptree one can imagine a subset of online users
>> agreeing to pool their aggregated balances in a new output, while the
>> offline users' funds would remain inaccessible by them in a second
>> output.
>>
>> The way this would work is by spending the coinpool utxo with a
>> transaction having two outputs: one output that is the "remainder" of
>> the previous coinpool (the offline users), and the second output the
>> new coinpool among the online users*.
>>
>> When the offline users are back online, they could all agree to
>> continue using the original coinpool utxo.
>>
>> * assuming Eltoo in case an offline user comes back online and double
>> spends the UTXO.
>>
>> - Johan
>>
>>
>> On Wed, Sep 27, 2023 at 12:08 PM Antoine Riard via bitcoin-dev
>>  wrote:
>> >
>> > Hi Zeeman,
>> >
>> > See my comments at the time of OP_EVICT original publication.
>> >
>> > https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019939.html
>> >
>> > "I think in the context of (off-chain) payment pool, OP_EVICT requires
>> > participant cooperation *after* the state update to allow a single
>> > participant to withdraw her funds.
>> >
>> > I believe this is unsafe if we retain as an off-chain construction security
>> > requirement that a participant should have the unilateral means to enforce
>> > the latest agreed upon state at any time during the construction lifetime".
>> >
>> > I think this level of covenant flexibility is still wished for CoinPool as 
>> > a fundamental property, and this is offered by TLUV or MERKLESUB.
>> > On the other hand, I think OP_EVICT introduces this idea of *subgroup 
>> > novation* (i.e. `K-of-N`) of a P2TR scriptpubkey.
>> >
>> > To the best of my understanding, I think there is not yet any sound 
>> > covenant proposal aiming to combine TLUV and EVICT-like semantics in a 
>> > consistent set of Script primitives to enable "cut-through" updates, while 
>> > still retaining the key property of unilateral withdrawal of promised 
>> > balances in any order.
>> >
>> > I might go to work on crafting one, though for now I'm still interested in 
>> > understanding better whether on-chain "cut-through" is the best direction to solve 
>> > the fundamental high interactivity issue of channel factory and payment 
>> > pool over punishment-based ideas.
>> >
>> > Best,
>> > Antoine
>> >
>> > On Tue, Sep 26, 2023 at 07:51, ZmnSCPxj wrote:
>> >>
>> >> Good morning Antoine,
>> >>
>> >> Does `OP_EVICT` not fit?
>> >>
>> >> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019926.html
>> >>
>> >> Regards,
>> >> ZmnSCPxj
>> >>
>> >>
>> >> Sent with Proton Mail secure email.
>> >>
>> >> --- Original Message ---
>> >> On Monday, September 25th, 2023 at 6:18 PM, Antoine Riard via bitcoin-dev 
>> >>  wrote:
>> >>
>> >>
>> >> > Payment pools and channel factories are afflicted by severe 
>> >> > interactivity constraints worsening with the number of users owning an 
>> >> > off-chain balance in the construction. The security of user funds is 
>> >> > predicated on the ability to withdraw unilaterally from the off-chain 
>> >> > construction. As such any update applied to the off-chain balances 
>> >> > requires a signature contribution from the unanimity of the 
>> >> > construction users to ensure this ability is conserved along updates.
>> >> > As soon as one user starts to be offline or unresponsive, the updates 
>> >> > of the off-chain balances must be halted, and payment progress is 
>> >> > limited among subsets of 2 users sharing a channel.

Re: [bitcoin-dev] Solving CoinPool high-interactivity issue with cut-through update of Taproot leaves

2023-10-03 Thread Johan Torås Halseth via bitcoin-dev
Hi, Antoine.

It sounds like perhaps OP_CHECKCONTRACTVERIFY can achieve what you are
looking for: 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-May/021719.html

By committing the participants' pubkeys and balances in the dynamic
data instead of the taptree one can imagine a subset of online users
agreeing to pool their aggregated balances in a new output, while the
offline users' funds would remain inaccessible by them in a second
output.

The way this would work is by spending the coinpool utxo with a
transaction having two outputs: one output that is the "remainder" of
the previous coinpool (the offline users), and the second output the
new coinpool among the online users*.

When the offline users are back online, they could all agree to
continue using the original coinpool utxo.

* assuming Eltoo in case an offline user comes back online and double
spends the UTXO.

- Johan


On Wed, Sep 27, 2023 at 12:08 PM Antoine Riard via bitcoin-dev
 wrote:
>
> Hi Zeeman,
>
> See my comments at the time of OP_EVICT original publication.
>
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019939.html
>
> "I think in the context of (off-chain) payment pool, OP_EVICT requires
> participant cooperation *after* the state update to allow a single
> participant to withdraw her funds.
>
> I believe this is unsafe if we retain as an off-chain construction security
> requirement that a participant should have the unilateral means to enforce
> the latest agreed upon state at any time during the construction lifetime".
>
> I think this level of covenant flexibility is still wished for CoinPool as a 
> fundamental property, and this is offered by TLUV or MERKLESUB.
> On the other hand, I think OP_EVICT introduces this idea of *subgroup 
> novation* (i.e. `K-of-N`) of a P2TR scriptpubkey.
>
> To the best of my understanding, I think there is not yet any sound covenant 
> proposal aiming to combine TLUV and EVICT-like semantics in a consistent set 
> of Script primitives to enable "cut-through" updates, while still retaining 
> the key property of unilateral withdrawal of promised balances in any order.
>
> I might go to work on crafting one, though for now I'm still interested in 
> understanding better whether on-chain "cut-through" is the best direction to solve 
> the fundamental high interactivity issue of channel factory and payment pool 
> over punishment-based ideas.
>
> Best,
> Antoine
>
>> On Tue, Sep 26, 2023 at 07:51, ZmnSCPxj wrote:
>>
>> Good morning Antoine,
>>
>> Does `OP_EVICT` not fit?
>>
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019926.html
>>
>> Regards,
>> ZmnSCPxj
>>
>>
>> Sent with Proton Mail secure email.
>>
>> --- Original Message ---
>> On Monday, September 25th, 2023 at 6:18 PM, Antoine Riard via bitcoin-dev 
>>  wrote:
>>
>>
>> > Payment pools and channel factories are afflicted by severe interactivity 
>> > constraints worsening with the number of users owning an off-chain balance 
>> > in the construction. The security of user funds is predicated on the 
>> > ability to withdraw unilaterally from the off-chain construction. As such 
>> > any update applied to the off-chain balances requires a signature 
>> > contribution from the unanimity of the construction users to ensure this 
>> > ability is conserved along updates.
>> > As soon as one user starts to be offline or unresponsive, the updates of 
>> > the off-chain balances must be halted, and payment progress is 
>> > limited among subsets of 2 users sharing a channel. Different people have 
>> > proposed solutions to this issue: introducing a coordinator, partitioning 
>> > or layering balances in off-chain user subsets. I think all those 
>> > solutions have circled around a novel issue introduced, namely 
>> > equivocation of off-chain balances to the harm of construction 
>> > counterparties [0].
>> >
>> > As ZmnSCPxj pointed out recently, one way to mitigate this equivocation 
>> > consists in punishing the cheating pre-nominated coordinator on an 
>> > external fidelity bond. One can even imagine more trust-mimized and 
>> > decentralized fraud proofs to implement this mitigation, removing the need 
>> > of a coordinator [1].
>> >
>> > However, for punishment of equivocation to be game-theory sound, I believe it 
>> > should compensate a defrauded counterparty for the integrity of its lost off-chain 
>> > balance. As one cheating counterparty can equivocate in the worst-case 
>> > against all the other counterparties in the construction, one fidelity 
>> > bond should be equal to ( C - 1 ) * B satoshi amount, where C is the 
>> > number of construction counterparty and B the initial off-chain balance of 
>> > the cheating counterparty.
>> >
>> > Moreover, I guess it is impossible to know ahead of a partition or 
>> > transition who will be the "honest" counterparties from the "dishonest" 
>> > ones, therefore this ( C - 1 ) * B-sized fidelity bond must be maintained 
>> > by 

Re: [bitcoin-dev] MATT: [demo] Optimistic execution of arbitrary programs

2023-10-03 Thread Johan Torås Halseth via bitcoin-dev
Hi, aj. Thanks for taking a look!

> "O(n log n)" sounds wrong? Isn't it O(P + log(N)) where P is the size
> of the program, and N is the number of steps (rounded up to a power of 2)?

Thanks, you are right. That's a typo; it should indeed be O(log n), n
being the number of steps in the program. I think P doesn't matter
here, as we never put the whole program on-chain, just break it down
into n steps.

> > node = h( start_pc|start_i|start_x|end_pc|end_i|end_x|h( h(sub_node1)|h(sub_node2) ) )
> But I don't think that works -- I think you want to know h(sub_node1)
> and h(sub_node2) directly, so that you can compare them to the results
> you get if you run the computation, and choose the one that's incorrect.

This denotes only how to create the commitment. When we traverse the
tree, the node scripts enforce that the h(sub_node{1,2}) that is
consistent with the commitment is in the witness, achieving exactly
what you suggest.

> I'm not seeing what forces the prover to come up with a balanced state
> tree

To achieve this, the participants agree up front (when the contract is
created) on the exact length of the trace (or equivalently, the depth
of the tree). If the actual execution is shorter, we fill the rest
with no-ops.

This means that we know the moment the challenge protocol starts the
transactions that are going to be played (kinda like a CTV tree), so
if one of the participants creates a trace from a non-balanced state
tree, it will be rejected by the script at that level. It is indeed
important that the state tree is built in a deterministic way.
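
To make the deterministic construction concrete, here is a rough Go
sketch of the padding and commitment steps. The no-op encoding,
serialization and names are illustrative assumptions (not the mattlab
code), and the node format follows aj's corrected suggestion quoted
below:

package main

import (
    "crypto/sha256"
    "fmt"
)

// state is one step of the execution trace: the program counter and
// the variables i and x, as in the `multiply` example in this thread.
type state struct{ pc, i, x uint32 }

// padTrace extends a trace to the agreed-upon power-of-two transition
// count by repeating the final state (a "no-op" step, an assumption of
// this sketch), so every honest prover builds the same balanced tree.
func padTrace(trace []state, transitions int) []state {
    for len(trace) < transitions+1 {
        trace = append(trace, trace[len(trace)-1])
    }
    return trace
}

// node commits to the start and end states of a sub-trace plus the two
// child hashes (the corrected node format from the discussion below).
func node(start, end state, left, right [32]byte) [32]byte {
    return sha256.Sum256([]byte(fmt.Sprintf("%d|%d|%d|%d|%d|%d|%x|%x",
        start.pc, start.i, start.x, end.pc, end.i, end.x, left, right)))
}

// root recursively builds the balanced commitment tree over a trace of
// 2^k+1 states; a leaf covers exactly one state transition.
func root(trace []state) [32]byte {
    if len(trace) == 2 {
        return node(trace[0], trace[1], [32]byte{}, [32]byte{})
    }
    mid := len(trace) / 2
    l, r := root(trace[:mid+1]), root(trace[mid:])
    return node(trace[0], trace[len(trace)-1], l, r)
}

func main() {
    // Two real transitions (0|0|2 -> 0|1|4 -> 0|2|8), padded to four.
    trace := padTrace([]state{{0, 0, 2}, {0, 1, 4}, {0, 2, 8}}, 4)
    fmt.Printf("root: %x\n", root(trace))
}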

> There seems to be an error in the "what this would look like for 4 state
> transitions" diagram -- the second node should read "0|0|2 -> 0|1|4"

Yes, fixed! Thanks :)

- Johan


On Mon, Oct 2, 2023 at 5:10 PM Anthony Towns  wrote:
>
> On Fri, Sep 29, 2023 at 03:14:25PM +0200, Johan Torås Halseth via bitcoin-dev 
> wrote:
> > TLDR; Using the proposed opcodes OP_CHECKCONTRACTVERIFY and OP_CAT, we
> > show how to trace execution of the program `multiply` [1] and challenge
> > this computation in O(n log n) on-chain transactions:
>
> "O(n log n)" sounds wrong? Isn't it O(P + log(N)) where P is the size
> of the program, and N is the number of steps (rounded up to a power of 2)?
>
> You say:
>
> > node = h( start_pc|start_i|start_x|end_pc|end_i|end_x|h( h(sub_node1)|h(sub_node2) ) )
>
> But I don't think that works -- I think you want to know h(sub_node1)
> and h(sub_node2) directly, so that you can compare them to the results
> you get if you run the computation, and choose the one that's incorrect.
> Otherwise you've got a 50/50 chance of choosing the subnode that's
> actually correct, and you'll only be able to prove a mistake with
> 1/2**N odds?
>
> Not a big change, it just becomes 32B longer (and drops some h()s):
>
>   node = start_pc|start_i|start_x|end_pc|end_i|end_x|h(sub_node1)|h(sub_node2)
>   leaf = start_pc|start_i|start_x|end_pc|end_i|end_x|null
>
> I'm not seeing what forces the prover to come up with a balanced state
> tree -- if they don't have to have a balanced tree, then I think there
> are many possible trees for the same execution trace, and again it would
> become easy to hide an error somewhere the challenger can't find. Adding a
> "start_stepcount" and "end_stepcount" would probably remedy that?
>
> There seems to be an error in the "what this would look like for 4 state
> transitions" diagram -- the second node should read "0|0|2 -> 0|1|4"
> (combining its two children), not "0|0|2 -> 1|0|2" matching its left
> child.
>
> I'm presuming that the counterparty verifies they know the program (ie,
> all the leaves in the contract taptree) before agreeing to the contract
> in the first place. I think that's fine.
>
> Cheers,
> aj
>


[bitcoin-dev] MATT: [demo] Optimistic execution of arbitrary programs

2023-09-29 Thread Johan Torås Halseth via bitcoin-dev
Hi, all!

I've been working on an implementation of the original MATT challenge
protocol[0], with a detailed description of how we go from a
"high-level arbitrary program" to something that can be verified
on-chain in Bitcoin Script.

You can find the write-up here, which also includes instructions of
how to run the code and inspect the transactions using a local block
explorer: https://github.com/halseth/mattlab/blob/main/docs/challenge.md

TLDR; Using the proposed opcodes OP_CHECKCONTRACTVERIFY and OP_CAT, we
show how to trace execution of the program `multiply` [1] and challenge
this computation in O(n log n) on-chain transactions:

func multiply(x int) int {
    i := 0
    for { // Go has no `while`; an unconditional for loop keeps the same trace
        if i < 8 {
            x = x + x
            i = i + 1
        } else {
            break
        }
    }
    return x
}
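
(A quick sanity check: multiply(2) doubles x eight times and returns
512. The intermediate (pc, i, x) states, e.g. 0|0|2 -> 0|1|4 -> 0|2|8
and so on, are the execution trace that gets committed to and bisected
during a challenge.)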

Next steps would be to make this a generic framework with tools to
automatically compile arbitrary high-level programs down to
MATT-compatible Bitcoin Scripts.

All feedback appreciated!

- Johan

[0] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-November/021182.html
[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-November/021205.html


Re: [bitcoin-dev] Concrete MATT opcodes

2023-08-08 Thread Johan Torås Halseth via bitcoin-dev
Hi, Salvatore.

Thanks for the update! I like the fact that taptree verification now
can be done on both input and outputs, and having them be symmetrical
also makes the learning curve a bit easier.

I have implemented the updated opcodes in btcd (very rough
implementation)[1] as well as updated the example scripts for
simulating CTV[2] and Coinpools[3].

From doing this I would again like to offer some suggestions on the proposal.

- For the opcode parameter ordering, it feels unnatural for the two
tweaks (data, taptree) to be separated by the internal key. A more
natural ordering of parameters IMO would be (of course this is all
subjective):
 OP_CCV.

If you disagree, I would love some rationale for the ordering you
chose! (looks like you also changed it again after your last post?).

- The deferred amount check seems a bit out of place, and insufficient
at least for the use cases I had in mind. It works well in a vault
setting, where you want to consolidate many outputs into a single new
one, and in 1-input-1-output settings where you want to preserve the
amount exactly. However, for coinpools, CTV with more than one output
and other interesting contracts where you want to split or combine
amounts it won't be powerful enough.
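
To make the objection concrete, here is a toy Go model of the deferred
amount rule (as described in the proposal quoted below); the scenario
and all numbers are made up:

package main

import "fmt"

// deferredCheck models the rule: every input doing a (non-flagged) CCV
// towards output j adds its amount to bucket[j], and each output must
// then carry at least its bucket. Amounts are in satoshis.
func deferredCheck(inAmt []int64, ccvTarget []int, outAmt []int64) bool {
    bucket := make([]int64, len(outAmt))
    for i, j := range ccvTarget {
        bucket[j] += inAmt[i]
    }
    for j := range outAmt {
        if outAmt[j] < bucket[j] {
            return false
        }
    }
    return true
}

func main() {
    // Vault-style consolidation of two inputs into one output: passes.
    fmt.Println(deferredCheck([]int64{40_000, 60_000}, []int{0, 0}, []int64{100_000}))

    // Splitting one 100k input into two 50k outputs: the whole input
    // amount is pinned to a single output's bucket, so an exact split
    // cannot be expressed with this check alone.
    fmt.Println(deferredCheck([]int64{100_000}, []int{0}, []int64{50_000, 50_000}))
}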

I'm wondering what other use cases you had in mind for the deferred
output amount check? Maybe I have missed something, but if not it
would perhaps be better to leave out the amount preservation check, or
go the extra mile and propose a more powerful amount introspection
machinery.

Cheers,
Johan

[1] - https://github.com/halseth/btcd/pull/1
[2] - 
https://github.com/halseth/tapsim/blob/8f4ac4d914fde0847c72cd22bdd45a1b7247cadf/examples/matt/ctv2/README.md
[3] - 
https://github.com/halseth/tapsim/blob/8f4ac4d914fde0847c72cd22bdd45a1b7247cadf/examples/matt/coinpool/README.md


On Sun, Jul 30, 2023 at 11:51 PM Salvatore Ingala via bitcoin-dev
 wrote:
>
> Hi all,
>
> I have put together a first complete proposal for the core opcodes of
> MATT [1][2].
> The changes make the opcode functionally complete, and the
> implementation is revised and improved.
>
> The code is implemented in the following fork of the
> bitcoin-inquisition repo:
>
> https://github.com/Merkleize/bitcoin/tree/checkcontractverify
>
> Therefore, it also includes OP_CHECKTEMPLATEVERIFY, as in a
> previous early demo for vaults [3].
>
> Please check out the diff [4] if you are interested in the
> implementation details. It includes some basic functional tests for
> the main cases of the opcode.
>
> ## Changes vs the previous draft
>
> These are the changes compared to the initial incomplete proposal:
> - OP_CHECK{IN,OUT}CONTRACTVERIFY are replaced by a single opcode
>   OP_CHECKCONTRACTVERIFY (CCV). An additional `flags` parameter allows
>   to specify if the opcode operates on an input or an output.
>   This also allows inspection of other inputs, that was not possible
>   with the original opcodes.
> - For outputs, the default behavior is to have the following deferred
>   checks mechanism for amounts: all the inputs that have a CCV towards
>   the same output, have their input amounts summed, and that act as a
>   lower bound for that output's amount.
>   A flag can disable this behavior. [*]
> - A number of special values of the parameters were defined in order
>   to optimize for common cases, and add some implicit introspection.
> - The order of parameters is modified (particularly, <data> is at the
>   bottom of the arguments, as that is more natural when writing Scripts).
>
> ## Semantics
>
> The new OP_CHECKCONTRACTVERIFY takes 5 parameters from the stack:
>
>   <data>, <index>, <pk>, <taptree>, <flags>
>
> The core logic of the opcode is as follows:
>
> "Check if the -th input/output's scriptPubKey is a P2TR
> whose public key is obtained from , (optionally) tweaked with
> , (optionally) tap-tweaked with ".
>
> The following are special values of the parameters:
>
> - if <pk> is empty, it is replaced with a fixed NUMS point. [**]
> - if <pk> is -1, it is replaced with the current input's taproot
>   internal key.
> - if <index> is -1, it is replaced with the current input's index.
> - if <data> is empty, the data tweak is skipped.
> - if <taptree> is empty, the taptweak is skipped.
> - if <taptree> is -1, it is replaced with the current input's root
>   of the taproot merkle tree.
>
> There are two defined flags:
> - CCV_FLAG_CHECK_INPUT = 1: if present, <index> refers to an input;
>   otherwise, it refers to an output.
> - CCV_FLAG_IGNORE_OUTPUT_AMOUNT = 2: only defined when _CHECK_INPUT
>   is absent, it disables the deferred checks logic for amounts.
>
> Finally, if both the flags CCV_FLAG_CHECK_INPUT and
> CCV_FLAG_IGNORE_OUTPUT_AMOUNT are absent:
>   - Add the current input's amount to the <index>-th output's bucket.
>
> After the evaluation of all inputs, it is verified that each output's
> amount is greater than or equal to the total amount in the bucket
> of that output (validation of the deferred checks).
>
> ## Comment
>
> It is unclear if all the special values above will be useful in

Re: [bitcoin-dev] Vaults in the MATT framework

2023-06-02 Thread Johan Torås Halseth via bitcoin-dev
Hi,

It was briefly mentioned in the original post, but I wanted to show how
simple it is to use COCV as an alternative to CTV, removing that
dependency.

> In particular, it also inherits the choice of using OP_CTV as a primitive,
> building on top of the bitcoin-inquisition's current branch that has already
> merged OP_CTV. Reasonable vaults would be possible without CTV, but they
> would be less efficient, particularly in the case of sending to many addresses
> in a single unvaulting flow.

Instead of specifying a CTV hash as embedded data, one could embed the
(commitment to the) outputs of the withdrawal transaction. Then
instead of a single OP_CTV, use one OP_COCV per output to match against
the embedded data. Less efficient in the case of many outputs, as you
mention, but simple enough to be interesting.

Here's an example of how to use MATT as a CTV replacement:
https://github.com/halseth/tapsim/blob/b07f29804cf32dce0168ab5bb40558cbb18f2e76/examples/matt/ctv2/README.md
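
As an illustration, this is one way the commitment to the withdrawal
outputs could be computed off-chain before being embedded; the
serialization here is a made-up assumption (the linked example contains
the actual scripts):

package main

import (
    "crypto/sha256"
    "encoding/binary"
    "fmt"
)

// txOut is a minimal stand-in for a transaction output.
type txOut struct {
    value    int64
    pkScript []byte
}

// commitOutputs hashes the intended withdrawal outputs into the
// 32-byte embedded data. At spend time, one OP_COCV per output would
// check the spending transaction's outputs against this commitment.
func commitOutputs(outs []txOut) []byte {
    h := sha256.New()
    for _, o := range outs {
        binary.Write(h, binary.LittleEndian, o.value)
        h.Write(o.pkScript)
    }
    return h.Sum(nil)
}

func main() {
    outs := []txOut{{50_000, []byte{0x51}}, {49_000, []byte{0x52}}}
    fmt.Printf("embedded data: %x\n", commitOutputs(outs))
}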

Cheers,
Johan



On Tue, May 2, 2023 at 10:22 AM Salvatore Ingala via bitcoin-dev
 wrote:
>
> Hi Michael,
>
> I can't make any claim of expertise on the field (especially on the
> other proposals that you mentioned), so this post necessarily includes
> my opinions − and possibly my biases.
>
> The core functionality of MATT is quite simple, and could be adapted
> to any version of the scripting system: basically, COCV allows to
> "embed" some data in the next output, and decide its script; CICV
> allows "reading" this data.
> The design I proposed on taproot is surely not the only possible way,
> but it's the most simple/elegant I could come up with. Moreover, it
> doesn't seem very useful to spend time trying to get it to work on
> pre-taproot Script, due to the obvious advantages of those ideas when
> deployed on taproot (like having taptrees, and all the nice properties
> of Schnorr signatures).
>
> CICV/COCV can certainly be considered an additional form of
> introspection: you're checking that the script of an input/output
> equals a certain value, which is not possible in today's Script.
> I think that's generally true for all covenant proposals.
>
> Unlike some other proposals, MATT is not yet fully formalized, so I
> generally call "MATT" the combination of CICV+COCV, plus some other
> small set of opcodes that is yet to be defined exactly. I would say it
> fits in the same family as APO/OP_CTV/OP_VAULT, per your bucketization.
>
> The previous posts about MATT, fraud proofs, etc. are an exploration of
> the deeper things that are enabled by the MATT opcodes. The claim is
> that a set of changes that is (arguably) quite small and easy to analyze
> is enough to express general smart contracts − thanks to fraud proofs.
> However, fraud proofs themselves are a quite advanced application of
> the new opcodes, and are not needed for most/all of the things that
> people are trying to build today with the other covenant proposals.
>
>
> Since you mention Simplicity: my current understanding is that its
> endeavour of replacing Script with a better language is orthogonal to
> the discussion about what features (e.g.: introspection, covenants)
> should be in the language.
>
> All the covenant proposals listed above are technically a lot smaller
> and easier to audit than both the SegWit and the Taproot soft forks,
> both in terms of code and conceptual complexity.
>
> Therefore, if we _do_ want the features that they enable, the required
> engineering for a soft-fork is relatively straightforward, and there is
> not much of a reason to wait for Simplicity. It will be trivial to "port" any
> constructions we might create today with covenants to Simplicity scripts.
>
> If we _do not_ want those features, then the decision would rather be
> guided by other considerations, like potential risks to bitcoin caused
> by the effect of those features on miners' incentives. These
> concerns are not answered by Simplicity, as far as I understand:
> you would then want to implement Simplicity _without_ those features.
>
> Best,
> Salvatore
>
> On Mon, 1 May 2023 at 16:18, Michael Folkson  
> wrote:
>>
>> Hi Salvatore
>>
>> Can you clarify for me which bucket this proposal sits? We have APO, CTV, 
>> OP_VAULT etc that are proposals to add additional functionality to SegWit 
>> version 1, Tapleaf version 0 scripts. We have Simplicity that would need a 
>> new Tapleaf version (e.g. Tapleaf version 1). And then there are CISA like 
>> proposals that would need a new SegWit version (e.g. SegWit version 2). It 
>> looks to me like your proposal is in the first bucket (same as APO, CTV etc) 
>> as it is just introducing new opcode functionality to existing script with 
>> no deeper introspection needed but previous and current discussion of fraud 
>> proofs, MATT frameworks etc made me initially think it was going to require 
>> more than that.
>>
>> Thanks
>> Michael
>>
>> --
>> Michael Folkson
>> Email: michaelfolkson at protonmail.com
>> GPG: A2CF5D71603C9201065981

Re: [bitcoin-dev] Merkleize All The Things

2023-05-30 Thread Johan Torås Halseth via bitcoin-dev
I should clarify: the current proposal already achieves the first part
needed for coin pools: removing some data from the merkle tree (I was
indeed referring to the embedded data, not the taptree).

The thing that is missing is removal of a public key from the taproot
internal key, but as mentioned I do agree that this is out of scope
for this proposal.

I believe you can get many of the benefits by falling back to "old
style multisig" in case someone exits the pool, by having a tap leaf
defining a multisig check amongst the remaining pubkeys.

Cheers,
Johan

> It seems likely that efficient use of the taproot internal pubkey with
> "dynamic key aggregation" is not possible with the current semantics
> (unless one ventures into the fraud proof machinery, which seems
> overkill!).
>
> However, in constructions with MATT opcodes, I would never expect the
> need for data to be stored in the taptree. In particular, for the case
> of CoinPools, the pubkeys of the members could also be stored in the
> embedded data, having a single "unilateral withdrawal" tapleaf.
> Removing a key would then amount to replacing it with a fixed NUMS key
> and computing the new root (re-using the same Merkle proof).
> Note that this is not a lot costlier than using a tapleaf per user:
> instead of paying the cost for the Merkle proof in the control block,
> you pay for it explicitly in the Script witness.
>
> Therefore, I would expect there to be reasonable CoinPools designs
> without additional opcodes − but I am only moderately confident as
> this is beyond the level of sophistication I've been exploring so far.


On Sun, May 28, 2023 at 12:24 PM Salvatore Ingala
 wrote:
>
> Hi Johan,
>
> Exciting to finally see some merkleization, which was only confined
> within the meme, up to this point!
>
> > A simpler way IMO, would be to make OP_CICV and OP_COCV symmetrical:
> > Have OP_CICV take an optional taproot and do the same check as is
> > done for the output: Q == tweak(tweak(X,D), T).
>
> I think that's an excellent suggestion, which I was already exploring
> for a different purpose: bringing externally signed data onto the
> stack. My goal there was to allow eltoo-style replacement.
>
> Until recently, I thought that a clean/efficient version of eltoo
> would require OP_CHECKSIGFROMSTACK or ANYPREVOUT. However, extending
> OP_CHECKINPUTCONTRACTVERIFY to enable introspection of other inputs
> allows a reasonable workaround: producing a separate UTXO signed with
> ANYONECANPAY, with the required data embedded as usual. Spending that
> UTXO together with the channel's UTXO allows one to get that data
> on the stack (with its signature already checked by consensus rules).
> I drafted this idea in a gist [1].
>
> Remark: it still seems easier (and probably slightly more efficient)
> to build eltoo replacement with CSFS or APO in addition to MATT
> opcodes.
>
> A possible semantics for OP_CHECKINPUTCONTRACTVERIFY could then be
> exactly symmetrical to that of OP_CHECKOUTPUTCONTRACTVERIFY, with
> the exception that the special input index -1 would represent the
> current input.
>
> Pushing this further, another option that could be worth exploring
> is to have a single OP_CHECK_IN_OUT_CONTRACT_VERIFY opcode, with the
> same semantics as OP_CHECKOUTPUTCONTRACTVERIFY from [2], but with an
> additional `flags` argument, which is a bitmap where:
> - the lowest-significant bit determines if the index refers to inputs
>   or outputs (where input index -1 refers to the current input)
> - the second bit specifies if amounts should be preserved with
>   deferred checks as described in [2] (only applicable to outputs)
> - other bits are OP_SUCCESS and reserved for future behaviors.
>
> This would make the opcodes 1-2 bytes larger, but might allow greater
> flexibility, and keep some room for future extensions.
>
> > 2.To make fully functioning CoinPools, one would need functionality
> > similar to OP_MERKLESUB[4]: remove some data from the merkle tree,
> > and remove a key from the aggregated internal key.
>
> It seems likely that efficient use of the taproot internal pubkey with
> "dynamic key aggregation" is not possible with the current semantics
> (unless one ventures into the fraud proof machinery, which seems
> overkill!).
>
> However, in constructions with MATT opcodes, I would never expect the
> need for data to be stored in the taptree. In particular, for the case
> of CoinPools, the pubkeys of the members could also be stored in the
> embedded data, having a single "unilateral withdrawal" tapleaf.
> Removing a key would then amount to replacing it with a fixed NUMS key
> and computing the new root (re-using the same Merkle proof).
> Note that this is not a lot costlier than using a tapleaf per user:
> instead of paying the cost for the Merkle proof in the control block,
> you pay for it explicitly in the Script witness.
>
> Therefore, I would expect there to be reasonable CoinPools designs
> without additional opcodes − but I am only moderately confident as this is 
> beyond the level of sophistication I've been exploring so far.

Re: [bitcoin-dev] Merkleize All The Things

2023-05-26 Thread Johan Torås Halseth via bitcoin-dev
Hi, Salvatore.

As a further exploration of this idea, I implemented a
proof-of-concept of OP_CICV and OP_COCV in btcd[1] that together with
OP_CAT enables a set of interesting use cases.

One such use case is, as mentioned earlier, CoinPools[2]. The opcodes
let you easily check the "dynamically committed data" of an input you
are spending, and enforce a new commitment on the output. The idea is
to have the set of participants in the pool, and their balances, be
the UTXO's committed data, and use this to validate the legitimacy of
a transaction, determining whether it permits a peer to exit with a
portion of the pooled funds.

Doing what you suggested above, having the input and output commit to
a merkle tree of participants and balances, we are able to quite
elegantly verify the coin pool exit clause. Here is a working example
of how that could look: [3]. Obviously a lot is missing before this is
a working CoinPool implementation, but it demonstrates how
OP_C[I/O]V introduces "memory" to Bitcoin script.

Having done this exercise, I have a few suggestions on how one could
further extend the proposal:

1. In the current proposal for OP_CHECKOUTPUTCONTRACTVERIFY, the
opcodes check whether the output key Q is key X tweaked with data D
and taproot T: Q == tweak(tweak(X,D), T).

OP_CHECKINPUTCONTRACTVERIFY on the other hand, works on the input
internal key, and does not care about the taptree on the input: P ==
tweak(X,D), where Q = tweak(P, T). In most cases this is probably good
enough, since you are already executing the current script and that
way know the spender has provided the correct taproot.

However, in the coin pool script mentioned above, I found that I
wanted to re-use the same taproot for the output (recursively). I
believe this would be a quite common use case. To solve this I
committed the taproot as part of the data itself: D' = hash(T+D),
which was then verified by OP_CICV. If you are aware of more efficient
alternatives, I am eager to hear them.

A simpler way, IMO, would be to make OP_CICV and OP_COCV symmetrical:
Have OP_CICV take an optional taproot and do the same check as is done
for the output: Q == tweak(tweak(X,D), T).
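
For illustration, here is a Go sketch of that symmetric check using
btcd's taproot helpers. Note that it reuses BIP-341's TapTweak hash for
the data tweak purely for convenience; the actual proposal defines its
own tagged hash for that step, so treat this as a sketch of the
structure rather than the exact rule:

package main

import (
    "bytes"
    "crypto/sha256"
    "fmt"

    "github.com/btcsuite/btcd/btcec/v2"
    "github.com/btcsuite/btcd/btcec/v2/schnorr"
    "github.com/btcsuite/btcd/txscript"
)

// checkContract mirrors the suggested check: Q == tweak(tweak(X,D), T).
func checkContract(q, x *btcec.PublicKey, d, t []byte) bool {
    p := txscript.ComputeTaprootOutputKey(x, d)        // data tweak (illustrative)
    expected := txscript.ComputeTaprootOutputKey(p, t) // taptweak
    return bytes.Equal(schnorr.SerializePubKey(q), schnorr.SerializePubKey(expected))
}

func main() {
    priv, _ := btcec.NewPrivateKey()
    x := priv.PubKey()
    d := sha256.Sum256([]byte("participants+balances root")) // embedded data (hypothetical)
    t := []byte{}                                            // no script tree, for brevity

    q := txscript.ComputeTaprootOutputKey(txscript.ComputeTaprootOutputKey(x, d[:]), t)
    fmt.Println(checkContract(q, x, d[:], t)) // true
}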

2. To make fully functioning CoinPools, one would need functionality
similar to OP_MERKLESUB[4]: remove some data from the merkle tree, and
remove a key from the aggregated internal key. This suggestion may
surpass the intended scope of this proposal, and would likely
necessitate the availability of multiple EC operations to accommodate
various key schemes. If we had opcodes for adding and removing keys
from the internal key this would be even more powerful.

I look forward to hearing your thoughts on these suggestions and
further exploring the possibilities of the proposal!

Cheers,
Johan

[1] 
https://github.com/halseth/btcd/pull/1/commits/90a4065bdcd8029fe3325514a250490cba66fddd
[2] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017964.html
[3] https://github.com/halseth/tapsim/tree/matt-demo/examples/matt/coinpool
[4] https://github.com/ariard/bips/blob/coinpool-bips/bip-merklesub.mediawiki


On Fri, May 5, 2023 at 11:18 PM Salvatore Ingala
 wrote:
>
> On Thu, 4 May 2023 at 10:34, Johan Torås Halseth  wrote:
> >
> > It sounds like we can generalize the description of the construct to:
> > Access to (the hash of) embedded data of inputs and outputs, and the
> > enforcement of output keys and (static) taptrees. In other words, as
> > long as you can dynamically compute the output embedded data in
> > Script, you can enforce more or less anything (since you can make the
> > output script enforce presenting a witness "satisfying" the embedded
> > data).
> >
> > Does that sound about right?
>
> Yes. Fraud proofs allow us to extend beyond what Script can do (with the
> necessary tradeoffs), but there is plenty that can be done without them.
>
>
> > For instance, I believe you could simulate coin pools pretty easily:
> > Commit to the set of pubkeys and amounts owned by the participants in
> > the pool, and an output taptree where each participant has their own
> > spending path. Now, to exit the pool unilaterally, the participant
> > must present a proof that their pubkey+amount is committed to in the
> > input and an output where it is no longer committed.
>
> I don't think one would want to have a tapleaf for each participant:
> that would make you pay log n hashes just to reveal the tapleaf, and
> then you still need to pay log n hashes to access the embedded data.
>
> Instead, the "unilateral withdrawal Script" can be the same for all the
> participants. The witness would be the Merkle proof, plus perhaps some
> additional information to identify the leaf in the tree (depending on
> how the Merkle tree is implemented). In a complete Merkle tree for
> N = 2^n participants, the witness could contain the n hashes that allow
> to prove the value of the leaf, plus n bits to identify the path to the
> leaf (0/1 for 'left/right" child), since Script do

Re: [bitcoin-dev] Merkleize All The Things

2023-05-04 Thread Johan Torås Halseth via bitcoin-dev
Thank you for the example.

It sounds like we can generalize the description of the construct to:
Access to (the hash of) embedded data of inputs and outputs, and the
enforcement of output keys and (static) taptrees. In other words, as
long as you can dynamically compute the output embedded data in
Script, you can enforce more or less anything (since you can make the
output script enforce presenting a witness "satisfying" the embedded
data).

Does that sound about right?

For instance, I believe you could simulate coin pools pretty easily:
Commit to the set of pubkeys and amounts owned by the participants in
the pool, and an output taptree where each participant has their own
spending path. Now, to exit the pool unilaterally, the participant
must present a proof that their pubkey+amount is committed to in the
input and an output where it is no longer committed.

A question that arises is how one would efficiently (in Script) prove
the inclusion/exclusion of the data in the commitment. One could
naively hash all the data twice during script execution (once for the
input, once for the output), but that is costly. It would be natural
to show merkle tree inclusion/exclusion in script, but perhaps there
are more efficient ways to prove it?
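
For reference, here is what the off-chain version of such an inclusion
proof looks like; a Script version would unroll the same loop, one
level per proof element (the leaf encoding is hypothetical):

package main

import (
    "bytes"
    "crypto/sha256"
    "fmt"
)

// verifyInclusion walks a classic Merkle proof: starting from the leaf
// hash, combine with each sibling (a path bit selects left/right) and
// compare the result against the committed root.
func verifyInclusion(root, leaf [32]byte, siblings [][32]byte, path uint) bool {
    h := leaf
    for i, s := range siblings {
        if path&(1<<uint(i)) == 0 {
            h = sha256.Sum256(append(h[:], s[:]...))
        } else {
            h = sha256.Sum256(append(s[:], h[:]...))
        }
    }
    return bytes.Equal(h[:], root[:])
}

func main() {
    // Hypothetical leaf encoding of a participant's pubkey and balance.
    l0 := sha256.Sum256([]byte("alice|50000"))
    l1 := sha256.Sum256([]byte("bob|30000"))
    root := sha256.Sum256(append(l0[:], l1[:]...))
    fmt.Println(verifyInclusion(root, l0, [][32]byte{l1}, 0)) // true
}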

- Johan


On Tue, May 2, 2023 at 12:44 AM Salvatore Ingala via bitcoin-dev
 wrote:
>
> Hi all,
>
> I apologize for a couple of oversights in my last e-mail.
>
> The first is that m_B can't be committed as-is in the contract's
> embedded data, with the current semantics of OP_COCV, which
> only allows 32-byte values. A solution could be to store its
> hash SHA256(m_B), instead.
>
> (I didn't test the Scripts, so there could be other bugs − hopefully the
> general idea is clear, anyway)
>
> On Mon, 1 May 2023 at 15:11, Salvatore Ingala  
> wrote:
>>
>> If the internal_pubkey is a musig-aggregated key of Alice and Bob,
>> the game can be settled entirely offline after the first transaction.
>> Simply, Bob communicates his move to Alice, Alice reveals her move to
>> Bob, and they can settle the bet. The game would be played without
>> any script being executed, therefore all transactions could look like
>> any other P2TR, with the only possible fingerprinting being due to the
>> input amounts.
>
>
> This is incomplete: Alice can't trust Bob by revealing her move, as
> he could then cheat on-chain and play a different move.
>
> The fix should be straightforward, after adding the requirement that the
> internal pubkey of [S1] is a musig2 of both players.
> After Bob reveals his move (say, Rock), Alice will only agree to continue
> the game off-chain if Bob pre-signs transactions for the state [S1] (where
> m_B = Paper, and m_B = Scissors) that send all the money to Alice.
> This guarantees that a cheating Bob is punished.
>
> Best,
> Salvatore Ingala
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Merkleize All The Things

2023-04-29 Thread Johan Torås Halseth via bitcoin-dev
Hi, Salvatore.

I find this proposal very interesting. Especially since you seemingly
can achieve such powerful capabilities by such simple opcodes.

I'm still trying to grok what this would look like on-chain (forget
about the off-chain part for now), if we were to play out such a
computation.

Let's say you have a simple game like "one player tic-tac-toe" with
only two tiles: [ _ | _ ]. The player wins if he can get two in a row
(pretty easy game tbh).

Could you give a complete example how you would encode one such state
transition (going from [ X, _ ] -> [ X, X ] for instance) in Bitcoin
script?

Feel free to choose a different game or program if you prefer :)
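
To make the question a bit more concrete, this is roughly the shape I
would imagine the committed data to take (a purely illustrative Python
sketch; on-chain the script would presumably work on hashes via the
embedded contract data):

    import hashlib

    EMPTY, X = b"_", b"X"

    def state_commitment(tiles: tuple) -> bytes:
        # The contract UTXO would embed (a hash of) the current board.
        return hashlib.sha256(b"".join(tiles)).digest()

    def is_legal_move(before: tuple, after: tuple) -> bool:
        # Exactly one empty tile becomes an X; everything else unchanged.
        changed = [(a, b) for a, b in zip(before, after) if a != b]
        return len(changed) == 1 and changed[0] == (EMPTY, X)

    before, after = (X, EMPTY), (X, X)
    assert is_legal_move(before, after)
    # A spend would reveal `before`/`after` in the witness, check that
    # `before` hashes to the input's embedded commitment, that the move
    # is legal, and that the output embeds state_commitment(after).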

Thanks!
Johan



On Tue, Dec 13, 2022 at 2:08 PM Billy Tetrud via bitcoin-dev
 wrote:
>
> Re Verkle trees, that's a very interesting construction that would be super 
> useful as a tool for something like Utreexo. A potentially substantial 
> downside is that it seems the cryptography used to get those nice properties 
> of Verkle trees isn't quantum safe. While a lot of things in Bitcoin seems to 
> be going down the path of quantum-unsafe (I'm looking at you, taproot), there 
> are still a lot of people who think quantum safety is important in a lot of 
> contexts.
>
> On Thu, Dec 1, 2022 at 5:52 AM Salvatore Ingala via bitcoin-dev 
>  wrote:
>>
>> Hello Rijndael,
>>
>>
>>
>> On Wed, 30 Nov 2022 at 23:09, Rijndael  wrote:
>>>
>>> Hello Salvatore,
>>>
>>> I found my answer re-reading your original post:
>>> > During the arbitration phase (say at the i-th leaf node of M_T), any 
>>> > party can win the challenge by providing correct values for tr_i = (st_i, 
>>> > op_i, st_{i + 1}). Crucially, only one party is able to provide correct 
>>> > values, and Script can verify that indeed the state moves from st_i to 
>>> > st_{i + 1} by executing op_i. The challenge is over.
>>
>> You are correct, the computation step encoded in a leaf needs to be simple 
>> enough for Script to verify it.
>>
>> For the academic purpose of proving completeness (that is, any computation 
>> can be successfully "proved" by the availability of the corresponding fraud 
>> proof), one can imagine reducing the computation all the way down to a 
>> circuit, where each step (leaf) is as simple as what can be checked with 
>> {OP_NOT, OP_BOOLAND, OP_BOOLOR, OP_EQUAL}.
>>
>> In practice, you would want to utilize Script to its fullest, so for example 
>> you wouldn't compile a SHA256 computation to something else – you'd rather 
>> use OP_SHA256 directly.
>>
>>>
>>> That raises leads to a different question: Alice initially posts a 
>>> commitment to an execution trace of `f(x) = y`, `x`, and `y`. Bob Disagrees 
>>> with `y` so starts the challenge protocol. Is there a commitment to `f`? In 
>>> other words, the dispute protocol (as I read it) finds the leftmost step in 
>>> Alice and Bob's execution traces that differ, and then rewards the coins to 
>>> the participant who's "after-value" is computed by the step's operation 
>>> applied to the "before value". But if the participants each present valid 
>>> steps but with different operations, who wins? In other words, Alice could 
>>> present [64, DECREMENT, 63] and Bob could present [64, INCREMENT, 65]. 
>>> Those steps don't match, but both are valid. Is there something to ensure 
>>> that before the challenge protocol starts, that the execution trace that 
>>> Alice posts is for the right computation and not a different computation 
>>> that yields a favorable result for her (and for which she can generate a 
>>> valid merkle tree)?
>>
>>
>> The function f is already hard-coded in the contract itself, by means of the 
>> tree of scripts − that already commits to the possible futures. Therefore, 
>> once you are at state S14, you know that you are verifying the 6th step of 
>> the computation; and the operation in the 6th step of the computation 
>> depends solely on f, not its inputs. In fact, you made me realize that I 
>> could drop op_i from the i-th leaf commitment, and just embed the 
>> information in the Script of that corresponding state.
>>
>> Note that the states S0 to S14 of the 256x game are not _all_ the possible 
>> states, but only the ones that occurred in that execution of the contract 
>> (corresponding to a path from the root to the leaf of the Merkle tree of the 
>> computation trace), and therefore the ones that materialized in a UTXO. 
>> Different choices made by the parties (by providing different data, and 
>> therefore choosing different branches) would lead to a different leaf, and 
>> therefore to different (but in a certain sense "symmetric") states.
>>
>> 
>>
>> Since we are talking about the fact that f is committed to in the contract, 
>> I'll take the chance to extend on this a bit with a fun construction on top.
>> It is well-known in the academic literature of state channels that you can 
>> create contracts where even the function ("program", or "contract") is not 
>> decided when

Re: [bitcoin-dev] [Lightning-dev] Taro: A Taproot Asset Representation Overlay

2022-11-07 Thread Johan Torås Halseth via bitcoin-dev
Hi Laolu,

Yeah, that is definitely the main downside, as Ruben also mentioned:
tokens are "burned" if they get sent to an already spent UTXO, and
there is no way to block those transfers.

And I do agree with your concern about losing the blockchain as the
main synchronization point, that seems indeed to be a prerequisite for
making the scheme safe in terms of re-orgs and asynchronicity.

I do think the scheme itself is sound though (maybe not off-chain, see
below): it prevents double spending and as long as the clients adhere
to the "rule" of not sending to a spent UTXO you'll be fine (if not
your tokens will be burned, the same way as if you don't satisfy the
Taro script when spending).

Thinking more about the examples you gave, I think you are right it
won't easily be compatible with LN channels though:
If you want to refill an existing channel with tokens, you need the
channel counterparties to start signing new commitments that include
spending the newly sent tokens. A problem arises, however, if the
channel is force-closed with a pre-existing commitment from before the
token transfer took place. Since this commitment will be spending the
funding UTXO, but not the new tokens, the tokens will be burned. And
that seems to be harder to deal with (Eltoo style channels could be an
avenue to explore, if one could override the broadcasted commitment).

Tl;dr: I think you're right, the scheme is not compatible with LN.

- Johan


On Sat, Nov 5, 2022 at 1:36 AM Olaoluwa Osuntokun  wrote:
>
> Hi Johan,
>
> I haven't really been able to find a precise technical explanation of the
> "utxo teleport" scheme, but after thinking about your example use cases a
> bit, I don't think the scheme is actually sound. Consider that the scheme
> attempts to target transmitting "ownership" to a UTXO. However, by the time
> that transaction hits the chain, the UTXO may no longer exist. At that
> point, what happens to the asset? Is it burned? Can you retry it again? Does
> it go back to the sender?
>
> As a concrete example, imagine I have a channel open, and give you an
> address to "teleport" some additional assets to it. You take that addr, then
> make a transaction to commit to the transfer. However, the block before you
> commit to the transfer, my channel closes for w/e reason. As a result, when
> the transaction committing to the UTXO (blinded or not), hits the chain, the
> UTXO no longer exists. Alternatively, imagine the things happen in the
> expected order, but then a re-org occurs, and my channel close is mined in a
> block before the transfer. Ultimately, as a normal Bitcoin transaction isn't
> used as a serialization point, the scheme seems to lack a necessary total
> ordering to ensure safety.
>
> If we look at Taro's state transition model in contrast, everything is fully
> bound to a single synchronization point: a normal Bitcoin transaction with
> inputs consumed and outputs created. All transfers, just like Bitcoin
> transactions, end up consuming assets from the set of inputs, and
> re-creating them with a different distribution with the set of outputs. As a
> result, Taro transfers inherit the same re-org safety traits as regular
> Bitcoin transactions. It also isn't possible to send to something that won't
> ultimately exist, as sends create new outputs just like Bitcoin
> transactions.
>
> Taro's state transition model also means anything you can do today with
> Bitcoin/LN also apply. As an example, it would be possible for you to
> withdrawn from your exchange into a Loop In address (on chain to off chain
> swap), and have everything work as expected, with you topping off your
> channel. Stuff like splicing, and other interactive transaction construction
> schemes (atomic swaps, MIMO swaps, on chain auctions, etc) also just work.
>
> Ignoring the ordering issue I mentioned above, I don't think this is a great
> model for anchoring assets in channels either. With Taro, when you make the
> channel, you know how many assets are committed since they're all committed
> to in the funding output when the channel is created. However, let's say we
> do teleporting instead: at which point would we recognize the new asset
> "deposits"? What if we close before a pending deposits confirms, how can one
> regain those funds? Once again you lose the serialization of events/actions
> the blockchain provides. I think you'd also run into similar issues when you
> start to think about how these would even be advertised on a hypothetical
> gossip network.
>
> I think one other drawback of the teleport model iiuc is that: it either
> requires an OP_RETURN, or additional out of band synchronization to complete
> the transfer. Since it needs to commit to w/e hash description of the
> teleport, it either needs to use an OP_RETURN (so the receiver can see the
> on chain action), or the sender needs to contact the receiver to initiate
> the resolution of the transfer (details committed to in a change addr or
> w/e).
>
> With Taro,

Re: [bitcoin-dev] [Lightning-dev] Taro: A Taproot Asset Representation Overlay

2022-11-03 Thread Johan Torås Halseth via bitcoin-dev
Hi,

I wanted to chime in on the "teleport" feature explained by Ruben, as I
think exploring something similar for Taro could be super useful in an LN
setting.

In today's Taro, to transfer tokens you have to spend a UTXO, and present a
proof showing that there are tokens committed to in the output you are
spending. Let's say this UTXO is 'utxo:0'.

In contrast, to spend teleported tokens, you would still spend utxo:0, but
you would only have to present a proof that _some txout_ on-chain has
committed tokens to utxo:0.
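
Schematically, the difference is only in which txout the proof points at.
A Python sketch with made-up field names, just to contrast the two:

    from dataclasses import dataclass

    @dataclass
    class OrdinaryTransferProof:
        spent_utxo: str         # "utxo:0", the output being spent
        inclusion_proof: bytes  # shows spent_utxo itself commits the tokens

    @dataclass
    class TeleportTransferProof:
        spent_utxo: str         # still "utxo:0"
        committing_txout: str   # some *other* confirmed txout
        inclusion_proof: bytes  # shows committing_txout committed tokens
                                # to utxo:0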

As Ruben points out, this makes it possible to send tokens to an already
spent TXO, essentially burning the tokens.

However, it opens up some exciting possibilities IMO. You can in essence
use this to "re-fill" UTXOs with tokens, which is very interesting for LN
channels:

- You could "add" tokens to your already open channels. The only thing
needed is for the channel participants to be presented with the proof that
tokens were sent to the funding output, and they can update their
commitment transaction to start spending these tokens.
- You can "top-up" all your channels in a single on-chain tx. Since a
single output can commit tokens to several UTXOs, you could with a single
on-chain transaction add tokens to many channels without opening and
closing them.

RGB also has the ability to "blind" the UTXO that tokens get teleported to,
hiding the recipient UTXO. This is cool, since I could withdraw tokens from
an exchange directly into my LN channel, without revealing my channel UTXO.

I found the explanation of the teleport feature in this blog post pretty
good:
https://medium.com/@FedericoTenga/understanding-rgb-protocol-7dc7819d3059

- Johan

On Sun, Apr 10, 2022 at 6:52 PM Ruben Somsen  wrote:

> Hi Laolu,
>
> >happy to hear that someone was actually able to extract enough details
> from the RGB devs/docs to be able to analyze it properly
>
> Actually, even though I eventually puzzled everything together, this did
> not go well for me either. There is a ton of documentation, but it's a maze
> of unhelpful details, and none of it clearly maps out the fundamental
> design. I was also disappointed by the poor response I received when asking
> questions, and I ended up getting chastised for helping others understand
> it and pointing out potential flaws[1][2][3].Given my experience, I think
> the project is not in great shape, so the decision to rebuild from scratch
> seems right to me.
>
> That said, in my opinion the above should not factor into the decision of
> whether RGB should be credited in the Taro documentation. The design
> clearly precedes (and seems to have inspired) Taro, so in my opinion this
> should be acknowledged. Also, the people that are responsible for the
> current shape of RGB aren't the people who originated the idea, so it would
> not be fair to the originators either (Peter Todd, Alekos Filini, Giacomo
> Zucco).
>
> >assets can be burnt if a user doesn't supply a valid witness
>
> I am in agreement with what you said, but it is not clear to me whether we
> are on the same page. What I tried to say was that it does not make sense
> to build scripting support into Taro, because you can't actually do
> anything interesting with it due to this limitation. The only type of smart
> contract you can build is one where you limit what the owner (as defined by
> Bitcoin's script) can do with their own Taro tokens, or else he will burn
> them – not very useful. Anything involving a conditional transfer of
> ownership to either A or B (i.e. any meaningful type of script) won't work.
> Do you see what I mean, or should I elaborate further?
>
> >TAPLEAF_UPDATE_VERIFY can actually be used to further _bind_ Taro transitions
> at the Bitcoin level, without Bitcoin explicitly needing to be aware
>
> That is conceptually quite interesting. So theoretically you could get
> Bitcoin covenants to enforce certain spending conditions on Taro assets.
> Not sure how practical that ends up being, but intriguing to consider.
>
> >asset issuer to do a "re-genesis"
>
> Yes, RGB suggested the same thing, and this can work under some
> circumstances, but note that this won't help for tokens that aim to have a
> publicly audited supply, as the proof that a token was legitimately
> re-issued is the history of the previous token (so you'd actually be making
> things worse, as now everyone has to verify it). And of course the idea
> also requires the issuer to be active, which may not always be the case.
>
> >I'm not familiar with how the RGB "teleport" technique works [...] Can
> you point me to a coherent explanation of the technique
>
> To my knowledge no good explanation exists. "Teleporting" is just what I
> thought was a good way of describing it. Basically, in your design when
> Alice wants to send a Taro token to Bob, Alice has to spend her own output,
> make a new output for Bob, and make a change output for herself. Inside the
> Taro tree you'll then point to the index of Bob's output in order to

Re: [bitcoin-dev] Ephemeral Anchors: Fixing V3 Package RBF against package limit pinning

2022-10-27 Thread Johan Torås Halseth via bitcoin-dev
Hi, Greg.

I find this proposal super interesting, and IIUC something that seems
fairly "safe" to allow (assuming V3).

For LN having the commitment transaction pay a non-zero fee is a cause for
a lot of complexity in the channel state machine. Something like this would
remove a lot of edge cases and add more flexibility around adding HTLCs.

Thanks!

- Johan

On Thu, Oct 20, 2022 at 3:42 PM Greg Sanders via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> So it doesn't look like I'm ignoring a good question:
>
> No solid noninteractive ideas, unless we get some very flexible sighash
> softfork. Interactively, I think you can get collaborative fee bumps under
> the current consensus regime and ephemeral anchors. The child will just be
> built with inputs from different people.
>
> On Wed, Oct 19, 2022 at 11:12 AM James O'Beirne 
> wrote:
>
>> I'm also very happy to see this proposal, since it gets us closer to
>> having a mechanism that allows the contribution to feerate in an
>> "unauthenticated" way, which seems to be a very helpful feature for vaults
>> and other contracting protocols.
>>
>> One possible advantage of the sponsors interface -- and I'm curious for
>> your input here Greg -- is that with sponsors, assuming we relaxed the "one
>> sponsor per sponsoree" constraint, multiple uncoordinated parties can
>> collaboratively bump a tx's feerate. A simple example would be a batch
>> withdrawal from an exchange could be created with a low feerate, and then
>> multiple users with a vested interest of expedited confirmation could all
>> "chip in" to raise the feerate with multiple sponsor transactions.
>>
>> Having a single ephemeral output seems to create a situation where a
>> single UTXO has to shoulder the burden of CPFPing a package. Is there some
>> way we could (possibly later) amend the ephemeral anchor interface to allow
>> for this kind of collaborative sponsoring? Could you maybe see "chained"
>> ephemeral anchors that would allow this?
>>
>>
>> On Tue, Oct 18, 2022 at 12:52 PM Jeremy Rubin via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Excellent proposal and I agree it does capture much of the spirit of
>>> sponsors w.r.t. how they might be used for V3 protocols.
>>>
>>> The only drawbacks I see is they don't work for lower tx version
>>> contracts, so there's still something to be desired there, and that the
>>> requirement to sweep the output must be incentive compatible for the miner,
>>> or else they won't enforce it (pass the buck onto the future bitcoiners).
>>> The Ephemeral UTXO concept can be a consensus rule (see
>>> https://rubin.io/public/pdfs/multi-txn-contracts.pdf "Intermediate
>>> UTXO") we add later on in lieu of managing them by incentive, so maybe it's
>>> a cleanup one can punt.
>>>
>>> One question I have is if V3 is designed for lightning, and this is
>>> designed for lightning, is there any sense in requiring these outputs for
>>> v3? That might help with e.g. anonymity set, as well as potentially keep
>>> the v3 surface smaller.
>>>
>>> On Tue, Oct 18, 2022 at 11:51 AM Greg Sanders via bitcoin-dev <
>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
 > does that effectively mark output B as unspendable once the child
 gets confirmed?

 Not at all. It's a normal spend like before, since the parent has been
 confirmed. It's completely unrestricted, not being bound to any
 V3/ephemeral anchor restrictions on size, version, etc.

 On Tue, Oct 18, 2022 at 11:47 AM Arik Sosman via bitcoin-dev <
 bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Greg,
>
> Thank you very much for sharing your proposal!
>
> I think there's one thing about the second part of your proposal that
> I'm missing. Specifically, assuming the scenario of a v3 transaction with
> three outputs, A, B, and the ephemeral anchor OP_TRUE. If a child
> transaction spends A and OP_TRUE, does that effectively mark output B as
> unspendable once the child gets confirmed? If so, isn't the implication
> therefore that to safely spend a transaction with an ephemeral anchor, all
> outputs must be spent? Thanks!
>
> Best,
> Arik
>
> On Tue, Oct 18, 2022, at 6:52 AM, Greg Sanders via bitcoin-dev wrote:
>
> Hello Everyone,
>
> Following up on the "V3 Transaction" discussion here
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-September/020937.html
> , I would like to elaborate a bit further on some potential follow-on work
> that would make pinning severely constrained in many setups.
>
> V3 transactions may solve bip125 rule#3 and rule#5 pinning attacks
> under some constraints[0]. This means that when a replacement is to be 
> made
> and propagated, it costs the expected amount of fees to do so. This is a
> great start. What's left in this subset of pinning is *package limit*
> pinning. In other wor

Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-11-02 Thread Johan Torås Halseth via bitcoin-dev
On Mon, Oct 28, 2019 at 6:16 PM David A. Harding  wrote:

> A parent transaction near the limit of 100,000 vbytes could have almost
> 10,000 outputs paying OP_TRUE (10 vbytes per output).  If the children
> were limited to 10,000 vbytes each (the current max carve-out size),
> that allows relaying 100 mega-vbytes or nearly 400 MB data size (larger
> than the default maximum mempool size in Bitcoin Core).
>

Thanks, Dave, I wasn't aware the limits would allow this many outputs. And
as your calculation shows, this opens up the potential for free relay of
large amounts of data.
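
For reference, a quick back-of-the-envelope check of those numbers
(Python, using the figures from Dave's mail):

    parent_vbytes = 100_000       # near the standardness limit for the parent
    op_true_output_vbytes = 10    # approximate size per OP_TRUE output
    max_child_vbytes = 10_000     # current carve-out limit per child

    children = parent_vbytes // op_true_output_vbytes  # ~10,000 children
    total_vbytes = children * max_child_vbytes         # 100,000,000 vbytes
    max_bytes = total_vbytes * 4                       # up to ~400 MB if witness-heavy
    print(children, total_vbytes, max_bytes)           # 10000 100000000 400000000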

We could start special casing to only allow this for "LN commitment-like"
transactions, but this would be application specific changes, and your
calculation shows that even with the BOLT2 numbers there still exists cases
with a large number of children.

We are moving forward with adding a 1 block delay to all outputs to utilize
the current carve-out rule, and the changes aren't that bad. See Joost's
post in "[PATCH] First draft of option_simplfied_commitment"

- Johan
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-28 Thread Johan Torås Halseth via bitcoin-dev
>
>
> I don’t see how? Let’s imagine Party A has two spendable outputs, now
> they stuff the package size on one of their spendable outputs until it is
> right at the limit, add one more on their other output (to meet the
> Carve-Out), and now Party B can’t do anything.

Matt: With the proposed change, party B would always be able to add a child
to its output, regardless of what games party A is playing.


Thanks for the explanation, Jeremy!


> In terms of relay cost, if an ancestor can be replaced, it will invalidate
> all it's children, meaning that no one paid for that broadcasting. This can
> be fixed by appropriately assessing Replace By Fee update fees to
> encapsulate all descendants, but there are some tricky edge cases that make
> this non-obvious to do.


Relay cost is the obvious problem with just naively removing all limits.
Relaxing the current rules by allowing to add a child to each output as
long as it has a single unconfirmed parent would still only allow free
relay of O(size of parent) extra data (which might not be that bad? Similar
to the carve-out rule we could put limits on the child size). This would be
enough for the current LN use case (increasing fee of commitment tx), but
not for OP_SECURETHEBAG I guess, as you need the tree of children, as you
mention.

I imagine walking the mempool wouldn't change much, as you would only have
one extra child per output. But here I'm just speculating, as I don't know
the code well enough to know what the diff would look like.


> OP_SECURETHEBAG can help with the LN issue by putting all HTLCS into a
> tree where they are individualized leaf nodes with a preceding CSV. Then,
> the above fix would ensure each HTLC always has time to close properly as
> they would have individualized lockpoints. This is desirable for some
> additional reasons and not for others, but it should "work".


This is interesting for an LN commitment! You could really hide every
output of the commitment within OP_STB, which could either allow bypassing
the fee-pinning attack entirely (if the output cannot be spent unconfirmed)
or adding fees to the commitment using SIGHASH_SINGLE|ANYONECANPAY.

- Johan

On Sun, Oct 27, 2019 at 8:13 PM Jeremy  wrote:

> Johan,
>
> The issues with mempool limits for OP_SECURETHEBAG are related, but have
> distinct solutions.
>
> There are two main categories of mempool issues at stake. One is relay
> cost, the other is mempool walking.
>
> In terms of relay cost, if an ancestor can be replaced, it will invalidate
> all it's children, meaning that no one paid for that broadcasting. This can
> be fixed by appropriately assessing Replace By Fee update fees to
> encapsulate all descendants, but there are some tricky edge cases that make
> this non-obvious to do.
>
> The other issue is walking the mempool -- many of the algorithms we use in
> the mempool can be N log N or N^2 in the number of descendants. (simple
> example: an input chain of length N to a fan out of N outputs that are all
> spent, is O(N^2) to look up ancestors per-child, unless we're caching).
>
> The other sort of walking issue is where the indegree or outdegree for a
> transaction is high. Then when we are computing descendants or ancestors we
> will need to visit it multiple times. To avoid re-expanding a node, we
> currently cache it with a set. This uses O(N) extra memory and makes O(N
> Log N) (we use std::set not unordered_set) comparisons.
>
> I just opened a PR which should help with some of the walking issues by
> allowing us to cheaply cache which nodes we've visited on a run. It makes a
> lot of previously O(N log N) stuff O(N) and doesn't allocate as much new
> memory. See: https://github.com/bitcoin/bitcoin/pull/17268.
>
>
> Now, for OP_SECURETHEBAG we want a particular property that is very
> different from with lightning htlcs (as is). We want that an unlimited
> number of child OP_SECURETHEBAG txns may extend from a confirmed
> OP_SECURETHEBAG, and then at the leaf nodes, we want the same rule as
> lightning (one dangling unconfirmed to permit channels).
>
> OP_SECURETHEBAG can help with the LN issue by putting all HTLCS into a
> tree where they are individualized leaf nodes with a preceding CSV. Then,
> the above fix would ensure each HTLC always has time to close properly as
> they would have individualized lockpoints. This is desirable for some
> additional reasons and not for others, but it should "work".
>
>
>
> --
> @JeremyRubin 
> 
>
>
> On Fri, Oct 25, 2019 at 10:31 AM Matt Corallo 
> wrote:
>
>> I don’t see how? Let’s imagine Party A has two spendable outputs, now
>> they stuff the package size on one of their spendable outputs until it is
>> right at the limit, add one more on their other output (to meet the
>> Carve-Out), and now Party B can’t do anything.
>>
>> On Oct 24, 2019, at 21:05, Johan Torås Halseth  wrote:
>>
>> 
>> It essentially changes the rule to always allow CPFP-ing the

Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-28 Thread Johan Torås Halseth via bitcoin-dev
It essentially changes the rule to always allow CPFP-ing the commitment as
long as there is an output available without any descendants. It changes
the commitment from "you always need at least, and exactly, one non-CSV
output per party" to "you always need at least one non-CSV output per
party".

I realize these limits are there for a reason though, but I'm wondering if
we could relax them, also now that jeremyrubin has expressed problems with the
current mempool limits.

On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo 
wrote:

> I may be missing something, but I'm not sure how this changes anything?
>
> If you have a commitment transaction, you always need at least, and
> exactly, one non-CSV output per party. The fact that there is a size
> limitation on the transaction that spends for carve-out purposes only
> effects how many other inputs/outputs you can add, but somehow I doubt
> its ever going to be a large enough number to matter.
>
> Matt
>
> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
> > Reviving this old thread now that the recently released RC for bitcoind
> > 0.19 includes the above mentioned carve-out rule.
> >
> > In an attempt to pave the way for more robust CPFP of on-chain contracts
> > (Lightning commitment transactions), the carve-out rule was added in
> > https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
> > an implementation of a new commitment format for utilizing the Bring
> > Your Own Fees strategy using CPFP, I’m wondering if the special case
> > rule should have been relaxed a bit, to avoid the need for adding a 1
> > CSV to all outputs (in case of Lightning this means HTLC scripts would
> > need to be changed to add the CSV delay).
> >
> > Instead, what about letting the rule be
> >
> > The last transaction which is added to a package of dependent
> > transactions in the mempool must:
> >   * Have no more than one unconfirmed parent.
> >
> > This would of course allow adding a large transaction to each output of
> > the unconfirmed parent, which in effect would allow an attacker to
> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
> > this a problem with the current mempool acceptance code in bitcoind? I
> > would imagine evicting transactions based on feerate when the max
> > mempool size is met handles this, but I’m asking since it seems like
> > there have been several changes to the acceptance code and eviction
> > policy since the limit was first introduced.
> >
> > - Johan
> >
> >
> > On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell  > > wrote:
> >
> > Matt Corallo  > > writes:
> > >>> Thus, even if you imagine a steady-state mempool growth, unless
> the
> > >>> "near the top of the mempool" criteria is "near the top of the
> next
> > >>> block" (which is obviously *not* incentive-compatible)
> > >>
> > >> I was defining "top of mempool" as "in the first 4 MSipa", ie.
> next
> > >> block, and assumed you'd only allow RBF if the old package wasn't
> > in the
> > >> top and the replacement would be.  That seems incentive
> > compatible; more
> > >> than the current scheme?
> > >
> > > My point was, because of block time variance, even that criteria
> > doesn't hold up. If you assume a steady flow of new transactions and
> > one or two blocks come in "late", suddenly "top 4MWeight" isn't
> > likely to get confirmed until a few blocks come in "early". Given
> > block variance within a 12 block window, this is a relatively likely
> > scenario.
> >
> > [ Digging through old mail. ]
> >
> > Doesn't really matter.  Lightning close algorithm would be:
> >
> > 1.  Give bitcoind unilateral close.
> > 2.  Ask bitcoind what current expedited fee is (or survey your
> mempool).
> > 3.  Give bitcoind child "push" tx at that total feerate.
> > 4.  If next block doesn't contain unilateral close tx, goto 2.
> >
> > In this case, if you allow a simplified RBF where 'you can replace if
> > 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3.
> > old tx isnt',
> > it works.
> >
> > It allows someone 100k of free tx spam, sure.  But it's simple.
> >
> > We could further restrict it by marking the unilateral close somehow
> to
> > say "gonna be pushed" and further limiting the child tx weight (say,
> > 5kSipa?) in that case.
> >
> > Cheers,
> > Rusty.
> > ___
> > Lightning-dev mailing list
> > lightning-...@lists.linuxfoundation.org
> > 
> > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
> >
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-24 Thread Johan Torås Halseth via bitcoin-dev
Reviving this old thread now that the recently released RC for bitcoind
0.19 includes the above mentioned carve-out rule.

In an attempt to pave the way for more robust CPFP of on-chain contracts
(Lightning commitment transactions), the carve-out rule was added in
https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on an
implementation of a new commitment format for utilizing the Bring Your Own
Fees strategy using CPFP, I’m wondering if the special case rule should
have been relaxed a bit, to avoid the need for adding a 1 CSV to all
outputs (in case of Lightning this means HTLC scripts would need to be
changed to add the CSV delay).

Instead, what about letting the rule be

The last transaction which is added to a package of dependent
transactions in the mempool must:
  * Have no more than one unconfirmed parent.
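
To pin the proposed rule down, here is a hedged sketch of the check
(Python, against a toy stand-in for the mempool; this does not mirror any
actual bitcoind code):

    from dataclasses import dataclass

    @dataclass
    class TxIn:
        prev_txid: str

    @dataclass
    class Tx:
        txid: str
        inputs: list

    def exempt_from_package_limits(tx: Tx, mempool_txids: set) -> bool:
        # The transaction may bypass the usual package count/size limits
        # iff at most one distinct input spends a transaction that is
        # still unconfirmed (i.e. still in the mempool).
        unconfirmed_parents = {i.prev_txid for i in tx.inputs
                               if i.prev_txid in mempool_txids}
        return len(unconfirmed_parents) <= 1

    # A child spending one mempool parent plus confirmed coins qualifies:
    child = Tx("child", [TxIn("unconfirmed-parent"), TxIn("confirmed-coin")])
    assert exempt_from_package_limits(child, {"unconfirmed-parent"})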

This would of course allow adding a large transaction to each output of the
unconfirmed parent, which in effect would allow an attacker to exceed the
MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is this a problem
with the current mempool acceptance code in bitcoind? I would imagine
evicting transactions based on feerate when the max mempool size is met
handles this, but I’m asking since it seems like there have been several
changes to the acceptance code and eviction policy since the limit was
first introduced.

- Johan


On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell  wrote:

> Matt Corallo  writes:
> >>> Thus, even if you imagine a steady-state mempool growth, unless the
> >>> "near the top of the mempool" criteria is "near the top of the next
> >>> block" (which is obviously *not* incentive-compatible)
> >>
> >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
> >> block, and assumed you'd only allow RBF if the old package wasn't in the
> >> top and the replacement would be.  That seems incentive compatible; more
> >> than the current scheme?
> >
> > My point was, because of block time variance, even that criteria doesn't
> hold up. If you assume a steady flow of new transactions and one or two
> blocks come in "late", suddenly "top 4MWeight" isn't likely to get
> confirmed until a few blocks come in "early". Given block variance within a
> 12 block window, this is a relatively likely scenario.
>
> [ Digging through old mail. ]
>
> Doesn't really matter.  Lightning close algorithm would be:
>
> 1.  Give bitcoind unilateral close.
> 2.  Ask bitcoind what current expedited fee is (or survey your mempool).
> 3.  Give bitcoind child "push" tx at that total feerate.
> 4.  If next block doesn't contain unilateral close tx, goto 2.
>
> In this case, if you allow a simplified RBF where 'you can replace if
> 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3. old tx
> isnt',
> it works.
>
> It allows someone 100k of free tx spam, sure.  But it's simple.
>
> We could further restrict it by marking the unilateral close somehow to
> say "gonna be pushed" and further limiting the child tx weight (say,
> 5kSipa?) in that case.
>
> Cheers,
> Rusty.
> ___
> Lightning-dev mailing list
> lightning-...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-05-23 Thread Johan Torås Halseth via bitcoin-dev
Thanks, Jimpo!

This is very encouraging, I think. I sorta assumed that separating the
elements into their own sub-filters would hurt the compression a lot more.
Can the compression ratio/false positive rate be tweaked with the
sub-filters in mind?

With the total size of the separated filters being no larger than the
combined filters, I see no benefit of combined filters? Committing to them
all in the headers would also save space, and we could ensure nodes are
serving all sub-filters.
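
One way to see why splitting costs so little: a Golomb-Rice coded set with
false-positive rate 1/2^P costs roughly P + 2 bits per element, regardless
of how the elements are grouped, so for large filters only a small
per-filter overhead differs. A back-of-envelope sketch (Python; both the
per-element cost and the overhead constant are rough assumptions):

    def approx_gcs_bits(n_elements: int, p: int = 19, overhead: int = 64) -> int:
        # ~p bits of remainder plus ~2 bits of unary quotient per element,
        # plus a small per-filter constant (length prefix etc.).
        return overhead + n_elements * (p + 2)

    counts = {"scripts": 4_000, "outpoints": 5_000, "txids": 2_000}
    combined = approx_gcs_bits(sum(counts.values()))
    split = sum(approx_gcs_bits(n) for n in counts.values())
    print(split / combined)  # ~1.0005: splitting is nearly free for large blocks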

- Johan

On Wed, May 23, 2018 at 9:38 AM, Jim Posen  wrote:

> So I checked filter sizes (as a proportion of block size) for each of the
> sub-filters. The graph is attached.
>
> As interpretation, the first ~120,000 blocks are so small that the
> Golomb-Rice coding can't compress the filters that well, which is why the
> filter sizes are so high proportional to the block size. Except for the
> input filter, because the coinbase input is skipped, so many of them have 0
> elements. But after block 120,000 or so, the filter compression converges
> pretty quickly to near the optimal value. The encouraging thing here is
> that if you look at the ratio of the combined size of the separated filters
> vs the size of a filter containing all of them (currently known as the
> basic filter), they are pretty much the same size. The mean of the ratio
> between them after block 150,000 is 99.4%. So basically, not much
> compression efficiency is lost by separating the basic filter into
> sub-filters.
>
> On Tue, May 22, 2018 at 5:42 PM, Jim Posen  wrote:
>
>> My suggestion was to advertise a bitfield for each filter type the node
>>> serves,
>>> where the bitfield indicates what elements are part of the filters. This
>>> essentially
>>> removes the notion of decided filter types and instead leaves the
>>> decision to
>>> full-nodes.
>>>
>>
>> I think it makes more sense to construct entirely separate filters for
>> the different types of elements and allow clients to download only the ones
>> they care about. If there are enough elements per filter, the compression
>> ratio shouldn't be much worse by splitting them up. This prevents the
>> exponential blowup in the number of filters that you mention, Johan, and it
>> works nicely with service bits for advertising different filter types
>> independently.
>>
>> So if we created three separate filter types, one for output scripts, one
>> for input outpoints, and one for TXIDs, each signaled with a separate
>> service bit, are people good with that? Or do you think there shouldn't be
>> a TXID filter at all, Matt? I didn't include the option of a prev output
>> script filter or rolling that into the block output script filter because
>> it changes the security model (cannot be proven to be correct/incorrect
>> succinctly).
>>
>> Then there's the question of whether to separate or combine the headers.
>> I'd lean towards keeping them separate because it's simpler that way.
>>
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-05-22 Thread Johan Torås Halseth via bitcoin-dev
Maybe I didn't make it clear, but the distinction is that the current track
allocates
one service bit for each "filter type", where it has to be agreed upon up
front what
elements such a filter type contains.

My suggestion was to advertise a bitfield for each filter type the node
serves,
where the bitfield indicates what elements are part of the filters. This
essentially
removes the notion of decided filter types and instead leaves the decision
to
full-nodes.

This would require a "getcftypes" message, of course.
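
A sketch of the idea (Python; the bit assignments and helper are
hypothetical): each element type maps to a bit, a node advertises the
bitfields it serves via a getcftypes-like message, and the client picks
the cheapest advertised filter covering its needs:

    TXID, SCRIPT, OUTPOINT = 0b100, 0b010, 0b001

    def minimal_filter(served: set, needed: int):
        # Pick the served filter with the fewest element types whose
        # element set is a superset of what the client needs, if any.
        candidates = [f for f in served if f & needed == needed]
        return min(candidates, key=lambda f: bin(f).count("1"), default=None)

    served = {0b111, 0b100, 0b011}  # all three, txid-only, script+outpoint
    assert minimal_filter(served, SCRIPT | OUTPOINT) == 0b011
    assert minimal_filter(served, TXID | SCRIPT) == 0b111  # no exact 110 served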

- Johan


On Tue, May 22, 2018 at 3:16 AM, Olaoluwa Osuntokun 
wrote:

> > What if instead of trying to decide up front which subset of elements
> will
> > be most useful to include in the filters, and the size tradeoff, we let
> the
> > full-node decide which subsets of elements it serves filters for?
>
> This is already the case. The current "track" is to add new service bits
> (while we're in the uncommitted phase) to introduce new filter types. Light
> clients can then filter out nodes before even connecting to them.
>
> -- Laolu
>
> On Mon, May 21, 2018 at 1:35 AM Johan Torås Halseth 
> wrote:
>
>> Hi all,
>>
>> Most light wallets will want to download the minimum amount of data
>> required to operate, which means they would ideally download the smallest
>> possible filters containing the subset of elements they need.
>>
>> What if instead of trying to decide up front which subset of elements
>> will be most useful to include in the filters, and the size tradeoff, we
>> let the full-node decide which subsets of elements it serves filters for?
>>
>> For instance, a full node would advertise that it could serve filters for
>> the subsets 111 (txid+script+outpoint), 100 (txid only), 011 (script+outpoint)
>> (script+outpoint)
>> etc. A light client could then choose to download the minimal filter type
>> covering its needs.
>>
>> The obvious benefit of this would be minimal bandwidth usage for the
>> light client, but there are also some less obvious ones. We wouldn’t have
>> to decide up front what each filter type should contain, only the possible
>> elements a filter can contain (more can be added later without breaking
>> existing clients). This, I think, would let the most served filter types
>> grow organically, with full-node implementations coming with sane defaults
>> for served filter types (maybe even all possible types as long as the
>> number of elements is small), letting their operator add/remove types at
>> will.
>>
>> The main disadvantage of this as I see it, is that there’s an exponential
>> blowup in the number of possible filter types in the number of element
>> types. However, this would let us start out small with only the elements we
>> need, and in the worst case the node operators just choose to serve the
>> subsets corresponding to what now is called “regular” + “extended” filters
>> anyway, requiring no more resources.
>>
>> This would also give us some data on what is the most widely used filter
>> types, which could be useful in making the decision on what should be part
>> of filters to eventually commit to in blocks.
>>
>> - Johan
>> On Sat, May 19, 2018 at 5:12, Olaoluwa Osuntokun via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>> On Thu, May 17, 2018 at 2:44 PM Jim Posen via bitcoin-dev wrote:
>>> Monitoring inputs by scriptPubkey vs input-txid also has a massive
 advantage for parallel filtering: You can usually known your pubkeys
 well in advance, but if you have to change what you're watching block
 N+1 for based on the txids that paid you in N you can't filter them
 in parallel.

>>>
>>> Yes, I'll grant that this is a benefit of your suggestion.
>>>
>>
>> Yeah parallel filtering would be pretty nice. We've implemented a serial
>> filtering for btcwallet [1] for the use-case of rescanning after a seed
>> phrase import. Parallel filtering would help here, but also we don't yet
>> take advantage of batch querying for the filters themselves. This would
>> speed up the scanning by quite a bit.
>>
>> I really like the filtering model though, it really simplifies the code,
>> and we can leverage identical logic for btcd (which has RPCs to fetch the
>> filters) as well.
>>
>> [1]: https://github.com/Roasbeef/btcwallet/blob/master/chain/
>> neutrino.go#L180
>>
>> ___ bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.
>> org/mailman/listinfo/bitcoin-dev
>>
>>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-05-21 Thread Johan Torås Halseth via bitcoin-dev
Hi all,
Most light wallets will want to download the minimum amount of data required to 
operate, which means they would ideally download the smallest possible filters 
containing the subset of elements they need.
What if instead of trying to decide up front which subset of elements will be 
most useful to include in the filters, and the size tradeoff, we let the 
full-node decide which subsets of elements it serves filters for?

For instance, a full node would advertise that it could serve filters for the 
subsets 111 (txid+script+outpoint), 100 (txid only), 011 (script+outpoint) 
etc. A light client could then choose to download the minimal filter type 
covering its needs.
The obvious benefit of this would be minimal bandwidth usage for the light 
client, but there are also some less obvious ones. We wouldn’t have to decide 
up front what each filter type should contain, only the possible elements a 
filter can contain (more can be added later without breaking existing clients). 
This, I think, would let the most served filter types grow organically, with 
full-node implementations coming with sane defaults for served filter types 
(maybe even all possible types as long as the number of elements is small), 
letting their operator add/remove types at will.
The main disadvantage of this as I see it, is that there’s an exponential 
blowup in the number of possible filter types in the number of element types. 
However, this would let us start out small with only the elements we need, and 
in the worst case the node operators just choose to serve the subsets 
corresponding to what now is called “regular” + “extended” filters anyway, 
requiring no more resources.
This would also give us some data on what is the most widely used filter types, 
which could be useful in making the decision on what should be part of filters 
to eventually commit to in blocks.
- Johan

On Sat, May 19, 2018 at 5:12, Olaoluwa Osuntokun via bitcoin-dev
 wrote:
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev