Re: [bitcoin-dev] [Lightning-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-19 Thread Bastien TEINTURIER via bitcoin-dev
Thanks Antoine for your work on this issue.

I confirm that eclair v0.9.0 contains the mitigations described.

Eclair has been watching the mempool for preimages since its very early
versions (years ago), relying on Bitcoin Core's ZMQ notifications for
incoming transactions. I believe this guarantees that we see the HTLC
success transaction, even if it is immediately replaced (as long as we
don't overflow the ZMQ limits).
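For readers curious what that preimage extraction looks like, here is a minimal Python sketch (hypothetical code, not eclair's actual implementation) of the check applied to each transaction received over ZMQ:

```python
import hashlib

def find_preimages(witness_stacks, watched_payment_hashes):
    """Scan the witness data of an incoming transaction for HTLC preimages.

    Sketch only: any 32-byte witness element whose SHA256 matches a payment
    hash we are watching reveals the preimage, even if the transaction is
    replaced immediately afterwards (we already saw it via ZMQ `rawtx`).
    """
    found = {}
    for stack in witness_stacks:          # one witness stack per input
        for element in stack:
            if len(element) == 32:        # candidate preimage
                digest = hashlib.sha256(element).digest()
                if digest in watched_payment_hashes:
                    found[digest] = element
    return found
```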

I agree with Matt though that more fundamental work most likely needs to
happen at the bitcoin layer to allow L2 protocols to be more robust
against that class of attacks.

Thanks,
Bastien

Le mer. 18 oct. 2023 à 11:07, Antoine Riard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> a écrit :

> The disclosure mails noted a 3rd mitigation beyond mempool scanning and
> transaction re-signing / re-broadcasting, namely bumping CLTV delta.
>
> Generally bumping CLTV delta is a basic line of mitigations for a lot of
> lightning attacks, as it gives opportunity to node operators to intervene
> and re-broadcast their time-sensitive transactions on other interfaces (e.g
> a secondary full-node if the first one is eclipsed).
>
> About the second mitigation, transaction re-signing: if done correctly it at
> least sounds like it puts an economic cost (denominated in fees / feerates)
> on the attack. It is unclear to me whether the game-theory of this cost holds.
>
> One thing which sounds to me like it would make the attack harder is stratum
> v2 deployment, as it increases the number of miners who might build their own
> block templates, and therefore the number of miners' mempools in which an
> attacker has to continuously and successfully replace the channel
> counterparties' transactions in cycles.
>
> A replacement buffer or history of transactions at the mempool level might
> be a mitigation to this attack. I believe it remains to be seen whether it
> can be made robust enough.
>
> I don't know if folks like tadge or rusty, who have been involved in the
> early design of lightning, have more ideas for mitigations. Fees were noted
> as a hard issue in the original paper.
>
> Le mer. 18 oct. 2023 à 01:17, Matt Corallo  a
> écrit :
>
>> There appears to be some confusion about this issue and the mitigations.
>> To be clear, the deployed
>> mitigations are not expected to fix this issue; it's arguable whether they
>> provide anything more than a PR
>> statement.
>>
>> There are two discussed mitigations here - mempool scanning and
>> transaction re-signing/re-broadcasting.
>>
>> Mempool scanning relies on regularly checking the mempool of a local node
>> to see if we can catch the
>> replacement cycle mid-cycle. It only works if we see the first
>> transaction before the second
>> transaction replaces it.
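The race Matt describes can be sketched in a few lines of Python (a hypothetical `MempoolWatcher` helper, not taken from any implementation): the watcher only ever learns about transactions that are in the mempool at the instant it polls.

```python
class MempoolWatcher:
    """Sketch of the polling-based mitigation: remember every txid ever
    observed in a mempool snapshot. It only catches the HTLC-success
    transaction if a poll lands between its broadcast and its replacement."""

    def __init__(self):
        self.seen = set()

    def poll(self, snapshot_txids):
        """Record a mempool snapshot; return txids not seen before."""
        new = set(snapshot_txids) - self.seen
        self.seen |= new
        return new
```

If the attacker performs the replacement between two polls, the victim's transaction never appears in any snapshot and the watcher is blind, which is exactly the failure mode described above.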
>>
>> Today, a large majority of lightning nodes run on machines with a Bitcoin
>> node on the same IP
>> address, making it very clear what the "local node" of the lightning node
>> is. An attacker can
>> trivially use this information to connect to said local node and do the
>> replacement quickly,
>> preventing the victim from seeing the replacement.
>>
>> More generally, however, similar discoverability is true for mining
>> pools. An attacker performing
>> this attack is likely to do the replacement attack on a miner's node
>> directly, potentially reducing
>> the reach of the intermediate transaction to only miners, such that the
>> victim can never discover it
>> at all.
>>
>> The second mitigation is similarly pathetic. Re-signing and
>> re-broadcasting the victim's transaction
>> in an attempt to get it to miners even if it's been removed may work, if
>> the attacker is super lazy
>> and didn't finish writing their attack system. If the attacker is
>> connected to a large majority of
>> hashrate (which has historically been fairly doable), they can simply do
>> their replacement in a
>> cycle aggressively and arbitrarily reduce the probability that the
>> victim's transaction gets confirmed.
>>
>> Now, the above is all true in a spherical cow kinda world, and the P2P
>> network has plenty of slow
>> nodes and strange behavior. It's possible that these mitigations might, by
>> some stroke of luck,
>> happen to catch such an attack and prevent it, because something took
>> longer than the attacker
>> intended or whatever. But, that's a far cry from any kind of material
>> "fix" for the issue.
>>
>> Ultimately the only fix for this issue will be when miners keep a history
>> of transactions they've
>> seen and try them again after they may be able to enter the mempool
>> because of an attack like this.
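The proposed fix can be sketched as follows (a hypothetical design sketch in Python; no such feature exists in Bitcoin Core today): replaced transactions are kept in a history and periodically retried, so a replacement cycle has to be sustained forever rather than just until the victim's transaction is evicted.

```python
class ReplacedTxHistory:
    """Sketch of a miner-side replacement buffer (hypothetical design)."""

    def __init__(self):
        self.history = {}  # txid -> raw transaction

    def record_eviction(self, txid, rawtx):
        """Called when a transaction is removed from the mempool by RBF."""
        self.history[txid] = rawtx

    def retry_candidates(self, current_mempool_txids, confirmed_txids):
        """Transactions worth resubmitting: previously evicted, not yet
        confirmed, and not currently in the mempool (e.g. because the
        attacker's replacement was itself replaced away)."""
        return [
            tx for txid, tx in self.history.items()
            if txid not in current_mempool_txids and txid not in confirmed_txids
        ]
```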
>>
>> Matt
>>
>> On 10/16/23 12:57 PM, Antoine Riard wrote:
>> > (cross-posting: the mempool issues identified expose lightning channels
>> to loss-of-funds risks; other
>> > multi-party bitcoin apps might be affected)
>> >
>> > Hi,
>> >
>> > End of last year (December 2022), amid technical discussions on eltoo
>> payment channels and
>> > incentives compatibility of the mempool anti-DoS rules, a new
>> transaction-relay jamming 

Re: [bitcoin-dev] [Lightning-dev] Batch exchange withdrawal to lightning requires covenants

2023-10-19 Thread Bastien TEINTURIER via bitcoin-dev
Hi Antoine,

> If I'm correct, two users can cooperate maliciously against the batch
> withdrawal transactions by re-signing a CPFP from 2-of-2 and
> broadcasting the batch withdrawal as a higher-feerate package / high
> fee package and then evicting out the CPFP.

Yes, they can, and any user could also double-spend the batch using a
commit tx spending from the previous funding output. Participants must
expect that this may happen, that's what I mentioned previously that
you cannot use 0-conf on that splice transaction. But apart from that,
it acts as a regular splice: participants must watch for double-spends
(as discussed in the previous messages) while waiting for confirmations.

> If the batch withdrawal has been signed with 0-fee thanks to the
> nversion=3 policy exemption, it will be evicted out of the mempool.
> A variant of a replacement cycling attack.

I don't think this should use nVersion=3 and pay 0 fees. On the contrary
this is a "standard" transaction that should use a reasonable feerate
and nVersion=2, that's why I don't think this comment applies.

Cheers,
Bastien

Le mer. 18 oct. 2023 à 20:04, Antoine Riard  a
écrit :

> Hi Bastien,
>
> Thanks for the answer.
>
> If I understand correctly the protocol you're describing, you're aiming to
> enable batched withdrawals where a list of users are sent funds from
> an exchange directly in a list of channel funding outputs ("splice-out").
> Those channel funding outputs are 2-of-2, between two ordinary users or e.g.
> an ordinary user and an LSP.
>
> If I'm correct, two users can cooperate maliciously against the batch
> withdrawal transactions by re-signing a CPFP from 2-of-2 and broadcasting
> the batch withdrawal as a higher-feerate package / high fee package and
> then evicting out the CPFP.
>
> If the batch withdrawal has been signed with 0-fee thanks to the
> nversion=3 policy exemption, it will be evicted out of the mempool. A
> variant of a replacement cycling attack.
>
> I think this more or less matches the test I'm pointing you to, which is on
> non-deployed package acceptance code:
>
> https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf
>
> Please correct me if I'm wrong or missing assumptions. I agree with you on
> the assumptions that the exchange does not have an incentive to
> double-spend its own withdrawal transactions, and that if all the batched
> funding outputs are shared with an LSP, malicious collusion is less plausible.
>
> Best,
> Antoine
>
> Le mer. 18 oct. 2023 à 15:35, Bastien TEINTURIER  a
> écrit :
>
>> Hey Z-man, Antoine,
>>
>> Thank you for your feedback, responses inline.
>>
>> z-man:
>>
>> > Then if I participate in a batched splice, I can disrupt the batched
>> > splice by broadcasting the old state and somehow convincing miners to
>> > confirm it before the batched splice.
>>
>> Correct, I didn't mention it in my post but batched splices cannot use
>> 0-conf, the transaction must be confirmed to remove the risk of double
>> spends using commit txs associated with the previous funding tx.
>>
>> But interestingly, with the protocol I drafted, the LSP can finalize and
>> broadcast the batched splice transaction while users are offline. With a
>> bit of luck, when the users reconnect, that transaction will already be
>> confirmed so it will "feel 0-conf".
>>
>> Also, we need a mechanism like the one you describe when we detect that
>> a splice transaction has been double-spent. But this isn't specific to
>> batched transactions, 2-party splice transactions can also be double
>> spent by either participant. So we need that mechanism anyway? The spec
>> doesn't have a way of aborting a splice after exchanging signatures, but
>> you can always do it as an RBF operation (which actually just does a
>> completely different splice). This is what Greg mentioned in his answer.
>>
>> > part of the splice proposal is that while a channel is being spliced,
>> > it should not be spliced again, which your proposal seems to violate.
>>
>> The spec doesn't require that, I'm not sure what made you think that.
>> While a channel is being spliced, it can definitely be spliced again as
>> an RBF attempt (this is actually a very important feature), which double
>> spends the other unconfirmed splice attempts.
>>
>> ariard:
>>
>> > It is uncertain to me if secure fee-bumping, even with future
>> > mechanisms like package relay and nversion=3, is robust enough for
>> > multi-party transactions and covenant-enabled constructions under usual
>> > risk models.
>>
>> I'm not entirely sure why you're bringing this up in this context...
>> I agree that we most likely cannot use RBF on those batched transactions;
>> we will need to rely on CPFP and potentially package relay. But why is
>> it different from non-multi-party transactions here?
>>
>> > See test here:
>> >
>> https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf
>>
>> I'd argue that this is quite different from the standard replacement

Re: [bitcoin-dev] [Lightning-dev] Batch exchange withdrawal to lightning requires covenants

2023-10-18 Thread Bastien TEINTURIER via bitcoin-dev
Hey Z-man, Antoine,

Thank you for your feedback, responses inline.

z-man:

> Then if I participate in a batched splice, I can disrupt the batched
> splice by broadcasting the old state and somehow convincing miners to
> confirm it before the batched splice.

Correct, I didn't mention it in my post but batched splices cannot use
0-conf, the transaction must be confirmed to remove the risk of double
spends using commit txs associated with the previous funding tx.

But interestingly, with the protocol I drafted, the LSP can finalize and
broadcast the batched splice transaction while users are offline. With a
bit of luck, when the users reconnect, that transaction will already be
confirmed so it will "feel 0-conf".

Also, we need a mechanism like the one you describe when we detect that
a splice transaction has been double-spent. But this isn't specific to
batched transactions, 2-party splice transactions can also be double
spent by either participant. So we need that mechanism anyway? The spec
doesn't have a way of aborting a splice after exchanging signatures, but
you can always do it as an RBF operation (which actually just does a
completely different splice). This is what Greg mentioned in his answer.

> part of the splice proposal is that while a channel is being spliced,
> it should not be spliced again, which your proposal seems to violate.

The spec doesn't require that, I'm not sure what made you think that.
While a channel is being spliced, it can definitely be spliced again as
an RBF attempt (this is actually a very important feature), which double
spends the other unconfirmed splice attempts.

ariard:

> It is uncertain to me if secure fee-bumping, even with future
> mechanisms like package relay and nversion=3, is robust enough for
> multi-party transactions and covenant-enabled constructions under usual
> risk models.

I'm not entirely sure why you're bringing this up in this context...
I agree that we most likely cannot use RBF on those batched transactions;
we will need to rely on CPFP and potentially package relay. But why is
it different from non-multi-party transactions here?

> See test here:
>
https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf

I'd argue that this is quite different from the standard replacement
cycling attack, because in this protocol wallet users can only
unilaterally double-spend with a commit tx, on which they cannot set
the feerate. The only participant that can "easily" double-spend is
the exchange, and they wouldn't have an incentive to here: users are
only withdrawing funds, so there's no opportunity to steal funds?

Thanks,
Bastien

Le mar. 17 oct. 2023 à 21:10, Antoine Riard  a
écrit :

> Hi Bastien,
>
> > The naive way of enabling lightning withdrawals is to make the user
> > provide a lightning invoice that the exchange pays over lightning. The
> > issue is that in most cases, this simply shifts the burden of making an
> > on-chain transaction to the user's wallet provider: if the user doesn't
> > have enough inbound liquidity (which is likely), a splice transaction
> > will be necessary. If N users withdraw funds from an exchange, we most
> > likely will end up with N separate splice transactions.
>
> It is uncertain to me if secure fee-bumping, even with future mechanisms
> like package relay and nversion=3, is robust enough for multi-party
> transactions and covenant-enabled constructions under usual risk models.
>
> See test here:
>
> https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf
>
> Expert eyes from folks who understand both lightning and core
> mempool would be appreciated on this.
> There was a lot of back and forth on the nversion=3 design rules, though the
> test is built on glozow's top commit of 3 Oct 2023.
>
> Best,
> Antoine
>
> Le mar. 17 oct. 2023 à 14:03, Bastien TEINTURIER  a
> écrit :
>
>> Good morning list,
>>
>> I've been trying to design a protocol to let users withdraw funds from
>> exchanges directly into their lightning wallet in an efficient way
>> (with the smallest on-chain footprint possible).
>>
>> I've come to the conclusion that this is only possible with some form of
>> covenants (e.g. `SIGHASH_ANYPREVOUT` would work fine in this case). The
>> goal of this post is to explain why, and add this usecase to the list of
>> useful things we could do if we had covenants (insert "wen APO?" meme).
>>
>> The naive way of enabling lightning withdrawals is to make the user
>> provide a lightning invoice that the exchange pays over lightning. The
>> issue is that in most cases, this simply shifts the burden of making an
>> on-chain transaction to the user's wallet provider: if the user doesn't
>> have enough inbound liquidity (which is likely), a splice transaction
>> will be necessary. If N users withdraw funds from an exchange, we most
>> likely will end up with N separate splice transactions.
>>
>> Hence the idea of batching those into a single transaction. Since we
>> don't 

[bitcoin-dev] Batch exchange withdrawal to lightning requires covenants

2023-10-17 Thread Bastien TEINTURIER via bitcoin-dev
Good morning list,

I've been trying to design a protocol to let users withdraw funds from
exchanges directly into their lightning wallet in an efficient way
(with the smallest on-chain footprint possible).

I've come to the conclusion that this is only possible with some form of
covenants (e.g. `SIGHASH_ANYPREVOUT` would work fine in this case). The
goal of this post is to explain why, and add this usecase to the list of
useful things we could do if we had covenants (insert "wen APO?" meme).

The naive way of enabling lightning withdrawals is to make the user
provide a lightning invoice that the exchange pays over lightning. The
issue is that in most cases, this simply shifts the burden of making an
on-chain transaction to the user's wallet provider: if the user doesn't
have enough inbound liquidity (which is likely), a splice transaction
will be necessary. If N users withdraw funds from an exchange, we most
likely will end up with N separate splice transactions.

Hence the idea of batching those into a single transaction. Since we
don't want to introduce any intermediate transaction, we must be able
to create one transaction that splices multiple channels at once. The
issue is that for each of these channels, we need a signature from the
corresponding wallet user, because we're spending the current funding
output, which is a 2-of-2 multisig between the wallet user and the
wallet provider. So we run into the usual availability problem: we need
signatures from N users who may not be online at the same time, and if
one of those users never comes online or doesn't complete the protocol,
we must discard the whole batch.

There is a workaround though: each wallet user can provide a signature
using `SIGHASH_SINGLE | SIGHASH_ANYONECANPAY` that spends their current
funding output to create a new funding output with the expected amount.
This lets users sign *before* knowing the final transaction, which the
exchange can create by batching pairs of inputs/outputs. But this has
a fatal issue: at that point the wallet user has no way of spending the
new funding output (since it is also a 2-of-2 between the wallet user
and the wallet provider). The wallet provider can now blackmail the user
and force them to pay to get their funds back.
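The batching step described above can be sketched in Python (hypothetical helper names; the point is the index alignment that SIGHASH_SINGLE imposes):

```python
def build_batch(signed_pairs):
    """Concatenate pre-signed (input, output) pairs into one transaction.

    With SIGHASH_SINGLE | SIGHASH_ANYONECANPAY, each user's signature commits
    only to their own input and to the output at the SAME index, so the
    exchange can batch pairs from many users as long as it preserves index
    alignment between inputs and outputs.
    """
    vin, vout = [], []
    for user_input, user_output in signed_pairs:
        vin.append(user_input)    # input at index i ...
        vout.append(user_output)  # ... is bound to the output at index i
    return {"vin": vin, "vout": vout}
```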

Lightning normally fixes this by exchanging signatures for a commitment
transaction that sends the funds back to their owners *before* signing
the parent funding/splice transaction. But here that is impossible,
because we don't know yet the `txid` of the batch transaction (that's
the whole point, we want to be able to sign before creating the batch)
so we don't know the new `prevout` we should spend from. I couldn't find
a clever way to work around that, and I don't think there is one (but
I would be happy to be wrong).

With `SIGHASH_ANYPREVOUT`, this is immediately fixed: we can exchange
anyprevout signatures for the commitment transaction, and they will be
valid to spend from the batch transaction. We are safe from signature
reuse, because funding keys are rotated at each splice so we will never
create another output that uses the same 2-of-2 script.
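A toy model of why this works (illustrative Python only; this is NOT the real BIP 118 digest algorithm): because the signature message omits the prevout, a commitment signature made before the batch exists still binds once the batch confirms, and rotating the funding keys makes the committed script unique to this splice.

```python
import hashlib

def apo_sighash(script_pubkey, outputs):
    """Toy SIGHASH_ANYPREVOUT-style message: commits to the spent script and
    the commitment outputs, but NOT to the prevout txid."""
    return hashlib.sha256(script_pubkey + b"|" + b",".join(outputs)).digest()

# A commitment-tx signature message built before the batch transaction exists
# (funding keys rotated at each splice -> fresh 2-of-2 script):
script = b"rotated-2of2-funding-script"
msg_before_batch = apo_sighash(script, [b"to_local", b"to_remote"])

# The same message is what validators check once the batch (any prevout)
# confirms, so the pre-signed commitment remains spendable:
msg_after_batch = apo_sighash(script, [b"to_local", b"to_remote"])
assert msg_before_batch == msg_after_batch
```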

I haven't looked at other forms of covenants, but most of them likely
address this problem as well.

Cheers,
Bastien
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] New transaction policies (nVersion=3) for contracting protocols

2022-09-30 Thread Bastien TEINTURIER via bitcoin-dev
>>>> Apologies for forgetting to contextualize, I've been sitting on this for
>>>> too long.
>>>>
>>>> > The other scenario it doesn't really fix is where
>>>> HTLC/commitment-like transactions are being resolved in a batch, but due to
>>>> relative time constraints, you may want to accelerate some and not others.
>>>> Now you must pay higher rates to replace all of the transaction bumps. This
>>>> is a "self-pin" and "get good at utxos noob" type problem, but it's
>>>> something that axing rule#3 in favor of a Replace-by-ancestor-feerate
>>>> system would get us.
>>>>
>>>> I understand you to mean "if you don't have enough UTXOs and you're
>>>> forced to batch-bump, you over-pay because you need to bring them all to
>>>> the highest target feerate." Isn't this kind of separate, wallet-related
>>>> problem? Contracting or not, surely every wallet needs to have enough UTXOs
>>>> to not batch transactions that shouldn't be batched... I don't see how a
>>>> replace-by-ancestor-feerate policy would make any difference for this?
>>>>
>>>> Also in general I'd like to reiterate that ancestor feerate is not a
>>>> panacea to all our RBF incentive compatibility concerns. Like individual
>>>> feerate, unless we run the mining algorithm, it cannot tell us exactly how
>>>> quickly this transaction would be mined.
>>>>
>>>> We're estimating the incentive compatibility of the original
>>>> transaction(s) and replacement transaction(s), with the goal of not letting
>>>> a transaction replace something that would have been more incentive
>>>> compatible to mine. As such, we don't want to overestimate how good the
>>>> replacement is, and we don't want to underestimate how good the original
>>>> transactions are. This rule "The minimum between package feerate and
>>>> ancestor feerate of the child is not lower than the individual feerates of
>>>> all directly conflicting transactions and the ancestor feerates of all
>>>> original transactions" is a conservative estimate.
>>>>
>>>> > Would kind of be nice if package RBF would detect a "sibling output
>>>> spend" conflict, and knock it out of the mempool via the other replacement
>>>> rules? Getting rid of the requirement to 1 block csv lock every output
>>>> would be quite nice from a smart contracting composability point of view.
>>>>
>>>> Interesting, so when a transaction hits a mempool tx's descendant
>>>> limit, we consider evicting one of its descendants in favor of this
>>>> transaction, based on the RBF rules.
>>>> Cool idea! After chewing on this for a bit, I think this *also* just
>>>> boils down to the fact that RBF should require replacements to be better
>>>> mining candidates. As in, if we added this policy and it can make us evict
>>>> the sibling and accept a transaction with a bunch of low-feerate ancestor
>>>> junk, it would be a new pinning vector.
>>>>
>>>> > If you're a miner and you receive a non-V3, second descendant of an
>>>> unconfirmed V3 transaction, if the offered fee is in the top mempool
>>>> backlog, I think you would have an interest to accept such a transaction.
>>>>
>>>> > So I'm not sure if those two rules are compatible with miners
>>>> incentives...
>>>>
>>>> The same argument can be made for the 26th descendant of a mempool
>>>> transaction; it's also not entirely incentive-compatible to reject it, but
>>>> that is not the *only* design goal in mempool policy. Of course, the
>>>> difference here is that the 25-descendant limit rule is a sensible DoS
>>>> protection, while this 1-descendant limit rule is more of a "help the
>>>> Bitcoin ecosystem" policy, just like CPFP carve-out, dust limit, etc. I can
>>>> of course understand why not everyone would be in favor of this, but I do
>>>> think it's worth it.
>>>>
>>>> > > 4. A V3 transaction that has an unconfirmed V3 ancestor cannot be
>>>>
>>>> > >larger than 1000 virtual bytes.
>>>>
>>>> > If I understand correctly the rationale for the 1000 vb upper bound, it
>>>> would be to constrain the pinning counterparty to attach a high fee to a
>>>> child due to the limited size, if the

Re: [bitcoin-dev] New transaction policies (nVersion=3) for contracting protocols

2022-09-29 Thread Bastien TEINTURIER via bitcoin-dev
 fingerprints leaking from unilateral LN
> closures such as HTLC/PTLC timelocks...
>
> > I agree with you, this isn't worse than today, unilateral closes will
> probably always be identifiable on-chain.
>
> Great to hear that there is no privacy worsening!
>
> Best,
> Gloria
>
> On Mon, Sep 26, 2022 at 5:02 PM Greg Sanders  wrote:
>
>> Bastien,
>>
>> > This may be already covered by the current package RBF logic, in that
>> scenario we are simply replacing [ParentTx, ChildTx1] with
>> [ParentTx, ChildTx2] that pays more fees, right?
>>
>> For clarification, package RBF is ParentTx*s* (plural) and
>> ChildTx (singular), so it might be a bit more complicated than we're
>> thinking, and currently the V3 proposal would first de-duplicate the
>> ParentTx based on what is in the mempool, then look at the "rest" of the
>> transactions as a package, then individually. Not the same, not sure how
>> different. I'll defer to experts.
>>
>> Best,
>> Greg
>>
>> On Mon, Sep 26, 2022 at 11:48 AM Bastien TEINTURIER via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Thanks Gloria for this great post.
>>>
>>> This is very valuable work for L2 contracts, and will greatly improve
>>> their security model.
>>>
>>> > "Only 1 anchor output? What if I need to bump counterparty's
>>> commitment tx in mempool?"
>>> > You won't need to fee-bump a counterparty's commitment tx using CPFP.
>>> > You would just package RBF it by attaching a high-feerate child to
>>> > your commitment tx.
>>>
>>> Note that we can also very easily make that single anchor spendable by
>>> both participants (or even anyone), so if you see your counterparty's
>>> commitment in your mempool, you can bump it without publishing your
>>> own commitment, which is quite desirable (your own commitment tx has
>>> CSV delays on your outputs, whereas your counterparty's commitment tx
>>> doesn't).
>>>
>>> > "Is this a privacy issue, i.e. doesn't it allow fingerprinting LN
>>> transactions based on nVersion?"
>>>
>>> I agree with you, this isn't worse than today, unilateral closes will
>>> probably always be identifiable on-chain.
>>>
>>> > Would kind of be nice if package RBF would detect a "sibling output
>>> spend"
>>> > conflict, and knock it out of the mempool via the other replacement
>>> rules?
>>> > Getting rid of the requirement to 1 block csv lock every output would
>>> be
>>> > quite nice from a smart contracting composability point of view.
>>>
>>> +1, that would be very neat!
>>>
>>> This may be already covered by the current package RBF logic, in that
>>> scenario we are simply replacing [ParentTx, ChildTx1] with
>>> [ParentTx, ChildTx2] that pays more fees, right?
>>>
>>> > 1) I do think that we should seriously consider allowing OP_TRUE to
>>> become
>>> > a standard script type as part of this policy update. If pinning is
>>> solved,
>>> > then there's no reason to require all those extra bytes for "binding"
>>> an
>>> > anchor to a specific wallet/user. We can save quite a few bytes by
>>> having
>>> > the input be empty of witness data.
>>> > 2) If we allow for a single dust-value(0 on up) output which is
>>> immediately
>>> > spent by the package, anchors become even easier to design. No
>>> value has
>>> > to be "sapped" from contract participants to make an anchor output.
>>> There's
>>> > more complications for this, such as making sure the parent
>>> transaction is
>>> > dropped if the child spend is dropped, but maybe it's worth the
>>> squeeze.
>>>
>>> I also think both of these could be quite useful. This would probably
>>> always
>>> be used in combination with a parent transaction that pays 0 fees, so the
>>> 0-value output would always be spent in the same block.
>>>
>>> But this means we could end up with 0-value outputs in the utxo set, if
>>> for
>>> some reason the parent tx is CPFP-ed via another output than the 0-value
>>> one,
>>> which would be a utxo set bloat issue. But I'd argue that we're probably
>>> already creating utxo set bloat with the 330 sat anchor outputs
>>> (especially
>>> since we use two of

Re: [bitcoin-dev] New transaction policies (nVersion=3) for contracting protocols

2022-09-26 Thread Bastien TEINTURIER via bitcoin-dev
Thanks Gloria for this great post.

This is very valuable work for L2 contracts, and will greatly improve
their security model.

> "Only 1 anchor output? What if I need to bump counterparty's commitment
tx in mempool?"
> You won't need to fee-bump a counterparty's commitment tx using CPFP.
> You would just package RBF it by attaching a high-feerate child to
> your commitment tx.

Note that we can also very easily make that single anchor spendable by
both participants (or even anyone), so if you see your counterparty's
commitment in your mempool, you can bump it without publishing your
own commitment, which is quite desirable (your own commitment tx has
CSV delays on your outputs, whereas your counterparty's commitment tx
doesn't).

> "Is this a privacy issue, i.e. doesn't it allow fingerprinting LN
transactions based on nVersion?"

I agree with you, this isn't worse than today, unilateral closes will
probably always be identifiable on-chain.

> Would kind of be nice if package RBF would detect a "sibling output spend"
> conflict, and knock it out of the mempool via the other replacement rules?
> Getting rid of the requirement to 1 block csv lock every output would be
> quite nice from a smart contracting composability point of view.

+1, that would be very neat!

This may be already covered by the current package RBF logic, in that
scenario we are simply replacing [ParentTx, ChildTx1] with
[ParentTx, ChildTx2] that pays more fees, right?

> 1) I do think that we should seriously consider allowing OP_TRUE to become
> a standard script type as part of this policy update. If pinning is
solved,
> then there's no reason to require all those extra bytes for "binding" an
> anchor to a specific wallet/user. We can save quite a few bytes by having
> the input be empty of witness data.
> 2) If we allow for a single dust-value(0 on up) output which is
immediately
> spent by the package, anchors become even easier to design. No value
has
> to be "sapped" from contract participants to make an anchor output.
There's
> more complications for this, such as making sure the parent transaction is
> dropped if the child spend is dropped, but maybe it's worth the squeeze.

I also think both of these could be quite useful. This would probably always
be used in combination with a parent transaction that pays 0 fees, so the
0-value output would always be spent in the same block.

But this means we could end up with 0-value outputs in the utxo set, if for
some reason the parent tx is CPFP-ed via another output than the 0-value
one,
which would be a utxo set bloat issue. But I'd argue that we're probably
already creating utxo set bloat with the 330 sat anchor outputs (especially
since we use two of them, but only one is usually spent), so it would
probably be *better* than what we're doing today.
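Putting the last paragraph in numbers (the 330 sat figure and two-anchor pattern come from the text above; the single 0-value anchor is the hypothetical being discussed):

```python
# Value locked per commitment transaction under each anchor design:
today = 2 * 330    # two 330-sat anchors; typically only one is ever spent,
                   # so a 330-sat utxo is often left behind
proposed = 1 * 0   # one 0-value anchor, spent in the same block by the child
savings_per_commitment_sat = today - proposed
assert savings_per_commitment_sat == 660
```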

Thanks,
Bastien

Le lun. 26 sept. 2022 à 03:22, Antoine Riard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> a écrit :

> Hi Gloria,
>
> Thanks for the progress on package RBF, few early questions.
>
> > 2. Any descendant of an unconfirmed V3 transaction must also be V3.
>
> > 3. An unconfirmed V3 transaction cannot have more than 1 descendant.
>
> If you're a miner and you receive a non-V3, second descendant of an
> unconfirmed V3 transaction, if the offered fee is in the top mempool
> backlog, I think you would have an interest to accept such a transaction.
>
> So I'm not sure if those two rules are compatible with miners incentives...
>
> > 4. A V3 transaction that has an unconfirmed V3 ancestor cannot be
> >larger than 1000 virtual bytes.
>
> If I understand correctly the rationale for the 1000 vb upper bound, it
> would be to constrain the pinning counterparty to attach a high fee to a
> child due to the limited size, if they would like this transaction to be
> stuck in the network mempools. By doing so, this child has high odds of
> confirming.
>
> I still wonder if this is compatible with miner incentives in periods of
> empty mempools, in the sense that if you already have a V3 transaction of
> size 100Kvb offering 2 sat/vb, it's more interesting than a V3 replacement
> candidate of size 1000 vb offering 10 sat/vb. It could be argued the former
> should be conserved.
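Antoine's scenario in absolute fees (sizes and feerates taken from the paragraph above):

```python
# With an otherwise-empty mempool, absolute fee dominates over feerate:
existing = 100_000 * 2    # 100 Kvb V3 transaction at 2 sat/vb  -> 200,000 sat
candidate = 1_000 * 10    # 1000 vb V3 replacement at 10 sat/vb ->  10,000 sat

assert candidate / 1_000 > existing / 100_000  # replacement wins on feerate
assert existing > candidate                    # but loses on total fee
```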
>
> (That said, the hard thing with any replacement strategy is that we might
> evict a parent transaction *now* to which a high-feerate child is attached
> *later*, which would have made it part of the best ancestor set. Maybe in
> the long term miners should keep every transaction ever accepted...)
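To make the empty-mempool comparison above concrete, here is a quick back-of-the-envelope calculation (the sizes and feerates come from the example in this email; the code itself is only an illustration):

```python
# Comparing miner revenue for the two candidates discussed above: a
# large low-feerate V3 transaction vs a small high-feerate replacement,
# assuming an otherwise empty mempool.

def total_fee(size_vb: int, feerate_sat_per_vb: int) -> int:
    """Absolute fee paid by a transaction of `size_vb` virtual bytes."""
    return size_vb * feerate_sat_per_vb

existing = total_fee(100_000, 2)    # 100 Kvb V3 transaction at 2 sat/vb
replacement = total_fee(1_000, 10)  # 1000 vb V3 candidate at 10 sat/vb

print(existing)     # 200000 sats
print(replacement)  # 10000 sats
# The replacement has 5x the feerate but pays 190000 sats less in
# absolute fees: with block space to spare, keeping the original
# transaction is more profitable for the miner.
```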
>
> > (Lower bound) the smaller this limit, the fewer UTXOs a child may use
> > to fund this fee-bump. For example, only allowing the V3 child to have
> > 2 inputs would require L2 protocols to manage a wallet with high-value
> > UTXOs and make batched fee-bumping impossible. However, as the
> > fee-bumping child only needs to fund fees (as opposed to payments),
> > just a few UTXOs should suffice.
>
> Reminder for L2 devs, batched fee-bumping of time-sensitive confirmations
> of 

Re: [bitcoin-dev] Improving RBF Policy

2022-02-07 Thread Bastien TEINTURIER via bitcoin-dev
Good morning,

> The tricky question is what happens when X arrives on its own and it
> might be that no one ever sends a replacement for B,C,D)

It feels ok to me, but this is definitely arguable.

It covers the fact that B,C,D could have been fake transactions whose
sole purpose was to do a pinning attack: in that case the attacker would
have found a way to ensure these transactions don't confirm anyway (or
pay minimal/negligible fees).

If these transactions were legitimate, I believe that their owners would
remake them at some point (because these transactions reflect a business
relationship that needed to happen, so it should very likely still
happen). It's probably hard to verify because the new corresponding
transactions may have nothing in common with the first, but I think the
simplifications this offers for wallets are worth it (which is just my
opinion and needs more scrutiny/feedback).

> But if your backlog's feerate does drop off, *and* that matters, then
> I don't think you can ignore the impact of the descendent transactions
> that you might not get a replacement for.

That is because you're only taking into account the current backlog, and
not taking into account the fact that new items will be added to it soon
to replace the evicted descendants. But I agree that this is a bet: we
can't predict the future and guarantee these replacements will come.

It is really a trade-off: ignoring descendants provides a much simpler
contract that doesn't vary from one mempool to another, but when your
backlog isn't full enough, you may lose some future profits if
transactions don't come in later.

> I think "Y% higher" rather than just "higher" is only useful for
> rate-limiting, not incentive compatibility. (Though maybe it helps
> stabilise a greedy algorithm in some cases?)

That's true. I claimed these policies only address incentives, but using
a percentage increase addresses rate-limiting a bit as well (I couldn't
resist trying to do at least something for it!). I find it a very easy
mechanism to implement, while choosing an absolute value is hard (it's
always easier to think in relatives than absolutes).
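As a sketch of what such relative thresholds could look like (X and Y are illustrative parameters, and the exact semantics here are my reading of the proposal rather than a quote from it):

```python
# Minimal sketch of a percentage-based replacement check, assuming the
# replacement must exceed the original by X% in feerate and Y% in
# absolute fee. X and Y values are illustrative, not from the thread.

def accepts_replacement(old_fee, old_size, new_fee, new_size,
                        x_pct=10, y_pct=10):
    old_feerate = old_fee / old_size
    new_feerate = new_fee / new_size
    feerate_ok = new_feerate >= old_feerate * (1 + x_pct / 100)
    fee_ok = new_fee >= old_fee * (1 + y_pct / 100)
    return feerate_ok and fee_ok

print(accepts_replacement(1000, 500, 1200, 500))  # True: +20% fee and feerate
print(accepts_replacement(1000, 500, 1050, 600))  # False: feerate dropped
```

A relative threshold is easy to reason about and also rate-limits free relay somewhat, since each successive replacement must pay strictly more than the last.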

> This is why I think it is important to understand the rationales for
> introducing the rules in the first place

I completely agree. As you mentioned, we are still in the brainstorming
phase; once (if?) we start to converge on what better policies could be,
we do need to clearly explain each policy's expected goal. That will let
a future Bastien writing code in 2030 clearly highlight why the 2022
rules don't make sense anymore!

Cheers,
Bastien

On Sat, Feb 5, 2022 at 2:22 PM, Michael Folkson
wrote:

> Thanks for this Bastien (and Gloria for initially posting about this).
>
> I sympathetically skimmed the eclair PR (
> https://github.com/ACINQ/eclair/pull/2113) dealing with replaceable
> transactions fee bumping.
>
> There will continue to be a (hopefully) friendly tug of war on this
> probably for the rest of Bitcoin's existence. I am sure people like Luke,
> Prayank etc will (rightfully) continue to raise that Lightning and other
> second layer protocols shouldn't demand that policy rules be changed if
> there is a reason (e.g. DoS vector) for those rules on the base network.
> But if there are rules that have no upside, introduce unnecessary
> complexity for no reason, and make the lives of Lightning implementers
> like Bastien miserable when attempting to deal with them, I really hope
> we can make progress on removing or simplifying them.
>
> This is why I think it is important to understand the rationales for
> introducing the rules in the first place (and why it is safe to remove them
> if indeed it is) and being as rigorous as possible on the rationales for
> introducing additional rules. It sounds like from Gloria's initial post we
> are still at a brainstorming phase (which is fine) but knowing what we know
> today I really hope we can learn from the mistakes of the original BIP 125,
> namely the Core implementation not matching the BIP and the sparse
> rationales for the rules. As Bastien says this is not criticizing the
> original BIP 125 authors, 7 years is a long time especially in Bitcoin
> world and they probably weren't thinking about Bastien sitting down to
> write an eclair PR in late 2021 (and reviewers of that PR) when they wrote
> the BIP in 2015.
>
> --
> Michael Folkson
> Email: michaelfolkson at protonmail.com
> Keybase: michaelfolkson
> PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
>
>
>
> --- Original Message ---
> On Monday, January 31st, 2022 at 3:57 PM, Bastien TEINTURIER via
> bitcoin-dev  wrote:
>
> Hi Gloria,
>
> Many thanks for raising awareness on these issues and constantly pushing
> towards finding a better model. This work will highly improve the
> security of any multi-party co

Re: [bitcoin-dev] Improving RBF Policy

2022-02-01 Thread Bastien TEINTURIER via bitcoin-dev
Hi AJ, Prayank,

> I think that's backwards: we're trying to discourage people from wasting
> the network's bandwidth, which they would do by publishing transactions
> that will never get confirmed -- if they were to eventually get confirmed
> it wouldn't be a waste of bandwidth, after all. But if the original
> descendent txs were that sort of spam, then they may well not be
> submitted again if the ancestor tx reaches a fee rate that's actually
> likely to confirm.

But do you agree that descendants only matter for DoS resistance then,
not for miner incentives?

I'm asking this because I think a new set of policies should separate
policies that address the miner incentives from policies that address
the DoS issues.

The two policies I proposed address miner incentives. I think they're
insufficient to address DoS issues. But adding a 3rd policy to address
DoS issues may be a good solution?

I think that rate-limiting p2p as you suggest (and Gloria also mentioned
it) is likely a better way of fixing the DoS concerns than a descendant
rule like BIP 125 rule 5 (which, as I mentioned earlier, is problematic
because the descendant set varies from one mempool to another).

I would like to add a small update to my policy suggestions. The X and Y
percentage increase should be met for both the ancestor scores AND the
transaction in isolation. Otherwise I could replace txA with txA' that
uses a new ancestor txB that has a high fee and high feerate, while txA'
has a low fee and low feerate. It's then possible for txB to confirm
without txA', and what would remain then in the mempool would be worse
than before the replacement.
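A minimal sketch of this updated check, assuming the X% threshold must hold for both the ancestor score and the transaction in isolation (field names and numbers are illustrative):

```python
# Require the feerate improvement on both the ancestor score (tx plus
# its unconfirmed ancestors) and the transaction in isolation.

def replacement_ok(old: dict, new: dict, x_pct: float = 10) -> bool:
    threshold = 1 + x_pct / 100
    ancestor_ok = (new["anc_fee"] / new["anc_size"]
                   >= threshold * old["anc_fee"] / old["anc_size"])
    individual_ok = (new["fee"] / new["size"]
                     >= threshold * old["fee"] / old["size"])
    # Both must hold: otherwise txA' could ride on a new high-feerate
    # ancestor txB, and once txB confirms without txA', the mempool
    # would be left with something worse than before the replacement.
    return ancestor_ok and individual_ok

# txA: 1000 sats / 500 vb, no unconfirmed ancestors.
old = {"fee": 1000, "size": 500, "anc_fee": 1000, "anc_size": 500}
# txA': only 100 sats itself, attached to a high-fee new ancestor txB.
new = {"fee": 100, "size": 500, "anc_fee": 5100, "anc_size": 800}

print(replacement_ok(old, new))  # False: txA' alone is low-feerate
```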

> All you need is for there to be *a* path that follows the new relay rules
> and gets from your node/wallet to perhaps 10% of hashpower, which seems
> like something wallet providers could construct relatively quickly?

That's true, maybe we can be more optimistic about the timeline for
using an updated set of policies ;)

> Do you think such multi party contracts are vulnerable by design
> considering they rely on policy that cannot be enforced?

It's a good question. Even though these policies cannot be enforced, if
they are rational to apply by nodes, I think it's ok to rely on them.
Others may disagree with that, but I guess it's worth a separate thread.

> Not sure I understand this part because if a transaction is on-chain
> it can't be replaced.

Sorry, that was a bit unclear.

Suppose I have txA that I want to RBF, but I only have unconfirmed utxos
and I can't simply lower its existing outputs to reach my desired
feerate.

I must get one of my unconfirmed utxos to confirm asap just to be able
to use it to RBF txA. That means I'll pay fees once just to convert one
of my unconfirmed utxos into a confirmed one, and then pay fees again to
bump txA. I had to overpay fees compared to just using my unconfirmed
utxo in the first place (and manage more complexity to track the
confirmation of my unconfirmed utxo).
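The extra cost can be sketched with illustrative numbers (none of these come from the thread):

```python
# Cost of being forced to confirm a utxo before using it for RBF,
# versus spending it unconfirmed directly. All numbers are illustrative.

feerate = 20             # sat/vb paid in both scenarios
confirm_utxo_vb = 150    # extra transaction that confirms the utxo
bump_vb = 200            # replacement transaction bumping txA

forced_confirmation = (confirm_utxo_vb + bump_vb) * feerate
direct_spend = bump_vb * feerate

print(forced_confirmation)  # 7000 sats
print(direct_spend)         # 4000 sats
# The restriction costs an extra on-chain transaction (3000 sats here),
# plus the complexity of tracking its confirmation.
```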

Thanks for your feedback!
Bastien

On Tue, Feb 1, 2022 at 3:47 AM, Prayank  wrote:

> Hi Bastien,
>
> > This work will highly improve the security of any multi-party contract
> trying to build on top of bitcoin
>
> Do you think such multi party contracts are vulnerable by design
> considering they rely on policy that cannot be enforced?
>
> > For starters, let me quickly explain why the current rules are hard to
> work with in the context of lightning
>
> Using the term 'rules' can be confusing sometimes because it's just a
> policy and different from consensus rules. I wish we could change this in
> the BIP with something else.
>
> > I'm actually paying a high fee twice instead of once (and needlessly
> using on-chain space, our scarcest asset, because we could have avoided
> that additional transaction
>
> Not sure I understand this part because if a transaction is on-chain it
> can't be replaced.
>
> > The second biggest pain point is rule 3. It prevents me from efficiently
> using my capital while it's unconfirmed
>
> > I'm curious to hear other people's thoughts on that. If it makes sense,
> I would propose the following very simple rules
>
> Looks interesting however not sure about X and Y.
>
>
> --
> Prayank
>
> A3B1 E430 2298 178F
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Bastien TEINTURIER via bitcoin-dev
Hi Jeremy,

Right now, lightning anchor outputs use a 330 sats amount. Each commitment
transaction has two such outputs, and only one of them is spent to help the
transaction get confirmed, so the other stays there and bloats the utxo set.
We allow anyone to spend them after a csv of 16 blocks, in the hope that
someone will claim a batch of them when the fees are low and remove them
from the utxo set. However, that trick wouldn't work with 0-value
outputs, as no one would ever claim them (it doesn't make economic
sense).
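The economics are easy to check: sweeping an output is only rational when its value exceeds the fee cost of the input that spends it (the input size below is an illustrative approximation of an anchor spend, not a measured value):

```python
# Why 0-value anchors would never be cleaned up: claiming an output is
# only profitable if its value exceeds the cost of the input spending it.

def claim_profit(output_sat: int, input_vb: int, feerate: int) -> int:
    """Net sats gained by sweeping one output at the given feerate."""
    return output_sat - input_vb * feerate

print(claim_profit(330, 100, 1))  # 230: worth sweeping at 1 sat/vb
print(claim_profit(330, 100, 5))  # -170: not worth it at 5 sat/vb
print(claim_profit(0, 100, 1))    # -100: a 0-value output is never worth claiming
```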

We actually need to have two of them to avoid pinning: each participant is
able to spend only one of these outputs while the parent tx is unconfirmed.
I believe N-party protocols would likely need N such outputs (not sure).

You mention a change to the carve-out rule, can you explain it further?
I believe it would be a necessary step, otherwise 0-value outputs for
CPFP actually seem worse than low-value ones...

Thanks,
Bastien

On Wed, Dec 8, 2021 at 2:29 AM, Jeremy via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Bitcoin Devs (+cc lightning-dev),
>
> Earlier this year I proposed allowing 0 value outputs and that was shot
> down for various reasons, see
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-August/019307.html
>
> I think that there can be a simple carve out now that package relay is
> being launched based on my research into covenants from 2017
> https://rubin.io/public/pdfs/multi-txn-contracts.pdf.
>
> Essentially, if we allow 0 value outputs BUT require as a matter of policy
> (or consensus, but policy has major advantages) that the output be used as
> an Intermediate Output (that is, for the transaction creating it to be in
> the mempool, it must be spent by another tx), with the additional rule that
> the parent must have a higher feerate after CPFP'ing the parent than the
> parent alone, we can:
>
> 1) Allow 0 value outputs for things like Anchor Outputs (very good for not
> getting your eltoo/Decker channels pinned by junk witness data using Anchor
> Inputs, very good for not getting your channels drained by at-dust outputs)
> 2) Not allow 0 value utxos to proliferate long
> 3) It still being valid for a 0 value that somehow gets created to be
> spent by the fee paying txn later
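A sketch of the policy check described above, under the assumption that "higher feerate after CPFP'ing" means the package feerate of parent+child exceeds the parent's own feerate (illustrative only, not Bitcoin Core code):

```python
# A 0-value output would only be acceptable if it is immediately spent
# by a child whose fees raise the package feerate above the parent's
# own feerate, i.e. the child is a genuine fee bump.

def zero_value_output_ok(parent_fee, parent_vb, child_fee, child_vb):
    parent_feerate = parent_fee / parent_vb
    package_feerate = (parent_fee + child_fee) / (parent_vb + child_vb)
    # The child must actually bump the parent, not ride along for free.
    return package_feerate > parent_feerate

print(zero_value_output_ok(0, 300, 3000, 150))    # True: anchor spend funds the parent
print(zero_value_output_ok(1000, 300, 100, 150))  # False: the child dilutes the feerate
```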
>
> Just doing this as a mempool policy also has the benefits of not
> introducing any new validation rules. Although in general the IUTXO concept
> is very attractive, it complicates mempool :(
>
> I understand this may also be really helpful for CTV based contracts (like
> vault continuation hooks) as well as things like spacechains.
>
> Such a rule -- if it's not clear -- presupposes a fully working package
> relay system.
>
> I believe that this addresses all the issues with allowing 0 value outputs
> to be created for the narrow case of immediately spendable outputs.
>
> Cheers,
>
> Jeremy
>
> p.s. why another post today? Thank Greg
> https://twitter.com/JeremyRubin/status/1468390561417547780
>
>
> --
> @JeremyRubin 
> 


Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-27 Thread Bastien TEINTURIER via bitcoin-dev
>
> I think we could restrict package acceptance to only confirmed inputs for
> now and revisit this point later? For LN-anchor, you can assume that the
> fee-bumping UTXO feeding the CPFP is already
> confirmed. Or are there currently-deployed use-cases which would benefit
> from your proposed Rule #2 ?
>

I think constraining package acceptance to only confirmed inputs
is very limiting and quite dangerous for L2 protocols.

In the case of LN, an attacker can game this and heavily restrict
your RBF attempts if you're only allowed to use confirmed inputs
and have many channels (and a limited number of confirmed inputs).
Otherwise you'll need node operators to pre-emptively split their
utxos into many small utxos just for fee bumping, which is inefficient...

Bastien

On Mon, Sep 27, 2021 at 12:27 AM, Antoine Riard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Gloria,
>
> Thanks for your answers,
>
> > In summary, it seems that the decisions that might still need
> > attention/input from devs on this mailing list are:
> > 1. Whether we should start with multiple-parent-1-child or
> 1-parent-1-child.
> > 2. Whether it's ok to require that the child not have conflicts with
> > mempool transactions.
>
> Yes, 1) it would be good to have input from more potential users of package
> acceptance. And 2) I think it's more a matter of clearer wording of the
> proposal.
>
> However, see my final point on the relaxation around "unconfirmed inputs"
> which might in fact alter our current block construction strategy.
>
> > Right, the fact that we essentially always choose the first-seen witness
> is
> > an unfortunate limitation that exists already. Adding package mempool
> > accept doesn't worsen this, but the procedure in the future is to replace
> > the witness when it makes sense economically. We can also add logic to
> > allow package feerate to pay for witness replacements as well. This is
> > pretty far into the future, though.
>
> Yes I agree package mempool doesn't worsen this. And it's not an issue for
> current LN as you can't significantly inflate a spending witness for the
> 2-of-2 funding output.
> However, it might be an issue for multi-party protocols where the spending
> script has alternative branches with asymmetric valid witness weights.
> Taproot should ease that kind of script so hopefully we would deploy
> wtxid-replacement not too far in the future.
>
> > I could be misunderstanding, but an attacker wouldn't be able to
> > batch-attack like this. Alice's package only conflicts with A' + D', not
> A'
> > + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>
> Yeah, I can be clearer: I think you have 2 pinning attack scenarios to
> consider.
>
> In LN, if you're trying to confirm a commitment transaction to time-out or
> claim on-chain a HTLC and the timelock is near-expiration, you should be
> ready to pay in commitment+2nd-stage HTLC transaction fees as much as the
> value offered by the HTLC.
>
> Following this security assumption, an attacker can exploit it by jointly
> targeting commitment transactions from different channels, blocking them
> under a high-fee child whose fee value is equal to the top-value HTLC + 1.
> Victims' fee-bumping logic won't overbid, as it's not worth offering fees
> beyond the HTLCs they're competing for. Apart from observing mempool
> state, victims can't learn they're targeted by the same attacker.
>
> To draw from the aforementioned topology, Mallory broadcasts A' + B' + C'
> + D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, C'
> conflicts with Caroll's P3. Let's assume P1 is confirming the top-value
> HTLC of the set. If D' pays more in fees than P1's HTLC value + 1, it
> won't be rational for Alice, Bob or Caroll to keep offering competing
> feerates. Mallory will take a loss on stealing P1, as she has paid more
> in fees, but will realize a gain on P2+P3.
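A rough accounting of this batch pinning, with illustrative HTLC amounts (not from the email), shows why the attack can be profitable overall:

```python
# Back-of-the-envelope accounting for the batch pinning described
# above: Mallory pins three commitment transactions under one child D'
# paying just over the top HTLC's value. Amounts in sats, illustrative.

htlc_p1, htlc_p2, htlc_p3 = 100_000, 60_000, 50_000  # targeted HTLCs
d_prime_fee = htlc_p1 + 1  # D' fee: no victim will rationally outbid this

# Each victim's fee-bumping logic caps its bid at its own HTLC value,
# so Alice (P1) breaks even at best, and Bob and Caroll give up.
mallory_cost = d_prime_fee
mallory_gain = htlc_p2 + htlc_p3  # stolen once P2/P3 HTLCs time out

print(mallory_gain - mallory_cost)  # 9999: net profit despite losing on P1
```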
>
> In this model, Alice is allowed to evict those 2 transactions (A' + D'),
> but as she is economically bounded she won't succeed.
>
> Mallory is maliciously exploiting RBF rule 3 on absolute fee. I think this
> 1st pinning scenario is correct and "lucrative" when you sum the global
> gain/loss.
>
> There is a 2nd attack scenario where A + B + C + D, where D is the child
> of A,B,C. All those transactions are honestly issued by Alice. Once A + B +
> C + D are propagated in network mempools, Mallory is able to replace A + D
> with A' + D', where D' pays a higher fee. This package A' + D' will
> confirm soon if D's feerate was compelling, but Mallory succeeds in
> delaying the confirmation
> of B + C for one or more blocks. As B + C are pre-signed commitments with
> a low feerate, they won't confirm without Alice issuing a new child E.
> Mallory can repeat the same trick by broadcasting
> B' + E', delaying again the confirmation of C.
>
> If the remaining package pending HTLC has a higher-value than all the
> malicious fees over-bid, Mallory should realize a 

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-22 Thread Bastien TEINTURIER via bitcoin-dev
Great, thanks for this clarification!

Can you confirm that this won't be an issue either with your
example 2C (in your first set of diagrams)? If I understand it
correctly it shouldn't, but I'd rather be 100% sure.

A package A + C will be able to replace A' + B regardless of
the weight of A' + B?

Thanks,
Bastien

On Tue, Sep 21, 2021 at 6:42 PM, Gloria Zhao  wrote:

> Hi Bastien,
>
> Excellent diagram :D
>
> > Here the issue is that a revoked commitment tx A' is pinned in other
> > mempools, with a long chain of descendants (or descendants that reach
> > the maximum replaceable size).
> > We would really like A + C to be able to replace this pinned A'.
> > We can't submit individually because A on its own won't replace A'...
>
> Right, this is a key motivation for having Package RBF. In this case, A+C
> can replace A' + B1...B24.
>
> Due to the descendant limit (each node operator can increase it on their
> own node, but the default is 25), A' should have no more than 25
> descendants, even including CPFP carve out. As long as A only conflicts
> with A', it won't be trying to replace more than 100 transactions. The
> proposed package RBF will allow C to pay for A's conflicts, since their
> package feerate is used in the fee comparisons. A is not a descendant of
> A', so the existence of B1...B24 does not prevent the replacement.
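The limit arithmetic above can be sketched as follows (DESCENDANT_LIMIT is Bitcoin Core's default; the counting is a simplification of the proposed rule, not its implementation):

```python
# With a default descendant limit of 25 (a transaction counts itself
# plus up to 24 descendants), a package conflicting with a single
# mempool transaction A' evicts at most 25 transactions, well within
# the proposed cap of 100 replacements.

DESCENDANT_LIMIT = 25   # Bitcoin Core default, per-node configurable
MAX_REPLACEMENTS = 100  # proposed package RBF rule

def max_evicted(direct_conflicts: int) -> int:
    # Each directly conflicting transaction drags along at most
    # DESCENDANT_LIMIT - 1 descendants when evicted.
    return direct_conflicts * DESCENDANT_LIMIT

print(max_evicted(1))                      # 25: A' plus B1...B24
print(max_evicted(1) <= MAX_REPLACEMENTS)  # True
```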
>
> Best,
> Gloria
>
> On Tue, Sep 21, 2021 at 4:18 PM Bastien TEINTURIER 
> wrote:
>
>> Hi Gloria,
>>
>> > I believe this attack is mitigated as long as we attempt to submit
>> transactions individually
>>
>> Unfortunately not, as there exists a pinning scenario in LN where a
>> different commit tx is pinned, but you actually can't know which one.
>>
>> Since I really like your diagrams, I made one as well to illustrate:
>>
>> https://user-images.githubusercontent.com/31281497/134198114-5e9c6857-e8fc-405a-be57-18181d5e54cb.jpg
>>
>> Here the issue is that a revoked commitment tx A' is pinned in other
>> mempools, with a long chain of descendants (or descendants that reach
>> the maximum replaceable size).
>>
>> We would really like A + C to be able to replace this pinned A'.
>> We can't submit individually because A on its own won't replace A'...
>>
>> > I would note that this proposal doesn't accommodate something like
>> diagram B, where C is getting CPFP carve out and wants to bring a +1
>>
>> No worries, that case shouldn't be a concern.
>> I believe any L2 protocol can always ensure it confirms such tx trees
>> "one depth after the other" without impacting funds safety, so it
>> only needs to ensure A + C can get into mempools.
>>
>> Thanks,
>> Bastien
>>
>> On Tue, Sep 21, 2021 at 1:18 PM, Gloria Zhao 
>> wrote:
>>
>>> Hi Bastien,
>>>
>>> Thank you for your feedback!
>>>
>>> > In your example we have a parent transaction A already in the mempool
>>> > and an unrelated child B. We submit a package C + D where C spends
>>> > another of A's inputs. You're highlighting that this package may be
>>> > rejected because of the unrelated transaction(s) B.
>>>
>>> > The way I see this, an attacker can abuse this rule to ensure
>>> > transaction A stays pinned in the mempool without confirming by
>>> > broadcasting a set of child transactions that reach these limits
>>> > and pay low fees (where A would be a commit tx in LN).
>>>
>>> I believe you are describing a pinning attack in which your adversarial
>>> counterparty attempts to monopolize the mempool descendant limit of the
>>> shared  transaction A in order to prevent you from submitting a fee-bumping
>>> child C; I've tried to illustrate this as diagram A here:
>>> https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
>>> (please let me know if I'm misunderstanding).
>>>
>>> I believe this attack is mitigated as long as we attempt to submit
>>> transactions individually (and thus take advantage of CPFP carve out)
>>> before attempting package validation. So, in scenario A2, even if the
>>> mempool receives a package with A+C, it would deduplicate A, submit C as an
>>> individual transaction, and allow it due to the CPFP carve out exemption. A
>>> more general goal is: if a transaction would propagate successfully on its
>>> own now, it should still propagate regardless of whether it is included in
>>> a package. The best way to ensure this, as far as I can tell, is to always
>>> try to submit them individually first.
>>>
>>> I would note that this proposal doesn't accommodate something like
>>> diagram B, where C is getting CPFP carve out and wants to bring a +1 (e.g.
>>> C has very low fees and is bumped by D). I don't think this is a use case
>>> since C should be the one fee-bumping A, but since we're talking about
>>> limitations around the CPFP carve out, this is it.
>>>
>>> Let me know if this addresses your concerns?
>>>
>>> Thanks,
>>> Gloria
>>>
>>> On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER 
>>> wrote:
>>>
 Hi Gloria,

 Thanks for this detailed 

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-21 Thread Bastien TEINTURIER via bitcoin-dev
Hi Gloria,

> I believe this attack is mitigated as long as we attempt to submit
transactions individually

Unfortunately not, as there exists a pinning scenario in LN where a
different commit tx is pinned, but you actually can't know which one.

Since I really like your diagrams, I made one as well to illustrate:
https://user-images.githubusercontent.com/31281497/134198114-5e9c6857-e8fc-405a-be57-18181d5e54cb.jpg

Here the issue is that a revoked commitment tx A' is pinned in other
mempools, with a long chain of descendants (or descendants that reach
the maximum replaceable size).

We would really like A + C to be able to replace this pinned A'.
We can't submit individually because A on its own won't replace A'...

> I would note that this proposal doesn't accommodate something like
diagram B, where C is getting CPFP carve out and wants to bring a +1

No worries, that case shouldn't be a concern.
I believe any L2 protocol can always ensure it confirms such tx trees
"one depth after the other" without impacting funds safety, so it
only needs to ensure A + C can get into mempools.

Thanks,
Bastien

On Tue, Sep 21, 2021 at 1:18 PM, Gloria Zhao  wrote:

> Hi Bastien,
>
> Thank you for your feedback!
>
> > In your example we have a parent transaction A already in the mempool
> > and an unrelated child B. We submit a package C + D where C spends
> > another of A's inputs. You're highlighting that this package may be
> > rejected because of the unrelated transaction(s) B.
>
> > The way I see this, an attacker can abuse this rule to ensure
> > transaction A stays pinned in the mempool without confirming by
> > broadcasting a set of child transactions that reach these limits
> > and pay low fees (where A would be a commit tx in LN).
>
> I believe you are describing a pinning attack in which your adversarial
> counterparty attempts to monopolize the mempool descendant limit of the
> shared  transaction A in order to prevent you from submitting a fee-bumping
> child C; I've tried to illustrate this as diagram A here:
> https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
> (please let me know if I'm misunderstanding).
>
> I believe this attack is mitigated as long as we attempt to submit
> transactions individually (and thus take advantage of CPFP carve out)
> before attempting package validation. So, in scenario A2, even if the
> mempool receives a package with A+C, it would deduplicate A, submit C as an
> individual transaction, and allow it due to the CPFP carve out exemption. A
> more general goal is: if a transaction would propagate successfully on its
> own now, it should still propagate regardless of whether it is included in
> a package. The best way to ensure this, as far as I can tell, is to always
> try to submit them individually first.
>
> I would note that this proposal doesn't accommodate something like diagram
> B, where C is getting CPFP carve out and wants to bring a +1 (e.g. C has
> very low fees and is bumped by D). I don't think this is a use case since C
> should be the one fee-bumping A, but since we're talking about limitations
> around the CPFP carve out, this is it.
>
> Let me know if this addresses your concerns?
>
> Thanks,
> Gloria
>
> On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER 
> wrote:
>
>> Hi Gloria,
>>
>> Thanks for this detailed post!
>>
>> The illustrations you provided are very useful for this kind of graph
>> topology problems.
>>
>> The rules you lay out for package RBF look good to me at first glance
>> as there are some subtle improvements compared to BIP 125.
>>
>> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>
>> I have a question regarding this rule, as your example 2C could be
>> concerning for LN (unless I didn't understand it correctly).
>>
>> This also touches on the package RBF rule 5 ("The package cannot
>> replace more than 100 mempool transactions.")
>>
>> In your example we have a parent transaction A already in the mempool
>> and an unrelated child B. We submit a package C + D where C spends
>> another of A's inputs. You're highlighting that this package may be
>> rejected because of the unrelated transaction(s) B.
>>
>> The way I see this, an attacker can abuse this rule to ensure
>> transaction A stays pinned in the mempool without confirming by
>> broadcasting a set of child transactions that reach these limits
>> and pay low fees (where A would be a commit tx in LN).
>>
>> We had to create the CPFP carve-out rule explicitly to work around
>> this limitation, and I think it would be necessary for package RBF
>> as well, because in such cases we do want to be able to submit a
>> package A + C where C pays high fees to speed up A's confirmation,
>> regardless of unrelated unconfirmed children of A...
>>
>> We could submit only C to benefit from the existing CPFP carve-out
>> rule, but that wouldn't work if our local mempool doesn't have A 

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-20 Thread Bastien TEINTURIER via bitcoin-dev
Hi Gloria,

Thanks for this detailed post!

The illustrations you provided are very useful for this kind of graph
topology problems.

The rules you lay out for package RBF look good to me at first glance
as there are some subtle improvements compared to BIP 125.

> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
> `MAX_PACKAGE_SIZE=101KvB` total size [8]

I have a question regarding this rule, as your example 2C could be
concerning for LN (unless I didn't understand it correctly).

This also touches on the package RBF rule 5 ("The package cannot
replace more than 100 mempool transactions.")

In your example we have a parent transaction A already in the mempool
and an unrelated child B. We submit a package C + D where C spends
another of A's inputs. You're highlighting that this package may be
rejected because of the unrelated transaction(s) B.

The way I see this, an attacker can abuse this rule to ensure
transaction A stays pinned in the mempool without confirming by
broadcasting a set of child transactions that reach these limits
and pay low fees (where A would be a commit tx in LN).

We had to create the CPFP carve-out rule explicitly to work around
this limitation, and I think it would be necessary for package RBF
as well, because in such cases we do want to be able to submit a
package A + C where C pays high fees to speed up A's confirmation,
regardless of unrelated unconfirmed children of A...

We could submit only C to benefit from the existing CPFP carve-out
rule, but that wouldn't work if our local mempool doesn't have A yet,
but other remote mempools do.

Is my concern justified? Is this something that we should dig into a
bit deeper?

Thanks,
Bastien

On Thu, Sep 16, 2021 at 9:55 AM, Gloria Zhao via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi there,
>
> I'm writing to propose a set of mempool policy changes to enable package
> validation (in preparation for package relay) in Bitcoin Core. These would
> not be consensus or P2P protocol changes. However, since mempool policy
> significantly affects transaction propagation, I believe this is relevant
> for the mailing list.
>
> My proposal enables packages consisting of multiple parents and 1 child. If
> you develop software that relies on specific transaction relay assumptions
> and/or are interested in using package relay in the future, I'm very
> interested to hear your feedback on the utility or restrictiveness of these
> package policies for your use cases.
>
> A draft implementation of this proposal can be found in [Bitcoin Core
> PR#22290][1].
>
> An illustrated version of this post can be found at
> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
> I have also linked the images below.
>
> ## Background
>
> Feel free to skip this section if you are already familiar with mempool
> policy
> and package relay terminology.
>
> ### Terminology Clarifications
>
> * Package = an ordered list of related transactions, representable by a
>   Directed Acyclic Graph.
> * Package Feerate = the total modified fees divided by the total virtual
>   size of all transactions in the package.
> - Modified fees = a transaction's base fees + fee delta applied by the user
>   with `prioritisetransaction`. As such, we expect this to vary across
>   mempools.
> - Virtual Size = the maximum of virtual sizes calculated using [BIP141
>   virtual size][2] and sigop weight. [Implemented here in Bitcoin
> Core][3].
> - Note that feerate is not necessarily based on the base fees and
> serialized
>   size.
>
> * Fee-Bumping = user/wallet actions that take advantage of miner
> incentives to
>   boost a transaction's candidacy for inclusion in a block, including
> Child Pays
> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention in
> mempool policy is to recognize when the new transaction is more economical
> to
> mine than the original one(s) but not open DoS vectors, so there are some
> limitations.
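A minimal sketch of the package feerate computation defined above (illustrative only, not Bitcoin Core's actual implementation; the transactions and fee values are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class MempoolTx:
    base_fee: int      # satoshis
    fee_delta: int     # satoshis added by the user via prioritisetransaction
    bip141_vsize: int  # virtual bytes per BIP141 (weight / 4, rounded up)
    sigop_vsize: int   # virtual bytes implied by sigop cost

    @property
    def modified_fee(self) -> int:
        # Modified fees = base fees + user-applied fee delta.
        return self.base_fee + self.fee_delta

    @property
    def vsize(self) -> int:
        # The virtual size is the maximum of the two size measures.
        return max(self.bip141_vsize, self.sigop_vsize)

def package_feerate(txs) -> float:
    """Total modified fees divided by total virtual size (sat/vB)."""
    total_fee = sum(tx.modified_fee for tx in txs)
    total_vsize = sum(tx.vsize for tx in txs)
    return total_fee / total_vsize

# A zero-fee parent bumped by a child paying 5000 sats: the package
# is evaluated at the aggregate feerate, not the parent's own 0 sat/vB.
parent = MempoolTx(base_fee=0, fee_delta=0, bip141_vsize=100, sigop_vsize=100)
child = MempoolTx(base_fee=5000, fee_delta=0, bip141_vsize=150, sigop_vsize=150)
print(package_feerate([parent, child]))  # -> 20.0
```

This illustrates why package validation helps CPFP: evaluated alone, the parent would be rejected for paying below the mempool minimum feerate, but the package of both clears easily.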
>
> ### Policy
>
> The purpose of the mempool is to store the best candidates for inclusion
> in a block (highest feerate, i.e. most incentive-compatible with miners).
> Miners use
> the mempool to build block templates. The mempool is also useful as a
> cache for
> boosting block relay and validation performance, aiding transaction relay,
> and
> generating feerate estimations.
>
> Ideally, all consensus-valid transactions paying reasonable fees should
> make it
> to miners through normal transaction relay, without any special
> connectivity or
> relationships with miners. On the other hand, nodes do not have unlimited
> resources, and a P2P network designed to let any honest node broadcast
> their
> transactions also exposes the transaction validation engine to DoS attacks
> from
> malicious peers.
>
> As such, for unconfirmed transactions we are considering for our mempool,
> we
> apply a set of validation rules in addition to consensus, primarily to
> protect
> 

Re: [bitcoin-dev] [Lightning-dev] L2s Onchain Support IRC Workshop

2021-04-23 Thread Bastien TEINTURIER via bitcoin-dev
Great idea, I'll join as well.
Thanks for setting this in motion.

Le ven. 23 avr. 2021 à 17:39, Antoine Riard  a
écrit :

> Hi Jeremy,
>
> Yes dates are floating for now. After Bitcoin 2021, sounds a good idea.
>
> Awesome, I'll be really interested to review again an improved version of
> sponsorship. And I'll try to sketch out the sighash_no-input fee-bumping
> idea which was floating around last year during pinnings discussions. Yet
> another set of trade-offs :)
>
> Le ven. 23 avr. 2021 à 11:25, Jeremy  a écrit :
>
>> I'd be excited to join. Recommend bumping the date  to mid June, if
>> that's ok, as many Americans will be at Bitcoin 2021.
>>
>> I was thinking about reviving the sponsors proposal with a 100 block lock
>> on spending a sponsoring tx, which would hopefully make it less
>> controversial; this would be a great place to discuss those tradeoffs.
>>
>> On Fri, Apr 23, 2021, 8:17 AM Antoine Riard 
>> wrote:
>>
>>> Hi,
>>>
>>> During the last few years, tx-relay and mempool acceptance rules of the
>>> base layer have been sources of major security and operational concerns for
>>> Lightning and other Bitcoin second layers [0]. I think those areas require
>>> significant improvements to ease the design and deployment of higher Bitcoin
>>> layers, and I believe this opinion is shared among the L2 dev community. In
>>> order to make advancements, it has been discussed a few times in the last
>>> months to organize in-person workshops to discuss those issues with both
>>> L1 and L2 devs present, to make the exchange fruitful.
>>>
>>> Unfortunately, I don't think we'll be able to organize such in-person
>>> workshops this year (because, you know, travel is hard these days...). As a
>>> substitution, I'm proposing a series of one or more IRC meetings. That
>>> said, this substitution has the happy benefit of gathering far more folks
>>> interested in those issues than you can fit in a room.
>>>
>>> # Scope
>>>
>>> I would like to propose the following 4 items as topics of discussion.
>>>
>>> 1) Package relay design, or another generic L2 fee-bumping primitive like
>>> sponsorship [0]. IMHO, this primitive should at least solve mempool spikes
>>> rendering the propagation of transactions with pre-signed feerates obsolete,
>>> solve pinning attacks compromising the safety of Lightning and multi-party
>>> contract protocols, offer a usable and stable API to the L2 software stack,
>>> stay compatible with miner and full-node operators' incentives, and obviously
>>> minimize CPU/memory DoS vectors.
>>>
>>> 2) Deprecation of opt-in RBF in favor of full-RBF. Opt-in RBF makes it
>>> trivial for an attacker to partition network mempools into divergent subsets
>>> and from there launch advanced security or privacy attacks against a
>>> Lightning node. Note that it might also be a concern for bandwidth-bleeding
>>> attacks against L1 nodes.
>>>
>>> 3) Guidelines about coordinated cross-layer security disclosures.
>>> Mitigating a security issue around tx-relay or the mempool in Core might
>>> have harmful implications for downstream projects. Ideally, L2 project
>>> maintainers should be ready to upgrade their protocols in an emergency, in
>>> coordination with base-layer developers.
>>>
>>> 4) Guidelines about L2 protocols' onchain security design. Currently
>>> deployed protocols like Lightning make a bunch of assumptions about
>>> tx-relay and mempool acceptance rules. Those rules are non-normative,
>>> non-reliable, and lack documentation. Further, they're devoid of tooling to
>>> enforce them at runtime [2]. IMHO, it would be preferable to identify a
>>> subset of them on which second-layer protocols can make assumptions without
>>> encroaching too much on nodes' policy realm or making base-layer
>>> development in those areas too cumbersome.
>>>
>>> I'm aware that some folks are interested in other topics, such as
>>> extending Core's mempool package limits or better pricing of RBF
>>> replacements. So I propose a 2-week consultation period to submit other
>>> topics related to tx-relay or mempool improvements for L2s, before
>>> proposing a finalized scope and agenda.
>>>
>>> # Goals
>>>
>>> 1) Reaching technical consensus.
>>> 2) Reaching technical consensus, before seeking community consensus as
>>> it likely has ecosystem-wide implications.
>>> 3) Establishing a security incident response policy which can be applied
>>> by dev teams in the future.
>>> 4) Establishing a design philosophy and associated documentation (BIPs,
>>> best practices, ...)
>>>
>>> # Timeline
>>>
>>> 2021-04-23: Start of consultation period
>>> 2021-05-07: End of consultation period
>>> 2021-05-10: Proposition of workshop agenda and schedule
>>> late 2021-05/2021-06: IRC meetings
>>>
>>> As the problem space is extremely wide, I've started a collection of
>>> documents to assist this workshop: https://github.com/ariard/L2-zoology
>>> Still WIP, but I'll have them in good shape by agenda publication,
>>> with reading suggestions and open questions to structure discussions.

Re: [bitcoin-dev] MAD-HTLC

2020-06-25 Thread Bastien TEINTURIER via bitcoin-dev
Good morning list,

This is an interesting and simple idea, thanks for sharing this paper!

However I think there are a couple of important issues (but it could be me
misunderstanding):

* the assumption that the mempool is a shared resource is flawed: it's
entirely possible
  to have very different mempools in different areas of the network, for a
potentially long
  period of time (see the RBF pinning thread [1]), and an attacker can
leverage this fact
* a corollary is that Bob may not know that Alice has published her
transaction, and will
  end up publishing his timeout tx, unknowingly giving the two preimages to
the miners
* a corollary of that is a very unhealthy incentive to miners, when they
receive an HTLC
  success tx, to always wait for the timeout before confirming the
transaction, in hope that
  they'll receive the second preimage and will be able to claim the funds
for themselves
  (whereas currently they don't gain anything by waiting before confirming
these txs)

To be fair, the paper states that it doesn't address issues of malicious
miners or an attacker colluding with a miner, but I think that even honest
miners now have an unhealthy incentive regarding HTLC-success confirmation.

Let me know if I misunderstood something, or if you have ideas on how to
explore that
threat model in the future.

Cheers,
Bastien

[1]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002639.html



Le jeu. 25 juin 2020 à 14:45, Nadav Ivgi via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> a écrit :

> Hi ZmnSCPxj,
>
> You are of course correct. I had considered the effect of reorgs, but the
> email seemed to be getting too lengthy to mention that too.
>
> You would need a few spare blocks in which Bob won't be accused of bribery
> as a safety margin, which does reduce the time frame in which Alice can get
> her transaction confirmed in order to have a valid bribery fraud. This
> seems workable if the time frame was long enough (over a few hours should
> be sufficient, assuming we consider reorgs of over 3-4 blocks to be
> unlikely), but could indeed be problematic if the time frame is already
> short to begin with.
>
> Nadav
>
> On Thu, Jun 25, 2020 at 7:04 AM ZmnSCPxj  wrote:
>
>> Good morning Nadav,
>>
>> > > I and some number of Lightning devs consider this to be sufficient
>> disincentive to Bob not attacking in the first place.
>> >
>> > An additional disincentive could be introduced in the form of bribery
>> proofs for failed attempts.
>> >
>> > If we assume that "honest" users of the LN protocol won't reveal their
>> timelocked transactions before reaching the timelock expiry (they shouldn't
>> anyway because standard full node implementations won't relay them), we can
>> prove that Bob attempted bribery and failed to an outside observer by
>> showing Bob's signed timelocked transaction, spending an output that was in
>> reality spent by a different transaction prior to the locktime expiry,
>> which should not be possible if Bob had waited.
>>
>>
>> Unfortunately this could be subject to an inversion of this attack.
>>
>> Alice can wait for the timelock to expire, then bribe miners to prevent
>> confirmation of the Bob timelocked transaction, getting the Alice
>> hashlocked transaction confirmed.
>>
>> Now of course you do mention "prior to the locktime expiry" but there is
>> now risk at around locktime.
>>
>> Particularly, "natural" orphaned blocks and short-term chainsplits can
>> exist.
>> Bob might see that the locktime has arrived and broadcast the signed
>> timelocked transaction, then Alice sees the locktime has not yet arrived
>> (due to short-term chainsplits/propagation delays) and broadcast the signed
>> hashlocked transaction, then in the end the Alice side of the short-term
>> chainsplit is what solidifies into reality due to random chance on which
>> miner wins which block.
>> Then Bob can now be accused of bribery, even though it acted innocently;
>> it broadcasted the timelock branch due to a natural chainsplit but Alice
>> hashlocked branch got confirmed.
>>
>> Additional complications can be added on top to help mitigate this edge
>> case but more complex == worse in general.
>> For example, the "prior to locktime expiry" check could ignore a few blocks
>> before the actual timelock, but this might allow Bob to mount the attack by
>> initiating its bribery behavior earlier by those few blocks.
>>
>> Finally, serious attackers would just use new pseudonyms, the important
>> thing is to make pseudonyms valuable and costly to lose, so it is
>> considered sufficient that LN nodes need to have some commitment to the LN
>> in the form of actual channels (which are valuable, potentially
>> money-earning constructs, and costly to set up).
>>
>> Other HTLC-using systems, such as the "SwapMarket" being proposed by
>> Chris Belcher, could use similar disincentivizing; I know Chris is planning
>> a fidelity bond system for SwapMarket makers, for example, which would
>> mimic 

Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-22 Thread Bastien TEINTURIER via bitcoin-dev
Hey ZmnSCPxj,

I agree that in theory this looks possible, but doing it in practice with
accurate control
of what parts of the network get what tx feels impractical to me (but maybe
I'm wrong!).

It feels to me that an attacker who would be able to do this would break
*any* off-chain
construction that relies on absolute timeouts, so I'm hoping this is
insanely hard to
achieve without cooperation from a miners subset. Let me know if I'm too
optimistic on
this!

Cheers,
Bastien

Le lun. 22 juin 2020 à 10:15, ZmnSCPxj  a écrit :

> Good morning Bastien,
>
> > Thanks for the detailed write-up on how it affects incentives and
> centralization,
> > these are good points. I need to spend more time thinking about them.
> >
> > > This is one reason I suggested using independent pay-to-preimage
> > > transactions[1]
> >
> > While this works as a technical solution, I think it has some incentives
> issues too.
> > In this attack, I believe the miners that hide the preimage tx in their
> mempool have
> > to be accomplice with the attacker, otherwise they would share that tx
> with some of
> > their peers, and some non-miner nodes would get that preimage tx and be
> able to
> > gossip them off-chain (and even relay them to other mempools).
>
> I believe this is technically possible with current mempool rules, without
> miners cooperating with the attacker.
>
> Basically, the attacker releases two transactions with near-equal fees, so
> that neither can RBF the other.
> It releases the preimage tx near miners, and the timelock tx near
> non-miners.
>
> Nodes at the boundaries between those that receive the preimage tx and the
> timelock tx will receive both.
> However, they will receive one or the other first.
> Which one they receive first will be what they keep, and they will reject
> the other (and *not* propagate the other), because the difference in fees
> is not enough to get past the RBF rules (which requires not just a feerate
> increase, but also an increase in absolute fee, of at least the minimum
> relay feerate times transaction size).
>
> Because they reject the other tx, they do not propagate the other tx, so
> the boundary between the two txes is inviolate, neither can get past that
> boundary, this occurs even if everyone is running 100% unmodified Bitcoin
> Core code.
>
> I am not a mempool expert and my understanding may be incorrect.
>
> Regards,
> ZmnSCPxj
>
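The replacement check ZmnSCPxj describes can be sketched as follows. This is a simplification of BIP125 (the signaling, conflict-count, and no-new-unconfirmed-inputs rules are omitted), and the 1 sat/vB incremental relay feerate is an assumed default, not a normative value:

```python
def rbf_accepts(old_fee: int, old_vsize: int,
                new_fee: int, new_vsize: int,
                incremental_feerate: float = 1.0) -> bool:
    """Simplified BIP125 fee rules: a replacement must pay a strictly
    higher feerate AND pay for its own relay bandwidth on top of the
    absolute fees of the transaction it evicts."""
    old_feerate = old_fee / old_vsize
    new_feerate = new_fee / new_vsize
    if new_feerate <= old_feerate:
        return False
    return new_fee >= old_fee + incremental_feerate * new_vsize

# Two near-equal-fee conflicting txs: neither can replace the other, so
# each node keeps whichever it saw first -- the mempool partition
# described above, with every node running unmodified rules.
print(rbf_accepts(1000, 200, 1001, 200))  # -> False (1 sat bump < 200 sats needed)
print(rbf_accepts(1000, 200, 1250, 200))  # -> True
```

The first call shows the attack ingredient: a 1-satoshi fee difference clears the feerate test but not the absolute-fee test, so the second transaction is rejected and, crucially, never propagated past the boundary.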
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-22 Thread Bastien TEINTURIER via bitcoin-dev
Thanks for the detailed write-up on how it affects incentives and
centralization,
these are good points. I need to spend more time thinking about them.

> This is one reason I suggested using independent pay-to-preimage
> transactions[1]
>

While this works as a technical solution, I think it has some incentives
issues too.
In this attack, I believe the miners that hide the preimage tx in their
mempool have
to be accomplice with the attacker, otherwise they would share that tx with
some of
their peers, and some non-miner nodes would get that preimage tx and be
able to
gossip them off-chain (and even relay them to other mempools).

If they are actively helping the attacker, they wouldn't spend the
pay-to-preimage tx,
unless they gain more from it than the share the attacker gives them. This
becomes
a simple bidding war, and the honest user will always be the losing party
here (the
attacker has nothing to lose). For this reason I'm afraid it wouldn't work
out in practice as well as we'd hope... What do you think? And even if the
honest user wins the bidding war, the attack still steals money from that
user; it just goes into the miner's pocket.

> But from the perspective of a single LN node, it
> might make more sense to get the information and *not* share it
>

I think it depends. If this attack becomes doable in practice and we see it
happening,
LN routing nodes and service providers have a very high incentive to thwart
these attacks,
because otherwise they'd lose their business as people would leave the
lightning network.

As long as enough nodes think that way (with "enough" being a very hard to
define quantity),
this should mitigate the attack. The only risk would be a big "exit scam"
scenario, but the
coordination cost between all these nodes makes that scenario unlikely
(IMHO).

Thanks,
Bastien

Le sam. 20 juin 2020 à 12:37, David A. Harding  a écrit :

> On Sat, Jun 20, 2020 at 10:54:03AM +0200, Bastien TEINTURIER wrote:
> > We're simply missing information, so it looks like the only good
> > solution is to avoid being in that situation by having a foot in
> > miners' mempools.
>
> The problem I have with that approach is that the incentive is to
> connect to the highest hashrate pools and ignore the long tail of
> smaller pools and solo miners.  If miners realize people are doing this,
> they may begin to charge for information about their mempool and the
> largest miners will likely be able to charge more money per hashrate
> than smaller miners, creating a centralization force by increasing
> existing economies of scale.
>
> Worse, information about a node's mempool is partly trusted.  A node can
> easily prove what transactions it has, but it can't prove that it
> doesn't have a certain transaction.  This implies incumbent pools with a
> long record of trustworthy behavior may be able to charge more per
> hashrate than a newer pools, creating a reputation-based centralizing
> force that pushes individual miners towards well-established pools.
>
> This is one reason I suggested using independent pay-to-preimage
> transactions[1].  Anyone who knows the preimage can mine the
> transaction, so it doesn't provide reputational advantage or direct
> economies of scale---pay-to-preimage is incentive equivalent to paying
> normal onchain transaction fees.  There is an indirect economy of
> scale---attackers are most likely to send the low-feerate
> preimage-containing transaction to just the largest pools, so small
> miners are unlikely to learn the preimage and thus unlikely to be able
> to claim the payment.  However, if the defense is effective, the attack
> should rarely happen and so this should not have a significant effect on
> mining profitability---unlike monitoring miner mempools which would have
> to be done continuously and forever.
>
> ZmnSCPxj noted that pay-to-preimage doesn't work with PTLCs.[2]  I was
> hoping one of Bitcoin's several inventive cryptographers would come
> along and describe how someone with an adaptor signature could use that
> information to create a pubkey that could be put into a transaction with
> a second output that OP_RETURN included the serialized adaptor
> signature.  The pubkey would be designed to be spendable by anyone with
> the final signature in a way that revealed the hidden value to the
> pubkey's creator, allowing them to resolve the PTLC.  But if that's
> fundamentally not possible, I think we could advocate for making
> pay-to-revealed-adaptor-signature possible using something like
> OP_CHECKSIGFROMSTACK.[3]
>
> [1]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002664.html
> [2]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002667.html
> [3] https://bitcoinops.org/en/topics/op_checksigfromstack/
>
> > Do you think it's unreasonable to expect at least some LN nodes to
> > also invest in running nodes in mining pools, ensuring that they learn
> > about attackers' txs and can potentially share discovered 

Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-20 Thread Bastien TEINTURIER via bitcoin-dev
Hello Dave and list,

Thanks for your quick answers!

> The attacker would be broadcasting the latest
> state, so the honest counterparty would only need to send one blind
> child.
>

Exactly, if the attacker submits an outdated transaction he would be
shooting himself in the foot,
as we could claim the revocation paths when seeing the transaction in a
block and get all the
channel funds (since the attacker's outputs will be CSV-locked).

> The only way your Bitcoin peer will relay your blind child
> is if it already has the parent transaction.
>

That's an excellent point that I missed in the blind CPFP carve-out trick!
I think this makes the
blind CPFP carve-out quite hard in practice (even using getdata - thanks
for detailing that option)...

In the worst case scenario where most miners' mempools contain the
attacker's tx and the rest of
the network's mempools contains the honest participant's tx, I think there
isn't much we can do.
We're simply missing information, so it looks like the only good solution
is to avoid being in that
situation by having a foot in miners' mempools. Do you think it's
unreasonable to expect at least
some LN nodes to also invest in running nodes in mining pools, ensuring
that they learn about
attackers' txs and can potentially share discovered preimages with the
network off-chain (by
gossiping preimages found in the mempool over LN)? I think that these
recent attacks show that
we need (at least some) off-chain nodes to be somewhat heavily invested in
on-chain operations
(layers can't be fully decoupled with the current security assumptions -
maybe Eltoo will help
change that in the future?).
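The off-chain preimage gossip idea above assumes a node can extract preimages from transactions it sees in the mempool. A hedged sketch of that extraction, assuming (as in Lightning) that the payment hash is SHA256(preimage); the witness stack layout and values here are hypothetical:

```python
import hashlib

def find_preimage(witness_stacks, payment_hash: bytes):
    """Scan a transaction's witness data for a 32-byte element whose
    SHA256 matches a known payment hash. Returns the preimage, or None
    if the transaction does not reveal it."""
    for stack in witness_stacks:  # one witness stack per input
        for item in stack:
            if len(item) == 32 and hashlib.sha256(item).digest() == payment_hash:
                return item
    return None

# Hypothetical HTLC-success witness: <sig> <preimage> <script>.
preimage = bytes(32)  # placeholder preimage an attacker's tx would reveal
payment_hash = hashlib.sha256(preimage).digest()
witnesses = [[b"\x01" * 71, preimage, b"\x02" * 80]]
assert find_preimage(witnesses, payment_hash) == preimage
```

A watching node would run this over incoming mempool transactions (e.g. via ZMQ notifications) against its set of outstanding payment hashes, and could then settle the upstream HTLC or gossip the preimage off-chain.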

Thank you for your time!
Bastien



Le ven. 19 juin 2020 à 22:53, David A. Harding  a écrit :

> On Fri, Jun 19, 2020 at 03:58:46PM -0400, David A. Harding via bitcoin-dev
> wrote:
> > I think you're assuming here that the attacker broadcast a particular
> > state.
>
> Whoops, I managed to confuse myself despite looking at Bastien's
> excellent explainer.  The attacker would be broadcasting the latest
> state, so the honest counterparty would only need to send one blind
> child.  However, the blind child will only be relayed by a Bitcoin peer
> if the peer also has the parent transaction (the latest state) and, if
> it has the parent transaction, you should be able to just getdata('tx',
> $txid) that transaction from the peer without CPFPing anything.  That
> will give you the preimage and so you can immediately resolve the HTLC
> with the upstream channel.
>
> Revising my conclusion from the previous post:
>
> I think the strongman argument for the attack would be that the attacker
> will be able to perform a targeted relay of the low-feerate
> preimage-containing transaction to just miners---everyone else on the
> network will receive the honest user's higher-feerate expired-timelock
> transaction.  Unless the honest user happens to have a connection to a
> miner's node, the user will neither be able to CPFP fee bump nor use
> getdata to retrieve the preimage.
>
> Sorry for the confusion.
>
> -Dave
>


Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-19 Thread Bastien TEINTURIER via bitcoin-dev
Good morning list,

Sorry for being (very) late to the party on that subject, but better late
than never.

A lot of ideas have been thrown at the problem and are scattered across
emails, IRC discussions,
and github issues. I've spent some time putting it all together in one
gist, hoping that it will
help stir the discussion forward as well as give newcomers all the
background they need to ramp up
on this issue and join the discussion, bringing new ideas to the table.

The gist is here, and I'd appreciate your feedback if I have wrongly
interpreted some of the ideas:
https://gist.github.com/t-bast/22320336e0816ca5578fdca4ad824d12

Readers of this list can probably directly skip to the "Future work"
section. I believe my
"alternative proposal" should loosely reflect Matt's proposal from the very
first mail of this
thread; note that I included anchors and new txs only in some places, as I
think they aren't
necessary everywhere.

My current state-of-mind (subject to change as I discover more potential
attacks) is that:

* The proposal to add more anchors and pre-signed txs adds non-negligible
complexity and hurts
small HTLCs, so it would be better if we didn't need it
* The blind CPFP carve-out trick is a one-shot, so you'll likely need to
pay a lot of fees for it to work, which still makes you lose money when an
attacker targets you (but the money goes to miners, not to the attacker -
unless he is the miner). It's potentially hard to estimate what fee you
should put into that blind CPFP carve-out because you have no idea what the
current fee of the pinned success-transaction package is, so it's unclear
whether that solution will really work in practice
* If we take a step back, the only attack we need to protect against is an
attacker pinning a
preimage transaction while preventing us from learning that preimage for at
least `N` blocks
(see the gist for the complete explanation). Please correct me if that
claim is incorrect as it
will invalidate my conclusion! Thus if we have:
* a high enough `cltv_expiry_delta`
* [off-chain preimage broadcast](
https://github.com/lightningnetwork/lightning-rfc/issues/783)
(or David's proposal to do it by sending txs that can be redeemed via only
the preimage)
* LN hubs (or any party commercially investing in running a lightning node)
participating in
various mining pools to help discover preimages
* decent mitigations for eclipse attacks
* then the official anchor outputs proposal should be safe enough and is
much simpler?
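The `cltv_expiry_delta` condition in the list above can be captured in a back-of-the-envelope check. All block counts here are hypothetical illustrations, not normative values:

```python
def min_cltv_expiry_delta(pinning_blocks: int,
                          reorg_margin: int = 6,
                          reaction_blocks: int = 12) -> int:
    """If an attacker can keep a preimage hidden for up to
    `pinning_blocks` blocks, the CLTV delta must leave room for a
    reorg-safety margin plus time to learn the preimage and confirm
    the claiming transaction before the upstream HTLC expires."""
    return pinning_blocks + reorg_margin + reaction_blocks

# e.g. if pinning can plausibly last ~126 blocks, a 144-block delta
# (roughly one day) would just cover it under these assumed margins:
assert min_cltv_expiry_delta(126) == 144
```

The hard part, of course, is bounding `pinning_blocks`: the gist's claim is precisely that the attack only succeeds if the preimage stays hidden for at least `N` blocks, so any mechanism that shrinks `N` (mempool watching, preimage gossip) directly relaxes the required delta.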

Thank you for reading, I hope the work I put into this gist will be useful
for some of you.

Bastien

Le ven. 24 avr. 2020 à 00:47, Matt Corallo via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> a écrit :

>
>
> On 4/23/20 8:46 AM, ZmnSCPxj wrote:
> >>> -   Miners, being economically rational, accept this proposal and
> include this in a block.
> >>>
> >>> The proposal by Matt is then:
> >>>
> >>> -   The hashlock branch should instead be:
> >>> -   B and C must agree, and show the preimage of some hash H (hashlock
> branch).
> >>> -   Then B and C agree that B provides a signature spending the
> hashlock branch, to a transaction with the outputs:
> >>> -   Normal payment to C.
> >>> -   Hook output to B, which B can use to CPFP this transaction.
> >>> -   Hook output to C, which C can use to CPFP this transaction.
> >>> -   B can still (somehow) not maintain a mempool, by:
> >>> -   B broadcasts its timelock transaction.
> >>> -   B tries to CPFP the above hashlock transaction.
> >>> -   If CPFP succeeds, it means the above hashlock transaction exists
> and B queries the peer for this transaction, extracting the preimage and
> claiming the A->B HTLC.
> >>
> >> Note that no query is required. The problem has been solved and the
> preimage-containing transaction should now confirm just fine.
> >
> > Ah, right, so it gets confirmed and the `blocksonly` B sees it in a
> block.
> >
> > Even if C hooks a tree of low-fee transactions on its hook output or
> normal payment, miners will still be willing to confirm this and the B hook
> CPFP transaction without, right?
>
> Correct, once it makes it into the mempool we can CPFP it and all the
> regular sub-package CPFP calculation will pick it
> and its descendants up. Of course this relies on it not spending any other
> unconfirmed inputs.
>