Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-10-14 Thread darosior via bitcoin-dev
Hi Gloria,

> In summary, it seems that the decisions that might still need attention/input 
> from devs on this mailing list are:
> 1. Whether we should start with multiple-parent-1-child or 1-parent-1-child.
> 2. Whether it's ok to require that the child not have conflicts with mempool 
> transactions.

I would like to point out that package relay is not only useful in Lightning's
adversarial scenarios, but also for a better user experience of CPFP.
Take for instance a wallet managing coins it can only spend using pre-signed 
transactions. It may batch these coins into a single transaction, but only 
after broadcasting the pre-signed tx for each of these coins.
So for 3 UTXOs it'd be:
coin1 -> pres. tx1 - |
coin2 -> pres. tx2 - | - - - spending transaction
coin3 -> pres. tx3 - |

Now, all these transactions are pre-signed with a fixed feerate, which might
be below the mempool minimum fee at the time of broadcast.
This is a use case for multiple-parents-1-child packages. This is also something
we do for Revault: you have pre-signed Unvault transactions, each with a CPFP
output [0]. Since their confirmation is not security-critical, you'd really
want to batch the child fee-paying tx.
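
To make the shape of such a package concrete, here is a minimal sketch (my own
illustration, not part of the proposal) that bundles the three pre-signed
parents with the single fee-paying child and asks a local node whether it would
accept them together. It assumes a Bitcoin Core node whose `testmempoolaccept`
RPC accepts an array of raw transactions; the hex strings are hypothetical
placeholders:

```python
import json
import subprocess

# Hypothetical hex-encoded transactions: three pre-signed parents at a fixed
# feerate, plus one child spending an output of each parent to bump the
# feerate of the whole package.
presigned_tx1 = "0200..."  # placeholder
presigned_tx2 = "0200..."  # placeholder
presigned_tx3 = "0200..."  # placeholder
spending_tx = "0200..."    # the fee-paying child, placeholder

# Parents first, child last: the package must be topologically sorted.
package = [presigned_tx1, presigned_tx2, presigned_tx3, spending_tx]

# Ask a local node whether it would accept the package as a whole; the parents
# alone may sit below the mempool minimum feerate, but the child's fees can
# carry them under package-aware acceptance.
result = subprocess.run(
    ["bitcoin-cli", "testmempoolaccept", json.dumps(package)],
    capture_output=True, text=True, check=True,
)
for entry in json.loads(result.stdout):
    print(entry["txid"], entry.get("allowed"), entry.get("reject-reason"))
```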

Regarding 2, I did not come up with a reason for dropping this rule (yet?),
since if you need to replace the child you can use individual submission, and
if you need to replace the parent, the child itself does not conflict anymore.

Thanks for the effort put into requesting feedback,
Antoine

[0] 
https://github.com/revault/practical-revault/blob/master/transactions.md#unvault_tx


Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-29 Thread Antoine Riard via bitcoin-dev
Hi Bastien

> In the case of LN, an attacker can game this and heavily restrict
your RBF attempts if you're only allowed to use confirmed inputs
and have many channels (and a limited number of confirmed inputs).
Otherwise you'll need node operators to pre-emptively split their
utxos into many small utxos just for fee bumping, which is inefficient...

I share the concern about splitting utxos into smaller ones.
IIRC, the carve-out tolerance is only 2 txn/10,000 vb. If one of your
counterparties attaches a junk branch on her own anchor output, are you
allowed to chain your self-owned unconfirmed CPFP?
I'm thinking about the "Chained CPFPs" topology described here:
https://github.com/rust-bitcoin/rust-lightning/issues/989.
Or do you have another L2 broadcast topology which could be safe w.r.t. our
current mempool logic? :)


On Mon, Sep 27, 2021 at 03:15, Bastien TEINTURIER wrote:

>> I think we could restrain package acceptance to only confirmed inputs for
>> now and revisit this point later? For LN-anchor, you can assume that the
>> fee-bumping UTXO feeding the CPFP is already
>> confirmed. Or are there currently-deployed use-cases which would benefit
>> from your proposed Rule #2?
>>
>
> I think constraining package acceptance to only confirmed inputs
> is very limiting and quite dangerous for L2 protocols.
>
> In the case of LN, an attacker can game this and heavily restrict
> your RBF attempts if you're only allowed to use confirmed inputs
> and have many channels (and a limited number of confirmed inputs).
> Otherwise you'll need node operators to pre-emptively split their
> utxos into many small utxos just for fee bumping, which is inefficient...
>
> Bastien
>
> On Mon, Sep 27, 2021 at 00:27, Antoine Riard via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi Gloria,
>>
>> Thanks for your answers,
>>
>> > In summary, it seems that the decisions that might still need
>> > attention/input from devs on this mailing list are:
>> > 1. Whether we should start with multiple-parent-1-child or
>> 1-parent-1-child.
>> > 2. Whether it's ok to require that the child not have conflicts with
>> > mempool transactions.
>>
>> Yes, for 1) it would be good to have input from more potential users of package
>> acceptance. And for 2) I think it's more a matter of clearer wording of the
>> proposal.
>>
>> However, see my final point on the relaxation around "unconfirmed inputs"
>> which might in fact alter our current block construction strategy.
>>
>> > Right, the fact that we essentially always choose the first-seen
>> witness is
>> > an unfortunate limitation that exists already. Adding package mempool
>> > accept doesn't worsen this, but the procedure in the future is to
>> replace
>> > the witness when it makes sense economically. We can also add logic to
>> > allow package feerate to pay for witness replacements as well. This is
>> > pretty far into the future, though.
>>
>> Yes I agree package mempool doesn't worsen this. And it's not an issue
>> for current LN as you can't significantly inflate a spending witness for
>> the 2-of-2 funding output.
>> However, it might be an issue for multi-party protocol where the spending
>> script has alternative branches with asymmetric valid witness weights.
>> Taproot should ease that kind of script so hopefully we would deploy
>> wtxid-replacement not too far in the future.
>>
>> > I could be misunderstanding, but an attacker wouldn't be able to
>> > batch-attack like this. Alice's package only conflicts with A' + D',
>> not A'
>> > + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>>
>> Yeah, I can be clearer: I think you have 2 pinning attack scenarios to
>> consider.
>>
>> In LN, if you're trying to confirm a commitment transaction to time-out
>> or claim on-chain a HTLC and the timelock is near-expiration, you should be
>> ready to pay in commitment+2nd-stage HTLC transaction fees as much as the
>> value offered by the HTLC.
>>
>> Following this security assumption, an attacker can exploit it by
>> targeting commitment transactions from different channels together,
>> blocking them under a high-fee child whose fee value
>> is equal to the top-value HTLC + 1. Victims' fee-bumping logic won't
>> overbid, as it's not worth offering fees beyond the HTLCs they're competing
>> for. Apart from observing mempool state, victims can't learn they're
>> targeted by the same attacker.
>>
>> To draw from the aforementioned topology, Mallory broadcasts A' + B' + C'
>> + D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, C'
>> conflicts with Caroll's P3. Let's assume P1 is confirming the top-value
>> HTLC of the set. If D' pays more than P1 + 1 in fees, it won't be rational for
>> Alice, Bob, or Caroll to keep offering competing feerates. Mallory will be
>> at a loss on stealing P1, as she has paid more in fees but will realize a
>> gain on P2+P3.
>>
>> In this model, Alice is allowed to evict those 2 transactions (A' + D')
>> but as she is 

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-27 Thread Bastien TEINTURIER via bitcoin-dev
>
> I think we could restrain package acceptance to only confirmed inputs for
> now and revisit this point later? For LN-anchor, you can assume that the
> fee-bumping UTXO feeding the CPFP is already
> confirmed. Or are there currently-deployed use-cases which would benefit
> from your proposed Rule #2?
>

I think constraining package acceptance to only confirmed inputs
is very limiting and quite dangerous for L2 protocols.

In the case of LN, an attacker can game this and heavily restrict
your RBF attempts if you're only allowed to use confirmed inputs
and have many channels (and a limited number of confirmed inputs).
Otherwise you'll need node operators to pre-emptively split their
utxos into many small utxos just for fee bumping, which is inefficient...

Bastien

On Mon, Sep 27, 2021 at 00:27, Antoine Riard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Gloria,
>
> Thanks for your answers,
>
> > In summary, it seems that the decisions that might still need
> > attention/input from devs on this mailing list are:
> > 1. Whether we should start with multiple-parent-1-child or
> 1-parent-1-child.
> > 2. Whether it's ok to require that the child not have conflicts with
> > mempool transactions.
>
> Yes, for 1) it would be good to have input from more potential users of package
> acceptance. And for 2) I think it's more a matter of clearer wording of the
> proposal.
>
> However, see my final point on the relaxation around "unconfirmed inputs"
> which might in fact alter our current block construction strategy.
>
> > Right, the fact that we essentially always choose the first-seen witness
> is
> > an unfortunate limitation that exists already. Adding package mempool
> > accept doesn't worsen this, but the procedure in the future is to replace
> > the witness when it makes sense economically. We can also add logic to
> > allow package feerate to pay for witness replacements as well. This is
> > pretty far into the future, though.
>
> Yes I agree package mempool doesn't worsen this. And it's not an issue for
> current LN as you can't significantly inflate a spending witness for the
> 2-of-2 funding output.
> However, it might be an issue for multi-party protocol where the spending
> script has alternative branches with asymmetric valid witness weights.
> Taproot should ease that kind of script so hopefully we would deploy
> wtxid-replacement not too far in the future.
>
> > I could be misunderstanding, but an attacker wouldn't be able to
> > batch-attack like this. Alice's package only conflicts with A' + D', not
> A'
> > + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>
> Yeah, I can be clearer: I think you have 2 pinning attack scenarios to
> consider.
>
> In LN, if you're trying to confirm a commitment transaction to time-out or
> claim on-chain a HTLC and the timelock is near-expiration, you should be
> ready to pay in commitment+2nd-stage HTLC transaction fees as much as the
> value offered by the HTLC.
>
> Following this security assumption, an attacker can exploit it by
> targeting commitment transactions from different channels together,
> blocking them under a high-fee child whose fee value
> is equal to the top-value HTLC + 1. Victims' fee-bumping logic won't
> overbid, as it's not worth offering fees beyond the HTLCs they're competing
> for. Apart from observing mempool state, victims can't learn they're
> targeted by the same attacker.
>
> To draw from the aforementioned topology, Mallory broadcasts A' + B' + C'
> + D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, C'
> conflicts with Caroll's P3. Let's assume P1 is confirming the top-value
> HTLC of the set. If D' pays more than P1 + 1 in fees, it won't be rational for
> Alice, Bob, or Caroll to keep offering competing feerates. Mallory will be
> at a loss on stealing P1, as she has paid more in fees but will realize a
> gain on P2+P3.
>
> In this model, Alice is allowed to evict those 2 transactions (A' + D')
> but as she is economically-bounded she won't succeed.
>
> Mallory is maliciously exploiting RBF rule 3 on absolute fee. I think this
> 1st pinning scenario is correct and "lucrative" when you sum the global
> gain/loss.
>
> There is a 2nd attack scenario where A + B + C + D, where D is the child
> of A,B,C. All those transactions are honestly issued by Alice. Once A + B +
> C + D are propagated in network mempools, Mallory is able to replace A + D
> with A' + D', where D' is paying a higher fee. This package A' + D' will
> confirm soon if D's feerate was compelling, but Mallory succeeds in delaying
> the confirmation
> of B + C for one or more blocks. As B + C are pre-signed commitments with
> a low-fee rate they won't confirm without Alice issuing a new child E.
> Mallory can repeat the same trick by broadcasting
> B' + E' and delay again the confirmation of C.
>
> If the remaining package pending HTLC has a higher-value than all the
> malicious fees over-bid, Mallory should realize a gain

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-26 Thread Antoine Riard via bitcoin-dev
Hi Gloria,

Thanks for your answers,

> In summary, it seems that the decisions that might still need
> attention/input from devs on this mailing list are:
> 1. Whether we should start with multiple-parent-1-child or
1-parent-1-child.
> 2. Whether it's ok to require that the child not have conflicts with
> mempool transactions.

Yes, for 1) it would be good to have input from more potential users of package
acceptance. And for 2) I think it's more a matter of clearer wording of the
proposal.

However, see my final point on the relaxation around "unconfirmed inputs"
which might in fact alter our current block construction strategy.

> Right, the fact that we essentially always choose the first-seen witness
is
> an unfortunate limitation that exists already. Adding package mempool
> accept doesn't worsen this, but the procedure in the future is to replace
> the witness when it makes sense economically. We can also add logic to
> allow package feerate to pay for witness replacements as well. This is
> pretty far into the future, though.

Yes I agree package mempool doesn't worsen this. And it's not an issue for
current LN as you can't significantly inflate a spending witness for the
2-of-2 funding output.
However, it might be an issue for multi-party protocol where the spending
script has alternative branches with asymmetric valid witness weights.
Taproot should ease that kind of script so hopefully we would deploy
wtxid-replacement not too far in the future.

> I could be misunderstanding, but an attacker wouldn't be able to
> batch-attack like this. Alice's package only conflicts with A' + D', not
A'
> + B' + C' + D'. She only needs to pay for evicting 2 transactions.

Yeah, I can be clearer: I think you have 2 pinning attack scenarios to
consider.

In LN, if you're trying to confirm a commitment transaction to time-out or
claim an HTLC on-chain and the timelock is near expiration, you should be
ready to pay in commitment+2nd-stage HTLC transaction fees as much as the
value offered by the HTLC.

Following this security assumption, an attacker can exploit it by targeting
commitment transactions from different channels together, blocking them
under a high-fee child whose fee value
is equal to the top-value HTLC + 1. Victims' fee-bumping logic won't
overbid, as it's not worth offering fees beyond the HTLCs they're competing
for. Apart from observing mempool state, victims can't learn they're targeted
by the same attacker.

To draw from the aforementioned topology, Mallory broadcasts A' + B' + C' +
D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, C'
conflicts with Caroll's P3. Let's assume P1 is confirming the top-value
HTLC of the set. If D' pays more than P1 + 1 in fees, it won't be rational for
Alice, Bob, or Caroll to keep offering competing feerates. Mallory will be
at a loss on stealing P1, as she has paid more in fees but will realize a
gain on P2+P3.

In this model, Alice is allowed to evict those 2 transactions (A' + D') but
as she is economically-bounded she won't succeed.

Mallory is maliciously exploiting RBF rule 3 on absolute fee. I think this
1st pinning scenario is correct and "lucrative" when you sum the global
gain/loss.
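
A back-of-the-envelope sketch of the accounting above, with purely illustrative
numbers (the HTLC values and fees are my own assumptions, and it ignores the
cost of A'/B'/C' themselves and the exact mechanism by which the pinned HTLCs
are captured):

```python
# Illustrative values in sats; none of these numbers come from the thread.
htlc_value = {"P1": 100_000, "P2": 60_000, "P3": 60_000}  # top HTLC protected by each victim package

# Each victim rationally won't bid more in fees than the HTLC value at stake.
victim_max_fee = dict(htlc_value)

# Mallory's shared child D' outbids the largest victim budget by 1 sat,
# pinning all three commitment packages at once.
mallory_fee = max(victim_max_fee.values()) + 1  # 100_001

# Simplified accounting: Mallory overpays relative to the single top-value
# channel (P1) but profits from the two others whose HTLCs she captures while
# their commitments are delayed.
loss_on_p1 = mallory_fee - htlc_value["P1"]          # 1 sat
gain_on_p2_p3 = htlc_value["P2"] + htlc_value["P3"]  # 120_000 sats
print("Mallory's net outcome:", gain_on_p2_p3 - loss_on_p1, "sats")
```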

There is a 2nd attack scenario with A + B + C + D, where D is the child of
A, B, and C. All those transactions are honestly issued by Alice. Once A + B + C
+ D are propagated in network mempools, Mallory is able to replace A + D
with A' + D', where D' is paying a higher fee. This package A' + D' will
confirm soon if D's feerate was compelling, but Mallory succeeds in delaying
the confirmation of B + C for one or more blocks. As B + C are pre-signed
commitments with a low feerate, they won't confirm without Alice issuing a new
child E. Mallory can repeat the same trick by broadcasting B' + E' and delay
the confirmation of C again.

If the remaining package's pending HTLC has a higher value than all the
malicious fee over-bids, Mallory should realize a gain. With this 2nd
pinning attack, the malicious entity buys confirmation delay of your
packaged-together commitments.

Assuming those attacks are correct, I'm leaning towards being conservative
with the LDK broadcast backend. Though once again, other L2 devs likely
have other use cases and opinions :)

>  B' only needs to pay for itself in this case.

Yes, I think it's a nice discount when the UTXO is single-owned. In the context
of a shared-owned UTXO (e.g. LN), you might not get it if there is an in-mempool
package already spending the UTXO, and you have to assume the worst-case
scenario, i.e. have B' commit enough fee to pay for the bandwidth of replacing
A'. I think we can't do that much for this case...
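
To make that worst-case concrete, here is a rough sketch of the fee budget B'
would need under my reading of the proposed package RBF rules (this is not a
specified formula, and all parameter names are my own): the package A + B' must
beat the absolute fees of the conflicting package it evicts and additionally
pay for its own relay bandwidth at the incremental relay feerate.

```python
def worst_case_child_fee(conflict_fees_sat, package_vsize_vb, parent_fee_sat,
                         incremental_relay_feerate=1.0):
    """Rough upper bound (in sats) on the fee the CPFP child B' must commit if
    the shared UTXO is already spent in remote mempools by a conflicting
    package: cover the absolute fees of everything being replaced, plus a
    bandwidth penalty proportional to our own package size, minus whatever the
    pre-signed parent already pays."""
    required_package_fee = sum(conflict_fees_sat) + incremental_relay_feerate * package_vsize_vb
    return max(0.0, required_package_fee - parent_fee_sat)

# Illustrative numbers only: conflicting A' + B'' pay 5k and 20k sats,
# our package A + B' is 800 vB, and the pre-signed A pays 1k sats.
print(worst_case_child_fee([5_000, 20_000], 800, 1_000))  # -> 24800.0
```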

> If a package meets feerate requirements as a
package, the parents in the transaction are allowed to replace-by-fee
mempool transactions. The child cannot replace mempool transactions."

I agree with the Mallory-vs-Alice case. Though if Alice broadcasts A+B' to
replace A+B because the first broadcast isn't satisfying anymore due to
mempool spike

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-23 Thread Gloria Zhao via bitcoin-dev
Hi Antoine,

Thanks as always for your input. I'm glad we agree on so much!

In summary, it seems that the decisions that might still need
attention/input from devs on this mailing list are:
1. Whether we should start with multiple-parent-1-child or 1-parent-1-child.
2. Whether it's ok to require that the child not have conflicts with
mempool transactions.

Responding to your comments...

> IIUC, you have package A+B, during the dedup phase early in
`AcceptMultipleTransactions`, if you observe same-txid-different-wtxid A'
and A' is higher feerate than A, you trim A and replace it with A'?

> I think this approach is safe; the one that appears unsafe to me is when
A' has a _lower_ feerate, even if A' is already accepted by our mempool?
In that case, IIRC, that would be a pinning.

Right, the fact that we essentially always choose the first-seen witness is
an unfortunate limitation that exists already. Adding package mempool
accept doesn't worsen this, but the procedure in the future is to replace
the witness when it makes sense economically. We can also add logic to
allow package feerate to pay for witness replacements as well. This is
pretty far into the future, though.

> It sounds uneconomical for an attacker but I think it's not when you
consider that you can "batch" attack against multiple honest
counterparties. E.g., Mallory broadcasts A' + B' + C' + D' where A' conflicts
with Alice's honest package P1, B' conflicts with Bob's honest package P2,
C' conflicts with Caroll's honest package P3. And D' is a high-fee child of
A' + B' + C'.

> If D' pays a higher fee than P1 or P2 or P3 but less than the sum of HTLCs
confirmed by P1+P2+P3, I think it's lucrative for the attacker?

I could be misunderstanding, but an attacker wouldn't be able to
batch-attack like this. Alice's package only conflicts with A' + D', not A'
+ B' + C' + D'. She only needs to pay for evicting 2 transactions.

> Do we assume that broadcasted packages are "honest" by default and that
the parent(s) always need the child to pass the fee checks, that way saving
the processing of individual transactions which are expected to fail in 99%
of cases or more ad hoc composition of packages at relay ?
> I think this point is quite dependent on the p2p packages format/logic
we'll end up on and that we should feel free to revisit it later ?

I think it's the opposite; there's no way for us to assume that p2p
packages will be "honest." I'd like to have two things before we expose on
P2P: (1) ensure that the amount of resources potentially allocated for
package validation isn't disproportionately higher than that of single
transaction validation and (2) only use package validation when we're
unsatisfied with the single validation result, e.g. we might get better
fees.
Yes, let's revisit this later :)

 > Yes, if you receive A+B, and A is already in-mempool, I agree you can
discard its feerate as B should pay for all fees checked on its own. Where
I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
have a fee high enough to cover the bandwidth penalty replacement
(`PaysForRBF`, 2nd check) of both A+B' or only B' ?

 B' only needs to pay for itself in this case.

> > Do we want the child to be able to replace mempool transactions as well?

> If we mean when you have replaceable A+B then A'+B' try to replace with a
higher-feerate ? I think that's exactly the case we need for Lightning as
A+B is coming from Alice and A'+B' is coming from Bob :/

Let me clarify this because I can see that my wording was ambiguous, and
then please let me know if it fits Lightning's needs?

In my proposal, I wrote "If a package meets feerate requirements as a
package, the parents in the transaction are allowed to replace-by-fee
mempool transactions. The child cannot replace mempool transactions." What
I meant was: the package can replace mempool transactions if any of the
parents conflict with mempool transactions. The child cannot conflict
with any mempool transactions.
The Lightning use case this attempts to address is: Alice and Mallory are
LN counterparties, and have packages A+B and A'+B', respectively. A and A'
are their commitment transactions and conflict with each other; they have
shared inputs and different txids.
B spends Alice's anchor output from A. B' spends Mallory's anchor output
from A'. Thus, B and B' do not conflict with each other.
Alice can broadcast her package, A+B, to replace Mallory's package, A'+B',
since B doesn't conflict with the mempool.
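
A tiny sketch of how I read that rule (simplified pseudocode, not the actual
Bitcoin Core logic, and the data shapes are my own assumptions): conflicts with
in-mempool transactions are only tolerated on the parents, never on the child.

```python
def package_conflicts_ok(package, mempool_spent_outpoints):
    """package: list of decoded txs, topologically sorted, child last.
    mempool_spent_outpoints: set of (txid, vout) already spent by mempool txs.
    Returns True if the package only conflicts with the mempool via its parents."""
    *parents, child = package
    # The child must not double-spend anything already in the mempool.
    for txin in child["vin"]:
        if (txin["txid"], txin["vout"]) in mempool_spent_outpoints:
            return False
    # Parents may conflict; those conflicts become replacement candidates that
    # the package feerate as a whole has to pay for.
    return True
```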

Would this be ok?

> The second option, a child of A'. In the LN case, I think the CPFP is
attached to one's anchor output.

While it would be nice to have full RBF, malleability of the child won't
block RBF here. If we're trying to replace A', we only require that A'
signals replaceability, and don't mind if its child doesn't.

> > B has an ancestor score of 10sat/vb and D has an
> > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than
B's,
> > it fails the proposed package RBF R

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-23 Thread Antoine Riard via bitcoin-dev
> Correct, if B+C is too low feerate to be accepted, we will reject it. I
> prefer this because it is incentive compatible: A can be mined by itself,
> so there's no reason to prefer A+B+C instead of A.
> As another way of looking at this, consider the case where we do accept
> A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
> capacity, we evict the lowest descendant feerate transactions, which are
> B+C in this case. This gives us the same resulting mempool, with A and not
> B+C.

I agree here. Doing otherwise, we might evict other mempool transactions in
`MempoolAccept::Finalize` with a higher feerate than B+C, while those
evicted transactions are the most compelling for block construction.

I thought at first that missing this acceptance requirement would break a
fee-bumping scheme like Parent-Pay-For-Child, where a high-fee parent is
attached to a child signed with SIGHASH_ANYONECANPAY, but in this case the
child fee is capturing the parent value. I can't think of other fee-bumping
schemes potentially affected. If they do exist, I would say they're wrong in
their design assumptions.

> If or when we have witness replacement, the logic is: if the individual
> transaction is enough to replace the mempool one, the replacement will
> happen during the preceding individual transaction acceptance, and
> deduplication logic will work. Otherwise, we will try to deduplicate by
> wtxid, see that we need a package witness replacement, and use the package
> feerate to evaluate whether this is economically rational.

IIUC, you have package A+B, during the dedup phase early in
`AcceptMultipleTransactions`, if you observe same-txid-different-wtxid A'
and A' is higher feerate than A, you trim A and replace it with A'?

I think this approach is safe; the one that appears unsafe to me is when A'
has a _lower_ feerate, even if A' is already accepted by our mempool? In
that case, IIRC, that would be a pinning.

Good to see progress on witness replacement before we see usage of Taproot
trees in multi-party contexts, where a malicious counterparty inflates
its witness to jam an honest spending.

(Note, the commit linked currently points nowhere :))


> Please note that A may replace A' even if A' has higher fees than A
> individually, because the proposed package RBF utilizes the fees and size
> of the entire package. This just requires E to pay enough fees, although
> this can be pretty high if there are also potential B' and C' competing
> commitment transactions that we don't know about.

Ah right, if the package acceptance waives `PaysMoreThanConflicts` for the
individual check on A, the honest package should replace the pinning
attempt. I've not fully parsed the proposed implementation yet.

Though note, I think it's still unsafe for a Lightning
multi-commitment-broadcast-as-one-package as a malicious A' might have an
absolute fee higher than E. It sounds uneconomical for
an attacker but I think it's not when you consider that you can "batch"
attack against multiple honest counterparties. E.g., Mallory broadcasts A' +
B' + C' + D' where A' conflicts with Alice's honest package P1, B'
conflicts with Bob's honest package P2, C' conflicts with Caroll's honest
package P3. And D' is a high-fee child of A' + B' + C'.

If D' pays a higher fee than P1 or P2 or P3 but less than the sum of HTLCs
confirmed by P1+P2+P3, I think it's lucrative for the attacker?

> So far, my understanding is that multi-parent-1-child is desired for
> batched fee-bumping (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and
> I've also seen your response which I have less context on (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
That
> being said, I am happy to create a new proposal for 1 parent + 1 child
> (which would be slightly simpler) and plan for moving to
> multi-parent-1-child later if that is preferred. I am very interested in
> hearing feedback on that approach.

I think batched fee-bumping is okay as long as you don't have
time-sensitive outputs encumbering your commitment transactions. For the
reasons mentioned above, I think that's unsafe.

What I'm worried about is L2 developers, potentially not aware of all
the mempool subtleties, blurring the difference and always batching their
broadcasts by default.

IMO, a good thing about restraining to 1-parent + 1-child is that we
artificially constrain the L2 design space for now and minimize the risks of
unsafe usage of the package API :)

I think that's a point where it would be relevant to have the opinion of
more L2 devs.

> I think there is a misunderstanding here - let me describe what I'm
> proposing we'd do in this situation: we'll try individual submission for
A,
> see that it fails due to "insufficient fees." Then, we'll try package
> validation for A+B and use package RBF. If A+B pays enough, it can still
> replace A'. If A fails for a bad signature, we won't look at B or A+B.
Does
> this meet your expectations?

Yes there was a misun

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-22 Thread Gloria Zhao via bitcoin-dev
Hi Bastien,

> A package A + C will be able to replace A' + B regardless of
> the weight of A' + B?

Correct, the weight of A' + B will not prevent A+C from replacing it (as
long as A+C pays enough fees). In example 2C, we would be able to replace A
with a package.

Best,
Gloria

On Wed, Sep 22, 2021 at 8:10 AM Bastien TEINTURIER  wrote:

> Great, thanks for this clarification!
>
> Can you confirm that this won't be an issue either with your
> example 2C (in your first set of diagrams)? If I understand it
> correctly it shouldn't, but I'd rather be 100% sure.
>
> A package A + C will be able to replace A' + B regardless of
> the weight of A' + B?
>
> Thanks,
> Bastien
>
> On Tue, Sep 21, 2021 at 18:42, Gloria Zhao wrote:
>
>> Hi Bastien,
>>
>> Excellent diagram :D
>>
>> > Here the issue is that a revoked commitment tx A' is pinned in other
>> > mempools, with a long chain of descendants (or descendants that reach
>> > the maximum replaceable size).
>> > We would really like A + C to be able to replace this pinned A'.
>> > We can't submit individually because A on its own won't replace A'...
>>
>> Right, this is a key motivation for having Package RBF. In this case, A+C
>> can replace A' + B1...B24.
>>
>> Due to the descendant limit (each node operator can increase it on their
>> own node, but the default is 25), A' should have no more than 25
>> descendants, even including CPFP carve out. As long as A only conflicts
>> with A', it won't be trying to replace more than 100 transactions. The
>> proposed package RBF will allow C to pay for A's conflicts, since their
>> package feerate is used in the fee comparisons. A is not a descendant of
>> A', so the existence of B1...B24 does not prevent the replacement.
>>
>> Best,
>> Gloria
>>
>> On Tue, Sep 21, 2021 at 4:18 PM Bastien TEINTURIER 
>> wrote:
>>
>>> Hi Gloria,
>>>
>>> > I believe this attack is mitigated as long as we attempt to submit
>>> transactions individually
>>>
>>> Unfortunately not, as there exists a pinning scenario in LN where a
>>> different commit tx is pinned, but you actually can't know which one.
>>>
>>> Since I really like your diagrams, I made one as well to illustrate:
>>>
>>> https://user-images.githubusercontent.com/31281497/134198114-5e9c6857-e8fc-405a-be57-18181d5e54cb.jpg
>>>
>>> Here the issue is that a revoked commitment tx A' is pinned in other
>>> mempools, with a long chain of descendants (or descendants that reach
>>> the maximum replaceable size).
>>>
>>> We would really like A + C to be able to replace this pinned A'.
>>> We can't submit individually because A on its own won't replace A'...
>>>
>>> > I would note that this proposal doesn't accommodate something like
>>> diagram B, where C is getting CPFP carve out and wants to bring a +1
>>>
>>> No worries, that case shouldn't be a concern.
>>> I believe any L2 protocol can always ensure it confirms such tx trees
>>> "one depth after the other" without impacting funds safety, so it
>>> only needs to ensure A + C can get into mempools.
>>>
>>> Thanks,
>>> Bastien
>>>
>>> On Tue, Sep 21, 2021 at 13:18, Gloria Zhao wrote:
>>>
 Hi Bastien,

 Thank you for your feedback!

 > In your example we have a parent transaction A already in the mempool
 > and an unrelated child B. We submit a package C + D where C spends
 > another of A's inputs. You're highlighting that this package may be
 > rejected because of the unrelated transaction(s) B.

 > The way I see this, an attacker can abuse this rule to ensure
 > transaction A stays pinned in the mempool without confirming by
 > broadcasting a set of child transactions that reach these limits
 > and pay low fees (where A would be a commit tx in LN).

 I believe you are describing a pinning attack in which your adversarial
 counterparty attempts to monopolize the mempool descendant limit of the
 shared  transaction A in order to prevent you from submitting a fee-bumping
 child C; I've tried to illustrate this as diagram A here:
 https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
 (please let me know if I'm misunderstanding).

 I believe this attack is mitigated as long as we attempt to submit
 transactions individually (and thus take advantage of CPFP carve out)
 before attempting package validation. So, in scenario A2, even if the
 mempool receives a package with A+C, it would deduplicate A, submit C as an
 individual transaction, and allow it due to the CPFP carve out exemption. A
 more general goal is: if a transaction would propagate successfully on its
 own now, it should still propagate regardless of whether it is included in
 a package. The best way to ensure this, as far as I can tell, is to always
 try to submit them individually first.

 I would note that this proposal doesn't accommodate something like
 diagram B, where C is getting CP

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-22 Thread Bastien TEINTURIER via bitcoin-dev
Great, thanks for this clarification!

Can you confirm that this won't be an issue either with your
example 2C (in your first set of diagrams)? If I understand it
correctly it shouldn't, but I'd rather be 100% sure.

A package A + C will be able to replace A' + B regardless of
the weight of A' + B?

Thanks,
Bastien

On Tue, Sep 21, 2021 at 18:42, Gloria Zhao wrote:

> Hi Bastien,
>
> Excellent diagram :D
>
> > Here the issue is that a revoked commitment tx A' is pinned in other
> > mempools, with a long chain of descendants (or descendants that reach
> > the maximum replaceable size).
> > We would really like A + C to be able to replace this pinned A'.
> > We can't submit individually because A on its own won't replace A'...
>
> Right, this is a key motivation for having Package RBF. In this case, A+C
> can replace A' + B1...B24.
>
> Due to the descendant limit (each node operator can increase it on their
> own node, but the default is 25), A' should have no more than 25
> descendants, even including CPFP carve out. As long as A only conflicts
> with A', it won't be trying to replace more than 100 transactions. The
> proposed package RBF will allow C to pay for A's conflicts, since their
> package feerate is used in the fee comparisons. A is not a descendant of
> A', so the existence of B1...B24 does not prevent the replacement.
>
> Best,
> Gloria
>
> On Tue, Sep 21, 2021 at 4:18 PM Bastien TEINTURIER 
> wrote:
>
>> Hi Gloria,
>>
>> > I believe this attack is mitigated as long as we attempt to submit
>> transactions individually
>>
>> Unfortunately not, as there exists a pinning scenario in LN where a
>> different commit tx is pinned, but you actually can't know which one.
>>
>> Since I really like your diagrams, I made one as well to illustrate:
>>
>> https://user-images.githubusercontent.com/31281497/134198114-5e9c6857-e8fc-405a-be57-18181d5e54cb.jpg
>>
>> Here the issue is that a revoked commitment tx A' is pinned in other
>> mempools, with a long chain of descendants (or descendants that reach
>> the maximum replaceable size).
>>
>> We would really like A + C to be able to replace this pinned A'.
>> We can't submit individually because A on its own won't replace A'...
>>
>> > I would note that this proposal doesn't accommodate something like
>> diagram B, where C is getting CPFP carve out and wants to bring a +1
>>
>> No worries, that case shouldn't be a concern.
>> I believe any L2 protocol can always ensure it confirms such tx trees
>> "one depth after the other" without impacting funds safety, so it
>> only needs to ensure A + C can get into mempools.
>>
>> Thanks,
>> Bastien
>>
>> On Tue, Sep 21, 2021 at 13:18, Gloria Zhao wrote:
>>
>>> Hi Bastien,
>>>
>>> Thank you for your feedback!
>>>
>>> > In your example we have a parent transaction A already in the mempool
>>> > and an unrelated child B. We submit a package C + D where C spends
>>> > another of A's inputs. You're highlighting that this package may be
>>> > rejected because of the unrelated transaction(s) B.
>>>
>>> > The way I see this, an attacker can abuse this rule to ensure
>>> > transaction A stays pinned in the mempool without confirming by
>>> > broadcasting a set of child transactions that reach these limits
>>> > and pay low fees (where A would be a commit tx in LN).
>>>
>>> I believe you are describing a pinning attack in which your adversarial
>>> counterparty attempts to monopolize the mempool descendant limit of the
>>> shared  transaction A in order to prevent you from submitting a fee-bumping
>>> child C; I've tried to illustrate this as diagram A here:
>>> https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
>>> (please let me know if I'm misunderstanding).
>>>
>>> I believe this attack is mitigated as long as we attempt to submit
>>> transactions individually (and thus take advantage of CPFP carve out)
>>> before attempting package validation. So, in scenario A2, even if the
>>> mempool receives a package with A+C, it would deduplicate A, submit C as an
>>> individual transaction, and allow it due to the CPFP carve out exemption. A
>>> more general goal is: if a transaction would propagate successfully on its
>>> own now, it should still propagate regardless of whether it is included in
>>> a package. The best way to ensure this, as far as I can tell, is to always
>>> try to submit them individually first.
>>>
>>> I would note that this proposal doesn't accommodate something like
>>> diagram B, where C is getting CPFP carve out and wants to bring a +1 (e.g.
>>> C has very low fees and is bumped by D). I don't think this is a use case
>>> since C should be the one fee-bumping A, but since we're talking about
>>> limitations around the CPFP carve out, this is it.
>>>
>>> Let me know if this addresses your concerns?
>>>
>>> Thanks,
>>> Gloria
>>>
>>> On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER 
>>> wrote:
>>>
 Hi Gloria,

 Thanks for this detailed p

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-21 Thread Gloria Zhao via bitcoin-dev
Hi Bastien,

Excellent diagram :D

> Here the issue is that a revoked commitment tx A' is pinned in other
> mempools, with a long chain of descendants (or descendants that reach
> the maximum replaceable size).
> We would really like A + C to be able to replace this pinned A'.
> We can't submit individually because A on its own won't replace A'...

Right, this is a key motivation for having Package RBF. In this case, A+C
can replace A' + B1...B24.

Due to the descendant limit (each node operator can increase it on their
own node, but the default is 25), A' should have no more than 25
descendants, even including CPFP carve out. As long as A only conflicts
with A', it won't be trying to replace more than 100 transactions. The
proposed package RBF will allow C to pay for A's conflicts, since their
package feerate is used in the fee comparisons. A is not a descendant of
A', so the existence of B1...B24 does not prevent the replacement.

Best,
Gloria

On Tue, Sep 21, 2021 at 4:18 PM Bastien TEINTURIER  wrote:

> Hi Gloria,
>
> > I believe this attack is mitigated as long as we attempt to submit
> transactions individually
>
> Unfortunately not, as there exists a pinning scenario in LN where a
> different commit tx is pinned, but you actually can't know which one.
>
> Since I really like your diagrams, I made one as well to illustrate:
>
> https://user-images.githubusercontent.com/31281497/134198114-5e9c6857-e8fc-405a-be57-18181d5e54cb.jpg
>
> Here the issue is that a revoked commitment tx A' is pinned in other
> mempools, with a long chain of descendants (or descendants that reach
> the maximum replaceable size).
>
> We would really like A + C to be able to replace this pinned A'.
> We can't submit individually because A on its own won't replace A'...
>
> > I would note that this proposal doesn't accommodate something like
> diagram B, where C is getting CPFP carve out and wants to bring a +1
>
> No worries, that case shouldn't be a concern.
> I believe any L2 protocol can always ensure it confirms such tx trees
> "one depth after the other" without impacting funds safety, so it
> only needs to ensure A + C can get into mempools.
>
> Thanks,
> Bastien
>
> On Tue, Sep 21, 2021 at 13:18, Gloria Zhao wrote:
>
>> Hi Bastien,
>>
>> Thank you for your feedback!
>>
>> > In your example we have a parent transaction A already in the mempool
>> > and an unrelated child B. We submit a package C + D where C spends
>> > another of A's inputs. You're highlighting that this package may be
>> > rejected because of the unrelated transaction(s) B.
>>
>> > The way I see this, an attacker can abuse this rule to ensure
>> > transaction A stays pinned in the mempool without confirming by
>> > broadcasting a set of child transactions that reach these limits
>> > and pay low fees (where A would be a commit tx in LN).
>>
>> I believe you are describing a pinning attack in which your adversarial
>> counterparty attempts to monopolize the mempool descendant limit of the
>> shared  transaction A in order to prevent you from submitting a fee-bumping
>> child C; I've tried to illustrate this as diagram A here:
>> https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
>> (please let me know if I'm misunderstanding).
>>
>> I believe this attack is mitigated as long as we attempt to submit
>> transactions individually (and thus take advantage of CPFP carve out)
>> before attempting package validation. So, in scenario A2, even if the
>> mempool receives a package with A+C, it would deduplicate A, submit C as an
>> individual transaction, and allow it due to the CPFP carve out exemption. A
>> more general goal is: if a transaction would propagate successfully on its
>> own now, it should still propagate regardless of whether it is included in
>> a package. The best way to ensure this, as far as I can tell, is to always
>> try to submit them individually first.
>>
>> I would note that this proposal doesn't accommodate something like
>> diagram B, where C is getting CPFP carve out and wants to bring a +1 (e.g.
>> C has very low fees and is bumped by D). I don't think this is a use case
>> since C should be the one fee-bumping A, but since we're talking about
>> limitations around the CPFP carve out, this is it.
>>
>> Let me know if this addresses your concerns?
>>
>> Thanks,
>> Gloria
>>
>> On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER 
>> wrote:
>>
>>> Hi Gloria,
>>>
>>> Thanks for this detailed post!
>>>
>>> The illustrations you provided are very useful for this kind of graph
>>> topology problems.
>>>
>>> The rules you lay out for package RBF look good to me at first glance
>>> as there are some subtle improvements compared to BIP 125.
>>>
>>> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>
>>> I have a question regarding this rule, as your example 2C could be
>>> concerning for LN (unless I didn't understand it correctly

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-21 Thread Bastien TEINTURIER via bitcoin-dev
Hi Gloria,

> I believe this attack is mitigated as long as we attempt to submit
transactions individually

Unfortunately not, as there exists a pinning scenario in LN where a
different commit tx is pinned, but you actually can't know which one.

Since I really like your diagrams, I made one as well to illustrate:
https://user-images.githubusercontent.com/31281497/134198114-5e9c6857-e8fc-405a-be57-18181d5e54cb.jpg

Here the issue is that a revoked commitment tx A' is pinned in other
mempools, with a long chain of descendants (or descendants that reach
the maximum replaceable size).

We would really like A + C to be able to replace this pinned A'.
We can't submit individually because A on its own won't replace A'...

> I would note that this proposal doesn't accommodate something like
diagram B, where C is getting CPFP carve out and wants to bring a +1

No worries, that case shouldn't be a concern.
I believe any L2 protocol can always ensure it confirms such tx trees
"one depth after the other" without impacting funds safety, so it
only needs to ensure A + C can get into mempools.

Thanks,
Bastien

On Tue, Sep 21, 2021 at 13:18, Gloria Zhao wrote:

> Hi Bastien,
>
> Thank you for your feedback!
>
> > In your example we have a parent transaction A already in the mempool
> > and an unrelated child B. We submit a package C + D where C spends
> > another of A's inputs. You're highlighting that this package may be
> > rejected because of the unrelated transaction(s) B.
>
> > The way I see this, an attacker can abuse this rule to ensure
> > transaction A stays pinned in the mempool without confirming by
> > broadcasting a set of child transactions that reach these limits
> > and pay low fees (where A would be a commit tx in LN).
>
> I believe you are describing a pinning attack in which your adversarial
> counterparty attempts to monopolize the mempool descendant limit of the
> shared  transaction A in order to prevent you from submitting a fee-bumping
> child C; I've tried to illustrate this as diagram A here:
> https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
> (please let me know if I'm misunderstanding).
>
> I believe this attack is mitigated as long as we attempt to submit
> transactions individually (and thus take advantage of CPFP carve out)
> before attempting package validation. So, in scenario A2, even if the
> mempool receives a package with A+C, it would deduplicate A, submit C as an
> individual transaction, and allow it due to the CPFP carve out exemption. A
> more general goal is: if a transaction would propagate successfully on its
> own now, it should still propagate regardless of whether it is included in
> a package. The best way to ensure this, as far as I can tell, is to always
> try to submit them individually first.
>
> I would note that this proposal doesn't accommodate something like diagram
> B, where C is getting CPFP carve out and wants to bring a +1 (e.g. C has
> very low fees and is bumped by D). I don't think this is a use case since C
> should be the one fee-bumping A, but since we're talking about limitations
> around the CPFP carve out, this is it.
>
> Let me know if this addresses your concerns?
>
> Thanks,
> Gloria
>
> On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER 
> wrote:
>
>> Hi Gloria,
>>
>> Thanks for this detailed post!
>>
>> The illustrations you provided are very useful for this kind of graph
>> topology problems.
>>
>> The rules you lay out for package RBF look good to me at first glance
>> as there are some subtle improvements compared to BIP 125.
>>
>> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>
>> I have a question regarding this rule, as your example 2C could be
>> concerning for LN (unless I didn't understand it correctly).
>>
>> This also touches on the package RBF rule 5 ("The package cannot
>> replace more than 100 mempool transactions.")
>>
>> In your example we have a parent transaction A already in the mempool
>> and an unrelated child B. We submit a package C + D where C spends
>> another of A's inputs. You're highlighting that this package may be
>> rejected because of the unrelated transaction(s) B.
>>
>> The way I see this, an attacker can abuse this rule to ensure
>> transaction A stays pinned in the mempool without confirming by
>> broadcasting a set of child transactions that reach these limits
>> and pay low fees (where A would be a commit tx in LN).
>>
>> We had to create the CPFP carve-out rule explicitly to work around
>> this limitation, and I think it would be necessary for package RBF
>> as well, because in such cases we do want to be able to submit a
>> package A + C where C pays high fees to speed up A's confirmation,
>> regardless of unrelated unconfirmed children of A...
>>
>> We could submit only C to benefit from the existing CPFP carve-out
>> rule, but that wouldn't work if our local mempool doesn't have A yet

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-21 Thread Gloria Zhao via bitcoin-dev
Hi Bastien,

Thank you for your feedback!

> In your example we have a parent transaction A already in the mempool
> and an unrelated child B. We submit a package C + D where C spends
> another of A's inputs. You're highlighting that this package may be
> rejected because of the unrelated transaction(s) B.

> The way I see this, an attacker can abuse this rule to ensure
> transaction A stays pinned in the mempool without confirming by
> broadcasting a set of child transactions that reach these limits
> and pay low fees (where A would be a commit tx in LN).

I believe you are describing a pinning attack in which your adversarial
counterparty attempts to monopolize the mempool descendant limit of the
shared  transaction A in order to prevent you from submitting a fee-bumping
child C; I've tried to illustrate this as diagram A here:
https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
(please let me know if I'm misunderstanding).

I believe this attack is mitigated as long as we attempt to submit
transactions individually (and thus take advantage of CPFP carve out)
before attempting package validation. So, in scenario A2, even if the
mempool receives a package with A+C, it would deduplicate A, submit C as an
individual transaction, and allow it due to the CPFP carve out exemption. A
more general goal is: if a transaction would propagate successfully on its
own now, it should still propagate regardless of whether it is included in
a package. The best way to ensure this, as far as I can tell, is to always
try to submit them individually first.
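
In pseudocode, the ordering described above looks roughly like this (a sketch
of the intended control flow; `accept_single`, `accept_as_package` and the
membership test by txid are hypothetical helpers, not the draft
implementation's API):

```python
def accept_package(package, mempool):
    """Sketch: try transactions individually first, then fall back to the
    package for whatever could not propagate on its own."""
    leftovers = []
    for tx in package:
        if tx.txid in mempool:
            continue                      # deduplicate: we already have it
        if mempool.accept_single(tx):     # normal policy, CPFP carve out included
            continue
        leftovers.append(tx)              # e.g. fails "insufficient fee" on its own
    if not leftovers:
        return True
    # Evaluate whatever remains together, using the aggregate package feerate.
    return mempool.accept_as_package(leftovers)
```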

I would note that this proposal doesn't accommodate something like diagram
B, where C is getting CPFP carve out and wants to bring a +1 (e.g. C has
very low fees and is bumped by D). I don't think this is a use case since C
should be the one fee-bumping A, but since we're talking about limitations
around the CPFP carve out, this is it.

Let me know if this addresses your concerns?

Thanks,
Gloria

On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER 
wrote:

> Hi Gloria,
>
> Thanks for this detailed post!
>
> The illustrations you provided are very useful for this kind of graph
> topology problems.
>
> The rules you lay out for package RBF look good to me at first glance
> as there are some subtle improvements compared to BIP 125.
>
> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>
> I have a question regarding this rule, as your example 2C could be
> concerning for LN (unless I didn't understand it correctly).
>
> This also touches on the package RBF rule 5 ("The package cannot
> replace more than 100 mempool transactions.")
>
> In your example we have a parent transaction A already in the mempool
> and an unrelated child B. We submit a package C + D where C spends
> another of A's inputs. You're highlighting that this package may be
> rejected because of the unrelated transaction(s) B.
>
> The way I see this, an attacker can abuse this rule to ensure
> transaction A stays pinned in the mempool without confirming by
> broadcasting a set of child transactions that reach these limits
> and pay low fees (where A would be a commit tx in LN).
>
> We had to create the CPFP carve-out rule explicitly to work around
> this limitation, and I think it would be necessary for package RBF
> as well, because in such cases we do want to be able to submit a
> package A + C where C pays high fees to speed up A's confirmation,
> regardless of unrelated unconfirmed children of A...
>
> We could submit only C to benefit from the existing CPFP carve-out
> rule, but that wouldn't work if our local mempool doesn't have A yet,
> but other remote mempools do.
>
> Is my concern justified? Is this something that we should dig into a
> bit deeper?
>
> Thanks,
> Bastien
>
> On Thu, Sep 16, 2021 at 09:55, Gloria Zhao via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi there,
>>
>> I'm writing to propose a set of mempool policy changes to enable package
>> validation (in preparation for package relay) in Bitcoin Core. These
>> would not
>> be consensus or P2P protocol changes. However, since mempool policy
>> significantly affects transaction propagation, I believe this is relevant
>> for
>> the mailing list.
>>
>> My proposal enables packages consisting of multiple parents and 1 child.
>> If you
>> develop software that relies on specific transaction relay assumptions
>> and/or
>> are interested in using package relay in the future, I'm very interested
>> to hear
>> your feedback on the utility or restrictiveness of these package policies
>> for
>> your use cases.
>>
>> A draft implementation of this proposal can be found in [Bitcoin Core
>> PR#22290][1].
>>
>> An illustrated version of this post can be found at
>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>> I have also linked the images below.
>>
>> ## Background
>>
>> Feel free to skip this se

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-20 Thread Gloria Zhao via bitcoin-dev
Hi Antoine,

First of all, thank you for the thorough review. I appreciate your insight
on LN requirements.

> IIUC, you have a package A+B+C submitted for acceptance and A is already
in your mempool. You trim out A from the package and then evaluate B+C.

> I think this might be an issue if A is the higher-fee element of the ABC
package. B+C package fees might be under the mempool min fee and will be
rejected, potentially breaking the acceptance expectations of the package
issuer ?

Correct, if B+C is too low feerate to be accepted, we will reject it. I
prefer this because it is incentive compatible: A can be mined by itself,
so there's no reason to prefer A+B+C instead of A.
As another way of looking at this, consider the case where we do accept
A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
capacity, we evict the lowest descendant feerate transactions, which are
B+C in this case. This gives us the same resulting mempool, with A and not
B+C.
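
A small numeric illustration of that equivalence (the fees and sizes are made
up): whether we reject B+C up front or accept A+B+C and later evict by lowest
descendant feerate at capacity, we end up with the same mempool.

```python
# Illustrative fees (sats) and sizes (vbytes); not taken from the thread.
txs = {"A": (10_000, 200), "B": (50, 150), "C": (50, 150)}

def feerate(names):
    fee = sum(txs[n][0] for n in names)
    vsize = sum(txs[n][1] for n in names)
    return fee / vsize

print(f"A alone:         {feerate(['A']):.2f} sat/vB")       # mineable by itself
print(f"B+C subpackage:  {feerate(['B', 'C']):.2f} sat/vB")  # below a 1 sat/vB mempool min fee
# Accepting A+B+C and later evicting the lowest descendant-feerate transactions
# at capacity would remove B+C again, leaving only A -- the same mempool as if
# B+C had been rejected in the first place.
```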


> Further, I think the dedup should be done on wtxid, as you might have
multiple valid witnesses. Though with varying vsizes and as such offering
different feerates.

I agree that variations of the same package with different witnesses is a
case that must be handled. I consider witness replacement to be a project
that can be done in parallel to package mempool acceptance because being
able to accept packages does not worsen the problem of a
same-txid-different-witness "pinning" attack.

If or when we have witness replacement, the logic is: if the individual
transaction is enough to replace the mempool one, the replacement will
happen during the preceding individual transaction acceptance, and
deduplication logic will work. Otherwise, we will try to deduplicate by
wtxid, see that we need a package witness replacement, and use the package
feerate to evaluate whether this is economically rational.

See the #22290 "handle package transactions already in mempool" commit (
https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff),
which handles the case of same-txid-different-witness by simply using the
transaction in the mempool for now, with TODOs for what I just described.
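
Roughly, the deduplication logic described above looks like this (a sketch of
the described behaviour; `get_by_txid` is a hypothetical helper, and the
witness-replacement branch is the future TODO rather than what the draft PR
does today):

```python
def deduplicate(package, mempool):
    """Return the package transactions that still need (package) validation."""
    todo = []
    for tx in package:
        entry = mempool.get_by_txid(tx.txid)
        if entry is None:
            todo.append(tx)              # not in the mempool yet
        elif entry.wtxid == tx.wtxid:
            continue                     # exact duplicate: skip, it's already there
        else:
            # same-txid-different-witness: for now, just use the mempool's
            # version; the future TODO is to compare feerates (individually or
            # at the package level) and decide whether a witness replacement
            # is economically rational.
            continue
    return todo
```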


> I'm not clearly understanding the accepted topologies. By "parent and
child to share a parent", do you mean the set of transactions A, B, C,
where B is spending A and C is spending A and B would be correct ?

Yes, that is what I meant. Yes, that would be a valid package under these
rules.

> If yes, is there a width-limit introduced or we fallback on
MAX_PACKAGE_COUNT=25 ?

No, there is no limit on connectivity other than "child with all
unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and child's
in-mempool + in-package ancestor limits.
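
In other words, the structural checks are roughly the following (a simplified
sketch, not the PR's exact code; the transaction objects are hypothetical, and
verifying that the child has *all* of its unconfirmed parents in the package,
plus the ancestor/descendant size limits, needs mempool/UTXO state that is
omitted here):

```python
MAX_PACKAGE_COUNT = 25

def check_package_topology(package):
    """Sketch: the package is one child plus parents, and every non-child
    transaction must actually be a parent of the child."""
    if not package or len(package) > MAX_PACKAGE_COUNT:
        return False
    *parents, child = package
    parent_txids = {p.txid for p in parents}
    spent_txids = {txin.prevout_txid for txin in child.vin}
    # Every in-package parent must be spent by the child.
    return parent_txids <= spent_txids
```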


> Considering the current Core mempool acceptance rules, I think CPFP
batching is unsafe for LN time-sensitive closures. A malicious tx-relay
jamming attack successful on one channel commitment transaction would
contaminate the remaining commitments sharing the same package.

> E.g., you broadcast the package A+B+C+D+E where A, B, C, D are commitment
transactions and E is a shared CPFP. If a malicious A' transaction has a
better feerate than A, the whole package acceptance will fail. Even if A'
confirms in the following block, the propagation and confirmation of B+C+D
have been delayed. This could lead to a loss of funds.

Please note that A may replace A' even if A' has higher fees than A
individually, because the proposed package RBF utilizes the fees and size
of the entire package. This just requires E to pay enough fees, although
this can be pretty high if there are also potential B' and C' competing
commitment transactions that we don't know about.
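
A quick illustration with invented numbers of why A can replace a higher-fee A'
here, simplified to a single commitment plus the shared child E (my own reading
of the proposed package RBF fee comparison): the check uses the aggregate fees
and size of the package against what it replaces.

```python
# Invented values: fees in sats, sizes in vbytes.
A  = {"fee": 1_000,  "vsize": 300}   # honest commitment, low fee
Ap = {"fee": 5_000,  "vsize": 400}   # conflicting commitment A' with higher absolute fee
E  = {"fee": 30_000, "vsize": 150}   # shared CPFP child

package_fee = A["fee"] + E["fee"]
package_vsize = A["vsize"] + E["vsize"]
incremental_feerate = 1  # sat/vB, illustrative bandwidth rate

# Package RBF, as I understand the proposal: the package's absolute fees must
# exceed the conflicts' fees plus a bandwidth penalty on the package's own size.
replaces = package_fee > Ap["fee"] + incremental_feerate * package_vsize
print(f"package feerate: {package_fee / package_vsize:.1f} sat/vB, replaces A'? {replaces}")
```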


> IMHO, I'm leaning towards deploying during a first phase
1-parent/1-child. I think it's the most conservative step still improving
second-layer safety.

So far, my understanding is that multi-parent-1-child is desired for
batched fee-bumping (
https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and
I've also seen your response which I have less context on (
https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That
being said, I am happy to create a new proposal for 1 parent + 1 child
(which would be slightly simpler) and plan for moving to
multi-parent-1-child later if that is preferred. I am very interested in
hearing feedback on that approach.


> Say A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats
> and A' pays 100 sats. If we apply individual RBF on A, A+B acceptance
> fails. For this reason I think the individual RBF should be bypassed and
> only the package RBF should apply?

I think there is a misunderstanding here - let me describe what I'm
proposing we'd do in this situation: we'll try individual submission for A,
see that it fails due to "insufficient fees." Then, we'll try package
validation for A+B and use package RBF. If A+B pays enough, it can still
replace A'. If A fails for a bad signatur
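A rough sketch of that ordering (names are illustrative, not the actual PR
logic):

    def submit(package, mempool):
        fee_failures, other_failures = [], []
        for tx in package:
            ok, reason = mempool.try_accept_single(tx)  # individual RBF applies here
            if not ok and reason == "insufficient fees":
                fee_failures.append(tx)
            elif not ok:
                other_failures.append((tx, reason))
        if other_failures:
            # e.g. an invalid signature: no amount of package fees can fix this.
            return "package rejected"
        if not fee_failures:
            return "all transactions accepted individually"
        # Only fee-related failures: evaluate the package as a whole, applying
        # package RBF (combined fees and size) against conflicts such as A'.
        return mempool.try_accept_package(package)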

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-20 Thread Bastien TEINTURIER via bitcoin-dev
Hi Gloria,

Thanks for this detailed post!

The illustrations you provided are very useful for this kind of graph
topology problem.

The rules you lay out for package RBF look good to me at first glance
as there are some subtle improvements compared to BIP 125.

> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
> `MAX_PACKAGE_SIZE=101KvB` total size [8]

I have a question regarding this rule, as your example 2C could be
concerning for LN (unless I didn't understand it correctly).

This also touches on the package RBF rule 5 ("The package cannot
replace more than 100 mempool transactions.")

In your example we have a parent transaction A already in the mempool
and an unrelated child B. We submit a package C + D where C spends
another of A's inputs. You're highlighting that this package may be
rejected because of the unrelated transaction(s) B.

The way I see this, an attacker can abuse this rule to ensure
transaction A stays pinned in the mempool without confirming by
broadcasting a set of child transactions that reach these limits
and pay low fees (where A would be a commit tx in LN).
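For example, taking the rule 5 limit as one way to reach "these limits"
(the counts below are made up for illustration):

    MAX_REPLACEMENTS = 100  # package RBF rule 5

    # A is the commit tx the attacker wants to keep pinned; they broadcast
    # 100 low-fee descendants of A.
    descendants_of_A = 100

    # An honest package {C, D} that conflicts with A would have to evict A plus
    # all of its descendants, which exceeds the limit, so A stays pinned.
    to_replace = 1 + descendants_of_A      # 101
    print(to_replace <= MAX_REPLACEMENTS)  # False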

We had to create the CPFP carve-out rule explicitly to work around
this limitation, and I think it would be necessary for package RBF
as well, because in such cases we do want to be able to submit a
package A + C where C pays high fees to speed up A's confirmation,
regardless of unrelated unconfirmed children of A...

We could submit only C to benefit from the existing CPFP carve-out
rule, but that wouldn't work if our local mempool doesn't have A yet
while other remote mempools do.

Is my concern justified? Is this something that we should dig into a
bit deeper?

Thanks,
Bastien

On Thu, Sep 16, 2021 at 09:55, Gloria Zhao via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi there,
>
> I'm writing to propose a set of mempool policy changes to enable package
> validation (in preparation for package relay) in Bitcoin Core. These would
> not be consensus or P2P protocol changes. However, since mempool policy
> significantly affects transaction propagation, I believe this is relevant
> for the mailing list.
>
> My proposal enables packages consisting of multiple parents and 1 child.
> If you develop software that relies on specific transaction relay
> assumptions and/or are interested in using package relay in the future,
> I'm very interested to hear your feedback on the utility or
> restrictiveness of these package policies for your use cases.
>
> A draft implementation of this proposal can be found in [Bitcoin Core
> PR#22290][1].
>
> An illustrated version of this post can be found at
> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
> I have also linked the images below.
>
> ## Background
>
> Feel free to skip this section if you are already familiar with mempool
> policy and package relay terminology.
>
> ### Terminology Clarifications
>
> * Package = an ordered list of related transactions, representable by a
>   Directed Acyclic Graph.
> * Package Feerate = the total modified fees divided by the total virtual
>   size of all transactions in the package.
>   - Modified fees = a transaction's base fees + fee delta applied by the
>     user with `prioritisetransaction`. As such, we expect this to vary
>     across mempools.
>   - Virtual Size = the maximum of virtual sizes calculated using [BIP141
>     virtual size][2] and sigop weight. [Implemented here in Bitcoin Core][3].
>   - Note that feerate is not necessarily based on the base fees and
>     serialized size.
>
> * Fee-Bumping = user/wallet actions that take advantage of miner incentives
>   to boost a transaction's candidacy for inclusion in a block, including
>   Child Pays for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our
>   intention in mempool policy is to recognize when the new transaction is
>   more economical to mine than the original one(s) but not open DoS
>   vectors, so there are some limitations.
>
> ### Policy
>
> The purpose of the mempool is to store the best (to be most
> incentive-compatible with miners, highest feerate) candidates for inclusion
> in a block. Miners use the mempool to build block templates. The mempool is
> also useful as a cache for boosting block relay and validation performance,
> aiding transaction relay, and generating feerate estimations.
>
> Ideally, all consensus-valid transactions paying reasonable fees should
> make it to miners through normal transaction relay, without any special
> connectivity or relationships with miners. On the other hand, nodes do not
> have unlimited resources, and a P2P network designed to let any honest node
> broadcast their transactions also exposes the transaction validation engine
> to DoS attacks from malicious peers.
>
> As such, for unconfirmed transactions we are considering for our mempool,
> we apply a set of validation rules in addition to consensus, primarily to
> protect us

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-19 Thread Antoine Riard via bitcoin-dev
Hi Gloria,

> A package may contain transactions that are already in the mempool. We
> remove ("deduplicate") those transactions from the package for the
> purposes of package mempool acceptance. If a package is empty after
> deduplication, we do nothing.

IIUC, you have a package A+B+C submitted for acceptance and A is already in
your mempool. You trim out A from the package and then evaluate B+C.

I think this might be an issue if A is the highest-fee element of the ABC
package. B+C package fees might be under the mempool min fee and will be
rejected, potentially breaking the acceptance expectations of the package
issuer ?
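For example, with made-up numbers (100 vB per transaction):

    fee_A, fee_B, fee_C = 1_000, 100, 100   # sats
    mempool_min_feerate = 2.0               # sat/vB

    abc_feerate = (fee_A + fee_B + fee_C) / 300  # 4.0 sat/vB, would be accepted
    bc_feerate = (fee_B + fee_C) / 200           # 1.0 sat/vB, rejected once A is trimmed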

Further, I think the dedup should be done on wtxid, as you might have
multiple valid witnesses, though with varying vsizes and as such offering
different feerates.

E.g., you're going to evaluate the package A+B and A' is already in your
mempool with a bigger valid witness. You trim A based on txid, then you
evaluate A'+B, which fails the fee checks. However, evaluating A+B would
have been a success.

AFAICT, the dedup rationale would be to save on CPU time / disk I/O, to avoid
repeated signature verification and parent UTXO fetches? Can we achieve
the same goal by bypassing tx-level checks for already-in-mempool
transactions while preserving the package integrity for package-level checks?

> Note that it's possible for the parents to be indirect
> descendants/ancestors of one another, or for parent and child to share a
> parent, so we cannot make any other topology assumptions.

I'm not sure I clearly understand the accepted topologies. By "parent and
child to share a parent", do you mean that the set of transactions A, B, C,
where B spends A and C spends both A and B, would be correct?

If yes, is there a width limit introduced, or do we fall back on
MAX_PACKAGE_COUNT=25?

IIRC, one rationale for this topology limitation was to lower the
DoS risks when potentially deploying p2p package relay.

Considering the current Core's mempool acceptance rules, I think CPFP
batching is unsafe for LN time-sensitive closure. A successful malicious
tx-relay jamming of one channel commitment transaction would contaminate
the remaining commitments sharing the same package.

E.g., you broadcast the package A+B+C+D+E where A, B, C, D are commitment
transactions and E is a shared CPFP. If a malicious A' transaction has a
better feerate than A, the whole package acceptance will fail. Even if A'
confirms in the following block, the propagation and confirmation of B+C+D
have been delayed. This could result in a loss of funds.

That said, if you're broadcasting commitment transactions without
time-sensitive HTLC outputs, I think the batching is effectively a fee
saving as you don't have to duplicate the CPFP.

IMHO, I'm leaning towards deploying 1-parent/1-child during a first phase.
I think it's the most conservative step that still improves second-layer
safety.

> *Rationale*: It would be incorrect to use the fees of transactions that are
> already in the mempool, as we do not want a transaction's fees to be
> double-counted for both its individual RBF and package RBF.

I'm unsure about the logical order of the checks proposed.

Say A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats
and A' pays 100 sats. If we apply individual RBF on A, A+B acceptance
fails. For this reason I think the individual RBF should be bypassed and
only the package RBF should apply?

Note this situation is plausible: with the current LN design, your
counterparty can have a commitment transaction with a better fee just by
selecting a higher `dust_limit_satoshis` than yours.

> Examples F and G [14] show the same package, but P1 is submitted
> individually before the package in example G. In example F, we can see
> that the 300vB package pays an additional 200sat in fees, which is not
> enough to pay for its own bandwidth (BIP125#4). In example G, we can see
> that P1 pays enough to replace M1, but using P1's fees again during
> package submission would make it look like a 300sat increase for a 200vB
> package. Even including its fees and size would not be sufficient in this
> example, since the 300sat looks like enough for the 300vB package. The
> calculation after deduplication is 100sat increase for a package of size
> 200vB, which correctly fails BIP125#4. Assume all transactions have a
> size of 100vB.

What problem are you trying to solve with the package feerate *after* dedup
rule?

My understanding is that an in-package transaction might already be in the
mempool. Therefore, to compute a correct RBF replacement penalty, the vsize
of this transaction could be discarded, lowering the cost of package RBF.
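A sketch of the check as I understand it (names are illustrative, not the
actual implementation):

    def bip125_rule4_after_dedup(package, mempool, replaced_fees, incremental_feerate=1.0):
        # Deduplicate first, then compute the fee delta and the vsize that must
        # be "paid for" using only the transactions not already in the mempool.
        new_txs = [tx for tx in package if not mempool.contains(tx.txid)]
        new_fees = sum(tx.fee for tx in new_txs)
        new_vsize = sum(tx.vsize for tx in new_txs)
        return new_fees - replaced_fees >= incremental_feerate * new_vsize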

If we keep a "safe" dedup mechanism (see my point above), I think this
discount is justified, as the validation cost of node operators is paid for
?

> The child cannot replace mempool transactions.

Let's say you issue package A+B, then package C+B', where B' is a child of
both A and C. Would this rule fail the acceptance of C+B'?

I t