Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-11-02 Thread Johan Torås Halseth via bitcoin-dev
On Mon, Oct 28, 2019 at 6:16 PM David A. Harding  wrote:

> A parent transaction near the limit of 100,000 vbytes could have almost
> 10,000 outputs paying OP_TRUE (10 vbytes per output).  If the children
> were limited to 10,000 vbytes each (the current max carve-out size),
> that allows relaying 100 mega-vbytes or nearly 400 MB data size (larger
> than the default maximum mempool size in Bitcoin Core).
>

Thanks, Dave, I wasn't aware the limits would allow this many outputs. And
as your calculation shows, this opens up the potential for free relay of
large amounts of data.

We could start special-casing to only allow this for "LN commitment-like"
transactions, but that would be an application-specific change, and your
calculation shows that even with the BOLT2 numbers there still exist cases
with a large number of children.

We are moving forward with adding a 1-block delay to all outputs to utilize
the current carve-out rule, and the changes aren't that bad. See Joost's
post in "[PATCH] First draft of option_simplfied_commitment"

- Johan
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-28 Thread David A. Harding via bitcoin-dev
On Mon, Oct 28, 2019 at 10:45:39AM +0100, Johan Torås Halseth wrote:
> Relay cost is the obvious problem with just naively removing all limits.
> Relaxing the current rules by allowing a child to be added to each output
> as long as the child has a single unconfirmed parent would still only allow
> free relay of O(size of parent) extra data (which might not be that bad?
> Similar to the carve-out rule we could put limits on the child size).

A parent transaction near the limit of 100,000 vbytes could have almost
10,000 outputs paying OP_TRUE (10 vbytes per output).  If the children
were limited to 10,000 vbytes each (the current max carve-out size),
that allows relaying 100 mega-vbytes or nearly 400 MB data size (larger
than the default maximum mempool size in Bitcoin Core).
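Dave's arithmetic above can be checked directly (assumptions exactly as he states them: a 100,000 vbyte parent, 10 vbytes per OP_TRUE output, 10,000 vbyte children, and a worst case of 4 raw bytes per vbyte):

```python
# Worst-case free relay from one max-size parent full of OP_TRUE outputs.
# All numbers are the assumptions stated above, not consensus constants.
parent_vbytes = 100_000
vbytes_per_output = 10
max_child_vbytes = 10_000          # current max carve-out size

outputs = parent_vbytes // vbytes_per_output        # ~10,000 outputs
total_vbytes = outputs * max_child_vbytes           # one child per output
raw_megabytes = total_vbytes * 4 / 1_000_000        # worst-case raw size

print(total_vbytes)      # 100,000,000 vbytes = 100 mega-vbytes
print(raw_megabytes)     # ~400 MB, larger than the default mempool
```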

As Matt noted in discussion on #lightning-dev about this issue, it's
possible to increase second-child carve-out to nth-child carve-out but
we'd need to be careful about choosing an appropriately low value for n.

For example, BOLT2 limits the number of HTLCs to 483 on each side of the
channel (so 966 + 2 outputs total), which means the worst case free
relay to support the current LN protocol would be approximately:

(100,000 + 968 * 10,000) * 4 = ~39 MB
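For reference, that worst-case figure works out as follows (same assumptions as above: a 100,000 vbyte parent, 966 + 2 outputs, 10,000 vbyte children, 4 raw bytes per vbyte):

```python
# Worst-case free relay for a BOLT2-limited commitment transaction,
# using the assumptions stated in the text above.
parent_vbytes = 100_000
outputs = 483 * 2 + 2              # 483 HTLCs per side plus 2 main outputs
child_vbytes = 10_000              # current max carve-out size

raw_bytes = (parent_vbytes + outputs * child_vbytes) * 4
print(raw_bytes / 1_000_000)       # ~39 MB
```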

Even if the mempool was empty (as it sometimes is these days), it would
only cost an attacker about 1.5 BTC to fill it at the default minimum
relay feerate[1] so that they could execute this attack at the minimal
cost per iteration of paying for a few hundred or a few thousand vbytes
at slightly higher than the current mempool minimum fee.

Instead, with the existing rules (including second-child carve-out),
they'd have to iterate (39 MB / 400 kB = ~100) times more often to
achieve an equivalent waste of bandwidth, costing them proportionally
more in fees.

So, I think these rough numbers clearly back what Matt said about us
being able to raise the limits a bit if we need to, but that we have to
be careful not to raise them so far that attackers can make it
significantly more bandwidth expensive for people to run relaying full
nodes.

-Dave

[1] Several developers are working on lowering the default minimum in
Bitcoin Core, which would of course make this attack proportionally
cheaper.


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-28 Thread Johan Torås Halseth via bitcoin-dev
>
>
> I don’t see how? Let’s imagine Party A has two spendable outputs, now
> they stuff the package size on one of their spendable outputs until it is
> right at the limit, add one more on their other output (to meet the
> Carve-Out), and now Party B can’t do anything.

Matt: With the proposed change, party B would always be able to add a child
to its output, regardless of what games party A is playing.


Thanks for the explanation, Jeremy!


> In terms of relay cost, if an ancestor can be replaced, it will invalidate
> all its children, meaning that no one paid for that broadcasting. This can
> be fixed by appropriately assessing Replace-By-Fee update fees to
> encapsulate all descendants, but there are some tricky edge cases that make
> this non-obvious to do.


Relay cost is the obvious problem with just naively removing all limits.
Relaxing the current rules by allowing a child to be added to each output as
long as the child has a single unconfirmed parent would still only allow free
relay of O(size of parent) extra data (which might not be that bad? Similar
to the carve-out rule we could put limits on the child size). This would be
enough for the current LN use case (increasing fee of commitment tx), but
not for OP_SECURETHEBAG I guess, as you need the tree of children, as you
mention.

I imagine walking the mempool wouldn't change much, as you would only have
one extra child per output. But here I'm just speculating, as I don't know
the code well enough to know what the diff would look like.


> OP_SECURETHEBAG can help with the LN issue by putting all HTLCS into a
> tree where they are individualized leaf nodes with a preceding CSV. Then,
> the above fix would ensure each HTLC always has time to close properly as
> they would have individualized lockpoints. This is desirable for some
> additional reasons and not for others, but it should "work".


This is interesting for an LN commitment! You could really hide every
output of the commitment within OP_STB, which could either allow bypassing
the fee-pinning attack entirely (if the output cannot be spent unconfirmed)
or allow adding fees to the commitment using SIGHASH_SINGLE|ANYONECANPAY.

- Johan

On Sun, Oct 27, 2019 at 8:13 PM Jeremy  wrote:

> Johan,
>
> The issues with mempool limits for OP_SECURETHEBAG are related, but have
> distinct solutions.
>
> There are two main categories of mempool issues at stake. One is relay
> cost, the other is mempool walking.
>
> In terms of relay cost, if an ancestor can be replaced, it will invalidate
> all its children, meaning that no one paid for that broadcasting. This can
> be fixed by appropriately assessing Replace-By-Fee update fees to
> encapsulate all descendants, but there are some tricky edge cases that make
> this non-obvious to do.
>
> The other issue is walking the mempool -- many of the algorithms we use in
> the mempool can be N log N or N^2 in the number of descendants. (simple
> example: an input chain of length N to a fan out of N outputs that are all
> spent, is O(N^2) to look up ancestors per-child, unless we're caching).
>
> The other sort of walking issue is where the indegree or outdegree for a
> transaction is high. Then when we are computing descendants or ancestors we
> will need to visit it multiple times. To avoid re-expanding a node, we
> currently cache it with a set. This uses O(N) extra memory and performs
> O(N log N) comparisons (we use std::set, not unordered_set).
>
> I just opened a PR which should help with some of the walking issues by
> allowing us to cheaply cache which nodes we've visited on a run. It makes a
> lot of previously O(N log N) stuff O(N) and doesn't allocate as much new
> memory. See: https://github.com/bitcoin/bitcoin/pull/17268.
>
>
> Now, for OP_SECURETHEBAG we want a particular property that is very
> different from with lightning htlcs (as is). We want that an unlimited
> number of child OP_SECURETHEBAG txns may extend from a confirmed
> OP_SECURETHEBAG, and then at the leaf nodes, we want the same rule as
> lightning (one dangling unconfirmed to permit channels).
>
> OP_SECURETHEBAG can help with the LN issue by putting all HTLCS into a
> tree where they are individualized leaf nodes with a preceding CSV. Then,
> the above fix would ensure each HTLC always has time to close properly as
> they would have individualized lockpoints. This is desirable for some
> additional reasons and not for others, but it should "work".
>
>
>
> --
> @JeremyRubin 
> 
>
>
> On Fri, Oct 25, 2019 at 10:31 AM Matt Corallo 
> wrote:
>
>> I don’t see how? Let’s imagine Party A has two spendable outputs, now
>> they stuff the package size on one of their spendable outputs until it is
>> right at the limit, add one more on their other output (to meet the
>> Carve-Out), and now Party B can’t do anything.
>>
>> On Oct 24, 2019, at 21:05, Johan Torås Halseth  wrote:
>>
>> 
>> It essentially changes the rule to always allow CPFP-ing the

Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-28 Thread Johan Torås Halseth via bitcoin-dev
It essentially changes the rule to always allow CPFP-ing the commitment as
long as there is an output available without any descendants. It changes
the commitment requirement from "you always need at least, and exactly, one
non-CSV output per party" to "you always need at least one non-CSV output
per party".

I realize these limits are there for a reason, but I'm wondering if we
could relax them, especially now that jeremyrubin has pointed out problems
with the current mempool limits.

On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo 
wrote:

> I may be missing something, but I'm not sure how this changes anything?
>
> If you have a commitment transaction, you always need at least, and
> exactly, one non-CSV output per party. The fact that there is a size
> limitation on the transaction that spends for carve-out purposes only
> affects how many other inputs/outputs you can add, but somehow I doubt
> it's ever going to be a large enough number to matter.
>
> Matt
>
> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
> > Reviving this old thread now that the recently released RC for bitcoind
> > 0.19 includes the above mentioned carve-out rule.
> >
> > In an attempt to pave the way for more robust CPFP of on-chain contracts
> > (Lightning commitment transactions), the carve-out rule was added in
> > https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
> > an implementation of a new commitment format for utilizing the Bring
> > Your Own Fees strategy using CPFP, I’m wondering if the special case
> > rule should have been relaxed a bit, to avoid the need for adding a 1
> > CSV to all outputs (in case of Lightning this means HTLC scripts would
> > need to be changed to add the CSV delay).
> >
> > Instead, what about letting the rule be
> >
> > The last transaction which is added to a package of dependent
> > transactions in the mempool must:
> >   * Have no more than one unconfirmed parent.
> >
> > This would of course allow adding a large transaction to each output of
> > the unconfirmed parent, which in effect would allow an attacker to
> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
> > this a problem with the current mempool acceptance code in bitcoind? I
> > would imagine evicting transactions based on feerate when the max
> > mempool size is met handles this, but I’m asking since it seems like
> > there have been several changes to the acceptance code and eviction
> > policy since the limit was first introduced.
> >
> > - Johan
> >
> >
> > On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell  > > wrote:
> >
> > Matt Corallo  > > writes:
> > >>> Thus, even if you imagine a steady-state mempool growth, unless
> the
> > >>> "near the top of the mempool" criteria is "near the top of the
> next
> > >>> block" (which is obviously *not* incentive-compatible)
> > >>
> > >> I was defining "top of mempool" as "in the first 4 MSipa", ie.
> next
> > >> block, and assumed you'd only allow RBF if the old package wasn't
> > in the
> > >> top and the replacement would be.  That seems incentive
> > compatible; more
> > >> than the current scheme?
> > >
> > > My point was, because of block time variance, even that criteria
> > doesn't hold up. If you assume a steady flow of new transactions and
> > one or two blocks come in "late", suddenly "top 4MWeight" isn't
> > likely to get confirmed until a few blocks come in "early". Given
> > block variance within a 12 block window, this is a relatively likely
> > scenario.
> >
> > [ Digging through old mail. ]
> >
> > Doesn't really matter.  Lightning close algorithm would be:
> >
> > 1.  Give bitcoind unilateral close.
> > 2.  Ask bitcoind what current expedited fee is (or survey your
> mempool).
> > 3.  Give bitcoind child "push" tx at that total feerate.
> > 4.  If next block doesn't contain unilateral close tx, goto 2.
> >
> > In this case, if you allow a simplified RBF where 'you can replace if
> > 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3.
> > old tx isn't',
> > it works.
> >
> > It allows someone 100k of free tx spam, sure.  But it's simple.
> >
> > We could further restrict it by marking the unilateral close somehow
> to
> > say "gonna be pushed" and further limiting the child tx weight (say,
> > 5kSipa?) in that case.
> >
> > Cheers,
> > Rusty.
> > ___
> > Lightning-dev mailing list
> > lightning-...@lists.linuxfoundation.org
> > 
> > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
> >
>


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-27 Thread David A. Harding via bitcoin-dev
On Thu, Oct 24, 2019 at 03:49:09PM +0200, Johan Torås Halseth wrote:
> [...] what about letting the rule be
> 
> The last transaction which is added to a package of dependent
> transactions in the mempool must:
>   * Have no more than one unconfirmed parent.
> [... subsequent email ...]
> I realize these limits are there for a reason though, but I'm wondering if
> we could relax them.

Johan,

I'm not sure any of the other replies to this thread addressed your
request for a reason behind the limits related to your proposal, so I
thought I'd point out that---subsequent to your posting here---a
document[1] was added to the Bitcoin Core developer wiki that I think
describes the risk of the approach you proposed:

> Free relay attack:
>
> - Create a low feerate transaction T.
>
> - Send zillions of child transactions that are slightly higher feerate
>   than T until mempool is full.
>
> - Create one small transaction with feerate just higher than T’s, and
>   watch T and all its children get evicted. Total fees in the mempool
>   drop dramatically!
>
> - Attacker just relayed (say) 300MB of data across the whole network
>   but only pays small feerate on one small transaction.
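The attack in the quoted steps can be modeled with a toy mempool. This is an illustration only: the transaction names and sizes are invented and scaled down, and real Bitcoin Core evicts by descendant-package feerate with far more bookkeeping than this sketch.

```python
# Toy free-relay attack: a low-feerate parent T, many children slightly
# above it, then one tiny transaction that triggers eviction of T and,
# with it, all of T's children. Simplified model, not Core's actual logic.

def evict_lowest(mempool, parent_of):
    """Evict the lowest-feerate tx together with its direct children."""
    victim = min(mempool, key=lambda tx: mempool[tx]["feerate"])
    doomed = {victim} | {tx for tx in mempool if parent_of.get(tx) == victim}
    for tx in doomed:
        del mempool[tx]
    return doomed

mempool, parent_of = {}, {}
mempool["T"] = {"size": 100, "feerate": 1.0}          # step 1: low-feerate T
for i in range(5):                                     # step 2: "zillions", scaled
    mempool[f"c{i}"] = {"size": 100, "feerate": 1.1}   # slightly above T
    parent_of[f"c{i}"] = "T"
bytes_relayed = sum(tx["size"] for tx in mempool.values())  # already on the wire

mempool["evictor"] = {"size": 10, "feerate": 1.05}    # step 3: just above T
evicted = evict_lowest(mempool, parent_of)            # T and all children go

print(sorted(evicted))   # ['T', 'c0', 'c1', 'c2', 'c3', 'c4']
print(bytes_relayed)     # 600 bytes relayed, fees paid only on the tiny evictor
```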

The document goes on to describe at a high level how Bitcoin Core
attempts to mitigate this problem as well as other ways it tries to
optimize the mempool in order to maximize miner profit (and so ensure
that miners continue to use public transaction relay).

I hope that's helpful to you and to others in both understanding the
current state and in thinking about ways in which it might be improved.

-Dave

[1] https://github.com/bitcoin-core/bitcoin-devwiki/wiki/Mempool-and-mining
Content adapted from slides by Suhas Daftuar, uploaded and formatted
by Gregory Sanders and Marco Falke.



Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-27 Thread Jeremy via bitcoin-dev
Johan,

The issues with mempool limits for OP_SECURETHEBAG are related, but have
distinct solutions.

There are two main categories of mempool issues at stake. One is relay
cost, the other is mempool walking.

In terms of relay cost, if an ancestor can be replaced, it will invalidate
all its children, meaning that no one paid for that broadcasting. This can
be fixed by appropriately assessing Replace-By-Fee update fees to
encapsulate all descendants, but there are some tricky edge cases that make
this non-obvious to do.

The other issue is walking the mempool -- many of the algorithms we use in
the mempool can be N log N or N^2 in the number of descendants. (simple
example: an input chain of length N to a fan out of N outputs that are all
spent, is O(N^2) to look up ancestors per-child, unless we're caching).

The other sort of walking issue is where the indegree or outdegree for a
transaction is high. Then when we are computing descendants or ancestors we
will need to visit it multiple times. To avoid re-expanding a node, we
currently cache it with a set. This uses O(N) extra memory and performs
O(N log N) comparisons (we use std::set, not unordered_set).

I just opened a PR which should help with some of the walking issues by
allowing us to cheaply cache which nodes we've visited on a run. It makes a
lot of previously O(N log N) stuff O(N) and doesn't allocate as much new
memory. See: https://github.com/bitcoin/bitcoin/pull/17268.
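A minimal sketch of that epoch-style caching idea (a toy version of ours, not the PR's actual code): mark each node with the id of the current walk instead of inserting it into a freshly allocated set, so a node shared by many paths is expanded only once and no per-walk allocation is needed.

```python
# Toy descendant walk with epoch marks: nodes remember the last walk that
# visited them, so "already visited?" is an O(1) integer compare rather
# than a std::set lookup, and nothing is allocated per walk.
from collections import defaultdict

children = defaultdict(list)   # txid -> direct in-mempool spenders
epoch_of = {}                  # txid -> last epoch in which it was visited
current_epoch = 0

def descendants(txid):
    global current_epoch
    current_epoch += 1         # new walk: all previous marks become stale
    out, stack = [], [txid]
    while stack:
        tx = stack.pop()
        if epoch_of.get(tx) == current_epoch:
            continue           # already expanded during this walk
        epoch_of[tx] = current_epoch
        out.append(tx)
        stack.extend(children[tx])
    return out

children["a"] = ["b", "c"]
children["b"] = ["d"]
children["c"] = ["d"]          # "d" has indegree 2 but is expanded only once
print(sorted(descendants("a")))   # ['a', 'b', 'c', 'd']
```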


Now, for OP_SECURETHEBAG we want a particular property that is very
different from with lightning htlcs (as is). We want that an unlimited
number of child OP_SECURETHEBAG txns may extend from a confirmed
OP_SECURETHEBAG, and then at the leaf nodes, we want the same rule as
lightning (one dangling unconfirmed to permit channels).

OP_SECURETHEBAG can help with the LN issue by putting all HTLCS into a tree
where they are individualized leaf nodes with a preceding CSV. Then, the
above fix would ensure each HTLC always has time to close properly as they
would have individualized lockpoints. This is desirable for some additional
reasons and not for others, but it should "work".



--
@JeremyRubin 



On Fri, Oct 25, 2019 at 10:31 AM Matt Corallo 
wrote:

> I don’t see how? Let’s imagine Party A has two spendable outputs, now
> they stuff the package size on one of their spendable outputs until it is
> right at the limit, add one more on their other output (to meet the
> Carve-Out), and now Party B can’t do anything.
>
> On Oct 24, 2019, at 21:05, Johan Torås Halseth  wrote:
>
> 
> It essentially changes the rule to always allow CPFP-ing the commitment as
> long as there is an output available without any descendants. It changes
> the commitment from "you always need at least, and exactly, one non-CSV
> output per party. " to "you always need at least one non-CSV output per
> party. "
>
> I realize these limits are there for a reason, but I'm wondering if we
> could relax them, especially now that jeremyrubin has pointed out problems
> with the current mempool limits.
>
> On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo 
> wrote:
>
>> I may be missing something, but I'm not sure how this changes anything?
>>
>> If you have a commitment transaction, you always need at least, and
>> exactly, one non-CSV output per party. The fact that there is a size
>> limitation on the transaction that spends for carve-out purposes only
>> affects how many other inputs/outputs you can add, but somehow I doubt
>> it's ever going to be a large enough number to matter.
>>
>> Matt
>>
>> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
>> > Reviving this old thread now that the recently released RC for bitcoind
>> > 0.19 includes the above mentioned carve-out rule.
>> >
>> > In an attempt to pave the way for more robust CPFP of on-chain contracts
>> > (Lightning commitment transactions), the carve-out rule was added in
>> > https://github.com/bitcoin/bitcoin/pull/15681. However, having worked
>> on
>> > an implementation of a new commitment format for utilizing the Bring
>> > Your Own Fees strategy using CPFP, I’m wondering if the special case
>> > rule should have been relaxed a bit, to avoid the need for adding a 1
>> > CSV to all outputs (in case of Lightning this means HTLC scripts would
>> > need to be changed to add the CSV delay).
>> >
>> > Instead, what about letting the rule be
>> >
>> > The last transaction which is added to a package of dependent
>> > transactions in the mempool must:
>> >   * Have no more than one unconfirmed parent.
>> >
>> > This would of course allow adding a large transaction to each output of
>> > the unconfirmed parent, which in effect would allow an attacker to
>> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
>> > this a problem with the current mempool acceptance code in bitcoind? I
>> > would imagine evicting transactions based on feerate when the max
>> > mempool size is met handles th

Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-25 Thread Matt Corallo via bitcoin-dev
I don’te see how? Let’s imagine Party A has two spendable outputs, now they 
stuff the package size on one of their spendable outlets until it is right at 
the limit, add one more on their other output (to meet the Carve-Out), and now 
Party B can’t do anything.

> On Oct 24, 2019, at 21:05, Johan Torås Halseth  wrote:
> 
> 
> It essentially changes the rule to always allow CPFP-ing the commitment as 
> long as there is an output available without any descendants. It changes the 
> commitment from "you always need at least, and exactly, one non-CSV output 
> per party. " to "you always need at least one non-CSV output per party. "
> 
> I realize these limits are there for a reason though, but I'm wondering if 
> could relax them. Also now that jeremyrubin has expressed problems with the 
> current mempool limits.
> 
>> On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo  
>> wrote:
>> I may be missing something, but I'm not sure how this changes anything?
>> 
>> If you have a commitment transaction, you always need at least, and
>> exactly, one non-CSV output per party. The fact that there is a size
>> limitation on the transaction that spends for carve-out purposes only
>> effects how many other inputs/outputs you can add, but somehow I doubt
>> its ever going to be a large enough number to matter.
>> 
>> Matt
>> 
>> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
>> > Reviving this old thread now that the recently released RC for bitcoind
>> > 0.19 includes the above mentioned carve-out rule.
>> > 
>> > In an attempt to pave the way for more robust CPFP of on-chain contracts
>> > (Lightning commitment transactions), the carve-out rule was added in
>> > https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
>> > an implementation of a new commitment format for utilizing the Bring
>> > Your Own Fees strategy using CPFP, I’m wondering if the special case
>> > rule should have been relaxed a bit, to avoid the need for adding a 1
>> > CSV to all outputs (in case of Lightning this means HTLC scripts would
>> > need to be changed to add the CSV delay).
>> > 
>> > Instead, what about letting the rule be
>> > 
>> > The last transaction which is added to a package of dependent
>> > transactions in the mempool must:
>> >   * Have no more than one unconfirmed parent.
>> > 
>> > This would of course allow adding a large transaction to each output of
>> > the unconfirmed parent, which in effect would allow an attacker to
>> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
>> > this a problem with the current mempool acceptance code in bitcoind? I
>> > would imagine evicting transactions based on feerate when the max
>> > mempool size is met handles this, but I’m asking since it seems like
>> > there have been several changes to the acceptance code and eviction
>> > policy since the limit was first introduced.
>> > 
>> > - Johan
>> > 
>> > 
>> > On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell > > > wrote:
>> > 
>> > Matt Corallo > > > writes:
>> > >>> Thus, even if you imagine a steady-state mempool growth, unless the
>> > >>> "near the top of the mempool" criteria is "near the top of the next
>> > >>> block" (which is obviously *not* incentive-compatible)
>> > >>
>> > >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
>> > >> block, and assumed you'd only allow RBF if the old package wasn't
>> > in the
>> > >> top and the replacement would be.  That seems incentive
>> > compatible; more
>> > >> than the current scheme?
>> > >
>> > > My point was, because of block time variance, even that criteria
>> > doesn't hold up. If you assume a steady flow of new transactions and
>> > one or two blocks come in "late", suddenly "top 4MWeight" isn't
>> > likely to get confirmed until a few blocks come in "early". Given
>> > block variance within a 12 block window, this is a relatively likely
>> > scenario.
>> > 
>> > [ Digging through old mail. ]
>> > 
>> > Doesn't really matter.  Lightning close algorithm would be:
>> > 
>> > 1.  Give bitcoind unilateral close.
>> > 2.  Ask bitcoind what current expedited fee is (or survey your
>> > mempool).
>> > 3.  Give bitcoind child "push" tx at that total feerate.
>> > 4.  If next block doesn't contain unilateral close tx, goto 2.
>> > 
>> > In this case, if you allow a simplified RBF where 'you can replace if
>> > 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3.
>> > old tx isn't',
>> > it works.
>> > 
>> > It allows someone 100k of free tx spam, sure.  But it's simple.
>> > 
>> > We could further restrict it by marking the unilateral close somehow to
>> > say "gonna be pushed" and further limiting the child tx weight (say,
>> > 5kSipa?) in that case.
>> > 
>> > Cheers,
>> > Rusty.

Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-24 Thread Matt Corallo via bitcoin-dev
I may be missing something, but I'm not sure how this changes anything?

If you have a commitment transaction, you always need at least, and
exactly, one non-CSV output per party. The fact that there is a size
limitation on the transaction that spends for carve-out purposes only
affects how many other inputs/outputs you can add, but somehow I doubt
it's ever going to be a large enough number to matter.

Matt

On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
> Reviving this old thread now that the recently released RC for bitcoind
> 0.19 includes the above mentioned carve-out rule.
> 
> In an attempt to pave the way for more robust CPFP of on-chain contracts
> (Lightning commitment transactions), the carve-out rule was added in
> https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
> an implementation of a new commitment format for utilizing the Bring
> Your Own Fees strategy using CPFP, I’m wondering if the special case
> rule should have been relaxed a bit, to avoid the need for adding a 1
> CSV to all outputs (in case of Lightning this means HTLC scripts would
> need to be changed to add the CSV delay).
> 
> Instead, what about letting the rule be
> 
> The last transaction which is added to a package of dependent
> transactions in the mempool must:
>   * Have no more than one unconfirmed parent.
> 
> This would of course allow adding a large transaction to each output of
> the unconfirmed parent, which in effect would allow an attacker to
> exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
> this a problem with the current mempool acceptance code in bitcoind? I
> would imagine evicting transactions based on feerate when the max
> mempool size is met handles this, but I’m asking since it seems like
> there have been several changes to the acceptance code and eviction
> policy since the limit was first introduced.
> 
> - Johan
> 
> 
> On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell  > wrote:
> 
> Matt Corallo  > writes:
> >>> Thus, even if you imagine a steady-state mempool growth, unless the
> >>> "near the top of the mempool" criteria is "near the top of the next
> >>> block" (which is obviously *not* incentive-compatible)
> >>
> >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
> >> block, and assumed you'd only allow RBF if the old package wasn't
> in the
> >> top and the replacement would be.  That seems incentive
> compatible; more
> >> than the current scheme?
> >
> > My point was, because of block time variance, even that criteria
> doesn't hold up. If you assume a steady flow of new transactions and
> one or two blocks come in "late", suddenly "top 4MWeight" isn't
> likely to get confirmed until a few blocks come in "early". Given
> block variance within a 12 block window, this is a relatively likely
> scenario.
> 
> [ Digging through old mail. ]
> 
> Doesn't really matter.  Lightning close algorithm would be:
> 
> 1.  Give bitcoind unilateral close.
> 2.  Ask bitcoind what current expedited fee is (or survey your mempool).
> 3.  Give bitcoind child "push" tx at that total feerate.
> 4.  If next block doesn't contain unilateral close tx, goto 2.
> 
> In this case, if you allow a simplified RBF where 'you can replace if
> 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3. old tx
> isn't',
> it works.
> 
> It allows someone 100k of free tx spam, sure.  But it's simple.
> 
> We could further restrict it by marking the unilateral close somehow to
> say "gonna be pushed" and further limiting the child tx weight (say,
> 5kSipa?) in that case.
> 
> Cheers,
> Rusty.
> 


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-24 Thread Johan Torås Halseth via bitcoin-dev
Reviving this old thread now that the recently released RC for bitcoind
0.19 includes the above mentioned carve-out rule.

In an attempt to pave the way for more robust CPFP of on-chain contracts
(Lightning commitment transactions), the carve-out rule was added in
https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on an
implementation of a new commitment format for utilizing the Bring Your Own
Fees strategy using CPFP, I’m wondering if the special case rule should
have been relaxed a bit, to avoid the need for adding a 1 CSV to all
outputs (in case of Lightning this means HTLC scripts would need to be
changed to add the CSV delay).

Instead, what about letting the rule be

The last transaction which is added to a package of dependent
transactions in the mempool must:
  * Have no more than one unconfirmed parent.
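The proposed rule is simple enough to sketch as a toy acceptance check (hypothetical data structures; bitcoind's real mempool logic tracks far more state than this):

```python
# Proposed relaxation: accept a new package member as long as it has at
# most one unconfirmed (in-mempool) parent, regardless of package size.
# `mempool` maps txid -> tx; inputs reference (txid, vout) outpoints.

def unconfirmed_parents(tx, mempool):
    """Distinct in-mempool transactions that this tx spends from."""
    return {txid for (txid, _vout) in tx["inputs"] if txid in mempool}

def accept_under_relaxed_rule(tx, mempool):
    return len(unconfirmed_parents(tx, mempool)) <= 1

mempool = {"parent_a": {"inputs": []}, "parent_b": {"inputs": []}}
ok = {"inputs": [("parent_a", 0), ("confirmed_tx", 3)]}   # 1 unconfirmed parent
bad = {"inputs": [("parent_a", 1), ("parent_b", 0)]}      # 2 unconfirmed parents
print(accept_under_relaxed_rule(ok, mempool))    # True
print(accept_under_relaxed_rule(bad, mempool))   # False
```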

This would of course allow adding a large transaction to each output of the
unconfirmed parent, which in effect would allow an attacker to exceed the
MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is this a problem
with the current mempool acceptance code in bitcoind? I would imagine
evicting transactions based on feerate when the max mempool size is met
handles this, but I’m asking since it seems like there have been several
changes to the acceptance code and eviction policy since the limit was
first introduced.

- Johan


On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell  wrote:

> Matt Corallo  writes:
> >>> Thus, even if you imagine a steady-state mempool growth, unless the
> >>> "near the top of the mempool" criteria is "near the top of the next
> >>> block" (which is obviously *not* incentive-compatible)
> >>
> >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
> >> block, and assumed you'd only allow RBF if the old package wasn't in the
> >> top and the replacement would be.  That seems incentive compatible; more
> >> than the current scheme?
> >
> > My point was, because of block time variance, even that criteria doesn't
> hold up. If you assume a steady flow of new transactions and one or two
> blocks come in "late", suddenly "top 4MWeight" isn't likely to get
> confirmed until a few blocks come in "early". Given block variance within a
> 12 block window, this is a relatively likely scenario.
>
> [ Digging through old mail. ]
>
> Doesn't really matter.  Lightning close algorithm would be:
>
> 1.  Give bitcoind unilateral close.
> 2.  Ask bitcoind what current expedited fee is (or survey your mempool).
> 3.  Give bitcoind child "push" tx at that total feerate.
> 4.  If next block doesn't contain unilateral close tx, goto 2.
>
> In this case, if you allow a simplified RBF where 'you can replace if
> 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3. old tx
> isn't',
> it works.
>
> It allows someone 100k of free tx spam, sure.  But it's simple.
>
> We could further restrict it by marking the unilateral close somehow to
> say "gonna be pushed" and further limiting the child tx weight (say,
> 5kSipa?) in that case.
>
> Cheers,
> Rusty.
> ___
> Lightning-dev mailing list
> lightning-...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-02-13 Thread Rusty Russell via bitcoin-dev
Matt Corallo  writes:
>>> Thus, even if you imagine a steady-state mempool growth, unless the 
>>> "near the top of the mempool" criteria is "near the top of the next 
>>> block" (which is obviously *not* incentive-compatible)
>> 
>> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
>> block, and assumed you'd only allow RBF if the old package wasn't in the
>> top and the replacement would be.  That seems incentive compatible; more
>> than the current scheme?
>
> My point was, because of block time variance, even that criteria doesn't hold 
> up. If you assume a steady flow of new transactions and one or two blocks 
> come in "late", suddenly "top 4MWeight" isn't likely to get confirmed until a 
> few blocks come in "early". Given block variance within a 12 block window, 
> this is a relatively likely scenario.

[ Digging through old mail. ]

Doesn't really matter.  Lightning close algorithm would be:

1.  Give bitcoind unilateral close.
2.  Ask bitcoind what current expedited fee is (or survey your mempool).
3.  Give bitcoind child "push" tx at that total feerate.
4.  If next block doesn't contain unilateral close tx, goto 2.
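The four steps above, as a minimal loop (the four callables stand in for bitcoind RPCs and local transaction construction; all names are hypothetical):

```python
def close_channel(broadcast, estimate_fee, build_child, confirmed_next_block):
    """Sketch of the unilateral-close loop: broadcast the commitment
    tx, then keep attaching CPFP children at the current expedited
    feerate until a block contains the close."""
    broadcast("commitment_tx")           # 1. give bitcoind the unilateral close
    bumps = 0
    while not confirmed_next_block():    # 4. if not confirmed, goto 2
        feerate = estimate_fee()         # 2. current expedited (next-block) feerate
        broadcast(build_child(feerate))  # 3. CPFP "push" child at that total feerate
        bumps += 1
    return bumps
```

Each iteration is a fresh bump at the prevailing feerate, which is why this strategy depends on the simplified RBF rule described next.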

In this case, if you allow a simplified RBF where 'you can replace if
1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3. old tx isn't',
it works.

It allows someone 100k of free tx spam, sure.  But it's simple.

We could further restrict it by marking the unilateral close somehow to
say "gonna be pushed" and further limiting the child tx weight (say,
5kSipa?) in that case.

Cheers,
Rusty.


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-01-08 Thread Matt Corallo via bitcoin-dev
I responded to a few things in-line before realizing I think we're out of sync 
on what this alternative proposal actually implies. My understanding is that 
it does *not* imply that you are guaranteed the ability to RBF as fees change. 
The previous problem is still there - your counterparty can announce a bogus 
package and leave you unable to add a new transaction to it, the difference 
being it may be significantly more expensive to do so. If it were the case 
that you could RBF after the fact, I would likely agree with you.
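For context on why the bogus package makes replacement "significantly more expensive": BIP 125's replacement rules require the bump to absorb the evicted package's absolute fee. A miniature of rules 3 and 4 (the 1 sat/vbyte default is Bitcoin Core's incremental relay feerate; the function itself is only an illustration):

```python
def min_replacement_fee(evicted_fee_sat, replacement_vsize, incremental_feerate=1):
    """BIP 125 rules 3 and 4 in miniature: a replacement must pay at
    least the absolute fees of everything it evicts (rule 3), plus the
    incremental relay feerate times its own vsize (rule 4)."""
    return evicted_fee_sat + incremental_feerate * replacement_vsize

# A pin child carrying 50,000 sat forces even a tiny 200-vbyte bump
# to pay at least 50,200 sat, regardless of prevailing feerates.
```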

> On Jan 8, 2019, at 00:50, Rusty Russell  wrote:
> 
> Matt Corallo  writes:
>> Ultimately, defining a "near the top of the mempool" criteria is fraught 
>> with issues. While it's probably OK for the original problem (large 
>> batched transactions where you don't want a single counterparty to 
>> prevent confirmation), lightning's requirements are very different. 
>> Instead of wanting a high probability that the transaction in question 
>> confirms "soon", we need certainty that it will confirm by some deadline.
> 
> I don't think it's different, in practice.

I strongly disagree. If you're someone sending a batched payment, 5% chance it 
takes 13 blocks is perfectly acceptable. If you're a lightning operator, that 
quickly turns into "5% chance, or 35% chance if your counterparty is malicious 
and knows more about the market structure than you". Eg in the past it's been 
the case that transaction volume would spike every day at the same time when 
Bitmex processed a flood of withdrawals all at once in separate transactions. 
Worse, it's probably still the case that, in case of sudden market movement, 
transaction volume can spike while people arb exchanges and move coins into 
exchanges to sell.

>> Thus, even if you imagine a steady-state mempool growth, unless the 
>> "near the top of the mempool" criteria is "near the top of the next 
>> block" (which is obviously *not* incentive-compatible)
> 
> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
> block, and assumed you'd only allow RBF if the old package wasn't in the
> top and the replacement would be.  That seems incentive compatible; more
> than the current scheme?

My point was, because of block time variance, even that criteria doesn't hold 
up. If you assume a steady flow of new transactions and one or two blocks come 
in "late", suddenly "top 4MWeight" isn't likely to get confirmed until a few 
blocks come in "early". Given block variance within a 12 block window, this is 
a relatively likely scenario.
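The block-variance point can be made concrete with a quick Monte Carlo (a sketch with illustrative parameters, not numbers from the thread): with exponential 10-minute inter-block times, a 12-block window overshoots its 120-minute expectation by 25% surprisingly often.

```python
import random

def prob_slow_window(n_blocks=12, threshold_min=150.0, trials=50_000, seed=1):
    """Estimate how often n_blocks take longer than threshold_min
    minutes, assuming i.i.d. exponential(10 min) inter-block times."""
    rng = random.Random(seed)
    slow = sum(
        sum(rng.expovariate(1 / 10.0) for _ in range(n_blocks)) > threshold_min
        for _ in range(trials)
    )
    return slow / trials

# Roughly one window in five runs 150+ minutes, so "top 4MWeight now"
# can easily mean "unconfirmed for several extra blocks" under steady inflow.
```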

> The attack against this is to make a 100k package which would just get
> into this "top", then push it out with a separate tx at slightly higher
> fee, then repeat.  Of course, timing makes that hard to get right, and
> you're paying real fees for it too.
> 
> Sure, an attacker can make you pay next-block high fees, but it's still
> better than our current "*always* overpay and hope!", and you can always
> decide at the time based on whether the expiring HTLC(s) are worth it.
> 
> But I think whatever's simplest to implement should win, and I'm not in
> a position to judge that accurately.
> 
> Thanks,
> Rusty.



Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-01-08 Thread Rusty Russell via bitcoin-dev
Matt Corallo  writes:
> Ultimately, defining a "near the top of the mempool" criteria is fraught 
> with issues. While it's probably OK for the original problem (large 
> batched transactions where you don't want a single counterparty to 
> prevent confirmation), lightning's requirements are very different. 
> Instead of wanting a high probability that the transaction in question 
> confirms "soon", we need certainty that it will confirm by some deadline.

I don't think it's different, in practice.

> Thus, even if you imagine a steady-state mempool growth, unless the 
> "near the top of the mempool" criteria is "near the top of the next 
> block" (which is obviously *not* incentive-compatible)

I was defining "top of mempool" as "in the first 4 MSipa", ie. next
block, and assumed you'd only allow RBF if the old package wasn't in the
top and the replacement would be.  That seems incentive compatible; more
than the current scheme?
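That criterion, as a predicate (a sketch only: deciding whether a package sits "in the first 4 MSipa" really requires walking the node's fee-ordered mempool, which is elided here):

```python
def simplified_rbf_accept(new_feerate, old_feerate,
                          new_in_top_4msipa, old_in_top_4msipa):
    """Allow replacement only when the new package pays a higher
    feerate, would land in the next block (first 4 MSipa), and the
    package it evicts would not."""
    return (new_feerate > old_feerate
            and new_in_top_4msipa
            and not old_in_top_4msipa)
```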

The attack against this is to make a 100k package which would just get
into this "top", then push it out with a separate tx at slightly higher
fee, then repeat.  Of course, timing makes that hard to get right, and
you're paying real fees for it too.

Sure, an attacker can make you pay next-block high fees, but it's still
better than our current "*always* overpay and hope!", and you can always
decide at the time based on whether the expiring HTLC(s) are worth it.

But I think whatever's simplest to implement should win, and I'm not in
a position to judge that accurately.

Thanks,
Rusty.


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-01-07 Thread Matt Corallo via bitcoin-dev

Sorry for the late reply.

Hmm, I included the old RBF-pinning proposal as a comparison. 
Personally, I find it both less clean and less convincingly secure.


Ultimately, defining a "near the top of the mempool" criteria is fraught 
with issues. While it's probably OK for the original problem (large 
batched transactions where you don't want a single counterparty to 
prevent confirmation), lightning's requirements are very different. 
Instead of wanting a high probability that the transaction in question 
confirms "soon", we need certainty that it will confirm by some deadline.


Thus, even if you imagine a steady-state mempool growth, unless the 
"near the top of the mempool" criteria is "near the top of the next 
block" (which is obviously *not* incentive-compatible), its easy to see 
how the package would fail to confirm within a handful of blocks given 
block time variance. Giving up the ability to RBF/CPFP more than once in 
case the fee moves away from us seems to be a rather significant 
restriction.


The original proposal is somewhat of a hack, but it's a hack on the 
boundary condition where packages meet our local anti-DoS rules in 
violation of the "incentive compatible" goal anyway (essentially, though 
miners also care about anti-DoS). This proposal is very different and, 
similar to how it doesn't work if blocks randomly come in a bit slow for 
an hour or two, isn't incentive compatible if blocks come in a bit fast 
for an hour or two, as all of a sudden that "near the top of the 
mempool" criteria makes no sense and you should have accepted the new 
transaction(s).


As for package relay, indeed, we can probably do something simpler for 
this specific case, but it depends on what the scope of that design is. 
Suhas opened an issue to try to scope it out a bit more at 
https://github.com/bitcoin/bitcoin/issues/14895


Matt


On Dec 3, 2018, at 22:33, Rusty Russell  wrote:

Matt Corallo  writes:
As an alternative proposal, at various points there have been 
discussions around solving the "RBF-pinning" problem by allowing 
transactors to mark their transactions as "likely-to-be-RBF'ed", which 
could enable a relay policy where children of such transactions would be 
rejected unless the resulting package would be "near the top of the 
mempool". This would theoretically imply such attacks are not possible 
to pull off consistently, as any "transaction-delaying" channel 
participant will have to place the package containing A at an effective 
feerate which makes confirmation to occur soon with some likelihood. It 
is, however, possible to pull off this attack with low probability in 
case of feerate spikes right after broadcast.


I like this idea.

Firstly, it's incentive-compatible[1]: assuming blocks are full, miners
should always take a higher feerate tx if that tx would be in the
current block and the replaced txs would not.[2]

Secondly, it reduces the problem that the current lightning proposal
adds to the UTXO set with two anyone-can-spend txs for 1000 satoshis,
which might be too small to cleanup later.  This rule would allow a
simple single P2WSH(OP_TRUE) output, or, with IsStandard changed,
a literal OP_TRUE.

Note that this clearly relies on some form of package relay, which comes 
with its own challenges, but I'll start a separate thread on that.


Could be done client-side, right?  Do a quick check if this is above 250
satoshi per kweight but below minrelayfee, put it in a side-cache with a
60 second timeout sweep.  If something comes in which depends on it
which is above minrelayfee, then process them as a pair[3].

Cheers,
Rusty.
[1] Miners have generally been happy with Defaults Which Are Good For The
    Network, but I feel a long term development aim should be to reduce
   such cases to smaller and smaller corners.
[2] The actual condition is subtler, but this is a clear subset AFAICT.
[3] For Lightning, we don't care about child-pays-for-grandparent etc.



Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2018-12-04 Thread Rusty Russell via bitcoin-dev
Matt Corallo  writes:
> As an alternative proposal, at various points there have been 
> discussions around solving the "RBF-pinning" problem by allowing 
> transactors to mark their transactions as "likely-to-be-RBF'ed", which 
> could enable a relay policy where children of such transactions would be 
> rejected unless the resulting package would be "near the top of the 
> mempool". This would theoretically imply such attacks are not possible 
> to pull off consistently, as any "transaction-delaying" channel 
> participant will have to place the package containing A at an effective 
> feerate which makes confirmation to occur soon with some likelihood. It 
> is, however, possible to pull off this attack with low probability in 
> case of feerate spikes right after broadcast.

I like this idea.

Firstly, it's incentive-compatible[1]: assuming blocks are full, miners
should always take a higher feerate tx if that tx would be in the
current block and the replaced txs would not.[2]

Secondly, it reduces the problem that the current lightning proposal
adds to the UTXO set with two anyone-can-spend txs for 1000 satoshis,
which might be too small to cleanup later.  This rule would allow a
simple single P2WSH(OP_TRUE) output, or, with IsStandard changed,
a literal OP_TRUE.

> Note that this clearly relies on some form of package relay, which comes 
> with its own challenges, but I'll start a separate thread on that.

Could be done client-side, right?  Do a quick check if this is above 250
satoshi per kweight but below minrelayfee, put it in a side-cache with a
60 second timeout sweep.  If something comes in which depends on it
which is above minrelayfee, then process them as a pair[3].
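The client-side variant could be as simple as the sketch below (all names and constants are illustrative; feerates are in sat/vbyte, with the 250 sat/kweight floor translated loosely):

```python
import time

FLOOR_FEERATE = 0.25     # illustrative stand-in for the 250 sat/kweight floor
MIN_RELAY_FEERATE = 1.0  # illustrative minrelayfee, sat/vbyte
CACHE_TIMEOUT = 60.0     # seconds, per the 60-second sweep suggested above

class OrphanFeeCache:
    """Client-side side-cache sketch: park a below-minrelayfee parent,
    then submit it together with a fee-paying child as one package."""

    def __init__(self, submit_package, now=time.monotonic):
        self.cache = {}  # txid -> (raw tx, arrival time)
        self.submit_package = submit_package  # e.g. a package-relay hook
        self.now = now

    def add_parent(self, txid, tx, feerate):
        """Cache txs above the floor but below minrelayfee."""
        if FLOOR_FEERATE <= feerate < MIN_RELAY_FEERATE:
            self.cache[txid] = (tx, self.now())
            return "cached"
        return "relay-normally"

    def add_child(self, parent_txid, child_tx, child_feerate):
        """If a sufficiently-paying child arrives, process the pair."""
        self._sweep()
        if parent_txid in self.cache and child_feerate >= MIN_RELAY_FEERATE:
            parent_tx, _ = self.cache.pop(parent_txid)
            self.submit_package([parent_tx, child_tx])
            return True
        return False

    def _sweep(self):
        """Drop cached parents older than the timeout."""
        cutoff = self.now() - CACHE_TIMEOUT
        self.cache = {k: v for k, v in self.cache.items() if v[1] >= cutoff}
```

A fuller version would key the check on the pair's combined feerate rather than the child's alone, but the shape of the trick is the same.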

Cheers,
Rusty.
[1] Miners have generally been happy with Defaults Which Are Good For The
Network, but I feel a long term development aim should be to reduce
such cases to smaller and smaller corners.
[2] The actual condition is subtler, but this is a clear subset AFAICT.
[3] For Lightning, we don't care about child-pays-for-grandparent etc.


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2018-12-02 Thread ZmnSCPxj via bitcoin-dev
Good morning Bob,

Would `SIGHASH_SINGLE` work?
Commitment transactions have a single input but multiple outputs.

Regards,
ZmnSCPxj


Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Sunday, December 2, 2018 11:08 PM, Bob McElrath  wrote:

> I've long thought about using SIGHASH_SINGLE, then either party can add inputs
> to cover whatever fee they want on channel close and it doesn't have to be
> pre-planned at setup.
>
> For Lightning I think you'd want to cross-sign, e.g. Alice signs her input
> and Bob's output, while Bob signs his input and Alice's output. This would
> demotivate the two parties from picking apart the transaction and broadcasting
> one of the two SIGHASH_SINGLE's in a Lightning transaction.
>
> Matt Corallo via bitcoin-dev [bitcoin-dev@lists.linuxfoundation.org] wrote:
>
> > (cross-posted to both lists to make lightning-dev folks aware, please take
> > lightning-dev off CC when responding).
> > As I'm sure everyone is aware, Lightning (and other similar systems) work by
> > exchanging pre-signed transactions for future broadcast. Of course in many
> > cases this requires either (a) predicting what the feerate required for
> > timely confirmation will be at some (or, really, any) point in the future,
> > or (b) utilizing CPFP and dependent transaction relay to allow parties to
> > broadcast low-feerate transactions with children created at broadcast-time
> > to increase the effective feerate. Ideally transactions could be constructed
> > to allow for after-the-fact addition of inputs to increase fee without CPFP
> > but it is not always possible to do so.
> > Option (a) is rather obviously intractable, and implementation complexity
> > has led to channel failures in lightning in practice (as both sides must
> > agree on a reasonable-in-the-future feerate). Option (b) is a much more
> > natural choice (assuming some form of as-yet-unimplemented package relay on
> > the P2P network) but is made difficult due to complexity around RBF/CPFP
> > anti-DoS rules.
> > For example, if we take a simplified lightning design with pre-signed
> > commitment transaction A with one 0-value anyone-can-spend output available
> > for use as a CPFP output, a counterparty can prevent confirmation
> > of/significantly increase the fee cost of confirming A by chaining a
> > large-but-only-moderate-feerate transaction off of this anyone-can-spend
> > output. This transaction, B, will have a large absolute fee while making the
> > package (A, B) have a low-ish feerate, placing it solidly at the bottom of
> > the mempool but without significant risk of it getting evicted during memory
> > limiting. This large absolute fee forces a counterparty which wishes to have
> > the commitment transaction confirm to increase on this absolute fee in order
> > to meet RBF rules.
> > For this reason (and many other similar attacks utilizing the package size
> > limits), in discussing the security model around CPFP, we've generally
> > considered it too difficult to prevent third parties which are able to
> > spend an output of a transaction from delaying its confirmation, at least
> > until/unless the prevailing feerates decline and some of the mempool backlog
> > gets confirmed.
> > You'll note, however, that this attack doesn't have to be permanent to work
> >
> > -   Lightning's (and other contracting/payment channel systems') security
> > model assumes the ability to get such commitment transactions confirmed 
> > in a
> > timely manner, as otherwise HTLCs may time out and counterparties can 
> > claim
> > the timeout-refund before we can claim the HTLC using the hash-preimage.
> >
> >
> > To partially-address the CPFP security model considerations, a next step
> > might involve tweaking Lightning's commitment transaction to have two
> > small-value outputs which are immediately spendable, one by each channel
> > participant, allowing them to chain children off without allowing unrelated
> > third-parties to chain children. Obviously this does not address the
> > specific attack so we need a small tweak to the anti-DoS CPFP rules in
> > Bitcoin Core/BIP 125:
> > The last transaction which is added to a package of dependent transactions
> > in the mempool must:
> >
> > -   Have no more than one unconfirmed parent,
> > -   Be of size no greater than 1K in virtual size.
> > (for implementation sanity, this would effectively reduce all mempool
> > package size limits by 1 1K-virtual-size transaction, and the last 
> > would be
> > "allowed to violate the limits" as long as it meets the above criteria).
> >
> >
> > For contracting applications like lightning, this means that as long as the
> > transaction we wish to confirm (in this case the commitment transaction)
> >
> > -   Has only two immediately-spendable (ie non-CSV) outputs,
> > -   where each immediately-spendable output is only spendable by one
> > counterparty,
> >
> > -   and is no larger than MAX_PACKAGE_VIRTUAL_S