>
>
> I don’t see how? Let’s imagine Party A has two spendable outputs, now
> they stuff the package size on one of their spendable outputs until it is
> right at the limit, add one more on their other output (to meet the
> Carve-Out), and now Party B can’t do anything.

Matt: With the proposed change, party B would always be able to add a child
to its output, regardless of what games party A is playing.


Thanks for the explanation, Jeremy!


> In terms of relay cost, if an ancestor can be replaced, it will invalidate
> all its children, meaning that no one paid for that broadcasting. This can
> be fixed by appropriately assessing Replace By Fee update fees to
> encapsulate all descendants, but there are some tricky edge cases that make
> this non-obvious to do.


Relay cost is the obvious problem with just naively removing all limits.
Relaxing the current rules to allow adding a child to each output, as long
as the child has a single unconfirmed parent, would still only allow free
relay of O(size of parent) extra data (which might not be that bad; similar
to the carve-out rule, we could put a limit on the child size). This would
be enough for the current LN use case (increasing the fee of the commitment
tx), but not for OP_SECURETHEBAG I guess, as you need the tree of children,
as you mention.
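
To make the shape of that rule concrete, here is a rough C++ sketch of the
acceptance check I have in mind. The types, names and the 10k vB cap are
just illustrative stand-ins (not the actual bitcoind mempool interfaces),
so read it as pseudocode under those assumptions:

    #include <cstdint>
    #include <set>
    #include <vector>

    using Txid = uint64_t; // stand-in for a real txid type

    struct CandidateTx {
        std::vector<Txid> parents; // txids referenced by the candidate's inputs
        int64_t vsize;             // virtual size of the candidate child
    };

    // Illustrative cap, in the spirit of the carve-out's 10,000 vB child limit.
    const int64_t MAX_RELAXED_CHILD_VSIZE = 10000;

    // "The last transaction added to a package must have no more than one
    // unconfirmed parent", plus a size cap so the extra free relay per parent
    // output stays O(size of parent). A fuller version would count all
    // in-mempool ancestors, so exempt children can't be chained on each other.
    bool PassesRelaxedRule(const CandidateTx& child, const std::set<Txid>& mempool_txids)
    {
        int unconfirmed_parents = 0;
        for (const Txid& parent : child.parents) {
            if (mempool_txids.count(parent)) ++unconfirmed_parents;
        }
        if (unconfirmed_parents > 1) return false; // normal package limits apply
        if (unconfirmed_parents == 1 && child.vsize > MAX_RELAXED_CHILD_VSIZE) return false;
        return true; // otherwise exempt from the descendant/package size limits
    }

Since an output can only be spent by one transaction in a consistent
mempool, this also naturally bounds it to one such child per output of the
parent.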

I imagine walking the mempool wouldn't change much, as you would only have
one extra child per output. But here I'm just speculating, as I don't know
the code well enough to know what the diff would look like.


> OP_SECURETHEBAG can help with the LN issue by putting all HTLCs into a
> tree where they are individualized leaf nodes with a preceding CSV. Then,
> the above fix would ensure each HTLC always has time to close properly as
> they would have individualized lockpoints. This is desirable for some
> additional reasons and not for others, but it should "work".


This is interesting for an LN commitment! You could really hide every
output of the commitment within OP_STB, which could either allow bypassing
the fee-pinning attack entirely (if the output cannot be spent unconfirmed)
or adding fees to the commitment using SIGHASH_SINGLE|ANYONECANPAY.
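
For the latter: a signature made with SIGHASH_SINGLE|ANYONECANPAY commits
only to the input being signed and to the output at the same index (plus tx
version and locktime), so whoever broadcasts the commitment can append fee
inputs and a change output without invalidating the pre-signed signature. A
toy C++ sketch of that (the types are minimal stand-ins, not real Bitcoin
Core classes):

    #include <cstdint>
    #include <vector>

    struct TxIn  { /* prevout, witness, sequence */ };
    struct TxOut { int64_t value; /* scriptPubKey */ };

    struct Tx {
        std::vector<TxIn>  vin;
        std::vector<TxOut> vout;
    };

    // Assume vin[0]/vout[0] are the pre-signed commitment input and output,
    // signed with SIGHASH_SINGLE|ANYONECANPAY. As long as they stay at index 0,
    // inputs and outputs appended at higher indices leave that signature valid.
    Tx AddFees(Tx commitment, const std::vector<TxIn>& fee_inputs, const TxOut& change)
    {
        for (const TxIn& in : fee_inputs) commitment.vin.push_back(in); // inputs paying the fee
        commitment.vout.push_back(change);                              // change back to the broadcaster
        return commitment;
    }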

- Johan

On Sun, Oct 27, 2019 at 8:13 PM Jeremy <jlru...@mit.edu> wrote:

> Johan,
>
> The issues with mempool limits for OP_SECURETHEBAG are related, but have
> distinct solutions.
>
> There are two main categories of mempool issues at stake. One is relay
> cost, the other is mempool walking.
>
> In terms of relay cost, if an ancestor can be replaced, it will invalidate
> all its children, meaning that no one paid for that broadcasting. This can
> be fixed by appropriately assessing Replace By Fee update fees to
> encapsulate all descendants, but there are some tricky edge cases that make
> this non-obvious to do.
>
> The other issue is walking the mempool -- many of the algorithms we use in
> the mempool can be N log N or N^2 in the number of descendants. (Simple
> example: an input chain of length N to a fan-out of N outputs that are all
> spent is O(N^2) to look up ancestors per child, unless we're caching.)
>
> The other sort of walking issue is where the indegree or outdegree for a
> transaction is high. Then when we are computing descendants or ancestors we
> will need to visit it multiple times. To avoid re-expanding a node, we
> currently cache it with a set. This uses O(N) extra memory and makes O(N
> log N) comparisons (we use std::set, not unordered_set).
>
> I just opened a PR which should help with some of the walking issues by
> allowing us to cheaply cache which nodes we've visited on a run. It makes a
> lot of previously O(N log N) stuff O(N) and doesn't allocate as much new
> memory. See: https://github.com/bitcoin/bitcoin/pull/17268.
>
>
> Now, for OP_SECURETHEBAG we want a particular property that is very
> different from with lightning htlcs (as is). We want that an unlimited
> number of child OP_SECURETHEBAG txns may extend from a confirmed
> OP_SECURETHEBAG, and then at the leaf nodes, we want the same rule as
> lightning (one dangling unconfirmed to permit channels).
>
> OP_SECURETHEBAG can help with the LN issue by putting all HTLCs into a
> tree where they are individualized leaf nodes with a preceding CSV. Then,
> the above fix would ensure each HTLC always has time to close properly as
> they would have individualized lockpoints. This is desirable for some
> additional reasons and not for others, but it should "work".
>
>
>
> --
> @JeremyRubin <https://twitter.com/JeremyRubin>
>
>
> On Fri, Oct 25, 2019 at 10:31 AM Matt Corallo <lf-li...@mattcorallo.com>
> wrote:
>
>> I don’t see how? Let’s imagine Party A has two spendable outputs, now
>> they stuff the package size on one of their spendable outputs until it is
>> right at the limit, add one more on their other output (to meet the
>> Carve-Out), and now Party B can’t do anything.
>>
>> On Oct 24, 2019, at 21:05, Johan Torås Halseth <joha...@gmail.com> wrote:
>>
>>
>> It essentially changes the rule to always allow CPFP-ing the commitment
>> as long as there is an output available without any descendants. It changes
>> the requirement from "you always need at least, and exactly, one non-CSV
>> output per party" to "you always need at least one non-CSV output per
>> party".
>>
>> I realize these limits are there for a reason, but I'm wondering if we
>> could relax them, especially now that jeremyrubin has expressed problems
>> with the current mempool limits.
>>
>> On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo <lf-li...@mattcorallo.com>
>> wrote:
>>
>>> I may be missing something, but I'm not sure how this changes anything?
>>>
>>> If you have a commitment transaction, you always need at least, and
>>> exactly, one non-CSV output per party. The fact that there is a size
>>> limitation on the transaction that spends for carve-out purposes only
>>> affects how many other inputs/outputs you can add, but somehow I doubt
>>> it's ever going to be a large enough number to matter.
>>>
>>> Matt
>>>
>>> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
>>> > Reviving this old thread now that the recently released RC for bitcoind
>>> > 0.19 includes the above-mentioned carve-out rule.
>>> >
>>> > In an attempt to pave the way for more robust CPFP of on-chain contracts
>>> > (Lightning commitment transactions), the carve-out rule was added in
>>> > https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
>>> > an implementation of a new commitment format for utilizing the Bring
>>> > Your Own Fees strategy using CPFP, I’m wondering if the special case
>>> > rule should have been relaxed a bit, to avoid the need for adding a 1
>>> > CSV to all outputs (in case of Lightning this means HTLC scripts would
>>> > need to be changed to add the CSV delay).
>>> >
>>> > Instead, what about letting the rule be
>>> >
>>> > The last transaction which is added to a package of dependent
>>> > transactions in the mempool must:
>>> >   * Have no more than one unconfirmed parent.
>>> >
>>> > This would of course allow adding a large transaction to each output of
>>> > the unconfirmed parent, which in effect would allow an attacker to
>>> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
>>> > this a problem with the current mempool acceptance code in bitcoind? I
>>> > would imagine evicting transactions based on feerate when the max
>>> > mempool size is met handles this, but I’m asking since it seems like
>>> > there have been several changes to the acceptance code and eviction
>>> > policy since the limit was first introduced.
>>> >
>>> > - Johan
>>> >
>>> >
>>> > On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell <ru...@rustcorp.com.au> wrote:
>>> >
>>> >     Matt Corallo <lf-li...@mattcorallo.com> writes:
>>> >     >>> Thus, even if you imagine a steady-state mempool growth, unless
>>> >     >>> the "near the top of the mempool" criteria is "near the top of the
>>> >     >>> next block" (which is obviously *not* incentive-compatible)
>>> >     >>
>>> >     >> I was defining "top of mempool" as "in the first 4 MSipa", i.e.
>>> >     >> next block, and assumed you'd only allow RBF if the old package
>>> >     >> wasn't in the top and the replacement would be.  That seems
>>> >     >> incentive compatible; more than the current scheme?
>>> >     >
>>> >     > My point was, because of block time variance, even that criteria
>>> >     doesn't hold up. If you assume a steady flow of new transactions and
>>> >     one or two blocks come in "late", suddenly "top 4MWeight" isn't
>>> >     likely to get confirmed until a few blocks come in "early". Given
>>> >     block variance within a 12 block window, this is a relatively likely
>>> >     scenario.
>>> >
>>> >     [ Digging through old mail. ]
>>> >
>>> >     Doesn't really matter.  Lightning close algorithm would be:
>>> >
>>> >     1.  Give bitcoind unilateral close.
>>> >     2.  Ask bitcoind what the current expedited fee is (or survey your
>>> >         mempool).
>>> >     3.  Give bitcoind child "push" tx at that total feerate.
>>> >     4.  If next block doesn't contain unilateral close tx, goto 2.
>>> >
>>> >     In this case, if you allow a simplified RBF where 'you can replace if
>>> >     1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3.
>>> >     old tx isn't', it works.
>>> >
>>> >     It allows someone 100k of free tx spam, sure.  But it's simple.
>>> >
>>> >     We could further restrict it by marking the unilateral close somehow
>>> >     to say "gonna be pushed" and further limiting the child tx weight
>>> >     (say, 5kSipa?) in that case.
>>> >
>>> >     Cheers,
>>> >     Rusty.
>>> >     _______________________________________________
>>> >     Lightning-dev mailing list
>>> >     lightning-...@lists.linuxfoundation.org
>>> >     https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>> >
>>>
>> _______________________________________________
>> Lightning-dev mailing list
>> lightning-...@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
