Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-18 Thread darosior via bitcoin-dev
James,

You seem to imply that the scenario described isn't prevented today. It is.
Mempool acceptance of a replacement depends not only on the transaction's
feerate but also on its absolute fee [0]. That's why I raised it in the
first place...

Antoine

[0] 
https://github.com/bitcoin/bitcoin/blob/66636ca438cb65fb18bcaa4540856cef0cee2029/src/validation.cpp#L944-L947

Of course, if you are evicting transactions then you don't have the issue I
mentioned, so it's fine doing so.
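To make the two checks concrete, here is a minimal sketch (my own naming, not Bitcoin Core's actual code) of the replacement rules referenced in [0]: a replacement must pay both a strictly higher feerate and a higher absolute fee (plus enough new fee to cover relaying itself) than what it evicts.

```python
# Hypothetical sketch of the two replacement checks: a replacement must
# improve both the feerate AND the absolute fee of what it evicts.
def acceptable_replacement(new_fee, new_vsize, old_fee, old_vsize,
                           min_relay_feerate=1.0):
    """Return True if the replacement passes both checks (sat / vB units)."""
    new_feerate = new_fee / new_vsize
    old_feerate = old_fee / old_vsize
    if new_feerate <= old_feerate:
        return False            # feerate must strictly improve
    if new_fee < old_fee + min_relay_feerate * new_vsize:
        return False            # absolute fee must improve too
    return True

# The scenario above: a higher-feerate but lower-absolute-fee
# replacement (fee 1 sat at 1 sat/vB vs fee 10 sats at 0.01 sat/vB)
# is rejected by the absolute-fee check.
assert not acceptable_replacement(new_fee=1, new_vsize=1,
                                  old_fee=10, old_vsize=1000)
```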
 Original Message 
On Feb 17, 2022, 19:18, James O'Beirne < james.obei...@gmail.com> wrote:

>> Is it really true that miners do/should care about that?
>
> De facto, any miner running an unmodified version of bitcoind doesn't
> care about anything aside from ancestor fee rate, given that the
> BlockAssembler as-written orders transactions for inclusion by
> descending ancestor fee-rate and then greedily adds them to the block
> template. [0]
>
> If anyone has any indication that there are miners running forks of
> bitcoind that change this behavior, I'd be curious to know it.
>
> Along the lines of what AJ wrote, optimal transaction selection is
> NP-hard (knapsack problem). Any time that a miner spends deciding how
> to assemble the next block is time not spent grinding on the nonce, and
> so I'm skeptical that miners in practice are currently doing anything
> that isn't fast and simple like the default implementation: sorting
> fee-rate in descending order and then greedily packing.
>
> But it would be interesting to hear evidence to the contrary.
>
> ---
>
> You can make the argument that transaction selection is just a function
> of mempool contents, and so mempool maintenance criteria might be the
> thing to look at. Mempool acceptance is gated based on a minimum
> feerate[1]. Mempool eviction (when running low on space) happens on
> the basis of max(self_feerate, descendant_feerate) [2]. So even in the
> mempool we're still talking in terms of fee rates, not absolute fees.
>
> That presents us with the "is/ought" problem: just because the mempool
> *is* currently gating only on fee rate doesn't mean that's optimal. But
> if the whole point of the mempool is to hold transactions that will be
> mined, and if there's good reason that txns are chosen for mining based
> on fee rate (it's quick and good enough), then it seems like fee rate
> is the approximation that should ultimately prevail for txn
> replacement.
>
> [0]:
> https://github.com/bitcoin/bitcoin/blob/master/src/node/miner.cpp#L310-L320
> [1]:
> https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1106
> [2]:
> https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1138-L1144
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-18 Thread Prayank via bitcoin-dev
> If anyone has any indication that there are miners running forks of bitcoind 
> that change this behavior, I'd be curious to know it.
It is possible because some mining pools use bitcoind with custom patches. 

Example: https://twitter.com/0xB10C/status/1461392912600776707 (f2pool)

-- 
Prayank

A3B1 E430 2298 178F


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-18 Thread Prayank via bitcoin-dev
> I suspect the "economically rational" choice would be to happily trade off 
> that immediate loss against even a small chance of a simpler policy 
> encouraging higher adoption of bitcoin, _or_ a small chance of more on-chain 
> activity due to higher adoption of bitcoin protocols like lightning and thus 
> a lower chance of an empty mempool in future.

Is this another way of saying a few developers will decide RBF policy for 
miners and they should follow it because it is the only way bitcoin gets more 
adoption? On-chain activity depends on a lot of things. I doubt any change
in policy will affect it any time soon, and miners should have the freedom to
decide things that aren't consensus rules.

The Lightning network contributes to on-chain activity only through the
opening and closing of channels. Based on the chart in the link below for
channels opened/closed per block, its contribution to fees is less than 1%:

https://txstats.com/dashboard/db/lightning-network?orgId=1&from=now-6M&to=now

-- 
Prayank

A3B1 E430 2298 178F


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-18 Thread Antoine Riard via bitcoin-dev
While I roughly agree with the thesis that different replacement policies
offer marginal block reward gains _in the current state_ of the ecosystem,
I would be more conservative about extending the conclusions to the
medium/long-term future.

> I suspect the "economically rational" choice would be to happily trade
> off that immediate loss against even a small chance of a simpler policy
> encouraging higher adoption of bitcoin, _or_ a small chance of more
> on-chain activity due to higher adoption of bitcoin protocols like
> lightning and thus a lower chance of an empty mempool in future.

This assumes that the economic interests of the different classes of
actors in the Bitcoin ecosystem are not only well understood but also
aligned. In the past we have seen mining actors delay the adoption of
protocol upgrades which were expected to encourage higher adoption of
Bitcoin. Further, while miners likely have an incentive to see an
increase in on-chain activity, there is also the possibility that
lightning becomes so throughput-efficient that it drains the mempool
backlog, to the point where block space demand is no longer high enough
to pay back the cost of mining hardware and operational infrastructure,
or at least no longer matches expected returns on mining investment.

Of course, it could be argued that since a utxo-sharing protocol like
lightning compresses the number of payments per unit of block space, it
lowers the fee burden, making Bitcoin as a payment system far more
attractive to a wider population of users, ultimately increasing block
space demand and satisfying the miners.

In the current state of knowledge, this hypothesis sounds the most
plausible. Still, I would say it's better to be cautious until we better
understand the interactions between the different layers of the Bitcoin
ecosystem.

> Certainly those percentages can be expected to double every four years as
> the block reward halves (assuming we don't also reduce the min relay fee
> and block min tx fee), but I think for both miners and network stability,
> it'd be better to have the mempool backlog increase over time, which
> would both mean there's no/less need to worry about the special case of
> the mempool being empty, and give a better incentive for people to pay
> higher fees for quicker confirmations.

Intuitively, if we assume that liquidity isn't free on lightning [0],
there should be a subjective equilibrium at which it's cheaper to open
new channels to shorten one's own graph traversal than to keep paying
high routing fees.

As the core of the network becomes busier, I think we should see more LN
actors doing that kind of arbitrage, guaranteeing a mempool backlog in
the long term.

> If you really want to do that
> optimally, I think you have to have a mempool that retains conflicting
> txs and runs a dynamic programming solution to pick the best set, rather
> than today's simple greedy algorithms both for building the block and
> populating the mempool?

As of today, I think the power efficiency of mining chips and access to
affordable sources of energy are more significant factors in the
profitability of mining operations than the optimality of block
construction/replacement policy, which IMO weakens the argument that
small deltas in block reward gains matter much.

That said, the former factors might become commodities while the latter
becomes a competitive advantage. That could incentivize the development
of such optimized selection algorithms, potentially in a covert way, as
we have seen with AsicBoost.

> Is there a plausible example where the difference isn't that marginal?

The paradigm might change in the future. If we see the deployment of
channel factories/payment pools, we might have users competing to spend a
shared-utxo with different liquidity needs and thus ready to overbid. Lack
of a "conflict pool" logic might make you lose income.

> Always accepting (package/descendent) fee rate increases removes the
> possibility of pinning entirely, I think

I think the pinnings we're affected by today are due to the ability of a
malicious counterparty to halt the on-chain resolution of the channel.
The presence of a pinning commitment transaction with a low chance of
confirmation (abusing BIP125 rule 3) prevents the honest counterparty
from fee-bumping her own version of the commitment, and thus from
redeeming an HTLC before timelock expiration. As long as one commitment
confirms, independently of who issued it, the pinning is over. I think
moving to replace-by-feerate allows the honest counterparty to fee-bump
her commitment, thus offering a compelling block space demand, or forces
the malicious counterparty into a fee race.
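The pinning mechanics can be illustrated with some hypothetical numbers (the sizes and fees below are mine, chosen only for illustration): a large, low-feerate pinning commitment beats a small, high-feerate honest bump on absolute fee, so an absolute-fee rule (BIP125 rule 3 style) rejects the honest replacement, while a feerate-only rule accepts it.

```python
# Hypothetical numbers illustrating the pinning described above.
# Malicious commitment package: 100,000 vB of low-feerate junk at 2 sat/vB.
pin_fee, pin_vsize = 200_000, 100_000       # large absolute fee, low feerate
# Honest fee-bumped commitment: small but high feerate.
honest_fee, honest_vsize = 20_000, 200      # 100 sat/vB

# BIP125 rule 3 style check: replacement must beat the absolute fee.
rule3_allows = honest_fee > pin_fee
# Replace-by-feerate check: replacement must beat the feerate.
feerate_allows = honest_fee / honest_vsize > pin_fee / pin_vsize

assert not rule3_allows     # pinned: the honest bump cannot replace
assert feerate_allows       # a feerate rule lets the honest bump through
```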


To gather my thinking on the subject: the replace-by-feerate policy
could produce lower-fee blocks in today's environment of empty or
under-filled blocks. That said, the delta sounds marginal enough w.r.t.
the other factors of a mining business to not be worried (or 

Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-17 Thread James O'Beirne via bitcoin-dev
> Is it really true that miners do/should care about that?

De facto, any miner running an unmodified version of bitcoind doesn't
care about anything aside from ancestor fee rate, given that the
BlockAssembler as-written orders transactions for inclusion by
descending ancestor fee-rate and then greedily adds them to the block
template. [0]

If anyone has any indication that there are miners running forks of
bitcoind that change this behavior, I'd be curious to know it.

Along the lines of what AJ wrote, optimal transaction selection is
NP-hard (knapsack problem). Any time that a miner spends deciding how
to assemble the next block is time not spent grinding on the nonce, and
so I'm skeptical that miners in practice are currently doing anything
that isn't fast and simple like the default implementation: sorting
fee-rate in descending order and then greedily packing.

But it would be interesting to hear evidence to the contrary.
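The "fast and simple" selection described above can be sketched in a few lines (a toy model of my own; it ignores ancestor packages, sigops, and everything else the real BlockAssembler handles):

```python
def greedy_block_template(mempool, max_vsize=1_000_000):
    """Toy greedy selection: sort by feerate (descending), then pack
    whole transactions that still fit. Not Bitcoin Core's actual code."""
    template, used = [], 0
    for tx in sorted(mempool, key=lambda t: t["fee"] / t["vsize"],
                     reverse=True):
        if used + tx["vsize"] <= max_vsize:
            template.append(tx)
            used += tx["vsize"]
    return template

mempool = [
    {"txid": "a", "fee": 50_000, "vsize": 500},       # 100 sat/vB
    {"txid": "b", "fee": 10_000, "vsize": 1_000},     # 10 sat/vB
    {"txid": "c", "fee": 999_000, "vsize": 999_000},  # 1 sat/vB, too big
]
picked = [t["txid"] for t in greedy_block_template(mempool)]
assert picked == ["a", "b"]   # highest feerates first; "c" no longer fits
```

Note the greediness: "c" is skipped entirely once it no longer fits, with no backtracking to see whether a different combination would pay more.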

---

You can make the argument that transaction selection is just a function
of mempool contents, and so mempool maintenance criteria might be the
thing to look at. Mempool acceptance is gated based on a minimum
feerate[1].  Mempool eviction (when running low on space) happens on
the basis of max(self_feerate, descendant_feerate) [2]. So even in the
mempool we're still talking in terms of fee rates, not absolute fees.
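The eviction criterion in [2] can be sketched as a scoring function (again a toy of my own naming): a package survives on the better of its own feerate and its feerate including descendants, so a low-feerate parent with a high-feerate CPFP child is protected.

```python
def eviction_score(self_fee, self_vsize, desc_fee, desc_vsize):
    """Toy version of the eviction criterion referenced above: score a
    transaction by max(own feerate, feerate including descendants);
    the lowest-scoring entries are evicted first."""
    self_rate = self_fee / self_vsize
    with_desc = (self_fee + desc_fee) / (self_vsize + desc_vsize)
    return max(self_rate, with_desc)

# A 1 sat/vB parent with a 100 sat/vB CPFP child scores ~34 sat/vB,
# well above its own feerate.
assert eviction_score(1_000, 1_000, 50_000, 500) > 30
```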

That presents us with the "is/ought" problem: just because the mempool
*is* currently gating only on fee rate doesn't mean that's optimal. But
if the whole point of the mempool is to hold transactions that will be
mined, and if there's good reason that txns are chosen for mining based
on fee rate (it's quick and good enough), then it seems like fee rate
is the approximation that should ultimately prevail for txn
replacement.


[0]:
https://github.com/bitcoin/bitcoin/blob/master/src/node/miner.cpp#L310-L320
[1]:
https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1106
[2]:
https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1138-L1144


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-17 Thread Anthony Towns via bitcoin-dev
On Thu, Feb 10, 2022 at 07:12:16PM -0500, Matt Corallo via bitcoin-dev wrote:
> This is where *all* the complexity comes from. If our goal is to "ensure a
> bump increases a miner's overall revenue" (thus not wasting relay for
> everyone else), then we precisely *do* need
> > Special consideration for "what should be in the next
> > block" and/or the caching of block templates seems like an imposing
> > dependency
> Whether a transaction increases a miner's revenue depends precisely on
> whether the transaction (package) being replaced is in the next block - if
> it is, you care about the absolute fee of the package and its replacement.

On Thu, Feb 10, 2022 at 11:44:38PM +, darosior via bitcoin-dev wrote:
> It's not that simple. As a miner, if I have less than 1MvB of transactions
> in my mempool, I don't want a 10 sat/vB transaction paying 10 sats replaced
> by a 100 sat/vB transaction paying only 1 sat.

Is it really true that miners do/should care about that?

If you did this particular example, the miner would be losing 9 sats
in fees, which would be at most 1.44 *millionths* of a percent of the
block reward with the subsidy at 6.25BTC per block, even if there were
no other transactions in the mempool. Even cumulatively, 10sats/vb over
1MB versus 100sats/vb over 10kB is only a 1.44% loss of block revenue.

I suspect the "economically rational" choice would be to happily trade
off that immediate loss against even a small chance of a simpler policy
encouraging higher adoption of bitcoin, _or_ a small chance of more
on-chain activity due to higher adoption of bitcoin protocols like
lightning and thus a lower chance of an empty mempool in future.

If the network has an "empty mempool" (say less than 2MvB-10MvB of
backlog even if you have access to every valid 1+ sat/vB tx on any node
connected to the network), then I don't think you'll generally have txs
with fee rates greater than ~20 sat/vB (ie 20x the minimum fee rate),
which means your maximum loss is about 3% of block revenue, at least
while the block subsidy remains at 6.25BTC/block.
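The cumulative figures above check out directly (a quick sketch, assuming a 1 sat/vB relay floor and a full 1MvB of replaced transactions):

```python
# Checking the arithmetic above (subsidy 6.25 BTC = 625,000,000 sats).
subsidy = 625_000_000

# Cumulative case: 10 sat/vB over 1MvB replaced by 100 sat/vB over 10kvB.
loss = 10 * 1_000_000 - 100 * 10_000
assert loss == 9_000_000
assert abs(loss / subsidy * 100 - 1.44) < 0.01   # ~1.44% of block revenue

# Empty-mempool worst case: ~20 sat/vB txs replaced down to the
# 1 sat/vB floor across a full block.
worst = (20 - 1) * 1_000_000
assert round(worst / subsidy * 100) == 3         # ~3% of block revenue
```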

Certainly those percentages can be expected to double every four years as
the block reward halves (assuming we don't also reduce the min relay fee
and block min tx fee), but I think for both miners and network stability,
it'd be better to have the mempool backlog increase over time, which
would both mean there's no/less need to worry about the special case of
the mempool being empty, and give a better incentive for people to pay
higher fees for quicker confirmations.

If we accept that logic (and assuming we had some additional policy
to prevent p2p relay spam due to replacement txs), we could make
the mempool accept policy for replacements just be (something like)
"[package] feerate is greater than max(descendant fee rate)", which
seems like it'd be pretty straightforward to deal with in general?
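That acceptance rule is indeed straightforward to express (a sketch under my own naming; it assumes each conflicting transaction's feerate is measured together with its descendants):

```python
# Sketch of the proposed rule: accept a replacement iff its (package)
# feerate beats the feerate of everything it would evict, where each
# evicted entry is (fee, vsize) measured including descendants.
def replacement_ok(new_fee, new_vsize, evicted):
    new_rate = new_fee / new_vsize
    return all(new_rate > fee / vsize for fee, vsize in evicted)

# A higher feerate wins even when the absolute fee drops...
assert replacement_ok(1_000, 10, [(10_000, 1_000)])   # 100 > 10 sat/vB
# ...but not if any evicted package has a better feerate.
assert not replacement_ok(1_000, 10, [(10_000, 1_000), (2_000, 10)])
```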



Thinking about it a little more; I think the decision as to whether
you want to have a "100kvB at 10sat/vb" tx or a conflicting "1kvB at
100sat/vb" tx in your mempool if you're going to take into account
unrelated, lower fee rate txs that are also in the mempool makes block
building "more" of an NP-hard problem and makes the greedy solution
we've currently got much more suboptimal -- if you really want to do that
optimally, I think you have to have a mempool that retains conflicting
txs and runs a dynamic programming solution to pick the best set, rather
than today's simple greedy algorithms both for building the block and
populating the mempool?

For example, if you had two such replacements come through the network,
a miner could want to flip from initially accepting the first replacement,
to unaccepting it:

Initial mempool: two big txs at 100k each, many small transactions at
15s/vB and 1s/vB

 [100kvB at 20s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
   -> 0.148 BTC for 1MvB (100*20 + 850*15 + 50*1)

Replacement for the 20s/vB tx paying a higher fee rate but lower total
fee; that's worth including:

 [10kvB at 100s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
   -> 0.1499 BTC for 1MvB (10*100 + 850*15 + 100*12 + 40*1)

Later, replacement for the 12s/vB tx comes in, also paying higher fee
rate but lower total fee. Worth including, but only if you revert the
original replacement:

 [100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
   -> 0.1575 BTC for 1MvB (150*20 + 850*15)

 [10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
   -> 0.1484 BTC for 1MvB (10*100 + 50*20 + 850*15 + 90*1)
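The four totals above can be recomputed mechanically (a sketch; it assumes the input lists are already sorted by feerate descending and that only the 1 sat/vB tail is partially minable, as in the figures above):

```python
# Recomputing the block-fee totals above (sizes in kvB, rates in sat/vB;
# 1 kvB at 1 sat/vB = 1,000 sats).
def block_fees(txs, cap_kvb=1000):
    """txs: (size_kvB, feerate) pairs sorted by feerate desc; fill
    greedily to the cap, taking a partial tail if needed."""
    total_sats, used = 0, 0
    for size, rate in txs:
        take = min(size, cap_kvb - used)
        total_sats += take * rate * 1000
        used += take
    return total_sats / 100_000_000   # sats -> BTC

assert block_fees([(100, 20), (850, 15), (1000, 1)]) == 0.148
assert block_fees([(10, 100), (850, 15), (100, 12), (1000, 1)]) == 0.1499
assert block_fees([(100, 20), (50, 20), (850, 15), (1000, 1)]) == 0.1575
assert block_fees([(10, 100), (50, 20), (850, 15), (1000, 1)]) == 0.1484
```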

Algorithms/mempool policies you might have, and their results with
this example:

 * current RBF rules: reject both replacements because they don't
   increase the absolute fee, thus get the minimum block fees of
   0.148 BTC

 * reject RBF unless it increases the fee rate, and get 0.1484 BTC in
   fees

 * reject RBF if it's lower fee rate or immediately decreases the block
   reward: so, accept the 

Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-16 Thread Billy Tetrud via bitcoin-dev
>  the validity of a sponsor txn is "monotonically" true at any point after
> the inclusion of the sponsored txn in a block.

Oh I see his point now. If sponsors were valid at any point in the future,
not only would a utxo index be needed but an index of all transactions.
Yeah, that wouldn't be good. And the solution of bounding the sponsor
transaction to be valid in some window after the transaction is included
doesn't solve the original point of making sponsor transactions never
become invalid. Thanks for the clarification James, and good point Jeremy.

On Wed, Feb 16, 2022 at 1:19 PM James O'Beirne 
wrote:

> > What do you mean by monotone in the context of sponsor transactions?
>
> I take this to mean that the validity of a sponsor txn is
> "monotonically" true at any point after the inclusion of the sponsored
> txn in a block.
>
> > And when you say tx-index, do you mean an index for looking up a
> > transaction by its ID? Is that not already something nodes do?
>
> Indeed, not all nodes have this ability. Each bitcoind node has a map
> of unspent coins which can be referenced by outpoint i.e.(txid, index),
> but the same isn't true for all historical transactions. I
> (embarrassingly) forgot this in the prior post.
>
> The map of (txid -> transaction) for all time is a separate index that
> must be enabled via the `-txindex=1` flag; it isn't enabled by default
> because it isn't required for consensus and its growth is unbounded.
>
> > > The current consensus threshold for transactions to become invalid
> > > is a 100 block reorg
> >
> > What do you mean by this? The only 100 block period I'm aware of is
> > the coinbase cooldown period.
>
> If there were a reorg deeper than 100 blocks, it would permanently
> invalidate any transactions spending the recently-matured coinbase
> subsidy in any block between $new_reorg_tip and ($former_tip_height -
> 100). These invalidated spends would not be able to be reorganized
> into a new replacement chain.
>
> How this differs in practice or principle from a "regular" double-spend
> via reorg I'll leave for another message. I'm not sure that I understand
> that myself. Personally I think if we hit a >100 block reorg, we've got
> bigger issues than coinbase invalidation.
>


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-16 Thread James O'Beirne via bitcoin-dev
> What do you mean by monotone in the context of sponsor transactions?

I take this to mean that the validity of a sponsor txn is
"monotonically" true at any point after the inclusion of the sponsored
txn in a block.

> And when you say tx-index, do you mean an index for looking up a
> transaction by its ID? Is that not already something nodes do?

Indeed, not all nodes have this ability. Each bitcoind node has a map
of unspent coins which can be referenced by outpoint, i.e. (txid, index),
but the same isn't true for all historical transactions. I
(embarrassingly) forgot this in the prior post.

The map of (txid -> transaction) for all time is a separate index that
must be enabled via the `-txindex=1` flag; it isn't enabled by default
because it isn't required for consensus and its growth is unbounded.

> > The current consensus threshold for transactions to become invalid
> > is a 100 block reorg
>
> What do you mean by this? The only 100 block period I'm aware of is
> the coinbase cooldown period.

If there were a reorg deeper than 100 blocks, it would permanently
invalidate any transactions spending the recently-matured coinbase
subsidy in any block between $new_reorg_tip and ($former_tip_height -
100). These invalidated spends would not be able to be reorganized
into a new replacement chain.
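The maturity rule behind this is simple to state (a toy check; the constant and comparison mirror the consensus rule that a coinbase output needs 100 blocks built on top of it before it can be spent):

```python
# Toy check of the 100-block coinbase maturity rule referenced above.
COINBASE_MATURITY = 100

def coinbase_spendable(coinbase_height, spend_height):
    """A coinbase output is spendable once it is 100 blocks deep; a reorg
    deeper than that can permanently strand spends of matured coinbases."""
    return spend_height - coinbase_height >= COINBASE_MATURITY

assert not coinbase_spendable(700_000, 700_099)   # one block too early
assert coinbase_spendable(700_000, 700_100)
```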

How this differs in practice or principle from a "regular" double-spend
via reorg I'll leave for another message. I'm not sure that I understand
that myself. Personally I think if we hit a >100 block reorg, we've got
bigger issues than coinbase invalidation.


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-16 Thread Billy Tetrud via bitcoin-dev
@Jeremy

 > there are technical reasons for sponsors to not be monotone. Mostly that
> it requires the maintenance of an additional permanent TX-Index, making
> Bitcoin's state grow at a much worse rate

What do you mean by monotone in the context of sponsor transactions? And when
you say tx-index, do you mean an index for looking up a transaction by its
ID? Is that not already something nodes do?

> The sponsors proposal is a change from Epsilon-Strong Reorgability to
> Epsilon-Weak Reorgability

It doesn't look like you defined that term in your list. Did you mean what
you listed as "Epsilon: Simple Existential Reorgability"? If so, I would
say that should be sufficient. I'm not sure I would even distinguish
between the "strong" and "simple" versions of these things, tho you could
talk about things that make reorgs more or less computationally difficult
on a spectrum. As long as the computational difficulty isn't significant
for miners vs their other computational costs, the computation isn't really
a problem.

@Russell
> The current consensus threshold for transactions to become invalid is a
> 100 block reorg

What do you mean by this? The only 100 block period I'm aware of is the
coinbase cooldown period.

>  I promise to personally build a wallet that always creates transactions
> on the verge of becoming invalid should anyone ever implement a feature
> that violates this tx validity principle.

Could you explain how you would build a wallet like that with a sponsor
transaction as described by Jeremy? What damage do you think such a wallet
could do? As far as I can tell, such a wallet is very unlikely to do more
damage to the network than it does to the user of that wallet.

On Tue, Feb 15, 2022 at 3:39 PM Jeremy Rubin via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> The difference between sponsors and this issue is more subtle. The issue
> Suhas raised was with a variant of sponsors trying to address a second
> criticism, not sponsors itself, which is secure against this.
>
> I think I can make this clear by defining a few different properties:
>
> Strong Reorgability: The transaction graph can be arbitrarily reorged into
> any series of blocks as long as dependency order/timelocks are respected.
> Simple Existential Reorgability: The transaction graph can be reorged into
> a different series of blocks, and it is not computationally difficult to
> find such an ordering.
> Epsilon-Strong Reorgability: The transaction graph can be arbitrarily
> reorged into any series of blocks as long as dependency order/timelocks are
> respected, up to Epsilon blocks.
> Epsilon: Simple Existential Reorgability: The transaction graph can be
> reorged into a different series of blocks, and it is not computationally
> difficult to find such an ordering, up to epsilon blocks.
> Perfect Reorgability: The transaction graph can be reorged into a
> different series of blocks, but the transactions themselves are already
> locked in.
>
> Perfect Reorgability doesn't exist in Bitcoin because unconfirmed
> transactions can be double spent which invalidates descendants. Notably,
> for a subset of the graph which is CTV Congestion control tree expansions,
> perfect reorg ability would exist, so it's not just a bullshit concept to
> think about :)
>
> The sponsors proposal is a change from Epsilon-Strong Reorgability to
> Epsilon-Weak Reorgability. It's not clear to me that there is any
> functional reason to rely on Strongness when Bitcoin's reorgability is
> already not Perfect, so a reorg generator with malicious intent can already
> disturb the tx graph. Epsilon-Weak Reorgability seems to be a sufficient
> property.
>
> Do you disagree with that?
>
> Best,
>
> Jeremy
>
> --
> @JeremyRubin 
>
> On Tue, Feb 15, 2022 at 12:25 PM Russell O'Connor via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>>
>>
>>> >> 2. (from Suhas) "once a valid transaction is created, it should not
>>> become invalid later on unless the inputs are double-spent."
>>> > This doesn't seem like a huge concern to me
>>>
>>> I agree that this shouldn't be a concern. In fact, I've asked numerous
>>> people in numerous places what practical downside there is to transactions
>>> that become invalid, and I've heard basically radio silence other than one
>>> off hand remark by satoshi at the dawn of time which didn't seem to me to
>>> have good reasoning. I haven't seen any downside whatsoever of transactions
>>> that can become invalid for anyone waiting the standard 6 confirmations -
>>> the reorg risks only exists for people not waiting for standard
>>> finalization. So I don't think we should consider that aspect of a
>>> sponsorship transaction that can only be mined with the transaction it
>>> sponsors to be a problem unless a specific practical problem case can be
>>> identified. Even if a significant such case was identified, an easy
>>> solution would be to simply allow sponsorship transactions to be 

Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-15 Thread Jeremy Rubin via bitcoin-dev
The difference between sponsors and this issue is more subtle. The issue
Suhas raised was with a variant of sponsors trying to address a second
criticism, not sponsors itself, which is secure against this.

I think I can make this clear by defining a few different properties:

Strong Reorgability: The transaction graph can be arbitrarily reorged into
any series of blocks as long as dependency order/timelocks are respected.
Simple Existential Reorgability: The transaction graph can be reorged into
a different series of blocks, and it is not computationally difficult to
find such an ordering.
Epsilon-Strong Reorgability: The transaction graph can be arbitrarily
reorged into any series of blocks as long as dependency order/timelocks are
respected, up to Epsilon blocks.
Epsilon: Simple Existential Reorgability: The transaction graph can be
reorged into a different series of blocks, and it is not computationally
difficult to find such an ordering, up to epsilon blocks.
Perfect Reorgability: The transaction graph can be reorged into a different
series of blocks, but the transactions themselves are already locked in.

Perfect Reorgability doesn't exist in Bitcoin because unconfirmed
transactions can be double spent which invalidates descendants. Notably,
for a subset of the graph which is CTV Congestion control tree expansions,
perfect reorg ability would exist, so it's not just a bullshit concept to
think about :)

The sponsors proposal is a change from Epsilon-Strong Reorgability to
Epsilon-Weak Reorgability. It's not clear to me that there is any
functional reason to rely on Strongness when Bitcoin's reorgability is
already not Perfect, so a reorg generator with malicious intent can already
disturb the tx graph. Epsilon-Weak Reorgability seems to be a sufficient
property.

Do you disagree with that?

Best,

Jeremy

--
@JeremyRubin 

On Tue, Feb 15, 2022 at 12:25 PM Russell O'Connor via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
>
>> >> 2. (from Suhas) "once a valid transaction is created, it should not
>> become invalid later on unless the inputs are double-spent."
>> > This doesn't seem like a huge concern to me
>>
>> I agree that this shouldn't be a concern. In fact, I've asked numerous
>> people in numerous places what practical downside there is to transactions
>> that become invalid, and I've heard basically radio silence other than one
>> off hand remark by satoshi at the dawn of time which didn't seem to me to
>> have good reasoning. I haven't seen any downside whatsoever of transactions
>> that can become invalid for anyone waiting the standard 6 confirmations -
>> the reorg risks only exists for people not waiting for standard
>> finalization. So I don't think we should consider that aspect of a
>> sponsorship transaction that can only be mined with the transaction it
>> sponsors to be a problem unless a specific practical problem case can be
>> identified. Even if a significant such case was identified, an easy
>> solution would be to simply allow sponsorship transactions to be mined on
>> or after the sponsored transaction is mined.
>>
>
> The downside is that in a 6 block reorg any transaction that is moved past
> its expiration date becomes invalid and all its descendants become invalid
> too.
>
> The current consensus threshold for transactions to become invalid is a
> 100 block reorg, and I see no reason to change this threshold.  I promise
> to personally build a wallet that always creates transactions on the verge
> of becoming invalid should anyone ever implement a feature that violates
> this tx validity principle.


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-15 Thread Jeremy Rubin via bitcoin-dev
James,

Unfortunately, there are technical reasons for sponsors to not be monotone.
Mostly that it requires the maintenance of an additional permanent
TX-Index, making Bitcoin's state grow at a much worse rate. Instead, you
could introduce a time-bound for inclusion, e.g. 100 blocks. However, this
time-bounded version has the issue that Roconnor raised which is that
validity "stops" after a certain time, hurting reorganization.

However, If you wanted to map this conceptually onto existing tx indexes,
you could have an output with exactly the script `<100 blocks> OP_CSV` and
then allow sponsor references to be pruned after that output is "garbage
collected" by pruning it out of a block. This would be a way that
sponsorship would be opt-in (must have the flag output) and then sponsors
observations of txid existence would be only guaranteed to work for 100
blocks after which it could be garbage collected by a miner.

It's not a huge leap to say that this behavior should be made entirely
"virtual", as you are essentially arguing that there exists a transaction
graph we could construct that would be equivalent to the graph were we to
actually have such an output / spends relationship. Since the property we
care about is about all graphs, that a specific one could exist that has
the same dependency / invalidity relationships during a reorg is important
for the theory of bitcoin transaction execution.

So it really isn't clear to me that we're hurting the transaction graph
properties that severely with changes in this family. It's also not clear
to me that having a TXINDEX is a huge issue given that making a dust-out
per tx would have the same impact (and people might do it if it's
functionally useful, so just making it default behavior would at least help
us optimize it to be done through e.g. a separate witness space/utreexo-y
thing).

Another consideration is to make the outputs from sponsor txn subject to a
100 block cool-off period. E.g., so even if you have your inverse timelock,
adding a constraint that all outputs then have something similar to
fCoinbase set on them (for spending timelocks only) would mean that little
reorgs could not disturb the tx graph, although this poses a UX challenge
for wallets that aim to bump often (e.g., 1 bump per block would mean you
need to maintain 100 outputs).

Lastly, it's pretty clear from a UX perspective that I should not want to
pay miners who did *not* mine my transactions! Therefore, if a stale fee
bump pays a high enough fee, users would naturally want to cancel their
(now very desirable to miners) stale bumps by replacing them with something
more useful to them. So allowing sponsors to land in subsequent blocks
might make it rational for users to do more transactions, which increases
the costs of such an approach.


All things considered, I favor the simple version of just having sponsors
only valid for the block their target is co-resident in.


Jeremy





--
@JeremyRubin 

On Tue, Feb 15, 2022 at 12:53 PM James O'Beirne via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> > The downside is that in a 6 block reorg any transaction that is moved
> > past its expiration date becomes invalid and all its descendants
> > become invalid too.
>
> Worth noting that the transaction sponsors design is no worse an
> offender on this count than, say, CPFP is, provided we adopt the change
> that sponsored txids are required to be included in the current block
> *or* prior blocks. (The original proposal allowed current block only).
>
> In other words, the sponsored txids are just "virtual inputs" to the
> sponsor transaction.
>
> This is a much different case than e.g. transaction expiry based on
> wall-clock time or block height, which I agree complicates reorgs
> significantly.


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-15 Thread Russell O'Connor via bitcoin-dev
> >> 2. (from Suhas) "once a valid transaction is created, it should not
> become invalid later on unless the inputs are double-spent."
> > This doesn't seem like a huge concern to me
>
> I agree that this shouldn't be a concern. In fact, I've asked numerous
> people in numerous places what practical downside there is to transactions
> that become invalid, and I've heard basically radio silence other than one
> off hand remark by satoshi at the dawn of time which didn't seem to me to
> have good reasoning. I haven't seen any downside whatsoever of transactions
> that can become invalid for anyone waiting the standard 6 confirmations -
> the reorg risks only exists for people not waiting for standard
> finalization. So I don't think we should consider that aspect of a
> sponsorship transaction that can only be mined with the transaction it
> sponsors to be a problem unless a specific practical problem case can be
> identified. Even if a significant such case was identified, an easy
> solution would be to simply allow sponsorship transactions to be mined on
> or after the sponsored transaction is mined.
>

The downside is that in a 6 block reorg any transaction that is moved past
its expiration date becomes invalid and all its descendants become invalid
too.

The current consensus threshold for transactions to become invalid is a 100
block reorg, and I see no reason to change this threshold.  I promise to
personally build a wallet that always creates transactions on the verge of
becoming invalid should anyone ever implement a feature that violates this
tx validity principle.
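The cascade Russell describes can be made concrete with a small sketch (the expiry and parent fields here are hypothetical; no current consensus rule expires transactions by height):

```python
# Sketch of reorg-induced invalidation: any tx pushed past its assumed
# expiry height becomes invalid, and invalidity cascades to every
# descendant that spends it.

def invalid_after_reorg(txs, new_height):
    """txs: txid -> (expiry_height or None, set of parent txids)."""
    invalid = set()
    changed = True
    while changed:  # iterate to a fixed point so cascades propagate
        changed = False
        for txid, (expiry, parents) in txs.items():
            if txid in invalid:
                continue
            expired = expiry is not None and new_height > expiry
            orphaned = bool(parents & invalid)
            if expired or orphaned:
                invalid.add(txid)
                changed = True
    return invalid
```

For example, a sponsor expiring at height 105 that gets re-mined at height 110 after a reorg drags down its whole descendant chain, while unrelated transactions are untouched.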


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-15 Thread Billy Tetrud via bitcoin-dev
>   If you wish to fee-bump transaction X with a sponsor, how can you be sure
that transaction Y isn't present in the majority of network nodes, and X has
_not_ been dropped since your last broadcast?

You're right that you can't assume your target transaction hasn't been
dropped. However, I assume when James said "No rebroadcast (wasted
bandwidth) is required for the original txn data" he meant that in the
context of the "diff" he was talking about. It would be easy enough to
specify a sponsorship transaction that points to a transaction with a
specific id without *requiring* that transaction to be rebroadcast. If your
partner node has that transaction, no rebroadcast is necessary. If your
partner node doesn't have it, they can request it. That way rebroadcast is
only done when necessary. Correct me if my understanding of your suggestion
is wrong, James.
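A rough sketch of the fetch-on-miss flow described above (all types and message names here are invented for illustration; this is not Bitcoin Core's actual P2P interface):

```python
from dataclasses import dataclass

@dataclass
class Tx:
    txid: str
    targets: tuple = ()  # txids a sponsor transaction pays for

class Peer:
    def __init__(self, known):
        self.known = known  # txid -> Tx held by this peer
    def request_tx(self, txid):
        return self.known.get(txid)

class Node:
    def __init__(self):
        self.mempool = {}  # txid -> Tx

    def on_sponsor_announcement(self, sponsor, peer):
        for txid in sponsor.targets:
            if txid in self.mempool:
                continue  # already have it: no rebroadcast needed
            tx = peer.request_tx(txid)  # explicit fetch, only when missing
            if tx is None:
                return False  # target unavailable; reject the sponsor
            self.mempool[txid] = tx
        self.mempool[sponsor.txid] = sponsor
        return True
```

The target transaction is re-transmitted only over links where it is actually missing, which is the bandwidth saving being claimed.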

>> 2. (from Suhas) "once a valid transaction is created, it should not
become invalid later on unless the inputs are double-spent."
> This doesn't seem like a huge concern to me

I agree that this shouldn't be a concern. In fact, I've asked numerous
people in numerous places what practical downside there is to transactions
that become invalid, and I've heard basically radio silence other than one
offhand remark by Satoshi at the dawn of time which didn't seem to me to
have good reasoning. I haven't seen any downside whatsoever of transactions
that can become invalid for anyone waiting the standard 6 confirmations -
the reorg risk only exists for people not waiting for standard
finalization. So I don't think we should consider that aspect of a
sponsorship transaction that can only be mined with the transaction it
sponsors to be a problem unless a specific practical problem case can be
identified. Even if a significant such case was identified, an easy
solution would be to simply allow sponsorship transactions to be mined on
or after the sponsored transaction is mined.



On Mon, Feb 14, 2022 at 7:10 PM Antoine Riard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> > In the context of fee bumping, I don't see how this is a criticism
> > unique to transaction sponsors, since it also applies to CPFP: if you
> > tried to bump fees for transaction A with child txn B, if some mempool
> > hasn't seen parent A, it will reject B.
>
> Agree, it's a comment raising the shenanigans of tx-diff-only propagation,
> afaict affecting equally all fee-bumping primitives. It wasn't a criticism
> specific to transaction sponsors, as at that point of your post, sponsors
> are not introduced yet.
>
> > This still doesn't address the issue I'm talking about, which is if you
> > pre-commit to some "fee-bumping" key in your CPFP outputs and that key
> > ends up being compromised. This isn't a matter of data availability or
> > redundancy.
>
> I'm not sure about the real safety risk of the compromise of the anchor
> output key. Of course, if your anchor output key is compromised and the
> bumped package is already public/known, an attacker can extend your package
> with junk to neutralize your carve-out capability (I think). That said,
> this issue sounds solved to me with package relay, as you can always
> broadcast a new version of the package from the root UTXO, without
> attention to the carve-out limitation.
>
> (Side-note: I think we can slowly deprecate the carve-out once package
> relay is deployed, as the fee-bumping flexibility of the latter is a
> superset of the former).
>
> > As I mentioned in the reply to Matt's message, I'm not quite
> > understanding this idea of wanting to bump the fee for something
> > without knowing what it is; that doesn't make much sense to me.
> > The "bump fee" operation seems contingent on knowing
> > what you want to bump.
>
> From your post : "No rebroadcast (wasted bandwidth) is required for the
> original txn data."
>
> I'm objecting to that supposed benefit of a transaction sponsor. If you
> have transaction X and transaction Y spending the same UTXO, both of them
> can be defined as "the original txn data". If you wish to fee-bump
> transaction X with sponsor, how can you be sure that transaction
> Y isn't present in the majority of network nodes, and X has _not_ been
> dropped since your last broadcast ? Otherwise iirc sponsor design, your
> sponsor transaction is going to be rejected.
>
> I think you can't, and thus preventively you should broadcast as a (new
> type) of package the sponsoring/sponsored transaction.
>
> That said, I'm not sure if that issue is equally affecting vaults than
> payment channels. With vaults, the tree of transactions is  known ahead,
> and there is no competition in the spends. Assuming the first broadcast has
> been efficient (and it could be a reasonable assumption thanks to mempool
> rebroadcast), the sponsor should propagate.
>
> So I think here for the sake of sponsor efficiency analysis, we might have
> to class between the protocol with once-for-all-transaction-negotiation
> 

Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-14 Thread Antoine Riard via bitcoin-dev
> In the context of fee bumping, I don't see how this is a criticism
> unique to transaction sponsors, since it also applies to CPFP: if you
> tried to bump fees for transaction A with child txn B, if some mempool
> hasn't seen parent A, it will reject B.

Agree, it's a comment raising the shenanigans of tx-diff-only propagation,
which afaict affects all fee-bumping primitives equally. It wasn't a
criticism specific to transaction sponsors, as at that point of your post,
sponsors had not been introduced yet.

> This still doesn't address the issue I'm talking about, which is if you
> pre-commit to some "fee-bumping" key in your CPFP outputs and that key
> ends up being compromised. This isn't a matter of data availability or
> redundancy.

I'm not sure about the real safety risk of the compromise of the anchor
output key. Of course, if your anchor output key is compromised and the
bumped package is already public/known, an attacker can extend your package
with junk to neutralize your carve-out capability (I think). That said,
this issue sounds solved to me with package relay, as you can always
broadcast a new version of the package from the root UTXO, without
regard to the carve-out limitation.

(Side-note: I think we can slowly deprecate the carve-out once package
relay is deployed, as the fee-bumping flexibility of the latter is a
superset of the former).

> As I mentioned in the reply to Matt's message, I'm not quite
> understanding this idea of wanting to bump the fee for something
> without knowing what it is; that doesn't make much sense to me.
> The "bump fee" operation seems contingent on knowing
> what you want to bump.

From your post: "No rebroadcast (wasted bandwidth) is required for the
original txn data."

I'm objecting to that supposed benefit of a transaction sponsor. If you
have transaction X and transaction Y spending the same UTXO, both of them
can be defined as "the original txn data". If you wish to fee-bump
transaction X with a sponsor, how can you be sure that transaction
Y isn't present in the majority of network nodes, and X has _not_ been
dropped since your last broadcast? Otherwise, iirc the sponsor design, your
sponsor transaction is going to be rejected.

I think you can't, and thus preventively you should broadcast as a (new
type) of package the sponsoring/sponsored transaction.

That said, I'm not sure that issue affects vaults as much as payment
channels. With vaults, the tree of transactions is known ahead of time,
and there is no competition in the spends. Assuming the first broadcast was
effective (a reasonable assumption thanks to mempool rebroadcast), the
sponsor should propagate.

So I think here, for the sake of sponsor efficiency analysis, we might have
to distinguish between protocols with once-and-for-all transaction
negotiation (vaults) and those with off-chain, dynamic re-negotiation
(payment channels, factories)?

> I'm not familiar with the L2 dust-limit issues, and I do think that
> "fixing" RBF behavior is *probably* worthwhile.

Sadly, it sounds like "fixing" RBF behavior is a requirement to eradicate
the most advanced pinning attacks... That fix is independent of the
fee-bumping primitive considered.

>  Those issues aside, I
> think the transaction sponsors idea may be closer to a silver bullet
> than you're giving it credit for, because designing specifically for the
> fee-management use case has some big benefits.

I don't deny the scheme is interesting, though I would argue SIGHASH_GROUP
is more efficient, while offering more flexibility. In any case, I think we
should still pursue further the collections of problems and requirements
(batching, key management, ...) that new fee-bumping primitives should aim
to solve, before engaging more on the deployment of one of them [0].

[0] In that sense see
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-May/019031.html

On Mon, Feb 14, 2022 at 15:29, James O'Beirne wrote:

> Thanks for your thoughtful reply Antoine.
>
> > In a distributed system such as the Bitcoin p2p network, you might
> > have transaction A and transaction B  broadcast at the same time and
> > your peer topology might fluctuate between original send and
> > broadcast of the diff, you don't know who's seen what... You might
> > inefficiently announce diff A on top of B and diff B on top A. We
> > might leverage set reconciliation there a la Erlay, though likely
> > with increased round-trips.
>
> In the context of fee bumping, I don't see how this is a criticism
> unique to transaction sponsors, since it also applies to CPFP: if you
> tried to bump fees for transaction A with child txn B, if some mempool
> hasn't seen parent A, it will reject B.
>
> > Have you heard about SIGHASH_GROUP [0] ?
>
> I haven't - I'll spend some time reviewing this. Thanks.
>
> > > [me complaining CPFP requires lock-in to keys]
> >
> > It's true it requires to pre-specify the fee-bumping key. Though note
> > the fee-bumping key can be fully 

Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-14 Thread James O'Beirne via bitcoin-dev
Thanks for your thoughtful reply Antoine.

> In a distributed system such as the Bitcoin p2p network, you might
> have transaction A and transaction B  broadcast at the same time and
> your peer topology might fluctuate between original send and
> broadcast of the diff, you don't know who's seen what... You might
> inefficiently announce diff A on top of B and diff B on top A. We
> might leverage set reconciliation there a la Erlay, though likely
> with increased round-trips.

In the context of fee bumping, I don't see how this is a criticism
unique to transaction sponsors, since it also applies to CPFP: if you
tried to bump fees for transaction A with child txn B, if some mempool
hasn't seen parent A, it will reject B.
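A minimal illustration of that failure mode, with heavily simplified structures (this is not Core's actual AcceptToMemoryPool logic, where such a child would instead go to the orphan pool):

```python
from dataclasses import dataclass

@dataclass
class SimpleTx:
    txid: str
    inputs: tuple  # (txid, vout) outpoints this tx spends

def accept_to_mempool(mempool, utxo_set, tx):
    # Every input must refer to a confirmed output or an in-mempool parent.
    for txid, vout in tx.inputs:
        if txid not in mempool and (txid, vout) not in utxo_set:
            return False  # unknown parent: the CPFP child is not accepted
    mempool[tx.txid] = tx
    return True
```

Bumping A via child B only works once the node already knows A, which is exactly the situation a sponsor referencing an unknown txid would face as well.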

> Have you heard about SIGHASH_GROUP [0] ?

I haven't - I'll spend some time reviewing this. Thanks.

> > [me complaining CPFP requires lock-in to keys]
>
> It's true it requires to pre-specify the fee-bumping key. Though note
> the fee-bumping key can be fully separated from the
> "vaults"/"channels" set of main keys and hosted on replicated
> infrastructure such as watchtowers.

This still doesn't address the issue I'm talking about, which is if you
pre-commit to some "fee-bumping" key in your CPFP outputs and that key
ends up being compromised. This isn't a matter of data availability or
redundancy.

Note that this failure may be unique to vault use cases, when you're
pre-generating potentially large numbers of transactions or covenants
that cannot be altered after the fact. If you generate vault txns that
assume the use of some key for CPFP-based fee bumping and that key
winds up being compromised, that puts you in an uncomfortable
situation: you can no longer bump fees on unvaulting transactions,
rendering the vaults possibly unretrievable depending on the fee market.

> As a L2 transaction issuer you can't be sure the transaction you wish
> to point to is already in the mempool, or have not been replaced by
> your counterparty spending the same shared-utxo, either competitively
> or maliciously. So as a measure of caution, you should broadcast
> sponsor + target transactions in the same package, thus cancelling
> the bandwidth saving (I think).

As I mentioned in the reply to Matt's message, I'm not quite
understanding this idea of wanting to bump the fee for something
without knowing what it is; that doesn't make much sense to me.
The "bump fee" operation seems contingent on knowing
what you want to bump.

And if you're, say, trying to broadcast a lightning channel close and
you know you need to bump the fee right away, before even broadcasting
it, either you're going to

- reformulate the txn to bring up the fee rate (e.g. add inputs
  with some yet-undeployed sighash) as you would have done with RBF, or

- you'd have the same "package relay" problem with CPFP that you
  would with transaction sponsors.

So I don't understand the objection here.

Also, I didn't mean to discourage existing work on package relay or
fixing RBF, which seem clearly important. Maybe I should have noted
that explicitly in the original message.

> I don't think a sponsor is a silver-bullet to solve all the
> L2-related mempool issues. It won't solve the most concerning pinning
> attacks, as I think the bottleneck is replace-by-fee. Neither solve
> the issues encumbered by the L2s by the dust limit.

I'm not familiar with the L2 dust-limit issues, and I do think that
"fixing" RBF behavior is *probably* worthwhile. Those issues aside, I
think the transaction sponsors idea may be closer to a silver bullet
than you're giving it credit for, because designing specifically for the
fee-management use case has some big benefits.

For one, it makes migration easier. That is to say: there is none,
whereas there is existing RBF policy that needs consideration.

But maybe more importantly, transaction sponsors' limited use case also
allows for specifying much more targeted "replacement" policy since
sponsors are special-purpose transactions that only exist to
dynamically bump feerate. E.g. my SIGHASH_{NONE,SINGLE}|ANYONECANPAY
proposal might make complete sense for the sponsors/fee-management use
case, and clarify the replacement problem, but obviously wouldn't work
for more general transaction replacement. In other words, RBF's
general nature might make it a much harder problem to solve well.


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-14 Thread James O'Beirne via bitcoin-dev
> This entirely misses the network cost. Yes, sure, we can send
> "diffs", but if you send enough diffs eventually you send a lot of data.

The whole point of that section of the email was to consider the
network cost. There are many cases for which transmitting a
supplementary 1-in-1-out transaction (i.e. a sponsorship txn) is going
to be more efficient from a bandwidth standpoint than rebroadcasting a
potentially large txn during RBF.
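As a rough back-of-envelope (the per-component vbyte figures are P2WPKH-style approximations I'm assuming, not measurements):

```python
# Bandwidth comparison: re-relaying a large transaction for RBF vs.
# relaying a small 1-in-1-out sponsor "diff".

TX_OVERHEAD = 11        # rough vbytes of version/locktime/counts
VBYTES_PER_INPUT = 68   # rough vbytes of a P2WPKH input
VBYTES_PER_OUTPUT = 31  # rough vbytes of a P2WPKH output

def tx_vbytes(n_inputs, n_outputs):
    return (TX_OVERHEAD
            + n_inputs * VBYTES_PER_INPUT
            + n_outputs * VBYTES_PER_OUTPUT)

rbf_rebroadcast = tx_vbytes(10, 2)  # resend the whole 10-input txn
sponsor_diff = tx_vbytes(1, 1)      # send only the sponsorship txn
assert sponsor_diff < rbf_rebroadcast
```

Under these assumptions the RBF rebroadcast costs roughly 750 vbytes of relay against about 110 for the sponsor, and the gap grows with the size of the original transaction.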

> > In an ideal design, special structural foresight would not be
> > needed in order for a txn's feerate to be improved after broadcast.
> >
> > Anchor outputs specified solely for CPFP, which amount to many
> > bytes of wasted chainspace, are a hack. > It's probably
> > uncontroversial at this
>
> This has nothing to do with fee bumping, though, this is only solved
> with covenants or something in that direction, not different relay
> policy.

My post isn't only about relay policy; it's that txn
sponsors allows for fee-bumping in cases where RBF isn't possible and
CPFP would be wasteful, e.g. for a tree of precomputed vault
transactions or - maybe more generally - certain kinds of
covenants.

> How does this not also fail your above criteria of not wasting block
> space?

In certain cases (e.g. vault structures), using sponsorship txns to
bump fees as-needed is more blockspace-efficient than including
mostly-unused CPFP "anchor" outputs that pay to fee-management wallets.
I'm betting there are other similar cases where CPFP anchors are
included but not necessarily used, and amount to wasted blockspace.

> Further, this doesn't solve pinning attacks at all. In lightning we
> want to be able to *replace* something in the mempool (or see it
> confirm soon, but that assumes we know exactly what transaction is in
> "the" mempool). Just being able to sponsor something doesn't help if
> you don't know what that thing is.

When would you be trying to bump the fee on a transaction without
knowing what it is? Seeing a specific transaction "stuck" in the
mempool seems to be a prerequisite to bumping fees. I'm not sure what
you're getting at here.


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-12 Thread Billy Tetrud via bitcoin-dev
With respect to the disagreement/misunderstanding about the "<1vMB in the
mempool" case, I think it's important to be clear about what the goals of
relay policy are. Should the goal be to only relay transactions that
increase miner revenue? Ideally yes, because we want to minimize load on
the network. But practically, achieving that goal 100% of the time involves
tradeoffs with diminishing returns.

The only way to ensure that a transaction is only relayed when it increases
miner revenue is to make relay rules exactly match miner inclusion rules.
And since we don't want to (nor can we) force miners to do transaction
inclusion the same as each other, we certainly can't realistically produce
an environment where relay rules exactly match miner inclusion rules.

So I think the goal should *not *be strictly minimal relay, because it's
not practical and basically not even possible. Instead the goal should be
some close-enough approach.

This relates to the  "<1vMB in the mempool" case because the disagreement
seems to be related to what trade offs to make. A simple rule that the
fee-rate must be bumped by at least X satoshi would indeed allow the
scenario darosior describes, where someone can broadcast one large
low-feerate transaction and then subsequently broadcast smaller but
higher-feerate transactions. The question is: is that really likely to be a
problem? This can be framed by considering a couple of cases:

* The average case
* The adversarial worst case

In the average case, no one is going to be broadcasting any transactions
like that because they don't need to. So in the average case, that scenario
can be ignored. In the adversarial case however, some large actor that
sends lots of transactions could spam the network any time blockchain
congestion is low. What's the worst someone could do?

Well, if there really aren't enough transactions to even fill the
block, then without an absolute-fee bump requirement, a malicious actor
could create a lot of spam. To the tune of over 8000 transactions (assuming
a 1 sat/vb relay rule) for an empty mempool, where the malicious actor
sends a 2MB transaction with a 1 sat/vb fee, then a 1MB transaction at 2
sat/vb, then a 666KB transaction at 3 sat/vb, etc. But considering that
this transaction would already take up the entire block, it would be just
as effective for an attacker to send 8000 minimal sized transactions and
have them relayed. So this method of attack doesn't gain the attacker any
additional power to spam the network. Not to mention that nodes should be
easily able to handle that load, so there's not much of an actual "attack"
happening here. Just an insignificant amount of avoidable extra spent
electricity and unnecessary internet traffic. Nothing that's going to make
running a full node any harder.
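The "over 8000" figure can be reproduced under explicit assumptions (the ~250-vbyte minimum transaction size below is my guess at the assumption behind the quoted count; it is not a protocol constant):

```python
# The attacker's k-th replacement has vsize ~2MvB/k at a feerate of
# k sat/vb; the sequence stops once a transaction would shrink below
# some minimum practical size.

INITIAL_VSIZE = 2_000_000  # the initial 2MB transaction, in vbytes
MIN_TX_VSIZE = 250         # assumed smallest practical transaction

count = 0
k = 1
while INITIAL_VSIZE // k >= MIN_TX_VSIZE:
    count += 1  # k-th replacement: ~2MvB/k vbytes at k sat/vb
    k += 1
print(count)  # 8000 replacements under these assumptions
```

This is what makes the spam bounded: the replacement count scales with the initial size divided by the minimum transaction size, comparable to simply flooding minimal-size transactions directly.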

And in the case that there *are* enough transactions to fill the block
(which I think is the normal case, and it really should become a rarity for
this not to the case in the future), higher feerate transactions are always
better unless you already overpaid for fees. Sure you can overpay and then
add some spam by making successively higher feerate but smaller
transactions, but in that case you've basically paid for all that spam up
front with your original fee. So is it really spam? If you've covered the
cost of it, then its not spam as much as it is stupid behavior.

So I'm inclined to agree with O'Beirne (and Lisa Neigut) that valid
transactions with feerate bumps should never be excluded from relay as long
as the amount of the feerate bump is more than the node's minimum
transaction fee. Doing that would also get rid of the spectre of
transaction pinning.
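The two replacement policies being contrasted can be sketched side by side (the numbers and the node-minimum increment are illustrative, and both predicates are simplifications; the real BIP 125 policy has more rules than this):

```python
MIN_RELAY_INCREMENT = 1.0  # sat/vb: assumed node minimum bump increment

def current_rules_ok(old_fee, old_vsize, new_fee, new_vsize):
    # Today's policy (simplified): the replacement needs a higher
    # feerate AND a higher absolute fee than what it evicts.
    return (new_fee / new_vsize > old_fee / old_vsize) and new_fee > old_fee

def feerate_only_ok(old_fee, old_vsize, new_fee, new_vsize):
    # The policy argued for above: a feerate bump exceeding the node's
    # minimum increment is relayed, regardless of absolute fee.
    return new_fee / new_vsize >= old_fee / old_vsize + MIN_RELAY_INCREMENT

# A 10,000-vb tx at 2 sat/vb replaced by a 200-vb tx at 50 sat/vb:
old_fee, old_vsize, new_fee, new_vsize = 20_000, 10_000, 10_000, 200
assert not current_rules_ok(old_fee, old_vsize, new_fee, new_vsize)
assert feerate_only_ok(old_fee, old_vsize, new_fee, new_vsize)
```

The example shows the crux of the disagreement: the absolute-fee condition is precisely what enables pinning, but dropping it admits replacements that reduce total fees offered.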

*I'm curious if there's some other type of scenario where removing the
absolute fee bump rule would cause nodes to relay more transactions than
they would relay in a full/congested mempool scenario*. We shouldn't care
about spam that only happens when the network is quiet and can't bring
network traffic above normal non-quiet loads, because a case like that isn't
a DoS risk.

On Fri, Feb 11, 2022 at 3:13 AM darosior via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Well because in the example i gave you this decreases the miner's reward.
> The rule of increasing feerate you stated isn't always economically
> rationale.
>
>
> Note how it can also be extended, for instance if the miner only has
> 1.5vMB of txs and is not assured to receive enough transactions to fill 2
> blocks he might be interested in maximizing absolute fees, not feerate.
>
>
> Sure, we could make the argument that long term we need a large backlog of
> transactions anyways.. But that'd be unfortunately not in phase with
> today's reality.
>
>
>  Original Message 
> On Feb 11, 2022, 00:51, James O'Beirne < james.obei...@gmail.com> wrote:
>
>
> > It's not that simple. As a miner, if i have less than 1vMB of
> transactions in my mempool. I don't want a 10sats/vb transaction paying
> 10sats by a 100sats/vb 

Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-11 Thread darosior via bitcoin-dev
Well, because in the example i gave you this decreases the miner's reward.
The rule of increasing feerate you stated isn't always economically
rational.

Note how it can also be extended, for instance if the miner only has 1.5vMB of 
txs and is not assured to receive enough transactions to fill 2 blocks he might 
be interested in maximizing absolute fees, not feerate.

Sure, we could make the argument that long term we need a large backlog of
transactions anyway... But that'd unfortunately not be in line with today's
reality.

 Original Message 
On Feb 11, 2022, 00:51, James O'Beirne wrote:

>> It's not that simple. As a miner, if i have less than 1vMB of transactions 
>> in my mempool. I don't want a 10sats/vb transaction paying 10sats by a 
>> 100sats/vb transaction paying only 1sats.
>
> I don't understand why the "<1vMB in the mempool" case is even worth 
> consideration because the miner will just include the entire mempool in the 
> next block regardless of feerate.


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-10 Thread darosior via bitcoin-dev
(I have not yet read the recent posts on RBF but i wanted to react on the 
"additive feerate".)

> # Purely additive feerate bumps should never be impossible

It's not that simple. As a miner, if i have less than 1vMB of transactions in
my mempool, I don't want a 10sats/vb transaction paying 10sats replaced by a
100sats/vb transaction paying only 1sats.
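Darosior's objection in numbers (the values here are illustrative, scaled up from his figures to realistic sizes):

```python
# With less than one block's worth of transactions waiting, the miner
# collects every fee regardless of ordering, so absolute fees, not
# feerates, determine next-block revenue.

original = {"vsize": 1_000, "feerate": 10}   # large, low-feerate tx
replacement = {"vsize": 50, "feerate": 100}  # small, high-feerate tx

original_fee = original["vsize"] * original["feerate"]           # 10,000 sats
replacement_fee = replacement["vsize"] * replacement["feerate"]  #  5,000 sats

# A feerate-only replacement rule would accept this swap, yet the
# miner's next-block revenue drops:
assert replacement["feerate"] > original["feerate"]
assert replacement_fee < original_fee
```

With a non-full block, the replacement's higher feerate buys the miner nothing; the miner simply loses the difference in absolute fees.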

Apart from that i very much agree with the approach of taking a step back and
reframing, with CPFP being ill-suited long term (it's wasteful, it's not
useful for delegating fee bumping (i'm surprised i didn't mention it
publicly, but that makes it unsuitable for Revault for instance), and the
current carve-out rule makes it suitable only for 2-party protocols), and
the `diff` approach.

All that again with the caveat that i need to update myself on the recent 
proposals.

 Original Message 
On Feb 10, 2022, 20:40, James O'Beirne via bitcoin-dev wrote:

> There's been much talk about fee-bumping lately, and for good reason -
> dynamic fee management is going to be a central part of bitcoin use as
> the mempool fills up (lord willing) and right now fee-bumping is
> fraught with difficulty and pinning peril.
>
> Gloria's recent post on the topic[0] was very lucid and highlights a
> lot of the current issues, as well as some proposals to improve the
> situation.
>
> As others have noted, the post was great. But throughout the course
> of reading it and the ensuing discussion, I became troubled by the
> increasing complexity of both the status quo and some of the
> proposed remedies.
>
> Layering on special cases, more carve-outs, and X and Y percentage
> thresholds is going to make reasoning about the mempool harder than it
> already is. Special consideration for "what should be in the next
> block" and/or the caching of block templates seems like an imposing
> dependency, dragging in a bunch of state and infrastructure to a
> question that should be solely limited to mempool feerate aggregates
> and the feerate of the particular txn package a wallet is concerned
> with.
>
> This is bad enough for protocol designers and Core developers, but
> making the situation any more intractable for "end-users" and wallet
> developers feels wrong.
>
> I thought it might be useful to step back and reframe. Here are a few
> aims that are motivated chiefly by the quality of end-user experience,
> constrained to obey incentive compatibility (i.e. miner reward, DoS
> avoidance). Forgive the abstract dalliance for a moment; I'll talk
> through concretes afterwards.
>
> # Purely additive feerate bumps should never be impossible
>
> Any user should always be able to add to the incentive to mine any
> transaction in a purely additive way. The countervailing force here
> ends up being spam prevention (a la min-relay-fee) to prevent someone
> from consuming bandwidth and mempool space with a long series of
> infinitesimal fee-bumps.
>
> A fee bump, naturally, should be given the same per-byte consideration
> as a normal Bitcoin transaction in terms of relay and block space,
> although it would be nice to come up with a more succinct
> representation. This leads to another design principle:
>
> # The bandwidth and chain space consumed by a fee-bump should be minimal
>
> Instead of prompting a rebroadcast of the original transaction for
> replacement, which contains a lot of data not new to the network, it
> makes more sense to broadcast the "diff" which is the additive
> contribution towards some txn's feerate.
>
> This dovetails with the idea that...
>
> # Special transaction structure should not be required to bump fees
>
> In an ideal design, special structural foresight would not be needed
> in order for a txn's feerate to be improved after broadcast.
>
> Anchor outputs specified solely for CPFP, which amount to many bytes of
> wasted chainspace, are a hack. It's probably uncontroversial at this
> point to say that even RBF itself is kind of a hack - a special
> sequence number should not be necessary for post-broadcast contribution
> toward feerate. Not to mention RBF's seemingly wasteful consumption of
> bandwidth due to the rebroadcast of data the network has already seen.
>
> In a sane design, no structural foresight - and certainly no wasted
> bytes in the form of unused anchor outputs - should be needed in order
> to add to a miner's reward for confirming a given transaction.
>
> Planning for fee-bumps explicitly in transaction structure also often
> winds up locking in which keys are required to bump fees, at odds
> with the idea that...
>
> # Feerate bumps should be able to come from anywhere
>
> One of the practical downsides of CPFP that I haven't seen discussed in
> this conversation is that it requires the transaction to pre-specify the
> keys needed to sign for fee bumps. This is problematic if you're, for
> example, using a vault structure that makes use of pre-signed
> transactions.
>
> What if the key you specified in the anchor outputs for a bunch of
> 

Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-10 Thread Matt Corallo via bitcoin-dev
This is great in theory, but I think it kinda misses *why* the complexity keeps creeping in. We 
agree on (most of) the goals here, but the problem is that the goals explicitly lead to the complexity; 
it's not some software engineering failure or failure of imagination that leads to the complexity.


On 2/10/22 14:40, James O'Beirne via bitcoin-dev wrote:
-snip-

# Purely additive feerate bumps should never be impossible

Any user should always be able to add to the incentive to mine any
transaction in a purely additive way. The countervailing force here
ends up being spam prevention (a la min-relay-fee) to prevent someone
from consuming bandwidth and mempool space with a long series of
infinitesimal fee-bumps.

A fee bump, naturally, should be given the same per-byte consideration
as a normal Bitcoin transaction in terms of relay and block space,
although it would be nice to come up with a more succinct
representation. This leads to another design principle:


This is where *all* the complexity comes from. If our goal is to "ensure a bump increases a miner's 
overall revenue" (thus not wasting relay for everyone else), then we precisely *do* need the kind of 
special consideration quoted below:


> Special consideration for "what should be in the next
> block" and/or the caching of block templates seems like an imposing
> dependency

Whether a transaction increases a miner's revenue depends precisely on whether the transaction 
(package) being replaced is in the next block - if it is, you care about the absolute fee of the 
package and its replacement. If it is not in the next block (or, really, not near a block boundary 
or further down in the mempool where you assume other transactions will appear around it over time), 
then you care about the fee *rate*, not the fee difference.
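The distinction Matt draws can be sketched as a toy decision rule. This is not Bitcoin Core's actual replacement logic (which enforces both an absolute-fee and a feerate requirement on every replacement); it only illustrates the revenue reasoning above, under the simplifying assumption that we can tell whether the package being replaced would sit in the next block:

```python
# Toy sketch (NOT Bitcoin Core code): when does a replacement increase
# a miner's revenue, per the reasoning above?

def replacement_increases_revenue(old_fee, old_vsize, new_fee, new_vsize,
                                  old_in_next_block):
    """Hypothetical decision rule; fees in sats, sizes in vbytes."""
    if old_in_next_block:
        # The block space was going to be filled either way, so only the
        # absolute fee paid matters.
        return new_fee > old_fee
    # Deeper in the mempool, other transactions will fill the surrounding
    # space over time, so fee per vbyte is what matters.
    return (new_fee / new_vsize) > (old_fee / old_vsize)

# A 100 sat/vb replacement paying less in absolute fees than the
# 10 sat/vb package it evicts from the next block *reduces* revenue:
assert not replacement_increases_revenue(10_000, 1_000, 1_000, 10, True)
# But further down the mempool, the higher feerate wins:
assert replacement_increases_revenue(10_000, 1_000, 1_000, 10, False)
```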


> # The bandwidth and chain space consumed by a fee-bump should be minimal
>
> Instead of prompting a rebroadcast of the original transaction for
> replacement, which contains a lot of data not new to the network, it
> makes more sense to broadcast the "diff" which is the additive
> contribution towards some txn's feerate.

This entirely misses the network cost. Yes, sure, we can send "diffs", but if you send enough diffs 
eventually you send a lot of data. We cannot simply ignore network-wide costs like total relay 
bandwidth (or implementation runtime DoS issues).
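A bit of back-of-the-envelope arithmetic makes Matt's point concrete. The sizes below are invented round numbers, not measured protocol costs; the only claim is that per-bump relay cost accumulates linearly:

```python
# Illustrative numbers only (assumed sizes, not measurements):
ORIGINAL_TX_VBYTES = 300   # assumed size of the original transaction
DIFF_VBYTES = 120          # assumed size of one minimal "diff" bump tx

def total_relay_vbytes(num_bumps, diff_size=DIFF_VBYTES):
    # Every bump must still be relayed to every peer, like any transaction,
    # so network-wide cost grows linearly with the number of bumps.
    return num_bumps * diff_size

# After a handful of bumps, the diffs alone exceed the cost of one
# full RBF-style rebroadcast of the original transaction:
assert total_relay_vbytes(3) > ORIGINAL_TX_VBYTES
```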



# Special transaction structure should not be required to bump fees

In an ideal design, special structural foresight would not be needed
in order for a txn's feerate to be improved after broadcast.

Anchor outputs specified solely for CPFP, which amount to many bytes of
wasted chainspace, are a hack. It's probably uncontroversial at this


This has nothing to do with fee bumping, though; it is only solved with covenants or something in 
that direction, not with different relay policy.



Coming down to earth, the "tabula rasa" thought experiment above has led
me to favor an approach like the transaction sponsors design that Jeremy
proposed in a prior discussion back in 2020[1].


How does this not also fail your above criterion of not wasting block space?

Further, this doesn't solve pinning attacks at all. In lightning we want to be able to *replace* 
something in the mempool (or see it confirm soon, but that assumes we know exactly what transaction 
is in "the" mempool). Just being able to sponsor something doesn't help if you don't know what that 
thing is.


Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-10 Thread James O'Beirne via bitcoin-dev
> It's not that simple. As a miner, if I have less than 1vMB of
> transactions in my mempool, I don't want a 10 sat/vb transaction paying
> 10sats replaced by a 100 sat/vb transaction paying only 1sat.

I don't understand why the "<1vMB in the mempool" case is even worth
consideration because the miner will just include the entire mempool in the
next block regardless of feerate.
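James's point can be made concrete with a minimal sketch of greedy template assembly. This simplifies away ancestor-package scoring (Bitcoin Core's BlockAssembler sorts by ancestor feerate), but it shows why feerate ordering is irrelevant to *membership* when the whole mempool fits in one block:

```python
# Minimal sketch of greedy block assembly (simplified: ignores ancestor
# packages and sigops; Bitcoin Core uses ancestor-feerate scoring).

MAX_BLOCK_VSIZE = 1_000_000  # ~1 vMB

def assemble_block(mempool):
    """mempool: list of (txid, fee_sats, vsize) tuples."""
    # Sort by feerate descending, then pack greedily.
    ordered = sorted(mempool, key=lambda tx: tx[1] / tx[2], reverse=True)
    block, used = [], 0
    for txid, fee, vsize in ordered:
        if used + vsize <= MAX_BLOCK_VSIZE:
            block.append(txid)
            used += vsize
    return block

# With less than 1vMB of transactions, everything is included no matter
# how the feerates compare -- ordering changes nothing about membership:
small_mempool = [("a", 100, 200), ("b", 50_000, 300), ("c", 10, 150)]
assert set(assemble_block(small_mempool)) == {"a", "b", "c"}
```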


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-10 Thread Greg Sanders via bitcoin-dev
One quick thought on the proposal, and perhaps on sponsors in general (I didn't
have time to go over the original proposal again):

Since sponsors can come from anywhere, the wallet application must have
access to the mempool to know which inputs must be double-spent to RBF the
sponsor transaction.

Seems like an important difference to be considered.

On Fri, Feb 11, 2022 at 3:49 AM James O'Beirne via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> There's been much talk about fee-bumping lately, and for good reason -
> dynamic fee management is going to be a central part of bitcoin use as
> the mempool fills up (lord willing) and right now fee-bumping is
> fraught with difficulty and pinning peril.
>
> Gloria's recent post on the topic[0] was very lucid and highlights a
> lot of the current issues, as well as some proposals to improve the
> situation.
>
> As others have noted, the post was great. But throughout the course
> of reading it and the ensuing discussion, I became troubled by the
> increasing complexity of both the status quo and some of the
> proposed remedies.
>
> Layering on special cases, more carve-outs, and X and Y percentage
> thresholds is going to make reasoning about the mempool harder than it
> already is. Special consideration for "what should be in the next
> block" and/or the caching of block templates seems like an imposing
> dependency, dragging in a bunch of state and infrastructure to a
> question that should be solely limited to mempool feerate aggregates
> and the feerate of the particular txn package a wallet is concerned
> with.
>
> This is bad enough for protocol designers and Core developers, but
> making the situation any more intractable for "end-users" and wallet
> developers feels wrong.
>
> I thought it might be useful to step back and reframe. Here are a few
> aims that are motivated chiefly by the quality of end-user experience,
> constrained to obey incentive compatibility (i.e. miner reward, DoS
> avoidance). Forgive the abstract dalliance for a moment; I'll talk
> through concretes afterwards.
>
>
> # Purely additive feerate bumps should never be impossible
>
> Any user should always be able to add to the incentive to mine any
> transaction in a purely additive way. The countervailing force here
> ends up being spam prevention (a la min-relay-fee) to prevent someone
> from consuming bandwidth and mempool space with a long series of
> infinitesimal fee-bumps.
>
> A fee bump, naturally, should be given the same per-byte consideration
> as a normal Bitcoin transaction in terms of relay and block space,
> although it would be nice to come up with a more succinct
> representation. This leads to another design principle:
>
>
> # The bandwidth and chain space consumed by a fee-bump should be minimal
>
> Instead of prompting a rebroadcast of the original transaction for
> replacement, which contains a lot of data not new to the network, it
> makes more sense to broadcast the "diff" which is the additive
> contribution towards some txn's feerate.
>
> This dovetails with the idea that...
>
>
> # Special transaction structure should not be required to bump fees
>
> In an ideal design, special structural foresight would not be needed
> in order for a txn's feerate to be improved after broadcast.
>
> Anchor outputs specified solely for CPFP, which amount to many bytes of
> wasted chainspace, are a hack. It's probably uncontroversial at this
> point to say that even RBF itself is kind of a hack - a special
> sequence number should not be necessary for post-broadcast contribution
> toward feerate. Not to mention RBF's seemingly wasteful consumption of
> bandwidth due to the rebroadcast of data the network has already seen.
>
> In a sane design, no structural foresight - and certainly no wasted
> bytes in the form of unused anchor outputs - should be needed in order
> to add to a miner's reward for confirming a given transaction.
>
> Planning for fee-bumps explicitly in transaction structure also often
> winds up locking in which keys are required to bump fees, at odds
> with the idea that...
>
>
> # Feerate bumps should be able to come from anywhere
>
> One of the practical downsides of CPFP that I haven't seen discussed in
> this conversation is that it requires the transaction to pre-specify the
> keys needed to sign for fee bumps. This is problematic if you're, for
> example, using a vault structure that makes use of pre-signed
> transactions.
>
> What if the key you specified in the anchor outputs for a bunch of
> pre-signed txns is compromised? What if you'd like to be able to
> dynamically select the wallet that bumps fees? CPFP does you no favors
> here.
>
> There is of course a tension between allowing fee bumps to come from
> anywhere and the threat of pinning-like attacks. So we should venture
> to remove pinning as a possibility, in line with the first design
> principle I discuss.
>
>
> ---
>
> Coming down to earth, the "tabula rasa" thought experiment above 

[bitcoin-dev] Thoughts on fee bumping

2022-02-10 Thread James O'Beirne via bitcoin-dev
There's been much talk about fee-bumping lately, and for good reason -
dynamic fee management is going to be a central part of bitcoin use as
the mempool fills up (lord willing) and right now fee-bumping is
fraught with difficulty and pinning peril.

Gloria's recent post on the topic[0] was very lucid and highlights a
lot of the current issues, as well as some proposals to improve the
situation.

As others have noted, the post was great. But throughout the course
of reading it and the ensuing discussion, I became troubled by the
increasing complexity of both the status quo and some of the
proposed remedies.

Layering on special cases, more carve-outs, and X and Y percentage
thresholds is going to make reasoning about the mempool harder than it
already is. Special consideration for "what should be in the next
block" and/or the caching of block templates seems like an imposing
dependency, dragging in a bunch of state and infrastructure to a
question that should be solely limited to mempool feerate aggregates
and the feerate of the particular txn package a wallet is concerned
with.

This is bad enough for protocol designers and Core developers, but
making the situation any more intractable for "end-users" and wallet
developers feels wrong.

I thought it might be useful to step back and reframe. Here are a few
aims that are motivated chiefly by the quality of end-user experience,
constrained to obey incentive compatibility (i.e. miner reward, DoS
avoidance). Forgive the abstract dalliance for a moment; I'll talk
through concretes afterwards.


# Purely additive feerate bumps should never be impossible

Any user should always be able to add to the incentive to mine any
transaction in a purely additive way. The countervailing force here
ends up being spam prevention (a la min-relay-fee) to prevent someone
from consuming bandwidth and mempool space with a long series of
infinitesimal fee-bumps.

A fee bump, naturally, should be given the same per-byte consideration
as a normal Bitcoin transaction in terms of relay and block space,
although it would be nice to come up with a more succinct
representation. This leads to another design principle:


# The bandwidth and chain space consumed by a fee-bump should be minimal

Instead of prompting a rebroadcast of the original transaction for
replacement, which contains a lot of data not new to the network, it
makes more sense to broadcast the "diff" which is the additive
contribution towards some txn's feerate.

This dovetails with the idea that...


# Special transaction structure should not be required to bump fees

In an ideal design, special structural foresight would not be needed
in order for a txn's feerate to be improved after broadcast.

Anchor outputs specified solely for CPFP, which amount to many bytes of
wasted chainspace, are a hack. It's probably uncontroversial at this
point to say that even RBF itself is kind of a hack - a special
sequence number should not be necessary for post-broadcast contribution
toward feerate. Not to mention RBF's seemingly wasteful consumption of
bandwidth due to the rebroadcast of data the network has already seen.

In a sane design, no structural foresight - and certainly no wasted
bytes in the form of unused anchor outputs - should be needed in order
to add to a miner's reward for confirming a given transaction.

Planning for fee-bumps explicitly in transaction structure also often
winds up locking in which keys are required to bump fees, at odds
with the idea that...


# Feerate bumps should be able to come from anywhere

One of the practical downsides of CPFP that I haven't seen discussed in
this conversation is that it requires the transaction to pre-specify the
keys needed to sign for fee bumps. This is problematic if you're, for
example, using a vault structure that makes use of pre-signed
transactions.

What if the key you specified in the anchor outputs for a bunch of
pre-signed txns is compromised? What if you'd like to be able to
dynamically select the wallet that bumps fees? CPFP does you no favors
here.

There is of course a tension between allowing fee bumps to come from
anywhere and the threat of pinning-like attacks. So we should venture
to remove pinning as a possibility, in line with the first design
principle I discuss.


---

Coming down to earth, the "tabula rasa" thought experiment above has led
me to favor an approach like the transaction sponsors design that Jeremy
proposed in a prior discussion back in 2020[1].

Transaction sponsors allow feerates to be bumped after a transaction's
broadcast, regardless of the structure of the original transaction.
No rebroadcast (wasted bandwidth) is required for the original txn data.
No wasted chainspace on only-maybe-used prophylactic anchor outputs.

The interface for end-users is very straightforward: if you want to bump
fees, specify a transaction that contributes incrementally to package
feerate for some txid. Simple.
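The interface described above can be sketched in a few lines. Transaction sponsors are a proposal, not an implemented relay or consensus rule, so every name and field below is invented purely for illustration of the "additive contribution to package feerate" idea:

```python
# Purely hypothetical sketch of the sponsor interface described above.
# Field names are invented for illustration; no such structure exists
# in Bitcoin Core or in any deployed protocol.
from dataclasses import dataclass

@dataclass
class SponsorTx:
    sponsored_txid: str  # txid whose package feerate this bump boosts
    fee_sats: int        # additional fee contributed by the sponsor
    vsize: int           # size of the sponsor transaction itself

def package_feerate(target_fee, target_vsize, sponsors):
    """Effective feerate of the target once sponsors are attached."""
    total_fee = target_fee + sum(s.fee_sats for s in sponsors)
    total_vsize = target_vsize + sum(s.vsize for s in sponsors)
    return total_fee / total_vsize

# A 1 sat/vb transaction bumped well past its original feerate by
# a single sponsor, with no change to the original transaction:
bump = SponsorTx("deadbeef", fee_sats=5_000, vsize=110)
assert package_feerate(300, 300, [bump]) > 1
```

Note that, as Matt and Greg point out upthread, this simplicity is one-sided: the sketch says nothing about block-space cost or about how the sponsoring wallet learns which transaction is actually in "the" mempool.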

In the original discussion, there were a few