Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread Peter Todd via bitcoin-dev
On Wed, Oct 19, 2022 at 04:29:57PM +0200, Sergej Kotliar via bitcoin-dev wrote:
> Hi all,
> 
> Chiming in on this thread as I feel like the real dangers of RBF as default
> policy aren't sufficiently elaborated here. It's not only about the
> zero-conf (I'll get to that) but there is an even bigger danger called the
> american call option, which risks endangering the entirety of BIP21 "Scan
> this QR code with your wallet to buy this product" model that I believe
> we've all come to appreciate. Specifically, in a scenario with high
> volatility and many transactions in the mempools (which is where RBF would
> come in handy), a user can make a low-fee transaction and then wait for
> hours, days or even longer, and see whether BTCUSD moves. If BTCUSD moves
> up, user can cancel his transaction and make a new - cheaper one. The

I just checked this, and Bitrefill accepts transactions with RBF enabled.

> biggest risk in accepting bitcoin payments is in fact not zeroconf risk
> (it's actually quite easily managed), it's FX risk as the merchant must
> commit to a certain BTCUSD rate ahead of time for a purchase. Over time
> some transactions lose money to FX and others earn money - that evens out
> in the end. But if there is an _easily accessible in the wallet_ feature to
> "cancel transaction" that means it will eventually get systematically

...and I checked this with Electrum on Android, which has a handy "Cancel
Transaction" feature in the UI to easily cancel a payment. Which I did. You
should have a pending payment from this email, and unsurprisingly I don't have
my gift card. :)

The ship has already sailed on this. I'd suggest accepting Lightning, which
drastically shortens the time window involved.

FWIW, fixedfloat.com already deals with this call option risk by charging a
higher fee (1% vs 0.5%) for conversions where the exact destination amount has
been locked in; the default is for the exact destination amount to be picked at
the moment of confirmation.
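As a rough, hypothetical illustration of why this "american call option" has positive expected value for the payer (my own sketch; all parameters are invented, not Bitrefill's or fixedfloat's numbers), a Monte Carlo estimate:

```python
import random

def abuse_expected_profit(payment_usd=100.0, cancel_cost_usd=1.0,
                          volatility=0.05, trials=10_000, seed=42):
    """Estimate the value of the free call option held by a user who pays
    at a locked-in BTCUSD rate, then cancels via RBF and re-pays with
    fewer sats if BTCUSD rises before confirmation.
    All parameters here are illustrative assumptions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # relative BTCUSD move while the tx sits unconfirmed
        move = rng.gauss(0.0, volatility)
        # a rational user cancels only when the saving beats the replacement fee
        if payment_usd * move > cancel_cost_usd:
            total += payment_usd * move - cancel_cost_usd
        # otherwise: let the original tx confirm, the user loses nothing
    return total / trials

print(round(abuse_expected_profit(), 2))
```

The asymmetry is the whole point: downside is zero (let the payment confirm), upside is free, so the expected profit is strictly positive and grows with volatility and confirmation delay.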

> abused. A risk of X% loss on many payments that's easy to systematically
> Bitrefill currently processes 1500-2000 onchain payments every day. For us,
> a world where bitcoin becomes de facto RBF by default, means that we would

Electrum enables RBF by default. So do Green Wallet, many other wallets, and
many exchanges. Most of those wallets/exchanges don't even have a way to send
a transaction without RBF. This ship has sailed.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread Antoine Riard via bitcoin-dev
Hi Sergej,

Thanks for the insightful posting, especially for highlighting the FX risk,
which was far from evident on my side!

I don't know the details of the security architecture of Bitrefill's zeroconf
acceptance system, though I suppose there is at least a set of full nodes
well-connected across the p2p network, on top of which some mempool
reconciliation is exercised and zeroconf candidates are sanitized. While I
believe this is a far more robust deployment against double-spend attempts,
a sophisticated attacker still has the ability to "taint" miner mempools, and
from there judiciously partition the transaction-relay network to game such a
distributed mempool monitoring system. There is also the possibility of an
attacker using some "divide-and-conquer" transaction broadcast algorithm to
map Bitrefill's monitoring points, though as far as I'm aware no such
algorithm has been discussed. I agree that all of this is easier said than
done.

(Which leads me to think that such distributed mempool monitoring systems
should provide some enhanced security even in a full-rbf world. That they
would require far more resources than the average node on the p2p network
might be a counter-argument to their social acceptance; however, I'm also
thinking that a robust Lightning infrastructure of the future might require
multiple mempool/transaction-relay endpoints, at least to reduce cross-layer
mapping links, though that's a conversation for another day...)

About the FX risk itself, it is far from being isolated to 0conf, as
Lightning payments themselves might still have a time lapse between the
issuance of the invoice and the settlement of the HTLC at the payee endpoint.
In fact this volatility concern is endured by anyone using Bitcoin regularly
at the interface with the fiat world, i.e. everyone except the long-term
store-of-wealth crowd. From a merchant perspective, effectively, the options
to cover this risk are simple. One could take positions directly in
traditional financial derivatives, as participants in international trade do,
though that would require educated manpower on the merchant side. Or one
could leverage some stablecoin derivatives system, which comes with its own
technical complexity and social trust hazards. Another direction would be to
define clearly whether the FX risk lies with the merchant or the user. If
it's on users, they should be the ones RBFing/CPFPing to increase the
merchant address output; beyond the fact that "dynamic pricing" would be a
weird UX, it would require liveness from the wallet until block confirmation
(introducing here many of the requirements of a LN wallet). If it's on the
merchants, they could be the ones CPFPing thanks to package relay, though
that would again come with some engineering complexity and an overhead
blockspace cost (and the first version of package relay likely won't enable
CPFP batching, out of concern for potential bandwidth/CPU DoS).

On the efficacy of RBF, I understand the current approach of assuming
"manual" RBFing by power users to be poor UX thinking. I hope in the future
to have automatic fee-bumping implemented by user wallets, where a
fee-bumping budget and a confirmation preference are pre-defined for all
payments, and the fee-bumping logic "simply" enforces the user policy,
ideally based on historical mempool data. True fact: we don't have such
logic in consumer wallets today, or at best only rudimentarily in the
backend of LN implementations, and only for time-sensitive on-chain claims
for now (at least speaking for LDK). If we take the history of browsers as a
comparison, while we might be out of the Lynx-style phase of wallets, we are
probably still closer to late Netscape than to something like Chrome today.
In other words, there are many directions for improvement in users' wallets.
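The pre-defined budget and confirmation preference described above could be sketched roughly like this (a minimal illustration of the idea; every name and number is hypothetical, and this is not LDK's or any wallet's actual API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BumpPolicy:
    budget_sat: int            # total fee the user pre-commits to, at most
    target_blocks: int         # confirmation preference
    bump_factor: float = 1.25  # minimum relative fee increase per replacement

def next_fee(policy: BumpPolicy, current_fee_sat: int,
             blocks_waited: int) -> Optional[int]:
    """Return the fee for an RBF replacement, or None if no bump is due."""
    if blocks_waited < policy.target_blocks:
        return None                      # still within the user's preference
    bumped = int(current_fee_sat * policy.bump_factor)
    if bumped > policy.budget_sat:
        return None                      # budget exhausted: stop bumping
    return bumped

policy = BumpPolicy(budget_sat=10_000, target_blocks=3)
print(next_fee(policy, 2_000, blocks_waited=4))   # 2500: bump is due
print(next_fee(policy, 2_000, blocks_waited=1))   # None: still waiting
print(next_fee(policy, 9_000, blocks_waited=4))   # None: would exceed budget
```

A production version would of course consult mempool fee estimates rather than a fixed multiplier, but the point is that the user states intent once and the wallet enforces it automatically.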

All that said, I'm coming to the view that as a community we would be better
off weighing more deeply the risks/costs between 0conf applications and
contracting protocols in light of full-rbf.

Best,
Antoine

Le mer. 19 oct. 2022 à 10:33, Sergej Kotliar via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> a écrit :

> Hi all,
>
> Chiming in on this thread as I feel like the real dangers of RBF as
> default policy aren't sufficiently elaborated here. It's not only about the
> zero-conf (I'll get to that) but there is an even bigger danger called the
> american call option, which risks endangering the entirety of BIP21 "Scan
> this QR code with your wallet to buy this product" model that I believe
> we've all come to appreciate. Specifically, in a scenario with high
> volatility and many transactions in the mempools (which is where RBF would
> come in handy), a user can make a low-fee transaction and then wait for
> hours, days or even longer, and see whether BTCUSD moves. If BTCUSD moves
> up, user can cancel his transaction and make a new - cheaper one. The
> biggest risk in accepting bitcoin payments is in fact not zeroconf 

Re: [bitcoin-dev] brickchain

2022-10-19 Thread mm-studios via bitcoin-dev
--- Original Message ---
On Wednesday, October 19th, 2022 at 2:40 PM, angus  wrote:

>> Let's allow a miner to include transactions until the block is filled, let's 
>> call this structure (coining a new term 'Brick'), B0. [brick=block that 
>> doesn't meet the difficulty rule and is filled of tx to its full capacity]
>> Since PoW hashing is continuously active, Brick B0 would have a nonce 
>> corresponding to a minimum numeric value of its hash found until it got 
>> filled.
>
> So, if I'm understanding right, this amounts to "reduce difficulty required 
> for a block ('brick') to be valid if the mempool contains more than 1 block's 
> worth of transactions so we get transactions confirmed faster" using 'bricks' 
> as short-lived sidechains that get merged into blocks?

They wouldn't get confirmed faster.
Imagine a regular Big Block (BB) re-structured as a brickchain:
BB = B0 <- B1 <- ... <- Bn (Block = chain of bricks)

Only B0 contains the coinbase transaction.

The bricks Bi are streamed from miner to nodes as they are produced.
The node creates a separate fork on B0's arrival, and on arrival of the last
brick Bn treats the whole brickchain as it now treats one block: either
accept it or reject it as a whole, as if the complete block had just arrived
entirely (in reality it has arrived as a stream of bricks).
Before the brickchain is complete the node does nothing special, just
validates each brick on arrival and waits for the next.
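The node rule sketched above can be modelled in a toy form, combining the constraints stated elsewhere in the thread (non-final bricks must be completely filled, and the accumulated difficulty of the chain must be equivalent to one block's). This is my own reading of the proposal, with toy hash sizes:

```python
HASH_BITS = 32           # toy hash width; real Bitcoin uses 256 bits
BLOCK_TARGET = 1 << 20   # toy block difficulty target

def work(best_hash: int) -> float:
    """Expected number of hash attempts to find a hash <= best_hash."""
    return (1 << HASH_BITS) / (best_hash + 1)

def accept_brickchain(bricks) -> bool:
    """bricks: list of (best_hash, is_full) pairs in arrival order.
    Accept atomically iff every non-final brick is completely filled
    (the anti-spam rule) and accumulated work matches one block's work."""
    required = (1 << HASH_BITS) / BLOCK_TARGET   # work implied by the target
    total = 0.0
    for i, (best_hash, is_full) in enumerate(bricks):
        if not is_full and i != len(bricks) - 1:
            return False        # partially-filled non-final brick: reject
        total += work(best_hash)
    return total >= required

print(accept_brickchain([(0, True)]))                     # enough work: accept
print(accept_brickchain([(2**31, True), (2**31, True)]))  # far too little work
print(accept_brickchain([(0, False), (0, True)]))         # unfilled brick: reject
```

The "either accept it or reject it as a whole" behaviour falls out naturally: nothing is committed until the accumulated-work condition is met over the entire stream.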

> This would have the same fundamental problem as just making the max blocksize 
> bigger - it increases the rate of growth of storage required for a full node, 
> because you're allowing blocks/bricks to be created faster, so there will be 
> more confirmed transactions to store in a given time window than under 
> current Bitcoin rules.

Yes, the data transmitted over the network is bigger, because we are
intentionally increasing the throughput instead of delaying transactions in
the mempool.
This is a potential how-to in case there was an intention to speed up L1.
The unavoidable price of speed in tx/s is bandwidth and the volume of data
to process.
The point is to do it without making bigger blocks.

> Bitcoin doesn't take the size of the mempool into account when adjusting the 
> difficulty because the time-between-blocks is 'more important' than avoiding 
> congestion where transactions take ages to get into a block. The fee 
> mechanism in part allows users to decide how urgently they want their tx to 
> get confirmed, and high fees when there is congestion also disincentivises 
> others from transacting at all, which helps arrest mempool growth.

Streaming bricks instead of delivering a big block can be considered a way
of reducing congestion. This is valid at any scale.
E.g. a 1 MB block delivered at once every 10 minutes versus a stream of ten
100 KiB bricks delivered one per minute.

> I'd imagine we'd also see a 'highway widening' effect with this kind of 
> proposal - if you increase the tx volume Bitcoin can settle in a given time, 
> that will quickly be used up by more people transacting until we're back at a 
> congested state again.
>
>> Fully filled brick B0, with a hash that doesn't meet the difficulty rule, 
>> would be broadcasted and nodes would have it on in a separate fork as usual.

Congestion can always happen with enough workload.
A system able to determine its workload can regulate it (keeping
transactions in the mempool to temporarily alleviate pressure).

The congestion counter-measure remains in essence the same.

> How do we know if the hash the miner does find for a brick was their 'best 
> effort' and they're not just being lazy? There's an element of luck in the 
> best hash a miner can find, sometimes it takes a long time to meet the 
> difficulty requirement and sometimes it happens almost at instantly.

A lazy miner will produce a longer brickchain, because their best hashes
would be numerically greater than those of a more powerful miner. A more
competitive miner will deliver the complete brickchain faster, and hence the
lazy miner's half-finished brickchain will be discarded.
It is exactly like the current system with a lazy miner.

> How would we know how 'busy' the mempool was at the time a brick from months 
> or years ago was mined?

I don't understand the question, but I guess the answer is the same with
bricks substituted for blocks.

> Nodes have to be able to run through the entire history of the blockchain and 
> check everything is valid. They have to do this using only the previous 
> blocks they've already validated - they won't have historical snapshots of 
> the mempool (they'll build and mutate a UTXO set, but that's different). 
> Transactions don't contain a 'created-at' time that you could compare to the 
> block's creation time (and if they did, you probably couldn't trust it).

Why does this question apply to the concept of bricks and not to the
concept of blocks?

I see the resulting blockchain would be a chain of blocks and bricks:
Bi = block at height i
bi = brick at height i


Re: [bitcoin-dev] brickchain

2022-10-19 Thread mm-studios via bitcoin-dev
--- Original Message ---
On Wednesday, October 19th, 2022 at 10:34 PM, G. Andrew Stone 
 wrote:

> Consider that a miner can also produce transactions. So every miner would 
> produce spam tx to fill their bricks at the minimum allowed difficulty to 
> reap the brick coinbase reward.

Except that, as I explained in a previous email, bricks don't contain a
reward. They are meaningless unless they form a complete brickchain with an
accumulated difficulty equivalent to the current block difficulty.

> You might quickly respond with a modification that changes or eliminates the 
> brick coinbase reward, but perhaps that exact reward and the major negative 
> consequence of miners creating spam tx needs careful thought.

Since one block is equivalent to one brickchain, there exists only one
coinbase tx, and since the brickchain is treated atomically as a whole, it
follows the same processing as a block.
The only observable difference on the wire (and the reason throughput
increases) is that the information has been transmitted as a stream (a
decomposed block, spaced in time).
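The claimed equivalence (one coinbase in B0, atomic processing, only the wire format differing) can be made concrete with a toy decomposition. This is my own sketch for illustration, not part of the proposal:

```python
def split_into_bricks(txs, capacity):
    """Decompose one block's ordered tx list into fixed-capacity bricks;
    only the first brick (B0) carries the coinbase."""
    return [txs[i:i + capacity] for i in range(0, len(txs), capacity)]

def reassemble(bricks):
    """A complete brickchain is processed atomically, like one block."""
    return [tx for brick in bricks for tx in brick]

block_txs = ["coinbase"] + ["tx%d" % i for i in range(9)]
bricks = split_into_bricks(block_txs, capacity=4)

assert reassemble(bricks) == block_txs                # same content either way
assert sum(b.count("coinbase") for b in bricks) == 1  # single coinbase, in B0
print(len(bricks))  # one 10-tx block streamed as 3 bricks
```

Content and consensus meaning are unchanged; only the delivery is decomposed in time.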

> See "bobtail" for a weak block proposal that produces a more consistent 
> discovery time, and "tailstorm" for a proposal that uses the content of those 
> weak blocks as commitment to what transactions miners are working on (which 
> will allow more trustworthy (but still not foolproof) use of transactions 
> before confirmation)... neither of which have a snowball's chance in hell 
> (along with any other hard forking change) of being put into bitcoin :-).

thanks
Marcos

> Andrew
>
> On Wed, Oct 19, 2022 at 12:05 PM mm-studios via bitcoin-dev 
>  wrote:
>
>> Thanks all for your responses.
>> so is it a no-go is because "reduced settlement speed is a desirable 
>> feature"?
>>
>> I don';t know what weights more in this consideration:
>> A) to not increase the workload of full-nodes, being "less difficult to 
>> operate" and hence reduce the chance of some of them giving up which would 
>> lead to a negative centralization effect. (a bit cumbersome reasoning in my 
>> opinion, given the competitive nature of PoW itself, which introduce an 
>> accepted centralization, forcing some miners to give up). In this case the 
>> fact is accepted because is decentralized enough.
>> B) to not undermine L2 systems like LN.
>>
>> in any case it is a major no-go reason, if there is not intention to speed 
>> up L1.
>> Thanks
>> M
>>
>> --- Original Message ---
>> On Wednesday, October 19th, 2022 at 3:24 PM, Erik Aronesty  
>> wrote:
>>
 currently, a miner produce blocks with a limited capacity of transactions 
 that ultimately limits the global settlement throughput to a reduced 
 number of tx/s.
>>>
>>> reduced settlement speed is a desirable feature and isn't something we need 
>>> to fix
>>>
>>> the focus should be on layer 2 protocols that allow the ability to hold & 
>>> transfer, uncommitted transactions as pools / joins, so that layer 1's 
>>> decentralization and incentives can remain undisturbed
>>>
>>> protocols like mweb, for example
>>>
>>> On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev 
>>>  wrote:
>>>
 Hi Bitcoin devs,
 I'd like to share an idea of a method to increase throughput in the 
 bitcoin network.

 Currently, a miner produce blocks with a limited capacity of transactions 
 that ultimately limits the global settlement throughput to a reduced 
 number of tx/s.

 Big-blockers proposed the removal of limits but this didn't come with 
 undesirable effects that have been widely discussed and rejected.

 The main feature we wanted to preserve is 'small blocks', providing 
 'better network effects' I won't focus on them.

 The problem with small blocks is that, once a block is filled 
 transactions, they are kept back in the mempool, waiting for their turn in 
 future blocks.

 The following changes in the protocol aim to let all transactions go in 
 the current block, while keeping the block size small. It requires changes 
 in the PoW algorithm.

 Currently, the PoW algorithm consists on finding a valid hash for the 
 block. Its validity is determined by comparing the numeric value of the 
 block hash with a protocol-defined value difficulty.

 Once a miner finds a nonce for the block that satisfies the condition the 
 new block becomes valid and can be propagated. All nodes would update 
 their blockchains with it. (assuming no conflict resolution (orphan 
 blocks, ...) for clarity).

 This process is meant to happen every 10 minutes in average.

 With this background information (we all already know) I go on to describe 
 the idea:

 Let's allow a miner to include transactions until the block is filled, 
 let's call this structure (coining a new term 'Brick'), B0. [brick=block 
 that doesn't meet the difficulty rule and is filled of tx to 

Re: [bitcoin-dev] brickchain

2022-10-19 Thread G. Andrew Stone via bitcoin-dev
Consider that a miner can also produce transactions.  So every miner would
produce spam tx to fill their bricks at the minimum allowed difficulty to
reap the brick coinbase reward.

You might quickly respond with a modification that changes or eliminates
the brick coinbase reward, but perhaps that exact reward and the major
negative consequence of miners creating spam tx needs careful thought.

See "bobtail" for a weak block proposal that produces a more consistent
discovery time, and "tailstorm" for a proposal that uses the content of
those weak blocks as commitment to what transactions miners are working on
(which will allow more trustworthy (but still not foolproof) use of
transactions before confirmation)... neither of which have a snowball's
chance in hell (along with any other hard forking change) of being put into
bitcoin :-).

Andrew

On Wed, Oct 19, 2022 at 12:05 PM mm-studios via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Thanks all for your responses.
> so is it a no-go is because "reduced settlement speed is a desirable
> feature"?
>
> I don';t know what weights more in this consideration:
> A) to not increase the workload of full-nodes, being "less difficult to
> operate" and hence reduce the chance of some of them giving up which would
> lead to a negative centralization effect. (a bit cumbersome reasoning in my
> opinion, given the competitive nature of PoW itself, which introduce an
> accepted centralization, forcing some miners to give up). In this case the
> fact is accepted because is decentralized enough.
> B) to not undermine L2 systems like LN.
>
> in any case it is a major no-go reason, if there is not intention to speed
> up L1.
> Thanks
> M
> --- Original Message ---
> On Wednesday, October 19th, 2022 at 3:24 PM, Erik Aronesty 
> wrote:
>
> > currently, a miner produce blocks with a limited capacity of
> transactions that ultimately limits the global settlement throughput to a
> reduced number of tx/s.
>
> reduced settlement speed is a desirable feature and isn't something we
> need to fix
>
> the focus should be on layer 2 protocols that allow the ability to hold &
> transfer, uncommitted transactions as pools / joins, so that layer 1's
> decentralization and incentives can remain undisturbed
>
> protocols like mweb, for example
>
>
>
>
> On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi Bitcoin devs,
>> I'd like to share an idea of a method to increase throughput in the
>> bitcoin network.
>>
>> Currently, a miner produce blocks with a limited capacity of transactions
>> that ultimately limits the global settlement throughput to a reduced number
>> of tx/s.
>>
>> Big-blockers proposed the removal of limits but this didn't come with
>> undesirable effects that have been widely discussed and rejected.
>>
>> The main feature we wanted to preserve is 'small blocks', providing
>> 'better network effects' I won't focus on them.
>>
>> The problem with small blocks is that, once a block is filled
>> transactions, they are kept back in the mempool, waiting for their turn in
>> future blocks.
>>
>> The following changes in the protocol aim to let all transactions go in
>> the current block, while keeping the block size small. It requires changes
>> in the PoW algorithm.
>>
>> Currently, the PoW algorithm consists on finding a valid hash for the
>> block. Its validity is determined by comparing the numeric value of the
>> block hash with a protocol-defined value difficulty.
>>
>> Once a miner finds a nonce for the block that satisfies the condition the
>> new block becomes valid and can be propagated. All nodes would update their
>> blockchains with it. (assuming no conflict resolution (orphan blocks, ...)
>> for clarity).
>>
>> This process is meant to happen every 10 minutes in average.
>>
>> With this background information (we all already know) I go on to
>> describe the idea:
>>
>> Let's allow a miner to include transactions until the block is filled,
>> let's call this structure (coining a new term 'Brick'), B0. [brick=block
>> that doesn't meet the difficulty rule and is filled of tx to its full
>> capacity]
>> Since PoW hashing is continuously active, Brick B0 would have a nonce
>> corresponding to a minimum numeric value of its hash found until it got
>> filled.
>>
>> Fully filled brick B0, with a hash that doesn't meet the difficulty rule,
>> would be broadcasted and nodes would have it on in a separate fork as usual.
>>
>> At this point, instead of discarding transactions, our miner would start
>> working on a new brick B1, linked with B0 as usual.
>>
>> Nodes would allow incoming regular blocks and bricks with hashes that
>> don't satisfy the difficulty rule, provided the brick is fully filled of
>> transactions. Bricks not fully filled would be rejected as invalid to
>> prevent spam (except if constitutes the last brick of a brickchain,
>> explained below).
>>
>> Let's assume that 10 

Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread Greg Sanders via bitcoin-dev
Another downside is that the sender may not opt into a non-pinnable future
format like "V3 transactions", making CPFP difficult. They may spend a lot
of fees to do this, however, so maybe we're really reaching here.

On Wed, Oct 19, 2022 at 12:07 PM Sergej Kotliar via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> It's an interesting idea, presumably it would work w the new package relay.
> Scorched earth bidding war is definitely fine to deter this type of abuse.
> Need to consider it more thoroughly from all sides tho. CPFP on the server
> side generally has a couple of downsides:
> * Requires a hot wallet to receive bitcoin
> * an entity that is reliably known to do CPFP can be abused by people
> looking to consolidate utxos, which can be quite costly. Might be solvable
> with a set of conditionals, and bad UX for abusers is less of a concern :)
>
> Will follow up after more deliberation, thanks!
>
>
> On Wed, 19 Oct 2022 at 17:43, Jeremy Rubin 
> wrote:
>
>> If they do this to you, and the delta is substantial, can't you sweep all
>> such abusers with a cpfp transaction replacing their package and giving you
>> the original txn?
>>
>> On Wed, Oct 19, 2022, 7:33 AM Sergej Kotliar via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hi all,
>>>
>>> Chiming in on this thread as I feel like the real dangers of RBF as
>>> default policy aren't sufficiently elaborated here. It's not only about the
>>> zero-conf (I'll get to that) but there is an even bigger danger called the
>>> american call option, which risks endangering the entirety of BIP21 "Scan
>>> this QR code with your wallet to buy this product" model that I believe
>>> we've all come to appreciate. Specifically, in a scenario with high
>>> volatility and many transactions in the mempools (which is where RBF would
>>> come in handy), a user can make a low-fee transaction and then wait for
>>> hours, days or even longer, and see whether BTCUSD moves. If BTCUSD moves
>>> up, user can cancel his transaction and make a new - cheaper one. The
>>> biggest risk in accepting bitcoin payments is in fact not zeroconf risk
>>> (it's actually quite easily managed), it's FX risk as the merchant must
>>> commit to a certain BTCUSD rate ahead of time for a purchase. Over time
>>> some transactions lose money to FX and others earn money - that evens out
>>> in the end. But if there is an _easily accessible in the wallet_ feature to
>>> "cancel transaction" that means it will eventually get systematically
>>> abused. A risk of X% loss on many payments that's easy to systematically
>>> abuse is more scary than a rare risk of losing 100% of one occasional
>>> payment. It's already possible to execute this form of abuse with opt-in
>>> RBF, which may lead to us at some point refusing those payments (even with
>>> confirmation) or cumbersome UX to work around it, such as crediting the
>>> bitcoin to a custodial account.
>>>
>>> To compare zeroconf risk with FX risk: I think we've had one incident in
>>> 8 years of operation where a user successfully fooled our server to accept
>>> a payment that in the end didn't confirm. To successfully fool (non-RBF)
>>> zeroconf one needs to have access to mining infrastructure and probability
>>> of success is the % of hash rate controlled. This is simply due to the fact
>>> that the network currently won't propagate the replacement transaction to
>>> the miner, which is what's being discussed here. American call option risk
>>> would however be available to 100% of all users, needs nothing beyond the
>>> wallet app, and has no cost to the user - only upside.
>>>
>>> Bitrefill currently processes 1500-2000 onchain payments every day. For
>>> us, a world where bitcoin becomes de facto RBF by default, means that we
>>> would likely turn off the BIP21 model for onchain payments, instruct
>>> Bitcoin users to use Lightning or deposit onchain BTC to a custodial
>>> account that we have.
>>> This option is however not available for your typical
>>> BTCPayServer/CoinGate/Bitpay/IBEX/OpenNode et al. Would be great to hear
>>> from other merchants or payment providers how they see this new behavior
>>> and how they would counteract it.
>>>
>>> Currently Lightning is somewhere around 15% of our total bitcoin
>>> payments. This is very much not nothing, and all of us here want Lightning
>>> to grow, but I think it warrants a serious discussion on whether we want
>>> Lightning adoption to go to 100% by means of disabling on-chain commerce.
>>> For me personally it would be an easier discussion to have when Lightning
>>> is at 80%+ of all bitcoin transactions. Currently far too many bitcoin
>>> users simply don't have access to Lightning, and of those that do and hold
>>> their own keys Muun is the biggest wallet per our data, not least due to
>>> their ease-of-use which is under threat per the OP. It's hard to assess how
>>> many users would switch to Lightning in such a scenario, the communication
>>> 

Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread Sergej Kotliar via bitcoin-dev
It's an interesting idea, presumably it would work w the new package relay.
Scorched earth bidding war is definitely fine to deter this type of abuse.
Need to consider it more thoroughly from all sides tho. CPFP on the server
side generally has a couple of downsides:
* Requires a hot wallet to receive bitcoin
* an entity that is reliably known to do CPFP can be abused by people
looking to consolidate utxos, which can be quite costly. Might be solvable
with a set of conditionals, and bad UX for abusers is less of a concern :)

Will follow up after more deliberation, thanks!


On Wed, 19 Oct 2022 at 17:43, Jeremy Rubin  wrote:

> If they do this to you, and the delta is substantial, can't you sweep all
> such abusers with a cpfp transaction replacing their package and giving you
> the original txn?
>
> On Wed, Oct 19, 2022, 7:33 AM Sergej Kotliar via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi all,
>>
>> Chiming in on this thread as I feel like the real dangers of RBF as
>> default policy aren't sufficiently elaborated here. It's not only about the
>> zero-conf (I'll get to that) but there is an even bigger danger called the
>> american call option, which risks endangering the entirety of BIP21 "Scan
>> this QR code with your wallet to buy this product" model that I believe
>> we've all come to appreciate. Specifically, in a scenario with high
>> volatility and many transactions in the mempools (which is where RBF would
>> come in handy), a user can make a low-fee transaction and then wait for
>> hours, days or even longer, and see whether BTCUSD moves. If BTCUSD moves
>> up, user can cancel his transaction and make a new - cheaper one. The
>> biggest risk in accepting bitcoin payments is in fact not zeroconf risk
>> (it's actually quite easily managed), it's FX risk as the merchant must
>> commit to a certain BTCUSD rate ahead of time for a purchase. Over time
>> some transactions lose money to FX and others earn money - that evens out
>> in the end. But if there is an _easily accessible in the wallet_ feature to
>> "cancel transaction" that means it will eventually get systematically
>> abused. A risk of X% loss on many payments that's easy to systematically
>> abuse is more scary than a rare risk of losing 100% of one occasional
>> payment. It's already possible to execute this form of abuse with opt-in
>> RBF, which may lead to us at some point refusing those payments (even with
>> confirmation) or cumbersome UX to work around it, such as crediting the
>> bitcoin to a custodial account.
>>
>> To compare zeroconf risk with FX risk: I think we've had one incident in
>> 8 years of operation where a user successfully fooled our server to accept
>> a payment that in the end didn't confirm. To successfully fool (non-RBF)
>> zeroconf one needs to have access to mining infrastructure and probability
>> of success is the % of hash rate controlled. This is simply due to the fact
>> that the network currently won't propagate the replacement transaction to
>> the miner, which is what's being discussed here. American call option risk
>> would however be available to 100% of all users, needs nothing beyond the
>> wallet app, and has no cost to the user - only upside.
>>
>> Bitrefill currently processes 1500-2000 onchain payments every day. For
>> us, a world where bitcoin becomes de facto RBF by default, means that we
>> would likely turn off the BIP21 model for onchain payments, instruct
>> Bitcoin users to use Lightning or deposit onchain BTC to a custodial
>> account that we have.
>> This option is however not available for your typical
>> BTCPayServer/CoinGate/Bitpay/IBEX/OpenNode et al. Would be great to hear
>> from other merchants or payment providers how they see this new behavior
>> and how they would counteract it.
>>
>> Currently Lightning is somewhere around 15% of our total bitcoin
>> payments. This is very much not nothing, and all of us here want Lightning
>> to grow, but I think it warrants a serious discussion on whether we want
>> Lightning adoption to go to 100% by means of disabling on-chain commerce.
>> For me personally it would be an easier discussion to have when Lightning
>> is at 80%+ of all bitcoin transactions. Currently far too many bitcoin
>> users simply don't have access to Lightning, and of those that do and hold
>> their own keys Muun is the biggest wallet per our data, not least due to
>> their ease-of-use which is under threat per the OP. It's hard to assess how
>> many users would switch to Lightning in such a scenario, the communication
>> around it would be hard. My intuition says that the majority of the current
>> 85% of bitcoin users that pay onchain would just not use bitcoin anymore,
>> probably shift to an alt. The benefits of Lightning are many and obvious,
>> we don't need to limit onchain to make Lightning more appealing. As an
>> anecdote, we did experiment with defaulting to bech32 addresses some years
>> back. The result was that simply 

Re: [bitcoin-dev] brickchain

2022-10-19 Thread mm-studios via bitcoin-dev
Thanks all for your responses.
so is it a no-go because "reduced settlement speed is a desirable feature"?

I don't know which weighs more in this consideration:
A) not increasing the workload of full nodes, keeping them "less difficult to 
operate" and hence reducing the chance of some of them giving up, which would 
lead to a negative centralization effect. (A bit cumbersome reasoning in my 
opinion, given the competitive nature of PoW itself, which introduces an 
accepted form of centralization by forcing some miners to give up. In that 
case the effect is accepted because the system remains decentralized enough.)
B) not undermining L2 systems like LN.

In any case it is a major no-go reason if there is no intention to speed up 
L1.
Thanks
M

--- Original Message ---
On Wednesday, October 19th, 2022 at 3:24 PM, Erik Aronesty  wrote:

>> currently, a miner produces blocks with a limited capacity of transactions 
>> that ultimately limits the global settlement throughput to a reduced number 
>> of tx/s.
>
> reduced settlement speed is a desirable feature and isn't something we need 
> to fix
>
> the focus should be on layer 2 protocols that allow the ability to hold & 
> transfer, uncommitted transactions as pools / joins, so that layer 1's 
> decentralization and incentives can remain undisturbed
>
> protocols like mweb, for example
>
> On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev 
>  wrote:
>
>> Hi Bitcoin devs,
>> I'd like to share an idea of a method to increase throughput in the bitcoin 
>> network.
>>
>> Currently, a miner produces blocks with a limited capacity of transactions 
>> that ultimately limits the global settlement throughput to a reduced number 
>> of tx/s.
>>
>> Big-blockers proposed the removal of limits, but this came with 
>> undesirable effects that have been widely discussed, and it was rejected.
>>
>> The main feature we wanted to preserve is 'small blocks', which provide 
>> 'better network effects'; I won't focus on those here.
>>
>> The problem with small blocks is that, once a block is filled with 
>> transactions, the rest are kept back in the mempool, waiting for their turn 
>> in future blocks.
>>
>> The following changes in the protocol aim to let all transactions go in the 
>> current block, while keeping the block size small. It requires changes in 
>> the PoW algorithm.
>>
>> Currently, the PoW algorithm consists of finding a valid hash for the block. 
>> Its validity is determined by comparing the numeric value of the block hash 
>> with a protocol-defined difficulty value.
>>
>> Once a miner finds a nonce for the block that satisfies the condition, the 
>> new block becomes valid and can be propagated. All nodes would update their 
>> blockchains with it (assuming no conflict resolution (orphan blocks, ...) 
>> for clarity).
>>
>> This process is meant to happen every 10 minutes on average.
>>
>> With this background information (we all already know) I go on to describe 
>> the idea:
>>
>> Let's allow a miner to include transactions until the block is filled; let's 
>> call this structure (coining a new term) a 'Brick', B0. [brick=block that 
>> doesn't meet the difficulty rule and is filled with tx to its full capacity]
>> Since PoW hashing is continuously active, Brick B0 would have a nonce 
>> corresponding to the minimum numeric value of its hash found until it got 
>> filled.
>>
>> A fully filled brick B0, with a hash that doesn't meet the difficulty rule, 
>> would be broadcast, and nodes would keep it in a separate fork as usual.
>>
>> At this point, instead of discarding transactions, our miner would start 
>> working on a new brick B1, linked with B0 as usual.
>>
>> Nodes would allow incoming regular blocks and bricks with hashes that don't 
>> satisfy the difficulty rule, provided the brick is fully filled with 
>> transactions. Bricks not fully filled would be rejected as invalid to 
>> prevent spam (except if one constitutes the last brick of a brickchain, 
>> explained below).
>>
>> Let's assume that 10 minutes have elapsed and our miner is in a state where 
>> N bricks have been produced. The accumulated PoW can be calculated 
>> mathematically (every brick contains a 'minimum hash found'); when a series 
>> of 'minimum hashes' is computationally equivalent to the network difficulty, 
>> the full 'brickchain' is valid as a Block.
>>
>> This calculus shall be better defined, but I hope that this idea can serve 
>> as a seed to a BIP, or otherwise deemed absurd, which might be possible and 
>> I'd be delighted to discover why a scheme like this wouldn't work.
>>
>> If it finally worked, it could completely flush mempools, keep transactions 
>> fees low and increase throughput without an increase in the block size that 
>> would raise other concerns related to propagation.
>>
>> Thank you.
>> I look forward to your responses.
>>
>> --
>> Marcos Mayorga
>> https://twitter.com/KatlasC
>>
>> ___
>> bitcoin-dev mailing list
>> 

Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread Greg Sanders via bitcoin-dev
Isn't the extreme of this that the merchant tries to lock in gains on the
upswing via CPFP, and users on the downswing, both doing scorched earth,
tossing the delta to fees?

Seems like a MAD situation?

On Wed, Oct 19, 2022 at 11:44 AM Jeremy Rubin via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> If they do this to you, and the delta is substantial, can't you sweep all
> such abusers with a cpfp transaction replacing their package and giving you
> the original txn?
>
> On Wed, Oct 19, 2022, 7:33 AM Sergej Kotliar via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi all,
>>
>> Chiming in on this thread as I feel like the real dangers of RBF as
>> default policy aren't sufficiently elaborated here. It's not only about the
>> zero-conf (I'll get to that) but there is an even bigger danger called the
>> american call option, which risks endangering the entirety of BIP21 "Scan
>> this QR code with your wallet to buy this product" model that I believe
>> we've all come to appreciate. Specifically, in a scenario with high
>> volatility and many transactions in the mempools (which is where RBF would
>> come in handy), a user can make a low-fee transaction and then wait for
>> hours, days or even longer, and see whether BTCUSD moves. If BTCUSD moves
>> up, user can cancel his transaction and make a new - cheaper one. The
>> biggest risk in accepting bitcoin payments is in fact not zeroconf risk
>> (it's actually quite easily managed), it's FX risk as the merchant must
>> commit to a certain BTCUSD rate ahead of time for a purchase. Over time
>> some transactions lose money to FX and others earn money - that evens out
>> in the end. But if there is an _easily accessible in the wallet_ feature to
>> "cancel transaction" that means it will eventually get systematically
>> abused. A risk of X% loss on many payments that's easy to systematically
>> abuse is more scary than a rare risk of losing 100% of one occasional
>> payment. It's already possible to execute this form of abuse with opt-in
>> RBF, which may lead to us at some point refusing those payments (even with
>> confirmation) or cumbersome UX to work around it, such as crediting the
>> bitcoin to a custodial account.
>>
>> To compare zeroconf risk with FX risk: I think we've had one incident in
>> 8 years of operation where a user successfully fooled our server to accept
>> a payment that in the end didn't confirm. To successfully fool (non-RBF)
>> zeroconf one needs to have access to mining infrastructure and probability
>> of success is the % of hash rate controlled. This is simply due to the fact
>> that the network currently won't propagate the replacement transaction to
>> the miner, which is what's being discussed here. American call option risk
>> would however be available to 100% of all users, needs nothing beyond the
>> wallet app, and has no cost to the user - only upside.
>>
>> Bitrefill currently processes 1500-2000 onchain payments every day. For
>> us, a world where bitcoin becomes de facto RBF by default, means that we
>> would likely turn off the BIP21 model for onchain payments, instruct
>> Bitcoin users to use Lightning or deposit onchain BTC to a custodial
>> account that we have.
>> This option is however not available for your typical
>> BTCPayServer/CoinGate/Bitpay/IBEX/OpenNode et al. Would be great to hear
>> from other merchants or payment providers how they see this new behavior
>> and how they would counteract it.
>>
>> Currently Lightning is somewhere around 15% of our total bitcoin
>> payments. This is very much not nothing, and all of us here want Lightning
>> to grow, but I think it warrants a serious discussion on whether we want
>> Lightning adoption to go to 100% by means of disabling on-chain commerce.
>> For me personally it would be an easier discussion to have when Lightning
>> is at 80%+ of all bitcoin transactions. Currently far too many bitcoin
>> users simply don't have access to Lightning, and of those that do and hold
>> their own keys Muun is the biggest wallet per our data, not least due to
>> their ease-of-use which is under threat per the OP. It's hard to assess how
>> many users would switch to Lightning in such a scenario, the communication
>> around it would be hard. My intuition says that the majority of the current
>> 85% of bitcoin users that pay onchain would just not use bitcoin anymore,
>> probably shift to an alt. The benefits of Lightning are many and obvious,
>> we don't need to limit onchain to make Lightning more appealing. As an
>> anecdote, we did experiment with defaulting to bech32 addresses some years
>> back. The result was that simply users of the wallets that weren't able to
>> pay to bech32 didn't complete the purchase, no support ticket or anything,
just "it didn't work 🤷‍♂️" and user moved on. We rolled it back, and later
>> implemented a wallet selector to allow modern wallets to pay to bech32
>> while other wallets can pay to P2SH. This type 

Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread Jeremy Rubin via bitcoin-dev
If they do this to you, and the delta is substantial, can't you sweep all
such abusers with a cpfp transaction replacing their package and giving you
the original txn?

On Wed, Oct 19, 2022, 7:33 AM Sergej Kotliar via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> Chiming in on this thread as I feel like the real dangers of RBF as
> default policy aren't sufficiently elaborated here. It's not only about the
> zero-conf (I'll get to that) but there is an even bigger danger called the
> american call option, which risks endangering the entirety of BIP21 "Scan
> this QR code with your wallet to buy this product" model that I believe
> we've all come to appreciate. Specifically, in a scenario with high
> volatility and many transactions in the mempools (which is where RBF would
> come in handy), a user can make a low-fee transaction and then wait for
> hours, days or even longer, and see whether BTCUSD moves. If BTCUSD moves
> up, user can cancel his transaction and make a new - cheaper one. The
> biggest risk in accepting bitcoin payments is in fact not zeroconf risk
> (it's actually quite easily managed), it's FX risk as the merchant must
> commit to a certain BTCUSD rate ahead of time for a purchase. Over time
> some transactions lose money to FX and others earn money - that evens out
> in the end. But if there is an _easily accessible in the wallet_ feature to
> "cancel transaction" that means it will eventually get systematically
> abused. A risk of X% loss on many payments that's easy to systematically
> abuse is more scary than a rare risk of losing 100% of one occasional
> payment. It's already possible to execute this form of abuse with opt-in
> RBF, which may lead to us at some point refusing those payments (even with
> confirmation) or cumbersome UX to work around it, such as crediting the
> bitcoin to a custodial account.
>
> To compare zeroconf risk with FX risk: I think we've had one incident in 8
> years of operation where a user successfully fooled our server to accept a
> payment that in the end didn't confirm. To successfully fool (non-RBF)
> zeroconf one needs to have access to mining infrastructure and probability
> of success is the % of hash rate controlled. This is simply due to the fact
> that the network currently won't propagate the replacement transaction to
> the miner, which is what's being discussed here. American call option risk
> would however be available to 100% of all users, needs nothing beyond the
> wallet app, and has no cost to the user - only upside.
>
> Bitrefill currently processes 1500-2000 onchain payments every day. For
> us, a world where bitcoin becomes de facto RBF by default, means that we
> would likely turn off the BIP21 model for onchain payments, instruct
> Bitcoin users to use Lightning or deposit onchain BTC to a custodial
> account that we have.
> This option is however not available for your typical
> BTCPayServer/CoinGate/Bitpay/IBEX/OpenNode et al. Would be great to hear
> from other merchants or payment providers how they see this new behavior
> and how they would counteract it.
>
> Currently Lightning is somewhere around 15% of our total bitcoin payments.
> This is very much not nothing, and all of us here want Lightning to grow,
> but I think it warrants a serious discussion on whether we want Lightning
> adoption to go to 100% by means of disabling on-chain commerce. For me
> personally it would be an easier discussion to have when Lightning is at
> 80%+ of all bitcoin transactions. Currently far too many bitcoin users
> simply don't have access to Lightning, and of those that do and hold their
> own keys Muun is the biggest wallet per our data, not least due to their
> ease-of-use which is under threat per the OP. It's hard to assess how many
> users would switch to Lightning in such a scenario, the communication
> around it would be hard. My intuition says that the majority of the current
> 85% of bitcoin users that pay onchain would just not use bitcoin anymore,
> probably shift to an alt. The benefits of Lightning are many and obvious,
> we don't need to limit onchain to make Lightning more appealing. As an
> anecdote, we did experiment with defaulting to bech32 addresses some years
> back. The result was that simply users of the wallets that weren't able to
> pay to bech32 didn't complete the purchase, no support ticket or anything,
> just "it didn't work 🤷‍♂️" and user moved on. We rolled it back, and later
> implemented a wallet selector to allow modern wallets to pay to bech32
> while other wallets can pay to P2SH. This type of thing is clunky, and
> requires a certain level of scale to be able to do, we certainly wouldn't
> have had the manpower for that when we were starting out. This is why I'm
> cautious about introducing more such clunkiness vectors as they are
> centralizing factors.
>
> I'm well aware of the reason for this policy being suggested and the
> potential pinning attack vector for LN and other 

Re: [bitcoin-dev] Ephemeral Anchors: Fixing V3 Package RBF againstpackage limit pinning

2022-10-19 Thread James O'Beirne via bitcoin-dev
I'm also very happy to see this proposal, since it gets us closer to having
a mechanism that allows the contribution to feerate in an "unauthenticated"
way, which seems to be a very helpful feature for vaults and other
contracting protocols.

One possible advantage of the sponsors interface -- and I'm curious for
your input here Greg -- is that with sponsors, assuming we relaxed the "one
sponsor per sponsoree" constraint, multiple uncoordinated parties can
collaboratively bump a tx's feerate. A simple example would be a batch
withdrawal from an exchange could be created with a low feerate, and then
multiple users with a vested interest of expedited confirmation could all
"chip in" to raise the feerate with multiple sponsor transactions.

Having a single ephemeral output seems to create a situation where a single
UTXO has to shoulder the burden of CPFPing a package. Is there some way we
could (possibly later) amend the ephemeral anchor interface to allow for
this kind of collaborative sponsoring? Could you maybe see "chained"
ephemeral anchors that would allow this?
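To make the arithmetic behind "chipping in" concrete, here is a minimal sketch (illustrative names and numbers, not a real mempool API) of how several uncoordinated anchor-spending or sponsor children would lift the effective feerate a miner sees for the whole package:

```python
# Hypothetical illustration: effective package feerate when multiple
# uncoordinated parties CPFP the same low-fee parent (e.g. a batch
# exchange withdrawal). All values are made up for the example.
from dataclasses import dataclass

@dataclass
class Tx:
    fee: int     # satoshis
    vsize: int   # vbytes

def package_feerate(parent: Tx, children: list[Tx]) -> float:
    """Feerate (sat/vB) a miner sees when mining parent + children together."""
    total_fee = parent.fee + sum(c.fee for c in children)
    total_vsize = parent.vsize + sum(c.vsize for c in children)
    return total_fee / total_vsize

batch = Tx(fee=1_000, vsize=5_000)        # low-feerate batch withdrawal
sponsors = [Tx(fee=20_000, vsize=200),    # three users each "chip in"
            Tx(fee=15_000, vsize=200),
            Tx(fee=10_000, vsize=200)]

print(round(package_feerate(batch, []), 2))        # 0.2 sat/vB alone
print(round(package_feerate(batch, sponsors), 2))  # 8.21 sat/vB combined
```

This is exactly the property a single ephemeral anchor makes awkward: only one UTXO exists to attach the children to, so the sponsors must coordinate or chain off one another.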


On Tue, Oct 18, 2022 at 12:52 PM Jeremy Rubin via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Excellent proposal and I agree it does capture much of the spirit of
> sponsors w.r.t. how they might be used for V3 protocols.
>
> The only drawbacks I see is they don't work for lower tx version
> contracts, so there's still something to be desired there, and that the
> requirement to sweep the output must be incentive compatible for the miner,
> or else they won't enforce it (pass the buck onto the future bitcoiners).
> The Ephemeral UTXO concept can be a consensus rule (see
> https://rubin.io/public/pdfs/multi-txn-contracts.pdf "Intermediate UTXO")
> we add later on in lieu of managing them by incentive, so maybe it's a
> cleanup one can punt.
>
> One question I have is if V3 is designed for lightning, and this is
> designed for lightning, is there any sense in requiring these outputs for
> v3? That might help with e.g. anonymity set, as well as potentially keep
> the v3 surface smaller.
>
> On Tue, Oct 18, 2022 at 11:51 AM Greg Sanders via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> > does that effectively mark output B as unspendable once the child gets
>> confirmed?
>>
>> Not at all. It's a normal spend like before, since the parent has been
>> confirmed. It's completely unrestricted, not being bound to any
>> V3/ephemeral anchor restrictions on size, version, etc.
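One possible reading of the rule being described (an editor's sketch in pseudologic, not Bitcoin Core code): at relay time the ephemeral anchor must be spent by a child in the same package, while the parent's other outputs remain ordinary spends once the parent confirms.

```python
# Sketch of a package-acceptance check for ephemeral anchors, under the
# assumption stated in the thread: every ephemeral anchor output of the
# parent must be spent by the child; other outputs (like B) are untouched.
from dataclasses import dataclass, field

@dataclass
class Output:
    ephemeral_anchor: bool = False

@dataclass
class Tx:
    outputs: list = field(default_factory=list)
    spends: set = field(default_factory=set)  # indices of parent outputs spent

def package_ok(parent: Tx, child: Tx) -> bool:
    """Accept parent+child only if all ephemeral anchor outputs are spent."""
    anchors = {i for i, o in enumerate(parent.outputs) if o.ephemeral_anchor}
    return anchors <= child.spends

# Parent has outputs A (0), B (1), and the OP_TRUE anchor (2).
parent = Tx(outputs=[Output(), Output(), Output(ephemeral_anchor=True)])
child = Tx(spends={0, 2})          # spends A and the anchor, leaves B alone
print(package_ok(parent, child))   # True: B stays spendable after confirmation
```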
>>
>> On Tue, Oct 18, 2022 at 11:47 AM Arik Sosman via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hi Greg,
>>>
>>> Thank you very much for sharing your proposal!
>>>
>>> I think there's one thing about the second part of your proposal that
>>> I'm missing. Specifically, assuming the scenario of a v3 transaction with
>>> three outputs, A, B, and the ephemeral anchor OP_TRUE. If a child
>>> transaction spends A and OP_TRUE, does that effectively mark output B as
>>> unspendable once the child gets confirmed? If so, isn't the implication
>>> therefore that to safely spend a transaction with an ephemeral anchor, all
>>> outputs must be spent? Thanks!
>>>
>>> Best,
>>> Arik
>>>
>>> On Tue, Oct 18, 2022, at 6:52 AM, Greg Sanders via bitcoin-dev wrote:
>>>
>>> Hello Everyone,
>>>
>>> Following up on the "V3 Transaction" discussion here
>>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-September/020937.html
>>> , I would like to elaborate a bit further on some potential follow-on work
>>> that would make pinning severely constrained in many setups.
>>>
>>> V3 transactions may solve bip125 rule#3 and rule#5 pinning attacks under
>>> some constraints[0]. This means that when a replacement is to be made and
>>> propagated, it costs the expected amount of fees to do so. This is a great
>>> start. What's left in this subset of pinning is *package limit* pinning. In
>>> other words, a fee-paying transaction cannot enter the mempool due to the
>>> existing mempool package it is being added to already being too large in
>>> count or vsize.
>>>
>>> Zooming into the V3 simplified scenario for sake of discussion, though
>>> this problem exists in general today:
>>>
>>> V3 transactions restrict the package limit of a V3 package to one parent
>>> and one child. If the parent transaction includes two outputs which can be
>>> immediately spent by separate parties, this allows one party to disallow a
>>> spend from the other. In Gloria's proposal for ln-penalty, this is worked
>>> around by reducing the number of anchors per commitment transaction to 1,
>>> and each version of the commitment transaction has a unique party's key on
>>> it. The honest participant can spend their version with their anchor and
>>> package RBF the other commitment transaction safely.
>>>
>>> What if there's only one version of the commitment transaction, such as
>>> in other protocols like duplex payment channels, eltoo? What about multi
>>> 

Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread Erik Aronesty via bitcoin-dev
> Currently Lightning is somewhere around 15% of our total bitcoin
payments. This is very much not nothing, and all of us here want Lightning
to grow, but I think it warrants a serious discussion on whether we want
Lightning adoption to go to 100% by means of disabling on-chain commerce.

Is this about disabling "on-chain instant commerce"?

 - Waiting for confirmation on-chain before shipping a product won't
change; normally it's 15 minutes or so. This doesn't change that.

 - An easy way to cancel/rbf a transaction doesn't exist - like you said,
there's no UX for this now, and I don't anticipate one being broadly used
except by inter-exchange transfers, etc.

So what does this change?

 - In the rare event that an RBF transaction is received whose fee
level means confirmation will be slow, the merchant will have to wait
very long for at least 1 confirmation. The merchant should alert the user
that the transaction may take longer than the BTC FX rate guarantee window,
and may require additional funds if FX rates change.

 - Users with wallets that support RBF can now be encouraged to accelerate
the tx, with help and advice depending on their wallet, in order to lock in
the FX rates

 - 0-conf is still a viable strategy for releasing an order, as long as the
fee is high enough that it's very likely to be included in the next block.
 More fee analysis is needed to validate 0-conf and mitigate risks, but now
it is, at least, more "honest" about the true risks.
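The acceptance policy being suggested here can be sketched as a simple gate (illustrative logic and thresholds only; no real merchant uses exactly this code):

```python
# Hypothetical merchant-side sketch: release an order on zero-conf only when
# the transaction does not signal RBF and pays comfortably above the
# estimated next-block feerate, so next-block inclusion is very likely.
def accept_zero_conf(tx_feerate: float, next_block_feerate: float,
                     signals_rbf: bool, margin: float = 1.5) -> bool:
    if signals_rbf:
        return False  # replaceable payment: wait for a confirmation instead
    return tx_feerate >= margin * next_block_feerate

print(accept_zero_conf(30.0, 15.0, signals_rbf=False))  # True
print(accept_zero_conf(30.0, 25.0, signals_rbf=False))  # False: too close
print(accept_zero_conf(30.0, 15.0, signals_rbf=True))   # False: RBF signaled
```

The `margin` parameter is an invented knob standing in for the "more fee analysis" the author calls for.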
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread Sergej Kotliar via bitcoin-dev
Hi all,

Chiming in on this thread as I feel like the real dangers of RBF as default
policy aren't sufficiently elaborated here. It's not only about the
zero-conf (I'll get to that) but there is an even bigger danger called the
american call option, which risks endangering the entirety of BIP21 "Scan
this QR code with your wallet to buy this product" model that I believe
we've all come to appreciate. Specifically, in a scenario with high
volatility and many transactions in the mempools (which is where RBF would
come in handy), a user can make a low-fee transaction and then wait for
hours, days or even longer, and see whether BTCUSD moves. If BTCUSD moves
up, user can cancel his transaction and make a new - cheaper one. The
biggest risk in accepting bitcoin payments is in fact not zeroconf risk
(it's actually quite easily managed), it's FX risk as the merchant must
commit to a certain BTCUSD rate ahead of time for a purchase. Over time
some transactions lose money to FX and others earn money - that evens out
in the end. But if there is an _easily accessible in the wallet_ feature to
"cancel transaction" that means it will eventually get systematically
abused. A risk of X% loss on many payments that's easy to systematically
abuse is more scary than a rare risk of losing 100% of one occasional
payment. It's already possible to execute this form of abuse with opt-in
RBF, which may lead to us at some point refusing those payments (even with
confirmation) or cumbersome UX to work around it, such as crediting the
bitcoin to a custodial account.

To compare zeroconf risk with FX risk: I think we've had one incident in 8
years of operation where a user successfully fooled our server to accept a
payment that in the end didn't confirm. To successfully fool (non-RBF)
zeroconf one needs to have access to mining infrastructure and probability
of success is the % of hash rate controlled. This is simply due to the fact
that the network currently won't propagate the replacement transaction to
the miner, which is what's being discussed here. American call option risk
would however be available to 100% of all users, needs nothing beyond the
wallet app, and has no cost to the user - only upside.
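A back-of-envelope way to see why this is a free call option (editor's simulation with assumed volatility, not data from the thread): the merchant locks a BTCUSD rate, the abuser waits, and cancels via RBF exactly when BTCUSD has risen, repurchasing at the better rate. The merchant's expected loss is the payoff of an at-the-money call they gave away for free.

```python
# Monte Carlo of the merchant's expected loss per order value, assuming a
# lognormal BTCUSD move over the waiting window. sigma_daily and days are
# illustrative assumptions, not measured figures.
import math
import random

def expected_abuse_loss(sigma_daily=0.04, days=2.0,
                        trials=100_000, seed=7) -> float:
    random.seed(seed)
    loss = 0.0
    for _ in range(trials):
        move = math.exp(random.gauss(0.0, sigma_daily * math.sqrt(days))) - 1
        loss += max(move, 0.0)  # abuser only cancels when the price rose
    return loss / trials        # average fraction of order value lost

print(f"{expected_abuse_loss():.2%}")  # on the order of a couple of percent
```

Even a loss of a couple of percent per order, available to every wallet user at zero cost, dominates the rare total loss from a zero-conf double spend, which is the comparison being made above.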

Bitrefill currently processes 1500-2000 onchain payments every day. For us,
a world where bitcoin becomes de facto RBF by default, means that we would
likely turn off the BIP21 model for onchain payments, instruct Bitcoin
users to use Lightning or deposit onchain BTC to a custodial account that
we have.
This option is however not available for your typical
BTCPayServer/CoinGate/Bitpay/IBEX/OpenNode et al. Would be great to hear
from other merchants or payment providers how they see this new behavior
and how they would counteract it.

Currently Lightning is somewhere around 15% of our total bitcoin payments.
This is very much not nothing, and all of us here want Lightning to grow,
but I think it warrants a serious discussion on whether we want Lightning
adoption to go to 100% by means of disabling on-chain commerce. For me
personally it would be an easier discussion to have when Lightning is at
80%+ of all bitcoin transactions. Currently far too many bitcoin users
simply don't have access to Lightning, and of those that do and hold their
own keys Muun is the biggest wallet per our data, not least due to their
ease-of-use which is under threat per the OP. It's hard to assess how many
users would switch to Lightning in such a scenario, the communication
around it would be hard. My intuition says that the majority of the current
85% of bitcoin users that pay onchain would just not use bitcoin anymore,
probably shift to an alt. The benefits of Lightning are many and obvious,
we don't need to limit onchain to make Lightning more appealing. As an
anecdote, we did experiment with defaulting to bech32 addresses some years
back. The result was that simply users of the wallets that weren't able to
pay to bech32 didn't complete the purchase, no support ticket or anything,
just "it didn't work 🤷‍♂️" and user moved on. We rolled it back, and later
implemented a wallet selector to allow modern wallets to pay to bech32
while other wallets can pay to P2SH. This type of thing is clunky, and
requires a certain level of scale to be able to do, we certainly wouldn't
have had the manpower for that when we were starting out. This is why I'm
cautious about introducing more such clunkiness vectors as they are
centralizing factors.

I'm well aware of the reason for this policy being suggested and the
potential pinning attack vector for LN and other smart contracts, but I
think these two risks/costs need to be weighed against each other first and
thoroughly discussed because the costs are non-trivial on both sides.

Sidenote: On the efficacy of RBF to "unstuck" stuck transactions
After interacting with users during high-fee periods I've come to not
appreciate RBF as a solution to that issue. Most users (80% or so) simply
don't have access to that functionality, 

Re: [bitcoin-dev] brickchain

2022-10-19 Thread Erik Aronesty via bitcoin-dev
> currently, a miner produces blocks with a limited capacity of transactions
that ultimately limits the global settlement throughput to a reduced number
of tx/s.

reduced settlement speed is a desirable feature and isn't something we need
to fix

the focus should be on layer 2 protocols that allow the ability to hold &
transfer, uncommitted transactions as pools / joins, so that layer 1's
decentralization and incentives can remain undisturbed

protocols like mweb, for example




On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Bitcoin devs,
> I'd like to share an idea of a method to increase throughput in the
> bitcoin network.
>
> Currently, a miner produces blocks with a limited capacity of transactions
> that ultimately limits the global settlement throughput to a reduced number
> of tx/s.
>
> Big-blockers proposed the removal of limits, but this came with
> undesirable effects that have been widely discussed, and it was rejected.
>
> The main feature we wanted to preserve is 'small blocks', which provide
> 'better network effects'; I won't focus on those here.
>
> The problem with small blocks is that, once a block is filled with
> transactions, the rest are kept back in the mempool, waiting for their turn
> in future blocks.
>
> The following changes in the protocol aim to let all transactions go in
> the current block, while keeping the block size small. It requires changes
> in the PoW algorithm.
>
> Currently, the PoW algorithm consists of finding a valid hash for the
> block. Its validity is determined by comparing the numeric value of the
> block hash with a protocol-defined difficulty value.
>
> Once a miner finds a nonce for the block that satisfies the condition, the
> new block becomes valid and can be propagated. All nodes would update their
> blockchains with it (assuming no conflict resolution (orphan blocks, ...)
> for clarity).
>
> This process is meant to happen every 10 minutes on average.
>
> With this background information (we all already know) I go on to describe
> the idea:
>
> Let's allow a miner to include transactions until the block is filled;
> let's call this structure (coining a new term) a 'Brick', B0. [brick=block
> that doesn't meet the difficulty rule and is filled with tx to its full
> capacity]
> Since PoW hashing is continuously active, Brick B0 would have a nonce
> corresponding to the minimum numeric value of its hash found until it got
> filled.
>
> A fully filled brick B0, with a hash that doesn't meet the difficulty rule,
> would be broadcast, and nodes would keep it in a separate fork as usual.
>
> At this point, instead of discarding transactions, our miner would start
> working on a new brick B1, linked with B0 as usual.
>
> Nodes would allow incoming regular blocks and bricks with hashes that
> don't satisfy the difficulty rule, provided the brick is fully filled with
> transactions. Bricks not fully filled would be rejected as invalid to
> prevent spam (except if one constitutes the last brick of a brickchain,
> explained below).
>
> Let's assume that 10 minutes have elapsed and our miner is in a state
> where N bricks have been produced. The accumulated PoW can be calculated
> mathematically (every brick contains a 'minimum hash found'); when a series
> of 'minimum hashes' is computationally equivalent to the network difficulty,
> the full 'brickchain' is valid as a Block.
>
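One way the "computationally equivalent" rule could be made precise (an editor's sketch of the quoted idea, with toy constants; the proposal itself leaves the calculus undefined) is to credit each brick with the expected work of its best hash and compare the sum against the work implied by the difficulty target:

```python
# Toy model of accumulated brick work: a hash h is "worth" the expected
# number of attempts to find it, 2^256 / (h + 1). A brickchain substitutes
# for one block when its bricks' combined work reaches the work of a single
# target-meeting hash. Constants are illustrative, not real difficulty.
MAX_HASH = 2**256

def work(best_hash: int) -> float:
    return MAX_HASH / (best_hash + 1)

def brickchain_is_valid_block(brick_hashes: list[int], target: int) -> bool:
    required = MAX_HASH / (target + 1)  # work of one hash meeting the target
    return sum(work(h) for h in brick_hashes) >= required

target = 2**240                      # toy difficulty target
bricks = [2**241, 2**241, 2**242]    # best hashes; none meets target alone
print(brickchain_is_valid_block(bricks, target))  # True: combined work suffices
```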
> This calculus shall be better defined, but I hope that this idea can serve
> as a seed to a BIP, or otherwise deemed absurd, which might be possible and
> I'd be delighted to discover why a scheme like this wouldn't work.
>
> If it finally worked, it could completely flush mempools, keep
> transaction fees low and increase throughput without an increase in the
> block size, which would raise other concerns related to propagation.
>
> Thank you.
> I look forward to your responses.
>
> --
> Marcos Mayorga
> https://twitter.com/KatlasC
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] brickchain

2022-10-19 Thread Bryan Bishop via bitcoin-dev
Hi,

On Wed, Oct 19, 2022 at 6:34 AM mm-studios via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Fully filled brick B0, with a hash that doesn't meet the difficulty rule,
> would be broadcast and nodes would keep it in a separate fork as usual.
>

Check out the previous "weak block" proposals:
https://diyhpl.us/~bryan/irc/bitcoin/weak-blocks-links.2016-05-09.txt

- Bryan
https://twitter.com/kanzure


Re: [bitcoin-dev] brickchain

2022-10-19 Thread angus via bitcoin-dev


> Let's allow a miner to include transactions until the block is filled; let's 
> call this structure (coining a new term, 'Brick') B0. [brick=block that 
> doesn't meet the difficulty rule and is filled with transactions to its full 
> capacity]
> Since PoW hashing is continuously active, Brick B0 would have a nonce 
> corresponding to the minimum numeric hash value found by the time it got 
> filled.


So, if I'm understanding right, this amounts to "reduce difficulty required for 
a block ('brick') to be valid if the mempool contains more than 1 block's worth 
of transactions so we get transactions confirmed faster" using 'bricks' as 
short-lived sidechains that get merged into blocks?

This would have the same fundamental problem as just making the max blocksize 
bigger - it increases the rate of growth of storage required for a full node, 
because you're allowing blocks/bricks to be created faster, so there will be 
more confirmed transactions to store in a given time window than under current 
Bitcoin rules.

Bitcoin doesn't take the size of the mempool into account when adjusting the 
difficulty because the time-between-blocks is 'more important' than avoiding 
congestion where transactions take ages to get into a block. The fee mechanism 
in part allows users to decide how urgently they want their tx to get 
confirmed, and high fees when there is congestion also disincentivises others 
from transacting at all, which helps arrest mempool growth.

I'd imagine we'd also see a 'highway widening' effect with this kind of 
proposal - if you increase the tx volume Bitcoin can settle in a given time, 
that will quickly be used up by more people transacting until we're back at a 
congested state again.

> Fully filled brick B0, with a hash that doesn't meet the difficulty rule, 
> would be broadcast and nodes would keep it in a separate fork as usual.


How do we know whether the hash the miner found for a brick was their 'best 
effort' and they're not just being lazy? There's an element of luck in the best 
hash a miner can find: sometimes it takes a long time to meet the difficulty 
requirement and sometimes it happens almost instantly.

How would we know how 'busy' the mempool was at the time a brick from months or 
years ago was mined?

Nodes have to be able to run through the entire history of the blockchain and 
check everything is valid. They have to do this using only the previous blocks 
they've already validated - they won't have historical snapshots of the mempool 
(they'll build and mutate a UTXO set, but that's different). Transactions don't 
contain a 'created-at' time that you could compare to the block's creation time 
(and if they did, you probably couldn't trust it).

With the current system, Nodes can calculate what the difficulty should be for 
every block based on those previous blocks' times and difficulties - but how 
would you know an old brick was valid if its difficulty was low but at the time 
the mempool was busy, vs. getting a fraudulent brick that is actually invalid 
because there isn't enough work in it? You can't solve this by adding some 
mempoolsize field to bricks, as you'd have to blindly trust miners not to lie 
about them.
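For contrast, the recomputation that makes old blocks auditable today can be sketched as follows. This is a simplified illustration, not consensus code: Bitcoin's actual implementation operates on the compact nBits encoding and measures the timespan over the prior retarget period with a well-known off-by-one.

```python
TARGET_SPACING = 600        # seconds per block (10 minutes)
RETARGET_INTERVAL = 2016    # blocks per difficulty period

def next_target(old_target: int, first_block_time: int, last_block_time: int) -> int:
    # Every node derives the new target purely from header data it has
    # already validated: scale the old target by how long the period
    # actually took, clamped to a factor of 4 either way (as Bitcoin does).
    expected = TARGET_SPACING * RETARGET_INTERVAL
    actual = last_block_time - first_block_time
    actual = max(expected // 4, min(actual, expected * 4))
    return old_target * actual // expected

t = 2**220
print(next_target(t, 0, 2016 * 600) == t)        # on-schedule period: unchanged
print(next_target(t, 0, 2016 * 300) == t // 2)   # blocks twice as fast: harder
```

Nothing in this derivation references the mempool, which is why a mempool-dependent brick rule has no analogous audit trail.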

If we can't be (fairly) certain that a miner put a minimum amount of work into 
finding a hash, then you lose all the strengths of PoW.

If you weaken the difficulty requirement (which exists so that mining blocks is 
hard, making it very hard to intentionally fork the chain, re-mine previous 
blocks, overtake the other fork, and get the network to re-org onto your 
chain), then there's no proof-of-work undergirding consensus on the ledger's 
state.

Secondly, where does the block reward go? Do brick miners get a fraction of the 
reward proportionate to the fraction of the difficulty they got to? Later when 
bricks become part of a block, who gets the block reward for that complete 
block? Who gets the fees? No miner is going to bother mining a 
merge-bricks-into-block block if the reward isn't the same or better than just 
mining a regular block, but each miner of the bricks in it would also want a 
reward. But, we can't give them both a block reward as that'd increase 
Bitcoin's issuance rate, which might be the only thing people are more strongly 
opposed to than increasing the blocksize! xD

> At this point, instead of discarding transactions, our miner would start 
> working on a new brick B1, linked with B0 as usual.
> 

> Nodes would allow incoming regular blocks and bricks with hashes that don't 
> satisfy the difficulty rule, provided the brick is completely filled with 
> transactions. Bricks not fully filled would be rejected as invalid to prevent 
> spam (except if one constitutes the last brick of a brickchain, explained below).
> 

> Let's assume that 10 minutes have elapsed and our miner is in a state where N 
> bricks have been produced and the accumulated PoW calculated using 
> mathematics (every brick contains a 'minimum hash found', when a 

Re: [bitcoin-dev] Ephemeral Anchors: Fixing V3 Package RBF against package limit pinning

2022-10-19 Thread Greg Sanders via bitcoin-dev
> IIRC, here I think we also need _package relay_ in strict addition of
> _package RBF_,

Yes, sorry if that wasn't clear. Package Relay -> Package RBF -> V3 ->
Ephemeral Anchors

> If we allow non-zero value in ephemeral outputs, I think we're slightly
> modifying the incentives games of the channels counterparties, in the sense
> if you have a link Alice-Bob, Bob could circular loop a bunch of dust
> offered HTLC deduced from Alice balance and committed as fees in the
> ephemeral output value, then break the channel on-chain to pocket in the
> trimmed value sum (in the limit of your Lightning implementation dust
> exposure). Note, this is already possible today if your counterparty is a
> miner however iiuc the proposal, here we're lowering the bar.

Maybe the 0-fee parent requirement creates too much downstream protocol
complexity. Perhaps each node software can choose its own strategy for
removing the parent when the child is evicted. For example, a node software
could completely ignore the parent tx fee in the presence of an ephemeral
anchor. In other words, the trimmed value can go to fee, but the fee is
effectively ignored from mempool inclusion standpoint.

We already toss things with dust even though it's "incentive incompatible";
it's no worse?

As an entertaining aside, h/t to AJ who found this old thread that proposed
an OP_TRUE, 0-fee parent idea, but 4 years behind in our understanding of
pinning. All the usual suspects chiming in:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-May/015931.html

Great minds, etc.

Greg

On Tue, Oct 18, 2022 at 8:33 PM Antoine Riard 
wrote:

> Hi Greg,
>
> Thanks for proposing forward the "ephemeral anchors" policy change.
>
> > In Gloria's proposal for ln-penalty, this is worked
> > around by reducing the number of anchors per commitment transaction to 1,
> > and each version of the commitment transaction has a unique party's key
> on
> > it. The honest participant can spend their version with their anchor and
> > package RBF the other commitment transaction safely.
>
> IIRC, here I think we also need _package relay_ in strict addition of
> _package RBF_, otherwise if your Lightning transactions are still relayed
> and accepted one by one, your version of the commitment transaction won't
> succeed in replacing the other counterparty's commitments sleeping in
> network mempools. The presence of a remote anchor output on the
> counterparty commitment still offers an ability to fee-bump, albeit in
> practice more a lucky shot as you might have partitioned network mempools
> between your local commitment and the remote commitment disputing the spend
> of the same funding UTXO.
>
> > 1) Expand a carveout for "sibling eviction", where if the new child is
> > paying "enough" to bump spends from the same parent, it knocks its
> sibling
> > out of the mempool and takes the one child slot. This would solve it, but
> > is a new eviction paradigm that would need to be carefully worked
> through.
>
> Note, I wonder about the robustness of such a "sibling eviction" mechanism
> in the context of multi-party constructions. E.g., a batching payout, where
> the participants are competing with each other in a blind way, as they do
> want their CPFPs paying back to them to confirm first, enforcing their
> individual liquidity preferences. I would think it might artificially lead
> the participants to overbid far beyond the top mempool block fees.
>
> >  If we allow non-zero value in ephemeral outputs, does this open up a MEV
> >  we are worried about? Wallets should toss all the value directly to
> fees,
> >  and add their own additional fees on top, otherwise miners have
> incentive
> >  to make the smallest utxo burn transaction to claim those funds. They
> just
> >  confirmed your parent transaction anyways, so do we care?
>
> If we allow non-zero value in ephemeral outputs, I think we're slightly
> modifying the incentives games of the channels counterparties, in the sense
> if you have a link Alice-Bob, Bob could circular loop a bunch of dust
> offered HTLC deduced from Alice balance and committed as fees in the
> ephemeral output value, then break the channel on-chain to pocket in the
> trimmed value sum (in the limit of your Lightning implementation dust
> exposure). Note, this is already possible today if your counterparty is a
> miner however iiuc the proposal, here we're lowering the bar.
>
> >  SIGHASH_GROUP like constructs would allow uncommitted ephemeral anchors
> >  to be added at spend time, depending on spending requirements.
> >  SIGHASH_SINGLE already allows this.
>
> Note, with SIGHASH_GROUP, you're still allowed to aggregate in a single
> bundle multiple ln-penalty commitments or eltoo settlement transactions,
> with only one fee-bumping output. It's a cool space performance trick, but
> a) I think this is still more a whiteboard idea than a sound proposal and
> b) sounds more a long-term, low-hanging fruit optimization of blockspace
> consumption.
>
> Best,
> 

[bitcoin-dev] brickchain

2022-10-19 Thread mm-studios via bitcoin-dev
Hi Bitcoin devs,
I'd like to share an idea of a method to increase throughput in the bitcoin 
network.

Currently, a miner produces blocks with a limited capacity of transactions, 
which ultimately limits the global settlement throughput to a reduced number of tx/s.

Big-blockers proposed removing the limits, but this came with undesirable 
effects that have been widely discussed, and it was rejected.

The main feature we wanted to preserve is 'small blocks', providing 'better 
network effects'; I won't focus on those here.

The problem with small blocks is that, once a block is filled with 
transactions, the rest are kept back in the mempool, waiting their turn in 
future blocks.

The following changes in the protocol aim to let all transactions go in the 
current block, while keeping the block size small. It requires changes in the 
PoW algorithm.

Currently, the PoW algorithm consists of finding a valid hash for the block. 
Its validity is determined by comparing the numeric value of the block hash 
with a protocol-defined difficulty target.

Once a miner finds a nonce for the block that satisfies the condition, the new 
block becomes valid and can be propagated. All nodes would update their 
blockchains with it (assuming no conflict resolution (orphan blocks, ...), for 
clarity).

This process is meant to happen every 10 minutes on average.
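The validity condition in this background description boils down to a numeric comparison. A minimal sketch, illustrative only: real consensus code works with the compact nBits target encoding, and the header layout is fixed at 80 bytes.

```python
import hashlib

def block_hash(header_bytes: bytes) -> int:
    # Bitcoin hashes the 80-byte header with double SHA-256 and
    # interprets the digest as a 256-bit little-endian integer.
    digest = hashlib.sha256(hashlib.sha256(header_bytes).digest()).digest()
    return int.from_bytes(digest, "little")

def meets_difficulty(header_bytes: bytes, target: int) -> bool:
    # The header is valid proof-of-work iff its hash, as a number,
    # is at most the target; a lower target means a harder puzzle.
    return block_hash(header_bytes) <= target

header = b"\x00" * 80                         # placeholder header bytes
print(meets_difficulty(header, 2**256 - 1))   # trivial target: True
```

A miner's search is then just iterating a nonce inside the header bytes until `meets_difficulty` returns true.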

With this background information (we all already know) I go on to describe the 
idea:

Let's allow a miner to include transactions until the block is filled; let's 
call this structure (coining a new term, 'Brick') B0. [brick=block that doesn't 
meet the difficulty rule and is filled with transactions to its full capacity]
Since PoW hashing is continuously active, Brick B0 would have a nonce 
corresponding to the minimum numeric hash value found by the time it got filled.

Fully filled brick B0, with a hash that doesn't meet the difficulty rule, would 
be broadcast and nodes would keep it in a separate fork as usual.

At this point, instead of discarding transactions, our miner would start 
working on a new brick B1, linked with B0 as usual.

Nodes would allow incoming regular blocks and bricks with hashes that don't 
satisfy the difficulty rule, provided the brick is completely filled with 
transactions. Bricks not fully filled would be rejected as invalid to prevent 
spam (except if one constitutes the last brick of a brickchain, explained below).

Let's assume that 10 minutes have elapsed and our miner is in a state where N 
bricks have been produced. The accumulated PoW can be calculated mathematically: 
every brick contains a 'minimum hash found', and when a series of 'minimum 
hashes' is computationally equivalent to the network difficulty, the full 
'brickchain' is valid as a Block.
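One way this accumulation might be formalized (my sketch, not part of the proposal): credit each brick with the expected number of hash attempts behind its best hash, mirroring Bitcoin's chainwork formula, and accept the brickchain once the sum reaches the work implied by the network target.

```python
def expected_work(best_hash: int) -> int:
    # Expected number of hash attempts needed to find a hash <= best_hash,
    # mirroring Bitcoin's chainwork estimate: work = 2**256 // (target + 1).
    return 2**256 // (best_hash + 1)

def brickchain_is_valid_block(brick_best_hashes: list, network_target: int) -> bool:
    # Accept the brickchain once the summed per-brick work reaches the
    # work that a single block meeting the network target represents.
    accumulated = sum(expected_work(h) for h in brick_best_hashes)
    return accumulated >= expected_work(network_target)

# Toy numbers: each brick's best hash carries 1/4 of the required work.
target = 2**220 - 1          # hypothetical network target
brick_hash = 2**222 - 1      # per-brick 'minimum hash found'
print(brickchain_is_valid_block([brick_hash] * 4, target))  # True
print(brickchain_is_valid_block([brick_hash] * 3, target))  # False
```

One subtlety: summing expected work over many easy bricks is not statistically identical to one hash meeting the full target, which is presumably part of the calculus still to be defined.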

This calculus remains to be better defined, but I hope that this idea can serve 
as the seed of a BIP, or else be deemed absurd, which might well be the case; 
I'd be delighted to discover why a scheme like this wouldn't work.

If it finally worked, it could completely flush mempools, keep transaction 
fees low and increase throughput without an increase in the block size, which 
would raise other concerns related to propagation.

Thank you.
I look forward to your responses.

--
Marcos Mayorga
https://twitter.com/KatlasC


Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread alicexbt via bitcoin-dev
Hi aj,

> I mean, I guess I can understand wanting to reduce that responsibility
> for maintainers of the github repo, even if for no other reason than to
> avoid frivolous lawsuits, but where do you expect people to find better
> advice about what things are a good/bad idea if core devs as a whole
> are avoiding that responsibility?

Bitcoin Core contributors and maintainers should provide the options, 
recommendations etc. about mempool policies. If these policies are kept for 
users to change based on their needs, why force anything or change defaults 
ignoring feedback?

> Core devs are supposedly top technical experts at bitcoin -- which means
> they're the ones that should have the best understanding of all the
> implications of policy changes like this.

Why even provide options for users to change RBF policy in that case? Option to 
disable was already [removed][1] ignoring NACKs and MarcoFalke prefers users 
try the [workaround][2] if there is ever a need to disable it. Are we going to 
remove all the options to switch RBF policies in future because fullrbf has 
been suggested by leading technical experts? Is there a possibility of experts 
going wrong, and has it ever happened in the past?

> It's a bit disappointing that the people that's a problem for didn't
> engage earlier -- though looking back, I guess there wasn't all that
> much effort made to reach out, either.

To be fair, John Carvalho did [comment][3] about this in a pull request 
although it was wrong PR and never going to be merged.

> And I mean: all this is only about drawing a line in sand; if people
> think core devs are wrong, they can still let that line blow away in
> the wind, by running different software, configuring core differently,
> patching core, or whatever else.

I think this is the best option for users at this point. Keep running older 
versions of Core and use Knots or other implementations until technical experts 
in core repository, other bitcoin projects and users are on the same page.

> And the
> impression I got from the PR review club discussion more seemed like
> devs making assumptions about businesses rather than having talked to
> them (eg "[I] think there are fewer and fewer businesses who absolutely
> cannot survive without relying on zeroconf. Or at least hope so").

I noticed this too, since I don't recall the developers of the 3 main coinjoin 
implementations that are claimed to be impacted by opt-in RBF making any 
remarks.

[1]: https://github.com/bitcoin/bitcoin/pull/16171
[2]: https://github.com/bitcoin/bitcoin/pull/25373#issuecomment-1157846575
[3]: https://github.com/bitcoin/bitcoin/pull/25373#issuecomment-1163422654

/dev/fd0

Sent with Proton Mail secure email.

--- Original Message ---
On Tuesday, October 18th, 2022 at 12:30 PM, Anthony Towns via bitcoin-dev 
 wrote:


> On Mon, Oct 17, 2022 at 05:41:48PM -0400, Antoine Riard via bitcoin-dev wrote:
> 
> > > 1) Continue supporting and encouraging accepting unconfirmed "on-chain"
> > > payments indefinitely
> > > 2) Draw a line in the sand now, but give people who are currently
> > > accepting unconfirmed txs time to update their software and business
> > > model
> > > 3) Encourage mainnet miners and relay nodes to support unconditional
> > > RBF immediately, no matter how much that increases the risk to
> > > existing businesses that are still accepting unconfirmed txs
> > > To give more context, the initial approach of enabling full RBF through
> > > #25353 + #25600 wasn't making the assumption the enablement itself would
> > > reach agreement of the economic majority or unanimity.
> 
> 
> Full RBF doesn't need a majority or unanimity to have an impact; it needs
> adoption by perhaps 10% of hashrate (so a low fee tx at the bottom of
> a 10MvB mempool can be replaced before being mined naturally), and some
> way of finding a working path to relay txs to that hashrate.
> 
> Having a majority of nodes/hashrate support it makes the upsides better,
> but doesn't change the downsides to the people who are relying on it
> not being available.
> 
> > Without denying that such equilibrium would be unstable, it was designed to
> > remove the responsibility of the Core project itself to "draw a hard line"
> > on the subject.
> 
> 
> Removing responsibility from core developers seems like it's very much
> optimising for the wrong thing to me.
> 
> I mean, I guess I can understand wanting to reduce that responsibility
> for maintainers of the github repo, even if for no other reason than to
> avoid frivolous lawsuits, but where do you expect people to find better
> advice about what things are a good/bad idea if core devs as a whole
> are avoiding that responsibility?
> 
> Core devs are supposedly top technical experts at bitcoin -- which means
> they're the ones that should have the best understanding of all the
> implications of policy changes like this. Is opt-in RBF only fine? If
> you look at the network today, it sure seems like it; it takes 

Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread Antoine Riard via bitcoin-dev
> Full RBF doesn't need a majority or unanimity to have an impact; it needs
> adoption by perhaps 10% of hashrate (so a low fee tx at the bottom of
> a 10MvB mempool can be replaced before being mined naturally), and some
> way of finding a working path to relay txs to that hashrate.

Yes, this has been the crux of the conceptual discussion in #25600.

> I mean, I guess I can understand wanting to reduce that responsibility
> for maintainers of the github repo, even if for no other reason than to
> avoid frivolous lawsuits, but where do you expect people to find better
> advice about what things are a good/bad idea if core devs as a whole
> are avoiding that responsibility?
>
> Core devs are supposedly top technical experts at bitcoin -- which means
> they're the ones that should have the best understanding of all the
> implications of policy changes like this. Is opt-in RBF only fine? If
> you look at the network today, it sure seems like it; it takes a pretty
> good technical understanding to figure out what problems it has, and
> an even better one to figure out whether those problems can be solved
> while keeping an opt-in RBF regime, or if full RBF is needed.

In the present case, I don't think there is a real concern of a frivolous or 
half-baked lawsuit. My concern is rather the pretension to omniscience that we 
would adopt as Core devs w.r.t. policy changes: far from being a more closed, 
hermetic system like the p2p stack, policy interfaces with the operations of a 
number of Bitcoin applications and second-layer contracting protocols. As of 
today, I think it is still a relatively short process to analyze the 
implications of any policy change on the major Bitcoin application flows and 
L2s of the day (i.e. mainly Lightning and coinjoins). I'm not sure this 
statement will stay true in a future with a growing fauna of L2s (e.g. vaults, 
DLC-over-channel, peerswaps, etc.), each presenting unique characteristics.

How do we minimize the odds of policy-based disruptions for current Bitcoin 
software and users? I don't have strong ideas, though I wish for the Core 
project to adopt a more open-ended and smooth approach to releasing 
context-rich policy changes. With #25353 and #25600 I aimed to experiment with 
such a smoother approach (rather than last year's proposal of turning on 
full-rbf by default, which was wrong and missing context). I hope at least one 
good outcome of this gradual process has been to give time to Dario to publish 
a thoughtful standpoint for 0conf operators, from which I at least learnt a few 
interesting elements on the UX of such applications.

> It's a bit disappointing that the people that's a problem for didn't
> engage earlier -- though looking back, I guess there wasn't all that
> much effort made to reach out, either. There were two mentions in the
> optech newsletter [3] [4] but it wasn't called out as an "action item"
> (maybe those aren't a thing anymore), so it may have been pretty missable,
> especially given RBF has been discussed on and off for so long. And the
> impression I got from the PR review club discussion more seemed like
> devs making assumptions about businesses rather than having talked to
> them (eg "[I] think there are fewer and fewer businesses who absolutely
> cannot survive without relying on zeroconf. Or at least hope so").

Yeah, I'm still valuing the mailing list as a kind of "broadcast-all" 
communication channel towards all the community stakeholders, though this is 
the perspective of a developer and I'm not sure business/service operators have 
the same communication habits. There is definitely a reflection to be had on 
whether we, as Core devs, should follow a better communication standard when we 
propose significant policy changes, and go the full tour of Reddit AMAs, 
podcasts and newsletters as suggested in my reply to Dario. It's hard to know 
whether a lack of vocal reactions on the mailing list or to the publication of 
an optech newsletter signifies a lack of opposition, a lack of negatively 
impacted users or a lack of interest from the wider community. Maybe we should 
have a formalized, bulletpoint-based process for future policy changes, with 
clear time buffers and action items; I don't know.

> If we're happy to not get feedback until we start doing rcs, that's fine;
> but if we want to say "oops, we're into release candidates, you should
> have something earlier, it's too late now", that's a pretty closed-off
> way of doing things.
>
> And I mean: all this is only about drawing a line in *sand*; if people
> think core devs are wrong, they can still let that line blow away in
> the wind, by running different software, configuring core differently,
> patching core, or whatever else.

In the present case, it's more a lack of feedback showing up until we start 
doing rcs, rather than a pretty closed-off way of doing things. That we should 
amend expected and already-merged changes in light of feedback, I'm all for it 
in principle. The hard question 

[bitcoin-dev] Batch validation of CHECKMULTISIG using an extra hint field

2022-10-19 Thread Mark Friedenbach via bitcoin-dev
When Satoshi wrote the first version of bitcoin, s/he made what was almost 
certainly an unintentional mistake. A bug in the original CHECKMULTISIG 
implementation caused an extra item to be popped off the stack upon completion. 
This extra value is not used in any way, and has no consensus meaning. Since 
this value is often provided in the witness, it unfortunately provides a 
malleability vector as anybody can change the extra/dummy value in the 
signature without invalidating a transaction. In legacy scripts NULLDUMMY is a 
policy rule that states this value must be zero, and this was made a consensus 
rule for segwit scripts.

This isn’t the only problem with CHECKMULTISIG. For both ECDSA and Schnorr 
signatures, batch validation could enable an approximate 2x speedup, especially 
during the initial block download phase. However the CHECKMULTISIG algorithm, 
as written, seemingly precludes batch validation for threshold signatures as it 
attempts to validate the list of signatures with the list of pubkeys, in order, 
dropping an unused pubkey only when a signature validation fails. As an 
example, the following script

[2 C B A 3 CHECKMULTISIG]

Could be satisfied by the following witness:

[0 c a]

Where “a” is a signature for pubkey A, and “c” a signature for pubkey C. During 
validation, the signature a is checked using pubkey A, which is successful, so 
the internal algorithm increments the signature pointer AND the pubkey pointer 
to the next elements in the respective lists, removing both from future 
consideration. Next the signature c is checked with pubkey B, which fails, so 
only the pubkey pointer is incremented. Finally signature c is checked with 
pubkey C, which passes. Since 2 signatures passed and this is equal to the 
specified threshold, the opcode evaluates as true. All inputs (including the 
dummy 0 value) are popped from the stack.

The algorithm cannot batch validate these signatures because for any partial 
threshold it doesn’t know which signatures map to which pubkeys.
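The trial-and-error matching described above can be modeled in a few lines. This is a toy model of the logic, with `check_sig` standing in for real ECDSA/Schnorr verification; it is not consensus code.

```python
def checkmultisig(sigs, pubkeys, threshold, check_sig):
    # Walk both lists in order. A successful check consumes the signature
    # AND the pubkey; a failed check consumes only the pubkey. The
    # sig->key mapping is therefore discovered during validation, which
    # is exactly what precludes batching.
    i, j = 0, 0
    while i < len(sigs) and j < len(pubkeys):
        if len(sigs) - i > len(pubkeys) - j:
            return False  # not enough pubkeys left for the remaining sigs
        if check_sig(sigs[i], pubkeys[j]):
            i += 1        # signature matched: advance to the next signature
        j += 1            # always advance to the next pubkey
    return i == len(sigs) and i >= threshold

# Toy "signatures": sig s is valid for key K iff s == K.lower().
valid = lambda sig, key: sig == key.lower()
print(checkmultisig(["a", "c"], ["A", "B", "C"], 2, valid))  # True
print(checkmultisig(["c", "a"], ["A", "B", "C"], 2, valid))  # False: order matters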

Not long after segwit was released for activation, making the NULLDUMMY rule 
consensus for segwit scripts, the observation was made by Luke-Jr on IRC[1] 
that this new rule was actually suboptimal. Satoshi’s mistake gave us an extra 
parameter to CHECKMULTISIG, and it was entirely within our means to use this 
parameter to convey extra information to the CHECKMULTISIG algorithm, and 
thereby enable batch validation of threshold signatures using this opcode.

The idea is simple: instead of requiring that the final parameter on the stack 
be zero, require instead that it be a minimally-encoded bitmap specifying which 
keys are used, or alternatively, which are not used and must therefore be 
skipped. Before attempting validation, ensure for a k-of-n threshold only k 
bits are set in the bitfield indicating the used pubkeys (or n-k bits set 
indicating the keys to skip). The updated CHECKMULTISIG algorithm is as 
follows: when attempting to validate a signature with a pubkey, first check the 
associated bit in the bitfield to see if the pubkey is used. If the bitfield 
indicates that the pubkey is NOT used, then skip it without even attempting 
validation. The only signature validations which are attempted are those which 
the bitfield indicates ought to pass. This is a soft-fork as any validator 
operating under the original rules (which ignore the “dummy” bitfield) would 
still arrive at the correct pubkey-signature mapping through trial and error.
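A sketch of the hint-field variant, under the assumption (one of the encodings suggested above) that bit i of the bitmap marks pubkey i as used:

```python
def checkmultisig_with_hint(sigs, pubkeys, threshold, hint, check_sig):
    # The bitmap fixes the sig->key mapping before any verification runs,
    # which is exactly the property a batch verifier needs.
    used = [k for i, k in enumerate(pubkeys) if hint & (1 << i)]
    if len(used) != threshold or len(sigs) != threshold:
        return False  # hint must select exactly k keys for a k-of-n check
    # With the pairing known up front, these checks could be handed to a
    # batch verifier; here they are simply checked pairwise.
    return all(check_sig(s, k) for s, k in zip(sigs, used))

# Toy "signatures" as above: sig s is valid for key K iff s == K.lower().
valid = lambda sig, key: sig == key.lower()
# Keys [A, B, C]; signatures for A and C; hint 0b101 selects keys 0 and 2.
print(checkmultisig_with_hint(["a", "c"], ["A", "B", "C"], 2, 0b101, valid))  # True
print(checkmultisig_with_hint(["a", "c"], ["A", "B", "C"], 2, 0b011, valid))  # False
```

An old-rules validator ignoring `hint` would still reach the same mapping by trial and error, which is why this is a soft fork.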

Aside: If you wanted to hyper-optimize, you could use a binomial encoding of 
the bitmask hint field, given that the n-choose-k threshold is already known. 
Or you could forego encoding the k threshold entirely and infer it from the 
number of set bits. However in either case the number of bytes saved is 
negligible compared to the overall size of a multisig script and witness, and 
there’d be a significant tradeoff in usability. Such optimization is probably 
not worth it.

If you’d rather see this in terms of code, there’s an implementation of this 
that I coded up in 2019 and deployed to a non-bitcoin platform:

https://github.com/tradecraftio/tradecraft/commit/339dafc0be37ae5465290b22d204da4f37c6e261

Unfortunately this observation was made too late to be incorporated into 
segwit, but future versions of script could absolutely use the hint-field trick 
to enable batch validation of CHECKMULTISIG scripts. So you can imagine my 
surprise when reviewing the Taproot/Tapscript BIPs I saw that CHECKMULTISIG was 
disabled for Tapscript, and the justification given in the footnotes is that 
CHECKMULTISIG is not compatible with batch validation! Talking with a few other 
developers including Luke-Jr, it has become clear that this solution to the 
CHECKMULTISIG batch validation problem had been completely forgotten and did 
not come up during Tapscript review. I’m posting this now because I don’t want 
the trick to be lost again.

Kind regards,
Mark Friedenbach