Re: [bitcoin-dev] [Lightning-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-23 Thread Matt Corallo via bitcoin-dev




On 10/20/23 7:43 PM, Peter Todd wrote:

On Fri, Oct 20, 2023 at 09:55:12PM -0400, Matt Corallo wrote:

Quite the contrary. Schnorr signatures are 64 bytes, so in situations like
lightning where the transaction form is deterministically derived, signing 100
extra transactions requires just 6400 extra bytes. Even a very slow 100KB/s
connection can transfer that in 64ms; latency will still dominate.
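
As a quick sanity check of that arithmetic (a Python sketch using the figures assumed above):

    # Back-of-the-envelope check of the claim above (assumed figures).
    SIG_BYTES = 64                 # one Schnorr signature
    VARIANTS = 100                 # pre-signed fee variants
    LINK_BYTES_PER_SEC = 100_000   # a "very slow" 100KB/s connection

    extra_bytes = SIG_BYTES * VARIANTS                     # 6400 bytes
    transfer_ms = extra_bytes * 1000 / LINK_BYTES_PER_SEC  # 64.0 ms
    print(extra_bytes, transfer_ms)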


Lightning today isn't all that much data, but multiply it by 100 and we
start racking up enough data that operators of larger nodes may have to store a
really material amount, and dealing with that becomes a much bigger pain
than when we're talking about a GiB or twenty.


We are talking about storing ephemeral data here, HTLC transactions and
possibly commitment transactions. Since lightning uses disclosed secrets to
invalidate old state, you do not need to keep every signature from your
counterparty indefinitely.


Mmm, fair point, yes.


RBF has a minimum incremental relay fee of 1sat/vByte by default. So if you use
those 100 pre-signed transaction variants to do nothing more than sign every
possible minimum incremental relay fee step, you've covered a range of 1sat/vByte to
100sat/vByte. I believe that is sufficient to get mined for any block in
Bitcoin's entire modern history.
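
Sketched out, that ladder of variants looks like the following (the 166.5 vByte HTLC-timeout size is the figure quoted later in this thread; fee values in sats are purely illustrative):

    # One pre-signed variant per 1 sat/vB step, covering 1..100 sat/vB.
    HTLC_TIMEOUT_VSIZE = 166.5
    ladder = [(rate, round(rate * HTLC_TIMEOUT_VSIZE)) for rate in range(1, 101)]
    # ladder[0] == (1, 166) and ladder[-1] == (100, 16650); each step adds
    # exactly the default minimum incremental relay fee of 1 sat/vByte.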

CPFP meanwhile requires two transactions, and thus extra bytes. Other than edge
cases with very large transactions in low-fee environments, there's no
circumstance where CPFP beats RBF.


What I was referring to is that with SIGHASH_SINGLE|ANYONECANPAY we can
combine many HTLC claims into one transaction, whereas pre-signing means
we're stuck with a ton of individual transactions.


Since SIGHASH_SINGLE requires one output per input, the savings you get by
combining multiple SIGHASH_SINGLE transactions together aren't very
significant. Just 18 bytes for nVersion, nLockTime, and the txin and txout size
fields. The HTLC-timeout transaction is 166.5 vBytes, so that's a savings of
just 11%.
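
Worked out from the quoted figures (a sketch; the 18-byte overhead is the shared-fields figure above):

    # Savings from merging SIGHASH_SINGLE claims, per the numbers quoted above.
    shared_overhead = 18         # nVersion, nLockTime, txin/txout count fields
    htlc_timeout_vsize = 166.5
    print(f"{shared_overhead / htlc_timeout_vsize:.1%}")  # ~10.8%, i.e. "just 11%"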


Yep, it's not a lot, but for a thing that's inherently super chain-spammy, it's
still quite nice.


Of course, if you _do_ need to fee bump and add an additional input, that input
takes up space, and you'll probably need a change output. At which point you
again would probably have been better off with a pre-signed transaction.

You are also assuming there are lots of HTLCs in flight that need to be spent.
That's very often not the case.


In general, yes, in force-close cases often there's been some failure which is repeated in several 
HTLCs :).


More generally, I think we're getting lost here - this isn't really a material change to
lightning's trust model. It's already the case that a peer that is willing to put a lot of work in
can probably steal your money, and there's now just one more way they can do that. We really don't
need to rush to "fix lightning" here; we can do it right and fix it at the ecosystem level. It
shouldn't be the case that a policy restriction both screws up an L2 network *and*
results in miners getting paid less. That's a policy bug.


Matt


Re: [bitcoin-dev] [Lightning-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-20 Thread Matt Corallo via bitcoin-dev




On 10/20/23 9:25 PM, Peter Todd wrote:

On Fri, Oct 20, 2023 at 09:03:49PM -0400, Matt Corallo wrote:

What are anchor outputs used for other than increasing fees?

Because if we've pre-signed the full fee range, there is simply no need for
anchor outputs. Under any circumstance we can broadcast a transaction with a
sufficiently high fee to get mined.



Indeed, that is what anchor outputs are for. Removing the pre-set feerate
solved a number of issues with edge-cases and helped address the
fee-inflation attack. Now, just using pre-signed transactions doesn't have
to re-introduce those issues - as long as the broadcaster gets to pick which
of the possible transactions they broadcast, it's just another transaction of
theirs.

Still, I'm generally really dubious of the multiple pre-signed transaction
thing: (a) it would mean more fee overhead (not the end of the world for a
force-closure, but it sucks to have all these individual transactions
rolling around and be unable to batch), but more importantly (b) it's a bunch
of overhead to keep track of a ton of variants across a sufficiently
granular set of feerates for it to not result in substantially overspending
on fees.


Quite the contrary. Schnorr signatures are 64 bytes, so in situations like
lightning where the transaction form is deterministically derived, signing 100
extra transactions requires just 6400 extra bytes. Even a very slow 100KB/s
connection can transfer that in 64ms; latency will still dominate.


Lightning today isn't all that much data, but multiply it by 100 and we start racking up enough data
that operators of larger nodes may have to store a really material amount, and dealing with that
becomes a much bigger pain than when we're talking about a GiB or twenty.



RBF has a minimum incremental relay fee of 1sat/vByte by default. So if you use
those 100 pre-signed transaction variants to do nothing more than sign every
possible minimum incremental relay fee step, you've covered a range of 1sat/vByte to
100sat/vByte. I believe that is sufficient to get mined for any block in
Bitcoin's entire modern history.

CPFP meanwhile requires two transactions, and thus extra bytes. Other than edge
cases with very large transactions in low-fee environments, there's no
circumstance where CPFP beats RBF.


What I was referring to is that with SIGHASH_SINGLE|ANYONECANPAY we can combine many HTLC
claims into one transaction, whereas pre-signing means we're stuck with a ton of individual transactions.


Matt


Re: [bitcoin-dev] [Lightning-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-20 Thread Matt Corallo via bitcoin-dev




On 10/20/23 8:15 PM, Peter Todd wrote:

On Fri, Oct 20, 2023 at 05:05:48PM -0400, Matt Corallo wrote:

Sadly this is only really viable for pre-anchor channels. With anchor
channels the attack can be performed by either side of the closure, as the
HTLCs are now, at max, only signed SIGHASH_SINGLE|ANYONECANPAY, allowing you
to add more inputs and perform this attack even as the broadcaster.

I don't think it's really viable to walk that change back to fix this, as it
also fixed plenty of other issues with channel usability and important
edge-cases.


What are anchor outputs used for other than increasing fees?

Because if we've pre-signed the full fee range, there is simply no need for
anchor outputs. Under any circumstance we can broadcast a transaction with a
sufficiently high fee to get mined.



Indeed, that is what anchor outputs are for. Removing the pre-set feerate solved a number of issues
with edge-cases and helped address the fee-inflation attack. Now, just using pre-signed transactions
doesn't have to re-introduce those issues - as long as the broadcaster gets to pick which of the
possible transactions they broadcast, it's just another transaction of theirs.


Still, I'm generally really dubious of the multiple pre-signed transaction thing: (a) it would mean
more fee overhead (not the end of the world for a force-closure, but it sucks to have all these
individual transactions rolling around and be unable to batch), but more importantly (b) it's a bunch
of overhead to keep track of a ton of variants across a sufficiently granular set of feerates for it
to not result in substantially overspending on fees.


Like I mentioned in the previous mail, this is really a policy bug - we're talking about a
transaction pattern that might well happen in which miners aren't getting the optimal value in
transaction fees (potentially by a good bit). This needs to be fixed at the policy/Bitcoin Core
layer, not in the lightning world (as much as it's pretty resource-intensive to fix in the policy
domain, I think).


Matt


Re: [bitcoin-dev] [Lightning-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-20 Thread Matt Corallo via bitcoin-dev
Sadly this is only really viable for pre-anchor channels. With anchor channels the attack can be
performed by either side of the closure, as the HTLCs are now, at max, only signed
SIGHASH_SINGLE|ANYONECANPAY, allowing you to add more inputs and perform this attack even as the
broadcaster.


I don't think it's really viable to walk that change back to fix this, as it also fixed plenty of
other issues with channel usability and important edge-cases.


I'll highlight that fixing this issue on the lightning end isn't really the right approach generally
- we're talking about a case where a lightning counterparty engineered a transaction broadcast
ordering such that miners are *not* including the optimal set of transactions for fee revenue. Given
that such a scenario exists (and it's not unrealistic to think someone might wish to engineer such a
situation), the fix ultimately needs to lie with Bitcoin Core (or other parts of the mining stack).


Now, fixing this in the Bitcoin Core stack is no trivial deal - the reason is that to keep enough
history to fix it, Bitcoin Core would need unbounded memory. However, it's not hard to imagine a
simple external piece of software which monitors the mempool for transactions which were replaced
out but which may be able to re-enter the mempool later once the replacement is itself replaced,
and stores them on disk. From there, this software could optimize the revenue of block template
selection, while also incidentally fixing this issue.
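
A minimal sketch of such a watcher, assuming only standard Bitcoin Core RPCs (getrawmempool, getrawtransaction, sendrawtransaction) behind a hypothetical rpc() helper that performs the JSON-RPC calls:

    import time

    def watch_replacements(rpc, poll_secs=1.0):
        """Remember txs that vanish from the mempool unconfirmed and retry
        them; a tx that was mined or is still conflicted is simply rejected
        on resubmission."""
        seen = {}      # txid -> raw hex, currently in the mempool
        evicted = {}   # txid -> raw hex, disappeared without confirming
        while True:
            current = set(rpc("getrawmempool"))
            for txid in current.difference(seen):
                try:
                    seen[txid] = rpc("getrawtransaction", txid)
                except Exception:
                    continue  # raced: replaced before we could fetch it
            for txid in list(seen):
                if txid not in current:
                    evicted[txid] = seen.pop(txid)
            for txid, raw in list(evicted.items()):
                try:
                    rpc("sendrawtransaction", raw)
                    del evicted[txid]  # accepted again: the cycle was undone
                except Exception:
                    pass  # still conflicted (or already mined)
            time.sleep(poll_secs)

A production version would persist the evicted set to disk and garbage-collect long-confirmed transactions, but the shape is the same.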


Matt

On 10/20/23 2:35 PM, Matt Morehouse via bitcoin-dev wrote:

I think if we apply this presigned fee multiplier idea to HTLC spends,
we can prevent replacement cycles from happening.

We could modify HTLC scripts so that *both* parties can only spend the
HTLC via presigned second-stage transactions, and we can always sign
those with SIGHASH_ALL.  This will prevent the attacker from adding
inputs to their presigned transaction, so (AFAICT) a replacement
cycling attack becomes impossible.

The tradeoff is more bookkeeping and less fee granularity when
claiming HTLCs on chain.

On Fri, Oct 20, 2023 at 11:04 AM Peter Todd via bitcoin-dev wrote:


On Fri, Oct 20, 2023 at 10:31:03AM +, Peter Todd via bitcoin-dev wrote:

As I have suggested before, the correct way to do pre-signed transactions is to
pre-sign enough *different* transactions to cover all reasonable needs for
bumping fees. Even if you just increase the fee by 2x each time, pre-signing 10
different replacement transactions covers a fee range of 1024x. And you
obviously can improve on this by increasing the multiplier towards the end of
the range.
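
The range claim is just geometric growth; as a quick sketch:

    # Pre-signing replacements that each double the fee: ten doublings
    # span a 2**10 == 1024x fee range (illustrative numbers only).
    rates = [2**i for i in range(11)]   # 1, 2, 4, ..., 1024
    assert rates[-1] // rates[0] == 1024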


To be clear, when I say "increasing the multiplier", I mean, starting with a
smaller multiplier at the beginning of the range, and ending with a bigger one.

Eg feebumping with fee increases pre-signed for something like:

 1.1
 1.2
 1.4
 1.8
 2.6
 4.2
 7.4

etc.

That would use most of the range for smaller bumps, as a %, with larger % bumps
reserved for the end, where our strategy is changing to something more
"scorched-earth".

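The example schedule above falls out of doubling the increment over 1.0 at each step; a sketch reproducing it:

    # Reproduces the example multipliers: 1.1, 1.2, 1.4, 1.8, 2.6, 4.2,
    # 7.4, ... (the increment over 1.0 doubles each step).
    def presign_feerates(base_rate=1.0, count=8):
        rates, rate, mult = [base_rate], base_rate, 1.1
        for _ in range(count - 1):
            rate *= mult
            rates.append(round(rate, 3))
            mult = 1 + (mult - 1) * 2
        return rates

    print(presign_feerates())
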
And of course, applying this idea properly to commitment transactions will mean
that the replacements may have HTLCs removed, when their value drops below the
fees necessary to get those outputs mined.

Note too that we can sign simultaneous variants of transactions that deduct the
fees from different parties' outputs. Eg Alice can give Bob the ability to
broadcast higher and higher fee txs, taking the fees from Bob's output(s), and
Bob can give Alice the same ability, taking the fees from Alice's output(s). I
haven't thought through how this would work with musig. But you can certainly
do that with plain old OP_CheckMultisig.
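
A sketch of the symmetric variant table this implies - for each feerate step, each party holds the counterparty's signature on a version that pays fees from the broadcaster's own output (names and structure here are purely illustrative):

    from itertools import product

    def variant_table(feerates, parties=("alice", "bob")):
        # (fee_payer, feerate) -> counterparty signature for the tx whose
        # fee at `feerate` is deducted from `fee_payer`'s output.
        other = {"alice": "bob", "bob": "alice"}
        return {(payer, rate): f"sig_from_{other[payer]}"
                for payer, rate in product(parties, feerates)}

    table = variant_table([1.0, 1.1, 1.32, 1.85])  # 2 parties x 4 rates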

--
https://petertodd.org 'peter'[:-1]@petertodd.org


Re: [bitcoin-dev] [Lightning-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-19 Thread Matt Corallo via bitcoin-dev
That certainly helps, yes, and I think many nodes do something akin to this already, but I'm not 
sure we can say that the problem has been fixed if the victim has to spend way more than the 
prevailing mempool fees (and potentially burn a large % of their HTLC value) :).


Matt

On 10/19/23 12:23 PM, Matt Morehouse wrote:

On Wed, Oct 18, 2023 at 12:34 AM Matt Corallo via bitcoin-dev wrote:


There appears to be some confusion about this issue and the mitigations. To be
clear, the deployed mitigations are not expected to fix this issue; it's
arguable whether they provide anything more than a PR statement.

There are two discussed mitigations here - mempool scanning and transaction 
re-signing/re-broadcasting.

Mempool scanning relies on regularly checking the mempool of a local node to
see if we can catch the replacement cycle mid-cycle. It only works if we see
the first transaction before the second transaction replaces it.

Today, a large majority of lightning nodes run on machines with a Bitcoin node 
on the same IP
address, making it very clear what the "local node" of the lightning node is. 
An attacker can
trivially use this information to connect to said local node and do the 
replacement quickly,
preventing the victim from seeing the replacement.

More generally, however, similar discoverability is true for mining pools. An 
attacker performing
this attack is likely to do the replacement attack on a miner's node directly, 
potentially reducing
the reach of the intermediate transaction to only miners, such that the victim 
can never discover it
at all.

The second mitigation is similarly pathetic. Re-signing and re-broadcasting the
victim's transaction in an attempt to get it to miners even if it's been removed
may work, if the attacker is super lazy and didn't finish writing their attack
system. If the attacker is connected to
a large majority of
hashrate (which has historically been fairly doable), they can simply do their 
replacement in a
cycle aggressively and arbitrarily reduce the probability that the victim's 
transaction gets confirmed.


What if the honest node aggressively fee-bumps and retransmits the
HTLC-timeout as the CLTV delta deadline approaches, as suggested by
Ziggie?  Say, within 10 blocks of the deadline, the honest node starts
increasing the fee by 1/10th the HTLC value for each non-confirmation.

This "scorched earth" approach may cost the honest node considerable
fees, but it will cost the attacker even more, since each attacker
replacement needs to burn at least as much as the HTLC-timeout fees,
and the attacker will need to do a replacement every time the honest
node fee bumps.

I think this fee-bumping policy will provide sufficient defense even
if the attacker is replacement-cycling directly in miners' mempools
and the victim has no visibility into the attack.
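
A sketch of that policy (all values in sats; heights and parameters are assumptions, not from any implementation):

    def scorched_earth_fee(htlc_value, deadline_height, current_height,
                           base_fee, bumps_so_far):
        """Within 10 blocks of the CLTV deadline, escalate the HTLC-timeout
        fee by htlc_value/10 per non-confirmation. Each attacker replacement
        must then burn at least this much, every time we bump."""
        if deadline_height - current_height > 10:
            return base_fee
        fee = base_fee + (htlc_value // 10) * bumps_so_far
        return min(fee, htlc_value)  # never burn more than we're defending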



Now, the above is all true in a spherical cow kinda world, and the P2P network
has plenty of slow nodes and strange behavior. It's possible that these
mitigations might, by some stroke of luck, happen to catch such an attack and
prevent it, because something took longer than the attacker intended or
whatever. But that's a far cry from any kind of material "fix" for the issue.

Ultimately the only fix for this issue will be for miners to keep a history of
transactions they've seen and try them again later, once they may be able to
re-enter the mempool after an attack like this.

Matt

On 10/16/23 12:57 PM, Antoine Riard wrote:

(cross-posting: the mempool issues identified expose lightning channels to
loss-of-funds risks; other multi-party bitcoin apps might be affected)

Hi,

End of last year (December 2022), amid technical discussions on eltoo payment 
channels and
incentives compatibility of the mempool anti-DoS rules, a new transaction-relay 
jamming attack
affecting lightning channels was discovered.

After careful analysis, it turns out this attack is practical and immediately
exposes lightning routing hops carrying HTLC traffic to loss-of-funds security
risks, for both legacy and anchor output channels. Exploitation is plausible
even without network mempool congestion.

Mitigations have been designed, implemented and deployed by all major lightning
implementations over the last months.

Please find attached the release numbers, where the mitigations should be 
present:
- LDK: v0.0.118 - CVE-2023-40231
- Eclair: v0.9.0 - CVE-2023-40232
- LND: v.0.17.0-beta - CVE-2023-40233
- Core-Lightning: v.23.08.01 - CVE-2023-40234

While replacement cycling attacks have neither been observed or reported in the
wild over the last ~10 months nor experimented with in real-world conditions on
bitcoin mainnet, a functional test is available exercising the affected
lightning channel against the bitcoin core mempool (26.0 release cycle).

It is understood that a simple replacement cycling attack does not demand
privileged capabilities from an attacker (e.g. no low-hashrate power), only
access to basic bitcoin and lightning software. Yet I still think executing
such an attack successfully requires a fair amount of bitcoin technical
know-how and decent preparation.

Re: [bitcoin-dev] [Lightning-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-17 Thread Matt Corallo via bitcoin-dev
There appears to be some confusion about this issue and the mitigations. To be clear, the deployed
mitigations are not expected to fix this issue; it's arguable whether they provide anything more than
a PR statement.


There are two discussed mitigations here - mempool scanning and transaction 
re-signing/re-broadcasting.

Mempool scanning relies on regularly checking the mempool of a local node to see if we can catch the
replacement cycle mid-cycle. It only works if we see the first transaction before the second
transaction replaces it.


Today, a large majority of lightning nodes run on machines with a Bitcoin node on the same IP 
address, making it very clear what the "local node" of the lightning node is. An attacker can 
trivially use this information to connect to said local node and do the replacement quickly, 
preventing the victim from seeing the replacement.


More generally, however, similar discoverability is true for mining pools. An attacker performing 
this attack is likely to do the replacement attack on a miner's node directly, potentially reducing 
the reach of the intermediate transaction to only miners, such that the victim can never discover it 
at all.


The second mitigation is similarly pathetic. Re-signing and re-broadcasting the victim's transaction
in an attempt to get it to miners even if it's been removed may work, if the attacker is super lazy
and didn't finish writing their attack system. If the attacker is connected to a large majority of
hashrate (which has historically been fairly doable), they can simply do their replacement in a
cycle aggressively and arbitrarily reduce the probability that the victim's transaction gets confirmed.


Now, the above is all true in a spherical cow kinda world, and the P2P network has plenty of slow
nodes and strange behavior. It's possible that these mitigations might, by some stroke of luck,
happen to catch such an attack and prevent it, because something took longer than the attacker
intended or whatever. But that's a far cry from any kind of material "fix" for the issue.


Ultimately the only fix for this issue will be for miners to keep a history of transactions they've
seen and try them again later, once they may be able to re-enter the mempool after an attack like this.


Matt

On 10/16/23 12:57 PM, Antoine Riard wrote:
(cross-posting: the mempool issues identified expose lightning channels to loss-of-funds risks;
other multi-party bitcoin apps might be affected)


Hi,

End of last year (December 2022), amid technical discussions on eltoo payment channels and 
incentives compatibility of the mempool anti-DoS rules, a new transaction-relay jamming attack 
affecting lightning channels was discovered.


After careful analysis, it turns out this attack is practical and immediately exposes lightning
routing hops carrying HTLC traffic to loss-of-funds security risks, for both legacy and anchor output
channels. Exploitation is plausible even without network mempool congestion.


Mitigations have been designed, implemented and deployed by all major lightning implementations
over the last months.


Please find attached the release numbers, where the mitigations should be 
present:
- LDK: v0.0.118 - CVE-2023-40231
- Eclair: v0.9.0 - CVE-2023-40232
- LND: v.0.17.0-beta - CVE-2023-40233
- Core-Lightning: v.23.08.01 - CVE-2023-40234

While replacement cycling attacks have neither been observed or reported in the wild over the last
~10 months nor experimented with in real-world conditions on bitcoin mainnet, a functional test is
available exercising the affected lightning channel against the bitcoin core mempool (26.0 release cycle).


It is understood that a simple replacement cycling attack does not demand privileged capabilities
from an attacker (e.g. no low-hashrate power), only access to basic bitcoin and lightning software.
Yet I still think executing such an attack successfully requires a fair amount of bitcoin technical
know-how and decent preparation.


From my understanding of those issues, it is yet to be determined if the mitigations deployed are
robust enough in the face of advanced replacement cycling attackers, especially ones able to combine
different classes of transaction-relay jamming such as pinning, or ones vested with more privileged
capabilities.


Please find in this full disclosure report a list of potentially affected bitcoin applications using
bitcoin script timelocks or multi-party transactions, albeit no immediate security risk exposure as
severe as the ones affecting lightning has been identified. Only a cursory review of non-lightning
applications has been conducted so far.


There is a paper published summarizing replacement cycling attacks on the 
lightning network:
https://github.com/ariard/mempool-research/blob/2023-10-replacement-paper/replacement-cycling.pdf 



## Problem

A lightning node allows HTLCs 

Re: [bitcoin-dev] [Lightning-dev] A new Bitcoin implementation integrated with Core Lightning

2023-05-06 Thread Matt Corallo via bitcoin-dev
Hi Michael,

While I don't think forks of Core with an intent to drive consensus rule changes (or lack thereof) benefit the bitcoin system as the Bitcoin Core project stands today, if you want to build a nice full node wallet with lightning based on a fork of Core, there was code written to do this some years ago: https://github.com/bitcoin/bitcoin/pull/18179

It never went anywhere as lightning (and especially LDK!) were far from ready to be a first class feature in bitcoin core at the time (and I'd argue still today), but as a separate project it could be interesting, at least if the maintenance burden were kept to a sustainable level.

Matt

On Jan 14, 2023, at 13:03, Michael Folkson via Lightning-dev wrote:

I tweeted this [0] back in November 2022.

"With the btcd bugs and the analysis paralysis on a RBF policy option in Core increasingly thinking @BitcoinKnots and consensus compatible forks of Core are the future. Gonna chalk that one up to another thing @LukeDashjr was right about all along."

A new bare bones Knots style Bitcoin implementation (in C++/C) integrated with Core Lightning was a long term idea I had (and presumably many others have had), but the dysfunction on the Bitcoin Core project this week (if anything it has been getting worse over time, not better) has made me start to take the idea more seriously. It is clear to me that the current way the Bitcoin Core project is being managed is not how I would like an open source project to be managed. Very little discussion is public anymore and decisions seem to be increasingly made behind closed doors or in private IRC channels (to the extent that decisions are made at all). Core Lightning seems to have the opposite problem. It is managed effectively in the open (admittedly with fewer contributors) but doesn't have the eyeballs or the usage that Bitcoin Core does.

Regardless, selfishly I at some point would like a bare bones Bitcoin and Lightning implementation integrated in one codebase. The Bitcoin Core codebase has collected a lot of cruft over time, and the ultra conservatism that is needed when treating (potential) consensus code seems to permeate into parts of the codebase that no one is using, definitely isn't consensus code, and should probably just be removed.

The libbitcoinkernel project was (is?) an attempt to extract the consensus engine out of Core, but it seems like it won't achieve that, as consensus is just too slippery a concept, and Knots style consensus compatible codebase forks of Bitcoin Core seem to still be the model. To what extent you can safely chop off this cruft and effectively maintain this less crufty fork of Bitcoin Core also isn't clear to me yet.

Then there is the question of whether it makes sense to mix C and C++ code, which people have different views on. C++ is obviously a superset of C, but assuming this merging of Bitcoin Core and Core Lightning is/was the optimal final destination, it surely would have been better if Core Lightning was written in the same language (i.e. with classes) as Bitcoin Core.

I'm just floating the idea to (hopefully) hear from people who are much more familiar with the entirety of the Bitcoin Core and Core Lightning codebases. It would be an ambitious long term project, but it would be nice to focus on some ambitious project(s) (even if just conceptually) for a while given (thankfully) there seems to be a lull in soft fork activation chaos.

Thanks

Michael

[0]: https://twitter.com/michaelfolkson/status/1589220155006910464?s=20=GbPm7w5BqS7rS3kiVFTNcw


--
Michael Folkson
Email: michaelfolkson at protonmail.com
Keybase: michaelfolkson
PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3








Re: [bitcoin-dev] Bitcoin Core 24.0.1 Released

2022-12-13 Thread Matt Corallo via bitcoin-dev
The signature verifies for me; however, the email was sent as HTML and the signature only verifies in
plaintext, so I had to copy it into a text file. I've included the email as-verified below.


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Due to last-minute issues (https://github.com/bitcoin/bitcoin/pull/26616),
24.0, although tagged, was never fully announced or released.

Bitcoin Core version 24.0.1 is now available from:

  https://bitcoincore.org/bin/bitcoin-core-24.0.1/

Or through BitTorrent:


 
magnet:?xt=urn:btih:d7604a67c8ed6e3b35da15138f8ac81d7618788c&dn=bitcoin-core-24.0.1&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969%2Fannounce&tr=udp%3A%2F%2Fexplodie.org%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Ftracker.bitcoin.sprovoost.nl%3A6969

This release includes new features, various bug fixes and performance
improvements, as well as updated translations.

Please report bugs using the issue tracker at GitHub:

  

To receive security and update notifications, please subscribe to:

  

How to Upgrade
==

If you are running an older version, shut it down. Wait until it has completely
shut down (which might take a few minutes in some cases), then run the
installer (on Windows) or just copy over `/Applications/Bitcoin-Qt` (on macOS)
or `bitcoind`/`bitcoin-qt` (on Linux).

Upgrading directly from a version of Bitcoin Core that has reached its EOL is
possible, but it might take some time if the data directory needs to be
migrated. Old wallet versions of Bitcoin Core are generally supported.

Compatibility
==

Bitcoin Core is supported and extensively tested on operating systems
using the Linux kernel, macOS 10.15+, and Windows 7 and newer.  Bitcoin
Core should also work on most other Unix-like systems but is not as
frequently tested on them.  It is not recommended to use Bitcoin Core on
unsupported systems.

Notice of new option for transaction replacement policies
==

This version of Bitcoin Core adds a new `mempoolfullrbf` configuration
option which allows users to change the policy their individual node
will use for relaying and mining unconfirmed transactions.  The option
defaults to the same policy that was used in previous releases and no
changes to node policy will occur if everyone uses the default.

Some Bitcoin services today expect that the first version of an
unconfirmed transaction that they see will be the version of the
transaction that ultimately gets confirmed---a transaction acceptance
policy sometimes called "first-seen".

The Bitcoin Protocol does not, and cannot, provide any assurance that
the first version of an unconfirmed transaction seen by a particular
node will be the version that gets confirmed.  If there are multiple
versions of the same unconfirmed transaction available, only the miner
who includes one of those transactions in a block gets to decide which
version of the transaction gets confirmed.

Despite this lack of assurance, multiple merchants and services today
still make this assumption.

There are several benefits to users from removing this *first-seen*
simplification.  One key benefit, the ability for the sender of a
transaction to replace it with an alternative version paying higher
fees, was realized in [Bitcoin Core 0.12.0][] (February 2016) with the
introduction of [BIP125][] opt-in Replace By Fee (RBF).

Since then, there has been discussion about completely removing the
first-seen simplification and allowing users to replace any of their
older unconfirmed transactions with newer transactions, a feature called
*full-RBF*.  This release includes a `mempoolfullrbf` configuration
option that allows enabling full-RBF, although it defaults to off
(allowing only opt-in RBF).
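
Enabling it is a single line in bitcoin.conf (sketch):

    # bitcoin.conf: opt this node into full-RBF relay and mining policy.
    # Default in 24.0.1 is 0, i.e. BIP125 opt-in RBF only.
    mempoolfullrbf=1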

Several alternative node implementations have already enabled full-RBF by
default for years, and several contributors to Bitcoin Core are
advocating for enabling full-RBF by default in a future version of
Bitcoin Core.

As more nodes that participate in relay and mining begin enabling
full-RBF, replacement of unconfirmed transactions by ones offering higher
fees may rapidly become more reliable.

Contributors to this project strongly recommend that merchants and services
not accept unconfirmed transactions as final, and if they insist on doing so,
to take the appropriate steps to ensure they have some recourse or plan for
when their assumptions do not hold.

[Bitcoin Core 0.12.0]:
https://bitcoincore.org/en/releases/0.12.0/#opt-in-replace-by-fee-transactions
[bip125]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki

Notable changes
===

P2P and network changes
-----------------------

- To address

Re: [bitcoin-dev] bitcoin-inquistion: evaluating soft forks on signet

2022-09-17 Thread Matt Corallo via bitcoin-dev




On 9/17/22 2:14 AM, Anthony Towns wrote:

On Fri, Sep 16, 2022 at 12:46:53PM -0400, Matt Corallo via bitcoin-dev wrote:

On 9/16/22 3:15 AM, Anthony Towns via bitcoin-dev wrote:

As we've seen from the attempt at a CHECKTEMPLATEVERIFY activation earlier
in the year [0], the question of "how to successfully get soft fork
ideas from concept to deployment" doesn't really have a good answer today.

I strongly disagree with this.


Okay? "X is good" is obviously just a statement of opinion, so if you
want to disagree, that's obviously allowed.

I also kind of feel like that's the *least* interesting paragraph in the
entire email to talk further about; if you think the current answer's
already good, then the rest of the mail's just about (hopefully) making
it better, which would be worthwhile anyway?


No, I think it's at least a good chunk of the "statement of problem". Yes, more testing is good, and
this project is a way to get that. Cool. But implying that a lack of test frameworks is in any
material way part of the lack of movement on forks in Bitcoin is, I think, very wrong, so it's worth
pointing out; whether the particular project is useful or not is a separate question.



Going back many, many years we've had many
discussions about fork process, and the parts people (historically) agreed
with tend to be:
(1) come up with an idea
(2) socialize the idea in the technical community, see if anyone comes up
with any major issues or can suggest better ideas which solve the same
use-cases in cleaner ways
(3) propose the concrete idea with a more well-defined strawman, socialize
that, get some kind of rough consensus in the loosely-defined, subjective,
"technical community" (ie just ask people and adapt to feedback until you
have found some kind of average of the opinions of people you, the
fork-champion, think are reasonably well-informed!).
(4) okay, admittedly beyond this is a bit less defined, but we can deal with it 
when we get there.
Turns out, the issue today is a lack of champions following steps 1-3, we
can debate what the correct answer is to step (4) once we actually have
people who want to be champions who are willing to (humbly) push an idea
forward towards rough agreement of the world of technical bitcoiners
(without which I highly doubt you'd ever see broader-community consensus).


Personally, I think this is easily refuted by contradiction.

1) If we did have a good answer for how to progress a soft-fork, then
the great consensus cleanup [0] would have made more progress over the
past 3.5 years


No? Who is the champion for it? I haven't been. No one else is obliged to take up the reins and run
with it, that's not how open-source works. And no one has emerged who has a strong interest in doing
so, and that's totally fine. It means it hasn't made any progress, but that's an indication that no
one feels strongly enough about it that it's risen to the top of their personal priority list, so it
clearly doesn't *need* to make progress.



Maybe not all of the ideas in it were unambiguously good
[1], but personally, I'm convinced at least some of them are, and I
don't think I'm alone in thinking that. Even if the excuse is that its
original champion wasn't humble enough, there's something wrong with
the process if there doesn't exist some other potential champion with
the right balance of humility, confidence, interest and time who could
have taken it over in that timeframe.


No? It's not up to the community to find a champion for someone who wants a fork to happen. Either
someone thinks it's a good enough idea that they step up, or no one does. If no one does, then so be
it. If the original proposer (me, in this case) thought it was that important, then it's *their*
responsibility to be the champion, no one else's.



2) Many will argue that CTV has already done steps (1) through (3) above:
certainly there's been an idea, it's been socialised through giving talks,
having discussion forums, having research workshops [2], documenting use
cases use cases; there's been a concrete implementation for years now,
with a test network that supports the proposed feature, and new tools
that demonstrate some of the proposed use cases, and while alternative
approaches have been suggested [3], none of them have even really made
it to step (2), let alone step (3).


I don't really see how you can make this argument seriously. Honestly, if a soft-fork BIP only has 
one author on the list, then I'm not sure one can argue that step (3) has really been completed, and 
maybe not even step (2).



So that leaves a few possibilities
to my mind:



  * CTV should be in step (4), and its lack of definition is a problem,
and trying the "deal with it when we get there" approach is precisely
what didn't work back in April.

  * The evaluation process is too inconclusive: it should either be
saying "CTV is not good enough, fix these problems", or "CTV ha

Re: [bitcoin-dev] bitcoin-inquistion: evaluating soft forks on signet

2022-09-16 Thread Matt Corallo via bitcoin-dev

Apologies for any typos, somewhat jet-lagged atm.

On 9/16/22 3:15 AM, Anthony Towns via bitcoin-dev wrote:

Subhead: "Nobody expects a Bitcoin Inquistion? C'mon man, *everyone*
expects a Bitcoin Inquisition."

As we've seen from the attempt at a CHECKTEMPLATEVERIFY activation earlier
in the year [0], the question of "how to successfully get soft fork
ideas from concept to deployment" doesn't really have a good answer today.


I strongly disagree with this. Going back many, many years we've had many discussions about fork 
process, and the parts people (historically) agreed with tend to be:


(1) come up with an idea
(2) socialize the idea in the technical community, see if anyone comes up with any major issues or 
can suggest better ideas which solve the same use-cases in cleaner ways
(3) propose the concrete idea with a more well-defined strawman, socialize that, get some kind of 
rough consensus in the loosely-defined, subjective, "technical community" (ie just ask people and 
adapt to feedback until you have found some kind of average of the opinions of people you, the 
fork-champion, think are reasonably well-informed!).

(4) okay, admittedly beyond this is a bit less defined, but we can deal with it 
when we get there.

Turns out, the issue today is a lack of champions following steps 1-3; we can debate what the
correct answer is to step (4) once we actually have people who want to be champions, who are willing
to (humbly) push an idea forward towards rough agreement of the world of technical bitcoiners
(without which I highly doubt you'd ever see broader-community consensus).


Matt


Re: [bitcoin-dev] Vaulting (Was: Automatically reverting ("transitory") soft forks)

2022-04-23 Thread Matt Corallo via bitcoin-dev

Still trying to make sure I understand this concern, let me know if I get this 
all wrong.

On 4/22/22 10:25 AM, Russell O'Connor via bitcoin-dev wrote:
It's not the attacker's *only choice to succeed*. If an attacker steals the hot key, then they have
the option to simply wait for the user to unvault their funds of their own accord and then race /
outspend the user's transaction with their own. Indeed, this is what we expect would happen in the
dark forest.


Right, a key security assumption of the CTV-based vaults would be that you MUST NOT EVER withdraw
more in one go than your hot wallet risk tolerance; given that, your attack isn't any worse than
simply stealing the hot wallet key immediately after a withdrawal.


It does have the drawback that if you ever get a hot wallet key stolen you have to rotate all of your
CTV outputs, and your CTV outputs must never be any larger than your hot wallet risk tolerance
amount - both of which are somewhat frustrating limitations, but practical ones rather than
security ones.


And that's not even mentioning the issues already noted by the document regarding fee management, 
which would likely also benefit from a less constrained design for covenants.


Of course I've always been in favor of a less constrained covenants design from day one for ten 
reasons, but that's a whole other rabbit hole :)



Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-22 Thread Matt Corallo via bitcoin-dev



On 4/21/22 6:20 PM, David A. Harding wrote:

[Rearranging Matt's text in my reply so my nitpicks come last.]

On 21.04.2022 13:02, Matt Corallo wrote:

I agree, there is no universal best, probably. But is there a concrete
listing of a number of use-cases and the different weights of things,
plus flexibility especially around forward-looking designs?


I'm sure we could make a nice list of covenant usecases, but I don't know how we would assign 
reasonable objective weights to the different things purely through group foresight.  I know I'm 
skeptical about congestion control and enthusiastic about joinpools---but I've talked to developers 
I respect who've had the opposite opinions from me about those things.  The best way I know of to 
reconcile our differing opinions is to see what real Bitcoin users actually pay for.  But to do 
that, I think they must have a way to use covenants in something like the production environment.


To get good data for this kind of question you'd need much longer than five years, sadly. As we've
seen over and over again in Bitcoin, deploying very nontrivial things takes at least five years,
often more. While vaults may be deployed relatively more quickly, the fact that we haven't seen
(AFAIK) *anyone* deploy some of the key-deletion-based vault designs that have been floating around
for some time is an indication that even that probably wouldn't be deployed quickly.



You're also writing off [...] a community of
independent contributors who care about Bitcoin working together to
make decisions on what is or isn't the "right way to go" [...]. Why are you
suggesting it's something that you "don't know how to do"?


You said we should use the best design.  I said the different designs optimize for different things, 
so it's unlikely that there's an objective best.  That implies to me that we either need to choose a 
winner (yuck) or we need to implement more than one of the designs.  In either of those cases, 
choosing what to implement would benefit from data about how much the thing will be used and how 
much users will pay for it in fees.


I agree, there is no objective "best" design. But we can still explore design tradeoffs and utility
for different classes of covenants. I've seen relatively little of this so far, and from what I have
seen it's not been clear that CTV is really a good option, sadly.




Again, you're writing off the real and nontrivial risk of doing a fork
to begin with.


I agree this risk exists and it isn't my intention to write it off---my OP did say "we [must be] 
absolutely convinced CTV will have no negative effects on the holders or receivers of non-CTV 
coins."  I haven't been focusing on this in my replies because I think the other issues we've been 
discussing are more significant.  If we were to get everyone to agree to do a transitory soft fork, 
I think the safety concerns related to a CTV soft fork could be mitigated the same way we've 
mitigated them for previous soft forks: heaps of code review/testing and making sure a large part of 
the active community supports the change.


I'm not sure I made my point here clear - the nontrivial and real risk I was referring to was not
avoidable with "moar code review" or "careful analysis to make sure the proposed fork doesn't cause
damage". I mean issues that keep cropping up in many changes, like "people start threatening to run a
fork-causing client" or "some miners aren't validating blocks and end up creating a fork" or "some
people forget to upgrade and follow such a fork" - there are lots and lots of risks to doing a fork
that come from the process and nature of forks, and that have nothing to do with the actual details
of the fork itself.


Matt


Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-22 Thread Matt Corallo via bitcoin-dev




On 4/22/22 9:28 AM, James O'Beirne wrote:

 > There are at least three or four separate covenants designs that have
 > been posted to this list, and I don't see why we're even remotely
 > talking about a specific one as something to move forward with at
 > this point.

To my knowledge none of these other proposals (drafts, really) have
actual implementations let alone the many sample usages that exist for
CTV.


You can fix this! Don't point to something you can easily remedy in the short-term as an argument 
for or against something in the long-term.



Given that the "covenants" discussion has been ongoing for years
now, I think the lack of other serious proposals is indicative of the
difficulty inherent in coming up with a preferable alternative to CTV.


I'd think it's indicative of the lack of interest in serious covenants designs from many of the
highly-qualified people who could be working on them. There are many reasons for that. If there's
one positive thing from the current total mess, it's that hopefully there will be a renewed interest
in researching things and forming conclusions.




CTV is about as simple a covenant system as can be devised - its limits
relative to more "general" covenant designs notwithstanding.
The level of review around CTV's design is well beyond the other
sketches for possible designs that this list has seen.


[citation needed]

Matt


Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-21 Thread Matt Corallo via bitcoin-dev



On 4/21/22 3:28 PM, David A. Harding wrote:

On 21.04.2022 08:39, Matt Corallo wrote:

We add things to Bitcoin because (a) there's some demonstrated
use-cases and intent to use the change (which I think we definitely
have for covenants, but which only barely, if at all, suggests
favoring one covenant design over any other)


I'm unconvinced about CTV's use cases but others have made reasonable claims that it will be used.  
We could argue about this indefinitely, but I would love to give CTV proponents an opportunity to 
prove that a significant number of people would use it.


To be clear - I was not suggesting that CTV fell flat here. I think there *is* demand for Bitcoin
covenant designs, CTV included. I do *not* think there is demand for CTV *over* other covenant
designs. That's okay, though - it doesn't need that; we just have to be confident it's the right
direction.


I believe you got the impression I was arguing CTV did not meet my criteria list (a)-(d), but in
fact I only think it falls flat horribly on (c).



(b) because its
generally considered aligned with Bitcoin's design and goals, based on
developer and more broad community response


I think CTV fulfills this criteria.  At least, I can't think of any way BIP119 itself 
(notwithstanding activation concerns) violates Bitcoin's designs and goals.


I tend to agree.


(c) because the
technical folks who have/are wiling to spend time working on the
specific design space think the concrete proposal is the best design
we have


This is the criteria that most concerns me.  What if there is no universal best?  For example, I 
mentioned in my previous email that I'm a partisan of OP_CAT+OP_CSFS due to their min-max of 
implementation simplicity versus production flexibility.  But one problem is that spends using them 
would need to contain a lot of witness data.  In my mind, they're the best for experimentation and 
for proving the existence of demand for more optimized constructions.


I agree, there is no universal best, probably. But is there a concrete listing of a number of
use-cases and the different weights of things, plus flexibility especially around forward-looking
designs? You don't mention the lack of recursion in CTV vs CAT+CSFS, which is a *huge* difference in
the available design space for developers. This stuff is critical to get right, and we're barely even
talking about it, let alone in a position to decide something?



I do not see how we can make an argument for any specific covenant
under (c) here. We could just as well be talking about
TLUV/CAT+CHECKSIGFROMSTACK/etc, and nearly anyone who is going to use
CTV can probably just as easily use those instead - ie this has
nothing to do with "will people use it".


I'm curious how we as a technical community will be able to determine which is the best approach.  
Again, I like starting simple and general, gathering real usage data, and then optimizing for 
demonstrated needs. But the simplest and most general approaches seem to be too general for some 
people (because they enable recursive covenants), seemingly forcing us into looking only at 
application-optimized designs.  In that case, I think the main thing we want to know about these 
narrow proposals for new applications is whether the applications and the proposed consensus changes 
will actually receive significant use.  For that, I think we need some sort of test bed with real 
paying users, and ideally one that is as similar to Bitcoin mainnet as possible.


Again, you're writing off the real and nontrivial risk of doing a fork to begin with. You're also
writing off something organic that has happened without issue time and time again - a community of
independent contributors who care about Bitcoin working together to make decisions on what is or
isn't the "right way to go" is something we've all done collaboratively before. Why are you
suggesting it's something that you "don't know how to do"?


Again, my point *is not* "will people use CTV", I think they will. I think they would also use TLUV 
if that were activated for the exact same use-cases. I think they would also use CAT+CSFS if that 
were what was activated, again for the exact same use-cases. Given that, I'm not sure how your 
proposal teaches us anything at all, aside from "yes, there was demand for *some* kind of covenant".


Matt


Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-21 Thread Matt Corallo via bitcoin-dev



On 4/21/22 11:06 AM, David A. Harding wrote:

On 21.04.2022 04:58, Matt Corallo wrote:

On 4/20/22 6:04 PM, David A. Harding via bitcoin-dev wrote:

The main criticisms I'm aware of against CTV seem to be along the following 
lines:

1. Usage, either:
   a. It won't receive significant real-world usage, or
   b. It will be used but we'll end up using something better later
2. An unused CTV will need to be supported forever, creating extra maintenance
    burden, increasing security surface, and making it harder to evaluate later
    consensus change proposals due to their interactions with CTV


Also "is this even the way we should be going about covenants?"


I consider this to be a version of point 1b above.  If we find a better way for going about 
covenants, then we'll activate that and let CTV automatically be retired at the end of its five years.


If you still think your point is separate from point 1b, I would appreciate you 
helping me understand.


No, it's unrelated to whether CTV or any other system gets usage. If we were just concerned with
whether CTV would get usage over or under some other alternative proposal, then I could see an
argument for your proposal (though the nontrivial cost of any fork to Bitcoin would make me still
strongly disagree with such a way forward in principle).


Rather, I'm instead concerned with us designing something that is going to be the most flexible and
useful and hopefully private covenants design we can, because that doesn't just get users to use the
change to Bitcoin we paid some nontrivial change-cost to incorporate into Bitcoin's consensus
rules, but gets the most bang-for-our-buck. There are at least three or four separate covenants
designs that have been posted to this list, and I don't see why we're even remotely talking about a
specific one as something to move forward with at this point.


We don't add things to Bitcoin just to find out whether we can, full stop.

We add things to Bitcoin because (a) there's some demonstrated use-cases and intent to use the
change (which I think we definitely have for covenants, but which only barely, if at all, suggests
favoring one covenant design over any other), (b) because it's generally considered aligned with
Bitcoin's design and goals, based on developer and more broad community response, (c) because the
technical folks who have/are willing to spend time working on the specific design space think the
concrete proposal is the best design we have, and finally (d) because the implementation is
well-reviewed and complete.


I do not see how we can make an argument for any specific covenant under (c) here. We could just as 
well be talking about TLUV/CAT+CHECKSIGFROMSTACK/etc, and nearly anyone who is going to use CTV can 
probably just as easily use those instead - ie this has nothing to do with "will people use it".



the Bitcoin technical community (or at least those interested in
working on covenants) doesn't even remotely show any signs of
consensus around any concrete proposal,


This is also my assessment: neither CTV nor any other proposal currently has enough support to 
warrant a permanent change to the consensus rules.  My question to the list was whether we could use 
a transitory soft fork as a method for collecting real-world usage data about proposals.  E.g., a 
consensus change proposal could proceed along the following idealized path:


- Idea (individual or small group)
- Publication (probably to this list)
- Draft specification and implementation
- Riskless testing (integration tests, signet(s), testnet, etc)
- Money-at-stake testing (availability on a pegged sidechain, an altcoin similar to Bitcoin, or in 
Bitcoin via a transitory soft fork)

- Permanent consensus change


That all seems fine, except that doing a fork on Bitcoin has very nontrivial cost, both in terms of
ecosystem disruption and the possibility that anything goes wrong, not to mention code maintenance
(we cannot ever really remove the validation code for something - you still want to be able to
validate the historical chain). Plus, really, I'd love to see "technical community consensus"
somewhere in there - at least it's been something that has very roughly appeared for most previous
soft forks, at least among those who have time/willingness to work on the specific design being
proposed.


[other comments snipped because my responses would mostly have been rehashing 
the first response above].

Matt


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-10 Thread Matt Corallo via bitcoin-dev
This is great in theory, but I think it kinda misses *why* the complexity keeps creeping in. We
agree on (most of) the goals here, but the problem is that the goals explicitly lead to the
complexity; it's not some software engineering failure or imagination failure that leads to the
complexity.


On 2/10/22 14:40, James O'Beirne via bitcoin-dev wrote:
-snip-

# Purely additive feerate bumps should never be impossible

Any user should always be able to add to the incentive to mine any
transaction in a purely additive way. The countervailing force here
ends up being spam prevention (a la min-relay-fee) to prevent someone
from consuming bandwidth and mempool space with a long series of
infinitesimal fee-bumps.

A fee bump, naturally, should be given the same per-byte consideration
as a normal Bitcoin transaction in terms of relay and block space,
although it would be nice to come up with a more succinct
representation. This leads to another design principle:


This is where *all* the complexity comes from. If our goal is to "ensure a bump increases a miner's 
overall revenue" (thus not wasting relay for everyone else), then we precisely *do* need


> Special consideration for "what should be in the next
> block" and/or the caching of block templates seems like an imposing
> dependency

Whether a transaction increases a miner's revenue depends precisely on whether the transaction 
(package) being replaced is in the next block - if it is, you care about the absolute fee of the 
package and its replacement. If it is not in the next block (or, really, if it is away from any block 
boundary, further down in the mempool, where you can assume other transactions will appear around it over 
time), then you care about the fee *rate*, not the fee difference.
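
To make the two regimes concrete, here's a minimal sketch (in Python; purely 
illustrative, not Bitcoin Core's actual replacement policy):

    def replacement_increases_miner_revenue(old_fee: int, old_vsize: int,
                                            new_fee: int, new_vsize: int,
                                            in_next_block: bool) -> bool:
        # If the package being replaced would be in the next block anyway,
        # the miner only gains if the replacement pays more absolute fee.
        if in_next_block:
            return new_fee > old_fee
        # Deeper in the mempool, the miner gains if the replacement is
        # more attractive per-vbyte, i.e. has a higher feerate.
        return new_fee / new_vsize > old_fee / old_vsize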


> # The bandwidth and chain space consumed by a fee-bump should be minimal
>
> Instead of prompting a rebroadcast of the original transaction for
> replacement, which contains a lot of data not new to the network, it
> makes more sense to broadcast the "diff" which is the additive
> contribution towards some txn's feerate.

This entirely misses the network cost. Yes, sure, we can send "diffs", but if you send enough diffs 
eventually you send a lot of data. We cannot simply ignore network-wide costs like total relay 
bandwidth (or implementation runtime DoS issues).



# Special transaction structure should not be required to bump fees

In an ideal design, special structural foresight would not be needed
in order for a txn's feerate to be improved after broadcast.

Anchor outputs specified solely for CPFP, which amount to many bytes of
wasted chainspace, are a hack. It's probably uncontroversial at this


This has nothing to do with fee bumping, though; this is only solved with covenants or something in 
that direction, not with different relay policy.



Coming down to earth, the "tabula rasa" thought experiment above has led
me to favor an approach like the transaction sponsors design that Jeremy
proposed in a prior discussion back in 2020[1].


How does this not also fail your above criteria of not wasting block space?

Further, this doesn't solve pinning attacks at all. In lightning we want to be able to *replace* 
something in the mempool (or see it confirm soon, but that assumes we know exactly what transaction 
is in "the" mempool). Just being able to sponsor something doesn't help if you don't know what that 
thing is.


Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-15 Thread Matt Corallo via bitcoin-dev


> On Sep 13, 2021, at 21:56, Anthony Towns  wrote:
> I'm not sure that's really the question you want answered?

Of course it is? I’d like to understand the initial thinking and design 
analysis that went into this decision. That seems like an important question to 
ask when seeking changes in an existing system :).

> Mostly
> it's just "this is how mainnet works" plus "these are the smallest
> changes to have blocks be chosen by a signature, rather than entirely
> by PoW competition".
> 
> For integration testing across many services, I think a ten-minute-average
> between blocks still makes sense -- protocols relying on CSV/CLTV to
> ensure there's a delay they can use to recover funds, if they specify
> that in blocks (as lightning's to_self_delay does), then significant
> surges of blocks will cause uninteresting bugs. 

Hmm, why would blocks coming quicker lead to a bug? I certainly hope no one has 
a bug if their block time is faster than one per ten minutes. I presume here you 
mean something like “if the node can’t keep up with the block rate”, but I 
certainly hope the benchmark for that isn’t 10 minutes, or really even one.

> It would be easy enough to change things to target an average of 2 or
> 5 minutes, I suppose, but then you'd probably need to propagate that
> logic back into your apps that would otherwise think 144 blocks is around
> about a day.

Why? One useful thing for testing is compressing real time. More broadly, the 
only issues that I’ve heard around block times in testnet3 are the 
inconsistency and, rarely, software failing to keep up at all.

> We could switch back to doing blocks exactly every 10 minutes, rather
> than a poisson-ish distribution in the range of 1min to 60min, but that
> doesn't seem like that huge a win, and makes it hard to test that things
> behave properly when blocks arrive in bursts.

Hmm, I suppose? If you want to test bursts, though, the upper bound doesn’t need 
to be 100 minutes - it could be 10.

> Best of luck to you then? Nobody's trying to sell you on a subscription
> plan to using signet.


lol, yes, I’m aware of that, nor did I mean to imply that anything has to be 
targeted at a specific person’s requirements. Rather, my point here is that I’m 
really confused as to who the target user *is*, because we should be building 
products with target users in mind, even if those targets are often “me” for 
open source projects.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-13 Thread Matt Corallo via bitcoin-dev


> On Sep 13, 2021, at 05:30, Michael Folkson  wrote:
> 
> 
>> 
>> Can you explain the motivation for this? From where I sit, as far as I know, 
>> I should basically be a prime example of the target market for public 
>> signet - someone developing bitcoin applications with regular requirements 
>> to test those applications with other developers without 
>> jumping through hoops to configure software the same across the globe and 
>> set up miners. With blocks being slow and irregular, I’m basically not 
>> benefited at all by signet and will 
>> stick with testnet3/mainnet testing, which both suck.
> 
> On testnet3 you can realistically go days without blocks being found
> (and conversely thousands of blocks can be found in a day), the block
> discovery time variance is huge. Of course this is probabilistically
> possible on mainnet too but the probability of this happening is close
> to zero. Here[0] is an example of 16,000 blocks being found in a day
> on testnet3.

Blocks coming too fast isn’t generally an issue when waiting for blocks to test, and 
hooking up a miner is probably less work on testnet3 than creating a 
multi-party private signet with miners. In any case, you didn’t address the 
substance of the point - we can do better to make it a good platform for 
testing. Why aren’t we?
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-12 Thread Matt Corallo via bitcoin-dev


> On Sep 12, 2021, at 00:53, Anthony Towns  wrote:
> 
> On Thu, Sep 09, 2021 at 05:50:08PM -0700, Matt Corallo via bitcoin-dev wrote:
>>> AJ proposed to allow SigNet users to opt-out of reorgs in case they
>>> explicitly want to remain unaffected. This can be done by setting a
>>> to-be-reorged version bit [...]
>> Why bother with a version bit? This seems substantially more complicated
>> than the original proposal that surfaced many times before signet launched
>> to just have a different reorg signing key.
> 
> Yeah, that was the original idea, but there ended up being two problems
> with that approach. The simplest is that the signet block signature
> encodes the signet challenge,

But if that was the original proposal, why is the challenge committed to in 
the block? :)

> So using the RECENT_CONSENSUS_CHANGE behaviour that avoids the
> discourage/disconnect logic seems the way to avoid that problem, and that
> means making it so that nodes that opt out of reorgs can distinguish
> valid-but-will-become-stale blocks from invalid blocks. Using a versionbit
> seems like the easiest way of doing that.

Sure, you could set that for invalid block signatures as well though. It’s not 
really a material DoS protection one way or the other.

>>> The reorg-interval X very much depends on the user's needs. One could
>>> argue that there should be, for example, three reorgs per day, each 48
>>> blocks apart. Such a short reorg interval allows developers in all time
>>> zones to be awake during one or two reorgs per day. Developers don't
>>> need to wait for, for example, a week until they can test their reorgs
>>> next. However, too frequent reorgs could hinder other SigNet users.
>> I see zero reason whatsoever to not simply reorg ~every block, or as often
>> as is practical. If users opt in to wanting to test with reorgs, they should
>> be able to test with reorgs, not wait a day to test with reorgs.
> 
> Blocks on signet get mined at a similar rate to mainnet, so you'll always
> have to wait a little bit (up to an hour) -- if you don't want to wait
> at all, that's what regtest (or perhaps a custom signet) is for.

Can you explain the motivation for this? From where I sit, as far as I know, I 
should basically be a prime example of the target market for public signet - 
someone developing bitcoin applications with regular requirements to test those 
applications with other developers without jumping through hoops to configure 
software the same across the globe and set up miners. With blocks being slow 
and irregular, I’m basically not benefited at all by signet and will stick with 
testnet3/mainnet testing, which both suck.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-10 Thread Matt Corallo via bitcoin-dev
Fwiw, your email client is broken and does not properly quote in the plaintext copy. I believe this 
is a known gmail bug, but I'd recommend avoiding gmail's web interface for list posting :).


On 9/10/21 12:00, Michael Folkson wrote:

Huh? Why would the goal be to match mainnet? The goal, as I understand it, is 
to allow software to

use SigNet without modification *to make testing simpler* - keep the
header format the same to let
SPV clients function without (significant) modification, etc. The
point of the whole thing is to
make testing as easy as possible, why would we do otherwise.

I guess Kalle (and AJ) can answer this question better than me but my
understanding is that the motivation for Signet was that testnet
deviated erratically from mainnet behavior (e.g. long delays before
any blocks were mined followed by a multitude of blocks mined in a
short period of time) which meant it wasn't conducive to normal
testing of applications. Why would you want a mainnet like chain? To
check if your application works on a mainnet like chain without
risking any actual value before moving to mainnet. The same purpose as
testnet but more reliably resembling mainnet behavior. You are well
within your rights to demand more than that but my preference would be
to push some of those demands to custom signets rather than the
default Signet.


Huh? You haven't made an argument here as to why such a chain is easier to test with, only that we 
should "match mainnet". Testing on mainnet sucks; 99% of the time testing on mainnet involves no 
reorgs, which *doesn't* match the in-the-field reality of mainnet, with its occasional reorgs. Matching 
mainnet's behavior is, in fact, a terrible way to test whether your application will run fine on mainnet.


My point is that the goal should be making it easier to test. I'm not entirely sure why there's 
debate here.  I *regularly* have lunch late because I'm waiting for blocks either on mainnet or 
testnet3, and would quite like to avoid that in the future. It takes *forever* to test things on 
mainnet and testnet3; matching their behavior would mean it's equally impossible to test things on 
signet - why is that something we should strive for?




Testing out proposed soft forks in advance of them being considered
for activation would already be introducing a dimension of complexity
that is going to be hard to manage [0]. I'm generally of the view that
if you are going to introduce a complexity dimension, keep the other
dimensions as vanilla as possible. Otherwise you are battling
complexity in multiple different dimensions and it becomes hard or
impossible to maintain it and meet your initial objectives.


Yep! Great reason to not have any probabilistic nonsense or try to match mainnet or something on 
signet - just make it deterministic, reorg once a block or twice an hour or whatever, and call it a day!


Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-10 Thread Matt Corallo via bitcoin-dev




On 9/10/21 06:05, Michael Folkson wrote:

I see zero reason whatsoever to not simply reorg ~every block, or as often as 
is practical. If users opt in to wanting to test with reorgs, they should be 
able to test with reorgs, not wait a day to test with reorgs.


One of the goals of the default Signet was to make the default Signet
resemble mainnet as much as possible. (You can do whatever you want on
a custom signet you set up yourself including manufacturing a re-org
every block if you wish.) Hence I'm a bit wary of making the behavior
on the default Signet deviate significantly from what you might
experience on mainnet. Given re-orgs don't occur that often on mainnet
I can see the argument for making them more regular (every 8 hours
seems reasonable to me) on the default Signet but every block seems
excessive. It makes the default Signet into an environment for purely
testing whether your application can withstand various flavors of edge
case re-orgs. You may want to test whether your application can
withstand normal mainnet behavior (no re-orgs for long periods of
time) first before you concern yourself with re-orgs.


Huh? Why would the goal be to match mainnet? The goal, as I understand it, is to allow software to 
use SigNet without modification *to make testing simpler* - keep the header format the same to let 
SPV clients function without (significant) modification, etc. The point of the whole thing is to 
make testing as easy as possible, why would we do otherwise.


Further, because one goal here is to enable clients to opt in or out of the reorg chain at will 
(presumably by just changing one config flag in bitcoin.conf), why would we worry about making it 
"similar to mainnet". If users want an experience "similar to mainnet", they can simply turn off 
reorgs and they'll see a consistent chain moving forward which never reorgs, similar to the 
practical experience of mainnet.


Once you've opted into reorgs, you almost certainly are looking to *test* reorgs - you just 
restarted Bitcoin Core with the reorg flag set, and waiting around for a reorg after doing that seems 
like the experience of testnet3 today, which is the whole reason we wanted signet to begin with - 
things happen sporadically and inconsistently, making developers wait around forever. Please let's 
not replicate the "gotta wait for blocks before I can go to lunch" experience of testnet today on 
signet; I'm tired of eating lunch late.



Why bother with a version bit? This seems substantially more complicated than 
the original proposal that surfaced many times before signet launched to just 
have a different reorg signing key. Thus, users who wish to follow reorgs can 
use a 1-of-2 (or higher multisig) and users who wish to not follow reorgs would 
use a 1-of-1 (or higher multisig), simply marking the reorg blocks as invalid 
without touching any header bits that non-full clients will ever see.


If I understand this correctly this is introducing a need for users to
sign blocks when currently with the default Signet the user does not
need to concern themselves with signing blocks. That is entirely left
to the network block signers of the default Signet (who were AJ and
Kalle last time I checked). Again I don't think this additional
complexity is needed on the default Signet when you can set up your
own custom Signet if you want to test edge case scenarios that deviate
significantly from what you are likely to experience on mainnet. A
flag set via a configuration argument (the AJ, 0xB10C proposal) with
no-reorgs (or 8 hour re-orgs) as the default seems to me like it would
introduce no additional complexity to the casual (or alpha stage)
tester experience though of course it introduces implementation
complexity.

To move the default Signet in the direction of resembling mainnet even
closer would be to randomly generate batches of transactions to fill
up blocks and create a fee market. It would be great to be able to
test features like RBF and Lightning unhappy paths (justice
transactions, perhaps even pinning attacks etc) on the default Signet
in future.


I believe my suggestion was not correctly understood. I'm not suggesting *users* sign blocks or 
otherwise do anything manually here, only that the existing block producers each generate a new key, 
and we then only sign reorgs with *those* keys. Users will be able to set a flag to indicate "I want 
to accept sigs from either set of keys, and see reorgs" or "I only want sigs from the non-reorg 
keys, and will consider blocks signed with the reorg keys invalid".


Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-09 Thread Matt Corallo via bitcoin-dev


On 9/7/21 09:07, 0xB10C via bitcoin-dev wrote:

Hello,

tl;dr: We want to make reorgs on SigNet a reality and are looking for
feedback on approach and parameters.


Awesome!


AJ proposed to allow SigNet users to opt-out of reorgs in case they
explicitly want to remain unaffected. This can be done by setting a
to-be-reorged version bit flag on the blocks that won't end up in the
most work chain. Node operators could choose not to accept to-be-reorged
SigNet blocks with this flag set via a configuration argument.


Why bother with a version bit? This seems substantially more complicated than the original proposal 
that surfaced many times before signet launched to just have a different reorg signing key. Thus, 
users who wish to follow reorgs can use a 1-of-2 (or higher multisig) and users who wish to not 
follow reorgs would use a 1-of-1 (or higher multisig), simply marking the reorg blocks as invalid 
without touching any header bits that non-full clients will ever see.
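
As a sketch of what that could look like (the keys below are placeholders, not 
the actual signet keys, and the challenge encoding is simplified):

    # Placeholder pubkeys - NOT the real signet block-signing keys.
    NONREORG_KEY = "02aa..."
    REORG_KEY = "03bb..."

    # Nodes that want to see reorgs validate block signatures against a
    # 1-of-2 challenge, accepting blocks signed with either key.
    FOLLOW_REORGS_CHALLENGE = f"1 {NONREORG_KEY} {REORG_KEY} 2 OP_CHECKMULTISIG"

    # Nodes that opt out validate against a 1-of-1 challenge, so blocks
    # signed only with the reorg key simply fail signature validation.
    IGNORE_REORGS_CHALLENGE = f"1 {NONREORG_KEY} 1 OP_CHECKMULTISIG"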



The reorg-interval X very much depends on the user's needs. One could
argue that there should be, for example, three reorgs per day, each 48
blocks apart. Such a short reorg interval allows developers in all time
zones to be awake during one or two reorgs per day. Developers don't
need to wait for, for example, a week until they can test their reorgs
next. However, too frequent reorgs could hinder other SigNet users.


I see zero reason whatsoever to not simply reorg ~every block, or as often as is practical. If users 
opt in to wanting to test with reorgs, they should be able to test with reorgs, not wait a day to 
test with reorgs.



We propose that the reorg depth D is deterministically random between a
minimum and a maximum based on, e.g., the block hash or the nonce of the
last block before the reorg. Compared to a local randint() based
implementation, this allows reorg-handling tests and external tools to
calculate the expected reorg depth.
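
As a sketch of the kind of derivation being proposed (the depth range and the 
choice of bytes here are assumptions for illustration, not part of the proposal):

    MIN_DEPTH, MAX_DEPTH = 1, 6

    def expected_reorg_depth(prev_block_hash_hex: str) -> int:
        # Map the last 8 bytes of the pre-reorg block hash into
        # [MIN_DEPTH, MAX_DEPTH]. Anyone who sees the block hash can
        # compute the same depth, unlike with a local randint().
        n = int.from_bytes(bytes.fromhex(prev_block_hash_hex)[-8:], "big")
        return MIN_DEPTH + n % (MAX_DEPTH - MIN_DEPTH + 1)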

# Scenario 1: Race between two chains

For this scenario, at least two nodes and miner scripts need to be
running. An always-miner A continuously produces blocks and rejects
blocks with the to-be-reorged version bit flag set. And a race-miner R
that only mines D blocks at the start of each interval and then waits X
blocks. A and R both have the same hash rate. Assuming both are well
connected to the network, it's random which miner will first mine and
propagate a block. In the end, the A miner chain will always win the race.

# Scenario 2: Chain rollback

This scenario only requires one miner and Bitcoin Core node but also
works in a multiminer setup. The miners mine D blocks with the
to-be-reorged version bit flag set at the start of the interval. After
allowing the block at height X+D to propagate, they invalidate the block
at height X+1 and start mining on block X again. This time without
setting the to-be-reorged version bit flag. Non-miner nodes will reorg
to the new tip at height X+D+1, and the first-seen branch stalls.


Both seem reasonable. I'm honestly not sure what software cases would be hit differently between one 
or the other, as long as reorgs happen regularly and at random depth. Nodes should presumably only 
ever be following one chain.



# Questions

    1. How do you currently test your application's reorg handling? Do
       the two discussed scenarios (race and chain rollback) cover your
       needs? Are we missing something you'd find helpful?

    2. How often should reorgs happen on the default SigNet? Should
       there be multiple reorgs a day (e.g., every 48 or 72 blocks,
       assuming 144 blocks per day), so your engineers can be awake for
       them? Do you favor less frequent reorgs (once per week or month)? Why?

    3. How deep should the reorgs be on average? Do you want to test
       deeper reorgs (10+ blocks) too?


6 is the "standard" confirmation window for mainnet. It's arguably much too low, but for testing 
purposes we've gotta pick something, so that seems reasonable?


Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-09 Thread Matt Corallo via bitcoin-dev
Thanks for taking the time to write this up!

To wax somewhat broadly here, I’m very excited about this as a direction for 
bitcoin covenants. Other concrete proposals seem significantly more limited, 
which worries me greatly. Further, this feels very “taproot-native” in a way 
that encourages utilizing taproot’s features fully while building covenants, 
saving fees on chain and at least partially improving privacy.

I’ve been saying we need more covenants research and proposals before we move 
forward with one and this is a huge step in that direction, IMO. With Taproot 
activating soon, I’m excited for what coming forks bring.

Matt

> On Sep 8, 2021, at 23:42, Anthony Towns via bitcoin-dev 
>  wrote:
> 
> Hello world,
> 
> A couple of years ago I had a flight of fancy [0] imagining how it
> might be possible for everyone on the planet to use bitcoin in a
> mostly decentralised/untrusted way, without requiring a block size
> increase. It was a bit ridiculous and probably doesn't quite hold up,
> and beyond needing all the existing proposals to be implemented (taproot,
> ANYPREVOUT, CTV, eltoo, channel factories), it also needed a covenant
> opcode [1]. I came up with something that I thought fit well with taproot,
> but couldn't quite figure out how to use it for anything other than my
> ridiculous scheme, so left it at that.
> 
> But recently [2] Greg Maxwell emailed me about his own cool idea for a
> covenant opcode, which turned out to basically be a reinvention of the
> same idea but with more functionality, a better name and a less fanciful
> use case; and with that inspiration, I think I've also now figured out
> how to use it for a basic vault, so it seems worth making the idea a
> bit more public.
> 
> I'll split this into two emails, this one's the handwavy overview,
> the followup will go into some of the implementation complexities.
> 
> 
> 
> The basic idea is to think about "updating" a utxo by changing the
> taproot tree.
> 
> As you might recall, a taproot address is made up from an internal public
> key (P) and a merkle tree of scripts (S) combined via the formula Q=P+H(P,
> S)*G to calculate the scriptPubKey (Q). When spending using a script,
> you provide the path to the merkle leaf that has the script you want
> to use in the control block. The BIP has an example [3] with 5 scripts
> arranged as ((A,B), ((C,D), E)), so if you were spending with E, you'd
> reveal a path of two hashes, one for (AB), then one for (CD), then you'd
> reveal your script E and satisfy it.
> 
> So that makes it relatively easy to imagine creating a new taproot address
> based on the input you're spending by doing some or all of the following:
> 
> * Updating the internal public key (ie from P to P' = P + X)
> * Trimming the merkle path (eg, removing CD)
> * Removing the script you're currently executing (ie E)
> * Adding a new step to the end of the merkle path (eg F)
> 
> Once you've done those things, you can then calculate the new merkle
> root by resolving the updated merkle path (eg, S' = MerkleRootFor(AB,
> F, H_TapLeaf(E))), and then calculate a new scriptPubKey based on that
> and the updated internal public key (Q' = P' + H(P', S')*G).
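
[As a rough sketch of that tweak arithmetic in Python, using the python-ecdsa 
library; this ignores the x-only-key parity handling that BIP341 also specifies:

    import hashlib
    from ecdsa import SECP256k1

    G, n = SECP256k1.generator, SECP256k1.order

    def tagged_hash(tag: bytes, msg: bytes) -> bytes:
        t = hashlib.sha256(tag).digest()
        return hashlib.sha256(t + t + msg).digest()

    def output_key(P, merkle_root: bytes):
        # Q = P + H(P, S)*G: tweak the internal key P by the tagged hash
        # of its x-only encoding and the script merkle root S.
        px = P.x().to_bytes(32, "big")
        t = int.from_bytes(tagged_hash(b"TapTweak", px + merkle_root), "big") % n
        return t * G + P

A spender doing such an update would recompute this with P' and S' to get the 
new scriptPubKey Q'.]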
> 
> So the idea is to do just that via a new opcode "TAPLEAF_UPDATE_VERIFY"
> (TLUV) that takes three inputs: one that specifies how to update the
> internal public key (X), one that specifies a new step for the merkle path
> (F), and one that specifies whether to remove the current script and/or
> how many merkle path steps to remove. The opcode then calculates the
> scriptPubKey that matches that, and verifies that the output corresponding
> to the current input spends to that scriptPubKey.
> 
> That's useless without some way of verifying that the new utxo retains
> the bitcoin that was in the old utxo, so also include a new opcode
> IN_OUT_AMOUNT that pushes two items onto the stack: the amount from this
> input's utxo, and the amount in the corresponding output, and then expect
> anyone using TLUV to use maths operators to verify that funds are being
> appropriately retained in the updated scriptPubKey.
> 
> 
> 
> Here's two examples of how you might use this functionality.
> 
> First, a basic vault. The idea is that funds are ultimately protected
> by a cold wallet key (COLD) that's inconvenient to access but is as
> safe from theft as possible. In order to make day to day transactions
> more convenient, a hot wallet key (HOT) is also available, which is
> more vulnerable to theft. The vault design thus limits the hot wallet
> to withdrawing at most L satoshis every D blocks, so that if funds are
> stolen, you lose at most L, and have D blocks to use your cold wallet
> key to re-secure the funds and prevent further losses.
> 
> To set this up with TLUV, you construct a taproot output with COLD as
> the internal public key, and a script that specifies:
> 
> * The tx is signed via HOT
> *  CSV -- there's a relative time lock since the last spend
> * If the input 

Re: [bitcoin-dev] Removing the Dust Limit

2021-08-08 Thread Matt Corallo via bitcoin-dev
If it weren't for the implications of changing standardness here, I think we should consider increasing the dust limit 
instead.


The size of the UTXO set is a fundamental scalability constraint of the system. In fact, with proposals like 
assume-utxo/background history sync it is arguably *the* fundamental scalability constraint of the system. Today's dust 
limit is incredibly low - it's based on a feerate of only 3 sat/vByte, chosen so that claiming the UTXO has *any* value at all, 
not so that it has enough value to be worth bothering with. As feerates have gone up over time, and as we expect them to go up 
further, we should be considering drastically increasing the 3 sat/vByte basis to something more like 20 sat/vB.
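
To put rough numbers on that basis (approximating Bitcoin Core's dust 
arithmetic for a P2WPKH output; Core's integer rounding yields the familiar 
294 sats at 3 sat/vB):

    P2WPKH_OUTPUT_SIZE = 31   # 8 value + 1 script-length + 22 script bytes
    P2WPKH_SPEND_VSIZE = 67   # outpoint + scriptSig len + nSequence + witness/4

    def dust_threshold(feerate_sat_per_vb: int) -> int:
        # An output below this value costs more to spend than it is worth
        # at the given feerate basis.
        return (P2WPKH_OUTPUT_SIZE + P2WPKH_SPEND_VSIZE) * feerate_sat_per_vb

    print(dust_threshold(3))    # 294 sats - roughly today's limit
    print(dust_threshold(20))   # 1960 sats - with a 20 sat/vB basis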


Matt

On 8/8/21 14:52, Jeremy via bitcoin-dev wrote:

We should remove the dust limit from Bitcoin. Five reasons:

1) it's not our business what outputs people want to create


It is precisely our business - the costs are borne by us, not the creator. If someone wants to create outputs which don't 
make sense to spend, they can do so using OP_RETURN, since they won't spend them anyway.



2) dust outputs can be used in various authentication/delegation smart contracts


So can low-value-but-enough-to-be-worth-spending-when-you're-done-with-them 
outputs.

3) dust sized htlcs in lightning 
(https://bitcoin.stackexchange.com/questions/46730/can-you-send-amounts-that-would-typically-be-considered-dust-through-the-light 
) 
force channels to operate in a semi-trusted mode which has implications (AFAIU) for the regulatory classification of 
channels in various jurisdictions; agnostic treatment of fund transfers would simplify this (like getting a 0.01 cent 
dividend check in the mail)


This is unrelated to the consensus dust limit. This is related to the practical question about the value of claiming an 
output. Again, the appropriate way to solve this instead of including spendable dust outputs would be an OP_RETURN 
output (though I believe this particular problem is actually better solved elsewhere in the lightning protocol).



4) thinly divisible colored coin protocols might make use of sats as value 
markers for transactions.


These schemes can and should use values which make them economical to spend. The whole *point* of the dust limit is to 
encourage people to use values which make sense economically to "clean up" after they're done with them. If people want 
to use outputs which they will not spend/"clean up" later, they should be using OP_RETURN.



5) should we ever do confidential transactions we can't prevent it without 
compromising privacy / allowed transfers


This is the reason the dust limit is not a *consensus* limit. If and when CT were to happen we can and would relax the 
standardness rules around the dust limit to allow for CT.




The main reasons I'm aware of to not allow dust creation are:

1) dust is spam
2) dust fingerprinting attacks


3) The significant costs to every miner and full node operator.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Unlimited covenants, was Re: CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-05 Thread Matt Corallo via bitcoin-dev
I find this point to be incredibly important. Indeed I, like several others, have historically been concerned about 
covenants in their unbounded form. However, as more and more research has been done into what they can accomplish, the 
weight of such arguments naturally has to be reduced. More importantly, AJ's point here neuters anti-covenant 
arguments rather strongly.


Matt

On 7/5/21 01:04, Anthony Towns via bitcoin-dev wrote:

On Sun, Jul 04, 2021 at 09:02:25PM -0400, Russell O'Connor via bitcoin-dev 
wrote:

Bear in mind that when people are talking about enabling covenants, we are
talking about whether OP_CAT should be allowed or not.


In some sense multisig *alone* enables recursive covenants: a government
that wants to enforce KYC can require that funds be deposited into
a multisig of "2 <recipient> <gov_key> 2 CHECKMULTISIG", and that
"recipient" has gone through KYC. Once deposited to such an address,
the gov can refuse to sign with gov_key unless the funds are being spent
to a new address that follows the same rules.

(That's also more efficient than an explicit covenant since it's all
off-chain -- recipient/gov_key can jointly sign via taproot/MuSig at
that point, so that full nodes are only validating a single pubkey and
signature per spend, rather than having to do analysis of whatever the
underlying covenant is supposed to be [0])

This is essentially what Liquid already does -- it locks bitcoins into
a multisig and enforces an "off-chain" covenant that those bitcoins can
only be redeemed after some valid set of signatures are entered into
the Liquid blockchain. Likewise for the various BTC-on-Ethereum tokens.
To some extent, likewise for coins held in exchanges/custodial wallets
where funds can be transferred between customers off-chain.

You can "escape" from that recursive covenant by having the government
(or Liquid functionaries, or exchange admins) change their signing
policy of course; but you could equally escape any consensus-enforced
covenant by having a hard fork to stop doing consensus-enforcement (cf
ETH Classic?). To me, that looks more like a difference of procedure
and difficulty, rather than a fundamental difference in kind.

Cheers,
aj

[0] https://twitter.com/pwuille/status/1411533549224693762

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Reminder on the Purpose of BIPs

2021-04-25 Thread Matt Corallo via bitcoin-dev

Alright, let's see...

Sorting by most recently updated...
https://github.com/bitcoin/bips/pulls?page=1&q=is%3Apr+is%3Aopen+sort%3Aupdated-asc+updated%3A%3E2021-01-01

#1104 has been updated nearly daily for the past many weeks. You commented 12 days ago saying "Concept NACK" (which 
isn't a thing on BIPs - huh? they're author documents, as you're well aware), and nothing further.


#1105 which is less recently updated by one on the above list has a comment 
from you 19 hours ago.

I'm really not sure what playing dumb gets you here. It's really transparent 
and isn't helpful in any way.

In general, I think it's time we all agree the BIP process has simply failed and move on. Luckily it's not really all that 
critical, and proposed protocol documents can be placed nearly anywhere with the same effect.


Matt

On 4/25/21 17:22, Luke Dashjr wrote:

On Sunday 25 April 2021 21:14:08 Matt Corallo wrote:

On 4/25/21 17:00, Luke Dashjr wrote:

I will not become an accomplice to this deception by giving special
treatment, and will process the BIP PR neutrally according to the
currently-defined BIP process.


Again, please don't play dumb, no one watching believes this - you've been
active on the BIP repo on numerous PRs and this has never in the past been
the case.


I started going through PRs a few days ago, in order of "Recently updated" on
GitHub, starting with the least-recent following the last one I triaged a
month ago that hasn't seen activity - the same as I have been doing month
after month prior to this.

If you don't believe me, feel free to look through the repo history.

Luke


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Reminder on the Purpose of BIPs

2021-04-25 Thread Matt Corallo via bitcoin-dev

On 4/25/21 17:00, Luke Dashjr wrote:

On Sunday 25 April 2021 20:29:44 Matt Corallo wrote:

If the BIP editor is deliberately refusing to accept changes which have the
author's approval (which appears to be occurring here),


It isn't. I am triaging BIPs PRs the same as I have for years, and will get to
them all in due time, likely before the end of the month.


Please don't play dumb, it isn't a good look.


Rather, what we have going on is a few bad actors trying to misportray the
BIPs as an approval process so they can pretend ST is somehow official, or
that the preexisting Core+Taproot client is "breaking" the spec. And to
further their agenda, they have been harassing me demanding special
treatment.


I'd be curious who is doing that, because obviously I'd agree that merging something in a BIP doesn't really have any 
special meaning. This, however, is a completely different topic from following the BIP process that you had a key hand 
in crafting.



I will not become an accomplice to this deception by giving special treatment,
and will process the BIP PR neutrally according to the currently-defined BIP
process.


Again, please don't play dumb, no one watching believes this - you've been active on the BIP repo on numerous PRs and 
this has never in the past been the case.



Despite the continual harassment, I have even made two efforts to try to
(fairly) make things faster, and have been obstructed both times by ST
advocates. It appears they intend to paint me as "deliberately refusing" (to
use your words) in order to try to put Bitcoin and the BIP process under
their control, and abuse it in the same manner in which they abused Bitcoin
Core's usual standards (by releasing ST without community consensus).

Luke


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Reminder on the Purpose of BIPs

2021-04-25 Thread Matt Corallo via bitcoin-dev

There appears to be some severe lack of understanding of the point of the BIP 
process here.

The BIP process exists to be a place for those in the Bitcoin development community (which includes anyone who wishes to 
participate in it!) to place specifications which may be important for others in the Bitcoin development community to 
see, to ensure interoperability.


It does not, should not, and has never existed to take any positions on...anything. It has always existed to allow those 
who wish to participate in the Bitcoin development community to publish proposed standards or deployed protocols, in 
whatever form the authors of the BIPs seem fit.


If anyone suggests changes to a BIP's proposed form in a way the original author does not agree with, they have always 
been free to - and should - simply create a new BIP with their proposed form.


The BIP editor's role has always been, and should continue to be, to encourage BIP authors to respond to (either by 
dismissing or accepting) feedback on their BIPs, and encourage formatting in a standard form. The BIP editor's role has 
never included, and should not include, taking a stance on substantive changes to a BIP's contents - those are up to the 
author(s) of a BIP, and always have been.


If the BIP editor is deliberately refusing to accept changes which have the author's approval (which appears to be occurring 
here), the broader development community (us) should either remove the BIP editor and replace them, or simply ignore the 
BIP repository entirely (which seems like the most likely outcome here). There really should be no debate over this 
point, and I'm not entirely sure why anyone would think there should be.


Luckily BIPs aren't really all that critical in this instance - they exist to communicate protocols for 
interoperability, and in this case the protocol changes as proposed have been broadly communicated already.


Still, given the apparent lack of desire to remove the BIP editor in this case, I'd suggest we all move on and simply 
ignore the BIP repository entirely. Simply sending notices of protocol systems to this mailing list is likely sufficient.


Matt

On 4/23/21 11:34, Antoine Riard via bitcoin-dev wrote:

Hi Luke,

For the record, and for the subscribers of this list not following #bitcoin-core-dev, this mail follows a discussion which 
took place during yesterday's IRC meeting.

Logs here : http://gnusha.org/bitcoin-core-dev/2021-04-22.log 


I'll reiterate the opinion I expressed during the meeting. If this proposal to extend the BIP editorship membership doesn't 
satisfy the parties involved or anyone in the community, I'm strongly opposed to having the matter settled by admins of the 
Bitcoin GitHub org. I believe that defects or uncertainty in the BIP Process shouldn't be resolved through GH janitorial roles, 
and I don't think those roles bestow the authority to intervene in case of loopholes. Further, you have far more contributors involved 
in the BIP Process than only Bitcoin Core ones. FWIW, setting such a precedent would be quite similar to directly lobbying 
GH staff...


Unless we would harm Bitcoin users by not acting, I think we should always be respectful of procedural forms. And in the 
absence of such forms, stay patient until a solution satisfies everyone.


I would recommend the BIP editorship, whether extended or not, move to its own 
repository in the future.

Cheers,
Antoine




On Thu, Apr 22, 2021 at 22:09, Luke Dashjr via bitcoin-dev wrote:


Unless there are objections, I intend to add Kalle Alm as a BIP editor to
assist in merging PRs into the bips git repo.

Since there is no explicit process to adding BIP editors, IMO it should be
fine to use BIP 2's Process BIP progression:

 > A process BIP may change status from Draft to Active when it achieves
 > rough consensus on the mailing list. Such a proposal is said to have
 > rough consensus if it has been open to discussion on the development
 > mailing list for at least one month, and no person maintains any
 > unaddressed substantiated objections to it.

A Process BIP could be opened for each new editor, but IMO that is
unnecessary. If anyone feels there is a need for a new Process BIP, we can 
go
that route, but there is prior precedent for BIP editors appointing new BIP
editors, so I think this should be fine.

Please speak up soon if you disagree.

Luke
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org 

https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev



Re: [bitcoin-dev] Proposed BIP editor: Kalle Alm

2021-04-24 Thread Matt Corallo via bitcoin-dev
What is preventing the BIP maintainership role from moving to a bot? It does seem like a bot should be able to do a fine 
job given the explicit criteria (though ignoring obvious spam is often nice, it's by no means a requirement).


Given recent events where humans have acted like humans, it seems a move to 
a bot may be warranted.

Matt

On 4/24/21 00:42, Greg Maxwell via bitcoin-dev wrote:

I am opposed to the addition of Kalle Alm at this time.

Those who believe that adding him will resolve the situation with
Luke-jr's inappropriate behavior re: PR1104 are mistaken.



27e59ffd51ee5a95d0e0faff70e045faca10b00015e90abc1c8de48b1dfff40c
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Update on "Speedy" Trial: The circus rolls on

2021-04-08 Thread Matt Corallo via bitcoin-dev
Probably worth noting, but while the coin toss was acceptable to many people as "who cares, just move on", the two 
authors of actual code for the two proposals here also came to an agreement on a way forward, so it's not like it was a 
"coin toss to overrule everyone on 'the other side'".


On 4/8/21 10:30, Andrew Poelstra via bitcoin-dev wrote:

On Thu, Apr 08, 2021 at 12:40:42PM +0100, Michael Folkson via bitcoin-dev wrote:


All of this makes me extremely uncomfortable and I dread to think what
individuals and businesses all over the world who have plans to
utilize and build on Taproot are making of all of this. As an
individual I would like to distance myself from this circus. I will
try to keep the mailing list informed though of further developments
re Speedy Trial in Core or progress on an alternative client.



Thank you for your updates.


For what it's worth, as somebody who wants to use Taproot I don't care *at
all* about activation parameters, and I especially don't care about block
height vs MTP.

If a coin toss is what it takes for people to move past this that's fine
by me.


  


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] March 23rd 2021 Taproot Activation Meeting Notes

2021-04-07 Thread Matt Corallo via bitcoin-dev




On 4/7/21 01:01, Rusty Russell via bitcoin-dev wrote:

Ryan Grant  writes:

On Tue, Apr 6, 2021 at 11:58 PM Rusty Russell via bitcoin-dev
What ST is saying is that a strategy of avoiding unnecessary risk is
stronger than a strategy of brinkmanship when brinkmanship wasn't
our only option.  Having deescalation in the strategy toolkit makes
Bitcoin stronger.


I don't believe that having a plan is brinkmanship or an escalation.


I strongly disagree with this characterization of ST, primarily because there just isn't the kind of agreement you seem 
to be assuming. ST isn't a "let's not decide because we don't want to formulate a specific grand plan"; it's more of a 
"let's not decide, because there are very strong, and very divergent, viewpoints on what a specific grand plan can or 
should look like, and something most people are OK with is better than nothing at all". Ultimately, there are a number 
of possible directions a grand plan could go, and there appear to be at least several prominent (and likely many 
non-prominent) individuals who would strongly disagree with any such plan, you and I likely among them :).


LOT=true does face the awkward question, but there are downsides:

   - in the requirement to drop blocks from apathetic miners (although
 as Luke-Jr pointed out in a previous reply on this list they have
 no contract under which to raise a complaint); and


Surely, yes.  If the users of bitcoin decide blocks are invalid, they're
invalid.  With a year's warning, and developer and user consensus
against them, I think we've reached the limits of acceptable miner
apathy.


You say "developer and user consensus against them" here, but then go on to argue that its perfectly acceptable for only 
a small subset of users to be required to do something below.



   - in the risk of a chain split, should gauging economic majority
 support - which there is zero intrinsic tooling for - go poorly.


Agreed that we should definitely do better here: in practice people
would rely on third party explorers for information on the other side of
the split.  Tracking the cumulative work on invalid chains would be a
good idea for bitcoind in general (AJ suggested this, IIRC).


We already have a really, really great precedent for tracking economic majority, I'd argue we have great tooling here! 
During Segwit2x, we had multiple futures and chain-split-tokens available, including the BitMex futures with billions of 
dollars in daily volume! For the BCH split, ViaBTC issued similar chain split tokens.


At the end of the day, economic value is going to determine the amount of hashrate on any chain, and there is a very, 
very strong incentive (trading fees!) for an exchange to list...more stuff, chainsplit tokens included.


Why do we need to build in really janky ways to measure economic majority when there's already a great one that 
experience has shown us will pop up and provide a reasonable signal, given any material demand?



Personally, I think the compromise position is using LOT=false and
having those such as Luke and myself continue working on a LOT=true
branch for future consideration.  It's less than optimal, but I
appreciate that people want Taproot activated more than they want
the groundwork for future upgrades.


Another way of viewing the current situation is that should
brinkmanship be necessary, then better tooling to resolve a situation
that requires brinkmanship will be invaluable.  But:

   - we do not need to normalize brinkmanship;

   - designing brinkmanship tooling well before the next crisis does
 not require selecting conveniently completed host features to
 strap the tooling onto for testing; and


Again, openly creating a contingency plan is not brinkmanship, it's
normal.  I know that considering these scenarios is uncomfortable; I
avoid conflict myself!  But I feel obliged to face this as a real
possibility.

I think we should be normalizing the understanding that bitcoin users
are the ultimate decider.  By offering *all* of them the tools to do so
we show this isn't lip-service, but something that businesses and
everyone else in the ecosystem should consider.


While I strongly agree with your principle, I strongly disagree with the practice of how you propose going about it. 
Ultimately, no matter what we decide here, elsewhere, or what the process for consensus changes is, the decider will be 
economic activity and users voting with their Bitcoin. We should start by acknowledging that, and acknowledging that the 
markets will (and have!) let us know what they think when there is any kind of material disagreement.


Then, we should optimize for ensuring that the market never needs to "correct the situation", because if we end up there 
(or in any of these kinds of scenarios), we've basically screwed the pooch. Sure, some 10% minority group (and usually 
less as time goes on) forking themselves off has turned out to basically be irrelevant, but if we end up with multiple 

Re: [bitcoin-dev] Response to Rusty Russell from Github

2021-04-06 Thread Matt Corallo via bitcoin-dev
I'm somewhat gobsmacked that this entire conversation hasn't included the word "market" in it at all. If there's one 
thing we can all agree we learned from Segwit2x, BCH, BSV, BU, etc., it's that, ultimately, the market decides. Not only
does the market decide, but there's lots of money to be made by being the market maker or operator letting the market 
make its voice heard. There is nothing we can, or should, do to ensure the market can make its voice heard - it always will.


We don't need to bend over backwards to make sure individual users are forced to try to form consensus among themselves 
via options or chain splits, we can just let the market decide. Within reason, the market will probably decide "yep, 
what the brains are doing looks good, Bitcoin needs to stay in consensus, no point in trying to nitpick something or 
we'll never come to consensus about anything". If what's being proposed is ever disagreed with by some small-ish but 
nontrivial group, futures markets are going to decide the fate of the system no matter what the consensus rules or 
activation method is, why do we need to do very much else?


Matt

On 4/6/21 00:40, Rusty Russell via bitcoin-dev wrote:

Jeremy via bitcoin-dev  writes:

Where I disagree is that I do not believe that BIP8 with LOT configuration
is the improved long term option we should ossify around either. I
understand the triumvirate model you desire to achieve, but BIP8 with an
individually set LOT configuration does not formalize how economic nodes
send a network legible signal ahead of a chain split. A regular flag day,
with no signalling, but communally released and communicated openly most
likely better achieves the goal of providing users choice.


You're ignoring the role of infrastructure.  It's similar to saying that
there is no need for elections: if things are bad enough, citizens can
rise up and overthrow their government.


1. Developers release, but do not activate
2. Miners signal
3. Users may override by compiling and releasing a patched Bitcoin with
moderate changes that activates Taproot at a later date. While this might
*seem* more complicated a procedure than configurable LOT, here are four
reasons why it may be simpler (and safer) to just do a fresh release:


Users may indeed fire the devs and replace them, as this implies.  This
is not empowering users, but in effect risks reducing their role to "beg
the devs or beg the miners".


A. No time-based consensus sensitivity on when LOT must be set (e.g., what
happens if mid final signal period users decide to set LOT? Do all users
set it at the same time? Or different times and end up causing nodes to ban
each other for various reasons?)


Yes, this Schelling point is important.  If you read BIP-8, you will see
that LOT=true activates at the last moment for this very reason.


B. No "missed window" if users don't coordinate on setting LOT before the
final period -- release whenever ready.


Of course there is: they need to upgrade in time.


C. ST fails fast, permitting users ample time to prepare an alternative
release


You'd think so, but given the confusion surrounding Segwit, it seems a
year was barely time to debate, decide and coordinate.  You want this
ready to go at the *beginning* of the 1 year process, not being decided,
debated, build and deployed once the crisis is upon us.  That existing
deployment is a vital stake in the calculus of those who might try to
disrupt the process for any reason.


D. If miners continue to mine without signalling, and users abandon a
LOT=true setting, their node will have already marked those blocks invalid
and they will need to figure out how to re-validate the block.


This is true, in fact, of any soft fork: as Luke points out, our lack of
revalidation of blocks after upgrade is a bug.  Which should be fixed:
IMHO a decent PR to make LOT runtime configurable would reevaluate any
blocks >= timeoutheight-2016 when it is altered.


RE: point 3, is it as easy as it *could* be? No, but I don't have any
genius ideas on how to make it easier either. (Note that I've previously
argued for adding configurable LOT=true on the basis that a user-run script
could emulate LOT without any software change as a harm reduction, but I
did not advocate that particular technique be formalized as a part of the
activation process)


BIP-8 (with the recent modifications to allow maximal number of
non-signalling blocks) is technically as fork-preventative as we can
seek to make it.

I am hopeful that our ecosystem will remain harmonious and we won't have
to use it.  But I am significantly more hopeful that we won't have to
use it if we have it deployed and ready.

Cheers,
Rusty.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org

Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-03-16 Thread Matt Corallo via bitcoin-dev




On 3/15/21 23:44, Luke Dashjr wrote:

(To reiterate: I do not intend any of this as a NACK of Taproot.)


Frankly, then why parrot arguments you don't agree with in an already-tense discussion? I'm really not sure what there 
is to gain by dredging up years-old since-settled debates except to cause yet more delay and frustration.



On Monday 15 March 2021 22:05:45 Matt Corallo wrote:

First, so long as we have hash-based addresses as a best practice, we can
continue to shrink the percentage of bitcoins affected through social
efforts discouraging address use. If the standard loses the hash, the
situation cannot be improved, and will indeed only get worse.


I truly wish this were the case, but we've been beating that drum for at
least nine years and still haven't solved it.


I think we've made progress over those 9 years, don't you?


Some, sure, but not anywhere near the amount of progress we'd need to make to have an impact on QC security of the 
overall system.



Except its not? One entity would be able to steal that entire block of
supply rather quickly (presumably over the course of a few days, at
maximum), instead of a slow process with significant upfront real-world
cost in the form of electricity.


My understanding is that at least initial successes would likely be very slow.
Hopefully we would have a permanent solution before it got too out of hand.


There is a lot of debate on this point in the original thread which discussed this several years ago. But even if it 
were the case, it still doesn't make "let QC owners steal coins" somehow equivalent to mining. There are probably 
several blocks' worth of coins that could be stolen for much greater rewards than a block reward, but, more broadly, 
what?! QC owners stealing coins from old outputs isn't somehow going to be seen as "OK" - not to mention that many old 
outputs do have owners with the keys; they aren't all forgotten or lost.


Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-03-15 Thread Matt Corallo via bitcoin-dev
Right, totally. There was substantial debate on the likelihood of such a QC existing (i.e., a slow one) on the original
thread several years ago, but ignoring that, my broader point was about the address reuse issue. Given that, there's 
just not much we can do with the existing hash-indirection.


Matt

On 3/15/21 19:01, Karl-Johan Alm via bitcoin-dev wrote:

On Tue, 16 Mar 2021 at 07:48, Matt Corallo via bitcoin-dev
 wrote:


Overall, the tradeoffs here seem ludicrous, given that any QC issues in Bitcoin 
need to be solved in another way, and
can't practically be solved by just relying on the existing hash indirection.


The important distinction here is that, with hashes, an attacker has
to race against the spending transaction confirming, whereas with
naked pubkeys, the attacker doesn't have to wait for a spend to occur,
drastically increasing the available time to attack.

It may initially take months to break a single key. In such a
scenario, anyone with a hashed pubkey would be completely safe* (even
at spend time), until that speeds up significantly, while Super Secure
Exchange X with an ultra-cold 38-of-38 multisig setup using Taproot
would have a timer ticking, since the attacker need only find a single
privkey like with any old P2PK output.

(* assuming no address reuse)
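
To make the window asymmetry concrete, here is a toy sketch (Python; illustrative only, not consensus code -- real 
P2PKH/P2WPKH commitments use RIPEMD160(SHA256(P)), with plain SHA-256 used here for portability):

    import hashlib, os

    # With hash-based outputs the chain commits only to H(P); the raw
    # key P first appears on the network when the output is spent.
    pubkey = os.urandom(33)                      # stand-in serialized pubkey
    commitment = hashlib.sha256(pubkey).digest()

    print("attacker sees before spend:", commitment.hex())
    # At spend time P is revealed, so the attack window shrinks to the
    # broadcast-to-confirmation gap (minutes), versus the open-ended
    # window a naked-pubkey output offers.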


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-03-15 Thread Matt Corallo via bitcoin-dev
Right, you can avoid the storage cost at the cost of significantly higher CPU usage, plus lack of ability to 
batch-validate. As Robert pointed out in a neighboring mail, it also reduces ability to do other, fancier, protocols 
using the fact that public keys are now a public part of a script_pubkey.


Overall, the tradeoffs here seem ludicrous, given that any QC issues in Bitcoin need to be solved in another way, and 
can't practically be solved by just relying on the existing hash indirection.


Matt

On 3/15/21 18:40, Jeremy wrote:

I think Luke is pointing out that with the Signature and the Message you should 
be able to recover the key.

if your address is H(P) and the message is H(H(P) || txn), then you can use the public H(P) and the signature to 
recover the candidate PK P' and verify that H(P') == H(P) (I think you then don't even have to check the signature after doing that).


Therefore there is no storage benefit.

For the script path case, you might have to pay a little bit extra though as you'd have to reveal P I think? But perhaps 
that can be avoided another way...
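
For the avoidance of doubt, this recovery trick applies to ECDSA; BIP340 Schnorr hashes the pubkey into the challenge, 
so a hash-wrapped scheme there would need a modified signature. A minimal sketch of the idea using the pure-Python 
ecdsa package -- names and hash choices are illustrative, not a concrete proposal:

    import hashlib
    from ecdsa import SigningKey, VerifyingKey, SECP256k1

    sk = SigningKey.generate(curve=SECP256k1)
    pk = sk.get_verifying_key()
    addr = hashlib.sha256(pk.to_string()).digest()   # stand-in for H(P)

    msg = b"txn digest stand-in"
    sig = sk.sign(msg, hashfunc=hashlib.sha256)

    # Recover candidate keys from (sig, msg) alone and match against
    # H(P); a successful match doubles as signature verification.
    candidates = VerifyingKey.from_public_key_recovery(
        sig, msg, SECP256k1, hashfunc=hashlib.sha256)
    assert any(hashlib.sha256(c.to_string()).digest() == addr
               for c in candidates)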

--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Mon, Mar 15, 2021 at 3:06 PM Matt Corallo via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:


There have been many threads on this before, I'm not sure anything new has 
been brought up here.

Matt

On 3/15/21 17:48, Luke Dashjr via bitcoin-dev wrote:
 > I do not personally see this as a reason to NACK Taproot, but it has 
become
 > clear to me over the past week or so that many others are unaware of this
 > tradeoff, so I am sharing it here to ensure the wider community is aware 
of
 > it and can make their own judgements.

Note that this is most definitely *not* news to this list, eg, Anthony brought 
it up in "Schnorr and taproot (etc)
upgrade" and there was a whole thread on it in "Taproot: Privacy preserving 
switchable scripting". This issue has been
beaten to death, I'm not sure why we need to keep hitting the poor horse 
corpse.

 >
 > In short, Taproot loses an important safety protection against quantum.
 > Note that in all circumstances, Bitcoin is endangered when QC becomes a
 > reality, but pre-Taproot, it is possible for the network to "pause" 
while a
 > full quantum-safe fix is developed, and then resume transacting. With 
Taproot
 > as-is, it could very well become an unrecoverable situation if QCs go 
online
 > prior to having a full quantum-safe solution.

This has been discussed ad nauseam, and it all seems to fall apart once it's 
noted just how much Bitcoin could be stolen
by any QC-wielding attacker due to address reuse. Ultimately, no "pause" 
can solve this issue, and, if we learned about
a QC attacker overnight (instead of slowly over time), there isn't anything 
that a non-Taproot Bitcoin could do that a
Taproot Bitcoin couldn't.

 > Also, what I didn't know myself until today, is that we do not actually 
gain
 > anything from this: the features proposed to make use of the raw keys 
being
 > public prior to spending can be implemented with hashed keys as well.
 > It would use significantly more CPU time and bandwidth (between private
 > parties, not on-chain), but there should be no shortage of that for 
anyone
 > running a full node (indeed, CPU time is freed up by Taproot!); at 
worst, it
 > would create an incentive for more people to use their own full node, 
which
 > is a good thing!

This is untrue. The storage space required for Taproot transactions is 
materially reduced by avoiding the hash
indirection.

 > Despite this, I still don't think it's a reason to NACK Taproot: it 
should be
 > fairly trivial to add a hash on top in an additional softfork and fix 
this.

For the reason stated above, I think such a fork is unlikely.

 > In addition to the points made by Mark, I also want to add two more, in
 > response to Pieter's "you can't claim much security if 37% of the supply 
is
 > at risk" argument. This argument is based in part on the fact that many
 > people reuse Bitcoin invoice addresses.
 >
 > First, so long as we have hash-based addresses as a best practice, we can
 > continue to shrink the percentage of bitcoins affected through social 
efforts
 > discouraging address reuse. If the standard loses the hash, the situation
 > cannot be improved, and will indeed only get worse.

I truly wish this were the case, but we've been beating that drum for at 
least nine years and still haven't solved it.
Worse, there's a lot of old coins that are unlikely to move any time soon 
that are exposed whether we like it or not.

 > Second, when/if quantum does compromi

Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-03-15 Thread Matt Corallo via bitcoin-dev

There have been many threads on this before, I'm not sure anything new has been 
brought up here.

Matt

On 3/15/21 17:48, Luke Dashjr via bitcoin-dev wrote:

I do not personally see this as a reason to NACK Taproot, but it has become
clear to me over the past week or so that many others are unaware of this
tradeoff, so I am sharing it here to ensure the wider community is aware of
it and can make their own judgements.


Note that this is most definitely *not* news to this list, eg, Anthony brought it up in "Schnorr and taproot (etc) 
upgrade" and there was a whole thread on it in "Taproot: Privacy preserving switchable scripting". This issue has been 
beaten to death, I'm not sure why we need to keep hitting the poor horse corpse.




In short, Taproot loses an important safety protection against quantum.
Note that in all circumstances, Bitcoin is endangered when QC becomes a
reality, but pre-Taproot, it is possible for the network to "pause" while a
full quantum-safe fix is developed, and then resume transacting. With Taproot
as-is, it could very well become an unrecoverable situation if QCs go online
prior to having a full quantum-safe solution.


This has been discussed ad nauseam, and it all seems to fall apart once it's noted just how much Bitcoin could be stolen 
by any QC-wielding attacker due to address reuse. Ultimately, no "pause" can solve this issue, and, if we learned about 
a QC attacker overnight (instead of slowly over time), there isn't anything that a non-Taproot Bitcoin could do that a 
Taproot Bitcoin couldn't.



Also, what I didn't know myself until today, is that we do not actually gain
anything from this: the features proposed to make use of the raw keys being
public prior to spending can be implemented with hashed keys as well.
It would use significantly more CPU time and bandwidth (between private
parties, not on-chain), but there should be no shortage of that for anyone
running a full node (indeed, CPU time is freed up by Taproot!); at worst, it
would create an incentive for more people to use their own full node, which
is a good thing!


This is untrue. The storage space required for Taproot transactions is 
materially reduced by avoiding the hash indirection.


Despite this, I still don't think it's a reason to NACK Taproot: it should be
fairly trivial to add a hash on top in an additional softfork and fix this.


For the reason stated above, I think such a fork is unlikely.


In addition to the points made by Mark, I also want to add two more, in
response to Pieter's "you can't claim much security if 37% of the supply is
at risk" argument. This argument is based in part on the fact that many
people reuse Bitcoin invoice addresses.

First, so long as we have hash-based addresses as a best practice, we can
continue to shrink the percentage of bitcoins affected through social efforts
discouraging address reuse. If the standard loses the hash, the situation
cannot be improved, and will indeed only get worse.


I truly wish this were the case, but we've been beating that drum for at least nine years and still haven't solved it. 
Worse, there's a lot of old coins that are unlikely to move any time soon that are exposed whether we like it or not.



Second, when/if quantum does compromise these coins, so long as they are
neglected or abandoned/lost coins (inherent in the current model), it can be
seen as equivalent to Bitcoin mining. At the end of the day, 37% of supply
minable by QCs is really no different than 37% minable by ASICs. (We've seen
far higher %s available for mining obviously.)


Except it's not? One entity would be able to steal that entire block of supply rather quickly (presumably over the course 
of a few days, at maximum), instead of a slow process with significant upfront real-world cost in the form of electricity.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot activation proposal "Speedy Trial"

2021-03-06 Thread Matt Corallo via bitcoin-dev




On 3/6/21 14:56, Michael Folkson wrote:

Hi Matt

 > I'm really unsure that three months is a short enough time window that there wouldn't be a material effort to split 
the network with divergent consensus rules. Instead, a three month window is certainly long enough to organize and make 
a lot of noise around such an effort, given BIP 148 was organized and reached its peak within a similar such window.


I'm not sure either. I can't control anyone other than myself. I think (and Luke has also stated on IRC) that trying a 
UASF (LOT=true) during a "Speedy Trial" deployment would be crazy. I would certainly recommend no one tries that but I 
can't stop anyone. I'll repeat that soft forks have and always will contain some limited chain split risk regardless of 
activation mechanism. I think you are well intentioned but I'm not sure if you've fully grasped that yet. Maybe you have 
and I'm missing something.


 > Worse, because the obvious alternative after a three month activation failure is a significant delay prior to 
activation, the vocal UASF minority may be encouraged to pursue such a route to avoid such a delay.


Again I can only speak for myself but I wouldn't support a UASF until this "fail fast" Speedy Trial has completed and 
failed. Luke agrees with that and other people (eg proofofkeags) on the ##uasf IRC channel have also supported this 
"Speedy Trial" proposal. If you want me (or anyone else for that matter) to guarantee there won't be an attempted UASF 
during a Speedy Trial deployment obviously nobody can do that. All I can say is that personally I won't support one.


That's great to hear.

The parameters for Speedy Trial are being hammered out on IRC as we speak. I'd encourage you to engage with those 
discussions. I'd really like to avoid a scenario where we have broad consensus on the details of Speedy Trial and then 
you come out the woodwork weeks later with either an alternative proposal or a criticism for how the details of Speedy 
Trial were finalized.

I've read your email as you're concerned about a UASF during a Speedy Trial deployment. Other than that I think (?) you 
support it and you are free to join the discussion on IRC if you have particular views on parameters. Personally I don't 
think those parameters should be chosen assuming there will be a UASF during the deployment but you can argue that case 
on IRC if you wish. All proposals you have personally put forward suffer from chain split risk in the face of a 
competing incompatible activation mechanism.


The conversations around the activation of Taproot have far outgrown a single IRC channel, let alone a single live 
conversation. Nor is having a discussion with under a few days' latency "coming out of the woodwork weeks later". 
Frankly, I find this more than a little insulting. Bitcoin's consensus has never been decided in such a manner and I see 
no reason to start now.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot activation proposal "Speedy Trial"

2021-03-06 Thread Matt Corallo via bitcoin-dev
I don't think anyone is proposing anything to "prevent" other people from doing anything they wish. My understanding of 
the goal of this proposal, itself, was to keep the community together by proposing a solution that was palatable to all. 
My point was that I'm not sure that this proposal achieves its own goal, and that there may be solutions which are even 
more likely to keep the community of nodes together.


Matt

On 3/6/21 15:23, David A. Harding wrote:

On Sat, Mar 06, 2021 at 01:11:01PM -0500, Matt Corallo wrote:

I'm really unsure that three months is a short enough time window that there
wouldn't be a material effort to split the network with divergent consensus
rules.


I oppose designing activation mechanisms with the goal of preventing
other people from effectively exercising self determination over what
consensus rules their nodes enforce.

Three months was chosen because it's long enough to give miners a
reasonable enough amount of time to activate taproot but it's also short
enough that it doesn't delay any of the existing proposals with roughly
one-year timelines.  As such, I think it has the potential to gain
acceptance from multiple current factions (even if it doesn't ever gain
their full approval), allowing us to move forward with rough social
consensus and to gain useful information from the attempt that can
inform future decisions.

-Dave


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot activation proposal "Speedy Trial"

2021-03-06 Thread Matt Corallo via bitcoin-dev
I'm really unsure that three months is a short enough time window that there wouldn't be a material effort to split the 
network with divergent consensus rules. Instead, a three month window is certainly long enough to organize and make a 
lot of noise around such an effort, given BIP 148 was organized and reached its peak within a similar such window.


Worse, because the obvious alternative after a three month activation failure is a significant delay prior to 
activation, the vocal UASF minority may be encouraged to pursue such a route to avoid such a delay.


One alternative may be to reduce the signaling windows involved and start slightly later. Instead of the likelihood of 
failure growing on the horizon, simply have two signaling windows (maybe two weeks, maybe a month each?). In order to 
ensure success remains likely, begin them somewhat later after software release to give pools and miners a chance to 
configure their mining software in advance.


Matt

On 3/5/21 22:43, David A. Harding via bitcoin-dev wrote:

On the ##taproot-activation IRC channel, Russell O'Connor recently
proposed a modification of the "Let's see what happens" activation
proposal.[1] The idea received significant discussion and seemed
acceptable to several people who could not previously agree on a
proposal (although this doesn't necessarily make it their first
choice).  The following is my attempt at a description.

1. Start soon: shortly after the release of software containing this
proposed activation logic, nodes will begin counting blocks towards
the 90% threshold required to lock in taproot.[2]

2. Stop soon: if the lockin threshold isn't reached within approximately
three months, the activation attempt fails.  There is no mandatory
activation and everyone is encouraged to try again using different
activation parameters.

3. Delayed activation: in the happy occasion where the lockin threshold
is reached, taproot is guaranteed to eventually activate---but not
until approximately six months after signal tracking started.

## Example timeline

(All dates approximate; see the section below about BIP9 vs BIP8.)

- T+0: release of one or more full nodes with activation code
- T+14: signal tracking begins
- T+28: earliest possible lock in
- T+104: locked in by this date or need to try a different activation process
- T+194: activation (if lockin occurred)
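
To tie the dates to mechanics, here is a minimal sketch of the implied state machine (Python; parameter names and 
heights are placeholders rather than final deployment values, with the 90% lock-in threshold taken from [2]):

    from enum import Enum, auto

    class State(Enum):
        DEFINED = auto()
        STARTED = auto()
        LOCKED_IN = auto()
        ACTIVE = auto()
        FAILED = auto()

    PERIOD = 2016          # blocks per retargeting period
    THRESHOLD = 1815       # 90% of 2016

    # Evaluated once per retargeting-period boundary.
    def next_state(state, signal_count, height, start_height,
                   timeout_height, min_activation_height):
        if state is State.DEFINED and height >= start_height:
            return State.STARTED
        if state is State.STARTED:
            if signal_count >= THRESHOLD:
                return State.LOCKED_IN   # "start soon" success path
            if height >= timeout_height:
                return State.FAILED      # "stop soon": retry with new params
        if state is State.LOCKED_IN and height >= min_activation_height:
            return State.ACTIVE          # "delayed activation", ~T+194
        return state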

## Analysis

The goal of Speedy Trial is to allow a taproot activation attempt to
either quickly succeed or quickly fail---without compromising safety in
either case.  Details below:

### Mitigating the problems of early success

New rules added in a soft fork need to be enforced by a large part of
the economy or there's a risk that a long chain of blocks breaking the
rules will be accepted by some users and rejected by others, causing a
chain split that can result in large direct losses to transaction
receivers and potentially even larger indirect losses to holders due to
reduced confidence in the safety of the Bitcoin system.

One step developers have taken in the past to ensure widespread adoption
of new consensus rules is programming in a delay between the time software
with those rules is expected to be released and when the software starts
tracking which blocks signal for activation.  For example:

 Soft fork        | Release    | Start      | Delta
 -----------------+------------+------------+---------
 BIP68 (v0.12.1)  | 2016-04-15 | 2016-05-11 | 26 days
 BIP141 (v0.13.1) | 2016-10-27 | 2016-11-18 | 24 days

 Sources: BitcoinCore.org, 
https://gist.github.com/ajtowns/1c5e3b8bdead01124c04c45f01c817bc

Speedy Trial replaces most of that upfront delay with a backend delay.
No matter how fast taproot's activation threshold is reached by miners,
there will be six months between the time signal tracking starts and when
nodes will begin enforcing taproot's rules.  This gives the userbase even
more time to upgrade than if we had used the most recently proposed start
date for a BIP8 activation (~July 23rd).[2]

### Succeed, or fail fast

The earlier version of this proposal was documented over 200 days ago[3]
and taproot's underlying code was merged into Bitcoin Core over 140 days
ago.[4]  If we had started Speedy Trial at the time taproot
was merged (which is a bit unrealistic), we would either be less than
two months away from having taproot or we would have moved on to the
next activation attempt over a month ago.

Instead, we've debated at length and don't appear to be any closer to
what I think is a widely acceptable solution than when the mailing list
began discussing post-segwit activation schemes over a year ago.[5]  I
think Speedy Trial is a way to generate fast progress that will either
end the debate (for now, if activation is successful) or give us some
actual data upon which to base future taproot activation proposals.

Of course, for those who enjoy the debate, discussion can continue while
waiting for 

Re: [bitcoin-dev] Making the case for flag day activation of taproot

2021-03-06 Thread Matt Corallo via bitcoin-dev

Replies inline. Several sections removed, where I basically agree.

On 3/4/21 08:47, Russell O'Connor wrote:

Appologies as I've rearranged your comments in my reply.
I agree with you.  I also think we have plenty of evidence to proceed with taproot and could proceed with a PR for such 
a flag day activation.  If there is support for it to be merged, that would be fantastic.  I think we should proceed 
along these lines forthwith.


However, the existence and/or release of a flag day activation code does not in and of itself preclude concurrently 
developing and/or releasing a BIP8 LOT=false deployment.  Activating taproot is "idempotent" after all. We could even do 
a Core release with a flag day activation while we continue to discuss BIP8 LOT=false if that gets the ball rolling.  
Certainly having a flag day activation code merged would take a lot of pressure off further BIP8 LOT=false work.


Hmm, while this is certainly true at a technical level, it adds a lot of complexity both in terms of discussion, and for 
users - "I already upgraded to Taproot, why did I just see a fork with an invalid Taproot spend?".


As Aaron noted on IRC, if the sticking point here is the MUST_SIGNAL state, then running BIP8 LOT=false alongside a flag 
day activation at timeout may be the way to go.  Once a flag day deployment is released, the LOT=true people would have 
their guaranteed activation and would be less interested in an alternative client. And without a MUST_SIGNAL state, I 
believe the LOT=false deployment won't lead any hashpower that is following standardness rules to create invalid blocks.


This is indeed a significant improvement over BIP 8 in my opinion. However, as I pointed out on a Reddit discussion with 
Aaron, I'm still incredibly worried about users pushing for some UASF-style forced-signaling guerilla faster-activation. 
It may absolutely be the case that Taproot activates quickly or that such users are a tiny minority of transacting. 
However, as we saw with BIP 148/UASF, even a tiny minority of transacting users can set the tone and claim victory over 
how a soft-fork activates. I worry that even your approach here runs the risk of yet further normalization of consensus 
rule diversity on the network.


Maybe my worry is overblown, and I'm certainly not going to try to solely stand in the way on this one, but now that 
we're stuck in yet another overblown debate, we might as well take it as an opportunity to reinforce the idea that 
consensus rule diversity runs the risk of consensus failure, and isn't a reasonable risk.


 > Even today, I still think that starting with BIP8 LOT=false is, 
generally speaking, considered a reasonably safe
 > activation method in the sense that I think it will be widely considered as a 
"not wholly unacceptable" approach to
 > activation.

How do you propose avoiding divergent consensus rules on the network, 
something which a number of commentors on this
list have publicly committed to?


Firstly, it is an open network.  Anyone can join and run whatever consensus rules they want.  People have run divergent 
consensus rules on the network in the past and will continue to do so in the future.
It is troublesome when it happens in mass, but it isn't fatal.  We can't prevent it, and we should continue working to 
keep the protocol robust in the face of it.

And we certainly shouldn't be bullied by anyone who comes threatening their own 
soft-fork.


Apologies. I should have phrased my comment better. My worry, at a broad level 
is that
(a) people have taken the events around the Segwit BIP 148 UASF to mean that a very small minority of users can (and 
maybe should) push consensus rules through threats of breaking consensus and
(b) there is a very vocal group today which is reinforcing this belief by ignoring the complex context around Segwit and 
behaving similarly with regards to Taproot.


Indeed, there is nothing we can, or should, do to actively prevent people from running their own software which 
interprets Bitcoin's consensus rules in...creative ways. But that isn't to say there is no use worrying about it. 95% of 
Bitcoin users aren't aware that this debate is even happening. Of the remaining 5%, 90% haven't had the time to research 
and think deeply enough to form an opinion. Our responsibility is to the 99.5% of users.


Sure, individuals running different consensus rules won't cause immediate harm to others, but the net effect of many 
users doing so and especially the community normalizing such behavior very significantly can. Ill-informed transactors 
running such software (including wallet providers with users who were unaware of the events) can be screwed out of their 
Bitcoin. This outcome very well could have occurred in the case of the BIP 148 UASF, and repeating the same pattern many 
times will not help to de-risk that.


Even simply doing nothing may not prevent divergent consensus from appearing on the network.  Playing 

Re: [bitcoin-dev] Making the case for flag day activation of taproot

2021-03-03 Thread Matt Corallo via bitcoin-dev



On 3/3/21 14:08, Russell O'Connor via bitcoin-dev wrote:
While I support essentially any proposed taproot activation method, including a flag day activation, I think it is 
premature to call BIP8 dead.


Even today, I still think that starting with BIP8 LOT=false is, generally speaking, considered a reasonably safe 
activation method in the sense that I think it will be widely considered as a "not wholly unacceptable" approach to 
activation.


How do you propose avoiding divergent consensus rules on the network, something which a number of commentors on this 
list have publicly committed to?


After a normal and successful Core update with LOT=false, we will have more data showing broad community support for the 
taproot upgrade in hand.


I think this is one of the strongest arguments against a flag day activation, but, as I described in more detail in the 
thread "Straight Flag Day (Height) Taproot Activation", I'm not sure we aren't there enough already.


In the next release, 6 months later or so, Core could then confidently deploy a BIP8 LOT=true 


Could you clarify what an acceptable timeline is, then? Six months from release of new consensus rules to activation (in 
the case of a one-year original window) seems incredibly aggressive for a flag-day activation, let alone one with 
forced-signaling, which would require significantly higher level of adoption to avoid network split risk. In such a 
world, we'd probably get Taproot faster with a flag day from day one.


client, should it prove to be necessary.  A second Core deployment of LOT=true would mitigate some of the concerns with 
LOT=false, but still provide a period beforehand to observe actions taken by the community in support of taproot.  We 
don't even have to have agreement today on a second deployment of LOT=true after 6 months to start the process of a 
LOT=false deployment. The later deployment will almost certainly be moot, and we will have 6 months to spend debating 
the LOT=true deployment versus doing a flag day activation or something else.


That was precisely the original goal with the LOT=false movement - do something easy and avoid having to hash out all 
the technical details of a second deployment. Sadly, that's no longer tenable as a number of people are publicly 
committed to deploying LOT=true software on the network ASAP.


Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Straight Flag Day (Height) Taproot Activation

2021-02-28 Thread Matt Corallo via bitcoin-dev
Glad you asked! Yes, your goal here is #4 on the list of goals I laid out at [1], which I referenced and specifically 
addressed each of in the OP of this thread.


[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html

On 2/28/21 15:19, Eric Voskuil wrote:

In the attempt to change consensus rules there is a simple set of choices:

1) hard fork: creates a chain split
2) soft fork: creates a chain split
3) 51% attack: does not create a chain split

The presumption being that one can never assume 100% explicit adoption of any 
rule change.

A 51% attack can of course fail. It is also possible that signaling can be untruthful. But miner signaling provides some 
level of assurance that it will be successful. This level of assurance is increased by adoption of a higher than 
majority threshold, as has been done in the past.


Most of the discussion I’ve seen has been focused on who is in charge. Bitcoin requires no identity; anyone can mine 
and/or accept bitcoin - nobody is in charge.


The majority of those who mine can choose to enforce censorship any time they want. They don’t need anyone’s permission. 
No power is given to them by developers or anyone else. They have that power based on their own capital invested.


Similarly, the economy (those who accept bitcoin) can enforce any rule change it wants to. And it can do so at any level 
of participation that wants to go along. Anyone can do this, it requires nobody’s permission. Furthermore, it is 
possible for the economy to signal its level of agreement in every transaction, as miners have done in blocks previously.


But if the objective is to produce a rule change while avoiding a chain split, 50% is a much lower bar than 100%. If 
there is some other objective, it’s not clear to me what it is.


e


On Feb 28, 2021, at 12:02, Jeremy via bitcoin-dev wrote:


Miners still can generate invalid blocks as a result of SPV mining, and it could be profitable to do "bad block 
enhanced selfish mining" to take advantage of it.



Hard to analyze exactly what that looks like, but...

E.g., suppose 20% is un-upgraded and 80% is upgraded. Taking 25% hashrate to mine bad blocks would mean 1/4th of the 
time you could make 20% of the hashrate mine bad blocks, overall a > 5% (series expansion) benefit. One could analyze 
out that the lost hash rate for bad blocks only matters for the first difficulty adjustment period you're doing this 
for too, as the hashrate drop will be accounted for -- but then a miner can switch back to mining valid chain, giving 
themselves a larger % of hashrate.


So it is still possible that an un-upgraded miner will fail part 3, and attempting to accommodate un-upgraded miners 
leads to some nasty oscillating hashrate being optimal.



--
@JeremyRubin 


On Sun, Feb 28, 2021 at 11:52 AM Matt Corallo <lf-li...@mattcorallo.com> wrote:

Note further that mandatory signaling isn't "just" a flag day - unlike a 
Taproot flag day (where miners running
Bitcoin
Core unmodified today will not generate invalid blocks), a mandatory 
signaling flag day blatantly ignores goal (3)
from
my original post - it results in any miner who has not taken active action 
(and ensured every part of their
often-large
infrastructure has been correctly reconfigured) generating invalid blocks.

As for "Taproot" took too long, hey, at least if its locked in people can 
just build things assuming it exists. Some
already are, but once its clearly locked in, there's no reason to not 
continue other work at the same time.

Matt

On 2/28/21 14:43, Jeremy via bitcoin-dev wrote:
> I agree with much of the logic presented by Matt here.
>
> BIP8 was intended to be simpler to agree on to maintain consensus, yet we 
find ourselves in a situation where a
"tiny"
> parameter has the potential to cause great network disruption and 
confusion (rationality is not too useful a
concept
> here given differing levels of sophistication and information). It is 
therefore much simpler and more likely to be
> universally understood by all network participants to just have a flag 
day. It is easier to communicate what users
> should do and when.
>
> This is ultimately not coercive to users because the upgrade for Taproot 
itself is provable and analyzable on
its own,
> but activation parameters based on what % of economically relevant nodes 
are running an upgrade by a certain
date are
> not. Selecting these sorts of complicated consensus parameters may 
ultimately present more opportunity for a
cooptable
> consensus process than something more straightforward.
>
>
> That said, a few points strike me as worth delving into.
>
>
> 1) Con: Mandatory signalling is no different than a flag day. Mandatory 
signaling is effectively 2 flag days --
one for

Re: [bitcoin-dev] Straight Flag Day (Height) Taproot Activation

2021-02-28 Thread Matt Corallo via bitcoin-dev
SPV mining has been curtailed somewhat to only apply for a brief period of time (based on public statements) since the 
last time SPV mining caused a fork. Indeed, if you can make other miners mine on top of an invalid block, you can make 
money by reducing the difficulty, but that is true as much today as during a fork. Still, I think you've made my point - 
someone has to take an active, malicious action in order to mine a bad block, vs with forced signaling, someone only 
needs to forget to reconfigure one out of one hundred pool servers they operate.


Matt

On 2/28/21 15:02, Jeremy wrote:
Miners still can generate invalid blocks as a result of SPV mining, and it could be profitable to do "bad block enhanced 
selfish mining" to take advantage of it.



Hard to analyze exactly what that looks like, but...

E.g., suppose 20% is un-upgraded and 80% is upgraded. Taking 25% hashrate to mine bad blocks would mean 1/4th of the 
time you could make 20% of the hashrate mine bad blocks, overall a > 5% (series expansion) benefit. One could analyze 
out that the lost hash rate for bad blocks only matters for the first difficulty adjustment period you're doing this for 
too, as the hashrate drop will be accounted for -- but then a miner can switch back to mining valid chain, giving 
themselves a larger % of hashrate.


So it is still possible that an un-upgraded miner will fail part 3, and attempting to accommodate un-upgraded miners 
leads to some nasty oscillating hashrate being optimal.
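
One plausible reading of that arithmetic, as a toy stationary model rather than a full selfish-mining analysis: 
invalid blocks come either from the attacker directly or from un-upgraded hashrate extending an invalid tip, so the 
share of bad tips q satisfies q = 0.25 + 0.20*q.

    # Toy model (Python): attacker devotes 25% hashrate to invalid
    # blocks; the 20% un-upgraded hashrate SPV-mines on whatever tip it
    # sees, wasting its work whenever that tip is bad.
    attacker, unupgraded = 0.25, 0.20
    q = attacker / (1 - unupgraded)      # stationary share of bad tips
    print(f"un-upgraded hashrate wasted: {unupgraded * q:.2%}")  # 6.25% > 5%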



--
@JeremyRubin 


On Sun, Feb 28, 2021 at 11:52 AM Matt Corallo <lf-li...@mattcorallo.com> wrote:

Note further that mandatory signaling isn't "just" a flag day - unlike a 
Taproot flag day (where miners running Bitcoin
Core unmodified today will not generate invalid blocks), a mandatory 
signaling flag day blatantly ignores goal (3) from
my original post - it results in any miner who has not taken active action 
(and ensured every part of their often-large
infrastructure has been correctly reconfigured) generating invalid blocks.

As for "Taproot" took too long, hey, at least if its locked in people can 
just build things assuming it exists. Some
already are, but once its clearly locked in, there's no reason to not 
continue other work at the same time.

Matt

On 2/28/21 14:43, Jeremy via bitcoin-dev wrote:
 > I agree with much of the logic presented by Matt here.
 >
 > BIP8 was intended to be simpler to agree on to maintain consensus, yet 
we find ourselves in a situation where a
"tiny"
 > parameter has the potential to cause great network disruption and 
confusion (rationality is not too useful a concept
 > here given differing levels of sophistication and information). It is 
therefore much simpler and more likely to be
 > universally understood by all network participants to just have a flag 
day. It is easier to communicate what users
 > should do and when.
 >
 > This is ultimately not coercive to users because the upgrade for Taproot 
itself is provable and analyzable on its
own,
 > but activation parameters based on what % of economically relevant nodes 
are running an upgrade by a certain date
are
 > not. Selecting these sorts of complicated consensus parameters may 
ultimately present more opportunity for a
cooptable
 > consensus process than something more straightforward.
 >
 >
 > That said, a few points strike me as worth delving into.
 >
 >
 > 1) Con: Mandatory signalling is no different than a flag day. Mandatory 
signaling is effectively 2 flag days --
one for
 > the signaling rule, 1 for the taproot type. The reason for the 2 week 
gap between flag day for signaling and flag
day
 > for taproot rules is, more or less, so that nodes who aren't taproot 
ready at the 1st flag day do not end up SPV
mining
 > (using standardness rules in mempool prevents them from mining an 
invalid block on top of a valid tip, but does not
 > ensure the tip is valid).
 > 2) Con: Releasing a flag day without releasing the LOT=true code leading 
up to that flag day means that clients
would
 > not be fully compatible with an early activation that could be proposed 
before the flag day is reached. E.g.,
LOT=true
 > is a flag day that retains the possibility of being compatible with 
other BIP8 releases without changing software.
 > 3) Pro: BIP-8 is partially in service of "early activation". I'm 
personally skeptical that early activation
is/was
 > ever a good idea. A fixed activation date may be largely superior for 
business purposes, software engineering
schedules,
 > etc. I think even with signaling BIP8, it would be possibly superior to 
activate rules at a fixed date (or a
quantized
 > set of fixed dates, e.g. guaranteeing at least 3 months but maybe more).
  

Re: [bitcoin-dev] Straight Flag Day (Height) Taproot Activation

2021-02-28 Thread Matt Corallo via bitcoin-dev
Note further that mandatory signaling isn't "just" a flag day - unlike a Taproot flag day (where miners running Bitcoin 
Core unmodified today will not generate invalid blocks), a mandatory signaling flag day blatantly ignores goal (3) from 
my original post - it results in any miner who has not taken active action (and ensured every part of their often-large 
infrastructure has been correctly reconfigured) generating invalid blocks.


As for "Taproot" took too long, hey, at least if its locked in people can just build things assuming it exists. Some 
already are, but once its clearly locked in, there's no reason to not continue other work at the same time.


Matt

On 2/28/21 14:43, Jeremy via bitcoin-dev wrote:

I agree with much of the logic presented by Matt here.

BIP8 was intended to be simpler to agree on to maintain consensus, yet we find ourselves in a situation where a "tiny" 
parameter has the potential to cause great network disruption and confusion (rationality is not too useful a concept 
here given differing levels of sophistication and information). It is therefore much simpler and more likely to be 
universally understood by all network participants to just have a flag day. It is easier to communicate what users 
should do and when.


This is ultimately not coercive to users because the upgrade for Taproot itself is provable and analyzable on its own, 
but activation parameters based on what % of economically relevant nodes are running an upgrade by a certain date are 
not. Selecting these sorts of complicated consensus parameters may ultimately present more opportunity for a cooptable 
consensus process than something more straightforward.



That said, a few points strike me as worth delving into.


1) Con: Mandatory signalling is no different than a flag day. Mandatory signaling is effectively 2 flag days -- one for 
the signaling rule, 1 for the taproot type. The reason for the 2 week gap between flag day for signaling and flag day 
for taproot rules is, more or less, so that nodes who aren't taproot ready at the 1st flag day do not end up SPV mining 
(using standardness rules in mempool prevents them from mining an invalid block on top of a valid tip, but does not 
ensure the tip is valid).
2) Con: Releasing a flag day without releasing the LOT=true code leading up to that flag day means that clients would 
not be fully compatible with an early activation that could be proposed before the flag day is reached. E.g., LOT=true 
is a flag day that retains the possibility of being compatible with other BIP8 releases without changing software.
3) Pro: BIP-8 is partially in service of "early activation". I'm personally skeptical that early activation is/was 
ever a good idea. A fixed activation date may be largely superior for business purposes, software engineering schedules, 
etc. I think even with signaling BIP8, it would be possibly superior to activate rules at a fixed date (or a quantized 
set of fixed dates, e.g. guaranteeing at least 3 months but maybe more).
4) Pro: part of the argument for BIP-8=false is that it is possible that the rule could not activate, if signaling does 
not occur, providing additional stopgap against dev collusion and bugs. But BIP-8 can activate immediately (with start 
times being proposed 1 month after release?) so we don't have certainty around how much time there is for that secondary 
review process (read -- I think it isn't that valuable) and if there *is* a deadly bug discovered, we might want to 
hard-fork to fix it even if it isn't yet signaled for (e.g., if the rule activates it enables more mining reward). So I 
think that it's a healthier mindset to release with a definite deadline and not rule out having to do a hard fork if 
there is a grave issue (we shouldn't ever release a SF if we think this is at all likely, mind you).
5) Con: It's already taken so long for taproot, the schedule around taproot was based on the idea it could early 
activate, 2022 is now too far away. I don't know how to defray this other than, if your preferred idea is 1 year flag 
day, to do that via LOT=true so that taproot can still have early activation if desired.


Overall I agree with the point that all the contention around LOT, makes a flag day look not so bad. And something 
closer to a flag day might not be so bad either for future forks as well.


However, I think given the appetite for early activation, if a flag day is desired I think LOT=true is the best option 
at this time as it allows our flag day to remain compatible with such an early activation.


I think we can also clearly communicate that LOT=true for Taproot is not a precedent-setting occurrence for any future 
forks (hold me accountable to not using this as precedent should I ever advocate for a SF with similar release 
parameters).



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org

[bitcoin-dev] Straight Flag Day (Height) Taproot Activation

2021-02-28 Thread Matt Corallo via bitcoin-dev
As anyone reading this list is aware, there is significant debate around the activation method for the proposed Taproot 
soft fork. So much so, and with so much conviction, that many individuals are committing themselves to running 
incompatible consensus rules. Obviously, such commitments, were they to come to pass, and were a fork to occur as a 
result, would do more harm than any soft-fork does good. Further, such commitments and debate have likely delayed any 
possible release of a future Taproot activation while issues around locked-in activation are debated, instead of 
avoiding it as was the original intent of the "just ship BIP 8 LOT=false and we'll debate the rest if we need to" approach.


Given this, it seems one way to keep the network in consensus would be to simply activate taproot through a traditional, 
no-frills, flag-day (or -height) activation with a flag day of roughly August, 2022. Going back to my criteria laid out 
in [1],


1) I don't believe we have or will see significant, reasonable, and directed objection. This has largely always been the 
case, but it is also critical that the lack of such objection is demonstrable to outside observers. Ironically, the 
ongoing debate (and clear lack of consensus) around activation methods can be said to have had that effect, at least as 
far as the active Bitcoin Reddit/Twitter userbase is concerned. In addition, the public support for Taproot activation 
made by mining pool operators further shows public review and acceptance. Ideally, large Bitcoin business operators who 
previously took part in Bitcoin Optech's Taproot workshop [2] would publicly state something similar prior to release of 
Taproot activation parameters. Because this expectation is social, no technical solution exists, only public statements 
made in broad venues - something which I'd previously argued comes through soft fork activation parameter deployment, 
but which can also come from elsewhere.


2) The high node-level-adoption bar is one of the most critical goals, and the one most currently in jeopardy in a BIP 8 
approach. Users demanding alternative consensus rules (or, worse, configuration flags to change consensus rules on 
individual nodes with an expectation of use) makes this very complicated in the context of BIP 8. Instead of debating 
activation parameters and entrenching individuals into running their own consensus rules, a flag day activation changes 
the problem to one of simply encouraging upgrades, avoiding a lot of possibility for games. Of course in order to meet 
this goal we still need significant time to pass between activation parameter release and activation. Given the delays 
likely to result from debates around BIP 8 parameters, I don't think this is a huge loss. Capitalizing on current 
interest and demand for Taproot further implies a shortened timeline (eg a year and a half instead of two) may be merited.


3) The potential loss of hashpower is likely the biggest risk of this approach. It is derisked somewhat by the public 
commitment of pool operators to Taproot activation, and can be de-risked further by seeking more immediate commitment 
prior to release. Still, given the desire to push for a forced-signaling approach by many, there is more significant 
risk of loss of hashpower in today's approach. In an ideal world, we could do something more akin to BIP 9/BIP 8(false) 
to reduce risk of this further, but its increasingly likely that this is not possible. A flag-day which takes advantage 
of the nonstandardness of Taproot transactions in current Bitcoin Core releases may suffice.


4) Similar arguments apply as the above around the public commitment from pool operators to enforce the Taproot 
consensus rules.


5) Similar arguments apply as the discussion in (1).


Ultimately, the risk which is present in doing flag day activations (and the reason I've argued against them as a 
"default" in the past) are present as well in BIP 8(true), forced-signaling activations where community debate splits 
the consensus rules across nodes. While such a deployment could delay Taproot somewhat, it sidesteps a sufficient amount 
of debate and resulting delay that I wouldn't be surprised if it did not. Further, Taproot has been worked on for many 
years now with no apparent urgency from the many who are suddenly expressing activation urgency, making it more likely 
such urgency is artificial. Those who seek Taproot activation for Bitcoin market reasons should also rejoice - not only 
can the market celebrate the final release of Taproot, but it gets a second celebratory event in 2022 when the 
activation occurs.


Matt


[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html
[2] https://bitcoinops.org/en/schorr-taproot-workshop/
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Exploring alternative activation mechanisms: decreasing threshold

2021-02-27 Thread Matt Corallo via bitcoin-dev
Forced-signaling, or any form of signaling, does not materially change whether a soft fork can be seen to be safe to 
use. Pieter wrote a great post[1] some time ago that goes into depth about the security of soft forks, but, while miners 
can help to avoid the risk of forks, they aren't the determining factor in whether use of a fork should be considered 
safe (ie the fork "has activated").


Not only that, but the signaling methods used in BIP 8/9 (ie the version field in the block header) do not imply 
anything about whether mining pools are running full nodes which enforce the soft fork at all, only whether the pool has 
configured their stratum software to signal or not.
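
Concretely, the signal is just a bit in the header's nVersion field; a sketch of BIP9-style parsing (taproot's 
deployment used bit 2, and nothing here proves the pool's nodes validate the new rules):

    TAPROOT_BIT = 2

    def signals(version: int, bit: int = TAPROOT_BIT) -> bool:
        # Versionbits semantics apply only when the top 3 bits are 001.
        return (version >> 29) == 0b001 and bool((version >> bit) & 1)

    assert signals(0b001 << 29 | 1 << 2)   # signalling header
    assert not signals(0x20000000)         # versionbits header, bit unset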


Ultimately, forced-signaling, or signaling period, are not a substitute for having a broad set of upgraded nodes across 
the network, including an overwhelming majority of economically-active nodes, enforcing the rules of a new fork. As this 
can be difficult to measure, waiting some time after a fork and examining upgrade patterns across the network is important.


Matt

[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012014.html

On 2/27/21 12:55, Luke Dashjr via bitcoin-dev wrote:

This has the same problems BIP149 did: since there is no signalling, it is
ambiguous whether the softfork has activated at all. Both anti-SF and pro-SF
nodes will remain on the same chain, with conflicting perceptions of the
rules, and resolution (if ever) will be chaotic. Absent resolution, however,
there is a strong incentive not to rely on the rules, and thus it may never
get used, and therefore also never resolved.

Additionally, it loses the flexibility of BIP 8 to, after the initial
deployment, move the timeoutheight sooner.

Luke


On Thursday 25 February 2021 22:33:25 Gregorio Guidi via bitcoin-dev wrote:

Hello,

I followed the debate on LOT=false / LOT=true trying to get a grasp of
the balance of risks and advantages. The summary by Aaron van Wirdum [1]
explains well the difficulties to find a good equilibrium... it
concludes that "perhaps, a new possibility will present itself".

Thinking about such a "new possibility" that overcomes the
LOT=true/false dichotomy, I would like to offer the following proposal.
It could be called "decreasing threshold activation".

Decreasing threshold activation works similarly to BIP8, with the
difference that the threshold that triggers the STARTED -> LOCKED_IN
transition starts at 100% for the first retargeting period, and then is
gradually reduced on each period in steps of 24 blocks (~1.2%). More
precisely:

On the 1st period (starting on start_height): if 2016 out of 2016 blocks
signal, the state is changed to LOCKED_IN on the next period (otherwise
stays STARTED)
On the 2nd period: if 1992 out of 2016 blocks signal (~98.8%), the state
transitions to LOCKED_IN on the next period
On the 3rd period: if 1968 out of 2016 blocks signal (~97.6%), the state
transitions to LOCKED_IN on the next period
...
On the 14th period (~6 months): if 1704 out of 2016 blocks signal
(~84.5%), the state transitions to LOCKED_IN on the next period
...
On the 27th period (~12 months): if 1392 out of 2016 blocks signal
(~69.0%), the state transitions to LOCKED_IN on the next period
...
On the 40th period (~18 months): if 1080 out of 2016 blocks signal
(~53.6%), the state transitions to LOCKED_IN on the next period
...
On the 53th period (~24 months): if 768 out of 2016 blocks signal
(~38.1%), the state transitions to LOCKED_IN on the next period
...
On the 66th period (~30 months): if 456 out of 2016 blocks signal
(~22.6%), the state transitions to LOCKED_IN on the next period
...
On the 79th period (~36 months): if 144 out of 2016 blocks signal
(~7.1%), the state transitions to LOCKED_IN on the next period
...
On the 84th and final period (~39 months): if 24 out of 2016 blocks
signal (~1.2%), the state transitions to LOCKED_IN on the next period,
otherwise goes to FAILED

(For reference, I include below a snippet of pseudocode for the
decreasing thresholds in the style of BIP8 and BIP9.)
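
A sketch consistent with the schedule above (Python rather than BIP8/BIP9-style pseudocode; illustrative only):

    BLOCKS_PER_PERIOD = 2016
    STEP = 24        # threshold decrease per retargeting period
    N_PERIODS = 84

    def signal_threshold(period: int) -> int:
        """Signalling blocks required in the given 1-based STARTED
        period for the STARTED -> LOCKED_IN transition."""
        assert 1 <= period <= N_PERIODS
        return BLOCKS_PER_PERIOD - STEP * (period - 1)

    # Reproduce the schedule quoted above:
    for p in (1, 2, 3, 14, 27, 40, 53, 66, 79, 84):
        t = signal_threshold(p)
        print(f"period {p:2d}: {t:4d}/2016 (~{t / BLOCKS_PER_PERIOD:.1%})")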

Here are the main features and advantages of this approach:

1. It is relatively conservative at the beginning: for activation to
happen in the first year, it requires a clear majority of signaling
hashrate, indicating that the activation is relatively safe. Only later
the threshold starts to move towards "unsafe" territory, accepting the
tradeoff of less support from existing hashrate in exchange for ensuring
that the activation eventually happens.

2. Like LOT=true, the activation will always occur in the end (except in
the negligible case where less than 1.2% of hashrate supports it).

3. This approach is quite easy to implement, in particular it avoids the
extra code to deal with the MUST_SIGNAL period.

4. There are no parameters to set (except startheight). I am a KISS fan,
so this is a plus for me, making the activation mechanism robust and
predictable with less chance for users to shoot themselves in the 

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-22 Thread Matt Corallo via bitcoin-dev


> On Feb 22, 2021, at 05:16, Anthony Towns wrote:
> 
> If a lockinontimeout=true node is requesting compact blocks from a
> lockinontimeout=false node during a chainsplit in the MUST_SIGNAL phase,
> I think that could result in a ban.
> 
>> More importantly, nodes on both sides of the fork need to find each other. 
> 
> (If there was going to be an ongoing fork there'd be bigger things to
> worry about...)

I think it should be clear that a UASF-style command line option to allow 
consensus rule changes in the node in the short term, immediately before a fork, 
carries some risk of a fork, even if I agree it may not persist over months. We 
can't simply ignore that.

> I think the important specific case of this is something like "if a chain
> where taproot is impossible to activate is temporarily the most work,
> miners with lockinontimeout=true need to be well connected so they don't
> end up competing with each other while they're catching back up".

Between this and your above point, I think we probably agree - there is 
material  technical complexity hiding behind a “change the consensus rules“ 
option. Given it’s not a critical feature by any means, putting resources into 
fixing these issues probably isn’t worth it.

Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-21 Thread Matt Corallo via bitcoin-dev
Hmm, indeed, I may have missed that you can skip the headers issues by not 
persisting them, though there are other follow-on effects that are concerning 
and I think still make my point valid.

A node feeding you invalid headers is (or at least used to be) cause for a ban - is that 
information still persisted? More importantly, nodes on both sides of the fork 
need to find each other. There’s not a great way to do that without forking the 
address database, DNS seeds and defining a new protocol magic.

Matt

> On Feb 22, 2021, at 00:16, Anthony Towns wrote:
> 
> On Fri, Feb 19, 2021 at 12:48:00PM -0500, Matt Corallo via bitcoin-dev wrote:
>> It was pointed out to me that this discussion is largely moot as the
>> software complexity for Bitcoin Core to ship an option like this is likely
>> not practical/what people would wish to see.
>> Bitcoin Core does not have infrastructure to handle switching consensus
>> rules with the same datadir - after running with uasf=true for some time,
>> valid blocks will be marked as invalid, 
> 
> I don't think this is true? With the current proposed bip8 code,
> lockinontimeout=true will cause headers to be marked as invalid, and
> won't process the block further. If a node running lockinontimeout=true
> accepts the header, then it will apply the same consensus rules as a
> lockinontimeout=false node.
> 
> I don't think an invalid header will be added to the block index at all,
> so a node restart should always cleanly allow it to be reconsidered.
> 
> The test case in
> 
> https://github.com/bitcoin/bitcoin/pull/19573/commits/bd8517135fc839c3332fea4d9c8373b94c8c9de8
> 
> tests that a node that had rejected a chain due to lockinontimeout=true
> will reorg to that chain after being restarted as a byproduct of the way
> it tests different cases (the nodes set a new startheight, but retain
> their lockinontimeout settings).
> 
> 
> (I think with the current bip8 code, if you switch from
> lockinontimeout=false to lockinontimeout=true and the tip of the current
> most work chain is after the timeoutheight and did not lockin, then you
> will continue following that chain until a taproot-invalid transaction
> is included, rather than immediately reorging to a shorter chain that
> complies with the lockinontimeout=true rules)
> 
> Cheers,
> aj
> 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-21 Thread Matt Corallo via bitcoin-dev
I don’t think “some vocal users are going to threaten to fork themselves off” 
is good justification for technical decisions. It’s important to communicate 
and for everyone to agree/understand that a failed BIP 8/9 activation, in the 
scenario people are worried about, is not the end of the story for Taproot 
activation. If it is clear that Taproot has broad consensus but some miners 
failed to upgrade in time (as it presumably would be), a flag day activation 
seems merited and I’m not sure anyone has argued against this. That said, 
forced-signaling via a UASF/BIP8(true)-style fork carries material additional 
risk that a classic flag-day activation does not, so let’s not optimize for 
something like that.

Matt

> On Feb 21, 2021, at 08:26, Ariel Lorenzo-Luaces via bitcoin-dev 
>  wrote:
> 
> 
> What would be the tradeoffs of a BIP8(false, ∞) option? That would remove 
> some of the concerns of having to coordinate a UASF with an approaching 
> deadline.
> 
> Cheers
> Ariel Lorenzo-Luaces
>> On Feb 19, 2021, at 6:55 PM, ZmnSCPxj via bitcoin-dev 
>>  wrote:
>> Good morning list,
>> 
>>>  It was pointed out to me that this discussion is largely moot as the 
>>> software complexity for Bitcoin Core to ship an
>>>  option like this is likely not practical/what people would wish to see.
>>> 
>>>  Bitcoin Core does not have infrastructure to handle switching consensus 
>>> rules with the same datadir - after running with
>>>  uasf=true for some time, valid blocks will be marked as invalid, and 
>>> additional development would need to occur to
>>>  enable switching back to uasf=false. This is complex, critical code to get 
>>> right, and the review and testing cycles
>>>  needed seem to be not worth it.
>> 
>> Without implying anything else, this can be worked around by a user 
>> maintaining two `datadir`s and running two clients.
>> This would have an "external" client running an LOT=X (where X is whatever 
>> the user prefers) and an "internal" client that is at most 0.21.0, which 
>> will not impose any LOT rules.
>> The internal client then uses `connect=` directive to connect locally to the 
>> external client and connects only to that client, using it as a firewall.
>> The external client can be run pruned in order to reduce diskspace resource 
>> usage (the internal client can remain unpruned if that is needed by the 
>> user, e.g. for LN implementations that need to look up arbitrary 
>> short-channel-ids).
>> Bandwidth usage should be same since the internal client only connects to 
>> the external client and the OS should optimize that case.
>> CPU usage is doubled, though.
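>> 
>> A minimal sketch of such a setup (paths, port, and option values are purely 
>> illustrative):
>> 
>>     # external client: ~/.bitcoin-external/bitcoin.conf (the LOT=X build)
>>     prune=550            # pruned, to cut disk usage
>>     port=8444            # listen locally on a non-default port
>> 
>>     # internal client: ~/.bitcoin-internal/bitcoin.conf (<= 0.21.0)
>>     connect=127.0.0.1:8444   # only ever talk to the external client
>>     listen=0                 # accept no inbound connections
>> 
>> with each daemon started under its own -datadir so the two sets of state 
>> never mix.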
>> 
>> (the general idea came from gmax, just to be clear, though the below use is 
>> from me)
>> 
>> Then the user can select LOT=C or LOT=!C (where C is whatever Bitcoin Core 
>> ultimately ships with) on the external client based on the user preferences.
>> 
>> If Taproot is not MASF-activated and LOT=!U is what dominates later (where U 
>> is whatever the user decided on), the user can decide to just destroy the 
>> external node and connect the internal node directly to the network 
>> (optionally upgrading the internal node to LOT=!U) as a way to "change their 
>> mind in view of the economy".
>> The internal node will then follow the dominant chain.
>> 
>> 
>> Regards,
>> ZmnSCPxj
>> 
>>> 
>>>  Instead, the only practical way to ship such an option would be to treat 
>>> it as a separate chain (the same way regtest,
>>>  testnet, and signet are treated), including its own separate datadir and 
>>> the like.
>>> 
>>>  Matt
>>> 
>>>>  On 2/19/21 09:13, Matt Corallo via bitcoin-dev wrote:
>>>> 
>>>>  (Also in response to ZMN...)
>>>>  Bitcoin Core has a long-standing policy of not shipping options which 
>>>> shoot yourself in the foot. I’d be very disappointed if that changed now. 
>>>> People are of course more than welcome to run such software themselves, 
>>>> but I anticipate the loud minority on Twitter and here aren’t processing 
>>>> enough transactions or throwing enough financial weight behind their 
>>>> decision for them to do anything but just switch back if they find 
>>>> themselves on a chain with no blocks.
>>>>  There’s nothing we can (or should) do to prevent people from threatening 
>>>> to (and possibly) forking themselves off of bitcoin, but that doesn’t mean 
>>>> we should encourage it either. Th

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-19 Thread Matt Corallo via bitcoin-dev

(off-list)

Your email client didn't thread correctly, so I'm not sure if you saw my responses to Adam's email, but note that there 
is no such thing as "All that must be done" here - supporting multiple, different consensus rules for a given chain is 
a nontrivial undertaking in Bitcoin Core from a software perspective. The only practical way is to just treat it as a 
different chain - which, in practice, it could be.


One group running LOT=true and one running LOT=false results in two Bitcoins, and the software would need to be able to 
handle that (and, presumably, allow users to switch between chains).


Matt

On 2/19/21 17:12, Matt Hill via bitcoin-dev wrote:

Good day all, this is my first post to this mailing list. Per Adam's comment 
below:

 > given there are clearly people of both views, or for now don't care
but might later, it would minimally be friendly and useful if
bitcoin-core has a LOT=true option - and that IMO goes some way to
avoid the assumptive control via defaults.

Both here and elsewhere, the debate taking place is around the manner of Taproot activation, not whether or not Taproot 
should be activated. The latter seems to have widespread support. Given this favorable environment, it seems to me this 
is an incredible opportunity for the developer contingency to "take the high road" while also minimizing time to Taproot 
activation using political incentives. By offering power on the left hand to miners and power on the right to users, 
neither of whom is expressing disapproval of activation, but both of whom are able to activate without the consent of 
the other, both are incentivized to signal activation as quickly as possible to emerge as the "group that did it". All 
that must be done is to include a LOT=true option to Bitcoin Core that carries a default of LOT=false. Miners can 
activate at any time, users can signal their intent to activate should miners renege, and developers emerge as 
politically neutral in the eyes of both.


Extrapolating a bit, I contend this expanded agency of full node operatorship may result in more users running a full 
node, which is good and healthy. From a miner's point of view, more full nodes only increases the likelihood of future 
UASFs, and so they are even further incentivized to expedite Taproot activation. Perhaps this is a stretch, perhaps not.


To summarize: (1) this positions developers as neutral facilitators who deferred power to the other contingencies; (2) 
we may see a rise in the popularity of running a full node and the number of full node operators; (3) miners are 
incentivized to activate quickly to avoid being perceived as the "bad guys" and to avoid the spread of full nodes; and 
(4) even if miners do not activate, users can organize a UASF in a grass-roots way.


Matt Hill

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-19 Thread Matt Corallo via bitcoin-dev
It was pointed out to me that this discussion is largely moot as the software complexity for Bitcoin Core to ship an 
option like this is likely not practical/what people would wish to see.


Bitcoin Core does not have infrastructure to handle switching consensus rules with the same datadir - after running with 
uasf=true for some time, valid blocks will be marked as invalid, and additional development would need to occur to 
enable switching back to uasf=false. This is complex, critical code to get right, and the review and testing cycles 
needed seem to be not worth it.


Instead, the only practical way to ship such an option would be to treat it as a separate chain (the same way regtest, 
testnet, and signet are treated), including its own separate datadir and the like.


Matt

On 2/19/21 09:13, Matt Corallo via bitcoin-dev wrote:

(Also in response to ZMN...)

Bitcoin Core has a long-standing policy of not shipping options which shoot 
yourself in the foot. I’d be very disappointed if that changed now. People are 
of course more than welcome to run such software themselves, but I anticipate 
the loud minority on Twitter and here aren’t processing enough transactions or 
throwing enough financial weight behind their decision for them to do anything 
but just switch back if they find themselves on a chain with no blocks.

There’s nothing we can (or should) do to prevent people from threatening to 
(and possibly) forking themselves off of bitcoin, but that doesn’t mean we 
should encourage it either. The work Bitcoin Core maintainers and developers do 
is to recommend courses of action which they believe have reasonable levels of 
consensus and are technically sound. Luckily, there’s strong historical 
precedent for people deciding to run other software around forks, so 
misinterpretation is not very common (just like there’s strong historical 
precedent for miners not unilaterally deciding forks in the case of Segwit).

Matt


On Feb 19, 2021, at 07:08, Adam Back  wrote:

would dev consensus around releasing LOT=false be considered as "developers forcing 
their views on users"?


given there are clearly people of both views, or for now don't care
but might later, it would minimally be friendly and useful if
bitcoin-core has a LOT=true option - and that IMO goes some way to
avoid the assumptive control via defaults.



Otherwise it could be read as saying "developers on average
disapprove, but if you, the market disagree, go figure it out for
yourself" which is not a good message for being defensive and avoiding
mis-interpretation of code repositories or shipped defaults as
"control".



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-19 Thread Matt Corallo via bitcoin-dev
(Also in response to ZMN...)

Bitcoin Core has a long-standing policy of not shipping options which shoot 
yourself in the foot. I’d be very disappointed if that changed now. People are 
of course more than welcome to run such software themselves, but I anticipate 
the loud minority on Twitter and here aren’t processing enough transactions or 
throwing enough financial weight behind their decision for them to do anything 
but just switch back if they find themselves on a chain with no blocks.

There’s nothing we can (or should) do to prevent people from threatening to 
(and possibly) forking themselves off of bitcoin, but that doesn’t mean we 
should encourage it either. The work Bitcoin Core maintainers and developers do 
is to recommend courses of action which they believe have reasonable levels of 
consensus and are technically sound. Luckily, there’s strong historical 
precedent for people deciding to run other software around forks, so 
misinterpretation is not very common (just like there’s strong historical 
precedent for miners not unilaterally deciding forks in the case of Segwit).

Matt

> On Feb 19, 2021, at 07:08, Adam Back  wrote:
>> would dev consensus around releasing LOT=false be considered as "developers 
>> forcing their views on users"?
> 
> given there are clearly people of both views, or for now don't care
> but might later, it would minimally be friendly and useful if
> bitcoin-core has a LOT=true option - and that IMO goes some way to
> avoid the assumptive control via defaults.

> Otherwise it could be read as saying "developers on average
> disapprove, but if you, the market disagree, go figure it out for
> yourself" which is not a good message for being defensive and avoiding
> mis-interpretation of code repositories or shipped defaults as
> "control".


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-18 Thread Matt Corallo via bitcoin-dev
This is absolutely the case, however note that the activation method itself is consensus code which executes as a part 
of a fork, and one which deserves as much scrutiny as anything else. While taproot is a model of how a soft-fork should 
be designed, this doesn't imply anything about the consensus code which represents the activation thereof.


Hence all the debate around activation - ultimately it's also defining a fork, and given the politics around it, one 
which almost certainly carries significantly more risk than Taproot.


Note that I don't believe anyone is advocating for "try to activate, and if it fails, move on". Various people have 
various views on how conservative and timelines for what to do at that point, but I believe most in this discussion are 
OK with flag-day-based activation (given some level of care) if it becomes clear Taproot is supported by a vast majority 
of Bitcoin users and is only not activating due to lagging miner upgrades.


Matt

On 2/18/21 10:04, Keagan McClelland wrote:

Hi all,

I think it's important for us to consider what is actually being considered for 
activation here.

The designation of "soft fork" is accurate but I don't think it adequately conveys how non-intrusive a change like this 
is. All that taproot does (unless I'm completely missing something) is imbue a previously undefined script version with 
actual semantics. In order for a chain reorg to take place it would mean that someone would have to have a use case for 
that script version today. This is something I think that we can easily check by digging through the UTXO set or 
history. If anyone is using that script version, we absolutely should not be using it, but that doesn't mean that we 
can't switch to a script version that no one is actually using.


If no one is even attempting to use the script version, then the change has no effect on whether a chain split occurs 
because there is simply no block that contains a transaction that only some of the network will accept.


Furthermore, I don't know how Bitcoin can stand the test of time if we allow developers who rely on "undefined behavior" 
(which the taproot script version presently is) to exert tremendous influence over what code does or does not get run. 
This isn't a soft fork that makes some particular UTXO's unspendable. It isn't one that bans miners from collecting 
fees. It is a change that means that certain "always accept" transactions actually have real conditions you have to 
meet. I can't imagine a less intrusive change.


On the other hand, choosing to let LOT=false be a somewhat final call sets a very real precedent that 10% of what I estimate 
to be 1% of bitcoin users can effectively block any change from here on forward. At that point we are saying that miners 
are in control of network consensus in ways they have not been up until now. I don't think it is a desirable outcome to 
let ~0.1% of the network block /non-intrusive/ changes that the rest of the network wants.


I can certainly live with an L=F attempt as a way to punt on the discussion, maybe the activation happens and this will 
all be fine. But if it doesn't, I hardly think that users of Bitcoin are just going to be like "well, guess that's it 
for Taproot". I have no idea what ensues at that point, but probably another community led UASF movement.


I wasn't super well educated on this stuff back in '17 when Segwit went down, as I was new at that time, so if I'm 
missing something please say so. But from my point of view, we can't treat all soft forks as equal.


Keagan

On Thu, Feb 18, 2021 at 7:43 AM Matt Corallo via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:


We've had several softforks in Bitcoin which, through the course of their 
activation, had a several-block reorg. That
should be indication enough that we need to very carefully consider 
activation to ensure we reduce the risk of that as
much as absolutely possible. Again, while I think Taproot is a huge 
improvement and am looking forward to being able to
use it, getting unlucky and hitting a 4-block reorg that happens to include 
a double-spend and some PR around an
exchange losing millions would be worse than having Taproot is good.

Matt

On 2/18/21 09:26, Michael Folkson wrote:
 > Thanks for your response Matt. It is a fair challenge. There is always 
going to be an element of risk with soft
forks,
 > all we can do is attempt to minimize that risk. I would argue that risk 
has been minimized for Taproot.
 >
 > You know (better than I do in fact) that Bitcoin (and layers built on 
top of it) greatly benefit from upgrades
such as
 > Taproot. To say we shouldn't do Taproot or any future soft forks because 
there is a small but real risk of chain
splits
 > I think is shortsighted. Indeed I think even if we collectively decided 
n

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-18 Thread Matt Corallo via bitcoin-dev
To ensure we're on the same page, here - I'm not advocating we give up on Taproot. Indeed, without having dug deep into 
the issue, my overall impression is that Knots has a tiny transaction-processing userbase and it likely isn't worth 
giving deep thought to whether it forks itself off from the network or not. My point is that, if it were the case that 
various implementations of Bitcoin's consensus that have material userbases were to release either a configurable 
consensus mechanism (without incredible care being given to it, not just a "we can't decide, whatever" argument) or a 
different consensus, we'd be much, much better off not having Taproot at all.


Matt

On 2/18/21 09:53, Matt Corallo via bitcoin-dev wrote:

You say "short term PR", I say "risking millions of user dollars".

On 2/18/21 09:51, Michael Folkson wrote:
 > getting unlucky and hitting a 4-block reorg that happens to include a double-spend and some PR around an exchange 
losing millions would be worse than having Taproot is good.


We are at the point where an upgrade that confers significant long term benefits for the whole ecosystem is not as 
important as bad short term PR? That is a depressing outlook if that is what you believe.


Even in that worst case scenario exchanges should not lose money if they are 
competent and are able to manage that risk.

On Thu, Feb 18, 2021 at 2:42 PM Matt Corallo <lf-li...@mattcorallo.com> wrote:

    We've had several softforks in Bitcoin which, through the course of their 
activation, had a several-block reorg. That
    should be indication enough that we need to very carefully consider activation to ensure we reduce the risk of 
that as
    much as absolutely possible. Again, while I think Taproot is a huge improvement and am looking forward to being 
able to

    use it, getting unlucky and hitting a 4-block reorg that happens to include 
a double-spend and some PR around an
    exchange losing millions would be worse than having Taproot is good.

    Matt

    On 2/18/21 09:26, Michael Folkson wrote:
 > Thanks for your response Matt. It is a fair challenge. There is always 
going to be an element of risk with soft
    forks,
 > all we can do is attempt to minimize that risk. I would argue that risk 
has been minimized for Taproot.
 >
 > You know (better than I do in fact) that Bitcoin (and layers built on 
top of it) greatly benefit from upgrades
    such as
 > Taproot. To say we shouldn't do Taproot or any future soft forks because 
there is a small but real risk of chain
    splits
 > I think is shortsighted. Indeed I think even if we collectively decided not to do any future soft fork upgrades 
ever

 > again on this mailing list that wouldn't stop soft fork attempts from 
other people in future.
 >
 > I don't think there is anything else we can do to minimize that risk for 
the Taproot soft fork at this point
    though I'm
 > open to ideas. To reiterate that risk will never be zero. I don't think 
I see Bitcoin as fragile as you seem to
    (though
 > admittedly you have a much better understanding than me of what happened 
in 2017).
 >
 > The likely scenario for the Taproot soft fork is LOT turns out to be entirely irrelevant and miners activate 
Taproot

 > before it becomes relevant. And even the unlikely worst case scenario 
would only cause short term disruption and
 > wouldn't kill Bitcoin long term.
 >
 > On Thu, Feb 18, 2021 at 2:01 PM Matt Corallo <lf-li...@mattcorallo.com> wrote:
 >
 >     If the eventual outcome is that different implementations (that have material *transaction processing* 
userbases,

 >     and I’m not sure to what extent that’s true with Knots) ship 
different consensus rules, we should stop here
    and not
 >     activate Taproot. Seriously.
 >
 >     Bitcoin is a consensus system. The absolute worst outcome at all 
possible is to have it fall out of consensus.
 >
 >     Matt
 >
 >>     On Feb 18, 2021, at 08:11, Michael Folkson via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
 >>
 >>     
 >>     Right, that is one option. Personally I would prefer a Bitcoin Core release sets LOT=false (based on what 
I have

 >>     heard from Bitcoin Core contributors) and a community effort 
releases a version with LOT=true. I don't think
    users
 >>     should be forced to choose something they may have no context on 
before they are allowed to use Bitcoin Core.
 >>
 >>     My current understanding is that roasb

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-18 Thread Matt Corallo via bitcoin-dev

You say "short term PR", I say "risking millions of user dollars".

On 2/18/21 09:51, Michael Folkson wrote:
 > getting unlucky and hitting a 4-block reorg that happens to include a double-spend and some PR around an exchange 
losing millions would be worse than having Taproot is good.


We are at the point where an upgrade that confers significant long term benefits for the whole ecosystem is not as 
important as bad short term PR? That is a depressing outlook if that is what you believe.


Even in that worst case scenario exchanges should not lose money if they are 
competent and are able to manage that risk.

On Thu, Feb 18, 2021 at 2:42 PM Matt Corallo <lf-li...@mattcorallo.com> wrote:

We've had several softforks in Bitcoin which, through the course of their 
activation, had a several-block reorg. That
should be indication enough that we need to very carefully consider 
activation to ensure we reduce the risk of that as
much as absolutely possible. Again, while I think Taproot is a huge 
improvement and am looking forward to being able to
use it, getting unlucky and hitting a 4-block reorg that happens to include 
a double-spend and some PR around an
exchange losing millions would be worse than having Taproot is good.

Matt

On 2/18/21 09:26, Michael Folkson wrote:
 > Thanks for your response Matt. It is a fair challenge. There is always 
going to be an element of risk with soft
forks,
 > all we can do is attempt to minimize that risk. I would argue that risk 
has been minimized for Taproot.
 >
 > You know (better than I do in fact) that Bitcoin (and layers built on 
top of it) greatly benefit from upgrades
such as
 > Taproot. To say we shouldn't do Taproot or any future soft forks because 
there is a small but real risk of chain
splits
 > I think is shortsighted. Indeed I think even if we collectively decided 
not to do any future soft fork upgrades ever
 > again on this mailing list that wouldn't stop soft fork attempts from 
other people in future.
 >
 > I don't think there is anything else we can do to minimize that risk for 
the Taproot soft fork at this point
though I'm
 > open to ideas. To reiterate that risk will never be zero. I don't think 
I see Bitcoin as fragile as you seem to
(though
 > admittedly you have a much better understanding than me of what happened 
in 2017).
 >
 > The likely scenario for the Taproot soft fork is LOT turns out to be 
entirely irrelevant and miners activate Taproot
 > before it becomes relevant. And even the unlikely worst case scenario 
would only cause short term disruption and
 > wouldn't kill Bitcoin long term.
 >
 > On Thu, Feb 18, 2021 at 2:01 PM Matt Corallo <lf-li...@mattcorallo.com> wrote:
 >
 >     If the eventual outcome is that different implementations (that have 
material *transaction processing* userbases,
 >     and I’m not sure to what extent that’s true with Knots) ship 
different consensus rules, we should stop here
and not
 >     activate Taproot. Seriously.
 >
 >     Bitcoin is a consensus system. The absolute worst outcome at all 
possible is to have it fall out of consensus.
 >
 >     Matt
 >
 >>     On Feb 18, 2021, at 08:11, Michael Folkson via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
 >>
 >>     
 >>     Right, that is one option. Personally I would prefer a Bitcoin Core 
release sets LOT=false (based on what I have
 >>     heard from Bitcoin Core contributors) and a community effort 
releases a version with LOT=true. I don't think
users
 >>     should be forced to choose something they may have no context on 
before they are allowed to use Bitcoin Core.
 >>
 >>     My current understanding is that roasbeef is planning to set 
LOT=false on btcd (an alternative protocol
 >>     implementation to Bitcoin Core) and Luke Dashjr hasn't yet decided 
on Bitcoin Knots.
 >>
 >>
 >>
 >>     On Thu, Feb 18, 2021 at 11:52 AM ZmnSCPxj <zmnsc...@protonmail.com> wrote:
 >>
 >>         Good morning all,
 >>
 >>         > "An activation mechanism is a consensus change like any other 
change, can be contentious like any other
 >>         change, and we must resolve it like any other change. Otherwise we 
risk arriving at the darkest timeline."
 >>         >
 >>         > Who's we here?
 >>         >
 >>         > Release both and let the network decide.
 >>
 >>         A thing that could be done, without mandating either LOT=true 
or LOT=false, would be to have a release that
 >>         requires a 

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-18 Thread Matt Corallo via bitcoin-dev
We've had several softforks in Bitcoin which, through the course of their activation, had a several-block reorg. That 
should be indication enough that we need to very carefully consider activation to ensure we reduce the risk of that as 
much as absolutely possible. Again, while I think Taproot is a huge improvement and am looking forward to being able to 
use it, getting unlucky and hitting a 4-block reorg that happens to include a double-spend and some PR around an 
exchange losing millions would be worse than having Taproot is good.


Matt

On 2/18/21 09:26, Michael Folkson wrote:
Thanks for your response Matt. It is a fair challenge. There is always going to be an element of risk with soft forks, 
all we can do is attempt to minimize that risk. I would argue that risk has been minimized for Taproot.


You know (better than I do in fact) that Bitcoin (and layers built on top of it) greatly benefit from upgrades such as 
Taproot. To say we shouldn't do Taproot or any future soft forks because there is a small but real risk of chain splits 
I think is shortsighted. Indeed I think even if we collectively decided not to do any future soft fork upgrades ever 
again on this mailing list that wouldn't stop soft fork attempts from other people in future.


I don't think there is anything else we can do to minimize that risk for the Taproot soft fork at this point though I'm 
open to ideas. To reiterate that risk will never be zero. I don't think I see Bitcoin as fragile as you seem to (though 
admittedly you have a much better understanding than me of what happened in 2017).


The likely scenario for the Taproot soft fork is LOT turns out to be entirely irrelevant and miners activate Taproot 
before it becomes relevant. And even the unlikely worst case scenario would only cause short term disruption and 
wouldn't kill Bitcoin long term.


On Thu, Feb 18, 2021 at 2:01 PM Matt Corallo <lf-li...@mattcorallo.com> wrote:

If the eventual outcome is that different implementations (that have 
material *transaction processing* userbases,
and I’m not sure to what extent that’s true with Knots) ship different 
consensus rules, we should stop here and not
activate Taproot. Seriously.

Bitcoin is a consensus system. The absolute worst outcome at all possible 
is to have it fall out of consensus.

Matt


On Feb 18, 2021, at 08:11, Michael Folkson via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:


Right, that is one option. Personally I would prefer a Bitcoin Core release 
sets LOT=false (based on what I have
heard from Bitcoin Core contributors) and a community effort releases a 
version with LOT=true. I don't think users
should be forced to choose something they may have no context on before 
they are allowed to use Bitcoin Core.

My current understanding is that roasbeef is planning to set LOT=false on 
btcd (an alternative protocol
implementation to Bitcoin Core) and Luke Dashjr hasn't yet decided on 
Bitcoin Knots.



On Thu, Feb 18, 2021 at 11:52 AM ZmnSCPxj <zmnsc...@protonmail.com> wrote:

Good morning all,

> "An activation mechanism is a consensus change like any other change, 
can be contentious like any other
change, and we must resolve it like any other change. Otherwise we risk 
arriving at the darkest timeline."
>
> Who's we here?
>
> Release both and let the network decide.

A thing that could be done, without mandating either LOT=true or 
LOT=false, would be to have a release that
requires a `taprootlot=1` or `taprootlot=0` and refuses to start if the 
parameter is not set.

This assures everyone that neither choice is being forced on users, and 
instead what is being forced on users,
is for users to make that choice themselves.

Regards,
ZmnSCPxj
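
A minimal sketch of that startup check (illustrative only; the `taprootlot` 
option name is ZmnSCPxj's hypothetical, as above):

    // Refuse to start unless the operator explicitly chooses a LOT value.
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main(int argc, char** argv)
    {
        int lot = -1; // -1 = unset
        for (int i = 1; i < argc; ++i) {
            if (std::strcmp(argv[i], "-taprootlot=1") == 0) lot = 1;
            if (std::strcmp(argv[i], "-taprootlot=0") == 0) lot = 0;
        }
        if (lot == -1) {
            std::fprintf(stderr, "Error: you must set -taprootlot=1 or -taprootlot=0\n");
            return EXIT_FAILURE; // refuse to start: no default is imposed
        }
        std::printf("starting with lockinontimeout=%d\n", lot);
        return EXIT_SUCCESS;
    }
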

>
> On Thu, Feb 18, 2021 at 3:08 AM Michael Folkson via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> > Thanks for your response Ariel. It would be useful if you responded 
to specific points I have made in the
mailing list post or at least quote these ephemeral "people" you speak 
of. I don't know if you're responding
to conversation on the IRC channel or on social media etc.
> >
> > > The argument comes from a naive assumption that users MUST 
upgrade to the choice that is submitted into
code. But in fact this isn't true and some voices in this discussion 
need to be more humble about what users
must or must not run.
> >
> > I personally have never made this assumption. Of course users 
aren't forced to run any particular software
version, quite the opposite. Defaults set in software versions matter 
though as many users won't change them.
> >
> > > Does no one realize that it is a very possible outcome that if 
LOT=true is 

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-18 Thread Matt Corallo via bitcoin-dev
If the eventual outcome is that different implementations (that have material 
*transaction processing* userbases, and I’m not sure to what extent that’s true 
with Knots) ship different consensus rules, we should stop here and not 
activate Taproot. Seriously.

Bitcoin is a consensus system. The absolute worst outcome at all possible is to 
have it fall out of consensus.

Matt

> On Feb 18, 2021, at 08:11, Michael Folkson via bitcoin-dev 
>  wrote:
> 
> 
> Right, that is one option. Personally I would prefer a Bitcoin Core release 
> sets LOT=false (based on what I have heard from Bitcoin Core contributors) 
> and a community effort releases a version with LOT=true. I don't think users 
> should be forced to choose something they may have no context on before they 
> are allowed to use Bitcoin Core. 
> 
> My current understanding is that roasbeef is planning to set LOT=false on 
> btcd (an alternative protocol implementation to Bitcoin Core) and Luke Dashjr 
> hasn't yet decided on Bitcoin Knots.
> 
> 
> 
>> On Thu, Feb 18, 2021 at 11:52 AM ZmnSCPxj  wrote:
>> Good morning all,
>> 
>> > "An activation mechanism is a consensus change like any other change, can 
>> > be contentious like any other change, and we must resolve it like any 
>> > other change. Otherwise we risk arriving at the darkest timeline."
>> >
>> > Who's we here?
>> >
>> > Release both and let the network decide.
>> 
>> A thing that could be done, without mandating either LOT=true or LOT=false, 
>> would be to have a release that requires a `taprootlot=1` or `taprootlot=0` 
>> and refuses to start if the parameter is not set.
>> 
>> This assures everyone that neither choice is being forced on users, and 
>> instead what is being forced on users, is for users to make that choice 
>> themselves.
>> 
>> Regards,
>> ZmnSCPxj
>> 
>> >
>> > On Thu, Feb 18, 2021 at 3:08 AM Michael Folkson via bitcoin-dev 
>> >  wrote:
>> >
>> > > Thanks for your response Ariel. It would be useful if you responded to 
>> > > specific points I have made in the mailing list post or at least quote 
>> > > these ephemeral "people" you speak of. I don't know if you're responding 
>> > > to conversation on the IRC channel or on social media etc.
>> > >
>> > > > The argument comes from a naive assumption that users MUST upgrade to 
>> > > > the choice that is submitted into code. But in fact this isn't true 
>> > > > and some voices in this discussion need to be more humble about what 
>> > > > users must or must not run.
>> > >
>> > > I personally have never made this assumption. Of course users aren't 
>> > > forced to run any particular software version, quite the opposite. 
>> > > Defaults set in software versions matter though as many users won't 
>> > > change them.
>> > >
>> > > > Does no one realize that it is a very possible outcome that if 
>> > > > LOT=true is released there may be only a handful of people that begin 
>> > > > running it while everyone else delays their upgrade (with the very 
>> > > > good reason of not getting involved in politics) and a year later 
>> > > > those handful of people just become stuck at the moment of 
>> > > > MUST_SIGNAL, unable to mine new blocks?
>> > >
>> > > It is a possible outcome but the likely outcome is that miners activate 
>> > > Taproot before LOT is even relevant. I think it is prudent to prepare 
>> > > for the unlikely but possible outcome that miners fail to activate and 
>> > > hence have this discussion now rather than be unprepared for that 
>> > > eventuality. If LOT is set to false in a software release there is the 
>> > > possibility (T2 in 
>> > > https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html)
>> > >  of individuals or a proportion of the community changing LOT to true. 
>> > > In that sense setting LOT=false in a software release appears to be no 
>> > > more safe than LOT=true.
>> > >
>> > > > The result: a wasted year of waiting and a minority of people who 
>> > > > didn't want to be lenient with miners by default.
>> > >
>> > > There is the (unlikely but possible) possibility of a wasted year if LOT 
>> > > is set to false and miners fail to activate. I'm not convinced by this 
>> > > perception that LOT=true is antagonistic to miners. I actually think it 
>> > > offers them clarity on what will happen over a year time period and 
>> > > removes the need for coordinated or uncoordinated community UASF efforts 
>> > > on top of LOT=false.
>> > >
>> > > > An activation mechanism is a consensus change like any other change, 
>> > > > can be contentious like any other change, and we must resolve it like 
>> > > > any other change. Otherwise we risk arriving at the darkest timeline.
>> > >
>> > > I don't know what you are recommending here to avoid "this darkest 
>> > > timeline". Open discussions have occurred and are continuing and in my 
>> > > mailing list post that you responded to **I recommended we propose 
>> > > LOT=false be set in protocol 

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-18 Thread Matt Corallo via bitcoin-dev
Bitcoin is a consensus system. Please let’s not jump to (or even consider) 
options that discourage consensus. We all laughed at (and later academic 
research showed severe deficiencies in) Bitcoin XT’s “emergent consensus” 
nonsense, so why should we start doing things along that line in Bitcoin?

(Resent from the correct email)

Matt

> On Feb 18, 2021, at 06:52, ZmnSCPxj via bitcoin-dev 
>  wrote:
> 
> Good morning all,
> 
>> "An activation mechanism is a consensus change like any other change, can be 
>> contentious like any other change, and we must resolve it like any other 
>> change. Otherwise we risk arriving at the darkest timeline."
>> 
>> Who's we here?
>> 
>> Release both and let the network decide.
> 
> A thing that could be done, without mandating either LOT=true or LOT=false, 
> would be to have a release that requires a `taprootlot=1` or `taprootlot=0` 
> and refuses to start if the parameter is not set.
> 
> This assures everyone that neither choice is being forced on users, and 
> instead what is being forced on users, is for users to make that choice 
> themselves.
> 
> Regards,
> ZmnSCPxj
> 
>> 
>>> On Thu, Feb 18, 2021 at 3:08 AM Michael Folkson via bitcoin-dev 
>>>  wrote:
>>> 
>>> Thanks for your response Ariel. It would be useful if you responded to 
>>> specific points I have made in the mailing list post or at least quote 
>>> these ephemeral "people" you speak of. I don't know if you're responding to 
>>> conversation on the IRC channel or on social media etc.
>>> 
 The argument comes from a naive assumption that users MUST upgrade to the 
 choice that is submitted into code. But in fact this isn't true and some 
 voices in this discussion need to be more humble about what users must or 
 must not run.
>>> 
>>> I personally have never made this assumption. Of course users aren't forced 
>>> to run any particular software version, quite the opposite. Defaults set in 
>>> software versions matter though as many users won't change them.
>>> 
 Does no one realize that it is a very possible outcome that if LOT=true is 
 released there may be only a handful of people that begin running it while 
 everyone else delays their upgrade (with the very good reason of not 
 getting involved in politics) and a year later those handful of people 
 just become stuck at the moment of MUST_SIGNAL, unable to mine new blocks?
>>> 
>>> It is a possible outcome but the likely outcome is that miners activate 
>>> Taproot before LOT is even relevant. I think it is prudent to prepare for 
>>> the unlikely but possible outcome that miners fail to activate and hence 
>>> have this discussion now rather than be unprepared for that eventuality. If 
>>> LOT is set to false in a software release there is the possibility (T2 in 
>>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html)
>>>  of individuals or a proportion of the community changing LOT to true. In 
>>> that sense setting LOT=false in a software release appears to be no more 
>>> safe than LOT=true.
>>> 
 The result: a wasted year of waiting and a minority of people who didn't 
 want to be lenient with miners by default.
>>> 
>>> There is the (unlikely but possible) possibility of a wasted year if LOT is 
>>> set to false and miners fail to activate. I'm not convinced by this 
>>> perception that LOT=true is antagonistic to miners. I actually think it 
>>> offers them clarity on what will happen over a year time period and removes 
>>> the need for coordinated or uncoordinated community UASF efforts on top of 
>>> LOT=false.
>>> 
 An activation mechanism is a consensus change like any other change, can 
 be contentious like any other change, and we must resolve it like any 
 other change. Otherwise we risk arriving at the darkest timeline.
>>> 
>>> I don't know what you are recommending here to avoid "this darkest 
>>> timeline". Open discussions have occurred and are continuing and in my 
>>> mailing list post that you responded to **I recommended we propose 
>>> LOT=false be set in protocol implementations such as Bitcoin Core**. I do 
>>> think this apocalyptic language isn't particularly helpful. In an open 
>>> consensus system discussion is healthy, we should prepare for bad or worst 
>>> case scenarios in advance and doing so is not antagonistic or destructive. 
>>> Mining pools have pledged support for Taproot but we don't build secure 
>>> systems based on pledges of support, we build them to minimize trust in any 
>>> human actors. We can be grateful that people like Alejandro have worked 
>>> hard on taprootactivation.com (and this effort has informed the discussion) 
>>> without taking pledges of support as cast iron guarantees.
>>> 
>>> TL;DR It sounds like you agree with my recommendation to set LOT=false in 
>>> protocol implementations in my email :)
>>> 
 On Thu, Feb 18, 2021 at 5:43 AM Ariel Lorenzo-Luaces 
  wrote:
>>> 
 Something 

Re: [bitcoin-dev] Proposal for new "disabletx" p2p message

2021-01-13 Thread Matt Corallo via bitcoin-dev
So we’d kill two birds with one stone if all bloom support was dropped. As far 
as I understand, precomputed filters are now provided via p2p connections as 
well.

Matt

> On Jan 14, 2021, at 00:33, Anthony Towns  wrote:
> 
> On Wed, Jan 13, 2021 at 01:40:03AM -0500, Matt Corallo via bitcoin-dev wrote:
>> Out of curiosity, was the interaction between fRelay and bloom disabling ever
>> specified? ie if you aren’t allowed to enable bloom filters on a connection 
>> due
>> to resource constraints/new limits, is it ever possible to “set” fRelay 
>> later?
> 
> (Maybe I'm missing something, but...)
> 
> In the current bitcoin implementation, no -- you either set
> m_tx_relay->fRelayTxes to true via the VERSION message (either explicitly
> or by not setting fRelay), or you enable it later with FILTERLOAD or
> FILTERCLEAR, both of which will cause a disconnect if bloom filters
> aren't supported. Bloom filter support is (optionally?) indicated via
> a service bit (BIP 111), so you could assume you know whether they're
> supported as soon as you receive the VERSION line.
> 
> fRelay is specified in BIP 37 as:
> 
>  | 1 byte || fRelay || bool || If false then broadcast transactions will
>  not be announced until a filter{load,add,clear} command is received. If
>  missing or true, no change in protocol behaviour occurs.
> 
> BIP 60 defines the field as "relay" and references BIP 37. Don't think
> it's referenced in any other bips.
> 
> Cheers,
> aj
> 
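
To make that interaction concrete, a toy model (not Bitcoin Core's actual 
code; the types and names here are invented):

    // fRelay=false in VERSION leaves tx announcements off; FILTERLOAD or
    // FILTERCLEAR turn them back on, but both disconnect the peer if bloom
    // filter support (the BIP 111 service bit) isn't offered.
    #include <string>

    struct Peer {
        bool relay_txes = false;   // initialized from VERSION's fRelay field
        bool disconnected = false;
    };

    void HandleMessage(Peer& peer, const std::string& command, bool bloom_offered)
    {
        if (command == "filterload" || command == "filterclear") {
            if (!bloom_offered) { peer.disconnected = true; return; }
            peer.relay_txes = true; // the only way to "set" fRelay later
        }
    }
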
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal for new "disabletx" p2p message

2021-01-12 Thread Matt Corallo via bitcoin-dev
Out of curiosity, was the interaction between fRelay and bloom disabling ever 
specified? ie if you aren’t allowed to enable bloom filters on a connection due 
to resource constraints/new limits, is it ever possible to “set” fRelay later?

Matt

> On Jan 6, 2021, at 11:35, Suhas Daftuar via bitcoin-dev 
>  wrote:
> 
> 
> Hi,
> 
> I'm proposing the addition of a new, optional p2p message to allow peers to 
> communicate that they do not want to send or receive (loose) transactions for 
> the lifetime of a connection. 
> 
> The goal of this message is to help facilitate connections on the network 
> over which only block-related data (blocks/headers/compact blocks/etc) are 
> relayed, to create low-resource connections that help protect against 
> partition attacks on the network.  In particular, by adding a network message 
> that communicates that transactions will not be relayed for the life of the 
> connection, we ease the implementation of software that could have increased 
> inbound connection limits for such peers, which in turn will make it easier 
> to add additional persistent block-relay-only connections on the network -- 
> strengthening network security for little additional bandwidth.
> 
> Software has been deployed for over a year now which makes such connections, 
> using the BIP37/BIP60 "fRelay" field in the version message to signal that 
> transactions should not be sent initially.  However, BIP37 allows for 
> transaction relay to be enabled later in the connection's lifetime, 
> complicating software that would try to distinguish inbound peers that will 
> never relay transactions from those that might.
> 
> This proposal would add a single new p2p message, "disabletx", which (if used 
> at all) must be sent between version and verack.  I propose that this message 
> is valid for peers advertising protocol version 70017 or higher.  Software is 
> free to implement this BIP or ignore this message and remain compatible with 
> software that does implement it.
> 
> Full text of the proposed BIP is below.
> 
> Thanks,
> Suhas
> 
> ---
> 
> 
>   BIP: XXX
>   Layer: Peer Services
>   Title: Disable transaction relay message
>   Author: Suhas Daftuar 
>   Comments-Summary: No comments yet.
>   Comments-URI:
>   Status: Draft
>   Type: Standards Track
>   Created: 2020-09-03
>   License: BSD-2-Clause
> 
> 
> ==Abstract==
> 
> This BIP describes a change to the p2p protocol to allow a node to tell a peer
> that a connection will not be used for transaction relay, to support
> block-relay-only connections that are currently in use on the network.
> 
> ==Motivation==
> 
> For nearly the past year, software has been deployed[1] which initiates
> connections on the Bitcoin network and sets the transaction relay field
> (introduced by BIP 37 and also defined in BIP 60) to false, to prevent
> transaction relay from occurring on the connection. Additionally, addr 
> messages
> received from the peer are ignored by this software.
> 
> The purpose of these connections is two-fold: by making additional
> low-bandwidth connections on which blocks can propagate, the robustness of a
> node to network partitioning attacks is strengthened.  Additionally, by not
> relaying transactions and ignoring received addresses, the ability of an
> adversary to learn the complete network graph (or a subgraph) is reduced[2],
> which in turn increases the cost or difficulty to an attacker seeking to carry
> out a network partitioning attack (when compared with having such knowledge).
> 
> The low-bandwidth / minimal-resource nature of these connections is currently
> known only by the initiator of the connection; this is because the transaction
> relay field in the version message is not a permanent setting for the lifetime
> of the connection.  Consequently, a node receiving an inbound connection with
> transaction relay disabled cannot distinguish between a peer that will never
> enable transaction relay (as described in BIP 37) and one that will.  
> Moreover,
> the node also cannot determine that the incoming connection will ignore 
> relayed
> addresses; with that knowledge a node would likely choose other peers to
> receive announced addresses instead.
> 
> This proposal adds a new, optional message that a node can send a peer when
> initiating a connection to that peer, to indicate that connection should not 
> be
> used for transaction-relay for the connection's lifetime. In addition, without
> a current mechanism to negotiate whether addresses should be relayed on a
> connection, this BIP suggests that address messages not be sent on links where
> tx-relay has been disabled.
> 
> ==Specification==
> 
> # A new disabletx message is added, which is defined as an empty message 
> where pchCommand == "disabletx".
> # The protocol version of nodes implementing this BIP must be set to 70017 or 
> higher.
> # If a node sets the transaction relay field in the version 
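
As a rough illustration of those rules (a sketch only, not the BIP's reference 
implementation), a receiving node might enforce them like so:

    // "disabletx" is only valid between VERSION and VERACK and only from
    // peers at protocol >= 70017; once accepted, it holds for the
    // connection's lifetime.
    #include <string>

    struct Connection {
        int peer_version = 0;
        bool received_verack = false;
        bool tx_relay_disabled = false;
        bool misbehaving = false;
    };

    void OnMessage(Connection& conn, const std::string& command)
    {
        if (command == "verack") {
            conn.received_verack = true;
        } else if (command == "disabletx") {
            if (conn.peer_version < 70017 || conn.received_verack) {
                conn.misbehaving = true; // out-of-spec use of the message
                return;
            }
            conn.tx_relay_disabled = true; // permanent for this connection
        }
    }
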

Re: [bitcoin-dev] Default Signet, Custom Signets and Resetting Testnet

2020-09-13 Thread Matt Corallo via bitcoin-dev

[resent with correct source, sorry Michael, stupid Apple]

Yes, a “default” signet that regularly reorgs a block or two, is “compatible” with testnet but has a faster 
block target (eg so that it is trivial to mine but still has PoW), and has a freshly-seeded genesis would be a massive step-up 
in testing usability across the space.


I don’t have strong feelings about the multisig policy, but probably something that is at least marginally robust (ie 
2-of-N) and allows valid blocks to select the next block’s signers for key rollovers is probably close enough.


There are various folks with operational experience in the community, so let’s 
not run stuff on DO/AWS/etc, please.

Matt

On 8/29/20 6:14 AM, Michael Folkson via bitcoin-dev wrote:

Hi all

Signet has been announced and discussed previously on the mailing list so I 
won't repeat what Signet is and its motivation.

(For more background we recently had a Socratic Seminar with Kalle Alm and AJ Towns on Signet. Transcript, reading list 
and video are available.)


https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2020-08-19-socratic-seminar-signet/ 



The first (of multiple) Signet PR 18267 in Bitcoin Core is at an advanced stage of review and certainly additional code 
review and testing of that PR is encouraged.


https://github.com/bitcoin/bitcoin/pull/18267 


However there are some meta questions around Signet(s) that are best discussed outside of the Bitcoin Core repo and it 
would be good to ensure everyone's testing needs are being met. I will put forward my initial thoughts on some of these 
questions. These thoughts seem to be aligned with Kalle's and AJ's initial views but they have not reviewed this post 
and they can chime in if they feel I am misrepresenting their perspectives.


1) Should there be one "default" Signet that we use for specific purpose(s) or should we 
"let a thousand ships sail"?

To be clear there will be multiple custom Signets. Even if we wanted to prevent them we couldn't. But is there an 
argument for having a "default" Signet with a network effect? A Signet that a large proportion of the community is drawn 
to using with tooling and support? I would say yes. Especially if we see Signet as a staging ground for testing proposed 
soft fork(s). Otherwise there will be lots of splintered Signet networks all with different combinations of proposed 
soft forks enabled and no network effect around a particular Signet. I think this would be bewildering for say Taproot 
testers to have to choose between Person A's Signet with Taproot enabled and Person B's Signet with Taproot enabled. For 
this to work there would have to be a formal understanding of at what stage a proposed soft fork should be enabled on 
"default" Signet. It would have to be at a sufficiently mature stage (e.g. BIP number allocated, BIP drafted and under 
review, PR open in Bitcoin Core repo under review etc) but early enough so that it can be tested on Signet well in 
advance of being considered for activation on mainnet. This does present challenges if soft forks are enabled on Signet 
and then change/get updated. However there are approaches that AJ in particular is working on to deal with this, one of 
which I have described below.


https://bitcoin.stackexchange.com/questions/98642/can-we-experiment-on-signet-with-multiple-proposed-soft-forks-whilst-maintaining 



2) Assuming there is a "default" Signet how many people and who should have keys to sign each new "default" Signet 
block? If one of these keys is lost or stolen should we reset Signet? Should we plan to reset "default" Signet at 
regular intervals anyway (say every two years)?


Currently it is a 1-of-2 multisig with Kalle Alm and AJ Towns having keys. It was suggested on IRC that there should be 
at least one additional key present in the EU/US timezone so blocks can continue to be mined during an Asia-Pacific 
outage. (Kalle and AJ are both in the Asia-Pacific region). Kalle believes we should keep Signet running indefinitely 
unless we encounter specific problems and personally I think this makes sense.


https://github.com/bitcoin/bitcoin/issues/19787#issuecomment-679160691 
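
For illustration, a custom signet with a 1-of-2 multisig challenge could be 
configured roughly like this (the pubkeys are placeholders, not the real 
default signet's; the hex is OP_1 <pk1> <pk2> OP_2 OP_CHECKMULTISIG):

    # bitcoin.conf -- illustrative only, placeholder keys
    signet=1
    signetchallenge=5121<33-byte-pubkey-1>21<33-byte-pubkey-2>52ae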



3) Kalle has also experienced concern from some in the community that testnet will somehow be replaced by Signet. This 
is not the case. As long as someone out there is mining testnet blocks testnet will continue. However, there is the 
question of whether testnet needs to be reset. It was last reset in 2012 and there are differing accounts on 
whether this is presenting a problem for users of testnet. Assuming Signet is successful there will be less testing on 

Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-21 Thread Matt Corallo via bitcoin-dev
Hmm, could that not be accomplished by simply building this into new messages? eg, send "betterprotocol"; if you see a 
verack and no "betterprotocol" from your peer, send "worseprotocol" before you send your own "verack".
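
That is, roughly (a sketch only; "betterprotocol"/"worseprotocol" are 
hypothetical message names):

    // Decide what, if anything, to send just before our own verack.
    struct Negotiation {
        bool we_offered_better = true;   // we already sent "betterprotocol"
        bool peer_offered_better = false;
        bool use_better = false;
    };

    const char* BeforeOurVerack(Negotiation& n)
    {
        if (n.we_offered_better && !n.peer_offered_better)
            return "worseprotocol";      // explicit fallback to the old protocol
        n.use_better = true;
        return nullptr;                  // both sides offered it; nothing extra
    }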


Matt

On 8/21/20 5:17 PM, Jeremy wrote:
As for an example of where you'd want multi-round, you could imagine a scenario where you have a feature A which gets 
bugfixed by the introduction of feature B, and you don't want to expose that you support A unless you first negotiate B. 
Or if you can negotiate B you should never expose A, but for old nodes you'll still do it if B is unknown to them. An 
example of this would be (were it not already out without a feature negotiation existing) WTXID/TXID relay.


The SYNC primitive simply codifies what order messages should be in and when you're done for a phase of negotiation 
offering something. It can be done without, but then you have to be more careful to broadcast in the correct order and 
it's not clear when/if you should wait for more time before responding.



On Fri, Aug 21, 2020 at 2:08 PM Jeremy <jlru...@mit.edu> wrote:

Actually we already have service bits (which are sadly limited) which allow 
negotiation of non bilateral feature
support, so this would supercede that.
--
@JeremyRubin 



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-21 Thread Matt Corallo via bitcoin-dev
This seems to be pretty overengineered. Do you have a specific use-case in mind for anything more than simply continuing 
the pattern we've been using of sending a message indicating support for a given feature? If we find some in the future, 
we could deploy something like this, though the current proposal makes it possible to do it on a per-feature basis.


The great thing about Suhas' proposal is the diff is about -1/+1 (not including tests), while still getting all the 
flexibility we need. Even better, the code already exists.


Matt

On 8/21/20 3:50 PM, Jeremy wrote:

I have a proposal:

Protocol >= 70016 cease to send or process VERACK, and instead use HANDSHAKEACK, which is completed after feature 
negotiation.


This should make everyone happy/unhappy, as in a new protocol number it's fair game to change these semantics to be 
clear that we're acking more than version.


I don't care about when or where these messages are sequenced overall, it seems to have minimal impact. If I had free 
choice, I slightly agree with Eric that verack should come before feature negotiation, as we want to divorce the idea 
that protocol number and feature support are tied.


But once this is done, we can supplant Verack with HANDSHAKENACK or HANDSHAKEACK to signal success or failure to agree 
on a connection. A NACK reason (version too high/low or an important feature missing) could be optional. Implicit NACK 
would be disconnecting, but is discouraged because a peer doesn't know if it should reconnect or the failure was 
intentional.


--

AJ: I think I generally do prefer to have a FEATURE wrapper as you suggested, or a rule that all messages in this period 
are interpreted as features (and may be redundant with p2p message types -- so you can literally just use the p2p 
message name w/o any data).


I think we would want a semantic (which could be based just on message names, but first-class support would be nice) for 
ACKing that a feature is enabled. This is because a transcript of:


NODE0:
FEATURE A
FEATURE B
VERACK

NODE1:
FEATURE A
VERACK

It remains unclear if Node 1 ignored B because it's an unknown feature, or 
because it is disabled. A transcript like:

NODE0:
FEATURE A
FEATURE B
FEATURE C
ACK A
VERACK

NODE1:
FEATURE A
ACK A
NACK B
VERACK

would make it clear that A and B are known, B is disabled, and C is unknown: C has no support at all; for B, Node 0 should 
accept inbound messages but knows not to send B to Node 1; and A has full bilateral support. Maybe instead it could be a 
pair of messages, FEATURE SEND A and FEATURE RECV A, so we can make the split explicit rather than inferring it from ACK/NACK.
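
To pin down those semantics, here is a minimal sketch (Python, purely illustrative - FEATURE/ACK/NACK are 
the hypothetical messages above, not anything deployed) of how a node might classify each feature it 
offered from the peer's replies:

    def classify_features(offered, peer_acks, peer_nacks):
        """Classify features we offered, given the ACKs/NACKs the peer
        sent before its VERACK."""
        status = {}
        for feature in offered:
            if feature in peer_acks:
                status[feature] = "enabled"         # full bilateral support
            elif feature in peer_nacks:
                status[feature] = "known-disabled"  # peer knows it, turned it off
            else:
                status[feature] = "unknown"         # peer silently ignored it
        return status

    # The second transcript above: Node 0 offers A, B, C; Node 1 ACKs A, NACKs B.
    assert classify_features({"A", "B", "C"}, peer_acks={"A"}, peer_nacks={"B"}) == {
        "A": "enabled", "B": "known-disabled", "C": "unknown"}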



--

I'd also propose that we add a message which is SYNC, which indicates the end of a list of FEATURES and a request to 
send ACKS or NACKS back (which are followed by a SYNC). This allows multi-round negotiation where based on the presence 
of other features, I may expand the set of features I am offering. I think you could do without SYNC, but there are more 
edge cases and the explicitness is nice given that this already introduces future complexity.


This multi-round makes it an actual negotiation rather than a pure announcement system. I don't think it would be used 
much in the near term, but it makes sense to define it correctly now. Build for the future and all...
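
As a sketch of how those SYNC-delimited rounds might compose (again purely illustrative Python; the 
policy functions stand in for node implementations), using the earlier bugfix example of only 
revealing feature A once B has been negotiated:

    def run_negotiation(offer_policy, peer_policy):
        """Alternate SYNC-delimited rounds until neither side offers anything
        new. Each policy maps the counterparty's offers-so-far to the full
        set of features it is willing to offer."""
        mine, theirs = set(), set()
        while True:
            new_mine = offer_policy(theirs) - mine
            new_theirs = peer_policy(mine | new_mine) - theirs
            if not new_mine and not new_theirs:
                return mine, theirs              # both sides sent a final SYNC
            mine |= new_mine                     # our FEATUREs this round, then SYNC
            theirs |= new_theirs                 # their FEATUREs this round, then SYNC

    # Only reveal A once B has been negotiated (the bugfix example above).
    ours = lambda theirs: {"B"} | ({"A"} if "B" in theirs else set())
    peers = lambda mine: {"A", "B"} if "B" in mine else {"B"}
    assert run_negotiation(ours, peers) == ({"A", "B"}, {"A", "B"})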




--
@JeremyRubin 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-21 Thread Matt Corallo via bitcoin-dev
Sure, we could do a new message for negotiation, but there doesn’t seem to be a 
lot of reason for it - using the same namespace for negotiation seems fine too. 
In any case, this is one of those things that doesn’t matter in the slightest, 
and if one person volunteers to write a BIP and code, no reason they shouldn’t 
just decide and be allowed to run with it. Rough consensus and running code, as 
it were :)

Matt


> On Aug 20, 2020, at 22:37, Anthony Towns via bitcoin-dev wrote:
> 
> On Fri, Aug 14, 2020 at 03:28:41PM -0400, Suhas Daftuar via bitcoin-dev 
> wrote:
>> In thinking about the mechanism used there, I thought it would be helpful to
>> codify in a BIP the idea that Bitcoin network clients should ignore unknown
>> messages received before a VERACK.  A draft of my proposal is available here
>> [2].
> 
> Rather than allowing arbitrary messages, maybe it would make sense to
> have a specific feature negotiation message, eg:
> 
>  VERSION ...
>  FEATURE wtxidrelay
>  FEATURE packagerelay
>  VERACK
> 
> with the behaviour being that it's valid only between VERSION and VERACK,
> and it takes a length-prefixed-string giving the feature name, optional
> additional data, and if the feature name isn't recognised the message
> is ignored.
> 
> If we were to support a "polite disconnect" feature like Jeremy suggested,
> it might be easier to do that for a generic FEATURE message, than
> reimplement it for the message proposed by each new feature.
> 
> Cheers,
> aj
> 
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-18 Thread Matt Corallo via bitcoin-dev
There have been post-handshake negotiations implemented for optional messages which are valid 
at the negotiated version. The protocol may be flexible while remaining 
validateable. There is no reason to force a client to accept unknown message 
traffic.
A generalized versioning change can be implemented in or after the handshake. 
The latter is already done on an ad-hoc basis. The former is possible as long 
as the peer’s version is sufficient to be aware of the behavior. This does not 
imply any need to send invalid messages. The verack itself can simply be 
extended with a matrix of feature support. There is no reason to complicate 
negotiation with an additional message(s).
FWIW, bip37 did this poorly, adding a feature field to the version message, 
resulting in bip60. Due to this design, older protocol-validating clients were 
broken. In this case it was message length that was presumed to not be 
validated.
e

On Aug 18, 2020, at 07:59, Matt Corallo via bitcoin-dev wrote:


This sounds like a great idea!

Bitcoin is no longer a homogeneous network of one client - it is many, with 
different features implemented in each. The Bitcoin protocol hasn't (fully) 
evolved to capture that reality. Initially the Bitcoin protocol had a simple 
numerical version field, but that is wholly impractical for any diverse network 
- some clients may not wish to implement every possible new relay mechanic, and 
why should they have to in order to use other new features?

Bitcoin protocol changes have, many times in recent history, been made via new dummy 
"negotiation" messages, which take advantage of the fact that the Bitcoin 
protocol has always expected clients to ignore unknown messages. Given that pattern, it 
makes sense to have an explicit negotiation phase - after version and before verack, just 
send the list of features that you support to negotiate what the connection will be 
capable of. The exact way we do that doesn't matter much, and sending it as a stream of 
messages which each indicate support for a given protocol feature perfectly captures the 
pattern that has been used in several recent network upgrades, keeping consistency.

Matt

On 8/14/20 3:28 PM, Suhas Daftuar via bitcoin-dev wrote:

Hi,
Back in February I posted a proposal for WTXID-based transaction relay[1] (now 
known as BIP 339), which included a proposal for feature negotiation to take 
place prior to the VERACK message being received by each side.  In my email to 
this list, I had asked for feedback as to whether that proposal was 
problematic, and didn't receive any responses.
Since then, the implementation of BIP 339 has been merged into Bitcoin Core, 
though it has not yet been released.
In thinking about the mechanism used there, I thought it would be helpful to 
codify in a BIP the idea that Bitcoin network clients should ignore unknown 
messages received before a VERACK.  A draft of my proposal is available here[2].
I presume that software upgrading past protocol version 70016 was already 
planning to either implement BIP 339, or ignore the wtxidrelay message proposed 
in BIP 339 (if not, then this would create network split concerns in the future 
-- so I hope that someone would speak up if this were a problem).  When we 
propose future protocol upgrades that would benefit from feature negotiation at 
the time of connection, I think it would be nice to be able to use the same 
method as proposed in BIP 339, without even needing to bump the protocol 
version.  So having an understanding that this is the standard of how other 
network clients operate would be helpful.
If, on the other hand, this is problematic for some reason, I look forward to 
hearing that as well, so that we can be careful about how we deploy future p2p 
changes to avoid disruption.
Thanks,
Suhas Daftuar
[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-February/017648.html
[2] https://github.com/sdaftuar/bips/blob/2020-08-generalized-feature-negotiation/bip-p2p-feature-negotiation.mediawiki
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-18 Thread Matt Corallo via bitcoin-dev
There are several cases where a new message has been sent as part of a negotiation without changing the protocol 
version. You may choose to ignore that, but that doesn't mean it isn't an understood and even relied-upon feature of 
the Bitcoin P2P protocol. If you wish to fail connections to new nodes (and risk network splits, as Suhas points out), 
then you may do so, but that doesn't make it a part of the Bitcoin P2P protocol that you must do so. Of course there is 
no "official document" to which we can make a formal appeal, but historical precedent suggests otherwise.


Still, I think we're talking pedantics here, and not in a useful way. Ultimately we need some kind of negotiation which 
is flexible in allowing different software to negotiate different features without a global lock-step version number 
increase. Or, to put it another way, if a feature is fully optional, why should there be a version number increase for 
it - the negotiation of it is independent and a version number only increases confusion over which change "owns" a given 
version number.


I presume you'd support a single message that lists the set of features which a node (optionally) wishes to support on 
the connection. This proposal is fully equivalent to that, merely opting to list them as individual messages rather than 
one message, which is a bit nicer in that they can be handled more independently or by different subsystems, including 
even the message hashing.


Matt

On 8/18/20 12:54 PM, Eric Voskuil wrote:

“Bitcoin protocol has always expected clients to ignore unknown messages”

This is not true. Bitcoin has long implemented version negotiation, which is the opposite expectation. Libbitcoin’s p2p 
protocol implementation immediately drops a peer that sends an invalid message according to the negotiated version. The 
fact that a given client does not validate the protocol does not make it an expectation that the protocol not be validated.


Features can clearly be optional within an actual protocol. There have been post-handshake negotiations implemented for 
optional messages which are valid at the negotiated version. The protocol may be flexible while remaining validateable. 
There is no reason to force a client to accept unknown message traffic.


A generalized versioning change can be implemented in or after the handshake. The latter is already done on an ad-hoc 
basis. The former is possible as long as the peer’s version is sufficient to be aware of the behavior. This does not 
imply any need to send invalid messages. The verack itself can simply be extended with a matrix of feature support. 
There is no reason to complicate negotiation with an additional message(s).


FWIW, bip37 did this poorly, adding a feature field to the version message, resulting in bip60. Due to this design, 
older protocol-validating clients were broken. In this case it was message length that was presumed to not be validated.


e


On Aug 18, 2020, at 07:59, Matt Corallo via bitcoin-dev wrote:

This sounds like a great idea!

Bitcoin is no longer a homogeneous network of one client - it is many, with different features implemented in each. 
The Bitcoin protocol hasn't (fully) evolved to capture that reality. Initially the Bitcoin protocol had a simple 
numerical version field, but that is wholly impractical for any diverse network - some clients may not wish to 
implement every possible new relay mechanic, and why should they have to in order to use other new features?


Bitcoin protocol changes have, many times in recent history, been made via new dummy "negotiation" messages, which 
take advantage of the fact that the Bitcoin protocol has always expected clients to ignore unknown messages. Given 
that pattern, it makes sense to have an explicit negotiation phase - after version and before verack, just send the 
list of features that you support to negotiate what the connection will be capable of. The exact way we do that 
doesn't matter much, and sending it as a stream of messages which each indicate support for a given protocol feature 
perfectly captures the pattern that has been used in several recent network upgrades, keeping consistency.


Matt

On 8/14/20 3:28 PM, Suhas Daftuar via bitcoin-dev wrote:

Hi,
Back in February I posted a proposal for WTXID-based transaction relay[1] (now known as BIP 339), which included a 
proposal for feature negotiation to take place prior to the VERACK message being received by each side.  In my email 
to this list, I had asked for feedback as to whether that proposal was problematic, and didn't receive any responses.

Since then, the implementation of BIP 339 has been merged into Bitcoin Core, 
though it has not yet been released.
In thinking about the mechanism used there, I thought it would be helpful to codify in a BIP the idea that Bitcoin 
network clients should ignore unknown messages received before a VERACK.  A draft of my proposal is available here[2].

Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-18 Thread Matt Corallo via bitcoin-dev

This sounds like a great idea!

Bitcoin is no longer a homogeneous network of one client - it is many, with different features implemented in each. The 
Bitcoin protocol hasn't (fully) evolved to capture that reality. Initially the Bitcoin protocol had a simple numerical 
version field, but that is wholly impractical for any diverse network - some clients may not wish to implement every 
possible new relay mechanic, and why should they have to in order to use other new features?


Bitcoin protocol changes have, many times in recent history, been made via new dummy "negotiation" messages, which take 
advantage of the fact that the Bitcoin protocol has always expected clients to ignore unknown messages. Given that 
pattern, it makes sense to have an explicit negotiation phase - after version and before verack, just send the list of 
features that you support to negotiate what the connection will be capable of. The exact way we do that doesn't matter 
much, and sending it as a stream of messages which each indicate support for a given protocol feature perfectly captures 
the pattern that has been used in several recent network upgrades, keeping consistency.
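
For concreteness, a minimal sketch of that pattern (Python; not Bitcoin Core's actual code, and the 
feature names are arbitrary) - between VERSION and VERACK, known feature messages toggle support and 
unknown ones are ignored, per the draft BIP referenced below:

    KNOWN_FEATURES = {"wtxidrelay"}  # whatever features this node implements

    def negotiate(pre_verack_msgs):
        """Process the message names received after VERSION, up to VERACK."""
        enabled = set()
        for msg in pre_verack_msgs:
            if msg == "verack":
                return enabled       # handshake done; connection configured
            if msg in KNOWN_FEATURES:
                enabled.add(msg)     # peer opted in to a feature we support
            # any other pre-VERACK message is simply ignored
        raise ConnectionError("peer never sent verack")

    # A peer offering a feature we don't know about costs us nothing:
    assert negotiate(["wtxidrelay", "packagerelay", "verack"]) == {"wtxidrelay"}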


Matt

On 8/14/20 3:28 PM, Suhas Daftuar via bitcoin-dev wrote:

Hi,

Back in February I posted a proposal for WTXID-based transaction relay[1] (now known as BIP 339), which included a 
proposal for feature negotiation to take place prior to the VERACK message being received by each side.  In my email to 
this list, I had asked for feedback as to whether that proposal was problematic, and didn't receive any responses.


Since then, the implementation of BIP 339 has been merged into Bitcoin Core, 
though it has not yet been released.

In thinking about the mechanism used there, I thought it would be helpful to codify in a BIP the idea that Bitcoin 
network clients should ignore unknown messages received before a VERACK.  A draft of my proposal is available here[2].


I presume that software upgrading past protocol version 70016 was already planning to either implement BIP 339, or 
ignore the wtxidrelay message proposed in BIP 339 (if not, then this would create network split concerns in the future 
-- so I hope that someone would speak up if this were a problem).  When we propose future protocol upgrades that would 
benefit from feature negotiation at the time of connection, I think it would be nice to be able to use the same method 
as proposed in BIP 339, without even needing to bump the protocol version.  So having an understanding that this is the 
standard of how other network clients operate would be helpful.


If, on the other hand, this is problematic for some reason, I look forward to hearing that as well, so that we can be 
careful about how we deploy future p2p changes to avoid disruption.


Thanks,
Suhas Daftuar

[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-February/017648.html

[2] https://github.com/sdaftuar/bips/blob/2020-08-generalized-feature-negotiation/bip-p2p-feature-negotiation.mediawiki



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP 118 and SIGHASH_ANYPREVOUT

2020-08-10 Thread Matt Corallo via bitcoin-dev
I was assuming, largely, that Bitcoin Core will eventually get what you describe here (which is generally termed 
"package relay", implying we relay, and process, groups of transactions as one).


What we'd need for SIGHASH_ANYPREVOUT is a relay network that isn't just smart about fee calculation, but can actually 
rewrite the transactions themselves before passing them on to a local bitcoind.


eg such a network would need to be able to relay
"I have transaction A, with one input, which is valid for any output-idx-0 in a 
transaction spending output B".
and then have the receiver go look up which transaction in its mempool/chain spends output B, then fill in the input 
with that outpoint and hand the now-fully-formed transaction to their local bitcoind for processing.
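
As a toy model of that lookup (hypothetical data structures, not a real Bitcoin Core API), the relay 
node keeps an index from spent outpoint to the mempool transaction spending it, and resolves the 
floating input against it:

    class MempoolIndex:
        """Hypothetical index from spent outpoint to the txid spending it."""
        def __init__(self):
            self.spender_of = {}

        def add(self, txid, vin):
            for prevout in vin:                  # prevout = (txid, vout)
                self.spender_of[prevout] = txid

    def resolve_floating_input(index, floating_tx, anchor_outpoint):
        """Fill in the floating input with output 0 of whichever mempool tx
        spends anchor_outpoint; the result can go to a local bitcoind."""
        spender = index.spender_of.get(anchor_outpoint)
        if spender is None:
            return None                          # nothing spending B seen yet
        return dict(floating_tx, vin=[(spender, 0)])

    index = MempoolIndex()
    index.add("commitment_txid", [("funding_txid", 0)])  # something spends B
    tx = resolve_floating_input(index, {"vin": []}, ("funding_txid", 0))
    assert tx == {"vin": [("commitment_txid", 0)]}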


Matt

On 8/7/20 11:34 AM, Richard Myers wrote:
When you say that a special relay network might be more "smart about replacement" in the context of ANYPREVOUT*, do you 
mean these nodes could RBF parts of a package like this:



Given:
  - Package A = UpdateTx_A(n=1): txin: AnchorTx, txout: SettlementTx_A(n=1) -> HtlcTxs(n=1)_A -> ...chain of transactions 
that pin UpdateTx_A(n=1) with high total fee, etc.



And a new package with higher fee rate versions of ANYPREVOUT* transactions in 
the package, but otherwise lower total fee:

  - Package B = UpdateTx_B(n=1): txin: AnchorTx, txout: SettlementTx_B(n=1) -> 
HtlcTxs(n=1)_B -> low total fee package


Relay just the higher up-front fee-rate transactions from package B which get spent by the high absolute fee child 
transactions from package A:


  - Package A' = UpdateTx_B(n=1): txin: AnchorTx, txout: SettlementTx_B(n=1) -> HtlcTxs(n=1)_A -> ...chain of up to 25 
txs that pin UpdateTx(n=1) with high total fee, etc.


On Thu, Aug 6, 2020 at 5:59 PM Matt Corallo via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:


In general, SIGHASH_NOINPUT makes these issues much, much simpler to 
address, but only if we assume that nodes can
somehow be "smart" about replacement when they see a SIGHASH_NOINPUT spend 
which can spend an output that something else
in the mempool already spends (potentially a different input than the 
relaying node thinks the transaction should
spend). While ideally we'd be able to shove that (significant) complexity 
into the Bitcoin P2P network, that may not be
feasible, but we could imagine a relay network of lightning nodes doing 
that calculation and then passing the
transactions to their local full nodes. 




___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP 118 and SIGHASH_ANYPREVOUT

2020-08-03 Thread Matt Corallo via bitcoin-dev
While I admit I haven’t analyzed the feasibility, I want to throw one 
additional design consideration into the ring.

Namely, it would ideally be trivial, at the p2p protocol layer, to relay a 
transaction to a full node without knowing exactly which input transaction that 
full node has in its mempool/active chain. This is at least potentially 
important for systems like lightning where you do not know which counterparty 
commitment transaction(s) are in a random node’s mempool and you should be able 
to describe to that node that you are spending them nonetheless.

This is (obviously) an incredibly nontrivial problem both in p2p protocol 
complexity and mempool optimization, but it may leave SIGHASH_NOINPUT rather 
useless for lightning without it.

The least we could do is think about the consensus design in that context, even 
if we have to provide an external overlay relay network in order to make 
lightning transactions relay properly (presumably with miners running such 
software).

Matt

> On Jul 9, 2020, at 17:46, Anthony Towns via bitcoin-dev wrote:
> 
> Hello world,
> 
> After talking with Christina ages ago, we came to the conclusion that
> it made more sense to update BIP 118 to the latest thinking than have
> a new BIP number, so I've (finally) opened a (draft) PR to update BIP
> 118 with the ANYPREVOUT bip I've passed around to a few people,
> 
> https://github.com/bitcoin/bips/pull/943
> 
> Probably easiest to just read the new BIP text on github:
> 
> https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki
> 
> It doesn't come with tested code at this point, but I figure better to
> have the text available for discussion than nothing.
> 
> Some significant changes since previous discussion include complete lack
> of chaperone signatures or anything like it (if you want them, you can
> always add them yourself, of course), and that ANYPREVOUTANYSCRIPT no
> longer commits to the value (details/rationale in the text).
> 
> Cheers,
> aj
> 
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Thoughts on soft-fork activation

2020-07-14 Thread Matt Corallo via bitcoin-dev
Thanks Anthony for this writeup!

I find it incredibly disappointing that the idea of naive flag day fork 
activation is being seriously discussed in the
form of BIP 9. Activation of forks is not only about the included changes but 
also about the culture of how changes to
Bitcoin should be and are made. Whether we like it or not, how Taproot 
activates will set a community understanding and
future norms around how changes are made.

Members of this list lost sleep and years off their life from stress fighting 
to ensure that the process by which
Bitcoin changes is not only principled in its rejection of unilateral changes, 
but also that that idea was broadly
understood, and broadly *enforced* by community members - the only way in which 
it has any impact. That fight is far
from over - Bitcoin's community grows and changes daily, and the history around 
what changed and how has been rewritten
time and time again. Worse still, the principled nature of Bitcoin's change 
process is constantly targeted as untrue in
an attempt by various alternative systems to pretend that their change process 
of "developers ship new code, users run
it blindly" is identical to Bitcoin's.

While members of this list may be aware of significant outreach efforts and 
design work to ensure that Taproot is not
only broadly acceptable to Bitcoin users, but also has effectively no impact on 
users who wish not to use it, it is
certainly not the case that all Bitcoin users are aware of that work, nor have 
seen the results directly communicated to them.

Worse still, it is hard to argue that a new version of Bitcoin Core containing 
a fixed future activation of a new
consensus rule is anything other than "developers have decided on new rules" 
(even if it is, based on our own knowledge,
not the case). Indeed, even the proposal by Anthony, which makes reference to 
my previous work, has this issue, and it
may not be avoidable - there is very legitimate concern over miners blocking 
changes to Bitcoin which do not harm them
which users objectively desire, potentially purely through apathy. But to 
dismiss the concerns over the optics which set
the stage for how future changes are made to Bitcoin purely because miners may 
be too busy with other things to upgrade
their nodes seems naive at best.

I appreciate the concern over activation timeline given miner apathy, and to 
some extent Anthony's work here addresses
that with decreasing activation thresholds during the second signaling period, 
but bikeshedding on timeline may be merited.
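
For intuition, a toy rendering of that decreasing-threshold schedule (assuming a simple linear 
decrease from 95% to 50% across the secondary period; the draft quoted below may differ in its exact 
curve, and all heights here are illustrative):

    def signalling_threshold(height, start_height, end_height,
                             start_frac=0.95, end_frac=0.50):
        """Required signalling fraction at a given height; after end_height
        activation is mandatory regardless."""
        if height <= start_height:
            return start_frac
        if height >= end_height:
            return end_frac
        progress = (height - start_height) / (end_height - start_height)
        return start_frac - (start_frac - end_frac) * progress

    # Halfway through the secondary period, ~72.5% signalling suffices.
    assert abs(signalling_threshold(150, 100, 200) - 0.725) < 1e-9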

To not make every attempt to distance the activation method from the public 
perception of unilateral activation strikes me
as the worst of all possible outcomes for Bitcoin's longevity. Having a 
quieting period after BIP 9 activation failure
quieting period after BIP 9 activation failure
may not be the best way to do that, but it seems like a reasonable attempt.

Matt

On 7/14/20 5:37 AM, Anthony Towns via bitcoin-dev wrote:
> Hi,
> 
> I've been trying to figure out a good way to activate soft forks in
> future. I'd like to post some thoughts on that. So:
> 
> I think there's two proposals that are roughly plausible. The first is
> Luke's recent update to BIP 8:
> 
> https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki
> 
> It has the advantage of being about as simple as possible, and (in my
> opinion) is an incremental improvement on how segwit was activated. Its
> main properties are:
> 
>- signalling via a version bit
>- state transitions based on height rather than median time
>- 1 year time frame
>- optional mandatory activation at the end of the year
>- mandatory signalling if mandatory activation occurs
>- if the soft fork activates on the most work chain, nodes don't
>  risk falling out of consensus depending on whether they've opted in
>  to mandatory activation or not
> 
> I think there's some fixable problems with that proposal as it stands
> (mostly already mentioned in the comments in the recently merged PR,
> https://github.com/bitcoin/bips/pull/550 )
> 
> The approach I've been working on is based on the more complicated and
> slower method described by Matt on this list back in January. I've got a
> BIP drafted at:
> 
> 
> https://github.com/ajtowns/bips/blob/202007-activation-dec-thresh/bip-decthresh.mediawiki
> 
> The main difference with the mechanism described in January is that the
> threshold gradually decreases during the secondary period -- it starts at
> 95%, gradually decreases until 50%, then mandatorily activates. The idea
> here is to provide at least some potential reward for miners signalling
> in the secondary phase: if 8% of hashpower had refused to signal for
> a soft-fork, then there would have been no chance of activating until
> the very end of the period. This way, every additional percentage of
> hashpower signalling brings the activation deadline forward.
> 
> The main differences between the two proposals is that the BIP 8 approach
> has a relatively short time 

Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-24 Thread Matt Corallo via bitcoin-dev
Given transaction relay delays and a network topology that is rather 
transparent if you look closely enough, I think this is very real and very 
practical (double-digit % success rate, at least; with some trial and error, 
probably 50%+). That said, we all also probably know most of the people who know
enough to go from zero to doing this practically next week. As for motivated 
folks who have lots of time to read code and dig, this seems like something 
worth fixing in the medium term.

Your observation is what’s largely led me to conclude there isn’t a lot we can 
do here without a lot of creativity and fundamental rethinking of our approach. 
One thing I keep harping on is maybe saving the blind-CPFP approach with a) 
eltoo, and b) some kind of magic transaction relay metadata that allows you to 
specify “this spends at least one output on any transaction that spends output 
X” so that nodes can always apply it properly. But maybe that’s a pipedream of 
complexity. I know Antoine has other thoughts.
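
To make the first-seen partition ZmnSCPxj describes below concrete, here is a runnable toy of the 
BIP 125 absolute-fee rule (simplified: it ignores the feerate rule and package limits, and the 
constants and sizes are illustrative):

    MIN_RELAY_FEERATE = 1  # sat/vB; Bitcoin Core's default incremental relay rate

    def replaces(new_fee, new_size, old_fee):
        """BIP 125 rules 3+4, simplified: pay more in absolute fee, plus
        enough extra to cover relaying the replacement itself."""
        return new_fee >= old_fee + new_size * MIN_RELAY_FEERATE

    def surviving_fee(arrival_order):
        """Which of two conflicting txs a node keeps, by arrival order."""
        kept = None
        for fee, size in arrival_order:
            if kept is None or replaces(fee, size, kept):
                kept = fee
        return kept

    preimage_tx = (200, 150)  # (fee in sats, size in vB)
    timeout_tx = (210, 150)   # near-equal fee: too small a bump to replace
    assert surviving_fee([preimage_tx, timeout_tx]) == 200  # first one sticks
    assert surviving_fee([timeout_tx, preimage_tx]) == 210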

Matt

> On Jun 22, 2020, at 04:04, Bastien TEINTURIER via bitcoin-dev wrote:
> 
> 
> Hey ZmnSCPxj,
> 
> I agree that in theory this looks possible, but doing it in practice with 
> accurate control
> of what parts of the network get what tx feels impractical to me (but maybe 
> I'm wrong!).
> 
> It feels to me that an attacker who would be able to do this would break 
> *any* off-chain
> construction that relies on absolute timeouts, so I'm hoping this is insanely 
> hard to
> achieve without cooperation from a miners subset. Let me know if I'm too 
> optimistic on
> this!
> 
> Cheers,
> Bastien
> 
>> On Mon, Jun 22, 2020 at 10:15 AM, ZmnSCPxj wrote:
>> Good morning Bastien,
>> 
>> > Thanks for the detailed write-up on how it affects incentives and 
>> > centralization,
>> > these are good points. I need to spend more time thinking about them.
>> >
>> > > This is one reason I suggested using independent pay-to-preimage
>> > > transactions[1]
>> >
>> > While this works as a technical solution, I think it has some incentives 
>> > issues too.
>> > In this attack, I believe the miners that hide the preimage tx in their 
>> > mempool have
>> > to be accomplice with the attacker, otherwise they would share that tx 
>> > with some of
>> > their peers, and some non-miner nodes would get that preimage tx and be 
>> > able to
>> > gossip them off-chain (and even relay them to other mempools).
>> 
>> I believe this is technically possible with current mempool rules, without 
>> miners cooperating with the attacker.
>> 
>> Basically, the attacker releases two transactions with near-equal fees, so 
>> that neither can RBF the other.
>> It releases the preimage tx near miners, and the timelock tx near non-miners.
>> 
>> Nodes at the boundaries between those that receive the preimage tx and the 
>> timelock tx will receive both.
>> However, they will receive one or the other first.
>> Which one they receive first will be what they keep, and they will reject 
>> the other (and *not* propagate the other), because the difference in fees is 
>> not enough to get past the RBF rules (which requires not just a feerate 
>> increase, but also an increase in absolute fee, of at least the minimum 
>> relay feerate times transaction size).
>> 
>> Because they reject the other tx, they do not propagate the other tx, so the 
>> boundary between the two txes is inviolate, neither can get past that 
>> boundary, this occurs even if everyone is running 100% unmodified Bitcoin 
>> Core code.
>> 
>> I am not a mempool expert and my understanding may be incorrect.
>> 
>> Regards,
>> ZmnSCPxj
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-23 Thread Matt Corallo via bitcoin-dev



On 4/23/20 8:46 AM, ZmnSCPxj wrote:
>>> -   Miners, being economically rational, accept this proposal and include 
>>> this in a block.
>>>
>>> The proposal by Matt is then:
>>>
>>> -   The hashlock branch should instead be:
>>> -   B and C must agree, and show the preimage of some hash H (hashlock 
>>> branch).
>>> -   Then B and C agree that B provides a signature spending the hashlock 
>>> branch, to a transaction with the outputs:
>>> -   Normal payment to C.
>>> -   Hook output to B, which B can use to CPFP this transaction.
>>> -   Hook output to C, which C can use to CPFP this transaction.
>>> -   B can still (somehow) not maintain a mempool, by:
>>> -   B broadcasts its timelock transaction.
>>> -   B tries to CPFP the above hashlock transaction.
>>> -   If CPFP succeeds, it means the above hashlock transaction exists and B 
>>> queries the peer for this transaction, extracting the preimage and claiming 
>>> the A->B HTLC.
>>
>> Note that no query is required. The problem has been solved and the 
>> preimage-containing transaction should now confirm just fine.
> 
> Ah, right, so it gets confirmed and the `blocksonly` B sees it in a block.
> 
> Even if C hooks a tree of low-fee transactions on its hook output or normal 
> payment, miners will still be willing to confirm this and the B hook CPFP 
> transaction without, right?

Correct, once it makes it into the mempool we can CPFP it and all the regular 
sub-package CPFP calculation will pick it
and its descendants up. Of course this relies on it not spending any other 
unconfirmed inputs.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-23 Thread Matt Corallo via bitcoin-dev
Great summary, a few notes inline.

> On Apr 22, 2020, at 21:50, ZmnSCPxj  wrote:
> 
> Good morning lists et al,
> 
> Let me try to summarize things a little:
> 
> * Suppose we have a forwarding payment A->B->C.
> * Suppose B does not want to maintain a mempool and is running in 
> `blocksonly` mode to reduce operational costs.

Quick point of clarification: due to the mempool lacking a consensus system 
(that’s the whole point, after all :p), there are several reasons that just 
running a full node/having a mempool isn’t sufficient.

> * C triggers B somehow dropping the B<->C channel, such as by sending an 
> `error` message, which will usually cause the other side to drop the channel 
> onchain using its commitment transaction.
> * The dropped B<->C channel has an HTLC (that was set up during the A->B->C 
> forwarding).
> * The HTLC, being used in a Poon-Dryja channel, actually has the following 
> contract text:
> * The fund may be claimed by either of these clauses:
> * C can claim, if C shows the preimage of some hash H (hashlock branch).
> * B and C must agree, and claim after time L (timelock branch).
> * B holds a signature from C that can claim the timelock branch of the HTLC, 
> for a transaction that spends to an output with an `OP_CHECKSEQUENCEVERIFY`.
> * The signature is `SIGHASH_ALL`, so the transaction has a fixed feerate.
> * C can "pin" the HTLC output by spending using the hashlock branch, and 
> creating a large fee, low fee-rate (tree of) transactions.

Another: this is the simplest example. There are also games around the package 
size limits if I recall correctly.
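
To pin down the contract text quoted above, a minimal model of the two claim clauses (Python with 
hypothetical stand-ins for signatures, the payment hash, and the timeout height; the real thing is a 
script, not this):

    import hashlib

    def sha256(data):
        return hashlib.sha256(data).digest()

    H = sha256(b"payment-preimage")  # illustrative payment hash
    L = 700_000                      # illustrative absolute timeout height

    def htlc_claimable(branch, height, preimage=None, sig_b=False, sig_c=False):
        if branch == "hashlock":
            # C alone, by revealing the preimage of H
            return sig_c and preimage is not None and sha256(preimage) == H
        if branch == "timelock":
            # B and C must agree, and only after time L
            return sig_b and sig_c and height >= L
        return False

    assert htlc_claimable("hashlock", 0, preimage=b"payment-preimage", sig_c=True)
    assert not htlc_claimable("timelock", L - 1, sig_b=True, sig_c=True)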

> * As it is a low fee-rate, miners have no incentive to put this in a block, 
> especially if unrelated higher-fee-rate transactions exist that would earn 
> them more money.
> * Even in a full RBF universe, because of the anti-DoS mempool rules, B 
> cannot evict this pinned transaction by just bidding up the feerate.
> * A replacing transaction cannot evict alternatives unless its absolute fee 
> is greater than the absolute fee of the alternative.
> * The pinning transaction has a high fee, but is blockspace-wasteful, so it 
> is:
>   * Undesirable to mine (low feerate).
>   * Difficult to evict (high fee).
> * Thus, B is unable to get its timelock-branch transaction in the mempools of 
> miners.
> * C waits until the A->B HTLC times out, then:
> * C directly contacts miners with an out-of-band proposal to replace its 
> transaction with an alternative that is much smaller and has a low fee, but 
> much better feerate.

Or they can just wait. For example in today’s mempool it would not be strange 
for a transaction at 1 sat/vbyte to wait a day but eventually confirm.

> * Miners, being economically rational, accept this proposal and include this 
> in a block.
> 
> The proposal by Matt is then:
> 
> * The hashlock branch should instead be:
> * B and C must agree, and show the preimage of some hash H (hashlock branch).
> * Then B and C agree that B provides a signature spending the hashlock 
> branch, to a transaction with the outputs:
> * Normal payment to C.
> * Hook output to B, which B can use to CPFP this transaction.
> * Hook output to C, which C can use to CPFP this transaction.
> * B can still (somehow) not maintain a mempool, by:
> * B broadcasts its timelock transaction.
> * B tries to CPFP the above hashlock transaction.
> * If CPFP succeeds, it means the above hashlock transaction exists and B 
> queries the peer for this transaction, extracting the preimage and claiming 
> the A->B HTLC.

Note that no query is required. The problem has been solved and the 
preimage-containing transaction should now confirm just fine.

> Is that a fair summary?

Yep!

> --
> 
> Naively, and remembering I am completely ignorant of the exact details of the 
> mempool rules, it seems to me quite strange that we are allowing an 
> undesirable transaction (tree) into the mempool:
> 
> * Undesirable to mine (low fee-rate).
> * Difficult to evict (high fee).

As noted, such transactions today become profit within ~10 hours. Just because they’re 
big doesn’t mean they don’t pay.

> Miners are not interested in low fee-rate transactions, as long as higher 
> fee-rate transactions exist.
> And being difficult to evict means miners cannot get alternatives that are 
> more lucrative for them.
> 
> The reason (as I understand it) eviction is purposely made difficult here is 
> to prevent certain DoS attacks on Bitcoin nodes, specifically:
> 
> 1. Attacker sends a low fee-rate tx as a "root" transaction.
> 2  Attacker sends thousands of low fee-rate tx that build off the above root.

I believe the limit is 25, though the point stands, mostly from a total-size 
perspective.

> 3. Attacker sends a slightly higher fee-rate alternative to the root, 
> evicting the above tree of txes.
> 4. Attacker sends thousands of low fee-rate tx that build off the latest root.
> 5. GOTO 3.
> 
> However, it seems to me, naively, that "an ounce of prevention 

Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via bitcoin-dev


On 4/22/20 7:27 PM, Olaoluwa Osuntokun wrote:
> 
>> Indeed, that is what I’m suggesting
> 
> Gotcha, if this is indeed what you're suggesting (all HTLC spends are now
> 2-of-2 multi-sig), then I think the modifications to the state machine I
> sketched out in an earlier email are required. An exact construction which
> achieves the requirements of "you can't broadcast until you have a secret
> which I can obtain from the htlc sig for your commitment transaction, and my
> secret is revealed with another swap", appears to be an open problem, atm.

Hmm, indeed, it does seem to require a change to the state machine, but I don't 
think a very interesting one. Because B
providing A an HTLC signature spending a commitment transaction B will 
broadcast does not allow A to actually broadcast
said HTLC transaction, B can be rather liberal with it. It would, however, 
require that B provide such a
signature before A can send the commitment_signed that exists today.

> Even if they're restricted in this fashion (must be a 1-in-1 out,
> sighashall, fees are pre agreed upon), they can still spend that with a CPFP
> (while still unconfirmed in the mempool) and create another heavy tree,
> which puts us right back at the same bidding war scenario?

Right, you'd have to use anchor outputs just like we do on the commitment 
transaction :).

>> There are a bunch of ways of doing pinning - just opting into RBF isn’t
>> even close to enough.
> 
> Mhmm, there're other ways of doing pinning. But with anchors as is defined
> in that spec PR, they're forced to spend with an RBF-replaceable
> transaction, which means the party wishing to time things out can enter into
> a bidding war. If the party trying to impeded things participates in this
> progressive absolute fee increase, it's likely that the war terminates
> with _one_ of them getting into the block, which seems to resolve
> everything?

No? Even if we assume there are no tricks that you can play with, eg, the 
package limits during eviction, which I'd be
surprised about, the "absolute fee/feerate" thing still screws you. The 
attacker here gets to hold something at the
bottom of the mempool and the poor honest party is going to have to pay an 
absurd (likely more than the HTLC value) fee
just to get it unstuck, whereas the attacker never would have had to pay said 
fee.
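
Back-of-the-envelope, with purely illustrative sizes and fees, of just how lopsided that gets:

    INCREMENTAL_RELAY = 1                      # sat/vB, Bitcoin Core's default

    pinned_size, pinned_feerate = 100_000, 1   # a ~100 kvB junk package at 1 sat/vB
    pinned_fee = pinned_size * pinned_feerate  # attacker's at-risk fee: 100,000 sats

    honest_size = 200                          # a small conflicting spend
    # BIP 125 rules 3+4: beat the absolute fee, plus pay for your own relay.
    min_replacement_fee = pinned_fee + honest_size * INCREMENTAL_RELAY
    print(min_replacement_fee // honest_size)  # -> 501 sat/vB just to evict the pin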

> -- Laolu
> 
> 
> On Wed, Apr 22, 2020 at 4:20 PM Matt Corallo wrote:
>
> 
> 
>> On Apr 22, 2020, at 16:13, Olaoluwa Osuntokun wrote:
>> > Hmm, maybe the proposal wasn't clear. The idea isn't to add signatures 
>> to
>> > broadcasted transactions, but instead to CPFP a maybe-broadcasted
>> > transaction by sending a transaction which spends it and seeing if it 
>> is
>> > accepted
>>
>> Sorry I still don't follow. By "we clearly need to go the other 
>> direction -
>> all HTLC output spends need to be pre-signed.", you don't mean that the 
>> HTLC
>> spends of the non-broadcaster also need to be an off-chain 2-of-2 
>> multi-sig
>> covenant? If the other party isn't restricted w.r.t _how_ they can spend 
>> the
>> output (non-rbf'd, ect), then I don't see how that addresses anything.
> 
> Indeed, that is what I’m suggesting. Anchor output and all. One thing we 
> could think about is only turning it on
> over a certain threshold, and having a separate 
> “only-kinda-enforceable-on-chain-HTLC-in-flight” limit.
> 
>> Also see my mail elsewhere in the thread that the other party is actually
>> forced to spend their HTLC output using an RBF-replaceable transaction. 
>> With
>> that, I think we're all good here? In the end both sides have the 
>> ability to
>> raise the fee rate of their spending transactions with the highest 
>> winning.
>> As long as one of them confirms within the CLTV-delta, then everyone is
>> made whole.
> 
> It does seem like my cached recollection of RBF opt-in was incorrect but 
> please re-read the intro email. There are a
> bunch of ways of doing pinning - just opting into RBF isn’t even close to 
> enough.
> 
>> [1]: https://github.com/bitcoin/bitcoin/pull/18191
>>
>>
>> On Wed, Apr 22, 2020 at 9:50 AM Matt Corallo wrote:
>> A few replies inline.
>>
>> On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
>> > Hi Matt,
>> >
>> >
>> >> While this is somewhat unintuitive, there are any number of good 
>> anti-DoS
>> >> reasons for this, eg:
>> >
>> > None of these really strikes me as "good" reasons for this 
>> limitation, which
>> > is at the root of this issue, and will also plague any more 
>> complex Bitcoin
>> > contracts which rely on nested trees of transaction to confirm 
>> (CTV, Duplex,
>> > channel factories, etc). Regarding the various (seemingly 
>> arbitrary) package
>> 

Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via bitcoin-dev
Hmm, that's an interesting suggestion - it definitely raises the bar for attack 
execution rather significantly. Because lightning (and other second-layer 
systems) already relies heavily on uncensored access to blockchain data, it's
reasonable to extend the "if you don't have enough blocks, aggressively query 
various sources to find new blocks, or, really just do it always" solution to 
"also send relevant transactions while we're at it".

Sadly, unlike for block data, there is no consensus mechanism for nodes to 
ensure the transactions in their mempools are the same as others. Thus, if you 
focus on sending the pinning transaction to miner nodes directly (which isn't 
trivial, but also not nearly as hard as it sounds), you could still pull off 
the attack. However, to do it now, you'd need to
wait for your counterparty to broadcast the corresponding timeout transaction 
(once it is confirmable, and can thus get into mempools), turning the whole 
thing into a mempool-acceptance race. Luckily there isn’t much cost to 
*trying*, though it’s less likely you’ll succeed.

There are also practical design issues - if you’re claiming multiple HTLC 
outputs in a single transaction the node would need to provide reject messages 
for each input which is conflicted, something which we’d need to think hard 
about the DoS implications of.

In any case, while it’s definitely better than nothing, it’s unclear if it’s 
really the kind of thing I’d want to rely on for my own funds.
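
A toy of the reject-with-conflict flow Dave sketches below (BIP 61 is disabled in Bitcoin Core today; 
the message and field names here are purely hypothetical):

    class Peer:
        """Stub node with a txid-indexed mempool."""
        def __init__(self, mempool):
            self.mempool = mempool

        def send_tx(self, tx):
            for txid, mem_tx in self.mempool.items():
                if set(mem_tx["vin"]) & set(tx["vin"]):
                    return ("reject-conflict", txid)  # hypothetical reject code
            self.mempool[tx["txid"]] = tx
            return ("accepted", None)

        def getdata_tx(self, txid):
            return self.mempool.get(txid)

    def broadcast_or_learn(peer, timeout_tx):
        status, conflict = peer.send_tx(timeout_tx)
        if status == "reject-conflict":
            pinned = peer.getdata_tx(conflict)
            return pinned["preimage"]  # Bob parses the preimage out of the witness
        return None

    pin = {"txid": "pin", "vin": [("htlc", 0)], "preimage": b"secret"}
    peer = Peer({"pin": pin})
    assert broadcast_or_learn(peer, {"txid": "to", "vin": [("htlc", 0)]}) == b"secret"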

Matt


> On 4/22/20 2:24 PM, David A. Harding wrote:
>> On Mon, Apr 20, 2020 at 10:43:14PM -0400, Matt Corallo via Lightning-dev 
>> wrote:
>> A lightning counterparty (C, who received the HTLC from B, who
>> received it from A) today could, if B broadcasts the commitment
>> transaction, spend an HTLC using the preimage with a low-fee,
>> RBF-disabled transaction.  After a few blocks, A could claim the HTLC
>> from B via the timeout mechanism, and then after a few days, C could
>> get the HTLC-claiming transaction mined via some out-of-band agreement
>> with a small miner. This leaves B short the HTLC value.
> 
> IIUC, the main problem is honest Bob will broadcast a transaction
> without realizing it conflicts with a pinned transaction that's already
> in most node's mempools.  If Bob knew about the pinned transaction and
> could get a copy of it, he'd be fine.
> 
> In that case, would it be worth re-implementing something like a BIP61
> reject message but with an extension that returns the txids of any
> conflicts?  For example, when Bob connects to a bunch of Bitcoin nodes
> and sends his conflicting transaction, the nodes would reply with
> something like "rejected: code 123: conflicts with txid 0123...cdef".
> Bob could then reply with a a getdata('tx', '0123...cdef') to get the
> pinned transaction, parse out its preimage, and resolve the HTLC.
> 
> This approach isn't perfect (if it even makes sense at all---I could be
> misunderstanding the problem) because one of the problems that caused
> BIP61 to be disabled in Bitcoin Core was its unreliability, but I think
> if Bob had at least one honest peer that had the pinned transaction in
> its mempool and which implemented reject-with-conflicting-txid, Bob
> might be ok.
> 
> -Dave

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via bitcoin-dev



On 4/22/20 12:12 AM, ZmnSCPxj wrote:
> Good morning Matt, and list,
> 
> 
> 
>> RBF Pinning HTLC Transactions (aka "Oh, wait, I can steal funds, how, 
>> now?")
>> =
>>
>> You'll note that in the discussion of RBF pinning we were pretty broad, 
>> and that that discussion seems to in fact cover
>> our HTLC outputs, at least when spent via (3) or (4). It does, and in 
>> fact this is a pretty severe issue in today's
>> lightning protocol [2]. A lightning counterparty (C, who received the 
>> HTLC from B, who received it from A) today could,
>> if B broadcasts the commitment transaction, spend an HTLC using the 
>> preimage with a low-fee, RBF-disabled transaction.
>> After a few blocks, A could claim the HTLC from B via the timeout 
>> mechanism, and then after a few days, C could get the
>> HTLC-claiming transaction mined via some out-of-band agreement with a 
>> small miner. This leaves B short the HTLC value.
> 
> My (cached) understanding is that, since RBF is signalled using `nSequence`, 
> any `OP_CHECKSEQUENCEVERIFY` also automatically imposes the requirement "must 
> be RBF-enabled", including `<0> OP_CHECKSEQUENCEVERIFY`.
> Adding that clause (2 bytes in witness if my math is correct) to the hashlock 
> branch may be sufficient to prevent C from making an RBF-disabled transaction.

Hmm, indeed, though note that (IIRC) you can break this by adding children or 
parents which are *not* RBF-enabled and
then the package may lose the ability to be RBF'd.
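
For reference, a small sketch of the two rules in play in ZmnSCPxj's point above (BIP 125 opt-in
signalling, and why `<0> OP_CHECKSEQUENCEVERIFY` per BIP 112 forces it):

    SEQUENCE_DISABLE_FLAG = 1 << 31

    def signals_rbf(input_sequences):
        """BIP 125: a tx is replaceable if any input's nSequence < 0xfffffffe."""
        return any(seq < 0xfffffffe for seq in input_sequences)

    def satisfies_csv_zero(tx_version, sequence):
        """`<0> OP_CHECKSEQUENCEVERIFY` (BIP 112) requires tx version >= 2 and
        the disable flag unset - which keeps nSequence well below 0xfffffffe,
        so the spending tx necessarily signals replaceability."""
        return tx_version >= 2 and not (sequence & SEQUENCE_DISABLE_FLAG)

    assert not signals_rbf([0xfffffffe, 0xffffffff])
    assert satisfies_csv_zero(2, 0) and signals_rbf([0])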

> But then you mention out-of-band agreements with miners, which basically 
> means the transaction might not be in the mempool at all, in which case the 
> vulnerability is not really about RBF or relay, but sheer economics.

No. The whole point of this attack is that you keep a transaction in the 
mempool but unconfirmed via RBF pinning, which
prevents an *alternative* transaction from being confirmed. You then have 
plenty of time to go get it confirmed later.

> The payment is A->B->C, and the HTLC A->B must have a larger timeout (L + 1) 
> than the HTLC B->C (L), in abstract non-block units.
> The vulnerability you are describing means that the current time must now be 
> L + 1 or greater ("A could claim the HTLC from B via the timeout mechanism", 
> meaning the A->B HTLC has timed out already).
> 
> If so, then the B->C transaction has already timed out in the past and can be 
> claimed in two ways, either via B timeout branch or C hashlock branch.
> This sets up a game where B and C bid to miners to get their version of 
> reality committed onchain.
> (We can neglect out-of-band agreements here; miners have the incentive to 
> publicly leak such agreements so that other potential bidders can offer even 
> higher fees for their versions of that transaction.)

Right, I think I didn't explain clearly enough. The point is that, here, B 
tries to broadcast the timeout transaction
but cannot because there is an in-mempool conflict.

> Before L+1, C has no incentive to bid, since placing any bid at all will leak 
> the preimage, which B can then turn around and use to spend from A, and A and 
> C cannot steal from B.
> 
> Thus, B should ensure that *before* L+1, the HTLC-Timeout has been committed 
> onchain, which outright prevents this bidding war from even starting.
> 
> The issue then is that B is using a pre-signed HTLC-timeout, which is needed 
> since it is its commitment tx that was broadcast.
> This prevents B from RBF-ing the HTLC-Timeout transaction.
> 
> So what is needed is to allow B to add fees to HTLC-Timeout:
> 
> * We can add an RBF carve-out output to HTLC-Timeout, at the cost of more 
> blockspace.
> * With `SIGHASH_NOINPUT` we can make the C-side signature 
> `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side signature 
> for a higher-fee version of HTLC-Timeout (assuming my cached understanding of 
> `SIGHASH_NOINPUT` still holds).

This does not solve the issue: you can add as much fee as you want, but as 
long as the transaction is RBF-pinned,
there is not much you can do in an automated fashion.

> With this, B can exponentially increase the fee as L+1 approaches.
> If B can get HTLC-Timeout confirmed before L+1, then C cannot steal the HTLC 
> value at all, since the UTXO it could steal from has already been spent.
> 
> In particular, it does not seem to me that it is necessary to change the 
> hashlock-branch transaction of C at all, since this mechanism is enough to 
> sidestep the issue (as I understand it).
> But it does point to a need to make HTLC-Timeout (and possibly symmetrically, 
> HTLC-Success) also fee-bumpable.
> 
> Note as well that this does not require a mempool: B can run in `blocksonly` 
> mode and as each block comes in from L to L+1, if HTLC-Timeout is not 
> confirmed, feebump HTLC-Timeout.
> In particular, HTLC-Timeout comes into play only if B broadcast its own 
> commitment transaction, and B *should* be aware that it 

[bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-20 Thread Matt Corallo via bitcoin-dev
[Hi bitcoin-dev, in lightning-land we recently discovered some quite 
frustrating issues which I thought may merit
broader discussion]

While reviewing the new anchor outputs spec [1] last week, I discovered it 
introduced a rather nasty ability for a user
to use RBF Pinning to steal in-flight HTLCs which are being enforced on-chain. 
Sadly, Antoine pointed out that this is
an issue in today's lightning as well, though see [2] for qualifications. After 
some back-and-forth with a few other
lightning folks, it seems clear that there is no easy+sane fix (and the 
practicality of exploitation today seems
incredibly low), so soliciting ideas publicly may be the best step forward.

I've included lots of background for those who aren't super comfortable with 
lightning's current design, but if you
already know it well, you can skip at least background 1 & 2.

Background - Lightning's Transactions (you can skip this)
=

As many of you likely know, lightning today does all its update mechanics 
through:
 a) a 2-of-2 multisig output, locking in the channel,
 b) a "commitment transaction", which spends that output: i) back to its 
owners, ii) to "HTLC outputs",
 c) HTLC transactions which spend the relevant commitment transaction HTLC 
outputs.

This somewhat awkward third layer of transactions is required to allow HTLC 
timeouts to be significantly lower than the
time window during which a counterparty may be punished for broadcasting a 
revoked state. That is to say, you want to
"lock-in" the resolution of an HTLC output (ie by providing the hash lock 
preimage on-chain) by a fixed block height
(likely a few hours from the HTLC creation), but the punishment mechanism needs 
to occur based on a sequence height
(possibly a day or more after transaction broadcast).

As Bitcoin has no covenants, this must occur using pre-signed transactions - 
namely "HTLC-Success" and "HTLC-Timeout"
transactions, which finalize the resolution of an HTLC, but have a 
sequence-lock for some time during which the funds
may be taken if they had previously been revoked. To avoid needless delays, if 
the counterparty which did *not*
broadcast the commitment transaction wishes to claim the HTLC value, they may 
do so immediately (as there is no reason
to punish the non-broadcaster for having *not* broadcasted a revoked state). 
Thus, we have four possible HTLC
resolutions depending on the combination of which side broadcast the HTLC and 
which side sent the HTLC (ie who can claim
it vs who can claim it after time-out):

 1) pre-signed HTLC-Success transaction, providing the preimage in the witness 
and sent to an output which is sequence-
locked for some time to provide the non-broadcasting side the opportunity 
to take the funds,
 2) pre-signed HTLC-Timeout transaction, time-locked to N, providing no 
preimage, but with a similar sequence lock and
output as above,
 3) non-pre-signed HTLC claim, providing the preimage in the witness and 
unencumbered by the broadcaster's signature,
 4) non-pre-signed HTLC timeout, OP_CLTV to N, and similarly unencumbered.

Background 2 - RBF Pinning (you can skip this)
==

Bitcoin Core's general policy on RBF transactions is that if a counterparty 
(either to the transaction, eg in lightning,
or not, eg a P2P node which sees the transaction early) can modify a 
transaction, especially if they can add an input or
output, they can prevent it from confirming in a world where there exists a 
mempool (ie in a world where Bitcoin works).
While this is somewhat unintuitive, there are any number of good anti-DoS 
reasons for this, eg:
 * (ok, this is a bad reason, but) a child transaction could be marked 
'non-RBF', which would mean allowing the parent
   be RBF'd would violate the assumptions those who look at the RBF opt-in 
marking make,
 * a parent may be very large, but low feerate - this requires the RBF attempt 
to "pay for its own relay" and include a
   large absolute fee just to get into the mempool,
 * one of the various package size limits is at its maximum, and depending on 
the structure of the package the
   computational complexity of calculating evictions may be more than we want 
to do for a given transaction.

Background 3 - "The RBF Carve-Out" (you can skip this)
==

In today's lightning, we have a negotiation of what we expect the future 
feerate to be when one party goes to close the
channel. All the pre-signed transactions above are constructed with this 
fee-rate in mind, and, given they are all
pre-signed, adding additional fee to them is not generally an option. This is 
obviously a very maddening prediction
game, especially when the security consequences for negotiating a value which 
is wrong may allow your counterparty to
broadcast and time out HTLCs which you otherwise have the preimage for. To 
remove this quirk, we came up with an idea a
year or two back now called "anchor outputs" (aka 

Re: [bitcoin-dev] Taproot (and graftroot) complexity

2020-02-09 Thread Matt Corallo via bitcoin-dev
Responding purely to one point as this may be sufficient to clear up
lots of discussion:

On 2/9/20 8:19 PM, Bryan Bishop via bitcoin-dev wrote:
> Is Taproot just a probability assumption about the frequency and
> likelihood of
> the signature case over the script case? Is this a good assumption?  The BIP
> only goes as far as to claim that the advantage is apparent if the outputs
> *could be spent* as an N of N, but doesn't make representations about
> how likely
> that N of N case would be in practice compared to the script paths. Perhaps
> among use cases, more than half of the ones we expect people to be doing
> could be
> spent as an N of N. But how frequently would that path get used?
> Further, while
> the *use cases* might skew toward things with N of N opt-out, we might
> end up in
> a power law case where it's the one case that doesn't use an N of N opt
> out at
> all (or at a de minimis level) that becomes very popular, thereby making
> Taproot
> more costly than beneficial.
It's not just about the frequency and likelihood, no. If there is a
clearly-provided optimization for this common case in the protocol, then
it becomes even more likely that developers put in the additional
effort required to make this possibility a reality. This has a very
significant positive impact on user privacy, especially for those who wish
to utilize more advanced functionality in Bitcoin. Further, yes, it is
anticipated that the N of N path can be taken in the vast
majority of deployed use-cases for advanced scripting systems; ensuring
that it is maximally efficient to do so (and thereby encouraging
developers to do so) is a key goal of this work.
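
For a rough sense of the efficiency gap in question, a sketch with
approximate, purely illustrative byte counts:

    # Rough witness-size comparison for a taproot output that could be
    # spent either via the aggregate key or via a script path.
    def keypath_witness():
        return 1 + 64  # a single stack item: one Schnorr signature

    def scriptpath_witness(script_len, sig_stack_bytes, merkle_depth):
        control_block = 33 + 32 * merkle_depth  # internal key + merkle path
        return sig_stack_bytes + script_len + control_block + 3  # overhead

    print(keypath_witness())                  # 65 witness bytes
    print(scriptpath_witness(70, 2 * 65, 1))  # ~268 witness bytes, 2-of-2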

Matt


Re: [bitcoin-dev] Characterizing orphan transaction in the Bitcoin network

2020-02-02 Thread Matt Corallo via bitcoin-dev
The orphan pool has nontrivial denial of service properties around transaction 
validation. In general, I think the goal has been to reduce/remove it, not the 
other way around. In any case, this is likely the wrong forum for 
software-level discussion of Bitcoin Core. For that, you probably want to open 
an issue on github.com/bitcoin/bitcoin.
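
For context on the denial-of-service properties in question, a minimal
sketch of a size-bounded orphan buffer with random eviction - roughly the
shape of, but not, Bitcoin Core's actual code, which defaults to a
100-orphan cap:

    import random

    class OrphanPool:
        def __init__(self, max_orphans=100):
            self.max_orphans = max_orphans
            self.orphans = {}  # txid -> (tx, set of missing parent txids)

        def add(self, txid, tx, missing_parents):
            while len(self.orphans) >= self.max_orphans:
                # Random eviction: a peer flooding free orphan
                # announcements cannot deterministically flush honest
                # entries, and memory/validation work stays bounded.
                self.orphans.pop(random.choice(list(self.orphans)))
            self.orphans[txid] = (tx, set(missing_parents))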

Matt

> On Feb 1, 2020, at 14:12, Anas via bitcoin-dev wrote:
> 
> 
> Hi all,
> 
> This paper - https://arxiv.org/pdf/1912.11541.pdf - characterizes orphan 
> transactions in the Bitcoin network and shows that increasing the size of the 
> orphan pool reduces network overhead with almost no additional performance 
> overhead. What are your thoughts?
> 
> Abstract: 
>> Orphan transactions are those whose parental income-sources are missing at 
>> the time that they are processed. These transactions are not propagated to 
>> other nodes until all of their missing parents are received, and they thus 
>> end up languishing in a local buffer until evicted or their parents are 
>> found. Although there has been little work in the literature on 
>> characterizing the nature and impact of such orphans, it is intuitive that 
>> they may affect throughput on the Bitcoin network. This work thus seeks to 
>> methodically research such effects through a measurement campaign of orphan 
>> transactions on live Bitcoin nodes. Our data show that, surprisingly, orphan 
>> transactions tend to have fewer parents on average than non-orphan 
>> transactions. Moreover, the salient features of their missing parents are a 
>> lower fee and larger size than their non-orphan counterparts, resulting in a 
>> lower transaction fee per byte. Finally, we note that the network overhead 
>> incurred by these orphan transactions can be significant, exceeding 17% when 
>> using the default orphan memory pool size (100 transactions). However, this 
>> overhead can be made negligible, without significant computational or memory 
>> demands, if the pool size is merely increased to 1000 transactions.
> 
> Regards,
> Anas


Re: [bitcoin-dev] Modern Soft Fork Activation

2020-01-14 Thread Matt Corallo via bitcoin-dev
In general, your thoughts on the theory of how consensus changes should
work I strongly agree with. However, my one significant disagreement is
how practical it is for things to *actually* work that way. While I wish
ecosystem players (both businesses and users) spent their time
interacting with the Bitcoin development community enough that they had
a deep understanding of upcoming protocol change designs, it just isn't
realistic to expect that. Thus, having an "out" to avoid activation
after a release has been cut with fork activation logic is quite a
compelling requirement.

Thus, part of the goal here is that we ensure we have that "out", and
can observe the response of the ecosystem once the change is "staring
them in the face", as it were. A BIP 9 process is here not only to offer
a compelling activation path, but *also* to allow for observation and
discussion time for any lingering minor objections prior to a BIP 8/flag
day activation.
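
For reference, the BIP 9 machinery referred to throughout this thread
reduces to a small state machine evaluated once per 2016-block retarget
period - a minimal sketch with mainnet parameters per BIP 9, not Bitcoin
Core's implementation:

    THRESHOLD = 1916  # 95% of a 2016-block signaling window

    def next_state(state, median_time_past, signal_count, start, timeout):
        if state == "DEFINED":
            if median_time_past >= timeout:
                return "FAILED"
            if median_time_past >= start:
                return "STARTED"
        elif state == "STARTED":
            if median_time_past >= timeout:
                return "FAILED"
            if signal_count >= THRESHOLD:
                return "LOCKED_IN"
        elif state == "LOCKED_IN":
            return "ACTIVE"  # one full period after lock-in, unconditionally
        return state         # ACTIVE and FAILED are terminal

A BIP 8-style deployment differs essentially only in the STARTED branch: on
reaching the timeout it transitions to LOCKED_IN (the flag day) rather than
FAILED.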

As for a "mandatory signaling period" as a part of BIP 8, I find this
idea strange both in that it flies in the face of all recent soft fork
design work, and because it doesn't actually accomplish its stated goal.

Recent soft-fork design has all been about how to design something with
minimal ecosystem impact. Certainly in the 95% activation case I can't
say I feel strongly, but if you actually *hit* the BIP 8 flag day,
deliberately causing significant network forks for old clients has the
potential to cause real ecosystem risk. While the 24-month time horizon
between the BIP 8 decision and flag-day activation endeavors to de-risk
the chance that major players are running on un-upgraded nodes, you
cannot ignore the reality of them, both full- and SPV-clients.

On the other hand, in practice, we've seen that version bits are set on
the pool side, and not on the node side, meaning the goal of ensuring
miners have upgraded isn't really accomplished in practice; you just end
up forking the chain for no gain.

Matt

On 1/11/20 2:42 PM, Anthony Towns wrote:
> On Fri, Jan 10, 2020 at 09:30:09PM +, Matt Corallo via bitcoin-dev wrote:
>> 1) a standard BIP 9 deployment with a one-year time horizon for
>> activation with 95% miner readiness,
>> 2) in the case that no activation occurs within a year, a six month
>> quieting period during which the community can analyze and discuss
>> the reasons for no activation, and
>> 3) in the case that it makes sense, a simple command-line/bitcoin.conf
>> parameter which was supported since the original deployment release
>> would enable users to opt into a BIP 8 deployment with a 24-month
>> time-horizon for flag-day activation (as well as a new Bitcoin Core
>> release enabling the flag universally).
> 
> FWIW etc, but my perspective on this is that the way we want consensus
> changes in Bitcoin to work is:
> 
>  - decentralised: we want everyone to be able to participate, in
>designing/promoting/reviewing changes, without decision making
>power getting centralised amongst one group or another
> 
>  - technical: we want changes to be judged on their objective technical
>merits; politics and animal spirits and the like are fine, especially
>for working out what to prioritise, but they shouldn't be part of the
>final yes/no decision on consensus changes
> 
>  - improvements: changes might not make everyone better off, but we
>don't want changes to screw anyone over either -- pareto
>improvements in economics, "first, do no harm", etc. (if we get this
>right, there's no need to make compromises and bundle multiple
>flawed proposals so that everyone's an equal mix of happy and
>miserable)
> 
> In particular, we don't want to misalign skills and responsibilities: it's
> fine for developers to judge if a proposal has bugs or technical problems,
> but we don't want developers to have to decide if a proposal is
> "sufficiently popular" or "economically sound" and the like, for instance.
> Likewise we don't want to have miners or pool operators have to take
> responsibility for managing the whole economy, rather than just keeping
> their systems running.
> 
> So the way I hope this will work out is:
> 
>  - investors, industry, people in general work out priorities for what's
>valuable to work on; this is an economic/policy/subjective question,
>that everyone can participate in, and everyone can act on --
>either directly if they're developers who can work on proposals and
>implementations directly, or indirectly by persuading or paying other
>people to work on whatever's important
> 
>  - developers work on proposals, designing and implementing them to make
>(some subset of) bitcoin users better off, and t

Re: [bitcoin-dev] Modern Soft Fork Activation

2020-01-14 Thread Matt Corallo via bitcoin-dev
Good thing no one is proposing a naive BIP 9 approach :). I'll note that
BIP 9 has been fairly robust (spy-mining issues notwithstanding, which
we believe are at least largely solved in the wild) in terms of safety,
though I noted extensively in the first mail that it failed in that its
activation parameters were widely misunderstood. I think the above proposal
largely solves that, and I don't see much in the way of arguing that
point from you, here.

As an aside, BIP 9 is also the Devil We Know, which carries a lot of
value, since we've found (and addressed) direct issues with it, whereas
all other activation methods we have ~0 experience with in the modern
Bitcoin network.

On 1/10/20 11:37 PM, Luke Dashjr wrote:
> I think BIP 9 is a proven failure, and flag day softforks have their own 
> problems:
> 
> A) There is no way to unambiguously say "the rules for this chain are 
> ". It leaves the chain in a kind of "quantum state" where the rules 
> could be one thing, or could be another. Until the new rules are violated, we 
> do not know if the softfork was a success or not. Because of this, people 
> will rightly shy away from relying on the new rules. This problem is made 
> worse by the fact that common node policies might not produce blocks which 
> violate the rules. If we had gone with BIP149 for Segwit, it is IMO probable 
> we would still not have a clear answer today to "Is Segwit active or not?"
> 
> B) Because of (A), there is also no clear way to intentionally reject the 
> softfork. Those who do not consent to it are effectively compelled to accept 
> it anyway. While it is usually possible to craft an opposing softfork, this 
> should IMO be well-defined and simple to do (including a plan to do so in any 
> BIP9-alike spec).
> 
> For these reasons, in 2017, I proposed revising BIP 8 with a mandatory 
> signal, 
> similar to how BIP148 worked: https://github.com/bitcoin/bips/pull/550
> However, the author of BIP 8 has since vanished, and because we had no 
> immediate softfork plans, efforts to move this forward were abandoned 
> temporarily. It seems like a good time to resume this work.
> 
> In regard to your goal #3, I would like to note that after the mandatory 
> signal period, old miners could resume mining unchanged. This means there is 
> a temporary loss of hashrate to the network, but I think it is overall better 
> than the alternatives. The temporary loss of income from invalid blocks will 
> also give affected miners a last push to upgrade, hopefully improving the 
> long run security of the network hashrate.
> 
> Luke
> 
> (P.S. As for your #1, I do think it is oversimplified in some cases, but we 
> should leave that for later discussion when it actually becomes relevant.)


Re: [bitcoin-dev] Modern Soft Fork Activation

2020-01-10 Thread Matt Corallo via bitcoin-dev
I went back and forth with a few folks on this one. I think losing goals 3/4
very explicitly in order to nudge miners is a poor trade-off. I’ll note that
your point 2 here seems a bit disconnected to me. If you
want to fork yourself off the network, you can do it in easier ways, and if 
miners want to maliciously censor transactions to the detriment of users, 
rejecting a version bit doesn’t really help avoid that.

Your point about upgrade warnings is well-made, but I’m dubious of its value 
over the network chaos many large forks might otherwise cause.

Matt

> On Jan 10, 2020, at 17:22, Jorge Timón  wrote:
> 
> Well, bip9 doesn't only fall apart in case of unreasonable objection,
> it also fails simply with miners' apathy.
> Anyway, your proposed plan should take care of that case too, I think.
> Overall sounds good to me.
> 
> Regarding bip8-like activation, luke-jr suggested that instead of
> simply activating on date x if failed to do so by miners' signaling, a
> consensus rule could require the blocks to signal for activation in
> the last activation window.
> I see 2 main advantages for this:
> 
> 1) Outdated nodes can implement warnings (like in bip9) and they can
> see those warnings even if it's activated in the last activation
> window. Of course this can become counterproductive if miners squat
> signaling bits for asicboost again.
> 
> 2) It is easier for users to actively resist a given change they
> oppose. Instead of requiring signaling, their nodes can be set to
> ignore chains that activate it. This will result in a fork, but if
> different groups of users want different things, this is arguably the
> best behaviour: a "clean" split.
> 
> I assume many people won't like this, but I really think we should
> consider how users should ideally resist an unwanted change, even if
> the proponents had the best intentions in mind, there may be
> legitimate reasons to resist it that they may not have considered.
> 
>> On Fri, Jan 10, 2020 at 10:30 PM Matt Corallo via bitcoin-dev wrote:
>> 
>> There are a series of soft-fork designs which have recently been making
>> good progress towards implementation and future adoption. However, for
>> various reasons, activation methods therefor have gotten limited
>> discussion. I'd like to reopen that discussion here.
>> 
>> It is likely worth revisiting the goals both for soft forks and their
>> activation methods to start. I'm probably missing some, but some basic
>> requirements:
>> 
>> 1) Avoid activating in the face of significant, reasonable, and directed
>> objection. Period. If someone has a well-accepted, reasonable use of
>> Bitcoin that is working today, have no reason to believe wouldn't work
>> long into the future without a change, and which would be made
>> impossible or significantly more difficult by a change, that change must
>> not happen. I certainly hope there is no objection on this point (see
>> the last point for an important caveat that I'm sure everyone will jump
>> to point out).
>> 
>> 2) Avoid activating within a timeframe which does not make high
>> node-level-adoption likely. As with all "node" arguments, I'll note that
>> I mean "economically-used" nodes, not the thousand or so spy nodes on
>> Google Cloud and AWS. Rule changes don't make sense without nodes
>> enforcing them, whether they happen to be a soft fork, hard fork, or a
>> blue fork, so activating in a reduced timeframe that doesn't allow for
>> large-scale node adoption doesn't have any value, and may cause other
>> unintended side effects.
>> 
>> 3) Don't (needlessly) lose hashpower to un-upgraded miners. As a part of
>> Bitcoin's security comes from miners, reducing the hashpower of the
>> network as a side effect of a rule change is a needless reduction in a
>> key security parameter of the network. This is why, in recent history,
>> soft forks required 95% of hashpower to indicate that they have upgraded
>> and are capable of enforcing the new rules. Further, this is why recent
>> soft forks have not included changes which would result in a standard
Bitcoin Core instance mining invalid-by-new-rules blocks (by relying on
>> the standardness behavior of Bitcoin Core).
>> 
>> 4) Use hashpower enforcement to de-risk the upgrade process, wherever
>> possible. As a corollary of the above, one of the primary reasons we use
>> soft forks is that hashpower-based enforcement of rules is an elegant
>> way to prevent network splits during the node upgrade process. While it
>> does not make sense to invest material value in systems protected by ne

[bitcoin-dev] Modern Soft Fork Activation

2020-01-10 Thread Matt Corallo via bitcoin-dev
There are a series of soft-fork designs which have recently been making
good progress towards implementation and future adoption. However, for
various reasons, activation methods therefor have gotten limited
discussion. I'd like to reopen that discussion here.

It is likely worth revisiting the goals both for soft forks and their
activation methods to start. I'm probably missing some, but some basic
requirements:

1) Avoid activating in the face of significant, reasonable, and directed
objection. Period. If someone has a well-accepted, reasonable use of
Bitcoin that is working today, have no reason to believe wouldn't work
long into the future without a change, and which would be made
impossible or significantly more difficult by a change, that change must
not happen. I certainly hope there is no objection on this point (see
the last point for an important caveat that I'm sure everyone will jump
to point out).

2) Avoid activating within a timeframe which does not make high
node-level-adoption likely. As with all "node" arguments, I'll note that
I mean "economically-used" nodes, not the thousand or so spy nodes on
Google Cloud and AWS. Rule changes don't make sense without nodes
enforcing them, whether they happen to be a soft fork, hard fork, or a
blue fork, so activating in a reduced timeframe that doesn't allow for
large-scale node adoption doesn't have any value, and may cause other
unintended side effects.

3) Don't (needlessly) lose hashpower to un-upgraded miners. As a part of
Bitcoin's security comes from miners, reducing the hashpower of the
network as a side effect of a rule change is a needless reduction in a
key security parameter of the network. This is why, in recent history,
soft forks required 95% of hashpower to indicate that they have upgraded
and are capable of enforcing the new rules. Further, this is why recent
soft forks have not included changes which would result in a standard
Bitcoin Core instance mining invalid-by-new-rules blocks (by relying on
the standardness behavior of Bitcoin Core).

4) Use hashpower enforcement to de-risk the upgrade process, wherever
possible. As a corollary of the above, one of the primary reasons we use
soft forks is that hashpower-based enforcement of rules is an elegant
way to prevent network splits during the node upgrade process. While it
does not make sense to invest material value in systems protected by new
rules until a significant majority of "economic nodes" is enforcing said
rules, hashpower lets us neatly bridge the gap in time between
activation and then. By having a supermajority of miners enforce the new
rules, attempts at violating the new rules do not result in a
significant network split, disrupting existing users of the system. If
we aren't going to take advantage of this, we should do a hard fork
instead, with the necessarily slow timescale that entails.

5) Follow the will of the community, irrespective of individuals or
unreasoned objection, but without ever overruling any reasonable
objection. Recent history also includes "objection" to soft forks in the
form of "this is bad because it doesn't fix a different problem I want
fixed ASAP". I don't think anyone would argue this qualifies as a
reasonable objection to a change, and we should be in a place, as a
community (never as developers or purely one group), to ignore such
objections and make forward progress in spite of them. We don't make
good engineering decisions by "bundling" unrelated features together to
enable political football and compromise.

I think BIP 9 (plus a well-crafted softfork) pretty effectively checks
the boxes for #2-4 here, and when done carefully with lots of community
engagement and measurement, can effectively fulfill #1 as well. #5 is,
as I'm sure everyone is aware, where it starts to fall down pretty hard.

BIP 8 has been proposed as an alternative, largely in response to issues
with #5. However, a naive deployment of it, rather obviously, completely
fails #1, #3, and #4, and, in my view, fails #5 as well by both giving
an impression of, setting a precedent of, and possibly even in practice
increasing the ability of developers to decide the consensus rules of
the system. A BIP 8 deployment that more accurately measures community
support as a prerequisite could arguably fulfill #1 and #5, though I'm
unaware of any concrete proposals on how to accomplish that. Arguably, a
significantly longer activation window could also allow BIP 8 to fulfill
#3 and #4, but only by exploiting the "needlessly" and "wherever
possible" loopholes.

You may note that, from the point of view of achieving the critical
goals here, BIP 8 is only different from a flag-day activation in that,
if it takes the "happy-path" of activating before the flag day, it looks
like BIP 9, but isn't guaranteed to. It additionally has the
"nice-to-have" property that activation can occur before the flag-day in
the case of faster miner adoption, though there is a limit of how fast
is useful due 

Re: [bitcoin-dev] v3 onion services

2019-11-17 Thread Matt Corallo via bitcoin-dev
There is effort ongoing to upgrade the Bitcoin P2P protocol to support other 
address types, including onion v3. There are various posts on this ML under the 
title “addrv2”. Further review of, and contributions to, that effort are,
as always, welcome.
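
For context, the core of the addrv2 (BIP 155) change is that addresses
become length-prefixed and tagged with a network ID rather than being
forced into the legacy 16-byte field, which is what makes 32-byte v3 onion
and I2P addresses representable. A simplified sketch (IDs and lengths per
the BIP 155 draft; wire serialization details are omitted here):

    NETWORKS = {
        0x01: ("IPV4", 4),
        0x02: ("IPV6", 16),
        0x03: ("TORV2", 10),
        0x04: ("TORV3", 32),  # v3 onion: 32-byte ed25519 public key
        0x05: ("I2P", 32),
        0x06: ("CJDNS", 16),
    }

    def validate_addrv2(network_id, addr):
        name, expected_len = NETWORKS[network_id]
        if len(addr) != expected_len:
            raise ValueError("bad %s address length: %d" % (name, len(addr)))
        return name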

> On Nov 17, 2019, at 00:05, Mr. Lee Chiffre via bitcoin-dev wrote:
> 
> Right now bitcoin client core supports use of tor hidden service. It
> supports v2 hidden service. I am in progress of creating a new bitcoin
> node which will use v3 hidden service instead of v2. I am looking at
> bitcoin core and btcd to use. Do any of these or current node software
> support the v3 onion addresses for the node address? What about I2P
> addresses? If not what will it take to get it to support the longer
> addresses that is used by i2p and tor v3?
> 
> 
> -- 
> lee.chif...@secmail.pro
> PGP 97F0C3AE985A191DA0556BCAA82529E2025BDE35
> 


Re: [bitcoin-dev] Bech32 weakness and impact on bip-taproot addresses

2019-11-10 Thread Matt Corallo via bitcoin-dev
Seems good to me, though I'm curious if we have any (even vaguely)
immediate need for non-32/20-byte Segwit outputs? It seems to me this
can be resolved by just limiting the size of bech32 outputs and calling
it a day - adding yet another address format has very significant
ecosystem costs, and if we don't anticipate needing it for 5 years (if
at all)...lets not jump to pay that cost.

Matt

On 11/10/19 9:51 PM, Pieter Wuille via bitcoin-dev wrote:
> On Thu, Nov 7, 2019, 18:16 David A. Harding wrote:
> 
> On Thu, Nov 07, 2019 at 02:35:42PM -0800, Pieter Wuille via
> bitcoin-dev wrote:
> > In the current draft, witness v1 outputs of length other
> > than 32 remain unencumbered, which means that for now such an
> > insertion or erasure would result in an output that can be spent by
> > anyone. If that is considered unacceptable, it could be prevented by
> > for example outlawing v1 witness outputs of length 31 and 33.
> 
> Either a consensus rule or a standardness rule[1] would require anyone
> using a bech32 library supporting v1+ segwit to upgrade their library.
> Otherwise, users of old libraries will still attempt to pay v1 witness
> outputs of length 31 or 33, causing their transactions to get rejected
> by newer nodes or get stuck on older nodes.  This is basically the
> problem #15846[2] was meant to prevent.
> 
> If we're going to need everyone to upgrade their bech32 libraries
> anyway, I think it's probably best that the problem is fixed in the
> bech32 algorithm rather than at the consensus/standardness layer.
> 
> 
> Admittedly, this affecting development of consensus or standardness
> rules would feel unnatural. In addition, it also has the potential
> downside of breaking batched transactions in some settings (ask an
> exchange for a withdrawal to an invalid/nonstandard version, which they
> batch with other outputs that then get stuck because the transaction
> does not go through).
> 
> So, Ideally this is indeed solved entirely on the bech32/address
> encoding side of things. I did not initially expect the discussion here
> to go in that direction, as that could come with all problems that
> rolling out a new address scheme in the first place has. However, there
> may be a way to mostly avoid those problems for the time being, while
> also not having any impact on consensus or standardness rules.
> 
> I believe that most new witness programs we'd want to introduce anyway
> will be 32 bytes in the future, if the option exists. It's enough for a
> 256-bit hash (which has up to 128-bit collision security, and more than
> 128 bits is hard to achieve in Bitcoin anyway), or for X coordinates
> directly. Either of those, plus a small version number to indicate the
> commitment structure should be enough to encode any spendability
> condition we'd want with any achievable security level.
> 
> With that observation, I propose the following. We amend BIP173 to be
> restricted to witness programs of length 20 or 32 (but still support
> versions other than 0). This seems like it may be sufficient for several
> years, until version numbers run out. I believe that some wallet
> implementations already restrict sending to known versions only, which
> means effectively no change for them in addition to normal deployment.
> 
> In the mean time we develop a variant of bech32 with better
> insertion/erasure detecting properties, which will be used for witness
> programs of length different from 20 or 32. If we make sure that there
> are never two distinct valid checksum algorithms for the same output, I
> don't believe there is any need for a new address scheme or a different
> HRP. The latter is something I'd strongly try to avoid anyway, as it
> would mean additional cognitive load on users because of another
> visually distinct address style, plus more logistical overhead
> (coordination and keeping track of 2 HRPs per chain).
> 
> I believe improving bech32 itself is preferable over changing the way
> segwit addresses use bech32, as that can be done without making
> addresses even longer. Furthermore, the root of the issue is in bech32,
> and it is simplest to fix things there. The easiest solution is to
> simply change the constant 1 that is xor'ed into the checksum before
> encoding it to a 30-bit number. This has the advantage that a single
> checksum is never valid for both algorithms simultaneously. Another
> approach is to implicitly include the length in the checksummed data.
> 
> What do people think?
> 
> Cheers,
> 
> -- 
> Pieter
> 
> 
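
For reference, the constant change Pieter describes is a one-line tweak to
checksum verification. Below is the standard reference polymod from BIP 173
together with a verification function parameterized on that constant (the
alternative constant itself had not been chosen at the time of this thread):

    def bech32_polymod(values):
        # BIP 173 BCH checksum over 5-bit symbols (reference algorithm).
        GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
        chk = 1
        for v in values:
            b = chk >> 25
            chk = (chk & 0x1ffffff) << 5 ^ v
            for i in range(5):
                chk ^= GEN[i] if ((b >> i) & 1) else 0
        return chk

    def verify(expanded_hrp_plus_data, const):
        # BIP 173 fixes const = 1. The proposal above is to verify against
        # a different 30-bit constant for lengths other than 20/32, so no
        # single string can satisfy both checksum algorithms at once.
        return bech32_polymod(expanded_hrp_plus_data) == const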


Re: [bitcoin-dev] Bech32 weakness and impact on bip-taproot addresses

2019-11-07 Thread Matt Corallo via bitcoin-dev
Given the issue is in the address format, not the consensus/standardness layer, 
it does seem somewhat strange to jump to addressing it with a 
consensus/standardness fix. Maybe the ship has sailed, but for the sake of 
considering all our options, we could also redefine bech32 to not allow such 
addresses.

Matt

>> On Nov 7, 2019, at 17:47, Greg Sanders via bitcoin-dev wrote:
> 
> Could the softer touch of just making them non-standard apply as a future 
> preparation for an accepted softfork? Relaxations could easily be done later 
> if desired.
> 
>>> On Thu, Nov 7, 2019, 5:37 PM Pieter Wuille via bitcoin-dev wrote:
>> Hello all,
>> 
>> A while ago it was discovered that bech32 has a mutation weakness (see
>> https://github.com/sipa/bech32/issues/51 for details). Specifically,
>> when a bech32 string ends with a "p", inserting or erasing "q"s right
>> before that "p" does not invalidate it. While insertion/erasure
>> robustness was not an explicit goal (BCH codes in general only have
>> guarantees about substitution errors), this is very much not by
>> design, and this specific issue could have been made much less
>> impactful with a slightly different approach. I'm sorry it wasn't
>> caught earlier.
>> 
>> This has little effect on the security of P2WPKH/P2WSH addresses, as
>> those are only valid (per BIP173) for specific lengths (42 and 62
>> characters respectively). Inserting 20 consecutive "p"s in a typo
>> seems highly improbable.
>> 
>> I'm making this post because this property may unfortunately influence
>> design decisions around bip-taproot, as was brought up in the review
>> session 
>> (https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-October/017427.html)
>> past tuesday. In the current draft, witness v1 outputs of length other
>> than 32 remain unencumbered, which means that for now such an
>> insertion or erasure would result in an output that can be spent by
>> anyone. If that is considered unacceptable, it could be prevented by
>> for example outlawing v1 witness outputs of length 31 and 33.
>> 
>> Thoughts?
>> 
>> Cheers,
>> 
>> -- 
>> Pieter


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-25 Thread Matt Corallo via bitcoin-dev
I don’t see how. Let’s imagine Party A has two spendable outputs: they
stuff the package size on one of their spendable outputs until it is right at
the limit, add one more transaction on their other output (to meet the
carve-out), and now Party B can’t do anything.
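
A sketch of the acceptance logic under discussion may help; the constants
and shape here only approximate Bitcoin Core's rules (default descendant
limit 25; carve-out size limit 10,000 vbytes) and are illustrative, not the
actual implementation:

    MAX_DESCENDANTS = 25
    CARVE_OUT_MAX_VSIZE = 10_000

    def accept_child(parents_descendant_count, child_vsize,
                     child_unconf_parents):
        if parents_descendant_count < MAX_DESCENDANTS:
            return True
        # The carve-out admits exactly one extra, small descendant with a
        # single unconfirmed parent, so the *other* channel party can
        # still attach a fee-bumping child to their own output.
        return (child_unconf_parents == 1
                and child_vsize <= CARVE_OUT_MAX_VSIZE
                and parents_descendant_count == MAX_DESCENDANTS)

The scenario above has Party A consuming both the normal descendant limit
(via one output) and the single carve-out slot (via the other), leaving
Party B with no way to attach anything.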

> On Oct 24, 2019, at 21:05, Johan Torås Halseth  wrote:
> 
> 
> It essentially changes the rule to always allow CPFP-ing the commitment as 
> long as there is an output available without any descendants. It changes the 
> commitment from "you always need at least, and exactly, one non-CSV output 
> per party. " to "you always need at least one non-CSV output per party. "
> 
> I realize these limits are there for a reason though, but I'm wondering if 
> we could relax them. Also now that jeremyrubin has expressed problems with the 
> current mempool limits.
> 
>> On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo wrote:
>> I may be missing something, but I'm not sure how this changes anything?
>> 
>> If you have a commitment transaction, you always need at least, and
>> exactly, one non-CSV output per party. The fact that there is a size
>> limitation on the transaction that spends for carve-out purposes only
>> effects how many other inputs/outputs you can add, but somehow I doubt
>> its ever going to be a large enough number to matter.
>> 
>> Matt
>> 
>> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
>> > Reviving this old thread now that the recently released RC for bitcoind
>> > 0.19 includes the above mentioned carve-out rule.
>> > 
>> > In an attempt to pave the way for more robust CPFP of on-chain contracts
>> > (Lightning commitment transactions), the carve-out rule was added in
>> > https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
>> > an implementation of a new commitment format for utilizing the Bring
>> > Your Own Fees strategy using CPFP, I’m wondering if the special case
>> > rule should have been relaxed a bit, to avoid the need for adding a 1
>> > CSV to all outputs (in case of Lightning this means HTLC scripts would
>> > need to be changed to add the CSV delay).
>> > 
>> > Instead, what about letting the rule be
>> > 
>> > The last transaction which is added to a package of dependent
>> > transactions in the mempool must:
>> >   * Have no more than one unconfirmed parent.
>> > 
>> > This would of course allow adding a large transaction to each output of
>> > the unconfirmed parent, which in effect would allow an attacker to
>> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
>> > this a problem with the current mempool acceptance code in bitcoind? I
>> > would imagine evicting transactions based on feerate when the max
>> > mempool size is met handles this, but I’m asking since it seems like
>> > there has been several changes to the acceptance code and eviction
>> > policy since the limit was first introduced.
>> > 
>> > - Johan
>> > 
>> > 
>> > On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell wrote:
>> > 
>> > Matt Corallo writes:
>> > >>> Thus, even if you imagine a steady-state mempool growth, unless the
>> > >>> "near the top of the mempool" criteria is "near the top of the next
>> > >>> block" (which is obviously *not* incentive-compatible)
>> > >>
>> > >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
>> > >> block, and assumed you'd only allow RBF if the old package wasn't
>> > in the
>> > >> top and the replacement would be.  That seems incentive
>> > compatible; more
>> > >> than the current scheme?
>> > >
>> > > My point was, because of block time variance, even that criteria
>> > doesn't hold up. If you assume a steady flow of new transactions and
>> > one or two blocks come in "late", suddenly "top 4MWeight" isn't
>> > likely to get confirmed until a few blocks come in "early". Given
>> > block variance within a 12 block window, this is a relatively likely
>> > scenario.
>> > 
>> > [ Digging through old mail. ]
>> > 
>> > Doesn't really matter.  Lightning close algorithm would be:
>> > 
>> > 1.  Give bitcoind unilateral close.
>> > 2.  Ask bitcoind what current expedited fee is (or survey your 
>> > mempool).
>> > 3.  Give bitcoind child "push" tx at that total feerate.
>> > 4.  If next block doesn't contain unilateral close tx, goto 2.
>> > 
>> > In this case, if you allow a simplified RBF where 'you can replace if
>> > 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3.
>> > old tx isnt',
>> > it works.
>> > 
>> > It allows someone 100k of free tx spam, sure.  But it's simple.
>> > 
>> > We could further restrict it by marking the unilateral close somehow to
>> > say "gonna be pushed" and further limiting the child tx weight (say,
>> > 5kSipa?) in that case.
>> > 
>> > Cheers,
>> > Rusty.
>> > 

Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-24 Thread Matt Corallo via bitcoin-dev
I may be missing something, but I'm not sure how this changes anything?

If you have a commitment transaction, you always need at least, and
exactly, one non-CSV output per party. The fact that there is a size
limitation on the transaction that spends for carve-out purposes only
effects how many other inputs/outputs you can add, but somehow I doubt
its ever going to be a large enough number to matter.

Matt

On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
> Reviving this old thread now that the recently released RC for bitcoind
> 0.19 includes the above mentioned carve-out rule.
> 
> In an attempt to pave the way for more robust CPFP of on-chain contracts
> (Lightning commitment transactions), the carve-out rule was added in
> https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
> an implementation of a new commitment format for utilizing the Bring
> Your Own Fees strategy using CPFP, I’m wondering if the special case
> rule should have been relaxed a bit, to avoid the need for adding a 1
> CSV to all outputs (in case of Lightning this means HTLC scripts would
> need to be changed to add the CSV delay).
> 
> Instead, what about letting the rule be
> 
> The last transaction which is added to a package of dependent
> transactions in the mempool must:
>   * Have no more than one unconfirmed parent.
> 
> This would of course allow adding a large transaction to each output of
> the unconfirmed parent, which in effect would allow an attacker to
> exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
> this a problem with the current mempool acceptance code in bitcoind? I
> would imagine evicting transactions based on feerate when the max
> mempool size is met handles this, but I’m asking since it seems like
> there has been several changes to the acceptance code and eviction
> policy since the limit was first introduced.
> 
> - Johan
> 
> 
> On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell wrote:
> 
> Matt Corallo writes:
> >>> Thus, even if you imagine a steady-state mempool growth, unless the
> >>> "near the top of the mempool" criteria is "near the top of the next
> >>> block" (which is obviously *not* incentive-compatible)
> >>
> >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
> >> block, and assumed you'd only allow RBF if the old package wasn't
> in the
> >> top and the replacement would be.  That seems incentive
> compatible; more
> >> than the current scheme?
> >
> > My point was, because of block time variance, even that criteria
> doesn't hold up. If you assume a steady flow of new transactions and
> one or two blocks come in "late", suddenly "top 4MWeight" isn't
> likely to get confirmed until a few blocks come in "early". Given
> block variance within a 12 block window, this is a relatively likely
> scenario.
> 
> [ Digging through old mail. ]
> 
> Doesn't really matter.  Lightning close algorithm would be:
> 
> 1.  Give bitcoind unilateral close.
> 2.  Ask bitcoind what current expedited fee is (or survey your mempool).
> 3.  Give bitcoind child "push" tx at that total feerate.
> 4.  If next block doesn't contain unilateral close tx, goto 2.
> 
> In this case, if you allow a simplified RBF where 'you can replace if
> 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3.
> old tx isnt',
> it works.
> 
> It allows someone 100k of free tx spam, sure.  But it's simple.
> 
> We could further restrict it by marking the unilateral close somehow to
> say "gonna be pushed" and further limiting the child tx weight (say,
> 5kSipa?) in that case.
> 
> Cheers,
> Rusty.


Re: [bitcoin-dev] Is Signet Bitcoin?

2019-10-14 Thread Matt Corallo via bitcoin-dev
Indeed, Signet is no less (or more) Bitcoin than a seed format or BIP 32. It’s 
“not Bitcoin” but it’s certainly “interoperability for how to build good 
testing for Bitcoin”.

> On Oct 14, 2019, at 19:55, Karl-Johan Alm via bitcoin-dev wrote:
> 
> Hello,
> 
> The pull request to the bips repository for Signet has stalled, as the
> maintainer isn't sure Signet should have a BIP at all, i.e. "is Signet
> Bitcoin?".
> 
> My argument is that Signet is indeed Bitcoin and should have a BIP, as
> this facilitates the interoperability between different software in
> the Bitcoin space.
> 
> Feedback welcome, here or on the pull request itself:
> https://github.com/bitcoin/bips/pull/803


Re: [bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-08-14 Thread Matt Corallo via bitcoin-dev
You very clearly didn't bother to read other mails in this thread. To make it 
easy for you, here's a few links:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017147.html
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017175.html

Matt

> On Aug 13, 2019, at 23:05, Will Madden  wrote:
> 
> For the record, strong NACK. My understanding is that this breaks several 
> established SPV implementations (such as early breadwallet for sure and 
> possibly current BRD wallets) and I have yet to see quantitative 
> prioritization or even a rational justification for this change.
> 
> Requiring SPV wallets to communicate with trusted nodes is centralization, 
> and breaking functionality and implementations that enable this without a 
> thoroughly researched rationale is highly suspect.
> 
>> On Jul 20, 2019, at 1:46 PM, Matt Corallo via bitcoin-dev wrote:
>> 
>> Just a quick heads-up for those watching the list who may be using it -
>> in the next Bitcoin Core release bloom filter serving will be turned off
>> by default. This has been a long time coming, it's been an option for
>> many releases and has been a well-known DoS vector for some time.
>> As other DoS vectors have slowly been closed, this has become
>> increasingly an obvious low-hanging fruit. Those who are using it should
>> already have long been filtering for NODE_BLOOM-signaling nodes, and I
>> don't anticipate those being gone any time particularly soon.
>> 
>> See-also PR at https://github.com/bitcoin/bitcoin/pull/16152
>> 
>> The release notes will likely read:
>> 
>> P2P Changes
>> ---
>> - The default value for the -peerbloomfilters configuration option (and,
>> thus, NODE_BLOOM support) has been changed to false.
>> This resolves well-known DoS vectors in Bitcoin Core, especially for
>> nodes with spinning disks. It is not anticipated that
>> this will result in a significant lack of availability of
>> NODE_BLOOM-enabled nodes in the coming years, however, clients
>> which rely on the availability of NODE_BLOOM-supporting nodes on the
>> P2P network should consider the process of migrating
>> to a more modern (and less trustful and privacy-violating) alternative
>> over the coming years.
>> 
>> Matt


Re: [bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-07-27 Thread Matt Corallo via bitcoin-dev
This conversation went off the rails somewhat. I don't think there's any 
immediate risk of NODE_BLOOM peers being unavailable. This is a defaults 
change, not a removal of the code to serve BIP 37 peers (nor would I suggest 
removing said code while people still want to use them - the maintenance burden 
isn't much). Looking at historical upgrade cycles, ignoring any other factors, 
there will be a large number of nodes serving NODE_BLOOM for many years.

Even more importantly, if you need them, run a node or two. As long as no one 
is exploiting the issues with them such a node isn't *too* expensive. Or don't, 
I guarantee you chainanalysis or some competitor of theirs will very very 
happily serve bloom-filtered clients as long as such clients want to 
deanonymize themselves. We already see that a plurality of nodes on the
network are clearly not run-of-the-mill Core nodes, many of which are likely
deanonymization efforts.

In some cases BIP 157 is a replacement, in some cases, indeed, it is not. I 
agree at a protocol level we shouldn't be passing judgement about how users 
wish to interact with the Bitcoin system (aside from not putting our own, 
personal, effort into building such things) but that isn't what's happening 
here. This is an important DoS fix for the average node, and I don't really 
understand the argument that this is going to break existing BIP 37 wallets, 
but if it makes you feel any better I can run some beefy BIP 37 nodes.

Matt

> On Jul 26, 2019, at 06:04, Jonas Schnelli via bitcoin-dev wrote:
> 
> 
>> 1) It causes way too much traffic for mobile users, and likely even too
>> much traffic for fixed lines in not so developed parts of the world.
> 
> Yes. It causes more traffic than BIP37.
> Basic block filters for the last ~7 days (1008 blocks) are about 19MB 
> (just the filters).
> On top, you will probably fetch a handful of irrelevant blocks due to the FPs 
> and due to true relevant txns.
> A over-the-thumb estimation: ~25MB per week of catch-up.
> If you where offline for a month: ~108MB
> 
> That’s certainly more than BIP37 BF (measured 1.6MB total traffic with android 
> schildbach wallet restore blockchain for 8 week [7 weeks headers, 1week 
> merkleblocks]).
> 
> But let’s look at it like this: for an additional, say 25MB per week (maybe a 
> bit more), you get the ability to filter blocks without depending on serving 
> peers who may compromise your financial privacy.
> Also, if you keep the filters, further rescans do consume the same or less 
> bandwidth than BF BIP37.
> In other words: you have the chance to potentially increase privacy by 
> consuming bandwidth in the range of a single audio podcast per week.
> 
> I would say the job of protocol developers is to protect users’ privacy where 
> it’s possible (as a default).
> It’s probably a debatable point whether 25MB per week of traffic is worth a 
> potential increase in privacy, though I absolutely think 25MB/week is an 
> acceptable tradeoff.
> Saving traffic is possible by using BIP37 or stratum/electrum… but developers 
> should make sure users are __warned about the consequences__!
> 
> Additionally, it looks like, peer operators are not endless being willing to 
> serve – for free – a CPU/disk intense service with no benefits for the 
> network. I would question whether a decentralised form of BIP37 is sustainable 
> in the long run (if SPV wallet provider bootstrap a net range of NODE_BLOOM 
> peers to make it more reliable on the network would be snake-oil).
> 
> 
>> 
>> 2) It filters blocks only. It doesn't address unconfirmed transactions.
> 
> Well, unconfirmed transaction are uncertain for various reasons.
> 
> BIP158 won't allow you to filter the mempool.
> But as soon as you are connected to the network, you may fetch tx with 
> inv/getdata and pick out the relevant ones (which also causes traffic).
> Unclear and probably impossible with the current BIP158 specs to fetch 
> transactions that are not in active relay and are not in a block (mempool 
> txns, at least this is true with the current observed relay tactics).
> 
> 
>> 3) Afaik, it enforces/encourages address re-use. This stems from the
>> fact that the server decides on the filter and in particular on the
>> false positive rate. On wallets with many addresses, a hardcoded filter
>> will be too blurry and thus each block will be matched. So wallets that
>> follow the "one address per incoming payment" pattern (e.g. HD wallets)
>> at some point will be forced to wrap their key chains back to the
>> beginning. If I'm wrong on this one please let me know.
> 
> I’m probably the wrong guy to ask (haven’t made the numbers) but last time I 
> rescanned a Core wallet (in my dev branch) with block filters (and a Core 
> wallet has >2000 addresses by default) it fetched a low and acceptable amount 
> of false positive blocks.
> (Maybe someone who made the numbers step in here.)
> 
> Though, large wallets – AFAIK – also operate badly with BIP37.
> 
>> 
>> 4) 

Re: [bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-07-22 Thread Matt Corallo via bitcoin-dev
Hey Andreas,

I think maybe some of the comments here were misunderstood - I don't
anticipate that most people will change their defaults, indeed, but
given the general upgrade cycles we've seen on the network over the
entire course of Bitcoin's history, there's little reason to believe
that many nodes with NODE_BLOOM publicly accessible will be around for
at least three or four years to come, though obviously any conscious
effort by folks who need those services to run nodes could extend that
significantly.

As for the DoS issues, a very old proof-of-concept of the I/O variant
is here: https://github.com/petertodd/bloom-io-attack. CPU DoS
attacks are also possible, using high hash counts to saturate a node's
CPU (you can pretty trivially see when a bloom-based peer connects to
you just by looking at top...).
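
The CPU side of this is easy to see from BIP 37's own limits - a filter may
specify up to 50 hash functions, each applied to every data element of
every transaction the serving node matches against the filter (limits per
BIP 37; the workload figures below are illustrative):

    from math import exp

    MAX_FILTER_BYTES = 36_000
    MAX_HASH_FUNCS = 50

    def fp_rate(n_elements, m_bits, k_hashes):
        # Standard Bloom-filter false-positive approximation.
        return (1 - exp(-k_hashes * n_elements / m_bits)) ** k_hashes

    def hashes_per_block(txs, data_elements_per_tx, k_hashes):
        # Work the *serving* node performs: every data element of every
        # transaction is hashed k times against the client's filter.
        return txs * data_elements_per_tx * k_hashes

    print(hashes_per_block(3000, 5, MAX_HASH_FUNCS))  # 750000 per block
    print(round(fp_rate(5000, 8 * 1000, 5), 2))       # ~0.8: a blurry
    # filter matches nearly everything, so the client fetches every block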

Finally, regarding alternatives, the filter-generation code for BIP
157/158 has been in Bitcoin Core for some time, though the P2P serving
side of things appears to have lost any champions working on it. I
presume one of the Lightning folks will eventually, given they appear to
be requiring their users connect to a handful of their own servers right
now, but if you really need it, its likely not a ton of work to pipe
them through.

Matt

On 7/21/19 10:56 PM, Andreas Schildbach via bitcoin-dev wrote:
> An estimated 10+ million wallets depend on that NODE_BLOOM to be
> updated. So far, I haven't heard of an alternative, except reading all
> transactions and full blocks.
> 
> It goes without saying pulling the rug under that many wallets is a
> disastrous idea for the adoption of Bitcoin.
> 
>> well-known DoS vectors
> 
> I asked many people, even some "core developers" at meetings, but nobody
> ever was able to explain the DoS vector. I think this is just a myth.
> 
> Yes, you can set an overly blurry filter and thus cause useless traffic,
> but it never exceeds just drinking from the full firehose (which this
> change doesn't prohibit). So where is the point? An attacker will just
> switch filtering off, or in fact has never used it.
> 
>> It is not anticipated that
>> this will result in a significant lack of availability of
>> NODE_BLOOM-enabled nodes in the coming years
> 
> Why don't you anticipate that? People almost never change defaults,
> especially if it's not for their own immediate benefit. At the same
> time, release notes in general recommend updating to the latest version.
> I *do* anticipate this will reduce the number of nodes usable by a large
> enough amount so that the feature will become unstable.
> 
>> clients
>> which rely on the availability of NODE_BLOOM-supporting nodes on the
>> P2P network should consider the process of migrating
>> to a more modern (and less trustful and privacy-violating) alternative
>> over the coming years.
> 
> There is no such alternative.
> 
> I strongly recommend postponing this change until an alternative exists
> and then give developers enough time to implement, test and roll out.
> 
> I also think as long as we don't have an alternative, we should improve
> the current filtering for segwit. E.g. testing the scripts themselves
> and each scriptPubKey spent by any input against the filter would do,
> and it also fixes the main privacy issue with server-side filtering
> (wallets have to add two items per address to the filter).
> 


[bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-07-21 Thread Matt Corallo via bitcoin-dev
Just a quick heads-up for those watching the list who may be using it -
in the next Bitcoin Core release bloom filter serving will be turned off
by default. This has been a long time coming, it's been an option for
many releases and has been a well-known DoS vector for some time.
As other DoS vectors have slowly been closed, this has become
increasingly an obvious low-hanging fruit. Those who are using it should
already have long been filtering for NODE_BLOOM-signaling nodes, and I
don't anticipate those being gone any time particularly soon.

See-also PR at https://github.com/bitcoin/bitcoin/pull/16152

The release notes will likely read:

P2P Changes
---
- The default value for the -peerbloomfilters configuration option (and,
thus, NODE_BLOOM support) has been changed to false.
  This resolves well-known DoS vectors in Bitcoin Core, especially for
nodes with spinning disks. It is not anticipated that
  this will result in a significant lack of availability of
NODE_BLOOM-enabled nodes in the coming years, however, clients
  which rely on the availability of NODE_BLOOM-supporting nodes on the
P2P network should consider the process of migrating
  to a more modern (and less trustful and privacy-violating) alternative
over the coming years.

Matt


Re: [bitcoin-dev] [PROPOSAL] Emergency RBF (BIP 125)

2019-06-03 Thread Matt Corallo via bitcoin-dev
I think this needs significantly improved motivation/description. A few areas 
I'd like to see calculated out:

1) wrt rule 3, for this to be 
obviously-incentive-compatible-for-the-next-miner, I'd think no evicted 
transactions would be allowed to be in the next block range. This would 
probably require some significant additional tracking in today's mempool logic.

2) wrt rule 4, I'd like to see a calculation of worst-case free relay. I think 
we're already not in a great place, but maybe it's worth it or maybe there is 
some other way to reduce this cost (intuitively it looks like this proposal 
could make things very, very, very bad).

3) wrt rule 5, I'd like to see benchmarks; it's probably a pretty nasty DoS
attack, but it may also be the case that it is (a) not worse than other
fundamental issues or (b) sufficiently expensive.

4) As I've indicated before, I'm generaly not a fan of such vague protections 
for time-critical transactions such as payment channel punishment transactions. 
At a high-level, in this context your counterparty's transactions (not to 
mention every other transaction in everyone's mempool) are still involved in 
the decision about whether to accept an RBF, in contrast to previous proposals, 
which makes it much harder to reason about. As a specific example, if an 
attacker exploits mempool policy differences they may cause your concept of 
"top 4M weight" to be bogus for a subeset of nodes, causing propogation to be 
limited.

Obviously there is also a ton more client-side knowledge required and 
complexity to RBF decisions here than other previous, more narrowly-targeted 
proposals.

(I don't think this one use-case being not optimal should prevent such a 
proposal, I agree it's quite nice for some other cases).

Matt

> On Jun 2, 2019, at 06:41, Rusty Russell  wrote:
> 
> Hi all,
> 
>   I want to propose a modification to rules 3, 4 and 5 of BIP 125:
> 
> To remind you of BIP 125:
> 3. The replacement transaction pays an absolute fee of at least the sum
>   paid by the original transactions.
> 
> 4. The replacement transaction must also pay for its own bandwidth at
>   or above the rate set by the node's minimum relay fee setting.
> 
> 5. The number of original transactions to be replaced and their
>   descendant transactions which will be evicted from the mempool must not
>   exceed a total of 100 transactions.
> 
> The new "emergency RBF" rule:
> 
> 6. If the original transaction was not in the first 4,000,000 weight
>   units of the fee-ordered mempool and the replacement transaction is,
>   rules 3, 4 and 5 do not apply.
> 
> This means:
> 
> 1. RBF can be used in adversarial conditions, such as lightning
>  unilateral closes where the adversary has another valid transaction
>  and can use it to block yours.  This is a problem when we allow
>  differential fees between the two current lightning transactions
>  (aka "Bring Your Own Fees").
> 
> 2. RBF can be used without knowing about miner's mempools, or that the
>  above problem is occurring.  One simply gets close to the required
>  maximum height for lightning timeout, and bids to get into the next
>  block.
> 
> 3. This proposal does not open any significant new ability to RBF spam,
>  since it can (usually) only be used once.  IIUC bitcoind won't
>  accept more than 100 descendants of an unconfirmed tx anyway.
> 
> 4. This proposal makes RBF miner-incentive compatible.  Currently the
>  protocol tells miners they shouldn't accept the highest bidding tx
>  for the good of the network.  This conflict is particularly sharp
>  in the case where the replacement tx would be immediately minable,
>  which this proposal addresses.
> 
> Unfortunately I haven't found time to code this up in bitcoin, but if
> there's positive response I can try.
> 
> Thanks for reading!
> Rusty.
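
A sketch of the rule-6 test, to make the proposal concrete (illustrative
only - this ignores packages/dependencies and re-sorts on every query,
which a real implementation would not do):

    BLOCK_WEIGHT = 4_000_000

    def in_first_block_region(mempool, target_txid):
        """mempool: iterable of (txid, feerate, weight) tuples."""
        used = 0
        for txid, feerate, weight in sorted(mempool, key=lambda t: t[1],
                                            reverse=True):
            if used >= BLOCK_WEIGHT:
                return False
            if txid == target_txid:
                return True
            used += weight
        return False

    def emergency_rbf_allowed(mempool, old_txid, replacement):
        # Rule 6: the original must be outside the next-block region and
        # the replacement inside it; rules 3, 4, and 5 are then waived.
        return (not in_first_block_region(mempool, old_txid)
                and in_first_block_region(list(mempool) + [replacement],
                                          replacement[0]))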



Re: [bitcoin-dev] Congestion Control via OP_CHECKOUTPUTSHASHVERIFY proposal

2019-05-22 Thread Matt Corallo via bitcoin-dev
If we're going to do covenants (and I think we should), then I think we
need to have a flexible solution that provides more features than just
this, or we risk adding it only to go through all the effort again when
people ask for a better solution.

Matt

On 5/20/19 8:58 PM, Jeremy via bitcoin-dev wrote:
> Hello bitcoin-devs,
> 
> Below is a link to a BIP Draft for a new opcode,
> OP_CHECKOUTPUTSHASHVERIFY. This opcode enables an easy-to-use trustless
> congestion control technique via a rudimentary, limited form of
> covenant which does not bear the same technical and social risks of
> prior covenant designs.
> 
> Congestion control allows Bitcoin users to confirm payments to many
> users in a single transaction without creating the UTXO on-chain until a
> later time. This therefore improves the throughput of confirmed
> payments, at the expense of latency on spendability and increased
> average block space utilization. The BIP covers this use case in detail,
> and a few other use cases lightly.
> 
> The BIP draft is here:
> https://github.com/JeremyRubin/bips/blob/op-checkoutputshashverify/bip-coshv.mediawiki
> 
> The BIP proposes to deploy the change simultaneously with Taproot as an
> OPSUCCESS, but it could be deployed separately if needed.
> 
> An initial reference implementation of the consensus changes and  tests
> which demonstrate how to use it for basic congestion control is
> available at
> https://github.com/JeremyRubin/bitcoin/tree/congestion-control.  The
> changes are about 74 lines of code on top of sipa's Taproot reference
> implementation.
> 
> Best regards,
> 
> Jeremy Rubin
> 
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> 
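
The congestion-control idea above is easy to see in miniature: commit on-chain 
today to a hash of the eventual fan-out outputs, and let the later expansion 
transaction be valid only if its outputs match that hash. A toy Python sketch, 
with a deliberately simplified serialization (not the BIP's exact rules):

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def serialize_output(value_sats: int, script_pubkey: bytes) -> bytes:
        # simplified txout encoding: 8-byte little-endian value, then a
        # one-byte-length-prefixed scriptPubKey
        return (value_sats.to_bytes(8, "little")
                + len(script_pubkey).to_bytes(1, "little") + script_pubkey)

    def outputs_hash(outputs) -> bytes:
        return sha256(b"".join(serialize_output(v, spk) for v, spk in outputs))

    # One confirmed output commits to paying 1,000 people now; the expansion
    # tx that actually creates those UTXOs is valid only if its outputs hash
    # to the committed value.
    p2wpkh = bytes.fromhex("0014") + bytes(20)   # placeholder P2WPKH script
    payments = [(10_000, p2wpkh)] * 1000
    print(outputs_hash(payments).hex())
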
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-12 Thread Matt Corallo via bitcoin-dev
Note that even your carve-outs for OP_NOP are not sufficient here - if you were 
using nSequence to tag different pre-signed transactions into categories 
(roughly as you suggest people may want to do with extra sighash bits) then 
their transactions could very easily have become no longer realistically 
spendable. The whole point of soft forks is that we invalidate otherwise-unused 
bits of the protocol. This does not seem inconsistent with the proposal here.

> On Mar 9, 2019, at 13:29, Russell O'Connor  wrote:
> Bitcoin has *never* made a soft-fork, since the time of Satoshi, that 
> invalidated transactions that send secured inputs to secured outputs 
> (excluding uses of OP_NOP1-OP_NOP10).

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-11 Thread Matt Corallo via bitcoin-dev
I think you may have misunderstood part of the motivation. Yes, part of the 
motivation *is* to remove OP_CODESEPARATOR wholesale, greatly simplifying the 
theoretical operation of checksig operations (thus somewhat simplifying the 
implementation but also simplifying analysis of future changes, such as 
sighash-caching code).

I think a key part of the analysis here is that no one I've spoken to (and 
we've been discussing removing it for *years*, including many attempts at 
coming up with reasons to keep it) is aware of any real proposals to use 
OP_CODESEPARATOR, let alone anyone using it in the wild. Hiding data in invalid 
public keys is a long-discussed-and-implemented idea (despite its 
discouragement, not to mention it appears on the chain in many places).

It would end up being a huge shame to have all the OP_CODESEPARATOR mess left 
around after all the effort that has gone into removing it for the past few 
years, especially given the stark difference in visibility of a fork when 
compared to a standardness change.

As for your specific proposal of increasing the weight of anything that has an 
OP_CODESEPARATOR in it by the cost of an additional (simple) input, this 
doesn't really solve the issue. After all, if we're assuming some user exists 
who has been sending money, unspent, to scripts with OP_CODESEPARATOR to 
force signatures to commit to whether some other signature was present, and who 
won't see an (invariably media-covered) pending soft-fork in time to claim their 
funds, we should also assume such a user has pre-signed transactions which are 
time-locked, claim a number of inputs, and have several paths in the script 
which contain OP_CODESEPARATOR, rendering their transactions invalid.

Matt

> On Mar 11, 2019, at 15:15, Russell O'Connor via bitcoin-dev 
>  wrote:
> 
> Increasing the OP_CODESEPARATOR weight by 520 (p2sh redeemScript size limit) 
> + 40 (stripped txinput size) + 8 (stripped txoutput size) + a few more 
> (overhead for varints) = 572ish bytes should be enough to completely 
> eliminate any vulnerability caused by OP_CODESEPARATOR within P2SH 
> transactions without the need to remove it ever.  I think it is worth 
> attempting to be a bit more clever than such a blunt rule, but it would be 
> much better than eliminating OP_CODESEPARATOR within P2SH entirely.
> 
> Remember that the goal isn't to eliminate OP_CODESEPARATOR per se; the goal 
> is to eliminate the vulnerability associated with it.
> 
>> On Mon, Mar 11, 2019 at 12:47 PM Dustin Dettmer via bitcoin-dev 
>>  wrote:
>> What about putting it in a deprecated state for some time. Adjust the 
>> transaction weight so using the op code is more expensive (10x, 20x?) and 
>> get the word out that it will be removed in the future.
>> 
>> You could even have nodes send a reject code with the message 
>> “OP_CODESEPARATOR is deprecated.”
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Signet

2019-03-09 Thread Matt Corallo via bitcoin-dev
To make testing easier, it may make sense to keep the existing block header 
format (and PoW) and instead apply the signature rules to some field in the 
coinbase transaction. This means SPV clients (assuming they only connect to 
honest/trusted nodes) work as-is.

A previous idea regarding reorgs (that I believe Greg came up with) is to allow 
multiple keys to sign blocks, with one signing no reorgs and one signing a 
reorg every few blocks, allowing users to choose the behavior they want.


> On Mar 8, 2019, at 00:54, Karl-Johan Alm via bitcoin-dev 
>  wrote:
> 
> Hello,
> 
> As some of you already know, I've been working on a network called "signet", 
> which is basically a complement to the already existing testnet, except it is 
> completely centralized, and blocks are signed by a specific key rather than 
> using proof of work.
> 
> Benefits of this:
> 
> 1. It is more predictable than testnet. Miners appear and disappear 
> regularly, causing irregular block generation.
> 
> 2. Since it is centrally controlled, it is easy to perform global testing, 
> such as reorgs (e.g. the network performs a 4 block reorg by request, or as 
> scheduled).
> 
> 3. It is more stable than testnet, which occasionally sees several thousand 
> block reorgs.
> 
> 4. It is trivial to spin up (and shut down) new signets to make public tests 
> where anyone can participate.
> 
> Anyone can create a signet at any time, simply by creating a key pair and 
> creating a challenge (scriptPubKey). The network can then be used globally by 
> anyone, assuming the creator sends some coins to the other participants.
> 
> Having a persistent signet would be beneficial in particular to services 
> which need a stable place to test features over an extended period of time. 
> My own company implements protocols on top of Bitcoin with sidechains. We 
> need multi-node test frameworks to behave in a predictable manner (unlike 
> testnet) and with the same standardness relay policy as mainnet.
> 
> Signets consist of 2 parameters: the challenge script (scriptPubKey) and the 
> solution length. (The latter is needed to retain fixed length block headers, 
> despite having an additional payload.)
> 
> I propose that a default persistent "signet1" is created, which can be 
> replaced in future versions e.g. if the coins are unwisely used as real 
> money, similarly to what happened to previous testnets. This signet is picked 
> by default if a user includes -signet without providing any of the parameters 
> mentioned above. The key holder would be someone sufficiently trusted in the 
> community, who would be willing to run the system (block generation code, 
> faucet, etc). It could be made a little more sturdy by using 1-of-N multisig 
> as the challenge, in case 1 <= x < N of the signers disappear. If people 
> oppose this, it can be skipped, but will mean people can't just jump onto 
> signet without first tracking down parameters from somewhere.
> 
> Implementation-wise, the code adds an std::map with block hash to block 
> signature. This is serialized/deserialized as appropriate (Segwit witness 
> style), which means block headers in p2p messages are (80 + solution_length) 
> bytes. Block header non-contextual check goes from checking if block header 
> hash < target to checking if the payload is a valid signature for the block 
> header hash instead.
> 
> Single commit with code (will split into commits and make PR later, but just 
> to give an idea what it looks like): 
> https://github.com/kallewoof/bitcoin/pull/4
> 
> I don't think this PR is overly intrusive, and I'm hoping to be able to get 
> signet code into Bitcoin Core eventually, and am equally hopeful that devs of 
> other (wallet etc) implementations will consider supporting it.
> 
> Feedback requested on this.
> 
> Attribution: parts of the signet code (in particular signblock and 
> getnewblockhex) were adapted from the ElementsProject/elements repository. 
> When PR is split into atomic commits, I will put appropriate attribution 
> there.
> 
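
A simplified sketch of the validity-rule change described above; 
verify_signature and the challenge handling are stand-ins for the real script 
evaluation:

    def check_header_pow(header_hash: bytes, target: int) -> bool:
        # mainnet/testnet: the 80-byte header must hash below the target
        return int.from_bytes(header_hash, "little") < target

    def check_header_signet(header_hash: bytes, solution: bytes,
                            challenge: bytes) -> bool:
        # signet: the fixed-length solution carried after the 80-byte header
        # must satisfy the network's challenge scriptPubKey for the header
        # hash; verify_signature is a stand-in for real script execution
        return verify_signature(challenge, solution, message=header_hash)
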
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-09 Thread Matt Corallo via bitcoin-dev
Aside from the complexity issues here, note that for a user to be adversely 
affected, they probably have to have pre-signed lock-timed transactions. 
Otherwise, in the crazy case that such a user exists, they should have no 
problem claiming the funds before activation of a soft-fork (and just switching 
to the segwit equivalent, or some other equivalent scheme). Thus, adding 
additional restrictions like tx size limits would equally break such 
transactions.

> On Mar 8, 2019, at 14:12, Sjors Provoost  wrote:
> 
> 
>> (1) It has been well documented again and again that there is desire to 
>> remove OP_CODESEPARATOR, (2) it is well-documented OP_CODESEPARATOR in 
>> non-segwit scripts represents a rather significant vulnerability in Bitcoin 
>> today, and (3) lots of effort has gone into attempting to find practical 
>> use-cases for OP_CODESEPARATOR's specific construction, with no successes as 
>> of yet. I strongly, strongly disagree that the highly-unlikely remote 
>> possibility that someone created something before which could be rendered 
>> unspendable is sufficient reason to not fix a vulnerability in Bitcoin today.
>> 
>>> I suggest an alternative whereby the execution of OP_CODESEPARATOR 
>>> increases the transactions weight suitably as to temper the vulnerability 
>>> caused by it.  Alternatively there could be some sort of limit (maybe 1) on 
>>> the maximum number of OP_CODESEPARATORs allowed to be executed per script, 
>>> but that would require an argument as to why exceeding that limit isn't 
>>> reasonable.
>> 
>> You could equally argue, however, that any such limit could render some 
>> moderately-large transaction unspendable, so I'm somewhat skeptical of this 
>> argument. Note that OP_CODESEPARATOR is non-standard, so getting them mined 
>> is rather difficult in any case.
> 
> Although I'm not a fan of extra complexity, just to explore these two ideas a 
> bit further.
> 
> What if such a transaction:
> 
> 1. must have one input; and
> 2. must be smaller than 400 vbytes; and
> 3. must spend from a UTXO older than fork activation
> 
> Adding such a contextual check seems rather painful, perhaps comparable to 
> nLockTime. Anything more specific than the above, e.g. counting the number of 
> OP_CODESEPARATOR calls, seems like guesswork.
> 
> Transaction weight currently doesn't consider OP codes, it only considers if 
> bytes are part of the witness. Changing that to something more akin to 
> Ethereum's gas pricing sounds too complicated to even consider.
> 
> 
> I would also like to believe that whoever went through the trouble of using 
> OP_CODESEPARATOR reads this list.
> 
> Sjors
> 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-08 Thread Matt Corallo via bitcoin-dev

Replies inline.

On 3/8/19 3:57 PM, Russell O'Connor wrote:
> On Thu, Mar 7, 2019 at 2:50 PM Matt Corallo wrote:
> [...]
>
> It's very easy to construct a practical script using OP_CODESEPARATOR.
>
> IF <2> <alice_pubkey> <bob_pubkey> <2> CHECKMULTISIGVERIFY ELSE
> CODESEPARATOR <alice_pubkey> CHECKSIGVERIFY ENDIF


> Now when someone hands Alice, the CFO of XYZ corp., some transaction, 
> she has the option of either signing it unilaterally herself, or 
> creating a partial signature such that the transaction additionally 
> needs Bob, the CEO's signature as well, and Alice's choice is committed 
> to the blockchain for auditing purposes later.
>
> Now, there are many things you might object to in this scheme, but my 
> point is that (A) regardless of what you think about this scheme, it, or 
> similar schemes, may have been devised by users, and (B) users may have 
> already committed funds to such schemes, and due to P2SH you cannot know 
> that this is not the case.
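
(The mechanism doing the work in that script: a legacy/P2SH signature hash 
covers the scriptCode, which starts just past the most recently executed 
OP_CODESEPARATOR. The same key therefore signs a different message in each 
branch, which is what commits Alice's choice. A rough sketch:)

    OP_CODESEPARATOR = 0xab

    def script_code(script: bytes, last_executed_codesep=None) -> bytes:
        # the portion of the script a legacy signature hash commits to
        if last_executed_codesep is None:
            return script                            # IF branch: whole script
        return script[last_executed_codesep + 1:]    # ELSE branch: tail only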


The common way to set that up is to have a separate key, but, ok, fair 
enough. That said, the argument that "it may be hidden by P2SH!" isn't 
sufficient here. It has to *both* be hidden by P2SH and have never been 
spent from (either on mainnet or testnet) or be lock-timed a year in the 
future. I'm seriously skeptical that someone is using a highly esoteric 
scheme and has just been pouring money into it without ever having 
tested it or having withdrawn any money from it whatsoever. This is just 
a weird argument.



> Please don't strawman my position.  I am not suggesting we don't fix a 
> vulnerability in Bitcoin.  I am suggesting we find another way.  One 
> that limits the risk of destroying other people's money.


> Here is a more concrete proposal:  No matter how bad OP_CODESEPARATOR 
> is, it cannot be worse than instead including another input that spends 
> another identically sized UTXO.  So how about we soft-fork in a rule 
> that says that an input's weight is increased by an amount equal to the 
> number of OP_CODESEPARATORs executed times the sum of the weight of the 
> UTXO being spent and 40 bytes, the weight of a stripped input. The risk 
> of destroying other people's money is limited and AFAIU it would 
> completely address the vulnerabilities caused by OP_CODESEPARATOR.
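
(As arithmetic, the quoted proposal amounts to the sketch below; the constants 
simply mirror the text above:)

    def adjusted_input_weight(base_weight, codeseps_executed, spent_utxo_weight):
        # each executed OP_CODESEPARATOR costs as much as spending one more
        # identically sized UTXO: its weight plus a 40-byte stripped input
        return base_weight + codeseps_executed * (spent_utxo_weight + 40)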


You're already arguing that someone has such an esoteric use of script, 
suggesting they aren't *also* creating pre-signed, long-locktimed 
transactions with many inputs isn't much of a further stretch 
(especially since this may result in the fee being non-standardly low if 
you artificially increase its weight).


Note that "just limit number of OP_CODESEPARATOR calls" results in a ton 
of complexity and reduces the simple analysis that fees (almost) have 
today vs just removing it allows us to also remove a ton of code.


Further note that if you don't remove it, getting the efficiency wins 
right is even harder: instead of being able to cache sighashes 
you now have to (at a minimum) wipe the cache between each 
OP_CODESEPARATOR call, which results in a ton of additional 
implementation complexity.




>>> I suggest an alternative whereby the execution of OP_CODESEPARATOR
>>> increases the transaction's weight suitably as to temper the
>>> vulnerability caused by it.  Alternatively there could be some sort of
>>> limit (maybe 1) on the maximum number of OP_CODESEPARATORs allowed to
>>> be executed per script, but that would require an argument as to why
>>> exceeding that limit isn't reasonable.

>> You could equally argue, however, that any such limit could render some
>> moderately-large transaction unspendable, so I'm somewhat skeptical of
>> this argument. Note that OP_CODESEPARATOR is non-standard, so getting
>> them mined is rather difficult in any case.


> I already know of people whose funds are tied up due to other changes 
> to Bitcoin Core's default relay policy.  Non-standardness is not an 
> excuse to take other people's tied-up funds and destroy them permanently.


Huh?! The whole point of non-standardness in this context is to (a) make 
soft-forking something out safer by derisking miners not upgrading right 
away and (b) signal something that may be a candidate for soft-forking 
out so that we get feedback. Who is having things disabled on them without 
bothering to *tell* people that their use-case is being hurt?!

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

