Re: [bitcoin-dev] BIP process friction

2024-01-18 Thread David A. Harding via bitcoin-dev

On 2024-01-16 16:42, Anthony Towns via bitcoin-dev wrote:

I'm switching inquisition over to having a dedicated "IANA"-ish
thing that's independent of BIP process nonsense. It's at:

 * https://github.com/bitcoin-inquisition/binana

If people want to use it for bitcoin-related proposals that don't have
anything to do with inquisition, that's fine


Thank you for doing this!

Question: is there a recommended way to produce a shorter identifier for 
inline use in reading material?  For example, for proposal 
BIN-2024-0001-000, I'm thinking:


- BIN24-1 (references whatever the current version of the proposal is)
- BIN24-1.0 (references revision 0)

I think that doesn't look too bad even if there are over 100 proposals a 
year, with some of them getting into over a hundred revisions:


- BIN24-123
- BIN24-123.123

Rationale:

- Using "BIN" for both full-length and shortened versions makes it 
explicit which document set we're talking about


- Eliminating the first dash losslessly saves space and reduces visual 
clutter


- Shortening a four-digit year to two digits works for the next 75 
years.  Adding more digits as necessary after that won't produce any 
ambiguity


- Although I'd like to eliminate the second dash, and it wouldn't 
introduce any ambiguity in machine parsing for the next 175 years, I 
think it would lead to people interpreting numbers incorrectly.  E.g., 
"BIN241" would be read "BIN two-hundred fourty-one" instead of a more 
desirable "BIN twenty-four dash one"


- Eliminating prefix zeroes in the proposal and revision numbers 
losslessly saves space and reduces visual clutter


- A decimal point between the proposal number and revision number 
creates less visual clutter than the third dash and still conveys the 
intended meaning


- Overall, for the typical case I'd expect---BIN proposals numbered 1-99 
with no mention of revision---this produces strings only one or two 
characters longer than a typical modern BIP number in shortened format, 
e.g. BIN24-12 versus BIP123.
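For illustration, here's how the shortening rule could be expressed as code.  This is just a sketch of the convention proposed above; the function name and regex are mine, not part of BINANA:

```python
import re

def shorten(full_id: str, include_revision: bool = True) -> str:
    """Convert a full identifier like BIN-2024-0001-000 to a short
    inline form like BIN24-1.0 (or BIN24-1 without the revision)."""
    m = re.fullmatch(r"BIN-(\d{4})-(\d+)-(\d+)", full_id)
    if not m:
        raise ValueError(f"not a full identifier: {full_id}")
    year, proposal, revision = m.groups()
    # Two-digit year, no dash after "BIN", no prefix zeroes
    short = f"BIN{year[2:]}-{int(proposal)}"
    if include_revision:
        # Decimal point rather than a third dash before the revision
        short += f".{int(revision)}"
    return short
```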


Thoughts?

-Dave
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Lamport scheme (not signature) to economize on L1

2024-01-01 Thread David A. Harding via bitcoin-dev

On 2024-01-01 00:17, yuri...@pm.me wrote:

I'm afraid I didn't understand your objection. [...]
I suspect my proposed scheme can be
implemented with available, existing Bitcoin infrastructure.


Is a soft fork or a hard fork required?  If so, the proposal will need a 
lot of peer review and user acceptance.


What are the benefits of your proposal?  As I understand it, the benefit 
is smaller transactions.  How much smaller will they be in terms of 
vbytes?  For example, a transaction today with one input performing a 
taproot keypath spend and one taproot-paying output is 111 vbytes[1].  
What will be the total onchain size of an equivalent one-input, 
one-output transaction using your scheme?


My comment (not objection) is that modest decreases in onchain data size 
may not provide a significant enough benefit to attract reviewers and 
interested users, especially if a proposal is complicated by a 
dependencies on many things that have not previously been included in 
Bitcoin (such as new hash functions).


If I'm deeply misunderstanding your proposal and my questions don't make 
sense, I'd very much appreciate a clarification about what your proposal 
does.


Thanks,

-Dave

[1] https://bitcoinops.org/en/tools/calc-size/


Re: [bitcoin-dev] Lamport scheme (not signature) to economize on L1

2023-12-31 Thread David A. Harding via bitcoin-dev

Hi Yuri,

I think it's worth noting that for transactions with an equal number of 
P2TR keypath spends (inputs) and P2TR outputs, the amount of space used 
in a transaction by the serialization of the signature itself (16 vbytes 
per input) ranges from a bit over 14% of transaction size (1-input, 
1-output) to a bit less than 16% (10,000-in, 10,000-out; a ~1 MvB tx).  
I infer that to mean that the absolute best a signature replacement 
scheme can do is free up 16% of block space.
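For anyone who wants to check the arithmetic, here's a rough sketch using standard size estimates.  The constants and helper names are mine, and varint growth for large input/output counts is ignored (it changes the result only slightly):

```python
def p2tr_tx_vbytes(n_inputs: int, n_outputs: int) -> float:
    """Approximate vsize of a tx with P2TR keypath-spend inputs and
    P2TR outputs, using standard per-component size estimates."""
    overhead = 10.5          # version, locktime, counts, segwit marker/flag
    per_input = 41 + 16.5    # outpoint, sequence, empty scriptSig; witness
    per_output = 43          # amount plus P2TR scriptPubKey
    return overhead + n_inputs * per_input + n_outputs * per_output

def sig_fraction(n: int) -> float:
    """Fraction of an n-in, n-out tx taken up by the 64-byte
    (16 vbyte) schnorr signatures alone."""
    return 16 * n / p2tr_tx_vbytes(n, n)
```

With these numbers, sig_fraction(1) is a bit over 14% and sig_fraction(10_000) is a bit under 16%, matching the figures above.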


An extra 16% of block space is significant, but the advantage of that 
savings needs to be compared to the challenge of creating a highly peer 
reviewed implementation of the new signature scheme and then convincing 
a very large number of Bitcoin users to accept it.  A soft fork proposal 
that introduces new-to-Bitcoin cryptography (such as a different hash 
function) will likely need to be studied for a prolonged period by many 
experts before Bitcoin users become confident enough in it to trust 
their bitcoins to it.  A hard fork proposal has the same challenges as a 
soft fork, plus likely a large delay before it can go into effect, and 
it also needs to be weighed against the much easier process it would be 
for experts and users to review a hard fork that increased block 
capacity by 16% directly.


I haven't fully studied your proposal (as I understand you're working on 
an improved version), but I wanted to put my gut feeling about it into 
words to offer feedback (hopefully of the constructive kind): I think 
the savings in block space might not be worth the cost in expert review 
and user consensus building.


That said, I love innovative ideas about Bitcoin and this is one I will 
remember.  If you continue working on it, I very much look forward to 
seeing what you come up with.  If you don't continue working on it, I 
believe you're likely to think of something else that will be just as 
exciting, if not more so.


Thanks for innovating!,

-Dave


Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-29 Thread David A. Harding via bitcoin-dev

On 2023-12-29 15:17, Nagaev Boris wrote:

Feerate-Dependent Timelocks do create incentives to accept out-of-band
fees to decrease in-band fees and speed up mining of transactions
using FDT! Miners can make a 5% discount on fees paid out-of-band and
many people will use it. Observed fees decrease and FDT transactions
mature faster. It is beneficial for both parties involved: senders of
transactions save 5% on fees, miners get FDT transactions mined faster
and get more profits (for the sake of example more than 5%).


Hi Nagaev,

That's an interesting idea, but I don't think that it works due to the 
free rider problem: miner Alice offers a 5% discount on fees paid out of 
band now in the hopes of collecting more than 5% in extra fees later due 
to increased urgency from users that depended on FDTs.  However, 
sometimes the person who actually collects extra fees is miner Bob who 
never offered a 5% discount.  By not offering a discount, Bob earns more 
money on average per block than Alice (all other things being equal), 
eventually forcing her to stop offering the discount or to leave the 
market.
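A toy expected-revenue model of the free rider problem (the helper and all numbers are illustrative only):

```python
def per_block_revenue(share: float, base_fees: float,
                      urgency_premium: float, gives_discount: bool,
                      discount: float = 0.05) -> float:
    """Expected per-block fee revenue for a miner with the given
    hashrate share.  Any urgency premium created by FDT deadlines is
    collected by whichever miner finds the block, i.e. in proportion
    to hashrate, regardless of who offered the discount that caused
    it, so the discounting miner bears the cost alone."""
    fees = base_fees * (1 - discount) if gives_discount else base_fees
    return share * (fees + urgency_premium)
```

For any positive discount, the non-discounting miner's expectation is strictly higher, all other things being equal.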


Additionally, if nearly everyone was paying discounted fees out of band, 
participants in contract protocols using FDTs would know to use 
proportionally higher FDT amounts (e.g. 5% over their actual desired 
fee), negating the benefit to miners of offering discounted fees.


-Dave


Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-29 Thread David A. Harding via bitcoin-dev

On 2023-12-28 08:42, Eric Voskuil via bitcoin-dev wrote:

Assuming a “sufficient fraction” of
one of several economically rational behaviors is a design flaw.


The amount of effort it takes a user to pay additional miners out of
band is likely to increase much faster than probability that the user's
payment will confirm on time.  For example, offering payment to the set
of miners that controls 90% of hash rate will result in confirmation
within 6 blocks 99.% of the time, meaning it's probably not worth
putting any effort into offering payment to the other 10% of miners.  If
out of band payments become a significant portion of mining revenue via
a mechanism that results in small miners making significantly less
revenue than large miners, there will be an incentive to centralize
mining even more than it is today.  The more centralized mining is, the
less resistant Bitcoin is to transaction censorship.
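The 6-block figure above comes from treating each block as an independent draw weighted by hashrate; sketched:

```python
def confirm_probability(offered_share: float, blocks: int) -> float:
    """Probability that at least one of the next `blocks` blocks is
    mined by the set of miners the user offered payment to, assuming
    each block is found by that set independently with probability
    equal to its hashrate share."""
    return 1 - (1 - offered_share) ** blocks
```

confirm_probability(0.9, 6) is 1 - 0.1**6, i.e. 99.9999%.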

We can't prevent people from paying out of band, but we can ensure that
the easiest and most effective way to pay for a transaction is through
in-band fees and transactions that are relayed to every miner who is
interested in them.  If we fail at that, I think Bitcoin losing its
censorship resistance will be inevitable.  LN, coinpools, and channel
factories all strongly depend on Bitcoin transactions not being
censored, so I don't think any security is lost by redesigning them to
additionally depend on reasonably accurate in-band fee statistics.

Miners mining their own transactions, accepting the occasional
out-of-band fee, or having varying local transaction selection policies
are situations that are easily addressed by the user of fee-dependent
timelocks choosing a long window and setting the dependent feerate well
below the maximum feerate they are willing to pay.

-Dave


Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-29 Thread David A. Harding via bitcoin-dev

On 2023-12-28 08:06, jlspc via bitcoin-dev wrote:

On Friday, December 22nd, 2023 at 8:36 AM, Nagaev Boris
 wrote:

To validate a transaction with FDT [...]
a light client would have to determine the median fee
rate of the recent blocks. To do that without involving trust, it has
to download the blocks. What do you think about including median
feerate as a required OP_RETURN output in coinbase transaction?


Yes, I think that's a great idea!


I think this points to a small challenge of implementing this soft fork 
for pruned full nodes.  Let's say a fee-dependent timelock (FDT) soft 
fork goes into effect at time/block _t_.  Both before and for a while 
after _t_, Alice is running an older pruned full node that did not 
contain any FDT-aware code, so it prunes blocks after _t_ without 
storing any median feerate information about them (not even commitments 
in the coinbase transaction).  Later, well after _t_, Alice upgrades her 
node to one that is aware of FDTs.  Unfortunately, as a pruned node, it 
doesn't have earlier blocks, so it can't validate FDTs without 
downloading those earlier blocks.


I think the simplest solution would be for a recently-upgraded node to 
begin collecting median feerates for new blocks going forward and to 
only enforce FDTs for which it has the data.  That would mean anyone 
depending on FDTs should be a little more careful about them near 
activation time, as even some node versions that nominally enforced FDT 
consensus rules might not actually be enforcing them yet.
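A sketch of that "enforce only what you have data for" fallback.  The function, its signature, and the simplified FDT condition (every block in the window at or below the required feerate) are my illustration, not the proposal's actual rule:

```python
from typing import Dict, Optional

def fdt_check(median_feerates: Dict[int, float], start: int, end: int,
              required_rate: float) -> Optional[bool]:
    """Evaluate a fee-dependent timelock over blocks [start, end].
    Returns None when the node lacks median-feerate data for part of
    the window (e.g. a pruned node upgraded after activation), in
    which case the fallback is simply to not enforce the FDT."""
    rates = [median_feerates.get(h) for h in range(start, end + 1)]
    if any(r is None for r in rates):
        return None          # data missing: skip enforcement
    return all(r <= required_rate for r in rates)
```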


Of course, if the above solution isn't satisfactory, upgraded pruned 
nodes could simply redownload old blocks or, with extensions to the P2P 
protocol, just the relevant parts of them (i.e., coinbase transactions 
or, with a soft fork, even just commitments made in coinbase 
transactions[1]).


-Dave

[1] An idea discussed for the segwit soft fork was requiring the witness 
merkle root OP_RETURN to be the final output of the coinbase transaction 
so that all chunks of the coinbase transaction before it could be 
"compressed" into a SHA midstate and then the midstate could be extended 
with the bytes of the OP_RETURN commitment to produce the coinbase 
transaction's txid, which could then be connected to the block header 
using the standard Bitcoin-style merkle inclusion proof.  This would 
allow trailing commitments in even a very large coinbase transaction to 
be communicated in just a few hundred bytes (not including the size of 
the commitments themselves).  This idea was left out of segwit because 
at least one contemporary model of ASIC miner had a hardware-enforced 
requirement to put a mining reward payout in the final output.
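The midstate trick can be sketched with hashlib's copyable hash objects standing in for a raw midstate (a real protocol would transmit a 64-byte-aligned midstate plus length; this just shows the shape of the computation):

```python
import hashlib

def txid_from_midstate(midstate, trailing: bytes) -> bytes:
    """Given an incremental SHA256 state covering everything in the
    coinbase transaction before the trailing commitment output, finish
    computing the txid by hashing only the trailing bytes.  txid is
    double-SHA256 of the serialized transaction."""
    h = midstate.copy()
    h.update(trailing)
    return hashlib.sha256(h.digest()).digest()

# Usage: the prover sends only the midstate and the trailing
# commitment bytes, not the (possibly very large) coinbase prefix.
prefix = b"\x01" * 1000            # stand-in for the coinbase prefix
midstate = hashlib.sha256(prefix)  # computed once by the prover
txid = txid_from_midstate(midstate, b"commitment-output-bytes")
```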



Re: [bitcoin-dev] Future of the bitcoin-dev mailing list

2023-11-07 Thread David A. Harding via bitcoin-dev

On 2023-11-07 05:37, Bryan Bishop via bitcoin-dev wrote:

What about [...] delvingbitcoin.org?


I'm only willing to consider discussion groups that provide good 
archives, so I think it's worth noting that James O'Beirne has written 
code[1] and is currently maintaining a git repo[2] with a backup of 
Delving Bitcoin discussion.  See his post[3] for additional details.


In addition to providing an archive, I currently find it to be nice way 
to quickly skim all posts made to the forum since I last checked (plus I 
see edits)[4]:


$ cd delving-bitcoin-archive/
$ git pull
$ git log -p archive/rendered-topics/

I think some technical discussions were already migrating to Delving 
Bitcoin before the shutdown notice and I expect more discussions to move 
there in the future even if the current mailing list is relocated to a 
new platform.  Knowing that discussions are archived in a way that I can 
easily replicate was key to me feeling comfortable putting significant 
time into reading and writing posts on Delving Bitcoin, so I wanted to 
share that information here.


-Dave

[1] https://github.com/jamesob/discourse-archive
[2] https://github.com/jamesob/delving-bitcoin-archive
[3] https://delvingbitcoin.org/t/public-archive-for-delving-bitcoin/87/6
[4] Plus every commit makes me laugh.  James O'Beirne's commit robot is 
called "jamesobot"



Re: [bitcoin-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-23 Thread David A. Harding via bitcoin-dev

On 2023-10-21 18:49, Nadav Ivgi via bitcoin-dev wrote:

Could this be addressed with an OP_CSV_ALLINPUTS, a covenant opcode
that requires _all_ inputs to have a matching nSequence, and using `1
OP_CSV_ALLINPUTS` in the HTLC preimage branch?

This would prevent using unconfirmed outputs in the
HTLC-preimage-spending transaction entirely, which IIUC should protect
it against the replacement cycling attack.


I don't think that addresses the underlying problem.  In Riard's 
description, a replacement cycle looks like this:


- Bob broadcasts an HTLC-timeout  (input A, input B for fees, output X)
- Mallory replaces the HTLC-timeout with an HTLC-preimage (input A, 
input C for fees, output Y)
- Mallory replaces the transaction that created input C, removing the 
HTLC-preimage from the mempool


However, an alternative approach is:

- (Same) Bob broadcasts an HTLC-timeout (input A, input B for fees, 
output X)
- (Same) Mallory replaces the HTLC-timeout with an HTLC-preimage (input 
A, input C for fees, output Y)
- (Different) Mallory uses input C to replace the HTLC-preimage with a 
transaction that does not include input A, removing the preimage from 
the mempool


The original scenario requires input C to be from an unconfirmed 
transaction, so OP_CSV_ALLINPUTS works.  The alternative scenario works 
even if input C comes from a confirmed transaction, so OP_CSV_ALLINPUTS 
is ineffective.


-Dave


Re: [bitcoin-dev] OP_Expire and Coinbase-Like Behavior: Making HTLCs Safer by Letting Transactions Expire Safely

2023-10-21 Thread David A. Harding via bitcoin-dev

On 2023-10-20 14:09, Peter Todd via bitcoin-dev wrote:
The basic problem here is after the HTLC-timeout path becomes 
spendable, the
HTLC-preimage path remains spendable. That's bad, because in this case 
we want
spending the HTLC-preimage - if possible - to have an urgency attached 
to it to

ensure that it happens before the previous HTLC-timeout is mined.

So, why can't we make the HTLC-preimage path expire?


If the goal is to ensure the HTLC-preimage should be mined before an 
upstream HTLC-timeout becomes mineable, then I don't think a consensus 
change is required.  We can just make the HTLC-preimage claimable by 
anyone some time after the HTLC-timeout becomes mineable.


For example, imagine that Alice offers Bob an HTLC with a timeout at 
block t+200.  Bob offers Carol an HTLC with a timeout at block t+100.  
The Bob-Carol HTLC script looks like this:


If
  # Does someone have the preimage?
  Hash <digest> EqualVerify
  If
    # Carol has the preimage at any time
    <Carol's pubkey> CheckSig
  Else
    # Anyone else has the preimage after t+150
    <t+150> CLTV
  EndIf
Else
  # Bob is allowed a refund after t+100
  <Bob's pubkey> CheckSigVerify
  <t+100> CLTV
EndIf

In English:

- At any time, Carol can spend the output by releasing the preimage
- After t+100, Bob can spend the output
- After t+150, anyone with the preimage can spend the output
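The three branches as a small truth table, sketched (the helper name and strings are mine, just to make the timeline explicit):

```python
def allowed_spenders(height: int, t: int, has_preimage: bool) -> list:
    """Who can spend the Bob-Carol HTLC output at a given block height,
    per the three script branches above (t is the base block height)."""
    spenders = []
    if has_preimage:
        spenders.append("Carol")             # any time, with the preimage
        if height >= t + 150:
            spenders.append("anyone with the preimage")
    if height >= t + 100:
        spenders.append("Bob")               # refund path
    return spenders
```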



Let's consider this in the wider context of the forwarded payment 
Alice->Bob->Carol:


- If Carol attempts to spend the output by releasing the preimage but 
pays too low of a feerate to get it confirmed by block t+100, Bob can 
spend the output in block t+101.  He then has 99 blocks to settle 
(revoke) the Alice-Bob HTLC offchain.


- If Carol releases the preimage to the network in general but prevents 
Bob from using it (e.g. using a replacement cycling attack), anyone who 
saw the preimage can take Carol's output at t+150 and, by doing so, will 
put the preimage in the block chain where Bob will learn about it.  
He'll then have 49 blocks to settle (revoke) the Alice-Bob HTLC 
offchain.


- (All the normal cases when the HTLC is settled offchain, or where 
onchain operations occur in a timely manner)




I think that adequately satisfies the concern about the effect on LN 
from replacement cycling.  Looking at potential complications:


- If all miners acted together[1], they are incentivized to not mine 
Carol's preimage transaction before t+150 because its fees are less than 
the HTLC value they can receive at t+150.  I think this level of miner 
centralization would result in a general failure for LN given that 
miners could be any LN user's counterparty (or bribed by a user's 
counterparty).  E.g., stuff like this: 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017997.html


- To allow anyone with the preimage to spend the output after t+150, 
they need to know the script.  For taproot, that means the t+150 tapleaf 
script needs to follow a standard (e.g. a BOLT) and that any internal 
merkle nodes needed to connect it to the taproot commitment need to be 
shown in Carol's preimage transaction (or inferable from it or other 
data).


- Classic RBF pinning of the t+150 transaction to prevent it from 
confirming by block t+200 might be an issue.  E.g., including it in a 
400,000 weight low-feerate transaction.


- Full RBF might be required to ensure the t+150 transaction isn't sent 
with a low feerate and no opt-in signal.




Deployment considerations:

- No changes are required to full nodes (no consensus change required)

- No changes are required to mining Bitcoin nodes[2]

- At least one well-connected Bitcoin relay node will need to be updated 
to store preimages and related data, and to send the preimage claim 
transactions.  Data only needs to be kept for a rolling window of a few 
thousand blocks for the LN case, bounding storage requirements.  No 
changes are required to other relaying Bitcoin nodes


- LN nodes will need to update to new HTLC scripts, but this should be 
doable without closing/re-opening channels.  Both anchor and non-anchor 
channels can continue to be used




Compared to OP_EXPIRE:

- OP_EXPIRE requires consensus and policy changes; this does not

- OP_EXPIRE does not depend on special software; this depends on at 
least one person running special software




Although this proposal is an alternative to Peter's proposal and is 
primarily inspired by his idea, it's also a variation on a previous 
suggestion of mine: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002664.html


-Dave

[1] Perhaps under block censorship threat from a mining majority or a 
sub-majority performing selfish mining.


[2] Although miners may want to consider running code that allows them 
to rewrite any malleable transactions to pay themselves.



Re: [bitcoin-dev] Actuarial System To Reduce Interactivity In N-of-N (N > 2) Multiparticipant Offchain Mechanisms

2023-09-17 Thread David A. Harding via bitcoin-dev



On September 8, 2023 3:27:38 PM HST, ZmnSCPxj via bitcoin-dev 
 wrote:
>Now, suppose that participant A wants B to be assured that
>A will not double-spend the transaction.
>Then A solicits a single-spend signature from the actuary,
>getting a signature M:
>
>current state            +-----------------+
>-+-------------+         | (M||CSV) && A2  |
> |(M||CSV) && A| >  M,A  +-----------------+
> +-------------+         | (M||CSV) && B2  |
> |(M||CSV) && B|         +-----------------+
> +-------------+
> |(M||CSV) && C|
>-+-------------+
>
>The above is now a confirmed transaction.

Good morning, ZmnSCPxj.

What happens if A and M are both members of a group of thieves that control a 
moderate amount of hash rate?  Can A provide the "confirmed transaction" 
containing M's sign-only-once signature to B and then, sometime[1] before the 
CSV expiry, generate a block that contains A's and M's signature over a 
different transaction that does not pay B?  Either the same transaction or a 
different transaction in the block also spends M's fidelity bond to a new 
address exclusively controlled by M, preventing it from being spent by another 
party unless they reorg the block chain.

If the CSV is a significant amount of time in the future, as we would probably 
want it to be for efficiency, then the thieving group A and M are part of would 
not need to control a large amount of hash rate to have a high probability of 
being successful (and, if they were unsuccessful at the attempted theft, they 
might not even lose anything and their theft attempt would be invisible to 
anyone outside of their group).
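A rough estimate of how the CSV length trades off against the hash rate the thieves need, assuming each block is found by the group independently in proportion to its hash rate (the helper is mine, for illustration):

```python
def theft_success_probability(hashrate_share: float,
                              csv_blocks: int) -> float:
    """Chance that a group with the given hashrate share mines at
    least one block before the CSV timelock expires, letting it
    confirm the conflicting A+M transaction instead."""
    return 1 - (1 - hashrate_share) ** csv_blocks
```

For example, a group with just 5% of hash rate and a one-day (144-block) CSV would succeed over 99% of the time under this model.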

If A is able to double spend back to herself funds that were previously 
intended to B, and if cut through transactions were created where B allocated 
those same funds to C, I think that the double spend invalidates the 
cut-through even if APO is used, so I think the entire mechanism collapses into 
reputational trust in M similar to the historic GreenAddress.it co-signing 
mechanism.

Thanks,

-Dave

[1] Including in the past, via a Finney attack or an extended Finney attack 
supported by selfish mining.  


Re: [bitcoin-dev] BIP for Serverless Payjoin

2023-08-13 Thread David A. Harding via bitcoin-dev



On August 10, 2023 5:37:54 AM HST, AdamISZ via bitcoin-dev 
 wrote:
>Hi Dan,
>A couple more more thoughts:
>
>> Out of band, the receiver of the payment, shares a bitcoin URI with the 
>> sender including a pj= query parameter describing the relay 
>> subdirectory endpoint and psk= parameter with base64 encoded 
>> 256-bit secret key.
>
>You're sending the symmetric secret key out of band; but isn't this obscuring 
>the question of securely sharing the secret key? Did you consider DH-ing this 
>as other protocols do? At the very least I would claim that it's likely that 
>implementers might be sloppy here; at the most I would claim this is just 
>insecure full stop.

Hi Dan,

After reading Adam's comments above and re-reading your draft BIP where it says 
the secret key is also used as the session identifier and that outputs can be 
modified, I'm wondering about the security of posting payment URIs anywhere 
someone can see them.

For example, if Alice posts her BIP21 URI for Bob to pay where Eve can see it, 
such as in a shared chatroom or via email or any cleartext protocol that gets 
relayed, can Eve establish her own session to the relay and frontrun Alice on 
receiving Bob's PSBT, modify the returned PSBT to include her (Eve's) output, 
and submit it for Bob to sign and broadcast?

The way Bitcoin users currently use BIP21 URIs and QR-encoded BIP21 URIs, 
posting them where eavesdroppers can see them poses a privacy risk but not a 
risk of loss of funds, so many users don't treat them as especially hazardous 
material.  I don't think it would be practical to change that expectation, and 
I think a protocol where eavesdropping didn't create a risk of funds loss would 
be much better than one where that risk was created.

(Apologies to Adam if this is exactly what he was saying with more subtlety.)

-Dave


Re: [bitcoin-dev] Concrete MATT opcodes

2023-08-06 Thread David A. Harding via bitcoin-dev



On July 30, 2023 11:37:49 AM HST, Salvatore Ingala via bitcoin-dev
>I have put together a first complete proposal for the core opcodes of
>MATT [1][2].
>The changes make the opcode functionally complete, and the
>implementation is revised and improved.
> [...]
>[1] - https://merkle.fun/
>[2] -
>https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-November/021182.html

Hi Salvatore,

Where exactly is the proposal?  Merkle.fun links to a "WIP" comment that seems 
to specify OP_CHECKCONTRACTVERIFY but your text above says "core opcodes" 
(plural) so I feel like I'm missing something.  Also, it being "WIP" makes me 
wonder if that actually is the "complete proposal" I should be looking for.

When I read "complete proposal", I was expecting a draft BIP.

-Dave


Re: [bitcoin-dev] Standardisation of an unstructured taproot annex

2023-06-13 Thread David A. Harding via bitcoin-dev

On 2023-06-11 09:25, Joost Jager wrote:

Isn't it the case that that op-dropped partial signature for the
ephemeral key isn't committed to and thus can be modified by anyone
before it is mined


That's correct; I hadn't thought of that, sorry.


I am really looking for a bitcoin-native solution to leverage
bitcoin's robustness and security properties.


I understand.  I would briefly point out that there are other advantages
to not storing a signature for an ephemeral key in the annex.  For
example, if you want to generate multiple different potential spending
transactions, you need to store one signature for each potential
transaction.  The more data you store in the annex, the less scalable
the vault protocol becomes; by comparison, it's possible to cheaply
store a very large amount of data offchain with high robustness.

Also, depending on construction of the vault, a possible advantage of a
presigned vault (without using the annex) over a solution like OP_VAULT
is that all actions might be able to use keypath spends.  That would be
highly efficient, increasing the usability of vaults.  It would also be
more private, which may be important to certain classes of vault users.
Even if OP_VAULT was added to Bitcoin, it would be interesting to have
an alternative vault protocol that offered different tradeoffs.


That years-long timeline that you sketch for witness replacement (or
any other policy change I presume?) to become effective is perhaps
indicative of the need to have an alternative way to relay
transactions to miners besides the p2p network?


The speed depends on the policy change.  In this case, I think there's a
reasonable argument to be made that a mitigation for the problems of
annex relay should be widely deployed before we enable annex relay.

Bitcoin Core's policy is designed to both prevent the abuse of relay
node resources and also serve the transaction selection needs of miners.
Any alternative relay system will need to solve the same general
problems: how to prevent abuse of the relayers and help miners choose
the best transactions.  Ideas for alternative relay like those
previously proposed on this list[1] avoid certain problems but also
(AFAICT) create new problems.

To be specific towards this proposal, if an alternative relay network
naively implemented annex relay, any miners who used that network could
receive a coinjoin-style transaction with a large annex that
significantly reduced the transaction's feerate.  By comparison, any
miners who continued to only receive transactions from the P2P network
of Bitcoin Core (or similar) nodes would have received the transaction
without an annex at its original (higher) feerate, allowing them to
receive greater revenue if they mined it.  If, instead, the alternative
relay network implemented the witness replacement proposal you've linked
to, those miners could still receive up to 4.99% less revenue than
Bitcoin Core-based miners and the operators of the alternative relay
network might have had to pay extra costs for the replacement relays.
You can tweak the proposal to tweak those ratios, but I'm not sure
there's a case where an alternative relay network comes up as a clear
winner over the existing network for general purpose transactions.
Instead, like many things, it's a matter of tradeoffs.
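To illustrate the feerate dilution described above with a small helper (the function and numbers are mine, examples only):

```python
def feerate_with_annex(fee_sats: int, base_vbytes: float,
                       annex_wu: int) -> float:
    """Effective feerate (sat/vB) after a participant adds `annex_wu`
    weight units of annex data to their input: annex bytes are witness
    data, so they add annex_wu/4 vbytes without adding any fees."""
    return fee_sats / (base_vbytes + annex_wu / 4)
```

A 111-vbyte transaction paying 1110 sats drops from 10 sat/vB to 5 sat/vB if a participant attaches 444 weight units of annex data.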


I agree though that it would be ideal if there is a good solution that
doesn't require any protocol changes or upgrade path.


Apologies for the salt, but there is a good solution: don't use the
block chain to store backup data.

-Dave

[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-May/021700.html



Re: [bitcoin-dev] Standardisation of an unstructured taproot annex

2023-06-10 Thread David A. Harding via bitcoin-dev

On 2023-06-09 21:43, Joost Jager via bitcoin-dev wrote:

The most critical application in this category, for me, involves
time-locked vaults.
[...]
Backing up the ephemeral signatures of the pre-signed transactions on
the blockchain itself is an excellent way to ensure that the vault can
always be 'opened'. However, without the annex, this is not as safe as
it could be. Due to the described circular reference problem, the
vault creation and signature backup can't be executed in one atomic
operation.


Hi Joost,

For the purpose of experimenting with vaults, I don't think you need the
most efficient construction---instead, anything that works without too
much overhead is probably ok.  In that case, I don't think you need the
annex at all:

1. Alice can receive new payments to tr(<internal key>, raw(OP_DROP 
   <Alice's pubkey> OP_CHECKSIG))

2. Later, Alice creates tr(MuSig2(<Alice's pubkey>,
   <ephemeral pubkey>))

3. When paying the script in #2, Alice chooses the scriptpath spend from
   #1 and pushes a serialized partial signature for the ephemeral key
   from #2 onto the stack, where it's immediately dropped by the
   interpreter (but is permanently stored on the block chain).  She also
   attaches a regular signature for the OP_CHECKSIG opcode.

Alternatively, if Alice decides she doesn't want to pay into a vault,
she uses the keypath spend from #1 with no loss in efficiency.

The scriptpath solution requires some extra preparation on Alice's part
and costs about a dozen vbytes extra over using the annex, which feels
acceptable to me to avoid the problems identified with using the annex.

Even better, I think you can achieve nearly the same safety without
putting any data on the chain.  All you need is a widely used
decentralized protocol that allows anyone who can prove ownership of a
UTXO to store some data.  You can think of LN gossip as being a version
of this: anyone who proves ownership of a P2WSH 2-of-2 script is allowed
to store data in a certain format on every LN routing node.  Rusty
Russell's v2 gossip proposal makes this a bit more generic, but I think
you could make it even more generic by creating a simple server that
stores and forwards a single BIP322 signed message up to size x for any
entry in the current UTXO set, with periodic replacement of the signed
message allowed.  The signed data could be LN routing information or it
could be arbitrary data like a signature from an ephemeral key (or it
could even be a JPEG or other data irrelevant to processing payments).

Any full node (including pruned and utreexo nodes) can trustlessly
provide UTXO lookup for such a server and a decentralized network of
such servers could be used by a large number of protocols, encouraging
hundreds or thousands of servers to be operated---providing similar data
availability guarantees to committing data on the block chain, but
without the permanent footprint (i.e., once a UTXO is spent, the
associated data can be deleted).  Many vault designs already effectively
require watchtowers, so it'd be easy to make this simple server part of
the watchtower.


Regarding the potential payload extension attack, I believe that the
changes proposed in the [3] to allow tx replacement by smaller witness
would provide a viable solution?
[...]
[3] https://github.com/bitcoin/bitcoin/pull/24007


The two solutions identified above (OP_DROP and decentralized storage
for UTXO owners) can be implemented immediately.  By comparison, rolling
out relay of the annex and witness replacement may take months of review
and years for >90% deployment among nodes, would allow an attacker to
lower the feerate of coinjoin-style transactions by up to 4.99%, would
allow an attacker to waste 8 million bytes of bandwidth per relay node
for the same cost they'd have to pay today to waste 400 thousand
bytes, and might limit the flexibility and efficiency of future
consensus changes that want to use the annex.

-Dave


Re: [bitcoin-dev] Ark: An Alternative Privacy-preserving Second Layer Solution

2023-06-07 Thread David A. Harding via bitcoin-dev

On 2023-06-07 03:30, Burak Keceli wrote:

If the service provider double-spends a transaction that enforces a
one-time signature where Bob is the vendor, Bob can forge the service
provider’s signature from the 2-of-2 and can immediately claim his
previously-spent vTXO(s).


Hi Burak,

I'm confused.  Bob owns some bitcoins that are timelocked against
immediate withdrawal, but where he can spend immediately with the
cooperation of service provider Sally.  Bob transfers some bitcoins to
Sally contingent on her spending an equal amount of bitcoins (minus a
fee) to Carol.  You already have a mechanism to enforce this contingency
(tx outpoints), so if Carol doesn't receive the bitcoins from Sally,
then Sally also doesn't receive the bitcoins from Bob.  In other words,
you already have atomicity for a single transfer.
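
That outpoint contingency can be illustrated with a toy UTXO-set model
(simplified accounting with made-up names, not real validation):

```python
# Bob's payment to Sally spends a connector outpoint created by Sally's
# payment to Carol, so Sally cannot collect from Bob without Carol
# getting paid -- the two confirmations are atomic.
class Chain:
    def __init__(self, utxos):
        self.utxos = set(utxos)

    def confirm(self, tx):
        inputs, outputs = tx
        if not all(i in self.utxos for i in inputs):
            return False            # missing or already-spent outpoint
        self.utxos -= set(inputs)
        self.utxos |= set(outputs)
        return True

chain = Chain({"sally_funds", "bob_funds"})
sally_to_carol = (["sally_funds"], ["carol_funds", "connector"])
bob_to_sally = (["bob_funds", "connector"], ["sally_gets_paid"])

# without Sally's payment confirming, the connector doesn't exist:
assert not Chain({"sally_funds", "bob_funds"}).confirm(bob_to_sally)
# in order, both confirm:
assert chain.confirm(sally_to_carol) and chain.confirm(bob_to_sally)
```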

Are you describing the effect over multiple transfers?  For example, Bob
previously transferred bitcoins to Sally and she paid users X, Y, and Z
in transactions that are now confirmed onchain, although she hasn't yet
swept Bob's funds.  Now when Sally double spends the payment to Carol,
Bob can not only reclaim the funds he gave Sally to pay to Carol (which
was guaranteed by the atomicity), he can also reclaim the unswept funds
he gave Sally to pay X, Y, and Z.

If so, I don't think that works.  In a private protocol, Carol can't be
sure that Bob and Sally are separate individuals.  If they're the same
entity, then any forfeit that Sally needs to pay Bob is just an internal
transfer, not a penalty.

I'd appreciate any clarification you can offer.  Thanks!,

-Dave


Re: [bitcoin-dev] Standardisation of an unstructured taproot annex

2023-06-02 Thread David A. Harding via bitcoin-dev

On 2023-06-02 05:00, Joost Jager via bitcoin-dev wrote:

the benefits of making the annex available in a
non-structured form are both evident and immediate. By allowing
developers to utilize the taproot annex without delay, we can take
advantage of its features today,


Hi Joost,

Out of curiosity, what features and benefits are available today?  I 
know Greg Sanders wants to use annex data with LN-Symmetry[1], but 
that's dependent on a soft fork of SIGHASH_ANYPREVOUT.  I also heard you 
mention that it could allow putting arbitrary data into a witness 
without having to commit to that data beforehand, but that would 
increase the efficiency of witness stuffing like ordinal inscriptions by 
only 0.4% (~2 bytes saved per 520 bytes pushed) and it'd still be 
required to create an output in order to spend it.


Is there some other way to use the annex today that would be beneficial 
to users of Bitcoin?


-Dave

[1] 
https://github.com/lightning/bolts/compare/master...instagibbs:bolts:eltoo_draft#diff-156a655274046c49e6b1c2a22546ed66366d3b8d97b8e9b34b45fe5bd8800ae2R119



Re: [bitcoin-dev] Bitcoin Transaction Relay over Nostr

2023-05-27 Thread David A. Harding via bitcoin-dev

On 2023-05-22 21:19, Joost Jager via bitcoin-dev wrote:

A notable advantage of this approach is that it delegates the
responsibility of dealing with Denial-of-Service (DoS) threats to the
relays themselves. They could, for example, require a payment to
mitigate such concerns.


Hi Joost,

Thanks for working on this!  One quick thought I had was that a possibly
interesting avenue for exploration would be that, in addition to
relaying individual transactions or packages, it might be worth relaying
block templates and weak blocks as both of those provide inherent DoS
resistance and can offer useful features.

A block template is an ordered list of raw transactions that can all be
included in the next block (with some space reserved for a coinbase
transaction).  A full node can validate those transactions and calculate
how much fee they pay.  A Nostr relay can simply relay almost[1] any
template that pays more fees than the previous best template it saw for
the next block.  That can be more flexible than the current
implementation of submitblock with package relay which still enforces a
lot of the rules that help keep a regular relay node safe from DoS and
a miner node able to select mineable transactions quickly.
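
A minimal sketch of that relay rule (hypothetical logic, not an existing
implementation; fee totals are assumed to come from a local full node
that has validated every transaction in the template):

```python
# Forward a candidate block template only if its total fees beat the
# best template already seen for the same height.
best_template_fees = {}  # height -> highest total fees seen so far (sats)

def should_relay(height, total_fees):
    if total_fees > best_template_fees.get(height, 0):
        best_template_fees[height] = total_fees
        return True   # forward to subscribers
    return False      # not better than the best known template

assert should_relay(800_000, 15_000_000)       # first template always wins
assert not should_relay(800_000, 14_000_000)   # pays less: dropped
assert should_relay(800_000, 16_000_000)       # pays more: relayed
```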

A weak block is a block whose header doesn't quite hash to low enough of
a value to be included on the chain.  It still takes an extraordinary
amount of hashrate to produce, so it's inherently DoS resistant.  If
miners are producing blocks that include transactions not seen by typical
relay nodes, that can reduce the efficiency and effectiveness of BIP152
compact block relay, which hurts the profitability of miners of custom
blocks.  To compensate, miners could relay weak blocks through Nostr to
full nodes and other miners so that they could quickly relay and accept
complete blocks that later included the same custom transactions.  This
would also help fee estimation and provide valuable insights to those
trying to get their transactions included into the next block.

Regarding size, the block template and weak block could both be sent in
BIP152 compact block format as a diff against the expected contents of a
typical node, allowing Alice to send just a small amount of additional
data for relay over what she'd have to send anyway for each transaction
in a package.  (Although it's quite possible that BetterHash or Stratum
v2 have even better solutions, possibly already implemented.)
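
Some back-of-envelope arithmetic on the size difference (every number
here is an assumption for illustration: ~3,000 transactions per
template, 6-byte BIP152 short IDs, ~400 bytes per full transaction):

```python
# If the receiver already has most transactions in its mempool, a
# BIP152-style diff sends only short IDs instead of full transactions.
txs_per_template = 3_000
shortid_bytes = 6        # BIP152 short transaction ID size
avg_tx_bytes = 400

compact_size = txs_per_template * shortid_bytes   # ~18 kB per template
full_size = txs_per_template * avg_tx_bytes       # ~1.2 MB per template

assert compact_size == 18_000
assert full_size == 1_200_000
```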

If nothing else, I think Nostr could provide an interesting playground
for experimenting with various relay and mining ideas we've talked about
for years, so thanks again for working on this!

-Dave

[1] In addition to validating transactions, a relay would probably want
to reject templates that contained transactions that took
excessively long to validate (which could cause a block including
them to become stale) or that included features reserved for
upgrades (as a soft fork that happened before the relay's node was
upgraded might make that block invalid).


Re: [bitcoin-dev] Ark: An Alternative Privacy-preserving Second Layer Solution

2023-05-27 Thread David A. Harding via bitcoin-dev

Hi Burak,

Thanks for your response!  I found it very helpful.  I'm going to reply
to your email a bit out of order.


4. Alice places one input to her one-in, three-out transaction to
   supply funds to commitment output, connectors output, change
   output, and transaction fees.


You don't mention it in your reply, but was I correct in my earlier
email in assuming that Alice can claim any funds paid to a commitment
output after four weeks if its commitments haven't been published
onchain?  E.g., that in the best case this allows a ~50 vbyte commitment
output that pays an arbitrary number of users to be spent as a ~100
vbyte input (P2TR scriptpath for pk(A) && older(4 weeks))?


1. Mixing coins.
2. Paying lightning invoices
3. Making internal transfers


If commitment outputs can't normally be spent by Alice for four weeks,
then Alice needs to keep enough capital on hand to pay out all amounts
involved in the activities listed above.  I've seen many people make
this point, but I wanted to run some rough numbers to estimate the
extent of that capital load.

Let's say Alice has a million customers who each receive all of their
income and pay all of their expenses with her.  In my country, the
median income is a bit less than $36,000 USD, or about $3,000 a month.
I imagine spending is not evenly distributed over time, so let's say
Alice needs to hold 3x the average to be prepared for a busy period.
That implies Alice's capital requirements are about $9 billion USD (3 *
3000 * 1e6).

At a hypothetical risk-free interest rate of 1.5% annual, that's about
$135 that will need to be recovered from each user per year (9e9 * 0.015
/ 1e6).

Additionally, if we assume the cost of an onchain transaction is $100
and the service creates one transaction per five seconds, that's $630 in
fee costs that will need to be recovered from each user per year ((60 /
5) * 60 * 24 * 365 * 100 / 1e6).
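
The figures above can be reproduced with a few lines of arithmetic (the
inputs are exactly the assumptions already stated: one million users,
$3,000/month median spending, a 3x buffer, a 1.5% risk-free rate, $100
per onchain transaction, one transaction every five seconds):

```python
users = 1_000_000
monthly_spend_usd = 3_000
capital = 3 * monthly_spend_usd * users          # $9 billion
interest_per_user = capital * 0.015 / users      # $135 per user per year

txs_per_year = (60 // 5) * 60 * 24 * 365          # one tx per 5 seconds
onchain_per_user = txs_per_year * 100 / users     # ~$630 per user per year

assert capital == 9_000_000_000
assert interest_per_user == 135.0
assert int(onchain_per_user) == 630
```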

I'll come back to this financial analysis later.


If we want to enable Lightning-style instant settlement assurances for
the internal transfers, we need OP_XOR or OP_CAT on the base layer
[...] https://eprint.iacr.org/2017/394.pdf


What do you mean by "instant"?  Do you mean "settlement as soon as the
next onchain pool transaction is published"?  For example, within 5
seconds if the coinjoining completes on time?  That's significantly
slower than LN today, at least in the typical case for a well-connected
node.[1]

I think 5 seconds is fine for a lot of purposes (at both point-of-sale
terminals and on websites, I very often need to wait >5 seconds for a
credit card transaction to process), but I think it's worth noting the
speed difference in a technical discussion.

Additionally, I think the idea described significantly predates that
paper's publication, e.g.:

"Well while you can't prevent it you could render it insecure enabling
miners to take funds.  That could work via a one-show signature
[...]"[2]

A problem with the idea of using one-show signatures as double-spend
protection is that miner-claimable fidelity bonds don't work as well
against adversaries that are not just counterparties but also miners
themselves.  This same problem has been described for other ideas[3],
but to summarize:

Bob has something valuable.  Alice offers him the output of an
unconfirmed transaction in exchange for that thing.  She also provides a
bond that will pay its amount to any miner who can prove that Alice
double spent her input to the unconfirmed transaction.

If Alice is a miner, she can privately create candidate blocks that double
spend the payment to Bob and which also claim the bond.  If she fails to
find a PoW solution for those candidate blocks, she lets Bob have his
money.  If she does find a PoW solution, she publishes the block, taking
Bob's money, securing her bond, and also receiving all the regular block
rewards (sans the fees from whatever space she used for her
transaction).

I haven't exactly[4] seen this mentioned before, but I think it's
possible to weaken Alice's position by putting a timelock on the
spending of the bond, preventing it from being spent in the same block
as the double-spend.  For example, a one-block timelock (AKA: 1 CSV)
would mean that she would need to mine both the block containing her
unconfirmed transactions (to double spend them) and the next block (to
pay the fidelity bonds back to herself).

Ignoring fee-sniping (bond-sniping in this case), selfish mining, and
51% attacks, her chance of success at claiming the fidelity bond is
equal to her portion of the network hashrate, e.g. if she has 33%, she's
33% likely to succeed at double spending without paying a penalty.  The
value of the fidelity bond can be scaled to compensate for that, e.g. if
you're worried about Alice controlling up to 50% of hashrate, you make
the fidelity bond at least 2x the base amount (1 / 50%).  Let's again
assume that Alice has a million users making $3,000 USD of payments per
month (28 days), or about on average $75,000 per 

Re: [bitcoin-dev] Ark: An Alternative Privacy-preserving Second Layer Solution

2023-05-24 Thread David A. Harding via bitcoin-dev

Hi Burak,

Thanks for this really interesting protocol!  I tend to analyze
complicated ideas like this by writing about them in my own words, so
I've pasted my summary of your idea to the end of this email in case
it's useful, either to other people or to you in helping understand my
one concern.

My concern is the same one I think Olaoluwa Osuntokun mentioned on
Twitter[1] and (less clear to me) might be related to ZmnSCPxj's
concern[2]:

It seems to me that receiving a payment on the protocol, including
conditional payments using HTLC, PTLC, or Anchor-TLC, requires waiting
for the transaction containing that payment to confirm to a sufficient
depth (e.g., I'd wait 6 blocks for small payments and longer for huge
payments).  Am I missing something?

My summary of how I think that part of the protocol works is in the
sections labeled "Make an unconditioned payment" and "Make a conditional
payment" below.  In short, it's clear to me how the service provider and
the customer can make instant atomic swaps with each other---they can
either spend instantly cooperatively, or they have to wait for a
timeout.  But how can a receiver of funds be assured that they will
actually get those funds unless there's already a timelock and
cooperative spend path placed on those funds?

-Dave

Rough initial summary of Ark protocol:

Alice runs an Ark service provider.  Every 5 seconds, she broadcasts a
new unconfirmed onchain transaction that pays three outputs (the
three Cs):

1. *Change Output:* money not used for the other two Cs that gets sent
   back to the transaction creator.

2. *Connector Output:* an output that will be used in a future
   transaction created by Alice as protection against double spends.

3. *Commitment Output:* a CTV-style commitment to a set of outputs that
   can be published later in a descendant transaction (alternatively,
   the commitment output may be spent unilaterally by Alice after 4
   weeks).

Bob wants to deposit 1 BTC with Alice.  He sends her an unsigned PSBT
with an input of his and a change output.  She updates the PSBT with a
commitment output that refunds Bob the 1 BTC and a connector output with
some minimum value.  They both sign the PSBT and it is broadcast.  We'll
ignore fees in our examples, both onchain transaction fees and fees paid
to Alice.

From here, there are several things that Bob can do:

- *Unilaterally withdraw:* Bob can spend from the commitment output to
  put his refund onchain.  The refund can only be spent after a 24-hour
  time delay, allowing Bob to optionally come to an agreement with Alice
  about how to spend the funds before Bob can spend them unilaterally
  (as we'll see in a moment).  For example, the script might be[3]:

pk(B) && (older(1 day) || pk(A))

- *Collaboratively withdraw:* as seen above, Bob has the ability to come
  to a trustless agreement with Alice about how to spend his funds.
  They can use that ability to allow Bob to trade his (unpublished) UTXO
  for a UTXO that Alice funds and broadcasts.  For example:

- Alice creates an unsigned PSBT that uses as one of its inputs the
  connector from Bob's deposit transaction.  This will ensure that
  any attempt by Bob to double-spend his deposit transaction will
  invalidate this withdrawal transaction, preventing Bob from being
  able to steal any of Alice's funds.

Also included in Alice's unsigned PSBT is another connector
output plus the output that pays Bob his 1 BTC.

- Bob receives Alice's unsigned PSBT and creates a separate PSBT
  that includes his unpublished UTXO as an input, giving its value
  to Alice in an output.  The PSBT also includes as an input the
  connector output from Alice's PSBT.  This will ensure that any
  attempt by Alice to double spend her transaction paying him will
  invalidate his transaction paying her.

- Bob signs his PSBT and gives it to Alice.  After verifying it,
  Alice signs her PSBT and broadcasts it.

- *Collaboratively trade commitments:* as mentioned, the commitment
  output that pays Bob may be claimed instead by Alice after 4 weeks, so
  Bob will need to either withdraw or obtain a new commitment within that
  time.  Trading his existing commitment for a new commitment looks
  similar to the collaborative withdrawal procedure but without the
  creation of an immediately-spendable onchain output:

- Alice creates an unsigned PSBT that uses as one of its inputs the
  connector from Bob's deposit transaction, again preventing double
  spending by Bob.  Alice also includes a new connector and a new
  commitment that again allows Bob to later claim 1 BTC.

- Bob receives Alice's PSBT and creates a PSBT transferring his
  existing commitment to her, with the new connector again being
  included as an input to ensure atomicity.

- Bob signs; Alice signs and broadcasts.

- *Make an unconditioned payment:* using the mechanisms described above,
 

Re: [bitcoin-dev] Bitcoin Core maintainers and communication on merge decisions

2023-05-07 Thread David A. Harding via bitcoin-dev

On 2023-05-06 21:03, Michael Folkson via bitcoin-dev wrote:

Essentially my concern is going forward current maintainers will
decide which proposed new maintainers to add and which to block.


This is how a large percentage of organizations are run.  The current 
members of a board or other governance group choose who will become a 
new board member.


One alternative to self-perpetuating governance is membership voting, 
but building and maintaining democratic institutions is hard and not a 
good fit for many types of endeavors---the building of highly technical 
software being one of those cases IMO.


I think the questions we want to ask are whether the current set of 
maintainers is capable of moving Bitcoin Core in the direction we want 
and what we can do about it if we conclude that they are ill-suited (or 
malicious).  For the first question, I think that's something everyone 
needs to answer for themselves, as we may each have different visions 
for the future of the project.  That said, I note that several 
initiatives championed by the current maintainers in the IRC meeting you 
mention received overwhelmingly positive support from a significant 
number of current contributors, which seems like a healthy sign to me.


For the second question, I think AJ Towns already answered that quite 
well (though he was talking about a different project): 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-April/021578.html


Finally, I don't think this matter warranted a post to this mailing 
list.  Discussion about internal project decisions, such as who should 
have merge access and what maintainers should communicate in PRs, belong 
in communication channels dedicated to that project.


-Dave


Re: [bitcoin-dev] RGB protocol announcement

2023-04-19 Thread David A. Harding via bitcoin-dev

On 2023-04-18 13:16, Dr Maxim Orlovsky wrote:

1. Assume we have some BTC lifted to RGB, which we will name BTC*.
   (let’s leave the question on how to do that aside; it can be
   discussed separately).


Hi Maxim,

Ok, I think I understand you, but I'd like to try rephrasing what you
wrote in a very brief format to see if you agree that it's correct and
in the hopes that it might help other Bitcoin/LN developers understand.

- Xavier and Yasmin create an RGB contract that says any BTC deposited
  into multi(2,x,y) can be used as BTC\*

- Bob acquires some of this BTC\*

- Bob offers his BTC\* to anyone who can provide x for (4 == 2 * x)

- Alice knows x = 2

- Alice asks Xavier and Yasmin to sign an onchain transaction
  withdrawing Bob's BTC\*. She provides them proof that Bob offered his
  BTC\* and that she knew the answer.  They both sign the transaction.


In short, I think this capability of RGB allows easily creating
user-defined sidechains based on arbitrary scripts.  This is similar to
what Elements allowed last I looked at it, although RGB makes the
process of creating new sidechains much smoother, reduces global state,
and allows sidechain tokens (including tokens like lifted BTC) to be
used with LN without sidechain-specific programming.  That seems like a
significant advance to me.

What it doesn't provide is trustless contracting beyond the capabilities
of Bitcoin script.  To be fair, when I looked at your documentation
again just now, I don't see it promising enhanced **trustless**
contracting---I see it only promising enhanced contracting, which I (and
perhaps others) seem to have interpreted as also being trustless.

I hope I've understood you correctly.  Regardless, thank you for your
many detailed answers to my questions!

-Dave


Re: [bitcoin-dev] Proposal to Remove BIP35 P2P 'mempool' Message

2023-04-18 Thread David A. Harding via bitcoin-dev

When serving trusted clients one alternative might be to use the
`savemempool` RPC, which can then be loaded directly (in whole) by the
client.


It was common in the past for lightweight clients to load a BIP37 filter
and then send a `getdata` for requesting `mempool`.  In that case, the
node would filter the mempool transactions and only send transactions
matching the filter to the client (plus false positives, which the
client could choose to keep very low).
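
The false-positive trade-off follows the standard Bloom filter estimate
(idealized hashing; the parameter values below are made up, since BIP37
lets the client choose the filter size and hash count when loading a
filter):

```python
import math

# Probability that a non-matching transaction still matches the filter.
def false_positive_rate(m_bits, k_hashes, n_elements):
    return (1 - math.exp(-k_hashes * n_elements / m_bits)) ** k_hashes

# e.g. a 20,000-bit filter with 10 hash functions covering 100 elements
rate = false_positive_rate(20_000, 10, 100)
assert rate < 1e-10   # "very low", at the cost of a larger filter
```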

The above approach minimized the amount of data that needed to be
transferred, which can be very important for lite clients on metered or
bandwidth-limited connections---especially considering that lite clients
on poor connections (e.g. mobile) might get disconnected frequently and
so need to re-request the filtered mempool every time they reconnect to
acquire any new unconfirmed transactions that arrived while they were
disconnected.

By comparison, during a period of backlog (the natural state, we hope),
the mempool contents in the `savemempool` format are about 300 MB.  I
think that's a bit much to potentially be sending to lite clients just
so they can learn about any unconfirmed transactions which arrived since
they last connected.

Although I understand and support the desire to remove problematic
public interfaces like BIP37 and BIP35, I think we should also be aiming
to build interfaces which make it easier for people to use third-party
wallets with their own trusted nodes.  Right now, it's possible to use[*]
`getheaders`, BIP157/8, and `getdata(block)` with your own node to learn
about all confirmed transactions affecting your wallet.  It's also
possible now to use BIP37 and BIP35 to get unconfirmed transactions in
a bandwidth-efficient manner, if your connection is allowlisted.

I would personally like to see lite clients that use a trusted node
receive a replacement for BIP35/7 support before those protocols are
removed.  (Of course, I'd also like to see support for BIP324 and for
something like countersign so that authenticated and encrypted
connections from a lite client to a trusted node are easy to setup.)

Thanks,

-Dave

[*]: Requires an authenticated connection to be secure (and should
 ideally be encrypted).


Re: [bitcoin-dev] RGB protocol announcement

2023-04-15 Thread David A. Harding via bitcoin-dev

Hi Dr Orlovsky,

Thank you for writing about your interesting project.  Some replies
inline below:

On 2023-04-10 12:09, Dr Maxim Orlovsky via bitcoin-dev wrote:
RGB v0.10 can be downloaded and installed as described on 


website, which also contains a number of user and developer guidelines.
RGB source code can be found on 


FYI: the RGB-WG organization page links to a repository whose latest
release is 0.9 and whose latest commit is titled, "Release v.0.9.1", see
https://github.com/RGB-WG/rgb-node/


My goal with RGB was not just to enable assets on Lightning, but that
of a much larger scope: to build a programmability layer for Bitcoin
and Lightning, which may unlock other cases than just tokens - DAOs,
decentralized identities and other things that bitcoin itself was 
lacking.


Are there any documentation or discussion archives that address the
problem of non-publishable conditional statements seemingly being
insecure in multiparty protocols, as previously described on this
list[1] by Ruben
Somsen?  To give my own example of the problem:

- Bob doesn't believe that there's a number which can be multiplied by 2
  to produce 4.  He's willing to pay a bounty for proof that he's wrong
  but Bitcoin does not currently provide a multiplication opcode, so he
  can't simply pay a script that says: "2 OP_MUL 4 OP_EQUAL"

- Bob hears that RGB has turing-complete scripting, so he buys some
  random tokens that have an RGB contract which allows him to encumber
  them by any AlumVM script.  He then creates a Bitcoin transaction
  signed SIGHASH_NONE|SH_ANYONECANPAY that will allow anyone knowing the
  solution to (x * 2 == 4) to spend his RGB-based tokens.  He publishes
  a PSBT for the transaction along with the RGB data needed to claim the
  tokens.

- Anyone on the network can now claim the BTC without knowing the
  solution, destroying the RGB-based tokens.

- If, instead, Bob hears that Mallory knows the solution, he could sign
  a PSBT with the default SH_ALL to her, but then Mallory could take the
  BTC without solving the problem, again destroying the RGB-based
  tokens.

- Or, in another case, Bob hears that Alice knows the solution, but he
  doesn't want to risk his tokens being destroyed, so he refuses to sign
  a transaction paying Alice until she provides him the answer.  When
  Alice does provide him the answer, and he realizes it's so simple, he
  changes his mind about paying her and doesn't sign his transaction to
  her.  She has no recourse.

It seems to me, based on my understanding of Somsen's original insight,
that client-side validation by itself cannot enforce conditions in a
trustless multiparty setting.

I think that implies that it's only possible to enforce conditions in a
consensus system (or in a trust-dependent system), which would have
significant implications for the future direction of your work, as you
wrote in your email:

We're also working on the design of a layer 1 which will be perfect for
the client-side-validated applications (“how to design a blockchain
today if we knew about client-side-validation/single-use-seals”). This
should be very compact (order of one signature per block) ultra-scalable
(theoretically unlimited no of tx in a block) chain which can run
systems like RGB - with Bitcoin UTXO set migrated into RGB [...]


* * *

Looking at other parts of your email:


Nevertheless, in 2021 we were able to present both RGB powered with a
Turing-complete virtual machine (AluVM) [2] and RGB had became
operational on Lightning Network [3] using the LNP Node - a complete
rust re-implementation of the Lightning protocol made by me at the
Association [4].


Could you clarify the status of these implementations?  While trying to
learn about RGB, I noticed that you don't have much completed
documentation.  Previous reviewers also mentioned this and I saw that
you suggested them to read the code or view your videos.

When reading your code for your LN implementation (LNP), I noticed it
seemed to be missing a lot of things present in other LN implementations
I regularly review.  For example, I can't find where it supports
creating or parsing onions, which seems to be a fundamental requirement
for using LN.  In trying to figure out how it works, I also noticed that
I couldn't find either unit tests or integration tests---indeed several
of your applications seem to almost entirely lack the string "test".
For example, here are LNP-node and RGB-node compared to the four LN
implementations I regularly monitor:

/tmp/rgb-node$ git grep -i '\<test' | wc -l
7
/tmp/lnp-node$ git grep -i '\<test' | wc -l
4

~/repos/rust-lightning$ git grep -i '\<test' | wc -l
2008
~/repos/cln$ git grep -i '\<test' | wc -l
1459
~/repos/lnd$ git grep -i '\<test' | wc -l
3547
~/repos/eclair$ git grep -i '\<test' | wc -l
2576

I realize those are all projects by larger teams than that which works
on RGB, but a difference of three orders of magnitude is very surprising
to me.  Do 

Re: [bitcoin-dev] BIP proposal: Fee-redistribution contracts

2023-03-01 Thread David A. Harding via bitcoin-dev

On 2023-02-27 03:32, Rastislav Budinsky via bitcoin-dev wrote:

When a miner mines a block he takes all the fees currently. However
with the proposed solution he takes only fraction M and remaining
fraction C is sent to one of more contracts. One contract at its
simplest collects fees from the miner and at the same time
redistributes it back to the miner.


Hi Rastislav,

I think you've incorrectly made the assumption that the only way a miner 
can profit from confirming a transaction is by collecting its 
transaction fees.  Miners can (and many have) accept payment through 
alternative means, which the Bitcoin technical community often calls 
"out-of-band fees".[1]  For example, some miners have provided a 
"transaction accelerator" service that accepts fiat-denominated credit 
cards to increase their prioritization of certain transactions and I'm 
personally aware of a large web wallet provider that would occasionally 
pay miners out of band to confirm hundreds or thousands of transactions 
rather than fix its broken fee estimation.


Out-of-band fees aren't frequently used in Bitcoin today because they 
have no advantage over correctly estimated in-band fees, and good fee 
estimation is very accessible to modern wallets.  However, if the 
consensus rules are changed to require each miner pay a percentage of 
its in-band fees to future miners, then there would be a strong 
incentive for them to prefer out-of-band fees that weren't subject to 
this redistribution scheme.


I think I may have seen a variation on the scheme you propose play out in 
real life.  Here's how it works where I live: the government imposes 
taxes on goods, services, and income.  Ostensibly, it redistributes the 
collected funds back to citizens in the future by providing government 
services.  When I go to pay someone who trusts my discretion, they often 
offer me a discounted rate if I pay in a way that isn't reported to the 
government (e.g., I pay with cash); even with the discount provided to 
me, they get to keep more of their income than if they had reported the 
transaction to the government.


In the case of a government, tax evasion can be reduced by the 
deployment of investigators and enforcers.  In Bitcoin, we have no 
control over activity that happens outside of the protocol and so even a 
modest incentive to pay fees out of band might quickly lead to almost 
all fees being paid out of band.  This prevents the effective 
redistribution of fees as in your proposal.  Additionally, previous 
discussions on this mailing list about paying out-of-band fees have 
highlighted that larger miners have an advantage over smaller miners in 
collecting miner-specific fee payments, undermining the essential 
decentralization of Bitcoin's transaction confirmation mechanism (more so 
than it is already weakened by fundamental economies of scale in 
mining).


In short, I think serious consideration of your proposal can only 
proceed if it adequately addresses the problem of out-of-band fees.


That said, thank you and your co-authors for putting serious thought 
into Bitcoin's long-term economic incentives.


-Dave

[1] https://bitcoinsearch.xyz/?q=out%20of%20band%20fees=n_50_n
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Codex32

2023-02-19 Thread David A. Harding via bitcoin-dev

On 2023-02-16 03:49, Andrew Poelstra via bitcoin-dev wrote:

the draft lists several benefits over SLIP-0039.


The only benefit over SLIP39 that I see explicitly mentioned in the
draft BIP is "simple enough for hand computation".  In the FAQ[1] on the
project's website, I see some additional reasons:

| This scheme is essentially the same as SLIP39, with the following
| differences:
|
| - The checksum is longer, slightly stronger, and designed to be
|   computable by hand.
|
| - Our encoding is more compact, giving us room for a bit of more
|   metadata, which is also designed to be readable by hand.
|
| - Unlike SLIP39, we do not support passphrases or hardening of any
|   form.
|
| - Unlike SLIP39, we have no hardware wallet support. But we hope that
|   will change!

From having perused the extended documentation myself, I think I would
personally note the following differences.

- Alphabet: Codex32 uses the bech32 alphabet rather than SLIP39's
  alphabet consisting of English words.  The benefit to human-language
  words is easier memorization for those proficient in the particular
  language (in this case, SLIP39 only allows the use of English).  A
  disadvantage, IMO, is that it encourages the practice of memorization
  (which does have a few advantages but also a lot of drawbacks).

  Interestingly, Codex32 addresses what I think is the main problem of
  memorization: difficult-to-prove successful recollection.  Someone who
  wants to reliably keep seed-related material only in their head
  needs to practice recalling it on a regular basis, but for BIP39,
  SLIP39, Aezeed, etc... there's no way for them to confirm they
  successfully recalled it short of going through the entire recovery
  process; they probably just judge how confident they feel about the
  recollection and assume that feeling like they recalled it correctly
  is the same thing as recalling it correctly.

  Codex32 allows the individual to periodically perform their
  recollection on paper in a private room without electronics and use
  nothing but a pen and some lookup tables (or a paper device) to
  verify that they recalled the string correctly (and its checksum can
  help with correcting up to several errors, although you might need a
  computer for error location and correction assistance).

- Hierarchy: Codex32 does not natively provide support for nested shares,
  whereas SLIP39 does.  E.g., in SLIP39, you can require 2-of-3 for
  {me, family, friends} where me is 2-of-3 {fire_safe, bank_safe,
  buried_in_woods}, family is 1-of-3 {alice, bob, carol}, and friends
  are 2-of-5 {d, e, f, g, h}.  I assume you can do the same with Codex32
  by using the share for one level as the secret for the next level,
  although this is not described in the protocol.

- Versioning: Codex32's metadata can store version information for
  wallets that use implicit BIP32 paths (e.g. BIP44/49/84/86), although
  this would cut into the space available for users to set their own
  metadata and it is not specified in the draft BIP.  SLIP39 also
  doesn't specify anything about implicit path versioning and, AFAICT,
  doesn't have any room to store such metadata without reducing seed
  entropy.

- Plausible deniability dummy wallets: Codex32 doesn't support this;
  SLIP39 does.  Much has been written by other people about whether
  dummy wallets are a good idea or not, with strong opinions on both
  sides, so maybe we can just leave it at that.

---

When I first saw the post about this, it was unclear to me that it was a
serious project, but I've become increasingly interested as I researched
it.  I'm not personally that interested in generating entropy from dice
or encoding shares by hand---it's already imperative that I acquire a
trustworthy computer and load it with trustworthy software in order to
use my seed securely, so I might as well have it generate my seeds and
my recovery codes for me.

What really did catch my attention, but which was kind of buried in the
project documentation, is the ability to verify the integrity of each
share independently without using a computer.  For example, if I store a
share with some relative who lives thousands of kilometers away, I'll be
able to take that share out of its tamper-evident bag on my annual
holiday visit, verify that I can still read it accurately by validating
its checksum, and put it into a new bag for another year.  For this
procedure, I don't need to bring copies of any of my other shares,
allowing them (and my seed) to stay safe.

---

I do have one question after watching an excellent video[2] about the
motivation for this system.  In the video, one of the threat models
described is a disarrangement of the words in a metal backup system.
The implication seems to be that this would be an accidental
disarrangement, which obviously the Codex32 checksum would catch during
periodic offline verification.  But what about deliberate modification
of a recovery code?  For example, Bob doesn't 

Re: [bitcoin-dev] Reference example bech32m address for future segwit versions

2023-01-31 Thread David A. Harding via bitcoin-dev

On 2023-01-31 04:30, Greg Sanders wrote:

Hi David,

From practical experience, I think you'll find that most exchanges
will not enable sends to future segwit versions,
as from a risk perspective it's likely a mistake to send funds there.


Hi Greg!,

I thought the best practice[1] was that wallets would spend to the 
output indicated by any valid bech32m address.  You seem to implying 
that the best practice is the opposite: that wallets should only send to 
outputs they know can be secured (i.e., which are not currently 
anyone-can-spend).  The more restrictive approach seems kind of sad to 
me since any problem which can result in a user accidentally withdrawing 
to a future segwit version could even more easily result in them 
withdrawing to a witness program for which there is no solution (i.e., 
no key or script is known to spend).


If it is a best practice, then I think there's a benefit to being able 
to test it even when other people's proprietary software is involved.  A 
wallet or service likely to follow that best practice may be more likely 
to follow other best practices which cannot be as easily tested for.  
But, if it's going to be tested, I want the testing to use the address 
least likely to cause problems for protocol developers in the future.  
Do you (and others on this list) have any reason to believe OP_16 
OP_PUSH2  would be a problematic script, or can you think of a 
better script?


Thanks!,

-Dave

[1] BIP350, emphasis in original: "[...] we emphatically recommend [...] 
ensuring that your implementation supports sending to v1 **and higher 
versions.**"



[bitcoin-dev] Reference example bech32m address for future segwit versions

2023-01-30 Thread David A. Harding via bitcoin-dev

Hi y'all!,

One of the benefits proposed for bech32 (and, by extension, bech32m) is
that spender wallets shouldn't need to be upgraded to pay segwit outputs
defined in future soft forks.  For example, Bitcoin Core today can pay a
bech32m address for a segwit v2 output, even though no meaning has been
assigned to output scripts matching a segwit v2 template.

However, testing this behavior in production[1] can create an annoyance
for developers of future soft forks.  They will need to deal with any
existing outputs paid to the templates used in that proposed soft fork.
See, for example, some discussion by developer 0xB10C about payments to
segwit v1 addresses before activation of the taproot soft fork:
https://b10c.me/blog/007-spending-p2tr-pre-activation/

I was wondering if it would be useful to have a canonical examples of
future segwit addresses that are designed to be very unlikely to
interfere with future soft forks but which would still reasonably
exercise wallets supporting bech32m.  I think of this as the rough
equivalent of the RFC2606 domain "example.com" which has been reserved
for examples in documentation.

Specifically, I'm thinking of the following addresses, one each for
mainnet and testnet:

- HRP: bc for mainnet; tb for testnet
- Witness version: 16 (the last segwit version)
- Witness program: 0x.  Two bytes is the minimum allowed by BIP141, but
  it's far too small to make any sort of secure commitment, so I'm hoping
  it won't conflict with any future use

I think we should try to start with just one reserved address per
network, but if that isn't enough, I think we could allow any two-byte
witness program with witness version 16.
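As a sketch of how such a reserved address would be constructed, here is the standard bech32m encoding procedure from BIP173/BIP350 applied to witness version 16 with a two-byte program (the function names are mine, and the resulting string is illustrative only, not a normative reserved address):

```python
# Minimal bech32m encoder per BIP173/BIP350; constants are from the BIPs.
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7"
BECH32M_CONST = 0x2bc830a3

def bech32_polymod(values):
    GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        top = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((top >> i) & 1) else 0
    return chk

def bech32_hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def bech32m_create_checksum(hrp, data):
    polymod = bech32_polymod(bech32_hrp_expand(hrp) + data + [0] * 6)
    polymod ^= BECH32M_CONST
    return [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]

def convertbits(data, frombits, tobits):
    # Regroup bits, padding the final group with zeroes (encoding direction).
    acc, bits, ret = 0, 0, []
    maxv = (1 << tobits) - 1
    for b in data:
        acc = (acc << frombits) | b
        bits += frombits
        while bits >= tobits:
            bits -= tobits
            ret.append((acc >> bits) & maxv)
    if bits:
        ret.append((acc << (tobits - bits)) & maxv)
    return ret

def encode_segwit(hrp, witver, program):
    data = [witver] + convertbits(program, 8, 5)
    checksum = bech32m_create_checksum(hrp, data)
    return hrp + "1" + "".join(CHARSET[d] for d in data + checksum)

# Witness version 16, two-byte program: address begins "bc1s"
# (charset index 16 is 's').
addr = encode_segwit("bc", 16, [0x00, 0x00])
```

The testnet variant is the same call with `hrp="tb"`.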

Outputs paid to reserved addresses will still be anyone-can-spend, so
there's no change required to Bitcoin Core or other software and those
outputs can still be cleaned out of the UTXO set.  Additionally, if we
ever *really* need that address space for a soft fork, it will be
available.

Are there any objections to this idea, or suggestions for a better way
to go about it?

Thanks!,

-Dave

[1] Testing in production should be avoided because it uses block space
that could otherwise be used by actual value transfers.  Also, it costs
money and pollutes the UTXO set (at least temporarily).  However, when
testing whether proprietary third-party software, such as an exchange,
supports payments to future segwit versions, sometimes the only
convenient method is to actually pay the address for a future segwit
version.  Additionally, my specific use case is just to write 
documentation
about bech32m---but I worry that people will pay my example of a future 
segwit

version address.


Re: [bitcoin-dev] Why Full-RBF Makes DoS Attacks on Multiparty Protocols Significantly More Expensive

2023-01-10 Thread David A. Harding via bitcoin-dev

On 2023-01-10 00:06, Peter Todd wrote:
Remember, we'd like decentralized coinjoin implementations like
Joinmarket to work. How does a decentralized coinjoin implement
"conflict monitoring"?


1. Run a relay node with a conflict-detection patch.  Stock Bitcoin Core
   with -debug=mempoolrej will tell you when it rejects a transaction
   for conflicting with a transaction already in the mempool, e.g.:

   2022-11-01T02:53:17Z 867b85d68d7a7244c1d65c4797006b56973110ac243ab5ee15a8c4d220060c58 from peer=58 was not accepted: txn-mempool-conflict


   I think it would be easy to extend this facility to list the inputs
   which conflicted.  So if Alice sees a conflict created by Mallory,
   she can create a new coinjoin transaction without Mallory.  This
   method has the advantage of being fast and attributing fault,
   although it does require Alice's node be online at the time Mallory's
   conflict is propagated.

2. Simply assume a conflict exists for otherwise unexplainable failures.
   For example, if Alice sees several new blocks whose bottom feerates
   are well below the feerates of an unconfirmed coinjoin transaction
   that Alice helped create and broadcast, she can assume it's a
   conflict that is preventing confirmation of the coinjoin.
   She can find an entirely different set of collaborators and create a
   non-conflicting transaction without ever needing to know which inputs
   from the original transaction conflicted.  This method has the
   disadvantage of being slow (on the order of hours) and not
   attributing fault, although it doesn't require Alice have any
   information beyond copies of recent blocks.
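Method 1 could be sketched as a small scraper of Bitcoin Core's debug log (assuming the log line format shown in the example above; the helper names are mine):

```python
import re

# Match -debug=mempoolrej rejection lines of the form:
#   <timestamp> <txid> from peer=<n> was not accepted: txn-mempool-conflict
REJECT_RE = re.compile(
    r'^(?P<ts>\S+) (?P<txid>[0-9a-f]{64}) from peer=(?P<peer>\d+) '
    r'was not accepted: txn-mempool-conflict$'
)

def conflicting_txids(log_lines):
    """Return the txids of transactions rejected for mempool conflicts."""
    return [m.group('txid') for line in log_lines
            if (m := REJECT_RE.match(line.strip()))]

line = ("2022-11-01T02:53:17Z "
        "867b85d68d7a7244c1d65c4797006b56973110ac243ab5ee15a8c4d220060c58 "
        "from peer=58 was not accepted: txn-mempool-conflict")
conflicting_txids([line])
```

With the extension suggested above (listing the conflicting inputs), the same scraper could also attribute which participant's input caused the conflict.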

I didn't list these methods or others before because the specific method
used to detect conflicts doesn't matter to the realization that software
which
uses conflict detection and evasion to defeat the $17.00 attack also
defeats the $0.05 attack without any need for full-RBF.

-Dave


Re: [bitcoin-dev] Why Full-RBF Makes DoS Attacks on Multiparty Protocols Significantly More Expensive

2023-01-10 Thread David A. Harding via bitcoin-dev

On 2023-01-09 22:47, Peter Todd wrote:
How do you propose that the participants learn about the double-spend?
Without knowing that it happened, they can't respond as you suggested.


I can think of various ways---many of them probably the same ideas that
would occur to you.  More concise than listing them is to just assume
they exist and realize that any protocol software which wants to defeat
the $17.00 pinning attack needs to implement some sort of conflict
monitoring system---but by using that monitoring system to defeat the
$17.00 pinning attack, the software also defeats the $0.05 individual
conflicting input attack without any need for full-RBF.

Full-RBF provides no benefits here except those which are already
provided by other necessary tools.

Thanks,

-Dave


Re: [bitcoin-dev] Why Full-RBF Makes DoS Attacks on Multiparty Protocols Significantly More Expensive

2023-01-09 Thread David A. Harding via bitcoin-dev

On 2023-01-09 12:18, Peter Todd via bitcoin-dev wrote:

[The quote:]

"Does fullrbf offer any benefits other than breaking zeroconf 
business

 practices?"

...has caused a lot of confusion by implying that there were no 
benefits. [...]


tl;dr: without full-rbf people can intentionally and unintentionally
DoS attack multi-party protocols by double-spending their inputs with
low-fee txs, holding up progress until that low-fee tx gets mined.


Hi Peter,

I'm confused.  Isn't this an easily solvable issue without full-RBF?
Let's say Alice, Bob, Carol, and Mallory create a coinjoin transaction.
Mallory either intentionally or unintentionally creates a conflicting
transaction that does not opt-in to RBF.

You seem to be proposing that the other participants force the coinjoin
to complete by having the coinjoin transaction replace Mallory's
conflicting transaction, which requires a full-RBF world.

But isn't it also possible in a non-full-RBF world for Alice, Bob, and
Carol to simply create a new coinjoin transaction which does not include
any of Mallory's inputs so it doesn't conflict with Mallory's
transaction?  That way their second coinjoin transaction can confirm
independently of Mallory's transaction.

Likewise, if Alice and Mallory attempt an LN dual funding and Mallory
creates a conflict, Alice can just create an alternative dual funding
with Bob rather than try to use full-RBF to force Mallory's earlier dual
funding to confirm.


## Transaction Pinning

Exploiting either rule is expensive.


I think this transaction pinning attack against coinjoins and dual
fundings is also solved in a non-full-RBF world by the honest
participants just creating a non-conflicting transaction.

That said, if I'm missing something and these attacks do actually apply,
then it might be worth putting price figures on the attack in terms most
people will understand.  The conflicting inputs attack you described in
the beginning as being solved by full-RBF costs about $0.05 USD at
$17,000/BTC.  The transaction pinning attack, which you imply is unsolved 
by full-RBF, costs about $17.00.  If both attacks apply, any protocol which 
is vulnerable to a $17.00 attack still seems highly vulnerable to me, so
it doesn't feel like a stretch to say that full-RBF lacks significant
benefits for those protocols.

Thanks,

-Dave


Re: [bitcoin-dev] Using Full-RBF to fix BIP-125 Rule #3 Pinning with nLockTime

2022-11-10 Thread David A. Harding via bitcoin-dev

On 2022-11-07 11:17, Peter Todd via bitcoin-dev wrote:
We can ensure with high probability that the transaction can be
cancelled/mined at some point after N blocks by pre-signing a
transaction, with nLockTime set sufficiently far into the future,
spending one or more inputs of the transaction with a sufficiently high
fee that it would replace transaction(s) attempting to exploit Rule #3
pinning (note how the package limits in Bitcoin Core help here).


This implies a floor on the funds involved in a contract.  For example, 
if the pinning transaction is 100,000 vbytes at a feerate of 1 sat/vb, 
the minimum contract amount must be a bit over 100,000 sats (about $17 
USD at current prices).  However, participants in a contract not meant 
to settle immediately probably need to assume the worst case future 
pinning, for example where transactions paying even 100 sat/vb won't be 
mined promptly; in which case the minimum contract amount becomes 
something like $1,700 USD.
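The arithmetic behind that floor can be checked with a short sketch (the $17,000/BTC price, 100,000 vbyte pinning transaction, and feerates are the assumptions from the text; the helper name is mine):

```python
SATS_PER_BTC = 100_000_000

def contract_floor_usd(pin_vbytes, pin_feerate_sat_vb, usd_per_btc=17_000):
    # To replace a pinning transaction, the replacement must pay at least
    # the pin's absolute fee (BIP125 rule #3), so that fee is a floor on
    # the funds that must be at stake in the contract.
    pin_fee_sats = pin_vbytes * pin_feerate_sat_vb
    return pin_fee_sats * usd_per_btc / SATS_PER_BTC

contract_floor_usd(100_000, 1)    # 17.0  -> "a bit over 100,000 sats (about $17)"
contract_floor_usd(100_000, 100)  # 1700.0 -> worst case "something like $1,700"
```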


That seems sub-optimal to me.

-Dave


Re: [bitcoin-dev] Merkleize All The Things

2022-11-09 Thread David A. Harding via bitcoin-dev

On 2022-11-07 23:17, Salvatore Ingala via bitcoin-dev wrote:

Hi list,


Hi Salvatore!,


I have been working on some notes to describe an approach that uses
covenants in order to enable general smart contracts in bitcoin. You
can find them here:

https://merkle.fun


I haven't yet been able to understand everything in your post, but I'm 
wondering if you can describe how your proposal significantly differs in 
application from [1]?  E.g., you write:



1. Alice posts the statement “f(x) = y”.
2. After a challenge period, if no challenge occurs, Alice is free to 
continue and unlock the funds; the statement is true.
3. At any time before the challenge period expires, Bob can start a 
challenge: “actually, f(x) = z”.


That looks to me very similar to Gregory Maxwell's script from[1] 
(comments and variable name changes mine):


# Offchain, Alice posts the statement f(x) = y
# Offchain, Bob provides Ex, an encrypted form of x that can be proven
# in zero knowledge to satisfy both f(x) = y and sha256(x) = Y

OP_SHA256 <Y> OP_EQUAL
OP_IF
  # Bob provided the preimage for Y, that preimage being the solution,
  # so he can spend the funds now
  <bob_pubkey>
OP_ELSE
  # The challenge period ended, so Alice can reclaim her funds
  <timeout> OP_CHECKLOCKTIMEVERIFY OP_DROP
  <alice_pubkey>
OP_ENDIF
OP_CHECKSIG

Thanks and apologies if I'm missing something obvious!,

-Dave

[1] 
https://bitcoincore.org/en/2016/02/26/zero-knowledge-contingent-payments-announcement/



Re: [bitcoin-dev] On mempool policy consistency

2022-10-29 Thread David A. Harding via bitcoin-dev

On 2022-10-26 13:52, Anthony Towns via bitcoin-dev wrote:

The cutoff for that is probably something like "do 30% of listening
nodes have a compatible policy"? If they do, then you'll have about a
95% chance of having at least one of your outbound peers accept your 
tx,

just by random chance.


I think this might be understating the problem.  A 95% chance of having
an outbound peer accept your tx conversely implies 1 in 20 payments
will fail to propagate on their initial broadcast.  That seems to me
like an
unacceptably high failure rate both for the UX of regular payments and
for the safety of time-sensitive transactions like onchain HTLC
resolutions.

Additionally, the less reliable propagation is, the more reliably spy
nodes can assume the first IP address they received a transaction from
is the creator of that transaction.

I think those two problems combine in an especially unfortunate way for
lightweight clients.  Lightweight clients wanting to find a peer who
supports a more permissive policy than most of the network and whose
client authors want to provide a good UX (or safety in the case of time
sensitive contract protocols like LN) will need to open large numbers of
connections, increasing their chance of connecting to a spy node which
will associate their IP address with their transaction, especially since
lightweight clients can't pretend to be relaying transactions for other
users.  Some napkin math: there are about 250,000 transactions a day; if
we round that up to 100 million a year and assume we only want one
transaction per year to fail to initially propagate on a network where
30% of nodes have adopted a more permissive policy, lightweight clients
will need to connect to over 50 randomly selected nodes.[1]  For a more
permissive policy only adopted by 10% of nodes, the lightweight client
needs to connect to almost 150 nodes.

This also implies that nodes adopting a more restrictive policy degrades
UX, safety, and privacy for users of transactions violating that policy.
For example, if 30% of nodes used Knots's -spkreuse configuration option
and about 50% of transactions reuse scriptPubKeys, then about 9
transactions a day wouldn't initially propagate (assuming 8 randomly
selected peers[2]) and lightweight clients who wanted 1-in-100-million
safety would need to connect to about 15 random nodes.

Towns's post to which I'm replying describes several alternative
approaches which mitigate the above problems, but he also documents that
they're not without tradeoffs.

-Dave

[1] (1-0.3)**50 * 100_000_000 =~ 1.8

[2] That assumes every transaction is sent to a different
randomly-selected set of peers, which isn't really the case.  However,
one day $GIANT_EXCHANGE could suddenly be unable to broadcast hundreds
or thousands of withdrawal transactions because all of its peers
implement a restrictive policy.


Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-23 Thread David A. Harding via bitcoin-dev

On 2022-10-19 04:29, Sergej Kotliar via bitcoin-dev wrote:

The biggest risk
in accepting bitcoin payments is in fact not zeroconf risk (it's
actually quite easily managed), it's FX risk as the merchant must
commit to a certain BTCUSD rate ahead of time for a purchase. Over
time some transactions lose money to FX and others earn money - that
evens out in the end. But if there is an _easily accessible in the
wallet_ feature to "cancel transaction" that means it will eventually
get systematically abused.


One way to address this risk is by turning it into a certainty.  If the 
price of BTC increases between when the invoice is generated and when a 
transaction is included in a block, give the customer a future purchase 
credit equal in value to the difference between the price they paid and 
the value of the purchase at confirmation time.  Now there's no benefit 
to the customer from canceling their transaction.
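As a toy illustration of the credit mechanism (the helper and its units are hypothetical, not from any real payment processor):

```python
def purchase_credit_usd(btc_paid, price_at_invoice, price_at_confirm):
    # If BTC appreciated between invoicing and confirmation, credit the
    # customer the gain; the merchant absorbs any downside, so cancelling
    # and re-paying never benefits the customer.
    gain = btc_paid * (price_at_confirm - price_at_invoice)
    return max(0.0, gain)

purchase_credit_usd(0.01, 17_000, 17_500)  # ~5.0 credit: price rose $500
purchase_credit_usd(0.01, 17_000, 16_500)  # 0.0: no credit when price falls
```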


Of course, this means that the merchant will always either break even or 
lose money on the exchange rate part of the transaction and will need to 
raise their prices accordingly.  I can see how that would be unappealing 
to implement, but it seems better to me to address the incentive 
incompatibility you've raised rather than hope no large miners ever 
start performing full RBF.  Plus, maybe the future credit feature is 
something customers would like: I know I've been sad several times when 
the exchange rate changed significantly while I was waiting for one of 
my transactions to confirm.


The above mitigation is also compatible with LN payments.  For example, 
a merchant today might issue an LN invoice that expires in 10 minutes.  
The customer can wait for most of that time to elapse to see how the 
exchange rate changes before deciding to pay, obtaining the same 
American call option.  If they are instead offered a future purchase 
credit for any gains, the customer doesn't suffer any opportunity cost 
by paying immediately.  (With LN, it might be possible to have a better 
UX for this by either refunding any excess or (if using something like 
Original AMP or PTLCs) not claiming any parts of the payment which are 
in excess.)


-Dave


Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-20 Thread David A. Harding via bitcoin-dev

On 2022-10-20 09:58, Anthony Towns via bitcoin-dev wrote:
On Thu, Oct 20, 2022 at 02:37:53PM +0200, Sergej Kotliar via
bitcoin-dev wrote:

AJ previously wrote:
> presumably that makes your bitcoin
> payments break down as something like:
>5% txs are on-chain and seem shady and are excluded from zeroconf
>   15% txs are lightning
>   20% txs are on-chain but signal rbf and are excluded from zeroconf
>   60% txs are on-chain and seem fine for zeroconf
Numbers are right. [...]


[...]

So the above suggests 25% of payments already get a sub-par experience
[...] going full rbf would bump that from 25% to 85%, which would be
pretty terrible.


Is it worth considering incremental steps between opt-in only (BIP125) 
and replace anything full RBF?  For example, in addition to opt-in RBF 
rules, treat any transaction with a txid ending in `0x1` as replaceable?  
I assume 1/16th (6.25%) of transactions would match that pattern (some 
of which already opt-in to RBF, so the net effect would be smaller).  
This would have the following advantages:


1. We could see if miners are willing to enable unsignaled RBF at all

2. We could gather more evidence on how the change affects zeroconf 
businesses and everyday users, hopefully without requiring they make 
immediate and huge changes


3. Any wallet authors that oppose unsignaled RBF can opt-out by grinding 
their txids, at least until full RBF is accomplished


4. We can increase the percentage of transactions subject to unsignaled 
RBF in later releases of Bitcoin Core, steadily moving the system 
towards full RBF without any sudden leaps (assuming nobody builds a 
successful relay and mining network with less restrictive replacement 
rules)


I don't think this directly helps solve the problems with non-replaceable 
transactions suffered by contract protocols since any adversary can 
opt-out of this scheme by grinding their txid, but I do think there's an 
advantage in transitioning slowly when people are still depending on 
previous behaviors.


Thanks,

-Dave


Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-07 Thread David A. Harding via bitcoin-dev

On 2022-10-07 06:20, Dario Sneidermanis via bitcoin-dev wrote:

Hello list,

I'm Dario, from Muun wallet [...] we've been reviewing the latest
bitcoin core release candidate [...] we understood we had at least a
year from the initial opt-in deployment until opt-out was deployed,
giving us enough time to adapt Muun to the new policies. However, when
reviewing the 24.0 release candidate just a few days ago, we realized
that zero-conf apps (like Muun) must *immediately turn off* their
zero-conf features.

Hi Dario,

I'm wondering if there's been some confusion.  There are two RBF-related 
items in the current release notes draft:[1]


1. "A new mempoolfullrbf option has been added, which enables the 
mempool to accept transaction replacement without enforcing BIP125 
replaceability signaling. (#25353)"


2. "The -walletrbf startup option will now default to true. The wallet 
will now default to opt-in RBF on transactions that it creates. 
(#25610)"


The first item (from PR #25353) does allow a transaction without a 
BIP125 signal to be replaced, but this configuration option is set to 
disabled by default.[2]  There have been software forks of Bitcoin Core 
since at least 2015 which have allowed replacement of non-signaling 
transactions, so this option just makes that behavior a little bit more 
accessible to users of Bitcoin Core.  Some developers have announced 
their intention to propose enabling this option by default in a future 
release, which I think is the behavior you're concerned about, but 
that's not planned for the release of 24.0 to the best of my knowledge.


The second item (from PR #25610) only affects Bitcoin Core's wallet, and 
in particular transactions created with it through the RPC interface.  
Those transactions will now default to signaling BIP125 replaceability.  
This option has been default false for many years for the RPC, but for 
the GUI it's been default true since Bitcoin Core 0.16, released in 
early 2018[3].  It's no different than another popular wallet beginning 
to signal BIP125 support by default.


In short, I don't think anything in Bitcoin Core 24.0 RC1 significantly 
changes the current situation related to transaction replaceability.  All 
it does is give Bitcoin Core RPC users by default the same settings long 
used for GUI users and introduce an option that those who object to 
non-signalled RBF will later be able to use to disable their relay of 
non-signalled replacements.


Does the above information resolve your concerns?

Thanks,

-Dave

[1] 
https://github.com/bitcoin-core/bitcoin-devwiki/wiki/24.0-Release-Notes-draft


[2] $ bin/bitcoind -help | grep -A3 mempoolfullrbf
  -mempoolfullrbf
   Accept transaction replace-by-fee without requiring 
replaceability

   signaling (default: 0)

[3] 
https://bitcoincore.org/en/2018/02/26/release-0.16.0/#replace-by-fee-by-default-in-gui



Re: [bitcoin-dev] Trustless Address Server – Outsourcing handing out addresses to prevent address reuse

2022-10-02 Thread David A. Harding via bitcoin-dev

On 2022-09-29 05:39, Ruben Somsen via bitcoin-dev wrote:

An alternative mitigation (more user friendly, but more implementation
complexity) would be to require the sender to reveal their intended
transaction to the server prior to receiving the address[^9]. This is
not a privacy degradation, since the server could already learn this
information regardless. If the transaction doesn't end up getting
sent, any subsequent attempt to reuse one of the inputs should either
be (temporarily) blacklisted or responded to with the same address
that was given out earlier
[...]
[^9]: *This would essentially look like an incomplete but signed
transaction where the output address is still missing.*


Hi Ruben,

Instead of maintaining a database of inputs that should be blocked or 
mapped to addresses, have the spender submit to you (but not the 
network) a valid transaction paying a placeholder address and in return 
give them a guaranteed unique address.  They can then broadcast a 
transaction using the same inputs to pay the guaranteed unique address.  
If you don't see that transaction within a reasonable amount of time, 
broadcast the transaction paying the placeholder address.  This makes it 
cost the same to them whether they use the unique address or not.  By 
placeholder address, I mean an address of yours that's never received a 
payment but which may have been provided in a previous invoice (e.g. to 
prevent exceeding the gap limit).
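A minimal sketch of the server-side logic described above.  Everything here is illustrative (the class name, the address stand-in, and the timeout are assumptions, not part of any real implementation); a real server would verify the submitted transaction's signatures and derive addresses from a wallet.

```python
import itertools
import time

class AddressServer:
    """Sketch: hand out a unique address in exchange for a valid
    fallback transaction paying our placeholder address."""

    def __init__(self, broadcast, clock=time.time):
        self.broadcast = broadcast      # callback that relays a raw tx
        self.clock = clock
        self.counter = itertools.count()
        self.pending = []               # list of (deadline, fallback_tx)

    def request_address(self, fallback_tx, timeout=3600):
        # The spender submits a fully valid transaction paying our
        # placeholder address; in exchange they get a unique address.
        self.pending.append((self.clock() + timeout, fallback_tx))
        return f"unique-address-{next(self.counter)}"  # stand-in for derivation

    def check_timeouts(self, spent_same_inputs):
        # spent_same_inputs: predicate reporting whether a transaction
        # spending the same inputs has already appeared on the network.
        remaining = []
        for deadline, tx in self.pending:
            if spent_same_inputs(tx):
                continue                # spender paid the unique address
            if self.clock() >= deadline:
                self.broadcast(tx)      # fall back to the placeholder payment
            else:
                remaining.append((deadline, tx))
        self.pending = remaining
```

Because the fallback transaction pays the server either way, the spender's cost is identical whether or not they broadcast the version paying the unique address.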


In short, what I think I've described is the BIP78 payjoin protocol 
without any payjoining going on (which is allowed by BIP78).  BTCPay 
already implements BIP78, as do several wallets, and I think it 
satisfies all the design constraints you've described.


-Dave
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] More uses for CTV

2022-08-19 Thread David A. Harding via bitcoin-dev

On 2022-08-19 06:33, James O'Beirne via bitcoin-dev wrote:

Multiple parties could
trustlessly collaborate to settle into a single CTV output using
SIGHASH_ALL | ANYONECANPAY. This requires a level of interaction
similar to coinjoins.


Just to make sure I understand, is the reason for SH_ALL|SH_ACP so that 
any of the parties can subsequently RBF fee bump the transaction?



Conceptually, CTV is the most parsimonious way to do such a scheme,
since you can't really get smaller than a SHA256 commitment


What's the advantage of CTV here compared to presigned transactions?  If 
multiple parties need to interact to cooperatively sign a transaction, 
no significant overhead is added by having them simultaneously sign a 
second transaction that spends from the output of the first transaction. 
Presigned transactions actually have two small benefits I can think of:


1. The payment from the first transaction (containing the spends from 
the channel setup transactions) can be sent to a P2WPKH output, which is 
actually smaller than a SHA256 commitment.  Though this probably does 
require an extra round of communication for commit-and-reveal to prevent 
a collision attack on the P2WPKH address.[1]


2. Having the first transaction pay either a P2WPKH or bech32m output 
and the second transaction spend from that UTXO may blend in better with 
other transactions, enhancing privacy.  This advantage probably isn't 
compatible with SH_ALL|SH_ACP, though, and it would require other 
privacy upgrades to LN.



direct-from-coinbase payouts seem like a
desirable feature which avoids some trust in pools.
[...]
If the payout was instead a single OP_CTV output, an arbitrary number
of pool participants could be paid out "atomically" within a single
coinbase.  One limitation is
the size of the coinbase outputs owed to constituent miners; this
limits the number of participants in the pool.


I'm confused by this.  What is the size limitation on coinbase outputs, 
how does it limit the number of participants in a pool, and how does CTV 
fix that?


Thanks,

-Dave

[1] 
https://bitcoinops.org/en/newsletters/2020/06/24/#reminder-about-collision-attack-risks-on-two-party-ecdsa



Re: [bitcoin-dev] Regarding setting a lower minrelaytxfee

2022-07-28 Thread David A. Harding via bitcoin-dev

On 2022-07-26 02:45, Peter Todd via bitcoin-dev wrote:
On Tue, Jul 26, 2022 at 01:56:05PM +0530, Aaradhya Chauhan via 
bitcoin-dev wrote:

[...] in its early days, 1 sat/vB was a good dust protection
measure. But now, I think it's a bit high [...] I think it can be done 
easily [...]


[...] lowering the dust limit now is a good way to ensure
the entire ecosystem is ready to deal with those conditions.


I don't have anything new to add to the conversation at this time, but I 
did want to suggest a clarification and summarize some previous 
discussion that might be useful.


I think the phrasing by Aaradhya Chauhan and Peter Todd above is 
conflating the minimum output amount policy ("dust limit") with the 
minimum transaction relay feerate policy ("min tx relay fee").  Any 
transaction with an output amount below a node's configured dust limit 
(a few hundred sat by default) will not be relayed by that node no 
matter how high of a feerate it pays.  Any transaction with feerate 
below a node's minimum relay feerate (1 sat/vbyte by default) will not 
be relayed by that node even if the node has unused space in its mempool 
and peers that use BIP133 feefilters to advertise that they would accept 
low feerates.
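As a toy illustration of the two independent checks described above.  Both thresholds are simplified assumptions: the real dust calculation in Bitcoin Core depends on the output type and the -dustrelayfee setting, and mempool acceptance involves many more rules.

```python
# Toy model of the two distinct relay policies described above.
DUST_LIMIT_SAT = 546        # typical dust threshold for a P2PKH output
MIN_RELAY_FEERATE = 1.0     # sat/vbyte, the default minimum relay feerate

def would_relay(output_amounts_sat, fee_sat, vsize_vb):
    """Apply the dust check and the feerate check independently."""
    if any(amount < DUST_LIMIT_SAT for amount in output_amounts_sat):
        return False        # dust output: rejected at any feerate
    if fee_sat / vsize_vb < MIN_RELAY_FEERATE:
        return False        # below min relay feerate: rejected even with
                            # mempool space and low peer feefilters
    return True

print(would_relay([10_000], fee_sat=1_000, vsize_vb=200))  # True: 5 sat/vbyte
print(would_relay([100], fee_sat=1_000, vsize_vb=200))     # False: dust
print(would_relay([10_000], fee_sat=100, vsize_vb=200))    # False: 0.5 sat/vbyte
```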


Removing the dust limit was discussed extensively a year ago[1] with 
additional follow-up discussion about eight months ago.[2]


Lowering the minimum relay feerate was seriously proposed in a patch to 
Bitcoin Core four years ago[3] with additional related PRs being opened 
to ease the change.  Not all of the related PRs have been merged yet, 
and the original PR was closed.  I can't easily find some of the 
discussions I remember related to that change, but IIRC part of the 
challenge was that lower minimum relay fees reduce the cost of a variety 
of DoS attacks which could impact BIP152 compact blocks and erlay 
efficiency, could worsen transaction pinning, may increase IBD time due 
to more block chain data, and have other adverse effects.  Additionally, 
we've found in the past that some people who build systems that take 
advantage of low feerates become upset when feerates rise, sometimes 
creating problems even for people who prepared for eventual feerate 
rises.


Compared to the complexity of lowering the minimum feerate, the 
challenges of preventing denial/degradation-of-service attacks, and 
dealing with a fragmented userbase, the economic benefit of reducing the 
feerates for the bottom of the mempool seems small---if we lower min 
feerates to 1/10th their current values and that results in the 
equivalent of an extra 10 blocks of transactions getting mined a day, 
then users save a total of 0.09 BTC (~$1,800 USD) per day and miners 
earn an extra 0.01 BTC ($200 USD) per day (assuming all other things 
remain equal).[4]


-Dave

[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-August/019307.html
[2] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-December/019635.html

[3] https://github.com/bitcoin/bitcoin/pull/13922
[4] The current min relay fee is 1 sat/vbyte.  There are ~1 million 
vbytes in a block that can be allocated to regular transactions.  Ten 
blocks at the current min relay fee would pay (10 * 1e6 / 1e8 = 0.1 BTC) 
in fees.  Ten blocks at 1/10 sat/vbyte would thus pay 0.01 BTC in fees, 
which is $200 USD @ $20k/BTC.  Thus users would save (0.1 - 0.01 = 0.09 
BTC = $1,800 USD @ $20k/BTC).
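The arithmetic in this footnote can be checked in a few lines; every input is an assumption stated in the footnote itself (1M vbytes per block for regular transactions, $20k/BTC).

```python
# Reproduces footnote [4]'s fee arithmetic.
VBYTES_PER_BLOCK = 1_000_000
SATS_PER_BTC = 100_000_000
USD_PER_BTC = 20_000

def fees_btc(blocks, feerate_sat_per_vbyte):
    """Total fees paid by `blocks` full blocks at a uniform feerate."""
    return blocks * VBYTES_PER_BLOCK * feerate_sat_per_vbyte / SATS_PER_BTC

fees_now = fees_btc(10, 1.0)   # ten extra blocks at 1 sat/vbyte: 0.1 BTC
fees_low = fees_btc(10, 0.1)   # the same blocks at 1/10 sat/vbyte: 0.01 BTC
print(round(fees_now - fees_low, 8))     # 0.09 BTC saved by users per day
print(round(fees_low * USD_PER_BTC, 2))  # 200.0 USD earned by miners per day
```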



Re: [bitcoin-dev] Surprisingly, Tail Emission Is Not Inflationary

2022-07-18 Thread David A. Harding via bitcoin-dev

On 2022-07-10 07:27, Peter Todd via bitcoin-dev wrote:
The block subsidy directly ties miner revenue to the total value of 
Bitcoin:
that's exactly how you want to incentivise a service that keeps Bitcoin 
secure.


I'm confused.  I thought your argument in the OP of this thread was that 
a perpetual block subsidy would *not* be tied to the total value of 
bitcoin.  It'd be tied to the total value of bitcoin *lost* each year on 
average.


If so, would you then agree that the inability of a perpetual block 
subsidy to directly tie miner revenue to the total value of Bitcoin 
makes it not exactly how we want to incentivise a service that keeps 
Bitcoin secure?


Thanks,

-Dave


Re: [bitcoin-dev] Bringing a nuke to a knife fight: Transaction introspection to stop RBF pinning

2022-05-12 Thread David A. Harding via bitcoin-dev

On 2022-05-10 08:53, Greg Sanders via bitcoin-dev wrote:

We add OPTX_SELECT_WEIGHT(pushes tx weight to stack, my addition to
the proposal) to the "state" input's script.
This is used in the update transaction to set the upper bound on the
final transaction weight.
In this same input, for each contract participant, we also
conditionally commit to the change output's scriptpubkey
via OPTX_SELECT_OUTPUT_SCRIPTPUBKEY and OPTX_SELECT_OUTPUTCOUNT==2.
This means any participant can send change back
to themselves, but with a catch. Each change output script possibility
in that state input also includes a 1 block
CSV to avoid mempool spending to reintroduce pinning.


I like the idea!   However, I'm not sure the `1 CSV` trick helps much.  
Can't an attacker just submit to the mempool their other eltoo state 
updates?  For example, let's assume Bob and Mallory have a channel with 
>25 updates and Mallory wants to prevent update[-1] from being committed onchain before its (H|P)TLC timeout.  Mallory also has at least 25 unencumbered UTXOs, so she submits to the mempool update[0], update[1], update[...], update[24]---each of them with a different second input to pay fees.


If `OPTX_SELECT_WEIGHT OP_TX` limits each update's weight to 1,000 
vbytes[1] and the default node relay/mempool policy of allowing a 
transaction and up to 24 descendants remains, Mallory can pin the 
unsubmitted update[-1] under 25,000 vbytes of junk---which is 25% of 
what she can pin under current mempool policies.


Alice can't RBF update[0] without paying for update[1..24] (BIP125 rule 
#3), and an RBF of update[24] will have its additional fees divided by 
its size plus the 24,000 vbytes of update[1..24].
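A back-of-the-envelope sketch of that rule #3/#4 arithmetic.  The sizes come from the scenario above; the 1 sat/vbyte feerate assumed for Mallory's junk and the 100,000 vbyte figure for current-policy pinning are illustrative assumptions, not measurements.

```python
# Rough arithmetic for the pinning scenario above.
INCREMENTAL_RELAY_FEERATE = 1      # sat/vbyte, Bitcoin Core's default

def min_replacement_fee(replaced_fee_sat, replacement_vsize):
    # BIP125 rule #3: pay at least the absolute fees being replaced;
    # rule #4: plus the incremental relay feerate times our own size.
    return replaced_fee_sat + INCREMENTAL_RELAY_FEERATE * replacement_vsize

update_vsize = 1_000                    # per-update OPTX_SELECT_WEIGHT limit
pin_vsize_limited = 25 * update_vsize   # junk Mallory can pin here
pin_vsize_current = 100_000             # approx. pinnable under current policy

# If the junk pays 1 sat/vbyte, the floor on the fee of a 1,000 vbyte
# replacement that evicts all of it:
junk_fee = pin_vsize_limited * 1
print(min_replacement_fee(junk_fee, update_vsize))  # 26000 sat

# Relative improvement versus current policy, as argued above:
print(1 - pin_vsize_limited / pin_vsize_current)    # 0.75 (75% cheaper)
```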


To me, that seems like your proposal makes escaping the pinning at most 
75% cheaper than today.  That's certainly an improvement---yay!---but 
I'm not sure it eliminates the underlying concern.  Also depending on 
the mempool ancestor/descendant limits makes it harder to raise those 
limits in the future, which is something I think we might want to do if 
we can ensure raising them won't increase node memory/CPU DoS risk.


I'd love to hear that my analysis is missing something though!

Thanks!,

-Dave

[1] 1,000 vbytes per update seems like a reasonable value to me.  
Obviously there's a tradeoff here: making it smaller limits the amount 
of pinning possible (assuming mempool ancestor/descendant limits remain) 
but also limits the number and complexity of inputs that may be added.  
I don't think we want to discourage people too much from holding 
bitcoins in deep taproot trees or sophisticated tapscripts.



Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-21 Thread David A. Harding via bitcoin-dev

On 21.04.2022 14:28, Anthony Towns wrote:

But, if [it's true that "many [...] use cases [...] to use CTV for
are very long term in nature"], that's presumably incompatible
with any sort of sunset that's less than many decades away, so doesn't
seem much better than just having it be available on a signet?


I fully acknowledge that a temporary test can't fully replicate a 
permanent condition.  That said, if people truly believe CTV vaults will 
significantly enhance their security, wouldn't it be worth using them 
for most of the deployment?  Users would receive both years of added 
security and the opportunity to convince other Bitcoiners to make CTV 
permanent by demonstrating real-world usage.



If sunsetting were a good idea, one way to think about implementing it
might be to code it as:

  if (DeploymentActiveAfter(pindexPrev, params, FOO) &&
  !DeploymentActiveAfter(pindexPrev, params, FOO_SUNSET))
  {
  EnforceFoo();
  }


Defining at the outset how we'll signal years later if we want to keep 
the rules seems intelligent to me.


Thanks!,

-Dave


Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-21 Thread David A. Harding via bitcoin-dev

[Rearranging Matt's text in my reply so my nitpicks come last.]

On 21.04.2022 13:02, Matt Corallo wrote:

I agree, there is no universal best, probably. But is there a concrete
listing of a number of use-cases and the different weights of things,
plus flexibility especially around forward-looking designs?


I'm sure we could make a nice list of covenant usecases, but I don't 
know how we would assign reasonable objective weights to the different 
things purely through group foresight.  I know I'm skeptical about 
congestion control and enthusiastic about joinpools---but I've talked to 
developers I respect who've had the opposite opinions from me about 
those things.  The best way I know of to reconcile our differing 
opinions is to see what real Bitcoin users actually pay for.  But to do 
that, I think they must have a way to use covenants in something like 
the production environment.



You're also writing off [...] a community of
independent contributors who care about Bitcoin working together to
make decisions on what is or isn't the "right way to go" [...]. Why are 
you

suggesting its something that you "don't know how to do"?


You said we should use the best design.  I said the different designs 
optimize for different things, so it's unlikely that there's an 
objective best.  That implies to me that we either need to choose a 
winner (yuck) or we need to implement more than one of the designs.  In 
either of those cases, choosing what to implement would benefit from 
data about how much the thing will be used and how much users will pay 
for it in fees.



Again, my point *is not* "will people use CTV", I think they will. I
think they would also use TLUV if that were activated for the exact
same use-cases. I think they would also use CAT+CSFS if that were what
was activated, again for the exact same use-cases. Given that, I'm not
sure how your proposal teaches us anything at all, aside from "yes,
there was demand for *some* kind of covenant".


I'm sorry if my OP was ambiguous about this, but my goal there was to 
describe a general framework for activating temporary consensus changes 
for the purpose of demonstrating demand for proposed features.  I gave 
CTV as an example for how the framework could be used, but we could use 
the same framework to activate APO and TLUV (or IIDs and EVICT)---and 
then we would see which of them people actually used.  If there was 
significant ongoing use of all three after 5 years, great!  We keep them 
all.  If some of them went largely unused, we let the extra validation 
rules expire and move on.


Alternatively, if we only enabled one covenant design (e.g. CTV), we 
would still gain data about how it was used and we could see if some of 
the alternative designs would've been more optimal for those 
demonstrated uses.


My goal here is obtaining data from which we can make informed 
decisions.  A transitory soft fork is an extreme way to acquire that 
data and I fully acknowledge it has several significant problems 
(including those I listed in my OP).  I'm hoping, though, that it's a 
better solution than another activation battle, prolonged yelling on 
this mailing list and elsewhere, or everyone just giving up and letting 
Bitcoin ossify prematurely.  Alternatively, I'm hoping one of the many 
people on this list who is smarter than I am will think of another way 
to obtain decisive data with less fuss.



Again, you're writing off the real and nontrivial risk of doing a fork
to begin with.


I agree this risk exists and it isn't my intention to write it off---my 
OP did say "we [must be] absolutely convinced CTV will have no negative 
effects on the holders or receivers of non-CTV coins."  I haven't been 
focusing on this in my replies because I think the other issues we've 
been discussing are more significant.  If we were to get everyone to 
agree to do a transitory soft fork, I think the safety concerns related 
to a CTV soft fork could be mitigated the same way we've mitigated them 
for previous soft forks: heaps of code review/testing and making sure a 
large part of the active community supports the change.



You don't
mention the lack of recursion in CTV vs CAT+CSFS


I mentioned recursion, or the lack thereof, in various proposals like 
five times in this thread.  :-)


Thanks again for your replies,

-Dave


Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-21 Thread David A. Harding via bitcoin-dev

On 21.04.2022 08:39, Matt Corallo wrote:

We add things to Bitcoin because (a) there's some demonstrated
use-cases and intent to use the change (which I think we definitely
have for covenants, but which only barely, if at all, suggests
favoring one covenant design over any other)


I'm unconvinced about CTV's use cases but others have made reasonable 
claims that it will be used.  We could argue about this indefinitely, 
but I would love to give CTV proponents an opportunity to prove that a 
significant number of people would use it.



(b) because its
generally considered aligned with Bitcoin's design and goals, based on
developer and more broad community response


I think CTV meets this criterion.  At least, I can't think of any way 
BIP119 itself (notwithstanding activation concerns) violates Bitcoin's 
designs and goals.



(c) because the
technical folks who have/are wiling to spend time working on the
specific design space think the concrete proposal is the best design
we have


This is the criterion that most concerns me.  What if there is no 
universal best?  For example, I mentioned in my previous email that I'm 
a partisan of OP_CAT+OP_CSFS due to their min-max of implementation 
simplicity versus production flexibility.  But one problem is that 
spends using them would need to contain a lot of witness data.  In my 
mind, they're the best for experimentation and for proving the existence 
of demand for more optimized constructions.


OP_TX or OP_TXHASH would likely offer almost as much simplicity and 
flexibility but be more efficient onchain.  Does that make them better 
than OP_CAT+OP_CSFS?  I don't know how to objectively answer that 
question, and I don't feel comfortable with my subjective opinion of 
CAT+CSFS being better than OP_TX.


APO/IIDs, CTV, and TLUV/EVICT all seem to me to be very specific to 
certain usecases (respectively: Eltoo, congestion control, and 
joinpools), providing maximum onchain efficiency for those cases but 
requiring contortions or larger witnesses to accomplish other covenant 
usecases.  Is their increased efficiency better than more general 
constructions like CSFS or TX?  Again, I don't know how to answer that 
question objectively, although subjectively I'm ok with optimized 
constructions for cases of proven demand.



, and finally (d) because the implementation is well-reviewed
and complete.


No comment here; I haven't followed CTV's review progress to know 
whether I'd consider it well enough reviewed or not.



I do not see how we can make an argument for any specific covenant
under (c) here. We could just as well be talking about
TLUV/CAT+CHECKSIGFROMSTACK/etc, and nearly anyone who is going to use
CTV can probably just as easily use those instead - ie this has
nothing to do with "will people use it".


I'm curious how we as a technical community will be able to determine 
which is the best approach.  Again, I like starting simple and general, 
gathering real usage data, and then optimizing for demonstrated needs.  
But the simplest and most general approaches seem to be too general for 
some people (because they enable recursive covenants), seemingly forcing 
us into looking only at application-optimized designs.  In that case, I 
think the main thing we want to know about these narrow proposals for 
new applications is whether the applications and the proposed consensus 
changes will actually receive significant use.  For that, I think we 
need some sort of test bed with real paying users, and ideally one that 
is as similar to Bitcoin mainnet as possible.



we
cannot remove the validation code for something ever, really - you
still want to be able to validate the historical chain


You and Jeremy both brought up this point.  I understand it and I 
should've addressed it better in my OP, but I'm of the opinion that 
reverting to earlier consensus rules gives future developers the 
*option* of dropping no-longer-used consensus code as a practical 
simplification of the same type we've used on several occasions before, 
and which is an optional default in newly started Bitcoin Core nodes for 
over a decade now (i.e. skipping verification of old signatures).  If 
future devs *want* to maintain code from a set of temporary rules used 
millions of blocks ago, that's great, but giving them the option to 
forget about those rules eliminates one of my concerns about making 
consensus changes without fully proven demand for that change.


I just wanted to mention the above in case this discussion comes back to 
serious consideration of a transitory soft fork.  For now, I think we 
can table a debate over validating reverted rules and focus on how we'll 
come to agreement that a particular covenant-related consensus change is 
warranted.


Thanks for your thoughtful response,

-Dave


Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-21 Thread David A. Harding via bitcoin-dev

On 21.04.2022 04:58, Matt Corallo wrote:

On 4/20/22 6:04 PM, David A. Harding via bitcoin-dev wrote:
The main criticisms I'm aware of against CTV seem to be along the 
following lines:


1. Usage, either:
   a. It won't receive significant real-world usage, or
   b. It will be used but we'll end up using something better later
2. An unused CTV will need to be supported forever, creating extra
    maintenance burden, increasing security surface, and making it harder to
    evaluate later consensus change proposals due to their interactions with CTV


Also "is this even the way we should be going about covenants?"


I consider this to be a version of point 1b above.  If we find a better 
way for going about covenants, then we'll activate that and let CTV 
automatically be retired at the end of its five years.


If you still think your point is separate from point 1b, I would 
appreciate you helping me understand.



the Bitcoin technical community (or at least those interested in
working on covenants) doesn't even remotely show any signs of
consensus around any concrete proposal,


This is also my assessment: neither CTV nor any other proposal currently 
has enough support to warrant a permanent change to the consensus rules. 
 My question to the list was whether we could use a transitory soft fork 
as a method for collecting real-world usage data about proposals.  E.g., 
a consensus change proposal could proceed along the following idealized 
path:


- Idea (individual or small group)
- Publication (probably to this list)
- Draft specification and implementation
- Riskless testing (integration tests, signet(s), testnet, etc)
- Money-at-stake testing (availability on a pegged sidechain, an altcoin 
similar to Bitcoin, or in Bitcoin via a transitory soft fork)

- Permanent consensus change


talking about a "way forward for CTV" or activating CTV or coming up
with some way of shoving it into Bitcoin at this stage [...] sets 
incredibly poor precedent for

how we think about changes to Bitcoin and maintaining Bitcoin's
culture of security and careful design.


How should we think about changes to Bitcoin and maintaining its culture 
of security and careful design?  My post suggested a generalized way we 
could evaluate proposed consensus changes for real-world demand, 
allowing us to settle what I see as the most contended part of the CTV 
proposal.  That feels to me like legitimate engineering and social 
consensus building.  What would be your preferred alternatives?


(For the record, my preferred alternative for years has been to add the 
technically trivial opcodes OP_CAT and OP_CHECKSIGFROMSTACK, see what 
covenant-y things people build with them, and then consider proposals to 
optimize the onchain usage of those covenant-y things.  Alas, this seems 
to fall afoul of the concerns held by some people about recursive 
covenants.)



I'm gobsmacked that the conversation has reached this point, and am
even more surprised that the response from the Bitcoin (technical)
community hasn't been a more resounding and complete rejection of this
narrative.


If the only choices are to support activation of BIP119 CTV at this time 
or to reject it, I would currently side with rejection.  But I would 
prefer over both of those options to find a third way that doesn't 
compromise safety or long-term maintainability and which gives us the 
data about CTV (or other covenant-related constructions) to see whether 
the concerns described above in 1a and 1b are actually non-issues.


I see one of those third ways as the testing on the CTV signet described 
in a contemporaneous thread on this list.[1]  Other third ways would be 
trying CTV on sidechains or altcoins, or perhaps allowing CTV to be 
temporarily used on Bitcoin as proposed in this thread.  Is there 
interest in working on those alternatives, or is the only path forward 
an argument over attempting activation of CTV?


Thanks,

-Dave

[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-April/020234.html



[bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-20 Thread David A. Harding via bitcoin-dev

Hi all,

The main criticisms I'm aware of against CTV seem to be along the 
following lines:


1. Usage, either:
  a. It won't receive significant real-world usage, or
  b. It will be used but we'll end up using something better later
2. An unused CTV will need to be supported forever, creating extra
   maintenance burden, increasing security surface, and making it harder to
   evaluate later consensus change proposals due to their interactions with CTV

Could those concerns be mitigated by making CTV an automatically reverting
consensus change with an option to renew?  E.g., redefining OP_NOP4 as OP_CTV
for five years from BIP119's activation date and then reverting to OP_NOP4.
If, prior to the end of those five years, a second soft fork was activated, it
could continue enforcing the CTV rules either for another five years or
permanently.

This would be similar in nature to the soft fork described in BIP50 where the
maximum block size was temporarily reduced to address the BDB locks issue and
then allowed to return to its original value.  In Script terms, any use of
OP_CTV would effectively be:

OP_IF
    OP_CTV
OP_ELSE
    <5 years after activation> OP_CLTV
OP_ENDIF
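For scale, if the sunset were expressed as an absolute block height (as OP_CLTV permits), five years works out to roughly 262,800 blocks.  The activation height below is purely a hypothetical placeholder:

```python
# Back-of-the-envelope for the "<5 years after activation>" branch.
BLOCKS_PER_YEAR = 144 * 365          # ~52,560 blocks at 10 minutes/block
activation_height = 800_000          # hypothetical activation height
sunset_height = activation_height + 5 * BLOCKS_PER_YEAR
print(sunset_height)                 # 1062800
```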

As long as we are absolutely convinced CTV will have no negative effects on
the holders or receivers of non-CTV coins, I think an automatically reverting
soft fork gives us some ability to experiment with new features without
committing ourselves to live with them forever.

The main downsides I can see are:

1. It creates a big footgun.  Anyone who uses CTV without adequately
   preparing for the reversion could easily lose their money.

2. Miners would be incentivized to censor spends of the reverting
   opcode near its reversion date.  E.g., if Alice receives 100 bitcoins to a
   script secured only by OP_CTV and attempts to spend them the day before it
   becomes OP_NOP4, miners might prefer to skip confirming that transaction
   even if it pays a high feerate in favor of spending her 100 bitcoins to
   themselves the next day after reversion.

   The degree to which this is an issue will depend on the diversity of
   hashrate and the willingness of any large percentage of hashrate to
   deliberately reorg the chain to remove confirmed transactions.  This could
   be mitigated by having OP_CTV change to OP_RETURN, destroying any unspent
   CTV-only coins so that any censoring miners only benefited from the
   (hopefully slight) decrease in bitcoin currency supply.

3. A bias towards keeping the change.  Even if it turned out very few people
   really used CTV, I think there would be a bias at the end of five years
   towards "why not just keep it".

4. The drama doesn't end.  Activating CTV now, or decisively not activating
   it, may bring to an end our frequent discussions about it (though I
   wouldn't count on that).  An automatically reverting soft fork would
   probably guarantee we'll have further consensus-level discussions about
   CTV in the future.

Thanks for reading.  I'm curious to hear y'alls thoughts,

-Dave


[bitcoin-dev] Sponsor transaction engineering, was Re: Thoughts on fee bumping

2022-02-18 Thread David A. Harding via bitcoin-dev
On Tue, Feb 15, 2022 at 01:37:43PM -0800, Jeremy Rubin via bitcoin-dev wrote:
> Unfortunately, there are technical reasons for sponsors to not be monotone.
> Mostly that it requires the maintenance of an additional permanent
> TX-Index

Alternatively, you could allow a miner to include a sponsor transaction
in a later block than the sponsored transaction by providing an (SPV)
merkle inclusion proof that the sponsored transaction was a part of a
previous block on the same chain.[1]

This does raise the vbyte cost of including sponsor and sponsored
transactions in different blocks compared to including them both in the
same block, but I wonder if it mitigates the validity concern raised by
Suhas Daftuar in the previous sponsor transaction thread.

-Dave

[1] Bitcoin Core stores the complete headers chain, so it already has
the information necessary to validate such a proof (and the
`verifytxoutproof` RPC already does this).  Utreexo-style nodes might
not store old headers to save space, but I presume they could store a
merkle-like commitment to all headers they previously validated and then
have utreexo proofs include the necessary headers and intermediate
hashes necessary to validate subsequent-block sponsor transactions.
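The core of such an inclusion check is the standard merkle-branch fold.  This sketch shows the generic algorithm only; it omits Bitcoin-specific details such as internal byte order and the duplicate-last-node rule for odd-width tree levels (which proof construction handles by supplying the node's own hash as its sibling).

```python
import hashlib

def dhash(data: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_inclusion(txid: bytes, index: int, branch: list,
                     merkle_root: bytes) -> bool:
    """Fold a txid up the merkle tree using the sibling hashes in
    `branch`, taking left/right position from the bits of `index`,
    and compare the result against the header's merkle root."""
    node = txid
    for sibling in branch:
        if index & 1:
            node = dhash(sibling + node)   # this node is the right child
        else:
            node = dhash(node + sibling)   # this node is the left child
        index >>= 1
    return node == merkle_root

# Tiny two-transaction example: root = H(H(tx-a) || H(tx-b))
txa, txb = dhash(b"tx-a"), dhash(b"tx-b")
root = dhash(txa + txb)
print(verify_inclusion(txa, 0, [txb], root))   # True
```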




[bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-10 Thread David A. Harding via bitcoin-dev
On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev wrote:
> Whether [recursive covenants] is an issue or not precluding this sort
> of design or not, I defer to others.

For reference, I believe the last time the merits of allowing recursive
covenants was discussed at length on this list[1], not a single person
replied to say that they were opposed to the idea.

I would like to suggest that anyone opposed to recursive covenants speak
for themselves (if any such people exist).  Citing the risk
of recursive covenants without presenting a credible argument for the
source of that risk feels to me like (at best) stop energy[2] and (at
worst) FUD.

-Dave

[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
[2] 
http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
(thanks to AJ who told me about stop energy one time when I was
producing it)





[bitcoin-dev] Finding peers that relay taproot spends, was Re: bitcoinj fork with Taproot support

2021-11-20 Thread David A. Harding via bitcoin-dev
On Wed, Nov 17, 2021 at 08:05:55PM +, n1ms0s via bitcoin-dev wrote:
> This seems to be the case. I saw your reply on Bitcoin StackExchange
> as well. In bitcoinj I just made it so the client only connects to
> nodes with at least protocol version 70016. Seems to work well.

Hi,

This is a clever solution, but when I looked into this I found that P2P
protocol version 70016 was introduced in Bitcoin Core version 0.21.0[1].
That release will never relay taproot spends because it doesn't
contain taproot activation parameters[2].  So this heuristic is
imperfect: it only works when it happens to connect to the 0.21.1 and
22.0 versions of Bitcoin Core (or compatible nodes) which were
programmed to begin relaying taproot spends starting one block before
activation.
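As a sketch, the bitcoinj-style heuristic amounts to nothing more than
a version floor applied during peer selection (function name mine).
The comment records the caveat above: version alone cannot distinguish
Bitcoin Core 0.21.0 from 0.21.1+:

```python
def maybe_taproot_peer(protocol_version: int) -> bool:
    """Heuristic: only keep peers that advertise P2P protocol version
    70016 or higher in their version handshake.  Imperfect, because
    Bitcoin Core 0.21.0 also advertises 70016 but lacks taproot
    activation parameters and so never relays taproot spends."""
    return protocol_version >= 70016
```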

Can anyone recommend a better heuristic lite wallets can use to ensure
they're connecting to a taproot-activated node?  (If not, maybe this is
something we want nodes to advertise during activation of future
protocol extensions.)

Thanks,

-Dave

[1] 
https://github.com/bitcoin/bitcoin/commit/ccef10261efc235c8fcc8aad54556615b0cc23be
https://bitcoincore.org/en/releases/0.21.0/

[2] https://github.com/bitcoin/bitcoin/pull/20165




Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-10 Thread David A. Harding via bitcoin-dev
On Fri, Sep 10, 2021 at 11:24:15AM -0700, Matt Corallo via bitcoin-dev wrote:
> I'm [...] suggesting [...] that the existing block producers each
> generate a new key, and we then only sign reorgs with *those* keys.
> Users will be able to set a flag to indicate "I want to accept sigs
> from either sets of keys, and see reorgs" or "I only want sigs from
> the non-reorg keys, and will consider the reorg keys-signed blocks
> invalid"

This seems pretty useful to me.  I think we might want multiple sets of
keys:

0. No reorgs

1. Periodic reorgs of small to moderate depth for ongoing testing
without excessive disruption (e.g. the every 8 hours proposal).  I think
this probably ought to be the default-default `-signet` in Bitcoin Core
and other nodes.

2. Either frequent reorgs (e.g. every block) or a webapp that generates
reorgs on demand to further reduce testing delays.

If we can only have two, I'd suggest dropping 0.  I think it's already
the case that too few people test their software with reorgs.

-Dave




Re: [bitcoin-dev] Braidpool: Proposal for a decentralised mining pool

2021-09-06 Thread David A. Harding via bitcoin-dev
On Mon, Sep 06, 2021 at 09:29:01AM +0200, Eric Voskuil wrote:
> It doesn’t centralize payment, which ultimately controls transaction 
> selection (censorship).

Yeah, but if you get paid after each share via LN and you can switch
pools instantly, then the worst case with centralized pools is that 
you don't get paid for one share.  If the hasher sets their share
difficulty low enough, that shouldn't be a big deal.

I'm interested in whether braidpool offers any significant benefits over
an idealized version of centralized mining with independent transaction
selection.

-Dave




Re: [bitcoin-dev] Braidpool: Proposal for a decentralised mining pool

2021-09-06 Thread David A. Harding via bitcoin-dev
On Wed, Sep 01, 2021 at 11:46:55PM -0700, Billy Tetrud via bitcoin-dev wrote:
> How would you compare this to Stratum v2?

Specifically, I'd be interested in learning what advantages this has
over a centralized mining pool using BetterHash or StratumV2 with
payouts made via LN (perhaps immediately after each submitted share is
validated).

-Dave




Re: [bitcoin-dev] Note on Sequence Lock Upgrades Defect

2021-09-05 Thread David A. Harding via bitcoin-dev
On Fri, Sep 03, 2021 at 08:32:19PM -0700, Jeremy via bitcoin-dev wrote:
> Hi Bitcoin Devs,
> 
> I recently noticed a flaw in the Sequence lock implementation with respect
> to upgradability. It might be the case that this is protected against by
> some transaction level policy (didn't see any in policy.cpp, but if not,
> I've put up a blogpost explaining the defect and patching it
> https://rubin.io/bitcoin/2021/09/03/upgradable-nops-flaw/

Isn't this why BIP68 requires using tx.version=2?  Wouldn't we just
deploy any new nSequence rules with tx.version>2?
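For reference, the BIP68 gating being referred to can be sketched as
follows (constant names follow the BIP; helper names are mine):

```python
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31  # bit 31: BIP68 disabled
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22     # set: time-based; unset: height-based
SEQUENCE_LOCKTIME_MASK = 0x0000FFFF       # 16-bit lock-time value

def bip68_applies(tx_version: int, n_sequence: int) -> bool:
    """Per BIP68, relative lock-time is only enforced for transactions
    with nVersion >= 2 whose input's disable bit is unset."""
    return tx_version >= 2 and not (n_sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG)

def bip68_value(n_sequence: int):
    """Decode (is_time_based, value); time-based units are 512 seconds."""
    is_time = bool(n_sequence & SEQUENCE_LOCKTIME_TYPE_FLAG)
    return is_time, n_sequence & SEQUENCE_LOCKTIME_MASK
```

Since BIP68 semantics attach only at tx.version=2, the suggestion above
is that new nSequence semantics could similarly attach only at higher
version numbers.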

-Dave




Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread David A. Harding via bitcoin-dev
On Mon, Aug 09, 2021 at 09:22:28AM -0400, Antoine Riard wrote:
> I'm pretty conservative about increasing the standard dust limit in any
> way. This would convert a higher percentage of LN channels capacity into
> dust, which is coming with a lowering of funds safety [0]. 

I think that reasoning is incomplete.  There are two related things here:

- **Uneconomical outputs:** outputs that would cost more to spend than
  the value they contain.

- **Dust limit:** an output amount below which Bitcoin Core (and other
  nodes) will not relay the transaction containing that output.

Although raising the dust limit can have the effect you describe, 
increases in the minimum necessary feerate to get a transaction
confirmed in an appropriate amount of time also "converts a higher
percentage of LN channel capacity into dust".  As developers, we have no
control over prevailing feerates, so this is a problem LN needs to deal
with regardless of Bitcoin Core's dust limit.
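To make the distinction concrete, whether an output is uneconomical
depends on the prevailing feerate, not on the (fixed) dust limit.  A
minimal sketch, assuming roughly 68 vbytes for a P2WPKH input (other
script types differ):

```python
def is_uneconomical(output_value_sat: int, feerate_sat_per_vb: float,
                    input_vsize_vb: int = 68) -> bool:
    """An output is uneconomical when spending it would cost more in
    fees than the value it contains."""
    return output_value_sat < feerate_sat_per_vb * input_vsize_vb
```

For example, a 100-sat P2WPKH output is economical to spend at
1 sat/vB but uneconomical at 50 sat/vB, even though the dust limit
never changed.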

(Related to your linked thread, that seems to be about the risk of
"burning funds" by paying them to a miner who may be a party to the
attack.  There's plenty of other alternative ways to burn funds that can
change the risk profile.)

> the standard dust limit [...] introduces a trust vector 

My point above is that any trust vector is introduced not by the dust
limit but by the economics of outputs being worth less than they cost to
spend.

> LN node operators might be willingly to compensate this "dust" trust vector
> by relying on side-trust model

They could also use trustless probabilistic payments, which have been
discussed in the context of LN for handling the problem of payments too
small to be represented onchain since early 2016:
https://docs.google.com/presentation/d/1G4xchDGcO37DJ2lPC_XYyZIUkJc2khnLrCaZXgvDN0U/edit?pref=2=1#slide=id.g85f425098_0_178

(Probabilistic payments were discussed in the general context of Bitcoin
well before LN was proposed, and Elements even includes an opcode for
creating them.)

> smarter engineering such as utreexo on the base-layer side 

Utreexo doesn't solve this problem.  Many nodes (such as miners) will
still want to store the full UTXO set and access it quickly.  Utreexo
proofs will grow in size with UTXO set size (though, at best, only
log(n)), so full node operators will still not want their bandwidth
wasted by people who create UTXOs they have no reason to spend.

> I think the status quo is good enough for now

I agree.

-Dave




Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-08 Thread David A. Harding via bitcoin-dev
On Sun, Aug 08, 2021 at 11:52:55AM -0700, Jeremy wrote:
> We should remove the dust limit from Bitcoin. Five reasons:

Jeremy knows this, but to be clear for other readers, the dust limit is
a policy in Bitcoin Core (and other software) where it refuses by
default to relay or mine transactions with outputs below a certain
amount.  If nodes or miners running with non-default policy choose to
relay or mine those transactions, they are not penalized (not directly,
at least; there's BIP152 to consider).

Question for Jeremy: would you also allow zero-value outputs?  Or would
you just move the dust limit down to a fixed 1-sat?

I think the dust limit is worth keeping:

> 1) it's not our business what outputs people want to create

Every additional output added to the UTXO set increases the amount of
work full nodes need to do to validate new transactions.  For miners
for whom fast validation of new blocks can significantly affect their
revenue, larger UTXO sets increase their costs and so contributes
towards centralization of mining.

Allowing 0-value or 1-sat outputs minimizes the cost for polluting the
UTXO set during periods of low feerates.

If your stuff is going to slow down my node and possibly reduce my
censorship resistance, how is that not my business?

> 2) dust outputs can be used in various authentication/delegation smart
> contracts

All of which can also use amounts that are economically rational to
spend on their own.  If you're gonna use the chain for something besides
value transfer, and you're already willing to pay X in fees per onchain
use, why is it not reasonable for us to ask you to put up something on
the order of X as a bond that you'll actually clean up your mess when
you're no longer interested in your thing?

> 3) dust sized htlcs in lightning (
> https://bitcoin.stackexchange.com/questions/46730/can-you-send-amounts-that-would-typically-be-considered-dust-through-the-light)
> force channels to operate in a semi-trusted mode 

Nope, nothing is forced.  Any LN node can simply refuse to accept/route
HTLCs below the dust limit.

> which has implications
> (AFAIU) for the regulatory classification of channels in various
> jurisdictions

Sucks for the people living there.  They should change their laws.  If
they can't do that, they should change their LN node policies not to
route uneconomic HTLCs.  We shouldn't make Bitcoin worse to make
complying with regulations easier.

I also doubt your proposed solution fixes the problem.  Any LN node that
accepts an uneconomic HTLC cannot recover that value, so the money is
lost either way.  Any sane regulation would treat losing value to
transaction fees the same as losing value to uneconomical conditions.

Finally, if LN nodes start polluting the UTXO set with no economic way
to clean up their mess, I think that's going to cause tension between
full node operators and LN node operators.

> agnostic treatment of fund transfers would simplify this
> (like getting a 0.01 cent dividend check in the mail)

I'm not sure I understand this point.  It sounds to me like you're
comparing receiving an uneconomic output to receiving a check that isn't
worth the time to cash.  But the costs of checks are borne only by the
people who send, receive, and process them.  The costs of uneconomic
outputs polluting the UTXO set are borne by every full node forever (or
for every archival full node forever if non-archival nodes end up using
something like utreexo).

> 4) thinly divisible colored coin protocols might make use of sats as value
> markers for transactions.

I'm not exactly sure what you're talking about, but if Alice wants to
communicate the number n onchain, she can do:

if n < dust:
  nSequence = 0x + n  # should probably check endianness
else:
  nValue = n

There's at least 15 bits of nSequence currently without consensus or
policy meaning, and the dust limits are currently in the hundreds of
sat, so there's plenty of space.

Alice could probably also communicate the same thing by grinding her
output script's hash or pubkey; again, with dust limits just being
hundreds of sats, that's not too much grinding.

> 5) should we ever do confidential transactions we can't prevent it without
> compromising privacy / allowed transfers

I'm not an expert, but it seems to me that you can do that with range
proofs.  The range proof for >dust doesn't need to become part of the
block chain, it can be relay only.

I haven't looked since they upgraded to bulletproofs, but ISTR the
original CT implementation leaked the most significant digits or
something (that kept down the byte size of the proofs), so maybe it was
already possible to know what was certainly not dust and what might be
dust.

In short, it's my opinion that the dust limit is not creating any real
problems, so it should be kept for its contribution to keeping full
nodes faster, cheaper, and more efficient.

-Dave

P.S. As I prepared to send this, Matt's email arrived about "If it
weren't 

Re: [bitcoin-dev] Covenant opcode proposal OP_CONSTRAINDESTINATION (an alternative to OP_CTV)

2021-07-25 Thread David A. Harding via bitcoin-dev
On Sun, Jul 25, 2021 at 12:49:38PM -0700, Billy Tetrud wrote:
> find the median fee-rate for each block and store that, then calculate
> the average of those stored per-block median numbers. 

One datapoint per block seems fine to me and it works much nicer with
pruned nodes.
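The one-datapoint-per-block approach can be sketched as a bounded
rolling window (class and method names are mine, not from the
proposal): store only each block's median feerate and take the median
of those medians, so storage is O(window length) rather than one entry
per transaction.

```python
from collections import deque
from statistics import median

class MedianFeerateTracker:
    """Track the median of per-block median feerates over a fixed
    window of recent blocks."""
    def __init__(self, window_length: int = 3000):
        # deque with maxlen automatically evicts the oldest block's
        # datapoint once the window is full
        self.window = deque(maxlen=window_length)

    def add_block(self, block_feerates) -> None:
        """Record one datapoint: this block's median feerate."""
        if block_feerates:
            self.window.append(median(block_feerates))

    def current(self) -> float:
        """Median of the stored per-block medians."""
        return median(self.window)
```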

> So the only situations where miners would gain something
> from raising the fee rate is for griefing situations, which should be so
> rare as to be completely insignificant to miners. 

I don't believe the problem scope can be reduced this way.  Although we
often look at miners as separate from users, it's important to
remember that every miner is also a user of Bitcoin and every user of
Bitcoin may also someday be a miner.  Users may also employ miners
directly via out-of-band payments.

In your usecase of vaults, we can imagine Bob is attempting to store
100,000 BTC.  He designs his vault to allow spending on fees up to 10x
the 3,000 block median fee.  Mallory steals Bob's encumbered spending
key.  Mallory could immediately go to a miner and offer them a 50/50
split on the 10x fees over the median (~10,000 sat?), or Mallory could
take a bit more time and work with a cartel of miners to raise the
median over a period of three weeks (3k blocks) to 10,000
BTC/transaction, allowing them to take all of Bob's coins in fees.

> if OP_CD allowed spending the entire output as a fee then it wouldn't
> be successful in constraining the destination to the listed addresses.

The alternative is to never allow OP_CD to spend any of the UTXOs it
encumbers to fees, requiring all fees be paid via another mechanism.
Since satisfactory designs are going to provide those other mechanisms
anyway, it seems to me that there's no need for OP_CD to manage fees.
That said, I don't have a real strong opinion here.

-Dave




Re: [bitcoin-dev] Covenant opcode proposal OP_CONSTRAINDESTINATION (an alternative to OP_CTV)

2021-07-24 Thread David A. Harding via bitcoin-dev
On Tue, Jul 20, 2021 at 10:56:10PM -0700, Billy Tetrud via bitcoin-dev wrote:
> This involves [...] constraining the amount of the fee that output is
> allowed to contribute to.  [...] fee is specified relative to recent
> median fee rates - details in the proposal).

Here are the relevant details:

> The medianFeeRate is defined as the median fee rate per vbyte for the
> most recent windowLength blocks. The maxFeeContribution is defined as
> medianFeeRate * 2^feeFactor of the fee. Note that this is a limitation
> on the fee, not on the fee-rate. If feeFactor is -1,
> maxFeeContribution is 0.

First, I don't think we want full nodes to have to store the feerate for
every transaction in a 3,000 block window (~2.5 million txes, assuming
all segwit).  I'm sure you could tweak this proposal to require a much
smaller dataset.

Second, I think this requires careful consideration of how it might
affect the incentives for miners.  Miners can include many small high-fee
pay-to-self transactions in their blocks to raise the median feerate,
but this puts them at increased risk of fee sniping from other miners,
which may incentivize fee-raisers to centralize their mining, which is
ultimately bad.  I'm not sure that's a huge concern with this proposal,
but I think it and other incentive problems require consideration.

Finally, I think this fee mechanism is redundant.  For any case where
this opcode will be used, you'll want to have two things:

1. A mutual spend clause (e.g. a multisignature taproot keypath
   spend) where all parties agree on a spend of the output and so
   can set an appropriate feerate at that time.  You want this
   because it'll be the most efficient way to spend.

2. A fee override that allows paying additional fees beyond what
   OP_CONSTRAINDESTINATION allows, either through attaching an
   additional input or through CPFP.  You want this because you
   will know more about feerate conditions at spend time than you
   did when you created the receiving script.

If you have the ability to choose feerates through the above mechanisms,
you don't need a constrained feerate mechanism that might be
manipulable by miners.

(I haven't looked closely at the rest of your proposal; the above just
caught my attention.)

-Dave




Re: [bitcoin-dev] Multisig Enhanced Privacy Scheme

2021-07-24 Thread David A. Harding via bitcoin-dev
On Tue, Jul 20, 2021 at 07:44:19PM +, Michael Flaxman via bitcoin-dev wrote:
> I've been working on ways to prevent privacy leaks in multisig
> quorums, and have come up with a creative use of BIP32 paths.

It seems to me like it would be rare for an attacker to obtain a private
BIP32 seed but not simultaneously learn what HD paths it's being used with.
I assume basically everyone is storing their descriptors (or descriptor
equivalents) alongside their seeds; doing so helps ensure a robust
recovery.

However, to the degree that privacy from seed thieves is a problem we
want to solve, I think it's largely fixed by using taproot with
multisignatures and threshold signatures.  As long as participants
aren't reusing the same keys in different contexts, it shouldn't be
possible for a third party who doesn't know all involved pubkeys to
determine that any particular aggregated pubkey contained material from
a certain base pubkey.

I would suggest that it's probably more beneficial for wallet authors to
work on implementing support for taproot and MuSig or MuSig2 than
support for this scheme, although maybe I'm misunderstanding this
scheme's motivation.

-Dave




Re: [bitcoin-dev] Travel rule, VASP UID and bitcoin URI - A new BIP

2021-07-16 Thread David A. Harding via bitcoin-dev
On Fri, Jul 16, 2021 at 04:35:21PM +0200, Karel Kyovsky via bitcoin-dev wrote:
> I would like to propose a standardization of [a new] bitcoin URI parameter 
> name
> [...]
> My question is: Should I prepare a completely new BIP or should I prepare a
> modification of BIP21?

Please use a new BIP.  See BIP72 for a previous instance where another
URI parameter for BIP21 was standardized.
https://github.com/bitcoin/bips/blob/master/bip-0072.mediawiki

(I think your compliance situation is mostly off topic for this list, so
I'm not commenting on that.)

-Dave






[bitcoin-dev] Unlimited covenants, was Re: CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-04 Thread David A. Harding via bitcoin-dev
On Sun, Jul 04, 2021 at 11:39:44AM -0700, Jeremy wrote:
> However, I think the broader community is unconvinced by the cost benefit
> of arbitrary covenants. See
> https://medium.com/block-digest-mempool/my-worries-about-too-generalized-covenants-5eff33affbb6
> as a recent example. Therefore as a critical part of building consensus on
> various techniques I've worked to emphasize that specific additions do not
> entail risk of accidentally introducing more than was bargained for to
> respect the concerns of others.

Respecting the concerns of others doesn't require lobotomizing useful
tools.  Being respectful can also be accomplished by politely showing
that their concerns are unfounded (or at least less severe than they
thought).  This is almost always the better course IMO---it takes much
more effort to satisfy additional engineering constraints (and prove to
reviewers that you've done so!) than it does to simply discuss those
concerns with reasonable stakeholders.  As a demonstration, let's look
at the concerns from Shinobi's post linked above:

They seem to be worried that some Bitcoin users will choose to accept
coins that can't subsequently be fungibily mixed with other bitcoins.
But that's already been the case for a decade: users can accept altcoins
that are non-fungible with bitcoins.

They talk about covenants where spending is controlled by governments,
but that seems to me exactly like China's CBDC trial.

They talk about exchanges depositing users' BTC into a covenant, but 
that's just a variation on the classic not-your-keys-not-your-bitcoins
problem.  For all you know, your local exchange is keeping most of its
BTC balance commitments in ETH or USDT.

To me, it seems like the worst-case problems Shinobi describes with
covenants are some of the same problems that already exist with
altcoins.  I don't see how recursive covenants could make any of those
problems worse, and so I don't see any point in limiting Bitcoin's
flexibility to avoid those problems when there are so many interesting
and useful things that unlimited covenants could do.

-Dave




Re: [bitcoin-dev] CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-03 Thread David A. Harding via bitcoin-dev
On Sat, Jul 03, 2021 at 09:31:57AM -0700, Jeremy via bitcoin-dev wrote:
> Note that with *just* CheckSigFromStack, while you can do some very
> valuable use cases, but without OP_CAT it does not enable sophisticated
> covenants

Do you have concerns about sophisticated covenants, and if so, would you
mind describing them?  Your BIP119 CTV also mentions[1] being designed
to avoid sophisticated covenants.  If this is some sort of design
principle, I'd like to understand the logic behind it.

I'm a fan of CSFS, even mentioning it on zndtoshi's recent survey[2],
but it seems artificially limited without OP_CAT.  (I also stand by my
answer on that survey of believing there's a deep lack of developer
interest in CSFS at the moment.  But, if you'd like to tilt at that
windmill, I won't stop you.)

-Dave

[1] 
https://github.com/bitcoin/bips/blob/master/bip-0119.mediawiki#design-tradeoffs-and-risks

[2] https://twitter.com/zndtoshi/status/1405235814712422402





Re: [bitcoin-dev] BIP Proposals for Output Script Descriptors

2021-07-03 Thread David A. Harding via bitcoin-dev
On Sat, Jul 03, 2021 at 10:35:48AM +0200, Craig Raw wrote:
> There is a downside to using "h"/"H" from a UX perspective - taking up more
> space 

Is this a serious concern of yours?  An apostrophe is 1/2 en; an "h" is
1 en; the following descriptor contains three hardened derivations in 149
characters; assuming the average non-'/h character width is 1.5 en, the
difference between 207 en and 208.5 en is barely more than half a
percent.


pkh([d34db33f/44h/0h/0h]xpub6ERApfZwUNrhLCkDtcHTcxd75RbzS1ed54G1LkBUHQVHQKqhMkhgbmJbZRkrgZw4koxb5JaHWkY4ALHY2grBGRjaDMzQLcgJvLJuZZvRcEL/1/*)#ml40v0wf

Here's a direct visual comparison: 
https://gist.github.com/harding/2fbbf2bfdce04c3e4110082f03ae3c80

> appearing as alphanumeric characters similar to the path numbers

First, I think you'd have to be using an awful font to confuse "h" with
any Arabic numeral.  Second, avoiding transcription errors is exactly
why descriptors now have checksums.

> they make derivation paths and descriptors more difficult to read.

The example descriptor pasted above looks equally (un)readable to me
whether it uses ' or h.

> Also, although not as important, less efficient when making metal
> backups.

I think many metal backup schemes are using stamps or punch grids that
are fixed-width in nature, so there's no difference either way.  (And
you can argue that h is better since it's part of both the base58check
and bech32 character sets, so you already need a stamp or a grid row for
it---but ' is otherwise unused, so a stamp or grid row for it would be
special).

But even if people are manually etching descriptors into metal, we're
back to the original point where we're looking at something like a 0.7%
difference in "efficiency".

By comparison, the Bitcoin Core issue I cited in my earlier post
contains several examples of actual users needing technical support
because they tried to use '-containing descriptors in a bourne-style
shell.  (And I've personally lost time to that class of problems.)  In
the worst case, a shell-quoting accident can cause loss of money by
sending bitcoins to the descriptor for a key your hardware signing
device won't sign for.  I think these problems are much more serious
than using a tiny bit of extra space in a GUI or on a physical backup
medium.

-Dave




Re: [bitcoin-dev] BIP Proposals for Output Script Descriptors

2021-07-02 Thread David A. Harding via bitcoin-dev
On Tue, Jun 29, 2021 at 09:14:39PM +, Andrew Chow via bitcoin-dev wrote:
> *** Optionally followed by a single /* or /*' final
> step to denote all direct unhardened or hardened children.
> 
> [...]
> 
> In the above specification, the hardened indicator ' may be
> replaced with alternative hardened indicators of h or H.

Is there any chance we can take this opportunity to make "h"/"H" the
preferred aliases?  Using "'" in bourne-style shells is very
annoying[1], and I suspect it's also creating unnecessary complications
elsewhere.

Alternatives:

- Completely kill "'" (I'd prefer this, but I realize it's complicated
  with descriptors already being used widely).  If "h"/"H" are made the
  preferred aliases, maybe it'd be enough to make implementing "'" a
  SHOULD rather than a MUST; this would push implementations towards
  displaying descriptors using the h versions for maximum compatibility.

- Calculate the checksum over s/(h|H)/'/ (again, I know that's
  complicated with descriptors already widely used)
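The second alternative amounts to normalizing the hardened markers
before checksumming, so that the h-spelled and '-spelled forms of the
same descriptor share one checksum.  A rough sketch (function name
mine; a production implementation should tokenize derivation paths
rather than use a regex, since "h" also appears in script names like
pkh() and wpkh()):

```python
import re

def normalize_hardened_markers(descriptor: str) -> str:
    """Replace the hardened-derivation aliases h/H with the canonical
    apostrophe.  The digit lookbehind restricts the substitution to
    path elements like 44h, leaving pkh(/wpkh( untouched."""
    return re.sub(r"(?<=\d)[hH]", "'", descriptor)
```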

Thanks,

-Dave

[1] https://github.com/bitcoin/bitcoin/issues/15740#issuecomment-695815432





Re: [bitcoin-dev] [Lightning-dev] Waiting SIGHASH_ANYPREVOUT and Packing Packages

2021-06-19 Thread David A. Harding via bitcoin-dev
On Fri, Jun 18, 2021 at 06:11:38PM -0400, Antoine Riard wrote:
> 2) Solving the Pre-Signed Feerate problem : Package-Relay or
> SIGHASH_ANYPREVOUT
> 
> For Lightning, either package-relay or SIGHASH_ANYPREVOUT should be able to
> solve the pre-signed feerate issue [3]
>
> [...]
>
> [3] I don't think there is a clear discussion on how SIGHASH_ANYPREVOUT
> solves pinnings beyond those LN meetings logs:
> https://gnusha.org/lightning-dev/2020-06-08.log

For anyone else looking, the most relevant line seems to be:

  13:50 < BlueMatt> (sidenote: sighash_no_input is *really* elegant here
  - assuming a lot of complicated logic in core to do so, you could
  imagine blind-cpfp-bumping *any* commitment tx without knowing its
  there or which one it is all with one tx...in theory)

That might work for current LN-penalty, but I'm not sure it works for
eltoo.  If Bitcoin Core can rewrite the blind CPFP fee bump transaction
to refer to any prevout, that implies anyone else can do the same.
Miners who were aware of two or more states from an eltoo channel would
be incentivized to rewrite to the oldest state, giving them fee revenue
now and ensuring fee revenue in the future when a later state update is
broadcast.

If the attacker using pinning is able to reuse their attack at no cost,
they can re-pin the channel again and force the honest user to pay
another anyprevout bounty to miners.  Repeat this a bunch of times and
the honest user has now spent more on fees than their balance from the
closed channel.

Even if my analysis above is wrong, I would encourage you or Matt or
someone to write up this anyprevout idea in more detail and distribute
it before you promote it much more.

> package-relay sounds a reasonable, temporary "patch".

Even if every protocol based on presigned transactions can magically
allow dynamically adding inputs and modifying outputs for fees, and we
also have a magic perfect transaction replacement protocol, package
relay is still fundamentally useful for CPFP fee bumping very low
feerate transactions received from an external party.  E.g. Alice pays
Bob, mempool min feerates increase and Alice's transaction is dropped,
Bob still wants the money, so he submits a package with Alice's
transaction plus his own high feerate spend of it.
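The arithmetic a miner would apply to Bob's package is simple enough
to sketch (numbers below are illustrative):

```python
def package_feerate(parent_fee: int, parent_vsize: int,
                    child_fee: int, child_vsize: int) -> float:
    """Effective feerate (sat/vB) when the parent and child are
    evaluated together, as in CPFP via package relay."""
    return (parent_fee + child_fee) / (parent_vsize + child_vsize)
```

For example, if Alice's 200-vbyte transaction pays only 200 sat
(1 sat/vB) and the mempool minimum has risen to 5 sat/vB, Bob can
attach a 150-vbyte child paying 2,800 sat; the package then pays
3,000 sat over 350 vbytes, about 8.6 sat/vB, enough to be accepted.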

Package relay is a clear improvement now, and one I expect to be
permanent for as long as we're using anything like the current protocol.
 
> # Deployment timeline
> 
> So what I believe as a rough deployment timeline.

I don't think it's appropriate to be creating timelines like this that
depend on the work of a large number of contributors who I don't believe
you've consulted.  For details on this point of view, please see
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-July/014726.html

Stuff will get done when it gets done.

-Dave




Re: [bitcoin-dev] Reminder on the Purpose of BIPs

2021-04-26 Thread David A. Harding via bitcoin-dev
On Sun, Apr 25, 2021 at 05:31:50PM -0400, Matt Corallo via bitcoin-dev wrote:
> In general, I think its time we all agree the BIP process has simply failed
> and move on. Luckily its not really all that critical and proposed protocol
> documents can be placed nearly anywhere with the same effect.

I recommend:

1. We add additional BIP editors, starting with Kalle Alm (if there are
   no continuing significant objections).

2. We seek Luke Dashjr's resignation as BIPs editor.

3. We begin treating protocol documents outside the BIPs repository as
   first-class BIP documentation.

The first recommendation permits continued maintenance of existing BIPs
plus gives the additional maintainers an opportunity to rebuild the
credibility of the repository.

The second recommendation addresses the dissatisfaction of many BIP
authors and potential authors with the current editor, which I think
will discourage many of them from making additional significant
contributions to the repository.  It also seems to me to be a better use
of Luke's talents and interests for him to focus on protocol research
and review rather than procedurally checking whether a bunch of
documents are well formed.

The third recommendation provides an escape hatch for anyone, such as
Matt, who currently thinks the process has failed, or for anyone who
comes to that same conclusion in the future under a different editing
team.  My specific recommendations there are:

a. Anyone writing protocol documentation in the spirit of the BIP
   process can post their idea to this mailing list like we've always
   done and, when they've finished collecting initial feedback, they can
   assign themselves a unique decentralized identifier starting with
   "bip-".  They may also define a shorter alias that they encourage
   people to use in cases where the correct document can be inferred
   from context.  E.g.,

  bip-wuille-taproot (bip-taproot)
  bip-towns-versionbits-min-activation-height (bip-vbmah)
  bip-todd-harding-opt-in-replace-by-fee (bip-opt-in-rbf)

b. The author then publishes the document to any place they'd like, although
   they are strongly encouraged to make any document source available
   under an open license to ensure others can create their own
   modifications.

c. Implementations of BIPs, whether original repository BIPs or
   decentralized BIPs, link to the BIPs they implement to ensure
   researchers and developers can find the relevant protocol
   documentation.  E.g.,
   
https://github.com/bitcoin/bitcoin/blob/fe5e495c31de47b0ec732b943db11fe345d874af/doc/bips.md

 (It may also be advisable for implementations to mirror copies of
 the BIPs they implement so later modifications to the document
 don't confuse anyone.  For this reason, extremely liberal
 licensing of BIP documents is encouraged.)

d. To help maintain quality and consistency between documentation, the
   BIP editors provide a BIP document template, guidelines similar to
   the existing BIP2, and an easy-to-run format linter.
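A linter along those lines could be as simple as the sketch below (the required preamble fields and the bip- identifier rule are my own illustrative assumptions, not a published template):

```python
import re

# Minimal sketch of a format linter for decentralized BIP documents.
REQUIRED_FIELDS = ("Title", "Author", "Status")  # assumed field names

def lint(text):
    problems = []
    lines = text.splitlines()
    # Assumed convention: the first line is the document's identifier.
    if not lines or not re.match(r"^bip-[a-z0-9-]+$", lines[0]):
        problems.append("first line should be the document's bip- identifier")
    for field in REQUIRED_FIELDS:
        if not re.search(rf"^{field}: \S", text, re.MULTILINE):
            problems.append(f"missing preamble field: {field}")
    return problems
```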

I think this decentralized BIPs alternative also helps address some
longstanding problems with the BIPs system: that many casual Bitcoin
users and developers think of documents in the BIPs repo as
authoritative and that there are some development teams (such as for LN)
that have already abandoned the BIPs process because, in part, they want
complete control over their own documentation.  

The recommendations above were developed based on conversations I had
with a few stakeholders in the BIPs process, but I did not attempt a
comprehensive survey and I certainly don't claim to speak for anyone
else.  I hope the recommendations are satisfactory and I look forward to
your feedback.

Thanks,

-Dave


signature.asc
Description: PGP signature
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposed BIP editor: Kalle Alm

2021-04-25 Thread David A. Harding via bitcoin-dev
On Sat, Apr 24, 2021 at 04:42:12AM +, Greg Maxwell via bitcoin-dev wrote:
> I am opposed to the addition of Kalle Alm at this time.  Those who
> believe [this] will resolve the situation [...] re: PR1104 are
> mistaken.

PR1104 has been merged.  Do you continue to oppose the addition?

Thanks,

-Dave


signature.asc
Description: PGP signature
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Pre-BIP] Motivating Address type for OP_RETURN

2021-04-24 Thread David A. Harding via bitcoin-dev
On Sat, Apr 24, 2021 at 01:05:25PM -0700, Jeremy wrote:
> I meant the type itself is too wide, not the length of the value. As in
> Script can represent things we know nothing about. 

I guess I still don't understand your concern, then.  If script can
represent things we know nothing about, then script commitments such as
P2SH, P2WSH, and P2TR also represent things we know nothing about.  All
you know is what container format they used.  For P2PK, bare multisig,
OP_RETURN, and other direct uses of scriptPubKey, that container format
is "bare" (or whatever you want to call it).

> Btw: According to... Oh wait... You?
> https://bitcoin.stackexchange.com/questions/35878/is-there-a-maximum-size-of-a-scriptsig-scriptpubkey
> the max size is 10k bytes.

I'm not sure what I knew at the time I wrote that answer, but the 10,000
byte limit is only applied when EvalScript is run, which only happens
when the output is being spent.  I've appended to this email a
demonstration of creating a 11,000 byte OP_RETURN on regtest (I tried
999,000 bytes but ran into problems with bash's maximum command line
length limit).  I've updated the answer to hopefully make it more
correct.

> Is it possible/easy to, say, using bech32m make an inappropriate message in
> the address? You'd have to write the message, then see what it decodes to
> without checking, and then re encode? I guess this is worse than hex?

If someone wants to abuse bech32m, I suspect they'll do it the same way
people have abused base58check[1], by using the address format's
alphabet directly.  E.g., you compose your message using only
the characters qpzry9x8gf2tvdw0s3jn54khce6mua7l and then append
the appropriate checksum.

[1] https://en.bitcoin.it/wiki/P2SH%C2%B2#The_problem:_storing_data_in_hashes
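For concreteness, here is a sketch of that abuse, adapted from my reading of the BIP350 reference code: spell a message using only charset characters, then append a valid bech32m checksum.

```python
# Bech32m encoding per BIP350 (reference-style implementation).
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
BECH32M_CONST = 0x2bc830a3

def polymod(values):
    GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        top = chk >> 25
        chk = ((chk & 0x1ffffff) << 5) ^ v
        for i in range(5):
            if (top >> i) & 1:
                chk ^= GEN[i]
    return chk

def hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def create_checksum(hrp, data):
    pm = polymod(hrp_expand(hrp) + data + [0] * 6) ^ BECH32M_CONST
    return [(pm >> 5 * (5 - i)) & 31 for i in range(6)]

def encode(hrp, data):
    return hrp + "1" + "".join(CHARSET[d] for d in data + create_checksum(hrp, data))

# "Message" spelled only with charset characters; the result checksums
# correctly even though no sane wallet would generate such a string.
message = "qdeadmea7"
print(encode("bc", [CHARSET.index(c) for c in message]))
```

Note that a validating wallet would still reject this as an address (wrong witness program length), but the string itself passes the checksum, which is all a graffiti artist needs for display purposes.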

> But it seems this is a general thing... If you wanted an inappropriate
> message you could therefore just use bech32m addressed outputs.

Yes, and people have done that with base58check.  IsStandard OP_RETURN
attempts to minimize that abuse by being cheaper in two ways:

1. More data allowed in the scriptPubKey, e.g. 80 byte payload (81
   actually, I think) for OP_RETURN versus 40 bytes for a BIP141 payload.
   Maximizing payload size better amortizes the overhead cost of the
   containing transaction and the output's nValue field.

2. Exemption from the dust limit.  If you use a currently defined
   address type, the nValue needs to pay at least a few thousand nBTC
   (few hundred satoshis), about $0.15 USD minimum at $50k USD/BTC.  For
   OP_RETURN, the nValue can be 0, so there's no additional cost beyond
   normal transaction relay fees.
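Spelled out with assumed numbers (294 sat is the common P2WPKH dust limit; the paragraph above uses a round few-hundred-satoshi figure):

```python
# Back-of-the-envelope cost comparison between carrying data in an
# address-encoded output versus an OP_RETURN output.  Assumptions:
# 294 sat dust limit (typical for P2WPKH) and $50,000 USD/BTC.
SAT_PER_BTC = 100_000_000
usd_per_btc = 50_000

dust_sat = 294                 # varies by output type
dust_usd = dust_sat * usd_per_btc / SAT_PER_BTC

op_return_nvalue = 0           # OP_RETURN outputs are exempt from the limit

print(f"address-carried data burns >= {dust_sat} sat (~${dust_usd:.2f}) per output")
print(f"OP_RETURN carries data for {op_return_nvalue} sat beyond relay fees")
```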

Although someone creating an OP_RETURN up to ~1 MB with miner support
can bypass the dust limit, the efficiency advantage remains no matter
what.

> One of the nice things is that the current psbt interface uses a blind
> union type whereby the entires in an array are either [address, amount] or
> ["data", hex]. Having an address type would allow more uniform handling,
> which is convenient for strongly typed RPC bindings (e.g. rust bitcoin uses
> a hashmap of address to amount so without a patch you can't create op
> returns).

I don't particularly care how the data in PSBTs are structured.  My mild
opposition was to adding code to the wallet that exposes everyday users
to OP_RETURN addresses.

> I would much prefer to not have to do this in a custom way, as opposed
> to a way which is defined in a standard manner across all software
> (after all, that's the point of standards).

I'm currently +0.1 on the idea of an address format of OP_RETURN, but I
want to make sure this isn't underwhelmingly motivated or will lead to a
resurgence of block chain graffiti.

-Dave

## Creating an 11,000 byte OP_RETURN

$ bitcoind -daemon -regtest -acceptnonstdtxn
Bitcoin Core starting

$ bitcoin-cli -regtest -generate 101
{
  "address": "bcrt1qh9uka5z040vx2rc3ltz3tpwmq4y2mt0eufux9r",
  "blocks": [
[...]
}

$ bitcoin-cli -regtest send '[{"data": "'$( dd if=/dev/zero bs=1000 count=11 | xxd -g0 -p | tr -d '\n' )'"}]'
11+0 records in
11+0 records out
11000 bytes (11 kB, 11 KiB) copied, 0.000161428 s, 68.1 MB/s
{
  "txid": "ef3d396c7d21914a2c308031c9ba1857694fc33df71f5a349b409ab3406dab51",
  "complete": true
}

$ bitcoin-cli -regtest getrawmempool
[
  "ef3d396c7d21914a2c308031c9ba1857694fc33df71f5a349b409ab3406dab51"
]

$ bitcoin-cli -regtest -generate 1
{
  "address": "bcrt1qlzjd90tkfkr09m867zxhte9rqd3t03wc5py5zh",
  "blocks": [
"2986e9588c5bd26a629020b1ce8014d1f4ac9ac19106d216d3abb3a314c5604b"
  ]
}

$ bitcoin-cli -regtest getblock 2986e9588c5bd26a629020b1ce8014d1f4ac9ac19106d216d3abb3a314c5604b 2 | jq .tx[1].txid
"ef3d396c7d21914a2c308031c9ba1857694fc33df71f5a349b409ab3406dab51"




Re: [bitcoin-dev] [Pre-BIP] Motivating Address type for OP_RETURN

2021-04-23 Thread David A. Harding via bitcoin-dev
On Tue, Apr 20, 2021 at 08:46:07AM -0700, Jeremy via bitcoin-dev wrote:
> Script is technically "too wide" a type as what I really want is to
> only return coins with known output types.

I don't understand this concern.  If script is too wide a type, then
OP_RETURN being a scriptPubKey of arbitrary length up to almost a
million bytes is also going to be too wide, right?

> 1) Should it be human readable & checksummed or encoded?

It should absolutely not be human readable in the sense of being
meaningful to humans.  We've seen in the past that tools and sites that
display OP_RETURN data as ASCII encourage people to put text in the
block chain that is offensive and illegal.  This puts people running
nodes at risk of social and legal intervention.  Bitcoin's
permissionless nature means we can't stop people from creating such
problems, but we can lower the risk by having our tools default to
meaningless representations of OP_RETURN data.

The best advice I've seen is to display OP_RETURN data in hex.  It's
still possible to say things like "dead beef" with that, but significant
abuse is hard.  This will, of course, make even 80 byte OP_RETURN
"addresses" very long.

> 2) Should it have a fixed length of max 40-80 bytes or should we support
> arbitrary length strings?

If it doesn't support the full range, somebody's just going to complain
later and there will have to be a v2 address.

> 3) Should it be possible (i.e., from core) to pay into such an OP_RETURN or
> should we categorize OP_RETURNS as a non-payable address type (and just use
> it for parsing blockdata)

I don't think including arbitrary data in the block chain is something
that's currently useful for typical end users, and applications that
want to use OP_RETURN with Bitcoin Core can already call
create(psbt|rawtransaction) with the `data` field, so I'd be mildly
opposed to including such a feature in Bitcoin Core's wallet.  If at
least a few other wallets add the feature to pay OP_RETURN "addresses"
and it seems popular, then I'm wrong and so I would probably then change
my position.

Regarding "parsing block data", I don't think there's any need to change
Bitcoin Core's current representation of OP_RETURN outputs (which is
just showing the hex-encoded script in RPC output).  For any program
needing OP_RETURN output, hex format is going to be the next best
thing to getting it in raw binary.  Any other address format is going to
be equal or more work.

Additionally, as mentioned in the other thread about OP_RETURN this
week, increasing transaction fees should increasingly push uses of
OP_RETURN off the network or into more efficient constructions, so it
doesn't seem warranted to me to spend a lot of time trying to optimize
how we use it when we'll be using it less and less over time.

-Dave




Re: [bitcoin-dev] Update on "Speedy" Trial: The circus rolls on

2021-04-08 Thread David A. Harding via bitcoin-dev
On Thu, Apr 08, 2021 at 12:40:42PM +0100, Michael Folkson via bitcoin-dev wrote:
> So the latest circus act is apparently a technical decision made by a
> coin toss [organized by] Jeremy Rubin

Actually, the coin toss was my idea[1], used a bash oneliner I wrote[2],
and is the same method I've been using in Bitcoin-related discussions
for over seven years[3] to help people transition from ancillary arguments
back to working on the things they really think are important.

I proposed the coin toss because I understood that both the MTP and the
height approaches required tradeoffs that were, to a certain degree,
unresolvable to the best of our current knowledge.  MTP is harder to
analyze for unexpected edge cases; heights would create extra work for
seasoned developers working on post-taproot soft forks.  MTP would
require developers of currently-planned UASF software either do extra
work or wait to release their software; heights don't guarantee a
minimum amount of time for a large number of economic full nodes to
upgrade.

Different people gave different weights to the different tradeoffs.  In
cases like this where there's no known way to eliminate the tradeoffs
and no way to objectively rank them, I think it's better to begin
working on something concrete than it is to try to persuade everyone to
adopt the same subjective ranking of the tradeoffs---or, as the IETF
published in RFC7282:

"There are times where the result of [an informal open-ended
conversation] is a pretty even split.  In practical terms, that
means it doesn't matter where the chair starts the discussion.  And
in fact, we've had working groups where a coin flip decided which
proposal to start with.  That doesn't mean that the coin flip
determined the outcome; if a fatal technical flaw was found in the
solution that won the coin flip, it is still incumbent upon the
group to address the issue raised or abandon that solution and find
another.  Rough consensus on the technical points, in the end, is
always required.  Any way to find a place to start, be it the hum or
the coin flip, is only getting to the beginning of the discussion,
not the end."

As Jeremy wrote, on this occasion, we didn't actually need the coin
toss.  The authors of the two PRs we were considering found a compromise
solution that seems to be good enough for both of them and which so far
seems to be good enough for the handful of people who agreed to the coin
toss (plus, it seems, several others who didn't agree to the toss).

In short, I think the coin toss was a good attempt.  Although we didn't
use its results this time, I think it's something we should keep in our
toolkit for the future when a group of people want to coordinate their
work on getting *a* solution released, even in cases where they don't
necessarily start out in agreement about which solution is best.

> I dread to think what individuals and businesses all over the world
> who have plans to utilize and build on Taproot are making of all of
> this. 

Geeks arguing over minutiae is a well-established stereotype.  That we've
conformed to that stereotype in this case is not great---but I don't
think it does us any significant reputational harm.  I hope those
individuals and businesses awaiting taproot are discerning enough to
realize that the method we use to activate taproot has nothing to do
with taproot itself.  I hope they realize that it remains the case that
there is nearly universal support for taproot from every entity that has
so far commented on it.

Hopefully we've made progress on Speedy Trial this week, that progress
will continue and we'll be able to release activation-ready software
soon, miners will be willing to signal for taproot, and we'll soon be
able to end this chapter in Bitcoin's storied history of soft fork
activations.[4]  (But I look forward to continued discussion about
better activation mechanisms for the future---if taproot locks in
quickly, I'd love to see human consensus form around a follow-up
deployment even before taproot reaches activation.)

Respectfully,

-Dave

[1] http://gnusha.org/taproot-activation/2021-04-04.log " [...]
If that's not our goal and we just want to give miners a chance to
activate taproot as soon as possible (which was certainly my original
objective in supporting ST), I'm personally happy with either MTP or
heights, and I'd be willing to join others in putting my effort behind
just one of them based on fair random chance."

[2] http://gnusha.org/taproot-activation/2021-04-04.log "18:09 <
harding> e.g.:   bitcoin-cli getblockhash 123456 | cut -b64 | grep -q
'[02468ace]' && echo MTP || echo height"

[3] E.g.,
https://github.com/bitcoin-dot-org/Bitcoin.org/pull/589#discussion_r18314009
and 
https://github.com/bitcoin-dot-org/Bitcoin.org/pull/566#issuecomment-56281595

[4] https://bitcoinops.org/en/topics/soft-fork-activation/



Re: [bitcoin-dev] Taproot Activation Meeting Reminder: April 6th 19:00 UTC bitcoin/bitcoin-dev

2021-04-06 Thread David A. Harding via bitcoin-dev
(Replies to multiple emails)

On Tue, Apr 06, 2021 at 12:27:34PM -0400, Russell O'Connor wrote:
> It isn't  "$MIN_LOCKIN_TIME + $((10 * 2016)) minutes". It's
> "$MIN_LOCKIN_TIME + time until next retargeting period + $((10 * 2016))
> minutes".

Ah, drat, I forgot about that.  Thank you for correcting my oversight!

> That doesn't seem like a particularly important design goal to me? Having
> a last minute two week delay seems easy to deal with

From my perspective, that of a person focused on communicating
information that affects Bitcoin users and recommending infrastructure
adjustments that should be made to accommodate those changes, I'd find
having a predictable activation date to be of significant benefit.
Given that, an activation scheme that could provide a tight timeline
(only delayable, not accelerable, by miner shenanigans) would be
something I'd consider an advantage of that method.

That said, it's probably not worth making the activation state machine
more complicated when the simplicity of the height-based activation
machine is its chief touted benefit.

Thanks,

-Dave




Re: [bitcoin-dev] Taproot Activation Meeting Reminder: April 6th 19:00 UTC bitcoin/bitcoin-dev

2021-04-06 Thread David A. Harding via bitcoin-dev
On Tue, Apr 06, 2021 at 10:34:57AM -0400, Russell O'Connor via bitcoin-dev 
wrote:
> The other relevant value of giving enough time for users to upgrade is not
> very sensitive.  It's not like 180 days is magic number that going over is
> safe and going below is unsafe.

I don't think it's the 180 days value that's important but the deadline
to upgrade before taproot activates.  With heights, some people will be
conservative and say:

  You need to upgrade by $( date -d "66 days" )

Some people will just assume 10 minutes and say:

  You need to upgrade by $( date -d "$((10 * 2016 * 13)) minutes" )

Some people might assume 9 minutes, which I think is roughly our
historic average:

  You need to upgrade by $( date -d "$((9 * 2016 * 13)) minutes" )

As a few weeks pass and the number of blocks left until activation
decreases, it's likely everyone will be saying slightly different dates.
Basically, it'll be like a few months before the recent halving where
you could go to different sites that would give you wildly different
estimates---several of them claiming to be better than the others
because they factored in .

We're stuck with that for halvings, but I think coordinating human
actions around heights creates unnecessary confusion.

Using Towns's updated MTP doesn't eliminate this problem, but it reduces
it significantly, especially early in the process.  Now conservative
estimators can say:

  You need to upgrade by $( date -d "$MIN_LOCKIN_TIME + 11 days" )

Ten minute estimators can say:

  You need to upgrade by $( date -d "$MIN_LOCKIN_TIME + $((10 * 2016)) minutes" )

And nine minute estimators can say:

  You need to upgrade by $( date -d "$MIN_LOCKIN_TIME + $((9 * 2016)) minutes" )

Those predictions are unlikely to change until shortly before the
lockin period.

I think those dates being much closer together (within 3 days) and
static for several months makes it much easier to communicate to users
(including organizations) the date by which they should upgrade if they
want to help enforce the soft fork's new rules.
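The "within 3 days" claim can be checked directly (MIN_LOCKIN_TIME below is a hypothetical placeholder date):

```python
from datetime import datetime, timedelta

# Spread between the three estimators described above, anchored to an
# assumed MIN_LOCKIN_TIME; one retarget period (2016 blocks) remains.
min_lockin_time = datetime(2021, 8, 1)  # hypothetical

estimates = {
    "conservative": min_lockin_time + timedelta(days=11),
    "10-min blocks": min_lockin_time + timedelta(minutes=10 * 2016),
    "9-min blocks": min_lockin_time + timedelta(minutes=9 * 2016),
}
spread = max(estimates.values()) - min(estimates.values())
print(spread)  # 3 days, 0:00:00
```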

As a side advantage, it also makes it easier to plan activation parties,
which is something I think we'll especially want after we finally start
doing something useful with the bikeshed we repainted so many times.  :-)

-Dave




Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-03-15 Thread David A. Harding via bitcoin-dev
On Mon, Mar 15, 2021 at 09:48:15PM +, Luke Dashjr via bitcoin-dev wrote:
> Note that in all circumstances, Bitcoin is endangered when QC becomes
> a reality [...] it could very well become an unrecoverable situation
> if QC go online prior to having a full quantum-safe solution.

The main concern seems to be someone developing, in secret, a quantum
computer with enough capacity to compromise millions of keys and then
deciding to use the most powerful and (probably) expensive computer ever
developed to steal coins that will almost immediately lose most or all
of their value.

That's certainly a threat we should consider, but like other "movie
plot" threats, I think we should weigh its unlikeliness in comparison
to the people who are losing smaller amounts of money on a regular basis
right now because we don't already have taproot---people who don't use
multisig, or contracts with threshold reduction timeout clauses, or
certain types of covenants because the contingencies these types of
scripts protect against come at too great a cost in fees for the typical
case where no contingencies are needed.

We have many ideas about how to mitigate the risk of effective quantum
computing attacks, from emergency protection to long-term solutions, so
it seems to me that the real risk in the movie plot scenario comes
entirely from *secret advances* in quantum computing.  Other similar
risks for Bitcoin exist, such as secret discoveries about how to
compromise the hash functions Bitcoin depends on.  One way to help
control those risks is to pay a public bounty to anyone who provably and
publicly discloses the secret advance (ideally while allowing the leaker
to remain anonymous).  Several years ago, Peter Todd created a series of
Bitcoin addresses that does exactly that.[1]

For example, if you pay 35Snmmy3uhaer2gTboc81ayCip4m9DT4ko, then
anyone[2] who can prove a collision attack against Bitcoin's primary
hash function, SHA256, will be able to claim the bitcoins you and other
people sent to that address.  Their claim of the funds will publicly
demonstrate that someone can create a SHA256 collision, which is an
attack we currently believe to be impractical.  This system was
demonstrated to work about four years ago[3] when the collision bounty
for the much weaker SHA1 function was claimed.[4]
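Conceptually, the bounty condition is just the following (plain Python standing in for the actual Bitcoin script that enforces it):

```python
import hashlib

def collision_proves_break(a: bytes, b: bytes) -> bool:
    # Two *distinct* preimages with the same SHA256 digest prove a
    # collision; the real bounty enforces this in script, not Python.
    return a != b and hashlib.sha256(a).digest() == hashlib.sha256(b).digest()

# No SHA256 collision is publicly known, so any cheap attempt fails:
print(collision_proves_break(b"hello", b"hello"),
      collision_proves_break(b"hello", b"world"))
```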

We can already create an output script with a Nothing Up My Sleeve
(NUMS) point that would provide a trustless bounty to anyone proving the
capability to steal any P2PK-style output with secp256k1's 128-bit
security.  I'm curious about whether anyone informed about ECC and QC
knows how to create output scripts with lower difficulty that could be
used to measure the progress of QC-based EC key cracking.  E.g.,
NUMS-based ECDSA- or taproot-compatible scripts with a security strength
equivalent to 80, 96, and 112 bit security.

That way the people and businesses concerned about QC advances could
send a few BTC to those addresses to create a QC early warning system
that would allow us to continue confidently working on EC-based
protocols for now but also objectively alert us to when we need to shift
to working on post-QC protocols for the future.

Thank you,

-Dave

[1] https://bitcointalk.org/index.php?topic=293382.0

[2] Anyone claiming the reward may need to mine their own transaction to
protect it against rewriting.  In the worst case, they may need to
mine at a depth of several blocks or share their reward with miners
to prevent sniping reorgs.

[3] 
https://blockstream.info/tx/8d31992805518fd62daa3bdd2a5c4fd2cd3054c9b3dca1d78055e9528cff6adc

[4] To the best of my knowledge, nothing in Bitcoin ever depended
significantly on SHA1, and especially not on SHA1 collision
resistance, which was known to be weak even in 2009 when Nakamoto
first published the Bitcoin software.




Re: [bitcoin-dev] Taproot activation proposal "Speedy Trial"

2021-03-06 Thread David A. Harding via bitcoin-dev
On Sat, Mar 06, 2021 at 01:11:01PM -0500, Matt Corallo wrote:
> I'm really unsure that three months is a short enough time window that there
> wouldn't be a material effort to split the network with divergent consensus
> rules. 

I oppose designing activation mechanisms with the goal of preventing
other people from effectively exercising self determination over what
consensus rules their nodes enforce.

Three months was chosen because it's long enough to give miners a
reasonable enough amount of time to activate taproot but it's also short
enough that it doesn't delay any of the existing proposals with roughly
one-year timelines.  As such, I think it has the potential to gain
acceptance from multiple current factions (even if it doesn't ever gain
their full approval), allowing us to move forward with rough social
consensus and to gain useful information from the attempt that can
inform future decisions.

-Dave




[bitcoin-dev] Taproot activation proposal "Speedy Trial"

2021-03-05 Thread David A. Harding via bitcoin-dev
On the ##taproot-activation IRC channel, Russell O'Connor recently
proposed a modification of the "Let's see what happens" activation
proposal.[1] The idea received significant discussion and seemed
acceptable to several people who could not previously agree on a
proposal (although this doesn't necessarily make it their first
choice).  The following is my attempt at a description.

1. Start soon: shortly after the release of software containing this
   proposed activation logic, nodes will begin counting blocks towards
   the 90% threshold required to lock in taproot.[2]

2. Stop soon: if the lockin threshold isn't reached within approximately
   three months, the activation attempt fails.  There is no mandatory
   activation and everyone is encouraged to try again using different
   activation parameters.
   
3. Delayed activation: in the happy occasion where the lockin threshold
   is reached, taproot is guaranteed to eventually activate---but not
   until approximately six months after signal tracking started.

## Example timeline

(All dates approximate; see the section below about BIP9 vs BIP8.)

- T+0: release of one or more full nodes with activation code
- T+14: signal tracking begins
- T+28: earliest possible lock in
- T+104: locked in by this date or need to try a different activation process
- T+194: activation (if lockin occurred)
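As a concrete reading of that timeline (the T+0 release date below is hypothetical; all offsets are approximate days, per the note above):

```python
from datetime import date, timedelta

release = date(2021, 4, 1)  # hypothetical T+0 release date
milestones = [
    (14, "signal tracking begins"),
    (28, "earliest possible lock in"),
    (104, "locked in by this date or try different parameters"),
    (194, "activation (if lockin occurred)"),
]
for offset, label in milestones:
    print(f"T+{offset:3d}  {release + timedelta(days=offset)}  {label}")
```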

## Analysis

The goal of Speedy Trial is to allow a taproot activation attempt to
either quickly succeed or quickly fail---without compromising safety in
either case.  Details below:

### Mitigating the problems of early success

New rules added in a soft fork need to be enforced by a large part of
the economy or there's a risk that a long chain of blocks breaking the
rules will be accepted by some users and rejected by others, causing a
chain split that can result in large direct losses to transaction
receivers and potentially even larger indirect losses to holders due to
reduced confidence in the safety of the Bitcoin system.

One step developers have taken in the past to ensure widespread adoption
of new consensus rules is programming in a delay between the time software
with those rules is expected to be released and when the software starts
tracking which blocks signal for activation.  For example:

Soft fork        | Release    | Start      | Delta
-----------------+------------+------------+--------
BIP68 (v0.12.1)  | 2016-04-15 | 2016-05-11 | 26 days
BIP141 (v0.13.1) | 2016-10-27 | 2016-11-18 | 22 days

Sources: BitcoinCore.org, 
https://gist.github.com/ajtowns/1c5e3b8bdead01124c04c45f01c817bc

Speedy Trial replaces most of that upfront delay with a backend delay.
No matter how fast taproot's activation threshold is reached by miners,
there will be six months between the time signal tracking starts and when
nodes will begin enforcing taproot's rules.  This gives the userbase even
more time to upgrade than if we had used the most recently proposed start
date for a BIP8 activation (~July 23rd).[2] 

### Succeed, or fail fast

The earlier version of this proposal was documented over 200 days ago[3]
and taproot's underlying code was merged into Bitcoin Core over 140 days
ago.[4]  If we had started Speedy Trial at the time taproot
was merged (which is a bit unrealistic), we would either be less than
two months away from having taproot or we would have moved on to the
next activation attempt over a month ago.

Instead, we've debated at length and don't appear to be any closer to
what I think is a widely acceptable solution than when the mailing list
began discussing post-segwit activation schemes over a year ago.[5]  I
think Speedy Trial is a way to generate fast progress that will either
end the debate (for now, if activation is successful) or give us some
actual data upon which to base future taproot activation proposals.

Of course, for those who enjoy the debate, discussion can continue while
waiting for the results of Speedy Trial.

### Base activation protocol

The idea can be implemented on top of either Bitcoin Core's existing
BIP9 code or its proposed BIP8 patchset.[6]

- BIP9 uses two time-based[7] parameters, starttime and timeout.  Using
  these values plus a time-based parameter for the minimum activation
  delay would give three months for miners to activate taproot, but some
  of that time near the start or the end might not be usable due to
  signals only being measured in full retarget periods.  However, the
  six month time for users to upgrade their nodes would not be
  affected by either slow or fast block production.
  
BIP9 is already part of Bitcoin Core and I think the changes being
proposed would be relatively small, resulting in a small patch that
could be easy to review.

- BIP8 uses two height-based parameters, startheight and timeoutheight.
  Using height values would ensure miners had a certain number of
  retarget periods (6) to lock in taproot and that there'd be a certain
  number of blocks 

Re: [bitcoin-dev] A design for Probabilistic Partial Pruning

2021-02-27 Thread David A. Harding via bitcoin-dev
On Sat, Feb 27, 2021 at 09:19:34AM -1000, David A. Harding via bitcoin-dev 
wrote:
> - Discussion thread 1: 
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-April/014186.html

Two particularly useful emails from that thread are:

- https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-April/014199.html
  which links to discussions about the topic prior to 2017, including
  discussion about DoS risks that are more important than the
  fingerprinting risk I mentioned in my previous reply.

- https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-April/014227.html
  which describes a potential way to distribute data with fewer DoS
  risks and less severe fingerprinting than each node storing a
  different set of blocks.

-Dave


signature.asc
Description: PGP signature
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A design for Probabilistic Partial Pruning

2021-02-27 Thread David A. Harding via bitcoin-dev
On Fri, Feb 26, 2021 at 11:40:35AM -0700, Keagan McClelland via bitcoin-dev 
wrote:
> Hi all,

Hi Keagan,

> 4. Once the node's IBD is complete it would advertise this as a peer
> service, advertising its seed and threshold, so that nodes could
> deterministically deduce which of its peers had which blocks.

Although some of the details differed, I believe this general idea of
sharded block storage was previously discussed in the context of BIP159,
which warns:

"Peers may have different prune depths (depending on the peers
configuration, disk space, etc.) which can result in a
fingerprinting weakness (finding the prune depth through getdata
requests). NODE_NETWORK_LIMITED supporting peers SHOULD avoid
leaking the prune depth and therefore not serve blocks deeper than
the signaled NODE_NETWORK_LIMITED threshold (288 blocks)."

- BIP: 
https://github.com/bitcoin/bips/blob/master/bip-0159.mediawiki#counter-measures-for-peer-fingerprinting
- Discussion thread 1: 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-April/014186.html
- Discussion thread 2: 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014314.html
- Discussion thread 2, continued: 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-April/014186.html
- BIP159-related PR, review comments: 
https://github.com/bitcoin/bitcoin/pull/10387
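
The counter-measure quoted above can be sketched as a simple serving
rule (illustrative only; the constant and function names are mine, not
from BIP159):

```python
NODE_NETWORK_LIMITED_DEPTH = 288  # blocks a limited peer signals it can serve

def may_serve_block(tip_height: int, block_height: int) -> bool:
    # Refusing to serve deeper blocks avoids leaking the node's actual
    # prune depth, which getdata probes could otherwise fingerprint.
    return block_height > tip_height - NODE_NETWORK_LIMITED_DEPTH
```

A node at height 700,000 would serve blocks 699,713 and newer,
regardless of how many older blocks it actually retains.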

> If you have thoughts on
> 
> A. The protocol design itself
> B. The barriers to put this kind of functionality into Core
> 
> I would love to hear from you,

I think it would be unlikely for any popular node software to adopt a
technique that could make specific nodes easily fingerprintable on an
ongoing basis unless it solved some other urgent problem.  Luke Dashjr's
rough data collection currently shows 5,629 archival listening nodes,[1]
which is a substantial fraction of the roughly 10,000 listening nodes
reported by Addy Yeow,[2] so I don't think we're near the point of
needing to worry about the unavailability of historic blocks.

[1] https://luke.dashjr.org/programs/bitcoin/files/charts/services.html
[2] https://bitnodes.io/dashboard/

However, if there's a reasonable solution to the fingerprinting problem,
I do think node developers would find that very interesting.

-Dave




Re: [bitcoin-dev] Taproot activation meeting 2 - Tuesday 16th February 19:00 UTC

2021-02-13 Thread David A. Harding via bitcoin-dev
On Fri, Feb 05, 2021 at 12:43:57PM +, Michael Folkson via bitcoin-dev wrote:
> https://old.reddit.com/r/Bitcoin/comments/lcjhl6/taproot_activation_pools_will_be_able_to_veto/gm2l02w/
> [...] 
> F6) It is more important that no rules that harm users are deployed
> than it is that new useful rules are deployed quickly. If there is a
> choice between “faster” and “more clear that this isn’t a mechanism to
> force bad things on users” we should prefer the latter. Plenty of
> people just don’t like LOT=true very much absent evidence that miners
> are blocking deployment. To some it just feels needlessly antagonistic
> and distrusting towards part of our community.

I think F6, above, bundles together several of Maxwell's points and
maybe loses something in summary.  I'd encourage interested readers to
view the original post that Folkson referenced.  I'd like to extract one
part as a separate point and write about it a bit in my own words:

F7) defaulting to LOT=false makes non-activation possible even if people
run the code that developers provide, meaning a successful
activation proves that at least some people (e.g. miners or UASFers)
voluntarily took actions that were well outside the scope of
developer control.

This makes it clear that developers don't control changes to the
system.  There are other arguments that demonstrate that developers
aren't in control[1], but they aren't as clear as simply pointing
out that a rule change won't go into effect until at least several
non-developers independently act of their own accord.

Having such a clear argument that developers aren't in control
bolsters the decentralized ethos of Bitcoin and reduces the chance
that bad actors will pressure Bitcoin developers to attempt future
unwanted changes.  

-Dave

[1] IMO, the main evidence we have that developers aren't in control of
the system is that Bitcoin Core is free software which gives anyone
who obtains a copy of it the legal right to run it, learn from it,
modify it, and share additional copies of it for any purpose.  Each
time someone uses those rights to create alternative Bitcoin
implementations, altcoins, or forkcoins, they demonstrate that users
could change the system---or resist changes to it---in opposition to
the current developer team, should that become necessary.




Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)

2020-12-06 Thread David A. Harding via bitcoin-dev
On Sat, Dec 05, 2020 at 11:10:51PM +, Pieter Wuille via bitcoin-dev wrote:
> I think these results really show there is no reason to try to
> maintain the old-software-can-send-to-future-segwit-versions property,
> given that more than one not just didn't support it, but actually sent
> coins into a black hole.

I don't think this is a good criteria to use for making a decision.  We
shouldn't deny users of working implementations the benefit of a feature
because some other developers didn't implement it correctly.

> Thus, I agree with Rusty that we should change the checksum for v1+
> unconditionally. 

I disagreed with Rusty previously and he proposed we check to see how
disruptive an address format change would be by seeing how many wallets
already provide forward compatibility and how many would need to be
updated for taproot no matter what address format is used.  I think that
instead is a good criteria for making a decision.

I understand the results of that survey to be that only two wallets
correctly handled v1+ BIP173 addresses.  One of those wallets is Bitcoin
Core, which I personally believe will unhesitatingly update to a new
address format that's technically sound and which has widespread support
(doubly so if it's just a tweak to an already-implemented checksum
algorithm).

Given that, I also now agree with changing the checksum for v1+.

Thanks,

-Dave





Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)

2020-10-20 Thread David A. Harding via bitcoin-dev
On Tue, Oct 20, 2020 at 11:12:06AM +1030, Rusty Russell wrote:
> Here are my initial results:

A while ago, around the Bitcoin Core 0.19.0 release that enabled
relaying v1+ segwit addresses, Mike Schmidt was working on the Optech
Compatibility Matrix[1] and tested a variety of software and services
with a v1 address using the original BIP341 specification (33 byte
pubkeys; we now use 32 byte keys).  Here's a summary of his results,
posted with his permission:

- abra: Bech32 not supported.

- binance: Does not pass front end javascript validation

- bitgo: Error occurs during sending process, after validation.

- bitmex: Bech32 not supported.

- bitrefill: Address does not pass validation.

- bitstamp: Address text input doesn’t allow bech32 addresses due to
  character limits.

- blockchain.info: Error occurs during sending process, after
  validation.

- brd: Allows sending workflow to complete in the UI. Transaction stays
  as pending in the transaction list.

- casa: Fails on signing attempt.

- coinbase: Fails address validation client side in the UI.

- conio: Server error 500 while attempting to send.

- copay: Allows v1 address to be entered in the UI. Fails during
  broadcast.

- edge: Allows sending workflow to complete. Transaction stays in
  pending state. Appears to cause issues with the balance calculation
  as well as the ability to send subsequent transactions.

- electrum: Error message during broadcasting of transaction.

- green: Fails on validation of the address.

- jaxx: Fails on validation of the address.

- ledger live: Fails when transaction is sent to the hardware device for
  signing.

- mycelium: Fails during address validation.

- purse: Transaction can be created and broadcast, relayed by peers
  compatible with Bitcoin Core v0.19.0.1 or above.

- river: Transaction can be created and broadcast, relayed by peers
  compatible with Bitcoin Core v0.19.0.1 or above.

- samourai: Fails on broadcast of transaction to the network.

- trezor: Fails on validation of the address.

- wasabi: Fails on validation of the address.

- xapo: Xapo allows users to create segwit v1 transactions in the UI.
  However, the transaction gets stuck as pending for an indeterminate
  period of time.

I would guess that some of the failures / stuck transactions might now
be successes if the backend infrastructure has upgraded to Bitcoin Core
>= 0.19.

-Dave

[1] https://bitcoinops.org/en/compatibility/




Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)

2020-10-08 Thread David A. Harding via bitcoin-dev
On Thu, Oct 08, 2020 at 10:51:10AM +1030, Rusty Russell via bitcoin-dev wrote:
> Hi all,
> 
> I propose an alternative to length restrictions suggested by
> Russell in https://github.com/bitcoin/bips/pull/945 : use the
> https://gist.github.com/sipa/a9845b37c1b298a7301c33a04090b2eb variant,
> unless the first byte is 0.
> 
> Here's a summary of each proposal:
> 
> Length restrictions (future segwits must be 10, 13, 16, 20, 23, 26, 29,
> 32, 36, or 40 bytes)
>   1. Backwards compatible for v1 etc; old code still works.
>   2. Restricts future segwit versions, may require new encoding if we
>  want a diff length (or waste chainspace if we need to have a padded
>  version for compat).
> 
> Checksum change based on first byte:
>   1. Backwards incompatible for v1 etc; only succeeds 1 in a billion.
>   2. Weakens guarantees against typos in first two data-part letters to
>  1 in a billion.[1]

Excellent summary!

> I prefer the second because it forces upgrades, since it breaks so
> clearly.  And unfortunately we do need to upgrade, because the length
> extension bug means it's unwise to accept non-v0 addresses.

I don't think the second option forces upgrades.  It just creates
another opt-in address format that means we'll spend another several
years with every wallet having two address buttons, one for a "segwit
address" (v0) and one for a "taproot address" (v1).  Or maybe three
buttons, with the third being a "taproot-in-a-segwit-address" (v1
witness program using the original bech32 encoding).

It took a lot of community effort to get widespread support for bech32
addresses.  Rather than go through that again, I'd prefer we use the
backwards compatible proposal from BIPs PR#945 and, if we want to
maximize safety, consensus restrict v1 witness program size, e.g. reject
transactions with scriptPubKeys paying v1 witness programs that aren't
exactly 32 bytes.
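
Such a consensus restriction might look like the following check (a
sketch of the idea, not Bitcoin Core code; the v0 sizes and the general
2-40 byte bounds are from BIP141):

```python
def acceptable_witness_program(version: int, program: bytes) -> bool:
    if version == 0:
        return len(program) in (20, 32)  # P2WPKH, P2WSH (BIP141)
    if version == 1:
        return len(program) == 32        # proposed taproot-only restriction
    return 2 <= len(program) <= 40       # BIP141 bounds for future versions
```

Under this rule, a transaction paying a 33-byte v1 program would simply
be invalid, so old wallets that mangle v1+ addresses could not burn
funds into it.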

Hopefully by the time we want to use segwit v2, most software will have
implemented length limits and so we won't need any additional consensus
restrictions from then on forward.

-Dave




Re: [bitcoin-dev] Floating-Point Nakamoto Consensus

2020-09-26 Thread David A. Harding via bitcoin-dev
On Fri, Sep 25, 2020 at 10:35:36AM -0700, Mike Brooks via bitcoin-dev wrote:
> -  with a fitness test you have a 100% chance of a new block from being
> accepted, and only a 50% or less chance for replacing a block which has
> already been mined.   This is all about keeping incentives moving forward.

FYI, I think this topic has been discussed on the list before (in
response to the selfish mining paper).  See this proposal:

  
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2013-November/003583.html

Of its responses, I thought these two stood out in particular:

  
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2013-November/003584.html
  
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2013-November/003588.html

I think there may be some related contemporary discussion from
BitcoinTalk as well; here's a post that's not directly related to the
idea of using hash values but which does describe some of the challenges
in replacing first seen as the tip disambiguation method.  There may be
other useful posts in that thread---I didn't take the time to skim all
11 pages.

  https://bitcointalk.org/index.php?topic=324413.msg3476697#msg3476697

-Dave




Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-21 Thread David A. Harding via bitcoin-dev
On Sun, Sep 20, 2020 at 07:10:23PM -0400, Antoine Riard via bitcoin-dev wrote:
> As you mentioned, if the goal of the sponsor mechanism is to let any party
> drive a state N's first tx to completion, you still have the issue of
> concurrent states being pinned and thus non-observable for sponsoring by an
> honest party.
> 
> E.g, Bob can broadcast a thousand of revoked LN states and pin them with
> low-feerate sponsors such that these malicious packages' absolute fees are
> higher than the honest state N's. Alice can't fee-sponsor
> them as we can assume she hasn't a global view of network mempools. Due to
> the proposed policy rule "The Sponsor Vector's entry must be present in the
> mempool", Alice's sponsors won't propagate. 

Would it make sense that, instead of sponsor vectors
pointing to txids, they point to input outpoints?  E.g.:

1. Alice and Bob open a channel with funding transaction 0123...cdef,
   output 0.

2. After a bunch of state updates, Alice unilaterally broadcasts a
   commitment transaction, which has a minimal fee.

3. Bob doesn't immediately care whether or not Alice tried to close the
   channel in the latest state---he just wants the commitment
   transaction confirmed so that he either gets his money directly or he
   can send any necessary penalty transactions.  So Bob broadcasts a
   sponsor transaction with a vector of 0123...cdef:0

4. Miners can include that sponsor transaction in any block that has a
   transaction with an input of 0123...cdef:0.  Otherwise the sponsor
   transaction is consensus invalid.

(Note: alternatively, sponsor vectors could point to either txids OR
input outpoints.  This complicates the serialization of the vector but
seems otherwise fine to me.)

> If we want to solve the hard cases of pinning, I still think mempool
> acceptance of a whole package only on the merits of feerate is the easiest
> solution to reason on.

I don't think package relay based only on feerate solves RBF transaction
pinning (and maybe also doesn't solve ancestor/dependent limit pinning).
Though, certainly, package relay has the major advantage over this
proposal (IMO) in that it doesn't require any consensus changes.
Package relay is also very nice for fixing other protocol rough edges
that are needed anyway.

-Dave




Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-19 Thread David A. Harding via bitcoin-dev
On Sat, Sep 19, 2020 at 09:30:56AM -0700, Jeremy wrote:
> Yup, I was aware of this limitation but I'm not sure how practical it is as
> an attack because it's quite expensive for the attacker. 

It's cheap if:

1. You were planning to consolidate all those UTXOs at roughly that
   feerate anyway.

2. After you no longer need your pinning transaction in the mempool, you
   make an out-of-band arrangement with a pool to mine a small
   conflicting transaction.

> But there are a few simple policies that can eliminate it:
> 
> 1) A Sponsoring TX never needs to be more than, say, 2 inputs and 2
> outputs. Restricting this via policy would help, or more flexibly
> limiting the total size of a sponsoring transaction to 1000 bytes.

I think that works (as policy).

> 2) Make A Sponsoring TX not need to pay more absolute fee, just needs to
> increase the feerate (perhaps with a constant relay fee bump to prevent
> spam).

I think it'd be hard to find a constant relay fee bump amount that was
high enough to prevent abuse but low enough not to unduly hinder
legitimate users.

> I think 1) is simpler and should allow full use of the sponsor mechanism
> while preventing this class of issue mostly.

Agreed.

Thanks,

-Dave




Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-19 Thread David A. Harding via bitcoin-dev
On Fri, Sep 18, 2020 at 05:51:39PM -0700, Jeremy via bitcoin-dev wrote:
> I'd like to share with you a draft proposal for a mechanism to replace
> CPFP and RBF for increasing fees on transactions in the mempool that
> should be more robust against attacks.

Interesting idea!  This is going to take a while to think about, but I
have one immediate question:

> To prevent garbage sponsors, we also require that:
> 
> 1. The Sponsor's feerate must be greater than the Sponsored's ancestor fee 
> rate
> 
> We allow one Sponsor to replace another subject to normal replacement
> policies, they are treated as conflicts.

Is this in the reference implementation?  I don't see it and I'm
confused by this text.  I think it could mean either:

1. Sponsor Tx A can be replaced by Sponsor Tx B if A and B have at least
   one input in common (which is part of the "normal replacement policies")

2. A can be replaced by B even if they don't have any inputs in common
   as long as they do have a Sponsor Vector in common (while otherwise
   using the "normal replacement policies").

In the first case, I think Mallory can prevent Bob from
sponsor-fee-bumping (sponsor-bumping?) his transaction by submitting a
sponsor before he does; since Bob has no control over Mallory's inputs,
he can't replace Mallory's sponsor tx.

In the second case, I think Mallory can use an existing pinning
technique to make it expensive for Bob to fee bump.  The normal
replacement policies require a replacement to pay an absolute higher fee
than the original transaction, so Mallory can create a 100,000 vbyte
transaction with a single-vector sponsor at the end pointing to Bob's
transaction.  This sponsor transaction pays the same feerate as Bob's
transaction---let's say 50 nBTC/vbyte, so 5 mBTC total fee.  In order
for Bob to replace Mallory's sponsor transaction with his own sponsor
transaction, Bob needs to pay the incremental relay feerate (10
nBTC/vbyte) more, so 6 mBTC total ($66 at $11k/BTC).
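
The fee arithmetic above works out as follows (assuming Bitcoin Core's
default incremental relay feerate of 10 nBTC/vbyte, i.e. 1 sat/vbyte):

```python
vsize = 100_000        # vbytes: Mallory's maximum-size pinning transaction
feerate = 50           # nBTC/vbyte, matching Bob's transaction
incremental = 10       # nBTC/vbyte, default incremental relay feerate

mallory_fee = vsize * feerate                    # 5,000,000 nBTC = 5 mBTC
bob_min_fee = mallory_fee + vsize * incremental  # 6,000,000 nBTC = 6 mBTC
usd_cost = bob_min_fee * 1e-9 * 11_000           # about $66 at $11,000/BTC
```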

Thanks,

-Dave




Re: [bitcoin-dev] reviving op_difficulty

2020-08-22 Thread David A. Harding via bitcoin-dev
On Sun, Aug 16, 2020 at 11:41:30AM -0400, Thomas Hartman via bitcoin-dev wrote:
> First, I would like to pay respects to tamas blummer, RIP.
> 
> https://bitcoinmagazine.com/articles/remembering-tamas-blummer-pioneering-bitcoin-developer

RIP, Tamas.

> Tamas proposed an additional opcode for enabling bitcoin difficulty
> futures, on this list at
> 
> https://www.mail-archive.com/bitcoin-dev@lists.linuxfoundation.org/msg07991.html

Subsequent to Blummer's post, I heard from Jeremy Rubin about a
scheme[1] that allows difficulty futures without requiring any changes
to Bitcoin.  In short, it takes advantage of the fact that changes in
difficulty also cause a difference in maturation time between timelocks
and height-locks.  As a simple example:

1. Alice and Bob create an unsigned transaction that deposits their
   money into a 2-of-2 multisig.

2. They cooperate to create and sign two conflicting spends from the multisig:

a. Pays Alice with an nLockTime(height) of CURRENT_HEIGHT + 2016 blocks

b. Pays Bob with an nLockTime(time) of CURRENT_TIME + 2016 * 10 * 60 seconds

3. After both conflicting spends are signed, Alice and Bob sign and
   broadcast the deposit transaction from #1.

4. If hashrate increases during the subsequent period, the spend that
   pays Alice will mature first, so she broadcasts it and receives that
   money.  If hashrate decreases, the spend to Bob matures first, so he
   receives the money.
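
The maturity race in step 4 reduces to comparing elapsed wall-clock
time over 2016 blocks (a minimal sketch; 600 seconds is Bitcoin's
target block interval):

```python
def winner(actual_seconds_per_block: float) -> str:
    # Alice's spend is locked to a height 2016 blocks ahead; Bob's to a
    # time 2016 * 600 seconds ahead.  Whichever lock elapses first can
    # be broadcast first, spending the multisig and invalidating the
    # other spend.
    height_lock_elapsed = 2016 * actual_seconds_per_block
    time_lock_elapsed = 2016 * 600
    return "alice" if height_lock_elapsed < time_lock_elapsed else "bob"
```

If hashrate rises so blocks average 540 seconds, Alice's height-lock
matures first; at 660 seconds per block, Bob's time-lock wins.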

Of course, this basic formula can be tweaked to create other contracts,
e.g. a contract that only pays if hashrate goes down more than 25%.

As far as I can tell, this method should be compatible with offchain
commitments (e.g. payments within channels) and could be embedded in a
taproot commitment using OP_CLTV or OP_CSV instead of nLockTime.

-Dave

[1] https://powswap.com/




Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-20 Thread David A. Harding via bitcoin-dev
On Sun, Aug 16, 2020 at 12:06:55PM -0700, Eric Voskuil via bitcoin-dev wrote:
> A requirement to ignore unknown (invalid) messages is [...] a protocol
> breaking change 

I don't think it is.  The proposed BIP, as currently written, only tells
nodes to ignore unknown messages during peer negotiation.  The only case
where this will happen so far is BIP339, which says:

The wtxidrelay message must be sent in response to a VERSION message
from a peer whose protocol version is >= 70016, and prior to sending
a VERACK

So unless you signal support for version >=70016, you'll never receive an
unknown message.  (And, if you do signal, you probably can't claim that
you were unaware of this new requirement, unless you were using a
non-BIP protocol like xthin[1]).

However, perhaps this new proposed BIP could be a bit clearer about its
expectations for future protocol upgrades by saying something like:

Nodes implementing this BIP MUST also not send new negotiation
message types to nodes whose protocol version is less than 70017.

That should promote backwards compatibility.  If you don't want to
ignore unknown negotiation messages between `version` and `verack`, you
can just set your protocol version to a max of 70016.
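
The resulting compatibility rule might be sketched like this (the
names and the 70017 constant come from the suggestion above, not from
any published BIP):

```python
FEATURE_NEGOTIATION_VERSION = 70017  # suggested minimum for new messages

def may_send_negotiation_message(peer_version: int) -> bool:
    # Sender side: only send new negotiation message types to peers new
    # enough to know they must ignore unknown ones.
    return peer_version >= FEATURE_NEGOTIATION_VERSION

def handle_pre_verack_message(msg_type: str, known: set) -> str:
    # Receiver side: between `version` and `verack`, unknown message
    # types are ignored rather than treated as protocol violations.
    return "process" if msg_type in known else "ignore"
```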

> A requirement to ignore unknown (invalid) messages is [...] poor
> protocol design. The purpose of version negotiation is to determine
> the set of valid messages. 

To be clear, the proposed requirement to ignore unknown messages is
limited in scope to the brief negotiation phase between `version` and
`verack`.  If you want to terminate connections (or do whatever) on
receipt of an unknown message, you can do that at any other time.

> Changes to version negotiation itself are very problematic.

For whom?

> The only limitation presented by versioning is that the system is
> sequential. 

That seems like a pretty significant limitation to decentralized
protocol development.

I think there are currently several people who want to run long-term
experiements for new protocol features using open source opt-in
codebases that anyone can run, and it would be advantageous to them to
have a flexible and lightweight feature negotiation system like this
proposed method.

> As such, clients that do not wish to implement (or operators who do
> not wish to enable) them are faced with a problem when wanting to
> support later features. This is resolvable by making such features
> optional at the new protocol level. This allows each client to limit
> its communication to the negotiated protocol, and allows ignoring of
> known but unsupported/disabled features.

I don't understand this.  How do two peers negotiate a set of two or
more optional features using only the exchange of single numbers?  For
example:

- Node A supports Feature X (implemented in protocol version 70998) and Feature 
Y (version 70999).

- Node B does not support X but does want to use Y; what does it use for its
  protocol version number when establishing a connection with node A?

---

Overall, I like the proposed BIP and the negotiation method it
describes.

Cheers,

-Dave

[1] This is not a recommendation for xthin, but I do think it's an example
of the challenges of using a shared linear version number scheme for
protocol negotiation in a decentralized system where different teams
don't necessarily get along well with each other.

https://github.com/ptschip/bitcoinxt/commit/7ea5854a3599851beffb1323544173f03d45373b#diff-c61070c281aed6ded69036c08bd08addR12




Re: [bitcoin-dev] BIP draft: BIP32 Path Templates

2020-07-03 Thread David A. Harding via bitcoin-dev
On Thu, Jul 02, 2020 at 09:28:39PM +0500, Dmitry Petukhov via bitcoin-dev wrote:
> I think there should be standard format to describe constraints for
> BIP32 paths.
> 
> I present a BIP draft that specifies "path templates" for BIP32 paths:
> 
> https://github.com/dgpv/bip32_template_parse_tplaplus_spec/blob/master/bip-path-templates.mediawiki

Hi Dmitry,

How do path templates compare to key origin identification[1] in
output script descriptors?

Could you maybe give a specfic example of how path templates might be
used?  Are they for backups?  Multisig wallet coordination?  Managing
data between software transaction construction and hardware device
signing?

Thanks,

-Dave

[1] 
https://github.com/bitcoin/bitcoin/blob/master/doc/descriptors.md#key-origin-identification
(See earlier in the doc for examples)




Re: [bitcoin-dev] MAD-HTLC

2020-06-28 Thread David A. Harding via bitcoin-dev
On Tue, Jun 23, 2020 at 03:47:56PM +0300, Stanga via bitcoin-dev wrote:
> On Tue, Jun 23, 2020 at 12:48 PM ZmnSCPxj  wrote:
> > * Inputs:
> >   * Bob 1 BTC - HTLC amount
> >   * Bob 1 BTC - Bob fidelity bond
> >
> > * Cases:
> >   * Alice reveals hashlock at any time:
> > * 1 BTC goes to Alice
> > * 1 BTC goes to Bob (fidelity bond refund)
> >   * Bob reveals bob-hashlock after time L:
> > * 2 BTC goes to Bob (HTLC refund + fidelity bond refund)
> >   * Bob cheated, anybody reveals both hashlock and bob-hashlock:
> > * 2 BTC goes to miner
> >
> > [...]
> 
> The cases you present are exactly how MAD-HTLC works. It comprises two
> contracts (UTXOs):
> * Deposit (holding the intended HTLC tokens), with three redeem paths:
> - Alice (signature), with preimage "A", no timeout
> - Bob (signature), with preimage "B", timeout T
> - Any entity (miner), with both preimages "A" and "B", no timeout
> * Collateral (the fidelity bond, doesn't have to be of the same amount)
> - Bob (signature), no preimage, timeout T
> - Any entity (miner), with both preimages "A" and "B", timeout T

I'm not sure these are safe if your counterparty is a miner.  Imagine Bob
offers Alice a MAD-HTLC.  Alice knows the payment preimage ("preimage
A").  Bob knows the bond preimage ("preimage B") and he's the one making
the payment and offering the bond.

After receiving the HTLC, Alice takes no action on it, so the timelock
expires.  Bob publicly broadcasts the refund transaction with the bond
preimage.  Unbeknownst to Bob, Alice is actually a miner and she uses her
pre-existing knowledge of the payment preimage plus her received
knowledge of the bond preimage to privately attempt mining a transaction
that pays her both the payment ("deposit") and the bond ("collateral").

Assuming Alice is a non-majority miner, she isn't guaranteed to
succeed---her chance of success depends on her percentage of the network
hashrate and how much fee Bob paid to incentivize other miners to
confirm his refund transaction quickly.  However, as long as Alice has a
non-trivial amount of hashrate, she will succeed some percentage of the
time in executing this type of attack.  Any of her theft attempts that
fail will leave no public trace, perhaps lulling users into a false
sense of security.

-Dave




Re: [bitcoin-dev] MAD-HTLC

2020-06-28 Thread David A. Harding via bitcoin-dev
On Tue, Jun 23, 2020 at 09:41:56AM +0300, Stanga via bitcoin-dev wrote:
> Hi all,
> 
> We'd like to bring to your attention our recent result concerning HTLC.
> Here are the technical report and a short post outlining the main points:
> 
> * https://arxiv.org/abs/2006.12031
> * https://ittayeyal.github.io/2020-06-22-mad-htlc

Thank you for your interesting research!  Further quotes are from your
paper:

>  Myopic Miners: This bribery attack relies on all miners
> being rational, hence considering their utility at game conclu-
> sion instead of myopically optimizing for the next block. If
> a portion of the miners are myopic and any of them gets to
> create a block during the first T − 1 rounds, that miner would
> include Alice’s transaction and Bob’s bribery attempt would
> have failed.
>In such scenarios the attack succeeds only with a certain
> probability – only if a myopic miner does not create a block
> in the first T − 1 rounds. The success probability therefore
> decreases exponentially in T . Hence, to incentivize miners
> to support the attack, Bob has to increase his offered bribe
> exponentially in T .

This is a good abstract description, but I think it might be useful for
readers of this list who are wondering about the impact of this attack
to put it in concrete terms.  I'm bad at statistics, but I think the
probability of bribery failing (even if Bob offers a bribe with an
appropriately high feerate) is 1-exp(-b*h) where `b` is the number of
blocks until timeout and `h` is the fraction of the hashrate controlled
by so-called myopic miners.  Given that, here's a table of attack
failure probabilities:

                     "Myopic" hashrate
  Blocks |      1%      10%      33%      50%
  -------+-----------------------------------
       6 |   5.82%   45.12%   86.19%   95.02%
      36 |  30.23%   97.27%  100.00%  100.00%
     144 |  76.31%  100.00%  100.00%  100.00%
     288 |  94.39%  100.00%  100.00%  100.00%
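The table above can be reproduced with a few lines of Python using the
same approximation, 1-exp(-b*h) (a sketch; the exact expression would be
1-(1-h)^b, which the exponential closely approximates for small `h`):

```python
import math

def attack_failure_prob(blocks: int, myopic_hashrate: float) -> float:
    """Probability that at least one "myopic" miner finds a block before
    the timeout, using the approximation 1 - exp(-b*h)."""
    return 1 - math.exp(-blocks * myopic_hashrate)

# Reproduce the table rows
for blocks in (6, 36, 144, 288):
    row = ["%7.2f%%" % (100 * attack_failure_prob(blocks, h))
           for h in (0.01, 0.10, 0.33, 0.50)]
    print("%4d |" % blocks, " ".join(row))
```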

So, if I understand correctly, even a small amount of "myopic" hashrate
and long timeouts---or modest amounts of hashrate and short
timeouts---makes this attack unlikely to succeed (and, even in the cases
where it does succeed, Bob will have to offer a very large bribe to
compensate "rational" miners for their high chance of losing out on
gaining any transaction fees).

Additionally, I think there's the problem of measuring the distribution
of "myopic" hashrate versus "rational" hashrate.  "Rational" miners need
to do this in order to ensure they only accept Bob's timelocked bribe if
it pays a sufficiently high fee.  However, different miners who try to
track what bribes were relayed versus what transactions got mined may
come to different conclusions about the relative hashrate of "myopic"
miners, leading some of them to require higher bribes, which may lead
those who estimated a lower relative hashrate to assume the rate
of "myopic" mining is increasing, producing a feedback loop that makes
other miners think the rate of "myopic" miners is increasing.  (And that
assumes none of the miners is deliberately juking the stats to mislead
its competitors into leaving money on the table.)

By comparison, "myopic" miners don't need to know anything special about
the past.  They can just take the UTXO set, block height, difficulty
target, and last header hash and mine whatever available transactions
will give them the greatest next-block revenue.
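As a toy illustration of how simple "myopic" selection is, here is a
greedy sketch (entirely hypothetical; real implementations such as
Bitcoin Core's must also handle ancestor dependencies and package
feerates, which this ignores):

```python
MAX_BLOCK_WEIGHT = 4_000_000  # consensus block weight limit

def select_greedy(mempool):
    """Myopic selection sketch: mempool is a list of (fee_sats, weight)
    tuples; pick transactions by descending feerate until the block is
    full, ignoring dependencies between transactions."""
    chosen, used = [], 0
    for fee, weight in sorted(mempool, key=lambda t: t[0] / t[1],
                              reverse=True):
        if used + weight <= MAX_BLOCK_WEIGHT:
            chosen.append((fee, weight))
            used += weight
    return chosen
```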

In conclusion, I think: 

1. Given that all known Bitcoin miners today are "myopic", there's no
   short-term issue (to be clear, you didn't claim there was).

2. A very large percentage of the hashrate would have to implement
   "rational" mining for the attack to become particularly effective.
   Hopefully, we'd learn about this as it was happening and could adapt
   before it became an issue.

3. So-called rational mining is probably a lot harder to implement
   effectively than just 150 loc in Python; it probably requires a lot
   more careful incentive analysis than just looking at HTLCs.[1]

4. Although I can't offer a proof, my intuition says that "myopic"
   mining is probably very close to optimal in the current subsidy-fee
   regime.  Optimizing transaction selection only for the next block has
   already proven to be quite challenging to both software and protocol
   developers[2] so I can't imagine how much work it would take to build
   something that effectively optimizes for an unbounded future.  In
   short, I think so-called myopic mining might actually be the most
   rational mining we're capable of.

Nevertheless, I think your results are interesting and that MAD-HTLC is
a useful tool that might be particularly desirable in contracts that
involve especially high value or especially short timeouts (perhaps
asset swaps or payment channels used by traders?).  Thank you again for
posting!

-Dave

[1] For example, your paper says "[...] the bribing cost required to
attack HTLC is independent in T, meaning that 

Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-20 Thread David A. Harding via bitcoin-dev
On Sat, Jun 20, 2020 at 10:54:03AM +0200, Bastien TEINTURIER wrote:
> We're simply missing information, so it looks like the only good
> solution is to avoid being in that situation by having a foot in
> miners' mempools.

The problem I have with that approach is that the incentive is to
connect to the highest hashrate pools and ignore the long tail of
smaller pools and solo miners.  If miners realize people are doing this,
they may begin to charge for information about their mempool and the
largest miners will likely be able to charge more money per hashrate
than smaller miners, creating a centralization force by increasing
existing economies of scale.

Worse, information about a node's mempool is partly trusted.  A node can
easily prove what transactions it has, but it can't prove that it
doesn't have a certain transaction.  This implies incumbent pools with a
long record of trustworthy behavior may be able to charge more per
hashrate than newer pools, creating a reputation-based centralizing
force that pushes individual miners towards well-established pools.

This is one reason I suggested using independent pay-to-preimage
transactions[1].  Anyone who knows the preimage can mine the
transaction, so it doesn't provide reputational advantage or direct
economies of scale---pay-to-preimage is incentive equivalent to paying
normal onchain transaction fees.  There is an indirect economy of
scale---attackers are most likely to send the low-feerate
preimage-containing transaction to just the largest pools, so small
miners are unlikely to learn the preimage and thus unlikely to be able
to claim the payment.  However, if the defense is effective, the attack
should rarely happen and so this should not have a significant effect on
mining profitability---unlike monitoring miner mempools which would have
to be done continuously and forever.

ZmnSCPxj noted that pay-to-preimage doesn't work with PTLCs.[2]  I was
hoping one of Bitcoin's several inventive cryptographers would come
along and describe how someone with an adaptor signature could use that
information to create a pubkey that could be put into a transaction with
a second output whose OP_RETURN data included the serialized adaptor
signature.  The pubkey would be designed to be spendable by anyone with
the final signature in a way that revealed the hidden value to the
pubkey's creator, allowing them to resolve the PTLC.  But if that's
fundamentally not possible, I think we could advocate for making
pay-to-revealed-adaptor-signature possible using something like
OP_CHECKSIGFROMSTACK.[3]

[1] 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002664.html
[2] 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002667.html
[3] https://bitcoinops.org/en/topics/op_checksigfromstack/

> Do you think it's unreasonable to expect at least some LN nodes to
> also invest in running nodes in mining pools, ensuring that they learn
> about attackers' txs and can potentially share discovered preimages
> with the network off-chain (by gossiping preimages found in the
> mempool over LN)?

Ignoring my concerns about mining centralization and from the
perspective of just the Lightning Network, that doesn't sound
unreasonable to me.  But from the perspective of a single LN node, it
might make more sense to get the information and *not* share it,
increasing your security and allowing you to charge lower routing fees
compared to your competitors.  This effect would only be enhanced if
miners charged for their mempool contents (indeed, to maximize their
revenue, miners might require that their mempool subscribers don't share
the information---which they could trivially enforce by occasionally
sending subscribers a preimage specific to the subscriber and seeing if
it propagated to the public network).

> I think that these recent attacks show that we need (at least some)
> off-chain nodes to be somewhat heavily invested in on-chain operations
> (layers can't be fully decoupled with the current security assumptions
> - maybe Eltoo will help change that in the future?).

I don't see how eltoo helps.  Eltoo helps ensure you reach the final
channel state, but this problem involves an abuse of that final state.

-Dave




Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-19 Thread David A. Harding via bitcoin-dev
On Fri, Jun 19, 2020 at 03:58:46PM -0400, David A. Harding via bitcoin-dev 
wrote:
> I think you're assuming here that the attacker broadcast a particular
> state.  

Whoops, I managed to confuse myself despite looking at Bastien's
excellent explainer.  The attacker would be broadcasting the latest
state, so the honest counterparty would only need to send one blind
child.  However, the blind child will only be relayed by a Bitcoin peer
if the peer also has the parent transaction (the latest state) and, if
it has the parent transaction, you should be able to just getdata('tx',
$txid) that transaction from the peer without CPFPing anything.  That
will give you the preimage and so you can immediately resolve the HTLC
with the upstream channel.

Revising my conclusion from the previous post:

I think the strongman argument for the attack would be that the attacker
will be able to perform a targeted relay of the low-feerate
preimage-containing transaction to just miners---everyone else on the
network will receive the honest user's higher-feerate expired-timelock
transaction.  Unless the honest user happens to have a connection to a
miner's node, the user will neither be able to CPFP fee bump nor use
getdata to retrieve the preimage.

Sorry for the confusion.

-Dave




Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-19 Thread David A. Harding via bitcoin-dev
On Fri, Jun 19, 2020 at 09:44:11AM +0200, Bastien TEINTURIER via Lightning-dev 
wrote:
> The gist is here, and I'd appreciate your feedback if I have wrongly
> interpreted some of the ideas:
> https://gist.github.com/t-bast/22320336e0816ca5578fdca4ad824d12

Quoted text below is from the gist:

> The trick to protect against a malicious participant that broadcasts a
> low-fee HTLC-success or Remote-HTLC-success transaction is that we can
> always blindly do a CPFP carve-out on them; we know their txid

I think you're assuming here that the attacker broadcast a particular
state.  However, in a channel which potentially had thousands of state
changes, you'd have to broadcast a blind child for each previous state
(or at least each previous state that pays the attacker more than the
latest state).  That's potentially thousands of transactions times
potentially dozens of peers---not impossible, but it seems messy.

I think there's a way to accomplish the same goal for less bandwidth and
zero fees.  The only way your Bitcoin peer will relay your blind child
is if it already has the parent transaction.  If it has the parent, you
can just request it using P2P getdata(type='tx', id=$txid).[1]  You can
batch multiple txid requests together (up to 50,000 IIRC) to minimize
overhead, making the average cost per txid a tiny bit over 36 bytes.
If you receive one of the transactions you request, you can extract the
preimage at no cost to yourself (except bandwidth).  If you don't
receive a transaction, then sending a blind child is hopeless
anyway---your peers won't relay it.
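The ~36-byte average cost mentioned above can be checked with some quick
arithmetic (a sketch assuming the standard P2P message layout: 24-byte
message header, a compact-size count, then 36 bytes per inventory entry
of 4-byte type plus 32-byte txid):

```python
HEADER_BYTES = 24   # magic + command + payload length + checksum
ENTRY_BYTES = 4 + 32  # inventory type + txid

def getdata_bytes(num_txids: int) -> int:
    """Total serialized size of one getdata message for num_txids txids."""
    if num_txids < 0xfd:
        count_bytes = 1
    elif num_txids <= 0xffff:
        count_bytes = 3  # 0xfd prefix + 2-byte count
    else:
        count_bytes = 5  # 0xfe prefix + 4-byte count
    return HEADER_BYTES + count_bytes + num_txids * ENTRY_BYTES

# A maximal 50,000-entry batch averages a tiny bit over 36 bytes/txid
print(getdata_bytes(50_000) / 50_000)
```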

Overall, it's hard for me to guess how effective your proposal would be
at defeating the attack.  I think the strongman argument for the attack
would be that the attacker will be able to perform a targeted relay of
their outdated state to just miners---everyone else on the network
will receive the counterparty's honest final-state close.  Unless the
counterparty happens to have a connection to a miner's node, the
counterparty will neither be able to CPFP fee bump nor use getdata to
retrieve the preimage.

It seems to me it's practical for a motivated attacker to research which
IP addresses belong to miners so that they can target them, whereas
honest users won't practically be able to do that research (and, even if
they could, it would create a centralizing barrier to new miners
entering the market if users focused on maintaining connections to
previously-known miners).

-Dave

[1] You'd have to be careful to not attempt the getdata too soon after
you think the attacker broadcast their old state, but I think that
only means waiting a single block, which you have to do anyway to
see if the honest final-commitment transaction confirmed.  See
https://github.com/bitcoin/bitcoin/pull/18861





Re: [bitcoin-dev] BIP-341: Committing to all scriptPubKeys in the signature message

2020-05-02 Thread David A. Harding via bitcoin-dev
On Wed, Apr 29, 2020 at 04:57:46PM +0200, Andrew Kozlik via bitcoin-dev wrote:
> In order to ascertain non-ownership of an input which is claimed to be
> external, the wallet needs the scriptPubKey of the previous output spent by
> this input.

A wallet can easily check whether a scriptPubKey contains a specific
pubkey (as in P2PK/P2TR), but I think it's impractical for most wallets
to check whether a scriptPubKey contains any of the possible ~two
billion keys available in a specific BIP32 derivation path (and many
wallets natively support multiple paths).

It would seem to me that checking a list of scriptPubKeys for wallet
matches would require obtaining the BIP32 derivation paths for the
corresponding keys, which would have to be provided by a trusted data
source.  If you trust that source, you could just trust them to tell you
that none of the other inputs belong to your wallet.

Alternatively, there's the scheme described in the email you linked by
Greg Saunders (with the scheme co-attributed to Andrew Poelstra), which
seems reasonable to me.[1]  Its only downside (AFAICT) is that it
requires an extra one-way communication from a signing device to a
coordinator.  For a true offline signer, that can be annoying, but for
an automated hardware wallet participating in coinjoins or LN, that
doesn't seem too burdensome to me.

-Dave

[1] The scheme could be trivially tweaked to be compatible with BIP322
generic signed messages, which is something that could become widely
adopted (I hope) and so make supporting the scheme easier.




Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-23 Thread David A. Harding via bitcoin-dev
On Wed, Apr 22, 2020 at 03:53:37PM -0700, Matt Corallo wrote:
> if you focus on sending the pinning transaction to miner nodes
> directly (which isn't trivial, but also not nearly as hard as it
> sounds), you could still pull off the attack. 

If the problem is that miners might have information not available to
the network in general, you could just bribe them for that knowledge.
E.g. as Bob's refund deadline approaches and he begins to suspect that
mempool shenanigans are preventing his refund transaction from
confirming, he takes a confirmed P2WPKH UTXO he's been saving for use in
CPFP fee bumps and spends part of its value (say 1 mBTC) to the
following scriptPubKey[1],

OP_SHA256 <digest> OP_EQUAL

Assuming the feerate and the bribe amount are reasonable, any miner who
knows the preimage is incentivized to include Bob's transaction and a
child transaction spending from it in their next block.  That child
transaction will include the preimage, which Bob will see when he
processes the block.
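Serializing that scriptPubKey is straightforward (a sketch using the
standard opcode byte values OP_SHA256 = 0xa8 and OP_EQUAL = 0x87; note
that, per footnote [1], such outputs would need a relay/mining policy
change to propagate today):

```python
from hashlib import sha256

OP_SHA256, OP_EQUAL = 0xa8, 0x87

def pay_to_preimage_script(digest32: bytes) -> bytes:
    """Serialize `OP_SHA256 <digest> OP_EQUAL`: spendable by anyone who
    can supply a preimage hashing to digest32 (no key-based security)."""
    assert len(digest32) == 32
    return bytes([OP_SHA256, 0x20]) + digest32 + bytes([OP_EQUAL])

# Example with a made-up preimage
preimage = b"x" * 32
script = pay_to_preimage_script(sha256(preimage).digest())
print(script.hex())  # 35-byte script: a820...87
```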

If any non-miner knows the preimage, they can also create that child
transaction.  The non-miner probably can't profit from this---miners can
just rewrite the child transaction to pay themselves since there's no
key-based security---but the non-miner can at least pat themselves on
the back for being a good Samaritan.  Again Bob will learn the preimage
once the child transaction is included in a block, or earlier if his
wallet is monitoring for relays of spends from his parent transaction.

Moreover, Bob can first create a bribe via LN and, in that case, things
are even better.  As Bob's deadline approaches, he uses one of his
still-working channels to send a bunch of max-length (20 hops?) probes
that reuse the earlier HTLC's payment hash.  If any hop along the path knows
the preimage, they can immediately claim the probe amount (and any
routing fees that were allocated to subsequent hops).  This not only
gives smaller miners with LN nodes an equal chance of claiming the
probe-bribe as larger miners, but it also allows non-miners to profit
from learning the preimage from miners.

That last part is useful because even if, as in your example, the
adversary is able to send one version of the transaction just to miners
(with the preimage) and another conflicting version to all relay nodes
(without the preimage), miners will naturally attempt to relay the
preimage version of the transaction to other users; if some of those
users run modified nodes that write all 32-byte witness data blobs to a
database---even if the transaction is ultimately rejected as a
conflict---then targeted relay to miners may not be effective at
preventing Bob from learning the preimage.

Obviously all of the above requires people run additional software to
keep track of potential preimages[2] and then compare them to hash
candidates, plus it requires additional complexity in LN clients, so I
can easily understand why it might be less desirable than the protocol
changes under discussion in other parts of this thread.  Still, with
lots of effort already being put into watchtowers and other
enforcement-assistance services, I wonder if this problem can be largely
addressed in the same general way.

-Dave

[1] Requires a change to standard relay and mining policy.
[2] Pretty easy, e.g.

bitcoin-cli getrawmempool \
| jq -r '.[]' \
| while read txid ; do
    bitcoin-cli getrawtransaction "$txid" true | jq '.vout[].scriptPubKey.asm'
  done \
| grep -o '\<[0-9a-f]\{64\}\>'




Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread David A. Harding via bitcoin-dev
On Wed, Apr 22, 2020 at 03:03:29PM -0400, Antoine Riard wrote:
> > In that case, would it be worth re-implementing something like a BIP61
> reject message but with an extension that returns the txids of any
> conflicts?
> 
> That's an interesting idea, but an attacker can create a local conflict in
> your mempool

You don't need a mempool to send a transaction.  You can just open
connections to random Bitcoin nodes directly and try sending your
transaction.  That's what a lite client is going to do anyway.  If the
pinned transaction is in the mempools of a significant number of Bitcoin
nodes, then it should take just a few random connections to find one of
those nodes, learn about the conflict, and download the pinned
transaction.

If that's not acceptable, you could find some other way to poll a
significant number of people with mempools, e.g. BIP35 mempool messages
or reusing the payment hash in a bunch of 1 msat probes to LN nodes who
opt-in to scanning their bitcoind's mempools for a corresponding
preimage.

-Dave




Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread David A. Harding via bitcoin-dev
On Mon, Apr 20, 2020 at 10:43:14PM -0400, Matt Corallo via Lightning-dev wrote:
> A lightning counterparty (C, who received the HTLC from B, who
> received it from A) today could, if B broadcasts the commitment
> transaction, spend an HTLC using the preimage with a low-fee,
> RBF-disabled transaction.  After a few blocks, A could claim the HTLC
> from B via the timeout mechanism, and then after a few days, C could
> get the HTLC-claiming transaction mined via some out-of-band agreement
> with a small miner. This leaves B short the HTLC value.

IIUC, the main problem is honest Bob will broadcast a transaction
without realizing it conflicts with a pinned transaction that's already
in most node's mempools.  If Bob knew about the pinned transaction and
could get a copy of it, he'd be fine.

In that case, would it be worth re-implementing something like a BIP61
reject message but with an extension that returns the txids of any
conflicts?  For example, when Bob connects to a bunch of Bitcoin nodes
and sends his conflicting transaction, the nodes would reply with
something like "rejected: code 123: conflicts with txid 0123...cdef".
Bob could then reply with a getdata('tx', '0123...cdef') to get the
pinned transaction, parse out its preimage, and resolve the HTLC.

This approach isn't perfect (if it even makes sense at all---I could be
misunderstanding the problem) because one of the problems that caused
BIP61 to be disabled in Bitcoin Core was its unreliability, but I think
if Bob had at least one honest peer that had the pinned transaction in
its mempool and which implemented reject-with-conflicting-txid, Bob
might be ok.

-Dave




Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread David A. Harding via bitcoin-dev
On Tue, Apr 21, 2020 at 09:13:34PM -0700, Olaoluwa Osuntokun wrote:
> On Mon, Apr 20, 2020 at 10:43:14PM -0400, Matt Corallo via Lightning-dev 
> wrote:
> > While this is somewhat unintuitive, there are any number of good anti-DoS
> > reasons for this, eg:
> 
> None of these really strikes me as "good" reasons for this limitation
> [...]
> In the end, the simplest heuristic (accept the higher fee rate
> package) side steps all these issues and is also the most economically
> rationale from a miner's perspective. 

I think it's important to remember that mempool behavior affects not
just miners but also relay nodes.  Miner costs, such as bandwidth usage,
can be directly offset by their earned block rewards, so miners can be
much more tolerant of wasted bandwidth than relay nodes who receive no
direct financial compensation for the processing and relay of
unconfirmed transactions.[1]

> Why would one prefer a higher absolute fee package (which could be
> very large) over another package with a higher total _fee rate_?

To avoid the excessive wasting of bandwidth.  Bitcoin Core's defaults
require each replacement pay a feerate of 10 nBTC/vbyte over an existing
transaction or package, and the defaults also allow transactions or
packages up to 100,000 vbytes in size (~400,000 bytes).  So, without
enforcement of BIP125 rule 3, an attacker starting at the minimum
default relay fee also of 10 nBTC/vbyte could do the following:

- Create a ~400,000 bytes tx with feerate of 10 nBTC/vbyte (1 mBTC total
  fee)

- Replace that transaction with 400,000 new bytes at a feerate of 20
  nBTC/vbyte (2 mBTC total fee)

- Perform 998 additional replacements, each increasing the feerate by 10
  nBTC/vbyte and the total fee by 1 mBTC, using a total of 400 megabytes
  (including the original transaction and first replacement) to
  ultimately produce a transaction with a feerate of 10,000 nBTC/vbyte
  (1 BTC total fee)

- Perform one final replacement of the latest 400,000 byte transaction
  with a ~200-byte (~150 vbyte) 1-in, 1-out P2WPKH transaction that pays
  a feerate of 10,010 nBTC/vbyte (1.5 mBTC total fee)

Assuming 50,000 active relay nodes and today's BTC price of ~$7,000
USD/BTC, the above scenario would allow an attacker to waste a
collective 20 terabytes of network bandwidth for a total fee cost of
$10.50.  And, of course, the attacker could run multiple attacks of this
sort in parallel, quickly swamping the network.
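The arithmetic behind those figures can be checked directly (a sketch
using the same assumed numbers as the text: 50,000 relay nodes and a
price of $7,000 USD/BTC):

```python
TX_BYTES = 400_000       # ~400,000 bytes per 100,000-vbyte transaction
REPLACEMENTS = 1_000     # original transaction plus 999 replacements
NODES = 50_000           # assumed number of active relay nodes
BTC_USD = 7_000          # assumed price

per_node_bytes = TX_BYTES * REPLACEMENTS   # 400 MB relayed by each node
network_bytes = per_node_bytes * NODES     # 20 TB across the network

# Final replacement: ~150 vbytes at 10,010 nBTC/vbyte
final_fee_btc = 150 * 10_010e-9

print(network_bytes / 1e12, "TB wasted")
print(round(final_fee_btc * BTC_USD, 2), "USD total fee cost")
```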

To use the above concrete example to repeat the point made at the
beginning of this email: miners might be willing to accept the waste of
400 MB of bandwidth in order to gain a $10.50 fee, but I think very few
relay nodes could function for long under an onslaught of such behavior.

-Dave

[1] The reward to relay nodes of maintaining the public relay network is
that it helps protect against miner centralization.  If there was no
public relay network, users would need to submit transactions
directly to miners or via a privately-controlled relay network.
Users desiring timely confirmation (and operators of private relay
networks) would have a large incentive to get transactions to the
largest miners but only a small incentive to get the transaction to
the smaller miners, increasing the economies of scale in mining and
furthering centralization.

Although users of Bitcoin benefit by reducing mining centralization
pressure, I don't think we can expect most users to be willing to
bear large costs in defense of benefits which are largely intangible
(until they're gone), so we must try to keep the cost of operating a
relay node within a reasonable margin of the cost of operating a
minimal-bandwidth blocks-only node.




Re: [bitcoin-dev] Statechain implementations

2020-03-31 Thread David A. Harding via bitcoin-dev
On Wed, Mar 25, 2020 at 01:52:10PM +, Tom Trevethan via bitcoin-dev wrote:
> Hi all,
> 
> We are starting to work on an implementation of the statechains concept (
> https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39),
>
> [...]
> There are two main modifications we are looking at:
> [...]
> 
> 2. Replacing the 2-of-2 multisig output (paying to statechain entity SE key
> and transitory key) with a single P2(W)PKH output where the public key
> shared between the SE and the current owner. The SE and the current owner
> can then sign with a 2-of-2 ECDSA MPC. 

Dr. Trevethan,

Would you be able to explain how your proposal to use statechains with
2P-ECDSA relates to your patent assigned to nChain Holdings for "Secure
off-chain blockchain transactions"?[1]  

[1] https://patents.google.com/patent/US20200074464A1

Here are some excerpts from the application that caught my attention in
the context of statechains in general and your proposal to this list in
particular:

> an exchange platform that is trusted to implement and operate the
> transaction protocol, without requiring an on-chain transaction. The
> off-chain transactions enable one computer system to generate multiple
> transactions that are recordable to a blockchain in different
> circumstances
>
> [...]
>
> at least some of the off-chain transactions are valid for recording on
> the blockchain even in the event of a catastrophic failure of the
> exchange (e.g., exchange going permanently off-line or loosing key
> shares).
>
> [...]
>
> there may be provided a computer readable storage medium including a
> two-party elliptic curve digital signature algorithm (two-party ECDSA)
> script comprising computer executable instructions which, when
> executed, configure a processor to perform functions of a two-party
> elliptic curve digital signature algorithm described herein.
>
> [...]
>
> In this instance the malicious actor would then also have to collude
> with a previous owner of the funds to recreate the full key. Because
> an attack requires either the simultaneous theft of both exchange and
> depositor keys or collusion with previous legitimate owners of funds,
> the opportunities for a malicious attacker to compromise the exchange
> platform are limited.

Thank you,

-Dave




Re: [bitcoin-dev] Block solving slowdown question/poll

2020-03-22 Thread David A. Harding via bitcoin-dev
On Sat, Mar 21, 2020 at 11:40:24AM -0700, Dave Scotese via bitcoin-dev wrote:
> [Imagine] we also see mining power dropping off at a rate that
> suggests the few days [until retarget] might become a few weeks, and
> then, possibly, a few months or even the unthinkable, a few eons.  I'm
> curious to know if anyone has ideas on how this might be handled

There are only two practical solutions I'm aware of:

1. Do nothing
2. Hard fork a difficulty reduction

If bitcoins retain even a small fraction of their value compared to the
previous retarget period and if most mining equipment is still available
for operation, then doing nothing is probably the best choice---as block
space becomes scarcer, transaction feerates will increase and miners
will be incentivized to increase their block production rate.

If the bitcoin price has plummeted more than, say, 99% in two weeks
with no hope of short-term recovery or if a large fraction of mining
equipment has become unusable (again, say, 99% in two weeks with no
hope of short-term recovery), then it's probably worth Bitcoin users
discussing a hard fork to reduce difficulty to a currently sustainable
level.
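To illustrate why a sustained hashrate collapse stretches the retarget
horizon so badly (my own back-of-the-envelope sketch, not from the
original post): if hashrate drops to a fraction `f` of its level during
the last retarget, expected block intervals stretch to 600/f seconds
until the next retarget, which can still be up to 2,016 blocks away.

```python
def days_to_retarget(blocks_remaining: int, hashrate_fraction: float) -> float:
    """Expected days until the next difficulty retarget if hashrate is
    now `hashrate_fraction` of its level when difficulty was last set."""
    expected_interval = 600 / hashrate_fraction  # seconds per block
    return blocks_remaining * expected_interval / 86_400

print(days_to_retarget(2016, 1.00))  # ~14 days at unchanged hashrate
print(days_to_retarget(2016, 0.01))  # ~1,400 days after a 99% drop
```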

-Dave




Re: [bitcoin-dev] Taproot (and graftroot) complexity (reflowed)

2020-02-14 Thread David A. Harding via bitcoin-dev
On Fri, Feb 14, 2020 at 12:07:15PM -0800, Jeremy via bitcoin-dev wrote:
> Is the same if Schnorr + Merkle Branch without Taproot optimization, unless
> I'm missing something in one of the cases? 

That's fair.  However, it's only true if everyone constructs their
merkle tree in the same way, with a single `<pubkey> OP_CHECKSIG` as
one of the top leaves.   Taproot effectively standardizes the position
of the all-parties-agree condition and so its anonymity set may contain
spends from scripts whose creators buried or excluded the all-agree
option because they didn't think it was likely to be used.

More importantly, there's no incentive for pure single-sig users to use a
merkle tree, since that would make both the scriptPubKey and the witness
data larger for them than just continuing to use v0 segwit P2WPKH.
Given that single-sig users represent a majority of transactions at
present (see AJ Towns's previous email in this thread), I think we
really want to make it as convenient as possible for them to participate
in the anonymity set.

(To be fair, taproot scriptPubKeys are also larger than P2WPKH
scriptPubKeys, but its witness data is considerably smaller, giving
receivers an incentive to demand P2TR payments even if spenders don't
like paying the extra 12 vbytes per output.)

Rough sums:

- P2WPKH scriptpubkey (22.00 vbytes): `OP_0 PUSH20 <pubkey hash>`
- P2WPKH witness data (26.75): `size(72) <signature>, size(33) <pubkey>`
- P2TR scriptpubkey (34.00): `OP_1 PUSH32 <taproot pubkey>`
- P2TR witness data (16.25): `size(64) <signature>`
- BIP116 MBV P2WSH scriptpubkey (34.00): `OP_0 PUSH32 <script hash>`
- BIP116 MBV P2WSH witness data (42.00): `size(64) <signature>, size(32)
  <merkle node>, size(32) <merkle node>, size(36) <witness script with
  PUSH32 <pubkey> ... OP_MBV>`
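The witness-data vbyte figures above follow from the segwit discount
(witness bytes count 1 weight unit each, so vbytes = bytes / 4). A
quick check, tallying each stack item plus its one-byte size prefix:

```python
def witness_vbytes(item_sizes):
    """vbytes for a list of witness stack items, each preceded by a
    one-byte compact-size prefix (valid for items under 253 bytes).
    Witness bytes carry 1 weight unit each; vbytes = weight / 4."""
    return sum(1 + size for size in item_sizes) / 4

print(witness_vbytes([72, 33]))        # P2WPKH: signature + pubkey
print(witness_vbytes([64]))            # P2TR key path: signature only
print(witness_vbytes([64, 32, 32, 36]))  # BIP116 MBV example
```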

-Dave

P.S. I think this branch of the thread is just rehashing points that
 were originally covered over two years ago and which haven't really
 changed since then.  E.g.:


https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015629.html



