Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)

2020-10-20 Thread David A. Harding via bitcoin-dev
On Tue, Oct 20, 2020 at 11:12:06AM +1030, Rusty Russell wrote:
> Here are my initial results:

A while ago, around the Bitcoin Core 0.19.0 release that enabled
relaying v1+ segwit addresses, Mike Schmidt was working on the Optech
Compatibility Matrix[1] and tested a variety of software and services
with a v1 address using the original BIP341 specification (33 byte
pubkeys; we now use 32 byte keys).  Here's a summary of his results,
posted with his permission:

- abra: Bech32 not supported.

- binance: Does not pass front end javascript validation

- bitgo: Error occurs during sending process, after validation.

- bitmex: Bech32 not supported.

- bitrefill: Address does not pass validation.

- bitstamp: Address text input doesn’t allow bech32 addresses due to
  character limits.

- Error occurs during sending process, after

- brd: Allows sending workflow to complete in the UI. Transaction stays
  as pending in the transaction list.

- casa: Fails on signing attempt.

- coinbase: Fails address validation client side in the UI.

- conio: Server error 500 while attempting to send.

- copay: Allows v1 address to be entered in the UI. Fails during

- edge: Allows sending workflow to complete. Transaction stays in
  pending state. Appears to cause issues with the balance calculation
  as well as the ability to send subsequent transactions.

- electrum: Error message during broadcasting of transaction.

- green: Fails on validation of the address.

- jaxx: Fails on validation of the address.

- ledger live: Fails when transaction is sent to the hardware device for

- mycelium: Fails during address validation.

- purse: Transaction can be created and broadcast, relayed by peers
  compatible with Bitcoin Core v0.19.0.1 or above.

- river: Transaction can be created and broadcast, relayed by peers
  compatible with Bitcoin Core v0.19.0.1 or above.

- samourai: Fails on broadcast of transaction to the network.

- trezor: Fails on validation of the address.

- wasabi: Fails on validation of the address.

- xapo: Xapo allows users to create segwit v1 transactions in the UI.
  However, the transaction gets stuck as pending for an indeterminate
  period of time.

I would guess that some of the failures / stuck transactions might now
be successes if the backend infrastructure has upgraded to Bitcoin Core
>= 0.19.



Description: PGP signature
bitcoin-dev mailing list

Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)

2020-10-08 Thread David A. Harding via bitcoin-dev
On Thu, Oct 08, 2020 at 10:51:10AM +1030, Rusty Russell via bitcoin-dev wrote:
> Hi all,
> I propose an alternative to length restrictions suggested by
> Russell in : use the
> variant,
> unless the first byte is 0.
> Here's a summary of each proposal:
> Length restrictions (future segwits must be 10, 13, 16, 20, 23, 26, 29,
> 32, 36, or 40 bytes)
>   1. Backwards compatible for v1 etc; old code it still works.
>   2. Restricts future segwit versions, may require new encoding if we
>  want a diff length (or waste chainspace if we need to have a padded
>  version for compat).
> Checksum change based on first byte:
>   1. Backwards incompatible for v1 etc; only succeeds 1 in a billion.
>   2. Weakens guarantees against typos in first two data-part letters to
>  1 in a billion.[1]

Excellent summary!

> I prefer the second because it forces upgrades, since it breaks so
> clearly.  And unfortunately we do need to upgrade, because the length
> extension bug means it's unwise to accept non-v0 addresses.

I don't think the second option forces upgrades.  It just creates
another opt-in address format that means we'll spend another several
years with every wallet having two address buttons, one for a "segwit
address" (v0) and one for a "taproot address" (v1).  Or maybe three
buttons, with the third being a "taproot-in-a-segwit-address" (v1
witness program using the original bech32 encoding).

It took a lot of community effort to get widespread support for bech32
addresses.  Rather than go through that again, I'd prefer we use the
backwards compatible proposal from BIPs PR#945 and, if we want to
maximize safety, consensus restrict v1 witness program size, e.g. reject
transactions with scriptPubKeys paying v1 witness programs that aren't
exactly 32 bytes.
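A minimal sketch of the suggested consensus restriction (the function
name and structure are my own illustration, not Bitcoin Core code):
reject v1 witness programs that are not exactly 32 bytes, while leaving
all other scriptPubKeys untouched.

```python
def is_acceptable_v1_spk(script_pubkey: bytes) -> bool:
    """Return False only for v1 witness programs whose program
    is not exactly 32 bytes; all other scripts are unaffected."""
    # A v1 segwit scriptPubKey is OP_1 (0x51) followed by a single
    # direct push (lengths 2..40) of the witness program.
    if len(script_pubkey) < 2 or script_pubkey[0] != 0x51:
        return True  # not a v1 witness program; restriction doesn't apply
    push_len = script_pubkey[1]
    if not (2 <= push_len <= 40) or len(script_pubkey) != 2 + push_len:
        return True  # not a witness program encoding at all
    return push_len == 32
```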

Hopefully by the time we want to use segwit v2, most software will have
implemented length limits and so we won't need any additional consensus
restrictions from then on forward.



Re: [bitcoin-dev] Floating-Point Nakamoto Consensus

2020-09-26 Thread David A. Harding via bitcoin-dev
On Fri, Sep 25, 2020 at 10:35:36AM -0700, Mike Brooks via bitcoin-dev wrote:
> -  with a fitness test you have a 100% chance of a new block from being
> accepted, and only a 50% or less chance for replacing a block which has
> already been mined.   This is all about keeping incentives moving forward.

FYI, I think this topic has been discussed on the list before (in
response to the selfish mining paper).  See this proposal:

Of its responses, I thought these two stood out in particular:

I think there may be some related contemporary discussion from
BitcoinTalk as well; here's a post that's not directly related to the
idea of using hash values but which does describe some of the challenges
in replacing first seen as the tip disambiguation method.  There may be
other useful posts in that thread---I didn't take the time to skim all
11 pages.



Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-21 Thread David A. Harding via bitcoin-dev
On Sun, Sep 20, 2020 at 07:10:23PM -0400, Antoine Riard via bitcoin-dev wrote:
> As you mentioned, if the goal of the sponsor mechanism is to let any party
> drive a state N's first tx to completion, you still have the issue of
> concurrent states being pinned and thus non-observable for sponsoring by an
> honest party.
> E.g, Bob can broadcast a thousand of revoked LN states and pin them with
> low-feerate sponsors such as these malicious packages absolute fee are
> higher than the honest state N. Alice can't fee-sponsor
> them as we can assume she hasn't a global view of network mempools. Due to
> the proposed policy rule "The Sponsor Vector's entry must be present in the
> mempool", Alice's sponsors won't propagate. 

Would it make sense that, instead of sponsor vectors
pointing to txids, they point to input outpoints?  E.g.:

1. Alice and Bob open a channel with funding transaction 0123...cdef,
   output 0.

2. After a bunch of state updates, Alice unilaterally broadcasts a
   commitment transaction, which has a minimal fee.

3. Bob doesn't immediately care whether or not Alice tried to close the
   channel in the latest state---he just wants the commitment
   transaction confirmed so that he either gets his money directly or he
   can send any necessary penalty transactions.  So Bob broadcasts a
   sponsor transaction with a vector of 0123...cdef:0

4. Miners can include that sponsor transaction in any block that has a
   transaction with an input of 0123...cdef:0.  Otherwise the sponsor
   transaction is consensus invalid.

(Note: alternatively, sponsor vectors could point to either txids OR
input outpoints.  This complicates the serialization of the vector but
seems otherwise fine to me.)

> If we want to solve the hard cases of pinning, I still think mempool
> acceptance of a whole package only on the merits of feerate is the easiest
> solution to reason on.

I don't think package relay based only on feerate solves RBF transaction
pinning (and maybe also doesn't solve ancestor/dependent limit pinning).
Though, certainly, package relay has the major advantage over this
proposal (IMO) in that it doesn't require any consensus changes.
Package relay is also very nice for fixing other protocol rough edges
that are needed anyway.



Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-19 Thread David A. Harding via bitcoin-dev
On Sat, Sep 19, 2020 at 09:30:56AM -0700, Jeremy wrote:
> Yup, I was aware of this limitation but I'm not sure how practical it is as
> an attack because it's quite expensive for the attacker. 

It's cheap if:

1. You were planning to consolidate all those UTXOs at roughly that
   feerate anyway.

2. After you no longer need your pinning transaction in the mempool, you
   make an out-of-band arrangement with a pool to mine a small
   conflicting transaction.

> But there are a few simple policies that can eliminate it:
> 1) A Sponsoring TX never needs to be more than, say, 2 inputs and 2
> outputs. Restricting this via policy would help, or more flexibly
> limiting the total size of a sponsoring transaction to 1000 bytes.

I think that works (as policy).

> 2) Make A Sponsoring TX not need to pay more absolute fee, just needs to
> increase the feerate (perhaps with a constant relay fee bump to prevent
> spam).

I think it'd be hard to find a constant relay fee bump amount that was
high enough to prevent abuse but low enough not to unduly hinder
legitimate users.

> I think 1) is simpler and should allow full use of the sponsor mechanism
> while preventing this class of issue mostly.





Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-19 Thread David A. Harding via bitcoin-dev
On Fri, Sep 18, 2020 at 05:51:39PM -0700, Jeremy via bitcoin-dev wrote:
> I'd like to share with you a draft proposal for a mechanism to replace
> CPFP and RBF for increasing fees on transactions in the mempool that
> should be more robust against attacks.

Interesting idea!  This is going to take a while to think about, but I
have one immediate question:

> To prevent garbage sponsors, we also require that:
> 1. The Sponsor's feerate must be greater than the Sponsored's ancestor fee 
> rate
> We allow one Sponsor to replace another subject to normal replacement
> policies, they are treated as conflicts.

Is this in the reference implementation?  I don't see it and I'm
confused by this text.  I think it could mean either:

1. Sponsor Tx A can be replaced by Sponsor Tx B if A and B have at least
   one input in common (which is part of the "normal replacement policies")

2. A can be replaced by B even if they don't have any inputs in common
   as long as they do have a Sponsor Vector in common (while otherwise
   using the "normal replacement policies").

In the first case, I think Mallory can prevent Bob from
sponsor-fee-bumping (sponsor-bumping?) his transaction by submitting a
sponsor before he does; since Bob has no control over Mallory's inputs,
he can't replace Mallory's sponsor tx.

In the second case, I think Mallory can use an existing pinning
technique to make it expensive for Bob to fee bump.  The normal
replacement policies require a replacement to pay an absolute higher fee
than the original transaction, so Mallory can create a 100,000 vbyte
transaction with a single-vector sponsor at the end pointing to Bob's
transaction.  This sponsor transaction pays the same feerate as Bob's
transaction---let's say 50 nBTC/vbyte, so 5 mBTC total fee.  In order
for Bob to replace Mallory's sponsor transaction with his own sponsor
transaction, Bob needs to pay the incremental relay feerate (10
nBTC/vbyte) more, so 6 mBTC total ($66 at $11k/BTC).
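The arithmetic in that example can be checked with a quick sketch
(following the text's numbers; the function is my own illustration, not
mempool policy code):

```python
def replacement_fee_mbtc(pin_vsize_vb, feerate_nbtc_vb, incremental_nbtc_vb):
    """Minimum absolute fee (in mBTC) Bob must pay to replace Mallory's
    pinning transaction: the original's absolute fee plus the incremental
    relay feerate applied over the pin's size, as in the example above."""
    original_fee_nbtc = pin_vsize_vb * feerate_nbtc_vb
    minimum_fee_nbtc = original_fee_nbtc + pin_vsize_vb * incremental_nbtc_vb
    return minimum_fee_nbtc / 1_000_000  # nBTC -> mBTC
```

With a 100,000 vbyte pin at 50 nBTC/vbyte and a 10 nBTC/vbyte
incremental relay feerate, this gives the 6 mBTC figure from the text.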




Re: [bitcoin-dev] reviving op_difficulty

2020-08-22 Thread David A. Harding via bitcoin-dev
On Sun, Aug 16, 2020 at 11:41:30AM -0400, Thomas Hartman via bitcoin-dev wrote:
> First, I would like to pay respects to tamas blummer, RIP.

RIP, Tamas.

> Tamas proposed an additional opcode for enabling bitcoin difficulty
> futures, on this list at

Subsequent to Blummer's post, I heard from Jeremy Rubin about a
scheme[1] that allows difficulty futures without requiring any changes
to Bitcoin.  In short, it takes advantage of the fact that changes in
difficulty also cause a difference in maturation time between timelocks
and height-locks.  As a simple example:

1. Alice and Bob create an unsigned transaction that deposits their
   money into a 2-of-2 multisig.

2. They cooperate to create and sign two conflicting spends from the multisig:

a. Pays Alice with an nLockTime(height) of CURRENT_HEIGHT + 2016 blocks

b. Pays Bob with an nLockTime(time) of CURRENT_TIME + 2016 * 10 * 60 seconds

3. After both conflicting spends are signed, Alice and Bob sign and
   broadcast the deposit transaction from #1.

4. If hashrate increases during the subsequent period, the spend that
   pays Alice will mature first, so she broadcasts it and receives that
   money.  If hashrate decreases, the spend to Bob matures first, so he
   receives the money.

Of course, this basic formula can be tweaked to create other contracts,
e.g. a contract that only pays if hashrate goes down more than 25%.
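The maturation race in steps 2-4 can be sketched as a toy function (the
name and framing are my own illustration, not consensus code): the
height-locked spend matures after exactly 2016 blocks, while the
time-locked spend matures after a fixed 2016 * 600 seconds, so the
realized block-production rate decides the winner.

```python
# Expected wall-clock time for 2016 blocks at the 10-minute target.
TARGET_SECONDS = 2016 * 10 * 60

def winner(seconds_taken_for_2016_blocks):
    """Return which party's pre-signed spend matures first."""
    if seconds_taken_for_2016_blocks < TARGET_SECONDS:
        return "alice"  # blocks came fast (hashrate rose): height-lock wins
    return "bob"        # blocks came slow (hashrate fell): time-lock wins
```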

As far as I can tell, this method should be compatible with offchain
commitments (e.g. payments within channels) and could be embedded in a
taproot commitment using OP_CLTV or OP_CSV instead of nLockTime.




Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-20 Thread David A. Harding via bitcoin-dev
On Sun, Aug 16, 2020 at 12:06:55PM -0700, Eric Voskuil via bitcoin-dev wrote:
> A requirement to ignore unknown (invalid) messages is [...] a protocol
> breaking change 

I don't think it is.  The proposed BIP, as currently written, only tells
nodes to ignore unknown messages during peer negotiation.  The only case
where this will happen so far is BIP339, which says:

The wtxidrelay message must be sent in response to a VERSION message
from a peer whose protocol version is >= 70016, and prior to sending

So unless you signal support for version >=70016, you'll never receive an
unknown message.  (And, if you do signal, you probably can't claim that
you were unaware of this new requirement, unless you were using a
non-BIP protocol like xthin[1]).

However, perhaps this new proposed BIP could be a bit clearer about its
expectations for future protocol upgrades by saying something like:

Nodes implementing this BIP MUST also not send new negotiation
message types to nodes whose protocol version is less than 70017.

That should promote backwards compatibility.  If you don't want to
ignore unknown negotiation messages between `version` and `verack`, you
can just set your protocol version to a max of 70016.
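A minimal sketch of that gating rule, using the BIP339 constant (the
function itself is my own illustration): optional negotiation messages
are only sent to peers whose advertised protocol version is high enough,
so older peers never see an unknown message.

```python
WTXIDRELAY_MIN_VERSION = 70016  # from BIP339

def negotiation_messages_to_send(peer_version):
    """Return the optional negotiation messages we may send after
    receiving this peer's version message and before our verack."""
    msgs = []
    if peer_version >= WTXIDRELAY_MIN_VERSION:
        msgs.append(b"wtxidrelay")
    return msgs
```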

> A requirement to ignore unknown (invalid) messages is [...] poor
> protocol design. The purpose of version negotiation is to determine
> the set of valid messages. 

To be clear, the proposed requirement to ignore unknown messages is
limited in scope to the brief negotiation phase between `version` and
`verack`.  If you want to terminate connections (or do whatever) on
receipt of an unknown message, you can do that at any other time.

> Changes to version negotiation itself are very problematic.

For whom?

> The only limitation presented by versioning is that the system is
> sequential. 

That seems like a pretty significant limitation to decentralized
protocol development.

I think there are currently several people who want to run long-term
experiements for new protocol features using open source opt-in
codebases that anyone can run, and it would be advantageous to them to
have a flexible and lightweight feature negotiation system like this
proposed method.

> As such, clients that do not wish to implement (or operators who do
> not wish to enable) them are faced with a problem when wanting to
> support later features. This is resolvable by making such features
> optional at the new protocol level. This allows each client to limit
> its communication to the negotiated protocol, and allows ignoring of
> known but unsupported/disabled features.

I don't understand this.  How do two peers negotiate a set of two or
more optional features using only the exchange of single numbers?  For

- Node A supports Feature X (implemented in protocol version 70998) and
  Feature Y (version 70999).

- Node B does not support X but does want to use Y; what does it use for its
  protocol version number when establishing a connection with node A?


Overall, I like the proposed BIP and the negotiation method it



[1] This is not a recommendation for xthin, but I do think it's an example
of the challenges of using a shared linear version number scheme for
protocol negotiation in a decentralized system where different teams
don't necessarily get along well with each other.


Re: [bitcoin-dev] BIP draft: BIP32 Path Templates

2020-07-03 Thread David A. Harding via bitcoin-dev
On Thu, Jul 02, 2020 at 09:28:39PM +0500, Dmitry Petukhov via bitcoin-dev wrote:
> I think there should be standard format to describe constraints for
> BIP32 paths.
> I present a BIP draft that specifies "path templates" for BIP32 paths:

Hi Dmitry,

How do path templates compare to key origin identification[1] in
output script descriptors?

Could you maybe give a specfic example of how path templates might be
used?  Are they for backups?  Multisig wallet coordination?  Managing
data between software transaction construction and hardware device



(See earlier in the doc for examples)


Re: [bitcoin-dev] MAD-HTLC

2020-06-28 Thread David A. Harding via bitcoin-dev
On Tue, Jun 23, 2020 at 03:47:56PM +0300, Stanga via bitcoin-dev wrote:
> On Tue, Jun 23, 2020 at 12:48 PM ZmnSCPxj  wrote:
> > * Inputs:
> >   * Bob 1 BTC - HTLC amount
> >   * Bob 1 BTC - Bob fidelity bond
> >
> > * Cases:
> >   * Alice reveals hashlock at any time:
> > * 1 BTC goes to Alice
> > * 1 BTC goes to Bob (fidelity bond refund)
> >   * Bob reveals bob-hashlock after time L:
> > * 2 BTC goes to Bob (HTLC refund + fidelity bond refund)
> >   * Bob cheated, anybody reveals both hashlock and bob-hashlock:
> > * 2 BTC goes to miner
> >
> > [...]
> The cases you present are exactly how MAD-HTLC works. It comprises two
> contracts (UTXOs):
> * Deposit (holding the intended HTLC tokens), with three redeem paths:
> - Alice (signature), with preimage "A", no timeout
> - Bob (signature), with preimage "B", timeout T
> - Any entity (miner), with both preimages "A" and "B", no timeout
> * Collateral (the fidelity bond, doesn't have to be of the same amount)
> - Bob (signature), no preimage, timeout T
> - Any entity (miner), with both preimages "A" and "B", timeout T

I'm not sure these are safe if your counterparty is a miner.  Imagine Bob
offers Alice a MAD-HTLC.  Alice knows the payment preimage ("preimage
A").  Bob knows the bond preimage ("preimage B") and he's the one making
the payment and offering the bond.

After receiving the HTLC, Alice takes no action on it, so the timelock
expires.  Bob publicly broadcasts the refund transaction with the bond
preimage.  Unbeknownst to Bob, Alice is actually a miner and she uses her
pre-existing knowledge of the payment preimage plus her received
knowledge of the bond preimage to privately attempt mining a transaction
that pays her both the payment ("deposit") and the bond ("collateral").

Assuming Alice is a non-majority miner, she isn't guaranteed to
succeed---her chance of success depends on her percentage of the network
hashrate and how much fee Bob paid to incentivize other miners to
confirm his refund transaction quickly.  However, as long as Alice has a
non-trivial amount of hashrate, she will succeed some percentage of the
time in executing this type of attack.  Any of her theft attempts that
fail will leave no public trace, perhaps lulling users into a false
sense of security.
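One crude way to model the chance described above (this model and
function are my own illustration, not from the MAD-HTLC paper): if Bob's
refund transaction would otherwise confirm within `k` blocks, a
miner-Alice with hashrate share `a` steals whenever she finds one of
those `k` blocks first.

```python
def theft_probability(alice_hashrate_share, blocks_until_refund_confirms):
    """P(Alice mines at least one of the next k blocks), assuming each
    block is independently hers with probability equal to her share."""
    a, k = alice_hashrate_share, blocks_until_refund_confirms
    return 1 - (1 - a) ** k
```

Even a 10% miner facing a refund that takes six blocks to confirm
succeeds nearly half the time under this model, which is why failed
attempts leaving no public trace is concerning.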



Re: [bitcoin-dev] MAD-HTLC

2020-06-28 Thread David A. Harding via bitcoin-dev
On Tue, Jun 23, 2020 at 09:41:56AM +0300, Stanga via bitcoin-dev wrote:
> Hi all,
> We'd like to bring to your attention our recent result concerning HTLC.
> Here are the technical report and a short post outlining the main points:
> *
> *

Thank you for your interesting research!  Further quotes are from your paper:

>  Myopic Miners: This bribery attack relies on all miners
> being rational, hence considering their utility at game conclu-
> sion instead of myopically optimizing for the next block. If
> a portion of the miners are myopic and any of them gets to
> create a block during the first T − 1 rounds, that miner would
> include Alice’s transaction and Bob’s bribery attempt would
> have failed.
>In such scenarios the attack succeeds only with a certain
> probability – only if a myopic miner does not create a block
> in the first T − 1 rounds. The success probability therefore
> decreases exponentially in T . Hence, to incentivize miners
> to support the attack, Bob has to increase his offered bribe
> exponentially in T .

This is a good abstract description, but I think it might be useful for
readers of this list who are wondering about the impact of this attack
to put it in concrete terms.  I'm bad at statistics, but I think the
probability of bribery failing (even if Bob offers a bribe with an
appropriately high feerate) is 1-exp(-b*h) where `b` is the number of
blocks until timeout and `h` is the fraction of the hashrate controlled
by so-called myopic miners.  Given that, here's a table of attack
failure probabilities:

              "Myopic" hashrate
 Blocks |  1%      10%     33%     50%
 -------+--------------------------------
    6   |  5.82%   45.12%  86.19%  95.02%
    36  |  30.23%  97.27%  100.00% 100.00%
   144  |  76.31%  100.00% 100.00% 100.00%
   288  |  94.39%  100.00% 100.00% 100.00%
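The table can be reproduced from the failure-probability formula in the
text, P(fail) = 1 - exp(-b*h), with `b` blocks to timeout and `h` the
"myopic" hashrate fraction (this generator is just my illustration):

```python
from math import exp

def failure_probability(blocks, myopic_hashrate):
    # Probability the bribery attack fails before the timeout.
    return 1 - exp(-blocks * myopic_hashrate)

def table(block_counts=(6, 36, 144, 288), hashrates=(0.01, 0.10, 0.33, 0.50)):
    # Rows keyed by block count; columns are percentages rounded to 2 places.
    return {b: [round(100 * failure_probability(b, h), 2) for h in hashrates]
            for b in block_counts}
```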

So, if I understand correctly, even a small amount of "myopic" hashrate
and long timeouts---or modest amounts of hashrate and short
timeouts---makes this attack unlikely to succeed (and, even in the cases
where it does succeed, Bob will have to offer a very large bribe to
compensate "rational" miners for their high chance of losing out on
gaining any transaction fees).

Additionally, I think there's the problem of measuring the distribution
of "myopic" hashrate versus "rational" hashrate.  "Rational" miners need
to do this in order to ensure they only accept Bob's timelocked bribe if
it pays a sufficiently high fee.  However, different miners who try to
track what bribes were relayed versus what transactions got mined may
come to different conclusions about the relative hashrate of "myopic"
miners, leading some of them to require higher bribes, which may lead
those those who estimated a lower relative hash rate to assume the rate
of "myopic" mining in increasing, producing a feedback loop that makes
other miners think the rate of "myopic" miners is increasing.  (And that
assumes none of the miners is deliberately juking the stats to mislead
its competitors into leaving money on the table.)

By comparison, "myopic" miners don't need to know anything special about
the past.  They can just take the UTXO set, block height, difficulty
target, and last header hash and mine whatever available transactions
will give them the greatest next-block revenue.

In conclusion, I think: 

1. Given that all known Bitcoin miners today are "myopic", there's no
   short-term issue (to be clear, you didn't claim there was).

2. A very large percentage of the hashrate would have to implement
   "rational" mining for the attack to become particularly effective.
   Hopefully, we'd learn about this as it was happening and could adapt
   before it became an issue.

3. So-called rational mining is probably a lot harder to implement
   effectively than just 150 loc in Python; it probably requires a lot
   more careful incentive analysis than just looking at HTLCs.[1]

4. Although I can't offer a proof, my intuition says that "myopic"
   mining is probably very close to optimal in the current subsidy-fee
   regime.  Optimizing transaction selection only for the next block has
   already proven to be quite challenging to both software and protocol
   developers[2] so I can't imagine how much work it would take to build
   something that effectively optimizes for an unbounded future.  In
   short, I think so-called myopic mining might actually be the most
   rational mining we're capable of.

Nevertheless, I think your results are interesting and that MAD-HTLC is
a useful tool that might be particularly desirable in contracts that
involve especially high value or especially short timeouts (perhaps
asset swaps or payment channels used by traders?).  Thank you again for


[1] For example, your paper says "[...] the bribing cost required to
attack HTLC is independent in T, meaning that 

Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-20 Thread David A. Harding via bitcoin-dev
On Sat, Jun 20, 2020 at 10:54:03AM +0200, Bastien TEINTURIER wrote:
> We're simply missing information, so it looks like the only good
> solution is to avoid being in that situation by having a foot in
> miners' mempools.

The problem I have with that approach is that the incentive is to
connect to the highest hashrate pools and ignore the long tail of
smaller pools and solo miners.  If miners realize people are doing this,
they may begin to charge for information about their mempool and the
largest miners will likely be able to charge more money per hashrate
than smaller miners, creating a centralization force by increasing
existing economies of scale.

Worse, information about a node's mempool is partly trusted.  A node can
easily prove what transactions it has, but it can't prove that it
doesn't have a certain transaction.  This implies incumbent pools with a
long record of trustworthy behavior may be able to charge more per
hashrate than newer pools, creating a reputation-based centralizing
force that pushes individual miners towards well-established pools.

This is one reason I suggested using independent pay-to-preimage
transactions[1].  Anyone who knows the preimage can mine the
transaction, so it doesn't provide reputational advantage or direct
economies of scale---pay-to-preimage is incentive equivalent to paying
normal onchain transaction fees.  There is an indirect economy of
scale---attackers are most likely to send the low-feerate
preimage-containing transaction to just the largest pools, so small
miners are unlikely to learn the preimage and thus unlikely to be able
to claim the payment.  However, if the defense is effective, the attack
should rarely happen and so this should not have a significant effect on
mining profitability---unlike monitoring miner mempools which would have
to be done continuously and forever.

ZmnSCPxj noted that pay-to-preimage doesn't work with PTLCs.[2]  I was
hoping one of Bitcoin's several inventive cryptographers would come
along and describe how someone with an adaptor signature could use that
information to create a pubkey that could be put into a transaction with
a second output that OP_RETURN included the serialized adaptor
signature.  The pubkey would be designed to be spendable by anyone with
the final signature in a way that revealed the hidden value to the
pubkey's creator, allowing them to resolve the PTLC.  But if that's
fundamentally not possible, I think we could advocate for making
pay-to-revealed-adaptor-signature possible using something like


> Do you think it's unreasonable to expect at least some LN nodes to
> also invest in running nodes in mining pools, ensuring that they learn
> about attackers' txs and can potentially share discovered preimages
> with the network off-chain (by gossiping preimages found in the
> mempool over LN)?

Ignoring my concerns about mining centralization and from the
perspective of just the Lightning Network, that doesn't sound
unreasonable to me.  But from the perspective of a single LN node, it
might make more sense to get the information and *not* share it,
increasing your security and allowing you to charge lower routing fees
compared to your competitors.  This effect would only be enhanced if
miners charged for their mempool contents (indeed, to maximize their
revenue, miners might require that their mempool subscribers don't share
the information---which they could trivially enforce by occasionally
sending subscribers a preimage specific to the subscriber and seeing if
it propagated to the public network).

> I think that these recent attacks show that we need (at least some)
> off-chain nodes to be somewhat heavily invested in on-chain operations
> (layers can't be fully decoupled with the current security assumptions
> - maybe Eltoo will help change that in the future?).

I don't see how eltoo helps.  Eltoo helps ensure you reach the final
channel state, but this problem involves an abuse of that final state.



Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-19 Thread David A. Harding via bitcoin-dev
On Fri, Jun 19, 2020 at 03:58:46PM -0400, David A. Harding via bitcoin-dev wrote:
> I think you're assuming here that the attacker broadcast a particular
> state.  

Whoops, I managed to confuse myself despite looking at Bastien's
excellent explainer.  The attacker would be broadcasting the latest
state, so the honest counterparty would only need to send one blind
child.  However, the blind child will only be relayed by a Bitcoin peer
if the peer also has the parent transaction (the latest state) and, if
it has the parent transaction, you should be able to just getdata('tx',
$txid) that transaction from the peer without CPFPing anything.  That
will give you the preimage and so you can immediately resolve the HTLC
with the upstream channel.

Revising my conclusion from the previous post:

I think the strongman argument for the attack would be that the attacker
will be able to perform a targeted relay of the low-feerate
preimage-containing transaction to just miners---everyone else on the
network will receive the honest user's higher-feerate expired-timelock
transaction.  Unless the honest user happens to have a connection to a
miner's node, the user will neither be able to CPFP fee bump nor use
getdata to retrieve the preimage.

Sorry for the confusion.



Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-19 Thread David A. Harding via bitcoin-dev
On Fri, Jun 19, 2020 at 09:44:11AM +0200, Bastien TEINTURIER via Lightning-dev wrote:
> The gist is here, and I'd appreciate your feedback if I have wrongly
> interpreted some of the ideas:

Quoted text below is from the gist:

> The trick to protect against a malicious participant that broadcasts a
> low-fee HTLC-success or Remote-HTLC-success transaction is that we can
> always blindly do a CPFP carve-out on them; we know their txid

I think you're assuming here that the attacker broadcast a particular
state.  However, in a channel which potentially had thousands of state
changes, you'd have to broadcast a blind child for each previous state
(or at least each previous state that pays the attacker more than the
latest state).  That's potentially thousands of transactions times
potentially dozens of peers---not impossible, but it seems messy.

I think there's a way to accomplish the same goal for less bandwidth and
zero fees.  The only way your Bitcoin peer will relay your blind child
is if it already has the parent transaction.  If it has the parent, you
can just request it using P2P getdata(type='tx', id=$txid).[1]  You can
batch multiple txid requests together (up to 50,000 IIRC) to minimize
overhead, making the average cost per txid a tiny bit over 36 bytes.
If you receive one of the transactions you request, you can extract the
preimage at no cost to yourself (except bandwidth).  If you don't
receive a transaction, then sending a blind child is hopeless
anyway---your peers won't relay it.
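For concreteness, the "tiny bit over 36 bytes" claim can be checked with a short Python sketch. The 24-byte message header and 36-byte inventory entry (4-byte type + 32-byte hash) are standard P2P framing; the function names are illustrative.

```python
# Sketch: estimate per-txid cost of a batched P2P getdata message.
# Assumes standard Bitcoin P2P framing: 24-byte header, compactSize
# vector count, and one 36-byte inventory entry per txid.

def compact_size_len(n: int) -> int:
    """Length in bytes of a Bitcoin compactSize unsigned int."""
    if n < 0xfd:
        return 1
    if n <= 0xffff:
        return 3
    if n <= 0xffffffff:
        return 5
    return 9

def getdata_bytes(num_txids: int) -> int:
    HEADER = 24      # magic + command + length + checksum
    INV_ENTRY = 36   # 4-byte type + 32-byte txid
    return HEADER + compact_size_len(num_txids) + num_txids * INV_ENTRY

# Batching 50,000 txids amortizes the fixed overhead to a fraction of
# a byte, so the average cost per txid is only slightly above 36 bytes.
per_txid = getdata_bytes(50_000) / 50_000
print(round(per_txid, 6))  # 36.00054
```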

Overall, it's hard for me to guess how effective your proposal would be
at defeating the attack.  I think the strongman argument for the attack
would be that the attacker will be able to perform a targeted relay of
their outdated state to just miners---everyone else on the network
will receive the counterparty's honest final-state close.  Unless the
counterparty happens to have a connection to a miner's node, the
counterparty will neither be able to CPFP fee bump nor use getdata to
retrieve the preimage.

It seems to me it's practical for a motivated attacker to research which
IP addresses belong to miners so that they can target them, whereas
honest users won't practically be able to do that research (and, even if
they could, it would create a centralizing barrier to new miners
entering the market if users focused on maintaining connections to
previously-known miners).


[1] You'd have to be careful to not attempt the getdata too soon after
you think the attacker broadcast their old state, but I think that
only means waiting a single block, which you have to do anyway to
see if the honest final-commitment transaction confirmed.  See


Re: [bitcoin-dev] BIP-341: Committing to all scriptPubKeys in the signature message

2020-05-02 Thread David A. Harding via bitcoin-dev
On Wed, Apr 29, 2020 at 04:57:46PM +0200, Andrew Kozlik via bitcoin-dev wrote:
> In order to ascertain non-ownership of an input which is claimed to be
> external, the wallet needs the scriptPubKey of the previous output spent by
> this input.

A wallet can easily check whether a scriptPubKey contains a specific
pubkey (as in P2PK/P2TR), but I think it's impractical for most wallets
to check whether a scriptPubKey contains any of the possible ~two
billion keys available in a specific BIP32 derivation path (and many
wallets natively support multiple paths).

It would seem to me that checking a list of scriptPubKeys for wallet
matches would require obtaining the BIP32 derivation paths for the
corresponding keys, which would have to be provided by a trusted data
source.  If you trust that source, you could just trust them to tell you
that none of the other inputs belong to your wallet.

Alternatively, there's the scheme described in the email you linked by
Greg Saunders (with the scheme co-attributed to Andrew Poelstra), which
seems reasonable to me.[1]  Its only downside (AFAICT) is that it
requires an extra one-way communication from a signing device to a
coordinator.  For a true offline signer, that can be annoying, but for
an automated hardware wallet participating in coinjoins or LN, that
doesn't seem too burdensome to me.


[1] The scheme could be trivially tweaked to be compatible with BIP322
generic signed messages, which is something that could become widely
adopted (I hope) and so make supporting the scheme easier.


Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-23 Thread David A. Harding via bitcoin-dev
On Wed, Apr 22, 2020 at 03:53:37PM -0700, Matt Corallo wrote:
> if you focus on sending the pinning transaction to miner nodes
> directly (which isn't trivial, but also not nearly as hard as it
> sounds), you could still pull off the attack. 

If the problem is that miners might have information not available to
the network in general, you could just bribe them for that knowledge.
E.g. as Bob's refund deadline approaches and he begins to suspect that
mempool shenanigans are preventing his refund transaction from
confirming, he takes a confirmed P2WPKH UTXO he's been saving for use in
CPFP fee bumps and spends part of its value (say 1 mBTC) to the
following scriptPubKey[1],


Assuming the feerate and the bribe amount are reasonable, any miner who
knows the preimage is incentivized to include Bob's transaction and a
child transaction spending from it in their next block.  That child
transaction will include the preimage, which Bob will see when he
processes the block.

If any non-miner knows the preimage, they can also create that child
transaction.  The non-miner probably can't profit from this---miners can
just rewrite the child transaction to pay themselves since there's no
key-based security---but the non-miner can at least pat themselves on
the back for being a good Samaritan.  Again, Bob will learn the preimage
once the child transaction is included in a block, or earlier if his
wallet is monitoring for relays of spends from his parent transaction.

Moreover, Bob can first create a bribe via LN and, in that case, things
are even better.  As Bob's deadline approaches, he uses one of his
still-working channels to send a bunch of max-length (20 hops?) probes
that reuse the earlier HTLC's payment hash.  If any hop along the path knows
the preimage, they can immediately claim the probe amount (and any
routing fees that were allocated to subsequent hops).  This not only
gives smaller miners with LN nodes an equal chance of claiming the
probe-bribe as larger miners, but it also allows non-miners to profit
from learning the preimage from miners.

That last part is useful because even if, as in your example, the
adversary is able to send one version of the transaction just to miners
(with the preimage) and another conflicting version to all relay nodes
(without the preimage), miners will naturally attempt to relay the
preimage version of the transaction to other users; if some of those
users run modified nodes that write all 32-byte witness data blobs to a
database---even if the transaction is ultimately rejected as a
conflict---then targeted relay to miners may not be effective at
preventing Bob from learning the preimage.

Obviously all of the above requires people run additional software to
keep track of potential preimages[2] and then compare them to hash
candidates, plus it requires additional complexity in LN clients, so I
can easily understand why it might be less desirable than the protocol
changes under discussion in other parts of this thread.  Still, with
lots of effort already being put into watchtowers and other
enforcement-assistance services, I wonder if this problem can be largely
addressed in the same general way.


[1] Requires a change to standard relay and mining policy.
[2] Pretty easy, e.g.

bitcoin-cli getrawmempool \
| jq -r .[] \
| while read txid ; do
  bitcoin-cli getrawtransaction $txid true | jq .vout[].scriptPubKey.asm
done \
| grep -o '\<[0-9a-f]\{64\}\>'
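The comparison step can be sketched in Python: hash each 32-byte candidate blob and look it up in a set of watched payment hashes. The candidate list and watched set here are hypothetical inputs; real code would feed in blobs like those the shell pipeline above extracts.

```python
# Sketch: match 32-byte candidate blobs against watched LN payment
# hashes (LN payment hashes are SHA256 of the preimage).
import hashlib

def find_preimages(candidates, watched_hashes):
    """Map payment_hash -> preimage for any candidate that matches."""
    return {hashlib.sha256(c).digest(): c
            for c in candidates
            if hashlib.sha256(c).digest() in watched_hashes}

preimage = bytes(32)  # stand-in for a real preimage
watched = {hashlib.sha256(preimage).digest()}
found = find_preimages([preimage, b'\x01' * 32], watched)
assert found == {hashlib.sha256(preimage).digest(): preimage}
```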


Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread David A. Harding via bitcoin-dev
On Wed, Apr 22, 2020 at 03:03:29PM -0400, Antoine Riard wrote:
> > In that case, would it be worth re-implementing something like a BIP61
> reject message but with an extension that returns the txids of any
> conflicts?
> That's an interesting idea, but an attacker can create a local conflict in
> your mempool

You don't need a mempool to send a transaction.  You can just open
connections to random Bitcoin nodes directly and try sending your
transaction.  That's what a lite client is going to do anyway.  If the
pinned transaction is in the mempools of a significant number of Bitcoin
nodes, then it should take just a few random connections to find one of
those nodes, learn about the conflict, and download the pinned
transaction.

If that's not acceptable, you could find some other way to poll a
significant number of people with mempools, e.g. BIP35 mempool messages
or reusing the payment hash in a bunch of 1 msat probes to LN nodes who
opt-in to scanning their bitcoind's mempools for a corresponding
preimage.



Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread David A. Harding via bitcoin-dev
On Mon, Apr 20, 2020 at 10:43:14PM -0400, Matt Corallo via Lightning-dev wrote:
> A lightning counterparty (C, who received the HTLC from B, who
> received it from A) today could, if B broadcasts the commitment
> transaction, spend an HTLC using the preimage with a low-fee,
> RBF-disabled transaction.  After a few blocks, A could claim the HTLC
> from B via the timeout mechanism, and then after a few days, C could
> get the HTLC-claiming transaction mined via some out-of-band agreement
> with a small miner. This leaves B short the HTLC value.

IIUC, the main problem is honest Bob will broadcast a transaction
without realizing it conflicts with a pinned transaction that's already
in most node's mempools.  If Bob knew about the pinned transaction and
could get a copy of it, he'd be fine.

In that case, would it be worth re-implementing something like a BIP61
reject message but with an extension that returns the txids of any
conflicts?  For example, when Bob connects to a bunch of Bitcoin nodes
and sends his conflicting transaction, the nodes would reply with
something like "rejected: code 123: conflicts with txid 0123...cdef".
Bob could then reply with a getdata('tx', '0123...cdef') to get the
pinned transaction, parse out its preimage, and resolve the HTLC.

This approach isn't perfect (if it even makes sense at all---I could be
misunderstanding the problem) because one of the problems that caused
BIP61 to be disabled in Bitcoin Core was its unreliability, but I think
if Bob had at least one honest peer that had the pinned transaction in
its mempool and which implemented reject-with-conflicting-txid, Bob
might be ok.



Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread David A. Harding via bitcoin-dev
On Tue, Apr 21, 2020 at 09:13:34PM -0700, Olaoluwa Osuntokun wrote:
> On Mon, Apr 20, 2020 at 10:43:14PM -0400, Matt Corallo via Lightning-dev 
> wrote:
> > While this is somewhat unintuitive, there are any number of good anti-DoS
> > reasons for this, eg:
> None of these really strikes me as "good" reasons for this limitation
> [...]
> In the end, the simplest heuristic (accept the higher fee rate
> package) side steps all these issues and is also the most economically
> rational from a miner's perspective. 

I think it's important to remember that mempool behavior affects not
just miners but also relay nodes.  Miner costs, such as bandwidth usage,
can be directly offset by their earned block rewards, so miners can be
much more tolerant of wasted bandwidth than relay nodes who receive no
direct financial compensation for the processing and relay of
unconfirmed transactions.[1]

> Why would one prefer a higher absolute fee package (which could be
> very large) over another package with a higher total _fee rate_?

To avoid the excessive wasting of bandwidth.  Bitcoin Core's defaults
require each replacement pay a feerate of 10 nBTC/vbyte over an existing
transaction or package, and the defaults also allow transactions or
packages up to 100,000 vbytes in size (~400,000 bytes).  So, without
enforcement of BIP125 rule 3, an attacker starting at the minimum
default relay feerate (also 10 nBTC/vbyte) could do the following:

- Create a ~400,000-byte tx with a feerate of 10 nBTC/vbyte (1 mBTC
  total fee)

- Replace that transaction with 400,000 new bytes at a feerate of 20
  nBTC/vbyte (2 mBTC total fee)

- Perform 998 additional replacements, each increasing the feerate by 10
  nBTC/vbyte and the total fee by 1 mBTC, using a total of 400 megabytes
  (including the original transaction and first replacement) to
  ultimately produce a transaction with a feerate of 10,000 nBTC/vbyte
  (1 BTC total fee)

- Perform one final replacement of the latest 400,000 byte transaction
  with a ~200-byte (~150 vbyte) 1-in, 1-out P2WPKH transaction that pays
  a feerate of 10,010 nBTC/vbyte (1.5 mBTC total fee)

Assuming 50,000 active relay nodes and today's BTC price of ~$7,000
USD/BTC, the above scenario would allow an attacker to waste a
collective 20 terabytes of network bandwidth for a total fee cost of
$10.50.  And, of course, the attacker could run multiple attacks of this
sort in parallel, quickly swamping the network.
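The arithmetic can be reproduced with a quick Python sketch; every constant below is taken from the text (the variable names are mine).

```python
# Sketch: bandwidth and cost figures for the replacement attack above.
tx_bytes = 400_000                    # ~100,000 vbytes at maximum size
versions = 1_000                      # original tx + 999 replacements
per_node = versions * tx_bytes        # bytes each relay node handles
nodes = 50_000                        # assumed active relay nodes
total_bytes = per_node * nodes        # network-wide waste

final_fee_btc = 0.0015                # 1.5 mBTC paid by the final small tx
usd_per_btc = 7_000
cost_usd = final_fee_btc * usd_per_btc

print(per_node // 10**6, "MB per node")          # 400 MB per node
print(total_bytes // 10**12, "TB network-wide")  # 20 TB network-wide
print(round(cost_usd, 2), "USD attacker cost")   # 10.5 USD attacker cost
```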

To use the above concrete example to repeat the point made at the
beginning of this email: miners might be willing to accept the waste of
400 MB of bandwidth in order to gain a $10.50 fee, but I think very few
relay nodes could function for long under an onslaught of such behavior.


[1] The reward to relay nodes of maintaining the public relay network is
that it helps protect against miner centralization.  If there was no
public relay network, users would need to submit transactions
directly to miners or via a privately-controlled relay network.
Users desiring timely confirmation (and operators of private relay
networks) would have a large incentive to get transactions to the
largest miners but only a small incentive to get the transaction to
the smaller miners, increasing the economies of scale in mining and
furthering centralization.

Although users of Bitcoin benefit by reducing mining centralization
pressure, I don't think we can expect most users to be willing to
bear large costs in defense of benefits which are largely intangible
(until they're gone), so we must try to keep the cost of operating a
relay node within a reasonable margin of the cost of operating a
minimal-bandwidth blocks-only node.


Re: [bitcoin-dev] Statechain implementations

2020-03-31 Thread David A. Harding via bitcoin-dev
On Wed, Mar 25, 2020 at 01:52:10PM +, Tom Trevethan via bitcoin-dev wrote:
> Hi all,
> We are starting to work on an implementation of the statechains concept (
> [...]
> There are two main modifications we are looking at:
> [...]
> 2. Replacing the 2-of-2 multisig output (paying to statechain entity SE key
> and transitory key) with a single P2(W)PKH output where the public key
> shared between the SE and the current owner. The SE and the current owner
> can then sign with a 2-of-2 ECDSA MPC. 

Dr. Trevethan,

Would you be able to explain how your proposal to use statechains with
2P-ECDSA relates to your patent assigned to nChain Holdings for "Secure
off-chain blockchain transactions"?[1]  


Here are some excerpts from the application that caught my attention in
the context of statechains in general and your proposal to this list in
particular:

> an exchange platform that is trusted to implement and operate the
> transaction protocol, without requiring an on-chain transaction. The
> off-chain transactions enable one computer system to generate multiple
> transactions that are recordable to a blockchain in different
> circumstances
> [...]
> at least some of the off-chain transactions are valid for recording on
> the blockchain even in the event of a catastrophic failure of the
> exchange (e.g., exchange going permanently off-line or loosing key
> shares).
> [...]
> there may be provided a computer readable storage medium including a
> two-party elliptic curve digital signature algorithm (two-party ECDSA)
> script comprising computer executable instructions which, when
> executed, configure a processor to perform functions of a two-party
> elliptic curve digital signature algorithm described herein.
> [...]
> In this instance the malicious actor would then also have to collude
> with a previous owner of the funds to recreate the full key. Because
> an attack requires either the simultaneous theft of both exchange and
> depositor keys or collusion with previous legitimate owners of funds,
> the opportunities for a malicious attacker to compromise the exchange
> platform are limited.

Thank you,



Re: [bitcoin-dev] Block solving slowdown question/poll

2020-03-22 Thread David A. Harding via bitcoin-dev
On Sat, Mar 21, 2020 at 11:40:24AM -0700, Dave Scotese via bitcoin-dev wrote:
> [Imagine] we also see mining power dropping off at a rate that
> suggests the few days [until retarget] might become a few weeks, and
> then, possibly, a few months or even the unthinkable, a few eons.  I'm
> curious to know if anyone has ideas on how this might be handled

There are only two practical solutions I'm aware of:

1. Do nothing
2. Hard fork a difficulty reduction

If bitcoins retain even a small fraction of their value compared to the
previous retarget period and if most mining equipment is still available
for operation, then doing nothing is probably the best choice---as block
space becomes scarcer, transaction feerates will increase and miners
will be incentivized to increase their block production rate.

If the bitcoin price has plummeted more than, say, 99% in two weeks
with no hope of short-term recovery or if a large fraction of mining
equipment has become unusable (again, say, 99% in two weeks with no
hope of short-term recovery), then it's probably worth Bitcoin users
discussing a hard fork to reduce difficulty to a currently sustainable
level.

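To make the "few weeks/months" scenario concrete, here's a small Python sketch of how long the remaining retarget period would take after a hashrate drop, assuming difficulty was previously tuned for 10-minute blocks. The function and its parameters are illustrative.

```python
# Sketch: expected time to finish a 2016-block retarget period if
# difficulty was calibrated for 10-minute blocks but hashrate dropped.
def retarget_days(hashrate_fraction: float, blocks_left: int = 2016) -> float:
    """hashrate_fraction: remaining hashrate as a fraction of the
    hashrate the current difficulty was calibrated for."""
    minutes = blocks_left * 10 / hashrate_fraction
    return minutes / (60 * 24)

print(round(retarget_days(1.00)))  # 14   (the normal two weeks)
print(round(retarget_days(0.01)))  # 1400 (~4 years after a 99% drop)
```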


Re: [bitcoin-dev] Taproot (and graftroot) complexity (reflowed)

2020-02-14 Thread David A. Harding via bitcoin-dev
On Fri, Feb 14, 2020 at 12:07:15PM -0800, Jeremy via bitcoin-dev wrote:
> Is the same if Schnorr + Merkle Branch without Taproot optimization, unless
> I'm missing something in one of the cases? 

That's fair.  However, it's only true if everyone constructs their
merkle tree in the same way, with a single `<pubkey> OP_CHECKSIG` as
one of the top leaves.   Taproot effectively standardizes the position
of the all-parties-agree condition and so its anonymity set may contain
spends from scripts whose creators buried or excluded the all-agree
option because they didn't think it was likely to be used.

More importantly, there's no incentive for pure single-sig users to use a
merkle tree, since that would make both the scriptPubKey and the witness
data larger for them than just continuing to use v0 segwit P2WPKH.
Given that single-sig users represent a majority of transactions at
present (see AJ Towns's previous email in this thread), I think we
really want to make it as convenient as possible for them to participate
in the anonymity set.

(To be fair, taproot scriptPubKeys are also larger than P2WPKH
scriptPubKeys, but its witness data is considerably smaller, giving
receivers an incentive to demand P2TR payments even if spenders don't
like paying the extra 12 vbytes per output.)

Rough sums:

- P2WPKH scriptpubkey (22.00 vbytes): `OP_0 PUSH20 <pubkey-hash>`
- P2WPKH witness data (26.75): `size(72) <signature>, size(33) <pubkey>`
- P2TR scriptpubkey (34.00): `OP_1 PUSH32 <tweaked-pubkey>`
- P2TR witness data (16.25): `size(64) <signature>`
- BIP116 MBV P2WSH scriptpubkey (34.00): `OP_0 PUSH32 <script-hash>`
- BIP116 MBV P2WSH witness data (42.00): `size(64) <signature>, size(32)
  <pubkey>, size(32) <merkle-proof>, size(36) <witness script with PUSH32
  <merkle-root>>`

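Those figures follow from segwit weight accounting: scriptPubKey bytes count at full weight (1 vbyte each) while witness bytes count at a quarter weight, and each witness element carries a 1-byte size prefix at these lengths. A Python sketch (element sizes from the list above):

```python
# Sketch: reproduce the vbyte figures in the rough sums above.
def witness_vbytes(element_sizes):
    """vbytes of a witness stack: (1-byte size prefix + element) / 4."""
    return sum(1 + s for s in element_sizes) / 4

assert witness_vbytes([72, 33]) == 26.75          # P2WPKH: sig, pubkey
assert witness_vbytes([64]) == 16.25              # P2TR: schnorr sig
assert witness_vbytes([64, 32, 32, 36]) == 42.00  # BIP116 MBV example

# scriptPubKeys are non-witness data: 1 byte = 1 vbyte.
p2wpkh_spk = 1 + 1 + 20  # OP_0, push-20 opcode, 20-byte hash = 22
p2tr_spk = 1 + 1 + 32    # OP_1, push-32 opcode, 32-byte key = 34
print(p2wpkh_spk, p2tr_spk)  # 22 34
```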

P.S. I think this branch of the thread is just rehashing points that
 were originally covered over two years ago and which haven't really
 changed since then.  E.g.:


Re: [bitcoin-dev] Taproot (and graftroot) complexity (reflowed)

2020-02-09 Thread David A. Harding via bitcoin-dev
On Sun, Feb 09, 2020 at 02:47:29PM -0600, Anon via Bryan Bishop via bitcoin-dev wrote:
> 1) Is Taproot actually more private than bare MAST and Schnorr separately?


> What are the actual anonymity set benefits compared to doing the separately?

When schnorr and taproot are done together, all of the following
transaction types can be part of the same set:

- single-sig spends (similar to current use of P2PKH and P2WPKH)

- n-of-n spends with musig or equivalent (similar to current use of
  P2SH and P2WSH 2-of-2 multisig without special features as used by
  Blockstream Green and LN mutual closes)

- k-of-n (for low values of n) using the most common k signers
  (similar to BitGo-style 2-of-3 where the keys involved are
  alice_hot, alice_cold, and bob_hot and almost all transactions are
  expected to be signed by {alice_hot, bob_hot}; that common case
  can be the key-path spend and the alternatives {alice_hot,
  alice_cold} and {alice_cold, bob_hot} can be script-path spends)

- contract protocols that can sometimes result in all parties
  agreeing on an outcome (similar to LN mutual closes, cross-chain
  atomic swaps, and same-chain coinswaps)

The four cases above represent an overwhelming percentage of the spends
seen on the block chain today and throughout Bitcoin's entire history to
date, so optimizing to include them in the anonymity set presents a huge
benefit.

> 2) Is Taproot actually cheaper than bare MAST and Schnorr separately? 

Earlier in y'alls email, you claim that the difference between the two
approaches for a particular example is 67 bytes.  I haven't checked that
calculation, but it seems you're talking entirely about bytes that could
appear in the witness data and so would only represent 16.75 vbytes.
Compare that to the size of the other elements which would need to be
part of a typical input:

- (36 vbytes) outpoint
- (1) scriptSig compactSize uint
- (4) nSequence 
- (16.25) schnorr signature (includes size byte)

That's 57.25 vbytes exclusive of your example data or 74.00 vbytes
inclusive.  That means the overhead you're concerned about adds only
about 23% to the size of the input (or 30% on an exclusive basis).
That's definitely worth considering optimizations for, but I'm
personally ok with requiring users of advanced scripts (who can't manage
to produce mutual closes) pay an extra 23% for their inputs in order to
allow the creation of the large anonymity set described above for all
the other cases.
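The input accounting above can be restated as a short Python sketch (witness bytes divided by 4 to get vbytes; all constants are from the email):

```python
# Sketch: per-input vbyte accounting for a P2TR spend.
outpoint = 36.0       # txid + output index
script_sig = 1.0      # empty scriptSig compactSize
n_sequence = 4.0
schnorr_sig = 65 / 4  # 16.25 vbytes, includes the size byte

exclusive = outpoint + script_sig + n_sequence + schnorr_sig
script_path_extra = 67 / 4  # the example's 67 extra witness bytes
inclusive = exclusive + script_path_extra

print(exclusive, inclusive)                     # 57.25 74.0
print(round(script_path_extra / inclusive, 2))  # 0.23
print(round(script_path_extra / exclusive, 2))  # 0.29
```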

If, subsequent to deployment, large numbers of users do end up using
taproot script-path spends and we want to make things more fair, we can
even out the weighting, perhaps by simply increasing the weight of
key-path spends by 16.75 vbytes (though that would, of course,
proportionally lower the capacity of the block chain).  As mentioned in
a separate email by Matt Corallo, it seems worthwhile to optimize for
the case where script-path spenders are encouraged to look for
mutually-agreed contract resolutions in order to both minimize block
chain use and increase the size of the anonymity set.

> What evidence do we have that the assumption it will be more common to
> use Taproot with a key will outweigh Script cases?

The evidence that current users of single-sig, n-of-n, and k-of-n (for
small n) with a default k-set, and mutual-agreed contract protocol
outcomes vastly outweigh all other transaction inputs today and for all
of Bitcoin's history to date.



Re: [bitcoin-dev] Onchain fee insurance mechanism

2020-01-31 Thread David A. Harding via bitcoin-dev
On Fri, Jan 31, 2020 at 03:42:08AM +, ZmnSCPxj via bitcoin-dev wrote:
> Let me then propose a specific mechanism for feerate insurance against 
> onchain feerate spikes.
> [...]
> At current blockheight B, Alice and Ingrid then arrange a series of 
> transactions:
> nLockTime: B+1
> nSequence: RBF enabled, no relative locktime.
> inputs: Alice 500, Ingrid 80
> outputs:
> Bob 40
> Alice 99400
> Ingrid 800400
> fee: 200
> [...]

Ingrid is able to rescind this series of pre-signed transactions at any
time before one of the transactions is confirmed by double spending her
UTXO (e.g. via a RBF fee bump).  If Alice needs to trust Ingrid to honor
the contract anyway, they might as well not include Ingrid's input or
output in the transaction and instead use an external accounting and
payment mechanism.  For example, Alice and Ingrid agree to a fee
schedule:

> height: B+1
> fee: 200
> height: B+2
> fee: 400
> height: B+3
> fee: 599
> height: B+4
> fee: 3600

Then they wait for whichever version of the transaction to confirm and
one of them remits to the other the appropriate amount (either 400, 200,
or 1 base unit to Ingrid, or 3,000 base units to Alice).  This
remittance can be done by whatever mechanism they both support (e.g. an
onchain transaction, an LN payment, or just credit on an exchange).
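The settlement logic can be sketched in Python. The per-height fees come from the quoted table; the 600-base-unit strike price is my inference from the remittance amounts in the text (600-200=400, 600-400=200, 600-599=1, 3600-600=3000), not part of the original proposal.

```python
# Sketch: out-of-band settlement of the fee hedge.
fees_by_height = {1: 200, 2: 400, 3: 599, 4: 3600}  # heights B+1..B+4
STRIKE = 600  # assumed fee Alice locked in (inferred, see above)

def settlement(confirm_height: int) -> int:
    """Positive: Alice remits to Ingrid; negative: Ingrid remits to Alice."""
    return STRIKE - fees_by_height[confirm_height]

assert [settlement(h) for h in (1, 2, 3, 4)] == [400, 200, 1, -3000]
```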

Since it's possible to achieve equivalent security (or lack thereof)
without the locktime mechanism, I don't think the locktime mechanism
adds anything to the idea of hedging fees---and, as you note, it suffers
from incompatibility with some cases where users would be especially
eager to obtain feerate insurance.



Re: [bitcoin-dev] Bech32 weakness and impact on bip-taproot addresses

2019-11-07 Thread David A. Harding via bitcoin-dev
On Thu, Nov 07, 2019 at 02:35:42PM -0800, Pieter Wuille via bitcoin-dev wrote:
> In the current draft, witness v1 outputs of length other
> than 32 remain unencumbered, which means that for now such an
> insertion or erasure would result in an output that can be spent by
> anyone. If that is considered unacceptable, it could be prevented by
> for example outlawing v1 witness outputs of length 31 and 33.

Either a consensus rule or a standardness rule[1] would require anyone
using a bech32 library supporting v1+ segwit to upgrade their library.
Otherwise, users of old libraries will still attempt to pay v1 witness
outputs of length 31 or 33, causing their transactions to get rejected
by newer nodes or get stuck on older nodes.  This is basically the
problem #15846[2] was meant to prevent.

If we're going to need everyone to upgrade their bech32 libraries
anyway, I think it's probably best that the problem is fixed in the
bech32 algorithm rather than at the consensus/standardness layer.
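The consensus/standardness-layer option can be sketched as a simple length check. This is a hypothetical rule matching Pieter's example (outlawing v1 programs of length 31 or 33), not what was ultimately deployed; the 2-40 and v0 bounds are the existing BIP141 rules.

```python
# Sketch: a standardness-style check that would reject v1 witness
# programs whose length could result from bech32 insertion/erasure.
def is_acceptable_witness_program(version: int, program: bytes) -> bool:
    if not 2 <= len(program) <= 40:
        return False  # BIP141 bounds for any witness program
    if version == 0 and len(program) not in (20, 32):
        return False  # v0 must be P2WPKH or P2WSH
    if version == 1 and len(program) in (31, 33):
        return False  # hypothetical rule blocking the mutation
    return True

assert is_acceptable_witness_program(1, bytes(32))
assert not is_acceptable_witness_program(1, bytes(33))
assert not is_acceptable_witness_program(0, bytes(31))
```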



P.S. My thanks as well to the people who asked the question during
 review that led to this discussion:

Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-28 Thread David A. Harding via bitcoin-dev
On Mon, Oct 28, 2019 at 10:45:39AM +0100, Johan Torås Halseth wrote:
> Relay cost is the obvious problem with just naively removing all limits.
> Relaxing the current rules by allowing to add a child to each output as
> long as it has a single unconfirmed parent would still only allow free
> relay of O(size of parent) extra data (which might not be that bad? Similar
> to the carve-out rule we could put limits on the child size). 

A parent transaction near the limit of 100,000 vbytes could have almost
10,000 outputs paying OP_TRUE (10 vbytes per output).  If the children
were limited to 10,000 vbytes each (the current max carve-out size),
that allows relaying 100 mega-vbytes or nearly 400 MB data size (larger
than the default maximum mempool size in Bitcoin Core).

As Matt noted in discussion on #lightning-dev about this issue, it's
possible to increase second-child carve-out to nth-child carve-out but
we'd need to be careful about choosing an appropriately low value for n.

For example, BOLT2 limits the number of HTLCs to 483 on each side of the
channel (so 966 + 2 outputs total), which means the worst case free
relay to support the current LN protocol would be approximately:

(100,000 + 968 * 10,000) * 4 = ~39 MB
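Both relay estimates follow the same pattern (vbytes times 4 to get worst-case raw bytes). A Python sketch, assuming the 100,000-vbyte parent and 10,000-vbyte per-child carve-out limit from the earlier paragraph:

```python
# Sketch: worst-case free relay under an nth-child carve-out.
BYTES_PER_VBYTE = 4  # maximum witness discount

parent_vb, child_vb = 100_000, 10_000

# Unrestricted case: max-size parent stuffed with ~10-vbyte OP_TRUE
# outputs, each spent by a child at the carve-out size limit.
n_children = parent_vb // 10  # ~10,000 outputs
unrestricted = (parent_vb + n_children * child_vb) * BYTES_PER_VBYTE

# BOLT2-limited case: 966 HTLC outputs + 2 balance outputs = 968 children.
bolt2 = (parent_vb + 968 * child_vb) * BYTES_PER_VBYTE

print(unrestricted // 10**6, "MB")  # 400 MB
print(bolt2 // 10**6, "MB")         # 39 MB
```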

Even if the mempool was empty (as it sometimes is these days), it would
only cost an attacker about 1.5 BTC to fill it at the default minimum
relay feerate[1] so that they could execute this attack at the minimal
cost per iteration of paying for a few hundred or a few thousand vbytes
at slightly higher than the current mempool minimum fee.

Instead, with the existing rules (including second-child carve-out),
they'd have to iterate (39 MB / 400 kB = ~100) times more often to
achieve an equivalent waste of bandwidth, costing them proportionally
more in fees.

So, I think these rough numbers clearly back what Matt said about us
being able to raise the limits a bit if we need to, but that we have to
be careful not to raise them so far that attackers can make it
significantly more bandwidth expensive for people to run relaying full
nodes.


[1] Several developers are working on lowering the default minimum in
Bitcoin Core, which would of course make this attack proportionally
cheaper.

Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-27 Thread David A. Harding via bitcoin-dev
On Thu, Oct 24, 2019 at 03:49:09PM +0200, Johan Torås Halseth wrote:
> [...] what about letting the rule be
> The last transaction which is added to a package of dependent
> transactions in the mempool must:
>   * Have no more than one unconfirmed parent.
> [... subsequent email ...]
> I realize these limits are there for a reason though, but I'm wondering if
> we could relax them.


I'm not sure any of the other replies to this thread addressed your
request for a reason behind the limits related to your proposal, so I
thought I'd point out that---subsequent to your posting here---a
document[1] was added to the Bitcoin Core developer wiki that I think
describes the risk of the approach you proposed:

> Free relay attack:
> - Create a low feerate transaction T.
> - Send zillions of child transactions that are slightly higher feerate
>   than T until mempool is full.
> - Create one small transaction with feerate just higher than T’s, and
>   watch T and all its children get evicted. Total fees in mempool drops
> - Attacker just relayed (say) 300MB of data across the whole network
>   but only pays small feerate on one small transaction.

The document goes on to describe at a high level how Bitcoin Core
attempts to mitigate this problem as well as other ways it tries to
optimize the mempool in order to maximize miner profit (and so ensure
that miners continue to use public transaction relay).

I hope that's helpful to you and to others in both understanding the
current state and in thinking about ways in which it might be improved.


Content adapted from slides by Suhas Daftuar, uploaded and formatted
by Gregory Sanders and Marco Falke.


Re: [bitcoin-dev] Draft BIP for SNICKER

2019-10-20 Thread David A. Harding via bitcoin-dev
On Sun, Oct 20, 2019 at 12:29:25AM +, SomberNight via bitcoin-dev wrote:
> waxwing, ThomasV, and I recently had a discussion about implementing
> SNICKER in Electrum; specifically the "Receiver" role. 

That'd be awesome!

> As the referenced section [0] explains, the "Receiver" can restore
> from seed, and assuming he knows he needs to do extra scanning steps
> (e.g. via a seed version that signals SNICKER support), he can find
> and regain access to his SNICKER outputs. However, to calculate `c` he
> needs access to his private keys, as it is the ECDH of one of the
> Receiver's pubkeys and one of the Proposer's pubkeys.
> This means the proposed scheme is fundamentally incompatible with
> watch-only wallets.
> [0] 

Your logic seems correct for the watching half of the wallet, but I
think it's ok to consider requiring interaction with the cold wallet.
Let's look at the recovery procedure from the SNICKER documentation
that you kindly cited:

1. Derive all regular addresses normally (doable watch-only for
wallets using public BIP32 derivation)

2. Find all transactions spending an output for each of those
addresses.  Determine whether the spend looks like a SNICKER
coinjoin (e.g. "two equal-[value] outputs").  (doable watch-only)

3. "For each of those transactions, check, for each of the two equal
sized outputs, whether one destination address can be regenerated
from by taking c found in the method described above" (not doable
watch only; requires private keys)
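Step 2's watch-only heuristic can be sketched in a few lines of Python. Transactions are represented here simply as lists of output values in satoshis; real wallet code would inspect parsed transactions, and the function name is illustrative.

```python
# Sketch: flag a transaction as a SNICKER coinjoin candidate if it has
# (at least) two equal-value outputs.
from collections import Counter

def looks_like_snicker(output_values):
    counts = Counter(output_values)
    return any(n >= 2 for n in counts.values())

assert looks_like_snicker([500_000, 500_000, 123_456])
assert not looks_like_snicker([500_000, 400_000, 123_456])
```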

I'd expect the set of candidate transactions produced in step #2 to be
pretty small and probably with no false positives for users not
participating in SNICKER coinjoins or doing lots of payment batching.
That means, if any SNICKER candidates were found by a watch-only wallet,
they could be compactly bundled up and the user could be encouraged to
copy them to the corresponding cold wallet using the same means used for
PSBTs (e.g. USB drive, QR codes, etc).  You wouldn't even need the whole
transactions, just the BIP32 index of the user's key, the pubkey of the
suspected proposer, and a checksum of the resultant address.
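The step #2 heuristic and the compact candidate bundle might be sketched as follows (all names and fields here are illustrative, not part of the SNICKER spec):

```python
from collections import namedtuple

# One record per SNICKER candidate for the cold wallet to check.
# Field names are illustrative; no standard serialization exists.
Candidate = namedtuple("Candidate",
                       ["bip32_index",      # index of our key in the wallet
                        "proposer_pubkey",  # 33-byte pubkey of suspected proposer
                        "addr_checksum"])   # checksum of the candidate address

def looks_like_snicker(output_values):
    """Heuristic from step #2: a SNICKER coinjoin has (at least)
    two equal-value outputs."""
    seen = set()
    for v in output_values:
        if v in seen:
            return True
        seen.add(v)
    return False

# A coinjoin with two equal outputs matches the heuristic:
assert looks_like_snicker([50_000, 50_000, 12_345])
# An ordinary 1-in/2-out payment usually does not:
assert not looks_like_snicker([90_000, 9_500])

cand = Candidate(bip32_index=7,
                 proposer_pubkey=b"\x02" + bytes(32),
                 addr_checksum=b"\x00" * 4)
```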

The cold wallet could then perform step #3 using its private keys and
return a file/QRcode/whatever to the hot wallet telling it any shared
secrets it found.

This process may need to be repeated several times if an output created
by one SNICKER round is spent in a subsequent SNICKER round.  This can be
addressed by simply refusing to participate in chains of SNICKER
transactions or by refusing to participate in chains of SNICKERs more
than n long (requiring a maximum of n rounds of recovery).  It could also be
addressed by the watching-only wallet looking ahead at the block chain a
bit in order to grab SNICKER-like child and grandchild transactions of
our SNICKER candidates and sending them also to the cold wallet for
attempted shared secret recovery.

The SNICKER recovery process is, of course, only required for wallet
recovery and not normal wallet use, so I don't think a small amount of
round-trip communication between the hot wallet and the cold wallet is
too much to ask---especially since anyone using SNICKER with a
watching-only wallet must be regularly interacting with their cold
wallet anyway to sign the coinjoins.


Re: [bitcoin-dev] Removal of reject network messages from Bitcoin Core (BIP61)

2019-10-18 Thread David A. Harding via bitcoin-dev
On Thu, Oct 17, 2019 at 01:16:47PM -0700, Eric Voskuil via bitcoin-dev wrote:
> As this is a P2P protocol change it should be exposed as a version
> increment (and a BIP) [...]
> BIP61 is explicit:
> “All implementations of the P2P protocol version 70,002 and later
> should support the reject message.“

I don't think a new BIP or a version number increment is necessary.

1. "Should support" isn't the same as "must support".  See ; by that reading,
   implementations with protocol versions above 70,002 are not required
   to support the reject message.

2. If you don't implement a BIP, as Bitcoin Core explicitly doesn't any
   more for BIP61[1], you're not bound by its conditions.


[1]  "BIP61
[...] Support was removed in v0.20.0"

Re: [bitcoin-dev] Chain width expansion

2019-10-04 Thread David A. Harding via bitcoin-dev
On Thu, Oct 03, 2019 at 05:38:36PM -0700, Braydon Fuller via bitcoin-dev wrote:
> This paper describes a solution [to DoS attacks] that does not
> require enabling or maintaining checkpoints and provides improved security.
> [...] 
> The paper is available at:

Hi Braydon,

Thank you for researching this important issue.  An alternative solution
proposed some time ago (I believe originally by Gregory Maxwell) was a
soft fork to raise the minimum difficulty.  You can find discussion of
it in various old IRC conversations[1,2] as well as in related changes
to Bitcoin Core such as PR #9053 adding minimum chain work[3] and the
assumed-valid change added in Bitcoin Core 0.14.0[4].


The solutions proposed in section 4.2 and 4.3 of your paper have the
advantage of not requiring any consensus changes.  However, I find it
hard to analyze the full consequences of the throttling solution in
4.3 and the pruning solution in 4.2.  If we assume a node is on the
most-PoW valid chain and that a huge fork is unlikely, it seems fine.
But I worry that the mechanisms could also be used to keep a node that
synced to a long-but-lower-PoW chain on that false chain (or other false
chain) indefinitely even if it had connections to honest peers that
tried to tell it about the most-PoW chain.

For example, with your maximum throttle of 5 seconds between
`getheaders` requests and the `headers` P2P message maximum of 2,000
headers per instance, it would take about half an hour to get a full
chain worth of headers.  If a peer was disconnected before sending
enough headers to establish they were on the most-PoW chain, your
pruning solution would delete whatever progress was made, forcing the
next peer to start from genesis and taking them at least half an hour
too.  On frequently-suspended laptops or poor connections, it's possible
a node could be operational for a long time before it kept the same
connection open for half an hour.  All that time, it would be on a
dishonest chain.
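Under those assumptions (a 5-second throttle, 2,000 headers per reply, and roughly 600,000 headers of chain history at the time of writing), the half-hour figure checks out:

```python
CHAIN_HEIGHT = 600_000    # approximate chain height at writing (assumption)
HEADERS_PER_MSG = 2_000   # max headers per `headers` P2P message
THROTTLE_SECONDS = 5      # proposed max delay between getheaders requests

rounds = -(-CHAIN_HEIGHT // HEADERS_PER_MSG)   # ceiling division: 300 rounds
sync_seconds = rounds * THROTTLE_SECONDS       # 1,500 seconds
print(sync_seconds / 60)                       # 25.0 minutes, i.e. about half an hour
```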

By comparison, I find it easy to analyze the effect of raising the
minimum difficulty.  It is a change to the consensus rules, so it's
something we should be careful about, but it's the kind of
basically-one-line change that I expect should be easy for a large
number of people to review directly.  Assuming the choice of a new
minimum (and what point in the chain to use it) is sane, I think it
would be easy to get acceptance, and I think it would further be easy
to increase it again every five years or so as overall hashrate increases.


Re: [bitcoin-dev] PoW fraud proofs without a soft fork

2019-09-16 Thread David A. Harding via bitcoin-dev
On Sun, Sep 08, 2019 at 05:39:28AM +0200, Ruben Somsen via bitcoin-dev wrote:
> After looking more deeply into Tadge Dryja’s utreexo work [0], it has
> become clear to me that this opens up a way to implement PoW fraud
> proofs [1] without a soft fork. 

This is a nifty idea.

> [...] you’d need to download:
> [...]
> 3. the utreexo merkle proofs which prove that all inputs of N+1 are
> part of the UTXO set (~1MB)

I think "~1 MB" is probably a reasonable estimate for the average case
but not for the worst case.  To allow verification of the spends in
block N+1, each UTXO entry must contain its entire scriptPubKey.  I
believe the current consensus rules allow scriptPubKeys to be up to
10,000 bytes in size.  A specially-constructed block can contain a bit
more than 20,000 inputs, making the worst case size of just the UTXO
entries that need to be communicated over 200 MB.
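A back-of-the-envelope check of that worst case (a sketch; both constants are the consensus-level figures cited above):

```python
MAX_SCRIPTPUBKEY_BYTES = 10_000   # current consensus limit on scriptPubKey size
WORST_CASE_INPUTS = 20_000        # roughly the most inputs a block can contain

# Each spent UTXO entry must carry its full scriptPubKey, so a
# specially-constructed block forces this much UTXO data alone:
utxo_data_bytes = MAX_SCRIPTPUBKEY_BYTES * WORST_CASE_INPUTS
print(utxo_data_bytes / 1_000_000)   # 200.0 MB
```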

> If it turns out that one of your peers disagrees on what the correct
> hash is, you find the last utreexo hash where that peer still agreed,
> let’s say block M, and you simply execute the same three steps to find
> out which peer is wrong

I think this also expands to a worst-case of over 200 MB.  A lying peer
will only be able to get you on one of these checks, so it's 200 MB per
lying peer.  For an honest peer communicating valid blocks, the worst
case is that they'll need to communicate both of these state
transitions, so over 400 MB.  That could be a bandwidth-wasting DoS
attack on honest listening nodes if there were a large number of SPV
clients using this type of fraud proof.

Additionally, each node capable of providing fraud proofs will need to
persistently store the state transition proof for each new block.  I
assume this is equal to the block undo data currently stored by archival
full nodes plus the utreexo partial merkle branches.

This data would probably not be stored by pruned nodes, at least not
beyond their prune depth, even for pruned nodes that use utreexo.  That
would mean this system will only work with archival full nodes with an
extra "index" containing the utreexo partial merkle branches, or it will
require querying utreexo bridge nodes.

Given that both of those would require significant additional system
resources beyond the minimum required to operate a full node, such nodes
might be rare and so make it relatively easy to eclipse attack an SPV
client depending on these proofs.

Finally, this system depends on SPV clients implementing all the same
consensus checks that full nodes can currently perform.  Given that most
SPV clients I'm aware of today don't even perform the full range of
checks it's possible to run on block headers, I have serious doubts that
many (or any) SPV clients will actually implement full verification.  On
top of that, each client must implement those checks perfectly or they
could be tricked into a chainsplit the same as a full node that follows
different rules than the economic consensus.

> [1] Improving SPV security with PoW fraud proofs:

One thing I didn't like in your original proposal---which I apologize
for keeping to myself---is that the SPV client will accept confirmations
on the bad chain until a fork is produced.  Even a miner with a minority
of the hash rate will sometimes be able to produce a 6-block chain before
the remaining miners produce a single block.  In that case, SPV clients
with a single dishonest peer in collusion with the miner will accept any
transactions in the first block of that chain as having six
confirmations.  That's the same as it is today, but today SPV users
don't think fraud proofs help keep them secure.

I think that, if we wanted to widely deploy fraud proofs depending on
forks as a signal, we'd have to also retrain SPV users to wait for much
higher confirmation counts before accepting transactions as reasonably


Re: [bitcoin-dev] Improving JoinMarket's resistance to sybil attacks using fidelity bonds

2019-07-31 Thread David A. Harding via bitcoin-dev
On Tue, Jul 30, 2019 at 10:27:17PM +0100, Chris Belcher wrote:
> And any ECC-alternative or hash-function-alternative fork will
> probably take a couple of months to be designed, implemented and
> deployed as well, giving a chance for lockers to move coins.

Probably.  A stronger form of my argument would apply to single-wallet
(or wallet library) problems of the type we see with depressing
regularity, such as reused nonces, weak nonces, brainwallets, and weak
HD seeds.  In some cases, this leads directly to theft and loss---but in
others, the problem is detected by a friendly party and funds can be
moved to a secure address before the problem is publicly disclosed and
attackers try to exploit it themselves.

If funds are timelocked, there's a greater chance that the issue will
become publicly known and easily exploitable while the funds are
inaccessible.  Then, at the time the lock expires, it'll become a race
between attackers and the coin owner to see who can get a spending
transaction confirmed first.

> This scheme could be attacked using address reuse. An attacker could
> create an aged coin on a heavily-reused address, which would force an
> SPV client using this scheme to download all the blocks which contain
> this reused address which could result in many gigabytes of extra
> download requirement.

Good point.  There's also the case that some Electrum-style indexers
don't index more than a certain number of outputs sent to the same
address.  E.g., I believe Electrs[1] stops indexing by default after 100
outputs to the same address.


> So to fix this: a condition for aged coins is that their address has not
> been reused, if the coin is on a reused address then the value of the
> fidelity bond becomes zero.

I don't think that works.  If Bob sends 100 BTC to bc1foo and then uses
that UTXO as his fidelity bond, Mallory can subsequently send some dust
to bc1foo to invalidate Bob's bond.

To use compact block filters in a way that prevents spamming, I think
we'd need a different filter type that allowed you to filter by


Re: [bitcoin-dev] Improving JoinMarket's resistance to sybil attacks using fidelity bonds

2019-07-27 Thread David A. Harding via bitcoin-dev
On Thu, Jul 25, 2019 at 12:47:54PM +0100, Chris Belcher via bitcoin-dev wrote:
> A way to create a fidelity bond is to burn an amount of bitcoins by
> sending to a OP_RETURN output. Another kind is time-locked addresses
> created using OP_CHECKLOCKTIMEVERIFY where the valuable thing being
> sacrificed is time rather than money, but the two are related because of
> the time-value-of-money.

Timelocking bitcoins, especially for long periods, carries some special
risks in Bitcoin:

1. Inability to sell fork coins, also creating an inability to influence
the price signals that help determine the outcome of chainsplits.

2. Possible inability to transition to new security mechanisms if
a major weakness is discovered in ECC or a hash function.

An alternative to timelocks might be coin age---the value of a UTXO
multiplied by the time since that UTXO was confirmed.  Coin age may be
even harder for an attacker to acquire given that it is a measure of
past patience rather than future sacrifice.  It also doesn't require
using any particular script and so is flexible no matter what policy the
coin owner wants to use (especially if proof-of-funds signatures are
generated using something like BIP322).
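A minimal sketch of that coin-age measure, computed from gettxout-style data (field names follow Bitcoin Core's gettxout output; the value-times-confirmations weighting is the one described above):

```python
def coin_age(utxo):
    """Coin age = UTXO value multiplied by time held, approximated here
    as value (in BTC) times confirmations (in blocks)."""
    return utxo["value"] * utxo["confirmations"]

# A 100 BTC output held for ~1 year (52,560 blocks) outweighs a
# 1,000 BTC output held for about a week (1,008 blocks):
old_small = {"value": 100.0,  "confirmations": 52_560}
new_large = {"value": 1000.0, "confirmations": 1_008}
assert coin_age(old_small) > coin_age(new_large)
```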

Any full node (archival or pruned) can verify coin age using the UTXO
set.[1]  Unlike script-based timelock (CLTV or CSV), there is no current
SPV-level secure way to prove to lite clients that an output is still
unspent, however such verification may be possible within each lite
client's own security model related to transaction withholding attacks:

- Electrum-style clients can poll their server to see if a particular
  UTXO is unspent.

- BIP158 users who have saved their past filters to disk can use them to
  determine which blocks subsequent to the one including the UTXO may
  contain a spend from it.  However, since a UTXO can be spent in the
  same block, they'd always need to download the block containing the
  UTXO (alternatively, the script could contain a 1-block CSV delay
  ensuring any spend occurred in a later block).  If BIP158 filters
  become committed at some point, this mechanism is upgraded to SPV-level

> Note that a long-term holder (or hodler) of bitcoins can buy time-locked
> fidelity bonds essentially for free, assuming they never intended to
> transact with their coins much anyway.

This is the thing I most like about the proposal.  I suspect most
honest makers are likely to have only a small portion of their funds
under JoinMarket control, with the rest sitting idle in a cold wallet.
Giving makers a way to communicate that they fit that user template
would indeed seem to provide significant sybil resistance.


[1] See bitcoin-cli help gettxout

Re: [bitcoin-dev] Generalized covenants with taproot enable riskless or risky lending, prevent credit inflation through fractional reserve

2019-06-29 Thread David A. Harding via bitcoin-dev
On Fri, Jun 28, 2019 at 10:27:16AM +0200, Tamas Blummer via bitcoin-dev wrote:
> The value of these outputs to Charlie is the proof that he has
> exclusive control of the coins until maturity.
> Alice can not issue promissory notes in excess of own capital or
> capital that she was able to borrow. No coin inflation or fractional
> reserve here, which also reduces the credit risk Charlie takes.

I believe these goals are obtainable today without any consensus
changes.  Bob can provably timelock bitcoins using CLTV or CSV in a
script that commits to the outpoint (txid, vout) of an output that will
be used as a colored coin to track the debt instrument.  The colored
coin, which has no appreciable onchain value itself, can then be
trustlessly traded, e.g. from Alice to Charlie to Dan as you describe.  

Anyone with a copy of the script Bob paid, the confirmed transaction he
included it in, and the confirmed transaction history of the colored
coin can trustlessly verify the ownership record---including that no
inflation or fractional reserve occurred.
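For reference, the timelocked commitment Bob could create is the standard CLTV script pattern.  A minimal sketch (the commitment to the colored coin's outpoint could be added as an extra pushed-and-dropped data item; the opcode values are Bitcoin's actual assignments):

```python
OP_CHECKLOCKTIMEVERIFY = 0xb1
OP_DROP = 0x75
OP_CHECKSIG = 0xac

def push(data: bytes) -> bytes:
    """Direct pushes only (1-75 bytes), enough for this sketch."""
    assert 1 <= len(data) <= 75
    return bytes([len(data)]) + data

def cltv_script(locktime: int, pubkey: bytes) -> bytes:
    """<locktime> OP_CHECKLOCKTIMEVERIFY OP_DROP <pubkey> OP_CHECKSIG"""
    # Minimal little-endian script-number encoding with sign bit clear:
    lt = locktime.to_bytes((locktime.bit_length() + 8) // 8, "little")
    return (push(lt) + bytes([OP_CHECKLOCKTIMEVERIFY, OP_DROP])
            + push(pubkey) + bytes([OP_CHECKSIG]))

script = cltv_script(650_000, bytes(33))  # dummy 33-byte pubkey
assert script[-1] == OP_CHECKSIG
```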

I believe the RGB working group has set for itself the goal[1] of making
trustless colored coin protocols more efficient when performed on top of
Bitcoin.  I'd also suggest reading about Peter Todd's concept of
single-use seals[2].  You may want to investigate these ideas and see
whether they can be integrated with your own.



Re: [bitcoin-dev] [PROPOSAL] Emergency RBF (BIP 125)

2019-06-09 Thread David A. Harding via bitcoin-dev
On Thu, Jun 06, 2019 at 02:46:54PM +0930, Rusty Russell via bitcoin-dev wrote:
> Matt Corallo  writes:
> > 2) wrt rule 4, I'd like to see a calculation of worst-case free
> > relay. I think we're already not in a great place, but maybe it's
> > worth it or maybe there is some other way to reduce this cost
> > (intuitively it looks like this proposal could make things very,
> > very, very bad).
> I *think* you can currently create a tx at 1 sat/byte, have it
> propagate, then RBF it to 2 sat/byte, 3... and do that a few thousand
> times before your transaction gets mined.

Yes, the current incremental relay fee in Bitcoin Core is 0.00001000 BTC/kvB.

> If that's true, I don't think this proposal makes it worse.

Here's a scenario that I think shows it being at least 20x worse.

Let's imagine that you create two transactions:

  tx0: A very small transaction (~100 vbytes) that's just 1-in, 1-out.
   At the minimum relay fee, this costs 0.00000100 BTC

  tx1: A child of that transaction that's very large (~100,000 vbytes,
   or almost 400,000 bytes using special scripts that allow witness
   stuffing).  At the minimum relay fee, this costs 0.00100000 BTC

Under the current rules, if an attacker wants to fee-bump tx0 by the
minimum incremental fee (a trivial amount, ~0.00000100 BTC), the
attacker's replacement also needs to pay for the eviction of the huge
child tx1 by that same incremental fee (~0.00100000).

Thus the replacement would cost the attacker a minimum of about
0.00100100 (~1 mBTC) for the original transactions and 0.00100100 for
the replacement (about 2 mBTC total).

The attacker could then spend another 1 mBTC re-attaching the child and
yet another 1 mBTC bumping again, incurring about a 2 mBTC cost per
replacement round.  At writing, 2 mBTC is about $14.00 USD---an amount
that's probably enough to deter most attacks at scale.

* * *

Under the new proposed rule 6, Mallory's replacement cost would be the
amount to get the small tx0 to near the top of the mempool (say
0.0010 BTC/KvB, so 0.0001 BTC total).  Because this would evict
the expensive child, it would actually reduce the original amount paid
by the attacker by 90% compared to the previous section's example where
using RBF increased the original costs by 100%.

The 0.1 mBTC cost of this attack is about $0.70 USD today for the
roughly the same data relay use as one round of the currently possible
attack.  In short, if I haven't misplaced a decimal point or made some
other mistake, I think the proposed rule 6 would result in approximately
a 95% reduction in the cost paid by an attacker for wasting 400,000
bytes of bandwidth per node (x60,000 nodes = 24 GB across all nodes, not
counting INV overhead).
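As a sanity check on the arithmetic above (a sketch assuming a 1 sat/vbyte minimum relay and incremental feerate, and 100 sat/vbyte as the near-top-of-mempool feerate):

```python
MIN_RELAY = 1            # sat/vbyte: assumed minimum relay feerate
TOP_OF_MEMPOOL = 100     # sat/vbyte: assumed near-top-of-mempool feerate
TX0, TX1 = 100, 100_000  # vbytes: tiny parent tx0, huge child tx1

# Current rules: the original pair pays the minimum relay fee, and the
# replacement must pay for the evicted pair again (plus a trivial
# increment), so one round costs roughly twice the pair's fee.
pair_fee = (TX0 + TX1) * MIN_RELAY   # 100,100 sats (~1 mBTC)
current_round = 2 * pair_fee         # ~200,200 sats (~2 mBTC)

# Proposed rule 6: bump only the tiny parent to a near-top feerate,
# evicting the huge child without paying for its size.
rule6_round = TX0 * TOP_OF_MEMPOOL   # 10,000 sats (0.1 mBTC)

print(current_round // rule6_round)               # ~20x cheaper per round
print(round(1 - rule6_round / current_round, 3))  # ~0.95, i.e. ~95% cost cut
```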

Although the attacker might only get one replacement per block per
transaction pair out of this version of the attack, they could execute
the attack many times in parallel using different transaction pairs.  If
this is combined with the treadmill leapfrogging Russell O'Connor
described elsewhere in this thread, the attack could possibly be
repeated multiple times per block per transaction pair at only slightly
increased cost (to pay the increasing next-block transaction fees).

> >> 3) wrt rule 5, I'd like to see benchmarks, it's probably a pretty
> >> nasty DoS attack, but it may also be the case that is (a) not worse
> >> than other fundamental issues or (b) sufficiently expensive.
> I thought we still meet rule 5 in practice since bitcoind will never
> even accept a tree of unconfirmed txs which is > 100 txs?  That would
> still stand, it's just that we'd still consider a replacement.

Although the BIP125 limit is 100, Bitcoin Core's current default is 25.[1]
(When RBF was implemented in Bitcoin Core, transaction ancestry was only
tracked for purposes of ensuring valid transaction ordering within
blocks; later when CPFP was implemented, ancestry was additionally used
to calculate each transaction's package fee---the value of it and all
its unconfirmed ancestors.  This requires more computation to update
the mempool metadata when the ancestry graph changes.)

Again, I'd be thinking here of something similar to O'Connor's
treadmilling attack where replacements can push each other out of the
top of the mempool and so create enough churn for a CPU exhaustion attack.

> >>  Obviously there is also a ton more client-side knowledge required
> >>  and complexity to RBF decisions here than other previous, more
> >>  narrowly-targeted proposals.
> I'd say from the lightning side it's as simple as a normal RBF policy
> until you get within a few blocks of a deadline, then you increase the
> fees until it's well within reach of the next block.

It's already hard for wallet software to determine whether or not its
transactions have successfully been relayed.  This proposal requires LN
wallets not only be able to guess where the next-block feerate boundary
is in other nodes' individual mempools (now and in the future for the
time it 

Re: [bitcoin-dev] New BIP - v2 peer-to-peer message transport protocol (former BIP151)

2019-03-25 Thread David A. Harding via bitcoin-dev
On Sun, Mar 24, 2019 at 09:29:10AM -0400, David A. Harding via bitcoin-dev 
> Why is this optional and only specified here for some message types
> rather than being required by v2 and specified for all message types?

Gregory Maxwell discussed this with me on IRC[1].  My summary of our
conversation:
Although the BIP can easily allocate short-ids to all existing messages,
anyone who wants to add an additional protocol message later will need
to coordinate their number allocation with all other developers working
on protocol extensions.  This includes experimental and private
extensions.  At best this would be annoying, and at worst it'd be
another set of bikeshed problems we'd waste time arguing about.

Allowing nodes to continue using arbitrary command names eliminates this
coordination problem.   Yet we can also gain the advantage of saving
bandwidth by allowing mapping (with optional negotiation) of short-ids.

Now that I understand the motivation, this part of the proposal makes
sense to me.




Re: [bitcoin-dev] New BIP - v2 peer-to-peer message transport protocol (former BIP151)

2019-03-24 Thread David A. Harding via bitcoin-dev
On Fri, Mar 22, 2019 at 10:04:46PM +0100, Jonas Schnelli via bitcoin-dev wrote:
> === v2 Messages Structure ===
> {|class="wikitable"
> ! Field Size !! Description !! Data type !! Comments
> [...]
> | 1-13 || encrypted command || variable || ASCII command (or one byte short 
> command ID)
> [...] 
> The command field MUST start with a byte that defines the length of the ASCII
> command string up to 12 chars (1 to 12) or a short command ID (see below).
> [...] 
>  Short Command ID 
> To save valuable bandwidth, the v2 message format supports message command
> short IDs for message types with high frequency. The ID/string mapping is a
> peer to peer arrangement and MAY be negotiated between the initiating and
> responding peer. 

Why is this optional and only specified here for some message types
rather than being required by v2 and specified for all message types?
There's only 26 different types at present[1], so it seems better to
simply make this a one-byte fixed-length field than it is to deal with
variable size, mapping negotiation, per-peer mapping in general, and
(once the network is fully v2) the dual-logic of being able to process
messages either from a short ID or a full command name.
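A sketch of the fixed one-byte alternative suggested here (a hypothetical encoding, not what the BIP specifies; the type list is truncated for brevity):

```python
# Hypothetical fixed one-byte message-type encoding; the actual BIP
# kept variable-length commands with optionally negotiated short IDs.
MESSAGE_TYPES = ["version", "verack", "ping", "pong", "inv", "getdata",
                 "headers", "getheaders", "tx", "block", "addr"]  # ...26 total

def encode_command(name: str) -> bytes:
    return bytes([MESSAGE_TYPES.index(name)])   # always exactly 1 byte

def decode_command(b: bytes) -> str:
    return MESSAGE_TYPES[b[0]]

assert decode_command(encode_command("headers")) == "headers"
assert len(encode_command("getheaders")) == 1   # vs 11 bytes (length + ASCII)
```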



[1] src/protocol.cpp:

const static std::string allNetMessageTypes[] = {


Re: [bitcoin-dev] Signet

2019-03-11 Thread David A. Harding via bitcoin-dev
On Sun, Mar 10, 2019 at 09:43:43AM +0900, Karl-Johan Alm via bitcoin-dev wrote:
> Keeping the PoW rule and moving the signature would mean DoS attacks
> would be trivial as anyone could mine blocks without a signature in
> them

Sure, but anyone could also just connect their lite client to a trusted
node (or nodes) on signet.  The nodes would protect the clients from
missing/invalid-signature DoS and the clients wouldn't have to implement
any more network-level changes than they need to now for testnet.

For people who don't want to run their own trusted signet nodes, there
could be a list of signet nodes run by well-known Bitcoiners (and this
could even be made available via a simple static dns seeder lite clients
could use).

> On Sat, Mar 9, 2019 at 5:20 AM Matt Corallo 
> wrote:
> > A previous idea regarding reorgs (that I believe Greg came up with)
> > is to allow multiple keys to sign blocks, with one signing no reorgs
> > and one signing a reorg every few blocks, allowing users to choose
> > the behavior they want.
> Not sure how this would work in practice.

This post from Maxwell could be the idea Corallo is describing:

I read it as:

  - Trusted signer Alice only signs extensions of her previous blocks

  - Trusted signer Bob periodically extends one of Alice's blocks
(either the tip or an earlier block) with a chain that grows faster
than Alice's chain, becoming the most-PoW chain.  At some point he
stops and Alice's chain overtakes Bob's fork as the most-PoW chain

  - User0 who wants to ignore reorg problems starts his node with
-signet -signers="alice", causing his node to only accept blocks
from Alice.

  - User1 who wants to consider reorg problems starts his node with
-signet -signers="alice,bob", causing his node to accept blocks from
both Alice and Bob, thus experiencing periodic reorgs.

  - There can also be other signing keys for any sort of attack
that can be practically executed, allowing clients to test their
response to the attack when they want to but also ignore any
disruption it would otherwise cause the rest of the time.

  - As an alternative to particular signing keys, there could just be
flags put in the header versionbits, header nonce, or generation
transaction indicating how the block should be classified (e.g.
no_reorg, reorg_max6, reorg_max144, merkle_vulnerability, special0,
special1, etc...)
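The signer-selection behavior sketched in these bullets could be as simple as the following (hypothetical option handling, following the -signers= syntax above):

```python
# Sketch of the proposed -signers= behavior: a node accepts a block
# only if the block's signing key is in the locally configured allowlist.
def accept_block(block_signer: str, signers_opt: str) -> bool:
    allowed = set(signers_opt.split(","))
    return block_signer in allowed

# User0 ignores Bob's reorg chain entirely:
assert accept_block("alice", "alice")
assert not accept_block("bob", "alice")
# User1 accepts both chains and so experiences Bob's periodic reorgs:
assert accept_block("bob", "alice,bob")
```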

(If something like this is implemented, I propose reserving one of the
signing keys/classification flags for use by any of Bitcoin's more
devious devs in unannounced attacks.  Having to occasionally dig
through weird log messages and odd blocks with other Bitcoin dorks on
IRC in order to figure out why things went horribly sideways in our
signet clients sounds to me like an enjoyable experience.  :-)



Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-12-10 Thread David A. Harding via bitcoin-dev
On Thu, Dec 06, 2018 at 11:57:09AM -0500, Russell O'Connor via bitcoin-dev 
> One more item to consider is "signature covers witness weight".
> While signing the witness weight doesn't completely eliminate witness
> malleability (of the kind that can cause grief for compact blocks), it does
> eliminate the worst kind of witness malleability from the user's
> perspective, the kind where malicious relay nodes increase the amount of
> witness data and therefore reduce the overall fee-rate of the transaction.

To what degree is this an actual problem?  If the mutated transaction
pays a feerate at least incremental-relay-fee[1] below the original
transaction, then the original transaction can be rebroadcast as an RBF
replacement of the mutated transaction (unless the mutated version has
been pinned[2]).


[1] $ bitcoind -help-debug | grep -A2 incremental
   Fee rate (in BTC/kB) used to define cost of relay, used for mempool
   limiting and BIP 125 replacement. (default: 0.00001)



Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-06-09 Thread David A. Harding via bitcoin-dev
On Fri, Jun 08, 2018 at 04:35:29PM -0700, Olaoluwa Osuntokun via bitcoin-dev 
>   2. Since the coinbase transaction is the first in a block, it has the
>  longest merkle proof path. As a result, it may be several hundred bytes
>  (and grows with future capacity increases) to present a proof to the
>  client.

I'm not sure why commitment proof size is a significant issue.  Doesn't
the current BIP157 protocol have each filter commit to the filter for
the previous block?  If that's the case, shouldn't validating the
commitment at the tip of the chain (or buried back whatever number of
blocks that the SPV client trusts) obviate the need to validate the
commitments for any preceding blocks in the SPV trust model?

> Depending on the composition of blocks, this may outweigh the gains
> had from taking advantage of the additional compression the prev outs
> allow.

I think those are unrelated points.  The gain from using a more
efficient filter is saved bytes.  The gain from using block commitments
is SPV-level security---that attacks have a definite cost in terms of
generating proof of work instead of the variable cost of network
compromise (which is effectively free in many situations).

Comparing the extra bytes used by block commitments to the reduced bytes
saved by prevout+output filters is like comparing the extra bytes used
to download all blocks for full validation to the reduced bytes saved by
only checking headers and merkle inclusion proofs in simplified
validation.  Yes, one uses more bytes than the other, but they're
completely different security models and so there's no normative way for
one to "outweigh the gains" from the other.

> So should we optimize for the ability to validate in a particular
> model (better security), or lower bandwidth in this case?

It seems like you're claiming better security here without providing any
evidence for it.  The security model is "at least one of my peers is
honest."  In the case of outpoint+output filters, when a client receives
advertisements for different filters from different peers, it:

1. Downloads the corresponding block
2. Locally generates the filter for that block
3. Kicks any peers that advertised a different filter than what it
   generated locally

This ensures that as long as the client has at least one honest peer, it
will see every transaction affecting its wallet.  In the case of
prevout+output filters, when a client receives advertisements for
different filters from different peers, it:

1. Downloads the corresponding block and checks it for wallet
   transactions as if there had been a filter match

This also ensures that as long as the client has at least one honest
peer, it will see every transaction affecting its wallet.  This is
equivalent security.
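The outpoint+output cross-checking procedure described above can be sketched as follows (generate_filter is a stand-in for the real BIP158 GCS construction):

```python
import hashlib

class Peer:
    def __init__(self, name, cfilter):
        self.name, self.cfilter = name, cfilter
    def get_cfilter(self, block_hash):
        return self.cfilter

def generate_filter(block_bytes):
    # Stand-in for the real BIP158 GCS filter construction.
    return hashlib.sha256(block_bytes).digest()

def check_peers(peers, block_hash, download_block):
    """On conflicting filter advertisements, download the block, build
    the filter locally, and drop any peer whose filter mismatches."""
    advertised = {p: p.get_cfilter(block_hash) for p in peers}
    if len(set(advertised.values())) <= 1:
        return peers                          # all peers agree; nothing to do
    local = generate_filter(download_block(block_hash))
    return [p for p in peers if advertised[p] == local]

block = b"dummy block"
alice = Peer("alice", generate_filter(block))
mallory = Peer("mallory", b"bogus filter")
kept = check_peers([alice, mallory], b"hash", lambda h: block)
assert [p.name for p in kept] == ["alice"]
```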

In the second case, it's possible for the client to eventually
probabilistically determine which peer(s) are dishonest and kick them.
The most space efficient of these protocols may disclose some bits of
evidence for what output scripts the client is looking for, but a
slightly less space-efficient protocol simply uses randomly-selected
outputs saved from previous blocks to make the probabilistic
determination (rather than the client's own outputs) and so I think
should be quite private.  Neither protocol seems significantly more
complicated than keeping an associative array recording the number of
false positive matches for each peer's filters.



Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-06-02 Thread David A. Harding via bitcoin-dev
On Fri, Jun 01, 2018 at 07:02:38PM -0700, Jim Posen via bitcoin-dev wrote:
> Without the ability to verify filter validity, a client would have to stop
> syncing altogether in the presence of just one malicious peer, which is
> unacceptable.

I'm confused about why this would be the case.  If Alice's node
generates filters accurately and Mallory's node generates filters
inaccurately, and they both send their filters to Bob, won't Bob be able
to download any blocks either filter indicates are relevant to his wallet?

If Bob downloads a block that contains one of his transactions based on
Alice's filter indicating a possible match at a time when Mallory's
filter said there was no match, then this false negative is perfect
evidence of deceit on Mallory's part[1] and Bob can ban her.

If Bob downloads a block that doesn't contain any of his transactions
based on Mallory's filter indicating a match at a time when Alice's
filter said there was no match, then this false positive can be recorded
and Bob can eventually ban Mallory should the false positive rate
exceed some threshold.
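The two cases above can be sketched as a simple classification
(function and label names are illustrative assumptions):

```python
def judge_peer_filter(filter_matched: bool, block_had_wallet_tx: bool) -> str:
    """Classify a peer's filter claim once the block has been checked."""
    if block_had_wallet_tx and not filter_matched:
        # False negative: a BIP158 GCS matches every item in the set
        # with probability 1, so this is provable deceit -- ban now.
        return "ban"
    if filter_matched and not block_had_wallet_tx:
        # False positive: expected occasionally; ban only if the
        # observed rate exceeds some threshold.
        return "count-false-positive"
    return "ok"
```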

Until Mallory is eventually banned, it seems to me that the worst she
can do is waste Bob's bandwidth and that of any nodes serving him
accurate information, such as Alice's filters and the blocks Bob
is misled into downloading to check for matches.  The amount of
attacker:defender asymmetry in the bandwidth wasted increases if
Mallory's filters become less accurate, but this also increases her
false positive rate and reduces the number of filters that need to be
seen before Bob bans her, so it seems to me (possibly naively) that this
is not a significant DoS vector.


[1] Per BIP158 saying, "a Golomb-coded set (GCS), which matches all
items in the set with probability 1"


Re: [bitcoin-dev] Transaction Merging (bip125 relaxation)

2018-01-28 Thread David A. Harding via bitcoin-dev
On Sun, Jan 28, 2018 at 05:43:34PM +0100, Sjors Provoost via bitcoin-dev wrote:
> Peter Todd wrote:
> > In fact I considered only requiring an increase in fee rate, based on the
> > theory that if absolute fee went down, the transaction must be smaller and
> > thus miners could overall earn more from the additional transactions they
> > could fit into their block. But to do that properly requires considering
> > whether or not that's actually true in the particular state the mempool as
> > a whole happens to be in, so I ditched that idea early on for the much
> > simpler criteria of both a feerate and absolute fee increase.
> Why would you need to consider the whole mempool? 

Imagine a miner is only concerned with creating the next block and his
mempool currently only has 750,000 vbytes in it.  If two 250-vbyte
transactions each paying a feerate of 100 nanobitcoins per vbyte (50k
total) are replaced with one 325-vbyte transaction paying a feerate of
120 nBTC (39k total), the miner's potential income from mining the next
block is reduced by 11k nBTC.
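The arithmetic in the example can be checked directly:

```python
# Numbers from the example above, in nanobitcoins (nBTC).
old_fees = 2 * 250 * 100    # two 250-vbyte txs at 100 nBTC/vbyte
new_fee = 325 * 120         # one 325-vbyte replacement at 120 nBTC/vbyte

assert old_fees == 50_000
assert new_fee == 39_000
# With only 750,000 vbytes queued, the next block has room for all of
# them, so the replacement reduces the miner's next-block income:
assert old_fees - new_fee == 11_000
```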

Moving away from this easily worked example, the problem can still exist
even if a miner has enough transactions to fill the next block.  For
replacement consideration only by increased feerate to be guaranteed
more profitable, one has to assume the mempool contains an effectively
continuous distribution of feerates.  That may one day be true of the
mempool (it would be good, because it helps keep block production
regular sans subsidy) but it's often not the case these days.



Re: [bitcoin-dev] BIP Proposal: Utilization of bits denomination

2017-12-13 Thread David A. Harding via bitcoin-dev
On Wed, Dec 13, 2017 at 01:46:09PM -0600, Jimmy Song via bitcoin-dev wrote:
> Hey all,
> I am proposing an informational BIP to standardize the term "bits". The
> term has been around a while, but having some formal informational standard
> helps give structure to how the term is used.

Wallets and other software are already using this term, so I think it's
a good idea to ensure its usage is normalized.

That said, I think the term is unnecessary and confusing given that
microbitcoins provides all of the same advantages and at least two
additional advantages:

- Microbitcoins is not a homonym for any other word in English (and
  probably not in any other language), whereas "bit" and "bits" have
  more than a dozen homonyms in English---some of which are quite common
  in general currency usage, Bitcoin currency usage, or Bitcoin
  technical usage.

- Microbitcoins trains users to understand SI prefixes, allowing them to
  easily migrate from one prefix to the next.  This will be important
  when bitcoin prices rise to $10M USD[1] and the bits denomination has
  the same problems the millibitcoin denomination has now, but it's also
  useful in the short term when interacting with users who make very
  large payments (bitcoin-scale) or very small payments
  (nanobitcoin-scale).[2]  Maybe a table of scale can emphasize this:

  Wrong (IMO):        Right (IMO):
  ------------        ------------
[1] A rise in price to $10M doesn't require huge levels of growth---it
only requires time under the assumption that a percentage of bitcoins will
be lost every year due to wallet mishaps, failure to inherit bitcoins,
and other issues that remove bitcoins from circulation.  In other words,
it's important to remember that Bitcoin is expected to become a
deflationary currency and plan accordingly.

[2] Although Bitcoin does not currently support committed
nanobitcoin-scale payments in the block chain, it can be supported in a
variety of ways by offchain systems---including (it is hypothesized)
trustless systems based on probabilistic payments.
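The prefix relationships in question can be sketched in integer
satoshis (avoiding floating point; constant names are my own
assumptions):

```python
# SI-prefix arithmetic for bitcoin denominations, in integer satoshis.
SATS_PER_BTC = 100_000_000
SATS_PER_MBTC = 100_000   # millibitcoin (mBTC)
SATS_PER_UBTC = 100       # microbitcoin (uBTC), the unit also called a "bit"

# 1 BTC = 1,000,000 microbitcoins ("bits"); 1 mBTC = 1,000 of them.
assert SATS_PER_BTC // SATS_PER_UBTC == 1_000_000
assert SATS_PER_MBTC // SATS_PER_UBTC == 1_000
```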




Re: [bitcoin-dev] BIP 2 promotion to Final

2016-03-18 Thread David A. Harding via bitcoin-dev

Arguing about which wiki is better doesn't feel productive to me. Can we
just let BIP authors decide for themselves? Draft-BIP2 already has a
provision for allowing authors to specify a backup wiki of their own
choosing; can we just make that the policy in all cases (and drop the
need for a backup wiki)?


Re: [bitcoin-dev] Hardfork to fix difficulty drop algorithm

2016-03-02 Thread David A. Harding via bitcoin-dev
On Wed, Mar 02, 2016 at 05:53:46PM +, Gregory Maxwell wrote:
> What you are proposing makes sense only if it was believed that a very
> large difficulty drop would be very likely.
> This appears to be almost certainly untrue-- consider-- look how long
> ago since hashrate was 50% of what it is now, or 25% of what it is
> now-- this is strong evidence that supermajority of the hashrate is
> equipment with state of the art power efficiency.

To save readers the trouble of looking up this statistic themselves,
here are the various recent difficulties:

$ for i in $( seq 0 2016 58464 ); do echo -n "$i blocks ago: "; \
    bitcoin-cli getblock $( bitcoin-cli getblockhash $(( 400857 - i )) ) \
    | jshon -e difficulty; done | column -t
0  blocks  ago:  163491654908.95929
2016   blocks  ago:  144116447847.34869
4032   blocks  ago:  120033340651.237
6048   blocks  ago:  113354299801.4711
8064   blocks  ago:  103880340815.4559
10080  blocks  ago:  93448670796.323807
12096  blocks  ago:  79102380900.225983
14112  blocks  ago:  72722780642.54718
16128  blocks  ago:  65848255179.702606
18144  blocks  ago:  62253982449.760818
20160  blocks  ago:  60883825480.098282
22176  blocks  ago:  60813224039.440353
24192  blocks  ago:  59335351233.86657
26208  blocks  ago:  56957648455.01001
28224  blocks  ago:  54256630327.889961
30240  blocks  ago:  52699842409.347008
32256  blocks  ago:  52278304845.591682
34272  blocks  ago:  51076366303.481934
36288  blocks  ago:  49402014931.227463
38304  blocks  ago:  49692386354.893837
40320  blocks  ago:  47589591153.625008
42336  blocks  ago:  48807487244.681381
44352  blocks  ago:  47643398017.803436
46368  blocks  ago:  47610564513.47126
48384  blocks  ago:  49446390688.24144
50400  blocks  ago:  46717549644.706421
52416  blocks  ago:  47427554950.6483
54432  blocks  ago:  46684376316.860291
56448  blocks  ago:  44455415962.343803
58464  blocks  ago:  41272873894.697021

<50% of current hash rate was last seen roughly six retarget periods (12
weeks) ago and <25% of current hash rate was last seen roughly 29 periods
(58 weeks) ago.

I think that's reasonably strong evidence for your thesis given that
the increases in hash rate from the introduction of new efficient
equipment are likely partly offset by the removal from the hash rate of
lower efficiency equipment, so the one-year tail of ~25% probably means
that less than 25% of operating equipment is one year old or older.

However, it is my understanding that most mining equipment can be run at
different hash rates. Is there any evidence that high-efficiency miners
today are using high clock speeds to produce more hashes per ASIC than
they will after halving?  Is there any way to guess at how many fewer
hashes they might produce?

> If a pre-programmed ramp and drop is set then it has the risk of
> massively under-setting difficulty; which is also strongly undesirable
> (e.g. advanced inflation and exacerbating existing unintentional
> selfish mining)

Maybe I'm not thinking this through thoroughly, but I don't think it's
possible to significantly advance inflation unless the effective hash
rate increases by more than 300% at the halving.  With the proposal
being replied to, if all mining equipment operating before the
halving continued operating after it, the effective hash rate would
double. That doubling would've been offset in advance through a
reduction in the effective hash rate in the weeks
before the halving.

Exacerbated unintentional selfish mining is a much more significant
concern IMO, even if it's only for a short retarget period or two. This
is especially the case given the current high levels of centralization
and validationless mining on the network today, which we would not want
to reward by making those miners the only ones effectively capable of
creating blocks until difficulty adjusted. I had not thought of this
aspect; thank you for bringing it up.

> and that is before suggesting that miners voluntarily take a loss of
> inflation now.

Yes, I very much don't like that aspect, which is why I made sure to
mention it.

> So while I think this concern is generally implausible; I think it's
> prudent to have a difficulty step patch (e.g. a one time single point
> where a particular block is required to lower bits a set amount) ready
> to go in the unlikely case the network is stalled.

I think having that code ready in general is a good idea, and a one-time
change in nBits sounds like a good and simple way to go about it.

Thank you for your insightful reply,


Re: [bitcoin-dev] Hardfork to fix difficulty drop algorithm

2016-03-02 Thread David A. Harding via bitcoin-dev
On Wed, Mar 02, 2016 at 02:56:14PM +, Luke Dashjr via bitcoin-dev wrote:
> To alleviate this risk, it seems reasonable to propose a hardfork to the 
> difficulty adjustment algorithm so it can adapt quicker to such a significant 
> drop in mining rate.

Having a well-reviewed hard fork patch for rapid difficulty adjustment
would seem to be a useful reserve for all sorts of possible problems.
That said, couldn't this specific potential situation be dealt with by a
relatively simple soft fork?

Let's say that, starting soon, miners require that valid block header
hashes be X% below the target value indicated by nBits. The X% changes
with each block, starting at 0% and increasing to 50% just before block
420,000 (the halving). This means that before the halving, every two
hashes are being treated as one hash, on average.

For blocks 420,000 and higher the code is disabled, immediately doubling
the effective hash rate at the same time the subsidy is halved,
potentially roughly canceling each other out to make a pre-halving hash
equal in economic value to a post-halving hash.

Of course, some (perhaps many) miners will not be profitable at the
post-halving subsidy level, so the steady increase in X% will force them
off the network at some point before the halving, hopefully in small
numbers rather than all at once like the halving would be expected to do.

For example, if the soft fork begins enforcement at block 410,000, then
X% can be increased 0.01% per block. Alice is a miner whose costs are
24BTC per block, and she never claims tx fees for some reason, so her
income is always the 25BTC per-block subsidy. During the first difficulty
period after the soft fork is deployed, the cost to produce a hash will
increase like this,

     0:  0%     500:  5%    1000: 10%    1500: 15%    2000: 20%
   100:  1%     600:  6%    1100: 11%    1600: 16%
   200:  2%     700:  7%    1200: 12%    1700: 17%
   300:  3%     800:  8%    1300: 13%    1800: 18%
   400:  4%     900:  9%    1400: 14%    1900: 19%
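A minimal sketch of the tightened validity check, using the constants
from the worked example (the function name and the integer-math form of
the 0.01%-per-block ramp are my own assumptions, not a specification):

```python
HALVING_HEIGHT = 420_000
START_HEIGHT = 410_000   # soft fork begins enforcement here

def required_target(nbits_target: int, height: int) -> int:
    """Target a block header hash must actually satisfy under the
    proposed soft fork: X% below the nBits target, where X rises
    0.01% per block from START_HEIGHT until the halving."""
    if height < START_HEIGHT or height >= HALVING_HEIGHT:
        return nbits_target                  # rule inactive
    d = height - START_HEIGHT                # X% = d * 0.01%
    return nbits_target * (10_000 - d) // 10_000
```

For example, 2,000 blocks into the ramp the effective target is 20%
below what nBits alone would require.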

Somewhere around block 417 of that period, Alice will need to drop out
because her costs are now above 25BTC per block.  With the loss of her
hash rate,
the average interblock time will increase and the capacity will decrease
(all other things being equal). However, Bob whose costs are 20BTC per
block can keep mining through the period.

At the retarget, the difficulty will go down (the target goes up) to
account for the loss of Alice's hashes. It may even go down enough
that Alice can mine profitably for a few more blocks early in the new
period, but the increasing X% factor will make her uneconomical again,
and this time it might even make Bob uneconomical too near the end of
the period. However, Charlie whose costs are 12BTC per block will
never be uneconomical as he can continue mining profitably even after
the halving. Alice and Bob mining less will increase the percentage of
blocks Charlie produces before the retarget, steadily shifting the
dynamics of the mining network to the state expected after the halving
and hopefully minimizing the magnitude of any shocks.

This does create the question about whether this soft fork would be
ethical, as Alice and Bob may have invested money and time on the
assumption that their marginal hardware would be usable up until the
halving and with this soft fork they would become uneconomical earlier
than block 420,000. A counterargument here is such an investment was
always speculative given the vagaries of exchange rate fluctuation, so
it could be permissible to change the economics slightly in order to
help ensure all other Bitcoin users experience minimal disruption during
the halving.

Unless I'm missing something (likely), I think this proposal has the
advantage of fast rollout (if the mechanism of an adjusted target is as
simple as I think it could be) in a non-emergency manner without a hard
fork that would require all full nodes to upgrade (plus maybe some SPV
software that checks nBits, which they probably all should be doing
given that it's in the block headers they download anyway).


P.S. I see Tier Nolan proposed something similar while I was writing
 this. I think this proposal differs enough in its analysis to
 warrant posting despite the possible duplication.

Re: [bitcoin-dev] nSequence multiple uses

2016-01-22 Thread David A. Harding via bitcoin-dev
On Fri, Jan 22, 2016 at 04:36:58PM +, Andrew C via bitcoin-dev wrote:
> Spending a time locked output requires setting nSequence to less than
> MAX_INT but opting into RBF also requires setting nSequence to less than
> MAX_INT.

Hi Andrew,

Opt-in RBF requires setting nSequence to less than MAX-1 (not merely
less than MAX), so an nSequence of exactly MAX-1 (which appears in
hex-encoded serialized transactions as feffffff) enables locktime
enforcement but doesn't opt in to RBF.
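The distinction can be sketched as two predicates (function names are
illustrative):

```python
MAX_SEQ = 0xffffffff

def enables_locktime(sequence: int) -> bool:
    # nLockTime is enforced if the input's nSequence is not final (MAX).
    return sequence < MAX_SEQ

def signals_rbf(sequence: int) -> bool:
    # BIP125 opt-in requires nSequence strictly below MAX-1.
    return sequence < MAX_SEQ - 1

seq = 0xfffffffe    # MAX-1: locktime enforced, but no RBF opt-in
assert enables_locktime(seq) and not signals_rbf(seq)
```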

For more information, please see BIP125:



Re: [bitcoin-dev] Ads on website

2015-11-13 Thread David A. Harding via bitcoin-dev
On Fri, Nov 13, 2015 at 08:27:48AM +0100, Jonas Schnelli via bitcoin-dev wrote:
> I'm a little bit concerned about the future of bitcoin.org

Bitcoin.org hosts the Bitcoin Core binaries referred to in release
announcements, so this subject is conceivably on-topic for bitcoin-dev.
However, I think questions about how bitcoin.org operates are best asked
on bitcoin-discuss[1] (or somewhere else) and issues about the
advertising are best addressed in the PRs you referenced.

So, as one of the maintainers, here are my suggestions:

1. I'm on bitcoin-discuss.  Please feel free to ask any questions there.

2. I just opened a GitHub issue for discussing advertising on
   bitcoin.org[2].  I will reply to your specific questions there
   after I've sent this email.

3. If advertisements are added to bitcoin.org and there is general
   dissatisfaction about that, maybe then we can come back to
   bitcoin-dev to discuss moving the Bitcoin Core binaries
   somewhere else.






Re: [bitcoin-dev] [BIP-draft] CHECKSEQUENCEVERIFY - An opcode for relative locktime

2015-08-27 Thread David A. Harding via bitcoin-dev
On Thu, Aug 27, 2015 at 12:38:42PM +0930, Rusty Russell via bitcoin-dev wrote:
> So I'd like an IsStandard() rule to say it nLockTime be 0 if an
> nSequence != 0xFFFFFFFF. Would that screw anyone currently?

That sentence doesn't quite parse (say it nLockTime), so please
forgive me if I'm misunderstanding you. Are you saying that you want
IsStandard() to require a transaction have a locktime of 0 (no
confirmation delay) if any of its inputs use a non-final sequence?

If so, wouldn't that make locktime useless for delaying confirmation in
IsStandard() transactions because the consensus rules require at least
one input be non-final in order for locktime to have any effect?
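If I'm reading that consensus rule correctly, a minimal sketch
(function name assumed):

```python
MAX_SEQ = 0xffffffff

def locktime_has_effect(input_sequences) -> bool:
    """nLockTime is ignored by consensus unless at least one of the
    transaction's inputs has a non-final nSequence."""
    return any(seq != MAX_SEQ for seq in input_sequences)

# All inputs final: the transaction is final whatever nLockTime says.
assert not locktime_has_effect([MAX_SEQ, MAX_SEQ])
# One non-final input is enough for nLockTime to delay confirmation.
assert locktime_has_effect([MAX_SEQ, 0xfffffffe])
```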


