Re: [bitcoin-dev] Proposal for new "disabletx" p2p message

2021-01-13 Thread Matt Corallo via bitcoin-dev
So we’d kill two birds with one stone if all bloom support was dropped. As far 
as I understand, precomputed filters are now provided via p2p connections as 
well.

Matt

> On Jan 14, 2021, at 00:33, Anthony Towns wrote:
> 
> On Wed, Jan 13, 2021 at 01:40:03AM -0500, Matt Corallo via bitcoin-dev wrote:
>> Out of curiosity, was the interaction between fRelay and bloom disabling ever
>> specified? ie if you aren’t allowed to enable bloom filters on a connection 
>> due
>> to resource constraints/new limits, is it ever possible to “set” fRelay 
>> later?
> 
> (Maybe I'm missing something, but...)
> 
> In the current bitcoin implementation, no -- you either set
> m_tx_relay->fRelayTxes to true via the VERSION message (either explicitly
> or by not setting fRelay), or you enable it later with FILTERLOAD or
> FILTERCLEAR, both of which will cause a disconnect if bloom filters
> aren't supported. Bloom filter support is (optionally?) indicated via
> a service bit (BIP 111), so you could assume you know whether they're
> supported as soon as you receive the VERSION line.
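The behaviour aj describes — fRelay set once at VERSION, re-enabled only via bloom messages, with disconnection if bloom isn't supported — can be sketched as follows. This is an illustrative model with invented class names, not Bitcoin Core's actual code:

```python
NODE_BLOOM = 1 << 2  # service bit from BIP 111

class Peer:
    """Toy model of how fRelay interacts with bloom-filter support."""
    def __init__(self, services, f_relay=True):
        self.services = services
        self.relay_txes = f_relay  # set once, from the VERSION message
        self.disconnected = False

    def on_filterload(self):
        # FILTERLOAD (and FILTERCLEAR) re-enable tx relay, but only if
        # the node advertises bloom support; otherwise the peer is dropped.
        if not self.services & NODE_BLOOM:
            self.disconnected = True
            return
        self.relay_txes = True

# A peer that set fRelay=false can only "turn relay on" via the bloom
# messages, so without NODE_BLOOM it can never re-enable relay:
p = Peer(services=0, f_relay=False)
p.on_filterload()
assert p.disconnected and not p.relay_txes
```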
> 
> fRelay is specified in BIP 37 as:
> 
>  | 1 byte || fRelay || bool || If false then broadcast transactions will
>  not be announced until a filter{load,add,clear} command is received. If
>  missing or true, no change in protocol behaviour occurs.
> 
> BIP 60 defines the field as "relay" and references BIP 37. Don't think
> it's referenced in any other bips.
> 
> Cheers,
> aj
> 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal for new "disabletx" p2p message

2021-01-12 Thread Matt Corallo via bitcoin-dev
Out of curiosity, was the interaction between fRelay and bloom disabling ever 
specified? ie if you aren’t allowed to enable bloom filters on a connection due 
to resource constraints/new limits, is it ever possible to “set” fRelay later?

Matt

> On Jan 6, 2021, at 11:35, Suhas Daftuar via bitcoin-dev wrote:
> 
> 
> Hi,
> 
> I'm proposing the addition of a new, optional p2p message to allow peers to 
> communicate that they do not want to send or receive (loose) transactions for 
> the lifetime of a connection. 
> 
> The goal of this message is to help facilitate connections on the network 
> over which only block-related data (blocks/headers/compact blocks/etc) are 
> relayed, to create low-resource connections that help protect against 
> partition attacks on the network.  In particular, by adding a network message 
> that communicates that transactions will not be relayed for the life of the 
> connection, we ease the implementation of software that could have increased 
> inbound connection limits for such peers, which in turn will make it easier 
> to add additional persistent block-relay-only connections on the network -- 
> strengthening network security for little additional bandwidth.
> 
> Software has been deployed for over a year now which makes such connections, 
> using the BIP37/BIP60 "fRelay" field in the version message to signal that 
> transactions should not be sent initially.  However, BIP37 allows for 
> transaction relay to be enabled later in the connection's lifetime, 
> complicating software that would try to distinguish inbound peers that will 
> never relay transactions from those that might.
> 
> This proposal would add a single new p2p message, "disabletx", which (if used 
> at all) must be sent between version and verack.  I propose that this message 
> is valid for peers advertising protocol version 70017 or higher.  Software is 
> free to implement this BIP or ignore this message and remain compatible with 
> software that does implement it.
> 
> Full text of the proposed BIP is below.
> 
> Thanks,
> Suhas
> 
> ---
> 
> 
>   BIP: XXX
>   Layer: Peer Services
>   Title: Disable transaction relay message
>   Author: Suhas Daftuar 
>   Comments-Summary: No comments yet.
>   Comments-URI:
>   Status: Draft
>   Type: Standards Track
>   Created: 2020-09-03
>   License: BSD-2-Clause
> 
> 
> ==Abstract==
> 
> This BIP describes a change to the p2p protocol to allow a node to tell a peer
> that a connection will not be used for transaction relay, to support
> block-relay-only connections that are currently in use on the network.
> 
> ==Motivation==
> 
> For nearly the past year, software has been deployed[1] which initiates
> connections on the Bitcoin network and sets the transaction relay field
> (introduced by BIP 37 and also defined in BIP 60) to false, to prevent
> transaction relay from occurring on the connection. Additionally, addr 
> messages
> received from the peer are ignored by this software.
> 
> The purpose of these connections is two-fold: by making additional
> low-bandwidth connections on which blocks can propagate, the robustness of a
> node to network partitioning attacks is strengthened.  Additionally, by not
> relaying transactions and ignoring received addresses, the ability of an
> adversary to learn the complete network graph (or a subgraph) is reduced[2],
> which in turn increases the cost or difficulty to an attacker seeking to carry
> out a network partitioning attack (when compared with having such knowledge).
> 
> The low-bandwidth / minimal-resource nature of these connections is currently
> known only by the initiator of the connection; this is because the transaction
> relay field in the version message is not a permanent setting for the lifetime
> of the connection.  Consequently, a node receiving an inbound connection with
> transaction relay disabled cannot distinguish between a peer that will never
> enable transaction relay (as described in BIP 37) and one that will.  
> Moreover,
> the node also cannot determine that the incoming connection will ignore 
> relayed
> addresses; with that knowledge a node would likely choose other peers to
> receive announced addresses instead.
> 
> This proposal adds a new, optional message that a node can send a peer when
> initiating a connection to that peer, to indicate that the connection should not
> be used for transaction relay for the connection's lifetime. In addition, without
> a current mechanism to negotiate whether addresses should be relayed on a
> connection, this BIP suggests that address messages not be sent on links where
> tx-relay has been disabled.
> 
> ==Specification==
> 
> # A new disabletx message is added, which is defined as an empty message 
> where pchCommand == "disabletx".
> # The protocol version of nodes implementing this BIP must be set to 70017 or 
> higher.
> # If a node sets the transaction relay field in the version 

Re: [bitcoin-dev] Default Signet, Custom Signets and Resetting Testnet

2020-09-13 Thread Matt Corallo via bitcoin-dev

[resent with correct source, sorry Michael, stupid Apple]

Yes, a “default” signet that regularly reorgs a block or two all the time and is “compatible” with testnet but with a faster 
block target (e.g. so that it is trivial to mine but still has PoW) and a freshly-seeded genesis would be a massive step up 
in testing usability across the space.


I don’t have strong feelings about the multisig policy, but probably something that is at least marginally robust (ie 
2-of-N) and allows valid blocks to select the next block’s signers for key rollovers is probably close enough.


There are various folks with operational experience in the community, so let’s 
not run stuff on DO/AWS/etc, please.

Matt

On 8/29/20 6:14 AM, Michael Folkson via bitcoin-dev wrote:

Hi all

Signet has been announced and discussed previously on the mailing list so I 
won't repeat what Signet is and its motivation.

(For more background we recently had a Socratic Seminar with Kalle Alm and AJ Towns on Signet. Transcript, reading list 
and video are available.)


https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2020-08-19-socratic-seminar-signet/ 



The first (of multiple) Signet PR 18267 in Bitcoin Core is at an advanced stage of review and certainly additional code 
review and testing of that PR is encouraged.


https://github.com/bitcoin/bitcoin/pull/18267 


However there are some meta questions around Signet(s) that are best discussed outside of the Bitcoin Core repo and it 
would be good to ensure everyone's testing needs are being met. I will put forward my initial thoughts on some of these 
questions. These thoughts seem to be aligned with Kalle's and AJ's initial views but they have not reviewed this post 
and they can chime in if they feel I am misrepresenting their perspectives.


1) Should there be one "default" Signet that we use for specific purpose(s) or should we 
"let a thousand ships sail"?

To be clear there will be multiple custom Signets. Even if we wanted to prevent them we couldn't. But is there an 
argument for having a "default" Signet with a network effect? A Signet that a large proportion of the community is drawn 
to using with tooling and support? I would say yes. Especially if we see Signet as a staging ground for testing proposed 
soft fork(s). Otherwise there will be lots of splintered Signet networks all with different combinations of proposed 
soft forks enabled and no network effect around a particular Signet. I think this would be bewildering for say Taproot 
testers to have to choose between Person A's Signet with Taproot enabled and Person B's Signet with Taproot enabled. For 
this to work there would have to be a formal understanding of at what stage a proposed soft fork should be enabled on 
"default" Signet. It would have to be at a sufficiently mature stage (e.g. BIP number allocated, BIP drafted and under 
review, PR open in Bitcoin Core repo under review etc) but early enough so that it can be tested on Signet well in 
advance of being considered for activation on mainnet. This does present challenges if soft forks are enabled on Signet 
and then change/get updated. However there are approaches that AJ in particular is working on to deal with this, one of 
which I have described below.


https://bitcoin.stackexchange.com/questions/98642/can-we-experiment-on-signet-with-multiple-proposed-soft-forks-whilst-maintaining 



2) Assuming there is a "default" Signet how many people and who should have keys to sign each new "default" Signet 
block? If one of these keys is lost or stolen should we reset Signet? Should we plan to reset "default" Signet at 
regular intervals anyway (say every two years)?


Currently it is a 1-of-2 multisig with Kalle Alm and AJ Towns having keys. It was suggested on IRC that there should be 
at least one additional key present in the EU/US timezone so blocks can continue to be mined during an Asia-Pacific 
outage. (Kalle and AJ are both in the Asia-Pacific region). Kalle believes we should keep Signet running indefinitely 
unless we encounter specific problems and personally I think this makes sense.


https://github.com/bitcoin/bitcoin/issues/19787#issuecomment-679160691 



3) Kalle has also experienced concern from some in the community that testnet will somehow be replaced by Signet. This 
is not the case. As long as someone out there is mining testnet blocks testnet will continue. However, there is the 
question of whether testnet needs to be reset. It was last reset in 2012 and there are differing accounts on 
whether this is presenting a problem for users of testnet. Assuming Signet is successful there will be less testing on 

Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-21 Thread Matt Corallo via bitcoin-dev
Hmm, could that not be accomplished by simply building this into new messages? eg, send "betterprotocol", if you see a 
verack and no "betterprotocol" from your peer, send "worseprotocol" before you send a "verack".


Matt

On 8/21/20 5:17 PM, Jeremy wrote:
As for an example of where you'd want multi-round, you could imagine a scenario where you have a feature A which gets 
bugfixed by the introduction of feature B, and you don't want to expose that you support A unless you first negotiate B. 
Or if you can negotiate B you should never expose A, but for old nodes you'll still do it if B is unknown to them. An 
example of this would be (were it not already out without a feature negotiation existing) WTXID/TXID relay.


The SYNC primitive simply codifies what order messages should be in and when you're done with a phase of negotiation 
offering something. It can be done without, but then you have to be more careful to broadcast in the correct order and 
it's not clear when/if you should wait for more time before responding.



On Fri, Aug 21, 2020 at 2:08 PM Jeremy <jlru...@mit.edu> wrote:

Actually we already have service bits (which are sadly limited) which allow 
negotiation of non-bilateral feature
support, so this would supersede that.
--
@JeremyRubin 





Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-21 Thread Matt Corallo via bitcoin-dev
This seems to be pretty overengineered. Do you have a specific use-case in mind for anything more than simply continuing 
the pattern we've been using of sending a message indicating support for a given feature? If we find some in the future, 
we could deploy something like this, though the current proposal makes it possible to do so on a per-feature basis.


The great thing about Suhas' proposal is the diff is about -1/+1 (not including tests), while still getting all the 
flexibility we need. Even better, the code already exists.


Matt

On 8/21/20 3:50 PM, Jeremy wrote:

I have a proposal:

Nodes at protocol version >= 70016 cease to send or process VERACK, and instead use HANDSHAKEACK, which is completed after feature 
negotiation.


This should make everyone happy/unhappy, as in a new protocol number it's fair game to change these semantics to be 
clear that we're acking more than version.


I don't care about when or where these messages are sequenced overall, it seems to have minimal impact. If I had free 
choice, I slightly agree with Eric that verack should come before feature negotiation, as we want to divorce the idea 
that protocol number and feature support are tied.


But once this is done, we can supplant Verack with HANDSHAKENACK or HANDSHAKEACK to signal success or failure to agree 
on a connection. A NACK reason (version too high/low or an important feature missing) could be optional. Implicit NACK 
would be disconnecting, but is discouraged because a peer doesn't know if it should reconnect or the failure was 
intentional.


--

AJ: I think I generally do prefer to have a FEATURE wrapper as you suggested, or a rule that all messages in this period 
are interpreted as features (and may be redundant with p2p message types -- so you can literally just use the p2p 
message name w/o any data).


I think we would want a semantic (which could be based just on message names, but first-class support would be nice) for 
ACKing that a feature is enabled. This is because a transcript of:


NODE0:
FEATURE A
FEATURE B
VERACK

NODE1:
FEATURE A
VERACK

It remains unclear whether Node 1 ignored B because it's an unknown feature, or 
because it is disabled. A transcript like:

NODE0:
FEATURE A
FEATURE B
FEATURE C
ACK A
VERACK

NODE1:
FEATURE A
ACK A
NACK B
VERACK

would make it clear that A and B are known, B is disabled, and C is unknown. C has no support; for B, Node 0 should support 
inbound messages but knows not to send to Node 1; and A has full bilateral support. Maybe instead it could use messages 
FEATURE SEND A and FEATURE RECV A, so we can make the split explicit rather than inferred from ACK/NACK.
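The interpretation of the two transcripts above can be sketched as a small classifier. The message semantics here are Jeremy's proposal, not an implemented protocol, and the category names are invented for illustration:

```python
def classify(offered, peer_offered, peer_acks, peer_nacks):
    """Classify each feature Node 0 offered, given what the peer
    offered, ACKed, and NACKed during the negotiation phase."""
    result = {}
    for f in offered:
        if f in peer_acks:
            result[f] = "bilateral"       # both sides enabled it
        elif f in peer_nacks:
            result[f] = "peer-disabled"   # known to the peer, switched off
        elif f in peer_offered:
            result[f] = "offered-no-ack"  # peer offered but didn't confirm
        else:
            result[f] = "unknown-to-peer" # peer ignored it entirely
    return result

# Node 0 offered A, B, C; Node 1 offered A, ACKed A, and NACKed B:
state = classify({"A", "B", "C"}, {"A"}, {"A"}, {"B"})
assert state == {"A": "bilateral", "B": "peer-disabled", "C": "unknown-to-peer"}
```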



--

I'd also propose that we add a message which is SYNC, which indicates the end of a list of FEATURES and a request to 
send ACKS or NACKS back (which are followed by a SYNC). This allows multi-round negotiation where based on the presence 
of other features, I may expand the set of features I am offering. I think you could do without SYNC, but there are more 
edge cases and the explicitness is nice given that this already introduces future complexity.


This multi-round makes it an actual negotiation rather than a pure announcement system. I don't think it would be used 
much in the near term, but it makes sense to define it correctly now. Build for the future and all...




--
@JeremyRubin 



Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-21 Thread Matt Corallo via bitcoin-dev
Sure, we could do a new message for negotiation, but there doesn’t seem to be a 
lot of reason for it - using the same namespace for negotiation seems fine too. 
In any case, this is one of those things that doesn’t matter in the slightest, 
and if one person volunteers to write a BIP and code, no reason they shouldn’t 
just decide and be allowed to run with it. Rough consensus and running code, as 
it were :)

Matt


> On Aug 20, 2020, at 22:37, Anthony Towns via bitcoin-dev wrote:
> 
> On Fri, Aug 14, 2020 at 03:28:41PM -0400, Suhas Daftuar via bitcoin-dev 
> wrote:
>> In thinking about the mechanism used there, I thought it would be helpful to
>> codify in a BIP the idea that Bitcoin network clients should ignore unknown
>> messages received before a VERACK.  A draft of my proposal is available here
>> [2].
> 
> Rather than allowing arbitrary messages, maybe it would make sense to
> have a specific feature negotiation message, eg:
> 
>  VERSION ...
>  FEATURE wtxidrelay
>  FEATURE packagerelay
>  VERACK
> 
> with the behaviour being that it's valid only between VERSION and VERACK,
> and it takes a length-prefixed-string giving the feature name, optional
> additional data, and if the feature name isn't recognised the message
> is ignored.
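The length-prefixed FEATURE payload described above could be encoded as in this sketch. The single-byte length prefix is an assumption on my part; the email does not fix an encoding:

```python
def encode_feature(name: str, extra: bytes = b"") -> bytes:
    """Length-prefixed feature name, followed by optional extra data."""
    raw = name.encode("ascii")
    assert len(raw) < 256  # single-byte length prefix (assumed)
    return bytes([len(raw)]) + raw + extra

def decode_feature(payload: bytes):
    """Return (feature_name, extra_data); unknown names would be ignored."""
    n = payload[0]
    return payload[1 : 1 + n].decode("ascii"), payload[1 + n :]

name, extra = decode_feature(encode_feature("wtxidrelay"))
assert (name, extra) == ("wtxidrelay", b"")
```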
> 
> If we were to support a "polite disconnect" feature like Jeremy suggested,
> it might be easier to do that for a generic FEATURE message, than
> reimplement it for the message proposed by each new feature.
> 
> Cheers,
> aj
> 


Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-18 Thread Matt Corallo via bitcoin-dev
Features can clearly be optional within an actual protocol. There have been post-handshake negotiations implemented for optional messages which are valid 
at the negotiated version. The protocol may be flexible while remaining 
validateable. There is no reason to force a client to accept unknown message 
traffic.
A generalized versioning change can be implemented in or after the handshake. 
The latter is already done on an ad-hoc basis. The former is possible as long 
as the peer’s version is sufficient to be aware of the behavior. This does not 
imply any need to send invalid messages. The verack itself can simply be 
extended with a matrix of feature support. There is no reason to complicate 
negotiation with an additional message(s).
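The suggestion above that verack itself carry a feature-support matrix could look like the following sketch. Feature names and bit positions are invented for illustration; nothing like this is specified in any BIP:

```python
import struct

# Hypothetical feature-to-bit assignments for a verack bitfield.
FEATURE_BITS = {"wtxidrelay": 0, "packagerelay": 1}

def encode_verack(features):
    """Pack supported features into a fixed-width little-endian bitfield."""
    bits = 0
    for name in features:
        bits |= 1 << FEATURE_BITS[name]
    return struct.pack("<I", bits)

def decode_verack(payload):
    """Recover the set of supported features from a verack payload."""
    (bits,) = struct.unpack("<I", payload)
    return {n for n, b in FEATURE_BITS.items() if bits & (1 << b)}

assert decode_verack(encode_verack({"wtxidrelay"})) == {"wtxidrelay"}
```

A fixed-width field keeps the verack length validatable by older clients, which is the property Eric argues BIP 37's variable-length version extension lacked.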
FWIW, bip37 did this poorly, adding a feature field to the version message, 
resulting in bip60. Due to this design, older protocol-validating clients were 
broken. In this case it was message length that was presumed to not be 
validated.
e

On Aug 18, 2020, at 07:59, Matt Corallo via bitcoin-dev wrote:


This sounds like a great idea!

Bitcoin is no longer a homogeneous network of one client - it is many, with 
different features implemented in each. The Bitcoin protocol hasn't (fully) 
evolved to capture that reality. Initially the Bitcoin protocol had a simple 
numerical version field, but that is wholly impractical for any diverse network 
- some clients may not wish to implement every possible new relay mechanic, and 
why should they have to in order to use other new features?

Bitcoin protocol changes have, many times in recent history, been made via new dummy 
"negotiation" messages, which take advantage of the fact that the Bitcoin 
protocol has always expected clients to ignore unknown messages. Given that pattern, it 
makes sense to have an explicit negotiation phase - after version and before verack, just 
send the list of features that you support to negotiate what the connection will be 
capable of. The exact way we do that doesn't matter much, and sending it as a stream of 
messages which each indicate support for a given protocol feature perfectly captures the 
pattern that has been used in several recent network upgrades, keeping consistency.

Matt

On 8/14/20 3:28 PM, Suhas Daftuar via bitcoin-dev wrote:

Hi,
Back in February I posted a proposal for WTXID-based transaction relay[1] (now 
known as BIP 339), which included a proposal for feature negotiation to take 
place prior to the VERACK message being received by each side.  In my email to 
this list, I had asked for feedback as to whether that proposal was 
problematic, and didn't receive any responses.
Since then, the implementation of BIP 339 has been merged into Bitcoin Core, 
though it has not yet been released.
In thinking about the mechanism used there, I thought it would be helpful to 
codify in a BIP the idea that Bitcoin network clients should ignore unknown 
messages received before a VERACK.  A draft of my proposal is available here[2].
I presume that software upgrading past protocol version 70016 was already 
planning to either implement BIP 339, or ignore the wtxidrelay message proposed 
in BIP 339 (if not, then this would create network split concerns in the future 
-- so I hope that someone would speak up if this were a problem).  When we 
propose future protocol upgrades that would benefit from feature negotiation at 
the time of connection, I think it would be nice to be able to use the same 
method as proposed in BIP 339, without even needing to bump the protocol 
version.  So having an understanding that this is the standard of how other 
network clients operate would be helpful.
If, on the other hand, this is problematic for some reason, I look forward to 
hearing that as well, so that we can be careful about how we deploy future p2p 
changes to avoid disruption.
Thanks,
Suhas Daftuar
[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-February/017648.html
[2] https://github.com/sdaftuar/bips/blob/2020-08-generalized-feature-negotiation/bip-p2p-feature-negotiation.mediawiki


Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-18 Thread Matt Corallo via bitcoin-dev
There are several cases where a new message has been sent as a part of a negotiation without changing the protocol 
version. You may chose to ignore that, but that doesn't mean that it isn't an understood and even relied upon feature of 
the Bitcoin P2P protocol. If you wish to fail connections to new nodes (and risk network splits, as Suhas points out), 
then you may do so, but that doesn't make it a part of the Bitcoin P2P protocol that you must do so. Of course there is 
no "official document" by which we can make a formal appeal, but historical precedent suggests otherwise.


Still, I think we're talking pedantics here, and not in a useful way. Ultimately we need some kind of negotiation which 
is flexible in allowing different software to negotiate different features without a global lock-step version number 
increase. Or, to put it another way, if a feature is fully optional, why should there be a version number increase for 
it - the negotiation of it is independent and a version number only increases confusion over which change "owns" a given 
version number.


I presume you'd support a single message that lists the set of features which a node (optionally) wishes to support on 
the connection. This proposal is fully equivalent to that, instead opting to list them as individual messages instead of 
one message, which is a bit nicer in that they can be handled more independently or by different subsystems including 
even the message hashing.


Matt

On 8/18/20 12:54 PM, Eric Voskuil wrote:

“Bitcoin protocol has always expected clients to ignore unknown messages”

This is not true. Bitcoin has long implemented version negotiation, which is the opposite expectation. Libbitcoin’s p2p 
protocol implementation immediately drops a peer that sends an invalid message according to the negotiated version. The 
fact that a given client does not validate the protocol does not make it an expectation that the protocol not be validated.


Features can clearly be optional within an actual protocol. There have been post-handshake negotiations implemented for 
optional messages which are valid at the negotiated version. The protocol may be flexible while remaining validateable. 
There is no reason to force a client to accept unknown message traffic.


A generalized versioning change can be implemented in or after the handshake. The latter is already done on an ad-hoc 
basis. The former is possible as long as the peer’s version is sufficient to be aware of the behavior. This does not 
imply any need to send invalid messages. The verack itself can simply be extended with a matrix of feature support. 
There is no reason to complicate negotiation with an additional message(s).


FWIW, bip37 did this poorly, adding a feature field to the version message, resulting in bip60. Due to this design, 
older protocol-validating clients were broken. In this case it was message length that was presumed to not be validated.


e


On Aug 18, 2020, at 07:59, Matt Corallo via bitcoin-dev wrote:

This sounds like a great idea!

Bitcoin is no longer a homogeneous network of one client - it is many, with different features implemented in each. 
The Bitcoin protocol hasn't (fully) evolved to capture that reality. Initially the Bitcoin protocol had a simple 
numerical version field, but that is wholly impractical for any diverse network - some clients may not wish to 
implement every possible new relay mechanic, and why should they have to in order to use other new features?


Bitcoin protocol changes have, many times in recent history, been made via new dummy "negotiation" messages, which 
take advantage of the fact that the Bitcoin protocol has always expected clients to ignore unknown messages. Given 
that pattern, it makes sense to have an explicit negotiation phase - after version and before verack, just send the 
list of features that you support to negotiate what the connection will be capable of. The exact way we do that 
doesn't matter much, and sending it as a stream of messages which each indicate support for a given protocol feature 
perfectly captures the pattern that has been used in several recent network upgrades, keeping consistency.


Matt

On 8/14/20 3:28 PM, Suhas Daftuar via bitcoin-dev wrote:

Hi,
Back in February I posted a proposal for WTXID-based transaction relay[1] (now known as BIP 339), which included a 
proposal for feature negotiation to take place prior to the VERACK message being received by each side.  In my email 
to this list, I had asked for feedback as to whether that proposal was problematic, and didn't receive any responses.

Since then, the implementation of BIP 339 has been merged into Bitcoin Core, 
though it has not yet been released.
In thinking about the mechanism used there, I thought it would be helpful to codify in a BIP the idea that Bitcoin 
network clients should ignore unknown messages received before a VERACK.  A draft of my proposal is available he

Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-18 Thread Matt Corallo via bitcoin-dev

This sounds like a great idea!

Bitcoin is no longer a homogeneous network of one client - it is many, with different features implemented in each. The 
Bitcoin protocol hasn't (fully) evolved to capture that reality. Initially the Bitcoin protocol had a simple numerical 
version field, but that is wholly impractical for any diverse network - some clients may not wish to implement every 
possible new relay mechanic, and why should they have to in order to use other new features?


Bitcoin protocol changes have, many times in recent history, been made via new dummy "negotiation" messages, which take 
advantage of the fact that the Bitcoin protocol has always expected clients to ignore unknown messages. Given that 
pattern, it makes sense to have an explicit negotiation phase - after version and before verack, just send the list of 
features that you support to negotiate what the connection will be capable of. The exact way we do that doesn't matter 
much, and sending it as a stream of messages which each indicate support for a given protocol feature perfectly captures 
the pattern that has been used in several recent network upgrades, keeping consistency.


Matt

On 8/14/20 3:28 PM, Suhas Daftuar via bitcoin-dev wrote:

Hi,

Back in February I posted a proposal for WTXID-based transaction relay[1] (now known as BIP 339), which included a 
proposal for feature negotiation to take place prior to the VERACK message being received by each side.  In my email to 
this list, I had asked for feedback as to whether that proposal was problematic, and didn't receive any responses.


Since then, the implementation of BIP 339 has been merged into Bitcoin Core, 
though it has not yet been released.

In thinking about the mechanism used there, I thought it would be helpful to codify in a BIP the idea that Bitcoin 
network clients should ignore unknown messages received before a VERACK.  A draft of my proposal is available here[2].


I presume that software upgrading past protocol version 70016 was already planning to either implement BIP 339, or 
ignore the wtxidrelay message proposed in BIP 339 (if not, then this would create network split concerns in the future 
-- so I hope that someone would speak up if this were a problem).  When we propose future protocol upgrades that would 
benefit from feature negotiation at the time of connection, I think it would be nice to be able to use the same method 
as proposed in BIP 339, without even needing to bump the protocol version.  So having an understanding that this is the 
standard of how other network clients operate would be helpful.


If, on the other hand, this is problematic for some reason, I look forward to hearing that as well, so that we can be 
careful about how we deploy future p2p changes to avoid disruption.


Thanks,
Suhas Daftuar

[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-February/017648.html 



[2] https://github.com/sdaftuar/bips/blob/2020-08-generalized-feature-negotiation/bip-p2p-feature-negotiation.mediawiki 





Re: [bitcoin-dev] BIP 118 and SIGHASH_ANYPREVOUT

2020-08-10 Thread Matt Corallo via bitcoin-dev
I was assuming, largely, that Bitcoin Core will eventually get what you describe here (which is generally termed 
"package relay", implying we relay, and process, groups of transactions as one).


What we'd need for SIGHASH_ANYPREVOUT is a relay network that isn't just smart about fee calculation, but can actually 
rewrite the transactions themselves before passing them on to a local bitcoind.


eg such a network would need to be able to relay
"I have transaction A, with one input, which is valid for any output-idx-0 in a 
transaction spending output B".
and then have the receiver go look up which transaction in its mempool/chain spends output B, then fill in the input 
with that outpoint and hand the now-fully-formed transaction to their local bitcoind for processing.
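A sketch of that receiver-side step (the data structures are invented for illustration; nothing like this exists in bitcoind today):

```python
def fill_anyprevout_input(template_tx, spent_outpoint, mempool):
    """Bind a floating ANYPREVOUT input to whichever mempool transaction
    spends spent_outpoint, per the hypothetical relay rule described above.

    mempool maps txid -> list of outpoints that transaction spends."""
    for txid, spends in mempool.items():
        if spent_outpoint in spends:
            # the signature is valid for output index 0 of the spender
            template_tx["inputs"][0] = (txid, 0)
            return template_tx
    return None  # no known spender of B yet; cannot complete the tx

mempool = {"tx_c": [("b_txid", 0)], "tx_d": [("other", 1)]}
tx_a = {"inputs": [None], "outputs": ["..."]}
filled = fill_anyprevout_input(tx_a, ("b_txid", 0), mempool)
```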


Matt

On 8/7/20 11:34 AM, Richard Myers wrote:
When you say that a special relay network might be more "smart about replacement" in the context of ANYPREVOUT*, do you 
mean these nodes could RBF parts of a package like this:



Given:
  - Package A = UpdateTx_A(n=1): txin: AnchorTx, txout: SettlementTx_A(n=1) -> HtlcTxs(n=1)_A -> ...chain of transactions 
that pin UpdateTx_A(n=1) with high total fee, etc.



And a new package with higher fee rate versions of ANYPREVOUT* transactions in 
the package, but otherwise lower total fee:

  - Package B = UpdateTx_B(n=1): txin: AnchorTx, txout: SettlementTx_B(n=1) -> 
HtlcTxs(n=1)_B -> low total fee package


Relay just the higher up-front fee-rate transactions from package B which get spent by the high absolute fee child 
transactions from package A:


  - Package A' = UpdateTx_B(n=1): txin: AnchorTx, txout: SettlementTx_B(n=1) -> HtlcTxs(n=1)_A -> ...chain of up to 25 
txs that pin UpdateTx(n=1) with high total fee, etc.


On Thu, Aug 6, 2020 at 5:59 PM Matt Corallo via bitcoin-dev wrote:


In general, SIGHASH_NOINPUT makes these issues much, much simpler to 
address, but only if we assume that nodes can
somehow be "smart" about replacement when they see a SIGHASH_NOINPUT spend 
which can spend an output that something else
in the mempool already spends (potentially a different input than the 
relaying node thinks the transaction should
spend). While ideally we'd be able to shove that (significant) complexity 
into the Bitcoin P2P network, that may not be
feasible, but we could imagine a relay network of lightning nodes doing 
that calculation and then passing the
transactions to their local full nodes. 






Re: [bitcoin-dev] BIP 118 and SIGHASH_ANYPREVOUT

2020-08-06 Thread Matt Corallo via bitcoin-dev
Yep! That is the attack I had in mind - just in general any time you have a 
non-relative time limit (ie an HTLC) for
confirmation, relay attacks become critical and its no longer just about 
revocation (which is fine when your time limit
is CSV-based).

In general, SIGHASH_NOINPUT makes these issues much, much simpler to address, 
but only if we assume that nodes can
somehow be "smart" about replacement when they see a SIGHASH_NOINPUT spend 
which can spend an output that something else
in the mempool already spends (potentially a different input than the relaying 
node thinks the transaction should
spend). While ideally we'd be able to shove that (significant) complexity into 
the Bitcoin P2P network, that may not be
feasible, but we could imagine a relay network of lightning nodes doing that 
calculation and then passing the
transactions to their local full nodes.

Given such an overlay network would represent an increase in local mempool 
fees, it is not unreasonable to expect at
least some miners to run a local node which can submit such transactions to 
their template-generating nodes.

Matt

On 8/4/20 10:59 AM, ZmnSCPxj wrote:
> Good morning Matt,
> 
>> Hmm, apologies that little context was provided - this was meant in the 
>> context of the current crop of relay-based attacks that have been 
>> discovered. As we learned in those contexts, “just handle it when it 
>> confirms” doesn’t provide the types of guarantees we were hoping for as 
>> placing commitment transactions in mempools can be used to prevent honest 
>> nodes from broadcasting the latest state. This implies that HTLC security 
>> may be at risk.
>>
> 
> Ah, okay.
> 
> So the attack is this:
> 
> * Attacker connects twice to the LN: one to any node near the victim, one to 
> the victim.
> * Attacker arranges for the attacker-victim channel to have most funds in the 
> side of the victim.
> * The attacker routes a circular payment terminating in the victim-attacker 
> channel.
>   * The victim accepts some incoming HTLC, and provides an outgoing HTLC to 
> the attacker via the victim-attacker channel.
> * The attacker broadcasts a very low-fee old-state transaction of the 
> victim-attacker channel, one that is too low-fee to practically get 
> confirmed, just before the HTLC timeout.
> * The victim-outgoing HTLC times out, making the victim broadcast a 
> unilateral close attempt for the victim-attacker channel in order to enforce 
> the HTLC onchain.
>   * Unfortunately for the victim, relay shenanigans prevent the latest 
> commitment from being broadcast.
> * The attacker waits for the victim-incoming HTLC to timeout, which forces 
> the victim to `update_htlc_failed` the incoming HTLC or risk having that 
> channel closed (and losing future routing fees).
>   * The attacker now gets back its outgoing funds.
> * The attacker lets the old-state transaction get relayed, and then re-seats 
> the latest update transaction to that.
> * Once the latest transaction allows the HTLCs to be published, the attacker 
> claims the victim-outgoing HTLC with the hashlock branch.
>   * The attacker now gets its incoming funds, doubling its money, because 
> that is how the "send me 1 BTC I send you 2 BTC back" Twitter thing works 
> right?
> 
> Hmmm.
> 
> The only thing I can imagine helping here is for the forwarding node to drop 
> channels onchain "early", i.e. if the HTLC will time out in say 14 blocks we 
> drop the channel onchain, so we have a little leeway in bumping up fees for 
> the commitment transaction.
> Maybe.
> I am sure Matt can find yet another relay attack that prevents that, at this 
> point, haha.
> 
> "Are we *still* talking about onchain fees?" - Adelaide 2018
> 
> Regards,
> ZmnSCPxj
> 
> 
> 
> 
>>> On Aug 4, 2020, at 00:23, ZmnSCPxj zmnsc...@protonmail.com wrote:
>>> Good morning Matt,
>>>
 While I admit I haven’t analyzed the feasibility, I want to throw one 
 additional design consideration into the ring.
 Namely, it would ideally be trivial, at the p2p protocol layer, to relay a 
 transaction to a full node without knowing exactly which input transaction 
 that full node has in its mempool/active chain. This is at least 
 potentially important for systems like lightning where you do not know 
 which counterparty commitment transaction(s) are in a random node’s 
 mempool and you should be able to describe to that node that you are 
 spending them nonetheless.
 This is (obviously) an incredibly nontrivial problem both in p2p protocol 
 complexity and mempool optimization, but it may leave SIGHASH_NOINPUT 
 rather useless for lightning without it.
 The least we could do is think about the consensus design in that context, 
 even if we have to provide an external overlay relay network in order to 
 make lightning transactions relay properly (presumably with miners running 
 such software).
>>>
>>> Ah, right.
>>> A feasible attack, without the above, would be 

Re: [bitcoin-dev] BIP 118 and SIGHASH_ANYPREVOUT

2020-08-04 Thread Matt Corallo via bitcoin-dev
Hmm, apologies that little context was provided - this was meant in the context 
of the current crop of relay-based attacks that have been discovered. As we 
learned in those contexts, “just handle it when it confirms” doesn’t provide 
the types of guarantees we were hoping for as placing commitment transactions 
in mempools can be used to prevent honest nodes from broadcasting the latest 
state. This implies that HTLC security may be at risk.

> On Aug 4, 2020, at 00:23, ZmnSCPxj  wrote:
> 
> Good morning Matt,
> 
>> While I admit I haven’t analyzed the feasibility, I want to throw one 
>> additional design consideration into the ring.
>> 
>> Namely, it would ideally be trivial, at the p2p protocol layer, to relay a 
>> transaction to a full node without knowing exactly which input transaction 
>> that full node has in its mempool/active chain. This is at least potentially 
>> important for systems like lightning where you do not know which counterparty 
>> commitment transaction(s) are in a random node’s mempool and you should be 
>> able to describe to that node that you are spending them nonetheless.
>> 
>> This is (obviously) an incredibly nontrivial problem both in p2p protocol 
>> complexity and mempool optimization, but it may leave SIGHASH_NOINPUT rather 
>> useless for lightning without it.
>> 
>> The least we could do is think about the consensus design in that context, 
>> even if we have to provide an external overlay relay network in order to 
>> make lightning transactions relay properly (presumably with miners running 
>> such software).
> 
> Ah, right.
> 
> A feasible attack, without the above, would be to connect to the fullnode of 
> the victim, and connect to miners separately.
> Then you broadcast to the victim one of the old txes, call it tx A, but you 
> broadcast to the miners a *different* old tx, call it B.
> The victim reacts only to tA, but does not react to B since it does not see B 
> in the mempool.
> 
> On the other hand --- what the victim needs to react to is *onchain* 
> confirmed transactions.
> So I think all the victim needs to do, in a Lightning universe utilizing 
> primarily `SIGHASH_NOINPUT`-based mechanisms, is to monitor onchain events 
> and ignore mempool events.
> 
> So if we give fairly long timeouts for our mechanisms, it should be enough, I 
> think, since once a transaction is confirmed its txid does not malleate 
> without a reorg and a `SIGHASH_NOINPUT` signature can then be "locked" to 
> that txid, unless a reorg unconfirms the transaction.
> We only need to be aware of deep reorgs and re-broadcast with a malleated 
> prevout until the tx being spent is deeply confirmed.
> 
> In addition, we want to implement scorch-the-earth, keep-bumping-the-fee 
> strategies anyway, so we would keep rebroadcasting new versions of the 
> spending transaction, and spending from a transaction that is confirmed.
> 
> Or are there other attack vectors you can see that I do not?
> I think this is fixed by looking at the blockchain.
> 
> Regards,
> ZmnSCPxj


Re: [bitcoin-dev] BIP 118 and SIGHASH_ANYPREVOUT

2020-08-03 Thread Matt Corallo via bitcoin-dev
While I admit I haven’t analyzed the feasibility, I want to throw one 
additional design consideration into the ring.

Namely, it would ideally be trivial, at the p2p protocol layer, to relay a 
transaction to a full node without knowing exactly which input transaction that 
full node has in its mempool/active chain. This is at least potentially 
important for systems like lightning where you do not know which counterparty 
commitment transaction(s) are in a random node’s mempool and you should be able 
to describe to that node that you are spending them nonetheless.

This is (obviously) an incredibly nontrivial problem both in p2p protocol 
complexity and mempool optimization, but it may leave SIGHASH_NOINPUT rather 
useless for lightning without it.

The least we could do is think about the consensus design in that context, even 
if we have to provide an external overlay relay network in order to make 
lightning transactions relay properly (presumably with miners running such 
software).

Matt

> On Jul 9, 2020, at 17:46, Anthony Towns via bitcoin-dev 
>  wrote:
> 
> Hello world,
> 
> After talking with Christina ages ago, we came to the conclusion that
> it made more sense to update BIP 118 to the latest thinking than have
> a new BIP number, so I've (finally) opened a (draft) PR to update BIP
> 118 with the ANYPREVOUT bip I've passed around to a few people,
> 
> https://github.com/bitcoin/bips/pull/943
> 
> Probably easiest to just read the new BIP text on github:
> 
> https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki
> 
> It doesn't come with tested code at this point, but I figure better to
> have the text available for discussion than nothing.
> 
> Some significant changes since previous discussion include complete lack
> of chaperone signatures or anything like it (if you want them, you can
> always add them yourself, of course), and that ANYPREVOUTANYSCRIPT no
> longer commits to the value (details/rationale in the text).
> 
> Cheers,
> aj
> 


Re: [bitcoin-dev] Thoughts on soft-fork activation

2020-07-14 Thread Matt Corallo via bitcoin-dev
Thanks Anthony for this writeup!

I find it incredibly disappointing that the idea of naive flag day fork 
activation is being seriously discussed in the
form of BIP 9. Activation of forks is not only about the included changes but 
also around the culture of how changes to
Bitcoin should be and are made. Whether we like it or not, how Taproot 
activates will set a community understanding and
future norms around how many changes are made.

Members of this list lost sleep and years off their life from stress fighting 
to ensure that the process by which
Bitcoin changes is not only principled in its rejection of unilateral changes, 
but also that that idea was broadly
understood, and broadly *enforced* by community members - the only way in which 
it has any impact. That fight is far
from over - Bitcoin's community grows and changes daily, and the history around 
what changed and how has been rewritten
time and time again. Worse still, the principled nature of Bitcoin's change 
process is targeted constantly as untrue in
an attempt by various alternative systems to pretend that their change process 
of "developers ship new code, users run
it blindly" is identical to Bitcoin.

While members of this list may be aware of significant outreach efforts and 
design work to ensure that Taproot is not
only broadly acceptable to Bitcoin users, but also has effectively no impact on 
users who wish not to use it, it is
certainly not the case that all Bitcoin users are aware of that work, nor seen 
the results directly communicated to them.

Worse still, it is hard to argue that a new version of Bitcoin Core containing 
a fixed future activation of a new
consensus rule is anything other than "developers have decided on new rules" 
(even if it is, based on our own knowledge,
not the case). Indeed, even the proposal by Anthony, which makes reference to 
my previous work, has this issue, and it
may not be avoidable - there is very legitimate concern over miners blocking 
changes to Bitcoin which do not harm them
which users objectively desire, potentially purely through apathy. But to 
dismiss the concerns over the optics which set
the stage for how future changes are made to Bitcoin purely because miners may 
be too busy with other things to upgrade
their nodes seems naive at best.

I appreciate the concern over activation timeline given miner apathy, and to 
some extent Anthony's work here addresses
that with decreasing activation thresholds during the second signaling period, 
but bikeshedding on timeline may be merited.

To not make every attempt to distance the activation method from the public 
perception of unilateral activation strikes me
as the worst of all possible outcomes for Bitcoin's longevity. Having a 
quieting period after BIP 9 activation failure
may not be the best way to do that, but it seems like a reasonable attempt.

Matt

On 7/14/20 5:37 AM, Anthony Towns via bitcoin-dev wrote:
> Hi,
> 
> I've been trying to figure out a good way to activate soft forks in
> future. I'd like to post some thoughts on that. So:
> 
> I think there's two proposals that are roughly plausible. The first is
> Luke's recent update to BIP 8:
> 
> https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki
> 
> It has the advantage of being about as simple as possible, and (in my
> opinion) is an incremental improvement on how segwit was activated. Its
> main properties are:
> 
>- signalling via a version bit
>    - state transitions based on height rather than median time
>- 1 year time frame
>- optional mandatory activation at the end of the year
>- mandatory signalling if mandatory activation occurs
>- if the soft fork activates on the most work chain, nodes don't
>  risk falling out of consensus depending on whether they've opted in
>  to mandatory activation or not
> 
> I think there's some fixable problems with that proposal as it stands
> (mostly already mentioned in the comments in the recently merged PR,
> https://github.com/bitcoin/bips/pull/550 )
> 
> The approach I've been working on is based on the more complicated and
> slower method described by Matt on this list back in January. I've got a
> BIP drafted at:
> 
> 
> https://github.com/ajtowns/bips/blob/202007-activation-dec-thresh/bip-decthresh.mediawiki
> 
> The main difference with the mechanism described in January is that the
> threshold gradually decreases during the secondary period -- it starts at
> 95%, gradually decreases until 50%, then mandatorily activates. The idea
> here is to provide at least some potential reward for miners signalling
> in the secondary phase: if 8% of hashpower had refused to signal for
> a soft-fork, then there would have been no chance of activating until
> the very end of the period. This way, every additional percentage of
> hashpower signalling brings the activation deadline forward.
> 
> The main differences between the two proposals is that the BIP 8 approach
> has a relatively short time 
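For illustration only (the function name and linear shape are my assumptions, not the draft BIP's exact schedule), the gradually decreasing threshold described above might be computed as:

```python
def signalling_threshold(periods_elapsed, total_periods, start=0.95, end=0.50):
    """Required hashpower signalling fraction during the secondary period.

    Starts at 95% and falls linearly toward 50%, at which point activation
    becomes mandatory -- so each extra percent of signalling hashpower
    pulls the activation deadline earlier."""
    if periods_elapsed >= total_periods:
        return end
    return start - (start - end) * (periods_elapsed / total_periods)
```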

Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-24 Thread Matt Corallo via bitcoin-dev
Given transaction relay delays and a network topology that is rather 
transparent if you look closely enough, I think this is very real and very 
practical (double-digit % success rate, at least, with some trial and error 
probably 50+). That said, we all also probably know most of the people who know 
enough to go from zero to doing this practically next week. As for motivated 
folks who have lots of time to read code and dig, this seems like something 
worth fixing in the medium term.

Your observation is what’s largely led me to conclude there isn’t a lot we can 
do here without a lot of creativity and fundamental rethinking of our approach. 
One thing I keep harping on is maybe saving the blind-CPFP approach with a) 
eltoo, and b) some kind of magic transaction relay metadata that allows you to 
specify “this spends at least one output on any transaction that spends output 
X” so that nodes can always apply it properly. But maybe that’s a pipedream of 
complexity. I know Antoine has other thoughts.

Matt

> On Jun 22, 2020, at 04:04, Bastien TEINTURIER via bitcoin-dev 
>  wrote:
> 
> 
> Hey ZmnSCPxj,
> 
> I agree that in theory this looks possible, but doing it in practice with 
> accurate control
> of what parts of the network get what tx feels impractical to me (but maybe 
> I'm wrong!).
> 
> It feels to me that an attacker who would be able to do this would break 
> *any* off-chain
> construction that relies on absolute timeouts, so I'm hoping this is insanely 
> hard to
> achieve without cooperation from a miners subset. Let me know if I'm too 
> optimistic on
> this!
> 
> Cheers,
> Bastien
> 
>> On Mon, Jun 22, 2020 at 10:15 AM, ZmnSCPxj wrote:
>> Good morning Bastien,
>> 
>> > Thanks for the detailed write-up on how it affects incentives and 
>> > centralization,
>> > these are good points. I need to spend more time thinking about them.
>> >
>> > > This is one reason I suggested using independent pay-to-preimage
>> > > transactions[1]
>> >
>> > While this works as a technical solution, I think it has some incentives 
>> > issues too.
>> > In this attack, I believe the miners that hide the preimage tx in their 
>> > mempool have
>> > to be accomplice with the attacker, otherwise they would share that tx 
>> > with some of
>> > their peers, and some non-miner nodes would get that preimage tx and be 
>> > able to
>> > gossip them off-chain (and even relay them to other mempools).
>> 
>> I believe this is technically possible with current mempool rules, without 
>> miners cooperating with the attacker.
>> 
>> Basically, the attacker releases two transactions with near-equal fees, so 
>> that neither can RBF the other.
>> It releases the preimage tx near miners, and the timelock tx near non-miners.
>> 
>> Nodes at the boundaries between those that receive the preimage tx and the 
>> timelock tx will receive both.
>> However, they will receive one or the other first.
>> Which one they receive first will be what they keep, and they will reject 
>> the other (and *not* propagate the other), because the difference in fees is 
>> not enough to get past the RBF rules (which requires not just a feerate 
>> increase, but also an increase in absolute fee, of at least the minimum 
>> relay feerate times transaction size).
>> 
>> Because they reject the other tx, they do not propagate the other tx, so the 
>> boundary between the two txes is inviolate, neither can get past that 
>> boundary, this occurs even if everyone is running 100% unmodified Bitcoin 
>> Core code.
>> 
>> I am not a mempool expert and my understanding may be incorrect.
>> 
>> Regards,
>> ZmnSCPxj
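The boundary effect ZmnSCPxj describes follows from the BIP 125 replacement checks. A simplified sketch (constants assumed; Bitcoin Core's feerate comparison plus the absolute-fee conditions of rules 3-4, other rules omitted) shows why two near-equal-fee transactions cannot displace one another:

```python
MIN_RELAY_FEERATE = 1.0  # sat/vbyte, assumed for illustration

def rbf_acceptable(old_fee, old_size, new_fee, new_size):
    """Simplified replacement test: the replacement must beat the old
    feerate AND add at least min-relay-feerate * its own size in fee."""
    if new_fee / new_size <= old_fee / old_size:
        return False  # feerate did not increase
    if new_fee < old_fee + MIN_RELAY_FEERATE * new_size:
        return False  # absolute fee bump too small
    return True
```

With a 200-vbyte preimage tx paying 1000 sats and a timelock tx paying 1001 sats, neither replaces the other, so whichever arrives first at a given node wins and the boundary between the two relay regions stays put.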


Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-23 Thread Matt Corallo via bitcoin-dev



On 4/23/20 8:46 AM, ZmnSCPxj wrote:
>>> -   Miners, being economically rational, accept this proposal and include 
>>> this in a block.
>>>
>>> The proposal by Matt is then:
>>>
>>> -   The hashlock branch should instead be:
>>> -   B and C must agree, and show the preimage of some hash H (hashlock 
>>> branch).
>>> -   Then B and C agree that B provides a signature spending the hashlock 
>>> branch, to a transaction with the outputs:
>>> -   Normal payment to C.
>>> -   Hook output to B, which B can use to CPFP this transaction.
>>> -   Hook output to C, which C can use to CPFP this transaction.
>>> -   B can still (somehow) not maintain a mempool, by:
>>> -   B broadcasts its timelock transaction.
>>> -   B tries to CPFP the above hashlock transaction.
>>> -   If CPFP succeeds, it means the above hashlock transaction exists and B 
>>> queries the peer for this transaction, extracting the preimage and claiming 
>>> the A->B HTLC.
>>
>> Note that no query is required. The problem has been solved and the 
>> preimage-containing transaction should now confirm just fine.
> 
> Ah, right, so it gets confirmed and the `blocksonly` B sees it in a block.
> 
> Even if C hooks a tree of low-fee transactions on its hook output or normal 
> payment, miners will still be willing to confirm this and the B hook CPFP 
> transaction without, right?

Correct, once it makes it into the mempool we can CPFP it and all the regular 
sub-package CPFP calculation will pick it
and its descendants up. Of course this relies on it not spending any other 
unconfirmed inputs.
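The "sub-package CPFP calculation" referred to here is, roughly, ancestor-package scoring; a toy version (data layout invented for illustration) shows how a high-fee child pulls its low-fee parent along:

```python
def ancestor_package_feerate(txid, mempool):
    """Fee rate of txid together with all of its unconfirmed ancestors.

    mempool maps txid -> (fee_sats, vsize, [parent_txids]); parents not
    present in the map are treated as already confirmed."""
    seen, stack = set(), [txid]
    total_fee = total_size = 0
    while stack:
        cur = stack.pop()
        if cur in seen or cur not in mempool:
            continue
        seen.add(cur)
        fee, size, parents = mempool[cur]
        total_fee += fee
        total_size += size
        stack.extend(parents)
    return total_fee / total_size

mempool = {"parent": (100, 100, []), "child": (1900, 100, ["parent"])}
```

Here the 1 sat/vbyte parent alone would linger, but the child's package rate of 10 sat/vbyte makes the pair attractive to mine together.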


Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-23 Thread Matt Corallo via bitcoin-dev
Great summary, a few notes inline.

> On Apr 22, 2020, at 21:50, ZmnSCPxj  wrote:
> 
> Good morning lists et al,
> 
> Let me try to summarize things a little:
> 
> * Suppose we have a forwarding payment A->B->C.
> * Suppose B does not want to maintain a mempool and is running in 
> `blocksonly` mode to reduce operational costs.

Quick point of clarification, due to the mempool lacking a consensus system 
(that’s the whole point, after all :p), there are several reasons why just 
running a full node/having a mempool isn’t sufficient.

> * C triggers B somehow dropping the B<->C channel, such as by sending an 
> `error` message, which will usually cause the other side to drop the channel 
> onchain using its commitment transaction.
> * The dropped B<->C channel has an HTLC (that was set up during the A->B->C 
> forwarding).
> * The HTLC, being used in a Poon-Dryja channel, actually has the following 
> contract text:
> * The fund may be claimed by either of these clauses:
> * C can claim, if C shows the preimage of some hash H (hashlock branch).
> * B and C must agree, and claim after time L (timelock branch).
> * B holds a signature from C that can claim the timelock branch of the HTLC, 
> for a transaction that spends to an output with an `OP_CHECKSEQUENCEVERIFY`.
> * The signature is `SIGHASH_ALL`, so the transaction has a fixed feerate.
> * C can "pin" the HTLC output by spending using the hashlock branch, and 
> creating a large fee, low fee-rate (tree of) transactions.

Another: this is the simplest example. There are also games around the package 
size limits if I recall correctly.

> * As it is a low fee-rate, miners have no incentive to put this in a block, 
> especially if unrelated higher-fee-rate transactions exist that would earn 
> them more money.
> * Even in a full RBF universe, because of the anti-DoS mempool rules, B 
> cannot evict this pinned transaction by just bidding up the feerate.
> * A replacing transaction cannot evict alternatives unless its absolute fee 
> is greater than the absolute fee of the alternative.
> * The pinning transaction has a high fee, but is blockspace-wasteful, so it 
> is:
>   * Undesirable to mine (low feerate).
>   * Difficult to evict (high fee).
> * Thus, B is unable to get its timelock-branch transaction in the mempools of 
> miners.
> * C waits until the A->B HTLC times out, then:
> * C directly contacts miners with an out-of-band proposal to replace its 
> transaction with an alternative that is much smaller and has a low fee, but 
> much better feerate.

Or they can just wait. For example in today’s mempool it would not be strange 
for a transaction at 1 sat/vbyte to wait a day but eventually confirm.

> * Miners, being economically rational, accept this proposal and include this 
> in a block.
> 
> The proposal by Matt is then:
> 
> * The hashlock branch should instead be:
> * B and C must agree, and show the preimage of some hash H (hashlock branch).
> * Then B and C agree that B provides a signature spending the hashlock 
> branch, to a transaction with the outputs:
> * Normal payment to C.
> * Hook output to B, which B can use to CPFP this transaction.
> * Hook output to C, which C can use to CPFP this transaction.
> * B can still (somehow) not maintain a mempool, by:
> * B broadcasts its timelock transaction.
> * B tries to CPFP the above hashlock transaction.
> * If CPFP succeeds, it means the above hashlock transaction exists and B 
> queries the peer for this transaction, extracting the preimage and claiming 
> the A->B HTLC.

Note that no query is required. The problem has been solved and the 
preimage-containing transaction should now confirm just fine.

> Is that a fair summary?

Yep!

> --
> 
> Naively, and remembering I am completely ignorant of the exact details of the 
> mempool rules, it seems to me quite strange that we are allowing an 
> undesirable transaction (tree) into the mempool:
> 
> * Undesirable to mine (low fee-rate).
> * Difficult to evict (high fee).

As noted, such transactions today are profit in 10 hours. Just because they’re 
big doesn’t mean they don’t pay.

> Miners are not interested in low fee-rate transactions, as long as higher 
> fee-rate transactions exist.
> And being difficult to evict means miners cannot get alternatives that are 
> more lucrative for them.
> 
> The reason (as I understand it) eviction is purposely made difficult here is 
> to prevent certain DoS attacks on Bitcoin nodes, specifically:
> 
> 1. Attacker sends a low fee-rate tx as a "root" transaction.
> 2. Attacker sends thousands of low fee-rate tx that build off the above root.

I believe the limit is 25, though the point stands, mostly from a total-size 
perspective.

> 3. Attacker sends a slightly higher fee-rate alternative to the root, 
> evicting the above tree of txes.
> 4. Attacker sends thousands of low fee-rate tx that build off the latest root.
> 5. GOTO 3.
> 
> However, it seems to me, naively, that "an ounce of prevention 

Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via bitcoin-dev


On 4/22/20 7:27 PM, Olaoluwa Osuntokun wrote:
> 
>> Indeed, that is what I’m suggesting
> 
> Gotcha, if this is indeed what you're suggesting (all HTLC spends are now
> 2-of-2 multi-sig), then I think the modifications to the state machine I
> sketched out in an earlier email are required. An exact construction which
> achieves the requirements of "you can't broadcast until you have a secret
> which I can obtain from the htlc sig for your commitment transaction, and my
> secret is revealed with another swap", appears to be an open problem, atm.

Hmm, indeed, it does seem to require a change to the state machine, but I don't 
think a very interesting one. Because B
providing A an HTLC signature spending a commitment transaction B will 
broadcast does not allow A to actually broadcast
said HTLC transaction, B can be rather liberal with it. Indeed, however, it 
would require that B provide such a
signature before A can send the commitment_signed that exists today.

> Even if they're restricted in this fashion (must be a 1-in-1 out,
> sighashall, fees are pre agreed upon), they can still spend that with a CPFP
> (while still unconfirmed in the mempool) and create another heavy tree,
> which puts us right back at the same bidding war scenario?

Right, you'd have to use anchor outputs just like we do on the commitment 
transaction :).

>> There are a bunch of ways of doing pinning - just opting into RBF isn’t
>> even close to enough.
> 
> Mhmm, there're other ways of doing pinning. But with anchors as is defined
> in that spec PR, they're forced to spend with an RBF-replaceable
> transaction, which means the party wishing to time things out can enter into
> a bidding war. If the party trying to impeded things participates in this
> progressive absolute fee increase, it's likely that the war terminates
> with _one_ of them getting into the block, which seems to resolve
> everything?

No? Even if we assume there are no tricks that you can play with, eg, the 
package limits during eviction, which I'd be
surprised about, the "absolute fee/feerate" thing still screws you. The 
attacker here gets to hold something at the
bottom of the mempool and the poor honest party is going to have to pay an 
absurd (likely more than the HTLC value) fee
just to get it unstuck, whereas the attacker never would have had to pay said 
fee.

> -- Laolu
> 
> 
> On Wed, Apr 22, 2020 at 4:20 PM Matt Corallo wrote:
> 
> 
> 
>> On Apr 22, 2020, at 16:13, Olaoluwa Osuntokun wrote:
>>
>> > Hmm, maybe the proposal wasn't clear. The idea isn't to add signatures to
>> > broadcasted transactions, but instead to CPFP a maybe-broadcasted
>> > transaction by sending a transaction which spends it and seeing if it is
>> > accepted
>>
>> Sorry I still don't follow. By "we clearly need to go the other 
>> direction -
>> all HTLC output spends need to be pre-signed.", you don't mean that the 
>> HTLC
>> spends of the non-broadcaster also need to be an off-chain 2-of-2 
>> multi-sig
>> covenant? If the other party isn't restricted w.r.t _how_ they can spend 
>> the
>> output (non-rbf'd, ect), then I don't see how that addresses anything.
> 
> Indeed, that is what I’m suggesting. Anchor output and all. One thing we 
> could think about is only turning it on
> over a certain threshold, and having a separate 
> “only-kinda-enforceable-on-chain-HTLC-in-flight” limit.
> 
>> Also see my mail elsewhere in the thread that the other party is actually
>> forced to spend their HTLC output using an RBF-replaceable transaction. 
>> With
>> that, I think we're all good here? In the end both sides have the 
>> ability to
>> raise the fee rate of their spending transactions with the highest 
>> winning.
>> As long as one of them confirms within the CLTV-delta, then everyone is
>> made whole.
> 
> It does seem like my cached recollection of RBF opt-in was incorrect but 
> please re-read the intro email. There are a
> bunch of ways of doing pinning - just opting into RBF isn’t even close to 
> enough.
> 
>> [1]: https://github.com/bitcoin/bitcoin/pull/18191
>>
>>
>> On Wed, Apr 22, 2020 at 9:50 AM Matt Corallo wrote:
>>
>> A few replies inline.
>>
>> On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
>> > Hi Matt,
>> >
>> >
>> >> While this is somewhat unintuitive, there are any number of good 
>> anti-DoS
>> >> reasons for this, eg:
>> >
>> > None of these really strikes me as "good" reasons for this 
>> limitation, which
>> > is at the root of this issue, and will also plague any more 
>> complex Bitcoin
>> > contracts which rely on nested trees of transactions to confirm 
>> (CTV, Duplex,
>> > channel factories, etc). Regarding the various (seemingly 
>> arbitrary) package
>> 

Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via bitcoin-dev


> On Apr 22, 2020, at 16:13, Olaoluwa Osuntokun  wrote:
> 
> > Hmm, maybe the proposal wasn't clear. The idea isn't to add signatures to
> > braodcasted transactions, but instead to CPFP a maybe-broadcasted
> > transaction by sending a transaction which spends it and seeing if it is
> > accepted
> 
> Sorry I still don't follow. By "we clearly need to go the other direction -
> all HTLC output spends need to be pre-signed.", you don't mean that the HTLC
> spends of the non-broadcaster also need to be an off-chain 2-of-2 multi-sig
> covenant? If the other party isn't restricted w.r.t _how_ they can spend the
> output (non-rbf'd, ect), then I don't see how that addresses anything.

Indeed, that is what I’m suggesting. Anchor output and all. One thing we could 
think about is only turning it on over a certain threshold, and having a 
separate “only-kinda-enforceable-on-chain-HTLC-in-flight” limit.

> Also see my mail elsewhere in the thread that the other party is actually
> forced to spend their HTLC output using an RBF-replaceable transaction. With
> that, I think we're all good here? In the end both sides have the ability to
> raise the fee rate of their spending transactions with the highest winning.
> As long as one of them confirms within the CLTV-delta, then everyone is
> made whole.

It does seem like my cached recollection of RBF opt-in was incorrect but please 
re-read the intro email. There are a bunch of ways of doing pinning - just 
opting into RBF isn’t even close to enough.

> [1]: https://github.com/bitcoin/bitcoin/pull/18191
> 
> 
>> On Wed, Apr 22, 2020 at 9:50 AM Matt Corallo  
>> wrote:
>> A few replies inline.
>> 
>> On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
>> > Hi Matt,
>> > 
>> > 
>> >> While this is somewhat unintuitive, there are any number of good anti-DoS
>> >> reasons for this, eg:
>> > 
>> > None of these really strikes me as "good" reasons for this limitation, 
>> > which
>> > is at the root of this issue, and will also plague any more complex Bitcoin
>> > contracts which rely on nested trees of transactions to confirm (CTV, 
>> > Duplex,
>> > channel factories, etc). Regarding the various (seemingly arbitrary) 
>> > package
>> > limits it's likely the case that any issues w.r.t computational complexity
>> > that may arise when trying to calculate evictions can be ameliorated with
>> > better choice of internal data structures.
>> > 
>> > In the end, the simplest heuristic (accept the higher fee rate package) 
>> > side
>> > steps all these issues and is also the most economically rational from a
>> > miner's perspective. Why would one prefer a higher absolute fee package
>> > (which could be very large) over another package with a higher total _fee
>> > rate_?
>> 
>> This seems like a somewhat unnecessary drive-by insult of a project you 
>> don't contribute to, but feel free to start with
>> a concrete suggestion here :).
>> 
>> >> You'll note that B would be just fine if they had a way to safely monitor 
>> >> the
>> >> global mempool, and while this seems like a prudent mitigation for
>> >> lightning implementations to deploy today, it is itself a quagmire of
>> >> complexity
>> > 
>> > Is it really all that complex? Assuming we're talking about just watching
>> > for a certain script template (the HTLC script) in the mempool to be able to
>> > pull a pre-image as soon as possible. Early versions of lnd used the 
>> > mempool
>> > for commitment broadcast detection (which turned out to be a bad idea so we
>> > removed it), but at a glance I don't see why watching the mempool is so
>> > complex.
>> 
>> Because watching your own mempool is not guaranteed to work, and during 
>> upgrade cycles that include changes to the
>> policy rules an attacker could exploit your upgraded/non-upgraded status to 
>> perform the same attack.
>> 
>> >> Further, this is a really obnoxious assumption to hoist onto lightning
>> >> nodes - having an active full node with an in-sync mempool is a lot more
>> >> CPU, bandwidth, and complexity than most lightning users were expecting to
>> >> face.
>> > 
>> > This would only be a requirement for Lightning nodes that seek to be a part
>> > of the public routing network with a desire to _forward_ HTLCs. This
>> > doesn't affect laptops or mobile phones which likely mostly have private
>> > channels and don't participate in HTLC forwarding. I think it's pretty
>> > reasonable to expect a "proper" routing node on the network to be backed by
>> > a full-node. The bandwidth concern is valid, but we'd need concrete numbers
>> > that compare the bandwidth overhead of mempool awareness (assuming the
>> > latest and greatest mempool syncing) compared with the overhead of the
>> > channel update gossip and gossip-queries overhead which LN nodes face 
>> > today
>> > as is to see how much worse off they really would be.
>> 
>> If mempool-watching were practical, maybe, though there are a number of 
>> folks who are talking about designing
>> 

Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via bitcoin-dev
Hmm, that's an interesting suggestion; it definitely raises the bar for attack 
execution rather significantly. Because lightning (and other second-layer 
systems) already relies heavily on uncensored access to blockchain data, it's 
reasonable to extend the "if you don't have enough blocks, aggressively query 
various sources to find new blocks, or, really, just do it always" solution to 
"also send relevant transactions while we're at it".

Sadly, unlike for block data, there is no consensus mechanism for nodes to 
ensure the transactions in their mempools are the same as others. Thus, if you 
focus on sending the pinning transaction to miner nodes directly (which isn't 
trivial, but also not nearly as hard as it sounds), you could still pull off 
the attack. However, to do it now, you'd need to
wait for your counterparty to broadcast the corresponding timeout transaction 
(once it is confirmable, and can thus get into mempools), turning the whole 
thing into a mempool-acceptance race. Luckily there isn’t much cost to 
*trying*, though it’s less likely you’ll succeed.

There are also practical design issues - if you’re claiming multiple HTLC 
outputs in a single transaction, the node would need to provide reject messages 
for each conflicted input, something whose DoS implications we’d need to think 
hard about.

In any case, while it’s definitely better than nothing, it’s unclear if it’s 
really the kind of thing I’d want to rely on for my own funds.

Matt


> On 4/22/20 2:24 PM, David A. Harding wrote:
>> On Mon, Apr 20, 2020 at 10:43:14PM -0400, Matt Corallo via Lightning-dev 
>> wrote:
>> A lightning counterparty (C, who received the HTLC from B, who
>> received it from A) today could, if B broadcasts the commitment
>> transaction, spend an HTLC using the preimage with a low-fee,
>> RBF-disabled transaction.  After a few blocks, A could claim the HTLC
>> from B via the timeout mechanism, and then after a few days, C could
>> get the HTLC-claiming transaction mined via some out-of-band agreement
>> with a small miner. This leaves B short the HTLC value.
> 
> IIUC, the main problem is honest Bob will broadcast a transaction
> without realizing it conflicts with a pinned transaction that's already
> in most node's mempools.  If Bob knew about the pinned transaction and
> could get a copy of it, he'd be fine.
> 
> In that case, would it be worth re-implementing something like a BIP61
> reject message but with an extension that returns the txids of any
> conflicts?  For example, when Bob connects to a bunch of Bitcoin nodes
> and sends his conflicting transaction, the nodes would reply with
> something like "rejected: code 123: conflicts with txid 0123...cdef".
> Bob could then reply with a getdata('tx', '0123...cdef') to get the
> pinned transaction, parse out its preimage, and resolve the HTLC.
> 
> This approach isn't perfect (if it even makes sense at all---I could be
> misunderstanding the problem) because one of the problems that caused
> BIP61 to be disabled in Bitcoin Core was its unreliability, but I think
> if Bob had at least one honest peer that had the pinned transaction in
> its mempool and which implemented reject-with-conflicting-txid, Bob
> might be ok.
> 
> -Dave
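Dave's hypothetical reject-with-conflicting-txid flow might look something like the following toy sketch. No such P2P extension exists today; the dict-based "mempool", field names, and helper function are all invented for illustration:

```python
# Toy model of the flow: broadcast, learn of the conflict, fetch the
# conflicting tx, extract the preimage. The dict-based "mempool" and all
# field names are invented; no such reject extension exists today.

def try_broadcast(node_mempool, tx):
    """Return ('accepted', None) or ('conflict', txid-of-conflict)."""
    for txid, pool_tx in node_mempool.items():
        if set(pool_tx["inputs"]) & set(tx["inputs"]):
            return ("conflict", txid)
    node_mempool[tx["txid"]] = tx
    return ("accepted", None)

# C's pinned preimage-spend is already in the node's mempool:
mempool = {"pinned": {"txid": "pinned",
                      "inputs": [("htlc_outpoint", 0)],
                      "preimage": "secret"}}

# Bob broadcasts his conflicting timeout claim and gets a conflict back:
status, conflict = try_broadcast(mempool, {"txid": "timeout",
                                           "inputs": [("htlc_outpoint", 0)]})

preimage = None
if status == "conflict":
    # getdata('tx', conflict) in the real proposal; here, a dict lookup.
    preimage = mempool[conflict]["preimage"]
```

As noted above, this only helps if at least one honest peer both has the pinned transaction in its mempool and implements the extension.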

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via bitcoin-dev



On 4/22/20 12:12 AM, ZmnSCPxj wrote:
> Good morning Matt, and list,
> 
> 
> 
>> RBF Pinning HTLC Transactions (aka "Oh, wait, I can steal funds, how, 
>> now?")
>> =============================================================================
>>
>> You'll note that in the discussion of RBF pinning we were pretty broad, 
>> and that that discussion seems to in fact cover
>> our HTLC outputs, at least when spent via (3) or (4). It does, and in 
>> fact this is a pretty severe issue in today's
>> lightning protocol [2]. A lightning counterparty (C, who received the 
>> HTLC from B, who received it from A) today could,
>> if B broadcasts the commitment transaction, spend an HTLC using the 
>> preimage with a low-fee, RBF-disabled transaction.
>> After a few blocks, A could claim the HTLC from B via the timeout 
>> mechanism, and then after a few days, C could get the
>> HTLC-claiming transaction mined via some out-of-band agreement with a 
>> small miner. This leaves B short the HTLC value.
> 
> My (cached) understanding is that, since RBF is signalled using `nSequence`, 
> any `OP_CHECKSEQUENCEVERIFY` also automatically imposes the requirement "must 
> be RBF-enabled", including `<0> OP_CHECKSEQUENCEVERIFY`.
> Adding that clause (2 bytes in witness if my math is correct) to the hashlock 
> branch may be sufficient to prevent C from making an RBF-disabled transaction.

Hmm, indeed, though note that (IIRC) you can break this by adding children or 
parents which are *not* RBF-enabled and
then the package may lose the ability to be RBF'd.
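ZmnSCPxj's observation above (that `<0> OP_CHECKSEQUENCEVERIFY` forces BIP125 signaling) can be sketched as follows. This is an illustrative simplification, not Bitcoin Core's validation code; among other things it omits BIP112's transaction-version check:

```python
# Simplified view of BIP125 signaling and BIP112's disable-flag check.
# Not Bitcoin Core code; e.g. the tx-version >= 2 requirement is omitted.

SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31

def signals_rbf(input_sequences):
    # BIP125: a tx opts in if any input's nSequence is below 0xfffffffe.
    return any(seq < 0xfffffffe for seq in input_sequences)

def can_satisfy_zero_csv(seq):
    # BIP112: OP_CSV fails if the input's nSequence disable flag is set,
    # so <0> OP_CHECKSEQUENCEVERIFY requires the flag to be unset.
    return not (seq & SEQUENCE_LOCKTIME_DISABLE_FLAG)

csv_ok_seq = 0x00000000   # satisfies <0> OP_CSV
final_seq = 0xffffffff    # neither signals nor satisfies the CSV check

# Any sequence that can satisfy <0> OP_CSV is < 0x80000000 < 0xfffffffe,
# hence necessarily signals replaceability:
implied_signal = can_satisfy_zero_csv(csv_ok_seq) and signals_rbf([csv_ok_seq])
```

Note that this only covers the explicit per-transaction signal; as the reply above points out, related package transactions can still affect replaceability in practice.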

> But then you mention out-of-band agreements with miners, which basically 
> means the transaction might not be in the mempool at all, in which case the 
> vulnerability is not really about RBF or relay, but sheer economics.

No. The whole point of this attack is that you keep a transaction in the 
mempool but unconfirmed via RBF pinning, which
prevents an *alternative* transaction from being confirmed. You then have 
plenty of time to go get it confirmed later.

> The payment is A->B->C, and the HTLC A->B must have a larger timeout (L + 1) 
> than the HTLC B->C (L), in abstract non-block units.
> The vulnerability you are describing means that the current time must now be 
> L + 1 or greater ("A could claim the HTLC from B via the timeout mechanism", 
> meaning the A->B HTLC has timed out already).
> 
> If so, then the B->C transaction has already timed out in the past and can be 
> claimed in two ways, either via B timeout branch or C hashlock branch.
> This sets up a game where B and C bid to miners to get their version of 
> reality committed onchain.
> (We can neglect out-of-band agreements here; miners have the incentive to 
> publicly leak such agreements so that other potential bidders can offer even 
> higher fees for their versions of that transaction.)

Right, I think I didn't explain clearly enough. The point is that, here, B 
tries to broadcast the timeout transaction
but cannot because there is an in-mempool conflict.

> Before L+1, C has no incentive to bid, since placing any bid at all will leak 
> the preimage, which B can then turn around and use to spend from A, and A and 
> C cannot steal from B.
> 
> Thus, B should ensure that *before* L+1, the HTLC-Timeout has been committed 
> onchain, which outright prevents this bidding war from even starting.
> 
> The issue then is that B is using a pre-signed HTLC-timeout, which is needed 
> since it is its commitment tx that was broadcast.
> This prevents B from RBF-ing the HTLC-Timeout transaction.
> 
> So what is needed is to allow B to add fees to HTLC-Timeout:
> 
> * We can add an RBF carve-out output to HTLC-Timeout, at the cost of more 
> blockspace.
> * With `SIGHASH_NOINPUT` we can make the C-side signature 
> `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side signature 
> for a higher-fee version of HTLC-Timeout (assuming my cached understanding of 
> `SIGHASH_NOINPUT` still holds).

This does not solve the issue: you can add as much fee as you want, but as 
long as the transaction is RBF-pinned, there is not much you can do in an 
automated fashion.

> With this, B can exponentially increase the fee as L+1 approaches.
> If B can get HTLC-Timeout confirmed before L+1, then C cannot steal the HTLC 
> value at all, since the UTXO it could steal from has already been spent.
> 
> In particular, it does not seem to me that it is necessary to change the 
> hashlock-branch transaction of C at all, since this mechanism is enough to 
> sidestep the issue (as I understand it).
> But it does point to a need to make HTLC-Timeout (and possibly symmetrically, 
> HTLC-Success) also fee-bumpable.
> 
> Note as well that this does not require a mempool: B can run in `blocksonly` 
> mode and as each block comes in from L to L+1, if HTLC-Timeout is not 
> confirmed, feebump HTLC-Timeout.
> In particular, HTLC-Timeout comes into play only if B broadcast its own 
> commitment transaction, and B *should* be aware that it 

Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via bitcoin-dev
A few replies inline.

On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
> Hi Matt,
> 
> 
>> While this is somewhat unintuitive, there are any number of good anti-DoS
>> reasons for this, eg:
> 
> None of these really strikes me as "good" reasons for this limitation, which
> is at the root of this issue, and will also plague any more complex Bitcoin
> contracts which rely on nested trees of transactions to confirm (CTV, Duplex,
> channel factories, etc). Regarding the various (seemingly arbitrary) package
> limits it's likely the case that any issues w.r.t computational complexity
> that may arise when trying to calculate evictions can be ameliorated with
> better choice of internal data structures.
> 
> In the end, the simplest heuristic (accept the higher fee rate package) side
> steps all these issues and is also the most economically rational from a
> miner's perspective. Why would one prefer a higher absolute fee package
> (which could be very large) over another package with a higher total _fee
> rate_?

This seems like a somewhat unnecessary drive-by insult of a project you don't 
contribute to, but feel free to start with
a concrete suggestion here :).

>> You'll note that B would be just fine if they had a way to safely monitor the
>> global mempool, and while this seems like a prudent mitigation for
>> lightning implementations to deploy today, it is itself a quagmire of
>> complexity
> 
> Is it really all that complex? Assuming we're talking about just watching
> for a certain script template (the HTLC script) in the mempool to be able to
> pull a pre-image as soon as possible. Early versions of lnd used the mempool
> for commitment broadcast detection (which turned out to be a bad idea so we
> removed it), but at a glance I don't see why watching the mempool is so
> complex.

Because watching your own mempool is not guaranteed to work, and during upgrade 
cycles that include changes to the
policy rules an attacker could exploit your upgraded/non-upgraded status to 
perform the same attack.

>> Further, this is a really obnoxious assumption to hoist onto lightning
>> nodes - having an active full node with an in-sync mempool is a lot more
>> CPU, bandwidth, and complexity than most lightning users were expecting to
>> face.
> 
> This would only be a requirement for Lightning nodes that seek to be a part
> of the public routing network with a desire to _forward_ HTLCs. This
> doesn't affect laptops or mobile phones which likely mostly have private
> channels and don't participate in HTLC forwarding. I think it's pretty
> reasonable to expect a "proper" routing node on the network to be backed by
> a full-node. The bandwidth concern is valid, but we'd need concrete numbers
> that compare the bandwidth overhead of mempool awareness (assuming the
> latest and greatest mempool syncing) compared with the overhead of the
> channel update gossip and gossip-queries overhead which LN nodes face today
> as is to see how much worse off they really would be.

If mempool-watching were practical, maybe, though there are a number of folks 
who are talking about designing
partially-offline local lightning hubs which would be rendered impractical.

> As detailed a bit below, if nodes watch the mempool, then this class of
> attack assuming the anchor output format as described in the open
> lightning-rfc PR is mitigated. At a glance, watching the mempool seems like
> a far less involved process compared to modifying the state machine as its
> defined today. By watching the mempool and implementing the changes in
> #lightning-rfc/688, then this issue can be mitigated _today_. lnd 0.10
> doesn't yet watch the mempool (but does include anchors [1]), but unless I'm
> missing something it should be pretty straightforward to add, which more or 
> less
> resolves this issue all together.
> 
>> not fixing this issue seems to render the whole exercise somewhat useless
> 
> Depends on if one considers watching the mempool a fix. But even with that a
> base version of anchors still resolves a number of issues including:
> eliminating the commitment fee guessing game, allowing users to pay less on
> force close, being able to coalesce 2nd level HTLC transactions with the
> same CLTV expiry, and actually being able to reliably enforce multi-hop HTLC
> resolution.
> 
>> Instead of making the HTLC output spending more free-form with
>> SIGHASH_ANYONECAN_PAY|SIGHASH_SINGLE, we clearly need to go the other
>> direction - all HTLC output spends need to be pre-signed.
> 
> I'm not sure this is actually immediately workable (need to think about it
> more). To see why, remember that the commit_sig message includes HTLC
> signatures for the _remote_ party's commitment transaction, so they can
> spend the HTLCs if they broadcast their version of the commitment (force
> close). If we don't somehow also _gain_ signatures (our new HTLC signatures)
> allowing us to spend HTLCs on _their_ version of the commitment, then if
> they 

[bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-20 Thread Matt Corallo via bitcoin-dev
[Hi bitcoin-dev, in lightning-land we recently discovered some quite 
frustrating issues which I thought may merit
broader discussion]

While reviewing the new anchor outputs spec [1] last week, I discovered it 
introduced a rather nasty ability for a user
to use RBF Pinning to steal in-flight HTLCs which are being enforced on-chain. 
Sadly, Antoine pointed out that this is
an issue in today's lightning as well, though see [2] for qualifications. After 
some back-and-forth with a few other
lightning folks, it seems clear that there is no easy+sane fix (and the 
practicality of exploitation today seems
incredibly low), so soliciting ideas publicly may be the best step forward.

I've included lots of background for those who aren't super comfortable with 
lightning's current design, but if you
already know it well, you can skip at least background 1 & 2.

Background - Lightning's Transactions (you can skip this)
=========================================================

As many of you likely know, lightning today does all its update mechanics 
through:
 a) a 2-of-2 multisig output, locking in the channel,
 b) a "commitment transaction", which spends that output: i) back to its 
owners, ii) to "HTLC outputs",
 c) HTLC transactions which spend the relevant commitment transaction HTLC 
outputs.

This somewhat awkward third layer of transactions is required to allow HTLC 
timeouts to be significantly lower than the
time window during which a counterparty may be punished for broadcasting a 
revoked state. That is to say, you want to
"lock-in" the resolution of an HTLC output (ie by providing the hash lock 
preimage on-chain) by a fixed block height
(likely a few hours from the HTLC creation), but the punishment mechanism needs 
to occur based on a sequence height
(possibly a day or more after transaction broadcast).
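A toy timeline may help make the two kinds of locks concrete. The heights and delay below are assumptions for illustration only:

```python
# Invented numbers: a commitment broadcast shortly before the HTLC expiry,
# with a 144-block to_self_delay on the broadcaster's outputs.

HTLC_EXPIRY_HEIGHT = 700_050   # absolute height: preimage must be on-chain
TO_SELF_DELAY = 144            # relative delay: punishment window (blocks)

broadcast_height = 700_040
# The pre-signed HTLC-Success tx can confirm right away, locking in the
# resolution (revealing the preimage) before the absolute deadline...
resolution_height = broadcast_height + 1
# ...while its sequence-locked output only becomes spendable after the
# punishment window, well past the HTLC expiry:
spendable_height = resolution_height + TO_SELF_DELAY

locked_in_on_time = resolution_height <= HTLC_EXPIRY_HEIGHT   # True
funds_delayed = spendable_height > HTLC_EXPIRY_HEIGHT         # True
```

This is exactly why the second-stage transactions exist: the HTLC's fate can be decided by the absolute deadline while the punishment window still runs its full relative course.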

As Bitcoin has no covenants, this must occur using pre-signed transactions - 
namely "HTLC-Success" and "HTLC-Timeout"
transactions, which finalize the resolution of an HTLC, but have a 
sequence-lock for some time during which the funds
may be taken if they had previously been revoked. To avoid needless delays, if 
the counterparty which did *not*
broadcast the commitment transaction wishes to claim the HTLC value, they may 
do so immediately (as there is no reason
to punish the non-broadcaster for having *not* broadcasted a revoked state). 
Thus, we have four possible HTLC
resolutions depending on the combination of which side broadcast the HTLC and 
which side sent the HTLC (ie who can claim
it vs who can claim it after time-out):

 1) pre-signed HTLC-Success transaction, providing the preimage in the witness 
and sent to an output which is sequence-
locked for some time to provide the non-broadcasting side the opportunity 
to take the funds,
 2) pre-signed HTLC-Timeout transaction, time-locked to N, providing no 
preimage, but with a similar sequence lock and
output as above,
 3) non-pre-signed HTLC claim, providing the preimage in the witness and 
unencumbered by the broadcaster's signature,
 4) non-pre-signed HTLC timeout, OP_CLTV to N, and similarly unencumbered.
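The four resolution paths above can be restated compactly. The field and key names below are purely illustrative, not from any BIP or implementation:

```python
# Compact restatement of the four HTLC resolution paths, keyed by
# (who broadcast the commitment, how the HTLC resolves).
from dataclasses import dataclass

@dataclass(frozen=True)
class HtlcPath:
    pre_signed: bool       # needs the counterparty's signature in advance
    needs_preimage: bool   # hash-lock branch (vs timeout branch)
    sequence_locked: bool  # output delayed so a revoked state can be punished

PATHS = {
    ("broadcaster", "success"):     HtlcPath(True,  True,  True),   # (1)
    ("broadcaster", "timeout"):     HtlcPath(True,  False, True),   # (2)
    ("non-broadcaster", "success"): HtlcPath(False, True,  False),  # (3)
    ("non-broadcaster", "timeout"): HtlcPath(False, False, False),  # (4)
}
```

Paths (3) and (4) being neither pre-signed nor sequence-locked is what makes them interesting for the pinning discussion that follows.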

Background 2 - RBF Pinning (you can skip this)
==============================================

Bitcoin Core's general policy on RBF transactions is that if a counterparty 
(either to the transaction, eg in lightning,
or not, eg a P2P node which sees the transaction early) can modify a 
transaction, especially if they can add an input or
output, they can prevent it from confirming in a world where there exists a 
mempool (ie in a world where Bitcoin works).
While this is somewhat unintuitive, there are any number of good anti-DoS 
reasons for this, eg:
 * (ok, this is a bad reason, but) a child transaction could be marked 
'non-RBF', which would mean allowing the parent
   to be RBF'd would violate the assumptions those who look at the RBF opt-in 
marking make,
 * a parent may be very large, but low feerate - this requires the RBF attempt 
to "pay for its own relay" and include a
   large absolute fee just to get into the mempool,
 * one of the various package size limits is at its maximum, and depending on 
the structure of the package the
   computational complexity of calculating evictions may be more than we want 
to do for a given transaction.
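The "pay for its own relay" point can be sketched as a simplified replacement check, loosely modeled on BIP125 rules 3 and 4 plus the feerate-improvement requirement. This is not Bitcoin Core's actual logic; package evaluation and descendant accounting are ignored entirely:

```python
# Simplified replacement check: a replacement must beat the evicted
# feerate AND pay the evicted absolute fee plus its own relay cost.

INCREMENTAL_RELAY_FEERATE = 1  # sat/vbyte, illustrative

def replacement_acceptable(old_fee, old_vsize, new_fee, new_vsize):
    # Must improve on the feerate of what it evicts...
    if new_fee / new_vsize <= old_fee / old_vsize:
        return False
    # ...and must pay the evicted absolute fee *plus* enough new fee to
    # cover relaying its own bandwidth ("pay for its own relay").
    return new_fee >= old_fee + new_vsize * INCREMENTAL_RELAY_FEERATE

# A huge low-feerate (1 sat/vbyte) parent forces even a tiny replacement
# to carry a huge absolute fee:
small_bump = replacement_acceptable(90_000, 90_000, 2_000, 300)   # False
huge_bump = replacement_acceptable(90_000, 90_000, 91_000, 300)   # True
```

The asymmetry in the last two calls is the core of the pinning attacks discussed below.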

Background 3 - "The RBF Carve-Out" (you can skip this)
======================================================

In today's lightning, we have a negotiation of what we expect the future 
feerate to be when one party goes to close the
channel. All the pre-signed transactions above are constructed with this 
fee-rate in mind, and, given they are all
pre-signed, adding additional fee to them is not generally an option. This is 
obviously a very maddening prediction
game, especially when the security consequences for negotiating a value which 
is wrong may allow your counterparty to
broadcast and time out HTLCs which you otherwise have the preimage for. To 
remove this quirk, we came up with an idea a
year or two back now called "anchor outputs" (aka 

Re: [bitcoin-dev] Taproot (and graftroot) complexity

2020-02-09 Thread Matt Corallo via bitcoin-dev
Responding purely to one point as this may be sufficient to clear up
lots of discussion:

On 2/9/20 8:19 PM, Bryan Bishop via bitcoin-dev wrote:
> Is Taproot just a probability assumption about the frequency and
> likelihood of
> the signature case over the script case? Is this a good assumption?  The BIP
> only goes as far as to claim that the advantage is apparent if the outputs
> *could be spent* as an N of N, but doesn't make representations about
> how likely
> that N of N case would be in practice compared to the script paths. Perhaps
> among use cases, more than half of the ones we expect people to be doing
> could be
> spent as an N of N. But how frequently would that path get used?
> Further, while
> the *use cases* might skew toward things with N of N opt-out, we might
> end up in
> a power law case where it's the one case that doesn't use an N of N opt
> out at
> all (or at a de minimis level) that becomes very popular, thereby making
> Taproot
> more costly then beneficial.
It's not just about the frequency and likelihood, no. If there is a
clearly-provided optimization for this common case in the protocol, then
it becomes further more likely that developers put in the additional
effort required to make this possibility a reality. This has a very
significant positive impact on user privacy, especially those who wish
to utilize more advanced functionality in Bitcoin. Further, yes, it is
anticipated that taking the N of N path is possible in the vast
majority of deployed use-cases for advanced scripting systems, so ensuring
that it is maximally efficient to do so (and thereby encouraging
developers to do so) is a key goal in this work.

Matt


Re: [bitcoin-dev] Characterizing orphan transaction in the Bitcoin network

2020-02-02 Thread Matt Corallo via bitcoin-dev
The orphan pool has nontrivial denial of service properties around transaction 
validation. In general, I think the goal has been to reduce/remove it, not the 
other way around. In any case, this is likely the wrong forum for 
software-level discussion of Bitcoin Core. For that, you probably want to open 
an issue on github.com/bitcoin/bitcoin.

Matt

> On Feb 1, 2020, at 14:12, Anas via bitcoin-dev wrote:
> 
> 
> Hi all,
> 
> This paper - https://arxiv.org/pdf/1912.11541.pdf - characterizes orphan 
> transactions in the Bitcoin network and shows that increasing the size of the 
> orphan pool reduces network overhead with almost no additional performance 
> overhead. What are your thoughts?
> 
> Abstract: 
>> Orphan transactions are those whose parental income-sources are missing at 
>> the time that they are processed. These transactions are not propagated to 
>> other nodes until all of their missing parents are received, and they thus 
>> end up languishing in a local buffer until evicted or their parents are 
>> found. Although there has been little work in the literature on 
>> characterizing the nature and impact of such orphans, it is intuitive that 
>> they may affect throughput on the Bitcoin network. This work thus seeks to 
>> methodically research such effects through a measurement campaign of orphan 
>> transactions on live Bitcoin nodes. Our data show that, surprisingly, orphan 
>> transactions tend to have fewer parents on average than non-orphan 
>> transactions. Moreover, the salient features of their missing parents are a 
>> lower fee and larger size than their non-orphan counterparts, resulting in a 
>> lower transaction fee per byte. Finally, we note that the network overhead 
>> incurred by these orphan transactions can be significant, exceeding 17% when 
>> using the default orphan memory pool size (100 transactions). However, this 
>> overhead can be made negligible, without significant computational or memory 
>> demands, if the pool size is merely increased to 1000 transactions.
> 
> Regards,
> Anas


Re: [bitcoin-dev] Modern Soft Fork Activation

2020-01-14 Thread Matt Corallo via bitcoin-dev
In general, your thoughts on the theory of how consensus changes should
work I strongly agree with. However, my one significant disagreement is
how practical it is for things to *actually* work that way. While I wish
ecosystem players (both businesses and users) spent their time
interacting with the Bitcoin development community enough that they had
a deep understanding of upcoming protocol change designs, it just isn't
realistic to expect that. Thus, having an "out" to avoid activation
after a release has been cut with fork activation logic is quite a
compelling requirement.

Thus, part of the goal here is that we ensure we have that "out", and
can observe the response of the ecosystem once the change is "staring
them in the face", as it were. A BIP 9 process is here not only to offer
a compelling activation path, but *also* to allow for observation and
discussion time for any lingering minor objections prior to a BIP 8/flag
day activation.

As for a "mandatory signaling period" as a part of BIP 8, I find this
idea strange both in that it flies in the face of all recent soft fork
design work, and because it doesn't actually accomplish its stated goal.

Recent soft-fork design has all been about how to design something with
minimal ecosystem impact. Certainly in the 95% activation case I can't
say I feel strongly, but if you actually *hit* the BIP 8 flag day,
deliberately causing significant network forks for old clients has the
potential to cause real ecosystem risk. While part of the reason for a
24-month time horizon between BIP 8 decision and flag-day activation
endeavors to de-risk the chance that major players are running on
un-upgraded nodes, you cannot ignore the reality of them, both full-,
and SPV-clients.

On the other hand, in practice, we've seen that version bits are set on
the pool side, and not on the node side, meaning the goal of ensuring
miners have upgraded isn't really accomplished in practice; you just end
up forking the chain for no gain.

Matt

On 1/11/20 2:42 PM, Anthony Towns wrote:
> On Fri, Jan 10, 2020 at 09:30:09PM +, Matt Corallo via bitcoin-dev wrote:
>> 1) a standard BIP 9 deployment with a one-year time horizon for
>> activation with 95% miner readiness,
>> 2) in the case that no activation occurs within a year, a six month
>> quieting period during which the community can analyze and discuss
>> the reasons for no activation and,
>> 3) in the case that it makes sense, a simple command-line/bitcoin.conf
>> parameter which was supported since the original deployment release
>> would enable users to opt into a BIP 8 deployment with a 24-month
>> time-horizon for flag-day activation (as well as a new Bitcoin Core
>> release enabling the flag universally).
> 
> FWIW etc, but my perspective on this is that the way we want consensus
> changes in Bitcoin to work is:
> 
>  - decentralised: we want everyone to be able to participate, in
>designing/promoting/reviewing changes, without decision making
>power getting centralised amongst one group or another
> 
>  - technical: we want changes to be judged on their objective technical
>merits; politics and animal spirits and the like are fine, especially
>for working out what to prioritise, but they shouldn't be part of the
>final yes/no decision on consensus changes
> 
>  - improvements: changes might not make everyone better off, but we
>don't want changes to screw anyone over either -- pareto
>improvements in economics, "first, do no harm", etc. (if we get this
>right, there's no need to make compromises and bundle multiple
>flawed proposals so that everyone's an equal mix of happy and
>miserable)
> 
> In particular, we don't want to misalign skills and responsibilities: it's
> fine for developers to judge if a proposal has bugs or technical problems,
> but we don't want want developers to have to decide if a proposal is
> "sufficiently popular" or "economically sound" and the like, for instance.
> Likewise we don't want to have miners or pool operators have to take
> responsibility for managing the whole economy, rather than just keeping
> their systems running.
> 
> So the way I hope this will work out is:
> 
>  - investors, industry, people in general work out priorities for what's
>valuable to work on; this is an economic/policy/subjective question,
>that everyone can participate in, and everyone can act on --
>either directly if they're developers who can work on proposals and
>implementations directly, or indirectly by persuading or paying other
>people to work on whatever's important
> 
>  - developers work on proposals, designing and implementing them to make
>(some subset of) bitcoin users better off, and t

Re: [bitcoin-dev] Modern Soft Fork Activation

2020-01-14 Thread Matt Corallo via bitcoin-dev
Good thing no one is proposing a naive BIP 9 approach :). I'll note that
BIP 9 has been fairly robust (spy-mining issues notwithstanding, which
we believe are at least largely solved in the wild) in terms of safety,
though I noted extensively in the first mail that it failed in terms of
misunderstanding the activation parameters. I think the above proposal
largely solves that, and I don't see much in the way of arguing that
point from you, here.

As an aside, BIP 9 is also the Devil We Know, which carries a lot of
value, since we've found (and addressed) direct issues with it, whereas
all other activation methods we have ~0 experience with in the modern
Bitcoin network.

On 1/10/20 11:37 PM, Luke Dashjr wrote:
> I think BIP 9 is a proven failure, and flag day softforks have their own 
> problems:
> 
> A) There is no way to unambiguously say "the rules for this chain are 
> ". It leaves the chain in a kind of "quantum state" where the rules 
> could be one thing, or could be another. Until the new rules are violated, we 
> do not know if the softfork was a success or not. Because of this, people 
> will rightly shy away from relying on the new rules. This problem is made 
> worse by the fact that common node policies might not produce blocks which 
> violate the rules. If we had gone with BIP149 for Segwit, it is IMO probable 
> we would still not have a clear answer today to "Is Segwit active or not?"
> 
> B) Because of (A), there is also no clear way to intentionally reject the 
> softfork. Those who do not consent to it are effectively compelled to accept 
> it anyway. While it is usually possible to craft an opposing softfork, this 
> should IMO be well-defined and simple to do (including a plan to do so in any 
> BIP9-alike spec).
> 
> For these reasons, in 2017, I proposed revising BIP 8 with a mandatory 
> signal, 
> similar to how BIP148 worked: https://github.com/bitcoin/bips/pull/550
> However, the author of BIP 8 has since vanished, and because we had no 
> immediate softfork plans, efforts to move this forward were abandoned 
> temporarily. It seems like a good time to resume this work.
> 
> In regard to your goal #3, I would like to note that after the mandatory 
> signal period, old miners could resume mining unchanged. This means there is 
> a temporary loss of hashrate to the network, but I think it is overall better 
> than the alternatives. The temporary loss of income from invalid blocks will 
> also give affected miners a last push to upgrade, hopefully improving the 
> long run security of the network hashrate.
> 
> Luke
> 
> (P.S. As for your #1, I do think it is oversimplified in some cases, but we 
> should leave that for later discussion when it actually becomes relevant.)


Re: [bitcoin-dev] Modern Soft Fork Activation

2020-01-10 Thread Matt Corallo via bitcoin-dev
I went back and forth with a few folks on this one. I think the fact that we 
lose goals 3/4 very explicitly in order to nudge miners seems like a poor trade 
off. I’ll note that your point 2 here seems a bit disconnected to me. If you 
want to fork yourself off the network, you can do it in easier ways, and if 
miners want to maliciously censor transactions to the detriment of users, 
rejecting a version bit doesn’t really help avoid that.

Your point about upgrade warnings is well-made, but I’m dubious of its value 
over the network chaos many large forks might otherwise cause.

Matt

> On Jan 10, 2020, at 17:22, Jorge Timón  wrote:
> 
> Well, bip9 doesn't only fall apart in case of unreasonable objection,
> it also fails simply with miners' apathy.
> Anyway, your proposed plan should take care of that case too, I think.
> Overall sounds good to me.
> 
> Regarding bip8-like activation, luke-jr suggested that instead of
> simply activating on date x if failed to do so by miners' signaling, a
> consensus rule could require the blocks to signal for activation in
> the last activation window.
> I see 2 main advantages for this:
> 
> 1) Outdated nodes can implement warnings (like in bip9) and they can
> see those warnings even if it's activated in the last activation
> window. Of course this can become counterproductive if miners' squat
> signaling bits for asicboost again.
> 
> 2) It is easier for users to actively resist a given change they
> oppose. Instead of requiring signaling, their nodes can be set to
> ignore chains that activate it. This will result in a fork, but if
> different groups of users want different things, this is arguably the
> best behaviour: a "clean" split.
> 
> I assume many people won't like this, but I really think we should
> consider how users should ideally resist an unwanted change, even if
> the proponents had the best intentions in mind, there may be
> legitimate reasons to resist it that they may not have considered.
> 
>> On Fri, Jan 10, 2020 at 10:30 PM Matt Corallo via bitcoin-dev
>>  wrote:
>> 
>> There are a series of soft-fork designs which have recently been making
>> good progress towards implementation and future adoption. However, for
>> various reasons, activation methods therefor have gotten limited
>> discussion. I'd like to reopen that discussion here.
>> 
>> It is likely worth revisiting the goals both for soft forks and their
>> activation methods to start. I'm probably missing some, but some basic
>> requirements:
>> 
>> 1) Avoid activating in the face of significant, reasonable, and directed
>> objection. Period. If someone has a well-accepted, reasonable use of
>> Bitcoin that is working today, has no reason to believe it wouldn't work
>> long into the future without a change, and which would be made
>> impossible or significantly more difficult by a change, that change must
>> not happen. I certainly hope there is no objection on this point (see
>> the last point for an important caveat that I'm sure everyone will jump
>> to point out).
>> 
>> 2) Avoid activating within a timeframe which does not make high
>> node-level-adoption likely. As with all "node" arguments, I'll note that
>> I mean "economically-used" nodes, not the thousand or so spy nodes on
>> Google Cloud and AWS. Rule changes don't make sense without nodes
>> enforcing them, whether they happen to be a soft fork, hard fork, or a
>> blue fork, so activating in a reduced timeframe that doesn't allow for
>> large-scale node adoption doesn't have any value, and may cause other
>> unintended side effects.
>> 
>> 3) Don't (needlessly) lose hashpower to un-upgraded miners. As a part of
>> Bitcoin's security comes from miners, reducing the hashpower of the
>> network as a side effect of a rule change is a needless reduction in a
>> key security parameter of the network. This is why, in recent history,
>> soft forks required 95% of hashpower to indicate that they have upgraded
>> and are capable of enforcing the new rules. Further, this is why recent
>> soft forks have not included changes which would result in a standard
>> Bitcoin Core instance mining invalid-by-new-rules transactions (by relying on
>> the standardness behavior of Bitcoin Core).
>> 
>> 4) Use hashpower enforcement to de-risk the upgrade process, wherever
>> possible. As a corollary of the above, one of the primary reasons we use
>> soft forks is that hashpower-based enforcement of rules is an elegant
>> way to prevent network splits during the node upgrade process. While it
>> does not make sense to invest material value in systems protected by ne

[bitcoin-dev] Modern Soft Fork Activation

2020-01-10 Thread Matt Corallo via bitcoin-dev
There are a series of soft-fork designs which have recently been making
good progress towards implementation and future adoption. However, for
various reasons, activation methods therefor have gotten limited
discussion. I'd like to reopen that discussion here.

It is likely worth revisiting the goals both for soft forks and their
activation methods to start. I'm probably missing some, but some basic
requirements:

1) Avoid activating in the face of significant, reasonable, and directed
objection. Period. If someone has a well-accepted, reasonable use of
Bitcoin that is working today, has no reason to believe it wouldn't work
long into the future without a change, and which would be made
impossible or significantly more difficult by a change, that change must
not happen. I certainly hope there is no objection on this point (see
the last point for an important caveat that I'm sure everyone will jump
to point out).

2) Avoid activating within a timeframe which does not make high
node-level-adoption likely. As with all "node" arguments, I'll note that
I mean "economically-used" nodes, not the thousand or so spy nodes on
Google Cloud and AWS. Rule changes don't make sense without nodes
enforcing them, whether they happen to be a soft fork, hard fork, or a
blue fork, so activating in a reduced timeframe that doesn't allow for
large-scale node adoption doesn't have any value, and may cause other
unintended side effects.

3) Don't (needlessly) lose hashpower to un-upgraded miners. As a part of
Bitcoin's security comes from miners, reducing the hashpower of the
network as a side effect of a rule change is a needless reduction in a
key security parameter of the network. This is why, in recent history,
soft forks required 95% of hashpower to indicate that they have upgraded
and are capable of enforcing the new rules. Further, this is why recent
soft forks have not included changes which would result in a standard
Bitcoin Core instance mining invalid-by-new-rules transactions (by relying on
the standardness behavior of Bitcoin Core).

4) Use hashpower enforcement to de-risk the upgrade process, wherever
possible. As a corollary of the above, one of the primary reasons we use
soft forks is that hashpower-based enforcement of rules is an elegant
way to prevent network splits during the node upgrade process. While it
does not make sense to invest material value in systems protected by new
rules until a significant majority of "economic nodes" is enforcing said
rules, hashpower lets us neatly bridge the gap in time between
activation and then. By having a supermajority of miners enforce the new
rules, attempts at violating the new rules do not result in a
significant network split, disrupting existing users of the system. If
we aren't going to take advantage of this, we should do a hard fork
instead, with the necessarily slow timescale that entails.

5) Follow the will of the community, irrespective of individuals or
unreasoned objection, but without ever overruling any reasonable
objection. Recent history also includes "objection" to soft forks in the
form of "this is bad because it doesn't fix a different problem I want
fixed ASAP". I don't think anyone would argue this qualifies as a
reasonable objection to a change, and we should be in a place, as a
community (never as developers or purely one group), to ignore such
objections and make forward progress in spite of them. We don't make
good engineering decisions by "bundling" unrelated features together to
enable political football and compromise.

I think BIP 9 (plus a well-crafted softfork) pretty effectively checks
the boxes for #2-4 here, and when done carefully with lots of community
engagement and measurement, can effectively fulfill #1 as well. #5 is,
as I'm sure everyone is aware, where it starts to fall down pretty hard.

BIP 8 has been proposed as an alternative, largely in response to issues
with #5. However, a naive deployment of it, rather obviously, completely
fails #1, #3, and #4, and, in my view, fails #5 as well by both giving
an impression of, setting a precedent of, and possibly even in practice
increasing the ability of developers to decide the consensus rules of
the system. A BIP 8 deployment that more accurately measures community
support as a prerequisite could arguably fulfill #1 and #5, though I'm
unaware of any concrete proposals on how to accomplish that. Arguably, a
significantly longer activation window could also allow BIP 8 to fulfill
#3 and #4, but only by exploiting the "needlessly" and "wherever
possible" loopholes.
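For concreteness, the BIP 9 machinery both proposals build on can be sketched as a small state machine (a simplified model: real nodes evaluate transitions only once per 2016-block retarget period, using the median-time-past of the period's first block; parameter names follow BIP 9, and the BIP 8 comment reflects its originally proposed timeout behavior):

```python
from enum import Enum, auto

class State(Enum):
    DEFINED = auto()
    STARTED = auto()
    LOCKED_IN = auto()
    ACTIVE = auto()
    FAILED = auto()

def next_state(state, mtp, signal_count, start_time, timeout, threshold=1916):
    """One BIP 9 transition, evaluated once per 2016-block retarget period.

    `mtp` is the median-time-past of the period's first block and
    `signal_count` is how many blocks in the just-finished period set the
    deployment's version bit (mainnet threshold: 1916 of 2016, i.e. 95%).
    """
    if state == State.DEFINED:
        if mtp >= timeout:
            return State.FAILED
        if mtp >= start_time:
            return State.STARTED
        return State.DEFINED
    if state == State.STARTED:
        if mtp >= timeout:
            # a BIP 8 / flag-day deployment differs exactly here:
            # instead of FAILED it would transition to LOCKED_IN
            return State.FAILED
        if signal_count >= threshold:
            return State.LOCKED_IN
        return State.STARTED
    if state == State.LOCKED_IN:
        return State.ACTIVE  # new rules enforced from the next period on
    return state             # ACTIVE and FAILED are terminal
```

The comparison in the text maps directly onto the STARTED-at-timeout branch: BIP 9 keeps the "out" of FAILED, while a BIP 8 flag day removes it.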

You may note that, from the point of view of achieving the critical
goals here, BIP 8 is only different from a flag-day activation in that,
if it takes the "happy-path" of activating before the flag day, it looks
like BIP 9, but isn't guaranteed to. It additionally has the
"nice-to-have" property that activation can occur before the flag-day in
the case of faster miner adoption, though there is a limit of how fast
is useful due 

Re: [bitcoin-dev] v3 onion services

2019-11-17 Thread Matt Corallo via bitcoin-dev
There is effort ongoing to upgrade the Bitcoin P2P protocol to support other 
address types, including onion v3. There are various posts on this ML under the 
title “addrv2”. Further review and contributions to that effort is, as always, 
welcome.

> On Nov 17, 2019, at 00:05, Mr. Lee Chiffre via bitcoin-dev 
>  wrote:
> 
> Right now bitcoin client core supports use of tor hidden service. It
> supports v2 hidden service. I am in progress of creating a new bitcoin
> node which will use v3 hidden service instead of v2. I am looking at
> bitcoin core and btcd to use. Do any of these or current node software
> support the v3 onion addresses for the node address? What about I2P
> addresses? If not what will it take to get it to support the longer
> addresses that is used by i2p and tor v3?
> 
> 
> -- 
> lee.chif...@secmail.pro
> PGP 97F0C3AE985A191DA0556BCAA82529E2025BDE35
> 


Re: [bitcoin-dev] Bech32 weakness and impact on bip-taproot addresses

2019-11-10 Thread Matt Corallo via bitcoin-dev
Seems good to me, though I'm curious if we have any (even vaguely)
immediate need for non-32/20-byte Segwit outputs? It seems to me this
can be resolved by just limiting the size of bech32 outputs and calling
it a day - adding yet another address format has very significant
ecosystem costs, and if we don't anticipate needing it for 5 years (if
at all)... let's not jump to pay that cost.

Matt

On 11/10/19 9:51 PM, Pieter Wuille via bitcoin-dev wrote:
> On Thu, Nov 7, 2019, 18:16 David A. Harding wrote:
> 
> On Thu, Nov 07, 2019 at 02:35:42PM -0800, Pieter Wuille via
> bitcoin-dev wrote:
> > In the current draft, witness v1 outputs of length other
> > than 32 remain unencumbered, which means that for now such an
> > insertion or erasure would result in an output that can be spent by
> > anyone. If that is considered unacceptable, it could be prevented by
> > for example outlawing v1 witness outputs of length 31 and 33.
> 
> Either a consensus rule or a standardness rule[1] would require anyone
> using a bech32 library supporting v1+ segwit to upgrade their library.
> Otherwise, users of old libraries will still attempt to pay v1 witness
> outputs of length 31 or 33, causing their transactions to get rejected
> by newer nodes or get stuck on older nodes.  This is basically the
> problem #15846[2] was meant to prevent.
> 
> If we're going to need everyone to upgrade their bech32 libraries
> anyway, I think it's probably best that the problem is fixed in the
> bech32 algorithm rather than at the consensus/standardness layer.
> 
> 
> Admittedly, this affecting development of consensus or standardness
> rules would feel unnatural. In addition, it also has the potential
> downside of breaking batched transactions in some settings (ask an
> exchange for a withdrawal to a invalid/nonstandard version, which they
> batch with other outputs that then get stuck because the transaction
> does not go through).
> 
> So, Ideally this is indeed solved entirely on the bech32/address
> encoding side of things. I did not initially expect the discussion here
> to go in that direction, as that could come with all problems that
> rolling out a new address scheme in the first place has. However, there
> may be a way to mostly avoid those problems for the time being, while
> also not having any impact on consensus or standardness rules.
> 
> I believe that most new witness programs we'd want to introduce anyway
> will be 32 bytes in the future, if the option exists. It's enough for a
> 256-bit hash (which has up to 128-bit collision security, and more than
> 128 bits is hard to achieve in Bitcoin anyway), or for X coordinates
> directly. Either of those, plus a small version number to indicate the
> commitment structure should be enough to encode any spendability
> condition we'd want with any achievable security level.
> 
> With that observation, I propose the following. We amend BIP173 to be
> restricted to witness programs of length 20 or 32 (but still support
> versions other than 0). This seems like it may be sufficient for several
> years, until version numbers run out. I believe that some wallet
> implementations already restrict sending to known versions only, which
> means effectively no change for them in addition to normal deployment.
> 
> In the mean time we develop a variant of bech32 with better
> insertion/erasure detecting properties, which will be used for witness
> programs of length different from 20 or 32. If we make sure that there
> are never two distinct valid checksum algorithms for the same output, I
> don't believe there is any need for a new address scheme or a different
> HRP. The latter is something I'd strongly try to avoid anyway, as it
> would mean additional cognitive load on users because of another
> visually distinct address style, plus more logistical overhead
> (coordination and keeping track of 2 HRPs per chain).
> 
> I believe improving bech32 itself is preferable over changing the way
> segwit addresses use bech32, as that can be done without making
> addresses even longer. Furthermore, the root of the issue is in bech32,
> and it is simplest to fix things there. The easiest solution is to
> simply change the constant 1 that is xor'ed into the checksum before
> encoding it to a 30-bit number. This has the advantage that a single
> checksum is never valid for both algorithms simultaneously. Another
> approach is to implicitly include the length into the checksummed data.
> 
> What do people think?
> 
> Cheers,
> 
> -- 
> Pieter
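Pieter's first suggestion above, xoring a different 30-bit constant into the checksum, can be illustrated with the reference polymod from BIP 173 (a sketch; `ALT_CONST` is an arbitrary example value, not a concrete proposal):

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7"

def bech32_polymod(values):
    # BIP 173 reference checksum polynomial
    GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        b = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((b >> i) & 1) else 0
    return chk

def hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def create_checksum(hrp, data, const):
    pm = bech32_polymod(hrp_expand(hrp) + data + [0] * 6) ^ const
    return [(pm >> 5 * (5 - i)) & 31 for i in range(6)]

def verify(hrp, values, const):
    return bech32_polymod(hrp_expand(hrp) + values) == const

OLD_CONST = 1             # BIP 173 bech32
ALT_CONST = 0x2bc830a3    # example alternative 30-bit constant

data = [0] * 20           # placeholder 20-value payload
old = data + create_checksum("bc", data, OLD_CONST)
new = data + create_checksum("bc", data, ALT_CONST)

# polymod() of a given string is a single number, so it can never equal
# two different constants: the two algorithms accept disjoint sets
assert verify("bc", old, OLD_CONST) and not verify("bc", old, ALT_CONST)
assert verify("bc", new, ALT_CONST) and not verify("bc", new, OLD_CONST)
```

This is why a single checksum can never validate under both algorithms at once, which is the property Pieter relies on to avoid a new HRP.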
> 
> 


Re: [bitcoin-dev] Bech32 weakness and impact on bip-taproot addresses

2019-11-07 Thread Matt Corallo via bitcoin-dev
Given the issue is in the address format, not the consensus/standardness layer, 
it does seem somewhat strange to jump to addressing it with a 
consensus/standardness fix. Maybe the ship has sailed, but for the sake of 
considering all our options, we could also redefine bech32 to not allow such 
addresses.

Matt

>> On Nov 7, 2019, at 17:47, Greg Sanders via bitcoin-dev 
>>  wrote:
> 
> Could the softer touch of just making them non-standard apply as a future 
> preparation for an accepted softfork? Relaxations could easily be done later 
> if desired.
> 
>>> On Thu, Nov 7, 2019, 5:37 PM Pieter Wuille via bitcoin-dev 
>>>  wrote:
>> Hello all,
>> 
>> A while ago it was discovered that bech32 has a mutation weakness (see
>> https://github.com/sipa/bech32/issues/51 for details). Specifically,
>> when a bech32 string ends with a "p", inserting or erasing "q"s right
>> before that "p" does not invalidate it. While insertion/erasure
>> robustness was not an explicit goal (BCH codes in general only have
>> guarantees about substitution errors), this is very much not by
>> design, and this specific issue could have been made much less
>> impactful with a slightly different approach. I'm sorry it wasn't
>> caught earlier.
>> 
>> This has little effect on the security of P2WPKH/P2WSH addresses, as
>> those are only valid (per BIP173) for specific lengths (42 and 62
>> characters respectively). Inserting 20 consecutive "p"s in a typo
>> seems highly improbable.
>> 
>> I'm making this post because this property may unfortunately influence
>> design decisions around bip-taproot, as was brought up in the review
>> session 
>> (https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-October/017427.html)
>> past tuesday. In the current draft, witness v1 outputs of length other
>> than 32 remain unencumbered, which means that for now such an
>> insertion or erasure would result in an output that can be spent by
>> anyone. If that is considered unacceptable, it could be prevented by
>> for example outlawing v1 witness outputs of length 31 and 33.
>> 
>> Thoughts?
>> 
>> Cheers,
>> 
>> -- 
>> Pieter
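The mechanism behind the weakness Pieter describes: a checksum-valid bech32 string ending in "p" necessarily has internal checksum state zero just before that final character, and "q" encodes the value 0, which leaves a zero state unchanged. A sketch using the BIP 173 reference polymod (the constructed string is a demonstration, not a real address):

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7"

def bech32_polymod(values):
    GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        b = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((b >> i) & 1) else 0
    return chk

def hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def verify(hrp, values):
    return bech32_polymod(hrp_expand(hrp) + values) == 1

# Build a checksum-valid string whose last character is 'p' (value 1):
# append six characters that drive the checksum state to zero, then 'p'.
data = [0] * 8  # arbitrary payload
pm = bech32_polymod(hrp_expand("bc") + data + [0] * 6)
zero_suffix = [(pm >> 5 * (5 - i)) & 31 for i in range(6)]
valid = data + zero_suffix + [CHARSET.index("p")]
assert verify("bc", valid)

# The state just before the final 'p' is zero, and 'q' encodes the value 0,
# which maps a zero state to zero -- so inserting 'q's changes nothing.
q = CHARSET.index("q")
for inserted in range(1, 8):
    assert verify("bc", valid[:-1] + [q] * inserted + [valid[-1]])

# inserting any other character there does invalidate the string
assert not verify("bc", valid[:-1] + [CHARSET.index("z")] + [valid[-1]])
```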


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-25 Thread Matt Corallo via bitcoin-dev
I don’t see how? Let’s imagine Party A has two spendable outputs: now they 
stuff the package size on one of their spendable outputs until it is right at 
the limit, add one more on their other output (to meet the Carve-Out), and now 
Party B can’t do anything.
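This pinning scenario can be sketched with a toy model of the policy (the constants follow bitcoind's defaults and PR #15681; `accept_child` illustrates the shape of the rule, not the actual mempool code in CalculateMemPoolAncestors):

```python
DESC_LIMIT = 25           # bitcoind's default descendant-count limit
CARVE_OUT_VSIZE = 10_000  # EXTRA_DESCENDANT_TX_SIZE_LIMIT, in vbytes

def accept_child(commitment_desc_count, unconfirmed_parents, vsize):
    """Would a new child of the commitment transaction be accepted?

    `commitment_desc_count` is how many in-mempool descendants the
    commitment tx already has.
    """
    if commitment_desc_count + 1 <= DESC_LIMIT:
        return True
    # #15681 carve-out: a small tx with exactly one unconfirmed parent
    # may exceed the descendant limit by one
    if unconfirmed_parents == 1 and vsize <= CARVE_OUT_VSIZE:
        return commitment_desc_count + 1 <= DESC_LIMIT + 1
    return False

# Party A stuffs one of their outputs until the package is at the limit...
assert accept_child(24, 1, 400)        # 25th descendant: plain accept
# ...then claims the carve-out slot with a child on their *other* output
assert accept_child(25, 1, 400)        # 26th: admitted via the carve-out
# Party B's spend of their own output now has nowhere to go
assert not accept_child(26, 1, 400)
```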

> On Oct 24, 2019, at 21:05, Johan Torås Halseth  wrote:
> 
> 
> It essentially changes the rule to always allow CPFP-ing the commitment as 
> long as there is an output available without any descendants. It changes the 
> commitment from "you always need at least, and exactly, one non-CSV output 
> per party." to "you always need at least one non-CSV output per party."
> 
> I realize these limits are there for a reason though, but I'm wondering if 
> could relax them. Also now that jeremyrubin has expressed problems with the 
> current mempool limits.
> 
>> On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo  
>> wrote:
>> I may be missing something, but I'm not sure how this changes anything?
>> 
>> If you have a commitment transaction, you always need at least, and
>> exactly, one non-CSV output per party. The fact that there is a size
>> limitation on the transaction that spends for carve-out purposes only
>> affects how many other inputs/outputs you can add, but somehow I doubt
>> it's ever going to be a large enough number to matter.
>> 
>> Matt
>> 
>> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
>> > Reviving this old thread now that the recently released RC for bitcoind
>> > 0.19 includes the above mentioned carve-out rule.
>> > 
>> > In an attempt to pave the way for more robust CPFP of on-chain contracts
>> > (Lightning commitment transactions), the carve-out rule was added in
>> > https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
>> > an implementation of a new commitment format for utilizing the Bring
>> > Your Own Fees strategy using CPFP, I’m wondering if the special case
>> > rule should have been relaxed a bit, to avoid the need for adding a 1
>> > CSV to all outputs (in case of Lightning this means HTLC scripts would
>> > need to be changed to add the CSV delay).
>> > 
>> > Instead, what about letting the rule be
>> > 
>> > The last transaction which is added to a package of dependent
>> > transactions in the mempool must:
>> >   * Have no more than one unconfirmed parent.
>> > 
>> > This would of course allow adding a large transaction to each output of
>> > the unconfirmed parent, which in effect would allow an attacker to
>> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
>> > this a problem with the current mempool acceptance code in bitcoind? I
>> > would imagine evicting transactions based on feerate when the max
>> > mempool size is met handles this, but I’m asking since it seems like
>> > there has been several changes to the acceptance code and eviction
>> > policy since the limit was first introduced.
>> > 
>> > - Johan
>> > 
>> > 
>> > On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell wrote:
>> > 
>> > Matt Corallo writes:
>> > >>> Thus, even if you imagine a steady-state mempool growth, unless the
>> > >>> "near the top of the mempool" criteria is "near the top of the next
>> > >>> block" (which is obviously *not* incentive-compatible)
>> > >>
>> > >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
>> > >> block, and assumed you'd only allow RBF if the old package wasn't
>> > in the
>> > >> top and the replacement would be.  That seems incentive
>> > compatible; more
>> > >> than the current scheme?
>> > >
>> > > My point was, because of block time variance, even that criteria
>> > doesn't hold up. If you assume a steady flow of new transactions and
>> > one or two blocks come in "late", suddenly "top 4MWeight" isn't
>> > likely to get confirmed until a few blocks come in "early". Given
>> > block variance within a 12 block window, this is a relatively likely
>> > scenario.
>> > 
>> > [ Digging through old mail. ]
>> > 
>> > Doesn't really matter.  Lightning close algorithm would be:
>> > 
>> > 1.  Give bitcoind unilateral close.
>> > 2.  Ask bitcoind what current expedited fee is (or survey your 
>> > mempool).
>> > 3.  Give bitcoind child "push" tx at that total feerate.
>> > 4.  If next block doesn't contain unilateral close tx, goto 2.
>> > 
>> > In this case, if you allow a simplified RBF where 'you can replace if
>> > 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3.
>> > old tx isnt',
>> > it works.
>> > 
>> > It allows someone 100k of free tx spam, sure.  But it's simple.
>> > 
>> > We could further restrict it by marking the unilateral close somehow to
>> > say "gonna be pushed" and further limiting the child tx weight (say,
>> > 5kSipa?) in that case.
>> > 
>> > Cheers,
>> > Rusty.
>> > 

Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-24 Thread Matt Corallo via bitcoin-dev
I may be missing something, but I'm not sure how this changes anything?

If you have a commitment transaction, you always need at least, and
exactly, one non-CSV output per party. The fact that there is a size
limitation on the transaction that spends for carve-out purposes only
affects how many other inputs/outputs you can add, but somehow I doubt
it's ever going to be a large enough number to matter.

Matt

On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
> Reviving this old thread now that the recently released RC for bitcoind
> 0.19 includes the above mentioned carve-out rule.
> 
> In an attempt to pave the way for more robust CPFP of on-chain contracts
> (Lightning commitment transactions), the carve-out rule was added in
> https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
> an implementation of a new commitment format for utilizing the Bring
> Your Own Fees strategy using CPFP, I’m wondering if the special case
> rule should have been relaxed a bit, to avoid the need for adding a 1
> CSV to all outputs (in case of Lightning this means HTLC scripts would
> need to be changed to add the CSV delay).
> 
> Instead, what about letting the rule be
> 
> The last transaction which is added to a package of dependent
> transactions in the mempool must:
>   * Have no more than one unconfirmed parent.
> 
> This would of course allow adding a large transaction to each output of
> the unconfirmed parent, which in effect would allow an attacker to
> exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
> this a problem with the current mempool acceptance code in bitcoind? I
> would imagine evicting transactions based on feerate when the max
> mempool size is met handles this, but I’m asking since it seems like
> there has been several changes to the acceptance code and eviction
> policy since the limit was first introduced.
> 
> - Johan
> 
> 
> On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell wrote:
> 
> Matt Corallo writes:
> >>> Thus, even if you imagine a steady-state mempool growth, unless the
> >>> "near the top of the mempool" criteria is "near the top of the next
> >>> block" (which is obviously *not* incentive-compatible)
> >>
> >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
> >> block, and assumed you'd only allow RBF if the old package wasn't
> in the
> >> top and the replacement would be.  That seems incentive
> compatible; more
> >> than the current scheme?
> >
> > My point was, because of block time variance, even that criteria
> doesn't hold up. If you assume a steady flow of new transactions and
> one or two blocks come in "late", suddenly "top 4MWeight" isn't
> likely to get confirmed until a few blocks come in "early". Given
> block variance within a 12 block window, this is a relatively likely
> scenario.
> 
> [ Digging through old mail. ]
> 
> Doesn't really matter.  Lightning close algorithm would be:
> 
> 1.  Give bitcoind unilateral close.
> 2.  Ask bitcoind what current expedited fee is (or survey your mempool).
> 3.  Give bitcoind child "push" tx at that total feerate.
> 4.  If next block doesn't contain unilateral close tx, goto 2.
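
The loop in steps 1-4 above can be sketched against a stubbed bitcoind-like interface (every name here is hypothetical, bitcoind exposes no such calls; the point is only the control flow):

```python
# Sketch of Rusty's close loop (steps 1-4) against a stubbed node.
# StubNode and its methods are hypothetical stand-ins, not real RPCs.

class StubNode:
    """Fake node: the close tx confirms once the CPFP child bids enough."""
    def __init__(self, confirm_at_feerate):
        self.confirm_at = confirm_at_feerate
        self.feerate = 1
        self.child_feerate = 0

    def broadcast(self, tx):
        self.child_feerate = tx.get("feerate", 0)

    def estimate_next_block_feerate(self):
        self.feerate += 1          # fees keep rising while we wait
        return self.feerate

    def next_block_contains_close(self):
        return self.child_feerate >= self.confirm_at

def close_channel(node, close_tx):
    node.broadcast(close_tx)                       # 1. broadcast unilateral close
    bids = 0
    while True:
        rate = node.estimate_next_block_feerate()  # 2. survey expedited feerate
        node.broadcast({"feerate": rate})          # 3. (re)bid via child "push" tx
        bids += 1
        if node.next_block_contains_close():       # 4. not confirmed? goto 2
            return bids

assert close_channel(StubNode(confirm_at_feerate=4), {"txid": "close"}) == 3
```

Each pass through the loop is a replacement of the child "push" transaction, which is exactly where the simplified RBF rule discussed next comes in.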
> 
> In this case, if you allow a simplified RBF where 'you can replace if
> 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3.
> old tx isn't',
> it works.
> 
> It allows someone 100k of free tx spam, sure.  But it's simple.
> 
> We could further restrict it by marking the unilateral close somehow to
> say "gonna be pushed" and further limiting the child tx weight (say,
> 5kSipa?) in that case.
> 
> Cheers,
> Rusty.
> ___
> Lightning-dev mailing list
> lightning-...@lists.linuxfoundation.org
> 
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
> 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Is Signet Bitcoin?

2019-10-14 Thread Matt Corallo via bitcoin-dev
Indeed, Signet is no less (or more) Bitcoin than a seed format or BIP 32. It’s 
“not Bitcoin” but it’s certainly “interoperability for how to build good 
testing for Bitcoin”.

> On Oct 14, 2019, at 19:55, Karl-Johan Alm via bitcoin-dev 
>  wrote:
> 
> Hello,
> 
> The pull request to the bips repository for Signet has stalled, as the
> maintainer isn't sure Signet should have a BIP at all, i.e. "is Signet
> Bitcoin?".
> 
> My argument is that Signet is indeed Bitcoin and should have a BIP, as
> this facilitates the interoperability between different software in
> the Bitcoin space.
> 
> Feedback welcome, here or on the pull request itself:
> https://github.com/bitcoin/bips/pull/803
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-08-14 Thread Matt Corallo via bitcoin-dev
You very clearly didn't bother to read other mails in this thread. To make it 
easy for you, here's a few links:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017147.html
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017175.html

Matt

> On Aug 13, 2019, at 23:05, Will Madden  wrote:
> 
> For the record, strong NACK. My understanding is that this breaks several 
> established SPV implementations (such as early breadwallet for sure and 
> possibly current BRD wallets) and I have yet to see quantitative 
> prioritization or even a rational justification for this change.
> 
> Requiring SPV wallets to communicate with trusted nodes is centralization, 
> and breaking functionality and implementations that enable this without a 
> thoroughly researched rationale is highly suspect.
> 
>> On Jul 20, 2019, at 1:46 PM, Matt Corallo via bitcoin-dev 
>>  wrote:
>> 
>> Just a quick heads-up for those watching the list who may be using it -
>> in the next Bitcoin Core release bloom filter serving will be turned off
>> by default. This has been a long time coming, it's been an option for
>> many releases and has been a well-known DoS vector for some time.
>> As other DoS vectors have slowly been closed, this has become
>> increasingly an obvious low-hanging fruit. Those who are using it should
>> already have long been filtering for NODE_BLOOM-signaling nodes, and I
>> don't anticipate those being gone any time particularly soon.
>> 
>> See-also PR at https://github.com/bitcoin/bitcoin/pull/16152
>> 
>> The release notes will likely read:
>> 
>> P2P Changes
>> ---
>> - The default value for the -peerbloomfilters configuration option (and,
>> thus, NODE_BLOOM support) has been changed to false.
>> This resolves well-known DoS vectors in Bitcoin Core, especially for
>> nodes with spinning disks. It is not anticipated that
>> this will result in a significant lack of availability of
>> NODE_BLOOM-enabled nodes in the coming years, however, clients
>> which rely on the availability of NODE_BLOOM-supporting nodes on the
>> P2P network should consider the process of migrating
>> to a more modern (and less trustful and privacy-violating) alternative
>> over the coming years.
>> 
>> Matt
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-07-27 Thread Matt Corallo via bitcoin-dev
This conversation went off the rails somewhat. I don't think there's any 
immediate risk of NODE_BLOOM peers being unavailable. This is a defaults 
change, not a removal of the code to serve BIP 37 peers (nor would I suggest 
removing said code while people still want to use them - the maintenance burden 
isn't much). Looking at historical upgrade cycles, ignoring any other factors, 
there will be a large number of nodes serving NODE_BLOOM for many years.

Even more importantly, if you need them, run a node or two. As long as no one 
is exploiting the issues with them such a node isn't *too* expensive. Or don't, 
I guarantee you Chainalysis or some competitor of theirs will very, very 
happily serve bloom-filtered clients as long as such clients want to 
deanonymize themselves. We already see that a plurality of nodes on the network are 
clearly not run-of-the-mill Core nodes, many of which are likely 
deanonymization efforts.

In some cases BIP 157 is a replacement, in some cases, indeed, it is not. I 
agree at a protocol level we shouldn't be passing judgement about how users 
wish to interact with the Bitcoin system (aside from not putting our own, 
personal, effort into building such things) but that isn't what's happening 
here. This is an important DoS fix for the average node, and I don't really 
understand the argument that this is going to break existing BIP 37 wallets, 
but if it makes you feel any better I can run some beefy BIP 37 nodes.

Matt

> On Jul 26, 2019, at 06:04, Jonas Schnelli via bitcoin-dev 
>  wrote:
> 
> 
>> 1) It causes way too much traffic for mobile users, and likely even too
>> much traffic for fixed lines in not so developed parts of the world.
> 
> Yes. It causes more traffic than BIP37.
> Basic block filters for current last ~7 days (1008 blocks) are about 19MB 
> (just the filters).
> On top, you will probably fetch a handful of irrelevant blocks due to the FPs 
> and due to true relevant txns.
> A rough rule-of-thumb estimate: ~25MB per week of catch-up.
> If you were offline for a month: ~108MB
> 
> That's certainly more than BIP37 BF (measured 1.6MB total traffic with android 
> schildbach wallet restore blockchain for 8 week [7 weeks headers, 1week 
> merkleblocks]).
> 
> But lets look at it like this: for an additional, say 25MB per week (maybe a 
> bit more), you get the ability to filter blocks without depending on serving 
> peers who may compromise your financial privacy.
> Also, if you keep the filters, further rescans do consume the same or less 
> bandwidth than BF BIP37.
> In other words: you have the chance to potentially increase privacy by 
> consuming bandwidth in the range of a single audio podcast per week.
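
A quick back-of-the-envelope check of the figures quoted above (all inputs are the thread's own estimates, not fresh measurements):

```python
# Sanity-checking the BIP 158 bandwidth estimates quoted in this thread.
filters_per_week_mb = 19      # basic block filters for ~1008 blocks (1 week)
catchup_per_week_mb = 25      # filters plus a handful of FP/relevant blocks
weeks_per_month = 4.33
month_offline_mb = catchup_per_week_mb * weeks_per_month
assert round(month_offline_mb) == 108      # matches the ~108MB figure

# Compared against the measured BIP 37 restore: 1.6 MB over 8 weeks.
bip37_per_week_mb = 1.6 / 8
assert catchup_per_week_mb / bip37_per_week_mb == 125   # ~125x more traffic
```

So the privacy gain costs roughly two orders of magnitude more bandwidth than BIP 37, which is the trade-off being argued over here.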
> 
> I would say the job of protocol developers is protect users privacy where 
> it’s possible (as a default).
> It’s probably a debatable point whether 25MB per week of traffic is worth a 
> potential increase in privacy, though I absolutely think 25MB/week is an 
> acceptable tradeoff.
> Saving traffic is possible by using BIP37 or stratum/electrum… but developers 
> should make sure users are __warned about the consequences__!
> 
> Additionally, it looks like peer operators are not endlessly willing to 
> serve – for free – a CPU/disk-intense service with no benefits for the 
> network. I would question whether a decentralised form of BIP37 is sustainable 
> in the long run (and if SPV wallet providers bootstrap a range of their own 
> NODE_BLOOM peers to make it more reliable, calling it decentralised would be snake oil).
> 
> 
>> 
>> 2) It filters blocks only. It doesn't address unconfirmed transactions.
> 
> Well, unconfirmed transactions are uncertain for various reasons.
> 
> BIP158 won't allow you to filter the mempool.
> But as soon as you are connected to the network, you may fetch tx with 
> inv/getdata and pick out the relevant ones (causes also traffic).
> Unclear and probably impossible with the current BIP158 specs to fetch 
> transactions that are not in active relay and are not in a block (mempool 
> txns, at least this is true with the current observed relay tactics).
> 
> 
>> 3) Afaik, it enforces/encourages address re-use. This stems from the
>> fact that the server decides on the filter and in particular on the
>> false positive rate. On wallets with many addresses, a hardcoded filter
>> will be too blurry and thus each block will be matched. So wallets that
>> follow the "one address per incoming payment" pattern (e.g. HD wallets)
>> at some point will be forced to wrap their key chains back to the
>> beginning. If I'm wrong on this one please let me know.
> 
> I’m probably the wrong guy to ask (haven’t made the numbers) but last time I 
> rescanned a Core wallet (in my dev branch) with block filters (and a Core 
> wallet has >2000 addresses by default) it fetched a low and acceptable amount 
> of false positive blocks.
> (Maybe someone who made the numbers step in here.)
> 
> Though, large wallets – AFAIK – also operate badly with BIP37.
> 
>> 
>> 4) 

Re: [bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-07-22 Thread Matt Corallo via bitcoin-dev
Hey Andreas,

I think maybe some of the comments here were misunderstood - I don't
anticipate that most people will change their defaults, indeed, but
given the general upgrade cycles we've seen on the network over the
entire course of Bitcoin's history, there's little reason to believe
that many nodes with NODE_BLOOM publicly accessible will be around for
at least three or four years to come, though obviously any conscious
effort by folks who need those services to run nodes could extend that
significantly.

As for the DoS issues, a super old Proof-of-Concept of the I/O variant
is here: https://github.com/petertodd/bloom-io-attack though CPU DoS
attacks are also possible that use high hash counts to fill a node's CPU
usage (you can pretty trivially see when a bloom-based peer connects to
you just by looking at top...).

Finally, regarding alternatives, the filter-generation code for BIP
157/158 has been in Bitcoin Core for some time, though the P2P serving
side of things appears to have lost any champions working on it. I
presume one of the Lightning folks will eventually, given they appear to
be requiring their users connect to a handful of their own servers right
now, but if you really need it, its likely not a ton of work to pipe
them through.

Matt

On 7/21/19 10:56 PM, Andreas Schildbach via bitcoin-dev wrote:
> An estimated 10+ million wallets depend on that NODE_BLOOM to be
> updated. So far, I haven't heard of an alternative, except reading all
> transactions and full blocks.
> 
> It goes without saying pulling the rug under that many wallets is a
> disastrous idea for the adoption of Bitcoin.
> 
>> well-known DoS vectors
> 
> I asked many people, even some "core developers" at meetings, but nobody
> ever was able to explain the DoS vector. I think this is just a myth.
> 
> Yes, you can set an overly blurry filter and thus cause useless traffic,
> but it never exceeds just drinking from the full firehose (which this
> change doesn't prohibit). So where is the point? An attacker will just
> switch filtering off, or in fact has never used it.
> 
>> It is not anticipated that
>> this will result in a significant lack of availability of
>> NODE_BLOOM-enabled nodes in the coming years
> 
> Why don't you anticipate that? People almost never change defaults,
> especially if it's not for their own immediate benefit. At the same
> time, release notes in general recommend updating to the latest version.
> I *do* anticipate this will reduce the number of nodes usable by a large
> enough amount so that the feature will become unstable.
> 
>> clients
>> which rely on the availability of NODE_BLOOM-supporting nodes on the
>> P2P network should consider the process of migrating
>> to a more modern (and less trustful and privacy-violating) alternative
>> over the coming years.
> 
> There is no such alternative.
> 
> I strongly recommend postponing this change until an alternative exists
> and then give developers enough time to implement, test and roll out.
> 
> I also think as long as we don't have an alternative, we should improve
> the current filtering for segwit. E.g. testing the scripts themselves
> and each scriptPubKey spent by any input against the filter would do,
> and it also fixes the main privacy issue with server-side filtering
> (wallets have to add two items per address to the filter).
> 
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-07-21 Thread Matt Corallo via bitcoin-dev
Just a quick heads-up for those watching the list who may be using it -
in the next Bitcoin Core release bloom filter serving will be turned off
by default. This has been a long time coming, it's been an option for
many releases and has been a well-known DoS vector for some time.
As other DoS vectors have slowly been closed, this has become
increasingly an obvious low-hanging fruit. Those who are using it should
already have long been filtering for NODE_BLOOM-signaling nodes, and I
don't anticipate those being gone any time particularly soon.

See-also PR at https://github.com/bitcoin/bitcoin/pull/16152

The release notes will likely read:

P2P Changes
---
- The default value for the -peerbloomfilters configuration option (and,
thus, NODE_BLOOM support) has been changed to false.
  This resolves well-known DoS vectors in Bitcoin Core, especially for
nodes with spinning disks. It is not anticipated that
  this will result in a significant lack of availability of
NODE_BLOOM-enabled nodes in the coming years, however, clients
  which rely on the availability of NODE_BLOOM-supporting nodes on the
P2P network should consider the process of migrating
  to a more modern (and less trustful and privacy-violating) alternative
over the coming years.
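
For operators who want to keep serving existing BIP 37 clients after upgrading, opting back in is one line in bitcoin.conf (option named in the release note above):

```
# bitcoin.conf: opt back in to serving bloom-filtered (NODE_BLOOM) peers
peerbloomfilters=1
```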

Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [PROPOSAL] Emergency RBF (BIP 125)

2019-06-03 Thread Matt Corallo via bitcoin-dev
I think this needs significantly improved motivation/description. A few areas 
I'd like to see calculated out:

1) wrt rule 3, for this to be 
obviously-incentive-compatible-for-the-next-miner, I'd think no evicted 
transactions would be allowed to be in the next block range. This would 
probably require some significant additional tracking in today's mempool logic.

2) wrt rule 4, I'd like to see a calculation of worst-case free relay. I think 
we're already not in a great place, but maybe it's worth it or maybe there is 
some other way to reduce this cost (intuitively it looks like this proposal 
could make things very, very, very bad).

3) wrt rule 5, I'd like to see benchmarks, it's probably a pretty nasty DoS 
attack, but it may also be the case that is (a) not worse than other 
fundamental issues or (b) sufficiently expensive.

4) As I've indicated before, I'm generaly not a fan of such vague protections 
for time-critical transactions such as payment channel punishment transactions. 
At a high-level, in this context your counterparty's transactions (not to 
mention every other transaction in everyone's mempool) are still involved in 
the decision about whether to accept an RBF, in contrast to previous proposals, 
which makes it much harder to reason about. As a specific example, if an 
attacker exploits mempool policy differences they may cause your concept of 
"top 4M weight" to be bogus for a subset of nodes, causing propagation to be 
limited.

Obviously there is also a ton more client-side knowledge required and 
complexity to RBF decisions here than other previous, more narrowly-targeted 
proposals.

(I don't think this one use-case being not optimal should prevent such a 
proposal; I agree it's quite nice for some other cases).

Matt

> On Jun 2, 2019, at 06:41, Rusty Russell  wrote:
> 
> Hi all,
> 
>   I want to propose a modification to rules 3, 4 and 5 of BIP 125:
> 
> To remind you of BIP 125:
> 3. The replacement transaction pays an absolute fee of at least the sum
>   paid by the original transactions.
> 
> 4. The replacement transaction must also pay for its own bandwidth at
>   or above the rate set by the node's minimum relay fee setting.
> 
> 5. The number of original transactions to be replaced and their
>   descendant transactions which will be evicted from the mempool must not
>   exceed a total of 100 transactions.
> 
> The new "emergency RBF" rule:
> 
> 6. If the original transaction was not in the first 4,000,000 weight
>   units of the fee-ordered mempool and the replacement transaction is,
>   rules 3, 4 and 5 do not apply.
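
Rule 6 could be sketched as follows; the flat mempool list and the greedy fee-ordered fill are a toy approximation (real package/ancestor scoring is more involved), and all field names are illustrative:

```python
# Toy model of proposed rule 6: waive BIP 125 rules 3-5 iff the original
# tx is outside the first 4M weight units of the fee-ordered mempool and
# the replacement would be inside it.

NEXT_BLOCK_WEIGHT = 4_000_000

def in_next_block(tx, mempool):
    """Is `tx` inside the first 4M weight units of the fee-ordered mempool?"""
    ordered = sorted(mempool + [tx], key=lambda t: t["feerate"], reverse=True)
    used = 0
    for entry in ordered:
        used += entry["weight"]
        if entry is tx:
            return used <= NEXT_BLOCK_WEIGHT
    return False

def emergency_rbf_applies(original, replacement, mempool):
    return (not in_next_block(original, mempool)) and in_next_block(replacement, mempool)

pool = [{"feerate": 50, "weight": 4_000_000}]   # next block already full
orig = {"feerate": 1, "weight": 200_000}        # stuck below the cut-off
repl = {"feerate": 100, "weight": 50_000}       # outbids the whole pool
assert emergency_rbf_applies(orig, repl, pool)
```

Note this is where Matt's objection elsewhere in the thread bites: "in_next_block" depends on each node's local mempool, so nodes with differing policy can disagree on whether the waiver applies.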
> 
> This means:
> 
> 1. RBF can be used in adversarial conditions, such as lightning
>  unilateral closes where the adversary has another valid transaction
>  and can use it to block yours.  This is a problem when we allow
>  differential fees between the two current lightning transactions
>  (aka "Bring Your Own Fees").
> 
> 2. RBF can be used without knowing about miner's mempools, or that the
>  above problem is occurring.  One simply gets close to the required
>  maximum height for lightning timeout, and bids to get into the next
>  block.
> 
> 3. This proposal does not open any significant new ability to RBF spam,
>  since it can (usually) only be used once.  IIUC bitcoind won't
>  accept more than 100 descendants of an unconfirmed tx anyway.
> 
> 4. This proposal makes RBF miner-incentive compatible.  Currently the
>  protocol tells miners they shouldn't accept the highest bidding tx
>  for the good of the network.  This conflict is particularly sharp
>  in the case where the replacement tx would be immediately minable,
>  which this proposal addresses.
> 
> Unfortunately I haven't found time to code this up in bitcoin, but if
> there's positive response I can try.
> 
> Thanks for reading!
> Rusty.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Congestion Control via OP_CHECKOUTPUTSHASHVERIFY proposal

2019-05-22 Thread Matt Corallo via bitcoin-dev
If we're going to do covenants (and I think we should), then I think we
need to have a flexible solution that provides more features than just
this, or we risk adding it only to go through all the effort again when
people ask for a better solution.

Matt

On 5/20/19 8:58 PM, Jeremy via bitcoin-dev wrote:
> Hello bitcoin-devs,
> 
> Below is a link to a BIP Draft for a new opcode,
> OP_CHECKOUTPUTSHASHVERIFY. This opcode enables an easy-to-use trustless
> congestion control techniques via a rudimentary, limited form of
> covenant which does not bear the same technical and social risks of
> prior covenant designs.
> 
> Congestion control allows Bitcoin users to confirm payments to many
> users in a single transaction without creating the UTXO on-chain until a
> later time. This therefore improves the throughput of confirmed
> payments, at the expense of latency on spendability and increased
> average block space utilization. The BIP covers this use case in detail,
> and a few other use cases lightly.
> 
> The BIP draft is here:
> https://github.com/JeremyRubin/bips/blob/op-checkoutputshashverify/bip-coshv.mediawiki
> 
> The BIP proposes to deploy the change simultaneously with Taproot as an
> OPSUCCESS, but it could be deployed separately if needed.
> 
> An initial reference implementation of the consensus changes and  tests
> which demonstrate how to use it for basic congestion control is
> available at
> https://github.com/JeremyRubin/bitcoin/tree/congestion-control.  The
> changes are about 74 lines of code on top of sipa's Taproot reference
> implementation.
> 
> Best regards,
> 
> Jeremy Rubin
> 
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-12 Thread Matt Corallo via bitcoin-dev
Note that even your carve-outs for OP_NOP are not sufficient here - if you were 
using nSequence to tag different pre-signed transactions into categories 
(roughly as you suggest people may want to do with extra sighash bits) then 
their transactions could very easily have become unspendable in practice. 
The whole point of soft forks is that we invalidate otherwise-unused bits of 
the protocol. This does not seem inconsistent with the proposal here.

> On Mar 9, 2019, at 13:29, Russell O'Connor  wrote:
> Bitcoin has *never* made a soft-fork, since the time of Satoshi, that 
> invalidated transactions that send secured inputs to secured outputs 
> (excluding uses of OP_NOP1-OP_NOP10).

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-11 Thread Matt Corallo via bitcoin-dev
I think you may have misunderstood part of the motivation. Yes, part of the 
motivation *is* to remove OP_CODESEPARATOR wholesale, greatly simplifying the 
theoretical operation of checksig operations (thus somewhat simplifying the 
implementation but also simplifying analysis of future changes, such as 
sighash-caching code).

I think a key part of the analysis here is that no one I've spoken to (and 
we've been discussing removing it for *years*, including many attempts at 
coming up with reasons to keep it) is aware of any real proposals to use 
OP_CODESEPARATOR, let alone anyone using it in the wild. Hiding data in invalid 
pubic keys is a long-discussed-and-implemented idea (despite it's 
discouragement, not to mention it appears on the chain in many places).

It would end up being a huge shame to have all the OP_CODESEPARATOR mess left 
around after all the effort that has gone into removing it for the past few 
years, especially given the stark difference in visibility of a fork when 
compared to a standardness change.

As for your specific proposal of increasing the weight of anything that has an 
OP_CODESEPARATOR in it by the cost of an additional (simple) input, this 
doesn't really solve the issue. After all, if we're assuming some user exists 
who has been sending money, unspent, to scripts with OP_CODESEPARATOR to 
force signatures to commit to whether some other signature was present, and who 
won't see an (invariably media-covered) pending soft-fork in time to claim their 
funds, we should also assume such a user has pre-signed transactions which are 
time-locked and claim a number of inputs and have several paths in the script 
which contain OP_CODESEPARATOR, rendering their transactions invalid.

Matt

> On Mar 11, 2019, at 15:15, Russell O'Connor via bitcoin-dev 
>  wrote:
> 
> Increasing the OP_CODESEPARATOR weight by 520 (p2sh redeemScript size limit) 
> + 40 (stripped txinput size) + 8 (stripped txoutput size) + a few more 
> (overhead for varints) = 572ish bytes should be enough to completely 
> eliminate any vulnerability caused by OP_CODESEPARATOR within P2SH 
> transactions without the need to remove it ever.  I think it is worth 
> attempting to be a bit more clever than such a blunt rule, but it would be 
> much better than eliminating OP_CODESEPARATOR within P2SH entirely.
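
As a sanity check on the arithmetic behind the suggested bump (component sizes are as Russell describes them; the exact varint overhead is hand-waved in the original too):

```python
# Checking the arithmetic behind the suggested OP_CODESEPARATOR weight bump.
p2sh_redeemscript_limit = 520   # BIP 16 redeemScript size limit
stripped_txin = 40              # roughly: outpoint (36) + nSequence (4)
stripped_txout = 8              # roughly: the 8-byte value field
subtotal = p2sh_redeemscript_limit + stripped_txin + stripped_txout
assert subtotal == 568          # "+ a few more (overhead for varints)" -> ~572
```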
> 
> Remember that the goal isn't to eliminate OP_CODESEPARATOR per se; the goal 
> is to eliminate the vulnerability associated with it.
> 
>> On Mon, Mar 11, 2019 at 12:47 PM Dustin Dettmer via bitcoin-dev 
>>  wrote:
>> What about putting it in a deprecated state for some time. Adjust the 
>> transaction weight so using the op code is more expensive (10x, 20x?) and 
>> get the word out that it will be removed in the future.
>> 
>> You could even have nodes send a reject code with the message 
>> “OP_CODESEPARATOR is depcrecated.”
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Signet

2019-03-09 Thread Matt Corallo via bitcoin-dev
To make testing easier, it may make sense to keep the existing block header 
format (and PoW) and instead apply the signature rules to some field in the 
coinbase transaction. This means SPV clients (assuming they only connect to 
honest/trusted nodes) work as-is.

A previous idea regarding reorgs (that I believe Greg came up with) is to allow 
multiple keys to sign blocks, with one signing no reorgs and one signing a 
reorg every few blocks, allowing users to choose the behavior they want.


> On Mar 8, 2019, at 00:54, Karl-Johan Alm via bitcoin-dev 
>  wrote:
> 
> Hello,
> 
> As some of you already know, I've been working on a network called "signet", 
> which is bascially a complement to the already existing testnet, except it is 
> completely centralized, and blocks are signed by a specific key rather than 
> using proof of work.
> 
> Benefits of this:
> 
> 1. It is more predictable than testnet. Miners appear and disappear 
> regularly, causing irregular block generation.
> 
> 2. Since it is centrally controlled, it is easy to perform global testing, 
> such as reorgs (e.g. the network performs a 4 block reorg by request, or as 
> scheduled).
> 
> 3. It is more stable than testnet, which occasionally sees several thousand 
> block reorgs.
> 
> 4. It is trivial to spin up (and shut down) new signets to make public tests 
> where anyone can participate.
> 
> Anyone can create a signet at any time, simply by creating a key pair and 
> creating a challenge (scriptPubKey). The network can then be used globally by 
> anyone, assuming the creator sends some coins to the other participants.
> 
> Having a persistent signet would be beneficial in particular to services 
> which need a stable place to test features over an extended period of time. 
> My own company implements protocols on top of Bitcoin with sidechains. We 
> need multi-node test frameworks to behave in a predictable manner (unlike 
> testnet) and with the same standardness relay policy as mainnet.
> 
> Signets consist of 2 parameters: the challenge script (scriptPubKey) and the 
> solution length. (The latter is needed to retain fixed length block headers, 
> despite having an additional payload.)
> 
> I propose that a default persistent "signet1" is created, which can be 
> replaced in future versions e.g. if the coins are unwisely used as real 
> money, similarly to what happened to previous testnets. This signet is picked 
> by default if a user includes -signet without providing any of the parameters 
> mentioned above. The key holder would be someone sufficiently trusted in the 
> community, who would be willing to run the system (block generation code, 
> faucet, etc). It could be made a little more sturdy by using 1-of-N multisig 
> as the challenge, in case 1 <= x < N of the signers disappear. If people 
> oppose this, it can be skipped, but will mean people can't just jump onto 
> signet without first tracking down parameters from somewhere.
> 
> Implementation-wise, the code adds an std::map with block hash to block 
> signature. This is serialized/deserialized as appropriate (Segwit witness 
> style), which means block headers in p2p messages are (80 + solution_length) 
> bytes. Block header non-contextual check goes from checking if block header 
> hash < target to checking if the payload is a valid signature for the block 
> header hash instead.
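
A minimal sketch of the described wire layout (illustrative only; the real serialization lives in the linked branch):

```python
# Sketch of the signet header layout described above: a standard 80-byte
# header followed by a fixed-length signature payload, giving
# (80 + solution_length)-byte headers on the wire.

def serialize_signet_header(header80: bytes, solution: bytes,
                            solution_length: int) -> bytes:
    assert len(header80) == 80
    # A fixed per-network solution_length keeps headers fixed-size.
    assert len(solution) == solution_length
    return header80 + solution

wire = serialize_signet_header(b"\x00" * 80, b"\x01" * 72, solution_length=72)
assert len(wire) == 80 + 72
```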
> 
> Single commit with code (will split into commits and make PR later, but just 
> to give an idea what it looks like): 
> https://github.com/kallewoof/bitcoin/pull/4
> 
> I don't think this PR is overly intrusive, and I'm hoping to be able to get 
> signet code into Bitcoin Core eventually, and am equally hopeful that devs of 
> other (wallet etc) implementations will consider supporting it.
> 
> Feedback requested on this.
> 
> Attribution: parts of the signet code (in particular signblock and 
> getnewblockhex) were adapted from the ElementsProject/elements repository. 
> When PR is split into atomic commits, I will put appropriate attribution 
> there.
> 
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-09 Thread Matt Corallo via bitcoin-dev
Aside from the complexity issues here, note that for a user to be adversely 
affect, they probably have to have pre-signed lock-timed transactions. 
Otherwise, in the crazy case that such a user exists, they should have no 
problem claiming the funds before activation of a soft-fork (and just switching 
to the segwit equivalent, or some other equivalent scheme). Thus, adding 
additional restrictions like tx size limits will equally break txn.

> On Mar 8, 2019, at 14:12, Sjors Provoost  wrote:
> 
> 
>> (1) It has been well documented again and again that there is desire to 
>> remove OP_CODESEPARATOR, (2) it is well-documented OP_CODESEPARATOR in 
>> non-segwit scripts represents a rather significant vulnerability in Bitcoin 
>> today, and (3) lots of effort has gone into attempting to find practical 
>> use-cases for OP_CODESEPARATOR's specific construction, with no successes as 
>> of yet. I strongly, strongly disagree that the highly-unlikely remote 
>> possibility that someone created something before which could be rendered 
>> unspendable is sufficient reason to not fix a vulnerability in Bitcoin today.
>> 
>>> I suggest an alternative whereby the execution of OP_CODESEPARATOR 
>>> increases the transactions weight suitably as to temper the vulnerability 
>>> caused by it.  Alternatively there could be some sort of limit (maybe 1) on 
>>> the maximum number of OP_CODESEPARATORs allowed to be executed per script, 
>>> but that would require an argument as to why exceeding that limit isn't 
>>> reasonable.
>> 
>> You could equally argue, however, that any such limit could render some 
>> moderately-large transaction unspendable, so I'm somewhat skeptical of this 
>> argument. Note that OP_CODESEPARATOR is non-standard, so getting them mined 
>> is rather difficult in any case.
> 
> Although I'm not a fan of extra complexity, just to explore these two ideas a 
> bit further.
> 
> What if such a transaction:
> 
> 1. must have one input; and
> 2. must be smaller than 400 vbytes; and
> 3. must spend from a UTXO older than fork activation
> 
> Adding such a contextual check seems rather painful, perhaps comparable to 
> nLockTime. Anything more specific than the above, e.g. counting the number of 
> OP_CODESEPARATOR calls, seems like guess work.
> 
> Transaction weight currently doesn't consider OP codes, it only considers if 
> bytes are part of the witness. Changing that to something more akin to 
> Ethereums gas pricing sounds too complicated to even consider.
> 
> 
> I would also like to believe that whoever went through the trouble of using 
> OP_CODESEPARATOR reads this list.
> 
> Sjors
> 



Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-08 Thread Matt Corallo via bitcoin-dev

Replies inline.

On 3/8/19 3:57 PM, Russell O'Connor wrote:
On Thu, Mar 7, 2019 at 2:50 PM Matt Corallo wrote:

It's very easy to construct a practical script using OP_CODESEPARATOR.

IF <2>   <2> CHECKMULTISIGVERIFY ELSE 
CODESEPARATOR  CHECKSIGVERIFY ENDIF


Now when someone hands Alice, the CFO of XYZ corp., some transaction, 
she has the option of either signing it unilaterally herself, or 
creating a partial signature such that the transaction additionally 
needs Bob the CEO's signature as well, and Alice's choice is committed 
to the blockchain for auditing purposes later.
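The auditing property comes from how legacy signature hashing truncates the script at the most recently executed OP_CODESEPARATOR (the resulting "scriptCode" is what a signature commits to). A minimal sketch of that truncation, with illustrative opcode bytes and the simplification that the last OP_CODESEPARATOR anywhere in the script stands in for the last *executed* one:

```python
# Sketch of legacy scriptCode truncation (simplified: FindAndDelete and
# the full sighash serialization are omitted, and we use the last
# OP_CODESEPARATOR anywhere in the script, not the last *executed* one).
OP_CODESEPARATOR = 0xAB

def script_code(script: bytes) -> bytes:
    """Return the script suffix that a signature commits to."""
    pos = script.rfind(bytes([OP_CODESEPARATOR]))
    return script if pos == -1 else script[pos + 1:]

# Illustrative two-branch script: OP_IF <multisig path> OP_ELSE
# OP_CODESEPARATOR <single-sig path> OP_ENDIF.  A signature made in the
# ELSE branch commits only to the bytes after the separator, so the
# authorized branch is visible from the signature itself.
script = bytes([0x63, 0xAE, 0x67, OP_CODESEPARATOR, 0xAC, 0x68])
assert script_code(script) == bytes([0xAC, 0x68])
```

Because each branch yields a different scriptCode, a signature created in the ELSE branch cannot be replayed as an authorization of the IF branch.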


Now, there are many things you might object about this scheme, but my 
point is that (A) regardless of what you think about this scheme, it, or 
similar schemes, may have been devised by users, and (B) users may have 
already committed funds to such schemes, and due to P2SH you cannot know 
that this is not the case.


The common way to set that up is to have a separate key, but, ok, fair 
enough. That said, the argument that "it may be hidden by P2SH!" isn't 
sufficient here. It has to *both* be hidden by P2SH and have never been 
spent from (either on mainnet or testnet) or be lock-timed a year in the 
future. I'm seriously skeptical that someone is using a highly esoteric 
scheme and has just been pouring money into it without ever having 
tested it or having withdrawn any money from it whatsoever. This is just 
a weird argument.



Please don't strawman my position.  I am not suggesting we don't fix a 
vulnerability in Bitcoin.  I am suggesting we find another way.  One 
that limits the risk of destroying other people's money.


Here is a more concrete proposal:  No matter how bad OP_CODESEPARATOR 
is, it cannot be worse than instead including another input that spends 
another identically sized UTXO.  So how about we soft-fork in a rule 
that says that an input's weight is increased by an amount equal to the 
number of OP_CODESEPARATORs executed times the sum of weight of the UTXO 
being spent and 40 bytes, the weight of a stripped input. The risk of 
destroying other people's money is limited and AFAIU it would completely 
address the vulnerabilities caused by OP_CODESEPARATOR.


You're already arguing that someone has such an esoteric use of script, 
suggesting they aren't *also* creating pre-signed, long-locktimed 
transactions with many inputs isn't much of a further stretch 
(especially since this may result in the fee being non-standardly low if 
you artificially increase its weight).


Note that "just limit the number of OP_CODESEPARATOR calls" results in a ton 
of complexity and undermines the simple fee analysis we (almost) have 
today, whereas just removing it also lets us remove a ton of code.


Further note that if you don't remove it getting the efficiency wins 
right is even harder because instead of being able to cache sighashes 
you now have to (at a minimum) wipe the cache between each 
OP_CODESEPARATOR call, which results in a ton of additional 
implementation complexity.




 > I suggest an alternative whereby the execution of OP_CODESEPARATOR
 > increases the transactions weight suitably as to temper the
 > vulnerability caused by it.  Alternatively there could be some
sort of
 > limit (maybe 1) on the maximum number of OP_CODESEPARATORs
allowed to be
 > executed per script, but that would require an argument as to why
 > exceeding that limit isn't reasonable.

You could equally argue, however, that any such limit could render some
moderately-large transaction unspendable, so I'm somewhat skeptical of
this argument. Note that OP_CODESEPARATOR is non-standard, so getting
them mined is rather difficult in any case.


I already know of people whose funds are tied up due to other changes 
to Bitcoin Core's default relay policy.  Non-standardness is not an 
excuse to take other people's tied up funds and destroy them permanently.


Huh?! The whole point of non-standardness in this context is to (a) make 
soft-forking something out safer by derisking miners not upgrading right 
away and (b) signal something that may be a candidate for soft-forking 
out so that we get feedback. Who is getting things disabled who isn't 
bothering to *tell* people that their use-case is being hurt?!



Re: [bitcoin-dev] Sighash Type Byte; Re: BIP Proposal: The Great Consensus Cleanup

2019-03-07 Thread Matt Corallo via bitcoin-dev
I can't say I'm particularly married to this idea (hence the alternate 
proposal in the original email), but at the same time the lack of 
existing transactions using these bits (and the redundancy thereof - 
they don't *do* anything special) seems to be pretty strong indication 
that they are not in use. One could argue a similarity between these 
bits and OP_NOPs - no one is going to create transactions that require 
OP_NOP execution to be valid as they are precisely the kind of thing 
that may get soft-forked to have a new meaning. While the sighash bits 
are somewhat weaker candidates for soft-forking, I don't think "someone 
may have shoved random bits into parts of their 
locked-for-more-than-a-year transactions" is sufficient reason to not 
soft-fork something out. Obviously, actually *seeing* it used in 
practice or trying to fork them out in a fast manner would be 
unacceptable, but neither is being proposed here.


Matt

On 3/7/19 3:16 PM, Russell O'Connor wrote:


* If the sighash type byte (ie last byte in a signature being evaluated
during the execution of OP_CHECKSIG[VERIFY] or
OP_CHECKMULTISIG[VERIFY])
is anything other than 1, 2, 3, 0x81, 0x82, or 0x83, the script
execution fails. This does not apply to 0-length signature stack
elements.


The sighash type byte is a "great" place to store a few bits of 
ancillary data when making signatures.  Okay it isn't great, but it is 
good enough that some misguided users may have been using it and have 
unbroadcast transactions in cold storage (think sweeps) for UTXOs whose 
private keys may have been lost.  I don't think that one's hunch that 
there isn't much risk in disabling these sighashes is good enough to put 
people funds at risk, especially given the alternative proposal of 
caching the just-before-the-last-byte sighash midstate that is available.



Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-07 Thread Matt Corallo via bitcoin-dev

Replies inline.

Matt

On 3/7/19 3:03 PM, Russell O'Connor wrote:


* OP_CODESEPARATOR in non-BIP 143 scripts fails the script validation.
This includes OP_CODESEPARATORs in unexecuted branches of if
statements,
similar to other disabled opcodes, but unlike OP_RETURN.


OP_CODESEPARATOR is the only mechanism available that allows users to 
sign which particular branch they are authorizing for within scripts 
that have multiple possible conditions that reuse the same public key.


This is true, and yet it does not appear to actually be practically 
usable. Thus far, despite a ton of effort, I have not yet seen a 
practical use-case for OP_CODESEPARATOR (except for one example of it 
being used to make SegWit scripts ever-so-slightly more efficient in 
TumbleBit, hence why this BIP does not propose disabling it for SegWit).


Because of P2SH you cannot know that no one is currently using this 
feature.  Activating a soft-fork as describe above means these sorts of 
funds would be permanently lost.  It is not acceptable to risk people's 
money like this.


(1) It has been well documented again and again that there is desire to 
remove OP_CODESEPARATOR, (2) it is well-documented OP_CODESEPARATOR in 
non-segwit scripts represents a rather significant vulnerability in 
Bitcoin today, and (3) lots of effort has gone into attempting to find 
practical use-cases for OP_CODESEPARATOR's specific construction, with 
no successes as of yet. I strongly, strongly disagree that the 
highly-unlikely remote possibility that someone created something before 
which could be rendered unspendable is sufficient reason to not fix a 
vulnerability in Bitcoin today.


I suggest an alternative whereby the execution of OP_CODESEPARATOR 
increases the transactions weight suitably as to temper the 
vulnerability caused by it.  Alternatively there could be some sort of 
limit (maybe 1) on the maximum number of OP_CODESEPARATORs allowed to be 
executed per script, but that would require an argument as to why 
exceeding that limit isn't reasonable.


You could equally argue, however, that any such limit could render some 
moderately-large transaction unspendable, so I'm somewhat skeptical of 
this argument. Note that OP_CODESEPARATOR is non-standard, so getting 
them mined is rather difficult in any case.



Re: [bitcoin-dev] BIP Proposal: The Great Consensus Cleanup

2019-03-07 Thread Matt Corallo via bitcoin-dev

Replies inline.

On 3/7/19 10:44 AM, Luke Dashjr wrote:

On Wednesday 06 March 2019 21:39:15 Matt Corallo wrote:

I'd like to ask the BIP editor to assign a BIP number.


Needs a Backward Compatibility section, and should have a bips repo PR opened
after discussion on the ML.


Oops, I guess most of the "Discussion" section can just be moved into a 
"Backwards Compatibility" section. Will do before PR'ing.



   * The 4th change (making non-standard signature hash types invalid)
may be worth discussing. In order to limit the number of potential
signature hashes which could be used per-input (allowing us to cache
them to avoid re-calculation), we can disable non-standard sighash
types. Alternatively, however, most of the same effect could be achieved
by caching the just-before-the-last-byte sighash midstate and hashing
only the last byte when checking signatures. Still, them having been
non-standard for many years makes me doubt there is much risk involved
in disabling them, and I don't see much potential use-case for keeping
them around so I'd like to just remove them.


I don't understand what is being removed here.


This refers to the following spec change:

If the sighash type byte (ie last byte in a signature being evaluated 
during the execution of OP_CHECKSIG[VERIFY] or OP_CHECKMULTISIG[VERIFY]) 
is anything other than 1, 2, 3, 0x81, 0x82, or 0x83, the script 
execution fails. This does not apply to 0-length signature stack elements.
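The proposed rule amounts to a one-byte check at signature-verification time; this is a sketch with assumed helper names, not Bitcoin Core's actual validation code:

```python
# Hedged sketch of the proposed consensus rule; names are illustrative.
ALLOWED_SIGHASH_BYTES = {0x01, 0x02, 0x03, 0x81, 0x82, 0x83}

def sighash_type_ok(sig: bytes) -> bool:
    """Proposed rule: fail script execution unless the signature's final
    (sighash-type) byte is one of the six standard values.  Zero-length
    signature stack elements are exempt, per the spec text above."""
    if len(sig) == 0:
        return True
    return sig[-1] in ALLOWED_SIGHASH_BYTES
```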



As for why the timewarp vulnerability should (IMO rather obviously) be
fixed, it seems rather clear that the only potential use for exploiting
it would be either to inflate the currency supply maliciously by miners
or to fork in what amounts to extension blocks. As for why extension
blocks are almost certainly not the right approach to such changes, it's 
likely worth reading this old post: 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013510.html


While I agree that extension blocks are typically a bad choice, I'm not sure
the argument really applies to forward blocks. (That being said, I find
forward blocks overcomplicated and probably not a reason to avoid this.)


I agree they are somewhat separate ideas, but the arguments in that 
thread apply equally to timewarp-based inter-block-time reductions. If 
you want to discuss it further, I'd suggest a new thread.



* Transactions smaller than 65 bytes when serialized without witness
data are invalid.


Rationale should include the reason(s) why the size doesn't count the witness
here.


Will add.


** Note that miners today only enforce increasing timestamps against the
median-timestamp-of-last-11-blocks, so miners who do not upgrade may
mine a block which violates this rule at the beginning of a difficulty
window if the last block in a difficulty window has a timestamp in the
future. Thus, it is strongly recommended that SPV clients enforce the
new nTime rules to avoid following any potential forks which occur.


This should probably be moved outside Discussion. (Perhaps to the missing
Backward Compatibility section?)


* There are several early-stage proposals which may affect the execution
of scripts, including proposals such as Schnorr signatures, Taproot,
Graftroot, and MAST. These proposals are not expected to have any
interaction with the changes in this BIP, as they are likely to only
apply to SegWit scripts, which are not covered by any of the new rules
except for the sighash type byte rule. Thus, the sighash type byte rule
defined above only applies to *current* signature-checking opcodes, as
any new signature-checking is likely to be implemented via the
introduction of new opcodes.


It's not clear that new opcodes will necessarily always be used. Probably
would be good to clarify the "non-Segwit or witness v0 only" rule in the
Specification section.


Note that you inherently have to use a new opcode for such things - the 
non-standard type bytes *are* defined and define a sighash/signature, 
they can't be simply redefined to a new sighash/signature type in a soft 
fork.



[bitcoin-dev] BIP Proposal: The Great Consensus Cleanup

2019-03-07 Thread Matt Corallo via bitcoin-dev
The following is a proposed BIP to soft-fork out some oddities in the 
current Bitcoin consensus rules, resolving several vulnerabilities, in 
addition to fixing the timewarp vulnerability. I'd like to ask the BIP 
editor to assign a BIP number.


The latest version of the BIP can be found at 
https://github.com/TheBlueMatt/bips/blob/cleanup-softfork/bip-.mediawiki 
(a text copy is included below).


Some things that may be worth discussing:

 * Note that the activation times in this BIP may result in the 
activation of the new soft-fork rules on the same block as the scheduled 
block-subsidy halving. Sadly, avoiding this either requires a 
significantly compressed BIP activation time (which may result in the 
rules not activating for benign reasons) or beginning the activation 
process significantly into the future.


 * The BIP proposes allowing timestamps on the difficulty-adjustment 
block to go backwards by 600 seconds which has the nice property of 
making the difficulty-adjustment algorithm target almost exactly one 
block per 600 seconds in the worst-case (where miners are attempting to 
exploit the timewarp attack), while avoiding any potential hardware 
bricking (assuming upgrades on the part of mining pools). Alternatively, 
some have proposed allowing the time to go backwards 7200 seconds, which 
introduces some small level of inflation in the case of a miner attack 
(though much less than we've had historically simply due to the rapidly 
growing hashrate) but avoids any requirements for upgrades as the 
existing 7200-second-in-the-future check implies miners will only ever 
build on blocks for which they can set the next timestamp to their 
current time.
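The first alternative can be sketched as an extra check on the first block of each 2016-block difficulty window (the exact placement and helper names here are assumptions for illustration):

```python
# Sketch of the proposed nTime restriction (assumed form, not Core code).
DIFFICULTY_PERIOD = 2016
MAX_BACKWARD_SECS = 600

def difficulty_adjustment_ntime_ok(height: int, ntime: int,
                                   prev_ntime: int) -> bool:
    """On the first block of a difficulty window, nTime may go backwards
    by at most 600 seconds relative to the previous block's nTime."""
    if height % DIFFICULTY_PERIOD != 0:
        return True  # other blocks keep the existing median-time-past rule
    return ntime >= prev_ntime - MAX_BACKWARD_SECS
```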


 * The 4th change (making non-standard signature hash types invalid) 
may be worth discussing. In order to limit the number of potential 
signature hashes which could be used per-input (allowing us to cache 
them to avoid re-calculation), we can disable non-standard sighash 
types. Alternatively, however, most of the same effect could be achieved 
by caching the just-before-the-last-byte sighash midstate and hashing 
only the last byte when checking signatures. Still, them having been 
non-standard for many years makes me doubt there is much risk involved 
in disabling them, and I don't see much potential use-case for keeping 
them around so I'd like to just remove them.
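The caching alternative relies on the fact that a SHA-256 midstate can be saved and resumed. Python's hashlib `copy()` demonstrates the idea; the preimage bytes are placeholders, and real sighash serialization differs in more than its final byte across sighash types:

```python
import hashlib

# Placeholder (assumption): the sighash preimage minus its final byte.
preimage_prefix = b"example-tx-serialization-minus-last-byte"

# Cache the SHA-256 midstate once per input...
mid = hashlib.sha256(preimage_prefix)

# ...then finish each candidate check by hashing only the final byte.
for last_byte in (b"\x01", b"\x81"):
    h = mid.copy()                      # resume from the cached midstate
    h.update(last_byte)
    digest = hashlib.sha256(h.digest()).digest()  # Bitcoin's double SHA-256
    scratch = hashlib.sha256(
        hashlib.sha256(preimage_prefix + last_byte).digest()).digest()
    assert digest == scratch            # identical to hashing from scratch
```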


As for why the timewarp vulnerability should (IMO rather obviously) be 
fixed, it seems rather clear that the only potential use for exploiting 
it would be either to inflate the currency supply maliciously by miners 
or to fork in what amounts to extension blocks. As for why extension 
blocks are almost certainly not the right approach to such changes, it's 
likely worth reading this old post: 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013510.html




BIP: 
Layer: Consensus (soft fork)
Title: The Great Consensus Cleanup
Author: Matt Corallo
Status: Draft
Type: Standards Track
Created: 2019-01-28
License: PD


==Abstract==

This BIP defines a set of consensus changes which reduce the complexity 
of Bitcoin implementations and improve worst-case validation times, 
fixing a number of long-standing vulnerabilities.


==Motivation==

BIP 143 significantly improved certain aspects of Bitcoin's consensus 
rules, key to this being changes to the format of the data which is 
hashed and signed in CHECKSIG operations during script execution. 
However, several improvements were left for later forks to avoid 
bloating the original activation with unrelated changes. This BIP seeks 
to make some of these changes as well as a few other simplifications. 
Specifically, this BIP proposes the following changes:


* Worst-case validation time for non-BIP 143 transactions has long been 
considered a significant vulnerability. To address this, both 
OP_CODESEPARATOR in non-BIP 143 scripts and FindAndDelete fail script 
validation, among other cleanups. This drastically reduces worst-case 
validation time for non-BIP 143 transactions by enabling Signature Hash 
caching on a per-input basis. While validation time of large, simple 
non-BIP 143 transactions can still be excessively high on their own, 
removing these multipliers goes a long way towards resolving the issue.


* By further restricting nTime fields on difficulty adjustment blocks, 
we propose fixing the long-standing "timewarp" inflation vulnerability 
in Bitcoin's difficulty adjustment without risking existing mining 
hardware becoming unusable. This limits the worst-case difficulty 
adjustment target in case of attack from the current exponential growth, 
to once every roughly 600 seconds. Note that no change in default 
behavior is proposed, keeping the existing target of one block every 
~600.6 seconds[1] in the common case (ie we limit the attack scenario to 
about a 0.1% inflation rate, much smaller than the historical inflation 
rate due to rapid hashrate growth).


* Several vulnerabilities 

Re: [bitcoin-dev] Interrogating a BIP157 server, BIP158 change proposal

2019-02-06 Thread Matt Corallo via bitcoin-dev



On 2/4/19 8:18 PM, Jim Posen via bitcoin-dev wrote:
- snip -
> 1) Introduce a new P2P message to retrieve all prev-outputs for a given
> block (essentially the undo data in Core), and verify the scripts
> against the block by executing them. While this permits some forms of
> input script malleability (and thus cannot discriminate between all
> valid and invalid filters), it restricts what an attacker can do. This
> was proposed by Laolu AFAIK, and I believe this is how btcd is 
proceeding.


I'm somewhat confused by this - how does the undo data help you without 
seeing the full (mistate compressed) transaction? In (the realistic) 
threat model where an attacker is trying to blind you from some output, 
they can simply give you "undo data" where scriptPubKeys are OP_TRUE 
instead of the real script and you'd be none the wiser.
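A toy sketch of the substitution described above (data shapes and helper names are hypothetical):

```python
# Hypothetical sketch: a malicious filter server replaces every prevout
# script in the "undo data" it serves with OP_TRUE, so naive script
# execution against the block still succeeds while the real scripts
# (and thus the filter contents) stay hidden from the client.
OP_TRUE = b"\x51"

def forge_undo_data(real_prevouts):
    """real_prevouts: list of (value, scriptPubKey) pairs for a block."""
    return [(value, OP_TRUE) for value, _script in real_prevouts]
```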


On 2/5/19 1:42 AM, Olaoluwa Osuntokun via bitcoin-dev wrote:
- snip -

I think it's too late into the current deployment of the BIPs to change
things around yet again. Instead, the BIP already has measures in place for
adding _new_ filter types in the future. This along with a few other filter
types may be worthwhile additions as new filter types.

- snip -

Huh? I don't think we should seriously consider 
only-one-codebase-has-deployed-anything-with-very-limited-in-the-wild-use 
as "too late into the current deployment"?


Matt


Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-01-08 Thread Matt Corallo via bitcoin-dev
I responded to a few things in-line before realizing I think we're out of sync 
on what this alternative proposal actually implies. My understanding is that it 
does *not* imply that you are guaranteed the ability to RBF as fees change. 
The previous problem is still there - your counterparty can announce a bogus 
package and leave you unable to add a new transaction to it, the difference 
being it may be significantly more expensive to do so. If it were the case that 
you could RBF after the fact, I would likely agree with you.

> On Jan 8, 2019, at 00:50, Rusty Russell  wrote:
> 
> Matt Corallo  writes:
>> Ultimately, defining a "near the top of the mempool" criteria is fraught 
>> with issues. While it's probably OK for the original problem (large 
>> batched transactions where you don't want a single counterparty to 
>> prevent confirmation), lightning's requirements are very different. 
>> Instead of wanting a high probability that the transaction in question 
>> confirms "soon", we need certainty that it will confirm by some deadline.
> 
> I don't think it's different, in practice.

I strongly disagree. If you're someone sending a batched payment, 5% chance it 
takes 13 blocks is perfectly acceptable. If you're a lightning operator, that 
quickly turns into "5% chance, or 35% chance if your counterparty is malicious 
and knows more about the market structure than you". Eg in the past it's been 
the case that transaction volume would spike every day at the same time when 
Bitmex processed a flood of withdrawals all at once in separate transactions. 
Worse, it's probably still the case that, in case of sudden market movement, 
transaction volume can spike while people arb exchanges and move coins into 
exchanges to sell.

>> Thus, even if you imagine a steady-state mempool growth, unless the 
>> "near the top of the mempool" criteria is "near the top of the next 
>> block" (which is obviously *not* incentive-compatible)
> 
> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
> block, and assumed you'd only allow RBF if the old package wasn't in the
> top and the replacement would be.  That seems incentive compatible; more
> than the current scheme?

My point was, because of block time variance, even that criteria doesn't hold 
up. If you assume a steady flow of new transactions and one or two blocks come 
in "late", suddenly "top 4MWeight" isn't likely to get confirmed until a few 
blocks come in "early". Given block variance within a 12 block window, this is 
a relatively likely scenario.

> The attack against this is to make a 100k package which would just get
> into this "top", then push it out with a separate tx at slightly higher
> fee, then repeat.  Of course, timing makes that hard to get right, and
> you're paying real fees for it too.
> 
> Sure, an attacker can make you pay next-block high fees, but it's still
> better than our current "*always* overpay and hope!", and you can always
> decide at the time based on whether the expiring HTLC(s) are worth it.
> 
> But I think whatever's simplest to implement should win, and I'm not in
> a position to judge that accurately.
> 
> Thanks,
> Rusty.



Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-01-07 Thread Matt Corallo via bitcoin-dev

Sorry for the late reply.

Hmm, I included the old RBF-pinning proposal as a comparison. 
Personally, I find it both less clean and less convincingly secure.


Ultimately, defining a "near the top of the mempool" criteria is fraught 
with issues. While it's probably OK for the original problem (large 
batched transactions where you don't want a single counterparty to 
prevent confirmation), lightning's requirements are very different. 
Instead of wanting a high probability that the transaction in question 
confirms "soon", we need certainty that it will confirm by some deadline.


Thus, even if you imagine a steady-state mempool growth, unless the 
"near the top of the mempool" criteria is "near the top of the next 
block" (which is obviously *not* incentive-compatible), it's easy to see 
how the package would fail to confirm within a handful of blocks given 
block time variance. Giving up the ability to RBF/CPFP more than once in 
case the fee moves away from us seems to be a rather significant 
restriction.


The original proposal is somewhat of a hack, but it's a hack on the 
boundary condition where packages meet our local anti-DoS rules in 
violation of the "incentive compatible" goal anyway (essentially, though 
miners also care about anti-DoS). This proposal is very different and, 
similar to how it doesn't work if blocks randomly come in a bit slow for 
an hour or two, isn't incentive compatible if blocks come in a bit fast 
for an hour or two, as all of a sudden that "near the top of the 
mempool" criteria makes no sense and you should have accepted the new 
transaction(s).


As for package relay, indeed, we can probably do something simpler for 
this specific case, but it depends on what the scope of that design is. 
Suhas opened an issue to try to scope it out a bit more at 
https://github.com/bitcoin/bitcoin/issues/14895


Matt


On Dec 3, 2018, at 22:33, Rusty Russell  wrote:

Matt Corallo  writes:
As an alternative proposal, at various points there have been 
discussions around solving the "RBF-pinning" problem by allowing 
transactors to mark their transactions as "likely-to-be-RBF'ed", which 
could enable a relay policy where children of such transactions would be 
rejected unless the resulting package would be "near the top of the 
mempool". This would theoretically imply such attacks are not possible 
to pull off consistently, as any "transaction-delaying" channel 
participant will have to place the package containing A at an effective 
feerate which makes confirmation to occur soon with some likelihood. It 
is, however, possible to pull off this attack with low probability in 
case of feerate spikes right after broadcast.


I like this idea.

Firstly, it's incentive-compatible[1]: assuming blocks are full, miners
should always take a higher feerate tx if that tx would be in the
current block and the replaced txs would not.[2]

Secondly, it reduces the problem that the current lightning proposal
adds to the UTXO set with two anyone-can-spend txs for 1000 satoshis,
which might be too small to cleanup later.  This rule would allow a
simple single P2WSH(OP_TRUE) output, or, with IsStandard changed,
a literal OP_TRUE.

Note that this clearly relies on some form of package relay, which comes 
with its own challenges, but I'll start a separate thread on that.


Could be done client-side, right?  Do a quick check if this is above 250
satoshi per kweight but below minrelayfee, put it in a side-cache with a
60 second timeout sweep.  If something comes in which depends on it
which is above minrelayfee, then process them as a pair[3].

Cheers,
Rusty.
[1] Miners have generally been happy with Defaults Which Are Good For The
   Network, but I feel a long term development aim should to be reduce
   such cases to smaller and smaller corners.
[2] The actual condition is subtler, but this is a clear subset AFAICT.
[3] For Lightning, we don't care about child-pays-for-grandparent etc.



[bitcoin-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2018-11-30 Thread Matt Corallo via bitcoin-dev
(cross-posted to both lists to make lightning-dev folks aware, please 
take lightning-dev off CC when responding).


As I'm sure everyone is aware, Lightning (and other similar systems) 
work by exchanging pre-signed transactions for future broadcast. Of 
course in many cases this requires either (a) predicting what the 
feerate required for timely confirmation will be at some (or, really, 
any) point in the future, or (b) utilizing CPFP and dependent 
transaction relay to allow parties to broadcast low-feerate transactions 
with children created at broadcast-time to increase the effective 
feerate. Ideally transactions could be constructed to allow for 
after-the-fact addition of inputs to increase fee without CPFP but it is 
not always possible to do so.


Option (a) is rather obviously intractable, and implementation 
complexity has led to channel failures in lightning in practice (as both 
sides must agree on a reasonable-in-the-future feerate). Option (b) is a 
much more natural choice (assuming some form of as-yet-unimplemented 
package relay on the P2P network) but is made difficult due to 
complexity around RBF/CPFP anti-DoS rules.


For example, if we take a simplified lightning design with pre-signed 
commitment transaction A with one 0-value anyone-can-spend output 
available for use as a CPFP output, a counterparty can prevent 
confirmation of/significantly increase the fee cost of confirming A by 
chaining a large-but-only-moderate-feerate transaction off of this 
anyone-can-spend output. This transaction, B, will have a large absolute 
fee while making the package (A, B) have a low-ish feerate, placing it 
solidly at the bottom of the mempool but without significant risk of it 
getting evicted during memory limiting. This large absolute fee forces a 
counterparty which wishes to have the commitment transaction confirm to 
increase on this absolute fee in order to meet RBF rules.
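With illustrative numbers (assumptions, not taken from the post), the pinning arithmetic looks like this:

```python
# Commitment tx A plus a large, low-feerate child B chained off its
# anyone-can-spend output (all numbers illustrative).
a_vsize, a_fee = 300, 3_000         # A alone: 10 sat/vB
b_vsize, b_fee = 100_000, 110_000   # B: 1.1 sat/vB but a huge absolute fee

package_feerate = (a_fee + b_fee) / (a_vsize + b_vsize)
assert round(package_feerate, 2) == 1.13   # solidly at the bottom of the mempool

# Under BIP 125 rule 3, a replacement must exceed the absolute fees of
# everything it evicts, so dislodging (A, B) costs more than 113,000 sats
# even though B's feerate is tiny.
min_replacement_fee = a_fee + b_fee
assert min_replacement_fee == 113_000
```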


For this reason (and many other similar attacks utilizing the package 
size limits), in discussing the security model around CPFP, we've 
generally considered it too-difficulty-to-prevent third parties which 
are able to spend an output of a transaction from delaying its 
confirmation, at least until/unless the prevailing feerates decline and 
some of the mempool backlog gets confirmed.


You'll note, however, that this attack doesn't have to be permanent to 
work - Lightning's (and other contracting/payment channel systems') 
security model assumes the ability to get such commitment transactions 
confirmed in a timely manner, as otherwise HTLCs may time out and 
counterparties can claim the timeout-refund before we can claim the HTLC 
using the hash-preimage.


To partially-address the CPFP security model considerations, a next step 
might involve tweaking Lightning's commitment transaction to have two 
small-value outputs which are immediately spendable, one by each channel 
participant, allowing them to chain children off without allowing 
unrelated third-parties to chain children. Obviously this does not 
address the specific attack so we need a small tweak to the anti-DoS 
CPFP rules in Bitcoin Core/BIP 125:


The last transaction which is added to a package of dependent 
transactions in the mempool must:

 * Have no more than one unconfirmed parent,
 * Be no larger than 1,000 bytes in virtual size.
(for implementation sanity, this would effectively reduce all mempool 
package size limits by 1 1K-virtual-size transaction, and the last would 
be "allowed to violate the limits" as long as it meets the above criteria).


For contracting applications like lightning, this means that as long as 
the transaction we wish to confirm (in this case the commitment transaction)

 * Has only two immediately-spendable (ie non-CSV) outputs,
 * where each immediately-spendable output is only spendable by one 
counterparty,

 * and is no larger than MAX_PACKAGE_VIRTUAL_SIZE - 1001 Vsize,
each counterparty will always be able to independently CPFP the 
transaction in question. This is because if the "malicious" (ie 
transaction-delaying) party broadcasts A with a child, that child can 
never meet the "last transaction" carve-out, as it cannot both meet the 
package limit and have only one unconfirmed ancestor. Thus, the 
non-delaying counterparty can always independently add its own CPFP 
transaction, increasing the (A, Tx2) package feerate and confirming A 
without having to concern themselves with the (A, Tx1) package.


As an alternative proposal, at various points there have been 
discussions around solving the "RBF-pinning" problem by allowing 
transactors to mark their transactions as "likely-to-be-RBF'ed", which 
could enable a relay policy where children of such transactions would be 
rejected unless the resulting package would be "near the top of the 
mempool". This would theoretically imply such attacks are not possible 
to pull off consistently, as any "transaction-delaying" channel 
participant will have to place the 

Re: [bitcoin-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2018-11-30 Thread Matt Corallo via bitcoin-dev
Hmm, you may be correct that this doesn't (strictly speaking) imply a 
change to BIP 125 itself, though the high-level protocol here is 
likely of interest to the list, as well as likely to generate feedback. 
Note that in your example, output Z must be CSV-delayed (ie you cannot 
construct a package using that output as it must be spent in a different 
block than TX0 is confirmed in) in order for the proposal to be secure 
as otherwise Alice could use output A to pin the transaction, and then 
"use up" the proposed "last-transaction" rule by spending output Z, 
leaving Bob unable to spend output B without meeting the (expensive) RBF 
criteria.


It was further pointed out to me that while the original mail states 
that this relies on package relay, this isn't really entirely true. The 
status quo today may leave a commitment transaction unable to be 
broadcast if feerates spike much higher than the feerate negotiated at 
the time of construction. Under this proposal this is not changed, it is 
only the implementation proposal which implies the commitment 
transaction feerate negotiation will simply be replaced with a 
1sat/vbyte constant which relies on some form of package relay.


Matt

On 11/30/18 5:38 PM, Russell O'Connor wrote:
On Fri, Nov 30, 2018 at 9:50 AM Matt Corallo via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:


To partially-address the CPFP security model considerations, a next
step
might involve tweaking Lightning's commitment transaction to have two
small-value outputs which are immediately spendable, one by each
channel
participant, allowing them to chain children off without allowng
unrelated third-parties to chain children. Obviously this does not
address the specific attack so we need a small tweak to the anti-DoS
CPFP rules in Bitcoin Core/BIP 125:


It seems to me that this two-output scheme does address the specific 
attack without tweaking the RBF rules of BIP 125, since you are not 
doing an RBF at all.


Suppose we have a 1k-vbyte unconfirmed transaction, TX0, with outputs Z, 
A, and B, where A and B are small outputs controlled by the participants 
Alice and Bob respectively, with a 1ksat fee, yielding a fee rate of 
1sat/vbyte.
Someone, maybe Alice, attempts to pin the transaction, maliciously or 
not, by attaching a 10k-vbyte transaction, TX1, to either output Z or 
output A, with a fee of 21ksats.  This brings the fee rate for the 
TX0-TX1 package to 2sat/vbyte, being 11k-vbyte total size with 22ksats 
in total fees.


Now Bob wants to CPFP to increase the effective fee rate of TX0 to 
3sats/vbyte using output B.  He attaches a 1k-vbyte transaction, TX2, to 
output B with a fee of 5ksats.  This ought to create a new TX0-TX2 
package with a 3sat/vbyte fee rate, being 2k-vbyte total size with 
6ksats in total fees.  TX1 has now been excluded from the package 
containing TX0. But TX1 hasn't been replaced, so the RBF rules from 
BIP125 don't apply.  TX1 is still a valid unconfirmed transaction 
operating at a fee rate of 2.1sats/vbyte.
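
The arithmetic of this example checks out, as a quick sketch shows (a hedged 
illustration; not Bitcoin Core's actual package-feerate code):

```python
def package_feerate(txs):
    """Package feerate in sat/vbyte: total fees over total virtual size."""
    total_fee = sum(fee for fee, _ in txs)
    total_vsize = sum(vsize for _, vsize in txs)
    return total_fee / total_vsize

TX0 = (1_000, 1_000)    # (fee in sats, vsize in vbytes): the parent, 1 sat/vbyte
TX1 = (21_000, 10_000)  # the pinning child attached to output Z or A
TX2 = (5_000, 1_000)    # Bob's CPFP child on output B

assert package_feerate([TX0]) == 1.0
assert package_feerate([TX0, TX1]) == 2.0  # 22ksats / 11k vbytes
assert package_feerate([TX0, TX2]) == 3.0  # 6ksats / 2k vbytes
assert package_feerate([TX1]) == 2.1       # TX1 stands on its own, unreplaced
```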


That said, I'm not an expert on how packages and package fee rates are 
calculated in Bitcoin Core, so I am speculating a bit.  And, because I'm 
talking with Matt, it's more likely that I'm mistaken.  AFAIK, any rules 
about CPFP's behaviour in Bitcoin Core are undocumented.



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] A BIP proposal for transactions that are 'cancellable'

2018-09-06 Thread Matt Corallo via bitcoin-dev
I think you misunderstood my proposal. What you'd do is the transaction
is spendable by either Bob OR (Bob AND Alice) and before
broadcast/during construction/whatever sign a new transaction that
spends it and is only spendable by Alice, but is timelocked for 24
hours. At the 24h mark, Alice broadcasts the transaction and once it is
confirmed only Alice can claim the money.
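
The scheme described above can be sketched as a timeline of control (the 
policy shorthand, names, and helper are hypothetical illustrations, not a 
concrete implementation):

```python
# The funding output is spendable by Bob alone OR by Bob-and-Alice together;
# before broadcast, both parties co-sign a refund paying Alice alone,
# locktimed 24 hours ahead.
funding_policy = "or(pk(Bob), and(pk(Bob), pk(Alice)))"  # miniscript-like shorthand

def who_can_claim(hours_elapsed, refund_confirmed):
    """Who controls the funds at each point in the scheme's timeline."""
    if refund_confirmed:
        return {"Alice"}          # the refund output pays Alice only
    if hours_elapsed < 24:
        return {"Bob"}            # refund not yet valid; only Bob can spend
    return {"Bob", "Alice"}       # Alice may broadcast the refund; Bob can race

assert who_can_claim(10, False) == {"Bob"}
assert who_can_claim(25, False) == {"Bob", "Alice"}
assert who_can_claim(25, True) == {"Alice"}
```

Because the refund is an ordinary pre-signed transaction rather than a 
consensus change, no output ever becomes invalid in a reorg.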

On 09/06/18 10:59, Alejandro Ranchal Pedrosa wrote:
> Dear Matt,
> 
> Notice that what you suggest has some substantial differences. With your
> suggestion of a multisig option with a 24h timelock, once you give Alice
> the chance to spend that UTXO without a negative timelock (as we argue),
> by means of, say, a transaction that she can use, you cannot enforce
> that this is not used by Alice after the 24hs. Perhaps it is possible,
> tweaking the Lightning Channel design of Breach Remedy txs, to penalize
> Alice if she does this, but this requires Bob to check the Blockchain in
> case he needs to publish a proof-of-fraud, think of adding extra funds
> to the transaction to account for penalization, etc.
> 
> Feel free to correct me if I got it wrong in your email.
> 
> Best,
> Alejandro.
> 
> 
> On Thu, Sep 6, 2018 at 3:32 PM Matt Corallo wrote:
> 
> I think a simple approach to what you want to accomplish is to
> simply have a multisig option with a locktime pre-signed transaction
> which is broadcastable at the 24h mark and has different
> spendability. This avoids introducing reorg-induced invalidity.
> 
> On September 6, 2018 9:19:24 AM UTC, Alejandro Ranchal Pedrosa via
> bitcoin-dev wrote:
> 
> Hello everyone,
> 
> We would like to propose a new BIP to extend OP_CSV (and/or OP_CLTV) 
> in
> order for these to allow and interpret negative values. This way,
> taking the example shown in BIP 112:
> 
> HASH160  EQUAL
> IF
>      
> ELSE
>      "24h" CHECKSEQUENCEVERIFY DROP
>      
> ENDIF
> CHECKSIG
> 
> that gives ownership only to Bob for the first 24 hours and then to
> whichever spends first, we basically propose using the negative bit 
> value:
> 
> HASH160  EQUAL
> IF
>      
> ELSE
>      "-24h" CHECKSEQUENCEVERIFY DROP
>      
> ENDIF
> CHECKSIG
> 
> meaning that both would have ownership for the first 24 hours, but
> after that only Bob would own such coins. Its implementation should
> not be too tedious, and in fact it simply implies considering negative
> values that are at the moment discarded per the specification of
> BIP-112, leaving the sign bit unused.
> 
> This, we argue, can increase the fairness for users, and can at 
> times
> be more cost-effective for users to do rather than trying a 
> Replace-By-Fee
> transaction, should they want to modify such payment.
> 
> We would like to have a discussion about this before proposing the
> BIP, for which we are preparing the text.
> 
> You can find our paper discussing it here:
> https://hal-cea.archives-ouvertes.fr/cea-01867357 (find attached as 
> well)
> 
> Best,
> 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A BIP proposal for transactions that are 'cancellable'

2018-09-06 Thread Matt Corallo via bitcoin-dev
I think a simple approach to what you want to accomplish is to simply have a 
multisig option with a locktime pre-signed transaction which is broadcastable 
at the 24h mark and has different spendability. This avoids introducing 
reorg-induced invalidity.

On September 6, 2018 9:19:24 AM UTC, Alejandro Ranchal Pedrosa via bitcoin-dev 
 wrote:
>Hello everyone,
>
>We would like to propose a new BIP to extend OP_CSV (and/or OP_CLTV) in
>order for these to allow and interpret negative values. This way,
>taking the example shown in BIP 112:
>
>HASH160  EQUAL
>IF
>     
>ELSE
>     "24h" CHECKSEQUENCEVERIFY DROP
>     
>ENDIF
>CHECKSIG
>
>that gives ownership only to Bob for the first 24 hours and then to
>whichever spends first, we basically propose using the negative bit
>value:
>
>HASH160  EQUAL
>IF
>     
>ELSE
>     "-24h" CHECKSEQUENCEVERIFY DROP
>     
>ENDIF
>CHECKSIG
>
>meaning that both would have ownership for the first 24 hours, but
>after that only Bob would own such coins. Its implementation should
>not be too tedious, and in fact it simply implies considering negative
>values that are at the moment discarded per the specification of
>BIP-112, leaving the sign bit unused.
>
>This, we argue, can increase the fairness for users, and can at times
>be more cost-effective for users to do rather than trying a
>Replace-By-Fee
>transaction, should they want to modify such payment.
>
>We would like to have a discussion about this before proposing the
>BIP, for which we are preparing the text.
>
>You can find our paper discussing it here:
>https://hal-cea.archives-ouvertes.fr/cea-01867357 (find attached as
>well)
>
>Best,
>
>-- 
>Alejandro Ranchal Pedrosa, Önder Gürcan and Sara Tucci-Piergiovanni
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BetterHash status

2018-06-26 Thread Matt Corallo via bitcoin-dev
Things go into production when people decide to adopt them, not before. You're 
welcome to contribute to the implementation at 
https://github.com/TheBlueMatt/mining-proxy

On June 26, 2018 2:32:06 PM UTC, "Casciano, Anthony via bitcoin-dev" 
 wrote:
>What is the status of Matt Corallo's "BetterHash" BIP??   I recommend
>it
>
>goes into production sooner than later.  Any 2nd's ?
>
>
>Thanks in advance!
>
>Tony Cash
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP Proposal] BetterHash Mining Protocol Replacements

2018-06-06 Thread Matt Corallo via bitcoin-dev
Clients "inspecting and modifying the transactions" is explicitly *not*
supported. There should be more than enough features for clients to get
bitcoind to generate the exact block they want already available via
Bitcoin Core. The only reason transactions are exposed over the work
protocol at all, really, is so that clients can generate weak blocks to
be sent to the pool for efficient client -> pool block relay, not sure
that's worth bothering to add a whole new endpoint for, sounds
needlessly complicated (and the spec is already more than complicated
enough, sadly).

Matt

On 06/05/18 21:26, Chris Pacia via bitcoin-dev wrote:
> Really like that you're moving forward with this. A few months ago I was
> working on something similar as it seemed like nobody else was interested.
> 
> In regards to the specific proposal, would it make sense to offer a tx
> subscription endpoint in addition to TRANSACTION_DATA_REQUEST? Such an
> endpoint could respond to the subscription with the current full list of
> transactions and then push the diff every time a new template is pushed.
> A client that wants to inspect and modify the transactions would use
> quite a bit less data than polling the request endpoint.
> 
> 
> On 06/05/2018 02:44 PM, Matt Corallo via bitcoin-dev wrote:
>> Been working on this one for a while, so it's already been through a few
>> rounds of feedback (thanks to all those who already have provided
>> feedback)!
>>
>> At a high level, this meets a few goals:
>>
>> 1) Replace getblocktemplate with something that is both more performant
>> (no JSON encoding, no full transactions sent over the wire to update a
>> job, hence we can keep the same CTransactionRef in Bitcoin Core making
>> lots of validation things way faster), more robust for consensus changes
>> (no need to add protocol changes to add commitments ala SegWit in the
>> future), and moves more block-switching logic inside of the work
>> provider (allowing Bitcoin Core to better optimize work switching as it
>> knows more than an outside pool server, specifically we can play more
>> games with how we do mempool eviction, empty block mining, and not
>> mining fresh transactions more easily by moving to a more "push" model
>> from the normal "pull" getblocktemplate implementation).
>>
>> 2) Replace Stratum with something more secure (sign messages when
>> applicable, without adding too much overhead to the pool), simpler to
>> implement (not JSON-wrapped-hex, no 32-byte-swapped-per-4-byte-byteorder
>> insanity), and better-defined (a clearly written spec, encompassing the
>> various things shoved backwards into stratum like suggested difficulty
>> in the password field and device identification by setting user to
>> "user.device") with VENDOR_MESSAGEs provided for extensibility instead
>> of conflicting specifications from various different vendors.
>>
>> 3) Provide the ability for a pool to accept work which the users of the
>> pool selected the transactions for, providing strong decentralization
>> pressure by removing the network-level centralization attacks pools can
>> do (or be compromised and used to perform) while still allowing them
>> full control of payout management and variance reduction.
>>
>> While (1) and (2) stand on their own, making it all one set of protocols
>> to provide (3) provides at least the opportunity for drastically better
>> decentralization in Bitcoin mining in the future.
>>
>> The latest version of the full BIP draft can be found at
>> https://github.com/TheBlueMatt/bips/blob/betterhash/bip-.mediawiki
>> and implementations of the work-generation part at
>> https://github.com/TheBlueMatt/bitcoin/commits/2018-02-miningserver and
>> pool/proxy parts at https://github.com/TheBlueMatt/mining-proxy (though
>> note that both implementations are currently on a slightly out-of-date
>> version of the protocol, I hope to get them brought up to date in the
>> coming day or two and make them much more full-featured. The whole stack
>> has managed to mine numerous testnet blocks on several different types
>> of hardware).
>>
>> Matt
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
> 
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] [BIP Proposal] BetterHash Mining Protocol Replacements

2018-06-05 Thread Matt Corallo via bitcoin-dev
Been working on this one for a while, so it's already been through a few
rounds of feedback (thanks to all those who already have provided feedback)!

At a high level, this meets a few goals:

1) Replace getblocktemplate with something that is both more performant
(no JSON encoding, no full transactions sent over the wire to update a
job, hence we can keep the same CTransactionRef in Bitcoin Core making
lots of validation things way faster), more robust for consensus changes
(no need to add protocol changes to add commitments ala SegWit in the
future), and moves more block-switching logic inside of the work
provider (allowing Bitcoin Core to better optimize work switching as it
knows more than an outside pool server, specifically we can play more
games with how we do mempool eviction, empty block mining, and not
mining fresh transactions more easily by moving to a more "push" model
from the normal "pull" getblocktemplate implementation).

2) Replace Stratum with something more secure (sign messages when
applicable, without adding too much overhead to the pool), simpler to
implement (not JSON-wrapped-hex, no 32-byte-swapped-per-4-byte-byteorder
insanity), and better-defined (a clearly written spec, encompassing the
various things shoved backwards into stratum like suggested difficulty
in the password field and device identification by setting user to
"user.device") with VENDOR_MESSAGEs provided for extensibility instead
of conflicting specifications from various different vendors.

3) Provide the ability for a pool to accept work which the users of the
pool selected the transactions for, providing strong decentralization
pressure by removing the network-level centralization attacks pools can
do (or be compromised and used to perform) while still allowing them
full control of payout management and variance reduction.

While (1) and (2) stand on their own, making it all one set of protocols
to provide (3) provides at least the opportunity for drastically better
decentralization in Bitcoin mining in the future.

The latest version of the full BIP draft can be found at
https://github.com/TheBlueMatt/bips/blob/betterhash/bip-.mediawiki
and implementations of the work-generation part at
https://github.com/TheBlueMatt/bitcoin/commits/2018-02-miningserver and
pool/proxy parts at https://github.com/TheBlueMatt/mining-proxy (though
note that both implementations are currently on a slightly out-of-date
version of the protocol, I hope to get them brought up to date in the
coming day or two and make them much more full-featured. The whole stack
has managed to mine numerous testnet blocks on several different types
of hardware).

Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-05-17 Thread Matt Corallo via bitcoin-dev
Yea I generally would really prefer something like that but it
significantly complicates the download logic - currently clients can
easily cross-check a filter in case they differ between providers by
downloading the block. If we instead went with the script being spent
they would have to be provided all previous transactions (potentially
compressed via midstate) as well, making it potentially infeasible to
identify the offending node while remaining a lightweight client. Maybe
there is some other reasonable download logic to replace it with, however.

Matt

On 05/17/18 12:36, Gregory Maxwell wrote:
> On Thu, May 17, 2018 at 3:25 PM, Matt Corallo via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> I believe (1) could be skipped entirely - there is almost no reason why
>> you'd not be able to filter for, eg, the set of output scripts in a
>> transaction you know about
> 
> I think this is convincing for the txids themselves.
> 
> What about also making input prevouts filter based on the scriptpubkey
> being _spent_?  Layering wise in the processing it's a bit ugly, but
> if you validated the block you have the data needed.
> 
> This would eliminate the multiple data type mixing entirely.
> 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-05-17 Thread Matt Corallo via bitcoin-dev
(1) can be accomplished by filtering for the set of outputs in the transaction 
you created. I agree (2) would ideally be done to avoid issues with a copied 
wallet (theft or not), but I am worried about the size of the filters 
themselves, not just the size of the blocks downloaded after a match.

On May 17, 2018 3:43:15 PM UTC, Peter Todd <p...@petertodd.org> wrote:
>On Thu, May 17, 2018 at 11:25:12AM -0400, Matt Corallo via bitcoin-dev
>wrote:
>> BIP 158 currently includes the following in the "basic" filter: 1)
>> txids, 2) output scripts, 3) input prevouts.
>> 
>> I believe (1) could be skipped entirely - there is almost no reason
>why
>> you'd not be able to filter for, eg, the set of output scripts in a
>> transaction you know about and (2) and (3) may want to be split out -
>> many wallets may wish to just find transactions paying to them, as
>> transactions spending from their outputs should generally be things
>> they've created.
>
>So I think we have two cases where wallets want to find txs spending
>from their
>outputs:
>
>1) Waiting for a confirmation
>2) Detecting theft
>
>The former can be turned off once there are no expected unconfirmed
>transactions.
>
>As for the latter, this is probably a valuable thing for wallets to do.
>Modulo
>reorgs, reducing the frequency that you check for stolen funds doesn't
>decrease
>total bandwidth cost - it's one filter match per block regardless - but
>perhaps
>the real-world bandwidth cost can be reduced by, say, waiting for a
>wifi
>connection rather than using cellular data.
>
>-- 
>https://petertodd.org 'peter'[:-1]@petertodd.org
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] UHS: Full-node security without maintaining a full UTXO set

2018-05-17 Thread Matt Corallo via bitcoin-dev
Hey Cory,

I'm generally a fan of having an option to "prove a block is valid when
relaying it" instead of "just relay it", but I am concerned that this
proposal is overfitting the current UTXO set. Specifically, because UTXO
entries are (roughly) 32 bytes per output plus 32 bytes per transaction
on disk today, a material increase in batching and many-output
transactions may significantly reduce the UTXO-set-size gain in this
proposal while adding complexity to block relay as well as increase the
size of block data relayed, which can have adverse effects on
propagation. I'd love to see your tests re-run on simulated transaction
data with more batching of sends.

Matt
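
The UHS construction in the quoted proposal below can be sketched as follows 
(the field layout and serialization are illustrative assumptions, not the 
proposal's exact encoding):

```python
import hashlib

def utxo_hash(txid, vout, amount, script_pubkey, height, is_coinbase):
    """Hash of the outpoint concatenated with its dereferenced prevout data."""
    preimage = (txid + vout.to_bytes(4, "little")
                + amount.to_bytes(8, "little")
                + script_pubkey
                + height.to_bytes(4, "little")
                + bytes([is_coinbase]))
    return hashlib.sha256(preimage).digest()

uhs = set()  # the Unspent Transaction Output Hash Set: just hashes, no values

# Creating an output: insert its hash.
spk = b"\x00\x14" + b"\xaa" * 20
h = utxo_hash(b"\x11" * 32, 0, 50_000, spk, 500_000, False)
uhs.add(h)

# Spending: the peer supplies the dereferenced prevout; recompute and check.
claimed = utxo_hash(b"\x11" * 32, 0, 50_000, spk, 500_000, False)
assert claimed in uhs            # honest prevout data verifies
forged = utxo_hash(b"\x11" * 32, 0, 60_000, spk, 500_000, False)
assert forged not in uhs         # lying about the amount fails the check
```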

On 05/16/18 12:36, Cory Fields via bitcoin-dev wrote:
> Tl;dr: Rather than storing all unspent outputs, store their hashes. Untrusted
> peers can supply the full outputs when needed, with very little overhead.
> Any attempt to spoof those outputs would be apparent, as their hashes would 
> not
> be present in the hash set. There are many advantages to this, most apparently
> in disk and memory savings, as well as a validation speedup. The primary
> disadvantage is a small increase in network traffic. I believe that the
> advantages outweigh the disadvantages.
> 
> --
> 
> Bitcoin’s unspent transaction output set (usually referred to as “The UTXO
> set”) has two primary roles: providing proof that previous outputs exist to be
> spent, and providing the actual previous output data for verification when new
> transactions attempt to spend them. These roles are not usually discussed
> independently, but as Bram Cohen's TXO Bitfield [0] idea hints, there are
> compelling reasons to consider them this way.
> 
> To see why, consider running a node with the following changes:
> 
> - For each new output, gather all extra data that will be needed for
>   verification when spending it later as an input: the amount, scriptPubKey,
>   creation height, coinbaseness, and output type (p2pkh, p2sh, p2wpkh, etc.).
>   Call this the Dereferenced Prevout data.
> - Create a hash from the concatenation of the new outpoint and the 
> dereferenced
>   prevout data. Call this a Unspent Transaction Output Hash.
> - Rather than storing the full dereferenced prevout entries in a UTXO set as 
> is
>   currently done, instead store their hashes to an Unspent Transaction Output
>   Hash Set, or UHS.
> - When relaying a transaction, append the dereferenced prevout for each input.
> 
> Now when a transaction is received, it contains everything needed for
> verification, including the input amount, height, and coinbaseness, which 
> would
> have otherwise required a lookup in the UTXO set.
> 
> To verify an input's unspentness, again create a hash from the concatenation 
> of
> the referenced outpoint and the provided dereferenced prevout, and check for
> its presence in the UHS. The hash will only be present if a hash of the exact
> same data was previously added to (and not since removed from) the UHS. As
> such, we are protected from a peer attempting to lie about the dereferenced
> prevout data.
> 
> ### Some benefits of the UHS model
> 
> - Requires no consensus changes, purely a p2p/implementation change.
> 
> - UHS is substantially smaller than a full UTXO set (just over half for the
>   main chain, see below). In-memory caching can be much more effective as a
>   result.
> 
> - A block’s transactions can be fully verified before doing a potentially
>   expensive database lookup for the previous output data. The UHS can be
>   queried afterwards (or in parallel) to verify previous output inclusion.
> 
> - Entire blocks could potentially be verified out-of-order because all input
>   data is provided; only the inclusion checks have to be in-order. Admittedly
>   this is likely too complicated to be realistic.
> 
> - pay-to-pubkey outputs are less burdensome on full nodes, since they use no
>   more space on-disk than pay-to-pubkey-hash or pay-to-script-hash. Taproot 
> and
>   Graftroot outputs may share the same benefits.
> 
> - The burden of holding UTXO data is technically shifted from the verifiers to
>   the spender. In reality, full nodes will likely always have a copy as well,
>   but conceptually it's a slight improvement to the incentive model.
> 
> - Block data from peers can also be used to roll backwards during a reorg. 
> This
>   potentially enables an even more aggressive pruning mode.
> 
> - UTXO storage size grows exactly linearly with UTXO count, as opposed to
>   growing linearly with UTXO data size. This may be relevant when considering
>   new larger output types which would otherwise cause the UTXO Set size to
>   increase more quickly.
> 
> - The UHS is a simple set, no need for a key-value database. LevelDB could
>   potentially be dropped as a dependency in some distant future.
> 
> - Potentially integrates nicely with Pieter Wuille's Rolling UTXO set hashes
>   [1]. Unspent Transaction Output Hashes would simply be mapped to points on a
>   

[bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-05-17 Thread Matt Corallo via bitcoin-dev
BIP 158 currently includes the following in the "basic" filter: 1)
txids, 2) output scripts, 3) input prevouts.

I believe (1) could be skipped entirely - there is almost no reason why
you'd not be able to filter for, eg, the set of output scripts in a
transaction you know about and (2) and (3) may want to be split out -
many wallets may wish to just find transactions paying to them, as
transactions spending from their outputs should generally be things
they've created.

In general, I'm concerned about the size of the filters making existing
SPV clients less willing to adopt BIP 158 instead of the existing bloom
filter garbage and would like to see a further exploration of ways to
split out filters to make them less bandwidth intensive. Some further
ideas we should probably play with before finalizing moving forward is
providing filters for certain script templates, eg being able to only
get outputs that are segwit version X or other similar ideas.
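
The split argued for here can be sketched as separate per-block filters for 
output scripts and spent-prevout scripts (plain hash sets stand in for 
BIP 158's Golomb-coded sets; all names and data are hypothetical):

```python
import hashlib

def make_filter(elements):
    # Stand-in for a GCS filter: a set of truncated element hashes.
    return {hashlib.sha256(e).digest()[:8] for e in elements}

block_txs = [
    {"output_scripts": [b"spk_alice", b"spk_bob"], "spent_scripts": [b"spk_carol"]},
]

# Two independent filters instead of one combined "basic" filter:
filters = {
    "outputs": make_filter(s for tx in block_txs for s in tx["output_scripts"]),
    "spent_prevouts": make_filter(s for tx in block_txs for s in tx["spent_scripts"]),
}

def matches(filt, script):
    return hashlib.sha256(script).digest()[:8] in filt

# A receive-only wallet downloads just the output-script filter,
# saving the bandwidth of the prevout and txid data it doesn't need:
assert matches(filters["outputs"], b"spk_alice")
assert not matches(filters["outputs"], b"spk_carol")
```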

Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot: Privacy preserving switchable scripting

2018-01-27 Thread Matt Corallo via bitcoin-dev
Gah, please no. I see no material reason why cross-input signature aggregation 
shouldn't have the signatures in the first n-1 inputs replaced with something 
like a single-byte push where a signature is required to indicate aggregation, 
and the combined signature in the last input at whatever position the signature 
is required.

On January 27, 2018 5:07:25 PM UTC, Russell O'Connor via bitcoin-dev 
 wrote:
-snip-
>Cross-input signature aggregation probably requires a new field to be
>added
>to the P2P transaction structure to hold the aggregated signature,
>since
>there isn't really a good place to put it in the existing structure
>(there
>are games you can play to make it fit, but I think it is worthwhile). 
>The
>obvious way add block commitments to a new tx field is via the witness
>reserved value mechanism present in BIP 141.  At this point I think
>there
>will be some leeway to adjust the discount on the weight of this new
>aggregated signature tx field so that even a single input taproot using
>the
>aggregated signature system (here an aggregation of 1 signature) ends
>up no
>more expensive than a single input segwit P2WPKH.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot: Privacy preserving switchable scripting

2018-01-23 Thread Matt Corallo via bitcoin-dev
common use cases can be made more efficient through
>output specialization. To take a more obvious example, lightning
>protocol is still an active area or research and I think it is
>abundantly clear that we don’t know yet what the globally optimal
>layer-2 caching protocol will be, even if we have educated guesses as
>to its broad structure. A proposal right now to standardize a more
>compact lightning script type would be rightly rejected. It is less
>obvious but just as true that the same should hold for MAST.
>
>I have argued these points before in favor of permission less
>innovation first, then application specialization later, in [1] and at
>the end of the rather long email [2]. I hope you can take the time to
>read those if you still feel we should take a specialized template
>approach instead of the user programmable BIPSs 116 and 117.
>
>[1]
>https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015537.html
><https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015537.html>
>[2]
>https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015029.html
><https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015029.html>
>
>> On Jan 22, 2018, at 6:51 PM, Matt Corallo via bitcoin-dev
><bitcoin-dev@lists.linuxfoundation.org> wrote:
>> 
>> Thanks Greg!
>> 
>> I'd be hesitant to deploy a MAST proposal without this clever
>application of pay-to-contract-hash now! Looks like the overhead over a
>more-naive MAST construction is rather trivial, too!
>> 
>> Matt
>> 
>> On January 23, 2018 12:30:06 AM UTC, Gregory Maxwell via bitcoin-dev
><bitcoin-dev@lists.linuxfoundation.org> wrote:
>> Interest in merkelized scriptPubKeys (e.g. MAST) is driven by two
>main
>> areas: efficiency and privacy. Efficiency because unexecuted forks of
>> a script can avoid ever hitting the chain, and privacy because hiding
>> unexecuted code leaves scripts indistinguishable to the extent that
>> their only differences are in the unexecuted parts.
>> 
>> As Mark Friedenbach and others have pointed out before it is almost
>> always the case that interesting scripts have a logical top level
>> branch which allows satisfaction of the contract with nothing other
>> than a signature by all parties.  Other branches would only be used
>> where some participant is failing to cooperate. More strongly stated,
>> I believe that _any_ contract with a fixed finite participant set
>> upfront can be and should be represented as an OR between an N-of-N
>> and whatever more complex contract you might want to represent.
>> 
>> One point that comes up while talking about merkelized scripts is can
>> we go about making fancier contract use cases as indistinguishable as
>> possible from the most common and boring payments. Otherwise, if the
>> anonymity set of fancy usage is only other fancy usage it may not be
>> very large in practice. One suggestion has been that ordinary
>> checksig-only scripts should include a dummy branch for the rest of
>> the tree (e.g. a random value hash), making it look like there are
>> potentially alternative rules when there aren't really.  The negative
>> side of this is an additional 32-byte overhead for the overwhelmingly
>> common case which doesn't need it.  I think the privacy gains are
>> worth doing such a thing, but different people reason differently
>> about these trade-offs.
>> 
>> It turns out, however, that there is no need to make a trade-off. 
>The
>> special case of a top level "threshold-signature OR
>> arbitrary-conditions" can be made indistinguishable from a normal
>> one-party signature, with no overhead at all, with a special
>> delegating CHECKSIG which I call Taproot.
>> 
>> Let's say we want to create a coin that can be redeemed by either
>> Alice && Bob   or by CSV-timelock && Bob.
>> 
>> Alice has pubkey A, Bob has pubkey B.
>> 
>> We compute the 2-of-2 aggregate key C = A + B.  (Simplified; to
>> protect against rogue key attacks you may want to use the MuSig key
>> aggregation function [1])
>> 
>> We form our timelock script S = "<timeout> OP_CSV OP_DROP B
>> OP_CHECKSIGVERIFY"
>> 
>> Now we tweak C to produce P which is the key we'll publish: P = C +
>H(C||S)G.
>> 
>> (This is the attack hardened pay-to-contract construction described
>in [2])
>> 
>> Then we pay to a scriptPubKey of [Taproot supporting version] [EC
>point P].
>> 
>> Now Alice and Bob-- assuming they are both online and agree about the
>> resolution of

Re: [bitcoin-dev] Taproot: Privacy preserving switchable scripting

2018-01-22 Thread Matt Corallo via bitcoin-dev
Thanks Greg!

I'd be hesitant to deploy a MAST proposal without this clever application of 
pay-to-contract-hash now! Looks like the overhead over a more-naive MAST 
construction is rather trivial, too!

Matt

On January 23, 2018 12:30:06 AM UTC, Gregory Maxwell via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:
>Interest in merkelized scriptPubKeys (e.g. MAST) is driven by two main
>areas: efficiency and privacy. Efficiency because unexecuted forks of
>a script can avoid ever hitting the chain, and privacy because hiding
>unexecuted code leaves scripts indistinguishable to the extent that
>their only differences are in the unexecuted parts.
>
>As Mark Friedenbach and others have pointed out before it is almost
>always the case that interesting scripts have a logical top level
>branch which allows satisfaction of the contract with nothing other
>than a signature by all parties.  Other branches would only be used
>where some participant is failing to cooperate. More strongly stated,
>I believe that _any_ contract with a fixed finite participant set
>upfront can be and should be represented as an OR between an N-of-N
>and whatever more complex contract you might want to represent.
>
>One point that comes up while talking about merkelized scripts is can
>we go about making fancier contract use cases as indistinguishable as
>possible from the most common and boring payments. Otherwise, if the
>anonymity set of fancy usage is only other fancy usage it may not be
>very large in practice. One suggestion has been that ordinary
>checksig-only scripts should include a dummy branch for the rest of
>the tree (e.g. a random value hash), making it look like there are
>potentially alternative rules when there aren't really.  The negative
>side of this is an additional 32-byte overhead for the overwhelmingly
>common case which doesn't need it.  I think the privacy gains are
>worth doing such a thing, but different people reason differently
>about these trade-offs.
>
>It turns out, however, that there is no need to make a trade-off.  The
>special case of a top level "threshold-signature OR
>arbitrary-conditions" can be made indistinguishable from a normal
>one-party signature, with no overhead at all, with a special
>delegating CHECKSIG which I call Taproot.
>
>Let's say we want to create a coin that can be redeemed by either
>Alice && Bob   or by CSV-timelock && Bob.
>
>Alice has pubkey A, Bob has pubkey B.
>
>We compute the 2-of-2 aggregate key C = A + B.  (Simplified; to
>protect against rogue key attacks you may want to use the MuSig key
>aggregation function [1])
>
>We form our timelock script S = "<timeout> OP_CSV OP_DROP B
>OP_CHECKSIGVERIFY"
>
>Now we tweak C to produce P which is the key we'll publish: P = C +
>H(C||S)G.
>
>(This is the attack hardened pay-to-contract construction described in
>[2])
>
>Then we pay to a scriptPubKey of [Taproot supporting version] [EC point
>P].
>
>Now Alice and Bob-- assuming they are both online and agree about the
>resolution of their contract-- can jointly form a 2 of 2 signature for
>P, and spend as if it were a payment to a single party (one of them
>just needs to add H(C||S) to their private key).
>
>Alternatively, the Taproot consensus rules would allow this script to
>be satisfied by someone who provides the network with C (the original
>combined pubkey), S, and does whatever S requires-- e.g. passes the
>CSV check and provides Bob's signature. With this information the
>network can verify that C + H(C||S) == P.
>
>So in the all-sign case there is zero overhead; and no one can tell
>that the contract alternative exists. In the alternative redemption
>branch the only overhead is revealing the original combined pubkey
>and, of course, the existence of the contract is made public.
>
>This composes just fine with whatever other merkelized script system
>we might care to use, as the S can be whatever kind of data we want,
>including the root of some tree.
>
>My example shows 2-of-2 but it works the same for any number of
>participants (and with setup interaction any threshold of
>participants, so long as you don't mind an inability to tell which
>members signed off).
>
>The verification computational complexity of signature path is
>obviously the same as any other plain signature (since its
>indistinguishable). Verification of the branch redemption requires a
>hash and a multiplication with a constant point which is strictly more
>efficient than a signature verification and could be efficiently fused
>into batch signature validation.
>
>The nearest competitor to this idea that I can come up with would
>supporting a simple delegation where the output can be spent by the
>named key, or a spending transaction could provide a script along with
>a signature of that script by the named key, delegating control to the
>signed script. Before paying into that escrow Alice/Bob would
>construct this signature. This idea is equally efficient in the common
>case, but larger and slower 

Re: [bitcoin-dev] Satoshilabs secret shared private key scheme

2018-01-17 Thread Matt Corallo via bitcoin-dev
Or make it a part of your secret-split logic... Gotta love how fast GF(2^8) is:
https://github.com/TheBlueMatt/shamirs/blob/master/main.c#L57

On January 17, 2018 3:31:44 PM UTC, Gregory Maxwell via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:
>If the generalization isn't obvious, it might be helpful to make a
>little test utility that tries all possible one byte messages with all
>possible share values using the GF(256) sharing scheme proposed in the
>draft-- in this case information theory is why we can know SSS (and
>similar) have (within their limited scope) _perfect_ security, rather
>than it being a reason to speculate that they might not turn out to be
>secure at all. (or, instead of a test utility just work through some
>examples on paper in a small field).
>
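A minimal illustration of a GF(256) sharing scheme of the kind Greg refers to — single-byte secrets, using the AES reduction polynomial for brevity. This is illustrative only, not the SatoshiLabs draft's exact construction nor the linked C implementation:

```python
import secrets

def gf_mul(a, b):
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11b)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^255 == 1 for a != 0, so a^254 == a^-1

def split(secret_byte, k, m):
    """Split one byte into m shares, any k of which recover it."""
    # random degree-(k-1) polynomial with constant term = secret
    coeffs = [secret_byte] + [secrets.randbelow(256) for _ in range(k - 1)]
    shares = []
    for x in range(1, m + 1):
        y = 0
        for i, c in enumerate(coeffs):
            y ^= gf_mul(c, gf_pow(x, i))
        shares.append((x, y))
    return shares

def combine(shares):
    """Lagrange-interpolate the polynomial at x = 0."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = gf_mul(num, xj)        # (0 - xj) == xj in char-2 fields
                den = gf_mul(den, xi ^ xj)
        secret ^= gf_mul(yi, gf_mul(num, gf_inv(den)))
    return secret

shares = split(0x42, 3, 5)
assert combine(shares[:3]) == 0x42
assert combine(shares[2:5]) == 0x42
```

This is also a convenient way to convince yourself of the perfect-security point: with fewer than k shares, every candidate secret byte remains equally consistent with the observed shares.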


Re: [bitcoin-dev] Ivy: a higher-level language targeting Bitcoin Script

2018-01-14 Thread Matt Corallo via bitcoin-dev
I'm curious if you've considered adding some form of compile-time enforcement 
to prevent witness malleability? With that, Ivy could help to resolve for its 
users one of the things that can make Bitcoin scripts more complicated to 
write, instead of simply type-checking and providing a high-level language 
mapped 1-to-1 with Bitcoin script.

On December 18, 2017 8:32:17 PM UTC, Daniel Robinson via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:
>Today, we’re releasing Ivy, a prototype higher-level language and
>development environment for creating custom Bitcoin Script programs.
>You
>can see the full announcement here
>,
>or check out the docs  and source
>code
>.
>
>Ivy is a simple smart contract language that can compile to Bitcoin
>Script.
>It aims to improve on the useability of Bitcoin Script by adding
>affordances like named variables and clauses, static (and
>domain-specific)
>types, and familiar syntax for function calls.
>
>To try out Ivy, you can use the Ivy Playground for Bitcoin,
>which allows you to create and test
>simulated contracts in a sandboxed environment.
>
>This is prototype software intended for educational and research
>purposes
>only. Please don't try to use Ivy to control real or testnet Bitcoins.


Re: [bitcoin-dev] BIP-21 amendment proposal: -no125

2017-12-23 Thread Matt Corallo via bitcoin-dev
While the usability of non-RBF transactions tends to be quite poor, there are 
some legitimate risk-analysis-based reasons why people use them (eg to sell BTC 
based on a incoming transaction which you will need to convert to fiat, which 
has low cost if the transaction doesn't confirm), and if people want to overpay 
on fees to do so, no reason not to let them, including if the merchant is 
willing to CPFP to do so.

Honestly, I anticipate very low usage of such a flag, which is appropriate, but 
also strongly support including it. If things turn out differently with 
merchants reducing the usability of BTC without taking over the CPFP 
responsibility we could make the option imply receiver-pays-fee, but no reason 
to overcomplicate it yet.
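For concreteness, a wallet-side sketch of honoring the proposed flag when parsing a BIP-21 URI. The `no125` key is the proposal under discussion, not a standardized parameter, and the address below is a placeholder:

```python
from urllib.parse import urlsplit, parse_qs

def parse_bip21(uri):
    """Parse a BIP-21 payment URI, including the proposed no125 flag."""
    parts = urlsplit(uri)
    if parts.scheme != "bitcoin":
        raise ValueError("not a BIP-21 URI")
    params = parse_qs(parts.query)
    return {
        "address": parts.path,
        "amount": float(params["amount"][0]) if "amount" in params else None,
        # a wallet honoring the flag would skip BIP-125 signaling when True,
        # unless the user explicitly overrides the merchant's preference
        "no_rbf": params.get("no125", ["0"])[0] == "1",
    }

# placeholder address, not a real one
req = parse_bip21("bitcoin:1ExamplePlaceholderAddr?amount=20.3&no125=1")
assert req["no_rbf"] is True
assert req["amount"] == 20.3
```

Absent the parameter, `no_rbf` defaults to False, matching the BIP-21 convention that unknown or missing optional parameters cause no change in behavior.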

On December 11, 2017 1:19:43 PM EST, Peter Todd via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:
>On Tue, Dec 05, 2017 at 07:39:32PM +, Luke Dashjr via bitcoin-dev
>wrote:
>> On Tuesday 05 December 2017 7:24:04 PM Sjors Provoost wrote:
>> > I recently submitted a pull request that would turn on RBF by
>default,
>> > which triggered some discussion [2]. To ease the transition for
>merchants
>> > who are reluctant to see their customers use RBF, Matt Corallo
>suggested
>> > that wallets honor a no125=1 flag.
>> > 
>> > So a BIP-21 URI would look like this:
>> > bitcoin:175t...45W?amount=20.3&no125=1
>> > 
>> > When this flag is set, wallets should not use RBF, regardless of
>their
>> > default, unless the user explicitly overrides the merchant's
>preference.
>> 
>> This seems counterproductive. There is no reason to ever avoid the
>RBF flag. 
>> I'm not aware of any evidence it even reduces risk of, and it
>certainly 
>> doesn't prevent double spending. Plenty of miners allow RBF
>regardless of the 
>> flag, and malicious double spending doesn't benefit much from RBF in
>any case.
>
>I'll second the objection to a no-RBF flag.
>
>-- 
>https://petertodd.org 'peter'[:-1]@petertodd.org


Re: [bitcoin-dev] Two Drivechain BIPs

2017-12-03 Thread Matt Corallo via bitcoin-dev
Process note: It looks like the BIPs have never been posted to
bitcoin-dev, only high-level discussion around the idea. As I understand
it, this is *not* sufficient for BIP number assignment nor
(realistically) sufficient to call it a hard "proposal" for a change to
consensus rules.

Would love to get feedback from some others who are looking at deploying
real-world sidechains, eg the RSK folks. We can't end up with *two*
protocols for sidechains in Bitcoin.

Comments on BIP 1:

At a high level, I'm rather disappointed by the amount of data that is
going into the main chain here. Things such as a human readable name
have no place in the chain, IMO. Further, the use of a well-known
private key seems misplaced, why not just track the sidechain balance
with something that looks like `OP_NOPX genesis_block_hash`?

I'm not convinced by the full semantics of proposal/ack of new
sidechains. Given the lack of convincing evidence that the "Risk of
centralisation of mining" drawback in section 4.3 of the sidechains
paper has been meaningfully addressed, I'd say it's pretty important that
new sidechains be an incredibly rare event. Thus, a much simpler system
(eg a version-bits-based upgrade cycle with high threshold) could be
used to add new sidechains based on well-known public parameters.

The semantics of the deposit process seem very suboptimal. You note that
only one user can deposit at a time, but this seems entirely
unnecessary. As implemented in the first Elements Alpha release (though
I believe subsequently removed in later versions due to the nature of
Elements of targeting asymmetric "federated" sidechains), if you have
outputs that look like `OP_NOPX genesis_block_hash` as the sidechain
deposit/storage address, deposit can be fully parallel. To reduce
blockchain bloat, spending them for the purpose of combining such
outputs is also allowed. You could even go further and allow some new
sighash type to define something like SIGHASH_ALL|SIGHASH_ANYONECANPAY
which further specifies some semantics for combining inputs which all
pay into the same output.

Finally, you may also want to explore some process for the removal of
sidechain balances from the main chain. As proposed it seems like a
sidechain might, over time, fade into an insecure state as mining power
shifts and new miners no longer consider it worth the value to mine an
old sidechain (as has happened over time with namecoin, arguably).


Comments on BIP 2:

I may be missing something, but I find the security model here kind of
depressing...Not only do hashpower-secured sidechains already have a
significantly reduced security level, but now you're proposing to
further (drastically) reduce it by requiring users to potentially pay in
excess of the value an attacker is willing to pay to keep their chain
secure, on a recurring basis? It seems like if a chain has 10 BTC stored
in it, and I wish to reorg it for a potential gain of, lets say, 6 BTC,
I can pay 6 * 1 BTC (1 per block) to reorg it, and users on the chain
would be forced to pay >6 BTC to avoid this?
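The arithmetic in that example can be stated as a toy model (all values hypothetical, ignoring orphan risk and the miners' reputational stake):

```python
def reorg_attack_profit(stolen_value, blocks_needed, bribe_per_block):
    """Attacker's net gain from bribing blind merge-miners to reorg a
    sidechain: payoff minus the total bribe across the reorg window."""
    return stolen_value - blocks_needed * bribe_per_block

# The example above: ~6 BTC to gain by reorging, 1 BTC of bribe per block.
# A 6-block reorg is break-even; any shorter or cheaper reorg is profitable,
# and honest users must outbid the attacker's total budget to prevent it.
assert reorg_attack_profit(6.0, 6, 1.0) == 0.0
assert reorg_attack_profit(6.0, 5, 1.0) > 0
```

The point of the model is that the defenders' cost scales with the attacker's budget rather than with the value actually at risk, which is what makes the security argument uncomfortable.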

While I appreciate the desire to implement the proposed mitigation in
section 4.3 of the sidechains paper (delegating the mining effort of a
merge-mined sidechain to an external entity), I believe it was primarily
referencing pooling the sidechain work, not blindly taking the highest
bidder. I suppose, indeed, that, ultimately, as long as the sidechain is
of relatively small value in comparison to BTC, miners do not risk the
value of their BTC/mining investment in simply taking the highest bidder
of a merge-mined block, even if its a clear attack, but I don't think
thats something to be celebrated, encouraged, or designed to be possible
by default. Instead, I'd, in line with Peter Todd's (and others')
objection to merged mining generally, call this one of the most critical
issues with the security model.

Ultimately, I dont believe your proposal here really solves the drawback
in section 4.3 of the paper, and possibly makes it worse. Instead, it
may be more useful to rely on a high threshold for the addition of new
sidechains, though I'd love to see discussion on this point specifically
on this list. Further, I'd say, at a minimum, a very stable
default-available low-bandwidth implementation of at least the
pool-based mitigation suggested in the paper must exist for something
like this to be considered readily stable enough to be deployed into the
Bitcoin ecosystem.

Matt

On 12/01/17 13:38, Paul Sztorc via bitcoin-dev wrote:
> Hello,
> 
> First, Drivechain has vaguely escaped vaporware status. If you've ever
> thought "I'd like to take a look into Drivechain when there is code",
> then now is a pretty good time. (Unfinished items include M1, and M8_V2.)
> 
> https://github.com/drivechain-project/bitcoin/tree/mainchainBMM
> 
> Also,
> Site:  http://www.drivechain.info/
> Blank sidechain:
> https://github.com/drivechain-project/bitcoin/tree/sidechainBMM
> 
> Second, I think drivechain's documentation 

Re: [bitcoin-dev] Making OP_CODESEPARATOR and FindAndDelete in non-segwit scripts non-standard

2017-11-27 Thread Matt Corallo via bitcoin-dev
Indeed, the PR in question does *not* change the semantics of
OP_CODESEPARATOR within SegWit redeemScripts, where it is still allowed
(and Nicolas Dorier pointed out that he was using it in TumbleBit), so
there are still ways to use it, but only in places, like SegWit, where
the potential validation complexity blowup is massively reduced.

I am not sure that OP_CODESEPARATOR is entirely useless in pre-SegWit
scripts (I believe Nicolas' construction may still be relevant
pre-SegWit), though I strongly believe FindAndDelete is.

I don't think CODESEPARATOR rises to the threshold of it being "widely
known to be useless", but certainly the historical use of it (to
separate the scriptSig and the scriptPubKey in the scriptCode, which was
run as a single concatenated thing in the original design) is no longer
relevant. FindAndDelete is equally irrelevant, if not more so.

Matt

On 11/27/17 16:06, Mark Friedenbach wrote:
> It is relevant to note that BIP 117 makes an insecure form of
> CODESEPARATOR delegation possible, which could be made secure if some
> sort of CHECKSIGFROMSTACK opcode is added at a later point in time. It
> is not IMHO a very elegant way to achieve delegation, however, so I hope
> that one way or another this could be resolved quickly so it doesn’t
> hold up either one of those valuable additions.
> 
> I have no objections to making them nonstandard, or even to make them
> invalid if someone with a better grasp of history can attest that
> CODESEPARATOR was known to be entirely useless before the introduction
> of P2SH—not the same as saying it was useless, but that it was widely
> known to not accomplish what a early-days script author might think it
> was doing—and the UTXO set contains no scriptPubKeys making use of the
> opcode, even from the early days. Although a small handful could be
> special cased, if they exist.
> 
>> On Nov 27, 2017, at 8:33 AM, Matt Corallo wrote:
>>
>> I strongly disagree here - we don't only soft-fork out transactions that
>> are "fundamentally insecure", that would be significantly too
>> restrictive. We have generally been willing to soft-fork out things
>> which clearly fall outside of best-practices, especially rather
>> "useless" fields in the protocol eg soft-forking behavior into OP_NOPs,
>> soft-forking behavior into nSequence, etc.
>>
>> As a part of setting clear best-practices, making things non-standard is
>> the obvious step, though there has been active discussion of
>> soft-forking out FindAndDelete and OP_CODESEPARATOR for years now. I
>> obviously do not claim that we should be proposing a soft-fork to
>> blacklist FindAndDelete and OP_CODESEPARATOR usage any time soon, and
>> assume that it would take at least a year or three from when it was made
>> non-standard to when a soft-fork to finally remove them was proposed.
>> This should be more than sufficient time for folks using such weird (and
>> largely useless) parts of the protocol to object, which should be
>> sufficient to reconsider such a soft-fork.
>>
>> Independently, making them non-standard is a good change on its own, and
>> if nothing else should better inform discussion about the possibility of
>> anyone using these things.
>>
>> Matt
>>
>> On 11/15/17 14:54, Mark Friedenbach via bitcoin-dev wrote:
>>> As good of an idea as it may or may not be to remove this feature from
>>> the code base, actually doing so would be crossing a boundary that we
>>> have not previously been willing to do except under extraordinary
>>> duress. The nature of bitcoin is such that we do not know and cannot
>>> know what transactions exist out there pre-signed and making use of
>>> these features.
>>>
>>> It may be a good idea to make these features non standard to further
>>> discourage their use, but I object to doing so with the justification of
>>> eventually disabling them for all transactions. Taking that step has the
>>> potential of destroying value and is something that we have only done in
>>> the past either because we didn’t understand forks and best practices
>>> very well, or because the features (now disabled) were fundamentally
>>> insecure and resulted in other people’s coins being vulnerable. This
>>> latter concern does not apply here as far as I’m aware.
>>>
>>> On Nov 15, 2017, at 8:02 AM, Johnson Lau via bitcoin-dev
>>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
 In https://github.com/bitcoin/bitcoin/pull/11423 I propose to
 make OP_CODESEPARATOR and FindAndDelete in non-segwit scripts
 non-standard

 I think FindAndDelete() is one of the most useless and complicated
 functions in the script language. It is omitted from segwit (BIP143),
 but we still need to support it in non-segwit scripts. Actually,
 FindAndDelete() would only be triggered in some weird 

Re: [bitcoin-dev] Making OP_CODESEPARATOR and FindAndDelete in non-segwit scripts non-standard

2017-11-27 Thread Matt Corallo via bitcoin-dev
I strongly disagree here - we don't only soft-fork out transactions that
are "fundamentally insecure", that would be significantly too
restrictive. We have generally been willing to soft-fork out things
which clearly fall outside of best-practices, especially rather
"useless" fields in the protocol eg soft-forking behavior into OP_NOPs,
soft-forking behavior into nSequence, etc.

As a part of setting clear best-practices, making things non-standard is
the obvious step, though there has been active discussion of
soft-forking out FindAndDelete and OP_CODESEPARATOR for years now. I
obviously do not claim that we should be proposing a soft-fork to
blacklist FindAndDelete and OP_CODESEPARATOR usage any time soon, and
assume that it would take at least a year or three from when it was made
non-standard to when a soft-fork to finally remove them was proposed.
This should be more than sufficient time for folks using such weird (and
largely useless) parts of the protocol to object, which should be
sufficient to reconsider such a soft-fork.

Independently, making them non-standard is a good change on its own, and
if nothing else should better inform discussion about the possibility of
anyone using these things.

Matt

On 11/15/17 14:54, Mark Friedenbach via bitcoin-dev wrote:
> As good of an idea as it may or may not be to remove this feature from
> the code base, actually doing so would be crossing a boundary that we
> have not previously been willing to do except under extraordinary
> duress. The nature of bitcoin is such that we do not know and cannot
> know what transactions exist out there pre-signed and making use of
> these features.
> 
> It may be a good idea to make these features non standard to further
> discourage their use, but I object to doing so with the justification of
> eventually disabling them for all transactions. Taking that step has the
> potential of destroying value and is something that we have only done in
> the past either because we didn’t understand forks and best practices
> very well, or because the features (now disabled) were fundamentally
> insecure and resulted in other people’s coins being vulnerable. This
> latter concern does not apply here as far as I’m aware.
> 
> On Nov 15, 2017, at 8:02 AM, Johnson Lau via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> 
>> In https://github.com/bitcoin/bitcoin/pull/11423 I propose to
>> make OP_CODESEPARATOR and FindAndDelete in non-segwit scripts non-standard
>>
>> I think FindAndDelete() is one of the most useless and complicated
>> functions in the script language. It is omitted from segwit (BIP143),
>> but we still need to support it in non-segwit scripts. Actually,
>> FindAndDelete() would only be triggered in some weird edge cases like
>> using out-of-range SIGHASH_SINGLE.
>>
>> Non-segwit scripts also use a FindAndDelete()-like function to remove
>> OP_CODESEPARATOR from scriptCode. Note that in BIP143, only executed
>> OP_CODESEPARATOR are removed so it doesn’t have the
>> FindAndDelete()-like function. OP_CODESEPARATOR in segwit scripts are
>> useful for Tumblebit so it is not disabled in this proposal
>>
>> By disabling both, it guarantees that scriptCode serialized inside
>> SignatureHash() must be constant
>>
>> If we use a softfork to remove FindAndDelete() and OP_CODESEPARATOR
>> from non-segwit scripts, we could completely remove FindAndDelete()
>> from the consensus code later by whitelisting all blocks before the
>> softfork block. The first step is to make them non-standard in the
>> next release.
>>
>>
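For context, FindAndDelete's scriptCode scrubbing amounts to roughly the following. This is a simplification: the consensus version also handles adjacent repeated matches specially, and the push parsing below assumes a well-formed script:

```python
def parse_ops(script: bytes):
    """Split a (well-formed) script into its serialized operations."""
    ops, i = [], 0
    while i < len(script):
        op, start = script[i], i
        i += 1
        if op <= 75:                     # direct push of `op` bytes
            i += op
        elif op == 76:                   # OP_PUSHDATA1
            i += 1 + script[i]
        elif op == 77:                   # OP_PUSHDATA2
            i += 2 + int.from_bytes(script[i:i + 2], 'little')
        elif op == 78:                   # OP_PUSHDATA4
            i += 4 + int.from_bytes(script[i:i + 4], 'little')
        # any other opcode is a single byte
        ops.append(script[start:i])
    return ops

def find_and_delete(script: bytes, elem: bytes) -> bytes:
    """Drop every whole-operation match of `elem` (itself a serialized op),
    as happens to the signature inside scriptCode before hashing."""
    return b''.join(op for op in parse_ops(script) if op != elem)

# OP_1, a 2-byte push, OP_2 -- deleting the push leaves OP_1 OP_2
s = bytes([0x51, 0x02, 0xAB, 0xCD, 0x52])
assert find_and_delete(s, bytes([0x02, 0xAB, 0xCD])) == bytes([0x51, 0x52])
```

Seeing it written out makes the complaint concrete: the signature hash depends on a rewritten copy of the script, which is exactly the kind of hidden mutation BIP143 dropped.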


Re: [bitcoin-dev] Simplicity: An alternative to Script

2017-10-30 Thread Matt Corallo via bitcoin-dev
OK, fair enough, just wanted to make sure we were on the same page.
"Thorny issues there and there hasn't been a ton of effort put into what
Bitcoin integration and maintainability looks like" is a perfectly fair
response :)

Matt

On 10/30/17 18:32, Mark Friedenbach wrote:
> I was just making a factual observation/correction. This is Russell’s project 
> and I don’t want to speak for him. Personally I don’t think the particulars 
> of bitcoin integration design space have been thoroughly explored enough to 
> predict the exact approach that will be used.
> 
> It is possible to support a standard library of jets that are general purpose 
> enough to allow the validation of new crypto primitives, like reusing sha2 to 
> make Lamport signatures. Or use curve-agnostic jets to do Weil pairing 
> validation. Or string manipulation and serialization jets to implement 
> covenants. So I don’t think the situation is as dire as you make it sound.
> 
>> On Oct 30, 2017, at 3:14 PM, Matt Corallo <lf-li...@mattcorallo.com> wrote:
>>
>> Are you anticipating it will be reasonably possible to execute more
>> complicated things in interpreted form even after "jets" are put in
>> place? If not its just a soft-fork to add new script operations and
>> going through the effort of making them compatible with existing code
>> and using a full 32 byte hash to represent them seems wasteful - might
>> as well just add a "SHA256 opcode".
>>
>> Either way it sounds like you're assuming a pretty aggressive soft-fork
>> cadence? I'm not sure if that's so practical right now (or are you
>> thinking it would be more practical if things were
>> drop-in-formally-verified-equivalent-replacements?).
>>
>> Matt
>>
>>> On 10/30/17 17:56, Mark Friedenbach wrote:
>>> Script versions makes this no longer a hard-fork to do. The script
>>> version would implicitly encode which jets are optimized, and what their
>>> optimized cost is.
>>>
>>>> On Oct 30, 2017, at 2:42 PM, Matt Corallo via bitcoin-dev
>>>> <bitcoin-dev@lists.linuxfoundation.org
>>>> <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
>>>>
>>>> I admittedly haven't had a chance to read the paper in full details,
>>>> but I was curious how you propose dealing with "jets" in something
>>>> like Bitcoin. AFAIU, other similar systems are left doing hard-forks
>>>> to reduce the sigops/weight/fee-cost of transactions every time they
>>>> want to add useful optimized drop-ins. For obvious reasons, this seems
>>>> rather impractical and a potentially critical barrier to adoption of
>>>> such optimized drop-ins, which I imagine would be required to do any
>>>> new cryptographic algorithms due to the significant fee cost of
>>>> interpreting such things.
>>>>
>>>> Is there some insight I'm missing here?
>>>>
>>>> Matt
>>>>
>>>> On October 30, 2017 11:22:20 AM EDT, Russell O'Connor via bitcoin-dev
>>>> <bitcoin-dev@lists.linuxfoundation.org
>>>> <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
>>>>
>>>>I've been working on the design and implementation of an
>>>>alternative to Bitcoin Script, which I call Simplicity.  Today, I
>>>>am presenting my design at the PLAS 2017 Workshop
>>>><http://plas2017.cse.buffalo.edu/> on Programming Languages and
>>>>Analysis for Security.  You can find a copy of my Simplicity paper at
>>>>https://blockstream.com/simplicity.pdf
>>>><https://blockstream.com/simplicity.pdf>
>>>>
>>>>Simplicity is a low-level, typed, functional, native MAST language
>>>>where programs are built from basic combinators.  Like Bitcoin
>>>>Script, Simplicity is designed to operate at the consensus layer. 
>>>>While one can write Simplicity by hand, it is expected to be the
>>>>target of one, or multiple, front-end languages.
>>>>
>>>>Simplicity comes with formal denotational semantics (i.e.
>>>>semantics of what programs compute) and formal operational
>>>>semantics (i.e. semantics of how programs compute). These are both
>>>>formalized in the Coq proof assistant and proven equivalent.
>>>>
>>>>Formal denotational semantics are of limited value unless one can
>>>>use them in practice to reason about programs. I've used
>>>>Simpl

Re: [bitcoin-dev] Simplicity: An alternative to Script

2017-10-30 Thread Matt Corallo via bitcoin-dev
Are you anticipating it will be reasonably possible to execute more
complicated things in interpreted form even after "jets" are put in
place? If not its just a soft-fork to add new script operations and
going through the effort of making them compatible with existing code
and using a full 32 byte hash to represent them seems wasteful - might
as well just add a "SHA256 opcode".

Either way it sounds like you're assuming a pretty aggressive soft-fork
cadence? I'm not sure if that's so practical right now (or are you
thinking it would be more practical if things were
drop-in-formally-verified-equivalent-replacements?).

Matt

On 10/30/17 17:56, Mark Friedenbach wrote:
> Script versions makes this no longer a hard-fork to do. The script
> version would implicitly encode which jets are optimized, and what their
> optimized cost is.
> 
>> On Oct 30, 2017, at 2:42 PM, Matt Corallo via bitcoin-dev
>> <bitcoin-dev@lists.linuxfoundation.org
>> <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
>>
>> I admittedly haven't had a chance to read the paper in full detail,
>> but I was curious how you propose dealing with "jets" in something
>> like Bitcoin. AFAIU, other similar systems are left doing hard-forks
>> to reduce the sigops/weight/fee-cost of transactions every time they
>> want to add useful optimized drop-ins. For obvious reasons, this seems
>> rather impractical and a potentially critical barrier to adoption of
>> such optimized drop-ins, which I imagine would be required to do any
>> new cryptographic algorithms due to the significant fee cost of
>> interpreting such things.
>>
>> Is there some insight I'm missing here?
>>
>> Matt
>>
>> On October 30, 2017 11:22:20 AM EDT, Russell O'Connor via bitcoin-dev
>> <bitcoin-dev@lists.linuxfoundation.org
>> <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
>>
>> I've been working on the design and implementation of an
>> alternative to Bitcoin Script, which I call Simplicity.  Today, I
>> am presenting my design at the PLAS 2017 Workshop
>> <http://plas2017.cse.buffalo.edu/> on Programming Languages and
>> Analysis for Security.  You find a copy of my Simplicity paper at
>> https://blockstream.com/simplicity.pdf
>> <https://blockstream.com/simplicity.pdf>
>>
>> Simplicity is a low-level, typed, functional, native MAST language
>> where programs are built from basic combinators.  Like Bitcoin
>> Script, Simplicity is designed to operate at the consensus layer. 
>> While one can write Simplicity by hand, it is expected to be the
>> target of one, or multiple, front-end languages.
>>
>> Simplicity comes with formal denotational semantics (i.e.
>> semantics of what programs compute) and formal operational
>> semantics (i.e. semantics of how programs compute). These are both
>> formalized in the Coq proof assistant and proven equivalent.
>>
>> Formal denotational semantics are of limited value unless one can
>> use them in practice to reason about programs. I've used
>> Simplicity's formal semantics to prove correct an implementation
>> of the SHA-256 compression function written in Simplicity.  I have
>> also implemented a variant of ECDSA signature verification in
>> Simplicity, and plan to formally validate its correctness along
>> with the associated elliptic curve operations.
>>
>> Simplicity comes with easy to compute static analyses that can
>> compute bounds on the space and time resources needed for
>> evaluation.  This is important for both node operators, so that
>> the costs are known before evaluation, and for designing
>> Simplicity programs, so that smart-contract participants can know
>> the costs of their contract before committing to it.
>>
>> As a native MAST language, unused branches of Simplicity programs
>> are pruned at redemption time.  This enhances privacy, reduces the
>> block weight used, and can reduce space and time resource costs
>> needed for evaluation.
>>
>> To make Simplicity practical, jets replace common Simplicity
>> expressions (identified by their MAST root) and directly implement
>> them with C code.  I anticipate developing a broad set of useful
>> jets covering arithmetic operations, elliptic curve operations,
>> and cryptographic operations including hashing and digital
>> signature validation.
>>
>> The paper I am presenting at PLAS describes only the foundation of

Re: [bitcoin-dev] Simplicity: An alternative to Script

2017-10-30 Thread Matt Corallo via bitcoin-dev
I admittedly haven't had a chance to read the paper in full detail, but I was 
curious how you propose dealing with "jets" in something like Bitcoin. AFAIU, 
other similar systems are left doing hard-forks to reduce the 
sigops/weight/fee-cost of transactions every time they want to add useful 
optimized drop-ins. For obvious reasons, this seems rather impractical and a 
potentially critical barrier to adoption of such optimized drop-ins, which I 
imagine would be required to do any new cryptographic algorithms due to the 
significant fee cost of interpreting such things.

Is there some insight I'm missing here?

Matt

On October 30, 2017 11:22:20 AM EDT, Russell O'Connor via bitcoin-dev 
 wrote:
>I've been working on the design and implementation of an alternative to
>Bitcoin Script, which I call Simplicity.  Today, I am presenting my
>design
>at the PLAS 2017 Workshop  on
>Programming
>Languages and Analysis for Security.  You find a copy of my Simplicity
>paper at https://blockstream.com/simplicity.pdf
>
>Simplicity is a low-level, typed, functional, native MAST language
>where
>programs are built from basic combinators.  Like Bitcoin Script,
>Simplicity
>is designed to operate at the consensus layer.  While one can write
>Simplicity by hand, it is expected to be the target of one, or
>multiple,
>front-end languages.
>
>Simplicity comes with formal denotational semantics (i.e. semantics of
>what
>programs compute) and formal operational semantics (i.e. semantics of
>how
>programs compute). These are both formalized in the Coq proof assistant
>and
>proven equivalent.
>
>Formal denotational semantics are of limited value unless one can use
>them
>in practice to reason about programs. I've used Simplicity's formal
>semantics to prove correct an implementation of the SHA-256 compression
>function written in Simplicity.  I have also implemented a variant of
>ECDSA
>signature verification in Simplicity, and plan to formally validate its
>correctness along with the associated elliptic curve operations.
>
>Simplicity comes with easy to compute static analyses that can compute
>bounds on the space and time resources needed for evaluation.  This is
>important for both node operators, so that the costs are known before
>evaluation, and for designing Simplicity programs, so that
>smart-contract
>participants can know the costs of their contract before committing to
>it.
>
>As a native MAST language, unused branches of Simplicity programs are
>pruned at redemption time.  This enhances privacy, reduces the block
>weight
>used, and can reduce space and time resource costs needed for
>evaluation.
>
>To make Simplicity practical, jets replace common Simplicity
>expressions
>(identified by their MAST root) and directly implement them with C
>code.  I
>anticipate developing a broad set of useful jets covering arithmetic
>operations, elliptic curve operations, and cryptographic operations
>including hashing and digital signature validation.
>
>The paper I am presenting at PLAS describes only the foundation of the
>Simplicity language.  The final design includes extensions not covered
>in
>the paper, including
>
>- full covenant support, allowing access to all transaction data.
>- support for signature aggregation.
>- support for delegation.
>
>Simplicity is still in a research and development phase.  I'm working
>to
>produce a bare-bones SDK that will include
>
>- the formal semantics and correctness proofs in Coq
>- a Haskell implementation for constructing Simplicity programs
>- and a C interpreter for Simplicity.
>
>After an SDK is complete the next step will be making Simplicity
>available
>in the Elements project  so that anyone
>can
>start experimenting with Simplicity in sidechains. Only after extensive
>vetting would it be suitable to consider Simplicity for inclusion in
>Bitcoin.
>
>Simplicity has a long way to go still, and this work is not intended
>to
>delay consideration of the various Merkelized Script proposals that are
>currently ongoing.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Rebatable fees & incentive-safe fee markets

2017-09-28 Thread Matt Corallo via bitcoin-dev
I'm somewhat curious what the authors envisioned the real-world implications of 
this model to be. While blindly asking users to enter what they're willing to 
pay always works in theory, I'd imagine in such a world the fee selection UX 
would be similar to what it is today - users are provided a list of options 
with feerates and expected confirmation times from which to select. Indeed, in 
a world where users pay a lower fee if they paid more than necessary, fee 
estimation could be more willing to overshoot and the UX around RBF and CPFP 
could be simplified greatly, but I'm not actually convinced that it would 
result in higher overall mining revenue.

The UX issues with RBF and CPFP, not to mention the UX issues involved in 
optimizing for quick confirmation are, indeed, quite significant, but I believe 
them to be solvable with rather straightforward changes. Making the market 
more usable (for higher or lower overall miner revenue) may be a sufficient 
goal, however, to want to consider something like this.
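The revenue question can be checked with toy numbers. A minimal sketch (hypothetical bids, equal-sized transactions, fixed block slots) comparing pay-your-bid revenue against the paper's pay-lowest-winning-bid rule:

```python
# Toy model: each bid is a feerate for an equal-sized transaction. Under
# the uniform-price rule, every included transaction pays the lowest
# included bid.
def pay_your_bid(bids, slots):
    return sum(sorted(bids, reverse=True)[:slots])

def lowest_winning_bid(bids, slots):
    winners = sorted(bids, reverse=True)[:slots]
    return len(winners) * min(winners)
```

For bids [5, 4, 3, 2, 1] and three slots, pay-your-bid collects 12 while the uniform price collects 9 for the same bids; any revenue gain has to come from truthful bidding raising the bids themselves, which this toy ignores.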

On September 28, 2017 9:06:29 PM EDT, Mark Friedenbach via bitcoin-dev 
 wrote:
>This article by Ron Lavi, Or Sattath, and Aviv Zohar was forwarded to
>me and is of interest to this group:
>
>"Redesigning Bitcoin's fee market"
>https://arxiv.org/abs/1709.08881
>
>I'll briefly summarize before providing some commentary of my own,
>including transformation of the proposed mechanism into a relatively
>simple soft-fork.  The article points out that bitcoin's auction
>model for transaction fees / inclusion in a block is broken in the
>sense that it fails to achieve maximum clearing price* and to prevent
>strategic bidding behavior.
>
>(* Maximum clearing price meaning highest fee the user is willing to
>   pay for the amount of time they had to wait.  In other words, miner
>   income.  While this is a common requirement of academic work on
>   auction protocols, it's not obvious that it provides intrinsic
>   benefit to bitcoin for miners to extract from users the maximum
>   amount of fee the market is willing to support.  However strategic
>   bidding behavior (e.g. RBF and CPFP) does have real network and
>   usability costs, which a more "correct" auction model would reduce
>   in some use cases.)
>
>Bitcoin is a "pay your bid" auction, where the user makes strategic
>calculations to determine what bid (=fee) is likely to get accepted
>within the window of time in which they want confirmation.  This bid
>can then be adjusted through some combination of RBF or CPFP.
>
>The authors suggest moving to a "pay lowest winning bid" model where
>all transactions pay only the smallest fee rate paid by any
>transaction in the block, for which the winning strategy is to bid the
>maximum amount you are willing to pay to get the transaction
>confirmed:
>
>> Users can then simply set their bids truthfully to exactly the
>> amount they are willing to pay to transact, and do not need to
>> utilize fee estimate mechanisms, do not resort to bid shading and do
>> not need to adjust transaction fees (via replace-by-fee mechanisms)
>> if the mempool grows.
>
>
>Unlike other proposed fixes to the fee model, this is not trivially
>broken by paying the miner out of band.  If you pay out of band fee
>instead of regular fee, then your transaction cannot be included with
>other regular fee paying transactions without the miner giving up all
>regular fee income.  Any transaction paying less fee in-band than the
>otherwise minimum fee rate needs to also provide ~1Mvbyte * fee rate
>difference fee to make up for that lost income.  So out of band fee is
>only realistically considered when it pays on top of a regular feerate
>paying transaction that would have been included in the block anyway.
>And what would be the point of that?
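The out-of-band argument above is just arithmetic; a hedged sketch, assuming equal-size transactions filling a ~1 Mvbyte block:

```python
# Under the uniform-price rule, in-band miner income is approximately
# min_feerate * block_vsize. Admitting one out-of-band transaction whose
# in-band feerate is below the current minimum reprices the whole block,
# so the out-of-band payment must cover the difference.
BLOCK_VSIZE = 1_000_000  # ~1 Mvbyte, as in the text

def oob_fee_needed(current_min_rate, low_in_band_rate, block_vsize=BLOCK_VSIZE):
    return (current_min_rate - low_in_band_rate) * block_vsize
```

E.g. dropping the block minimum from 10 sat/vbyte to 4 costs the miner 6,000,000 sat of in-band income, which the out-of-band payer would have to make up on top of their own fee.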
>
>
>As an original contribution, I would like to note that something
>strongly resembling this proposal could be soft-forked in very easily.
>The shortest explanation is:
>
>For scriptPubKey outputs of the form "", where
>the pushed data evaluates as true, a consensus rule is added that
>the coinbase must pay any fee in excess of the minimum fee rate
>for the block to the push value, which is a scriptPubKey.
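The consensus rule amounts to a per-transaction rebate. A minimal sketch of the accounting (hypothetical helper, ignoring serialization and the scriptPubKey push encoding):

```python
# Each transaction effectively pays the block's minimum feerate; the
# coinbase must return the excess fee to each transaction's declared
# rebate scriptPubKey.
def rebates(txs):
    # txs: list of (fee_paid, vsize, rebate_spk), one entry per tx
    min_rate = min(fee / size for fee, size, _ in txs)
    return {spk: fee - min_rate * size for fee, size, spk in txs}
```

The transaction that set the block's minimum feerate gets a zero rebate; everyone else is refunded down to that rate.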
>
>Beyond fixing the perceived problems of bitcoin's fee auction model
>leading to costly strategic behavior (whether that is a problem is a
>topic open to debate!), this would have the additional benefits of:
>
>1. Allowing pre-signed transactions, of payment channel close-out
>   for example, to provide sufficient fee for confirmation without
>   knowledge of future rates or overpaying or trusting a wallet to
>   be online to provide CPFP fee updates.
>
>2. Allowing explicit fees in multi-party transaction creation
>   protocols where final transaction sizes are not known prior to
>   signing by one or more of the parties, while again not
>   overpaying or trusting on CPFP, etc.
>
>3. Allowing 

Re: [bitcoin-dev] Responsible disclosure of bugs

2017-09-10 Thread Matt Corallo via bitcoin-dev
I believe there continues to be concern over a number of altcoins which
are running old, unpatched forks of Bitcoin Core, making it rather
difficult to disclose issues without putting people at risk (see, eg,
some of the dos issues which are preventing release of the alert key).
I'd encourage the list to have a discussion about what reasonable
approaches could be taken there.

On 09/10/17 18:03, Simon Liu via bitcoin-dev wrote:
> Hi,
> 
> Given today's presentation by Chris Jeffrey at the Breaking Bitcoin
> conference, and the subsequent discussion around responsible disclosure
> and industry practice, perhaps now would be a good time to discuss
> "Bitcoin and CVEs" which has gone unanswered for 6 months.
> 
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013751.html
> 
> To quote:
> 
> "Are there are any vulnerabilities in Bitcoin which have been fixed but
> not yet publicly disclosed?  Is the following list of Bitcoin CVEs
> up-to-date?
> 
> https://en.bitcoin.it/wiki/Common_Vulnerabilities_and_Exposures
> 
> There have been no new CVEs posted for almost three years, except for
> CVE-2015-3641, but there appears to be no information publicly available
> for that issue:
> 
> https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3641
> 
> It would be of great benefit to end users if the community of clients
> and altcoins derived from Bitcoin Core could be patched for any known
> vulnerabilities.
> 
> Does anyone keep track of security related bugs and patches, where the
> defect severity is similar to those found on the CVE list above?  If
> yes, can that list be shared with other developers?"
> 
> Best Regards,
> Simon


Re: [bitcoin-dev] A Segwit2x BIP

2017-07-07 Thread Matt Corallo via bitcoin-dev
This is horribly under-specified (i.e. not possible to implement from what
you've written, and your implementation doesn't match at all, last I heard).

> Specification

> The plain block size is defined as the serialized block size without
> witness programs.
> Deploy a modified BIP91 to activate Segwit. The only modification is
> that the signal "segsignal" is replaced by "segwit2x".

This is not a protocol change. I have no idea why you included it in the
"specification" section.

> If segwit2x (BIP91 signal) activates at block N, then block N+12960
> activates a new plain block size limit of 2 MB (2,000,000 bytes). In
> this case, at block N+12960 a hard-fork occurs.

This is not a hard fork; simply adding a new limit is a soft fork. You
appear to be confused - as originally written, AFAIR, Jeff's btc1 branch
did not increase the block size; your specification here matches that
original change and does not increase the block size.

> The block that activates the hard-fork must have a plain block size
> greater than 1 MB.

There is no hard fork, and this would violate consensus rules. Not sure
what you mean. If you do add a hard fork to this BIP, you really need to
flip the hard fork bit.

> Any transaction with a non-witness serialized size exceeding 1,000,000
> is invalid.

This is far from sufficient to protect from DoS attacks, you really
should take a look through the mailing list archives and read some of
the old discussions on the issues here.

Matt

On 07/07/17 18:25, Sergio Demian Lerner via bitcoin-dev wrote:
> Hello,
> 
> Here is a BIP that matches the reference code that the Segwit2x group
> has built and published a week ago. 
> 
> This BIP and code satisfies the requests of a large part of the Bitcoin
> community for a moderate increase in the Bitcoin non-witness block space
> coupled with the activation of Segwit.
> 
> You can find the BIP draft in the following link:
> 
> https://github.com/SergioDemianLerner/BIPs/blob/master/BIP-draft-sergiolerner-segwit2x.mediawiki
> 
> Reference source was kindly provided by the Segwit2x group.
> 
> Best regards,
>  Sergio.
> 
> 


Re: [bitcoin-dev] BIP Proposal: Compact Client Side Filtering for Light Clients

2017-06-01 Thread Matt Corallo via bitcoin-dev
Quick comment before I finish reading it completely: it looks like you have no
way to match the input prevouts being spent, which would be rather nice to have
from a "watch for this output being spent" pov.

On June 1, 2017 3:01:14 PM EDT, Olaoluwa Osuntokun via bitcoin-dev 
 wrote:
>Hi y'all,
>
>Alex Akselrod and I would like to propose a new light client BIP for
>consideration:
>* https://github.com/Roasbeef/bips/blob/master/gcs_light_client.mediawiki
>
>This BIP proposal describes a concrete specification (along with a
>reference implementations[1][2][3]) for the much discussed client-side
>filtering reversal of BIP-37. The precise details are described in the
>BIP, but as a summary: we've implemented a new light-client mode that
>uses
>client-side filtering based off of Golomb-Rice coded sets. Full-nodes
>maintain an additional index of the chain, and serve this compact
>filter
>(the index) to light clients which request them. Light clients then
>fetch
>these filters, query the locally and _maybe_ fetch the block if a
>relevant
>item matches. The cool part is that blocks can be fetched from _any_
>source, once the light client deems it necessary. Our primary
>motivation
>for this work was enabling a light client mode for lnd[4] in order to
>support a more light-weight back end paving the way for the usage of
>Lightning on mobile phones and other devices. We've integrated neutrino
>as a back end for lnd, and will be making the updated code public very
>soon.
>
>One specific area we'd like feedback on is the parameter selection.
>Unlike
>BIP-37 which allows clients to dynamically tune their false positive
>rate,
>our proposal uses a _fixed_ false-positive rate. Within the document, it's
>currently specified as P = 1/2^20. We've done a bit of analysis and
>optimization attempting to optimize the following sum:
>filter_download_bandwidth + expected_block_false_positive_bandwidth.
>Alex
>has made a JS calculator that allows y'all to explore the affect of
>tweaking the false positive rate in addition to the following
>variables:
>the number of items the wallet is scanning for, the size of the blocks,
>number of blocks fetched, and the size of the filters themselves. The
>calculator calculates the expected bandwidth utilization using the CDF
>of
>the Geometric Distribution. The calculator can be found here:
>https://aakselrod.github.io/gcs_calc.html. Alex also has an empirical
>script he's been running on actual data, and the results seem to match
>up
>rather nicely.
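The sum being optimized can be approximated with a back-of-envelope model (an assumption-laden sketch, not the calculator's exact CDF computation): a block must be fetched when any of the wallet's m watched items false-positively matches the block's filter.

```python
# Expected bandwidth ~= filters downloaded + blocks fetched on false
# positives. Each of the m watched items matches spuriously with
# probability p, so a given block is fetched needlessly with
# probability 1 - (1 - p)^m.
def expected_bandwidth(n_blocks, filter_size, block_size, m, p=2 ** -20):
    fp_block = 1 - (1 - p) ** m
    return n_blocks * (filter_size + fp_block * block_size)
```

This captures the trade-off: a smaller p shrinks false-positive block fetches but grows every filter, which is why the BIP fixes p globally rather than letting clients tune it as BIP-37 did.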
>
>We were excited to see that Karl Johan Alm (kallewoof) has done some
>(rather extensive!) analysis of his own, focusing on a distinct
>encoding
>type [5]. I haven't had the time yet to dig into his report yet, but I
>think I've read enough to extract the key difference in our encodings:
>his
>filters use a binomial encoding _directly_ on the filter contents, while
>we
>instead create a Golomb-Coded set with the contents being _hashes_ (we
>use
>siphash) of the filter items.
>
>Using a fixed fp=20, I have some stats detailing the total index size,
>as
>well as averages for both mainnet and testnet. For mainnet, using the
>filter contents as currently described in the BIP (basic + extended),
>the
>total size of the index comes out to 6.9GB. The break down is as
>follows:
>
>* total size:  6976047156
>* total avg:  14997.220622758816
>* total median:  3801
>* total max:  79155
>* regular size:  3117183743
>* regular avg:  6701.372750217131
>* regular median:  1734
>* regular max:  67533
>* extended size:  3858863413
>* extended avg:  8295.847872541684
>* extended median:  2041
>* extended max:  52508
>
>In order to consider the average+median filter sizes in a world with
>larger blocks, I also ran the index for testnet:
>
>* total size:  2753238530
>* total avg:  5918.95736054141
>* total median:  60202
>* total max:  74983
>* regular size:  1165148878
>* regular avg:  2504.856172982827
>* regular median:  24812
>* regular max:  64554
>* extended size:  1588089652
>* extended avg:  3414.1011875585823
>* extended median:  35260
>* extended max:  41731
>
>Finally, here are the testnet stats which take into account the
>increase
>in the maximum filter size due to segwit's block-size increase. The max
>filter sizes are a bit larger due to some of the habitual blocks I
>created last year when testing segwit (transactions with 30k inputs,
>30k
>outputs, etc).
>
> * total size:  585087597
> * total avg:  520.8839608674402
> * total median:  20
> * total max:  164598
> * regular size:  299325029
> * regular avg:  266.4790836307566
> * regular median:  13
> * regular max:  164583
> * extended size:  285762568
> * extended avg:  254.4048772366836
> * extended median:  7
> * extended max:  127631
>
>For those that are interested in the raw data, I've uploaded a CSV file
>of raw data for each 

Re: [bitcoin-dev] Barry Silbert segwit agreement

2017-05-26 Thread Matt Corallo via bitcoin-dev
While I'm not 100% convinced there are strict technical reasons for needing to 
wait till after segwit is active before a hard fork can be started (you can, 
after all, activate segwit as a part of the HF), there are useful design and 
conservatism reasons (not causing massive discontinuity in fee market, handling 
major system changes one at a time, etc).

Still, totally agree that attempting to design, code, and test a new hard fork 
in six months, let alone deploy it, let alone simultaneously with segwit, is a 
joke and fails to take seriously the investment many have made in the bitcoin 
system. Previous, rather simple, soft forks required similar if not more 
development time, not counting deployment and activation time.

If the community is unable to form consensus around segwit alone for political 
reasons, further research into hard fork design may help, but even forks tied 
together would nearly certainly need to activate months apart.

On May 26, 2017 5:30:37 PM EDT, James Hilliard  
wrote:
>Mandatory signalling is the only way to lock in segwit with less than
>95% hashpower without a full redeployment(which for a number of
>technical reasons isn't feasible until after the existing segwit
>deployment expires). There's no reason not to signal BIP141 bit 1
>while also signalling bit 4, but you would want to use bit 4 to
>coordinate bit 1 mandatory signalling.
>
>It would not be feasible to schedule any HF until one can be
>completely sure BIP141 is active(at least not without waiting for the
>timeout and doing a redeployment) due to activation/p2p codepath
>complexity. This is why the mandatory signalling period is needed.
>
>Since it is likely a HF will take months of development and testing I
>see this or something similar as the fastest safe path forward:
>1. Use BIP91 or similar to activate BIP141 via mandatory signalling
>ASAP(likely using bit 4)
>2. Develop HF code based on assumption that BIP141 is active so that
>you only have to test the BIP141->HF upgrade/activation codepath.
>3. Deploy HF after BIP141 lock in(bit 4 can be reused again here since
>this will be after BIP91 expiration)
>
>When rolling out some features it often makes sense to combine them
>into a single fork for example BIP's 68, 112, 113 were rolled out at
>the same time as are BIP's 141, 143, 144, 145 for segwit, however a HF
>has required features that would conflict with with features in the
>segwit rollout which is why attempting to simultaneously deploy them
>would cause major complexity/testing issues(you would have to deal
>with feature conflict resolution for multiple potential activation
>scenarios). By doing a staged rollout of segwit and then a HF you get
>a single testable upgrade path.
>
>Based on past development experiences I wouldn't expect a 6 month
>timeline to be realistic for a properly tested HF, but this proposed
>upgrade path should be the fastest available for both SegWit and a HF
>regardless.
>
>On Fri, May 26, 2017 at 4:10 PM, Jacob Eliosoff via bitcoin-dev
> wrote:
>> Just to clarify one thing, what I described differs from BIP91 in
>that
>> there's no orphaning.  Just when Segwit2MB support reaches 80%, those
>80%
>> join everyone else in signaling for BIP141.  BIP91 orphaning is an
>optional
>> addition but my guess is it wouldn't be needed.
>>
>>
>> On May 26, 2017 4:02 PM, "Matt Corallo" 
>wrote:
>>>
>>> Your proposal seems to be simply BIP 91 tied to the
>>> as-yet-entirely-undefined hard fork Barry et al proposed.
>>>
>>> Using James' BIP 91 instead of the Barry-bit-4/5/whatever proposal,
>as
>>> you propose, would make the deployment on the incredibly short
>timeline
>>> Barry et al proposed slightly more realistic, though I would expect
>to
>>> see hard fork code readily available and well-tested at this point
>in
>>> order to meet that timeline.
>>>
>>> Ultimately, due to their aggressive timeline, the Barry et al
>proposal
>>> is incredibly unlikely to meet the requirements of a
>>> multi-billion-dollar system, and continued research into meeting the
>>> spirit, not the text, of their agreement seems warranted.
>>>
>>> Matt
>>>
>>> On 05/26/17 17:47, Jacob Eliosoff via bitcoin-dev wrote:
>>> > Forgive me if this is a dumb question.  Suppose that rather than
>>> > directly activating segwit, the Silbert/NYC Segwit2MB proposal's
>lock-in
>>> > just triggered BIP141 signaling (plus later HF).  Would that avoid
>>> > incompatibility with existing BIP141 nodes, and get segwit
>activated
>>> > sooner?  Eg:
>>> >
>>> > - Bit 4 (or bit 5 or whatever, now that BIP91 uses 4) signals
>support
>>> > for "segwit now, HF (TBD) at scheduled date (Nov 23?)"
>>> > - If bit 4 support reaches 80%, it locks in two things: the
>scheduled HF
>>> > (conditional on segwit), and *immediately* turning on bit 1
>(BIP141
>>> > support)
>>> >
>>> > I realize this would still leave problems like the aggressive HF
>>> > 

Re: [bitcoin-dev] BIP149 timeout-- why so far in the future?

2017-05-26 Thread Matt Corallo via bitcoin-dev
A more important consideration than segwit's timeout is when code can be
released, which will no doubt be several months after SegWit's current
timeout.

Greg's proposed 6 months seems much more reasonable to me, assuming its
still many months after the formal release of code implementing it.

Matt

On 05/24/17 04:26, Rusty Russell via bitcoin-dev wrote:
> Gregory Maxwell via bitcoin-dev 
> writes:
>> Based on how fast we saw segwit adoption, why is the BIP149 timeout so
>> far in the future?
>>
>> It seems to me that it could be six months after release and hit the
>> kind of density required to make a stable transition.
> 
> Agreed, I would suggest 16th December, 2017 (otherwise, it should be
> 16th January 2018; during EOY holidays seems a bad idea).
> 
> This means this whole debacle has delayed segwit exactly 1 (2) month(s)
> beyond what we'd have if it used BIP8 in the first place.
> 
> Cheers,
> Rusty.


Re: [bitcoin-dev] Barry Silbert segwit agreement

2017-05-26 Thread Matt Corallo via bitcoin-dev
Your proposal seems to be simply BIP 91 tied to the
as-yet-entirely-undefined hard fork Barry et al proposed.

Using James' BIP 91 instead of the Barry-bit-4/5/whatever proposal, as
you propose, would make the deployment on the incredibly short timeline
Barry et al proposed slightly more realistic, though I would expect to
see hard fork code readily available and well-tested at this point in
order to meet that timeline.

Ultimately, due to their aggressive timeline, the Barry et al proposal
is incredibly unlikely to meet the requirements of a
multi-billion-dollar system, and continued research into meeting the
spirit, not the text, of their agreement seems warranted.

Matt

On 05/26/17 17:47, Jacob Eliosoff via bitcoin-dev wrote:
> Forgive me if this is a dumb question.  Suppose that rather than
> directly activating segwit, the Silbert/NYC Segwit2MB proposal's lock-in
> just triggered BIP141 signaling (plus later HF).  Would that avoid
> incompatibility with existing BIP141 nodes, and get segwit activated
> sooner?  Eg:
> 
> - Bit 4 (or bit 5 or whatever, now that BIP91 uses 4) signals support
> for "segwit now, HF (TBD) at scheduled date (Nov 23?)"
> - If bit 4 support reaches 80%, it locks in two things: the scheduled HF
> (conditional on segwit), and *immediately* turning on bit 1 (BIP141 support)
> 
> I realize this would still leave problems like the aggressive HF
> schedule, possible chain split at the HF date between Segwit2MB nodes
> and any remaining BIP141 nodes, etc.  My focus here is how
> incompatibility with existing nodes could be minimized.
> 
> (BIP91 could also be used if BIP141 support still fell short of 95%. 
> But if Segwit2MB support reaches 80%, it seems likely that an additional
> 15% will support BIP141-without-HF.)
> 
> 


Re: [bitcoin-dev] Reduced signalling threshold activation of existing segwit deployment

2017-05-22 Thread Matt Corallo via bitcoin-dev
Given the overwhelming support for SegWit across the ecosystem of businesses 
and users, this seems reasonable to me.

On May 22, 2017 6:40:13 PM EDT, James Hilliard via bitcoin-dev 
 wrote:
>I would like to propose an implementation that accomplishes the first
>part of the Barry Silbert proposal independently from the second:
>
>"Activate Segregated Witness at an 80% threshold, signaling at bit 4"
>in a way that
>
>The goal here is to minimize chain split risk and network disruption
>while maximizing backwards compatibility and still providing for rapid
>activation of segwit at the 80% threshold using bit 4.
>
>By activating segwit immediately and separately from any HF we can
>scale quickly without risking a rushed combined segwit+HF that would
>almost certainly cause widespread issues.
>
>Draft proposal:
>https://github.com/jameshilliard/bips/blob/bip-segsignal/bip-segsignal.mediawiki
>
>Proposal text:
>
>  BIP: segsignal
>  Layer: Consensus (soft fork)
>Title: Reduced signalling threshold activation of existing segwit
>deployment
>  Author: James Hilliard 
>  Status: Draft
>  Type: Standards Track
>  Created: 2017-05-22
>  License: BSD-3-Clause
>   CC0-1.0
>
>
>==Abstract==
>
>This document specifies a method to activate the existing BIP9 segwit
>deployment with a majority hashpower less than 95%.
>
>==Definitions==
>
>"existing segwit deployment" refers to the BIP9 "segwit" deployment
>using bit 1, between November 15th 2016 and November 15th 2017 to
>activate BIP141, BIP143 and BIP147.
>
>==Motivation==
>
>Segwit increases the blocksize, fixes transaction malleability, and
>makes scripting easier to upgrade as well as bringing many other
>[https://bitcoincore.org/en/2016/01/26/segwit-benefits/ benefits].
>
>This BIP provides a way for a simple majority of miners to coordinate
>activation of the existing segwit deployment with less than 95%
>hashpower. For a number of reasons a complete redeployment of segwit
>is difficult to do until the existing deployment expires. This is due
>to 0.13.1+ having many segwit related features active already,
>including all the P2P components, the new network service flag, the
>witness-tx and block messages, compact blocks v2 and preferential
>peering. A redeployment of segwit will need to redefine all these
>things and doing so before expiry would greatly complicate testing.
>
>==Specification==
>
>While this BIP is active, all blocks must set the nVersion header top
>3 bits to 001 together with bit field (1<<1) (according to the
>existing segwit deployment). Blocks that do not signal as required
>will be rejected.
>
>==Deployment==
>
>This BIP will be deployed by a "version bits" BIP9 deployment with an
>80% (this can be adjusted if desired) activation threshold, with the
>name "segsignal" and using bit 4.
>
>This BIP will have a start time of midnight June 1st, 2017 (epoch time
>1496275200) and timeout on midnight November 15th 2017 (epoch time
>1510704000). This BIP will cease to be active when segwit is
>locked-in.
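The lock-in rule above reduces to simple arithmetic. The sketch below is an editor's illustration rather than part of the proposal (the 95% figure is Bitcoin Core's existing mainnet segwit threshold):

```cpp
#include <cassert>
#include <cmath>

// BIP9 tallies signalling blocks over fixed 2016-block retarget
// periods; lock-in occurs once the tally meets the threshold.
const int PERIOD = 2016;

// Blocks required for a given signalling fraction (rounded up).
int ThresholdFor(double fraction) {
    return (int)std::ceil(PERIOD * fraction);
}

bool LockedIn(int signalling_blocks, int threshold) {
    return signalling_blocks >= threshold;
}
```

With these numbers, 1613 of 2016 blocks suffice at 80%, versus 1916 at the existing 95% deployment, illustrating how an 80% deployment can lock in while the 95% one stalls.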
>
>=== Reference implementation ===
>
>
>// Check if Segregated Witness is Locked In
>bool IsWitnessLockedIn(const CBlockIndex* pindexPrev, const Consensus::Params& params)
>{
>    LOCK(cs_main);
>    return (VersionBitsState(pindexPrev, params, Consensus::DEPLOYMENT_SEGWIT,
>                             versionbitscache) == THRESHOLD_LOCKED_IN);
>}
>
>// SEGSIGNAL mandatory segwit signalling.
>if (VersionBitsState(pindex->pprev, chainparams.GetConsensus(),
>                     Consensus::DEPLOYMENT_SEGSIGNAL, versionbitscache) == THRESHOLD_ACTIVE &&
>    !IsWitnessLockedIn(pindex->pprev, chainparams.GetConsensus()) &&  // Segwit is not locked in
>    !IsWitnessEnabled(pindex->pprev, chainparams.GetConsensus()))     // and is not active.
>{
>    bool fVersionBits = (pindex->nVersion & VERSIONBITS_TOP_MASK) == VERSIONBITS_TOP_BITS;
>    bool fSegbit = (pindex->nVersion &
>                    VersionBitsMask(chainparams.GetConsensus(), Consensus::DEPLOYMENT_SEGWIT)) != 0;
>    if (!(fVersionBits && fSegbit)) {
>        return state.DoS(0, error("ConnectBlock(): relayed block must signal for segwit, please upgrade"),
>                         REJECT_INVALID, "bad-no-segwit");
>    }
>}
>
>
>https://github.com/bitcoin/bitcoin/compare/0.14...jameshilliard:segsignal-v0.14.1
>
>==Backwards Compatibility==
>
>This deployment is compatible with the existing "segwit" bit 1
>deployment scheduled between midnight November 15th, 2016 and midnight
>November 15th, 2017. Miners will need to upgrade their nodes to
>support segsignal, otherwise they may build on top of an invalid block.
>While this BIP is active, users should either upgrade to segsignal or
>wait for additional confirmations when accepting payments.
>
>==Rationale==
>
>Historically we have used IsSuperMajority() to activate soft forks
>such as BIP66 which has a mandatory signalling requirement for miners
>once activated, this ensures that miners are aware of new rules being
>enforced. This technique can be leveraged to lower the 

Re: [bitcoin-dev] Some real-world results about the current Segwit Discount

2017-05-10 Thread Matt Corallo via bitcoin-dev
I highly disagree about the "not shit" part.  You're advocating for throwing 
away one of the key features of Segwit, something that is very important for 
Bitcoin's long-term reliability! If you think doing so is going to somehow help 
get support in a divided community, I don't understand how - more likely you're 
only going to make things significantly worse.

On May 10, 2017 11:25:27 AM EDT, Sergio Demian Lerner wrote:
>Haha. But no shit. Not perfect maybe, but Bitcoin was never perfect. It
>has
>always been good enough. And at the beginning it was quite simple.
>Simple
>enough it allowed gradual improvements that anyone with some technical
>background could understand. Now we need a full website to explain an
>improvement.
>But this is becoming more and more out of topic.
>
>
>On Wed, May 10, 2017 at 11:05 AM, Matt Corallo wrote:
>
>> I'm highly unconvinced of this point. Sure, you can change fewer lines
>> of code, but if the result is, let's be honest, shit, how do you believe
>> it's going to have a higher chance of getting acceptance from the broader
>> community? I think you're over-optimizing in the wrong direction.
>>
>> Matt
>>
>> On 05/09/17 20:58, Sergio Demian Lerner wrote:
>> > I agree with you Matt.
>> > I'm artificially limiting myself to changing the parameters of
>Segwit as
>> > it is..
>> >
>> > This is motivated by the idea that a consensual HF in the current
>state
>> > would have greater chance of acceptance if it changes the minimum
>number
>> > of lines of code.
>> >
>> >
>> >
>> > On Tue, May 9, 2017 at 5:13 PM, Gregory Maxwell wrote:
>> >
>> > On Tue, May 9, 2017 at 7:42 PM, Matt Corallo wrote:
>> > > at beast.
>> >
>> > Rawr.
>> >
>> >
>>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Some real-world results about the current Segwit Discount

2017-05-10 Thread Matt Corallo via bitcoin-dev
I'm highly unconvinced of this point. Sure, you can change fewer lines
of code, but if the result is, let's be honest, shit, how do you believe
it's going to have a higher chance of getting acceptance from the broader
community? I think you're over-optimizing in the wrong direction.

Matt

On 05/09/17 20:58, Sergio Demian Lerner wrote:
> I agree with you Matt. 
> I'm artificially limiting myself to changing the parameters of Segwit as
> it is.. 
> 
> This is motivated by the idea that a consensual HF in the current state
> would have greater chance of acceptance if it changes the minimum number
> of lines of code.
> 
> 
> 
> On Tue, May 9, 2017 at 5:13 PM, Gregory Maxwell wrote:
> 
> On Tue, May 9, 2017 at 7:42 PM, Matt Corallo wrote:
> > at beast.
> 
> Rawr.
> 
> 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Some real-world results about the current Segwit Discount

2017-05-09 Thread Matt Corallo via bitcoin-dev
There is something in between throwing the SegWit goals out the window
(as Sergio seems to be advocating for) and having a higher discount
ratio (which is required for the soft fork version to be useful).

When I first started looking at the problem I very much wanted to reduce
the worst-case block size (though have come around to caring a bit less
about that thanks to all the work in FIBRE and other similar systems
over the past year or two), but rapidly realized that just reducing the
Segwit discount wasn't really the right solution here.

You might as well take the real win and reduce the cost of the input
prevout itself so that average inputs are less expensive than outputs
(which SegWit doesn't quite achieve due to the large prevout size - 40
bytes). This way you can reduce the discount, still get the SegWit goal,
and get a lower ratio between worst-case and average-case block size,
though, frankly, I'm less interested in the last one these days, at
least for reasonable parameters. If you're gonna look at hard forks,
limiting yourself to just the parameters that we can tweak in a soft
fork seems short-sighted, at beast.
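To make the input-cost idea concrete, here is a hypothetical weight function (editor's sketch; the prevout discount factor and all constants are illustrative, not a concrete proposal) that prices the roughly 40 prevout bytes per input (36-byte outpoint plus 4-byte nSequence) more cheaply than other base bytes, so that on net spending an output costs less than creating one:

```cpp
#include <cassert>
#include <cstdint>

// Segwit-style weight is 4*base + 1*witness. This sketch additionally
// discounts prevout bytes (~40 per input) relative to other base bytes.
uint64_t SketchWeight(uint64_t base_bytes, uint64_t witness_bytes,
                      uint64_t num_inputs) {
    const uint64_t PREVOUT_BYTES = 40;  // 36-byte outpoint + 4-byte nSequence
    const uint64_t PREVOUT_COST  = 1;   // illustrative, vs. 4 per base byte
    uint64_t prevout_total = PREVOUT_BYTES * num_inputs;
    assert(prevout_total <= base_bytes);
    return 4 * (base_bytes - prevout_total)
         + PREVOUT_COST * prevout_total
         + witness_bytes;
}
```

The prevout discount is the extra tuneable knob: lowering PREVOUT_COST makes sweeping UTXOs cheaper without raising the worst-case block size.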

Matt

On 05/09/17 19:30, Gregory Maxwell wrote:
> On Tue, May 9, 2017 at 7:15 PM, Sergio Demian Lerner via bitcoin-dev wrote:
>> The capacity of Segwit(50%)+2MbHF is 50% more than Segwit, and the maximum
>> block size is the same.
> 
> And the UTXO bloat potential is twice as large and the cost of that
> UTXO bloat is significantly reduced.  So you're basically gutting the
> most of the gain from weight, making something incompatible, etc.
> 
> I'm not sure what to explain-- even that page on segwit.org explains
> that the values are selected to balance worst case costs not to
> optimize one to the total exclusion of others. Raw size is not very
> relevant in the long run, but if your goal were to optimize for it
> (which it seems to be), then the limit should be pure size.
> 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Some real-world results about the current Segwit Discount

2017-05-09 Thread Matt Corallo via bitcoin-dev
I'm not sure who wrote segwit.org, but I wouldn't take it as
authoritative reasoning why we must do X over Y.

You seem to be claiming that there is no cost for a miner to fill
"extra witness space", but this is very untrue - in order to do so they
must forgo fees on other transactions. Your analysis on worst-case vs
normal-case blocks also seems flawed - there is a single limit, and not
a separate, secondary, witness limit.

You suggested "If the maximum block weight is set to 2.7M, each byte of
non-witness block costs 1.7", but these numbers don't work out - setting
the discount to 1.7 gets you a maximum block size of 1.7MB (in a soft
fork), not 2.7MB. If you set the max block weight to 2.7 with a 1.7x
discount, you have a hard fork. If you set the discount to 2.7x with a
2.7 weight limit, you don't get 2.7MB average-sized blocks, but smaller,
and still have the potential for padding blocks with pure-witness data
to create larger blocks.
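The arithmetic behind these claims is easy to check. In the cost model under discussion, each non-witness byte costs d (the "discount") and each witness byte costs 1, against a weight limit W; a minimal sketch (editor's illustration of the numbers above):

```cpp
#include <cassert>

// An all-witness block maximizes raw size: W bytes.
double MaxRawBlockBytes(double W) { return W; }

// Soft-fork compatibility requires that even an all-base block stays
// within old nodes' 1 MB base-size rule, i.e. W / d <= 1e6.
bool IsSoftForkCompatible(double W, double d) { return W <= 1e6 * d; }
```

Plugging in: a 1.7x discount with a 1.7M weight limit is a soft fork with a 1.7MB worst case; a 2.7M limit with a 1.7x discount permits >1MB base blocks and is therefore a hard fork; segwit's actual parameters (d = 4, W = 4M) remain a soft fork with a 4MB worst case.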

Additionally, note that by padding blocks with larger witness data you
lose some of the CPU cost to validate as you no longer have as many
inputs (which have a maximal validation cost).

Further, I'm not sure why you're arguing for a given witness discount on
the basis of a future hardfork - it seems highly unlikely the community
is in a position to pull something like that off, and even if it were,
why set the witness discount with that assumption? If there were to be a
hardfork, we should probably tweak a bunch of parameters (see, eg, my
post from February of last year at
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012403.html).

Maybe you could clarify your proposal a bit here, because the way I read
it you seem to have misunderstood SegWit's discount system.

On 05/09/17 13:49, Sergio Demian Lerner via bitcoin-dev wrote:
> This [1] article says the current discount prevents witness spam.
> Witness spam is free space in the witness part of the block that can be
> filled by miners to create bigger blocks with almost no cost for the
> benefit a cluster of miners with low latency, increasing centralization.
> 
> The 75% discount does not prevent it, but on the contrary leaves a lot
> of extra witness space for spam.
> 
> If the maximum block weight is set to 2.7M, each byte of non-witness
> block costs 1.7, and each byte of witness costs 1, then a normal filled
> block would be 2.7M bytes (1.7+1), and there will be no need to create
> ever a 4 Mbyte block. The worst case would be the average case, and the
> transaction rate would be the maximum possible.
> 
> The current 75% discount can only achieve more transactions per second
> if the type of transactions change. Therefore the current 75% discount
> only makes the block size worst case worse (4 Mbytes when it should be
> 2.7 Mbytes).
> 
> 80% of all inputs/outputs are P2PKH. The only way to make use of the
> extra witness space is if most P2PKH transactions are replaced by
> multisigs (typically for LN).
> 
> So it seems the 75% discount has been chosen with the idea that in the
> future the current transaction pattern will shift towards multisigs.
> This is not a bad idea, as it's the only direction Bitcoin can scale
> without a HF. 
> But it's a bad idea if we end up doing, for example, a 2X blocksize
> increase HF in the future. In that case it's much better to use a 50%
> witness discount, and not make scaling risky by making the worst-case
> block size 8 Mbytes, when it could have been 2*2.7=5.4 Mbytes.
> 
> I've uploaded the code here:
> https://github.com/SergioDemianLerner/SegwitStats
> 
>  [1] https://segwit.org/why-a-discount-factor-of-4-why-not-2-or-8-bbcebe91721e
> .
> 
> 
> On Mon, May 8, 2017 at 8:47 PM, Alphonse Pace via bitcoin-dev wrote:
> 
> Sergio,
> 
> I'm not sure what the data you present has to do with the discount. 
> A 75% discount prevents witness spam precisely because it is 75%,
> nothing more.  The current usage simply gives a guideline on how
> much capacity is gained through a particular discount.  With the
> data you show, it would imply that those blocks, with SegWit used
> where possible, would result in blocks of ~1.8MB.
> 
> 
> 
> On Mon, May 8, 2017 at 5:42 PM, Sergio Demian Lerner via bitcoin-dev wrote:
> 
> I have processed 1000 blocks starting from Block #461653.
> 
> I computed several metrics, including the supposed size of
> witness data and non-witness data (onchain), assuming all P2SH
> inputs/outputs are converted to P2WSH and all P2PKH
> inputs/outputs are converted to P2WPKH.
> 
> This takes into account that other types of transactions will
> not be modified by Segwit (e.g. OP_RETURN outputs, 

Re: [bitcoin-dev] Full node "tip" function

2017-05-03 Thread Matt Corallo via bitcoin-dev
If we ever have a problem getting blocks, we could consider adding something to 
pay to receive historical blocks but luckily that isn't a problem we have today 
- the available connection slots and bandwidth on the network today appear to
be more than sufficient to saturate nearly any fully-validating node.

On May 3, 2017 5:53:07 PM EDT, Gregory Maxwell via bitcoin-dev wrote:
>On Wed, May 3, 2017 at 9:08 PM, Erik Aronesty via bitcoin-dev wrote:
>> CONS:
>
>The primary result would be paying people to sybil attack the network.
>It's far cheaper to run one node behind thousands of IPs than it is to
>run many nodes.
>
>Suggestions like this have come up many times before.
>___
>bitcoin-dev mailing list
>bitcoin-dev@lists.linuxfoundation.org
>https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-03-31 Thread Matt Corallo via bitcoin-dev
Hey Sergio,

You appear to have ignored the last two years of Bitcoin hardfork
research and understanding, recycling instead BIP 102 from 2015. There
are many proposals which have pushed the state of hard fork research
much further since then, and you may wish to read some of the posts on
this mailing list listed at https://bitcoinhardforkresearch.github.io/
and make further edits based on what you learn. It seems your goal of
"avoid any technical changes" doesn't have any foundation aside from a
perceived compromise for compromise's sake, only making the fork riskier
in the process.

At a minimum, in terms of pure technical changes, you should probably
consider (probably among others):

a) Utilizing the "hard fork signaling bit" in the nVersion of the block.
b) Either limiting non-SegWit transactions in some way to fix the n**2
sighash and FindAndDelete runtime and memory usage issues, or fixing them by
utilizing the new sighash type which many wallets and projects have
already implemented for SegWit in the spending of non-SegWit outputs.
c) You really should have replay protection in any HF. The clever fix from
Spoonnet for poor scaling of optionally allowing non-SegWit outputs to
be spent with SegWit's sighash provides this all in one go.
d) You may wish to consider the possibility of tweaking the witness
discount and possibly discounting other parts of the input - SegWit went
a long ways towards making removal of elements from the UTXO set cheaper
than adding them, but didn't quite get there, you should probably finish
that job. This also provides additional tuneable parameters to allow you
to increase the block size while not having a blowup in the worst-case
block size.
e) Additional commitments at the top of the merkle root - both for
SegWit transactions and as additional space for merged mining and other
commitments which we may wish to add in the future, this should likely
be implemented as an "additional header" a la Johnson Lau's Spoonnet proposal.

Additionally, I think your parameters here pose very significant risk to
the Bitcoin ecosystem broadly.

a) Activating a hard fork with less than 18/24 months (and even then...)
from a fully-audited and supported release of full node software to
activation date poses significant risks to many large software projects
and users. I've repeatedly received feedback from various folks that a
year or more is likely required in any hard fork to limit this risk, and
limited pushback on that given the large increase which SegWit provides
itself buying a ton of time.

b) Having a significant discontinuity in block size increase only serves
to confuse and mislead users and businesses, forcing them to rapidly
adapt to a Bitcoin which changed overnight both by hardforking, and by
fees changing suddenly. Instead, having the hard fork activate technical
changes, and then slowly increasing the block size over the following
several years keeps things nice and continuous and also keeps us from
having to revisit ye old blocksize debate again six months after activation.

c) You should likely consider the effect of the many technological
innovations coming down the pipe in the coming months. Technologies like
Lightning, TumbleBit, and even your own RootStock could significantly
reduce fee pressure as transactions move to much faster and more
featureful systems.

Commitments to aggressive hard fork parameters now may leave miners
without much revenue as far out as the next halving (which current
transaction growth trends indicate we'd only barely reach 2MB
of transaction volume, let alone if you consider the effects of users
moving to systems which provide more features for Bitcoin transactions).
This could lead to a precipitous drop in hashrate as miners are no
longer sufficiently compensated.

Remember that the "hashpower required to secure bitcoin" is determined
as a percentage of total Bitcoins transacted on-chain in each block, so
as subsidy goes down, miners need to be paid with fees, not just price
increases. Even if we were OK with hashpower going down compared to the
value it is securing, betting the security of Bitcoin on its price
rising exponentially to match decreasing subsidy does not strike me as a
particularly inspiring tradeoff.

There aren't many great technical solutions to some of these issues, as
far as I'm aware, but it's something that needs to be incredibly
carefully considered before betting the continued security of Bitcoin on
exponential on-chain growth, something which we have historically never
seen.

Matt


On March 31, 2017 5:09:18 PM EDT, Sergio Demian Lerner via bitcoin-dev wrote:
>Hi everyone,
>
>Segwit2Mb is the project to merge into Bitcoin a minimal patch that
>aims to
>untangle the current conflict between different political positions
>regarding segwit activation vs. an increase of the on-chain blockchain
>space through a standard block size increase. It is not a new solution,
>but
>it should 

Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-03-31 Thread Matt Corallo via bitcoin-dev
Hey Sergio,

You appear to have ignored the last two years of Bitcoin hardfork
research and understanding, recycling instead BIP 102 from 2015. There
are many proposals which have pushed the state of hard fork research
much further since then, and you may wish to read some of the posts on
this mailing list listed at https://bitcoinhardforkresearch.github.io/
and make further edits based on what you learn. Your goal of "avoid
technical changes" appears to not have any basis outside of perceived
compromise for compromise's sake, only making such a hardfork riskier
instead.

At a minimum, in terms of pure technical changes, you should probably
consider (probably among others):

a) Utilizing the "hard fork signaling bit" in the nVersion of the block.
b) Either limiting non-SegWit transactions in some way to fix the n**2
sighash and FindAndDelete runtime and memory usage issues, or fixing them by
utilizing the new sighash type which many wallets and projects have
already implemented for SegWit in the spending of non-SegWit outputs.
c) You really should have replay protection in any HF. The clever fix from
Spoonnet for poor scaling of optionally allowing non-SegWit outputs to
be spent with SegWit's sighash provides this all in one go.
d) You may wish to consider the possibility of tweaking the witness
discount and possibly discounting other parts of the input - SegWit went
a long ways towards making removal of elements from the UTXO set cheaper
than adding them, but didn't quite get there, you should probably finish
that job. This also provides additional tuneable parameters to allow you
to increase the block size while not having a blowup in the worst-case
block size.
e) Additional commitments at the top of the merkle root - both for
SegWit transactions and as additional space for merged mining and other
commitments which we may wish to add in the future, this should likely
be implemented as an "additional header" a la Johnson Lau's Spoonnet proposal.

Additionally, I think your parameters here pose very significant risk to
the Bitcoin ecosystem broadly.

a) Activating a hard fork with less than 18/24 months (and even then...)
from a fully-audited and supported release of full node software to
activation date poses significant risks to many large software projects
and users. I've repeatedly received feedback from various folks that a
year or more is likely required in any hard fork to limit this risk, and
limited pushback on that, given that the large capacity increase which
SegWit itself provides buys a ton of time.

b) Having a significant discontinuity in block size increase only serves
to confuse and mislead users and businesses, forcing them to rapidly
adapt to a Bitcoin which changed overnight both by hardforking, and by
fees changing suddenly. Instead, having the hard fork activate technical
changes, and then slowly increasing the block size over the following
several years keeps things nice and continuous and also keeps us from
having to revisit ye old blocksize debate again six months after activation.

c) You should likely consider the effect of the many technological
innovations coming down the pipe in the coming months. Technologies like
Lightning, TumbleBit, and even your own RootStock could significantly
reduce fee pressure as transactions move to much faster and more
featureful systems.

Commitments to aggressive hard fork parameters now may leave miners
without much revenue as far out as the next halving (which current
transaction growth trends indicate we'd only barely reach 2MB
of transaction volume, let alone if you consider the effects of users
moving to systems which provide more features for Bitcoin transactions).
This could lead to a precipitous drop in hashrate as miners are no
longer sufficiently compensated.

Remember that the "hashpower required to secure bitcoin" is determined
as a percentage of total Bitcoins transacted on-chain in each block, so
as subsidy goes down, miners need to be paid with fees, not just price
increases. Even if we were OK with hashpower going down compared to the
value it is securing, betting the security of Bitcoin on its price
rising exponentially to match decreasing subsidy does not strike me as a
particularly inspiring tradeoff.

There aren't many great technical solutions to some of these issues, as
far as I'm aware, but it's something that needs to be incredibly
carefully considered before betting the continued security of Bitcoin on
exponential on-chain growth, something which we have historically never
seen.

Matt


On March 31, 2017 5:09:18 PM EDT, Sergio Demian Lerner via bitcoin-dev wrote:
>Hi everyone,
>
>Segwit2Mb is the project to merge into Bitcoin a minimal patch that
>aims to
>untangle the current conflict between different political positions
>regarding segwit activation vs. an increase of the on-chain blockchain
>space through a standard block size increase. It is not a new solution,
>but
>it should be seen 

Re: [bitcoin-dev] Segregated witness p2p layer compatibility

2017-03-27 Thread Matt Corallo via bitcoin-dev
Just to expand a tiny bit here: while testnet has only a few nodes
acting as "bridges", mainnet already has many systems which act as effective
bridges today - there are several relay networks in use which effectively 
bypass the P2P network, including my legacy relay network (which many miners 
historically have used, and I'd expect those who aren't paying attention and 
don't upgrade will not turn off, fixing the issue for them), ViaBTC's super 
aggressive bandwidth-wasting block announcement network which pushes blocks 
from several pools to many nodes globally, and Bitcoin.com's private relay 
network. (Of course many other miners and pools have private relay networks, 
but the several other such networks I'm aware of are already segwit-compatible, 
even for pools not signaling segwit).

Matt

On March 27, 2017 12:22:43 PM PDT, Suhas Daftuar via bitcoin-dev wrote:
>Hi,
>
>There have been two threads recently that have made references to
>peer-to-peer implementation details in Bitcoin Core's Segregated
>Witness
>code that I would like to clarify.
>
>In the thread "Issolated Bitcoin Nodes" (
>https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013765.html),
>there was some discussion about how Bitcoin Core's block download logic
>behaves after segwit activation.  After segwit activation, Bitcoin Core
>nodes will not (currently) attempt to download any blocks from
>non-segwit
>peers (nodes that do not set the NODE WITNESS service bit).  This is a
>bandwidth optimization to prevent a node from downloading a block that
>may
>be invalid only because the sender omitted the witness, requiring
>re-download until the block is received with the required witness data.
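The download policy described above hinges on the NODE_WITNESS service bit from BIP 144 (bit 3 of the services field); a minimal sketch of the check (editor's illustration of the policy, which is a Bitcoin Core implementation detail rather than a protocol requirement):

```cpp
#include <cassert>
#include <cstdint>

static const uint64_t NODE_WITNESS = (1 << 3);  // BIP 144 service bit

bool CanRequestBlockFrom(uint64_t peer_services, bool segwit_active) {
    // Before activation any peer will do; afterwards only peers that can
    // serialize witness data are asked, avoiding wasted re-downloads.
    return !segwit_active || (peer_services & NODE_WITNESS) != 0;
}
```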
>
>But to be clear, non-segwit blocks -- that is, blocks without a witness
>commitment in the coinbase, and whose transactions are serialized
>without
>witnesses, and whose transactions are not spending segwit outputs which
>require a witness -- are evaluated under the same rules as prior,
>pre-segwit versions of the software.  So such non-segwit blocks that
>are
>valid to older, pre-segwit nodes are also valid to segwit-nodes.
>
>In
>https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013796.html,
>Eric Voskuil wrote:
>
>Given the protocol requirements of the segwit proposal this is not the
>> case. A miner running pre-segwit code will produce blocks that no
>> segwit node will ever receive.
>
>
>The phrase "protocol requirements of segwit" is confusing here, because
>there are two different layers that need consideration: the consensus
>protocol layer and the peer-to-peer protocol layer.  But in neither
>layer
>is the behavior of not downloading blocks from non-NODE_WITNESS peers a
>"requirement".  This is an implementation detail in the Bitcoin Core
>code
>that alternate implementations compliant with BIP 144 could implement
>differently.
>
>At the consensus layer, non-segwit blocks (described above) that are
>valid
>to older nodes are also valid to segwit nodes.  That means that if a
>miner
>was using an older, pre-segwit version of Bitcoin Core to produce
>blocks
>after segwit activates, that blocks they find will be valid to all
>nodes.
>
>At the p2p layer, though, segwit-enabled Bitcoin Core nodes will only
>try
>to download those blocks if announced by a segwit-enabled peer.  But
>this
>is not a protocol requirement; other implementations can remain
>compatible
>even they take different approaches here.  (As an example, I could
>imagine
>an implementation that downloaded a new block from any peer, but if the
>block has a witness commitment in the coinbase and was received from a
>non-segwit peer, then the node would attempt re-download from a segwit
>peer.  I'm sure many other reasonable block download strategies could
>be
>devised.)
>
>Still, if a miner wants to continue mining post-segwit activation, but
>using pre-segwit software, they would need a way to relay their blocks
>to a
>segwit-enabled peer.
>
>There are a few ways to do this that I can think of:
>
>- Use the RPC call "submitblock" on a segwit-enabled node.  Calling
>"submitblock" on a Bitcoin Core 0.13.1 (0.13.0 in the case of testnet)
>or
>later node works fine as long as the block is valid (whether or not it
>has
>a witness commitment or witness transactions), and once a
>segwit-enabled
>peer has the block it will relay to other segwit peers.
>
>- Explicitly deliver the block to a segwit node over the p2p network,
>even
>if unrequested.  Currently Bitcoin Core at least will process
>unrequested
>blocks, and valid blocks that update the tip will then be relayed to
>other
>peers.
>
>- Run a bridge node, which advertises NODE_WITNESS and can serialize
>blocks
>with witness data, which downloads blocks even from non-NODE_WITNESS
>peers.  Anyone can do this to bridge the networks for the benefit of
>the
>whole network (I have personally been running a few nodes that do this,
>for
>several months 

Re: [bitcoin-dev] Issolated Bitcoin Nodes

2017-03-23 Thread Matt Corallo via bitcoin-dev
I haven't investigated, but you may be seeing segwit-invalid blocks... 0.13.0+
nodes will enforce segwit as it activated some time ago on testnet, 0.12.X 
nodes will not.

On March 23, 2017 3:37:34 PM PDT, Juan Garavaglia via bitcoin-dev wrote:
>We notice some reorgs in Bitcoin testnet, while reorgs in testnet are
>common and may be part of different tests and experiments, it seems the
>forks are not created by a single user and multiple blocks were mined
>by different users in each chain.  My first impression was that the
>problem was related to network issues but some Bitcoin explorers were
>following one chain while others follow the other one.  Nonetheless,
>well established explorers like blocktrail.com or blockr.io were
>following different chains at different heights which led to me to
>believe that it was not a network issue. After some time, a reorg
>occurs and it all comes back to a normal state as a single chain.
>We started investigating more and we identified that the fork occurs
>with 0.12 nodes; in some situations, 0.12 nodes have longer/different
>chains. The blocks in both chains are valid so something must be
>occurring in the communication between nodes but not related with the
>network itself.
>Long story short, when nodes 0.13+ receive blocks from 0.13+ nodes all
>is ok, and those blocks propagate to older nodes with no issues. But
>when a block tries to be propagated from bitcoind 0.12.+ to newer ones
>those blocks are NOT being propagated to the peers with newer versions
>while these newer blocks are being propagated to peers with older
>versions with no issues.
>My conclusion is that we have a backward compatibility issue between
>0.13.X+ and older versions.
>The issue is simple to replicate: first, get the latest version of
>bitcoind and complete the IBD; once it is at the current height, force it to
>use exclusively one or more peers of versions 0.12.X and older, and you
>will notice that the latest version node will never receive a new
>block.
>Probably some alternative bitcoin implementations act as bridges
>between these two versions and facilitate the chain reorgs.
>I have not yet found any way where/how it can be used in a malicious
>way or be exploited by a miner but in theory Bitcoin 0.13.X+ should
>remain compatible with older ones, but a 0.13+ node may become isolated
>by 0.12 peers, and there is not notice for the node owner.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Fraud proofs for block size/weight

2017-03-22 Thread Matt Corallo via bitcoin-dev
It works today and can be used to prove exact size: the key observation is that 
all you need to show the length and hash of a transaction is the final SHA256 
midstate and chunk (max 64 bytes). It also uses the observation that a valid 
transaction must be at least 60 bytes long for compression (though much of that 
compression possibility goes away if you're proving something other than "too 
large").

On March 22, 2017 1:49:08 PM PDT, Bram Cohen via bitcoin-dev wrote:
>Some questions:
>
>Does this require information to be added to blocks, or can it work
>today
>on the existing format?
>
>Does this count number of transactions or their total length? The block
>limit is in bytes rather than number of transactions, but transaction
>number can be a reasonable proxy if you allow for some false negatives
>but
>want a basic sanity check.
>
>Does this allow for proofs of length in the positive direction,
>demonstrating that a block is good, or does it only serve to show that
>blocks are bad? Ideally we'd like an extension to SPV protocol so light
>clients require proofs of blocks not being too big, given the credible
>threat of there being an extremely large-scale attack on the network of
>that form.
>
>
>On Wed, Mar 22, 2017 at 1:47 AM, Luke Dashjr via bitcoin-dev <
>bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Despite the generalised case of fraud proofs being likely impossible,
>there
>> have recently been regular active proposals of miners attacking with
>simply
>> oversized blocks in an attempt to force a hardfork. This specific
>attack
>> can
>> be proven, and reliably so, since the proof cannot be broken without
>also
>> breaking their attempted hardfork at the same time.
>>
>> While ideally all users ought to use their own full node for
>validation
>> (even
>> when using a light client for their wallet), many bitcoin holders
>still do
>> not. As such, they are likely to need protection from these attacks,
>to
>> ensure
>> they remain on the Bitcoin blockchain.
>>
>> I've written up a draft BIP for fraud proofs and how light clients
>can
>> detect
>> blockchains that are simply invalid due to excess size and/or weight:
>>
>>
>https://github.com/luke-jr/bips/blob/bip-sizefp/bip-sizefp.mediawiki
>>
>> I believe this draft is probably ready for implementation already,
>but if
>> anyone has any idea on how it might first be improved, please feel
>free to
>> make suggestions.
>>
>> Luke
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>


Re: [bitcoin-dev] BIP151 protocol incompatibility

2017-02-13 Thread Matt Corallo via bitcoin-dev
Sorry, I'm still missing it...
So your claim is that a) ignoring incoming messages of a type you do not 
recognize is bad, and thus b) we should be disconnecting/banning peers which 
send us messages we do not recognize (can you spell out why? Anyone is free to 
send your host address messages/transactions they are generating/etc/etc, we 
don't ban nodes for such messages, as that would be crazy - why should we ban a 
peer for sending us an extra 50 bytes which we ignore?), and thus c) this would 
be backwards incompatible with software which does not currently exist?

Usually "backwards incompatible" refers to breaking existing software, not 
breaking theoretical software. Note that, last I heard, BIP 151 is still a 
draft; if such software actually exists we can discuss changing it, but there 
are real wins in sending these messages before VERSION.
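The dispute reduces to what a node does with an unrecognized command. A toy dispatcher contrasting the tolerance Matt defends with the strict disconnect Eric argues for (the handler table and return values are illustrative only, not Bitcoin Core's API):

```python
# Placeholder handlers for a few known p2p commands.
KNOWN_HANDLERS = {
    "version": lambda payload: None,
    "verack":  lambda payload: None,
    "addr":    lambda payload: None,
}

def handle_message(command: str, payload: bytes, strict: bool) -> str:
    """Dispatch one p2p message.

    strict=False: BIP151-style tolerance -- an unknown command such as
    'encinit' is silently dropped, costing only the bandwidth already spent.
    strict=True: treat unknown traffic as a protocol violation and disconnect.
    """
    handler = KNOWN_HANDLERS.get(command)
    if handler is None:
        return "disconnect" if strict else "ignored"
    handler(payload)
    return "handled"

print(handle_message("encinit", b"\x00" * 50, strict=False))  # ignored
print(handle_message("encinit", b"\x00" * 50, strict=True))   # disconnect
```

Under the tolerant policy, the "extra 50 bytes" in the example above are simply discarded; under the strict policy, the same bytes cost the sender its connection.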

On February 13, 2017 12:17:11 PM GMT+01:00, Eric Voskuil wrote:
>On 02/13/2017 03:11 AM, Matt Corallo wrote:
>> I believe many, if not all, of those messages are sent irrespective
>of version number.
>
>In the interest of perfect clarity, see your code:
>
>https://github.com/bitcoin/bitcoin/blob/master/src/net_processing.cpp#L1372-L1403
>
>Inside of the VERACK handler (i.e. after the handshake) there is a peer
>version test before sending SENDCMPCT (and SENDHEADERS).
>
>I have no idea where the fee filter message is sent, if it is sent at
>all. But I have *never* seen any control messages arrive before the
>handshake is complete.
>
>> In any case, I fail to see how adding any additional messages which
>are ignored by old peers amounts to a lack of backward compatibility.
>
>See preceding messages in this thread, I think it's pretty clearly
>spelled out.
>
>e
>
>> On February 13, 2017 11:54:23 AM GMT+01:00, Eric Voskuil wrote:
>>> On 02/13/2017 02:16 AM, Matt Corallo wrote:
 For the reasons Pieter listed, an explicit part of our version
>>> handshake and protocol negotiation is the exchange of
>otherwise-ignored
>>> messages to set up optional features.
>>>
>>> Only if the peer is at the protocol level that allows the message:
>>>
>>> compact blocks:
>>>
>>>
>https://github.com/bitcoin/bitcoin/blob/master/src/protocol.h#L217-L242
>>>
>>> fee filter:
>>>
>>>
>https://github.com/bitcoin/bitcoin/blob/master/src/protocol.h#L211-L216
>>>
>>> send headers:
>>>
>>>
>https://github.com/bitcoin/bitcoin/blob/master/src/protocol.h#L204-L210
>>>
>>> filters:
>>>
>>>
>https://github.com/bitcoin/bitcoin/blob/master/src/protocol.h#L170-L196
>>>
 Peers that do not support this ignore such messages, just as if
>they
>>> had indicated they wouldn't support it, see, eg BIP 152's handshake.
>>> Not
>>> sure why you consider this backwards incompatible, as I would say
>it's
>>> pretty clearly allowing old nodes to communicate just fine.
>>>
>>> No, it is not the same as BIP152. Control messages apart from BIP151
>>> are
>>> not sent until *after* the version is negotiated.
>>>
>>> I assume that BIP151 is different in this manner because it has a
>>> desire
>>> to negotiate encryption before any other communications, including
>>> version.
>>>
>>> e


Re: [bitcoin-dev] BIP151 protocol incompatibility

2017-02-13 Thread Matt Corallo via bitcoin-dev
I believe many, if not all, of those messages are sent irrespective of version 
number.

In any case, I fail to see how adding any additional messages which are ignored 
by old peers amounts to a lack of backward compatibility.

On February 13, 2017 11:54:23 AM GMT+01:00, Eric Voskuil wrote:
>On 02/13/2017 02:16 AM, Matt Corallo wrote:
>> For the reasons Pieter listed, an explicit part of our version
>handshake and protocol negotiation is the exchange of otherwise-ignored
>messages to set up optional features.
>
>Only if the peer is at the protocol level that allows the message:
>
>compact blocks:
>
>https://github.com/bitcoin/bitcoin/blob/master/src/protocol.h#L217-L242
>
>fee filter:
>
>https://github.com/bitcoin/bitcoin/blob/master/src/protocol.h#L211-L216
>
>send headers:
>
>https://github.com/bitcoin/bitcoin/blob/master/src/protocol.h#L204-L210
>
>filters:
>
>https://github.com/bitcoin/bitcoin/blob/master/src/protocol.h#L170-L196
>
>> Peers that do not support this ignore such messages, just as if they
>had indicated they wouldn't support it, see, eg BIP 152's handshake.
>Not
>sure why you consider this backwards incompatible, as I would say it's
>pretty clearly allowing old nodes to communicate just fine.
>
>No, it is not the same as BIP152. Control messages apart from BIP151
>are
>not sent until *after* the version is negotiated.
>
>I assume that BIP151 is different in this manner because it has a
>desire
>to negotiate encryption before any other communications, including
>version.
>
>e


Re: [bitcoin-dev] BIP151 protocol incompatibility

2017-02-13 Thread Matt Corallo via bitcoin-dev
For the reasons Pieter listed, an explicit part of our version handshake and 
protocol negotiation is the exchange of otherwise-ignored messages to set up 
optional features.

Peers that do not support this ignore such messages, just as if they had 
indicated they wouldn't support it, see, eg BIP 152's handshake. Not sure why 
you consider this backwards incompatible, as I would say it's pretty clearly 
allowing old nodes to communicate just fine.

On February 13, 2017 10:36:21 AM GMT+01:00, Eric Voskuil via bitcoin-dev wrote:
>On 02/13/2017 12:47 AM, Pieter Wuille wrote:
>> On Feb 12, 2017 23:58, "Eric Voskuil via bitcoin-dev" wrote:
>> The BIP151 proposal states:
>> 
>> > This proposal is backward compatible. Non-supporting peers will
>ignore
>> the encinit messages.
>> 
>> This statement is incorrect. Sending content that existing nodes
>do not
>> expect is clearly an incompatibility. An implementation that
>ignores
>> invalid content leaves itself wide open to DOS attacks. The
>version
>> handshake must be complete before the protocol level can be
>determined.
>> While it may be desirable for this change to precede the version
>> handshake it cannot be described as backward compatible.
>> 
>> The worst possible effect of ignoring unknown messages is a waste of
>> downstream bandwidth. The same is already possible by being sent addr
>> messages.
>> 
>> Using the protocol level requires a strict linear progression of
>> (allowed) network protocol features, which I expect to become harder
>and
>> harder to maintain.
>> 
>> Using otherwise ignored messages for determining optional features is
>> elegant, simple and opens no new attack vectors. I think it's very
>much
>> preferable over continued increments of the protocol version.
>
>As I said, it *may* be desirable, but it is *not* backward compatible,
>and you do not actually dispute that above.
>
>There are other control messages that qualify as "optional messages"
>but
>these are only sent if the peer is at a version to expect them -
>explicit in their BIPs. All adopted BIPs to date have followed this
>pattern. This is not the same and it is not helpful to imply that it is
>just following that pattern.
>
>As for DOS, waste of bandwidth is not something to be ignored. If a
>peer
>is flooding a node with addr messages, the node can manage it because it
>understands the semantics of addr messages. If a node is required to
>allow any message that it cannot understand it has no recourse. It
>cannot determine whether it is under attack or if the behavior is
>correct and for proper continued operation must be ignored.
>
>This approach breaks any implementation that validates traffic, which
>is
>clearly correct behavior given the existence of the version handshake.
>Your comments make it clear that this is a *change* in network behavior
>- essentially abandoning the version handshake. Whether is is harder to
>maintain is irrelevant to the question of whether it is a break with
>existing protocol.
>
>If you intend for the network to abandon the version handshake and/or
>promote changes that break it I propose that you write up this new
>behavior as a BIP and solicit community feedback. There are a lot of
>devices connected to the network and it would be irresponsible to break
>something as fundamental as the P2P protocol handshake because you have
>a feeling it's going to be hard to maintain.
>
>e


Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2017-01-28 Thread Matt Corallo via bitcoin-dev
Replies inline.

On 01/28/17 07:28, Johnson Lau wrote:
> 
>> On 28 Jan 2017, at 10:32, Matt Corallo wrote:
>>
>> Looks cool, though I have a few comments inline.
>>
>> One general note - it looks like you're letting complexity run away from
>> you a bit here. If the motivation for something is only weak, it's
>> probably not worth doing! A hard fork is something that must be
>> undertaken cautiously because it has so much inherent risk, let's not add
>> tons to it.
>>
> 
> I think the following features are necessary for a hardfork. The rest
> are optional:
> 
> 1. A secondary header
> 2. Anti-replay
> 3. SigHash limit for old scripts
> 4. New tx weight accounting

Agreed.

> Optional:
> 1. New coinbase format is nice but not strictly needed. But this can’t
> be reintroduced later with softfork due to the 100 block maturity
> requirement
> 2. Smooth halving: could be a less elegant softfork
> 3. Merkle sum tree: definitely could be a softfork

Agreed. Would like 1, don't care about 2, not a fan of 3. 2 could even be
implemented easily as a softfork if we allow the
spend-other-coinbase-outputs from 1.

>>
>> On 01/14/17 21:14, Johnson Lau via bitcoin-dev wrote:
>>> I created a second version of forcenet with more experimental features
>>> and stopped my forcenet1 node.
>>>
>>> 1. It has a new header format: Height (4), BIP9 signalling field (4),
>>> hardfork signalling field (2), Hash TMR (32), Hash WMR (32), Merkle sum
>>> root (32), number of tx (4), prev hash (32), timestamp (4), nBits (4),
>>> nonce1 (4), nonce2 (4), nonce3 (compactSize + variable), merkle branches
>>> leading to header C (compactSize + 32 bit hashes)
>>
>> In order of appearance:
>>
>> First of all let's try to minimize header size. We really don't want any
>> more space taken up here than we absolutely need to.
>>
>> I'm super unconvinced that we need more than one merkle tree for
>> transactions. Let's just have one merkle tree whose leaves are
>> transactions hashed 2 ways (without witnesses and only witnesses).
>>
>> Why duplicate the nBits here? Shouldn't the PoW proof be the
>> responsibility of the parent header?
>>
> 
> Without nBits in the header, the checking of PoW becomes contextual and I
> think that may involve too much change. The saving of these 4 bytes, if
> it is really desired, might be done at the p2p level.

Hmm? I'm saying that "the header" should be viewed as both the
"top-level" PoW-proving header, and the sub-header. There is no need to
have nBits in both?

>> I have to agree with Tadge here, variable-length header fields are evil,
>> let's avoid them.
>>
>> Why have merkle branches to yet another header? Let's just leave it as an
>> opaque commitment header (32).
>>
>> Finally, let's not jump through hoops here - the transaction merkle root
>> of the "old-style" (now PoW) header should simply be the hash of the new
>> header. No coinbase transaction, just the hash of the secondary header.
>> This saves space without giving up utility - SPV nodes are already not
>> looking at the coinbase transaction, so no harm in not having one to give.
> 
> 
> Regarding the header format, a big question we never came into consensus
> is the format of the hardfork. Although I designed forcenet to be a
> soft-hardfork, I am now more inclined to suggest a simple hardfork,
> given that the warning system is properly fixed (at the
> minimum: https://github.com/bitcoin/bitcoin/pull/9443)
> 
> Assuming a simple hardfork is made, the next question is whether we want
> to keep existing light wallets functioning without upgrade, cheating
> them by hiding the hash of the new header somewhere in the transaction
> merkle tree.
> 
> We also need to think about the Stratum protocol. Ideally we should not
> require firmware upgrade.
> 
> For the primary 80 bytes header, I think it will always be a fixed size.
> But for the secondary header, I’m not quite sure. Actually, one may
> argue that we already have a secondary header (i.e. coinbase tx), and it
> is not fixed size.

We can safely disable SPV clients post-fork by just keeping the header
format sufficiently compatible with PR#9443 without caring about the
coinbase transaction, which I think should be the goal.

Regarding firmware upgrade, you make a valid point. I suppose we need
something that looks sufficiently like a coinbase transaction that
miners can do nonce-rolling using existing algorithms. Personally, I'd
kinda prefer something like a two-leaf merkle tree root as the merkle
root in the "primary 80-byte header" (can we agree on terminology for
this before we go any further?) - the left leaf is a
coinbase-transaction-looking thing, the right leaf the hash of the new
block header.
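That two-leaf arrangement keeps the primary header's merkle computation identical to today's, so existing nonce-rolling firmware keeps working; only the tree contents change. A sketch using Bitcoin's double-SHA256 parent rule (the function names are mine, and the leaf semantics follow my reading of the suggestion above):

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    # Bitcoin's hash function: SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def primary_merkle_root(coinbase_txid: bytes, new_header_hash: bytes) -> bytes:
    """Two-leaf merkle root for the primary 80-byte header.

    Left leaf: a coinbase-transaction-looking thing (for stratum/ASIC
    compatibility). Right leaf: the hash of the new-format block header.
    A merkle parent is the double-SHA256 of its concatenated children.
    """
    assert len(coinbase_txid) == 32 and len(new_header_hash) == 32
    return sha256d(coinbase_txid + new_header_hash)
```

An old SPV client would see a normal-looking merkle root with two leaves; a new client checks that the right leaf commits to the secondary header.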

>>>
>>> 4. A totally new way to define tx weight. Tx weight is the maximum of
>>> the following metrics:
>>> a. SigHashSize (see the bip in point 3)
>>> b. Witness serialised size * 2 * 90
>>> c. Adjusted size * 90. Adjusted size = tx weight (BIP141) + (number of
>>> non-OP_RETURN outputs - number of inputs) * 41 * 4
>>> d. nSigOps * 50 * 90. All SigOps are equal (no witness scaling). For
>>> non-segwit txs, the sigops in output scriptPubKey are not counted, while
>>> the sigops in input scriptPubKey are counted.

Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2017-01-27 Thread Matt Corallo via bitcoin-dev
Oops, forgot to mention, in the "parent" (ie old) block header, we should:

1) fix the version field so it's a static constant
2) swap first 2 bytes of the merkle root with the timestamp's two
high-order bytes (preferably more, I'm not sure how much ASIC hardware
has timestamp-rolling in it anymore, but if there is none left we should
take all 4 bytes from the timestamp field).

Matt

On 01/28/17 02:32, Matt Corallo via bitcoin-dev wrote:
> Looks cool, though I have a few comments inline.
> 
> One general note - it looks like you're letting complexity run away from
> you a bit here. If the motivation for something is only weak, it's
> probably not worth doing! A hard fork is something that must be
> undertaken cautiously because it has so much inherent risk, let's not add
> tons to it.
> 
> Matt
> 
> On 01/14/17 21:14, Johnson Lau via bitcoin-dev wrote:
>> I created a second version of forcenet with more experimental features
>> and stopped my forcenet1 node.
>>
>> 1. It has a new header format: Height (4), BIP9 signalling field (4),
>> hardfork signalling field (2), Hash TMR (32), Hash WMR (32), Merkle sum
>> root (32), number of tx (4), prev hash (32), timestamp (4), nBits (4),
>> nonce1 (4), nonce2 (4), nonce3 (compactSize + variable), merkle branches
>> leading to header C (compactSize + 32 bit hashes)
> 
> In order of appearance:
> 
> First of all let's try to minimize header size. We really don't want any
> more space taken up here than we absolutely need to.
> 
> I'm super unconvinced that we need more than one merkle tree for
> transactions. Let's just have one merkle tree whose leaves are
> transactions hashed 2 ways (without witnesses and only witnesses).
> 
> Why duplicate the nBits here? Shouldn't the PoW proof be the
> responsibility of the parent header?
> 
> I have to agree with Tadge here, variable-length header fields are evil,
> let's avoid them.
> 
> Why have merkle branches to yet another header? Let's just leave it as an
> opaque commitment header (32).
> 
> Finally, let's not jump through hoops here - the transaction merkle root
> of the "old-style" (now PoW) header should simply be the hash of the new
> header. No coinbase transaction, just the hash of the secondary header.
> This saves space without giving up utility - SPV nodes are already not
> looking at the coinbase transaction, so no harm in not having one to give.
> 
>> 2. Anti-tx-replay. If, after masking the highest byte, the tx nVersion
>> is >=3, the sighash for both segwit and non-segwit outputs is calculated
>> with BIP143, except 0x200 is added to the nHashType. Such signatures
>> are invalid for legacy nodes. But since they are non-std due to the
>> nVersion, they won’t be relayed nor validated by legacy nodes. This also
>> removes the O(n^2) sighash problem when spending non-segwit outputs.
>> (anti-replay is a long story and I will discuss in a separate post/BIP)
> 
> Will comment on the anti-replay post.
> 
>> 3. Block sighashlimit
>> (https://github.com/jl2012/bips/blob/sighash/bip-sighash.mediawiki). Due
>> to point 2, SigHashSize is counted only for legacy non-segwit inputs
>> (with masked tx nVersion < 3). We have to support legacy signature to
>> make sure time-locked txs made before the hard fork are still valid.
>>
>> 4. A totally new way to define tx weight. Tx weight is the maximum of
>> the following metrics:
>> a. SigHashSize (see the bip in point 3)
>> b. Witness serialised size * 2 * 90
>> c. Adjusted size * 90. Adjusted size = tx weight (BIP141) + (number of
>> non-OP_RETURN outputs - number of inputs) * 41 * 4
>> d. nSigOps * 50 * 90. All SigOps are equal (no witness scaling). For
>> non-segwit txs, the sigops in output scriptPubKey are not counted, while
>> the sigops in input scriptPubKey are counted.
> 
> This is definitely too much. On the one hand it's certainly nice to be
> able to use max() for limits, and nice to add all the reasonable limits
> we might want to, but on the other hand this can make things like coin
> selection super complicated - how do you take into consideration the 4
> different limits? Can we do something much, much simpler like
> max(serialized size with some input discount, nSigOps * X) (which is
> what we effectively already have in our mining code)?
> 
>> 90 is the scaling factor for SigHashSize, to maintain the 1:90 ratio
>> (see the BIP in point 3)
>> 50 is the scaling factor for nSigOps, maintaining the 1:50 ratio in BIP141
>>
>> Rationale for adjusted size: 4 is witness scaling factor. 41 is the
>> minimum size for an input (32 hash + 4 index + 4 nSequence + 1
>> scriptSig). This requires people to pre-pay the majority of the fee of
>> spending an UTXO. It makes creation of UTXOs more expensive and
>> spending of UTXOs cheaper, creating a strong incentive to limit the
>> growth of the UTXO set.

Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2017-01-27 Thread Matt Corallo via bitcoin-dev
Looks cool, though I have a few comments inline.

One general note - it looks like you're letting complexity run away from
you a bit here. If the motivation for something is only weak, it's
probably not worth doing! A hard fork is something that must be
undertaken cautiously because it has so much inherent risk, let's not add
tons to it.

Matt

On 01/14/17 21:14, Johnson Lau via bitcoin-dev wrote:
> I created a second version of forcenet with more experimental features
> and stopped my forcenet1 node.
> 
> 1. It has a new header format: Height (4), BIP9 signalling field (4),
> hardfork signalling field (2), Hash TMR (32), Hash WMR (32), Merkle sum
> root (32), number of tx (4), prev hash (32), timestamp (4), nBits (4),
> nonce1 (4), nonce2 (4), nonce3 (compactSize + variable), merkle branches
> leading to header C (compactSize + 32 bit hashes)

In order of appearance:

First of all let's try to minimize header size. We really don't want any
more space taken up here than we absolutely need to.

I'm super unconvinced that we need more than one merkle tree for
transactions. Let's just have one merkle tree whose leaves are
transactions hashed 2 ways (without witnesses and only witnesses).

Why duplicate the nBits here? Shouldn't the PoW proof be the
responsibility of the parent header?

I have to agree with Tadge here, variable-length header fields are evil,
let's avoid them.

Why have merkle branches to yet another header? Let's just leave it as an
opaque commitment header (32).

Finally, let's not jump through hoops here - the transaction merkle root
of the "old-style" (now PoW) header should simply be the hash of the new
header. No coinbase transaction, just the hash of the secondary header.
This saves space without giving up utility - SPV nodes are already not
looking at the coinbase transaction, so no harm in not having one to give.

> 2. Anti-tx-replay. If, after masking the highest byte, the tx nVersion
> is >=3, the sighash for both segwit and non-segwit outputs is calculated
> with BIP143, except 0x200 is added to the nHashType. Such signatures
> are invalid for legacy nodes. But since they are non-std due to the
> nVersion, they won’t be relayed nor validated by legacy nodes. This also
> removes the O(n^2) sighash problem when spending non-segwit outputs.
> (anti-replay is a long story and I will discuss in a separate post/BIP)

Will comment on the anti-replay post.
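For concreteness, the version-masking and sighash-flag rule in point 2 above can be sketched as follows (the constant name and the masking width are my reading of the quoted proposal, not normative):

```python
REPLAY_FLAG = 0x200  # assumed flag added to nHashType for opted-in txs

def effective_hashtype(n_version: int, n_hashtype: int) -> int:
    """Compute the nHashType used for signing under the proposed rule.

    The highest byte of the tx nVersion is masked off before the >= 3
    comparison. Opted-in txs get a BIP143-style digest with REPLAY_FLAG
    set, which legacy nodes treat as an invalid signature -- hence no
    cross-chain replay.
    """
    masked = n_version & 0x00FFFFFF
    if masked >= 3:
        return n_hashtype | REPLAY_FLAG
    return n_hashtype  # pre-fork semantics; old time-locked txs stay valid

print(hex(effective_hashtype(3, 0x01)))           # 0x201
print(hex(effective_hashtype(0xFF000002, 0x01)))  # 0x1 (masked version is 2)
```

The second call shows why the mask matters: a tx whose high byte is set but whose masked version is below 3 keeps the legacy sighash.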

> 3. Block sighashlimit
> (https://github.com/jl2012/bips/blob/sighash/bip-sighash.mediawiki). Due
> to point 2, SigHashSize is counted only for legacy non-segwit inputs
> (with masked tx nVersion < 3). We have to support legacy signature to
> make sure time-locked txs made before the hard fork are still valid.
> 
> 4. A totally new way to define tx weight. Tx weight is the maximum of
> the following metrics:
> a. SigHashSize (see the bip in point 3)
> b. Witness serialised size * 2 * 90
> c. Adjusted size * 90. Adjusted size = tx weight (BIP141) + (number of
> non-OP_RETURN outputs - number of inputs) * 41 * 4
> d. nSigOps * 50 * 90. All SigOps are equal (no witness scaling). For
> non-segwit txs, the sigops in output scriptPubKey are not counted, while
> the sigops in input scriptPubKey are counted.

This is definitely too much. On the one hand it's certainly nice to be
able to use max() for limits, and nice to add all the reasonable limits
we might want to, but on the other hand this can make things like coin
selection super complicated - how do you take into consideration the 4
different limits? Can we do something much, much simpler like
max(serialized size with some input discount, nSigOps * X) (which is
what we effectively already have in our mining code)?
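Side by side, the two positions look like this: Johnson's four-metric maximum versus the simpler two-term form suggested above. The constants come from the quoted text; the discount and the sigop factor X are left as parameters since they are not pinned down here.

```python
def weight_johnson(sighash_size, witness_size, adjusted_size, sigops):
    """Proposed tx weight: the maximum of four scaled metrics.

    adjusted_size = BIP141 weight + (non-OP_RETURN outputs - inputs) * 41 * 4,
    computed by the caller.
    """
    return max(
        sighash_size,            # SigHashSize metric (1:90 ratio built in)
        witness_size * 2 * 90,   # witness serialized size
        adjusted_size * 90,      # size with the UTXO-growth pre-payment
        sigops * 50 * 90,        # unified sigop cost, no witness scaling
    )

def weight_matt(discounted_size, sigops, x):
    # Counter-proposal: effectively what the mining code already does.
    return max(discounted_size, sigops * x)

print(weight_johnson(1000, 100, 400, 4))  # 36000 (adjusted size dominates)
```

The coin-selection complaint is visible here: with four terms inside the max(), a wallet cannot tell in advance which metric will price its transaction.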

> 90 is the scaling factor for SigHashSize, to maintain the 1:90 ratio
> (see the BIP in point 3)
> 50 is the scaling factor for nSigOps, maintaining the 1:50 ratio in BIP141
> 
> Rationale for adjusted size: 4 is witness scaling factor. 41 is the
> minimum size for an input (32 hash + 4 index + 4 nSequence + 1
> scriptSig). This requires people to pre-pay the majority of the fee of
> spending an UTXO. It makes creation of UTXOs more expensive and
> spending of UTXOs cheaper, creating a strong incentive to limit the
> growth of the UTXO set.
> 
> Rationale for taking the maximum of different metrics: this indirectly
> sets an upper bound on block resources for _every_ metric, while making the
> tx fee estimation a linear function. Currently, there are 2 block resource
> limits: block weight and nSigOp cost (BIP141). However, since users do
> not know what other txs are included in the next block, it is
> difficult to determine whether tx weight or nSigOp cost is a more
> important factor in determining the tx fee. (This is not a real problem
> now, because weight is more important in most cases). With an unified
> definition of tx weight, the fee estimation becomes a linear problem.
> 
> Translating to the new metric, the current BIP141 limit is 360,000,000. This
> is equivalent to 360MB of 
