Re: [bitcoin-dev] Soft-forks and schnorr signature aggregation

2018-03-27 Thread Bram Cohen via bitcoin-dev
On Mon, Mar 26, 2018 at 11:34 PM, Anthony Towns  wrote:

> On Wed, Mar 21, 2018 at 05:47:01PM -0700, Bram Cohen via bitcoin-dev wrote:
> > [...] Most unused opcodes should be reclaimed as RETURN_VALID,
> > but there should still be one OP_NOP and there should be a 'real'
> RETURN_VALID,
> > which (a) is guaranteed to not be soft forked into something else in the
> > future, and (b) doesn't have any parsing weirdness.
>
> What's the reason for those? I could see an argument for RETURN_VALID, I
> guess:
>
>   condA IF condB IF condC IF [pathA] RETURN_VALID ENDIF ENDIF ENDIF [pathB]
>
> is probably simpler and saves 3 bytes compared to:
>
>   1 condA IF condB IF condC IF [pathA] NOT ENDIF ENDIF ENDIF IF [pathB]
> ENDIF
>
> but that doesn't seem crazy compelling?


Mostly yes, it's for that case, and also for:

   condA IF RETURN_VALID ENDIF condB IF RETURN_VALID ENDIF condC

Technically that can be done with fewer opcodes using OP_BOOLOR, but maybe
in the future there will be some incentive for short-circuit evaluation.
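
For concreteness (assuming condA, condB, and condC each just leave a
boolean on the stack), the OP_BOOLOR version would look something like:

   condA condB BOOLOR condC BOOLOR

which evaluates every condition, whereas the RETURN_VALID form stops at
the first one that succeeds.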

But there's also the general principle that it's only one opcode, and if
there are a lot of things which look like RETURN_VALID, there should be one
thing which actually is RETURN_VALID.


> I don't see a reason to just keep one OP_NOP though.
>

Mostly based on momentum, because there are several of them there right now.
If no one else wants to defend it, I won't either.


> > By far the most expedient option is (e) cause a RETURN_VALID at parse
> time.
> > There's even precedent for this sort of behavior in the other direction
> with
> > disabled opcodes causing failure at parse time even if they aren't being
> > executed.
>
> You're probably right. That still doesn't let you implement intercal's
> COMEFROM statement as a new opcode, of course. :)
>

That can be in the hardfork wishlist :-)


> > A lot can be said about all the options, but one thing I feel like
> snarking
> > about is that if you get rid of IFs using MAST, then it's highly unclear
> > whether OP_DEPTH should be nuked as well. My feeling is that it should
> and that
> > strict parsing should require that the bottom thing in the witness gets
> > referenced at some point.
>
> I guess when parsing the script you could perhaps check if each witness
> item could have been replaced with OP_FALSE or OP_1 and still get the
> same result, and consider the transaction non-standard if so?
>

Essentially all opcodes, including OP_PICK, make clear at runtime how deep
they go, so anything below the max depth can be safely eliminated (or used
as grounds for rejection in strict mode). The big exception is OP_DEPTH,
which totally mangles those assumptions. It's trivial to make scripts using
OP_DEPTH which become invalid when things are added below the stack, then
go back to being valid again when more things are added, even though the
extra items are never accessed.
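
A toy example, ignoring cleanstack-type rules and using only opcodes that
exist and are enabled today:

   DEPTH 1 EQUAL DEPTH 4 EQUAL BOOLOR

With a single witness item this passes (the first DEPTH sees 1). Add one
item below and it fails (the two DEPTHs see 2 and 3). Add a third and it
passes again (the second DEPTH now sees 4), even though the items below
the top are never touched.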


>
> > Hacking in a multisig opcode isn't a horrible idea, but it is very stuck
> > specifically on m-of-n and doesn't support more complex formulas for how
> > signatures can be combined, which makes it feel hacky and weird.
>
> Hmm? The opcode I suggested works just as easily with arbitrary formulas,
> eg, "There must be at least 1 signer from pka{1,2,3}, and 3 signers all
> up, except each of pkb{1,2,3,4,5,6} only counts for half":
>
>   0 pkb6 pkb5 pkb4 pkb3 pkb2 pkb1 pka3 pka2 pka1 9 CHECK_AGGSIG_VERIFY
> (declare pubkeys)
>   0b111 CHECK_AGG_SIGNERS VERIFY
> (one of pka{1,2,3} must sign)
>   0b001 CHECK_AGG_SIGNERS
>   0b010 CHECK_AGG_SIGNERS ADD
>   0b100 CHECK_AGG_SIGNERS ADD
>   DUP ADD
> (pka{1,2,3} count double)
>   0b01000 CHECK_AGG_SIGNERS ADD
>   0b1 CHECK_AGG_SIGNERS ADD
>   0b00010 CHECK_AGG_SIGNERS ADD
>   0b00100 CHECK_AGG_SIGNERS ADD
>   0b01000 CHECK_AGG_SIGNERS ADD
>   0b1 CHECK_AGG_SIGNERS ADD
> (pkb{1..6} count single)
>   6 EQUAL
> (summing to a total of 3 doubled)
>
> Not sure that saves it from being "hacky and weird" though...
>

That is very hacky and weird. Doing MAST on lots of possibilities is always
reasonably elegant, and it only gets problematic when the number of
possibilities is truly massive.

It's also the case that BLS can support complex key agreement schemes
without even giving away that it isn't a simple single signature. Just
saying.


[bitcoin-dev] Optimized Header Sync

2018-03-27 Thread Jim Posen via bitcoin-dev
Based on some ideas that were thrown around in this thread (
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015385.html),
I have been working on a P2P extension that will allow faster header sync
mechanisms. The one-sentence summary is that by encoding headers more
efficiently (eg. omitting prev_hash) and downloading evenly spaced
checkpoints throughout history (say every 1,000th) from all peers first, we
could speed up header sync, which would be a huge improvement for light
clients. Here is a draft of the BIP:
https://github.com/jimpo/bips/blob/headers-sync/headersv2.mediawiki. The
full text is below as well.

I'd love to hear any feedback people have.

--

== Abstract ==

This BIP describes a P2P network extension enabling faster, more
reliable methods for syncing the block header chain. New P2P messages
are proposed as more efficient replacements for
getheaders and headers during initial block
download. The proposed header download protocol reduces bandwidth
usage by ~40%-50% and supports downloading header ranges from
multiple peers in parallel, which is not possible with the current
mechanism. This also enables sync strategies with better resistance to
denial-of-service attacks.

== Motivation ==

Since 2015, optimized Bitcoin clients have fetched all block headers before
the blocks themselves in order to avoid downloading blocks that are not
part of the most-work chain. The protocol currently in use for fetching
headers leaves room for further optimization, specifically by
compressing header data and downloading more headers simultaneously
(https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015385.html).
Any savings here should have a large impact given that both full nodes
and light clients must sync the header chain as a first step, and that
the time to validate and index the headers is negligible compared to
the time spent downloading them from the network. Furthermore, some
current implementations of headers syncing rely on preconfigured
checkpoints to discourage attackers attempting to fill up a victim's
disk space with low-work headers. The proposed messages enable sync
strategies that are resilient against these types of attacks. The P2P
messages are designed to be flexible, supporting multiple header sync
strategies and leaving room for future innovations, while also remaining
compact.

== Definitions ==

''double-SHA256'' is a hash algorithm defined by two invocations of
SHA-256: double-SHA256(x) = SHA256(SHA256(x)).
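
As a purely illustrative sketch in Python:

  from hashlib import sha256

  def double_sha256(data: bytes) -> bytes:
      # double-SHA256(x) = SHA256(SHA256(x))
      return sha256(sha256(data).digest()).digest()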

== Specification ==

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.

=== New Structures ===

==== Compressed Headers ====

Bitcoin headers are serialized by default in 80 bytes as follows:

{| class="wikitable"
! Field Name
! Data Type
! Byte Size
! Description
|-
| version
| int32_t
| 4
| Block version information
|-
| prev_block
| uint256
| 32
| The hash of the previous block
|-
| merkle_root
| uint256
| 32
| The root hash of the transaction Merkle tree
|-
| timestamp
| uint32_t
| 4
| A Unix timestamp of the block creation time, as reported by the miner
|-
| bits
| uint32_t
| 4
| The calculated difficulty target for this block
|-
| nonce
| uint32_t
| 4
| A nonce that is set such that the header's hash meets the difficulty target
|}

When deserializing a correctly-formed sequence of block headers
encoded in this way, it can be noted that:

* The prev_block field should always match the double-SHA256 hash of
the previous header, making it redundant
* According to Bitcoin consensus rules, the bits field only changes
every 2016 blocks
* The version often matches that of a recent ancestor block
* The timestamp is often a small delta from the preceding header's timestamp

To take advantage of these possible savings, this document defines a
variable-sized ''compressed encoding'' of block headers that occur in
a range. Note that no savings are possible when serializing a single
header; it should only be used for vectors of sequential headers. The
full headers are reconstructed using data from previous headers in the
range. The serialization begins with an ''encoding indicator'', which
is a bitfield specifying how each field is serialized. The bits of the
indicator have the following semantics:

{| class="wikitable"
! Bit Index
! Reconstruction
! Description
|-
| 0
| prev_block[i] = DSHA256(header[i-1])
| The prev_block field is omitted and assigned the double-SHA256
hash of the previous uncompressed header.
|-
| 1
| nbits[i] = nbits[i-1]
| The nbits field is omitted and matches that of the previous header.
|-
| 2
| timestamp[i] = timestamp[i-1] + value
| The timestamp is replaced by a 2-byte signed short int, representing
an offset from the previous block's timestamp.
|-
| 3
|
| Interpreted along with bits 4 & 5.
|-
| 4
|
| Interpreted along with bits 3 & 5.
|-

Re: [bitcoin-dev] {sign|verify}message replacement

2018-03-27 Thread Karl Johan Alm via bitcoin-dev
Pieter,

Thanks for the feedback. Comments below:

On Mon, Mar 26, 2018 at 5:53 PM, Pieter Wuille  wrote:
> One solution is to include a version number in the signature, which
> explicitly corresponds to a set of validation flags. When the version number
> is something a verifier doesn't know, it can be reported as inconclusive
> (it's relying on features you don't know about).
>
> Another solution is to verify twice; once with all consensus rules you know
> about, and once with standardness rules. If they're both valid, the
> signature is valid. If they're both invalid, the signature is invalid. If
> they're different (consensus valid but standardness invalid), you report the
> signature validation as inconclusive (it's relying on features you don't
> know about). This approach works as long as new features only use previous
> standardness-invalid scripts, but perhaps a version number is still needed
> to indicate the standardness flags.

I think the double-verify approach seems promising. I assume old nodes
consider transactions that rely on new consensus rules to be non-standard
but valid. If this is always the case, it may be an idea to simply fail
verification with a message indicating that the node is unable to verify
due to unknown consensus rules.
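
In pseudocode, the way I read that suggestion (verify_script and the two
flag sets are placeholders for illustration, not an actual API):

  def verify_signed_message(script_pubkey, witness, message_hash):
      consensus_ok = verify_script(script_pubkey, witness, message_hash,
                                   flags=CONSENSUS_FLAGS)
      standard_ok = verify_script(script_pubkey, witness, message_hash,
                                  flags=CONSENSUS_FLAGS | STANDARDNESS_FLAGS)
      if consensus_ok and standard_ok:
          return "valid"
      if not consensus_ok:
          return "invalid"
      # consensus-valid but standardness-invalid: probably relies on rules
      # this verifier does not know about
      return "inconclusive"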

>> RPC commands:
>>
>> sign   [=false]
>
> Why not extend the existing signmessage/verifymessage RPC? For legacy
> addresses it can fall back to the existing signature algorithm, while using
> the script-based approach for all others.

Yes, I initially thought it would be better not to reuse it, as the
legacy behavior could be depended on god knows where, but I think
adding a legacy mode, or simply doing it the old way for 1xx addresses,
is sufficient.

>> (**) If  is true,  is the sighash, otherwise
>> sighash=sha256d(message).
>
>
> That's very dangerous I'm afraid. It could be used to trick someone into
> signing off on an actual transaction, if you get them to sign a "random
> looking" prehashed message. Even if you have a prehashed message, there is
> no problem with treating it as hex input to a second hashing step, so I
> think the prehashed option isn't needed. It's why the existing message
> signing functionality always forcibly prefixes "Bitcoin signed message:", to
> avoid signing something that unintentionally corresponds to a message
> intended for another goal.

Eek.. good point...


Re: [bitcoin-dev] Soft-forks and schnorr signature aggregation

2018-03-27 Thread Anthony Towns via bitcoin-dev
On Wed, Mar 21, 2018 at 05:47:01PM -0700, Bram Cohen via bitcoin-dev wrote:
> [...] Most unused opcodes should be reclaimed as RETURN_VALID,
> but there should still be one OP_NOP and there should be a 'real' 
> RETURN_VALID,
> which (a) is guaranteed to not be soft forked into something else in the
> future, and (b) doesn't have any parsing weirdness.

What's the reason for those? I could see an argument for RETURN_VALID, I guess:

  condA IF condB IF condC IF [pathA] RETURN_VALID ENDIF ENDIF ENDIF [pathB]

is probably simpler and saves 3 bytes compared to:

  1 condA IF condB IF condC IF [pathA] NOT ENDIF ENDIF ENDIF IF [pathB] ENDIF

but that doesn't seem crazy compelling? I don't see a reason to just keep
one OP_NOP though.

> The parsing weirdness of
> all the unclaimed opcodes is interesting. Because everything in an IF clause
> needs to be parsed in order to find where the ELSE is, you have a few options
> for dealing with an unknown opcode getting parsed in an unexecuted section of
> code. They are (a) avoid the problem completely by exterminating IF and 
> MASTing
> (b) avoid the problem completely by getting rid of IF and adding IFJUMP,
> IFNJUMP, and JUMP which specify a number of bytes (this also allows for script
> merkleization) (c) require all new opcodes have fixed length 1, even after
> they're soft forked, (d) do almost like (c) but require that on new soft forks
> people hack their old scripts to still parse properly by avoiding the OP_ELSE
> in inopportune places (yuck!) (e) make it so that the unknown opcodes cause a
> RETURN_VALID even when they're parsed, regardless of whether they're being
> executed.

I was figuring (c), fwiw, and assuming that opcodes will just be about
manipulating stack values and marking the script as invalid, rather than,
say, introducing new flow control ops.

> By far the most expedient option is (e) cause a RETURN_VALID at parse time.
> There's even precedent for this sort of behavior in the other direction with
> disabled opcodes causing failure at parse time even if they aren't being
> executed.

You're probably right. That still doesn't let you implement intercal's
COMEFROM statement as a new opcode, of course. :)

> A lot can be said about all the options, but one thing I feel like snarking
> about is that if you get rid of IFs using MAST, then it's highly unclear
> whether OP_DEPTH should be nuked as well. My feeling is that it should and 
> that
> strict parsing should require that the bottom thing in the witness gets
> referenced at some point.

I guess when parsing the script you could perhaps check if each witness
item could have been replaced with OP_FALSE or OP_1 and still get the
same result, and consider the transaction non-standard if so?

> Hacking in a multisig opcode isn't a horrible idea, but it is very stuck
> specifically on m-of-n and doesn't support more complex formulas for how
> signatures can be combined, which makes it feel hacky and weird.

Hmm? The opcode I suggested works just as easily with arbitrary formulas,
eg, "There must be at least 1 signer from pka{1,2,3}, and 3 signers all
up, except each of pkb{1,2,3,4,5,6} only counts for half":

  0 pkb6 pkb5 pkb4 pkb3 pkb2 pkb1 pka3 pka2 pka1 9 CHECK_AGGSIG_VERIFY
(declare pubkeys)
  0b111 CHECK_AGG_SIGNERS VERIFY
(one of pka{1,2,3} must sign)
  0b001 CHECK_AGG_SIGNERS
  0b010 CHECK_AGG_SIGNERS ADD
  0b100 CHECK_AGG_SIGNERS ADD
  DUP ADD
(pka{1,2,3} count double)
  0b01000 CHECK_AGG_SIGNERS ADD
  0b1 CHECK_AGG_SIGNERS ADD
  0b00010 CHECK_AGG_SIGNERS ADD
  0b00100 CHECK_AGG_SIGNERS ADD
  0b01000 CHECK_AGG_SIGNERS ADD
  0b1 CHECK_AGG_SIGNERS ADD
(pkb{1..6} count single)
  6 EQUAL
(summing to a total of 3 doubled)

Not sure that saves it from being "hacky and weird" though...

(There are different ways you could do "CHECK_AGG_SIGNERS": for
instance, take a bitmask of keys and return the bitwise-and with the
keys that signed, or take a bitmask and just return the number of keys
matching that bitmask that signed, or take a pubkey index and return a
boolean whether that key signed)
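
For example, the "count the matching signers" variant would behave
roughly like this toy Python model (purely illustrative; `signed` is a
bitmask of the keys that actually contributed to the aggregate
signature):

  def check_agg_signers(mask, signed):
      # both arguments are bitmasks over the declared pubkeys
      return bin(mask & signed).count('1')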

Cheers,
aj
