[bitcoin-dev] Soft Fork Activation & Enforcement w/o Signaling?

2018-03-21 Thread Samad Sajanlal via bitcoin-dev
Is it possible to activate soft forks such as BIP65 and BIP66 without prior
signaling from miners? I noticed in chainparams.cpp that there are block
heights where the enforcement begins.
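
For context, Bitcoin Core 0.15 buries BIP65/BIP66 enforcement as per-chain
heights in chainparams.cpp and turns them into script-verification flags at
validation time. A minimal, self-contained C++ sketch of that pattern follows;
the heights, flag values and main() driver are placeholders, not values taken
from any real chain:

    // Sketch of height-based ("buried") soft-fork enforcement, modelled on the
    // pattern Bitcoin Core 0.15 uses (consensus.BIP65Height / BIP66Height in
    // chainparams.cpp, consumed when building script flags). Heights are fake.
    #include <cstdint>
    #include <iostream>

    struct ConsensusParams {
        int BIP65Height;   // CHECKLOCKTIMEVERIFY enforced from this height
        int BIP66Height;   // strict DER signatures enforced from this height
    };

    // Script-verification flag bits (illustrative values).
    enum ScriptFlags : uint32_t {
        SCRIPT_VERIFY_NONE                = 0,
        SCRIPT_VERIFY_DERSIG              = (1U << 2),  // BIP66
        SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY = (1U << 9),  // BIP65
    };

    uint32_t GetBlockScriptFlags(int nHeight, const ConsensusParams& params)
    {
        uint32_t flags = SCRIPT_VERIFY_NONE;
        if (nHeight >= params.BIP66Height) flags |= SCRIPT_VERIFY_DERSIG;
        if (nHeight >= params.BIP65Height) flags |= SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY;
        return flags;
    }

    int main()
    {
        // Hypothetical activation height scheduled ~2 months after release.
        ConsensusParams params{ /*BIP65Height=*/500000, /*BIP66Height=*/500000 };
        std::cout << "flags at 499999: " << GetBlockScriptFlags(499999, params) << "\n"
                  << "flags at 500000: " << GetBlockScriptFlags(500000, params) << "\n";
    }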

I understand this is already active on bitcoin. I'm working on a project
that is a clone of a clone of bitcoin, and we currently do not have BIP65
or BIP66 enforced - no signaling of these soft forks either (most of the
network is on a source code fork of bitcoin 0.9). This project does not, and
never will, attempt to replace bitcoin - we know that without bitcoin
our project could never exist, so we owe a great deal of gratitude to the
bitcoin developers.

If the entire network upgrades to the correct version of the software
(based on bitcoin 0.15), which includes the block height that has
enforcement, can we simply skip over the signaling and go straight into
activation/enforcement?

At this time we are lucky that our network is very small, so it is
reasonable to assume that the whole network will upgrade their clients
within a short window (~2 weeks). We would schedule the activation ~2
months out from when the client is released, just to ensure everyone has
time to upgrade.

We have been stuck on the 0.9 code branch and my goal is to bring it up to
0.15 at least, so that we can implement Segwit and other key features that
bitcoin has introduced. The 0.15 client currently works with regard to
sending and receiving transactions, but the soft forks are not active. I
understand that activating them will segregate the 0.15 clients onto their
own fork, which is why I'd like to understand the repercussions of doing it
without any signaling beforehand. I would also prefer not to have to make
intermediate releases such as 0.10, 0.11, etc. just to get the soft forks
activated.

Another related question - does the block version get bumped up
automatically at the time that a soft fork activates, or is there
additional stuff that I need to do within the code to ensure it bumps up at
the same time? From what I saw in the code it appears that it will bump up
automatically, but I would like some confirmation on that.
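
For comparison, the BIP9 versionbits scheme only ORs a bit into nVersion for
deployments that are still signaling (STARTED or LOCKED_IN); as far as I can
tell from the 0.15-era code, buried height-based deployments never enter that
loop at all. A rough, self-contained sketch of the ComputeBlockVersion-style
pattern, with an illustrative deployment table rather than anything from a
real codebase:

    // Rough model of a BIP9-style ComputeBlockVersion(): the miner ORs a
    // version bit into nVersion only for deployments that are STARTED or
    // LOCKED_IN. Height-buried soft forks set no bit here at all.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    static const int32_t VERSIONBITS_TOP_BITS = 0x20000000;  // BIP9 base version

    enum class ThresholdState { DEFINED, STARTED, LOCKED_IN, ACTIVE, FAILED };

    struct Deployment {
        int bit;               // which nVersion bit this deployment signals on
        ThresholdState state;  // state as of the previous block
    };

    int32_t ComputeBlockVersion(const std::vector<Deployment>& deployments)
    {
        int32_t nVersion = VERSIONBITS_TOP_BITS;
        for (const auto& dep : deployments) {
            if (dep.state == ThresholdState::STARTED ||
                dep.state == ThresholdState::LOCKED_IN) {
                nVersion |= (1 << dep.bit);
            }
        }
        return nVersion;
    }

    int main()
    {
        std::vector<Deployment> deps = {
            {1, ThresholdState::ACTIVE},   // already active: no longer signaled
            {2, ThresholdState::STARTED},  // currently signaling on bit 2
        };
        std::cout << std::hex << "nVersion = 0x" << ComputeBlockVersion(deps) << "\n";
    }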

Regards,
Samad


Re: [bitcoin-dev] Soft-forks and schnorr signature aggregation

2018-03-21 Thread Bram Cohen via bitcoin-dev
Regarding the proposed segwit v2, which reclaims most unused opcodes as
RETURN_VALID: the net effect of what's being proposed in the near future
on supporting aggregated signatures in the not-so-near future is to punt.
A number of strategies are possible for dealing with new opcodes being
added later on, and the general strategy of making unused opcodes
RETURN_VALID for now and figuring out how to handle them later works for all
of them. I think this is the right approach, but wanted to clarify that it
is in fact the approach being proposed.

That said, there are some subtleties to getting it right which the last
message doesn't really cover. Most unused opcodes should be reclaimed as
RETURN_VALID, but there should still be one OP_NOP and there should be a
'real' RETURN_VALID, which (a) is guaranteed not to be soft forked into
something else in the future, and (b) doesn't have any parsing weirdness.

The parsing weirdness of all the unclaimed opcodes is interesting. Because
everything in an IF clause needs to be parsed in order to find where the
ELSE is, you have a few options for dealing with an unknown opcode getting
parsed in an unexecuted section of code:

 (a) avoid the problem completely by exterminating IF and MASTing

 (b) avoid the problem completely by getting rid of IF and adding IFJUMP,
     IFNJUMP, and JUMP, which specify a number of bytes (this also allows
     for script merkleization)

 (c) require that all new opcodes have fixed length 1, even after they're
     soft forked

 (d) do almost the same as (c), but require that on new soft forks people
     hack their old scripts to still parse properly by avoiding the OP_ELSE
     in inopportune places (yuck!)

 (e) make it so that the unknown opcodes cause a RETURN_VALID even when
     they're parsed, regardless of whether they're being executed.

By far the most expedient option is (e), causing a RETURN_VALID at parse time.
There's even precedent for this sort of behavior in the other direction
with disabled opcodes causing failure at parse time even if they aren't
being executed.
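
To make option (e) concrete, here is a toy evaluator (not Bitcoin Core's
EvalScript, with arbitrary opcode values) in which any unclaimed opcode ends
evaluation as valid the moment it is parsed, even inside a branch that is not
being executed:

    // Toy script evaluator illustrating option (e): an unclaimed opcode makes
    // the script RETURN_VALID at parse time, executed branch or not.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    enum Opcode : uint8_t { OP_0 = 0, OP_1 = 1, OP_NOP = 0x61, OP_IF = 0x63,
                            OP_ELSE = 0x67, OP_ENDIF = 0x68, OP_UNCLAIMED = 0xb0 };

    enum class Result { VALID, INVALID };

    Result EvalToyScript(const std::vector<uint8_t>& script)
    {
        std::vector<bool> exec_stack;  // one entry per enclosing IF/ELSE branch
        std::vector<int> stack;

        for (uint8_t op : script) {
            bool executing = true;
            for (bool e : exec_stack) executing = executing && e;

            // Option (e): unclaimed opcodes terminate with success even while
            // merely *parsing* an unexecuted branch, so an ELSE hidden inside a
            // future multi-byte opcode can never confuse old nodes.
            if (op == OP_UNCLAIMED) return Result::VALID;

            switch (op) {
            case OP_0: case OP_1:
                if (executing) stack.push_back(op);
                break;
            case OP_IF:
                if (executing) {
                    if (stack.empty()) return Result::INVALID;
                    bool cond = stack.back() != 0;
                    stack.pop_back();
                    exec_stack.push_back(cond);
                } else {
                    exec_stack.push_back(false);
                }
                break;
            case OP_ELSE:
                if (exec_stack.empty()) return Result::INVALID;
                exec_stack.back() = !exec_stack.back();
                break;
            case OP_ENDIF:
                if (exec_stack.empty()) return Result::INVALID;
                exec_stack.pop_back();
                break;
            case OP_NOP:
                break;
            }
        }
        return (!exec_stack.empty() || stack.empty() || stack.back() == 0)
                   ? Result::INVALID : Result::VALID;
    }

    int main()
    {
        // The unclaimed opcode sits in the branch that is NOT taken, yet the
        // script is still accepted as soon as the parser reaches it.
        std::vector<uint8_t> script = { OP_0, OP_IF, OP_UNCLAIMED,
                                        OP_ELSE, OP_1, OP_ENDIF };
        std::cout << (EvalToyScript(script) == Result::VALID ? "valid" : "invalid")
                  << "\n";
    }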

A lot can be said about all the options, but one thing I feel like snarking
about is that if you get rid of IFs using MAST, then it's highly unclear
whether OP_DEPTH should be nuked as well. My feeling is that it should and
that strict parsing should require that the bottom thing in the witness
gets referenced at some point.

Hacking in a multisig opcode isn't a horrible idea, but it is very stuck
specifically on m-of-n and doesn't support more complex formulas for how
signatures can be combined, which makes it feel hacky and weird.

Also it may make sense to seriously consider BLS signatures, which have a
lot of practical benefits starting with them being noninteractively
aggregatable so you can always assume that they're aggregated instead of
requiring complex semantics to specify what's aggregated with what. My team
is working on an implementation which has several advantages over what's
currently in the published literature but it isn't quite ready for public
consumption yet. This should probably go on the pile of reasons why it's
premature to finalize a plan for aggregation at this point.


Re: [bitcoin-dev] Soft-forks and schnorr signature aggregation

2018-03-21 Thread ZmnSCPxj via bitcoin-dev
Good morning aj,

‐‐‐ Original Message ‐‐‐

On March 21, 2018 7:21 PM, Anthony Towns wrote:

> On Wed, Mar 21, 2018 at 03:53:59AM -0400, ZmnSCPxj wrote:
> > Good morning aj,
>
> Good evening Zeeman!
>
> [pulled from the bottom of your mail]
> > This way, rather than gathering signatures, we gather public keys for
> > aggregate signature checking.
>
> Sorry, I probably didn't explain it well (or at all): during the script,
> you're collecting public keys and messages (ie, BIP 143 style digests)
> which then go into the signing/verification algorithm to produce/check
> the signature.

Yes, I think this is indeed what OP_CHECK_AGG_SIG really does.

What I propose is that we have two places where we aggregate public keys: one 
at the script level, and one at the transaction level.  OP_ADD_AGG_PUBKEY adds 
to the script-level aggregate, then OP_CHECK_AGG_SIG adds the script-level 
aggregate to the transaction-level aggregate.

Unfortunately it will not work, since the transaction-level aggregate (which is
what actually gets checked) would differ between pre-fork and post-fork nodes.

It looks like signature aggregation is difficult to reconcile with script...

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Soft-forks and schnorr signature aggregation

2018-03-21 Thread ZmnSCPxj via bitcoin-dev
Good morning aj,

I am probably wrong, but could solution 2 be simplified by using the below 
opcodes for aggregated signatures?

OP_ADD_AGG_PUBKEY - Adds a public key for verification of an aggregated 
signature.

OP_CHECK_AGG_SIG[VERIFY] - Check that the gathered public keys match the 
aggregated signature.

Then:

 pubkey1 OP_ADD_AGG_PUBKEY
 OP_IF
   pubkey2 OP_ADD_AGG_PUBKEY
 OP_ELSE
   cond OP_CHECKCOVENANT
 OP_ENDIF
 OP_CHECK_AGG_SIG

(omitting the existence of buckets)

I imagine that aggregated signatures, being linear, would allow pubkeys to be 
aggregated as well by adding the pubkey points (but note that I am not a 
mathematician, I only parrot what better mathematicians say), so 
OP_ADD_AGG_PUBKEY would not require storing all public keys, just adding them 
linearly.
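
To illustrate the point-addition idea (setting aside the rogue-key issues that
schemes like Bellare-Neven and MuSig exist to handle), libsecp256k1 already
exposes aggregation of public keys by point addition via
secp256k1_ec_pubkey_combine. A minimal sketch, assuming the library is
installed and using throwaway secret keys:

    // Sketch: running aggregation of public keys by point addition with
    // libsecp256k1's secp256k1_ec_pubkey_combine. Secret keys are dummies.
    // Build with: g++ agg.cpp -lsecp256k1
    #include <secp256k1.h>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        secp256k1_context* ctx = secp256k1_context_create(
            SECP256K1_CONTEXT_SIGN | SECP256K1_CONTEXT_VERIFY);

        unsigned char seckey1[32], seckey2[32];
        memset(seckey1, 0x11, 32);  // placeholder secrets, NOT real keys
        memset(seckey2, 0x22, 32);

        secp256k1_pubkey pub1, pub2, agg;
        if (!secp256k1_ec_pubkey_create(ctx, &pub1, seckey1) ||
            !secp256k1_ec_pubkey_create(ctx, &pub2, seckey2)) return 1;

        // "OP_ADD_AGG_PUBKEY" as linear accumulation: agg = pub1 + pub2.
        const secp256k1_pubkey* ins[2] = { &pub1, &pub2 };
        if (!secp256k1_ec_pubkey_combine(ctx, &agg, ins, 2)) return 1;

        unsigned char out[33];
        size_t outlen = sizeof(out);
        secp256k1_ec_pubkey_serialize(ctx, out, &outlen, &agg,
                                      SECP256K1_EC_COMPRESSED);
        for (size_t i = 0; i < outlen; i++) printf("%02x", out[i]);
        printf("\n");

        secp256k1_context_destroy(ctx);
        return 0;
    }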

The effect is that in the OP_CHECKCOVENANT case, pre-softfork nodes will not 
actually do any checking.

OP_CHECK_AGG_SIG might accept the signature on the stack (combined signature of 
pubkey1 and pubkey2 and from other inputs), or the bucket the signature is 
stored in.

We might even consider using the altstack: no more OP_ADD_AGG_PUBKEY (one less 
opcode to reserve!), just push pubkeys on the altstack, and OP_CHECK_AGG_SIG 
would take the entire altstack as all the public keys to be used in aggregated 
signature checking.

This way, rather than gathering signatures, we gather public keys for aggregate 
signature checking.  OP_RETURN_TRUE interacts with that by not performing 
aggregate signature checking at all if we encounter OP_RETURN_TRUE first (which 
makes sense: old nodes have no idea what OP_RETURN_TRUE is really doing, and 
would fail to understand all its details).


I am very probably wrong, but I am willing to learn how to break the above.  I 
am probably making a mistake somewhere.

Regards,
ZmnSCPxj

‐‐‐ Original Message ‐‐‐

On March 21, 2018 12:06 PM, Anthony Towns via bitcoin-dev wrote:

> Hello world,
>
> There was a lot of discussion on Schnorr sigs and key and signature
> aggregation at the recent core-dev-tech meeting (one relevant conversation
> is transcribed at [0]).
>
> Quick summary, with more background detail in the corresponding footnotes:
> signature aggregation is awesome [1], and the possibility of soft-forking
> in new opcodes via OP_RETURN_VALID opcodes (instead of OP_NOP) is also
> awesome [2].
>
> Unfortunately doing both of these together may turn out to be awful.
>
> RETURN_VALID and Signature Aggregation
> --------------------------------------
>
> Bumping segwit script versions and redefining OP_NOP opcodes are
> fairly straightforward to deal with even with signature aggregation;
> the straightforward implementation of both combined is still a soft-fork.
>
> RETURN_VALID, unfortunately, has a serious potential pitfall: any
> aggregatable signature operations that occur after it have to go into
> separate buckets.
>
> As an example of why this is the case, imagine introducing a covenant
> opcode that pulls a potentially complicated condition from the stack
> (perhaps, "an output pays at least 5 satoshi to address xyzzy"),
> checks the condition against the transaction, and then pushes 1 (or 0)
> back onto the stack indicating compliance with the covenant (or not).
>
> You might then write a script allowing a single person to spend the coins
> if they comply with the covenant, and allow breaking the covenant with
> someone else's sign-off in addition. You could write this as:
>
>    pubkey1 CHECKSIGVERIFY
>    cond CHECKCOVENANT IFDUP NOTIF pubkey2 CHECKSIG ENDIF
>
> If you pass the covenant, you supply "SIGHASH_ALL|BUCKET_1" and aggregate
> the signature for pubkey1 into bucket1 and you're set; otherwise you supply
> "SIGHASH_ALL|BUCKET_1 SIGHASH_ALL|BUCKET_1" and aggregate signatures for both
> pubkey1 and pubkey2 into bucket1 and you're set. Great!
>
> But this isn't a soft-fork: old nodes would see this script as:
>
>    pubkey1 CHECKSIGVERIFY
>    cond RETURN_VALID IFDUP NOTIF pubkey2 CHECKSIG ENDIF
>
> which they would just interpret as:
>
>    pubkey1 CHECKSIGVERIFY cond RETURN_VALID
>
> which is fine if the covenant was passing; but no good if the covenant
> didn't pass -- they'd be expecting the aggregated sig to just be for
> pubkey1 when it's actually pubkey1+pubkey2, so old nodes would fail the
> tx and new nodes 

Re: [bitcoin-dev] feature: Enhance privacy by change obfuscation

2018-03-21 Thread Eric Voskuil via bitcoin-dev
> This would be really expensive for the network due to the bloat in UTXO size, 
> a cost everyone has to pay for.

Without commenting on the merits of this proposal, I’d just like to correct 
this common misperception. There is no necessary additional cost to the network 
from the count of unspent outputs. This perception arises from an 
implementation detail of particular node software. There is no requirement for 
redundant indexing of unspent outputs.

e


Re: [bitcoin-dev] Soft-forks and schnorr signature aggregation

2018-03-21 Thread Andrew Poelstra via bitcoin-dev
On Wed, Mar 21, 2018 at 02:06:18PM +1000, Anthony Towns via bitcoin-dev wrote:
> 
> That leads me to think that interactive signature aggregation is going to
> take a lot of time and work, and it would make sense to do a v1-upgrade
> that's "just" Schnorr (and taproot and MAST and re-enabling opcodes and
> ...) in the meantime. YMMV.
>

Unfortunately I agree. Another complication with aggregate signatures is
that they complicate blind signature protocols such as [1]. In particular
they break the assumption "one signature can spend at most one UTXO",
meaning that a blind signer cannot tell how many coins they're authorizing
with a given signature, even if they've ensured that the key they're using
only controls UTXOs of a fixed value.

This seems solvable with creative use of ZKPs, but the fact that it's even
a problem caught me off guard, and makes me think that signature aggregation
is much harder to think about than e.g. Taproot which does not change
signature semantics at all.


Andrew



[1] 
https://github.com/jonasnick/scriptless-scripts/blob/blind-swaps/md/partially-blind-swap.md



-- 
Andrew Poelstra
Mathematics Department, Blockstream
Email: apoelstra at wpsoftware.net
Web:   https://www.wpsoftware.net/andrew

"A goose alone, I suppose, can know the loneliness of geese
 who can never find their peace,
 whether north or south or west or east"
   --Joanna Newsom





Re: [bitcoin-dev] Soft-forks and schnorr signature aggregation

2018-03-21 Thread Anthony Towns via bitcoin-dev
On Wed, Mar 21, 2018 at 03:53:59AM -0400, ZmnSCPxj wrote:
> Good morning aj,

Good evening Zeeman!

[pulled from the bottom of your mail]
> This way, rather than gathering signatures, we gather public keys for 
> aggregate signature checking.  

Sorry, I probably didn't explain it well (or at all): during the script,
you're collecting public keys and messages (ie, BIP 143 style digests)
which then go into the signing/verification algorithm to produce/check
the signature.

You do need to gather signatures from each private key holder when
producing the aggregate signature, but that happens at the wallet/p2p
level, rather than the consensus level.

> I am probably wrong, but could solution 2 be simplified by using the below 
> opcodes for aggregated signatures?
> 
> OP_ADD_AGG_PUBKEY - Adds a public key for verification of an aggregated 
> signature.
> OP_CHECK_AGG_SIG[VERIFY] - Check that the gathered public keys match the 
> aggregated signature.

Checking the gathered public keys match the aggregated signature is
something that only happens for the entire transaction as a whole, so
you don't need an opcode for it in the scripts, since they're per-input.

Otherwise, I think that's pretty similar to what I was already saying;
having:

   SIGHASH_ALL|BUCKET_1 pubkey OP_CHECKSIG

would be adding "pubkey" and a message hash calculated via the SIGHASH_ALL
hashing rules to the list of things that the signature for bucket 1 verifies.

FWIW, the Bellare-Neven verification algorithm looks something like:

   s*G = R + K   (s,R is the signature)
   K = sum( H(R, L, i, m) * X_i )   for i corresponding to each pubkey X_i
   L = the concatenation of all the pubkeys, X_0..X_n
   m = the concatenation of all the message hashes, m_0..m_n

So the way I look at it is each input puts a public key and a message hash
(X_i, m_i) into the bucket via a CHECKSIG operation (or similar), and once
you're done, you look into the bucket and there's just a single signature
(s,R) left to verify. You can't start verifying any of it until you've
looked through all the scripts because you need to know L and m before
you can do anything, and both of those require info from every part of
the aggregation. [0] [1]
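
A structural sketch of that bucket model may help; the types and the
VerifyBellareNeven stub below are hypothetical stand-ins, the only point being
that script evaluation merely collects (X_i, m_i) pairs and nothing can be
verified until every input has been processed:

    // Structural sketch of per-bucket aggregation: script evaluation only
    // *collects* (pubkey, message-hash) pairs; the single check
    // s*G == R + sum(H(R,L,i,m)*X_i) runs once per bucket after all inputs.
    // PubKey/MsgHash/AggSig and VerifyBellareNeven() are stand-ins.
    #include <array>
    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    using PubKey  = std::array<uint8_t, 33>;
    using MsgHash = std::array<uint8_t, 32>;  // BIP143-style sighash digest
    using AggSig  = std::array<uint8_t, 64>;  // one (s, R) pair per bucket

    struct Bucket {
        std::vector<std::pair<PubKey, MsgHash>> entries;  // (X_i, m_i) in order
    };

    // Stand-in for the real check; it needs *all* entries to build L and m.
    bool VerifyBellareNeven(const Bucket&, const AggSig&) { return true; }

    struct TxAggregationState {
        std::map<int, Bucket> buckets;     // bucket id -> collected keys/messages
        std::map<int, AggSig> signatures;  // one aggregate signature per bucket

        // Called by each CHECKSIG-style opcode in each input's script.
        void AddCheckSig(int bucket_id, const PubKey& key, const MsgHash& sighash) {
            buckets[bucket_id].entries.emplace_back(key, sighash);
        }

        // Only meaningful after every input script has been evaluated.
        bool VerifyAll() const {
            for (const auto& [id, bucket] : buckets) {
                auto it = signatures.find(id);
                if (it == signatures.end()) return false;
                if (!VerifyBellareNeven(bucket, it->second)) return false;
            }
            return true;
        }
    };

    int main()
    {
        TxAggregationState tx;
        tx.AddCheckSig(/*bucket_id=*/1, PubKey{}, MsgHash{});  // input 0: pubkey1
        tx.AddCheckSig(/*bucket_id=*/1, PubKey{}, MsgHash{});  // input 1: pubkey2
        tx.signatures[1] = AggSig{};  // single (s,R) supplied in the witness
        return tx.VerifyAll() ? 0 : 1;
    }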

> The effect is that in the OP_CHECKCOVENANT case, pre-softfork nodes will not 
> actually do any checking.

Pre-softfork nodes not doing any checking doesn't work with cross-input
signature aggregation as far as I can see. If it did, all you would have
to do to steal people's funds is mine a non-standard transaction:

  inputs:
   my-millions:
     pay-to-pubkey pubkey1
     witness=SIGHASH_ALL|BUCKET_1
   your-two-cents:
     pay-to-script-hash script=[1 OP_RETURN_TRUE pubkey2 CHECKSIG]
     witness=SIGHASH_ALL|BUCKET_1

   bucket1: 64-random-bytes
  output:
   all-the-money: you

Because there's no actual soft-fork at this point every node is an "old"
node, so they all see the OP_RETURN_TRUE and stop validating signatures,
accepting the transaction as valid, and giving you all my money, despite
you being unable to actually produce my signature.

Make sense?

Cheers,
aj

[0] For completeness: constructing the signature for Bellare-Neven
    requires two communication phases amongst the signers, and looks
    roughly like:

     1. each party generates a random value r_i, and shares the
        corresponding curve point R_i=r_i*G and their sighash choice
        (ie, m_i) with the other signers.

     2. this allows each party to calculate R=sum(R_i) and m,
        and hence H(R,L,i,m), at which point each party calculates a
        partial signature using their respective private key, x_i:

          s_i = r_i + H(R,L,i,m)*x_i

        all these s_i values are then communicated to each signer.

     3. these combine to give the final signature (s,R),
        with s=sum(s_i), allowing each signer to verify that the signing
        protocol completed successfully, and any signer can broadcast
        the transaction to the blockchain.

[1] muSig differs in the details, but is basically the same.
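
To sanity-check the algebra in footnote [0], here is a toy demonstration in
the additive group of integers mod a small prime (not a real curve, not
secure, and H() is a throwaway stand-in for the hash): each signer shares R_i
and m_i, then publishes s_i = r_i + H(R,L,i,m)*x_i, and the summed (s,R)
satisfies s*G = R + sum(H(R,L,i,m)*X_i):

    // Toy demonstration of the aggregation algebra over integers mod a small
    // prime. "Points" are just x*G mod P; this shows the equations balance and
    // nothing more -- it is not cryptography.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    static const uint64_t P = 2147483647ULL;  // toy group order (prime)
    static const uint64_t G = 5;              // toy "generator"

    uint64_t H(uint64_t R, uint64_t L, uint64_t i, uint64_t m) {
        return (R * 1000003 + L * 10007 + i * 101 + m) % P;  // throwaway "hash"
    }

    int main()
    {
        std::vector<uint64_t> x = {1234, 56789, 424242};  // private keys x_i
        std::vector<uint64_t> r = {1111, 2222, 3333};     // per-signer nonces r_i
        std::vector<uint64_t> m = {7, 8, 9};              // per-input sighashes m_i

        std::vector<uint64_t> X, Ri;
        uint64_t R = 0, L = 0, msum = 0;
        for (size_t i = 0; i < x.size(); i++) {
            X.push_back(x[i] * G % P);   // public keys X_i = x_i*G
            Ri.push_back(r[i] * G % P);  // round 1: share R_i and m_i
            R = (R + Ri[i]) % P;         // R = sum(R_i)
            L = (L + X[i]) % P;          // stand-in for concatenating pubkeys
            msum = (msum + m[i]) % P;    // stand-in for concatenating hashes
        }

        uint64_t s = 0;
        for (size_t i = 0; i < x.size(); i++) {
            uint64_t s_i = (r[i] + H(R, L, i, msum) * x[i]) % P;  // round 2
            s = (s + s_i) % P;
        }

        // Verification: s*G == R + sum(H(R,L,i,m)*X_i), all mod P.
        uint64_t K = 0;
        for (size_t i = 0; i < X.size(); i++)
            K = (K + H(R, L, i, msum) * X[i]) % P;

        std::cout << ((s * G % P) == (R + K) % P
                          ? "aggregate signature verifies" : "mismatch")
                  << "\n";
    }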
