Re: [bitcoin-dev] On Hardforks in the Context of SegWit

2016-02-09 Thread Anthony Towns via bitcoin-dev
On Mon, Feb 08, 2016 at 07:26:48PM +0000, Matt Corallo via bitcoin-dev wrote:
> As what a hard fork should look like in the context of segwit has never
> (!) been discussed in any serious sense, I'd like to kick off such a
> discussion with a (somewhat) specific proposal.

> Here is a proposed outline (to activate only after SegWit and with the
> currently-proposed version of SegWit):

Is this intended to be activated soon (this year?) or a while away
(2017, 2018?)?

> 1) The segregated witness discount is changed from 75% to 50%. The block
> size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
> maximum block size of 3MB and a "network-upgraded" block size of roughly
> 2.1MB. This still significantly discounts script data which is kept out
> of the UTXO set, while keeping the maximum-sized block limited.

This would mean the limits go from:

   pre-segwit  segwit pkh  segwit 2/2 msig  worst case
   1MB         -           -                1MB
   1MB         1.7MB       2MB              4MB
   1.5MB       2.1MB       2.2MB            3MB

That seems like a fairly small gain (20% for pubkeyhash, which would
last for about 3 months if your growth rate means doubling every 9
months), so this probably makes the most sense as a "quick cleanup"
change, one that also safely demonstrates how easy/difficult doing a hard
fork is in practice?

On the other hand, if segwit wallet deployment takes longer than
hoped, the 50% increase for pre-segwit transactions might be a useful
release-valve.

Doing a "2x" hardfork with the same reduction to a 50% segwit discount
would (I think) look like:

   pre-segwit  segwit pkh  segwit 2/2 msig  worst case
   1MB         -           -                1MB
   1MB         1.7MB       2MB              4MB
   2MB         2.8MB       2.9MB            4MB

which seems somewhat more appealing, without making the worst-case any
worse; but I guess there's concern about the relay networking scaling
above around 2MB per block, at least prior to IBLT/weak-blocks/whatever?
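
For concreteness, here's a minimal sketch of how the "base +
witness/discount" accounting produces the figures above. The 55% witness
share assumed for P2WPKH-style spends is my own estimate, picked to
reproduce the ~1.7MB figure; everything else is just the limit solved
for total size:

    /* Effective block size under a discounted witness limit:
     * enforce base + witness/discount <= limit, where witness is
     * a fraction f of the total serialized size. */
    #include <stdio.h>

    static double max_total_size(double limit, double discount, double f) {
        /* base = (1-f)*total, witness = f*total
         * => total * ((1-f) + f/discount) = limit */
        return limit / ((1.0 - f) + f / discount);
    }

    int main(void) {
        printf("segwit   pkh  %.2f MB\n", max_total_size(1.0, 4.0, 0.55));
        printf("segwit   max  %.2f MB\n", max_total_size(1.0, 4.0, 1.00));
        printf("proposal pkh  %.2f MB\n", max_total_size(1.5, 2.0, 0.55));
        printf("proposal max  %.2f MB\n", max_total_size(1.5, 2.0, 1.00));
        return 0; /* prints ~1.70, 4.00, 2.07 and 3.00 */
    }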

> 2) In order to prevent significant blowups in the cost to validate
> [...] and transactions are only allowed to contain
> up to 20 non-segwit inputs. [...]

This could potentially make old, signed, but time-locked transactions
invalid. Is that a good idea?

> Along similar lines, we may wish to switch MAX_BLOCK_SIGOPS from
> 1-per-50-bytes across the entire block to a per-transaction limit which
> is slightly looser (though not too much looser - even with libsecp256k1
> 1-per-50-bytes represents 2 seconds of single-threaded validation in
> just sigops on my high-end workstation).

I think turning MAX_BLOCK_SIGOPS and MAX_BLOCK_SIZE into a combined
limit would be a good addition, ie:

  #define MAX_BLOCK_SIZE       1500000
  #define MAX_BLOCK_DATA_SIZE  3000000
  #define MAX_BLOCK_SIGOPS     50000

  #define MAX_COST     3000000
  #define SIGOP_COST   (MAX_COST / MAX_BLOCK_SIGOPS)
  #define BLOCK_COST   (MAX_COST / MAX_BLOCK_SIZE)
  #define DATA_COST    (MAX_COST / MAX_BLOCK_DATA_SIZE)

  if (utxo_data * BLOCK_COST + bytes * DATA_COST + sigops * SIGOP_COST
   > MAX_COST)
  {
  block_is_invalid();
  }

Though I think you'd need to bump up the worst-case limits somewhat to
make that work cleanly.
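
A quick worked example, using the constants above (the block's figures
are made up purely to show the trade-off, and why the worst-case limits
would need bumping):

    /* Worked example: each individual limit (1.5MB of UTXO data, 3MB
     * of raw data, 50k sigops) is respected, but the combined cost
     * still makes the block invalid. */
    #include <stdio.h>

    #define MAX_BLOCK_SIZE       1500000
    #define MAX_BLOCK_DATA_SIZE  3000000
    #define MAX_BLOCK_SIGOPS     50000

    #define MAX_COST     3000000
    #define SIGOP_COST   (MAX_COST / MAX_BLOCK_SIGOPS)    /* 60 */
    #define BLOCK_COST   (MAX_COST / MAX_BLOCK_SIZE)      /* 2 */
    #define DATA_COST    (MAX_COST / MAX_BLOCK_DATA_SIZE) /* 1 */

    int main(void) {
        long utxo_data = 400000, bytes = 1200000, sigops = 20000;
        long cost = utxo_data * BLOCK_COST + bytes * DATA_COST
                    + sigops * SIGOP_COST;
        printf("cost %ld vs MAX_COST %d: %s\n", cost, MAX_COST,
               cost > MAX_COST ? "block invalid" : "block valid");
        return 0; /* prints 3200000 vs 3000000: block invalid */
    }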

> 4) Instead of requiring the first four bytes of the previous block hash
> field be 0s, we allow them to contain any value. This allows Bitcoin
> mining hardware to reduce the required logic, making it easier to
> produce competitive hardware [1].
> [1] Simpler here may not be entirely true. There is potential for
> optimization if you brute force the SHA256 midstate, but if nothing
> else, this will prevent there being a strong incentive to use the
> version field as nonce space. This may need more investigation, as we
> may wish to just set the minimum difficulty higher so that we can add
> more than 4 nonce-bytes.

Could you just use leading non-zero bytes of the prevhash as additional
nonce?

So to work out the actual prev hash, set leading bytes to zero until
you hit a zero. Conversely, to add nonce info to a hash, if there are
N leading zero bytes, fill up the first N-1 (or less) of them with
non-zero values.

That would give a little more than 255**(N-1) possible values
((255**N - 1)/254 to be exact). That would actually scale automatically
with difficulty, and seems easy enough to make use of in an ASIC?
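
A sketch of that encoding, as I understand it (byte order and helper
names are mine, not a spec):

    /* Extra nonce bytes carried in the prev-hash field's leading
     * zero bytes.  Hashes here are big-endian, zeros first. */
    #include <stdint.h>

    /* Recover the real prev hash: zero leading bytes until the
     * first byte that is already 0x00. */
    void decode_prevhash(uint8_t hash[32]) {
        for (int i = 0; i < 32 && hash[i] != 0x00; i++)
            hash[i] = 0x00;
    }

    /* Embed nonce bytes: with N leading zero bytes, fill at most the
     * first N-1 of them with non-zero values, so that decoding still
     * hits a zero byte and stops in the right place. */
    int encode_nonce(uint8_t hash[32], const uint8_t *nonce, int len) {
        int zeros = 0;
        while (zeros < 32 && hash[zeros] == 0x00)
            zeros++;
        if (len > zeros - 1)
            return -1;            /* not enough zero bytes available */
        for (int i = 0; i < len; i++) {
            if (nonce[i] == 0x00)
                return -1;        /* nonce bytes must be non-zero */
            hash[i] = nonce[i];
        }
        return 0;
    }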

Cheers,
aj



Re: [bitcoin-dev] On Hardforks in the Context of SegWit

2016-02-09 Thread Nicolas Dorier via bitcoin-dev
> 2) In order to prevent significant blowups in the cost to validate
> [...] and transactions are only allowed to contain
> up to 20 non-segwit inputs. [...]

There are two kinds of hard fork: the kind that breaks things, and the
kind that does not.
Restricting non-segwit inputs would disrupt lots of services, and would
potentially invalidate hash time-locked transactions, which would set a
very bad precedent.
So I'm strongly against this particular point.

> scriptPubKeys are now limited to 100 bytes in
> size and may not contain OP_CODESEPARATOR, scriptSigs must be push-only
> (ie no non-push opcodes)

The same problem applies to native multisig, though it is potentially less
important than the previous point.


Re: [bitcoin-dev] A roadmap to a better header format and bigger block size

2016-02-09 Thread Matt Corallo via bitcoin-dev
As for your stages idea, I generally like the idea (and mentioned it may
be a good idea in my proposal), but am worried about scheduling two
hard-forks at once... Let's do our first hard-fork first with the things
we think we will need anytime in the visible future that we have
reasonable designs for now, and talk about a second one after we've seen
what did/didn't blow up with the first one.

Anyway, this generally seems reasonable - it looks like most of this
matches up with what I said more specifically in my mail yesterday, with
the addition of timewarp fixes, which we should probably add, and Luke's
header changes, which I need to spend some more time thinking about.

Matt

On 02/09/16 14:16, jl2012--- via bitcoin-dev wrote:
> I would like to present a 2-3 year roadmap to a better header format and
> bigger block size
> 
> Objectives:
> 
> 1. Multistage rule changes to make sure everyone will have enough time to
> upgrade
> 2. Make mining easier, without breaking existing mining hardware and the
> Stratum protocol
> 3. Make future hardforks less disruptive (with Luke-Jr's proposal)
> 
> Stage 1 is Segregated Witness (BIP141), which will not break any existing
> full or light nodes. This may happen in Q2-Q3 2016
> 
> Stage 2 is fixes that will break existing full nodes, but not light nodes:
> a. Increase the MAX_BLOCK_SIZE (the exact value is not suggested in this
> roadmap), potentially change the witness discount
> b. Anti-DoS rules for the O(n^2) validation of non-segwit scripts
> c. (optional) Move segwit's commitments to the header Merkle tree. This is
> optional at this stage as it will be fixed in Stage 3 anyway
> This may happen in Q1-Q2 2017
> 
> Stage 3 is fixes that will break all existing full nodes and light nodes:
> a. Full nodes upgraded to Stage 2 will not need to upgrade again, as the
> rules and activation logic should be included already
> b. Change the header format to Luke-Jr's proposal, and move all commitments
> (tx, witness, etc) to the new structure. All existing mining hardware with
> Stratum protocol should work.
> c. Reclaiming unused bits in header for mining. All existing mining chips
> should still work. Newly designed chips should be ready for the new rule.
> d. Fix the time warp attack
> This may happen in 2018 to 2019
> 
> Pros:
> a. Light nodes (usually less tech-savvy users) will have longer time to
> upgrade
> b. The stage 2 is opt-in for full nodes.
> c. The stage 3 is opt-in for light nodes.
> 
> Cons:
> a. The stage 2 is not opt-in for light nodes. They will blindly follow the
> longest chain, which they might actually not want to follow
> b. Non-upgraded full nodes will follow the old chain at Stage 2, which is
> likely to have lower value.
> c. Non-upgraded light nodes will follow the old chain at Stage 3, which is
> likely to have lower value. (However, this is not a concern as no one should
> be mining on the old chain at that time)
> 
> ---
> An alternative roadmap would be:
> 
> Stage 2 is fixes that will break existing full nodes and light nodes.
> However, they will not follow the minority chain
> a. Increase the MAX_BLOCK_SIZE, potentially change the witness discount
> b. Anti-DoS rules for the O(n^2) validation of non-segwit scripts
> c. Change the header format to Luke-Jr's proposal, and move all commitments
> (tx, witness, etc) to the new structure.
> This may happen in mid 2017 or later
> 
> Stage 3 is fixes that will break all existing full nodes and light nodes. 
> a. Full nodes and light nodes upgraded to Stage 2 will not need to upgrade
> again, as the rules and activation logic should be included already
> b. Reclaiming unused bits in header for mining. All existing mining chips
> should still work.
> c. Fix the time warp attack
> This may happen in 2018 to 2019
> 
> Pros:
> a. The stage 2 and 3 are opt-in for everyone
> b. Even failing to upgrade, full nodes and light nodes won't follow the
> minority chain at stage 2
> 
> Cons:
> a. Non-upgraded full/light nodes will follow the old chain at Stage 3, which
> is likely to have lower value. (However, this is not a concern as no one
> should be mining on the old chain at that time)
> b. It takes longer to implement stage 2 to give enough time for light node
> users to upgrade
> 
> ---
> 
> In terms of safety, the second proposal is better. In terms of disruption,
> the first proposal is less disruptive.
> 
> I would also like to emphasize that it is miners' responsibility, not the
> devs', to confirm that the supermajority of the community accept changes in
> Stage 2 and 3.
> 
> Reference:
> Matt Corallo's proposal:
> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012403.html
> Luke-Jr's proposal:
> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012377.html

Re: [bitcoin-dev] On Hardforks in the Context of SegWit

2016-02-09 Thread Matt Corallo via bitcoin-dev
Thanks for keeping on-topic, replying to the proposal, and being civil!

Replies inline.

On 02/09/16 09:00, Anthony Towns via bitcoin-dev wrote:
> On Mon, Feb 08, 2016 at 07:26:48PM +0000, Matt Corallo via bitcoin-dev wrote:
>> As what a hard fork should look like in the context of segwit has never
>> (!) been discussed in any serious sense, I'd like to kick off such a
>> discussion with a (somewhat) specific proposal.
> 
>> Here is a proposed outline (to activate only after SegWit and with the
>> currently-proposed version of SegWit):
> 
> Is this intended to be activated soon (this year?) or a while away
> (2017, 2018?)?

It's intended to activate when we have clear and broad consensus around
a hardfork proposal across the community.

>> 1) The segregated witness discount is changed from 75% to 50%. The block
>> size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
>> maximum block size of 3MB and a "network-upgraded" block size of roughly
>> 2.1MB. This still significantly discounts script data which is kept out
>> of the UTXO set, while keeping the maximum-sized block limited.
> 
> This would mean the limits go from:
> 
>    pre-segwit  segwit pkh  segwit 2/2 msig  worst case
>    1MB         -           -                1MB
>    1MB         1.7MB       2MB              4MB
>    1.5MB       2.1MB       2.2MB            3MB
> 
> That seems like a fairly small gain (20% for pubkeyhash, which would
> last for about 3 months if your growth rate means doubling every 9
> months), so this probably makes the most sense as a "quick cleanup"
> change, one that also safely demonstrates how easy/difficult doing a hard
> fork is in practice?
>
> On the other hand, if segwit wallet deployment takes longer than
> hoped, the 50% increase for pre-segwit transactions might be a useful
> release-valve.
> 
> Doing a "2x" hardfork with the same reduction to a 50% segwit discount
> would (I think) look like:
> 
>    pre-segwit  segwit pkh  segwit 2/2 msig  worst case
>    1MB         -           -                1MB
>    1MB         1.7MB       2MB              4MB
>    2MB         2.8MB       2.9MB            4MB
> 
> which seems somewhat more appealing, without making the worst-case any
> worse; but I guess there's concern about the relay networking scaling
> above around 2MB per block, at least prior to IBLT/weak-blocks/whatever?


The goal isn't really to get a "gain" here... it's mostly to decrease the
worst-case (4MB is pretty crazy) and keep the total size in line with
what the network can handle. Getting 1MB blocks through the network in
under a second is already incredibly difficult... 2MB is pretty scary and
will take lots of work... 3MB is beyond the bound of "yeah, we can pretty
much be sure we can get that to work well".


>> 2) In order to prevent significant blowups in the cost to validate
>> [...] and transactions are only allowed to contain
>> up to 20 non-segwit inputs. [...]
> 
> This could potentially make old, signed, but time-locked transactions
> invalid. Is that a good idea?


Hmm... you make a valid point. I was trying to avoid breaking old
transactions, but didn't think too much about time-locked ones.
Hmm... we could apply the limits only to transactions that don't have at
least one input newer than the fork, but I'm not sure I like that either.


>> Along similar lines, we may wish to switch MAX_BLOCK_SIGOPS from
>> 1-per-50-bytes across the entire block to a per-transaction limit which
>> is slightly looser (though not too much looser - even with libsecp256k1
>> 1-per-50-bytes represents 2 seconds of single-threaded validation in
>> just sigops on my high-end workstation).
> 
> I think turning MAX_BLOCK_SIGOPS and MAX_BLOCK_SIZE into a combined
> limit would be a good addition, ie:
> 
>   #define MAX_BLOCK_SIZE       1500000
>   #define MAX_BLOCK_DATA_SIZE  3000000
>   #define MAX_BLOCK_SIGOPS     50000
> 
>   #define MAX_COST     3000000
>   #define SIGOP_COST   (MAX_COST / MAX_BLOCK_SIGOPS)
>   #define BLOCK_COST   (MAX_COST / MAX_BLOCK_SIZE)
>   #define DATA_COST    (MAX_COST / MAX_BLOCK_DATA_SIZE)
> 
>   if (utxo_data * BLOCK_COST + bytes * DATA_COST + sigops * SIGOP_COST
>    > MAX_COST)
>   {
>   block_is_invalid();
>   }
> 
> Though I think you'd need to bump up the worst-case limits somewhat to
> make that work cleanly.


There is a clear goal here of NOT using block-based limits and switching
to transaction-based limits. By switching to transaction-based limits we
avoid nasty issues with mining code either getting complicated or
enforcing too-strict limits on individual transactions.


>> 4) Instead of requiring the first four bytes of the previous block hash
>> field be 0s, we allow them to contain any value. This allows Bitcoin
>> mining hardware to reduce the required logic, making it easier to
>> produce competitive hardware [1].
>> [1] Simpler here may not be entirely true. There is potential for
>> optimization if you brute force the SHA256 

Re: [bitcoin-dev] On Hardforks in the Context of SegWit

2016-02-09 Thread Matt Corallo via bitcoin-dev
Oops, forgot to reply to your last point.

Indeed, we could push for more space by just always having one 0-byte,
but I'm not sure the added complexity helps anything? ASICs can never be
designed to use more extra-nonce-space than what they can reasonably
assume will always be available, so we might as well just set the
maximum number of bytes and let ASIC designers know exactly what they
have available. Currently blocks start with at least 8 0-bytes. We could
just say minimum difficulty is now 6 0-bytes (2**16x harder) and reserve
those? Anyway, someone needs to take a closer look at the midstate
optimization stuff to see what is reasonably required.

Matt


>>> 4) Instead of requiring the first four bytes of the previous block hash
>>> field be 0s, we allow them to contain any value. This allows Bitcoin
>>> mining hardware to reduce the required logic, making it easier to
>>> produce competitive hardware [1].
>>> [1] Simpler here may not be entirely true. There is potential for
>>> optimization if you brute force the SHA256 midstate, but if nothing
>>> else, this will prevent there being a strong incentive to use the
>>> version field as nonce space. This may need more investigation, as we
>>> may wish to just set the minimum difficulty higher so that we can add
>>> more than 4 nonce-bytes.
>>
>> Could you just use leading non-zero bytes of the prevhash as additional
>> nonce?
>>
>> So to work out the actual prev hash, set leading bytes to zero until
>> you hit a zero. Conversely, to add nonce info to a hash, if there are
>> N leading zero bytes, fill up the first N-1 (or less) of them with
>> non-zero values.
>>
>> That would give a little more than 255**(N-1) possible values
>> ((255**N - 1)/254 to be exact). That would actually scale automatically
>> with difficulty, and seems easy enough to make use of in an ASIC?


Re: [bitcoin-dev] On Hardforks in the Context of SegWit

2016-02-09 Thread Luke Dashjr via bitcoin-dev
On Tuesday, February 09, 2016 10:00:44 PM Matt Corallo via bitcoin-dev wrote:
> Indeed, we could push for more space by just always having one 0-byte,
> but I'm not sure the added complexity helps anything? ASICs can never be
> designed which use more extra-nonce-space than what they can reasonably
> assume will always be available, so we might as well just set the
> maximum number of bytes and let ASIC designers know exactly what they
> have available. Currently blocks start with at least 8 0-bytes. We could
> just say minimum difficulty is now 6 0-bytes (2**16x harder) and reserve
> those?

The extranonce rolling doesn't necessarily need to happen in the ASIC itself. 
With the current extranonce-in-gentx, an old RasPi 1 can only handle creating 
work for up to 5 Gh/s with a 500k gentx.
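
For reference, the back-of-envelope arithmetic behind that figure (my
own rough numbers, assuming one gentx rehash per 2**32-nonce work unit):

    /* Each exhausted 2^32 nonce range needs a fresh extranonce, i.e.
     * rehashing the ~500kB gentx plus recomputing the merkle branch. */
    #include <stdio.h>

    int main(void) {
        double hashrate    = 5e9;          /* 5 Gh/s of ASICs to feed */
        double nonce_space = 4294967296.0; /* 2^32 nonces per work unit */
        double gentx_bytes = 500e3;        /* 500k gentx */
        double work_per_sec = hashrate / nonce_space;
        printf("work units/s: %.2f -> ~%.0f kB/s of gentx hashing\n",
               work_per_sec, work_per_sec * gentx_bytes / 1e3);
        return 0; /* ~1.16 work units/s, ~582 kB/s of SHA256 work */
    }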

Furthermore, there is a direct correlation between ASIC speeds and difficulty, 
so increasing the extranonce space dynamically makes a lot of sense.

I don't see any reason *not* to increase the minimum difficulty at the same 
time, though.

Luke


Re: [bitcoin-dev] Question regarding Confidential Transactions

2016-02-09 Thread Jeremy Papp via bitcoin-dev
My understanding of the paper is that the blinding factor would be 
included in the extra data which is incorporated into the ring 
signatures used in the range proof.


Although, since I think the range proof is optional for single-output 
transactions (or at least, one output per transaction doesn't require a 
range proof, since there's only one possible value it can be to make the 
whole thing work, and that value must be in range), I'm not entirely 
sure how you'd transmit it then. In any case, since using it will pretty 
much require segwit, adding extraneous data isn't much of a problem. In 
both cases, I imagine the blinding factor would be protected from 
outside examination via some form of shared secret generation... 
although that would require the sender to know the recipient's unhashed 
public key; I don't know of any shared-secret schemes that work on 
hashed keys.
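
To make the role of the blinding factor concrete, here is a toy sketch
of a Pedersen commitment over a small prime field; the additive-group
model and all the constants are stand-ins for the real elliptic-curve
construction, purely for illustration:

    /* Toy Pedersen commitment over the additive group Z_p, standing in
     * for the EC group CT actually uses.  Do not use for anything real. */
    #include <stdio.h>
    #include <stdint.h>

    #define P 1000003ULL   /* small prime modulus (toy) */
    #define G 48271ULL     /* "generator" G (toy) */
    #define H 16807ULL     /* second generator H (toy) */

    /* C = r*G + v*H (mod p): r is the blinding factor, v the amount. */
    static uint64_t commit(uint64_t r, uint64_t v) {
        return (r * G + v * H) % P;
    }

    int main(void) {
        uint64_t r = 123457, v = 50;   /* sender picks r, commits to v */
        uint64_t C = commit(r, v);

        /* The receiver, given r and v off-chain, can check that the
         * on-chain commitment opens to the claimed amount. */
        printf("opens to v=50: %s\n", commit(r, 50) == C ? "yes" : "no");

        /* Commitments are additively homomorphic, which is what lets
         * verifiers check sum(inputs) == sum(outputs) blindly. */
        uint64_t r2 = 987654 % P, v2 = 25;
        uint64_t sum = (C + commit(r2, v2)) % P;
        printf("homomorphic: %s\n",
               sum == commit((r + r2) % P, v + v2) ? "yes" : "no");
        return 0;
    }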


Jeremy Papp

On 2/9/2016 7:12 AM, Henning Kopp via bitcoin-dev wrote:

Hi all,

I am trying to fully grasp confidential transactions.

When a sender creates a confidential transaction and picks the blinding
values correctly, anyone can check that the transaction is valid. It
remains publicly verifiable.
But how can the receiver of the transaction check which amount was
sent to him?
I think he needs to learn the blinding factor somehow off-chain to
open the commitment. Am I correct in this assumption?
If yes, how does this work?

All the best
Henning





Re: [bitcoin-dev] A roadmap to a better header format and bigger block size

2016-02-09 Thread Ricardo Filipe via bitcoin-dev
I believe I've seen Luke say this several times before, but there are
several more things that the majority of the devs agree should be in
bitcoin.
I would suggest compiling that list for your stage 3, so that you can have
a hardfork that fixes most of those things; there should also be some
repository where those changes are deployed.

2016-02-09 14:16 GMT+00:00 jl2012--- via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org>:

> I would like to present a 2-3 year roadmap to a better header format and
> bigger block size
>
> Objectives:
>
> 1. Multistage rule changes to make sure everyone will have enough time to
> upgrade
> 2. Make mining easier, without breaking existing mining hardware and the
> Stratum protocol
> 3. Make future hardforks less disruptive (with Luke-Jr's proposal)
>
> Stage 1 is Segregated Witness (BIP141), which will not break any existing
> full or light nodes. This may happen in Q2-Q3 2016
>
> Stage 2 is fixes that will break existing full nodes, but not light nodes:
> a. Increase the MAX_BLOCK_SIZE (the exact value is not suggested in this
> roadmap), potentially change the witness discount
> b. Anti-DoS rules for the O(n^2) validation of non-segwit scripts
> c. (optional) Move segwit's commitments to the header Merkle tree. This is
> optional at this stage as it will be fixed in Stage 3 anyway
> This may happen in Q1-Q2 2017
>
> Stage 3 is fixes that will break all existing full nodes and light nodes:
> a. Full nodes upgraded to Stage 2 will not need to upgrade again, as the
> rules and activation logic should be included already
> b. Change the header format to Luke-Jr's proposal, and move all commitments
> (tx, witness, etc) to the new structure. All existing mining hardware with
> Stratum protocol should work.
> c. Reclaiming unused bits in header for mining. All existing mining chips
> should still work. Newly designed chips should be ready for the new rule.
> d. Fix the time warp attack
> This may happen in 2018 to 2019
>
> Pros:
> a. Light nodes (usually less tech-savvy users) will have longer time to
> upgrade
> b. The stage 2 is opt-in for full nodes.
> c. The stage 3 is opt-in for light nodes.
>
> Cons:
> a. The stage 2 is not opt-in for light nodes. They will blindly follow the
> longest chain, which they might actually not want to follow
> b. Non-upgraded full nodes will follow the old chain at Stage 2, which is
> likely to have lower value.
> c. Non-upgraded light nodes will follow the old chain at Stage 3, which is
> likely to have lower value. (However, this is not a concern as no one should
> be mining on the old chain at that time)
>
> ---
> An alternative roadmap would be:
>
> Stage 2 is fixes that will break existing full nodes and light nodes.
> However, they will not follow the minority chain
> a. Increase the MAX_BLOCK_SIZE, potentially change the witness discount
> b. Anti-DoS rules for the O(n^2) validation of non-segwit scripts
> c. Change the header format to Luke-Jr's proposal, and move all commitments
> (tx, witness, etc) to the new structure.
> This may happen in mid 2017 or later
>
> Stage 3 is fixes that will break all existing full nodes and light nodes.
> a. Full nodes and light nodes upgraded to Stage 2 will not need to upgrade
> again, as the rules and activation logic should be included already
> b. Reclaiming unused bits in header for mining. All existing mining chips
> should still work.
> c. Fix the time warp attack
> This may happen in 2018 to 2019
>
> Pros:
> a. The stage 2 and 3 are opt-in for everyone
> b. Even failing to upgrade, full nodes and light nodes won't follow the
> minority chain at stage 2
>
> Cons:
> a. Non-upgraded full/light nodes will follow the old chain at Stage 3, which
> is likely to have lower value. (However, this is not a concern as no one
> should be mining on the old chain at that time)
> b. It takes longer to implement stage 2 to give enough time for light node
> users to upgrade
>
> ---
>
> In terms of safety, the second proposal is better. In terms of disruption,
> the first proposal is less disruptive.
>
> I would also like to emphasize that it is miners' responsibility, not the
> devs', to confirm that the supermajority of the community accept changes in
> Stage 2 and 3.
>
> Reference:
> Matt Corallo's proposal:
> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012403.html
> Luke-Jr's proposal:
> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012377.html


Re: [bitcoin-dev] On Hardforks in the Context of SegWit

2016-02-09 Thread Matt Corallo via bitcoin-dev


On 02/09/16 22:10, Luke Dashjr wrote:
> On Tuesday, February 09, 2016 10:00:44 PM Matt Corallo via bitcoin-dev wrote:
>> Indeed, we could push for more space by just always having one 0-byte,
>> but I'm not sure the added complexity helps anything? ASICs can never be
>> designed which use more extra-nonce-space than what they can reasonably
>> assume will always be available, so we might as well just set the
>> maximum number of bytes and let ASIC designers know exactly what they
>> have available. Currently blocks start with at least 8 0-bytes. We could
>> just say minimum difficulty is now 6 0-bytes (2**16x harder) and reserve
>> those?
> 
> The extranonce rolling doesn't necessarily need to happen in the ASIC itself. 
> With the current extranonce-in-gentx, an old RasPi 1 can only handle creating 
> work for up to 5 Gh/s with a 500k gentx.


Did you read the footnote in my original email? There is some potential
for optimization if you can brute-force the midstate, which today
requires using the nVersion space as nonce. In order to fix this we need
to add nonce space in the first compression function, so this is an
ideal place. Even ignoring that, reducing the complexity of mining
control stuff is really nice. If we could go back to just providing
block headers to miners instead of having to provide the entire
transaction-hash-list, we could move a ton of complexity back into
Bitcoin Core from mining setups, which have historically been pretty
poorly-reviewed codebases.


> Furthermore, there is a direct correlation between ASIC speeds and
> difficulty, so increasing the extranonce space dynamically makes a lot
> of sense.
> 
> I don't see any reason *not* to increase the minimum difficulty at the same 
> time, though.

Meh, my point was less that it's a really bad idea and more that it adds
complexity that I don't see much need for.


Re: [bitcoin-dev] A roadmap to a better header format and bigger block size

2016-02-09 Thread jl2012--- via bitcoin-dev
I am actually suggesting 1 hardfork, not 2. However, different rules are
activated at different times to enhance safety and reduce disruption. The
advantage is that people are required to upgrade once, not twice. Any client
designed for stage 2 should also be ready for stage 3.


-----Original Message-----
From: Matt Corallo [mailto:lf-li...@mattcorallo.com] 
Sent: Wednesday, 10 February, 2016 06:15
To: jl2...@xbt.hk; bitcoin-dev@lists.linuxfoundation.org
Subject: Re: [bitcoin-dev] A roadmap to a better header format and bigger
block size

As for your stages idea, I generally like the idea (and mentioned it may be
a good idea in my proposal), but am worried about scheduling two hard-forks
at once... Let's do our first hard-fork first with the things we think we
will need anytime in the visible future that we have reasonable designs for
now, and talk about a second one after we've seen what did/didn't blow up
with the first one.

Anyway, this generally seems reasonable - it looks like most of this matches
up with what I said more specifically in my mail yesterday, with the
addition of timewarp fixes, which we should probably add, and Luke's header
changes, which I need to spend some more time thinking about.

Matt




Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-09 Thread Yifu Guo via bitcoin-dev
Happy Lunar New Year Everyone!

Gavin,

> I suspect there ARE a significant percentage of un-maintained full
> nodes-- probably 30 to 40%. Losing those nodes will not be a problem, for
> three reasons:


The notion of a large set (30% to 40%) of un-maintained full nodes is not
evident on the network. Below is data based on a personal snapshot taken
around Dec 2015, with the following assumptions:
1) nodes running non-standard version strings are considered a preference
of the node operator and are not included.
2) nodes below 0.10 are counted as so-called "un-maintained", even though
running them can also be a choice of the node operator.

Satoshi:0.9.3, 105
Satoshi:0.8.6, 74
Satoshi:0.9.1, 49
Satoshi:0.9.2.1, 42
Satoshi:0.8.5, 39
Satoshi:0.8.1, 35
Satoshi:0.9.5, 14
Satoshi:0.8.3, 12
Satoshi:0.9.4, 10
Satoshi:0.9.99, 10
Satoshi:0.9.0, 5
Satoshi:0.9.2, 5
Satoshi:0.8.0, 4
Satoshi:0.8.99, 1
Satoshi:0.8.4, 1

There are 406 nodes total that fall under the un-maintained category,
which is below 10% of the network.
Luke also has some data here that shows similar results.
http://luke.dashjr.org/programs/bitcoin/files/charts/versions.txt

> The network could shrink by 60% and it would still have plenty of open
> connection slots


I'm afraid we have to agree to disagree if you think dropping support for
60% of the nodes on the network when rolling out an upgrade is a sane
default.

>
> People are committing to spinning up thousands of supports-2mb-nodes
> during the grace period.


thousands of nodes?! where did you get this figure? who are these people?
*Please* elaborate.

> We could wait a year and pick up maybe 10 or 20% more.


I don't understand this statement at all, please explicate.

-- 
*Yifu Guo*
*"Life is an everlasting self-improvement."*

On Sat, Feb 6, 2016 at 10:37 AM, Gavin Andresen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Responding to "28 days is not long enough":
>
> I keep seeing this claim made with no evidence to back it up.  As I said,
> I surveyed several of the biggest infrastructure providers and the btcd
> lead developer and they all agree "28 days is plenty of time."
>
> For individuals... why would it take somebody longer than 28 days to
> either download and restart their bitcoind, or to patch and then re-run
> (the patch can be a one-line change of MAX_BLOCK_SIZE from 1000000 to 2000000)?
>
> For the Bitcoin Core project:  I'm well aware of how long it takes to roll
> out new binaries, and 28 days is plenty of time.
>
> I suspect there ARE a significant percentage of un-maintained full nodes--
> probably 30 to 40%. Losing those nodes will not be a problem, for three
> reasons:
> 1) The network could shrink by 60% and it would still have plenty of open
> connection slots
> 2) People are committing to spinning up thousands of supports-2mb-nodes
> during the grace period.
> 3) We could wait a year and pick up maybe 10 or 20% more.
>
> I strongly disagree with the statement that there is no cost to a longer
> grace period. There is broad agreement that a capacity increase is needed
> NOW.
>
> To bring it back to bitcoin-dev territory:  are there any TECHNICAL
> arguments why an upgrade would take a business or individual longer than 28
> days?
>
>
> Responding to Luke's message:
>
> On Sat, Feb 6, 2016 at 1:12 AM, Luke Dashjr via bitcoin-dev wrote:
>> > On Friday, February 05, 2016 8:51:08 PM Gavin Andresen via bitcoin-dev
>> > wrote:
>> >> Blog post on a couple of the constants chosen:
>> >>   http://gavinandresen.ninja/seventyfive-twentyeight
>> >
>> > Can you put this in the BIP's Rationale section (which appears to be
>> > mis-named "Discussion" in the current draft)?
>>
>
> I'll rename the section and expand it a little. I think standards
> documents like BIPs should be concise, though (written for implementors),
> so I'm not going to recreate the entire blog post there.
>
>
>> >
>> >> Signature operations in un-executed branches of a Script are not
>> >> counted. OP_CHECKMULTISIG evaluations are counted accurately; if the
>> >> signature for a 1-of-20 OP_CHECKMULTISIG is satisfied by the public
>> >> key nearest the top of the execution stack, it is counted as one
>> >> signature operation. If it is satisfied by the public key nearest the
>> >> bottom of the execution stack, it is counted as twenty signature
>> >> operations. Signature operations involving invalidly encoded
>> >> signatures or public keys are not counted towards the limit.
>> >
>> > These seem like they will break static analysis entirely. That was a
>> > noted reason for creating BIP 16 to replace BIP 12. Is it no longer a
>> > concern? Would it make sense to require scripts to commit to the total
>> > accurate-sigop count to fix this?
>>
>
> After implementing static counting and accurate counting... I was wrong.
> Accurate/dynamic counting/limiting is quick and simple and can be
> completely safe (the counting code can 
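
As a sketch of the accurate-counting rule quoted above (entirely
illustrative; sig_matches_key is a dummy stand-in, not Bitcoin Core's
verification code):

    /* Accurate CHECKMULTISIG sigop accounting: a 1-of-N evaluation
     * costs as many sigops as public keys are tried before the
     * signature validates, starting from the key nearest the top of
     * the execution stack. */
    #include <stdio.h>
    #include <stdbool.h>

    static bool sig_matches_key(int sig, int key) {
        return sig == key;      /* placeholder for ECDSA verification */
    }

    /* keys[0] is nearest the top of the execution stack. */
    static int count_1ofn_sigops(int sig, const int *keys, int nkeys) {
        for (int i = 0; i < nkeys; i++)
            if (sig_matches_key(sig, keys[i]))
                return i + 1;   /* matched after trying i+1 keys */
        return nkeys;           /* invalid sig: every key was tried */
    }

    int main(void) {
        int keys[20];
        for (int i = 0; i < 20; i++) keys[i] = i;
        printf("match at top: %d sigop(s)\n", count_1ofn_sigops(0, keys, 20));
        printf("match at bottom: %d sigops\n", count_1ofn_sigops(19, keys, 20));
        return 0; /* prints 1 and 20, matching the quoted rule */
    }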

Re: [bitcoin-dev] On Hardforks in the Context of SegWit

2016-02-09 Thread Anthony Towns via bitcoin-dev
On Tue, Feb 09, 2016 at 10:00:44PM +0000, Matt Corallo wrote:
> Indeed, we could push for more space by just always having one 0-byte,
> but I'm not sure the added complexity helps anything? ASICs can never be
> designed which use more extra-nonce-space than what they can reasonably
> assume will always be available,

I was thinking ASICs could be passed a mask of which bytes they could
use for nonce; in which case the variable-ness can just be handled prior
to passing the work to the ASIC.

But on second thoughts, the block already specifies the target difficulty,
so maybe that could be used to indicate which bytes of the previous hash
must be zero? You have to be a bit careful to deal with the possibility
that you just did a maximum difficulty increase compared to the previous
block (in which case there may be fewer bits in the previous hash that
are zero), but that's just a factor of 4, so:

#define RETARGET_THRESHOLD ((1ul<<24) / 4)
/* bits[0] is the exponent byte of the compact target; a hash meeting
 * the target has at least 32 - bits[0] leading zero bytes. */
y = 32 - bits[0];
/* If the mantissa is within a factor of 4 of rolling over into the
 * next exponent, the previous block (at an up-to-4x easier target)
 * may have one fewer zero byte, so be conservative. */
if (bits[1]*65536 + bits[2]*256 + bits[3] >= RETARGET_THRESHOLD)
    y -= 1;
memset(prevhash, 0x00, y); // clear "first" y bytes of prevhash

should work correctly/safely, and give you 8 bytes of additional nonce
to play with at current difficulty (or 3 bytes at minimum difficulty),
and scale as difficulty increases. No need to worry about avoiding zeroes
that way either.



As far as midstate optimisations go, rearranging the block to be:

 version ; time ; bits ; merkleroot ; prevblock ; nonce

would mean that the last 12 bytes of prevblock and the 4 bytes of nonce
would be available for manipulation [0] if the first round of sha256
was pre-calculated prior to being sent to ASICs (and also that version
and time wouldn't be available). Worth considering?
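
A small sketch of that layout, showing which bytes fall past the 64-byte
SHA256 block boundary and so stay rollable once the midstate is
precomputed (the offsets are mine, derived from the ordering suggested
above):

    #include <stdio.h>
    #include <stddef.h>

    /* Proposed order: version ; time ; bits ; merkleroot ; prevblock ; nonce */
    struct rearranged_header {         /* 80 bytes, no padding */
        unsigned char version[4];      /* offset  0 */
        unsigned char time[4];         /* offset  4 */
        unsigned char bits[4];         /* offset  8 */
        unsigned char merkleroot[32];  /* offset 12 */
        unsigned char prevblock[32];   /* offset 44 */
        unsigned char nonce[4];        /* offset 76 */
    };

    int main(void) {
        /* SHA256 processes 64-byte blocks; everything at offset >= 64
         * can be rolled without redoing the first compression. */
        size_t prev_off = offsetof(struct rearranged_header, prevblock);
        size_t rollable_prev = sizeof(struct rearranged_header) - 4 - 64;
        printf("prevblock spans offsets %zu-75; its last %zu bytes plus the\n"
               "4-byte nonce fall in the second block: %zu rollable bytes\n",
               prev_off, rollable_prev, rollable_prev + 4);
        return 0; /* 12 + 4 = 16 bytes = 128 bits, as in footnote [0] */
    }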



I don't see how you'd make either of these changes compatible
with Luke-Jr's soft-hardfork approach [1] to ensuring non-upgraded
clients/nodes can't be tricked into following a shorter chain, though.
I think the approach I suggested in my mail to avoid Gavin's proposed
hard fork might work, though [2].



Combining these with making merge-mining easier [1] and Luke-Jr/Peter
Todd's ideas [3] about splitting the proof of work between something
visible to miners, and something only visible to pool operators to avoid
the block withholding attack on pooled mining would probably make sense
though, to reduce the number of hard forks visible to lightweight clients?

Cheers,
aj

[0] Giving a total of 128 bits, or 96 bits with difficulty such that
only the last 8 bytes of prevblock are available.

[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012377.html

[2] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012046.html

[3] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012384.html
In particular, the paragraph beginning "Alternatively, if the old
blockchain has 10% or less hashpower ..."


Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-09 Thread Gavin Andresen via bitcoin-dev
On Tue, Feb 9, 2016 at 8:59 AM, Yifu Guo wrote:

>
> There are 406 nodes total that fall under the un-maintained category,
> which is below 10% of the network.
> Luke also has some data here that shows similar results.
> http://luke.dashjr.org/programs/bitcoin/files/charts/versions.txt
>

I love seeing data!  I was considering 0.10 nodes as 'unmaintained' because
it has been a long time since the 0.11 release.


>
> > The network could shrink by 60% and it would still have plenty of open
> > connection slots
>
>
> I'm afraid we have to agree to disagree if you think dropping support for
> 60% of the nodes on the network when rolling out an upgrade is a sane
> default.
>

That is my estimate of the worst case -- not a 'sane default.'

My point is that even if the number of nodes shrank by 60%, we would not
see any issues (SPV nodes would still have no problem finding a full node
to connect to, full nodes would not have any problem connecting to each
other, and we would not be significantly more vulnerable to Sybil attacks
or "governments get together and try to ban running a full node" attacks).



>
>> > People are committing to spinning up thousands of supports-2mb-nodes
>> > during the grace period.
>
>
> thousands of nodes?! where did you get this figure? who are these people?
> *Please* elaborate.
>

There are over a thousand people subscribed to the Classic slack channel,
many of whom have privately told me they are willing and able to run an
extra node or three (or a hundred-and-eleven) once there is a final release.

I'm not going to name names, because
 a) these were private communications, and
 b) risk of death threats, extortion, doxxing, DoS attacks, etc.  Those
risks aren't theoretical, they are very real.

To be clear: I will discourage and publicly condemn anybody who runs
'pseudo nodes' or plans to spin up lots of nodes to try to influence the
debate. The only legitimate reason to run extra nodes is to fill in a
possible gap in total node count that might be caused by old, unmaintained
nodes that stop serving blocks because the rest of the network has upgraded.


> We could wait a year and pick up maybe 10 or 20% more.
>
>
> I don't understand this statement at all, please explicate.
>

The adoption curve for a new major release is exponential: lots of adoption
in the first 30 days or so, then it rapidly tapers off.  Given that
people's nodes will be alerting them that they must upgrade, and given that
every source of Bitcoin news will probably be covering the miner adoption
vote like it was a presidential election, I expect the adoption curve for
the 2mb bump to be steeper than we've ever seen.  So my best guess is
70-80% of nodes will upgrade within 30 days of the miner voting hitting 50%
of blocks and triggering the automatic 'version obsolete; upgrade required'
warning.

Wait a year, and my guess is you might reach another 10-20% (80 to
90-something percent).


Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-09 Thread David Vorick via bitcoin-dev
> I love seeing data!  I was considering 0.10 nodes as 'unmaintained'
> because it has been a long time since the 0.11 release.

https://packages.gentoo.org/packages/net-p2p/bitcoin-qt

The Gentoo package manager still has 0.10.2 as the most recent stable
version. Getting a later version of the software on a Gentoo setup requires
explicitly telling the package manager to grab a later version. I don't know
what percent of nodes are Gentoo 0.10.2, but I think it's evidence that
0.10 should not be considered 'unmaintained'. People who update their
software regularly will still be running 0.10 on Gentoo.

> many of whom have privately told me they are willing and able to run an
> extra node or three (or a hundred-and-eleven) once there is a final release.

I'm not clear on the utility of more nodes. Perhaps there is significant
concern about SPV nodes getting enough bandwidth or the network struggling
from the load? Generally though, I believe that when people talk about the
deteriorating full node count they are talking about a reduction in
decentralization. Full nodes are a weak indicator of how likely something
like a change in consensus rules is to get caught, or how many people you
would need to open communication with / extort in order to be able to force
rules upon the network. Having a person spin up multiple nodes doesn't
address either of those concerns, which in my understanding is what most
people care about. My personal concern is with the percentage of the
economy that is dependent on trusting the full nodes they are connected to,
and the overall integrity of that trust. (IE how likely is it that my SPV
node is going to lie to me about whether or not I've received a payment).

I will also point out that lots of people will promise things when they are
seeking political change. I don't know what percentage of promised nodes
would actually be spun up, but I'm guessing that it's going to be
significantly less than 100%. I have similar fears for companies that claim
they have tested their infrastructure for supporting 2MB blocks. Talk is
cheap.