Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-03-31 Thread Sergio Demian Lerner via bitcoin-dev
Even if the proposal involves a political compromise, any change to the
code must be technically evaluated.
The patch was designed to require the least possible auditing time. We're
talking about reviewing 120 lines of code (not counting comments or
whitespace), of which 30 are changes to constants. A Core programmer audited
it in less than one hour.

Also, you're risking a unique opportunity to see segwit activated, and for
what? Maybe we can reach a similar agreement for segwit activation in two
years, but that will be too late. The rest of the cryptocurrency ecosystem
keeps moving forward.



On Sat, Apr 1, 2017 at 12:03 AM, Samson Mow  wrote:

> A compromise for the sake of compromise doesn't merit technical
> discussions. There are no benefits to be gained from a 2MB hard-fork at
> this time and it would impose an unnecessary cost to the ecosystem for
> testing and implementation.
>
> On Fri, Mar 31, 2017 at 3:13 PM, Sergio Demian Lerner via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>>
>>
>> On Fri, Mar 31, 2017 at 6:22 PM, Matt Corallo 
>> wrote:
>>
>>> Hey Sergio,
>>>
>>> You appear to have ignored the last two years of Bitcoin hardfork
>>> research and understanding, recycling instead BIP 102 from 2015. There
>>> are many proposals which have pushed the state of hard fork research
>>> much further since then, and you may wish to read some of the posts on
>>> this mailing list listed at https://bitcoinhardforkresearch.github.io/
>>> and make further edits based on what you learn.
>>
>>
>> I've read every proposal that was published in the last two years and the
>> choice NOT to implement any of the super cool research you cite is
>> intentional.
>>
>> We're in a deadlock, and it seems we can't go forward adding more
>> functionality to segwit without community approval (which includes the
>> miners). This is obvious to me. So we have to take a step back.
>>
>> If this last-resort solution is merged, we could go back to discussing
>> improvements with the
>>
>> Your goal of "avoid
>>> technical changes" appears to not have any basis outside of perceived
>>> compromise for compromise's sake, only making such a hardfork riskier
>>> instead.
>>>
>>> You are totally correct. It's a compromise for compromise's sake. I
>> couldn't have expressed it more clearly. However, the only "riskier"
>> element is the hard-fork date. We can move the date forward.
>>
>>
>>> At a minimum, in terms of pure technical changes, you should probably
>>> consider (probably among others):
>>>
>> a) Utilizing the "hard fork signaling bit" in the nVersion of the block.
>>>
>>
>> This I could consider, as it probably requires a single line of code.
>> Which BIP specifies this?
>>
>>
>>> b) Either limiting non-SegWit transactions in some way to fix the n**2
>>> sighash and FindAndDelete runtime and memory usage issues or fix them by
>>> utilizing the new sighash type which many wallets and projects have
>>> already implemented for SegWit in the spending of non-SegWit outputs.
>>>
>>
>> The sighash problem has already been addressed by limiting the maximum
>> size of a transaction to 1 MB.
>> The FindAndDelete problem has already been solved by the Core Developers,
>> so we don't have to worry about it anymore.
>>
>>
>>> c) You really should have replay protection in any HF.
>>
>>
>> We could add a simple protection, although if we reach community
>> consensus and 95% of the hashing power, do we really need to? Could the
>> old chain still stay alive?
>> If more people ask for replay protection, I will merge the Spoonnet
>> scheme or develop the minimum possible replay protection (a simple
>> signaling bit in the transaction version).
>>
>>
>>> d) You may wish to consider the possibility of tweaking the witness
>>> discount and possibly discounting other parts of the input - SegWit went
>>> a long ways towards making removal of elements from the UTXO set cheaper
>>> than adding them, but didn't quite get there, you should probably finish
>>> that job. This also provides additional tuneable parameters to allow you
>>> to increase the block size while not having a blowup in the worst-case
>>> block size.
>>>
>>
>> That is an interesting economic change and would be out of the scope of
>> segwit2mb.
>>
>>
>>> e) Additional commitments at the top of the merkle root - both for
>>> SegWit transactions and as additional space for merged mining and other
>>> commitments which we may wish to add in the future, this should likely
>>> be implemented as an "additional header" à la Johnson Lau's Spoonnet
>>> proposal.
>>>
>>> That is an interesting technical improvement that is out of the scope of
>> segwit2mb.
>> We can keep discussing Spoonnet while we merge segwit2mb, as Spoonnet
>> includes most of these technical innovations.
>>
>>
>>> Additionally, I think your parameters here pose very significant risk to
>>> the Bitcoin ecosystem broadly.
>>>
>>> a) Activating a hard fork with less than 18/24 months (and even 

Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-03-31 Thread Samson Mow via bitcoin-dev
A compromise for the sake of compromise doesn't merit technical
discussions. There are no benefits to be gained from a 2MB hard-fork at
this time and it would impose an unnecessary cost to the ecosystem for
testing and implementation.

On Fri, Mar 31, 2017 at 3:13 PM, Sergio Demian Lerner via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
>
> On Fri, Mar 31, 2017 at 6:22 PM, Matt Corallo 
> wrote:
>
>> Hey Sergio,
>>
>> You appear to have ignored the last two years of Bitcoin hardfork
>> research and understanding, recycling instead BIP 102 from 2015. There
>> are many proposals which have pushed the state of hard fork research
>> much further since then, and you may wish to read some of the posts on
>> this mailing list listed at https://bitcoinhardforkresearch.github.io/
>> and make further edits based on what you learn.
>
>
> I've read every proposal that was published in the last two years and the
> choice NOT to implement any of the super cool research you cite is
> intentional.
>
> We're in a deadlock, and it seems we can't go forward adding more
> functionality to segwit without community approval (which includes the
> miners). This is obvious to me. So we have to take a step back.
>
> If this last-resort solution is merged, we could go back to discussing
> improvements with the
>
> Your goal of "avoid
>> technical changes" appears to not have any basis outside of perceived
>> compromise for compromise's sake, only making such a hardfork riskier
>> instead.
>>
>> You are totally correct. It's a compromise for compromise's sake. I
> couldn't have expressed it more clearly. However, the only "riskier"
> element is the hard-fork date. We can move the date forward.
>
>
>> At a minimum, in terms of pure technical changes, you should probably
>> consider (probably among others):
>>
> a) Utilizing the "hard fork signaling bit" in the nVersion of the block.
>>
>
> This I could consider, as it probably requires a single line of code.
> Which BIP specifies this?
>
>
>> b) Either limiting non-SegWit transactions in some way to fix the n**2
>> sighash and FindAndDelete runtime and memory usage issues or fix them by
>> utilizing the new sighash type which many wallets and projects have
>> already implemented for SegWit in the spending of non-SegWit outputs.
>>
>
> The sighash problem has already been addressed by limiting the maximum
> size of a transaction to 1 MB.
> The FindAndDelete problem has already been solved by the Core Developers,
> so we don't have to worry about it anymore.
>
>
>> c) You really should have replay protection in any HF.
>
>
> We could add a simple protection, although if we reach community consensus
> and 95% of the hashing power, do we really need to? Could the old chain
> still stay alive?
> If more people ask for replay protection, I will merge the Spoonnet scheme
> or develop the minimum possible replay protection (a simple signaling bit
> in the transaction version).
>
>
>> d) You may wish to consider the possibility of tweaking the witness
>> discount and possibly discounting other parts of the input - SegWit went
>> a long ways towards making removal of elements from the UTXO set cheaper
>> than adding them, but didn't quite get there, you should probably finish
>> that job. This also provides additional tuneable parameters to allow you
>> to increase the block size while not having a blowup in the worst-case
>> block size.
>>
>
> That is an interesting economic change and would be out of the scope of
> segwit2mb.
>
>
>> e) Additional commitments at the top of the merkle root - both for
>> SegWit transactions and as additional space for merged mining and other
>> commitments which we may wish to add in the future, this should likely
>> be implemented as an "additional header" à la Johnson Lau's Spoonnet proposal.
>>
>> That is an interesting technical improvement that is out of the scope of
> segwit2mb.
> We can keep discussing Spoonnet while we merge segwit2mb, as Spoonnet
> includes most of these technical innovations.
>
>
>> Additionally, I think your parameters here pose very significant risk to
>> the Bitcoin ecosystem broadly.
>>
>> a) Activating a hard fork with less than 18/24 months (and even then...)
>> from a fully-audited and supported release of full node software to
>> activation date poses significant risks to many large software projects
>> and users. I've repeatedly received feedback from various folks that a
>> year or more is likely required in any hard fork to limit this risk, and
>> limited pushback on that given the large increase which SegWit provides
>> itself buying a ton of time.
>>
>> The feedback I received is slightly different from yours. Many
> company CTOs have expressed that one year was a period within which they
> could schedule a secure upgrade for a Bitcoin hard-fork.
>
>
>
>> b) Having a significant discontinuity in block size increase only serves
>> to confuse and mislead users and businesses, forcing them to rapidly
>> adapt to a 

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Rodney Morris via bitcoin-dev
I didn't say typical, I said every. Currently a Raspberry Pi on shitty ADSL
can run a full node. What's wrong with needing a high-end PC and good
connectivity to run a full node?

People that want to, can. People that don't want to, won't, no matter how
low spec the machine you need.

If nobody uses bitcoin, all the security in the world provides no value.
The value of bitcoin is provided by people using bitcoin, and people will
only use bitcoin if it provides value to them. Security is only one
aspect, and the failure to understand that is what has led to the block
size debate.

Rodney

On 1 Apr 2017 10:12, "Eric Voskuil"  wrote:


On 03/31/2017 02:23 PM, Rodney Morris via bitcoin-dev wrote:
> If the obsession with every personal computer being able to run a
> full node continues then bitcoin will be consigned to the dustbin
> of history,

The cause of the block size debate is the failure to understand the
Bitcoin security model. This failure is perfectly exemplified by the
above statement. If a typical personal computer cannot run a node
there is no security.

e


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Eric Voskuil via bitcoin-dev

On 03/31/2017 02:23 PM, Rodney Morris via bitcoin-dev wrote:
> If the obsession with every personal computer being able to run a
> full node continues then bitcoin will be consigned to the dustbin
> of history,

The cause of the block size debate is the failure to understand the
Bitcoin security model. This failure is perfectly exemplified by the
above statement. If a typical personal computer cannot run a node
there is no security.

e


Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-03-31 Thread Sergio Demian Lerner via bitcoin-dev
On Fri, Mar 31, 2017 at 6:22 PM, Matt Corallo 
wrote:

> Hey Sergio,
>
> You appear to have ignored the last two years of Bitcoin hardfork
> research and understanding, recycling instead BIP 102 from 2015. There
> are many proposals which have pushed the state of hard fork research
> much further since then, and you may wish to read some of the posts on
> this mailing list listed at https://bitcoinhardforkresearch.github.io/
> and make further edits based on what you learn.


I've read every proposal that was published in the last two years and the
choice NOT to implement any of the super cool research you cite is
intentional.

We're in a deadlock, and it seems we can't go forward adding more
functionality to segwit without community approval (which includes the
miners). This is obvious to me. So we have to take a step back.

If this last-resort solution is merged, we could go back to discussing
improvements with the

Your goal of "avoid
> technical changes" appears to not have any basis outside of perceived
> compromise for compromise's sake, only making such a hardfork riskier
> instead.
>
> You are totally correct. It's a compromise for compromise's sake. I
couldn't have expressed it more clearly. However, the only "riskier"
element is the hard-fork date. We can move the date forward.


> At a minimum, in terms of pure technical changes, you should probably
> consider (probably among others):
>
a) Utilizing the "hard fork signaling bit" in the nVersion of the block.
>

This I could consider, as it probably requires a single line of code. Which
BIP specifies this?


> b) Either limiting non-SegWit transactions in some way to fix the n**2
> sighash and FindAndDelete runtime and memory usage issues or fix them by
> utilizing the new sighash type which many wallets and projects have
> already implemented for SegWit in the spending of non-SegWit outputs.
>

The sighash problem has already been addressed by limiting the maximum size
of a transaction to 1 MB.
The FindAndDelete problem has already been solved by the Core Developers,
so we don't have to worry about it anymore.


> c) You really should have replay protection in any HF.


We could add a simple protection, although if we reach community consensus
and 95% of the hashing power, do we really need to? Could the old chain
still stay alive?
If more people ask for replay protection, I will merge the Spoonnet scheme
or develop the minimum possible replay protection (a simple signaling bit
in the transaction version).


> d) You may wish to consider the possibility of tweaking the witness
> discount and possibly discounting other parts of the input - SegWit went
> a long ways towards making removal of elements from the UTXO set cheaper
> than adding them, but didn't quite get there, you should probably finish
> that job. This also provides additional tuneable parameters to allow you
> to increase the block size while not having a blowup in the worst-case
> block size.
>

That is an interesting economic change and would be out of the scope of
segwit2mb.


> e) Additional commitments at the top of the merkle root - both for
> SegWit transactions and as additional space for merged mining and other
> commitments which we may wish to add in the future, this should likely
> be implemented as an "additional header" à la Johnson Lau's Spoonnet proposal.
>
> That is an interesting technical improvement that is out of the scope of
segwit2mb.
We can keep discussing Spoonnet while we merge segwit2mb, as Spoonnet
includes most of these technical innovations.


> Additionally, I think your parameters here pose very significant risk to
> the Bitcoin ecosystem broadly.
>
> a) Activating a hard fork with less than 18/24 months (and even then...)
> from a fully-audited and supported release of full node software to
> activation date poses significant risks to many large software projects
> and users. I've repeatedly received feedback from various folks that a
> year or more is likely required in any hard fork to limit this risk, and
> limited pushback on that given the large increase which SegWit provides
> itself buying a ton of time.
>
> The feedback I received is slightly different from yours. Many
company CTOs have expressed that one year was a period within which they
could schedule a secure upgrade for a Bitcoin hard-fork.



> b) Having a significant discontinuity in block size increase only serves
> to confuse and mislead users and businesses, forcing them to rapidly
> adapt to a Bitcoin which changed overnight both by hardforking, and by
> fees changing suddenly. Instead, having the hard fork activate technical
> changes, and then slowly increasing the block size over the following
> several years keeps things nice and continuous and also keeps us from
> having to revisit ye old blocksize debate again six months after
> activation.
>
> This is something worth considering. Pieter Wuille's old BIP 103
proposal has good parameters (17.7% per year).

c) You should likely 

[bitcoin-dev] The TXO bitfield

2017-03-31 Thread Bram Cohen via bitcoin-dev
Looking forward in node scaling we can envision a future in which blocks
are required to come with proofs of their validity and nodes can be run
entirely in memory and never have to hit disk. Ideally we'd like proofs to
be storable in client wallets which plan to spend their utxos later, or at
least to have a full node make a single, not terribly expensive disk access
to form the proof, which can then be passed along to other peers.

Such proofs will probably be significantly larger than the blocks they
prove (this is merkle root stuff, not zero knowledge stuff), but if we
accept that as a given then this should be doable, although the details of
how to do it aren't obvious.

This vision can be implemented simply and efficiently by playing some games
with the semantics of the term 'proof'. A proof is a thing which convinces
someone of something. What we've discussed in the past for such proofs
mostly has to do with maintaining a hash root of everything and having
proofs lead to that. This is an extreme of complexity of the proof and
simplicity of the checker, at the expense of forcing the root to be
maintained at all times and the proof to be reasonably fresh. Some tricks
can be applied to keep that problem under control, but there's an
alternative approach where the amount of data necessary to do validation is
much larger but still entirely reasonable to keep in memory, and the sizes
of proofs and their freshness requirements are much smaller.

In the previous discussion on Merkle sets I commented that insertion
ordering's main practical utility may be that it allows for compression. It
turns out that a constant factor of 256 makes a big difference. Since
there's only really one bit stored for each txo (stored or not) once you
have an insertion ordering you can simply store a bitfield of all txos so
far, which is entirely reasonable to hold in memory, and can be made even
more reasonable by compactifying down the older, mostly spent portions of
it (how best to compress a bitfield while maintaining random access is an
interesting problem but entirely doable).
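To make this concrete, here is a minimal sketch of the bitfield itself, in
illustrative Python (compression of the older, mostly-spent regions is
omitted, and all names are invented for the example):

    class TxoBitfield:
        # Append-only bitfield over all txos in insertion order.
        # Bit i is 1 while txo number i is unspent, 0 once spent.
        def __init__(self):
            self.bits = bytearray()   # packs 8 txos per byte
            self.count = 0            # txos inserted so far

        def append_txo(self):
            # Record a new txo as unspent; return its global position.
            pos = self.count
            if pos % 8 == 0:
                self.bits.append(0)
            self.bits[pos // 8] |= 1 << (pos % 8)
            self.count += 1
            return pos

        def spend(self, pos):
            self.bits[pos // 8] &= 0xff ^ (1 << (pos % 8))

        def is_unspent(self, pos):
            return bool(self.bits[pos // 8] & (1 << (pos % 8)))

At a billion txos, the uncompressed field is only 125 MB, which is what
makes holding it in memory entirely reasonable.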

This approach meets all the design goals, even allowing wallets to remember
their own 'proofs', which are just proofs of insertion ordering. Those
don't even change once the risk of reorgs has passed, so they can be stored
for years without being maintained.

Proofs of insertion ordering can be made by having a canonical way of
calculating a root of position commitments for each block, and nodes
calculate those roots when evaluating the block history and store them all
in memory. A proof of position is a path to one of those roots.

I've intentionally skipped over most of the details here, because it's
probably best to have a high level discussion of this as a general approach
before getting lost in the weeds.


Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-03-31 Thread Sergio Demian Lerner via bitcoin-dev
Praxeology_guy,
Yes, I understand that segwit2mb represents a "potential" 4 MB block size
increase.
But segwit does not immediately lead to 2 MB blocks; it can only achieve
close to a 2 MB effective size if all Bitcoin wallets switch to segwit,
which will take a couple of years.
Therefore I don't expect transactions per block to quadruple overnight.


On Fri, Mar 31, 2017 at 6:22 PM, praxeology_guy <
praxeology_...@protonmail.com> wrote:

> Sergio Demian Lerner: Please confirm that you understand that:
>
> The current SegWit being proposed comes bundled with an effective 2MB
> block size increase.
>
> Are you proposing to remove this bundled policy change, and then have a
> different BIP that increases the block size? It's not quite clear if you
> understand what the current proposal is.
>
> Cheers,
> Praxeology
>


Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-03-31 Thread praxeology_guy via bitcoin-dev
Sergio Demian Lerner: Please confirm that you understand that:

The current SegWit being proposed comes bundled with an effective 2MB block 
size increase.

Are you proposing to remove this bundled policy change, and then have a
different BIP that increases the block size? It's not quite clear if you
understand what the current proposal is.

Cheers,
Praxeology


Re: [bitcoin-dev] Refund Excess Fee Hard Fork Proposal

2017-03-31 Thread praxeology_guy via bitcoin-dev
> That would have the unfortunate effect of incentivizing miners to not 
> completely fill blocks, because low fee marginal transactions could cost them 
> money.

Look at the fee distribution. The vast majority of fee income comes from txs w/ 
fees near the LIFB. The blocks will be full... but I guess this would make 
Child Pays For Parent undesirable. CPFP would need a flag saying it is CPFP so 
that the parent fee isn't considered the LIFB.


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Rodney Morris via bitcoin-dev
You guessed wrong. Multiple data centres are as much about redundancy,
resiliency, and latency as they are about raw capacity.

As for the cost, data centre space, business grade communication lines, and
staff are orders of magnitude more expensive than the physical hardware
they support.

I'd like to call you out on your continuing reductio ad absurdum and
slippery-slope arguments. Just because we can't handle 4 GB blocks today
doesn't mean we shouldn't aim in that direction, and it doesn't mean we
shouldn't be taking our first, second, and third baby steps toward it.

If the obsession with every personal computer being able to run a full node
continues, then bitcoin will be consigned to the dustbin of history, a
footnote to the story of the global cryptocurrency that eventually took
over the world.

Thanks
Rodney


Date: Fri, 31 Mar 2017 12:14:42 -0400
From: David Vorick 
To: Jared Lee Richardson 
Cc: Bitcoin Dev 
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting


Then explain why PayPal has multiple datacenters. And why Visa has multiple
datacenters. And why the banking systems have multiple datacenters each.

I'm guessing it's because you need that much juice to run a global payment
system at the transaction volumes that they run at.



Unless you have professional experience working directly with transaction
processors handling tens of millions of financial transactions per day, I
think we can fully discount your assessment that it would be a rounding
error in the budget of a major exchange or Bitcoin processor to handle that
much load. And even if it was, it wouldn't matter because it's extremely
important to Bitcoin's security that its everyday users are able to run,
and are actively running, full nodes.

I'm not going to take the time to refute everything you've been saying but
I will say that most of your comments have demonstrated a similar level of
ignorance as the one above.

This whole thread has been absurdly low quality.


Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-03-31 Thread Matt Corallo via bitcoin-dev
Hey Sergio,

You appear to have ignored the last two years of Bitcoin hardfork
research and understanding, recycling instead BIP 102 from 2015. There
are many proposals which have pushed the state of hard fork research
much further since then, and you may wish to read some of the posts on
this mailing list listed at https://bitcoinhardforkresearch.github.io/
and make further edits based on what you learn. It seems your goal of
"avoid any technical changes" doesn't have any foundation aside from a
perceived compromise for compromise's sake, only making the fork riskier
in the process.

At a minimum, in terms of pure technical changes, you should probably
consider (probably among others):

a) Utilizing the "hard fork signaling bit" in the nVersion of the block.
b) Either limiting non-SegWit transactions in some way to fix the n**2
sighash and FindAndDelete runtime and memory usage issues or fix them by
utilizing the new sighash type which many wallets and projects have
already implemented for SegWit in the spending of non-SegWit outputs.
c) Your replay protection isn't really ideal - XXX. The clever fix from
Spoonnet for poor scaling of optionally allowing non-SegWit outputs to
be spent with SegWit's sighash provides this all in one go.
d) You may wish to consider the possibility of tweaking the witness
discount and possibly discounting other parts of the input - SegWit went
a long ways towards making removal of elements from the UTXO set cheaper
than adding them, but didn't quite get there, you should probably finish
that job. This also provides additional tuneable parameters to allow you
to increase the block size while not having a blowup in the worst-case
block size.
e) Additional commitments at the top of the merkle root - both for
SegWit transactions and as additional space for merged mining and other
commitments which we may wish to add in the future, this should likely
be implemented as an "additional header" à la Johnson Lau's Spoonnet proposal.

Additionally, I think your parameters here pose very significant risk to
the Bitcoin ecosystem broadly.

a) Activating a hard fork with less than 18/24 months (and even then...)
from a fully-audited and supported release of full node software to
activation date poses significant risks to many large software projects
and users. I've repeatedly received feedback from various folks that a
year or more is likely required in any hard fork to limit this risk, and
limited pushback on that given the large increase which SegWit provides
itself buying a ton of time.

b) Having a significant discontinuity in block size increase only serves
to confuse and mislead users and businesses, forcing them to rapidly
adapt to a Bitcoin which changed overnight both by hardforking, and by
fees changing suddenly. Instead, having the hard fork activate technical
changes, and then slowly increasing the block size over the following
several years keeps things nice and continuous and also keeps us from
having to revisit ye old blocksize debate again six months after activation.

c) You should likely consider the effect of the many technological
innovations coming down the pipe in the coming months. Technologies like
Lightning, TumbleBit, and even your own RootStock could significantly
reduce fee pressure as transactions move to much faster and more
featureful systems.

Commitments to aggressive hard fork parameters now may leave miners
without much revenue as far out as the next halving (current transaction
growth trends indicate we'd only barely reach 2MB of transaction volume
by then, let alone if you consider the effects of users
moving to systems which provide more features for Bitcoin transactions).
This could lead to a precipitous drop in hashrate as miners are no
longer sufficiently compensated.

Remember that the "hashpower required to secure bitcoin" is determined
as a percentage of total Bitcoins transacted on-chain in each block, so
as subsidy goes down, miners need to be paid with fees, not just price
increases. Even if we were OK with hashpower going down compared to the
value it is securing, betting the security of Bitcoin on its price
rising exponentially to match decreasing subsidy does not strike me as a
particularly inspiring tradeoff.

There aren't many great technical solutions to some of these issues, as
far as I'm aware, but it's something that needs to be incredibly
carefully considered before betting the continued security of Bitcoin on
exponential on-chain growth, something which we have historically never
seen.

Matt


On March 31, 2017 5:09:18 PM EDT, Sergio Demian Lerner via bitcoin-dev 
 wrote:
>Hi everyone,
>
>Segwit2Mb is the project to merge into Bitcoin a minimal patch that
>aims to
>untangle the current conflict between different political positions
>regarding segwit activation vs. an increase of the on-chain blockchain
>space through a standard block size increase. It is not a new solution,
>but
>it should 

Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-03-31 Thread Matt Corallo via bitcoin-dev
Hey Sergio,

You appear to have ignored the last two years of Bitcoin hardfork
research and understanding, recycling instead BIP 102 from 2015. There
are many proposals which have pushed the state of hard fork research
much further since then, and you may wish to read some of the posts on
this mailing list listed at https://bitcoinhardforkresearch.github.io/
and make further edits based on what you learn. Your goal of "avoid
technical changes" appears to not have any basis outside of perceived
compromise for compromise's sake, only making such a hardfork riskier
instead.

At a minimum, in terms of pure technical changes, you should probably
consider (probably among others):

a) Utilizing the "hard fork signaling bit" in the nVersion of the block.
b) Either limiting non-SegWit transactions in some way to fix the n**2
sighash and FindAndDelete runtime and memory usage issues or fix them by
utilizing the new sighash type which many wallets and projects have
already implemented for SegWit in the spending of non-SegWit outputs.
c) You really should have replay protection in any HF. The clever fix from
Spoonnet for poor scaling of optionally allowing non-SegWit outputs to
be spent with SegWit's sighash provides this all in one go.
d) You may wish to consider the possibility of tweaking the witness
discount and possibly discounting other parts of the input - SegWit went
a long ways towards making removal of elements from the UTXO set cheaper
than adding them, but didn't quite get there, you should probably finish
that job. This also provides additional tuneable parameters to allow you
to increase the block size while not having a blowup in the worst-case
block size. (A sketch of this tradeoff follows this list.)
e) Additional commitments at the top of the merkle root - both for
SegWit transactions and as additional space for merged mining and other
commitments which we may wish to add in the future, this should likely
be implemented as an "additional header" à la Johnson Lau's Spoonnet proposal.
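To make the tuneable-parameters point in (d) concrete, here is a small
sketch in illustrative Python (the factor of 4 is segwit's actual rule;
everything else is an assumption for the example):

    def block_sizes(witness_factor, base_target=1000000, witness_share=0.6):
        # weight = witness_factor * base_bytes + witness_bytes, with the
        # weight limit chosen so an all-base block stays at base_target.
        limit = witness_factor * base_target
        worst_case = limit  # nearly-all-witness block: weight ~ raw bytes
        typical = limit / (witness_factor * (1 - witness_share) + witness_share)
        return typical, worst_case

    for k in (2, 4, 8):
        print(k, block_sizes(k))
    # Deepening the discount grows typical blocks slowly (~1.4, ~1.8,
    # ~2.1 MB) but grows the worst case linearly (2, 4, 8 MB); that is
    # the tradeoff being tuned.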

Additionally, I think your parameters here pose very significant risk to
the Bitcoin ecosystem broadly.

a) Activating a hard fork with less than 18/24 months (and even then...)
from a fully-audited and supported release of full node software to
activation date poses significant risks to many large software projects
and users. I've repeatedly received feedback from various folks that a
year or more is likely required in any hard fork to limit this risk, and
limited pushback on that given the large increase which SegWit provides
itself buying a ton of time.

b) Having a significant discontinuity in block size increase only serves
to confuse and mislead users and businesses, forcing them to rapidly
adapt to a Bitcoin which changed overnight both by hardforking, and by
fees changing suddenly. Instead, having the hard fork activate technical
changes, and then slowly increasing the block size over the following
several years keeps things nice and continuous and also keeps us from
having to revisit ye old blocksize debate again six months after activation.

c) You should likely consider the effect of the many technological
innovations coming down the pipe in the coming months. Technologies like
Lightning, TumbleBit, and even your own RootStock could significantly
reduce fee pressure as transactions move to much faster and more
featureful systems.

Commitments to aggressive hard fork parameters now may leave miners
without much revenue as far out as the next halving (current transaction
growth trends indicate we'd only barely reach 2MB of transaction volume
by then, let alone if you consider the effects of users
moving to systems which provide more features for Bitcoin transactions).
This could lead to a precipitous drop in hashrate as miners are no
longer sufficiently compensated.

Remember that the "hashpower required to secure bitcoin" is determined
as a percentage of total Bitcoins transacted on-chain in each block, so
as subsidy goes down, miners need to be paid with fees, not just price
increases. Even if we were OK with hashpower going down compared to the
value it is securing, betting the security of Bitcoin on its price
rising exponentially to match decreasing subsidy does not strike me as a
particularly inspiring tradeoff.

There aren't many great technical solutions to some of these issues, as
far as I'm aware, but it's something that needs to be incredibly
carefully considered before betting the continued security of Bitcoin on
exponential on-chain growth, something which we have historically never
seen.

Matt


On March 31, 2017 5:09:18 PM EDT, Sergio Demian Lerner via bitcoin-dev 
 wrote:
>Hi everyone,
>
>Segwit2Mb is the project to merge into Bitcoin a minimal patch that
>aims to
>untangle the current conflict between different political positions
>regarding segwit activation vs. an increase of the on-chain blockchain
>space through a standard block size increase. It is not a new solution,
>but
>it should be seen 

[bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-03-31 Thread Sergio Demian Lerner via bitcoin-dev
Hi everyone,

Segwit2Mb is the project to merge into Bitcoin a minimal patch that aims to
untangle the current conflict between different political positions
regarding segwit activation vs. an increase of the on-chain blockchain
space through a standard block size increase. It is not a new solution, but
it should be seen more as a least common denominator.

Segwit2Mb combines segwit as it is today in Bitcoin 0.14+ with a 2MB block
size hard-fork activated ONLY if segwit activates (95% of miners
signaling), but at a fixed future date.

The sole objective of this proposal is to re-unite the Bitcoin community
and avoid a cryptocurrency split. Segwit2Mb does not aim to be the best
possible technical solution to Bitcoin's technical limitations.
However, this proposal does not imply compromising the future
scalability or decentralization of Bitcoin, as several core and non-core
developers have argued that a small block size increase does not affect
Bitcoin's value propositions.

In the worst case, a 2X block size increase has much lower economic impact
than the last bitcoin halving (<10%), which succeeded without problem.

On the other side, Segwit2Mb primary goal is to be minimalistic: in this
patch some choices have been made to reduce the number of lines modified in
the current Bitcoin Core state (master branch), instead of implementing the
most elegant solution. This is because I want to reduce the time it takes
for core programmers and reviewers to check the correctness of the code,
and to report and correct bugs.

The patch was built by forking the master branch of Bitcoin Core, mixing a
few lines of code from Jeff Garzik's BIP102, and defining a second
versionbits activation bit (bit 2) for the combined activation.

The combined activation of segwit and 2Mb hard-fork nVersion bit is 2
(DEPLOYMENT_SEGWIT_AND_2MB_BLOCKS).

This means that segwit can still be activated without the 2MB hard-fork by
signaling bit 1 in nVersion  (DEPLOYMENT_SEGWIT).

The tentative lock-in and hard-fork dates are the following:

Bit 2 signaling StartTime = 1493424000; // April 29th, 2017

Bit 2 signaling Timeout = 1503964800; // August 29th, 2017

HardForkTime = 1513209600; // Thu, 14 Dec 2017 00:00:00 GMT


The hard-fork is conditional on 95% of the hashing power having approved the
segwit2mb soft-fork and on the segwit soft-fork having been activated (which
should occur 2016 blocks after its lock-in time).

For more information on how soft-forks are signaled and activated, see
https://github.com/bitcoin/bips/blob/master/bip-0009.mediawiki
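The combined condition can be sketched as follows, in illustrative Python
with invented names (the actual patch goes through Bitcoin Core's
versionbits machinery):

    # Deployment constants from the proposal (BIP9-style, bit 2).
    BIT2_START_TIME = 1493424000   # April 29th, 2017
    BIT2_TIMEOUT = 1503964800      # August 29th, 2017
    HARD_FORK_TIME = 1513209600    # Thu, 14 Dec 2017 00:00:00 GMT

    MAX_LEGACY_BLOCK_SIZE = 1000000
    MAX_HF_BLOCK_SIZE = 2000000

    def max_block_size(bit2_locked_in, segwit_active, median_time_past):
        # 2 MB blocks become legal only once bit-2 signaling reached the
        # 95% threshold, segwit itself is active, and the fixed hard-fork
        # time has passed; otherwise the old 1 MB limit applies.
        if (bit2_locked_in and segwit_active
                and median_time_past >= HARD_FORK_TIME):
            return MAX_HF_BLOCK_SIZE
        return MAX_LEGACY_BLOCK_SIZE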

This means that segwit would be activated before the 2 MB fork: this is
inevitable, as versionbits have been designed to have fixed activation
periods and thresholds for all bits. Making segwit and the 2 MB fork
activate together at a delayed date would have required a major re-write of
this code, which would contradict the premise of creating a minimalistic
patch. However, once segwit is activated, the hard-fork is unavoidable.

Although I have coded a first version of the segwit2mb patch (which
modifies 120 lines of code, and adds 220 lines of testing code), I would
prefer to wait to publish the source code until more comments have been
received from the community.

To prevent worsening block verification time because of the O(N^2) hashing
problem, the simple restriction that transactions cannot be larger than 1 MB
has been kept. Therefore the worst-case block verification time has only
doubled.
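A rough model of why the cap matters, in illustrative Python (real costs
also depend on sigop limits and script layout, so treat the constants as
assumptions):

    def legacy_sighash_bytes(tx_size, n_inputs):
        # Pre-segwit signature hashing: each input's signature check
        # re-hashes roughly the whole transaction, so total bytes hashed
        # scale with n_inputs * tx_size, i.e. quadratically when inputs
        # grow with transaction size.
        return n_inputs * tx_size

    # An input is at least ~41 bytes, so a tx of S bytes has at most
    # about S/41 inputs.
    one_mb_tx = legacy_sighash_bytes(1000000, 1000000 // 41)      # ~24 GB
    two_such_txs = 2 * one_mb_tx                                  # ~49 GB
    one_two_mb_tx = legacy_sighash_bytes(2000000, 2000000 // 41)  # ~98 GB

With the 1 MB transaction cap, a 2 MB block holds at most two worst-case
transactions (2x the hashing), while a single 2 MB transaction would have
been 4x.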

Regarding the hard-fork activation date, I want to give enough time to all
active economic nodes to upgrade. As of Fri Mar 31 2017,
https://bitnodes.21.co/nodes/ reports that 6332 out of 6955 nodes (91%)
have upgraded to post-0.12 versions. Upgrading to a post-0.12 version can
be used to identify economically active nodes, because dynamic fees were
introduced in the 0.12 release, and currently no automatic Bitcoin payment
system can operate without automatic discovery of the current fee rate. A
pre-0.12 node would require constant manual intervention.
Therefore I conclude that no more than 91% of the network nodes reported by
bitnodes are active economic nodes.

As Bitcoin Core 0.12 was released in February 2016, the time for this 91%
to upgrade has been around one year (under moderate pressure from
operational problems with unconfirmed transactions).
Therefore we can expect a similar or lower time to upgrade for a hard-fork,
after developers have discussed and approved the patch, and it has been
reviewed and merged and 95% of the hashing power has signaled for it (the
cost of not upgrading being a complete halt of operations). However, I
suggest that we discuss the hard-fork date and delay it if there is a real
need to.

Time currently works against the Bitcoin community, and so does delaying a
compromise solution. Most of the community agrees that halting innovation
for several years is a very bad option.

After the comments collected by the community, a BIP will be written
describing the resulting proposal details.


Re: [bitcoin-dev] Refund Excess Fee Hard Fork Proposal

2017-03-31 Thread Bram Cohen via bitcoin-dev
On Fri, Mar 31, 2017 at 1:29 PM, praxeology_guy via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> Change the fee policy to cause all fee/byte in a block to be reduced to
> the lowest included fee/byte. Change transactions to specify how/which
> outputs get what portions of [(TX_fee/TX_length - LIFB)*TX_length].
> Transactions of course could still offer the remainder to the miner if they
> don't want to modify some of the outputs and don't want to reveal their
> change output.
>

That would have the unfortunate effect of incentivizing miners to not
completely fill blocks, because low fee marginal transactions could cost
them money.

An alternate approach would be to incentivize miners to follow transaction
fees more by reducing mining rewards, which could be done by soft forking
in a requirement that a chunk of all mining rewards be sent to an
unspendable address.
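A sketch of what such a rule could look like, in illustrative Python (the
burn fraction and the output encoding are arbitrary assumptions; it would
be a soft fork because it only narrows what counts as a valid block):

    def coinbase_obeys_burn_rule(coinbase_outputs, subsidy_plus_fees,
                                 burn_fraction=0.25):
        # coinbase_outputs: list of (script, value) pairs. Some fraction
        # of the total miner reward must go to a provably unspendable
        # output; an OP_RETURN script (first byte 0x6a) is one example.
        burned = sum(value for script, value in coinbase_outputs
                     if script[:1] == b'\x6a')
        return burned >= burn_fraction * subsidy_plus_fees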


[bitcoin-dev] Refund Excess Fee Hard Fork Proposal

2017-03-31 Thread praxeology_guy via bitcoin-dev
TL;DR (In layman terms): Refund any excess fee amounts higher than the lowest 
included fee in a block.

=== Proposed Hard Fork Change ===

LIFB: Lowest Included Fee/Byte

Change the fee policy to cause all fee/byte in a block to be reduced to the 
lowest included fee/byte. Change transactions to specify how/which outputs get 
what portions of [(TX_fee/TX_length - LIFB)*TX_length]. Transactions of course 
could still offer the remainder to the miner if they don't want to modify some 
of the outputs and don't want to reveal their change output.
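The proposed arithmetic, restated as a small sketch in illustrative Python:

    def lifb_refunds(txs):
        # txs: list of (fee, length_bytes) pairs for one block. Returns
        # the lowest included fee/byte and each transaction's refund
        # under the rule above: (fee/length - LIFB) * length.
        lifb = min(fee / length for fee, length in txs)
        refunds = [fee - lifb * length for fee, length in txs]
        return lifb, refunds

    # Example: the 20 sat/byte tx is effectively reduced to 5 sat/byte.
    lifb, refunds = lifb_refunds([(5000, 1000), (2500, 500), (20000, 1000)])
    print(lifb)     # 5.0 (sat/byte)
    print(refunds)  # [0.0, 0.0, 15000.0]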

=== Economic Analysis Of Why Such Is Desirable ===

Pure profit seeking miners attempt to fill their block with transactions that 
have the highest fee/byte. So what happens is the users who are willing to 
offer the highest fee/byte get included first in a block until it gets filled. 
At fill, there is some fee/byte where the next available tx in mempool doesn't 
get included. And right above that fee/byte is the last transaction that was 
selected to be included in the block, which has the lowest fee/byte of any of 
the transactions in the block.

Users who want to create transactions with the lowest fee watch the LIFB with 
https://bitcoinfees.21.co/ or similar systems... so that they can make a 
transaction that offers a fee at or above the LIFB so that it can be included 
in a block in reasonable time.

Some users have transactions with very high confirmation time 
sensitivity/importance... so they offer a fee much higher than the LIFB to 
guarantee quick confirmation. But even though they are willing to pay a
higher fee, they would prefer to still only pay the LIFB fee/byte amount.

This becomes even more of an issue when someone wants to create a transaction 
now that they want to be included in a block at a much later time... because it 
becomes harder and harder to predict what the LIFB will be as you try to 
predict further into the future. It would be nice to be able to specify a very 
high fee/byte in such a transaction, and then when the transaction is confirmed 
only have to pay the LIFB.

Users will look for the money that offers the greatest money transfer 
efficiency, and tx fees are a big and easily measurable component. So a money 
system is better if its users can pay lower fees than competing money options. 
Refund Excess Fee is one way to reduce fees.

=== Technical Difficulties ===

I realize this is a big change... and I'm not sure of the performance problems 
this might entail... I'm just throwing this idea out there. Of course if fees 
are very small, and there is little difference between a high priority fee/byte 
and the LIFB, then this issue is not really that big of a deal. Also... hard 
forks are very hard to do in general, so such a hard fork as this might not be 
worthwhile.

Cheers,
Praxeology Guy


Re: [bitcoin-dev] A Better MMR Definition

2017-03-31 Thread Bram Cohen via bitcoin-dev
On Wed, Mar 1, 2017 at 2:31 PM, Peter Todd  wrote:

>
> A better way to present your work would have been to at least explain that
> at
> the top of the file, and perhaps even better, split the reference
> implementation and optimized implementation into two separate files. If
> you did
> this, you'd be more likely to get others to review your work.
>

I've now added explanation to the README, reorganized the files, and added
some comments:

https://github.com/bramcohen/MerkleSet

> In fact, I'd suggest that for things like edge cases, you test edge cases in
> separate unit tests that explain what edge cases you're trying to catch.
>

The tests work by doing a lot of exercising on pseudorandom data, an
approach which does a good job of hitting all the lines of code and edge
cases and requires very little updating as the implementation changes, at
the expense of the tests taking a while to run. The advantage of very
custom unit tests is that they run almost instantly, at the cost of
requiring painstaking maintenance and missing more stuff. I've come to
favor the former approach in my old age.
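A minimal sketch of this style of test, in illustrative Python (the
add/remove/root interface and the factory arguments are invented for the
example, not MerkleSet's actual API):

    import random

    def exercise(make_impl, make_reference, steps=10000, seed=0):
        # Drive the optimized structure and a trivially-correct reference
        # with the same pseudorandom operations, comparing observable
        # state at every step; rerunning with new seeds finds new paths.
        rng = random.Random(seed)
        impl, ref = make_impl(), make_reference()
        live = []
        for _ in range(steps):
            if not live or rng.random() < 0.6:
                k = rng.getrandbits(64).to_bytes(8, 'big')
                live.append(k)
                impl.add(k); ref.add(k)
            else:
                k = live.pop(rng.randrange(len(live)))
                impl.remove(k); ref.remove(k)
            assert impl.root() == ref.root()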

The proportion of code devoted to tests is more than it looks like at first
blush, because all the audit methods are just for testing.


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Eric Voskuil via bitcoin-dev
As an independently verifiable, decentralized store of public information, the 
Bitcoin block tree and transaction DAG do have an advantage over systems such 
as Visa. The store is just a cache. There is no need to implement reliability 
in storage or in communications. It is sufficient to be able to detect 
invalidity. And even if a subset of nodes fail to do so, the system overall 
compensates.

As such the architecture of a Bitcoin node and its supporting hardware 
requirements are very different from an unverifiable, centralized store of 
private information. So in that sense the comparison below is not entirely 
fair. Many, if not most, of the high costs of a Visa datacenter do not apply 
because of Bitcoin's information architecture.

However, if the system cannot remain decentralized these architectural 
advantages will not hold. At that point your considerations below are entirely 
valid. Once the information is centralized it necessarily becomes private and 
fragile. Conversely, once it becomes private it necessarily becomes centralized 
and fragile. This fragility requires significant investment by the central 
authority to maintain.

So as has been said, we can have decentralization and its benefit of 
trustlessness or we can have Visa. We already have Visa. Making another is 
entirely uninteresting.

e 

> On Mar 31, 2017, at 11:23 AM, David Vorick via bitcoin-dev 
>  wrote:
> 
> Sure, your math is pretty much entirely irrelevant because scaling systems to 
> massive sizes doesn't work that way.
> 
> At 400B transactions per year we're looking at block sizes of 4.5 GB, and a 
> database size of petabytes. How much RAM do you need to process blocks like 
> that? Can you fit that much RAM into a single machine? Okay, you can't fit 
> that much RAM into a single machine. So you have to rework the code to 
> operate on a computer cluster.
> 
> Already we've hit a significant problem. You aren't going to rewrite Bitcoin 
> to do block validation on a computer cluster overnight. Further, are storage 
> costs consistent when we're talking about setting up clusters? Are bandwidth 
> costs consistent when we're talking about setting up clusters? Are RAM and 
> CPU costs consistent when we're talking about setting up clusters? No, they 
> aren't. Clusters are a lot more expensive to set up per-resource because they 
> need to talk to each other and synchronize with each other and you have a LOT 
> more parts, so you have to build in redundancies that aren't necessary in 
> non-clusters.
> 
> Also worth pointing out that peak transaction volumes are typically 20-50x 
> the size of typical transaction volumes. So your cluster isn't going to need 
> to plan to handle 15k transactions per second, you're really looking at more 
> like 200k or even 500k transactions per second to handle peak-volumes. And if 
> it can't, you're still going to see full blocks.
> 
> You'd need a handful of experts just to maintain such a thing. Disks are 
> going to be failing every day when you are storing multiple PB, so you can't 
> just count a flat cost of $20/TB and expect that to work. You're going to 
> need redundancy and tolerance so that you don't lose the system when a few of 
> your hard drives all fail within minutes of each other. And you need a way to 
> rebuild everything without taking the system offline.
> 
> This isn't even my area of expertise. I'm sure there are a dozen other 
> significant issues that one of the Visa architects could tell you about when 
> dealing with mission-critical data at this scale.
> 
> 
> 
> Massive systems operate very differently and are much more costly per-unit 
> than tiny systems. Once we grow the blocksize large enough that a single 
> computer can't do all the processing all by itself we get into a world of 
> much harder, much more expensive scaling problems. Especially because we're 
> talking about a distributed system where the nodes don't even trust each 
> other. And transaction processing is largely non-parallel. You have to check 
> each transaction against each other transaction to make sure that they aren't 
> double spending each other. This takes synchronization and prevents 500 CPUs 
> from all crunching the data concurrently. You have to be a lot more clever 
> than that to get things working and consistent.
> 
> When talking about scalability problems, you should ask yourself what other 
> systems in the world operate at the scales you are talking about. None of 
> them have cost structures in the 6 digit range, and I'd bet (without actually 
> knowing) that none of them have cost structures in the 7 digit range either. 
> In fact I know from working in a related industry that the cost structures 
> for the datacenters (plus the support engineers, plus the software 
> management, etc.) that do airline ticket processing are above $5 million per 
> year for the larger airlines. Visa is probably even more expensive than 

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread David Vorick via bitcoin-dev
Sure, your math is pretty much entirely irrelevant because scaling systems
to massive sizes doesn't work that way.

At 400B transactions per year we're looking at block sizes of 4.5 GB, and a
database size of petabytes. How much RAM do you need to process blocks like
that? Can you fit that much RAM into a single machine? Okay, you can't fit
that much RAM into a single machine. So you have to rework the code to
operate on a computer cluster.

Already we've hit a significant problem. You aren't going to rewrite
Bitcoin to do block validation on a computer cluster overnight. Further,
are storage costs consistent when we're talking about setting up clusters?
Are bandwidth costs consistent when we're talking about setting up
clusters? Are RAM and CPU costs consistent when we're talking about setting
up clusters? No, they aren't. Clusters are a lot more expensive to set up
per-resource because they need to talk to each other and synchronize with
each other and you have a LOT more parts, so you have to build in
redundancies that aren't necessary in non-clusters.

Also worth pointing out that peak transaction volumes are typically 20-50x
the size of typical transaction volumes. So your cluster isn't going to
need to plan to handle 15k transactions per second, you're really looking
at more like 200k or even 500k transactions per second to handle
peak-volumes. And if it can't, you're still going to see full blocks.

You'd need a handful of experts just to maintain such a thing. Disks are
going to be failing every day when you are storing multiple PB, so you
can't just count a flat cost of $20/TB and expect that to work. You're
going to need redundancy and tolerance so that you don't lose the system
when a few of your hard drives all fail within minutes of each other. And
you need a way to rebuild everything without taking the system offline.

This isn't even my area of expertise. I'm sure there are a dozen other
significant issues that one of the Visa architects could tell you about
when dealing with mission-critical data at this scale.



Massive systems operate very differently and are much more costly per-unit
than tiny systems. Once we grow the blocksize large enough that a single
computer can't do all the processing all by itself we get into a world of
much harder, much more expensive scaling problems. Especially because we're
talking about a distributed system where the nodes don't even trust each
other. And transaction processing is largely non-parallel. You have to
check each transaction against each other transaction to make sure that
they aren't double spending each other. This takes synchronization and
prevents 500 CPUs from all crunching the data concurrently. You have to be
a lot more clever than that to get things working and consistent.

When talking about scalability problems, you should ask yourself what other
systems in the world operate at the scales you are talking about. None of
them have cost structures in the 6 digit range, and I'd bet (without
actually knowing) that none of them have cost structures in the 7 digit
range either. In fact I know from working in a related industry that the
cost structures for the datacenters (plus the support engineers, plus the
software management, etc.) that do airline ticket processing are above $5
million per year for the larger airlines. Visa is probably even more
expensive than that (though I can only speculate).


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Jared Lee Richardson via bitcoin-dev
I should caveat that: "a rounding error" is a bit of an exaggeration,
mostly because I previously assumed that it would take 14 years for
the network to reach such a level, something I didn't say and that you
might not grant me.

I don't know why PayPal has multiple datacenters, but I'm guessing it
probably has a lot more to do with everything else they do -
interface, support, tax compliance, replication, redundancy - than it
does with the raw numbers of transaction volumes.

What I do know is the math, though. Worldwide tx volume was
426,000,000,000 in 2015. Assuming a tx size of ~500 bytes, that's 669
terabytes of data per year. At a hard drive cost of $0.021 per GB, that's
$36k a year or so, and it declines ~14% a year.

The bandwidth is the really big cost.  You are right that if this
hypothetical node also had to support historical syncing, the numbers
would probably be unmanageable. But that can be solved with a simple
checkpointing system for the vast majority of users, and nodes could
solve it by not supporting syncing / reducing peer count.  With a peer
count of 25 I measured ~75 Gb/month with today's blocksize cap.  That
works out to roughly 10 relays(sends+receives) per transaction
assuming all blocks were full, which was a pretty close approximation.
The bandwidth data of our 426 billion transactions per year works out
to 942 mbit/s.  That's 310 Terabytes per month of bandwidth - At
today's high-volume price of 0.05 per GB, that's $18,500 a month or
$222,000 a year.  Plus the $36k for storage per year brings it to
~$250k per year.  Not a rounding error, but within the rough costs of
running an exchange - a team of 5 developers works out to ~$400-600k a
year, and the cost of compliance with EU and U.S. entities (including
lawyers) runs upwards of a million dollars a year.  Then there's the
support department, probably ~$100-200k a year.

The reason I said a rounding error was that I assumed that it would
take until 2032 to reach that volume of transactions (assuming
+80%/year of growth, which is our 4-year and 2-year historical average
tx/s growth).  If hard drive prices decline by 14% per year, the
storage cost becomes about $470 a year, and if bandwidth prices
decline by 14% a year that cost becomes roughly $925 a month (~$11,000
a year).  Against a multi-million dollar budget, even 3x that isn't a
large concern, though not, as I stated, a rounding error.  My bad.
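
For anyone who wants to check the arithmetic, here it is as a small
Python sketch (the inputs are the assumptions stated above; the relay
factor and the 14%/year price decline are estimates from this thread,
not measured facts):

    TX_PER_YEAR   = 426e9    # 2015 worldwide non-bitcoin tx volume
    TX_SIZE       = 500      # bytes per transaction (assumed)
    RELAYS_PER_TX = 10       # sends + receives, from the ~75 GB/month figure
    DISK_USD_GB   = 0.021    # today's hard drive cost
    BW_USD_GB     = 0.05     # today's high-volume bandwidth price
    DECLINE       = 0.14     # assumed yearly price decline
    YEARS         = 15       # 2017 -> 2032

    raw_gb         = TX_PER_YEAR * TX_SIZE / 1e9       # ~213,000 GB/year
    storage_usd_yr = raw_gb * DISK_USD_GB              # ~$4,500/year
    bw_gb_month    = raw_gb * RELAYS_PER_TX / 12       # ~178,000 GB/month
    bw_usd_month   = bw_gb_month * BW_USD_GB           # ~$8,900/month
    mbit_s = (TX_PER_YEAR * TX_SIZE * RELAYS_PER_TX * 8
              / (365 * 86400) / 1e6)                   # ~540 Mbit/s sustained

    deflator = (1 - DECLINE) ** YEARS                  # ~0.10 by 2032
    print(storage_usd_yr, bw_usd_month, mbit_s)
    print(storage_usd_yr * deflator, bw_usd_month * deflator)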

I didn't approximate for CPU usage, as I don't have any good estimates
for it, but I have no significant reason to believe that it would be a
higher cost than bandwidth, which seems to be the controlling cost.

> I'm not going to take the time to refute everything you've been saying

Care to respond to the math?

> This whole thread has been absurdly low quality.

Well, we agree on something at least.

On Fri, Mar 31, 2017 at 9:14 AM, David Vorick  wrote:
>> No one is suggesting anything like this.  The cost of running a node
>> that could handle 300% of the 2015 worldwide nonbitcoin transaction
>> volume today would be a rounding error for most exchanges even if
>> prices didn't rise.
>
>
> Then explain why PayPal has multiple datacenters. And why Visa has multiple
> datacenters. And why the banking systems have multiple datacenters each.
>
> I'm guessing it's because you need that much juice to run a global payment
> system at the transaction volumes that they run at.
>
> Unless you have professional experience working directly with transaction
> processors handling tens of millions of financial transactions per day, I
> think we can fully discount your assessment that it would be a rounding
> error in the budget of a major exchange or Bitcoin processor to handle that
> much load. And even if it was, it wouldn't matter because it's extremely
> important to Bitcoin's security that its everyday users are able to and are
> actively running full nodes.
>
> I'm not going to take the time to refute everything you've been saying but I
> will say that most of your comments have demonstrated a similar level of
> ignorance as the one above.
>
> This whole thread has been absurdly low quality.


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Jared Lee Richardson via bitcoin-dev
> Peter Todd has demonstrated this on mainstream SPV wallets,
> https://www.linkedin.com/pulse/peter-todds-fraud-proofs-talk-mit-bitcoin-expo-2016-mark-morris

Correct me if I'm wrong, but none of that would be possible if the
client software was electrum-like and used two independent sources for
verification.  No?
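
For what it's worth, here is a minimal sketch of that idea in Python
(the server names are placeholders, and the exact response format varies
by Electrum protocol version, so treat this as the shape of the check
rather than working wallet code): ask two independently operated servers
for their chain tip, and refuse to proceed if they disagree.

    import json, socket, ssl

    def chain_tip(host, port=50002):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE    # many public servers self-sign
        sock = ctx.wrap_socket(socket.create_connection((host, port),
                                                        timeout=10),
                               server_hostname=host)
        req = {"id": 0, "method": "blockchain.headers.subscribe",
               "params": []}
        sock.sendall((json.dumps(req) + "\n").encode())
        reply = b""
        while not reply.endswith(b"\n"):
            reply += sock.recv(4096)
        sock.close()
        return json.loads(reply.decode())["result"]

    a = chain_tip("electrum-a.example")
    b = chain_tip("electrum-b.example")
    if a != b:
        raise SystemExit("sources disagree on chain tip; trust neither")

A real client would compare at a common height and check merkle proofs
for its own transactions against both tips, but the principle stands:
lying to such a client requires collusion of every source it queries.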

> Do thought experiments and take it to the extremes where nobody runs a node, 
> what can miners do now which they could not do before?

This and the next point are just reductio ad absurdum, since no one is
suggesting anything of the sort. Even in that situation, I can't think
of anything miners could do if clients used more than one independent
source for verification, as with the electrum question above.

> Why don't exchanges run SPV nodes?

No one is suggesting anything like this.  The cost of running a node
that could handle 300% of the 2015 worldwide nonbitcoin transaction
volume today would be a rounding error for most exchanges even if
prices didn't rise.



On Fri, Mar 31, 2017 at 1:19 AM, Luv Khemani  wrote:
>> Err, no, that's what happens when you double-click the Ethereum icon
>> instead of the Bitcoin icon.  Just because you run "Bitcoin SPV"
>> instead of "Bitcoin Verify Everyone Else's Crap" doesn't mean you're
>> somehow going to get Ethereum payments.  Your verification is just
>> different and the risks that come along with that are different.  It's
>> only confusing if you make it confusing.
>
> This is false. You could get coins which don't even exist, as long as a
> miner mined the invalid transaction.
> Peter Todd has demonstrated this on mainstream SPV wallets:
> https://www.linkedin.com/pulse/peter-todds-fraud-proofs-talk-mit-bitcoin-expo-2016-mark-morris
>
> The only reason SPV wallets do not accept ethereum payments is
> transaction/block format differences.
> SPV wallets have no clue what a valid bitcoin is; they trust miners fully.
>
> In the event of a hardfork, SPV wallets will blindly follow the longest
> chain.
>
>> If every block that is mined for them is deliberately empty because of
>> an attacker, that's nonfunctional.  You can use whatever semantics you
>> want to describe that situation, but that's clearly what I meant.
>
> Not sure why you are bringing this up; this is not the case today, nor does
> it have anything to do with blocksize.
>
>> As above, if someone operates Bitcoin in SPV mode they are not
>> magically at risk of getting Dashcoins.  They send and receive
>> Bitcoins just like everyone else running Bitcoin software.  There's no
>> confusion about it and it doesn't have anything to do with hashrates
>> of anyone.
>
> As mentioned earlier, you are at risk of receiving made-up money.
> SPV has everything to do with hashrate; it trusts hashrate fully.
> Crafting a bitcoin transaction paying you money that I do not have is not
> difficult; as long as a miner mines a block with it, your SPV wallet will
> accept it.
>
>> The debate is a choice between nodes paying more to allow greater growth
>> and adoption, or nodes constraining adoption in favor of debatable
>> security concerns.
>
> Onchain transactions are not the only way to use Bitcoin the currency.
> Trades you do on an exchange are not onchain, yet they are still transacted
> in Bitcoin.
>
>> And even if there were, the software would choose it for you?
>
> People choose the software, not the other way round.
>
>> Yes you do, if the segment options are known (and if they aren't,
>> running a node likely won't help you choose either, it will choose by
>> accident and you'll have no idea).  You would get to choose whose
>> verifications to request/check from, and thus choose which segment to
>> follow, if any.
>
> SPV clients do not decide; they follow the longest chain.
> Centralised/server-based wallets follow the server they are connecting to.
> Full nodes do not depend on a third party to decide whether the money
> received is valid.
>
>> Are you really this dense?  If the cost of on-chain transactions
>> rises, numerous use cases get killed off.  At $0.10 per tx you
>> probably won't buy in-game digital microtransactions with it, but you
>> might buy coffee with it.  At $1 per tx, you probably won't buy coffee
>> with it but you might pay your ISP bill with it.  At $20 per tx, you
>> probably won't pay your ISP bill with it, but you might pay your rent.
>> At $300 per tx you probably won't use it for anything, but a company
>> purchasing goods from China might.  At $4000 per tx that company
>> probably won't use it, but international funds settlement for
>> million-dollar transactions might use it.
>> At each fee step along the way you kill off hundreds or thousands of
>> possible uses of Bitcoin.  Killing those off means fewer people will
>> use it, so they will use something else instead.
>
> No need to get personal.
> As mentioned earlier, all these low-value transactions can happen offchain.
> None of the use cases will be killed off. We have sub-dollar trades
> happening on exchanges offchain.
>
> The average person doesn't need that level of security.

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread David Vorick via bitcoin-dev
> No one is suggesting anything like this.  The cost of running a node
> that could handle 300% of the 2015 worldwide nonbitcoin transaction
> volume today would be a rounding error for most exchanges even if
> prices didn't rise.


Then explain why PayPal has multiple datacenters. And why Visa has multiple
datacenters. And why the banking systems have multiple datacenters each.

I'm guessing it's because you need that much juice to run a global payment
system at the transaction volumes that they run at.

Unless you have professional experience working directly with transaction
processors handling tens of millions of financial transactions per day, I
think we can fully discount your assessment that it would be a rounding
error in the budget of a major exchange or Bitcoin processor to handle that
much load. And even if it was, it wouldn't matter because it's extremely
important to Bitcoin's security that its everyday users are able to and
are actively running full nodes.

I'm not going to take the time to refute everything you've been saying but
I will say that most of your comments have demonstrated a similar level of
ignorance as the one above.

This whole thread has been absurdly low quality.


[bitcoin-dev] Proposing the SiTAH-fork, a mechanism for compromise.

2017-03-31 Thread SHAroshima Nakamati via bitcoin-dev
Soft-fork - A soft-fork is a change to the bitcoin protocol wherein only 
previously valid blocks/transactions are made invalid. Since old nodes will 
recognise the new blocks as valid, a soft-fork is backward-compatible.

Hard-fork - A hard-fork is a change to the bitcoin protocol that makes 
previously invalid blocks/transactions valid, and therefore requires all users 
to upgrade.

The difficulty in pulling off a hard-fork is that for it to happen safely 
and without massive disruption, everyone needs to be ready for it before it 
occurs. Coordinating this is a daunting task; however, safe hard-forks *will* 
be necessary in Bitcoin's future. Furthermore, the majority of stakeholders - 
devs, miners and users alike - want both soft-fork SegWit and a hard-fork TX 
block size increase, but favor one over the other and want to perform their 
favorite fork first, out of fear that activating the other will prevent their 
own choice from being activated at a later point. This is a proposal to solve 
this stalemate by providing a path to activating both.

I propose the Soft-into-Time-Activated-Hard fork, or SiTAH-fork for short. The 
core concept is that it is a soft-fork that comes with a lock-in to 
transitioning into a hard-fork at a predetermined later point in time, drawing 
elements from Spoonnet. This means that, after the soft-fork takes place, there 
is a transition period during which everyone has the opportunity and incentive 
to upgrade their software, before the hard-fork inevitably happens. As a 
consequence of this, those who want the hard-fork have an incentive to support 
activating the soft-fork, and those who want the soft-fork have an 
incentive to pledge to activate the hard-fork, rather than opposing their 
secondary choice out of fear of their primary choice being indefinitely 
postponed.

As for how this can be implemented for SegWit: SegWit is made soft-fork 
compatible by virtue of the block weight function. From BIP-0141:

> Block weight is defined as Base size * 3 + Total size.
>
> Base size is the block size in bytes with the original transaction 
> serialization without any witness-related data, as seen by a non-upgraded 
> node.
>
> Total size is the block size in bytes with transactions serialized as 
> described in BIP144, including base data and witness data.
>
> The new rule is block weight ≤ 4,000,000.

This block weight function can be replaced by a version that allows for >1MB of 
pure TX data when the median time-past of the last 11 blocks is greater than 
the HardForkTime. In other words:

    BlockWeight = MedianTimePast < HardForkTime ? BaseSize * 3 + TotalSize
                                                : TotalSize;

The post-hard-fork part of this formula can of course be tweaked.
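
As a sketch in Python (the names are mine; the 4,000,000 limit is
BIP-0141's, and the activation timestamp is a placeholder to be agreed
upon), the rule would look something like:

    MAX_BLOCK_WEIGHT = 4000000
    HARD_FORK_TIME   = 1514764800   # placeholder activation timestamp

    def block_weight(base_size, total_size, median_time_past):
        if median_time_past < HARD_FORK_TIME:
            return base_size * 3 + total_size   # BIP-0141 soft-fork rule
        return total_size                       # post-hard-fork rule

    def weight_ok(base_size, total_size, median_time_past):
        return block_weight(base_size, total_size,
                            median_time_past) <= MAX_BLOCK_WEIGHT

Before HardForkTime this is exactly the BIP-0141 rule, so it deploys as
a soft-fork; afterwards, blocks with more than 1MB of base transaction
data become valid, which is the hard-fork step.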

I posit that implementing this would greatly improve support for the 
introduction of SegWit as it provides a pledge to increase space for pure TX 
data, which is something many are asking for. LN is a great long-term scaling 
solution but is not ready yet, and this measure provides a way to enable SegWit 
so that LN can later be built on top, while at the same time providing a 
promise of TX-space relief to those skeptical or unconvinced of SW/LN providing 
this relief in the short to medium term.

- SHAroshima


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Luv Khemani via bitcoin-dev
> Err, no, that's what happens when you double-click the Ethereum icon
> instead of the Bitcoin icon.  Just because you run "Bitcoin SPV"
> instead of "Bitcoin Verify Everyone Else's Crap" doesn't mean you're
> somehow going to get Ethereum payments.  Your verification is just
> different and the risks that come along with that are different.  It's
> only confusing if you make it confusing.

This is false. You could get coins which don't even exist, as long as a miner 
mined the invalid transaction.
Peter Todd has demonstrated this on mainstream SPV wallets:
https://www.linkedin.com/pulse/peter-todds-fraud-proofs-talk-mit-bitcoin-expo-2016-mark-morris

The only reason SPV wallets do not accept ethereum payments is 
transaction/block format differences.
SPV wallets have no clue what a valid bitcoin is; they trust miners fully.

In the event of a hardfork, SPV wallets will blindly follow the longest chain.
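
To illustrate the point, here is roughly all the "validation" an SPV
client performs before accepting a payment - a Python sketch of
merkle-branch checking, not any wallet's actual code. It proves the
transaction is in a block that miners built, and nothing about whether
the transaction itself is valid.

    from hashlib import sha256

    def dhash(b):
        return sha256(sha256(b).digest()).digest()

    def merkle_root_from_branch(txid, branch, index):
        # txid: 32-byte tx hash; branch: sibling hashes, bottom-up;
        # index: position of the tx within the block
        h = txid
        for sibling in branch:
            h = dhash(sibling + h) if index & 1 else dhash(h + sibling)
            index >>= 1
        return h

    def spv_accepts(txid, branch, index, header_merkle_root):
        return merkle_root_from_branch(txid, branch,
                                       index) == header_merkle_root

If a miner mines a block containing an invalid transaction, this check
still passes; only a full node, replaying the transaction against its
own UTXO set, would reject it.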

> If every block that is mined for them is deliberately empty because of
> an attacker, that's nonfunctional.  You can use whatever semantics you
> want to describe that situation, but that's clearly what I meant.

Not sure why you are bringing this up; this is not the case today, nor does it 
have anything to do with blocksize.

> As above, if someone operates Bitcoin in SPV mode they are not
> magically at risk of getting Dashcoins.  They send and receive
> Bitcoins just like everyone else running Bitcoin software.  There's no
> confusion about it and it doesn't have anything to do with hashrates
> of anyone.

As mentioned earlier, you are at risk of receiving made-up money.
SPV has everything to do with hashrate; it trusts hashrate fully.
Crafting a bitcoin transaction paying you money that I do not have is not 
difficult; as long as a miner mines a block with it, your SPV wallet will 
accept it.

> The debate is a choice between nodes paying more to allow greater growth
> and adoption, or nodes constraining adoption in favor of debatable
> security concerns.

Onchain transactions are not the only way to use Bitcoin the currency.
Trades you do on an exchange are not onchain, yet they are still transacted in 
Bitcoin.

> And even if there were, the software would choose it for you?

People choose the software, not the other way round.

> Yes you do, if the segment options are known (and if they aren't,
> running a node likely won't help you choose either, it will choose by
> accident and you'll have no idea).  You would get to choose whose
> verifications to request/check from, and thus choose which segment to
> follow, if any.

SPV clients do not decide; they follow the longest chain.
Centralised/server-based wallets follow the server they are connecting to.
Full nodes do not depend on a third party to decide whether the money 
received is valid.

> Are you really this dense?  If the cost of on-chain transactions
> rises, numerous use cases get killed off.  At $0.10 per tx you
> probably won't buy in-game digital microtransactions with it, but you
> might buy coffee with it.  At $1 per tx, you probably won't buy coffee
> with it but you might pay your ISP bill with it.  At $20 per tx, you
> probably won't pay your ISP bill with it, but you might pay your rent.
> At $300 per tx you probably won't use it for anything, but a company
> purchasing goods from China might.  At $4000 per tx that company
> probably won't use it, but international funds settlement for
> million-dollar transactions might use it.
> At each fee step along the way you kill off hundreds or thousands of
> possible uses of Bitcoin.  Killing those off means fewer people will
> use it, so they will use something else instead.

No need to get personal.
As mentioned earlier, all these low-value transactions can happen offchain.
None of the use cases will be killed off. We have sub-dollar trades happening 
on exchanges offchain.

> The average person doesn't need that level of security.

Precisely why they do not need to be on-chain.

It is clear to me that you have not yet grasped Bitcoin's security model, 
especially the role full nodes play in it.
I'd suggest you do some more reading up and thinking about it.
Do thought experiments and take it to the extremes where nobody runs a node: 
what can miners do now which they could not do before?
Why don't exchanges run SPV nodes?

Further correspondence will not be fruitful until you grasp this.



On Thu, Mar 30, 2017 at 9:21 PM, Luv Khemani  wrote:
>
> > Nodes don't do politics.  People do, and politics is a lot larger with a 
> > lot more moving parts than just node operation.
>
>
> Node operation is making a stand on what money you will accept.
>
> I.e. your local store will only accept US Dollars and not Japanese Yen. Without 
> being able to run a node, you have no way to independently determine what you 
> are receiving; you could be paid Zimbabwe Dollars and wouldn't know any better.
>
>
> > Full nodes protect from nothing if the chain they attempt to use is 
> > nonfunctional.
>
> This is highly subjective.
> Just because it is nonfunctional to you, does not mean it is nonfunctional 
> to existing users.

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Jared Lee Richardson via bitcoin-dev
> Node operation is making a stand on what money you will accept.

> I.e. your local store will only accept US Dollars and not Japanese Yen. Without 
> being able to run a node, you have no way to independently determine what you 
> are receiving; you could be paid Zimbabwe Dollars and wouldn't know any better.

Err, no, that's what happens when you double-click the Ethereum icon
instead of the Bitcoin icon.  Just because you run "Bitcoin SPV"
instead of "Bitcoin Verify Everyone Else's Crap" doesn't mean you're
somehow going to get Ethereum payments.  Your verification is just
different and the risks that come along with that are different.  It's
only confusing if you make it confusing.

> This is highly subjective.
> Just because it is nonfunctional to you, does not mean it is nonfunctional to 
> existing users.

If every block that is mined for them is deliberately empty because of
an attacker, that's nonfunctional.  You can use whatever semantics you
want to describe that situation, but that's clearly what I meant.

> Of course it is. Try paying for my goods using BU/Ethereum/Dash/etc., or a 
> Bitcoin forked with inflation; you will not get any goods regardless of how 
> much hashrate those coins have.

As above, if someone operates Bitcoin in SPV mode they are not
magically at risk of getting Dashcoins.  They send and receive
Bitcoins just like everyone else running Bitcoin software.  There's no
confusion about it and it doesn't have anything to do with hashrates
of anyone.  It is just a different method of verification with
corresponding different costs of use and different security
guarantees.

> You should also take into consideration the number of independent mining 
> entities it takes to achieve 51% hashrate. It will be of little use to have 
> thousands of independent miners/pools if 3 large pools make up 51% of hash 
> rate and collude to attack the network.

We're already fucked, China has 61% of the hashrate and the only thing
we can do about it is to wait for the Chinese electrical
supply/demand/transmission system to rebalance itself.  Aside from
that little problem, mining distributions and pool distributions don't
significantly factor into the blocksize debate.  The debate is a
choice between nodes paying more to allow greater growth and adoption,
or nodes constraining adoption in favor of debatable security
concerns.

> Nodes define which network they want to follow.

Do you really consider it choosing when there is only a single option?
And even if there were, the software would choose it for you?  If it
is a Bitcoin client, it follows the Bitcoin blockchain.  There is no
BU blockchain at the moment, and Bitcoin software can't possibly start
following Ethereum blockchains.

> Without a Node, you don't even get to decide which segement you are on.

Yes you do, if the segment options are known (and if they aren't,
running a node likely won't help you choose either, it will choose by
accident and you'll have no idea).  You would get to choose whose
verifications to request/check from, and thus choose which segment to
follow, if any.

> Ability to run a node and validate rules => Confidence in currency

This is only true for the small minority that actually need that added
level of security & confidence, and the paranoid people who believe
they need it when they really, really don't.  Some guy on reddit
spouted off the same garbage logic, but was much quieter when I got
him to admit that he didn't actually read the code of Bitcoin that he
downloaded and ran, nor any of the code of the updates.  He trusted.
*gasp*

The average person doesn't need that level of security.  They do
however need to be able to use it, which they cannot right now if you
consider "average" to be at least 50% of the population.

> Higher demand => Higher exchange rate

Demand comes from usage and adoption.  Neither can happen without us
being willing to give other people the option to trade security
features for lower costs.

> I would not be holding any Bitcoins if it was unfeasible for me to run a Node 
> and instead had to trust some 3rd party that the currency was not being 
> inflated/censored.

Great.  Somehow I think Bitcoin's future involves very few more people
like you, and very many people who aren't paranoid and just want to be
able to send and receive Bitcoins.

> Bitcoin has value because of it's trustless properties. Otherwise, there is 
> no difference between cryptocurrencies and fiat.

No, it has its value for many, many reasons; trustlessness is
only one of them.  What I'm suggesting doesn't involve giving up
trustless properties except in your head (And not even then, since you
would almost certainly be able to afford to run a node for the rest of
your life if Bitcoin's value continues to rise as it has in the past).
And even if it did, there's a lot more reasons that a lot more people
than you would use it.

> Blocksize has nothing to do with utility, only cost of on-chain transactions.

Are you really this dense?  If the cost of on-chain transactions
rises, numerous use cases get killed off.  At $0.10 per tx you
probably won't buy in-game digital microtransactions with it, but you
might buy coffee with it.  At $1 per tx, you probably won't buy coffee
with it but you might pay your ISP bill with it.  At $20 per tx, you
probably won't pay your ISP bill with it, but you might pay your rent.
At $300 per tx you probably won't use it for anything, but a company
purchasing goods from China might.  At $4000 per tx that company
probably won't use it, but international funds settlement for
million-dollar transactions might use it.

At each fee step along the way you kill off hundreds or thousands of
possible uses of Bitcoin.  Killing those off means fewer people will
use it, so they will use something else instead.