Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread Eric Voskuil via bitcoin-dev
The presumption of the mining aspect of the Bitcoin security model is that the 
mining majority is a broadly distributed set of independent people, not one 
person who controls a majority of the hash power. 

You seem to have overlooked a qualifier in your Satoshi quote: "...by nodes 
that are not cooperating to attack the network". A single miner with majority 
hash power is of course cooperating with himself. At that point the question of 
whether he is attacking the network is moot, it's his network.

I believe that Pieter's point is that a system optimized for orphan rate may in 
effect be optimized for a single entity providing all double spend protection. 
That works directly against the central principle of Bitcoin security. The 
security of the money is a function of the number of independent miners and 
sellers.

e

> On Dec 10, 2016, at 7:17 PM, Daniele Pinna via bitcoin-dev 
>  wrote:
> 
> How is the adverse scenario you describe different from a plain old 51% 
> attack? Each proposed protocol change where 51% or more of the network can 
> potentially game the rules and break the system should be considered just as 
> acceptable/unacceptable as any other. 
> 
> There comes a point where some form of basic honesty must be assumed on 
> the part of participants benefiting from the system working properly and 
> reliably. 
> 
> After all, what magic line of code prohibits all miners from simultaneously 
> turning all their equipment off... just because? 
> 
> Maybe this 'one':
> 
> "As long as a majority of CPU power is controlled by nodes that are not 
> cooperating to attack the network, they'll generate the longest chain and 
> outpace attackers. The network itself requires minimal structure."
> 
> Is there such a thing as an unrecognizable 51% attack?  One where the 
> remaining 49% get dragged in against their will? 
> 
> Daniele 
> 
>> On Dec 10, 2016 6:39 PM, "Pieter Wuille"  wrote:
>>> On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev 
>>>  wrote:
>>> We have models for estimating the probability that a block is orphaned 
>>> given average network bandwidth and block size. 
>>> 
>>> The question is, do we have objective measures of these two quantities? 
>>> Couldn't we target an orphan_rate < max_rate? 
>> 
>> Models can predict orphan rate given block size and network/hashrate 
>> topology, but you can't control the topology (and things like FIBRE hide the 
>> effect of block size on this as well). The result is that if you're purely 
>> optimizing for minimal orphan rate, you can end up with a single 
>> (conglomerate of) pools producing all the blocks. Such a setup has no 
>> propagation delay at all, and as a result can always achieve 0 orphans.
>> 
>> Cheers,
>> 
>> -- 
>> Pieter
>> 


Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread Daniele Pinna via bitcoin-dev
How is the adverse scenario you describe different from a plain old 51%
attack? Each proposed protocol change where 51% or more of the network
can potentially game the rules and break the system should be considered
just as acceptable/unacceptable as any other.

There comes a point where some form of basic honesty must be assumed on
the part of participants benefiting from the system working properly and
reliably.

After all, what magic line of code prohibits all miners from simultaneously
turning all their equipment off... just because?

Maybe this 'one':

"As long as a majority of CPU power is controlled by nodes that are not
cooperating to attack the network, they'll generate the longest chain and
outpace attackers. The network itself requires minimal structure."

Is there such a thing as an unrecognizable 51% attack?  One where the
remaining 49% get dragged in against their will?

Daniele

On Dec 10, 2016 6:39 PM, "Pieter Wuille"  wrote:

> On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> We have models for estimating the probability that a block is orphaned
>> given average network bandwidth and block size.
>>
>> The question is, do we have objective measures of these two quantities?
>> Couldn't we target an orphan_rate < max_rate?
>>
>
> Models can predict orphan rate given block size and network/hashrate
> topology, but you can't control the topology (and things like FIBRE hide
> the effect of block size on this as well). The result is that if you're
> purely optimizing for minimal orphan rate, you can end up with a single
> (conglomerate of) pools producing all the blocks. Such a setup has no
> propagation delay at all, and as a result can always achieve 0 orphans.
>
> Cheers,
>
> --
> Pieter
>
>


Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread Bram Cohen via bitcoin-dev
Miners individually have an incentive to include every transaction they can
when they mine a block, but they also sometimes have an incentive to
collectively cooperate to reduce throughput to make more money as a group.
Under schemes where limits can be adjusted, both possibilities must be taken
into account.

On Sat, Dec 10, 2016 at 4:40 PM, James Hilliard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Miners in general are naturally incentivized to always mine max size
> blocks to maximize transaction fees simply because there is very
> little marginal cost to including extra transactions (there will always
> be a transaction backlog of some sort available to mine since demand
> for block space is effectively unbounded as fees approach 0 and they
> can even mine their own transactions without any fees). This proposal
> would almost certainly cause runaway block size growth and encourage
> much more miner centralization.
>
> On Sat, Dec 10, 2016 at 6:26 PM, t. khan via bitcoin-dev
>  wrote:
> > Miners 'gaming' the Block75 system -
> > There is no financial incentive for miners to attempt to game the
> > Block75 system. Even if it were attempted and assuming the goal was to
> > create bigger blocks, the maximum possible increase would be 25% over
> > the previous block size. And, that size would only last for two weeks
> > before readjusting down. It would cost them more in transaction fees
> > to stuff the network than they could ever make up. To game the system,
> > they'd have to game it forever with no possibility of profit.
> >
> > Blocks would get too big -
> > Eventually, blocks would get too big, but only if bandwidth stopped
> > increasing and the cost of disk space stopped decreasing. Otherwise,
> > the incremental adjustments made by Block75 (especially in combination
> > with SegWit) wouldn't break anyone's connection or result in
> > significantly more orphaned blocks.
> >
> > The frequent and small adjustments made by Block75 have the added
> > benefit of being more easily adapted to, both psychologically and
> > technologically, with regards to miners/node operators.
> >
> > -t.k
> >
> > On Sat, Dec 10, 2016 at 5:44 AM, s7r via bitcoin-dev
> >  wrote:
> >>
> >> t. khan via bitcoin-dev wrote:
> >> > BIP Proposal - Managing Bitcoin’s block size the same way we do
> >> > difficulty (aka Block75)
> >> >
> >> > The every two-week adjustment of difficulty has proven to be a
> >> > reasonably effective and predictable way of managing how quickly
> >> > blocks are mined. Bitcoin needs a reasonably effective and
> >> > predictable way of managing the maximum block size.
> >> >
> >> > It’s clear at this point that human beings should not be involved
> >> > in the determination of max block size, just as they’re not
> >> > involved in deciding the difficulty.
> >> >
> >> > Instead of setting an arbitrary max block size (1MB, 2MB, 8MB,
> >> > etc.) or passing the decision to miners/pool operators, the max
> >> > block size should be adjusted every two weeks (2016 blocks) using
> >> > a system similar to how difficulty is calculated.
> >> >
> >> > Put another way: let’s stop thinking about what the max block size
> >> > should be and start thinking about how full we want the average
> >> > block to be regardless of size. Over the last year, we’ve had
> >> > averages of 75% or higher, so aiming for 75% full seems
> >> > reasonable, hence naming this concept ‘Block75’.
> >> >
> >> > The target capacity over 2016 blocks would be 75%. If the last
> >> > 2016 blocks are more than 75% full, add the difference to the max
> >> > block size. Like this:
> >> >
> >> > MAX_BLOCK_BASE_SIZE = 100
> >> > TARGET_CAPACITY = 75
> >> > AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> >> > TARGET_CAPACITY
> >> >
> >> > To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE +
> >> > AVERAGE_OVER_CAP)
> >> >
> >> > For example, if the last 2016 blocks are 85% full (average block
> >> > is 850 KB), add 10% to the max block size. The new max block size
> >> > would be 1,100 KB until the next 2016 blocks are mined, then reset
> >> > and recalculate. The 1,000,000 byte limit that exists currently
> >> > would remain, but would effectively be the minimum max block size.
> >> >
> >> > Another two weeks goes by, the last 2016 blocks are again 85%
> >> > full, but now that means they average 935 KB out of the 1,100 KB
> >> > max block size. This is 93.5% of the 1,000,000 byte limit, so
> >> > 18.5% would be added to that to make the new max block size of
> >> > 1,185 KB.
> >> >
> >> > Another two weeks passes. This time, the average block is
> >> > 1,050 KB. The new max block size is calculated to 1,300 KB (as
> >> > blocks were 105% full, minus the 75% capacity target, so 30% added
> >> > to max block size).
> >> >
> >> > Repeat every 2016 blocks, forever.
> >> >
> >> > If Block75 had been applied at the difficulty adjustment on
> >> > November 18th, the max block size would have been 1,080KB, as the
> >> > average block during that period was 83% full, so 8% is added to
> >> > the 1,000KB limit.

Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread t. khan via bitcoin-dev
Agreed, the clear goal of 10 minutes per block is why the difficulty
adjustment works well. Blocks averaging 75% full is the equally clear goal
of the method described here; that's the target the adjustment attempts to hit.

Under Block75, there will still be full blocks. There will still be
transaction fees and a fee market. The fees will, of course, be lower than
they are now.

Hardcoding a cap will inevitably become a roadblock (again), and we'll be
back in the same position as we are now. Permanent solutions are preferred.

On Sat, Dec 10, 2016 at 6:12 PM, Bram Cohen  wrote:

> On Mon, Dec 5, 2016 at 7:27 AM, t. khan via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>>
>> Put another way: let’s stop thinking about what the max block size should
>> be and start thinking about how full we want the average block to be
>> regardless of size. Over the last year, we’ve had averages of 75% or
>> higher, so aiming for 75% full seems reasonable, hence naming this concept
>> ‘Block75’.
>>
>
> That's effectively making the blocksize limit completely uncapped and only
> preventing spikes, and even in the case of spikes it doesn't differentiate
> between 'real' traffic and low-value spam attacks. It suffers from the same
> fundamental problems as Bitcoin Unlimited: in the end there are no
> transaction fees, and inevitably some miners will want to impose some cap
> on block size for practical purposes, resulting in a fork.
>
> Difficulty adjustment works because there's a clear goal of having a
> certain rate of making new blocks. Without a target to attempt, automatic
> adjustment makes no sense.
>


Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread James Hilliard via bitcoin-dev
Miners in general are naturally incentivized to always mine max size
blocks to maximize transaction fees simply because there is very
little marginal cost to including extra transactions (there will always
be a transaction backlog of some sort available to mine since demand
for block space is effectively unbounded as fees approach 0 and they
can even mine their own transactions without any fees). This proposal
would almost certainly cause runaway block size growth and encourage
much more miner centralization.
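
To make the "little marginal cost" point concrete, here is a toy calculation (a sketch, not from the thread; the function and numbers are illustrative only): fees attached to filler transactions are only genuinely spent when some other miner collects them, and a miner that keeps its filler private and mines it itself pays nothing at all.

    def expected_stuffing_cost(fees_btc: float, own_hashrate_share: float) -> float:
        # Fees on broadcast filler transactions are only truly lost when
        # someone else mines them; with hashrate share h, that happens
        # with probability (1 - h).
        return fees_btc * (1.0 - own_hashrate_share)

    print(expected_stuffing_cost(1.0, 0.25))  # 0.75 BTC expected cost
    print(expected_stuffing_cost(1.0, 1.0))   # 0.0 -- and filler that is
    # never broadcast, only mined by its creator, costs nothing regardless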

On Sat, Dec 10, 2016 at 6:26 PM, t. khan via bitcoin-dev
 wrote:
> Miners 'gaming' the Block75 system -
> There is no financial incentive for miners to attempt to game the Block75
> system. Even if it were attempted and assuming the goal was to create bigger
> blocks, the maximum possible increase would be 25% over the previous block
> size. And, that size would only last for two weeks before readjusting down.
> It would cost them more in transaction fees to stuff the network than they
> could ever make up. To game the system, they'd have to game it forever with
> no possibility of profit.
>
> Blocks would get too big -
> Eventually, blocks would get too big, but only if bandwidth stopped
> increasing and the cost of disk space stopped decreasing. Otherwise, the
> incremental adjustments made by Block75 (especially in combination with
> SegWit) wouldn't break anyone's connection or result in significantly more
> orphaned blocks.
>
> The frequent and small adjustments made by Block75 have the added benefit of
> being more easily adapted to, both psychologically and technologically, with
> regards to miners/node operators.
>
> -t.k
>
> On Sat, Dec 10, 2016 at 5:44 AM, s7r via bitcoin-dev
>  wrote:
>>
>> t. khan via bitcoin-dev wrote:
>> > BIP Proposal - Managing Bitcoin’s block size the same way we do
>> > difficulty (aka Block75)
>> >
>> > The every two-week adjustment of difficulty has proven to be a
>> > reasonably effective and predictable way of managing how quickly blocks
>> > are mined. Bitcoin needs a reasonably effective and predictable way of
>> > managing the maximum block size.
>> >
>> > It’s clear at this point that human beings should not be involved in the
>> > determination of max block size, just as they’re not involved in
>> > deciding the difficulty.
>> >
>> > Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
>> > passing the decision to miners/pool operators, the max block size should
>> > be adjusted every two weeks (2016 blocks) using a system similar to how
>> > difficulty is calculated.
>> >
>> > Put another way: let’s stop thinking about what the max block size
>> > should be and start thinking about how full we want the average block to
>> > be regardless of size. Over the last year, we’ve had averages of 75% or
>> > higher, so aiming for 75% full seems reasonable, hence naming this
>> > concept ‘Block75’.
>> >
>> > The target capacity over 2016 blocks would be 75%. If the last 2016
>> > blocks are more than 75% full, add the difference to the max block size.
>> > Like this:
>> >
>> > MAX_BLOCK_BASE_SIZE = 100
>> > TARGET_CAPACITY = 75
>> > AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
>> > TARGET_CAPACITY
>> >
>> > To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
>> >
>> > For example, if the last 2016 blocks are 85% full (average block is 850
>> > KB), add 10% to the max block size. The new max block size would be
>> > 1,100 KB until the next 2016 blocks are mined, then reset and
>> > recalculate. The 1,000,000 byte limit that exists currently would
>> > remain, but would effectively be the minimum max block size.
>> >
>> > Another two weeks goes by, the last 2016 blocks are again 85% full, but
>> > now that means they average 935 KB out of the 1,100 KB max block size.
>> > This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
>> > that to make the new max block size of 1,185 KB.
>> >
>> > Another two weeks passes. This time, the average block is 1,050 KB. The
>> > new max block size is calculated to 1,300 KB (as blocks were 105% full,
>> > minus the 75% capacity target, so 30% added to max block size).
>> >
>> > Repeat every 2016 blocks, forever.
>> >
>> > If Block75 had been applied at the difficulty adjustment on November
>> > 18th, the max block size would have been 1,080KB, as the average block
>> > during that period was 83% full, so 8% is added to the 1,000KB limit.
>> > The current size, after the December 2nd adjustment would be 1,150K.
>> >
>> > Block75 would allow the max block size to grow (or shrink) in response
>> > to transaction volume, and does so predictably, reasonably quickly, and
>> > in a method that prevents wild swings in block size or transaction fees.
>> > It attempts to keep blocks at 75% total capacity over each two week
>> > period, the same way difficulty tries to keep blocks mined every ten
>> > minutes. It also keeps blocks as small as possible.
>> >
> >> > Thoughts?
> >> >
> >> > -t.k.

Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread t. khan via bitcoin-dev
Miners 'gaming' the Block75 system -
There is no financial incentive for miners to attempt to game the Block75
system. Even if it were attempted and assuming the goal was to create
bigger blocks, the maximum possible increase would be 25% over the previous
block size. And, that size would only last for two weeks before readjusting
down. It would cost them more in transaction fees to stuff the network than
they could ever make up. To game the system, they'd have to game it forever
with no possibility of profit.

Blocks would get too big -
Eventually, blocks would get too big, but only if bandwidth stopped
increasing and the cost of disk space stopped decreasing. Otherwise, the
incremental adjustments made by Block75 (especially in combination with
SegWit) wouldn't break anyone's connection or result in significantly more
orphaned blocks.

The frequent and small adjustments made by Block75 have the added benefit
of being more easily adapted to, both psychologically and technologically,
with regards to miners/node operators.

-t.k

On Sat, Dec 10, 2016 at 5:44 AM, s7r via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> t. khan via bitcoin-dev wrote:
> > BIP Proposal - Managing Bitcoin’s block size the same way we do
> > difficulty (aka Block75)
> >
> > The every two-week adjustment of difficulty has proven to be a
> > reasonably effective and predictable way of managing how quickly blocks
> > are mined. Bitcoin needs a reasonably effective and predictable way of
> > managing the maximum block size.
> >
> > It’s clear at this point that human beings should not be involved in the
> > determination of max block size, just as they’re not involved in
> > deciding the difficulty.
> >
> > Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
> > passing the decision to miners/pool operators, the max block size should
> > be adjusted every two weeks (2016 blocks) using a system similar to how
> > difficulty is calculated.
> >
> > Put another way: let’s stop thinking about what the max block size
> > should be and start thinking about how full we want the average block to
> > be regardless of size. Over the last year, we’ve had averages of 75% or
> > higher, so aiming for 75% full seems reasonable, hence naming this
> > concept ‘Block75’.
> >
> > The target capacity over 2016 blocks would be 75%. If the last 2016
> > blocks are more than 75% full, add the difference to the max block size.
> > Like this:
> >
> > MAX_BLOCK_BASE_SIZE = 100
> > TARGET_CAPACITY = 75
> > AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> > TARGET_CAPACITY
> >
> > To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
> >
> > For example, if the last 2016 blocks are 85% full (average block is 850
> > KB), add 10% to the max block size. The new max block size would be
> > 1,100 KB until the next 2016 blocks are mined, then reset and
> > recalculate. The 1,000,000 byte limit that exists currently would
> > remain, but would effectively be the minimum max block size.
> >
> > Another two weeks goes by, the last 2016 blocks are again 85% full, but
> > now that means they average 935 KB out of the 1,100 KB max block size.
> > This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
> > that to make the new max block size of 1,185 KB.
> >
> > Another two weeks passes. This time, the average block is 1,050 KB. The
> > new max block size is calculated to 1,300 KB (as blocks were 105% full,
> > minus the 75% capacity target, so 30% added to max block size).
> >
> > Repeat every 2016 blocks, forever.
> >
> > If Block75 had been applied at the difficulty adjustment on November
> > 18th, the max block size would have been 1,080KB, as the average block
> > during that period was 83% full, so 8% is added to the 1,000KB limit.
> > The current size, after the December 2nd adjustment would be 1,150K.
> >
> > Block75 would allow the max block size to grow (or shrink) in response
> > to transaction volume, and does so predictably, reasonably quickly, and
> > in a method that prevents wild swings in block size or transaction fees.
> > It attempts to keep blocks at 75% total capacity over each two week
> > period, the same way difficulty tries to keep blocks mined every ten
> > minutes. It also keeps blocks as small as possible.
> >
> > Thoughts?
> >
> > -t.k.
> >
>
> I like the idea. It is good wrt growing the max. block size
> automatically without human action, but the main problem (or question)
> is not how to grow this number, it is what number can the network
> handle, considering both miners and users. While disk space requirements
> might not be a big problem, block propagation time is. The time required
> for a block to propagate in the network (or at least to all the miners)
> is directly dependent on its size. If blocks take too much time to
> propagate in the network, the orphan rate will increase in unpredictable
> ways. For example, if the internet speed in China is worse than in
> Europe, and miners in China have more than 50% of the hashing power,
> blocks mined by European miners might get orphaned.

Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread Bram Cohen via bitcoin-dev
On Mon, Dec 5, 2016 at 7:27 AM, t. khan via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> Put another way: let’s stop thinking about what the max block size should
> be and start thinking about how full we want the average block to be
> regardless of size. Over the last year, we’ve had averages of 75% or
> higher, so aiming for 75% full seems reasonable, hence naming this concept
> ‘Block75’.
>

That's effectively making the blocksize limit completely uncapped and only
preventing spikes, and even in the case of spikes it doesn't differentiate
between 'real' traffic and low-value spam attacks. It suffers from the same
fundamental problems as Bitcoin Unlimited: in the end there are no
transaction fees, and inevitably some miners will want to impose some cap
on block size for practical purposes, resulting in a fork.

Difficulty adjustment works because there's a clear goal of having a
certain rate of making new blocks. Without a target to attempt, automatic
adjustment makes no sense.


Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-10 Thread Luke Dashjr via bitcoin-dev
On Saturday, December 10, 2016 9:29:09 PM Tier Nolan via bitcoin-dev wrote:
> On Sun, Dec 4, 2016 at 7:34 PM, Johnson Lau via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
> > Something not yet done:
> > 1. The new merkle root algorithm described in the MMHF BIP
> 
> Any new merkle algorithm should use a sum tree for partial validation and
> fraud proofs.

PR welcome.

> Is there something special about 216 bits?  I guess at most 448 bits total
> means only one round of SHA256.  16 bits for flags would give 216 for each
> child.

See 
https://github.com/luke-jr/bips/blob/bip-mmhf/bip-mmhf.mediawiki#Merkle_tree_algorithm

But yes, the 448 bits total target is to optimise the tree-building.

> Even better would be to make the protocol extendable.  Allow blocks to
> indicate new trees and legacy nodes would just ignore the extra ones.  If
> Bitcoin supported that then the segregated witness tree could have been
> added as an easier soft fork.

It already is. This is a primary goal of the new protocol.

> The sum-tree could be added later as an extra tree.

Adding new trees means more hashing to validate blocks, so it'd be better to 
keep it at a minimum.

Luke


Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-10 Thread Tier Nolan via bitcoin-dev
On Sun, Dec 4, 2016 at 7:34 PM, Johnson Lau via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Something not yet done:
> 1. The new merkle root algorithm described in the MMHF BIP
>

Any new merkle algorithm should use a sum tree for partial validation and
fraud proofs.

Is there something special about 216 bits?  I guess at most 448 bits total
means only one round of SHA256.  16 bits for flags would give 216 for each
child.
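
As a sketch of the arithmetic behind that guess (whether a single compression round truly suffices also depends on how the new format treats SHA256's mandatory padding bit, which is ignored here):

    FLAG_BITS = 16
    CHILD_HASH_BITS = 216

    payload = FLAG_BITS + 2 * CHILD_HASH_BITS  # 16 + 432 = 448 bits
    LENGTH_FIELD = 64                          # SHA256 appends a 64-bit length
    print(payload)                 # 448
    print(payload + LENGTH_FIELD)  # 512 -- exactly one SHA256 message block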

Even better would be to make the protocol extendable.  Allow blocks to
indicate new trees and legacy nodes would just ignore the extra ones.  If
Bitcoin supported that then the segregated witness tree could have been
added as an easier soft fork.

The sum-tree could be added later as an extra tree.


> 3. Communication with legacy nodes. This version can’t talk to legacy
> nodes through the P2P network, but theoretically they could be linked up
> with a bridge node
>

The bridge would only need to transfer the legacy blocks which are coinbase
only, so very little data.


> 5. Many other interesting hardfork ideas, and softfork ideas that work
> better with a header redesign
>

That is very true.


Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread Pieter Wuille via bitcoin-dev
On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> We have models for estimating the probability that a block is orphaned
> given average network bandwidth and block size.
>
> The question is, do we have objective measures of these two quantities?
> Couldn't we target an orphan_rate < max_rate?
>

Models can predict orphan rate given block size and network/hashrate
topology, but you can't control the topology (and things like FIBRE hide
the effect of block size on this as well). The result is that if you're
purely optimizing for minimal orphan rate, you can end up with a single
(conglomerate of) pools producing all the blocks. Such a setup has no
propagation delay at all, and as a result can always achieve 0 orphans.

Cheers,

-- 
Pieter
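
As an illustration of the kind of model being described (a toy sketch under the standard simplifying assumption that block discovery is a Poisson process with a 600-second mean; real models also account for topology):

    import math

    BLOCK_INTERVAL = 600.0  # seconds; expected time between blocks

    def orphan_rate(prop_delay_seconds: float) -> float:
        # Probability that at least one competing block is found while
        # ours is still propagating for prop_delay_seconds.
        return 1.0 - math.exp(-prop_delay_seconds / BLOCK_INTERVAL)

    print(orphan_rate(10.0))  # ~0.017, i.e. ~1.7% at a 10-second delay
    print(orphan_rate(0.0))   # 0.0 -- a pool mining on top of its own
                              # blocks has no delay and never orphans itself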


Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread Daniele Pinna via bitcoin-dev
> The every two-week adjustment of difficulty has proven to be a
> reasonably effective and predictable way of managing how quickly blocks
> are mined. Bitcoin needs a reasonably effective and predictable way of
> managing the maximum block size.
>
> It’s clear at this point that human beings should not be involved in the
> determination of max block size, just as they’re not involved in
> deciding the difficulty.
>
> Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
> passing the decision to miners/pool operators, the max block size should
> be adjusted every two weeks (2016 blocks) using a system similar to how
> difficulty is calculated.
>
> Put another way: let’s stop thinking about what the max block size
> should be and start thinking about how full we want the average block to
> be regardless of size. Over the last year, we’ve had averages of 75% or
> higher, so aiming for 75% full seems reasonable, hence naming this
> concept ‘Block75’.
>
> The target capacity over 2016 blocks would be 75%. If the last 2016
> blocks are more than 75% full, add the difference to the max block size.
> Like this:
>
> MAX_BLOCK_BASE_SIZE = 100
> TARGET_CAPACITY = 75
> AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> TARGET_CAPACITY
>
> To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
>
> For example, if the last 2016 blocks are 85% full (average block is 850
> KB), add 10% to the max block size. The new max block size would be
> 1,100 KB until the next 2016 blocks are mined, then reset and
> recalculate. The 1,000,000 byte limit that exists currently would
> remain, but would effectively be the minimum max block size.
>
> Another two weeks goes by, the last 2016 blocks are again 85% full, but
> now that means they average 935 KB out of the 1,100 KB max block size.
> This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
> that to make the new max block size of 1,185 KB.
>
> Another two weeks passes. This time, the average block is 1,050 KB. The
> new max block size is calculated to 1,300 KB (as blocks were 105% full,
> minus the 75% capacity target, so 30% added to max block size).
>
> Repeat every 2016 blocks, forever.
>
> If Block75 had been applied at the difficulty adjustment on November
> 18th, the max block size would have been 1,080KB, as the average block
> during that period was 83% full, so 8% is added to the 1,000KB limit.
> The current size, after the December 2nd adjustment would be 1,150K.
>
> Block75 would allow the max block size to grow (or shrink) in response
> to transaction volume, and does so predictably, reasonably quickly, and
> in a method that prevents wild swings in block size or transaction fees.
> It attempts to keep blocks at 75% total capacity over each two week
> period, the same way difficulty tries to keep blocks mined every ten
> minutes. It also keeps blocks as small as possible.
>
> Thoughts?
>
> -t.k.
>

I like the idea. It is good wrt growing the max. block size
automatically without human action, but the main problem (or question)
is not how to grow this number, it is what number can the network
handle, considering both miners and users. While disk space requirements
might not be a big problem, block propagation time is. The time required
for a block to propagate in the network (or at least to all the miners)
is directly dependent on its size. If blocks take too much time to
propagate in the network, the orphan rate will increase in unpredictable
ways. For example, if the internet speed in China is worse than in
Europe, and miners in China have more than 50% of the hashing power,
blocks mined by European miners might get orphaned.

The system as described can also be gamed by filling the network with
transactions. Miners have a monetary interest to include as many
transactions as possible in a block in order to collect the fees.
Regardless of how you think about it, there has to be a maximum block
size that the network will allow as a consensus rule. Increasing it
dynamically based on transaction volume will eventually reach a point
where the number gets big enough that it breaks things. Bitcoin, because
of its fundamental design, can scale by using offchain solutions.



Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread Hampus Sjöberg via bitcoin-dev
> While disk space requirements might not be a big problem, block
> propagation time is

Is block propagation time really still a problem? Compact blocks and FIBRE
should help here.

> Bitcoin, because of its fundamental design, can scale by using offchain
> solutions.

I agree.
However, I believe that on-chain scaling will be needed regardless of which
off-chain solution gains popularity.

2016-12-10 11:44 GMT+01:00 s7r via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org>:

> t. khan via bitcoin-dev wrote:
> > BIP Proposal - Managing Bitcoin’s block size the same way we do
> > difficulty (aka Block75)
> >
> > The every two-week adjustment of difficulty has proven to be a
> > reasonably effective and predictable way of managing how quickly blocks
> > are mined. Bitcoin needs a reasonably effective and predictable way of
> > managing the maximum block size.
> >
> > It’s clear at this point that human beings should not be involved in the
> > determination of max block size, just as they’re not involved in
> > deciding the difficulty.
> >
> > Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
> > passing the decision to miners/pool operators, the max block size should
> > be adjusted every two weeks (2016 blocks) using a system similar to how
> > difficulty is calculated.
> >
> > Put another way: let’s stop thinking about what the max block size
> > should be and start thinking about how full we want the average block to
> > be regardless of size. Over the last year, we’ve had averages of 75% or
> > higher, so aiming for 75% full seems reasonable, hence naming this
> > concept ‘Block75’.
> >
> > The target capacity over 2016 blocks would be 75%. If the last 2016
> > blocks are more than 75% full, add the difference to the max block size.
> > Like this:
> >
> > MAX_BLOCK_BASE_SIZE = 100
> > TARGET_CAPACITY = 75
> > AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> > TARGET_CAPACITY
> >
> > To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
> >
> > For example, if the last 2016 blocks are 85% full (average block is 850
> > KB), add 10% to the max block size. The new max block size would be
> > 1,100 KB until the next 2016 blocks are mined, then reset and
> > recalculate. The 1,000,000 byte limit that exists currently would
> > remain, but would effectively be the minimum max block size.
> >
> > Another two weeks goes by, the last 2016 blocks are again 85% full, but
> > now that means they average 935 KB out of the 1,100 KB max block size.
> > This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
> > that to make the new max block size of 1,185 KB.
> >
> > Another two weeks passes. This time, the average block is 1,050 KB. The
> > new max block size is calculated to 1,300 KB (as blocks were 105% full,
> > minus the 75% capacity target, so 30% added to max block size).
> >
> > Repeat every 2016 blocks, forever.
> >
> > If Block75 had been applied at the difficulty adjustment on November
> > 18th, the max block size would have been 1,080KB, as the average block
> > during that period was 83% full, so 8% is added to the 1,000KB limit.
> > The current size, after the December 2nd adjustment would be 1,150K.
> >
> > Block75 would allow the max block size to grow (or shrink) in response
> > to transaction volume, and does so predictably, reasonably quickly, and
> > in a method that prevents wild swings in block size or transaction fees.
> > It attempts to keep blocks at 75% total capacity over each two week
> > period, the same way difficulty tries to keep blocks mined every ten
> > minutes. It also keeps blocks as small as possible.
> >
> > Thoughts?
> >
> > -t.k.
> >
>
> I like the idea. It is good wrt growing the max. block size
> automatically without human action, but the main problem (or question)
> is not how to grow this number, it is what number can the network
> handle, considering both miners and users. While disk space requirements
> might not be a big problem, block propagation time is. The time required
> for a block to propagate in the network (or at least to all the miners)
> is directly dependent on its size. If blocks take too much time to
> propagate in the network, the orphan rate will increase in unpredictable
> ways. For example, if the internet speed in China is worse than in
> Europe, and miners in China have more than 50% of the hashing power,
> blocks mined by European miners might get orphaned.
>
> The system as described can also be gamed by filling the network with
> transactions. Miners have a monetary interest to include as many
> transactions as possible in a block in order to collect the fees.
> Regardless of how you think about it, there has to be a maximum block
> size that the network will allow as a consensus rule. Increasing it
> dynamically based on transaction volume will eventually reach a point
> where the number gets big enough that it breaks things. Bitcoin, because
> of its fundamental design, can scale by using offchain solutions.

Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-10 Thread Tom Zander via bitcoin-dev
On Sunday, 4 December 2016 21:37:39 CET Hampus Sjöberg via bitcoin-dev 
wrote:
> > Also how about making timestamp 8 bytes?  2106 is coming up soon 
> 
> AFAICT this was fixed in this commit:
> https://github.com/jl2012/bitcoin/commit/fa80b48bb4237b110ceffe11edc14c8130672cd2#diff-499d7ee7998a27095063ed7b4dd7c119R200

That commit hacks around it; a new block header fixes it. Subtle difference.
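
(For reference, the 2106 figure is just 32-bit overflow arithmetic; a one-line sketch:)

    SECONDS_PER_YEAR = 365.2425 * 24 * 3600
    print(1970 + 2**32 / SECONDS_PER_YEAR)  # ~2106.1, when an unsigned
                                            # 32-bit Unix timestamp wraps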

-- 
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel


Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread s7r via bitcoin-dev
t. khan via bitcoin-dev wrote:
> BIP Proposal - Managing Bitcoin’s block size the same way we do
> difficulty (aka Block75)
> 
> The every two-week adjustment of difficulty has proven to be a
> reasonably effective and predictable way of managing how quickly blocks
> are mined. Bitcoin needs a reasonably effective and predictable way of
> managing the maximum block size.
> 
> It’s clear at this point that human beings should not be involved in the
> determination of max block size, just as they’re not involved in
> deciding the difficulty.
> 
> Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
> passing the decision to miners/pool operators, the max block size should
> be adjusted every two weeks (2016 blocks) using a system similar to how
> difficulty is calculated.
> 
> Put another way: let’s stop thinking about what the max block size
> should be and start thinking about how full we want the average block to
> be regardless of size. Over the last year, we’ve had averages of 75% or
> higher, so aiming for 75% full seems reasonable, hence naming this
> concept ‘Block75’.
> 
> The target capacity over 2016 blocks would be 75%. If the last 2016
> blocks are more than 75% full, add the difference to the max block size.
> Like this:
> 
> MAX_BLOCK_BASE_SIZE = 100
> TARGET_CAPACITY = 75
> AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> TARGET_CAPACITY
> 
> To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
> 
> For example, if the last 2016 blocks are 85% full (average block is 850
> KB), add 10% to the max block size. The new max block size would be
> 1,100 KB until the next 2016 blocks are mined, then reset and
> recalculate. The 1,000,000 byte limit that exists currently would
> remain, but would effectively be the minimum max block size. 
> 
> Another two weeks goes by, the last 2016 blocks are again 85% full, but
> now that means they average 935 KB out of the 1,100 KB max block size.
> This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
> that to make the new max block size of 1,185 KB.
> 
> Another two weeks passes. This time, the average block is 1,050 KB. The
> new max block size is calculated to 1,300 KB (as blocks were 105% full,
> minus the 75% capacity target, so 30% added to max block size).
> 
> Repeat every 2016 blocks, forever.
> 
> If Block75 had been applied at the difficulty adjustment on November
> 18th, the max block size would have been 1,080KB, as the average block
> during that period was 83% full, so 8% is added to the 1,000KB limit.
> The current size, after the December 2nd adjustment would be 1,150K.
> 
> Block75 would allow the max block size to grow (or shrink) in response
> to transaction volume, and does so predictably, reasonably quickly, and
> in a method that prevents wild swings in block size or transaction fees.
> It attempts to keep blocks at 75% total capacity over each two week
> period, the same way difficulty tries to keep blocks mined every ten
> minutes. It also keeps blocks as small as possible.
> 
> Thoughts?
> 
> -t.k.
> 

I like the idea. It is good wrt growing the max. block size
automatically without human action, but the main problem (or question)
is not how to grow this number, it is what number can the network
handle, considering both miners and users. While disk space requirements
might not be a big problem, block propagation time is. The time required
for a block to propagate in the network (or at least to all the miners)
is directly dependent on its size. If blocks take too much time to
propagate in the network, the orphan rate will increase in unpredictable
ways. For example, if the internet speed in China is worse than in
Europe, and miners in China have more than 50% of the hashing power,
blocks mined by European miners might get orphaned.

The system as described can also be gamed by filling the network with
transactions. Miners have a monetary interest to include as many
transactions as possible in a block in order to collect the fees.
Regardless of how you think about it, there has to be a maximum block
size that the network will allow as a consensus rule. Increasing it
dynamically based on transaction volume will eventually reach a point
where the number gets big enough that it breaks things. Bitcoin, because
of its fundamental design, can scale by using offchain solutions.





[bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-10 Thread t. khan via bitcoin-dev
BIP Proposal - Managing Bitcoin’s block size the same way we do difficulty
(aka Block75)

The every two-week adjustment of difficulty has proven to be a reasonably
effective and predictable way of managing how quickly blocks are mined.
Bitcoin needs a reasonably effective and predictable way of managing the
maximum block size.

It’s clear at this point that human beings should not be involved in the
determination of max block size, just as they’re not involved in deciding
the difficulty.

Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
passing the decision to miners/pool operators, the max block size should be
adjusted every two weeks (2016 blocks) using a system similar to how
difficulty is calculated.
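
(For reference, a simplified sketch of the existing difficulty retarget that this mirrors; the real consensus code also works on the compact 'bits' encoding and caps the target at the proof-of-work limit:)

    TARGET_TIMESPAN = 14 * 24 * 60 * 60  # two weeks, in seconds

    def retarget(old_target: int, actual_timespan: int) -> int:
        # Every 2016 blocks, scale the proof-of-work target by how long
        # the period actually took, clamped to a factor of 4 either way.
        actual = min(max(actual_timespan, TARGET_TIMESPAN // 4),
                     TARGET_TIMESPAN * 4)
        return old_target * actual // TARGET_TIMESPAN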

Put another way: let’s stop thinking about what the max block size should
be and start thinking about how full we want the average block to be
regardless of size. Over the last year, we’ve had averages of 75% or
higher, so aiming for 75% full seems reasonable, hence naming this concept
‘Block75’.

The target capacity over 2016 blocks would be 75%. If the last 2016 blocks
are more than 75% full, add the difference to the max block size. Like this:

MAX_BLOCK_BASE_SIZE = 100
TARGET_CAPACITY = 75
AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
TARGET_CAPACITY

To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)

For example, if the last 2016 blocks are 85% full (average block is 850
KB), add 10% to the max block size. The new max block size would be 1,100
KB until the next 2016 blocks are mined, then reset and recalculate. The
1,000,000 byte limit that exists currently would remain, but would
effectively be the minimum max block size.

Another two weeks goes by, the last 2016 blocks are again 85% full, but now
that means they average 935 KB out of the 1,100 KB max block size. This is
93.5% of the 1,000,000 byte limit, so 18.5% would be added to that to make
the new max block size of 1,185 KB.

Another two weeks passes. This time, the average block is 1,050 KB. The new
max block size is calculated to 1,300 KB (as blocks were 105% full, minus
the 75% capacity target, so 30% added to max block size).

Repeat every 2016 blocks, forever.
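
A minimal sketch of the rule above in Python (assuming, as the percentage examples imply, that fullness is always measured against the fixed 1,000,000-byte base and that the base is also the floor):

    BASE_SIZE = 1000000        # bytes; the current limit, and the floor
    TARGET_CAPACITY = 75.0     # percent fullness aimed for per period

    def next_max_block_size(avg_block_bytes: float) -> int:
        # Recompute the max block size for the next 2016-block period:
        # measure average fullness against the fixed base, then add the
        # percentage points above (or below) the 75% target to the base.
        fullness_pct = 100.0 * avg_block_bytes / BASE_SIZE
        average_over_cap = fullness_pct - TARGET_CAPACITY
        new_max = BASE_SIZE * (100.0 + average_over_cap) / 100.0
        return max(int(new_max), BASE_SIZE)

    # The worked examples from the text:
    print(next_max_block_size(850000))   # 1100000 (85% full -> +10%)
    print(next_max_block_size(935000))   # 1185000 (93.5% -> +18.5%)
    print(next_max_block_size(1050000))  # 1300000 (105% -> +30%)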

If Block75 had been applied at the difficulty adjustment on November 18th,
the max block size would have been 1,080KB, as the average block during
that period was 83% full, so 8% is added to the 1,000KB limit. The current
size, after the December 2nd adjustment would be 1,150K.

Block75 would allow the max block size to grow (or shrink) in response to
transaction volume, and does so predictably, reasonably quickly, and in a
method that prevents wild swings in block size or transaction fees. It
attempts to keep blocks at 75% total capacity over each two week period,
the same way difficulty tries to keep blocks mined every ten minutes. It
also keeps blocks as small as possible.

Thoughts?

-t.k.