Re: [bitcoin-dev] Proposal: Demonstration of Phase in Full Network Upgrade Activated by Miners

2017-06-13 Thread Jared Lee Richardson via bitcoin-dev
> and allows for resource requirements
> that are too high for many users to validate. The block size settings
> there are effectively placebo controls.

Right, but that's my point.  Any level of control the fullnodes believe
they have is effectively a placebo, unless the opposition to the miners is
essentially unanimous (and maybe not even then, if the chainsplit doesn't
have any miners to get to the next difficulty change or gets attacked
repeatedly).
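To put a rough number on that stall scenario, here is a minimal sketch (my own arithmetic, not from the thread, assuming Bitcoin's 2016-block retarget window and 10-minute block target):

```python
# Sketch (illustrative): time for a minority-hashpower chain to finish
# Bitcoin's 2016-block difficulty window at a 10-minute block target,
# before any difficulty retarget can occur.
def days_to_retarget(hashpower_fraction, blocks_remaining=2016):
    minutes_per_block = 10 / hashpower_fraction
    return blocks_remaining * minutes_per_block / (60 * 24)

print(days_to_retarget(1.0))   # 14.0 days with full hashpower
print(days_to_retarget(0.10))  # 140.0 days for a 10% minority chain
```

A chain split with only a small minority of hashpower would therefore crawl for months before its difficulty could adjust, which is the stall risk described above.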

> I'm advocating that resource requirements be low
> enough that full validation remains possible for a large percentage of
> the economy.

We're derailed from the main thread at this point, but I just wanted to
state that I agree in part.  The part I don't agree with: when a single
transaction begins to cost more than a month's worth of full validation,
which has already happened at least once last week, full validation is
on its way to becoming worthless.  The two costs have to be balanced for
the coin to have utility for its users.

I agree with the rest.

Jared

On Tue, Jun 13, 2017 at 5:23 PM, James Hilliard 
wrote:

> On Tue, Jun 13, 2017 at 2:35 PM, Jared Lee Richardson
>  wrote:
> >> Wallet nodes being able to fully validate and choose whether or not to
> >> accept a particular chain is an important part of bitcoin's security
> >> model.
> >
> > What you're describing is effectively the same as BU.
>
> BU by default uses an "Accept Depth" parameter which effectively lets
> miners decide block size rules and allows for resource requirements
> that are too high for many users to validate. The block size settings
> there are effectively placebo controls.
>
> >
> > Nodes follow chains; they do not decide the victor.  The average user
> > follows the default of the software, which is to follow the longest valid
> > chain.  Forcing the average user to decide which software to run is far
> > more valuable than allowing "the software" to decide things, when in fact
> > all it will do is decide the previous default.
>
> That's largely true: they typically don't decide the victor in soft
> forks unless they are the ones to activate the rule changes (Satoshi
> did this a few times in the early days); however, they make it very
> difficult for a hard fork to be activated without consent. To be clear,
> I'm not advocating for runtime consensus settings for nodes either; I'm
> advocating that resource requirements be low enough that full
> validation remains possible for a large percentage of the economy.
>
> >
> >> One would not want to
> >> use this method to try and activate a controversial hard fork since
> >> it's trivial for miners to false signal. The orphaning period
> >> effectively forces miners to make a decision but does not necessarily
> >> force them to make a particular decision
> >
> > This is true and a good point.  A false signal from miners could trick
> the
> > honest miners into forking off prematurely with a minority.
>
> More likely the false signal would be used during the orphaning period
> to prevent blocks from being orphaned for miners that don't want to
> follow the fork.
>
> >
> >>  it only lets
> >> you see the nversion of the current stratum job since you don't get a
> >> full block header. There's always a risk here that miners build on top
> >> of invalid blocks when SPV mining.
> >
> > This is the job of the stratum server and the pool operator.  These are
> > distinct responsibilities; miners should choose a pool operator in line
> > with their desires.  Solo mining is basically dead, as it will never
> > again be practical (and has not been for at least 2 years) for the same
> > hardware that does the mining to also do full node operation.
> >
> > If the pool operator/stratum server also does not do validation, then any
> > number of problems could occur.
>
> Yes, there is a good amount of risk with validationless mining right
> now, since it's well known that over half of mining pools use
> validationless mining to some degree. This may not be too bad, though,
> due to fallbacks, but the risk is probably fairly implementation
> specific.
>
> >
> >
> >
> >
> > On Mon, Jun 12, 2017 at 10:44 PM, James Hilliard via bitcoin-dev
> >  wrote:
> >>
> >> On Mon, Jun 12, 2017 at 9:23 PM, Zheming Lin via bitcoin-dev
> >>  wrote:
> >> > The BIP is described using Chinese and English. If any part is missing
> >> > or needs to be more specific, please reply. Forgive my poor English.
> >> >
> >> > This method will incorporate any upgrade that affects non-mining
> >> > nodes. They should be aware that the rules have been changed.
> >> >
> >> > TL;DR: Majority miners activate and orphan the minority. That ensures
> >> > all miners upgrade. Then transactions from non-upgraded nodes are
> >> > invalidated, so nodes must upgrade (with other protocol upgrade code
> >> > included) in order to work. Finally, miners vote on the protocol
> >> > upgrade, with all nodes running the same upgraded code.
> >> >
> >> > 
> >> > BIP: ???
> >> > Title: Demonstration of Phase in

Re: [bitcoin-dev] Proposal: Demonstration of Phase in Full Network Upgrade Activated by Miners

2017-06-13 Thread Jared Lee Richardson via bitcoin-dev
> Wallet nodes being able to fully validate and choose whether or not to
> accept a particular chain is an important part of bitcoin's security
> model.

What you're describing is effectively the same as BU.

Nodes follow chains; they do not decide the victor.  The average user
follows the default of the software, which is to follow the longest valid
chain.  Forcing the average user to decide which software to run is far
more valuable than allowing "the software" to decide things, when in fact
all it will do is decide the previous default.

> One would not want to
> use this method to try and activate a controversial hard fork since
> it's trivial for miners to false signal. The orphaning period
> effectively forces miners to make a decision but does not necessarily
> force them to make a particular decision

This is true and a good point.  A false signal from miners could trick the
honest miners into forking off prematurely with a minority.

>  it only lets
> you see the nversion of the current stratum job since you don't get a
> full block header. There's always a risk here that miners build on top
> of invalid blocks when SPV mining.

This is the job of the stratum server and the pool operator.  These are
distinct responsibilities; miners should choose a pool operator in line
with their desires.  Solo mining is basically dead, as it will never again
be practical (and has not been for at least 2 years) for the same hardware
that does the mining to also do full node operation.

If the pool operator/stratum server also does not do validation, then any
number of problems could occur.




On Mon, Jun 12, 2017 at 10:44 PM, James Hilliard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Mon, Jun 12, 2017 at 9:23 PM, Zheming Lin via bitcoin-dev
>  wrote:
> > The BIP is described using Chinese and English. If any part is missing
> > or needs to be more specific, please reply. Forgive my poor English.
> >
> > This method will incorporate any upgrade that affects non-mining nodes.
> > They should be aware that the rules have been changed.
> >
> > TL;DR: Majority miners activate and orphan the minority. That ensures
> > all miners upgrade. Then transactions from non-upgraded nodes are
> > invalidated, so nodes must upgrade (with other protocol upgrade code
> > included) in order to work. Finally, miners vote on the protocol
> > upgrade, with all nodes running the same upgraded code.
> >
> > 
> > BIP: ???
> > Title: Demonstration of Phase in Full Network Upgrade Activated by Miners
> > Author: LIN Zheming
> > Status: Draft
> > Type: Standards Track
> > Created: 2017-06-12
> > 
> >
> > ==Summary==
> >
> > 本方法并不是来源于个人,而是中文比特币社区中集体智慧的结果。
> > This idea was not created by an individual but is a product of
> > collaboration in the Chinese bitcoin community between different
> > interest groups.
> >
> > 这是一种在协议升级时,对全网挖矿和非挖矿节点进行保护和激励的方法,避免不参与挖矿的节点没有升级的动力而受到损失。
> > This method is put forth to incentivize and protect mining nodes and
> > non-mining nodes during a protocol upgrade. With this incentive
> > mechanism, the non-mining nodes will not suffer monetary loss from
> > chain splitting.
> >
> > 发信号的多数矿工在达到激活条件后第一个宽限期(一个难度周期)后设置新区块版本号,孤立未升级矿工的低版本号的块。
> > 通过最初的中本聪共识,在第一个宽限期结束后,所有矿工将升级至最新版本或使用最新版本。在第二个宽限期(一个难度周期)后,矿工将仅接受新版本的交易,
> > 未升级的客户端发送的旧版本交易将无法得到新节点的转播也无法进入新版本区块。这将在保护用户资产的同时,提醒不挖矿的钱包节点升级。
> > 并在升级代码中加入对协议进行改动的部分。钱包升级后将由挖矿节点投票实施该项改动,以达成协议改动的广泛部署。
> >
> > After the activation condition is met, majority miners will set a new
> > block version after the first grace period (a difficulty change of 2016
> > blocks). Blocks with the lower version will be orphaned. By the original
> > Nakamoto Consensus, the end of the first grace period will force all
> > mining nodes to upgrade and signal the new version of the consensus.
> > After the second grace period (a difficulty change of 2016 blocks),
> > mining nodes will only accept transactions with the new version.
> > Transactions from non-upgraded nodes will not be relayed nor included
> > in new-version blocks. This will protect the funds of non-mining nodes
> > from replay attacks and will function as a notification for them to
> > upgrade. Code dealing with the protocol change can be included in this
> > upgrade. After the non-mining nodes upgrade, mining nodes will vote to
> > activate the protocol upgrade, achieving widespread deployment of the
> > protocol upgrade.
> >
> > 在该项改动广泛部署至客户端之后,依然由其激活条件控制。
> > The protocol upgrade is still controlled by its own activation
> > condition, even after the change is deployed among nodes.
> >
> > ==Motivation==
> >
> > 鉴于最初的比特币协议并未考虑不参与挖矿的钱包节点,导致这些钱包节点的协议升级是被动的,懒惰的。
> > 当在升级方向上出现分歧时,矿工也不愿意在错误的链上挖矿,但矿工又没有任何方法可以确保正在延长的链是被钱包节点广泛接受
> > 的链。这将影响钱包节点的安全。
> > In view of the fact that the original Bitcoin consensus did not
> > consider the non-mining wallet nodes (as mentioned above), the result
> > is that upgrading the consensus of these wallet nodes is passive and
> > lazy. When there is disagreement over the direction of the upgrade, the
> > miners have no mechanism to ensure that the chain being extend
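Setting aside the archive truncation, the two-grace-period mechanism in the quoted proposal can be sketched roughly as follows (the version constants and function signatures are hypothetical, my own illustration rather than the proposal's code):

```python
# Rough sketch of the quoted two-grace-period phase-in. All constants and
# names are hypothetical illustrations of the described rules.
RETARGET = 2016                 # blocks per difficulty period
NEW_BLOCK_VERSION = 0x20000002  # hypothetical upgraded block version
NEW_TX_VERSION = 2              # hypothetical upgraded transaction version

def block_acceptable(height, activation_height, block_version):
    # First grace period: after one retarget, old-version blocks are orphaned.
    if height >= activation_height + RETARGET:
        return block_version >= NEW_BLOCK_VERSION
    return True

def tx_acceptable(height, activation_height, tx_version):
    # Second grace period: after two retargets, old-version txs are rejected.
    if height >= activation_height + 2 * RETARGET:
        return tx_version >= NEW_TX_VERSION
    return True
```

The second rule is what forces non-mining wallets to upgrade: their old-version transactions simply stop confirming once the second grace period ends.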

Re: [bitcoin-dev] User Activated Soft Fork Split Protection

2017-06-07 Thread Jared Lee Richardson via bitcoin-dev
> If you're looking for hard numbers at this point you aren't likely to
> find them because not everything is easy to measure directly.

There are quite a few hard numbers available, of varying usefulness.
Mining commitments are a major one because of the stalled-chain problem.
Node signaling represents some data, because while it can be sybiled,
nodes are cheap but not free to run.  Upvotes and comments on reddit and
other forums might be of some use, but there's not a clear supermajority
driving every pro-UASF comment up and every anti-UASF comment down, and
Reddit obscures the upvote/downvote counts pretty well.  It could be a
usable datapoint if someone pulled the comments, manually and neutrally
evaluated their likely position on the matter, and then reported on it,
but that is a lot of work and I think it is unlikely to show anything
except how deep the rifts in the community are.  The two main statistics
available do not support the idea that UASF has any chance of success.
The third at least shows that there is deep opposition nearly equal to
the support amongst the forums most likely to support UASF.

So I'll take anything, any statistic that actually indicates UASF has a
chance in hell of succeeding, at least that would be worth something.
Otherwise it's all much ado about nothing.

> We'll know more as we get closer to BIP148 activation by looking at the
> markets.

What markets?  Where?  How would we know?

> > It doesn't have those issues during the segwit activation, ergo there
> > is no reason for segwit-activation problems to take priority over the
> > very real hardfork activation problems.

> And yet segwit2x is insisting on activation bundling which needlessly
> complicates and delays SegWit activation.

Because it is not segwit that appears to have the supermajority
consensus.

> Sure, technical changes can be made for political reasons, we should
> at least be clear in regards to why particular decisions are being
> made. I'm supportive of a hard fork for technical reasons but not
> political ones as are many others.

Well, then we have a point of agreement at least. :)


On Wed, Jun 7, 2017 at 5:44 PM, James Hilliard 
wrote:

> On Wed, Jun 7, 2017 at 7:20 PM, Jared Lee Richardson 
> wrote:
> >> Not really, there are a few relatively simple techniques such as RBF
> >> which can be leveraged to get confirmations on one side before double
> >> spending on another. Once a transaction is confirmed on the non-BIP148
> >> chain then the high fee transactions can be made on only the BIP148
> >> side of the split using RBF.
> >
> > Ah, so the BIP148 client handles this on behalf of its less technical
> > users, then, yes?
> It's not automatic but exchanges will likely handle it on behalf of
> the less technical users. BIP148 is not intended to cause a permanent
> chain split however which is why this was not built in.
> >
> >>  Exchanges will likely do this splitting
> >> automatically for users as well.
> >
> > Sure, exchanges are going to dedicate hundreds of developer hours and
> > thousands of support hours to support something that they've repeatedly
> > told everyone must have replay protection to be supported.  They're
> > going to do this because 8% of nodes and <0.5% of miners say they'll be
> > rewarded richly.
> > Somehow I find that hard to believe.
> They are very likely to, most have contingency plans for this sort of
> thing ready to go due to their experience with the ETH/ETC fork.
> >
> > Besides, if the BIP148 client does it for them, they wouldn't have to
> > dedicate those hundreds of developer hours.  Right?
> >
> > I can't imagine how this logic is getting you from where the real data
> > is to the assumption that an economic majority will push BIP148 into
> > being such a more valuable chain that switching chains will be
> > attractive to enough miners.  There's got to be some real data that
> > convinces you of this somewhere?
> If you're looking for hard numbers at this point you aren't likely to
> find them because not everything is easy to measure directly.
> >
> >> Both are issues, but wipeout risk is different, the ETH/ETC split for
> >> example didn't have any wipeout risk for either side the same is not
> >> true for BIP148(and it is the non-BIP148 side that carries the risk of
> >> chain wipeout).
> >
> > Wipeout risk is a serious issue when 45% of the miners support one
> > chain and 55% support the other chain.  Segwit doesn't even have 35% of
> > the miners; there's no data or statements anywhere that indicate that
> > UASF is going to reach the point where wipeout risk is even comparable
> > to abandonment risk.
> It's mostly economic support that will dictate this, not hashpower
> support since the hashpower follows the economy.
> >
> >> Yes, miners aren't likely to waste operational mining costs, that's
> >> ultimately why miners would follow the BIP148 side of the chain
> >> assuming it has sufficient econ

Re: [bitcoin-dev] User Activated Soft Fork Split Protection

2017-06-07 Thread Jared Lee Richardson via bitcoin-dev
> Not really, there are a few relatively simple techniques such as RBF
> which can be leveraged to get confirmations on one side before double
> spending on another. Once a transaction is confirmed on the non-BIP148
> chain then the high fee transactions can be made on only the BIP148
> side of the split using RBF.

Ah, so the BIP148 client handles this on behalf of its less technical
users, then, yes?
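The RBF splitting technique quoted above can be illustrated with a toy simulation (my own sketch; no real wallet or node RPC involved, and the transaction names are made up):

```python
# Toy simulation of the quoted RBF coin-splitting trick. Both chains apply
# the same mempool rule: a replacement is accepted only while the tx it
# replaces is unconfirmed. Confirming the low-fee original on the
# non-BIP148 chain first means the later high-fee replacement can only
# confirm on the BIP148 side.
class Chain:
    def __init__(self):
        self.confirmed = set()
        self.mempool = {}

    def accept(self, txid, replaces=None):
        if replaces in self.confirmed:
            return False  # conflicts with a confirmed tx: rejected
        self.mempool.pop(replaces, None)
        self.mempool[txid] = True
        return True

    def mine(self):
        self.confirmed |= set(self.mempool)
        self.mempool.clear()

legacy, bip148 = Chain(), Chain()
for chain in (legacy, bip148):
    chain.accept("low_fee_tx")
legacy.mine()  # low-fee tx confirms on the non-BIP148 chain only
print(legacy.accept("high_fee_rbf", replaces="low_fee_tx"))  # False
print(bip148.accept("high_fee_rbf", replaces="low_fee_tx"))  # True
```

The point of contention in the thread is not whether this works but who performs it: the simulation shows the mechanism is simple, while the exchange above is about whether clients or exchanges would actually automate it.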

>  Exchanges will likely do this splitting
> automatically for users as well.

Sure, exchanges are going to dedicate hundreds of developer hours and
thousands of support hours to support something that they've repeatedly
told everyone must have replay protection to be supported.  They're going
to do this because 8% of nodes and <0.5% of miners say they'll be rewarded
richly.  Somehow I find that hard to believe.

Besides, if the BIP148 client does it for them, they wouldn't have to
dedicate those hundreds of developer hours.  Right?

I can't imagine how this logic is getting you from where the real data is
to the assumption that an economic majority will push BIP148 into being
such a more valuable chain that switching chains will be attractive to
enough miners.  There's got to be some real data that convinces you of this
somewhere?

> Both are issues, but wipeout risk is different, the ETH/ETC split for
> example didn't have any wipeout risk for either side the same is not
> true for BIP148(and it is the non-BIP148 side that carries the risk of
> chain wipeout).

Wipeout risk is a serious issue when 45% of the miners support one chain
and 55% support the other chain.  Segwit doesn't even have 35% of the
miners; there's no data or statements anywhere that indicate that UASF is
going to reach the point where wipeout risk is even comparable to
abandonment risk.

> Yes, miners aren't likely to waste operational mining costs, that's
> ultimately why miners would follow the BIP148 side of the chain
> assuming it has sufficient economic support or if it's more profitable
> to mine.

To convince miners you would have to have some data SOMEWHERE supporting
the economic majority argument.  Is there any such data?

> segwit2x has more issues since the HF part requires users to reach
> consensus

It doesn't have those issues during the segwit activation, ergo there is no
reason for segwit-activation problems to take priority over the very real
hardfork activation problems.

> That's a political reason not a technical reason.

In a consensus system they are frequently the same, unfortunately.
Technical awesomeness without people agreeing = zero consensus.  So the
choice is either to "technically" break the consensus without a
super-majority and see what happens, or to go with the choice that has real
data showing the most consensus and hope the tiny minority chain actually
dies off.

Jared

On Wed, Jun 7, 2017 at 5:01 PM, James Hilliard 
wrote:

> On Wed, Jun 7, 2017 at 6:43 PM, Jared Lee Richardson 
> wrote:
> >> BIP148 however is a consensus change that can
> >> be rectified if it gets more work, this would act as an additional
> >> incentive to mine the BIP148 side since there would be no wipeout
> >> risk there.
> >
> > This statement is misleading.  Wipeout risk doesn't apply to any
> > consensus change; as a consensus change itself, BIP148 can only be
> > abandoned.  The BIP148 chain carries just as many risks of being
> > abandoned, or even more with segwit2x on the table.  No miner would
> > consider "wipeout risk" an advantage when the real threat is chain
> > abandonment.
> Both are issues, but wipeout risk is different, the ETH/ETC split for
> example didn't have any wipeout risk for either side the same is not
> true for BIP148(and it is the non-BIP148 side that carries the risk of
> chain wipeout).
> >
> >> Higher transaction fees on a minority chain can compensate miners for
> >> a lower price which would likely be enough to get the BIP148 chain to
> >> a difficulty reduction.
> >
> > Higher transaction fees suffer the same problem as exchange support
> > does.  Without replay protection, it is very difficult for any average
> > user to force transactions onto one chain or the other.  Thus, without
> > replay protection, the UASF chain is unlikely to develop any viable fee
> > market; its few miners will, 99% of the time, simply choose from the
> > highest fees that were already available to the other chain, which is
> > basically no advantage at all.
> Not really, there are a few relatively simple techniques such as RBF
> which can be leveraged to get confirmations on one side before double
> spending on another. Once a transaction is confirmed on the non-BIP148
> chain then the high fee transactions can be made on only the BIP148
> side of the split using RBF. Exchanges will likely do this splitting
> automatically for users as well.
> >
> >>  ETC replay protection was done after the fork on an as
> >> needed basis(there are multiple reliable techniques that can be used
> >> to split UTXO's), the same

Re: [bitcoin-dev] User Activated Soft Fork Split Protection

2017-06-07 Thread Jared Lee Richardson via bitcoin-dev
> BIP148 however is a consensus change that can
> be rectified if it gets more work, this would act as an additional
> incentive to mine the BIP148 side since there would be no wipeout
> risk there.

This statement is misleading.  Wipeout risk doesn't apply to any
consensus change; as a consensus change itself, BIP148 can only be
abandoned.  The BIP148 chain carries just as many risks of being
abandoned, or even more with segwit2x on the table.  No miner would
consider "wipeout risk" an advantage when the real threat is chain
abandonment.

> Higher transaction fees on a minority chain can compensate miners for
> a lower price which would likely be enough to get the BIP148 chain to
> a difficulty reduction.

Higher transaction fees suffer the same problem as exchange support does.
Without replay protection, it is very difficult for any average user to
force transactions onto one chain or the other.  Thus, without replay
protection, the UASF chain is unlikely to develop any viable fee market;
its few miners will, 99% of the time, simply choose from the highest fees
that were already available to the other chain, which is basically no
advantage at all.
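For scale, the fee-compensation claim quoted earlier (higher fees offsetting a lower coin price) reduces to simple arithmetic; all prices and fee levels below are illustrative assumptions, not market data:

```python
# Back-of-envelope for the quoted fee-compensation argument. All numbers
# are illustrative: a miner's fiat revenue per block is
# (subsidy + fees) * price, so higher fees can offset a lower price.
def block_revenue_fiat(subsidy_btc, fees_btc, price_usd):
    return (subsidy_btc + fees_btc) * price_usd

majority = block_revenue_fiat(12.5, 1.0, 2500)    # 33750.0 USD per block
# A minority chain trading at 40% of that price would need ~21.25 BTC in
# fees per block to match: (12.5 + f) * 1000 = 33750  =>  f = 21.25
minority = block_revenue_fiat(12.5, 21.25, 1000)
print(majority == minority)  # True
```

The arithmetic shows why the claim is contested above: the required fee level is a large multiple of normal fee revenue, and without replay protection those fees are hard to confine to one chain.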

>  ETC replay protection was done after the fork on an as
> needed basis(there are multiple reliable techniques that can be used
> to split UTXO's), the same can happen with BIP148 and it is easier to
> do with Bitcoin than with the ETH/ETC split IMO.

ETC replay protection was added because they were already a hardfork and
without it they would not have had a viable chain.  BIP148 is not supposed
to be a hardfork, and if it added replay protection to remain viable it
would lose the frequently touted "wipeout advantage" as well as the ability
to call itself a softfork.  And are you seriously suggesting that what
happened with ETC and ETH is a desirable and good situation for Bitcoin,
and that UASF is ETC?

> A big reason BIP148 still has support is because up until SegWit
> actually activates there's no guarantee segwit2mb will actually have
> the necessary support to activate SegWit.

For miners blowing through six million dollars a day in mining
operational costs, that's a pretty crappy reason.  Serious miners can't
afford to prop up a non-viable chain based on philosophy or maybes.  BIP148
is based entirely upon people who aren't putting anything on the line
trying to convince others to take the huge risks for them, with
deceptively fallacious logic in my opinion.

Even segwit2x is based on the assumption that all miners can reach
consensus.  Break that assumption and segwit2x will have the same problems
as UASF.

> This is largely an issue due to segwit2x's bundling; if the SW and HF
> parts of segwit2x were unbundled then there would be no reason to delay
> BIP91 activation

They are bundled.  Segwit alone doesn't have the desired overwhelming
consensus, unless core wishes to fork 71% to 29%, and maybe not even that
high.  That's the technical reason, and they can't be unbundled without
breaking that consensus.

Jared


On Wed, Jun 7, 2017 at 4:11 PM, James Hilliard 
wrote:

> On Wed, Jun 7, 2017 at 5:53 PM, Jared Lee Richardson 
> wrote:
> >> There are 2 primary factors involved here, economic support and
> >> hashpower, either of which is enough to make a permanent chain split
> >> unlikely; miners will mine whichever chain is most profitable (see
> >> ETH/ETC hashpower profitability equilibrium for an example of how this
> >> works in practice)
> >
> > That's not a comparable example.  ETC did not face potentially years
> > of slow blocktimes before it normalized, whereas BIP148 is on track to
> > do exactly that.  Moreover, ETC represented a fundamental break from
> > the majority consensus that could not be rectified, whereas BIP148
> > represents only a minority attempt to accelerate something that an
> > overwhelming majority of miners have already agreed to activate under
> > segwit2x.  Lastly ETC was required to add replay protection, just like
> > any minority fork proposed by any crypto-currency has been, something
> > that BIP148 both lacks and refuses to add or even acknowledge the
> > necessity of.  Without replay protection, ETC could not have become
> > profitable enough to be a viable minority chain.  If BIP148's chain is
> > not the majority chain and it does not have replay protection, it will
> > face the same problems, but that required replay protection will turn
> > it into a hardfork.  This will be a very bad position for UASF
> > supporters to find themselves in - either hardfork and hope the price
> > is higher and the majority converts, or die as the minority chain with
> > no reliable methods of economic conversion.
> Higher transaction fees on a minority chain can compensate miners for
> a lower price which would likely be enough to get the BIP148 chain to
> a difficulty reduction. BIP148 however is a consensus change that can
> be rectified if it gets more work, this would act as an additional
> incentive to mine the BIP148 sid

Re: [bitcoin-dev] User Activated Soft Fork Split Protection

2017-06-07 Thread Jared Lee Richardson via bitcoin-dev
> There are 2 primary factors involved here, economic support and
> hashpower, either of which is enough to make a permanent chain split
> unlikely; miners will mine whichever chain is most profitable (see
> ETH/ETC hashpower profitability equilibrium for an example of how this
> works in practice)

That's not a comparable example.  ETC did not face potentially years of
slow blocktimes before it normalized, whereas BIP148 is on track to do
exactly that.  Moreover, ETC represented a fundamental break from the
majority consensus that could not be rectified, whereas BIP148 represents
only a minority attempt to accelerate something that an overwhelming
majority of miners have already agreed to activate under segwit2x.  Lastly
ETC was required to add replay protection, just like any minority fork
proposed by any crypto-currency has been, something that BIP148 both lacks
and refuses to add or even acknowledge the necessity of.  Without replay
protection, ETC could not have become profitable enough to be a viable
minority chain.  If BIP148's chain is not the majority chain and it does
not have replay protection, it will face the same problems, but that
required replay protection will turn it into a hardfork.  This will be a
very bad position for UASF supporters to find themselves in - Either
hardfork and hope the price is higher and the majority converts, or die as
the minority chain with no reliable methods of economic conversion.

I believe, but don't have data to back this, that most of the BIP148
insistence comes not from a legitimate attempt to gain consensus (or else
they would either outright oppose segwit2mb for its hardfork, or they would
outright support it), but rather from an attempt for BIP148 supporters to
save face for BIP148 being a failure.  If I'm correct, that's a terrible
and highly non-technical reason for segwit2mb to bend over backwards
attempting to support BIP148's attempt to save face.

> The main issue is just one of activation timelines; BIP91 as is takes
> too long to activate unless started ahead of the existing segwit2x
> schedule, and it's unlikely that BIP148 will get pushed back any
> further.

Even if I'm not correct on the above, I and others find it hard to accept
that this timeline conflict is segwit2x's fault.  Segwit2x has both some
flexibility and broad support that crosses contentious pro-segwit and
pro-blocksize-increase divisions that have existed for two years.  BIP148
is attempting to hold segwit2x's timelines and code hostage by claiming
inflexibility and claiming broad support; not only is neither of those
assertions backed by real data, but BIP148 (by being so inflexible) is
pushing a position that deepens the divides between those groups.  For
there to be technical reasons for compatibility (so long as there are
tradeoffs, which there are), there needs to be hard data showing that
BIP148 is a viable minority fork that won't simply die off on its own.

Jared


On Wed, Jun 7, 2017 at 3:23 PM, James Hilliard 
wrote:

> On Wed, Jun 7, 2017 at 4:50 PM, Jared Lee Richardson 
> wrote:
> > Could this risk mitigation measure be an optional flag?  And if so,
> > could it+BIP91 signal on a different bit than bit4?
> It's fairly trivial for miners to signal for BIP91 on bit4 or a
> different bit at the same time, since the code is trivial enough to
> combine.
> >
> > The reason being, if for some reason the segwit2x activation cannot
> > take place in time, it would be preferable for miners to have a more
> > standard approach to activation that requires stronger consensus and
> > may be more forgiving than BIP148.  If the segwit2x activation is on
> > time to cooperate with BIP148, it could be signaled through the
> > non-bit4 approach and everything could go smoothly.  Thoughts on that
> > idea?  It may add more complexity and more developer time, but may
> > also address your concerns among others.
> This does give miners another approach to activate segwit ahead of
> BIP148, if segwit2x activation is rolled out and activated immediately
> then this would not be needed however based on the timeline here
> https://segwit2x.github.io/ it would not be possible for BIP91 to
> enforce mandatory signalling ahead of BIP148. Maybe that can be
> changed though, I've suggested an immediate rollout with a placeholder
> client timeout instead of the HF code initially in order to accelerate
> that.
> >
> >> Since this BIP
> >> only activates with a clear miner majority it should not increase the
> >> risk of an extended chain split.
> >
> > The concern I'm raising is more about the psychology of giving BIP148
> > a sense of safety that may not be valid.  Without several more steps,
> > BIP148 is definitely on track to be a risky chainsplit, and without
> > segwit2x it will almost certainly be a small minority chain. (Unless
> > the segwit2x compromise falls apart before then, and even in that case
> > it is likely to be a minority chain)
> There are 2 primary factors involved here, economic support and

Re: [bitcoin-dev] User Activated Soft Fork Split Protection

2017-06-07 Thread Jared Lee Richardson via bitcoin-dev
Could this risk mitigation measure be an optional flag?  And if so,
could it+BIP91 signal on a different bit than bit4?

The reason being, if for some reason the segwit2x activation cannot
take place in time, it would be preferable for miners to have a more
standard approach to activation that requires stronger consensus and
may be more forgiving than BIP148.  If the segwit2x activation is on
time to cooperate with BIP148, it could be signaled through the
non-bit4 approach and everything could go smoothly.  Thoughts on that
idea?  It may add more complexity and more developer time, but may
also address your concerns among others.
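For reference, the bit signaling being discussed (bit4 for BIP91/segwit2x, bit1 for segwit) is just a check on the block version field. This sketch is my own illustration, assuming the BIP9 top-bits convention and using an 80% threshold as the commonly cited segwit2x/BIP91 figure:

```python
# Sketch of BIP9-style versionbits signaling (my own illustration).
# Top 3 bits of the version must be 001; a miner signals by setting a bit.
def bit_signaled(version, bit):
    return (version >> 29) == 0b001 and ((version >> bit) & 1) == 1

def window_activates(versions, bit, threshold):
    # True if at least `threshold` of the blocks in the window signal `bit`.
    count = sum(bit_signaled(v, bit) for v in versions)
    return count >= threshold * len(versions)

v = (0b001 << 29) | (1 << 4)       # a block signaling on bit 4
print(bit_signaled(v, 4))          # True
print(window_activates([v] * 1613 + [0] * 403, 4, 0.80))  # True
```

Because signaling on an extra bit is just ORing another flag into the version field, signaling on bit4 and a second bit simultaneously, as suggested above, is indeed trivial for miners.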

> Since this BIP
> only activates with a clear miner majority it should not increase the
> risk of an extended chain split.

The concern I'm raising is more about the psychology of giving BIP148
a sense of safety that may not be valid.  Without several more steps,
BIP148 is definitely on track to be a risky chainsplit, and without
segwit2x it will almost certainly be a small minority chain. (Unless
the segwit2x compromise falls apart before then, and even in that case
it is likely to be a minority chain)

Jared


On Wed, Jun 7, 2017 at 2:42 PM, James Hilliard
 wrote:
> I don't really see how this would increase the likelihood of an
> extended chain split assuming BIP148 is going to have
> non-insignificant economic backing. This BIP is designed to provide a
> risk mitigation measure that miners can safely deploy. Since this BIP
> only activates with a clear miner majority it should not increase the
> risk of an extended chain split. At this point it is not completely
> clear how much economic support there is for BIP148 but support
> certainly seems to be growing and we have nearly 2 months until BIP148
> activation. I intentionally used a shorter activation period here so
> that decisions by miners can be made close to the BIP148 activation
> date.
>
> On Wed, Jun 7, 2017 at 4:29 PM, Jared Lee Richardson  
> wrote:
>> I think this BIP represents a gamble, and the gamble may not be a good
>> one.  The gamble here is that if the segwit2x changes are rolled out
>> on time, and if the signatories accept the bit4 + bit1 signaling
>> proposals within BIP91, the launch will go smoother, as intended.  But
>> conversely, if either the segwit2x signatories balk about the Bit1
>> signaling OR if the timelines for segwit2mb are missed even by a bit,
>> it may cause the BIP148 chainsplit to be worse than it would be
>> without.  Given the frequent concerns raised in multiple places about
>> the aggressiveness of the segwit2x timelines, including the
>> non-hardfork timelines, this does not seem like a great gamble to be
>> making.
>>
>> The reason I say it may make the chainsplit be worse than it would
>> otherwise be is that it may provide a false sense of safety for BIP148
>> that does not currently exist (and should not, as it is a
>> chainsplit).  That sense of safety would only be legitimate if the
>> segwit2x signatories were on board, and the segwit2x code effectively
>> enforced BIP148 simultaneously, neither of which are guaranteed.  If
>> users and more miners had a false sense that BIP148 was *not* going to
>> chainsplit from default / segwit2x, they might not follow the news if
>> suddenly the segwit2x plan were delayed for a few days.  While any
>> additional support would definitely be cheered on by BIP148
>> supporters, the practical reality might be that this proposal would
>> take BIP148 from the "unlikely to have any viable chain after flag day
>> without segwit2x" category into the "small but viable minority chain"
>> category, and even worse, it might strengthen the chainsplit just days
>> before segwit is activated on BOTH chains, putting the BIP148
>> supporters on the wrong pro-segwit, but still-viable chain.
>>
>> If Core had taken a strong stance to include BIP148 into the client,
>> and if BIP148 support were much much broader, I would feel differently
>> as the gamble would be more likely to discourage a chainsplit (By
>> forcing the acceleration of segwit2x) rather than encourage it (by
>> strengthening an extreme minority chainsplit that may wind up on the
>> wrong side of two segwit-activated chains).  As it stands now, this
>> seems like a very dangerous attempt to compromise with a small but
>> vocal group that are the ones creating the threat to begin with.
>>
>> Jared
>>
>> On Tue, Jun 6, 2017 at 5:56 PM, James Hilliard via bitcoin-dev
>>  wrote:
>>> Due to the proposed calendar (https://segwit2x.github.io/) for the
>>> SegWit2x agreement being too slow to activate SegWit mandatory
>>> signalling ahead of BIP148 using BIP91 I would like to propose another
>>> option that miners can use to prevent a chain split ahead of the Aug
>>> 1st BIP148 activation date.
>>>
>>> The splitprotection soft fork is essentially BIP91 but using BIP8
>>> instead of BIP9 with a lower activation threshold and immediate
>>> mandatory signalling lock-in. This allows for a m

Re: [bitcoin-dev] User Activated Soft Fork Split Protection

2017-06-07 Thread Jared Lee Richardson via bitcoin-dev
> Keep in mind that this is only temporary until segwit has locked in,
after that happens it becomes optional for miners again.

I missed that, that does effectively address that concern.  It appears
that BIP148 implements the same rule as would be required to prevent a
later chainsplit as well, no?

This comment did bring to mind another concern about BIP148/91 though,
which I'll raise in the pull request discussion.  Feel free to respond
to it there.

Jared

On Wed, Jun 7, 2017 at 2:21 PM, James Hilliard
 wrote:
> Keep in mind that this is only temporary until segwit has locked in,
> after that happens it becomes optional for miners again.
>
> On Wed, Jun 7, 2017 at 4:09 PM, Jared Lee Richardson  
> wrote:
>>> This is, by far, the safest way for miners to quickly defend against a 
>>> chain split, much better than a -bip148 option.   This allows miners to 
>>> defend themselves, with very little risk, since the defense is only 
>>> activated if the majority of miners do so. I would move for a very rapid 
>>> deployment.   Only miners would need to upgrade.   Regular users would not 
>>> have to concern themselves with this release.
>>
>> FYI, even if very successful, this deployment and change may have a
>> severe negative impact on a small group of miners.  Any miners/pools
>> who are not actively following the forums, news, or these discussions
>> may be difficult to reach and communicate with in time, particularly
>> with language barriers.  Of those, any who are also either not
>> signaling segwit currently or are running an older software version
>> will have their blocks continuously and constantly orphaned, but may
>> not have any alarms or notifications set up for such an unexpected
>> failure.  That may or may not be a worthy consideration, but it is
>> definitely brusque and a harsh price to pay.  Considering the
>> opposition mentioned against transaction limits for the rare cases
>> where a very large transaction has already been signed, it seems that
>> this would be worthy of consideration.  For the few miners in that
>> situation, it does turn segwit from an optional softfork into a
>> punishing hardfork.
>>
>> I don't think that's a sufficient reason alone to kill the idea, but
>> it should be a concern.
>>
>> Jared
>>
>> On Wed, Jun 7, 2017 at 7:10 AM, Erik Aronesty via bitcoin-dev
>>  wrote:
>>> This is, by far, the safest way for miners to quickly defend against a chain
>>> split, much better than a -bip148 option.   This allows miners to defend
>>> themselves, with very little risk, since the defense is only activated if
>>> the majority of miners do so. I would move for a very rapid deployment.
>>> Only miners would need to upgrade.   Regular users would not have to concern
>>> themselves with this release.
>>>
>>> On Wed, Jun 7, 2017 at 6:13 AM, James Hilliard via bitcoin-dev
>>>  wrote:

 I think even 55% would probably work out fine simply due to incentive
 structures, once signalling is over 51% it's then clear to miners that
 non-signalling blocks will be orphaned and the rest will rapidly
 update to splitprotection/BIP148. The purpose of this BIP is to reduce
 chain split risk for BIP148 since it's looking like BIP148 is going to
 be run by a non-insignificant percentage of the economy at a minimum.

 On Wed, Jun 7, 2017 at 12:20 AM, Tao Effect  wrote:
 > See thread on replay attacks for why activating regardless of threshold
 > is a
 > bad idea [1].
 >
 > BIP91 OTOH seems perfectly reasonable. 80% instead of 95% makes it more
 > difficult for miners to hold together in opposition to Core. It gives
 > Core
 > more leverage in negotiations.
 >
 > If they don't activate with 80%, Core can release another BIP to reduce
 > it
 > to 75%.
 >
 > Each threshold reduction makes it both more likely to succeed, but also
 > increases the likelihood of harm to the ecosystem.
 >
 > Cheers,
 > Greg
 >
 > [1]
 >
 > https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014497.html
 >
 > --
 > Please do not email me anything that you are not comfortable also
 > sharing
 > with the NSA.
 >
 > On Jun 6, 2017, at 6:54 PM, James Hilliard 
 > wrote:
 >
 > This is a BIP8 style soft fork so mandatory signalling will be active
 > after Aug 1st regardless.
 >
 > On Tue, Jun 6, 2017 at 8:51 PM, Tao Effect 
 > wrote:
 >
 > What is the probability that a 65% threshold is too low and can allow a
 > "surprise miner attack", whereby miners are kept offline before the
 > deadline, and brought online immediately after, creating potential
 > havoc?
 >
 > (Nit: "simple majority" usually refers to >50%, I think, might cause
 > confusion.)
 >
 > -Greg Slepak
 >
 > --
 > Please do not email me anything that you are not comfortable also
 > sharing
 > with the NSA.
 >

Re: [bitcoin-dev] User Activated Soft Fork Split Protection

2017-06-07 Thread Jared Lee Richardson via bitcoin-dev
I think this BIP represents a gamble, and the gamble may not be a good
one.  The gamble here is that if the segwit2x changes are rolled out
on time, and if the signatories accept the bit4 + bit1 signaling
proposals within BIP91, the launch will go smoother, as intended.  But
conversely, if either the segwit2x signatories balk about the Bit1
signaling OR if the timelines for segwit2mb are missed even by a bit,
it may cause the BIP148 chainsplit to be worse than it would be
without.  Given the frequent concerns raised in multiple places about
the aggressiveness of the segwit2x timelines, including the
non-hardfork timelines, this does not seem like a great gamble to be
making.

The reason I say it may make the chainsplit be worse than it would
otherwise be is that it may provide a false sense of safety for BIP148
that does not currently exist (and should not, as it is a
chainsplit).  That sense of safety would only be legitimate if the
segwit2x signatories were on board, and the segwit2x code effectively
enforced BIP148 simultaneously, neither of which are guaranteed.  If
users and more miners had a false sense that BIP148 was *not* going to
chainsplit from default / segwit2x, they might not follow the news if
suddenly the segwit2x plan were delayed for a few days.  While any
additional support would definitely be cheered on by BIP148
supporters, the practical reality might be that this proposal would
take BIP148 from the "unlikely to have any viable chain after flag day
without segwit2x" category into the "small but viable minority chain"
category, and even worse, it might strengthen the chainsplit just days
before segwit is activated on BOTH chains, putting the BIP148
supporters on the wrong pro-segwit, but still-viable chain.

If Core had taken a strong stance to include BIP148 into the client,
and if BIP148 support were much much broader, I would feel differently
as the gamble would be more likely to discourage a chainsplit (By
forcing the acceleration of segwit2x) rather than encourage it (by
strengthening an extreme minority chainsplit that may wind up on the
wrong side of two segwit-activated chains).  As it stands now, this
seems like a very dangerous attempt to compromise with a small but
vocal group that are the ones creating the threat to begin with.

Jared

On Tue, Jun 6, 2017 at 5:56 PM, James Hilliard via bitcoin-dev
 wrote:
> Due to the proposed calendar (https://segwit2x.github.io/) for the
> SegWit2x agreement being too slow to activate SegWit mandatory
> signalling ahead of BIP148 using BIP91 I would like to propose another
> option that miners can use to prevent a chain split ahead of the Aug
> 1st BIP148 activation date.
>
> The splitprotection soft fork is essentially BIP91 but using BIP8
> instead of BIP9 with a lower activation threshold and immediate
> mandatory signalling lock-in. This allows for a majority of miners to
> activate mandatory SegWit signalling and prevent a potential chain
> split ahead of BIP148 activation.
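[Editor's note: the practical difference between the BIP8 and BIP9 deployment styles mentioned above can be sketched as a single state transition. This is a simplified model for illustration, not the actual Bitcoin Core state machine; threshold and window handling are abbreviated.]

```python
# Simplified sketch of why a BIP8-style deployment cannot fail at timeout,
# unlike BIP9. Not the actual Core implementation; states are abbreviated.

def next_state(state, signalling_ratio, threshold, past_timeout, bip8):
    """One retarget-window state transition for a version-bits deployment."""
    if state == "STARTED":
        if signalling_ratio >= threshold:
            return "LOCKED_IN"          # enough miner signalling either way
        if past_timeout:
            # BIP9 gives up at timeout; BIP8 locks in mandatorily instead.
            return "LOCKED_IN" if bip8 else "FAILED"
    return state

# BIP9-style: below threshold at timeout -> FAILED
assert next_state("STARTED", 0.40, 0.65, True, bip8=False) == "FAILED"
# BIP8-style (as in splitprotection): same conditions -> LOCKED_IN
assert next_state("STARTED", 0.40, 0.65, True, bip8=True) == "LOCKED_IN"
# Either style: sufficient signalling over the window locks in early
assert next_state("STARTED", 0.70, 0.65, False, bip8=False) == "LOCKED_IN"
```

This is why "mandatory signalling will be active after Aug 1st regardless" holds for a BIP8-style fork: the timeout path converges on activation rather than failure.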
>
> This BIP allows for miners to respond to market forces quickly ahead
> of BIP148 activation by signalling for splitprotection. Any miners
> already running BIP148 should be encouraged to use splitprotection.
>
> 
>   BIP: splitprotection
>   Layer: Consensus (soft fork)
>   Title: User Activated Soft Fork Split Protection
>   Author: James Hilliard 
>   Comments-Summary: No comments yet.
>   Comments-URI:
>   Status: Draft
>   Type: Standards Track
>   Created: 2017-05-22
>   License: BSD-3-Clause
>CC0-1.0
> 
>
> ==Abstract==
>
> This document specifies a coordination mechanism for a simple majority
> of miners to prevent a chain split ahead of BIP148 activation.
>
> ==Definitions==
>
> "existing segwit deployment" refer to the BIP9 "segwit" deployment
> using bit 1, between November 15th 2016 and November 15th 2017 to
> activate BIP141, BIP143 and BIP147.
>
> ==Motivation==
>
> The biggest risk of BIP148 is an extended chain split, this BIP
> provides a way for a simple majority of miners to eliminate that risk.
>
> This BIP provides a way for a simple majority of miners to coordinate
> activation of the existing segwit deployment with less than 95%
> hashpower before BIP148 activation. Due to time constraints unless
> immediately deployed BIP91 will likely not be able to enforce
> mandatory signalling of segwit before the Aug 1st activation of
> BIP148. This BIP provides a method for rapid miner activation of
> SegWit mandatory signalling ahead of the BIP148 activation date. Since
> the primary goal of this BIP is to reduce the chance of an extended
> chain split as much as possible we activate using a simple miner
> majority of 65% over a 504 block interval rather than a higher
> percentage. This BIP also allows miners to signal their intention to
> run BIP148 in order to prevent a chain split.
>
> ==Specification==
>
> While this BIP is active, all blocks must set the nVersion header top
> 3 bits to 001 together with bit field (1<<1) (ac
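[Editor's note: the signalling rule and the 65%-of-504-blocks activation threshold quoted above can be sketched as follows. The top-bits mask follows Bitcoin Core's version-bits convention; the exact rounding behaviour at the 65% threshold is an assumption.]

```python
# Sketch of the quoted rules: while the BIP is active, a block's nVersion
# must use version-bits semantics (top 3 bits 001) and signal bit 1
# (the existing "segwit" deployment bit).

VERSIONBITS_TOP_MASK = 0xE0000000   # top 3 bits of nVersion
VERSIONBITS_TOP_BITS = 0x20000000   # the pattern 001 in those bits
SEGWIT_BIT = 1                      # bit 1, per the existing segwit deployment

def signals_segwit(n_version: int) -> bool:
    """True if a block header's nVersion signals segwit under version bits."""
    return ((n_version & VERSIONBITS_TOP_MASK) == VERSIONBITS_TOP_BITS
            and bool(n_version & (1 << SEGWIT_BIT)))

def window_meets_threshold(versions, threshold=0.65):
    """65% signalling over a 504-block window, per the splitprotection BIP."""
    assert len(versions) == 504
    return sum(signals_segwit(v) for v in versions) >= threshold * len(versions)

assert signals_segwit(0x20000002)        # version bits with bit 1 set
assert not signals_segwit(0x20000000)    # version bits but not signalling
assert not signals_segwit(0x00000002)    # legacy versioning, bit irrelevant
# 0.65 * 504 = 327.6, so 328 signalling blocks activate, 327 do not:
assert window_meets_threshold([0x20000002] * 328 + [0x20000000] * 176)
assert not window_meets_threshold([0x20000002] * 327 + [0x20000000] * 177)
```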

Re: [bitcoin-dev] User Activated Soft Fork Split Protection

2017-06-07 Thread Jared Lee Richardson via bitcoin-dev
> This is, by far, the safest way for miners to quickly defend against a chain 
> split, much better than a -bip148 option.   This allows miners to defend 
> themselves, with very little risk, since the defense is only activated if the 
> majority of miners do so. I would move for a very rapid deployment.   Only 
> miners would need to upgrade.   Regular users would not have to concern 
> themselves with this release.

FYI, even if very successful, this deployment and change may have a
severe negative impact on a small group of miners.  Any miners/pools
who are not actively following the forums, news, or these discussions
may be difficult to reach and communicate with in time, particularly
with language barriers.  Of those, any who are also either not
signaling segwit currently or are running an older software version
will have their blocks continuously and constantly orphaned, but may
not have any alarms or notifications set up for such an unexpected
failure.  That may or may not be a worthy consideration, but it is
definitely brusque and a harsh price to pay.  Considering the
opposition mentioned against transaction limits for the rare cases
where a very large transaction has already been signed, it seems that
this would be worthy of consideration.  For the few miners in that
situation, it does turn segwit from an optional softfork into a
punishing hardfork.

I don't think that's a sufficient reason alone to kill the idea, but
it should be a concern.

Jared

On Wed, Jun 7, 2017 at 7:10 AM, Erik Aronesty via bitcoin-dev
 wrote:
> This is, by far, the safest way for miners to quickly defend against a chain
> split, much better than a -bip148 option.   This allows miners to defend
> themselves, with very little risk, since the defense is only activated if
> the majority of miners do so. I would move for a very rapid deployment.
> Only miners would need to upgrade.   Regular users would not have to concern
> themselves with this release.
>
> On Wed, Jun 7, 2017 at 6:13 AM, James Hilliard via bitcoin-dev
>  wrote:
>>
>> I think even 55% would probably work out fine simply due to incentive
>> structures, once signalling is over 51% it's then clear to miners that
>> non-signalling blocks will be orphaned and the rest will rapidly
>> update to splitprotection/BIP148. The purpose of this BIP is to reduce
>> chain split risk for BIP148 since it's looking like BIP148 is going to
>> be run by a non-insignificant percentage of the economy at a minimum.
>>
>> On Wed, Jun 7, 2017 at 12:20 AM, Tao Effect  wrote:
>> > See thread on replay attacks for why activating regardless of threshold
>> > is a
>> > bad idea [1].
>> >
>> > BIP91 OTOH seems perfectly reasonable. 80% instead of 95% makes it more
>> > difficult for miners to hold together in opposition to Core. It gives
>> > Core
>> > more leverage in negotiations.
>> >
>> > If they don't activate with 80%, Core can release another BIP to reduce
>> > it
>> > to 75%.
>> >
>> > Each threshold reduction makes it both more likely to succeed, but also
>> > increases the likelihood of harm to the ecosystem.
>> >
>> > Cheers,
>> > Greg
>> >
>> > [1]
>> >
>> > https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014497.html
>> >
>> > --
>> > Please do not email me anything that you are not comfortable also
>> > sharing
>> > with the NSA.
>> >
>> > On Jun 6, 2017, at 6:54 PM, James Hilliard 
>> > wrote:
>> >
>> > This is a BIP8 style soft fork so mandatory signalling will be active
>> > after Aug 1st regardless.
>> >
>> > On Tue, Jun 6, 2017 at 8:51 PM, Tao Effect 
>> > wrote:
>> >
>> > What is the probability that a 65% threshold is too low and can allow a
>> > "surprise miner attack", whereby miners are kept offline before the
>> > deadline, and brought online immediately after, creating potential
>> > havoc?
>> >
>> > (Nit: "simple majority" usually refers to >50%, I think, might cause
>> > confusion.)
>> >
>> > -Greg Slepak
>> >
>> > --
>> > Please do not email me anything that you are not comfortable also
>> > sharing
>> > with the NSA.
>> >
>> > On Jun 6, 2017, at 5:56 PM, James Hilliard via bitcoin-dev
>> >  wrote:
>> >
>> > Due to the proposed calendar (https://segwit2x.github.io/) for the
>> > SegWit2x agreement being too slow to activate SegWit mandatory
>> > signalling ahead of BIP148 using BIP91 I would like to propose another
>> > option that miners can use to prevent a chain split ahead of the Aug
>> > 1st BIP148 activation date.
>> >
>> > The splitprotection soft fork is essentially BIP91 but using BIP8
>> > instead of BIP9 with a lower activation threshold and immediate
>> > mandatory signalling lock-in. This allows for a majority of miners to
>> > activate mandatory SegWit signalling and prevent a potential chain
>> > split ahead of BIP148 activation.
>> >
>> > This BIP allows for miners to respond to market forces quickly ahead
>> > of BIP148 activation by signalling for splitprotection. Any miners
>> > already running BIP148 should be encouraged to

Re: [bitcoin-dev] Compatibility-Oriented Omnibus Proposal

2017-06-02 Thread Jared Lee Richardson via bitcoin-dev
> The above decision may quickly become very controversial. I don't think
it's what most users had/have in mind when they discuss a "2MB+SegWit"
solution.
> With the current 1MB+SegWit, testing has shown us that normal usage
results in ~2 or 2.1MB blocks.
> I think most users will expect a linear increase when Base Size is
increased to 200 bytes and Total Weight is increased to 800 bytes.
With normal usage, the expected results would then be ~4 or 4.2MB blocks.

I think Calvin is correct here, the secondary limit is not what people
anticipated with the segwit + 2mb agreement.  It would not kill the
agreement for me, but it might for others.

What is the justification for the secondary limitation?  Is there hard data
to back this?  The quadratic hashing problem is frequently brought up, but
that is trivially handled with a hard 1mb transaction limit and on the
other thread there's talk/suggestions of an even lower limit.  Are there
any other reasons for this limitation, and is there data to justify those
concerns?  If not, this should be left out in favor of a transaction size
limit.  If so, hard data would go a long way to dealing with the
controversy this will create.
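[Editor's note: the arithmetic behind those block size figures can be sketched with segwit's weight formula. The witness fraction used below is an assumed typical value for illustration, not measured data.]

```python
# Rough arithmetic behind the "~2 or 2.1MB blocks" figure quoted above,
# using segwit's weight formula: weight = 3*base_size + total_size, with
# a 4,000,000 weight limit under the current rules.

MAX_WEIGHT = 4_000_000  # current segwit limit (1MB base size equivalent)

def max_total_size(witness_fraction, max_weight=MAX_WEIGHT):
    """Largest serialized block size fitting the weight limit, given the
    fraction of serialized bytes that are witness data."""
    base_fraction = 1.0 - witness_fraction
    # weight = 3*base + total = (3*base_fraction + 1) * total_size
    return max_weight / (3 * base_fraction + 1)

# No witness data at all degenerates to the old 1MB limit:
assert max_total_size(0.0) == 1_000_000
# With ~70% of bytes as witness data (an assumption), blocks land near 2.1MB:
assert 2_000_000 < max_total_size(0.7) < 2_200_000
# Doubling the weight limit scales the same usage pattern linearly to ~4.2MB,
# which is the linear increase users expect from "2MB+SegWit":
assert abs(max_total_size(0.7, 8_000_000) - 2 * max_total_size(0.7)) < 1e-6
```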


> Shaolin Fry’s “Mandatory activation of segwit deployment”[3] is included
to:
> > cause the existing "segwit" deployment to activate without needing to
release a new deployment.
> Both of the aforementioned activation options (“fast-activation” and
“flag-day activation”) serve to prevent unnecessary delays in the network
upgrade process, addressing a common criticism of the Scaling Agreement and
providing an opportunity for cooperation and unity instead.

This is likely to cause more controversy and unfortunately has the tightest
timelines.  Unlike the SW2mb working group's timelines, a hard-coded
timeline couldn't be changed with mutual agreement from the signers.

Given the chance of bit1 accidental activation without clear signaling for
the required bit4 2mb hard fork, I don't think the fair or acceptable
tradeoff is for flag day to require bit1 signaling only.  *Flag day should
be modified to accept either bit1 signaling, OR to accept bit4 signaling IF
the 80% threshold hasn't been met.*  In this way the anti-segwit working
group members are not in danger of an activated bit1 segwit without also
getting their portion of the compromise, the bit4 signaled HF.  If flag day
accepts bit4 OR bit1, AND bit4 requires both bit1 and bit4 once 80% is
reached, flag day is nearly guaranteed to get its stated desire within 1750
blocks (bit4 accepted until block 800; bit4+bit1 signaled afterwards until
95%), but without the chance that the WG signers won't get what they agreed
to.

*That seems like a minor compromise for BIP148.  Thoughts on this change to
flag day / BIP148?*

In addition, the aggressiveness of the timelines and the complexity of the
merged COOP proposal may require the BIP148 flag day to be pushed back.  I
would think some day in September is achievable, but I'm not sure if August
1st will be.

Jared


On Tue, May 30, 2017 at 3:20 PM, CalvinRechner via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> In principle, there is complete flexibility when it comes to the specific
> consensus details of the hard fork. One common suggestion has been to phase
> in a gradual blocksize increase beyond the initial 2MB cap included in
> Luke-Jr's proposal (a la BIP103); this would certainly be a welcome
> inclusion in the Omnibus Proposal, provided that is what we want. The
> reasoning behind incorporating Luke-Jr's 2MB limit and discount-rebalancing
> was to satisfy the conditions of the Scaling Agreement while ensuring
> maximum safety, minimum code discrepancies, and minimum controversy among
> the community; these priorities seem imperative, considering the extreme
> timeline constraints we are working under and the goals of the proposal. To
> put it more simply, the intent of the proposal was to serve as a template
> for the minimum viable fork that can achieve true consensus. A gradual
> increase to a larger size cap, especially if it were reasonably
> conservative, would be wholly in accordance with the Omnibus Proposal if
> that is what it takes to achieve the cooperation between community,
> industry, and developers in this critical moment of Bitcoin's history.
>
>
> The purpose of the Omnibus Proposal is singlefold: to achieve the goals of
> the Consensus 2017 Scaling Agreement in the most maximally-compatible way.
> We can minimize disruption and loss potential all around by solving these
> problems in a compatibility-oriented manner. It is possible to fulfill both
> the letter and the spirit of the Scaling Agreement, to the complete
> satisfaction of all involved, while preventing chain-split risks in the
> meantime.
>
>
> There is no justification for incompatibility with existing deployment
> approaches, when there is the possibility to work together towards our
> mutual goals instead. The most rational option is

Re: [bitcoin-dev] Hypothetical 2 MB hardfork to follow BIP148

2017-06-02 Thread Jared Lee Richardson via bitcoin-dev
> Maybe there's some hole in Jorge's logic and scrapping blockmaxsize has 
> quadratic hashing risks, and maybe James' 10KB is too ambitious; but even if 
> so, a simple 1MB tx size limit would clearly do the trick.  The broader point 
> is that quadratic hashing is not a compelling reason to keep blockmaxsize 
> post-HF: does someone have a better one?

I think this is exactly the right direction to head.  There are
arguments to be made for various maximum sizes... Maybe the limit
could be set to 1mb initially, and at a distant future block
height(years?) automatically drop to 500kb or 100kb?  That would give
anyone with existing systems or pre-signed transactions several years
to adjust to the change.  Notification could possibly be done with a
non-default parameter that must be set to continue using 100kb-1mb
transactions, so no one running modern software could claim they were
not informed when that future date hits.

I don't see any real advantages to continuing to support transactions
larger than 100kb excepting the need to update legacy use cases /
already signed transactions.
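[Editor's note: a back-of-envelope sketch of why a transaction size cap bounds the quadratic hashing problem discussed above. The per-input byte size is an assumption for illustration.]

```python
# With legacy (non-segwit) signature hashing, verifying each input
# re-hashes roughly the whole transaction, so total hashed bytes grow
# with the square of transaction size when a tx is stuffed with inputs.

BYTES_PER_INPUT = 150  # rough size of a legacy input; an assumption

def legacy_sighash_bytes(tx_size):
    """Approximate total bytes hashed to verify a tx filled with inputs."""
    n_inputs = tx_size // BYTES_PER_INPUT
    return n_inputs * tx_size  # each input hashes ~the whole tx

one_mb = legacy_sighash_bytes(1_000_000)    # on the order of gigabytes hashed
hundred_kb = legacy_sighash_bytes(100_000)  # on the order of megabytes hashed

# Shrinking the limit 10x cuts worst-case hashing ~100x, which is why a
# transaction size cap bounds the problem regardless of block size:
assert one_mb // hundred_kb == 100
```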

On Tue, May 30, 2017 at 8:07 PM, Jacob Eliosoff via bitcoin-dev
 wrote:
> Maybe there's some hole in Jorge's logic and scrapping blockmaxsize has
> quadratic hashing risks, and maybe James' 10KB is too ambitious; but even if
> so, a simple 1MB tx size limit would clearly do the trick.  The broader
> point is that quadratic hashing is not a compelling reason to keep
> blockmaxsize post-HF: does someone have a better one?
>
>
>
> On May 30, 2017 9:46 PM, "Jean-Paul Kogelman via bitcoin-dev"
>  wrote:
>>
>> That would invalidate any pre-signed transactions that are currently out
>> there. You can't just change the rules out from under people.
>>
>>
>> On May 30, 2017, at 4:50 PM, James MacWhyte via bitcoin-dev
>>  wrote:
>>
>>
>>>
>>>  The 1MB classic block size prevents quadratic hashing
>>> problems from being any worse than they are today.
>>>
>>
>> Add a transaction-size limit of, say, 10kb and the quadratic hashing
>> problem is a non-issue. Donezo.
>>
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>>


Re: [bitcoin-dev] A Small Modification to Segwit

2017-04-09 Thread Jared Lee Richardson via bitcoin-dev
I can speak from personal experience regarding another very prominent
altcoin that attempted to utilize an asic-resistant proof of work
algorithm: it is only a matter of time before the "asic resistant"
algorithm gets its own ASICs.  The more complicated the algorithm, the
more secretively the ASIC technology is developed.  Even without it,
multi-megawatt gpu farms have already formed in the areas of the world with
low energy costs.  I'd support the goal if I thought it possible, but I
really don't think centralization of mining can be prevented.

On Apr 9, 2017 1:16 PM, "Erik Aronesty via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Curious: I'm not sure why a serious discussion of POW change is not on the
> table as a part of a longer-term roadmap.
>
> Done right, a ramp down of reliance on SHA-256 and a ramp-up on some of
> the proven, np-complete graph-theoretic or polygon manipulation POW would
> keep Bitcoin in commodity hardware and out of the hands of centralized
> manufacturing for many years.
>
> Clearly a level-playing field is critical to keeping centralization from
> being a "defining feature" of Bitcoin over the long term.   I've heard the
> term "level playing field" bandied about quite a bit.   And it seems to me
> that the risk of state actor control and botnet attacks is less than
> state-actor manipulation of specialized manufacturing of "SHA-256 forever"
> hardware.   Indeed, the reliance on a fairly simple hash seems less and
> less likely a "feature" and more of a baggage.
>
> Perhaps regular, high-consensus POW changes might even be *necessary* as a
> part of good maintenance of cryptocurrency in general.   Killing the
> existing POW, and using an as-yet undefined, but deployment-bit ready POW
> field to flip-flop between the current and the "next one" every 8 years or
> or so, with a ramp down beginning in the 7th year  A stub function that
> is guaranteed to fail unless a new consensus POW is selected within 7
> years.
>
> Something like that?
>
> Haven't thought about it *that* much, but I think the network would
> respond well to a well known cutover date.   This would enable
> rapid-response to quantum tech, or some other needed POW switch as well...
> because the mechanisms would be in-place and ready to switch as needed.
>
> Lots of people seem to panic over POW changes as "irresponsible", but it's
> only irresponsible if done irresponsibly.
>
>
> On Fri, Apr 7, 2017 at 9:48 PM, praxeology_guy via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Jimmy Song,
>>
>> Why would the actual end users of Bitcoin (the long term and short term
>> owners of bitcoins) who run fully verifying nodes want to change Bitcoin
>> policy in order to make their money more vulnerable to 51% attack?
>>
>> If anything, we would be making policy changes to prevent the use of
>> patented PoW algorithms instead of making changes to enable them.
>>
>> Thanks,
>> Praxeology Guy
>>


Re: [bitcoin-dev] BIP proposal: Inhibiting a covert attack on the Bitcoin POW function

2017-04-06 Thread Jared Lee Richardson via bitcoin-dev
> Just checking to see if I understand this optimization correctly. In order to 
> find merkle roots in which the rightmost 32 bits are identical (i.e. partial 
> hash collisions), we want to compute as many merkle root hashes as quickly as 
> possible. The fastest way to do this is to take the top level of the Merkle 
> tree, and to collect a set of left branches and right branches which can be 
> independently manipulated. While the left branch can easily be manipulated by 
> changing the extranonce in the coinbase transaction, the right branch would 
> need to be modified by changing one of the transactions in the right branch 
> or by changing the number of transactions in the right branch. Correct so far?

Envisioning it in my head and trying to read the white paper, it
sounds like the process for a non-stratum mining farm would be this:

On a primary server with sufficient memory, calculate ~4k-6k valid
left-side merkle tree roots and ~4k-6k right-side merkle tree roots.
Then try hashing every left-side option with every right-side option.
I'm not sure if modern asic chips are sufficiently generic that they
can also sha256-double-hash those combinations, but it seems logical
to assume that the permutations of those hashes could be computed on
an asic, perhaps via additional hardware installed on the server.
Hashing these is easier if there are fewer steps, i.e., fewer
transactions.

Out of this will come N (2-16 at most, higher not needed) colliding
merkle roots where the last 4 bytes are identical.  Those N different
merkle combinations are what can be used on the actual mining devices,
and those are all that needs to be sent for the optimization to work.

On the actual mining device, what is done is to take the identical
(colliding) right 4 bytes of the merkle root and hash them with one
nonce value.  Since you have N (assume 8) inputs that all work with
the same value, calculating this single hash for one nonce is
equivalent to calculating 8 nonce hashes during the normal process,
and this step is 1/4th of the normal hashing process.  This hash (or
mid-value?) is then sent to 8 different cores, which complete the
remaining 3 hash steps with each given collision value.  Then you
increment the nonce once and start over.

This works out to a savings of 25% * (7 / 8) where N=8, assuming the
compressor and expander steps of SHA2 take computationally the same
amount of time.

Greg, or someone else, can you confirm that this is the right
understanding of the approach?
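For what it's worth, the collision search described above can be
sketched in a toy script.  This is purely illustrative: the "left" and
"right" values stand in for precomputed top-level merkle branches, the
inputs are invented, and a 2-byte tail is used to keep the demo fast
where the real attack collides on 4 bytes.

```python
# Toy version of the merkle-root tail-collision search (illustrative only:
# real mining hashes actual coinbase/transaction data; these inputs are
# invented, and a 2-byte tail stands in for the attack's 4-byte tail).
import hashlib
from collections import defaultdict

def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def find_tail_collisions(n_left: int, n_right: int, tail_bytes: int = 2):
    # Precompute independent left and right branch hashes once.
    lefts  = [dsha256(b"left-%d" % i) for i in range(n_left)]
    rights = [dsha256(b"right-%d" % j) for j in range(n_right)]
    buckets = defaultdict(list)
    # Try every left/right combination, bucketing by the tail bytes.
    for i, left in enumerate(lefts):
        for j, right in enumerate(rights):
            root = dsha256(left + right)          # candidate merkle root
            buckets[root[-tail_bytes:]].append((i, j))
    # Keep only tails shared by 2+ candidate roots (partial collisions).
    return {tail: pairs for tail, pairs in buckets.items() if len(pairs) > 1}

# 300 x 300 = 90,000 candidates into 65,536 two-byte tails: at least one
# collision is guaranteed by pigeonhole alone.
collisions = find_tail_collisions(300, 300)
print(len(collisions) > 0)
```

The quadratic fan-out is the point: ~5k left branches and ~5k right
branches give ~25 million candidate roots for only ~10k branch
computations.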

> I have not seen or heard of any hardware available that can run more 
> efficiently using getblocktemplate.

As above, it doesn't require such a massive change.  They just need to
retrieve N different sets of work from the central server instead of 1
set of work.  The central server itself might need substantial
bandwidth if it farmed out the merkle-root hashing computational space
to miners.  Greg, is that what you're assuming they are doing?  Now
that I think about it, even that situation could be improved.  Suppose
you have N miners who can do either a merkle-tree combinatoric
double-sha or a block-nonce double-sha.  The central server calculates
the left and right merkle treeset to be combined and also assigns each
miner a unique workspace within those combinatorics.  The miners
compute each hash in their workspace and shard the results among
themselves according to the last 16 bits.  Each miner then needs only
the memory for 1/Nth of the workspace, and can report back to the
central server only the highest number of collisions it has found
until the central server is satisfied and returns the miners to normal
(collided) mining.

Seems quite workable in a large mining farm to me, and would allow the
collisions to be found very, very quickly.
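The sharding idea above can be sketched as follows.  A toy
illustration, with invented values: the useful property is that any
two candidate roots with colliding tails necessarily land on the same
miner, so collisions never need cross-miner comparison.

```python
# Toy illustration of sharding the collision workspace by the low 16 bits
# of the candidate merkle root.  All inputs are invented; the point is only
# that colliding tails always map to the same shard.
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def shard_of(root: bytes, n_miners: int) -> int:
    # Shard on the last 2 bytes (16 bits), so equal tails => equal shard.
    return int.from_bytes(root[-2:], "little") % n_miners

N_MINERS = 8
a = dsha256(b"candidate-root-a")
b = b"\x00" * 30 + a[-2:]   # different root, same 2-byte tail by construction

same_shard = shard_of(a, N_MINERS) == shard_of(b, N_MINERS)
print(same_shard)  # True: colliding tails always shard together
```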

That said, it strikes me that there may be some statistical method by
which we can isolate which pools seem to have used this approach
against the background noise of other pools.  Hmm...

Jared



On Wed, Apr 5, 2017 at 7:10 PM, Jonathan Toomim via bitcoin-dev
 wrote:
> Just checking to see if I understand this optimization correctly. In order to 
> find merkle roots in which the rightmost 32 bits are identical (i.e. partial 
> hash collisions), we want to compute as many merkle root hashes as quickly as 
> possible. The fastest way to do this is to take the top level of the Merkle 
> tree, and to collect a set of left branches and right branches which can be 
> independently manipulated. While the left branch can easily be manipulated by 
> changing the extranonce in the coinbase transaction, the right branch would 
> need to be modified by changing one of the transactions in the right branch 
> or by changing the number of transactions in the right branch. Correct so far?
>
> With the stratum mining protocol, the server (the pool) includes enough 
> information for the coinbase transaction to be modified by stratum client 
> (the miner), but it does not include any information about the right s

Re: [bitcoin-dev] BIP proposal: Inhibiting a covert attack on the Bitcoin POW function

2017-04-06 Thread Jared Lee Richardson via bitcoin-dev
To me, all of these miss the main objection.  If a miner found an
optimization and kept it for themselves, that's their prerogative.
But if that optimization also happens to directly discourage the
growth and improvement of the protocol in many unforeseen ways, and it
also encourages the miner to include fewer transactions per block,
that directly hurts Bitcoin and its future.  Something should clearly
be done about it when the latter is at issue.  I agree with you that
the former is a relative nonissue.

On Wed, Apr 5, 2017 at 11:24 PM, Jonathan Toomim via bitcoin-dev
 wrote:
> Ethically, this situation has some similarities to the DAO fork. We have an 
> entity who closely examined the code, found an unintended characteristic of 
> that code, and made use of that characteristic in order to gain tens of 
> millions of dollars. Now that developers are aware of it, they want to modify 
> the code in order to negate as much of the gains as possible.
>
> There are differences, too, of course: the DAO attacker was explicitly 
> malicious and stole Ether from others, whereas Bitmain is just optimizing 
> their hardware better than anyone else and better than some of us think they 
> should be allowed to.
>
> In both cases, developers are proposing that the developers and a majority of 
> users collude to reduce the wealth of a single entity by altering the 
> blockchain rules.
>
> In the case of the DAO fork, users were stealing back stolen funds, but that 
> justification doesn't apply in this case. On the other hand, in this case 
> we're talking about causing someone a loss by reducing the value of hardware 
> investments rather than forcibly taking back their coins, which is less 
> direct and maybe more justifiable.
>
> While I don't like patented mining algorithms, I also don't like the idea of 
> playing Calvin Ball on the blockchain. Rule changes should not be employed as 
> a means of disempowering and impoverishing particular entities without very 
> good reason. Whether patenting a mining optimization qualifies as good reason 
> is questionable.
>


Re: [bitcoin-dev] BIP proposal: Inhibiting a covert attack on the Bitcoin POW function

2017-04-06 Thread Jared Lee Richardson via bitcoin-dev
> We are not going to invalidate covert asicboost without a fight. And we are 
> working with a system that actively (and is demonstrably very effective at 
> doing it) resists changes which are contentious. This is definitely a 
> contentious change, because an important part of the community (the miners) 
> is going to be actively resisting it.

I'd just like to point out, invalidating asicboost has only a very
limited number of potential detractors.  Only a mining farm that
self-mined and used custom software would be able to exploit this.
Every other mining farm on the planet, plus any users wishing for more
transactions to be included in blocks would be in favor of this,
assuming the theory that it favors fewer transactions is correct.
That makes it less contentious than many other alternatives.  It might
even force the mining operation(s) in question to flip and support SW
in order to avoid losing face and/or appearing guilty.

As an additional plus, nearly all of the BU crowd and most
BU-supporting miners would have little reason to object to blocking
Asicboost - based on philosophy alone, if not on practical
considerations.

Jared

On Wed, Apr 5, 2017 at 8:23 PM, David Vorick via bitcoin-dev
 wrote:
> I have a practical concern related to the amount of activation energy
> required to get something like this through. We are talking about
> implementing something that would remove tens to hundreds of millions of
> dollars of mining revenue for miners who have already gambled that this
> income would be available to them.
>
> That's not something they are going to let go of without a fight, and we've
> already seen this with the segwit resistance. Further, my understanding is
> that this makes a UASF a lot more difficult. Mining hardware that has unique
> optimizations on one chain only can resist a UASF beyond a simple economic
> majority, because they can do more hashes on the same amount of revenue.
> Threshold for success is no longer 51%, especially if you are expecting the
> miners to struggle (and this is a case where they have a very good reason to
> struggle). Any resistance from the hashrate during the early days of a UASF
> will inevitably cause large reorgs for older nodes, and is not much better
> than a hardfork.
>
> I don't know what the right answer is. But I know that we are not going to
> get segwit without a fight. We are not going to invalidate covert asicboost
> without a fight. And we are working with a system that actively (and is
> demonstrably very effective at doing it) resists changes which are
> contentious. This is definitely a contentious change, because an important
> part of the community (the miners) is going to be actively resisting it.
>
> I urge everybody to realize how difficult something like this is going to be
> to pull off. We are literally talking about invalidating hardware (or at
> least the optimized bits). It's only going to succeed if everybody is
> conclusively on board. As you consider proposals, realize that anything
> which is not the simplest and least contentious is already dead.
>


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-04-01 Thread Jared Lee Richardson via bitcoin-dev
That's a quoted general statement that is highly subjective, not a
description of an attack.  If you can't articulate a specific attack vector
that we're defending against, such a defense has no value.

On Apr 1, 2017 12:41 AM, "Eric Voskuil"  wrote:


On 03/31/2017 11:18 PM, Jared Lee Richardson wrote:
>> If a typical personal computer cannot run a node there is no
>> security.
>
> If you can't describe an attack that is made possible when typical
> personal computers can't run nodes, this kind of logic has no place
> in this discussion.

"Governments are good at cutting off the heads of a centrally
controlled networks..."

e


Re: [bitcoin-dev] Segwit2Mb - combined soft/hard fork - Request For Comments

2017-04-01 Thread Jared Lee Richardson via bitcoin-dev
> Remember that the "hashpower required to secure bitcoin" is determined
> as a percentage of total Bitcoins transacted on-chain in each block

Can you explain this statement a little better?  What do you mean by
that?  What does the total bitcoins transacted have to do with
hashpower required?
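The dependence Matt describes in the quoted message can be made
concrete with a toy revenue model.  All numbers here are illustrative
and not taken from the thread; the structure is simply subsidy plus
fees, with fees treated as a fraction of on-chain value transacted.

```python
# Toy miner-revenue model: security spend per block = subsidy + fees, with
# fees modeled as a fraction of on-chain value transacted.  All numbers are
# illustrative assumptions, not figures from the thread.
def miner_revenue_btc(subsidy_btc: float, onchain_value_btc: float,
                      fee_fraction: float) -> float:
    return subsidy_btc + onchain_value_btc * fee_fraction

# Hold on-chain value and fee fraction fixed, two halvings apart:
now   = miner_revenue_btc(12.5,  20_000, 0.0005)   # 22.5 BTC per block
later = miner_revenue_btc(3.125, 20_000, 0.0005)   # 13.125 BTC per block
print(now, later)
```

As the subsidy halves, either on-chain value, the fee fraction, or the
price must rise for the security spend to hold - which is why the
hashpower required ends up expressed as a percentage of bitcoins
transacted on-chain.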

On Fri, Mar 31, 2017 at 2:22 PM, Matt Corallo via bitcoin-dev
 wrote:
> Hey Sergio,
>
> You appear to have ignored the last two years of Bitcoin hardfork
> research and understanding, recycling instead BIP 102 from 2015. There
> are many proposals which have pushed the state of hard fork research
> much further since then, and you may wish to read some of the posts on
> this mailing list listed at https://bitcoinhardforkresearch.github.io/
> and make further edits based on what you learn. Your goal of "avoid
> technical changes" appears to not have any basis outside of perceived
> compromise for compromise sake, only making such a hardfork riskier
> instead.
>
> At a minimum, in terms of pure technical changes, you should probably
> consider (probably among others):
>
> a) Utilizing the "hard fork signaling bit" in the nVersion of the block.
> b) Either limiting non-SegWit transactions in some way to fix the n**2
> sighash and FindAndDelete runtime and memory usage issues or fix them by
> utilizing the new sighash type which many wallets and projects have
> already implemented for SegWit in the spending of non-SegWit outputs.
> c) You really should have replay protection in any HF. The clever fix from
> Spoonnet for poor scaling of optionally allowing non-SegWit outputs to
> be spent with SegWit's sighash provides this all in one go.
> d) You may wish to consider the possibility of tweaking the witness
> discount and possibly discounting other parts of the input - SegWit went
> a long ways towards making removal of elements from the UTXO set cheaper
> than adding them, but didn't quite get there, you should probably finish
> that job. This also provides additional tuneable parameters to allow you
> to increase the block size while not having a blowup in the worst-case
> block size.
> e) Additional commitments at the top of the merkle root - both for
> SegWit transactions and as additional space for merged mining and other
> commitments which we may wish to add in the future; this should likely
> be implemented as an "additional header" ala Johnson Lau's Spoonnet proposal.
>
> Additionally, I think your parameters here pose very significant risk to
> the Bitcoin ecosystem broadly.
>
> a) Activating a hard fork with less than 18/24 months (and even then...)
> from a fully-audited and supported release of full node software to
> activation date poses significant risks to many large software projects
> and users. I've repeatedly received feedback from various folks that a
> year or more is likely required in any hard fork to limit this risk, and
> limited pushback on that given the large increase which SegWit provides
> itself buying a ton of time.
>
> b) Having a significant discontinuity in block size increase only serves
> to confuse and mislead users and businesses, forcing them to rapidly
> adapt to a Bitcoin which changed overnight both by hardforking, and by
> fees changing suddenly. Instead, having the hard fork activate technical
> changes, and then slowly increasing the block size over the following
> several years keeps things nice and continuous and also keeps us from
> having to revisit ye old blocksize debate again six months after activation.
>
> c) You should likely consider the effect of the many technological
> innovations coming down the pipe in the coming months. Technologies like
> Lightning, TumbleBit, and even your own RootStock could significantly
> reduce fee pressure as transactions move to much faster and more
> featureful systems.
>
> Commitments to aggressive hard fork parameters now may leave miners
> without much revenue as far out as the next halving (which current
> transaction growth trends are indicating we'd just only barely reach 2MB
> of transaction volume, let alone if you consider the effects of users
> moving to systems which provide more features for Bitcoin transactions).
> This could lead to a precipitous drop in hashrate as miners are no
> longer sufficiently compensated.
>
> Remember that the "hashpower required to secure bitcoin" is determined
> as a percentage of total Bitcoins transacted on-chain in each block, so
> as subsidy goes down, miners need to be paid with fees, not just price
> increases. Even if we were OK with hashpower going down compared to the
> value it is securing, betting the security of Bitcoin on its price
> rising exponentially to match decreasing subsidy does not strike me as a
> particularly inspiring tradeoff.
>
> There aren't many great technical solutions to some of these issues, as
> far as I'm aware, but it's something that needs to be incredibly
> carefully considered before betting the continued security of Bitcoin on
> exponential on-chain gr

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-04-01 Thread Jared Lee Richardson via bitcoin-dev
> If a typical personal computer cannot run a node
> there is no security.

If you can't describe an attack that is made possible when typical
personal computers can't run nodes, this kind of logic has no place in
this discussion.

On Fri, Mar 31, 2017 at 4:13 PM, Eric Voskuil via bitcoin-dev
 wrote:
> On 03/31/2017 02:23 PM, Rodney Morris via bitcoin-dev wrote:
>> If the obsession with every personal computer being able to run a
>> full node continues then bitcoin will be consigned to the dustbin
>> of history,
>
> The cause of the block size debate is the failure to understand the
> Bitcoin security model. This failure is perfectly exemplified by the
> above statement. If a typical personal computer cannot run a node
> there is no security.
>
> e


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-04-01 Thread Jared Lee Richardson via bitcoin-dev
> So your cluster isn't going to need to plan to handle 15k transactions per 
> second, you're really looking at more like 200k or even 500k transactions per 
> second to handle peak-volumes. And if it can't, you're still going to see 
> full blocks.

When I first began to enter the blocksize debate slime-trap that we
have all found ourselves in, I had the same line of reasoning that you
have now.  It is clearly untenable that blockchains are an incredibly
inefficient and poorly designed system for massive scales of
transactions, as I'm sure you would agree.  Therefore, I felt it was
an important point for people to accept this reality now and stop
trying to use Blockchains for things they weren't good for, as much
for their own good as anyone elses.  I backed this by calculating some
miner fee requirements as well as the very issue you raised.  A few
people argued with me rationally, and gradually I was forced to look
at a different question: Granted that we cannot fit all desired
transactions on a blockchain, how many CAN we effectively fit?

It took another month before I actually changed my mind.  What changed
it was when I tried to make estimations, assuming all the reasonable
trends I could find held, about future transaction fees and future
node costs.  Did they need to go up exponentially?  How fast, what
would we be dealing with in the future?  After seeing the huge
divergence in node operational costs without size increases($3 vs
$3000 after some number of years stands out in my memory), I tried to
adjust various things, until I started comparing the costs in BTC
terms.  I eventually realized that comparing node operational costs in
BTC per unit time versus transaction costs in dollars revealed that
node operational costs per unit time could decrease without causing
transaction fees to rise.  The transaction fees still had to hit $1 or
$2, sometimes $4, to remain a viable protection, but otherwise they
could become stable around those points and node operational costs per
unit time still decreased.

None of that may mean anything to you, so you may ignore it all if you
like, but my point in all of that is that I once used similar logic,
but any disagreements we may have does not mean I magically think as
you implied above.  Some people think blockchains should fit any
transaction of any size, and I'm sure you and I would both agree
that's ridiculous.  Blocks will nearly always be full in the future.
There is no need to attempt to handle unusual volume increases - The
fee markets will balance it and the use-cases that can barely afford
to fit on-chain will simply have to wait for a while.  The question is
not "can we handle all traffic," it is "how many use-cases can we
enable without sacrificing our most essential features?"  (And for
that matter, what is each essential feature, and what is it worth?)

There are many distinct cut-off points that we could consider.  On the
extreme end, Raspberry Pi's and toasters are out.  Data-bound mobile
phones are out for at least the next few years if ever.  Currently the
concern is around home user bandwidth limits.  The next limit after
that may either be the CPU, memory, or bandwidth of a single top-end
PC.  The limit after that may be the highest dataspeeds that large,
remote Bitcoin mining facilities are able to afford, but after fees
rise and a few years, they may remove that limit for us.  Then the
next limit might be on the maximum amount of memory available within a
single datacenter server.

At each limit we consider, we have a choice of killing off a number of
on-chain usecases versus the cost of losing the nodes who can't reach
the next limit effectively.  I have my inclinations about where the
limits would be best set, but the reality is I don't know the numbers
on the vulnerability and security risks associated with various node
distributions.  I'd really like to, because if I did I could begin
evaluating the costs on each side.

> How much RAM do you need to process blocks like that?

That's a good question, and one I don't have a good handle on.  How
does Bitcoin's current memory usage scale?  It can't be based on the
UTXO set, which is 1.7 GB while my node is only using ~450 MB of RAM.
How does RAM consumption increase with a large block versus small
ones?  Are there trade-offs that can be made to write to disk if RAM
usage grew too large?

If that proved to be a prohibitively large growth number, that becomes
a worthwhile number to consider for scaling.  Of note, you can
currently buy EC2 instances with 256gb of ram easily, and in 14 years
that will be even higher.

> So you have to rework the code to operate on a computer cluster.

I believe this is exactly the kind of discussion we should be having
14 years before it might be needed.  Also, this wouldn't be unique -
Some software I have used in the past (graphite metric collection)
came pre-packaged with the ability to scale out to multiple machines
split loads and replicate the data, and so cou

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Jared Lee Richardson via bitcoin-dev
I guess I should caveat that: "a rounding error" is a bit of an
exaggeration - mostly because I previously assumed that it would take
14 years for the network to reach such a level, something I didn't say
and that you might not grant me.

I don't know why paypal has multiple datacenters, but I'm guessing it
probably has a lot more to do with everything else they do -
interface, support, tax compliance, replication, redundancy - than it
does with the raw numbers of transaction volumes.

What I do know is the math, though.  Worldwide tx volume =
426,000,000,000 in 2015.  Assuming a tx size of ~500 bytes, that's 669
terabytes of data per year.  At a hard drive cost of $0.021 per GB,
that's $36k a year or so, and it declines ~14% a year.

The bandwidth is the really big cost.  You are right that if this
hypothetical node also had to support historical syncing, the numbers
would probably be unmanagable.  But that can be solved with a simple
checkpointing system for the vast majority of users, and nodes could
solve it by not supporting syncing / reducing peer count.  With a peer
count of 25 I measured ~75 Gb/month with today's blocksize cap.  That
works out to roughly 10 relays(sends+receives) per transaction
assuming all blocks were full, which was a pretty close approximation.
The bandwidth data of our 426 billion transactions per year works out
to 942 mbit/s.  That's 310 Terabytes per month of bandwidth - At
today's high-volume price of 0.05 per GB, that's $18,500 a month or
$222,000 a year.  Plus the $36k for storage per year brings it to
~$250k per year.  Not a rounding error, but within the rough costs of
running an exchange - a team of 5 developers works out to ~$400-600k a
year, and the cost of compliance with EU and U.S. entities (including
lawyers) runs upwards of a million dollars a year.  Then there's the
support department, probably ~$100-200k a year.

The reason I said a rounding error was that I assumed that it would
take until 2032 to reach that volume of transactions (Assuming
+80%/year of growth, which is our 4-year and 2-year historical average
tx/s growth).  If hard drive prices decline by 14% per year, that cost
becomes $3,900 a year, and if bandwidth prices decline by 14% a year
that cost becomes $1800 a month($21,600 a year).  Against a
multi-million dollar budget, even 3x that isn't a large concern,
though not, as I stated, a rounding error.  My bad.

I didn't approximate for CPU usage, as I don't have any good estimates
for it, and I don't have significant reason to believe that it is a
higher cost than bandwidth, which seems to be the controlling cost
compared to adding CPU's.
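As a sanity check, the math above can be re-derived parametrically.
This is a minimal sketch assuming a flat 500 bytes per transaction and
10 relays; note these inputs give totals somewhat below the figures
quoted above, so treat every input as a tunable assumption rather than
a settled value - the totals scale linearly in tx size and relay
count.

```python
# Parametric re-derivation of the node-cost arithmetic.  Inputs default to
# the thread's stated assumptions (426B tx/year, ~500 bytes/tx, ~10 relays,
# $0.021/GB storage, $0.05/GB bandwidth); all are tunable assumptions.
def node_costs(tx_per_year=426e9, tx_bytes=500, relays=10,
               hdd_usd_per_gb=0.021, bw_usd_per_gb=0.05):
    gb = 1e9
    storage_gb  = tx_per_year * tx_bytes / gb            # on-disk growth/yr
    bw_gb_month = storage_gb * relays / 12               # relayed data/month
    mbit_s = tx_per_year * tx_bytes * relays * 8 / (365 * 86400) / 1e6
    return {"storage_usd_year":    storage_gb * hdd_usd_per_gb,
            "bandwidth_usd_month": bw_gb_month * bw_usd_per_gb,
            "sustained_mbit_s":    mbit_s}

print(node_costs())
```

Doubling the assumed tx size or relay count doubles every line item,
which is why pinning down those two assumptions matters more than the
hardware price trends.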

> I'm not going to take the time to refute everything you've been saying

Care to respond to the math?

> This whole thread has been absurdly low quality.

Well, we agree on something at least.

On Fri, Mar 31, 2017 at 9:14 AM, David Vorick  wrote:
> No one is suggesting anything like this.  The cost of running a node
> that could handle 300% of the 2015 worldwide nonbitcoin transaction
> volume today would be a rounding error for most exchanges even if
> prices didn't rise.
>
>
> Then explain why PayPal has multiple datacenters. And why Visa has multiple
> datacenters. And why the banking systems have multiple datacenters each.
>
> I'm guessing it's because you need that much juice to run a global payment
> system at the transaction volumes that they run at.
>
> Unless you have professional experience working directly with transaction
> processors handling tens of millions of financial transactions per day, I
> think we can fully discount your assessment that it would be a rounding
> error in the budget of a major exchange or Bitcoin processor to handle that
> much load. And even if it was, it wouldn't matter because it's extremely
> important to Bitcoin's security that its everyday users are able to and are
> actively running full nodes.
>
> I'm not going to take the time to refute everything you've been saying but I
> will say that most of your comments have demonstrated a similar level of
> ignorance as the one above.
>
> This whole thread has been absurdly low quality.


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Jared Lee Richardson via bitcoin-dev
> Peter Todd has demonstrated this on mainstream SPV wallets,
> https://www.linkedin.com/pulse/peter-todds-fraud-proofs-talk-mit-bitcoin-expo-2016-mark-morris

Correct me if I'm wrong, but none of that is possible if the client
software were electrum-like and used two independent sources for
verification.  No?

> Do thought experiments and take it to the extremes where nobody runs a node, 
> what can miners do now which they could not do before?

This and the next point are just reductio ad absurdum, since no one is
suggesting anything of the sort.  Even in that situation, I can't think
of anything miners could do if clients used more than one independent
source for verification, a la the electrum question above.
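The "two independent sources" idea can be sketched as follows.  This
is a hypothetical illustration only: the Source signature and the
backends are invented, and no real Electrum protocol calls are made.

```python
# Hypothetical sketch of cross-checked verification: an SPV-style client
# accepts a payment only when every independent backend reports the same
# confirming block hash.  The Source type and backends are invented for
# illustration; this is not a real Electrum client.
from typing import Callable, Optional

Source = Callable[[str], Optional[str]]   # txid -> confirming block hash

def cross_checked(txid: str, sources: "list[Source]") -> bool:
    answers = {source(txid) for source in sources}
    # Any disagreement, or any source unable to confirm, means reject.
    return len(answers) == 1 and None not in answers

honest = lambda txid: "blockhash-abc"     # two honest, agreeing backends
forger = lambda txid: "blockhash-forged"  # one backend serving a fake chain

accept = cross_checked("some-txid", [honest, honest])
reject = cross_checked("some-txid", [honest, forger])
print(accept, reject)
```

A single dishonest miner or server can only cause a rejection, not an
acceptance, as long as at least one source is honest and independent.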

> Why don't exchanges run SPV nodes?

No one is suggesting anything like this.  The cost of running a node
that could handle 300% of the 2015 worldwide nonbitcoin transaction
volume today would be a rounding error for most exchanges even if
prices didn't rise.



On Fri, Mar 31, 2017 at 1:19 AM, Luv Khemani  wrote:
>> Err, no, that's what happens when you double click the Ethereum icon
>> instead of the Bitcoin icon.  Just because you run "Bitcoin SPV"
>> instead of "Bitcoin Verify Everyone's Else's Crap" doesn't mean you're
>> somehow going to get Ethereum payments.  Your verification is just
>> different and the risks that come along with that are different.  It's
>> only confusing if you make it confusing.
>
> This is false. You could get coins which don't even exist  as long as a
> miner mined the invalid transaction.
> Peter Todd has demonstrated this on mainstream SPV wallets,
> https://www.linkedin.com/pulse/peter-todds-fraud-proofs-talk-mit-bitcoin-expo-2016-mark-morris
>
> The only reason SPV wallets do not accept ethereum payments is because of
> transaction/block format differences.
> SPV wallets have no clue what is a valid bitcoin, they trust miners fully.
>
> In the event of a hardfork, SPV wallets will blindly follow the longest
> chain.
>
>> If every block that is mined for them is deliberately empty because of
>> an attacker, that's nonfunctional.  You can use whatever semantics you
>> want to describe that situation, but that's clearly what I meant.
>
> Not sure why you are bringing this up, this is not the case today nor does
> it have anything to do with blocksize.
>
>> As above, if someone operates Bitcoin in SPV mode they are not
>> magically at risk of getting Dashcoins.  They send and receive
>> Bitcoins just like everyone else running Bitcoin software.  There's no
>> confusion about it and it doesn't have anything to do with hashrates
>> of anyone.
>
> As mentioned earlier, you are at risk of receiving made up money.
> SPV has everything to do with hashrate, it trusts hashrate fully.
> Crafting a bitcoin transaction paying you money that i do not have is not
> difficult, as long as a miner mines a block with it, your SPV wallet will
> accept it.
>
>> The debate is a choice between nodes paying more to allow greater growth
>> and adoption, or nodes constraining adoption in favor of debatable
>> security concerns.
>
> Onchain transactions are not the only way to use Bitcoin the currency.
> Trades you do on an exchange are not onchain, yet transacted with Bitcoin.
>
>> And even if there was, the software would choose it for you?
>
> People choose the software, not the other way round.
>
>> Yes you do, if the segment options are known (and if they aren't,
>> running a node likely won't help you choose either, it will choose by
>> accident and you'll have no idea).  You would get to choose whose
>> verifications to request/check from, and thus choose which segment to
>> follow, if any.
>
> SPV does not decide, they follow longest chain.
> Centralised/Server based wallets follow the server they are connecting to.
> Full Nodes do not depend on a 3rd party to decide if the money received is
> valid.
>
>> Are you really this dense?  If the cost of on-chain transactions
>> rises, numerous use cases get killed off.  At $0.10 per tx you
>> probably won't buy in-game digital microtransactions with it, but you
>> might buy coffee with it.  At $1 per tx, you probably won't buy coffee
>> with it but you might pay your ISP bill with it.  At $20 per tx, you
>> probably won't pay your ISP bill with it, but you might pay your rent.
>> At $300 per tx you probably won't use it for anything, but a company
>> purchasing goods from China might.  At $4000 per tx that company
>> probably won't use it, but international funds settlement for
>> million-dollar transactions might use it.
>> At each fee step along the way you kill off hundreds or thousands of
>> possible uses of Bitcoin.  Killing those off means fewer people will
>> use it, so they will use something else instead.
>
> No need to get personal.
> As mentioned earlier, all these low value transactions can happen offchain.
> None of the use cases will be killed off. We have sub dollar trades
> happening on exchanges offchain.
>
> The average person doesn't need that level of security.

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-31 Thread Jared Lee Richardson via bitcoin-dev
> Node operation is making a stand on what money you will accept.

> Ie Your local store will only accept US Dollars and not Japanese Yen. Without 
> being able to run a node, you have no way to independently determine what you 
> are receiving, you could be paid Zimbabwe Dollars and wouldn't know any better.

Err, no, that's what happens when you double click the Ethereum icon
instead of the Bitcoin icon.  Just because you run "Bitcoin SPV"
instead of "Bitcoin Verify Everyone Else's Crap" doesn't mean you're
somehow going to get Ethereum payments.  Your verification is just
different and the risks that come along with that are different.  It's
only confusing if you make it confusing.

> This is highly subjective.
> Just because it is nonfunctional to you, does not mean it is nonfunctional to 
> existing users.

If every block that is mined for them is deliberately empty because of
an attacker, that's nonfunctional.  You can use whatever semantics you
want to describe that situation, but that's clearly what I meant.

> Of course it is. Try paying for my goods using BU/Ethereum/Dash/etc., or a 
> Bitcoin forked with inflation, you will not get any goods regardless of how 
> much hashrate those coins have.

As above, if someone operates Bitcoin in SPV mode they are not
magically at risk of getting Dashcoins.  They send and receive
Bitcoins just like everyone else running Bitcoin software.  There's no
confusion about it and it doesn't have anything to do with hashrates
of anyone.  It is just a different method of verification with
corresponding different costs of use and different security
guarantees.
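The "different method of verification" mentioned here can be made concrete: an SPV wallet checks a merkle path proving its transaction is committed under a block header, but it still trusts hashrate for that header's validity. A minimal sketch in Python (the function names are illustrative, not any real wallet's API):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_proof(txid: bytes, merkle_root: bytes, branch, index: int) -> bool:
    """Check that txid is committed under merkle_root.
    branch: sibling hashes bottom-up; index: the tx's position in the block."""
    h = txid
    for sibling in branch:
        if index & 1:            # our node is the right child at this level
            h = dsha256(sibling + h)
        else:                    # our node is the left child at this level
            h = dsha256(h + sibling)
        index >>= 1
    return h == merkle_root
```

Note what this does and does not prove: inclusion in a block, yes; that the block follows Bitcoin's rules, no. That last part is exactly where SPV defers to the majority hashrate.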

> You should also take into consideration the number of independent mining 
> entities it takes to achieve 51% hashrate. It will be of little use to have 
> thousands of independent miners/pools if 3 large pools make up 51% of hash 
> rate and collude to attack the network.

We're already fucked; China has 61% of the hashrate, and the only thing
we can do about it is to wait for the Chinese electrical
supply/demand/transmission system to rebalance itself.  Aside from
that little problem, mining distributions and pool distributions don't
significantly factor into the blocksize debate.  The debate is a
choice between nodes paying more to allow greater growth and adoption,
or nodes constraining adoption in favor of debatable security
concerns.

> Nodes define which network they want to follow.

Do you really consider it choosing when there is only a single option?
 And even if there was, the software would choose it for you?  If it
is a Bitcoin client, it follows the Bitcoin blockchain.  There is no
BU blockchain at the moment, and Bitcoin software can't possibly start
following Ethereum blockchains.

> Without a Node, you don't even get to decide which segment you are on.

Yes you do, if the segment options are known (and if they aren't,
running a node likely won't help you choose either, it will choose by
accident and you'll have no idea).  You would get to choose whose
verifications to request/check from, and thus choose which segment to
follow, if any.

> Ability to run a node and validate rules => Confidence in currency

This is only true for the small minority that actually need that added
level of security & confidence, and the paranoid people who believe
they need it when they really, really don't.  Some guy on reddit
spouted off the same garbage logic, but was much quieter when I got
him to admit that he didn't actually read the code of Bitcoin that he
downloaded and ran, nor any of the code of the updates.  He trusted.
*gasp*

The average person doesn't need that level of security.  They do
however need to be able to use it, which they cannot right now if you
consider "average" to be at least 50% of the population.

> Higher demand => Higher exchange rate

Demand comes from usage and adoption.  Neither can happen without us being
willing to give other people the option to trade security features for
lower costs.

> I would not be holding any Bitcoins if it was unfeasible for me to run a Node 
> and instead had to trust some 3rd party that the currency was not being 
> inflated/censored.

Great.  Somehow I think Bitcoin's future involves very few more people
like you, and very many people who aren't paranoid and just want to be
able to send and receive Bitcoins.

> Bitcoin has value because of its trustless properties. Otherwise, there is 
> no difference between cryptocurrencies and fiat.

No, it has its value for many, many reasons, trustless properties is
only one of them.  What I'm suggesting doesn't involve giving up
trustless properties except in your head (And not even then, since you
would almost certainly be able to afford to run a node for the rest of
your life if Bitcoin's value continues to rise as it has in the past).
And even if it did, there's a lot more reasons that a lot more people
than you would use it.

> Blocksize has nothing to do with utility, only cost of on-chain transactions.

Are you really this dense?  If the cost of on-chain transactions rises,
numerous use cases get killed off.

Re: [bitcoin-dev] High fees / centralization

2017-03-30 Thread Jared Lee Richardson via bitcoin-dev
That would be blockchain sharding.

Would be amazing if someone could figure out how to do it trustlessly.  So
far I'm not convinced it is possible to resolve the conflicts between the
shards and commit transactions between shards.

On Thu, Mar 30, 2017 at 6:39 PM, Vladimir Zaytsev <
vladimir.zayt...@gmail.com> wrote:

> There must be a way to organize “branches” of smaller activity to join
> main tree after they grow. Outsider a bit, I see going circles here, but
> not everything must be accepted in the chain. Good idea as it is, it’s just
> too early to record every sight….
>
>
>
> On Mar 30, 2017, at 5:52 PM, Jared Lee Richardson via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> > Further, we are very far from the point (in my appraisal) where fees
> are high enough to block home users from using the network.
>
> This depends entirely on the use case.  Most likely even without a
> blocksize increase, home purchases will be large enough to fit on the
> blockchain in the foreseeable future.  Microtransactions (<$0.25), on the
> other hand, aren't viable no matter what we try to do - there's just too much data.
>
> Most likely, transaction fees above $1 per tx will become unappealing for
> many consumers, and above $10 is likely to be niche-level.  It is hard to
> say with any certainty, but average credit card fees give us some
> indications to work with - $1.2 on a $30 transaction, though paid by the
> business and not the consumer.
>
> Without blocksize increases, fees higher than $1/tx are basically
> inevitable, most likely before 2020.  Running a node only costs $10/month
> if that.  If we were going to favor node operational costs that highly in
> the weighting, we'd better have a pretty solid justification with
> mathematical models or examples.
>
> > We should not throw away the core innovation of monetary sovereignty in
> pursuit of supporting 0.1% of the world's daily transactions.
>
> If we can easily have both, why not have both?
>
> An altcoin with both will take Bitcoin's monetary sovereignty crown by
> default.  No crown, no usecases, no Bitcoin.
>
>
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] High fees / centralization

2017-03-30 Thread Jared Lee Richardson via bitcoin-dev
> Further, we are very far from the point (in my appraisal) where fees are
high enough to block home users from using the network.

This depends entirely on the use case.  Most likely even without a
blocksize increase, home purchases will be large enough to fit on the
blockchain in the foreseeable future.  Microtransactions (<$0.25), on the
other hand, aren't viable no matter what we try to do - there's just too much data.

Most likely, transaction fees above $1 per tx will become unappealing for
many consumers, and above $10 is likely to be niche-level.  It is hard to
say with any certainty, but average credit card fees give us some
indications to work with - $1.2 on a $30 transaction, though paid by the
business and not the consumer.

Without blocksize increases, fees higher than $1/tx are basically
inevitable, most likely before 2020.  Running a node only costs $10/month
if that.  If we were going to favor node operational costs that highly in
the weighting, we'd better have a pretty solid justification with
mathematical models or examples.

> We should not throw away the core innovation of monetary sovereignty in
pursuit of supporting 0.1% of the world's daily transactions.

If we can easily have both, why not have both?

An altcoin with both will take Bitcoin's monetary sovereignty crown by
default.  No crown, no usecases, no Bitcoin.



On Thu, Mar 30, 2017 at 9:14 AM, David Vorick via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Mar 30, 2017 12:04 PM, "Tom Harding via bitcoin-dev" <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Raystonn,
>
> Your logic is very hard to dispute. An important special case is small
> miners.
>
> Small miners use pools exactly because they want smaller, more frequent
> payments.
>
> Rising fees force them to take payments less frequently, and will only
> tend to make more of them give up.
>
> With fees rising superlinearly, this centralizing effect is much stronger
> than the oft-cited worry of small miners joining large pools to decrease
> orphan rates.
>
>
> Miners get paid on average once every ten minutes. The size of fees and
> the number of fee transactions does not change the payout rate.
>
> Further, we are very far from the point (in my appraisal) where fees are
> high enough to block home users from using the network.
>
> Bitcoin has many high-value use cases such as savings. We should not throw
> away the core innovation of monetary sovereignty in pursuit of supporting
> 0.1% of the world's daily transactions.
>
>


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-30 Thread Jared Lee Richardson via bitcoin-dev
> There have been attacks demonstrated where a malicious miner with
sufficient hashrate can leverage large blocks to exacerbate selfish mining.

Can you give me a link to this?  Having done a lot of mining, I really
really doubt this.  I'm assuming the theory relies upon propagation times
and focuses on small miners versus large ones, but that's wrong.
Propagation times don't affect small miners disproportionately, though they
might affect small POOLS disproportionately; that isn't the same thing at
all.  No miner since at least 2014 has operated a full node directly on
each mining unit - it is incredibly impractical to do so.  They retrieve only the
merkle root hash and other parameters from the stratum server, which is a
very small packet and does not increase with the size of the blocks.  If
they really want to select which transactions to include, some pools offer
options of that sort(or can, I believe) but almost no one does.  If they
don't like how their pool picks transactions, they'll use a different pool,
that simple.
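To make the small-packet point concrete: the per-job data a pool sends its miners includes a merkle branch whose length grows only logarithmically with the number of transactions in the block, so work units stay a few hundred bytes even for very large blocks. A rough sketch of that scaling (this is an illustration, not the actual stratum wire format):

```python
import math

def merkle_branch_len(num_txs: int) -> int:
    # depth of the binary merkle tree over num_txs leaves; the branch a
    # pool sends alongside the coinbase grows with this depth, not with
    # the raw byte size of the block
    return 0 if num_txs <= 1 else math.ceil(math.log2(num_txs))

# ~4,000 txs (roughly a full 1 MB block) vs ~64,000 txs (roughly 16 MB):
for n in (4_000, 64_000):
    print(n, "txs ->", merkle_branch_len(n), "hashes,",
          merkle_branch_len(n) * 32, "bytes of branch")
```

Going from ~4,000 to ~64,000 transactions adds only four 32-byte hashes to each work unit, which is why the work packet barely grows with block size.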

If there's some other theory about a miner exploiting higher blocksizes
selfishly then I'd love to read up on it to understand it.  If what
you/others actually meant by that was smaller "pools," that's a much much
smaller problem.  Pools don't earn major profits and generally are at the
mercy of their miners if they make bad choices or can't fix low
performance.  For pools, block propagation time was a major major issue
even before blocks were full, and latency + packet loss between mining
units and the pool is also a big concern.  I was seeing occasional block
propagation delays(over a minute) on a fiber connection in 2013/4 due to
minute differences in peering.  If a pool can't afford enough bandwidth to
keep propagation times down, they can't be a pool.  Bigger blocksizes will
make it so they even more totally-can't-be-a-pool, but they already can't
be a pool, so who cares.  Plus, compact blocks should have already solved
nearly all of this problem as I understand it.

So definitely want to know more if I'm misunderstanding the attack vector.

> We already know that large empty blocks (rather, blocks with fake
transactions) can be leveraged in ways that both damages the network and
increases miner profits.

Maybe you're meaning an attack where other pools get stuck on validation
due to processing issues?  This is also a nonissue.  The smallest viable
pool has enough difficulties with other, non-hardware related issues that
buying the largest, beefiest standard processor available with ample RAM
won't even come up on the radar.  No one cares about $600 in hardware
versus $1000 in hardware when it takes you 6 weeks to get your peering and
block propagation configuration just right and another 6 months to convince
miners to substantially use your pool.

If you meant miners and not pools, that's also wrong.  Mining hardware
doesn't validate blocks anymore, it hasn't been practical for years.  They
only get the merkle root hash of the valid transaction set.  The pool
handles the rest.

> In general, fear of other currencies passing Bitcoin is unsubstantiated.
Bitcoin has by far the strongest development team, and also is by far the
most decentralized.

Markets only care a little bit about what your development team is like.
Ethereum has Vitalik, who is an incredibly smart and respectable dude,
while BU absolutely hates the core developers right now.  Markets are more
likely to put more faith in a single leader than core right now if that
comparison was really made.

"Most decentralized" is nearly impossible to quantify, and has almost no
value to speculators.  Since all of these markets are highly speculative,
they only care about future demand.  Future demand relies upon future use.
Unsubstantiated?  Ethereum is already 28% of Bitcoin by cap and 24% by
trading.  Four months ago that was 4%.  Their transaction volume also
doubled.  What world are you living in?

> A coin like ethereum may even be able to pass Bitcoin in market cap. But
that's okay. Ethereum has very different properties and it's not something
I would trust as a tool to provide me with political sovereignty.

Well great, I guess so long as you're ok with it we'll just roll with it.
Wait, no.  If Bitcoin loses its first-mover network effect, a small cadre
of die-hard libertarians are not going to be able to keep it from becoming
a page in the history books.  Die hard libertarians can barely keep a voice
in the U.S. congress - neither markets nor day-to-day users particularly
care about the philosophy, they care about what it can do for them.

> Ethereum passing Bitcoin in market cap does not mean that it has proved
superior to Bitcoin.

The markets have literally told us why Ethereum is shooting up.  It's
because the Bitcoin community has fractured around a debate with nearly no
progress on a solution for the last 3 years, and especially because BU
appears to be strong enough to think they can fork and the markets know
full well wh

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-30 Thread Jared Lee Richardson via bitcoin-dev
> What we want is a true fee-market where the miner can decide to make a
block
> smaller to get people to pay more fees, because if we were to go to 16MB
> blocks in one go, the cost of the miner would go up, but his reward based
on
> fees will go down!

I agree in concept with everything you've said here, but I think there's a
frequent misconception that there's a certain level of miner payouts that
miners "deserve" and/or the opposite, that miners "deserve" as little as
possible.  The 51% attacks that PoW shields us from are relatively well
defined, which can be used to estimate the minimum amount of sustainable
fees for shielding.  Beyond that minimum, the lower fees are, the better
for every non-miner.

Unfortunately miners could arbitrarily decide to limit blocksizes, and
there's little except relay restrictions that everyone else could do about
it.  Fortunately miners so far have pushed for blocksize increases at least
as much as anyone else, though the future when Bitcoin adoption stabilizes
would be an unknown.

> A block so big that 100% of the transactions will always be mined in the
> next block will just cause a large section of people to no longer feel the
> need to pay fees.

FYI, I don't see this happening again ever, barring brief exceptions,
unless there was a sudden blocksize change, which ideally we'd avoid ever
happening.  The stable average value of the transaction fee determines what
kind of business use-cases can be built using Bitcoin.  An average fee of
$0.001 usd enables a lot more use cases than $0.10 average fees, and $50.00
average fees still have far more possible use cases than a $1000 average
fee.  If fees stabilize low, use cases will spring up to fill the
blockspace, unless miners arbitrarily seek to keep the fees above some level.

On Thu, Mar 30, 2017 at 3:30 AM, Tom Zander via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Thursday, 30 March 2017 07:23:31 CEST Ryan J Martin via bitcoin-dev
> wrote:
> >  The original post and the assorted limit proposals---lead me to
> > something I think is worth reiterating: assuming Bitcoin adoption
> > continues to grow at similar or accelerating rates, then eventually the
> > mempool is going to be filled with thousands of txs at all times whether
> > block limits are 1MB or 16MB
>
> This is hopefully true. :)
>
> There is an unbounded amount of demand for block space, and as such it
> doesn’t benefit anyone if the amount of free transactions gets out of hand.
> Because freeloaders would definitely be able to completely suffocate
> Bitcoin.
>
> In the mail posted by OP he makes clear that this is a proposal for a hard
> fork to change the block size *limit*. The actual block size would not be
> changed at the same time, it will continue being set based on market values
> or whatever we decide between now and then.
>
> The block size itself should be set based on the amount of fees being paid
> to miners to make a block.
>
> What we want is a true fee-market where the miner can decide to make a
> block
> smaller to get people to pay more fees, because if we were to go to 16MB
> blocks in one go, the cost of the miner would go up, but his reward based
> on
> fees will go down!
> A block so big that 100% of the transactions will always be mined in the
> next block will just cause a large section of people to no longer feel the
> need to pay fees.
>
> As such I don’t fear the situation where the block size limit goes up a lot
> in one go, because it is not in anyone’s interest to make the actual block
> size follow.
> --
> Tom Zander
> Blog: https://zander.github.io
> Vlog: https://vimeo.com/channels/tomscryptochannel


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-30 Thread Jared Lee Richardson via bitcoin-dev
> You are only looking at technical aspects and missing the political
aspect.

Nodes don't do politics.  People do, and politics is a lot larger with a
lot more moving parts than just node operation.

> full nodes protect the user from the change of any properties of Bitcoin
which they do not agree with.

Full nodes protect from nothing if the chain they attempt to use is
nonfunctional.

> The ability to retain this power for users is of prime importance and is
arguably what gives Bitcoin most of its value
> Any increase in the cost to run a full node is an increase in cost to
maintain monetary sovereignty

This power is far more complicated than just nodes.  You're implying that
node operation == political participation.  Node operation is only a very
small part of the grand picture of the bitcoin balance of power.

> The ability for a user to run a node is what keeps the miners honest and
prevents them from rewriting any of Bitcoin's rules.

No, it isn't.  Nodes disagreeing with miners is necessary but not
sufficient to prevent that.  Nodes can't utilize a nonfunctional chain, nor
can they utilize a coin with no exchanges.

> What makes Bitcoin uncensorable

Only two things - 1. Node propagation being strong enough that a target
node can't be surrounded by attacker nodes (or so that attacker nodes can't
segment honest nodes), and 2. Miners being distributed in enough countries
and locations to avoid any single outside attacker group from having enough
leverage to prevent transaction inclusion, and miners also having enough
incentives(philosophical or economic) to refuse to collude towards
transaction exclusion.

Being able to run a node yourself has no real effect on either of the two.
Either we have enough nodes that an attacker can't segment the network or
we don't.

> What gives confidence that the 21 million limit will be upheld

What you're describing would result in a fork war.  The opposition to this
would be widespread, and preventing an attempt relies upon mutual destruction.
If users refused to get on board, exchanges would follow users.  If miners
refused to get on board, the attempt would be equally dead in the water.
It would require a majority of users, businesses and miners to change the
limit; Doing so without an overwhelming majority(90% at least) would still
result in a contentious fork that punished both sides(in price, confidence,
adoption, and possibly chain or node attacks) for refusing to agree.

Nodes have absolutely no say in the matter if they can't segment the
network, and even if they could their impact could be repaired.  Users !=
Nodes.

> What makes transactions irreversible

Err, this makes me worry that you don't understand how blockchains work...
This is because miners are severely punished for attempting to mine on
anything but the longest chain.  Nodes have absolutely no say in the
matter, they always follow the longest chain unless a hardfork was
applied.  If the hardfork has overwhelming consensus, i.e. stopping a 51%
attack, then the attack would be handled.  If the hardfork did not have
overwhelming consensus it would result in another fork war requiring users,
businesses, and miners to actively decide which to support and how, and
once again would involve mutual destruction on both forks.

Nodes don't decide any of these things.  Nodes follow the longest chain,
and have no practical choices in the matter.  Users not running nodes
doesn't diminish their power - Mutual destruction comes from the market
forces on the exchanges, and they couldn't give a rat's ass whether you run a
node or not.
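"Follow the longest chain" is, more precisely, a most-cumulative-work rule: a node mechanically adopts whichever valid chain embodies the most proof-of-work, which is why nodes have no practical discretion here. A minimal sketch (the header format, a dict with a per-block difficulty target, is illustrative):

```python
def block_work(target: int) -> int:
    # expected number of hashes needed to meet a target, the same
    # quantity Bitcoin sums when comparing chain work
    return (1 << 256) // (target + 1)

def chain_work(headers) -> int:
    # total work embodied in a chain of headers
    return sum(block_work(h["target"]) for h in headers)

def best_chain(chains):
    # a node simply adopts the valid chain with the most total work
    return max(chains, key=chain_work)
```

Note that a shorter chain of harder blocks beats a longer chain of easy ones; block count alone decides nothing.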

> The market is not storing 10s of billions of dollars in Bitcoin despite
all its risks because it is useful for everyday transactions, that is a 
solved problem in every part of the world (Cash/Visa/etc..).

This is just the "bitcoin is gold" argument.  Bitcoin is not gold.  For
someone not already a believer, Bitcoin is a risky, speculative investment
into a promising future technology, whereas gold is a stable physical asset
with 4,000 years of acceptance history that has the same value in nearly
every city on the planet.  Bitcoin is difficult to purchase and difficult
to find someone to exchange for goods or services.  Literally the only
reason we have 10s of billions of dollars of value is because of speculation,
which includes nearly all Bitcoin users/holders and almost all businesses
and miners.  While Bitcoin borrows useful features from gold, it has more
possible uses, including uses that were never possible before Bitcoin
existed, and we believe that gives it huge potential.

The ability of other systems to do transactions, like visa or cash, come
with the limitations of those systems.  Bitcoin was designed to break those
limitations and STILL provide the ability to do transactions.  We might all
agree Bitcoin isn't going to ever solve the microtransaction problem, at
least not on-chain, but saying Bitcoin doesn't need utility is just
foolish.  Gold doesn't need utility, gold h

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-30 Thread Jared Lee Richardson via bitcoin-dev
> The block size itself should be set based on the amount of fees being
paid to miners to make a block.

There's a formula to this as well, though going from that to a blocksize
number will be very difficult.  Miner fees need to be sufficient to
maintain economic protection against attackers.  There is no reason that
miner fees need to be any higher than "sufficient."  I believe that
"sufficient" value can be estimated by considering a potential attacker
seeking to profit from short-selling Bitcoin after causing a panic crash.
If they can earn more profit from shorting Bitcoin than it costs to buy,
build/deploy, and perform a 51% attack to shut down the network, then we
are clearly vulnerable.  The equation for the profit side of the equation
can be worked out as:

(bitcoin_price * num_coins_shortable * panic_price_drop_percentage)

The equation for the cost side of the equation depends on the total amount
of miner hardware that the network is sustainably paying to operate,
factoring in all costs of the entire bitcoin mining lifecycle(HW cost,
deployment cost, maintenance cost, electricity, amortized facilities cost,
business overheads, orphan losses, etc) except chip design, which the
attacker may be able to take advantage of for free.  For convenience I'm
simplifying that complicated cost down to a single number I'm calling
"hardware_lifespan" although the concept is slightly more involved than
that.

(total_miner_payouts * bitcoin_price * hardware_lifespan)

bitcoin_price is on both sides of the equation and so can be divided out,
giving:

Unsafe when: (num_coins_shortable * panic_price_drop_percentage) >
(total_miner_payouts * hardware_lifespan)

Estimating the total number of coins shortable by an attacker of nearly
unlimited funds is tricky, especially when things like high leverage levels
or naked short selling may be offered by exchanges.  The percent of damage
the resulting panic would cause is also tricky to estimate, but on both
numbers we can make some rough guesses and see how they play out.  With
more conservative numbers like say, 2 year hardware lifespan, 10% short,
70% panic drop you get: 1,300k coins profit, 1800 BTC/day in fees minimum
needed to make the attack cost more than it profits.
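Plugging rough numbers into the model above makes the arithmetic easy to check. All inputs here are illustrative assumptions in the spirit of the post's own estimates (including a roughly 18.6M-coin borrowable supply and ~250k transactions/day at current block sizes), not measured values:

```python
# Break-even daily miner income below which a short-and-attack becomes
# profitable, per the simplified model above. All inputs are rough
# illustrative assumptions, not measured values.

coin_supply        = 18_600_000   # assumed coins available to borrow against
short_fraction     = 0.10         # 10% of supply shortable
panic_drop         = 0.70         # 70% price crash after a successful attack
hardware_life_days = 2 * 365      # ~2-year mining hardware lifespan

profit_coins = coin_supply * short_fraction * panic_drop   # ~1.3M BTC of profit
min_daily_payout = profit_coins / hardware_life_days       # BTC/day in fees+subsidy

print(round(profit_coins))       # ~1.3 million coins
print(round(min_daily_payout))   # in the neighborhood of the post's ~1800 BTC/day

# With a ~2000 BTC/day fee floor spread over an assumed ~250k txs/day,
# the implied average fee per transaction:
avg_fee = 2000 / 250_000
print(avg_fee)                   # 0.008 BTC
```

The last figure is where the 0.008 BTC "safe" average fee comes from if blocksizes stay fixed: the same fee floor divided across a capped number of transactions.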

Using various inputs and erring on the side of caution, I get a minimum
BTC/day fee range of 500-2000.  Unfortunately if the blocksize isn't
increased, a relatively small number of transactions/users have to bear the
full cost of the minimum fees, over time increasing the minimum "safe"
average fee paid to 0.008 BTC, 30x the fees people are complaining about
today, and increasing in real-world terms as price increases.  All that
said, I believe the costs for node operation are the number that gets hit
first as blocksizes are increased, at least past 2020.  I don't think
blocksizes could be increased to such a size that the insufficient-fee
vulnerability would be a bigger concern than high node operational costs.
The main thing I don't have a good grasp on at the moment is any math to
estimate how many nodes we need to protect against the attacks that can
come from having few nodes, or even a clear understanding of what those
attacks are.

> A block so big that 100% of the transactions will always be mined in the
> next block will just cause a large section of people to no longer feel the
> need to pay fees.

This is also totally true.  A system that tried to eliminate the fee
markets would be flawed, and fortunately miners have significant reasons to
oppose such a system.

The reverse is also a problem - If miners as a large group sought to lower
blocksizes to force fee markets higher, that could be a problem.  I don't
have solutions for the issue at this time, but something I've turned over
in my mind.

On Thu, Mar 30, 2017 at 3:30 AM, Tom Zander via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Thursday, 30 March 2017 07:23:31 CEST Ryan J Martin via bitcoin-dev
> wrote:
> >  The original post and the assorted limit proposals---lead me to
> > something I think is worth reiterating: assuming Bitcoin adoption
> > continues to grow at similar or accelerating rates, then eventually the
> > mempool is going to be filled with thousands of txs at all times whether
> > block limits are 1MB or 16MB
>
> This is hopefully true. :)
>
> There is an unbounded amount of demand for block space, and as such it
> doesn’t benefit anyone if the amount of free transactions get out of hand.
> Because freeloaders would definitely be able to completely suffocate
> Bitcoin.
>
> In the mail posted by OP he makes clear that this is a proposal for a hard
> fork to change the block size *limit*. The actual block size would not be
> changed at the same time, it will continue being set based on market values
> or whatever we decide between now and then.
>
> The block size itself should be set based on the amount of fees being paid
> to miners to make a block.
>
> What we want is a true fee-market where the miner can decide to make a
> block smaller to get people to pay more fees.

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Jared Lee Richardson via bitcoin-dev
> I’m confident that we could work with the miners who we have good
relationships with to start including the root hash of the (lagging) UTXO
set in their coinbase transactions, in order to begin transforming this
idea into reality.

By itself, this wouldn't work without a way for a new node to differentiate
between a false history and a true one.

>  We could also issue regular transactions from “semi-trusted” addresses
controlled by known people that include the same root hash in an OP_RETURN
output, which would allow cross-checking against the miners’ UTXO
commitments, as part of this initial “prototype”

This might work, but I fail to understand how a new node could verify an
address / transaction without a blockchain to back it.  Even if it could,
it becomes dependent upon those addresses not being compromised, and the
owners of those addresses would become targets for potential government
operations.

Having the software silently attempt to resolve the problem is risky unless
it is foolproof.  Otherwise, users will assume their software is showing
them the correct history/numbers implicitly, and if the change the utxo
attacker made was small, the users might be able to follow the main chain
totally until it was too late and the attacker struck with an address that
otherwise never transacted.  The result would be a sudden, bizarre, hard-to-debug
fork and potentially a double spend against people who picked up the fraudulent UTXO.

Users already treat wallet software with some level of suspicion, asking if
they can trust x or y or z, or like the portion of the BU community
convinced that core has been compromised by blockstream bigwigs.  Signed
releases could provide the same thing, but would encourage both open-source
security checks of the signed UTXO sets and potentially encourage users to
check download signatures.

Either approach is better than what we have now though, so I'd support
anything.
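The canonical-ordering question Peter raises below is concrete enough to sketch. Here is a minimal illustration in Python of how sorting the UTXO set by (txid, vout) yields an order-independent Merkle root that miners or signers could commit to. The leaf serialization here is hypothetical - it is not any deployed or proposed standard - and the odd-leaf duplication simply mirrors Bitcoin's transaction Merkle tree:

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def utxo_commitment(utxos):
    """Merkle root over a canonically ordered UTXO set.

    utxos: iterable of (txid_hex, vout, amount_sats, script_hex).
    Sorting by (txid, vout) makes the root independent of insertion
    order, so every node computes the same hash for the same set.
    The leaf encoding below is illustrative only.
    """
    leaves = [
        sha256d(bytes.fromhex(txid)
                + vout.to_bytes(4, "little")
                + amount.to_bytes(8, "little")
                + bytes.fromhex(script))
        for txid, vout, amount, script
        in sorted(utxos, key=lambda u: (u[0], u[1]))
    ]
    if not leaves:
        return sha256d(b"")          # defined root for an empty set
    while len(leaves) > 1:
        if len(leaves) % 2:          # duplicate the last leaf on odd
            leaves.append(leaves[-1])  # levels, as Bitcoin's tx tree does
        leaves = [sha256d(leaves[i] + leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
    return leaves[0]
```

Because the root is deterministic for a given set, a new node could compare the 32-byte hash in a coinbase against one in a semi-trusted OP_RETURN output without downloading either party's full state.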

On Wed, Mar 29, 2017 at 1:28 PM, Peter R via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I believe nearly everyone at Bitcoin Unlimited would be supportive of a
> UTXO check-pointing scheme.  I’d love to see this happen, as it would
> greatly reduce the time needed to get a new node up-and-running, for node
> operators who are comfortable trusting these commitments.
>
> I’m confident that we could work with the miners who we have good
> relationships with to start including the root hash of the (lagging) UTXO
> set in their coinbase transactions, in order to begin transforming this
> idea into reality.  We could also issue regular transactions from
> “semi-trusted” addresses controlled by known people that include the same
> root hash in an OP_RETURN output, which would allow cross-checking against
> the miners’ UTXO commitments, as part of this initial “prototype” system.
>
> This would "get the ball rolling" on UTXO commitments in a permissionless
> way (no one can stop us from doing this). If the results from this
> prototype commitment scheme were positive, then perhaps there would be
> support from the community and miners to enforce a new rule which requires
> the (lagging) root hashes be included in new blocks.  At that point, the
> UTXO commitment scheme is no longer a prototype but a trusted feature of
> the Bitcoin network.
>
> On that topic, are there any existing proposals detailing a canonical
> ordering of the UTXO set and a scheme to calculate the root hash?
>
> Best regards,
> Peter
>
>
> On Mar 29, 2017, at 12:33 PM, Daniele Pinna via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> What about periodically committing the entire UTXO set to a special
> checkpoint block which becomes the new de facto Genesis block?
>
> Daniele
>
> --
>
> Message: 5
> Date: Wed, 29 Mar 2017 16:41:29 +
> From: Andrew Johnson 
> To: David Vorick 
> Cc: Bitcoin Dev 
> Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting
> Message-ID:
>  ail.com>
> Content-Type: text/plain; charset="utf-8"
>
> I believe that as we continue to add users to the system by scaling
> capacity that we will see more new nodes appear, but I'm at a bit of a loss
> as to how to empirically prove it.
>
> I do see your point on increasing load on archival nodes, but the majority
> of that load is going to come from new nodes coming online, they're the
> only ones going after very old blocks.   I could see that as a potential
> attack vector, overwhelm the archival nodes by spinning up new nodes
> constantly, therefore making it difficult for a "real" new node to get up
> to speed in a reasonable amount of time.
>
> Perhaps the answer there would be a way to pay an archival node a small
> amount of bitcoin in order to retrieve blocks older than a certain cutoff?
> Include an IP address for the node asking for the data as metadata in the
> transaction...  Archival nodes could set and publish their own policy, let
> the market decide what those older blocks are worth.  Would also help t

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Jared Lee Richardson via bitcoin-dev
> It's a political assessment. Full nodes are the ultimate arbiters of
consensus.

That's not true unless miners are thought of as identical to nodes, which
has not been true for nearly 4 years now.  Nodes arbitrating consensus is
the BU theory - that nodes can restrain miners - but it doesn't work.  If
miners were forked off from nonminers, the miner network could keep its
blockchain operational under attack from the nodes far better than the
nodes could keep theirs operational under attack from the miners.  The
miners could effectively grind the node network to a complete halt and
probably still run their own fork unimpeded at the same time.  This would
continue until the lack of faith in the network drove the miners out of
business economically, or until the node network capitulated and followed
the rules of the miner network.

The reason BU isn't a dire threat is that there's a great rift between the
miners just like there is between the average users, just as Satoshi
intended, and that rift gives the user network the economic edge.

> If home users are not running their own full nodes, then home users have
to trust and rely on other, more powerful nodes to represent them. Of
course, the more powerful nodes, simply by nature of having more power, are
going to have different opinions and objectives from the users.

I think you're conflating mining with node operation here.  Node users'
only power is to block the propagation of certain things.  Since miners
also have a node endpoint, they can cut node users out of the equation by
linking with each other directly - something they already do out of
practicality for propagation.  Node users do not have the power to
arbitrate consensus; that is why we have blocks and PoW.

> And it's impossible for 5000 nodes to properly represent the views of
5,000,000 users. Users running full nodes is important to prevent political
hijacking of the Bitcoin protocol.  [..] that changes you are opposed to
are not introduced into the network.

This isn't true.  Non-miner nodes cannot produce blocks.  Their opinion is
not represented in the blockchain in any way; the blockchain is entirely
made up of blocks.  They can commit transactions, but transactions must
follow an even stricter set of rules, and short of a user-activated PoW
change, the miners get to decide.  It might be viable to introduce ways for
transactions to vote on things, but that isn't nodes voting either -
that's money voting.

Bitcoin is structured such that nodes have no votes because nodes cannot be
trusted.  They don't inherently represent individuals, they don't
inherently represent value, and they don't commit work that is played
against each other to achieve a game-theory equilibrium.  That's miners.

> This statement is not true for home users, it is true for datacenter
nodes. For home users, 200 GB of bandwidth and 500 GB of bandwidth largely
have the exact same cost.

Your assumption is predicated upon the idea that users pay a fixed cost for
any volume of bandwidth.  That assertion is true for some users but not for
others, and it has become increasingly untrue in recent years with the
addition of bandwidth caps by many ISPs.  Even users without a bandwidth
cap can often get a very threatening letter if they max out their
connection 24/7.  Assuming unlimited user bandwidth in the future while
comparing against limited datacenter bandwidth is extremely shortsighted.
Fundamentally, if market forces have established that datacenter bandwidth
costs $0.09 per GB, what makes you think that ISPs don't face the same
limitations?  They do; the difference is that $0.09 per GB times the total
usage across the ISP's customer base is far, far lower than $80 times the
number of customers.  The more that a small group of customers deviating
wildly becomes a problem for them, the more they will add bandwidth caps,
send threatening letters, or even rate-limit or stop serving those users.

Without that assumption, your math and examples fall apart - Bandwidth
costs for full archival nodes are nearly 50 times higher than storage costs
no matter whether they are at home or in a datacenter.
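A toy model makes the ISP asymmetry concrete.  The $0.09/GB transit price is the figure quoted above; the $80 flat fee and usage numbers are illustrative assumptions, not measured data:

```python
# Hypothetical ISP economics: flat-fee revenue vs per-GB transit cost.
FLAT_FEE_USD = 80          # assumed monthly price per customer
TRANSIT_CENTS_PER_GB = 9   # the $0.09/GB datacenter figure quoted above

def monthly_margin(gb_used):
    """Profit (or loss) the ISP makes on one customer in a month."""
    return FLAT_FEE_USD - gb_used * TRANSIT_CENTS_PER_GB / 100

typical   = monthly_margin(100)    # light user: $80 - $9  = $71 profit
full_node = monthly_margin(1500)   # 24/7 node:  $80 - $135 = $55 loss
```

Under these assumptions the flat fee works only because the average customer uses a tiny fraction of the line; a customer saturating 1.5 TB/month is sold at a loss, which is exactly the behavior caps and warning letters are designed to discourage.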

> The financials of home nodes follow a completely different math than the
costs you are citing by quoting datacenter prices.

No, they really aren't, absent your assumption.  Yes, they are somewhat
different - if someone has a 2TB hard drive but only ever uses 40% of it,
the remaining space has a cost of zero.  But those specific examples break
down when you average over several years and fifty thousand users.  If
that same user were running a bitcoin node and hard drive space was indeed
a concern, they would factor that into the purchase of their next computer,
preferring one with a larger hard drive.  That reintroduces the cost for
the same individual who had no cost before.  The cost difference doesn't
work out to the exact same numbers as the
datacen

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Jared Lee Richardson via bitcoin-dev
> Pruned nodes are not the default configuration, if it was the default
configuration then I think you would see far more users running a pruned
node.

Default configurations aren't a big enough deal to factor into the critical
discussion of node costs versus transaction-fee costs.  Default
configurations can be changed, and if nodes are negatively affected by a
default configuration, there will be an abundance of information about how
to correct that by turning on pruning.  Bitcoin can't be designed around
the assumption that people can't google - if we wanted to cater to that
population group right now, we'd need 100x the blocksize at least.

> But that would also substantially increase the burden on archive nodes.

This is already a big problem from the measurements I've been looking at,
and there are alternatives that need to be considered as well.  If we limit
ourselves to not changing the syncing process for most users, the
blocksize-limit debate changes drastically.  Hard drive costs, CPU costs,
propagation times... none of those things matter, because the cost of sync
bandwidth is so incredibly high even now ($130ish per month, see other
email).  Even if we didn't increase the blocksize any more than segwit
does, we're already seeing sync costs shifted onto fewer nodes - e.g.,
Luke-Jr's scan finding ~50k nodes online while only 7k of those show up on
sites like bitnodes.21.co.  Segwit will shift it further, until the few
nodes serving sync hit their speed limits and/or max out on connections,
leaving no fully-synced nodes for a new node to connect to.  Then wallet
providers / node software will offer a solution - a bundled UTXO checkpoint
that removes the need to sync.  This slightly increases centralization, and
increases it more if Core were to adopt the same approach.

The advantage of such a simple solution would be tremendous.  Node costs
would drop by a full order of magnitude for full nodes even today - more
once archival nodes are more restricted, history is bigger, and segwit
blocksizes are in effect - and blocksizes could then be safely increased by
nearly the same order of magnitude, increasing the utility of Bitcoin and
the number of people who can effectively use it.

Another, much more complicated option is for the node sync process to
function like a Tor network.  A very small number of seed nodes could send
data only to the other nodes with the highest bandwidth available (and a
good retention policy, i.e. not pruning tightly as they sync), which would
then spread it further, and so on.  That's complicated, though, because as
far as I know the syncing process today has no ability to swap out a
stingy sync peer for a high-performing one.  I'm not even sure - will a
syncing node opt to sync from a different node that, itself, isn't fully
synced but is farther ahead?

At any rate, syncing bandwidth usage is a critical problem for future
growth and is solvable.  The upsides of fixing it are huge, though.

On Wed, Mar 29, 2017 at 9:25 AM, David Vorick via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> On Mar 29, 2017 12:20 PM, "Andrew Johnson" 
> wrote:
>
> What's stopping these users from running a pruned node?  Not every node
> needs to store a complete copy of the blockchain.
>
>
> Pruned nodes are not the default configuration, if it was the default
> configuration then I think you would see far more users running a pruned
> node.
>
> But that would also substantially increase the burden on archive nodes.
>
>
> Further discussion about disk space requirements should be taken to
> another thread.
>
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Jared Lee Richardson via bitcoin-dev
> Perhaps you are fortunate to have a home computer that has more than a
single 512GB SSD. Lots of consumer hardware has that little storage.

That's very poor logic, sorry.  Space-restricted SSDs are not a
cost-effective hardware option for running a node, and keeping blocksizes
small has significant other costs for everyone.  Comparing the cost of
running a node under arbitrary conditions A, B, or C when there are far
more efficient options than any of those is a very bad way to think about
the costs of running a node.  You basically have to ignore the significant
consequences of keeping blocks small.

If node operational costs rose to the point where a wide swath of users
that we actually need for security purposes could not justify running a
node, that would be important to consider.  For me, that translates to
modern hardware reasonably well aligned with the needs of running a node -
perhaps budget hardware, but still modern - and above-average bandwidth
caps.

You're free to disagree, but your example only makes sense to me if
blocksize caps didn't have serious consequences.  Even if those
consequences are just the threat of a contentious fork by people who are
misled about the real consequences, that threat is still a consequence
itself.

On Wed, Mar 29, 2017 at 9:18 AM, David Vorick via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Perhaps you are fortunate to have a home computer that has more than a
> single 512GB SSD. Lots of consumer hardware has that little storage. Throw
> on top of it standard consumer usage, and you're often left with less than
> 200 GB of free space. Bitcoin consumes more than half of that, which feels
> very expensive, especially if it motivates you to buy another drive.
>
> I have talked to several people who cite this as the primary reason that
> they are reluctant to join the full node club.
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Jared Lee Richardson via bitcoin-dev
> When considering what block size is acceptable, the impact of running
bitcoin in the background on affordable, non-dedicated home-hardware should
be a top consideration.

Why is that a given?  Is there math that outlines the risk levels for
various configurations of node distribution, vulnerabilities, etc.?  How
does one even weigh the cost of running nodes against the cost of
transaction fees?

> Disk space I believe is the most significant problem today, with RAM
being the second most significant problem, and finally bandwidth
consumption as the third most important consideration. I believe that v0.14
is already too expensive on all three fronts, and that block size increases
shouldn't be considered at all until the requirements are reduced (or until
consumer hardware is better, but I believe we are talking 3-7 years of
waiting if we pick that option).

Disk space is not the largest cost, either today or in the future.  Without
historical checkpointing in some fashion, bandwidth costs for full
listening nodes are more than two orders of magnitude higher than every
other cost.  Even with historical syncing discounted (i.e. pruned or
non-listening nodes), bandwidth costs are still higher than hard drive
costs.


Today: full listening node, 133 peers, measured 1.5 TB/mo of bandwidth
consumption over two multi-day intervals.  1,500 GB/month @ EC2 low-tier
prices = $135/month; 110 GB storage = $4.95.  Similar arguments extend to
consumer hardware - Comcast broadband is ~$80/mo depending on region and
comes with a 1.0 TB cap in most regions, so $120/mo or even $80/mo would be
in the same ballpark.  A consumer-grade 2TB hard drive is $70 and will last
for at least 2 years, so $2.93/month if the hard drive were totally
dedicated to Bitcoin, and $0.16/month if we only count the percentage that
Bitcoin uses.

For a non-full listening node with ~25 peers, I measured around 70 GB/month
of usage over several days, which is $6.30 per month at EC2 prices or $5.60
in proportional Comcast cost.  If someone isn't supporting syncing, there's
not much point in them not turning on pruning.  Even if they didn't, a
desktop in the $500 range typically comes with 1 or 2 TB of storage by
default, and without segwit or a blocksize-cap increase, 3 years from now
the full history will only take up 33% of the smaller, three-year-old,
budget-range PC hard drive.  Even then, if we assume the hard drive price
declines of the last 4 years hold steady (14%, very low compared to
historical gains), 330 GB of data only works out to a proportional monthly
cost of $6.20 - still slightly smaller than the bandwidth costs, and almost
entirely removable by turning on pruning since that user isn't paying to
help others sync.
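The "nearly 50 times" bandwidth-to-storage ratio and the dollar figures in this thread are reproducible from the stated unit prices; everything else is arithmetic:

```python
# Reproducing the back-of-envelope node-cost figures above. Unit
# prices are the ones quoted in the text: ~9 cents/GB datacenter
# egress, and a $70 consumer 2 TB drive amortized over 24 months.

EGRESS_CENTS_PER_GB = 9
DRIVE_PRICE_USD, DRIVE_GB, MONTHS = 70.0, 2000, 24
CHAIN_GB = 110                     # approximate chain size cited above

full_node_bw  = 1500 * EGRESS_CENTS_PER_GB / 100    # 1.5 TB/mo measured
pruned_bw     = 70 * EGRESS_CENTS_PER_GB / 100      # ~70 GB/mo measured
drive_monthly = DRIVE_PRICE_USD / MONTHS            # whole drive
chain_share   = drive_monthly * CHAIN_GB / DRIVE_GB # Bitcoin's slice only

# Bandwidth dwarfs storage by roughly 46x for a full listening node:
print(f"${full_node_bw:.2f}/mo bandwidth vs ${drive_monthly:.2f}/mo "
      f"storage ({full_node_bw / drive_monthly:.0f}x)")
```

Running this yields $135.00/mo of bandwidth against about $2.92/mo of dedicated drive cost (or $0.16/mo counting only Bitcoin's share), which is where the claim that bandwidth, not disk, dominates node cost comes from.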

I don't know how to evaluate the impacts of RAM or CPU usage, or
consequently electricity usage, for a node yet.  I'm open to quantifying
any of those if there's a method, but it seems absurd that RAM could become
a significant factor given the abundance of cheap RAM nowadays and how few
programs need it.  CPU usage, and thus electricity cost, might become a
factor; I just don't know how to quantify it at various block scales.
Currently CPU usage isn't taxing any hardware I run a node on in any way I
have been able to notice, not including the syncing process.

> I am also solidly unconvinced that increasing the blocksize today is a
good move, even as little as SegWit does.

The consequence of your logic that holds node operational costs down is
that transaction fees for users go up, adoption slows as various use cases
become impractical, price growth suffers, and altcoins that choose lower
fees over node-cost concerns exhibit competitive growth against Bitcoin's
cryptocurrency market share.  Even if you are right, that's hardly a
tradeoff not worth thoroughly investigating from every angle; the
consequences could be just as dire for Bitcoin in 10 years as they would be
if we made ourselves vulnerable.

And even if an altcoin can't take Bitcoin's dominance through lower fees,
we will never end up with millions of home users running nodes.  If we did,
that would imply orders of magnitude more fee-market competition and
continuing price increases, while hardware costs decline.  If transaction
fees go up from space limitations, and go up even further in real-world
terms from price increases, while node costs decline, eventually it will
cost more to send a transaction than to run a node for a full month.  No
home users would send transactions, because the fee would be higher than
anything they might use Bitcoin for, and so they would not run a node for
something they don't use - why would they?  The cost of letting the ratio
between node costs and transaction costs swing to the extreme favor of
node costs would be worse - lower Bitcoin usability, adoption, and price,
without any meaningful increase in security.

How do we evaluate the math on node distributions versus various attack
vectors?



On Wed, Mar 29, 2017 at 8:5

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Jared Lee Richardson via bitcoin-dev
In order for any blocksize increase to be agreed upon, more consensus is
needed.  The proportion of users who believe no blocksize increase is
needed is larger than the 5% dissent allowed by the hard-fork target Core
wants (95% consensus).  The proportion of users who believe in
microtransactions for all is also larger than 5%, and both of those groups
may be larger than 10% respectively.  I don't think either the big-blocks
faction or the low-node-costs faction has even a simple majority of
support.  Getting consensus is going to be a big mess, but it is critical
that it is done.

On Wed, Mar 29, 2017 at 12:49 AM, Martin Lízner via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> If there should be a hard-fork, Core team should author the code. Other
> dev teams have marginal support among all BTC users.
>
> Im tending to believe, that HF is necessary evil now. But lets do it in
> conservative approach:
> - Fix historical BTC issues, improve code
> - Plan HF activation date well ahead - 12 months+
> - Allow increasing block size on year-year basis as Luke suggested
> - Compromise with miners on initial block size bump (e.g. 2MB)
> - SegWit
>
> Martin Lizner
>
> On Tue, Mar 28, 2017 at 6:59 PM, Wang Chun via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> I've proposed this hard fork approach last year in Hong Kong Consensus
>> but immediately rejected by coredevs at that meeting, after more than
>> one year it seems that lots of people haven't heard of it. So I would
>> post this here again for comment.
>>
>> The basic idea is, as many of us agree, hard fork is risky and should
>> be well prepared. We need a long time to deploy it.
>>
>> Despite spam tx on the network, the block capacity is approaching its
>> limit, and we must think ahead. Shall we code a patch right now, to
>> remove the block size limit of 1MB, but not activate it until far in
>> the future. I would propose to remove the 1MB limit at the next block
>> halving in spring 2020, only limit the block size to 32MiB which is
>> the maximum size the current p2p protocol allows. This patch must be
>> in the immediate next release of Bitcoin Core.
>>
>> With this patch in core's next release, Bitcoin works just as before,
>> no fork will ever occur, until spring 2020. But everyone knows there
>> will be a fork scheduled. Third party services, libraries, wallets and
>> exchanges will have enough time to prepare for it over the next three
>> years.
>>
>> We don't yet have an agreement on how to increase the block size
>> limit. There have been many proposals over the past years, like
>> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
>> on. These hard fork proposals, with this patch already in Core's
>> release, they all become soft fork. We'll have enough time to discuss
>> all these proposals and decide which one to go. Take an example, if we
>> choose to fork to only 2MB, since 32MiB already scheduled, reduce it
>> from 32MiB to 2MB will be a soft fork.
>>
>> Anyway, we must code something right now, before it becomes too late.
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Jared Lee Richardson via bitcoin-dev
> While Segwit's change from 1 mb size limit to 4 mb weight limit seems to
be controversial among some users [..] I don't think it's very interesting
to discuss further size increases.

I think the reason for this is largely that SegWit as a blocksize increase
isn't very satisfying.  It resolves to a one-time increase with no future
plan, thus engendering the same objections as raised against people who
demand we just "raise the number to N."  People can argue about what N
should be, but when N is just a flat number, we know we'll have to deal
with the issue again.

In that light, I think it is even more essential to continue to discuss
the blocksize debate and problem.

> I find more interesting to talk to the users and see how they think
Segwit harms them,

From an inordinate amount of time spent reading Reddit, I believe this
largely comes down to the rumor that has a deathgrip on the BU community -
that Core developers are all just extensions of Blockstream, and
Blockstream wants to restrict on-chain growth to force growth of their
second-layer services (Lightning and/or sidechains).

I believe the tone of the discussion needs to change, and I have been
trying to change it for weeks now.  There's one faction that believes
Bitcoin will rarely, if ever, benefit from a blocksize increase, and that
rising fees are a desired/unavoidable result.  There's a different faction
that believes Bitcoin's limits are arbitrary and that all people worldwide
should be able to put transactions of any size, even microtransactions,
on-chain.  Both factions are extreme in their viewpoints and resort to
conspiracy theories to interpret the actions of Core ("Blockstream did
it") or BU ("Jihan controls everything, and anyone who says otherwise is a
shill paid by Roger Ver!").

It is all very unhealthy for Bitcoin.  Both sides need to accept that
microtransactions from all humans cannot go on-chain, and that never
increasing the blocksize doesn't mean millions of home users will run
nodes.  The node argument breaks down economically, and the
microtransaction argument is an impossible mountain for a blockchain to
climb.


On Wed, Mar 29, 2017 at 2:37 AM, Jorge Timón via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> While Segwit's change from 1 mb size limit to 4 mb weight limit seems to
> be controversial among some users (I find that very often it is because
> they have been confused about what segwit does or even outright lied about
> it) I don't think it's very interesting to discuss further size increases.
> I find more interesting to talk to the users and see how they think Segwit
> harms them, maybe we missed something in segwit that needs to be removed
> for segwit to become uncontroversial, or maybe it is just disinformation.
>
> On the other hand, we may want to have our first uncontroversial hardfork
> asap, independently of block size. For example, we could do something as
> simple as fixing the timewarp attack as bip99 proposes. I cannot think of a
> hf that is easier to implement or has less potential for controversy than
> that.
>
> On 29 Mar 2017 8:32 am, "Bram Cohen via bitcoin-dev"  linuxfoundation.org> wrote:
>
> On Tue, Mar 28, 2017 at 9:59 AM, Wang Chun via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>>
>> The basic idea is, as many of us agree, hard fork is risky and should
>> be well prepared. We need a long time to deploy it.
>>
>
> Much as it may be appealing to repeal the block size limit now with a
> grace period until a replacement is needed in a repeal and replace
> strategy, it's dubious to assume that an idea can be agreed upon later when
> it can't be agreed upon now. Trying to put a time limit on it runs into the
> possibility that you'll find that whatever reasons there were for not
> having general agreement on a new setup before still apply, and running
> into the embarrassing situation of winding up sticking with the status quo
> after much sturm and drang.
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Jared Lee Richardson via bitcoin-dev
> I suggest you take a look at this paper: http://fc16.ifca.ai/
bitcoin/papers/CDE+16.pdf  It may help you form opinions based in science
rather than what appears to be nothing more than a hunch.  It shows that
even 4MB is unsafe.  SegWit provides up to this limit.

I find this paper wholly unconvincing.  Firstly, I note that it assumes the
price of electricity was 10c/kWh in Oct 2015.  As a miner operating and
building large farms at that time, I can guarantee you that almost no large
mines were paying anything even close to that for electricity, even then.
If he had performed a detailed search of the big mines he would have found
as much, or he could have asked, but it seems the figure was simply made
up.  Even U.S. industrial electricity prices are lower than that.

Moreover, he focuses his math almost entirely on mining, asserting in
table 1 that 98% of the "cost of processing a transaction" is mining.
That completely misunderstands the purpose of mining.  Miners occasionally,
trivially resolve double-spend conflicts, but miners are paid (and played
against each other) for economic security against attackers.  They aren't
paid to process transactions.  Nodes process transactions and are paid
nothing to do so, and their costs are 100x more relevant to the blocksize
debate than a paper about miner costs.  Miners' operational costs relate
to economic-protection formulas, not the cost of a transaction.

He also states: "the top 10% of nodes receive a 1MB block 2.4min earlier
than the bottom 10% — meaning that depending on their access to nodes, some
miners could obtain a significant and unfair lead over others in solving
hash puzzles."

He's using 2012-era logic about mining.  By October 2015, no miner of any
size was in the bottom 10% of node propagation.  A small or medium-sized
miner mined shares on a pool and would be at most 30 seconds behind the
pool; pools that didn't get blocks within 20 seconds weren't pools for
long.  A huge miner ran its own pool with good propagation times.  For a
scientific paper, this reads like someone who had absolutely no idea what
was really going on in the mining world at the time.  But again, none of
that relates to transaction "costs."  Transactions cost nodes money;
protecting the network costs miners money.  Miners are rewarded with fees;
nodes are rewarded only by utility and price increases.

On Tue, Mar 28, 2017 at 10:53 AM, Alphonse Pace via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Juan,
>
> I suggest you take a look at this paper: http://fc16.ifca.ai/
> bitcoin/papers/CDE+16.pdf  It may help you form opinions based in science
> rather than what appears to be nothing more than a hunch.  It shows that
> even 4MB is unsafe.  SegWit provides up to this limit.
>
> 8MB is most definitely not safe today.
>
> Whether it is unsafe or impossible is the topic, since Wang Chun proposed
> making the block size limit 32MiB.
>
>
> Wang Chun,
>
> Can you specify what meeting you are talking about?  You seem to have not
> replied on that point.  Who were the participants and what was the purpose
> of this meeting?
>
> -Alphonse
>
> On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia  wrote:
>
>> Alphonse,
>>
>>
>>
>> In my opinion if 1MB limit was ok in 2010, 8MB limit is ok on 2016 and
>> 32MB limit valid in next halving, from network, storage and CPU perspective
>> or 1MB was too high in 2010 what is possible or 1MB is to low today.
>>
>>
>>
>> If is unsafe or impossible to raise the blocksize is a different topic.
>>
>
>>
>> Regards
>>
>>
>>
>> Juan
>>
>>
>>
>>
>>
>> *From:* bitcoin-dev-boun...@lists.linuxfoundation.org [mailto:
>> bitcoin-dev-boun...@lists.linuxfoundation.org] *On Behalf Of *Alphonse
>> Pace via bitcoin-dev
>> *Sent:* Tuesday, March 28, 2017 2:24 PM
>> *To:* Wang Chun <1240...@gmail.com>; Bitcoin Protocol Discussion <
>> bitcoin-dev@lists.linuxfoundation.org>
>> *Subject:* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
>>
>>
>>
>> What meeting are you referring to?  Who were the participants?
>>
>>
>>
>> Removing the limit but relying on the p2p protocol is not really a true
>> 32MiB limit, but a limit of whatever transport methods provide.  This can
>> lead to differing consensus if alternative layers for relaying are used.
>> What you seem to be asking for is an unbound block size (or at least
>> determined by whatever miners produce).  This has the possibility (and even
>> likelihood) of removing many participants from the network, including many
>> small miners.
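The transport-layer cap discussed above can be sketched as a simple
message-size check. The 32 MiB figure matches the p2p maximum mentioned in
the thread, but the code below is an illustrative sketch, not Bitcoin
Core's actual implementation; the function name is hypothetical.

```python
# Illustrative sketch of a transport-layer message-size cap (not Bitcoin
# Core's actual code). With no consensus block-size rule, this p2p cap
# becomes the only limit -- and alternative relay layers need not share it,
# which is how differing consensus could arise.
MAX_P2P_MESSAGE_SIZE = 32 * 1024 * 1024  # 32 MiB

def accept_message(payload: bytes) -> bool:
    """Reject any network message larger than the transport cap."""
    return len(payload) <= MAX_P2P_MESSAGE_SIZE

print(accept_message(b"\x00" * 1000))  # a small message passes
```

A node relaying blocks over a different transport would simply use a
different constant here, which is the point being made: the "limit" would
no longer be a consensus rule.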
>>
>>
>>
>> 32MB in less than 3 years also appears to be far beyond the limits of
>> safety, which are known to be reached far sooner, and we cannot expect
>> hardware and networking layers to improve by those amounts in that time.
>>
>>
>>
>> It also seems like it would be much better to wait until SegWit activates
>> in order to truly measure the effects on the network from this increased
>> capacity before committing to a

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Jared Lee Richardson via bitcoin-dev
> That said, for that to be alleviated we
> could simply do something based on historical transaction growth (which
> is somewhat linear, with a few inflection points),

Where do you get this?  Transaction growth over the last 4 years averages
+65% per year, and over the last 2 years +80% per year.  That's very much
not linear.
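To make the contrast concrete, here is a quick sketch of what +65%
compounded annually looks like versus linear growth. The starting volume
(100k transactions/day) is a made-up illustration; only the +65%/year rate
comes from the text above.

```python
# Compound vs. linear growth sketch. Starting volume is hypothetical.
start = 100_000  # assumed transactions per day in year 0

# +65% per year, compounded, over 5 years:
compound = [round(start * 1.65 ** y) for y in range(5)]
# Linear growth adding a fixed +65% of the *initial* volume each year:
linear = [start + y * int(start * 0.65) for y in range(5)]

print(compound)  # year 4 is roughly 7.4x the start
print(linear)    # linear growth reaches only 3.6x over the same span
```

The divergence is the whole argument: a capacity schedule fitted to a
linear trend falls behind an exponential one within a few years.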



On Tue, Mar 28, 2017 at 10:13 AM, Matt Corallo via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Not sure what "last week's meeting" is in reference to?
>
> Agreed that the hard fork should be well-prepared, but I think it's
> dangerous to think that a hard fork as agreed upon would be a simple
> relaxation of the block size. For example, Johnson Lau's previous
> proposal, Spoonnet, which I think is probably one of the better ones,
> would be incompatible with these rules.
>
> I, of course, worry about what happens if we cannot come to consensus on
> a number to soft fork down to, potentially significantly risking miner
> profits (and, thus, the security of Bitcoin) if a group is able to keep
> things "at the status quo". That said, for that to be alleviated we
> could simply do something based on historical transaction growth (which
> is somewhat linear, with a few inflection points), but that number ends
> up being super low (eg somewhere around 2MB at the next halving, which
> SegWit itself already provides :/).
>
> We could, of course, focus on designing a hard fork's activation and
> technical details, with a very large block size increase in it (ie
> closer to 4/6MB at the next halving or so, something we at least could
> be confident we could develop software for), with intention to soft fork
> it back down if miner profits are suffering.
>
> Matt
>
> On 03/28/17 16:59, Wang Chun via bitcoin-dev wrote:
> > I proposed this hard fork approach last year at Hong Kong Consensus,
> > but it was immediately rejected by core devs at that meeting; more than
> > a year later it seems that lots of people still haven't heard of it. So
> > I would post it here again for comment.
> >
> > The basic idea is, as many of us agree, that a hard fork is risky and
> > should be well prepared; we need a long time to deploy it.
> >
> > Spam transactions aside, the block capacity is approaching its limit,
> > and we must think ahead. Shall we code a patch right now that removes
> > the 1MB block size limit but does not activate it until far in the
> > future? I would propose removing the 1MB limit at the next block
> > halving in spring 2020, limiting the block size only by the 32MiB
> > maximum that the current p2p protocol allows. This patch must be in
> > the immediate next release of Bitcoin Core.
> >
> > With this patch in Core's next release, Bitcoin works just as before;
> > no fork will occur until spring 2020, but everyone knows a fork is
> > scheduled. Third-party services, libraries, wallets and exchanges will
> > have enough time to prepare for it over the next three years.
> >
> > We don't yet have an agreement on how to increase the block size
> > limit. There have been many proposals over the past years, like
> > BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
> > on. With this patch already in Core's release, these hard fork
> > proposals all become soft forks. We'll have enough time to discuss all
> > these proposals and decide which one to adopt. For example, if we
> > choose to fork to only 2MB: since 32MiB is already scheduled, reducing
> > it from 32MiB to 2MB will be a soft fork.
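The claim that tightening an already-deployed limit is a soft fork can be
sketched as a one-line property: every block valid under the stricter new
rule must also be valid under the old rule. The sizes and function names
below are illustrative, not actual consensus code.

```python
# Sketch: a rule change is a soft fork when new-valid implies old-valid,
# so un-upgraded nodes still accept the upgraded chain. Names and sizes
# are illustrative only.
OLD_LIMIT = 32 * 1024 * 1024   # 32 MiB, the hypothetical scheduled limit
NEW_LIMIT = 2 * 1000 * 1000    # 2 MB, a later tightening

def valid_old(block_size: int) -> bool:
    return block_size <= OLD_LIMIT

def valid_new(block_size: int) -> bool:
    return block_size <= NEW_LIMIT

# Every new-valid size is old-valid, so the tightening is a soft fork:
assert all(valid_old(s) for s in range(0, NEW_LIMIT + 1, 100_000))
print("tightening the limit is backward-compatible: soft fork")
```

The reverse direction (raising the limit once deployed) fails this
property, which is exactly why the increase itself is the hard fork.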
> >
> > Anyway, we must code something right now, before it becomes too late.
> > ___
> > bitcoin-dev mailing list
> > bitcoin-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> >