Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Pindar Wong
I think it would be helpful if we could all *chill* and focus on the solid
engineering necessary to make Bitcoin succeed.

p.


On Mon, Jun 1, 2015 at 7:02 PM, Chun Wang 1240...@gmail.com wrote:

 On Mon, Jun 1, 2015 at 6:13 PM, Mike Hearn m...@plan99.net wrote:
  Whilst it would be nice if miners in China can carry on forever
 regardless
  of their internet situation, nobody has any inherent right to mine if
 they
  can't do the job - if miners in China can't get the trivial amounts of
  bandwidth required through their firewall and end up being outcompeted
 then
  OK, too bad, we'll have to carry on without them.
 
  But I'm not sure why it should be a big deal. They can always run a node
 on
  a server in Taiwan and connect the hardware to it via a VPN or so.

 Ignorant. You seem not to understand the current situation. We
 suffered a lot from orphans when we started in 2013. It is now your
 turn. If Western miners do not set up a China-based VPN, or if
 Western pools do not manage to improve their connectivity to China,
 or run a node in China, it is they who will have the higher orphan
 rates, not us, because we have 50%+.



--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-06-01 Thread Ricardo Filipe
I've been following the discussion of the block size limit and IMO it
is clear that any constant block size limit is, as many have said
before, just kicking the can down the road.
My problem with the dynamic lower limit solution based on past blocks
is that it doesn't account for usage spikes. I would like to propose
another dynamic lower limit scheme:
Let the block size limit be a function of the number of current
transactions in the mempool. This way, bitcoin usage regulates the
block size limit.

I'm sorry I don't have the knowledge of the code base or the time to make
simulations of this kind of approach, but nevertheless I would like to
leave it here for discussion or to foster other ideas.
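
[To make the idea above concrete, a minimal, purely illustrative sketch of a
mempool-coupled limit. The constants and the function name are assumptions
for illustration only, not anything that exists in Bitcoin Core:]

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    // Illustrative constants -- not consensus values.
    static const uint64_t FLOOR_BLOCK_SIZE = 1000000; // never drop below 1 MB
    static const uint64_t AVG_TX_SIZE      = 500;     // assumed average tx size in bytes
    static const uint64_t BACKLOG_DIVISOR  = 2;       // aim to clear ~half the backlog per block

    // Hypothetical: derive the next block's size limit from the current
    // mempool depth, so heavy usage relaxes the cap and light usage lets it
    // fall back toward the floor.
    uint64_t DynamicBlockSizeLimit(std::size_t mempoolTxCount)
    {
        uint64_t demand = (static_cast<uint64_t>(mempoolTxCount) * AVG_TX_SIZE) / BACKLOG_DIVISOR;
        return std::max(FLOOR_BLOCK_SIZE, demand);
    }

[One obvious caveat such a scheme would have to address -- and why the
simulations mentioned above would matter -- is that a miner can stuff its own
mempool with self-paying transactions, so any real rule would need to key off
something miners cannot trivially inflate.]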

cheers

--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Chun Wang
On Mon, Jun 1, 2015 at 6:13 PM, Mike Hearn m...@plan99.net wrote:
 Whilst it would be nice if miners in China can carry on forever regardless
 of their internet situation, nobody has any inherent right to mine if they
 can't do the job - if miners in China can't get the trivial amounts of
 bandwidth required through their firewall and end up being outcompeted then
 OK, too bad, we'll have to carry on without them.

 But I'm not sure why it should be a big deal. They can always run a node on
 a server in Taiwan and connect the hardware to it via a VPN or so.

Ignorant. You seem not to understand the current situation. We
suffered a lot from orphans when we started in 2013. It is now your
turn. If Western miners do not set up a China-based VPN, or if
Western pools do not manage to improve their connectivity to China,
or run a node in China, it is they who will have the higher orphan
rates, not us, because we have 50%+.

--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Pindar Wong
Two very valid and important points. Thank you for making these
observations Peter.

p.


On Mon, Jun 1, 2015 at 7:26 PM, Peter Todd p...@petertodd.org wrote:

 On Mon, Jun 01, 2015 at 06:42:05PM +0800, Pindar Wong wrote:
  On Mon, Jun 1, 2015 at 6:13 PM, Mike Hearn m...@plan99.net wrote:
 
   Whilst it would be nice if miners in China can carry on forever
 regardless
   of their internet situation, nobody has any inherent right to mine if
   they can't do the job - if miners in China can't get the trivial
 amounts of
   bandwidth required through their firewall and end up being outcompeted
 then
   OK, too bad, we'll have to carry on without them.
  
 
  I'd rather think of mining as a responsibility than a right per se, but
  you're right in so far as it's competitive and self-correcting.

 It's important to remember that the service Bitcoin miners are providing
 us is *not* transaction validation, but rather decentralization.
 Validation is something every full node does already; there's no
 shortage of it. What's tricky is designing a Bitcoin protocol that
 creates the appropriate incentives for mining to remain decentralized,
 so we get good value for the large amount of money being sent to miners.

 I've often likened this task to building a robot to go to the grocery
 store to buy milk for you. If that robot doesn't have a nose, before
 long store owners are going to realise it can't tell the difference
 between unspoilt and spoilt milk, and you're going to get ripped off
 paying for a bunch of spoiled milk.

 Designing a Bitcoin protocol where we expect competition to result in
 smaller miners in more geographically decentralized places to get
 outcompeted by larger miners who are more geographically centralized
 gets us bad value for our money. Sure it's self-correcting, but not in
 a way that we want.

   But I'm not sure why it should be a big deal. They can always run a
 node
   on a server in Taiwan and connect the hardware to it via a VPN or so.
  
  
   Let's agree to disagree on this point.

 Note how that VPN, and likely the VPS it's connected to, immediately adds
 another one or two points of failure to the whole system. Not only does
 this decrease reliability, it also decreases security by making attacks
 significantly easier - VPS security is orders of magnitude worse than
 the security of physical hardware.

 --
 'peter'[:-1]@petertodd.org
 0e187b95a9159d04a3586dd4cbc068be88a3eafcb5b885f9

--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Thy Shizzle
WOW. Way to burn your biggest adopters, who put your transactions into the
chain... what a douche.

From: Mike Hearn <m...@plan99.net>
Sent: 1/06/2015 8:15 PM
To: Alex Mizrahi <alex.mizr...@gmail.com>
Cc: Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
Subject: Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

Whilst it would be nice if miners in China can carry on forever regardless
of their internet situation, nobody has any inherent right to mine if
they can't do the job - if miners in China can't get the trivial amounts of
bandwidth required through their firewall and end up being outcompeted then
OK, too bad, we'll have to carry on without them.

But I'm not sure why it should be a big deal. They can always run a node on
a server in Taiwan and connect the hardware to it via a VPN or so.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Chun Wang
I cannot believe that Gavin (who seems to have difficulty spelling my
name correctly) insists on his 20MB proposal regardless of the
community. BIP66 was introduced a long time ago and no one knows when
the 95% goal will be met. A change to the maximum block size will
take a year or more to be adopted. We should increase the limit, and
increase it now, but 20MB is simply too big and too risky; sometimes
we need to compromise to push things forward. I would agree with any
solution lower than 10MB for its first two years.

On Mon, Jun 1, 2015 at 7:02 PM, Chun Wang 1240...@gmail.com wrote:
 On Mon, Jun 1, 2015 at 6:13 PM, Mike Hearn m...@plan99.net wrote:
 Whilst it would be nice if miners in China can carry on forever regardless
 of their internet situation, nobody has any inherent right to mine if they
 can't do the job - if miners in China can't get the trivial amounts of
 bandwidth required through their firewall and end up being outcompeted then
 OK, too bad, we'll have to carry on without them.

 But I'm not sure why it should be a big deal. They can always run a node on
 a server in Taiwan and connect the hardware to it via a VPN or so.

 Ignorant. You seem not to understand the current situation. We
 suffered a lot from orphans when we started in 2013. It is now your
 turn. If Western miners do not set up a China-based VPN, or if
 Western pools do not manage to improve their connectivity to China,
 or run a node in China, it is they who will have the higher orphan
 rates, not us, because we have 50%+.


--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Ángel José Riesgo
Hi everyone,

I'm a long-time lurker of this mailing list but it's the first time I post
here, so first of all I'd like to thank all of the usual contributors for
the great insights and technical discussions that can be found here. As
this is such a momentous point in the history of Bitcoin, I'd just like to
throw in my opinion too.

First, I agree with Oliver Egginger's message that it's much more elegant
to keep the numbers as powers of 2 rather than introducing somewhat
arbitrary numbers like 20. This also makes it easier to count the level of
support for what would be a clear spectrum of discrete levels (1, 2, 4, ...
32, 64, ..., infinite). If a temporary peace accord can be reached with a
value like 8 or 16, this will buy us some time for both the user base to
continue growing without hitting the limit and for newer technologies like
the lightning network to be developed and tested. We will also see whether
the relatively small increase causes any unexpected harm or whether (as I
expect) everything continues to run smoothly.

Personally, I'd like to see Bitcoin grow and become what I think most
Bitcoin users like myself expect from it: that it should be a payment
network directly accessible to people all over the world. In my opinion, it
is the proposition of Bitcoin as a form of electronic money that
additionally makes it a good store of value. I don't believe in the idea
that it can exist as just some sort of digital gold for a geeky financial
elite. And I haven't been persuaded by those who claim the scarcity of
block space is an economic fundamental of Bitcoin either. It seems to me
there are a lot of batty economic ideas being bandied about regarding the
supposed long-term value of the cap without much justification. In this
sense, my sympathies are with those who want to remove the maximum block
size cap. This was after all the original idea, so it's not fair for the
1MB camp to claim that they're the ones preserving the essence of Bitcoin.

But, anyway, I also think that a consensus at this point would be much
better than a head-on confrontation between two incompatible pieces of
software competing to gain the favour of a majority of exchanges and
merchants. With this in mind, can't we accept the consensus that raising
the hard-coded limit to a value like 8MB buys us a bit of time and should
be at least palatable to everyone? This may not be what the staunch
supporters of the 1MB limit want, but it's also not what I and others would
want, so we're talking about finding some common ground here, and not about
one side getting their way to the detriment or humiliation of the other.

The problem with a compromise based on a one-off maximum-size increase, of
course, is that we're just kicking the can down the road and the discussion
will continue. It's not a solution I like, but how can we get people like
say Greg Maxwell or Pieter Wuille to accept something more drastic? If they
find a new maximum-size cap acceptable, then it could be a reasonable
compromise. A new cap will let us test the situation and see how the
Bitcoin environment reacts. The next time the discussion crops up (probably
very soon, I know...), we may all have a better understanding of the
implications.

Ángel José Riesgo
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


[Bitcoin-development] soft-fork block size increase (extension blocks)

2015-06-01 Thread Adam Back
Hi Gavin

Sorry for the slow response and broken threading - my mailbox filled up and
I only saw your response on the archive.

I do earnestly think opt-in block-size increases are politically
cleaner (they give different people different-sized blocks by their own
volition without forcing a compromise) and less risky than hard forks,
particularly if a hard fork were provoked without clear and
wide consensus - dragons lie there.

 Then ask the various wallet developer how long it would take them to update
 their software to support something like this,

I don't think that's any particular concern; extension block payments
are forwards and backwards compatible.  Businesses who are keen to
have more transactions would make it their problem to implement it in
their wallet, or ask the wallet vendor/maintainer they're working with
to do it.  Nothing breaks if they don't use it.  The people that have
the need for it will work on it.  Market at work.  If it turns out
they don't really have a need for it, just projected huge numbers for
their business plan that, say, don't materialise, well, no foul.

 and do some UI mockups of what the experience would look like for users.

I am not a UX guy, but for example it might be appropriate for tipping
services or other micropayments to use an extension block.  Or small
retail payments.  They can choose what address they use.  Merchants,
integrators etc. can do likewise.
It gives plenty of scope for people to work with useful
trade-offs while others work on lightning.

 If there are two engineering solutions to a problem, one really simple, and
 one complex, why would you pick the complex one?

Because the more complex one is safer, more flexible, more
future-proof and better for decentralisation (and so, as a bonus, might
actually get done without more months of argument, as it's less
contentious because it gives users the choice to opt in).  Bitcoin itself
is complex; a central ledger is simpler but, as we know, uninteresting -
which is to say this is a security trade-off.

Obviously I do appreciate KISS as a design principle, and the utility of
incremental improvements, but this is a security trade-off we're
discussing here.  I am proposing a way to not weaken security, while
getting what you think is important - access to more TPS with a higher
centralisation trade-off (for those who opt in to it, rather than for
everyone, whether that trade-off is strongly against their interests or
not).

The decentralisation metrics are getting worse, not better, see Greg
Maxwell's comments
http://www.reddit.com/r/Bitcoin/comments/37vg8y/is_the_blockstream_company_the_reason_why_4_core/crqg381

This would not by those metrics be a good moment in history to make
the situation worse.

 Especially if the complex solution has all of the problems of the simple
 one (20MB extension blocks are just as dangerous as 20MB main blocks,
 yes? If not, why not?)

Not at all, that's the point.  Bitcoin has a one-size-fits-all
blocksize.  People can pool-mine the 8MB extension block, while solo
or GBT mining the 1MB block.  That's more centralising than staying at
1MB (because to get the fees from the extension block some people
without that kind of bandwidth are pool-mining 8/9ths of the lower
security/decentralisation transactions).  But it's less centralising
than a fixed blocksize of 9MB (1+8 for apples/apples) because
realistically, if those transactions are not spam, they would've
happened offchain, and offchain - until we get lightning-like systems
up - means central systems, which are worse than the slight
centralisation of 8MB blocks, being single servers and prone to custody
and security failure.  I think you made that point yourself in a recent
post also.

Sound good? ;)  Seriously, I think it's the least bad idea I've heard on
this topic.

As an aside, a risk with using companies as a sounding board is that
you can get a misleading sense of consensus.  Did they understand the
trade-off between security (decentralisation) and blocksize?  Did they
care?  Do they represent users' interests?  Would they have voted
instead for extension blocks if it was presented in similar terms?  (I
have to imagine they might have preferred extension blocks, given the
better story, if you gloss over complexity and trade-offs.)

Adam

--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-06-01 Thread Mike Hearn

 It's surprising to see a core dev going to the public to defend a proposal
 most other core devs disagree on, and then lobbying the Bitcoin ecosystem.


I agree that it is a waste of time. Many agree. The Bitcoin ecosystem
doesn't really need lobbying - my experience from talking to businesses and
wallet developers months ago is they virtually all see raising capacity as
a no brainer ... and some of them see this debate as despair-inducing
insanity.

What's happened here is that a small number of people have come to believe
they have veto power over changes to Bitcoin, and they have also become
*wildly* out of step with what the wider community wants. That cannot last.
So, short of some sudden change of heart that lets us kick the can down the
road a bit longer, a fork is inevitable.

Just be glad it's Gavin driving this and not me ... or a faceless coalition
of startups.


 Decentralization is the core of Bitcoin's security model and thus that's
 what gives Bitcoin its value.


No. Usage is what gives Bitcoin value.

It's kind of maddening that I have to point this out. Decentralisation is a
means to an end. I first used Bitcoin in April 2009 and it was perfectly
decentralised back then: every wallet was a full node and every computer
was capable of mining.

So if you believe what you just wrote, I guess Bitcoin's value has gone
down every day since.

On the other hand, if you believe the markets, Bitcoin's value has gone up.

Apparently the question of what gives Bitcoin its value is a bit more
complicated than that.




 To incentivise layer 2 and offchain solutions to scale Bitcoin: there are
 promising designs/solutions out there (LN, ChainDB, OtherCoin protocol,
 ...), but most don't get much attention, because right now there is no need
 for them. And I am sure new solutions will be invented.


I have seen this notion a few times. I would like to dispose of it right
now.

I am one of the wallet developers you would be trying to incentivise by
letting Bitcoin break, and I say: get real. Developers are not some
bottomless fountain of work that will spit out whatever you like for free
if you twist their arms badly enough.

The problems that incentivised the creation of Bitcoin existed for decades
before Bitcoin was actually invented. Incentives are not enough. Someone
has to actually do the work, too. All proposals on the table would:

   - Involve enormous amounts of effort from many different people
   - Be technically risky (read: we don't know if they would even work)
   - Not be Bitcoin

The last point is important: people who got interested in Bitcoin and
decided to devote their time to it might not feel the same way about some
network of payment hubs or whatever today's fashion is. Faced with their
work being broken by armchair developers on some mailing list, they might
just say screw it and walk away completely.

After all, as the arguments for these systems are not particularly logical,
they might slave over hot keyboards for a year to support the Lightning
Network or whatever and then discover that it's no longer the fashionable
thing ... and that suddenly an even more convoluted design is being
incentivised.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Yifu Guo
 Nielsen's Law of Internet Bandwidth is simply not true. If you look at
data points like http://www.netindex.com/upload/ they show we are at
least on the right track, but even this is flawed.

The fact of the matter is these speed tests are done from a local origin to
a local destination within the city, let alone the country (see the note
about how these tests are only conducted within 300 miles of the server),
and our current internet infrastructure is set up with CDNs for web and
media consumption.
These data points cannot and should not be used to model the connectivity
of a peer-to-peer network.

Uplink bandwidth is scarce in China and expensive, averaging about $37 per
1mbps after 5, but this is not the real problem. The true issue lies in the
transparent proxies the ISPs run. This is not a problem isolated to just
China; Thailand and various other countries in Asia, like Lebanon, face it
too. I have also heard in various IRC channels that southern France faces a
similar issue due to poor routing configurations that prevent connections
to certain parts of the world; I am unsure whether this is a mistake or a
geopolitical by-product.

As for your question earlier, Gavin: from Dallas, TX to a VPS in Shanghai
on 上海电信/Shanghai Telecom, which is capped at 5mbps, the data match my
concerns. I've gotten as low as 83 Kbit/s and as high as 9.24 Mbit/s,
with other rates in between; none are consistent. Ping averages about 250ms.

The temporary solution I recommend, again from earlier, is MPTCP:
http://www.multipath-tcp.org/ , which allows you to use multiple
interfaces/networks for a single TCP connection. It is mainly developed
for mobile 3G/WiFi handover, but I have found it useful for improving
bandwidth and connection stability on the go by combining a local
wifi/ethernet connection with my 3G phone tether. It allows you to set up a
middlebox somewhere, put a shadowsocks server on it, and on your local
machine route all TCP traffic over the shadowsocks client; MPTCP will
automatically pick the best path for upload and download.



On Mon, Jun 1, 2015 at 9:59 AM, Gavin Andresen gavinandre...@gmail.com
wrote:

 On Mon, Jun 1, 2015 at 7:20 AM, Chun Wang 1240...@gmail.com wrote:

 I cannot believe that Gavin (who seems to have difficulty spelling my
 name correctly) insists on his 20MB proposal regardless of the
 community. BIP66 was introduced a long time ago and no one knows when
 the 95% goal will be met. A change to the maximum block size will
 take a year or more to be adopted. We should increase the limit, and
 increase it now, but 20MB is simply too big and too risky; sometimes
 we need to compromise to push things forward. I would agree with any
 solution lower than 10MB for its first two years.


 Thanks, that's useful!

 What do other people think?  Would starting at a max of 8 or 4 get
 consensus?  Scaling up a little less than Nielsen's Law of Internet
 Bandwidth predicts for the next 20 years?  (I think predictability is
 REALLY important).

 I chose 20 because all of my testing shows it to be safe, and all of my
 back-of-the-envelope calculations indicate the costs are reasonable.

 If consensus is 8 because more than order-of-magnitude increases are
 scary -- ok.

 --
 --
 Gavin Andresen






-- 
*Yifu Guo*
*Life is an everlasting self-improvement.*
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Gavin Andresen
On Sun, May 31, 2015 at 6:55 PM, Alex Mizrahi alex.mizr...@gmail.com
wrote:

 Yes, if you are on a slow network then you are at a (slight) disadvantage.
 So?


 Chun mentioned that his pool is on a slow network, and thus bigger blocks
 give it a disadvantage. (Orphan rate is proportional to block size.)

 You said that no, on the contrary, those who make big blocks have a
 disadvantage. And now you say that yes, this disadvantage exists.


 Did you just lie to Chun?


Chun said that if somebody produced a big block it would take them at least
6 seconds to process it.

He also said he has nodes outside the great firewall ("We also use Aliyun
and Linode cloud services for block propagation").

So I assumed that he was talking about the "what if somebody produces a
block that takes a long time to process" attack -- which doesn't work (the
attacker just increases their own orphan rate).

If the whole network is creating blocks that take everybody (except the
person creating the blocks) six seconds to broadcast+validate, then the
increase in orphan rate is spread out over the whole network. The
network-wide orphan rate goes up, everybody suffers a little (fewer blocks
created over time) until the next difficulty adjustment, then the
difficulty drops, and then everybody is back in the same boat.
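
[As a rough back-of-the-envelope illustration of this effect -- these numbers
are illustrative, not from the original mail: with Poisson block arrival at a
mean interval of $T = 600$ s, a block that takes $\tau$ seconds to reach and
be validated by the rest of the network is orphaned with probability roughly

    $P_{\mathrm{orphan}} \approx 1 - e^{-\tau/T}$

so $\tau = 6$ s gives about $1 - e^{-0.01} \approx 1\%$, spread across
everyone when all blocks propagate equally slowly.]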

If it takes six seconds to validate because of limited bandwidth, then he
should connect via Matt's fast relay network, which optimizes new block
announcements so they take a couple of orders of magnitude less bandwidth.

If it takes six seconds because he's trying to validate on a Raspberry
Pi, then he should buy a better validating machine, and/or help test the
current pending pull requests to make validation faster (e.g.
https://github.com/bitcoin/bitcoin/pull/5835 or
https://github.com/bitcoin/bitcoin/pull/6077 ).

If Chun had six seconds of latency, and he can't pay for a lower-latency
connection (or it is insanely expensive), then there's nothing he can do,
he'll have to live with a higher orphan rate no matter the block size.

-- 
--
Gavin Andresen
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Mike Hearn

 Ignorant. You seem do not understand the current situation. We
 suffered from orphans a lot when we started in 2013. It is now your
 turn.


Then please enlighten me. You're unable to download block templates from a
trusted node outside of the country because the bandwidth requirements are
too high? Or due to some other problem?

With respect to "now it's your turn": let's imagine the hard fork goes
ahead. Let us assume that almost all exchanges, payment processors and
other businesses go with larger blocks, but Chinese miners do not.

Then you will mine coins that will not be recognised on trading platforms
and cannot be sold. Your losses will be much larger than from orphans.

This can happen *even* if Chinese miners who can't/won't scale up have 50% of the
hashrate. SPV clients would need a forced checkpoint, which would be messy
and undesirable, but it's technically feasible. The smaller side of the
chain would cease to exist from the perspective of people actually trading
coins.

If your internet connectivity situation is really so poor that you cannot
handle one or two megabits out of the country then you're hanging on by
your fingernails anyway: your connection to the main internet could become
completely unusable at any time. If that's really the case, it seems to me
that Chinese Bitcoin is unsustainable and what you really need is a
China-specific alt coin that can run entirely within the Chinese internet.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-06-01 Thread Gavin Andresen
RE: going to the public:

I started pushing privately for SOMETHING, ANYTHING to be done, or at the
very least for there to be some coherent plan besides wait and see back
in February.

As for it being unhealthy for me to write the code that I think should be
written and asking people to run it:

Ok. What would you suggest I do? I believe scaling up is the number one
priority right now. I think core devs SHOULD be taking time to solve it,
because I think the uncertainty of how it will be solved (or if it will be
solved) is bad for Bitcoin.

I think working on things like fixing transaction malleability is great...
but the reason to work on that is to enable smart contracts and all sorts
of other interesting new uses of the blockchain. But if we're stuck with
1MB blocks then there won't be room for all of those interesting new uses
on the blockchain.

Others disagree, and have the advantage of status-quo : if nothing is done,
they get what they want.

Based on some comments I've seen, I think there is also a "my own personal
network/computer connection might not be able to handle more transaction
volume" concern. That is NOT a good reason to limit scalability, but I
think it is clouding the judgement of many of the core contributors who
started contributing as a spare-time hobby from their homes (where maybe
they have crappy DSL connections).


RE: decentralization:

I think this is a red-herring. I'll quote something I said on reddit
yesterday:

I don't believe a 20MB max size will increase centralization to any
significant degree.

See
http://gavinandresen.ninja/does-more-transactions-necessarily-mean-more-centralized

and http://gavinandresen.ninja/are-bigger-blocks-better-for-bigger-miners

And I think we will have a lot LESS centralization of payments via services
like Coinbase (or hubs in some future StrawPay/Lightning network) if the
bitcoin network can directly handle more payment volume.

The centralization trade-offs seem very clear to me, and I think the "big
blocks mean more centralized" arguments are either just wrong or are
exaggerated or ignore the tradeoff with payment centralization (I think
that is a lot more important for privacy and censorship resistance).


RE: incentives for off-chain solutions:

I'll quote myself again from
http://gavinandresen.ninja/it-must-be-done-but-is-not-a-panacea :

The “layer 2” services that are being built on top of the blockchain are
absolutely necessary to get nearly instant real-time payments,
micropayments and high volume machine-to-machine payments, to pick just
three examples. The ten-minute settlement time of blocks on the network is
not fast enough for those problems, and it will be the ten minute block
interval that drives development of those off-chain innovations more than
the total number of transactions supported.

On Mon, Jun 1, 2015 at 8:45 AM, Jérôme Legoupil jjlegou...@gmail.com
wrote:

 If during the 1MB bumpy period something goes wrong, consensus among the
 community would be reached easily if necessary.


That is the problem: this will be a frog in boiling water problem. I
believe there will be no sudden crisis-- instead, transactions will just
get increasingly unreliable and expensive, driving more and more people
away from Bitcoin towards... I don't know what. Some less expensive, more
reliable, probably more-centralized solution.

The Gavin 20MB proposal is compromising Bitcoin's long-term security in an
 irreversible way, for gaining short-term better user experience.


If by "long-term security" you mean "will transaction fees be high enough to
pay for enough hashing power to secure the network if there are bigger
blocks?", I've written about that:
http://gavinandresen.ninja/block-size-and-miner-fees-again


If you mean something else, then please be specific.

-- 
--
Gavin Andresen
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Gavin Andresen
On Mon, Jun 1, 2015 at 7:20 AM, Chun Wang 1240...@gmail.com wrote:

 I cannot believe that Gavin (who seems to have difficulty spelling my
 name correctly) insists on his 20MB proposal regardless of the
 community. BIP66 was introduced a long time ago and no one knows when
 the 95% goal will be met. A change to the maximum block size will
 take a year or more to be adopted. We should increase the limit, and
 increase it now, but 20MB is simply too big and too risky; sometimes
 we need to compromise to push things forward. I would agree with any
 solution lower than 10MB for its first two years.


Thanks, that's useful!

What do other people think?  Would starting at a max of 8 or 4 get
consensus?  Scaling up a little less than Nielsen's Law of Internet
Bandwidth predicts for the next 20 years?  (I think predictability is
REALLY important).
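
[For scale, some illustrative arithmetic that is not part of the original
mail: Nielsen's observation is roughly 50% annual growth in high-end
connection speed, so compounding over 20 years gives

    $1.5^{20} \approx 3.3 \times 10^{3}$, while $1.4^{20} \approx 8.4 \times 10^{2}$,

i.e. even "a little less", say 40% per year, still implies close to three
orders of magnitude of growth over that horizon.]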

I chose 20 because all of my testing shows it to be safe, and all of my
back-of-the-envelope calculations indicate the costs are reasonable.

If consensus is 8 because more than order-of-magnitude increases are
scary -- ok.

-- 
--
Gavin Andresen
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Mike Hearn
I don't see this as an issue of sensitivity or not. Miners are businesses
that sell a service to Bitcoin users - the service of ordering transactions
chronologically. They aren't charities.

If some miners can't provide the service Bitcoin users need any more, then
OK, they should not/cannot mine. Lots of miners have come and gone since
Bitcoin started as different technology generations came and went. That's
just business.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] soft-fork block size increase (extension blocks)

2015-06-01 Thread Adam Back
Mike wrote:
 Businesses who are keen to
 have more transactions would make it their problem to implement it in
 their wallet, or ask the wallet vendor/maintainer they're working with
 to do it.  Nothing breaks if they don't use it.


 I don't see how this is the case. If an exchange supports extension blocks
 and I withdraw from that to a wallet that doesn't, the money will never
 arrive from my perspective. Yet the exchange will claim they sent it and
 they will wash their hands of the matter. Disaster.

To be clear, in case you are missing part of the mechanism: it is
forward- and backward-compatible, meaning a 1MB address can receive
payments from an 8MB address (at reduced security if its software
doesn't understand the extension) and a 1MB address can pay an 8MB address by
paying to an OP_TRUE that has meaning to the extension block nodes.

A 1MB client won't even understand the difference between a 1MB and 8MB
out payment.  An 8MB client will understand and pay 1MB addresses in a
different way (moving the coin back to the 1MB chain).
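
[To make the opt-in direction of that compatibility concrete, a purely
hypothetical wallet-side sketch -- none of these helpers exist in any real
codebase, and this only mirrors the dispatch described above, not a protocol
specification:]

    #include <string>

    // Hypothetical types and helpers, for illustration only.
    struct Payment { std::string description; };

    static bool IsExtensionAddress(const std::string& addr)
    {
        // Assume extension addresses are somehow distinguishable; this prefix is made up.
        return addr.rfind("ext-", 0) == 0;
    }

    static Payment PayWithinExtensionBlock(const std::string& a) { return {"extension-block tx to " + a}; }
    static Payment PayOnBaseChain(const std::string& a)          { return {"base-chain tx to " + a}; }
    static Payment MoveCoinBackToBaseChain(const std::string& a) { return {"exit tx back to the base chain for " + a}; }

    // An extension-aware wallet chooses the path per recipient; an old
    // (1MB-only) wallet simply never takes the extension branch.
    Payment CreatePayment(const std::string& recipient, bool walletUnderstandsExtension)
    {
        if (!walletUnderstandsExtension)
            return PayOnBaseChain(recipient);           // legacy behaviour, unchanged
        if (IsExtensionAddress(recipient))
            return PayWithinExtensionBlock(recipient);  // opt-in: cheaper, lower security
        return MoveCoinBackToBaseChain(recipient);      // extension funds paying a 1MB address
    }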

So it's opt-in and incrementally deployable.  Exchanges could encourage
their users to use wallets that support 8MB blocks, e.g. by charging a
fee for 1MB transactions.  If 1MB blocks experience significant fee
pressure, this will be persuasive.  Or they could choose not to and eat
the cost.  This is all normal market adoption of a new, cheaper
technical option (in this case with a trade-off of reduced
security/more centralisation for those opting in to it).

 Because the more complex one is safer, more flexible, more future
 proof and better for decentralisation

 I disagree with all of those points. I find Lightning/Stroem etc to be more
 dangerous, less flexible, and worse for decentralisation. I explain why
 here:

Extension blocks and lightning are unrelated things.

While I understand the need for being practical, there is, IMO, amongst
engineering maxims such a thing as being too pragmatic -
dangerously pragmatic, even.  We can't do stuff in bitcoin that has bad
carry costs, nor throw out the baby with the bathwater.

The situation is just that we are facing a security-vs-volume trade-off,
and different people will have different requirements and comfort
zones.  If I am not misremembering, I think you've typically sided
with the huge-block, big-data-center-only end of the spectrum.  What I
am proposing empowers you to do experiments in that direction without
getting into a requirements conflict with people who value more
strongly the bitcoin properties arising from it being robustly
decentralised.

I am not sure personally where the blocksize discussion comes out -
perhaps it stays as-is for a year in a wait-and-see mode: reduce spam, see
fee pressure take effect as it has before, work on improving
decentralisation metrics and relay latency, and do a blocksize increment
to kick the can if and when it becomes necessary, while in the meantime
trying to do something more long-term ambitious about scale rather than
volume.  Bitcoin without scale improvements probably won't get the
volume people would like.  So scale is more important than volume; and
security (decentralisation) is important too.  To take the extreme analogy,
we could fix scale tomorrow by throwing up a single high-performance
database, but then we'd break the security properties arising from
decentralisation.  We should improve both within an approximately safe
envelope, IMO.

Adam

--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


[Bitcoin-development] Why do we need a MAX_BLOCK_SIZE at all?

2015-06-01 Thread Jim Phillips
Ok, I understand at least some of the reason that blocks have to be kept to
a certain size. I get that blocks which are too big will be hard to
propagate by relays. Miners will have more trouble uploading the large
blocks to the network once they've found a hash. We need block size
constraints to create a fee economy for the miners.

But these all sound to me like issues that affect some, but not others. So
it seems to me like it ought to be a configurable setting. We've already
witnessed with last week's stress test that most miners aren't even
creating 1MB blocks but are still using the software defaults of 730k. If
there are configurable limits, why does there have to be a hard limit?
Can't miners just use the configurable limit to decide what size blocks
they can afford to and are thus willing to create? They could just as
easily use that to create a fee economy. If the miners with the most
hashpower are not willing to mine blocks larger than 1 or 2 megs, then they
are able to slow down confirmations of transactions. It may take several
blocks before a miner willing to include a particular transaction finds a
block. This would actually force miners to compete with each other and find
a block size naturally instead of having it forced on them by the protocol.
Relays would be able to participate in that process by restricting the
miners' ability to propagate large blocks. You know, like what happens in a
FREE MARKET economy, without burdensome regulation which can be manipulated
through politics? Isn't that what's really happening right now? Different
political factions with different agendas are fighting over how best to
regulate the Bitcoin protocol.
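
[For reference, the per-miner soft cap mentioned above is already just a
local policy setting today; a bitcoin.conf sketch along these lines -- the
values are illustrative, not recommendations:]

    # bitcoin.conf -- miner policy settings (local policy, not consensus rules)
    # Soft cap on the size of blocks this node will *create*; other nodes
    # still accept anything up to the network's hard limit.
    blockmaxsize=1000000
    # Space reserved for high-priority, low-fee transactions.
    blockprioritysize=50000
    # Minimum fee rate (BTC/kB) for relaying transactions.
    minrelaytxfee=0.00001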

I know the limit was originally put in place to prevent spamming. But that
was when we were mining with CPUs and just beginning to see the occasional
GPU which could take control over the network and maliciously spam large
blocks. But with ASIC mining now catching up to Moore's Law, that's not
really an issue anymore. No single malicious entity can really just take over
the network now without spending more money than it's worth -- and that's
just going to get truer with time as hashpower continues to grow. And it's
not like the hard limit really does anything anymore to prevent spamming.
If a spammer wants to create thousands or millions of transactions, a hard
limit on the block size isn't going to stop him. He'll just fill up the
mempool or UTXO database instead of someone's block database. And block
storage media is generally the cheapest storage. I mean, they could be
written to tape and be just as valid as if they're stored in DRAM. Combine
that with pruning, and block storage costs are almost a non-issue for
anyone who isn't running an archival node.

And can't relay nodes just configure a limit on the size of blocks they
will relay? Sure they'd still need to download a big block occasionally,
but that's not really that big a deal, and they're under no obligation to
propagate it. Even if it's a 2GB block, it'll get downloaded eventually.
It's only if it gets to the point where the average home connection is too
slow to keep up with the transaction and block flow that there's any real
issue there, and that would happen regardless of how big the blocks are. I
personally would much prefer to see hardware limits act as the bottleneck
than to introduce an artificial bottleneck into the protocol that has to be
adjusted regularly. The software and protocol are TECHNICALLY capable of
scaling to handle the world's entire transaction set. The real issue with
scaling to this size is limitations on hardware, which are regulated by
Moore's Law. Why do we need arbitrary soft limits? Why can't we allow
Bitcoin to grow naturally within the ever increasing limits of our
hardware? Is it because nobody will ever need more than 640k of RAM?

Am I missing something here? Is there some big reason that I'm overlooking
why there has to be some hard-coded limit on the block size that affects
the entire network and creates ongoing issues in the future?

--

*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] soft-fork block size increase (extension blocks)

2015-06-01 Thread Mike Hearn

 (at reduced security if it has software that doesnt understand it)


Well, yes. Isn't that rather key to the issue?  Whereas by simply
increasing the block size, SPV wallets don't care (same security and
protocol as before) and fully validating wallets can be updated with a very
small code change.


 A 1MB client wont even understand the difference between a 1MB and 8MB
 out payment.


Let's say an old client makes a payment that only gets confirmed in an
extension block. The wallet will think the payment is unconfirmed and show
that to the user forever, no?

Can you walk through the UX for each case?


 If I am not misremembering, I think you've sided typically
 with the huge block, big data center only end of the spectrum.


It would be Satoshi that argued that.

I think there must be a communication issue here somewhere. I'm not sure
how this meme has taken hold amongst you guys, as I am the guy who wrote
the scalability page back in 2011:

https://en.bitcoin.it/wiki/Scalability

It says:

*The core Bitcoin network can scale to much higher transaction rates than
are seen today, assuming that nodes in the network are primarily running on
high end servers rather than desktops. *


By "much higher rates" I meant VISA scale, and by "high end server" I meant
high end by today's standards, not tomorrow's. There's a big difference
between a datacenter and a single server! By definition a single server is
not a datacenter, although it would be conventional to place it in
one. But even
with the most wildly optimistic growth imaginable, I couldn't foresee a
time when you needed more than a single machine to keep up with the
transaction stream.

And we're not going to get to VISA scale any time soon: I don't think I've
ever argued we will. If it does happen it would presumably be decades away.
Again, short of some currently unimagined killer app.

So I don't believe I've ever argued this, and honestly I kinda feel people
are putting words in my mouth.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Warren Togami Jr.
By reversing Mike's language to the reality of the situation I had hoped
people would realize how abjectly ignorant and insensitive his statement
was.  I am sorry to those in the community if they misunderstood my post. I
thought it was obvious that it was sarcasm where I do not seriously believe
particular participants should be excluded.

On Mon, Jun 1, 2015 at 3:06 AM, Thy Shizzle thyshiz...@outlook.com wrote:

  Doesn't mean you should build something that says "fuck you" to the
 companies that have invested in farms of ASICs. To say "Oh yeah, if they
 can't mine it how we want, stuff 'em" is naive. I get decentralisation, but
 don't disincentivise mining. If miners are telling you that you're going
 to hurt them, esp. miners that combined hold over 50% of the hashing power,
 why would you say "too bad, so sad"? Why not just start stripping bitcoin
 out of adopters' wallets? Same thing.
  --
 From: Warren Togami Jr. wtog...@gmail.com
 Sent: 1/06/2015 10:30 PM
 Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Subject: Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

   Whilst it would be nice if miners in *outside* China can carry on
 forever regardless of their internet situation, nobody has any inherent
 right to mine if they can't do the job - if miners in *outside* China
 can't get the trivial amounts of bandwidth required through their firewall *TO
 THE MAJORITY OF THE HASHRATE* and end up being outcompeted then OK, too
 bad, we'll have to carry on without them.


 On Mon, Jun 1, 2015 at 12:13 AM, Mike Hearn m...@plan99.net wrote:

  Whilst it would be nice if miners in China can carry on forever
 regardless of their internet situation, nobody has any inherent right to
 mine if they can't do the job - if miners in China can't get the trivial
 amounts of bandwidth required through their firewall and end up being
 outcompeted then OK, too bad, we'll have to carry on without them.

  But I'm not sure why it should be a big deal. They can always run a node
 on a server in Taiwan and connect the hardware to it via a VPN or so.


--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Roy Badami
On Mon, Jun 01, 2015 at 09:01:49PM +0100, Roy Badami wrote:
  What do other people think?  Would starting at a max of 8 or 4 get
  consensus?  Scaling up a little less than Nielsen's Law of Internet
  Bandwidth predicts for the next 20 years?  (I think predictability is
  REALLY important).
 
 TL;DR: Personally I'm in favour of doing something relatively
 uncontroversial (say, a simple increase in the block size to something
 in the 4-8MB range) with no further increases without a further hard
 fork.

And the other bit I should have added to my TL;DR:

If we end up spending a significant proportion of the next 20 years
discussing the then _next_ hard fork, that's a *good* thing, not a
*bad* thing.  Hard forks need to become, if not entirely routine, then
certainly less scary.  A sequence of (relatively) uncontroversial hard
forks over time is way more likely to gain consensus than a single
hard fork that attempts to set a schedule for block size increases out
to 2035.  IMHO.

 
 I'm not sure how relevant Nielsen's Law really is.  The only relevant
 data points Nielsen has really boil down to a law about how the speed
 of his cable modem connection has changed during the period 1998-2014.
 
 Interesting though that is, it's not hugely relevant to
 bandwidth-intensive operations like running a full node.  The problem
 is he's only looking at the actual speed of his connection in Mbps,
 not the amount of data usage in GB/month that his provider permits -
 and there's no particular reason to expect that both of those two
 figures follow the same curve.  In particular, we're more interested
 in the cost of backhaul and IP transit (which is what drives the
 GB/month figure) than we are in improvements in DOCSIS technology,
 which have little relevance to node operators even on cable modem, and
 none to any other kind of full node operator, be it on DSL or in a
 datacentre.
 
 More importantly, I also think a scheduled ramp up is an unnecessary
 complication.  Why do we need to commit now to future block size
 increases perhaps years into the future?  I'd rather schedule an
 uncontroversial hard fork now (if such thing is possible) even if
 there's a very real expectation - even an assumption - that by the
 time the fork has taken place, it's already time to start discussing
 the next one.  Any curve or schedule of increases that stretches years
 into the future is inevitably going to be controversial - and more so
 the further into the future it stretches - simply because the
 uncertainties around the Bitcoin landscape are going to be greater the
 further ahead we look.
 
 If a simple increase from 1MB to 4MB or 8MB will solve the problem for
 now, why not do that?  Yes, it's quite likely we'll have to do it
 again, but we'll be able to make that decision in the light of the
 2016 or 2017 landscape and can again make a simple, hopefully
 uncontroversial, increase in the limit at that time.
 
 So, with the proviso that I think this is all bike shedding, if I had
 to pick my favourite colour for the bike shed, it would be to schedule
 a hard fork that increases the 1MB limit (to something in the 4-8MB
 range) but with no further increases without a further hard fork.
 
 Personally I think trying to pick the best value of the 2035 block
 size now is about as foolish as trying to understand now the economics
 of Bitcoin mining many halvings hence.
 
 NB: this is not saying that I think we shouldn't go above 8MB in the
 relatively foreseeable future; quite the contrary, I strongly expect
 that we will.  I just don't see the need to pick the 2020 block size
 now when we can easily make a far better informed decision as to the
 2020 block size in 2018 or even 2019.
 
 As to knowing what the block size is going to be for the next 20 years
 being REALLY important?  100% disagree.  I also think it's
 impossible, because even if you manage to get consensus on a block
 size increase schedule that stretches out to 2035 (and my prediction
 is you won't) the reality is that that block size schedule will have
 been modified by a future hard fork long before we get to 2035.
 
 What I personally think is REALLY important is that the Bitcoin
 community demonstrates an ability to react appropriately to changing
 requirements and conditions - and we'll only be able to react to those
 conditions when we know what they are!  My expectation is that there
 will be several (hopefully _relatively_ uncontroversial) scheduled
 hard forks between now and 2035, and each of those will be discussed
 in suitable detail before being agreed.  And that's as it should be.
 
 roy

--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Btc Drak
I did wonder what the post actually meant; I recommend appending /s after
sarcasm so it's clear. A lot gets lost in text. But I agree with you, btw -
his response was not particularly tactful.

On Mon, Jun 1, 2015 at 7:19 PM, Warren Togami Jr. wtog...@gmail.com wrote:

 By reversing Mike's language to the reality of the situation I had hoped
 people would realize how abjectly ignorant and insensitive his statement
 was.  I am sorry to those in the community if they misunderstood my post. I
 thought it was obvious that it was sarcasm where I do not seriously believe
 particular participants should be excluded.

 On Mon, Jun 1, 2015 at 3:06 AM, Thy Shizzle thyshiz...@outlook.com
 wrote:

  Doesn't mean you should build something that says "fuck you" to the
 companies that have invested in farms of ASICs. To say "Oh yeah, if they
 can't mine it how we want, stuff 'em" is naive. I get decentralisation, but
 don't disincentivise mining. If miners are telling you that you're going
 to hurt them, esp. miners that combined hold over 50% of the hashing power,
 why would you say "too bad, so sad"? Why not just start stripping bitcoin
 out of adopters' wallets? Same thing.
  --
 From: Warren Togami Jr. wtog...@gmail.com
 Sent: 1/06/2015 10:30 PM
 Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Subject: Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

   Whilst it would be nice if miners in *outside* China can carry on
 forever regardless of their internet situation, nobody has any inherent
 right to mine if they can't do the job - if miners in *outside* China
 can't get the trivial amounts of bandwidth required through their
 firewall *TO THE MAJORITY OF THE HASHRATE* and end up being outcompeted
 then OK, too bad, we'll have to carry on without them.


 On Mon, Jun 1, 2015 at 12:13 AM, Mike Hearn m...@plan99.net wrote:

  Whilst it would be nice if miners in China can carry on forever
 regardless of their internet situation, nobody has any inherent right to
 mine if they can't do the job - if miners in China can't get the trivial
 amounts of bandwidth required through their firewall and end up being
 outcompeted then OK, too bad, we'll have to carry on without them.

  But I'm not sure why it should be a big deal. They can always run a
 node on a server in Taiwan and connect the hardware to it via a VPN or so.






--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Fwd: Block Size Increase Requirements

2015-06-01 Thread Adam Back
So let's rephrase that and say instead, more correctly, that it is the job of
miners (collectively) to be well connected globally - and indeed they
are incentivised to be, or else they tend to receive blocks at higher
latency and so are at increased risk of orphans.  And miner groups
with good block latency in-group and high hashrate are definitionally
the well connected, so the cost of getting good connectivity to
high-hashrate groups is naturally borne by people outside of those groups.
Or that's the incentive, anyway.

Adam


On 1 June 2015 at 19:30, Mike Hearn m...@plan99.net wrote:
 I don't see this as an issue of sensitivity or not. Miners are businesses
 that sell a service to Bitcoin users - the service of ordering transactions
 chronologically. They aren't charities.

 If some miners can't provide the service Bitcoin users need any more, then
 OK, they should not/cannot mine. Lots of miners have come and gone since
 Bitcoin started as different technology generations came and went. That's
 just business.





[Bitcoin-development] We are filling most blocks right now - Let's change the max blocksize default

2015-06-01 Thread Raystonn .
We seem to be experiencing bursts of high transaction rate right now.
https://blockchain.info/ shows nearly all blocks full.  We should increase the
default max block size to 1MB to give us more headroom above the roughly 731KB
blocks we see now, as we don't want to be limited by those who don't bother to
change the settings from the default, and thus probably aren't paying
attention to this whole discussion.
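
For what it's worth, raising the soft cap needs no code change. If I remember
the option name correctly, something like the following in bitcoin.conf lifts
the default (roughly 750,000 bytes at the time of writing) up to the
1,000,000-byte consensus limit - check your version's documentation for the
exact name and default:

    # bitcoin.conf on the miner's node (hedged example)
    blockmaxsize=1000000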


Re: [Bitcoin-development] [BIP draft] Consensus-enforced transaction replacement signalled via sequence numbers

2015-06-01 Thread Stephen Morse
I see, so OP_SEQUENCEVERIFY will have a value pushed onto the stack right
before it, and will then check that the input spending the prevout has an
nSequence corresponding to at least the sequence specified by that stack
value. Good idea! It keeps the script code from depending on external
chain-specific data, which is nice.
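
A minimal sketch of that check as I understand it - my own C++, not Mark's
implementation - assuming the relative lock-time lives in the low 31 bits of
nSequence and is only active when the most-significant bit is set:

    #include <cstdint>
    #include <stdexcept>

    // Fail the script unless the spending input's nSequence encodes a
    // relative lock-time of at least the value pushed on the stack.
    void CheckSequenceVerify(int64_t stackValue, uint32_t txinSequence)
    {
        if (stackValue < 0)
            throw std::runtime_error("negative required lock-time");
        if (!(txinSequence & 0x80000000u))
            throw std::runtime_error("relative lock-time not enabled on this input");
        if (static_cast<int64_t>(txinSequence & 0x7FFFFFFFu) < stackValue)
            throw std::runtime_error("input nSequence below required lock-time");
        // Otherwise behave as a NOP, as CLTV does, and let execution continue.
    }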

Hopefully we can repurpose one of the OP_NOPs for CHECKLOCKTIMEVERIFY and
one for OP_CHECKSEQUENCEVERIFY. Very complementary.

Best,
Stephen


On Tue, Jun 2, 2015 at 12:16 AM, Mark Friedenbach m...@friedenbach.org
wrote:

 You are correct! I am maintaining a 'checksequenceverify' branch in my git
 repository as well, an OP_RCLTV using sequence numbers:

 https://github.com/maaku/bitcoin/tree/checksequenceverify

 Most of the interesting use cases for relative lock-time require an RCLTV
 opcode. What is interesting about this architecture is that it possible to
 cleanly separate the relative lock-time (sequence numbers) from the RCLTV
 opcode (OP_CHECKSEQUENCEVERIFY) both in concept and in implementation. Like
 CLTV, the CSV opcode only checks transaction data and requires no
 contextual knowledge about block headers, a weakness of the other RCLTV
 proposals that violate the clean separation between libscript and
 libconsensus. In a similar way, this BIP proposal only touches the
 transaction validation logic without any impact to script.

 I would like to propose an additional BIP covering the CHECKSEQUENCEVERIFY
 opcode and its enabling applications. But, well, one thing at a time.

 On Mon, Jun 1, 2015 at 8:45 PM, Stephen Morse stephencalebmo...@gmail.com
  wrote:

 Hi Mark,

 Overall, I like this idea in every way except for one: unless I am
 missing something, we may still need an OP_RCLTV even with this being
 implemented.

 In use cases such as micropayment channels where the funds are locked up
 by multiple parties, the enforcement of the relative locktime can be done
 by the first-signing party. So, while your solution would probably work in
 cases like this, where multiple signing parties are involved, there may be
 other, seen or unforeseen, use cases that require putting the relative
 locktime right into the spending contract (the scriptPubKey itself).
 When there is only one signer, nothing enforces the use of an nSequence
 and nVersion=2 combination that would prevent spending the output until a
 certain time.

 I hope this is received as constructive criticism; I do think this is an
 innovative idea. In my view, though, it seems to be less fully-featured
 than just repurposing an OP_NOP to create OP_RCLTV. The benefits are
 obviously that it saves transaction space by repurposing unused space, and
 would likely work for most cases where an OP_RCLTV would be needed.

 Best,
 Stephen

 On Mon, Jun 1, 2015 at 9:49 PM, Mark Friedenbach m...@friedenbach.org
 wrote:

 I have written a reference implementation and BIP draft for a soft-fork
 change to the consensus-enforced behaviour of sequence numbers for the
 purpose of supporting transaction replacement via per-input relative
 lock-times. This proposal was previously discussed on the mailing list in
 the following thread:

 http://sourceforge.net/p/bitcoin/mailman/message/34146752/

 In short summary, this proposal seeks to enable safe transaction
 replacement by re-purposing the nSequence field of a transaction input to
 be a consensus-enforced relative lock-time.

 The advantages of this approach are that it makes use of the full range
 of the 32-bit sequence number which until now has rarely been used for
 anything other than a boolean control over absolute nLockTime, and it does
 so in a way that is semantically compatible with the originally envisioned
 use of sequence numbers for fast mempool transaction replacement.

 The disadvantages are that external constraints often prevent the full
 range of sequence numbers from being used when interpreted as a relative
 lock-time, and re-purposing nSequence as a relative lock-time precludes its
 use in other contexts. The latter point has been partially addressed by
 having the relative lock-time semantics be enforced only if the
 most-significant bit of nSequence is set. This preserves 31 bits for
 alternative use when relative lock-times are not required.

 The BIP draft can be found at the following gist:

 https://gist.github.com/maaku/be15629fe64618b14f5a

 The reference implementation is available at the following git
 repository:

 https://github.com/maaku/bitcoin/tree/sequencenumbers

 I request that the BIP editor please assign a BIP number for this work.

 Sincerely,
 Mark Friedenbach







Re: [Bitcoin-development] Meta suggestions for this block size debate

2015-06-01 Thread Ethan Heilman
I second this; I don't have time to read the large number of emails
generated every day by the block size debate. A summary of the various
positions and arguments would be extremely helpful.

On Mon, Jun 1, 2015 at 11:02 PM, gabe appleton gapplet...@gmail.com wrote:

 Also, can we try to get a wiki page for the debate? That way we could
 condense the information as much as possible. I'll be willing to assist if
 the page gets approval.
 On Jun 1, 2015 6:41 PM, Mats Henricson m...@henricson.se wrote:

 Hi!

 My fingers have been itching many times now; this debate
 drives me nuts.

 I just wish all posters could follow two simple principles:

 1. Read up. Yes. All of what has been written. Yes, it will
take many hours. But if you're rehashing what other
smarter people have said over and over before, you're
 wasting hundreds of people's time. Please don't.

 2. Be helpful. Suggest alternatives. Just criticising is
just destructive. If you want no change, then say so.

 Mats









Re: [Bitcoin-development] [BIP draft] Consensus-enforced transaction replacement signalled via sequence numbers

2015-06-01 Thread Stephen Morse
Hi Mark,

Overall, I like this idea in every way except for one: unless I am missing
something, we may still need an OP_RCLTV even with this being implemented.

In use cases such as micropayment channels where the funds are locked up by
multiple parties, the enforcement of the relative locktime can be done by
the first-signing party. So, while your solution would probably work in
cases like this, where multiple signing parties are involved, there may be
other, seen or unforeseen, use cases that require putting the relative
locktime right into the spending contract (the scriptPubKey itself). When
there is only one signer, nothing enforces the use of an nSequence and
nVersion=2 combination that would prevent spending the output until a certain
time.

I hope this is received as constructive criticism; I do think this is an
innovative idea. In my view, though, it seems to be less fully-featured
than just repurposing an OP_NOP to create OP_RCLTV. The benefits are
obviously that it saves transaction space by repurposing unused space, and
would likely work for most cases where an OP_RCLTV would be needed.

Best,
Stephen

On Mon, Jun 1, 2015 at 9:49 PM, Mark Friedenbach m...@friedenbach.org
wrote:

 I have written a reference implementation and BIP draft for a soft-fork
 change to the consensus-enforced behaviour of sequence numbers for the
 purpose of supporting transaction replacement via per-input relative
 lock-times. This proposal was previously discussed on the mailing list in
 the following thread:

 http://sourceforge.net/p/bitcoin/mailman/message/34146752/

 In short summary, this proposal seeks to enable safe transaction
 replacement by re-purposing the nSequence field of a transaction input to
 be a consensus-enforced relative lock-time.

 The advantages of this approach are that it makes use of the full range of
 the 32-bit sequence number which until now has rarely been used for
 anything other than a boolean control over absolute nLockTime, and it does
 so in a way that is semantically compatible with the originally envisioned
 use of sequence numbers for fast mempool transaction replacement.

 The disadvantages are that external constraints often prevent the full
 range of sequence numbers from being used when interpreted as a relative
 lock-time, and re-purposing nSequence as a relative lock-time precludes its
 use in other contexts. The latter point has been partially addressed by
 having the relative lock-time semantics be enforced only if the
 most-significant bit of nSequence is set. This preserves 31 bits for
 alternative use when relative lock-times are not required.

 The BIP draft can be found at the following gist:

 https://gist.github.com/maaku/be15629fe64618b14f5a

 The reference implementation is available at the following git repository:

 https://github.com/maaku/bitcoin/tree/sequencenumbers

 I request that the BIP editor please assign a BIP number for this work.

 Sincerely,
 Mark Friedenbach






Re: [Bitcoin-development] soft-fork block size increase (extension blocks)

2015-06-01 Thread Tom Harding
On 6/1/2015 10:21 AM, Adam Back wrote:
  if it stays as is for a year, in a wait-and-see mode: reduce spam, see
  fee-pressure take effect as it has before, work on improving
  decentralisation metrics and relay latency, and do a blocksize increment
  to kick the can if-and-when it becomes necessary, and in the meantime
  try to do something more long-term ambitious about scale rather than
  volume.

What's your estimate of the lead time required to kick the can,
if-and-when it becomes necessary?

The other time-series I've seen all plot an average block size.  That's
misleading, because there's a distribution of block sizes.  If you bin
by retarget interval and plot every single block, you get this

http://i.imgur.com/5Gfh9CW.png

The max block size has clearly been in play for 8 months already.
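
For anyone who wants to reproduce that picture, a minimal sketch of the
binning step, assuming (height, size-in-bytes) pairs have already been
exported from the chain; the entries below are placeholders, not real data:

    #include <cstdio>
    #include <map>
    #include <utility>
    #include <vector>

    int main()
    {
        // Placeholder data: substitute exported (height, size-in-bytes) pairs.
        std::vector<std::pair<int, int>> blocks = {
            {356000, 731000}, {356001, 249000}, {358020, 998000}
        };
        std::map<int, int> maxSizePerInterval;  // retarget interval -> largest block seen
        for (const auto& b : blocks) {
            int interval = b.first / 2016;      // 2016-block retarget interval
            int& best = maxSizePerInterval[interval];
            if (b.second > best)
                best = b.second;
        }
        for (const auto& kv : maxSizePerInterval)
            std::printf("interval %d: largest block %d bytes\n", kv.first, kv.second);
        return 0;
    }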





[Bitcoin-development] [BIP draft] Consensus-enforced transaction replacement signalled via sequence numbers

2015-06-01 Thread Mark Friedenbach
I have written a reference implementation and BIP draft for a soft-fork
change to the consensus-enforced behaviour of sequence numbers for the
purpose of supporting transaction replacement via per-input relative
lock-times. This proposal was previously discussed on the mailing list in
the following thread:

http://sourceforge.net/p/bitcoin/mailman/message/34146752/

In short summary, this proposal seeks to enable safe transaction
replacement by re-purposing the nSequence field of a transaction input to
be a consensus-enforced relative lock-time.

The advantages of this approach are that it makes use of the full range of
the 32-bit sequence number which until now has rarely been used for
anything other than a boolean control over absolute nLockTime, and it does
so in a way that is semantically compatible with the originally envisioned
use of sequence numbers for fast mempool transaction replacement.
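
For readers less familiar with that boolean control: under the existing
rules, as I understand them, nLockTime only takes effect when at least one
input opts in with a non-final sequence number, roughly:

    #include <cstdint>
    #include <vector>

    // Sketch of the legacy rule: nLockTime is ignored unless some input
    // opts in by carrying a non-final sequence number.
    bool IsLockTimeEnforced(const std::vector<uint32_t>& inputSequences)
    {
        for (uint32_t nSequence : inputSequences)
            if (nSequence != 0xFFFFFFFFu)  // any non-final input enables nLockTime
                return true;
        return false;                      // all inputs final: nLockTime has no effect
    }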

The disadvantages are that external constraints often prevent the full
range of sequence numbers from being used when interpreted as a relative
lock-time, and re-purposing nSequence as a relative lock-time precludes its
use in other contexts. The latter point has been partially addressed by
having the relative lock-time semantics be enforced only if the
most-significant bit of nSequence is set. This preserves 31 bits for
alternative use when relative lock-times are not required.
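
To make the rule concrete, here is a hedged sketch of the per-input check as
I read the draft - not the reference implementation - assuming the low 31
bits encode a minimum age, in blocks, of the output being spent:

    #include <cstdint>

    // If the most-significant bit of nSequence is set, require that the coin
    // being spent has matured by at least the relative lock-time encoded in
    // the remaining 31 bits; otherwise impose no constraint.
    bool CheckInputMaturity(uint32_t nSequence, int coinHeight, int spendHeight)
    {
        if (!(nSequence & 0x80000000u))
            return true;  // bit clear: no relative lock-time on this input
        uint32_t relativeLock = nSequence & 0x7FFFFFFFu;
        return spendHeight >= coinHeight &&
               static_cast<uint32_t>(spendHeight - coinHeight) >= relativeLock;
    }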

The BIP draft can be found at the following gist:

https://gist.github.com/maaku/be15629fe64618b14f5a

The reference implementation is available at the following git repository:

https://github.com/maaku/bitcoin/tree/sequencenumbers

I request that the BIP editor please assign a BIP number for this work.

Sincerely,
Mark Friedenbach


Re: [Bitcoin-development] Meta suggestions for this block size debate

2015-06-01 Thread gabe appleton
I don't have permission to create a page. If someone else does, I'll
happily get a framework started.

On Mon, Jun 1, 2015 at 11:32 PM, Ethan Heilman eth...@gmail.com wrote:

 I second this; I don't have time to read the large number of emails
 generated every day by the block size debate. A summary of the various
 positions and arguments would be extremely helpful.

 On Mon, Jun 1, 2015 at 11:02 PM, gabe appleton gapplet...@gmail.com
 wrote:

 Also, can we try to get a wiki page for the debate? That way we could
 condense the information as much as possible. I'll be willing to assist if
 the page gets approval.
 On Jun 1, 2015 6:41 PM, Mats Henricson m...@henricson.se wrote:

 Hi!

 My fingers have been itching many times now; this debate
 drives me nuts.

 I just wish all posters could follow two simple principles:

 1. Read up. Yes. All of what has been written. Yes, it will
take many hours. But if you're rehashing what other
smarter people have said over and over before, you're
 wasting hundreds of people's time. Please don't.

 2. Be helpful. Suggest alternatives. Just criticising is
just destructive. If you want no change, then say so.

 Mats










Re: [Bitcoin-development] Meta suggestions for this block size debate

2015-06-01 Thread gabe appleton
Also, can we try to get a wiki page for the debate? That way we could
condense the information as much as possible. I'll be willing to assist if
the page gets approval.
On Jun 1, 2015 6:41 PM, Mats Henricson m...@henricson.se wrote:

 Hi!

 My fingers have been itching many times now; this debate
 drives me nuts.

 I just wish all posters could follow two simple principles:

 1. Read up. Yes. All of what has been written. Yes, it will
take many hours. But if you're rehashing what other
smarter people have said over and over before, you're
 wasting hundreds of people's time. Please don't.

 2. Be helpful. Suggest alternatives. Just criticising is
just destructive. If you want no change, then say so.

 Mats


