Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Eric Lombrozo via bitcoin-dev
 On Thu, Jul 23, 2015 at 3:14 PM, Eric Lombrozo elombr...@gmail.com wrote:
 Mainstream usage of cryptocurrency will be enabled primarily by direct 
 party-to-party contract negotiation…with the use of the blockchain primarily 
 as a dispute resolution mechanism. The block size isn’t about scaling but 
 about supply and demand of finite resources. As demand for block space 
 increases, we can address it either by increasing computational resources 
 (block size) or by increasing fees. But to do the former we need a way to 
 offset the increase in cost by making sure that those who contribute said 
 resources have incentive to do so.’

I should also point out, improvements in hardware and network infrastructure 
can also reduce costs…and we could very well have a model where resource 
requirements can be increased as technology improves. However, currently, the 
computational cost of validation is clearly growing far more quickly than the 
cost of computational resources is going down. There are 7,000,000,000 people 
in the world. Payment networks in the developed world already regularly handle 
thousands of transactions a second. Even with highly optimized block 
propagation, pruning, and signature validation, we're still many orders of 
magnitude shy of being able to satisfy demand. To achieve mainstream adoption, we'll have to 
being able to satisfy demand. To achieve mainstream adoption, we’ll have to 
pass through a period of quasi-exponential growth in userbase (until the market 
saturates…or until the network resources run out). Unless we’re able to achieve 
a validation complexity of O(polylog n) or better, it’s not a matter of having 
a negative attitude about the prospects…it’s just math. Whether we have 2MB or 
20MB or 100MB blocks (even assuming the above mentioned optimizations and that 
the computational resources exist and are willing to handle it) we will not be 
able to satisfy demand if we insist on requiring global validation for all 
transactions.
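
To put rough numbers on this - a back-of-envelope sketch in Python, where the 
250-byte average transaction and two transactions per person per day are 
illustrative assumptions rather than figures from this thread:

AVG_TX_BYTES = 250          # assumed average transaction size
BLOCK_INTERVAL_S = 600      # target inter-block time

def capacity_tps(block_size_mb):
    """On-chain transactions per second for a given block size."""
    return block_size_mb * 1_000_000 / AVG_TX_BYTES / BLOCK_INTERVAL_S

# World demand if 7 billion people each make just 2 transactions a day:
demand_tps = 7_000_000_000 * 2 / 86_400    # ~162,000 tx/s

for mb in (1, 2, 20, 100):
    print(f"{mb:>3} MB blocks: {capacity_tps(mb):7.1f} tx/s, "
          f"~{demand_tps / capacity_tps(mb):,.0f}x short of demand")

Even 100 MB blocks come out hundreds of times short under these assumptions, 
which is the sense in which this is just math.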


 On Jul 23, 2015, at 1:26 PM, Jorge Timón jti...@jtimon.cc wrote:
 
 On Thu, Jul 23, 2015 at 9:52 PM, Jameson Lopp via bitcoin-dev
 bitcoin-dev@lists.linuxfoundation.org wrote:
 Running a node certainly has real-world costs that shouldn't be ignored.
 There are plenty of advocates who argue that Bitcoin should strive to keep
 it feasible for the average user to run their own node (as opposed to
 Satoshi's vision of beefy servers in data centers.) My impression is that
 even most of these advocates agree that it will be acceptable to eventually
 increase block sizes as resources become faster and cheaper because it won't
 be 'pricing out' the average user from running their own node. If this is
 the case, it seems to me that we have a problem given that there is no
 established baseline for the acceptable performance / hardware cost
 requirements to run a node. I'd really like to see further clarification
 from these advocates around the acceptable cost of running a node and how we
 can measure the global reduction in hardware and bandwidth costs in order to
 establish a baseline that we can use to justify additional resource usage by
 nodes.
 
 Although I don't have a concrete proposal myself, I agree that
 without having any common notion of what the minimal target hardware
 looks like, it is very difficult to discuss other things that depend
 on that.
 If there's data that shows that a 100 USD Raspberry Pi with a 1 MB
 connection in, say, India (I actually have no idea about internet
 speeds there) can run a viable full node at block size X, then I
 don't think anybody can reasonably oppose raising the block size to
 X, and such a hardfork could be perfectly uncontroversial.
 I'm exaggerating ultra-low specifications, but it's just an example to
 illustrate your point.
 There was a thread about formalizing such minimum hardware
 requirements, but I think the discussion simply finished there:
 - Let's do this
 - Yeah, let's do it
 - +1, let's have concrete values, I generally agree.





Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Eric Lombrozo via bitcoin-dev
I should also add that I think those who claim that fee pressure will scare 
away users and break the industry are *seriously* underestimating human 
ingenuity in the face of a challenge. We can do this - we can overcome this 
obstacle…we can find good solutions to a fee market. Unless someone can come up 
with another way to pay for the operation of the network, we NEED to do this. 
What makes anyone think it will be easier to do later rather than now? The 
longer we wait, the lower block rewards get, the larger the deployed 
infrastructure, the larger our userbase, the HARDER it will be to solve it. We 
should solve it now - we will be much better off for it…and so will our users.


 On Jul 23, 2015, at 4:57 PM, Eric Lombrozo elombr...@gmail.com wrote:
 
 
 On Jul 23, 2015, at 4:42 PM, Benedict Chan ben...@fragnetics.com wrote:
 
 Scaling the network will come in the form of a combination of many
 optimizations. Just because we do not know for sure how to eventually
 serve 7 billion people does not mean we should make decisions on
 global validation that impact our ability to serve the current set of
 users.
 
 Agreed. But I believe the economic and security arguments I gave regarding 
 fees and incentives still hold and are largely separate from the scalability 
 issue. Please correct me if I overlooked something.
 
 
 Also, blocking a change because "it's more important to address issues
 such as... other improvements" will further slow down the discussion.
 I believe an increase will not prevent the development of other
 improvements that we need - in contrast, the sooner we can get over
 the limit (which, as you agree, needs to be changed at some point),
 the sooner we can get back to work.
 
 An increase in block size at this time will exacerbate security concerns 
 around nodes relying on other nodes to validate (particularly miners and 
 wallets). It’s not really a matter of having limited developer resources that 
 need to be budgeted, as you seem to suggest.
 
 Regarding developments on properly handling fees, there must exist the 
 economic need for it before there’s an earnest effort to solve it. Increasing 
 the block size right now will, in all likelihood, delay this effort. I’d much 
 prefer to first let the fee market evolve because it’s a crucial component to 
 the protocol’s design and its security model…and so we can get a better sense 
 for fee economics. Then we might be able to figure out better approaches to 
 block size changes in the future that makes sense economically…perhaps with 
 mechanisms that can dynamically adjust it to reflect resource availability 
 and network load.





[bitcoin-dev] BIP Draft: Minimum Viable TXIn Hash

2015-07-23 Thread Jeremy Rubin via bitcoin-dev
Please see the following draft BIP which should decrease the amount of
bytes needed per transaction. This is very much a draft BIP, as the design
space for this type of improvement is large.

This BIP can be rolled out by a soft fork.

Improvements are around 12% for a standard one-input, two-output transaction, 
and even more with more input hashes.

https://gist.github.com/JeremyRubin/e175662d2b8bf814a688
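
For a rough sense of where a figure like 12% comes from, here is a sketch 
using typical P2PKH transaction sizes; the 6-byte truncated hash is a 
placeholder assumption, not a value taken from the draft:

FULL_HASH = 32     # bytes in a full prevout txid
SHORT_HASH = 6     # hypothetical minimum-viable hash prefix

def tx_size(n_in, n_out, hash_bytes=FULL_HASH):
    """Approximate size in bytes of a P2PKH transaction."""
    txin = hash_bytes + 4 + 1 + 107 + 4   # outpoint, script len, scriptSig, sequence
    txout = 8 + 1 + 25                    # value, script len, scriptPubKey
    return 4 + 1 + n_in * txin + 1 + n_out * txout + 4

before, after = tx_size(1, 2), tx_size(1, 2, SHORT_HASH)
print(f"{before} -> {after} bytes: {1 - after / before:.1%} saved")
# 226 -> 200 bytes: 11.5% saved, and proportionally more with more inputs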


Re: [bitcoin-dev] Libconsensus separated repository (was Bitcoin Core and hard forks)

2015-07-23 Thread Jorge Timón via bitcoin-dev
On Thu, Jul 23, 2015 at 4:57 PM, Milly Bitcoin via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
 On 7/23/2015 10:30 AM, Jorge Timón via bitcoin-dev wrote:

 [4] http://lmgtfy.com/?q=mike+hearn+dictator&l=1

Mike has sincerely said that he would like Bitcoin Core to have a
benevolent dictator like other free software projects, and I wanted
to make clear that I wasn't putting words in his mouth but it's
actually something very easy to find on the internet. But I now
realize that the search can be interpreted as me calling him dictator
or something of the sort. That wasn't my intention. In fact, Mike's
point of view on Bitcoin Core development wasn't even relevant for my
example so I shouldn't even have mentioned him in the first place. I
apologize for both mistakes, but please let's keep this thread focused
on libconsensus.

 You spend too much time on reddit.

I actually don't spend much time on reddit: I don't particularly like
it. But I do spend some time on reddit, so I agree: I spend too much
time on reddit.


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Benedict Chan via bitcoin-dev
On Thu, Jul 23, 2015 at 1:52 PM, Eric Lombrozo via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
 On Thu, Jul 23, 2015 at 3:14 PM, Eric Lombrozo elombr...@gmail.com wrote:

 Mainstream usage of cryptocurrency will be enabled primarily by direct
 party-to-party contract negotiation…with the use of the blockchain primarily
 as a dispute resolution mechanism. The block size isn’t about scaling but
 about supply and demand of finite resources. As demand for block space
 increases, we can address it either by increasing computational resources
 (block size) or by increasing fees. But to do the former we need a way to
 offset the increase in cost by making sure that those who contribute said
 resources have incentive to do so.’


 I should also point out, improvements in hardware and network infrastructure
 can also reduce costs…and we could very well have a model where resource
 requirements can be increased as technology improves. However, currently,
 the computational cost of validation is clearly growing far more quickly
 than the cost of computational resources is going down. There are
 7,000,000,000 people in the world. Payment networks in the developed world
 already regularly handle thousands of transactions a second. Even with
 highly optimized block propagation, pruning, and signature validation, we’re
 still many orders of magnitude shy of being able to satisfy demand. To achieve mainstream
 adoption, we’ll have to pass through a period of quasi-exponential growth in
 userbase (until the market saturates…or until the network resources run
 out). Unless we’re able to achieve a validation complexity of O(polylog n)
 or better, it’s not a matter of having a negative attitude about the
 prospects…it’s just math. Whether we have 2MB or 20MB or 100MB blocks (even
 assuming the above mentioned optimizations and that the computational
 resources exist and are willing to handle it) we will not be able to satisfy
 demand if we insist on requiring global validation for all transactions.


Scaling the network will come in the form of a combination of many
optimizations. Just because we do not know for sure how to eventually
serve 7 billion people does not mean we should make decisions on
global validation that impact our ability to serve the current set of
users.

Also, blocking a change because "it's more important to address issues
such as... other improvements" will further slow down the discussion.
I believe an increase will not prevent the development of other
improvements that we need - in contrast, the sooner we can get over
the limit (which, as you agree, needs to be changed at some point),
the sooner we can get back to work.


 On Jul 23, 2015, at 1:26 PM, Jorge Timón jti...@jtimon.cc wrote:

 On Thu, Jul 23, 2015 at 9:52 PM, Jameson Lopp via bitcoin-dev
 bitcoin-dev@lists.linuxfoundation.org wrote:

 Running a node certainly has real-world costs that shouldn't be ignored.
 There are plenty of advocates who argue that Bitcoin should strive to keep
 it feasible for the average user to run their own node (as opposed to
 Satoshi's vision of beefy servers in data centers.) My impression is that
 even most of these advocates agree that it will be acceptable to eventually
 increase block sizes as resources become faster and cheaper because it won't
 be 'pricing out' the average user from running their own node. If this is
 the case, it seems to me that we have a problem given that there is no
 established baseline for the acceptable performance / hardware cost
 requirements to run a node. I'd really like to see further clarification
 from these advocates around the acceptable cost of running a node and how we
 can measure the global reduction in hardware and bandwidth costs in order to
 establish a baseline that we can use to justify additional resource usage by
 nodes.


 Although I don't have a concrete proposal myself, I agree that
 without having any common notion of what the minimal target hardware
 looks like, it is very difficult to discuss other things that depend
 on that.
 If there's data that shows that a 100 USD Raspberry Pi with a 1 MB
 connection in, say, India (I actually have no idea about internet
 speeds there) can run a viable full node at block size X, then I
 don't think anybody can reasonably oppose raising the block size to
 X, and such a hardfork could be perfectly uncontroversial.
 I'm exaggerating ultra-low specifications, but it's just an example to
 illustrate your point.
 There was a thread about formalizing such minimum hardware
 requirements, but I think the discussion simply finished there:
 - Let's do this
 - Yeah, let's do it
 - +1, let's have concrete values, I generally agree.





Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Jorge Timón via bitcoin-dev
On Thu, Jul 23, 2015 at 9:52 PM, Jameson Lopp via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
 Running a node certainly has real-world costs that shouldn't be ignored.
 There are plenty of advocates who argue that Bitcoin should strive to keep
 it feasible for the average user to run their own node (as opposed to
 Satoshi's vision of beefy servers in data centers.) My impression is that
 even most of these advocates agree that it will be acceptable to eventually
 increase block sizes as resources become faster and cheaper because it won't
 be 'pricing out' the average user from running their own node. If this is
 the case, it seems to me that we have a problem given that there is no
 established baseline for the acceptable performance / hardware cost
 requirements to run a node. I'd really like to see further clarification
 from these advocates around the acceptable cost of running a node and how we
 can measure the global reduction in hardware and bandwidth costs in order to
 establish a baseline that we can use to justify additional resource usage by
 nodes.

Although I don't have a concrete proposal myself, I agree that
without having any common notion of what the minimal target hardware
looks like, it is very difficult to discuss other things that depend
on that.
If there's data that shows that a 100 USD Raspberry Pi with a 1 MB
connection in, say, India (I actually have no idea about internet
speeds there) can run a viable full node at block size X, then I
don't think anybody can reasonably oppose raising the block size to
X, and such a hardfork could be perfectly uncontroversial.
I'm exaggerating ultra-low specifications, but it's just an example to
illustrate your point.
There was a thread about formalizing such minimum hardware
requirements, but I think the discussion simply finished there:
- Let's do this
- Yeah, let's do it
- +1, let's have concrete values, I generally agree.


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Eric Lombrozo via bitcoin-dev

 On Jul 23, 2015, at 4:42 PM, Benedict Chan ben...@fragnetics.com wrote:
 
 Scaling the network will come in the form of a combination of many
 optimizations. Just because we do not know for sure how to eventually
 serve 7 billion people does not mean we should make decisions on
 global validation that impact our ability to serve the current set of
 users.

Agreed. But I believe the economic and security arguments I gave regarding fees 
and incentives still hold and are largely separate from the scalability issue. 
Please correct me if I overlooked something.


 Also, blocking a change because "it's more important to address issues
 such as... other improvements" will further slow down the discussion.
 I believe an increase will not prevent the development of other
 improvements that we need - in contrast, the sooner we can get over
 the limit (which, as you agree, needs to be changed at some point),
 the sooner we can get back to work.

An increase in block size at this time will exacerbate security concerns around 
nodes relying on other nodes to validate (particularly miners and wallets). 
It’s not really a matter of having limited developer resources that need to be 
budgeted, as you seem to suggest.

Regarding developments on properly handling fees, there must exist the economic 
need for it before there’s an earnest effort to solve it. Increasing the block 
size right now will, in all likelihood, delay this effort. I’d much prefer to 
first let the fee market evolve because it’s a crucial component to the 
protocol’s design and its security model…and so we can get a better sense for 
fee economics. Then we might be able to figure out better approaches to block 
size changes in the future that makes sense economically…perhaps with 
mechanisms that can dynamically adjust it to reflect resource availability and 
network load.




Re: [bitcoin-dev] Bitcoin Node Speed Test

2015-07-23 Thread Slurms MacKenzie via bitcoin-dev
Yes, that is completely doable for the next crawl; however, I am not sure how 
much that reflects the behavior bitcoind would see when making connections. 
Nodes do not make any attempt to sync with close peers, since preferring close 
peers would be an undesirable property if you are attempting to be sybil 
resistant. With some quick playing around it seems that you do get the expected 
speedup with close proximity, but it's not a particularly huge difference at 
present. I'll keep working on it and see where I get. 


 Sent: Friday, July 24, 2015 at 4:48 AM
 From: Matt Corallo via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org
 To: bitcoin-dev@lists.linuxfoundation.org
 Subject: Re: [bitcoin-dev] Bitcoin Node Speed Test

 You may see much better throughput if you run a few servers around the
 globe and test based on closest-by-geoip. TCP throughput is rather
 significantly affected by latency, though I'm not really sure what you
 should be testing here, ideally.


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Eric Lombrozo via bitcoin-dev
I think it’s pretty clear by now that the assumption that all nodes have pretty 
similar computational resources leads to very misplaced incentives. Ultimately, 
cryptocurrencies will allow direct outsourcing of computation, making it 
possible to distribute computational tasks in an economically sensible way.

Wallets should be assumed to have low computational resources and intermittent 
Internet connections for the foreseeable future if we ever intend for this to 
be a practical payment system, methinks.


 On Jul 23, 2015, at 6:28 PM, Eric Lombrozo elombr...@gmail.com wrote:
 
 I suppose if you use a timelocked output that is spendable by anyone, you 
 could go somewhat in this direction…the thing is, it still means the wallet 
 must make fee estimations rather than being able to get a quick quote.
 
 On Jul 23, 2015, at 6:25 PM, Jean-Paul Kogelman jeanpaulkogel...@me.com 
 wrote:
 
 I think implicit QoS is far simpler to implement, requires fewer parties, and 
 is closer to what Bitcoin started out as: a peer-to-peer digital cash 
 system, not a peer-to-let-me-handle-that-for-you-to-peer system.
 
 jp
 
 On Jul 24, 2015, at 9:08 AM, Eric Lombrozo elombr...@gmail.com wrote:
 
 By using third parties separate from individual miners that do bidding on 
 your behalf, you get a mechanism that allows QoS guarantees and shifts the 
 complexity and risk from a wallet with little computational resources to 
 a service with an abundance of them. Using timelocked contracts it's possible 
 to enforce the guarantees.
 
 Negotiating directly with miners via smart contracts seems difficult at 
 best.
 
 
 On Jul 23, 2015, at 6:03 PM, Jean-Paul Kogelman via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 Doesn't matter.
 
 It's not going to be perfect given the block time variance among other 
 factors but it's far more workable than guessing whether or not your 
 transaction is going to end up in a block at all.
 
 jp
 
 
 On Jul 24, 2015, at 8:53 AM, Peter Todd p...@petertodd.org wrote:
 
 
 
 
 On 23 July 2015 20:49:20 GMT-04:00, Jean-Paul Kogelman via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 And it's obvious how a size cap would interfere with such a QoS scheme.
 Miners wouldn't be able to deliver the below guarantees if they have to
 start excluding transactions.
 
 As mining is a random, Poisson process, obviously giving guarantees 
 without a majority of hashing power isn't possible.
 
 
 
 





[bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis

2015-07-23 Thread Dave Scotese via bitcoin-dev
I used Google to establish that there is not already a post from 2015 that
mentions "roadmap" in the subject line. Such a roadmap would be a good
skeleton for anyone new to the list (like me).

1. Increase the ~7 transactions per second - by increasing the block size.

2. Do something about the trend toward centralization.  This is really two
issues in my mind:
A) Mining is falling to an ever-shrinking number of businesses with the
infrastructure to run a datacenter.
B) The protocol as it is will soon make common computing machines
inadequate for running full nodes, and as a result they will not be able to
contribute to the ecosystem in meaningful ways.

Feel free to copy and then remove or alter any of that.


Re: [bitcoin-dev] Making Electrum more anonymous

2015-07-23 Thread Eric Voskuil via bitcoin-dev
On 07/23/2015 08:42 PM, Slurms MacKenzie via bitcoin-dev wrote:
 From: Eric Voskuil via bitcoin-dev

 From our perspective, another important objective of query privacy is
 allowing the caller to make the trade-off between the relative levels of
 privacy and performance - from absolute to non-existent. In some
 cases privacy is neither required nor desired.

 Prefix filtering accomplishes the client-tuning objective. It also
 does not suffer server collusion attacks nor is it dependent on
 computational bounds. The primary trade-off becomes result set
 (download) size against privacy.

 Keep in mind this is a similar premise to the one claimed for
 BIP37 bloom filters, but faulty assumptions and implementation
 failures in BitcoinJ have meant that bloom filters uniquely identify
 the wallet and offer no privacy for the user no matter what the
 settings are.

Yes, quite true. And without the ability to search using filters there
is no private restore from backup short of downloading the full chain,
rendering the idea rather pointless.

This is why privacy remains a significant issue. Privacy is an essential
aspect of fungibility. This is a central problem for Bitcoin. The
correlation of addresses within transactions is of course problematic.
Possibly zero-knowledge proofs will at some point come to the rescue. But
the correlation of addresses via search works against the benefits of
address non-reuse, and the correlation of addresses to IP addresses
works against the use of private addresses.

Solving the latter two problems can go a long way to reducing the impact
of the former. But currently the only solution is to run a full chain
wallet. This is not a viable solution for many scenarios, and getting
less so.

This is not a problem that can be ignored, nor is it unique to Electrum.
The Bloom filter approach was problematic, but that doesn't preclude the
existence of valid solutions.
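
As a concrete strawman of the prefix filtering mentioned above (a minimal
sketch; the hash choice and index layout are assumptions, not a proposal
detail): the client reveals only the first k bits of an address hash, so a
shorter prefix buys a larger anonymity set at the cost of a larger download.

import hashlib

def prefix(addr, bits):
    """First `bits` bits of the address hash - all the client reveals."""
    digest = hashlib.sha256(addr.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def lookup(index, addr, bits):
    """Server side: return history for every address whose hash shares
    the prefix; the client discards the extra rows locally."""
    want = prefix(addr, bits)
    return [rows for a, rows in index if prefix(a, bits) == want]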

 If you imagine a system where there is somehow complete
 separation and anonymization between all requests and subscriptions,
 the timing still leaks the association between the addresses to the
 listeners.

Well, because of the presumed relationship in time, these are not actually
separate requests. Which is why even the (performance-unrealistic)
option of a distinct Tor route for each independent address request is
*still* problematic.

 The obvious solution to that is to use a very high latency
 mix network, but I somehow doubt that there's any desire for a wallet
 with SPV security that takes a week to return results.

Introducing truly-random timing variations into the mixnet solutions can
mitigate timing attacks, but yes, this just makes the already
intolerable performance problem much worse.

e





Re: [bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis

2015-07-23 Thread Slurms MacKenzie via bitcoin-dev
It's worth noting that even massive companies with $30M of funding don't 
run a single Bitcoin Core node, which runs somewhat against the general concept 
people present of companies having an incentive to run their own node to 
protect their own wallet. 


 Sent: Friday, July 24, 2015 at 4:57 AM
 From: Dave Scotese via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org
 To: bitcoin-dev@lists.linuxfoundation.org
 Subject: [bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis
 
 B) The protocol as it is will soon make common computing machines inadequate 
 for running full nodes, and as a result they will not be able to contribute 
 to the ecosystem in meaningful ways.


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Jean-Paul Kogelman via bitcoin-dev
Miners could include their fee tiers in the coinbase, but this is obviously 
open to manipulation, with little recourse (unless they are a pool and miners 
move away because of it). 

In any event, I think that trying out a solution that is both simple and 
involves the fewest parties necessary is preferable.

Have miners set their tiers, have users select the level of quality they want, 
ignore the block size.

Miners will adapt their tiers depending on how many transactions actually end 
up in them. If for example they set the first tier to be $1 to be included in 
the current block and no user chooses that level of service, they've obviously 
priced themselves out of the market. The opposite is also true; if a tier is 
popular they can choose to increase the cost of that tier.
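
A minimal sketch of that feedback loop (tier names, prices, and thresholds are
all hypothetical):

# Price (in BTC) a miner asks for each level of service.
tiers = {"next-block": 0.0005, "within-3": 0.0002, "within-6": 0.0001}

def reprice(tiers, fill, low=0.1, high=0.9, step=0.9):
    """Adjust tier prices from observed demand: a tier nobody buys
    gets cheaper, an oversubscribed tier gets more expensive."""
    out = {}
    for name, price in tiers.items():
        if fill[name] < low:
            out[name] = price * step
        elif fill[name] > high:
            out[name] = price / step
        else:
            out[name] = price
    return out

# e.g. nobody paid for next-block service, within-3 was nearly full:
print(reprice(tiers, {"next-block": 0.0, "within-3": 0.95, "within-6": 0.4}))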

jp 

 On Jul 24, 2015, at 9:28 AM, Eric Lombrozo elombr...@gmail.com wrote:
 
 I suppose if you use a timelocked output that is spendable by anyone, you 
 could go somewhat in this direction…the thing is, it still means the wallet 
 must make fee estimations rather than being able to get a quick quote.
 
 On Jul 23, 2015, at 6:25 PM, Jean-Paul Kogelman jeanpaulkogel...@me.com 
 wrote:
 
 I think implicit QoS is far simpler to implement, requires fewer parties, and 
 is closer to what Bitcoin started out as: a peer-to-peer digital cash 
 system, not a peer-to-let-me-handle-that-for-you-to-peer system.
 
 jp
 
 On Jul 24, 2015, at 9:08 AM, Eric Lombrozo elombr...@gmail.com wrote:
 
 By using third parties separate from individual miners that do bidding on 
 your behalf, you get a mechanism that allows QoS guarantees and shifts the 
 complexity and risk from a wallet with little computational resources to 
 a service with an abundance of them. Using timelocked contracts it's possible 
 to enforce the guarantees.
 
 Negotiating directly with miners via smart contracts seems difficult at 
 best.
 
 
 On Jul 23, 2015, at 6:03 PM, Jean-Paul Kogelman via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 Doesn't matter.
 
 It's not going to be perfect given the block time variance among other 
 factors but it's far more workable than guessing whether or not your 
 transaction is going to end up in a block at all.
 
 jp
 
 
 On Jul 24, 2015, at 8:53 AM, Peter Todd p...@petertodd.org wrote:
 
 
 
 
 On 23 July 2015 20:49:20 GMT-04:00, Jean-Paul Kogelman via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 And it's obvious how a size cap would interfere with such a QoS scheme.
 Miners wouldn't be able to deliver the below guarantees if they have to
 start excluding transactions.
 
 As mining is a random, Poisson process, obviously giving guarantees 
 without a majority of hashing power isn't possible.
 
 
 


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Jean-Paul Kogelman via bitcoin-dev
Doesn't matter.

It's not going to be perfect given the block time variance among other factors 
but it's far more workable than guessing whether or not your transaction is 
going to end up in a block at all.

jp


 On Jul 24, 2015, at 8:53 AM, Peter Todd p...@petertodd.org wrote:
 
 
 
 
 On 23 July 2015 20:49:20 GMT-04:00, Jean-Paul Kogelman via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 And it's obvious how a size cap would interfere with such a QoS scheme.
 Miners wouldn't be able to deliver the below guarantees if they have to
 start excluding transactions.
 
 As mining is a random, Poisson process, obviously giving guarantees without a 
 majority of hashing power isn't possible.
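
To make the Poisson point concrete, a quick sketch (the hashrate shares are
illustrative):

def p_block_within(h, n):
    """Chance a miner with hashrate share h finds at least one of the
    next n blocks: 1 - (1 - h)**n, which never reaches 1 for h < 1."""
    return 1 - (1 - h) ** n

for h in (0.1, 0.3, 0.5):
    print(f"share {h:.0%}: next block {p_block_within(h, 1):.0%}, "
          f"within 6 blocks {p_block_within(h, 6):.0%}")
# even 50% of the hashrate only reaches ~98% within 6 blocks - no guarantee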
 
 
 


Re: [bitcoin-dev] Bitcoin Node Speed Test

2015-07-23 Thread Matt Corallo via bitcoin-dev
You may see much better throughput if you run a few servers around the
globe and test based on closest-by-geoip. TCP throughput is rather
significantly affected by latency, though I'm not really sure what you
should be testing here, ideally.

On 07/23/15 14:19, slurms--- via bitcoin-dev wrote:
 On this day, the Bitcoin network was crawled and reachable nodes surveyed to 
 find their maximum throughput in order to determine if it can safely support 
 a faster block rate. Specifically this is an attempt to prove or disprove the 
 common statement that 1MB blocks were only suitable for slower internet 
 connections in 2009 when Bitcoin launched, and that connection speeds have 
 improved to the point of obviously supporting larger blocks.
 
 
 The testing methodology is as follows:
 
  * Nodes were randomly selected from a peers.dat; 5% of the reachable nodes 
 in the network were contacted.
 
  * A random selection of blocks was downloaded from each peer.
 
  * There is some bias towards higher connection speeds; very slow connections 
 (under 30KB/s) timed out in order to run the test at a reasonable rate.
 
  * The connecting node was in Amsterdam with a 1 Gbit NIC. 
 
  
 Results:
 
  * 37% of connected nodes failed to upload blocks faster than 1MB/s.
 
  * 16% of connected nodes uploaded blocks faster than 10MB/s.
 
  * Raw data, one line per connected node, kilobytes per second 
 http://pastebin.com/raw.php?i=6b4NuiVQ
 
 
 This does not support the theory that the network has the available bandwidth 
 for increased block sizes, as in its current state 37% of nodes would fail to 
 upload a 20MB block to a single peer in under 20 seconds (referencing a 
 number quoted by Gavin). If the bar for suitability is placed at taking only 
 1% of the block time (6 seconds) to upload one block to one peer, then 69% of 
 the network fails for 20MB blocks. For comparison, only 10% fail this metric 
 for 1MB blocks.
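
The quoted thresholds are straightforward to recheck against the raw data; a
rough sketch, assuming the pastebin file is one numeric KB/s value per line as
described:

import urllib.request

RAW = "http://pastebin.com/raw.php?i=6b4NuiVQ"   # one node per line, KB/s

def fail_rate(speeds, block_mb, seconds):
    """Fraction of nodes too slow to push block_mb MB in `seconds`."""
    need_kbps = block_mb * 1000 / seconds
    return sum(s < need_kbps for s in speeds) / len(speeds)

speeds = [float(tok) for tok in
          urllib.request.urlopen(RAW).read().decode().split()]
print(f"20MB in 20s: {fail_rate(speeds, 20, 20):.0%} fail")   # ~37% per the post
print(f"20MB in 6s:  {fail_rate(speeds, 20, 6):.0%} fail")    # ~69% per the post
print(f" 1MB in 6s:  {fail_rate(speeds, 1, 6):.0%} fail")     # ~10% per the post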
 


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Jameson Lopp via bitcoin-dev
On Thu, Jul 23, 2015 at 3:14 PM, Eric Lombrozo elombr...@gmail.com wrote:


 On Jul 23, 2015, at 11:10 AM, Jameson Lopp jameson.l...@gmail.com wrote:

 Larger block sizes don't scale the network, they merely increase how much
 load we allow the network to bear.


 Very well put, Jameson. And the cost of bearing this load must be paid
 for. And unless we’re willing to accept that computational resources are
 finite and subject to the same economic issues as any other finite
 resource, our incentive model collapses and the security of the network will be
 significantly at risk. Whatever your usability concerns may be regarding
 fees, when the security model's busted, usability issues are moot.

 Larger blocks support more transactions…but they also incur Ω(n) overhead
 in bandwidth, CPU, and space. These are finite resources that must be paid
 for somehow…and as we all already know miners are willing to cut corners on
 all this and push the costs onto others (not to mention wallets and online
 block explorers). And who can really blame them? It’s rational behavior
 given the skewed incentives.


Running a node certainly has real-world costs that shouldn't be ignored.
There are plenty of advocates who argue that Bitcoin should strive to keep
it feasible for the average user to run their own node (as opposed to
Satoshi's vision of beefy servers in data centers.) My impression is that
even most of these advocates agree that it will be acceptable to eventually
increase block sizes as resources become faster and cheaper because it
won't be 'pricing out' the average user from running their own node. If
this is the case, it seems to me that we have a problem given that there is
no established baseline for the acceptable performance / hardware cost
requirements to run a node. I'd really like to see further clarification
from these advocates around the acceptable cost of running a node and how
we can measure the global reduction in hardware and bandwidth costs in
order to establish a baseline that we can use to justify additional
resource usage by nodes.

- Jameson


 On the flip side, the scalability proposals will still require larger
 blocks if we are ever to support anything close to resembling mainstream
 usage. This is not an either/or proposition - we clearly need both.


 Mainstream usage of cryptocurrency will be enabled primarily by direct
 party-to-party contract negotiation…with the use of the blockchain
 primarily as a dispute resolution mechanism. The block size isn’t about
 scaling but about supply and demand of finite resources. As demand for
 block space increases, we can address it either by increasing computational
 resources (block size) or by increasing fees. But to do the former we need
 a way to offset the increase in cost by making sure that those who
 contribute said resources have incentive to do so.



Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Eric Lombrozo via bitcoin-dev

 On Jul 23, 2015, at 12:35 PM, Gavin Andresen gavinandre...@gmail.com wrote:
 
 There are lots of things we can do to decrease costs, and a lot of things 
 have ALREADY been done (e.g. running a pruned full node).

I also wanted to point out I fully agree with you that there are still many 
optimizations we could do to reduce costs, and think many of these things are 
certainly worth doing. However, there’s only so much we can do in this regard. 
Sooner or later we still run up against theoretical limitations. These 
optimizations can reduce costs by some factor…but they are highly unlikely to 
overcome the Ω(n) validation complexity barring some major algorithmic 
breakthrough (perhaps allowing for nondeterminism and accepting a 
negligible but finite error probability).




Re: [bitcoin-dev] Bitcoin Node Speed Test

2015-07-23 Thread Marcel Jamin via bitcoin-dev
He measured the upload capacity of the peers by downloading from them, or
am I being dumb? :)


2015-07-23 18:05 GMT+02:00 Peter Todd via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org:




 On 23 July 2015 10:19:59 GMT-04:00, slurms--- via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 This does not support the theory that the network has the available
 bandwidth for increased block sizes, as in its current state 37% of
 nodes would fail to upload a 20MB block to a single peer in under 20
 seconds (referencing a number quoted by Gavin). If the bar for
 suitability is placed at taking only 1% of the block time (6 seconds)
 to upload one block to one peer, then 69% of the network fails for 20MB
 blocks. For comparison, only 10% fail this metric for 1MB blocks.

  Note how due to bandwidth being generally asymmetric your findings are
 probably optimistic - you've measured download capacity. On top of that
 upload is further reduced by the fact that multiple peers at once need to
 be sent blocks for reliability.

 Secondly you're measuring a network that isn't under attack - we need
 significant additional margin to resist attack as performance is
 consensus-critical.




Re: [bitcoin-dev] Bitcoin Node Speed Test

2015-07-23 Thread Joseph Gleason ⑈ via bitcoin-dev
That is how I read it as well.


On Thu, Jul 23, 2015 at 12:56 PM Marcel Jamin via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 He measured the upload capacity of the peers by downloading from them, or
 am I being dumb? :)


 2015-07-23 18:05 GMT+02:00 Peter Todd via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org:




 On 23 July 2015 10:19:59 GMT-04:00, slurms--- via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 This does not support the theory that the network has the available
 bandwidth for increased block sizes, as in its current state 37% of
 nodes would fail to upload a 20MB block to a single peer in under 20
 seconds (referencing a number quoted by Gavin). If the bar for
 suitability is placed at taking only 1% of the block time (6 seconds)
 to upload one block to one peer, then 69% of the network fails for 20MB
 blocks. For comparison, only 10% fail this metric for 1MB blocks.

  Note how due to bandwidth being generally asymmetric your findings are
 probably optimistic - you've measured download capacity. On top of that
 upload is further reduced by the fact that multiple peers at once need to
 be sent blocks for reliability.

 Secondly you're measuring a network that isn't under attack - we need
 significant additional margin to resist attack as performance is
 consensus-critical.






[bitcoin-dev] Electrum Server Speed Test

2015-07-23 Thread Slurms MacKenzie via bitcoin-dev
Similar to the Bitcoin Node Speed Test, this is a quick quantitative look at 
how the Electrum server software handles under load. The Electrum wallet is 
extremely popular, and the distributed servers which power it are all hosted by 
volunteers without budget. The server requires a fully indexed Bitcoin Core 
daemon running, and produces a sizable external index in order to allow SPV 
clients to quickly retrieve their history. 

3.9G    electrum/utxo
67M     electrum/undo
19G     electrum/hist
1.4G    electrum/addr
24G     electrum/

Based on my own logs produced by the electrum-server console, it takes this 
server (Xeon, lots of memory, 7200 RPM RAID) approximately 3.7 minutes per 
megabyte of block to process into the index. This seems to hold true through 
the 10 or so blocks I have in my scroll buffer; the contents of blocks seem to 
impose approximately the same processing load. Continuing this trend with the 
current inter-block time of 9.8 minutes, an electrum-server instance running on 
a modest-to-high-end dedicated server is able to support up to 2.64 MB block 
sizes before permanently falling behind the chain. 
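
The arithmetic behind that ceiling, spelled out:

index_min_per_mb = 3.7    # indexing cost observed above, minutes per MB
inter_block_min = 9.8     # current average inter-block time, minutes

# The server keeps up only while indexing a block takes less time than
# the next block takes to arrive:
max_block_mb = inter_block_min / index_min_per_mb
print(f"break-even block size: {max_block_mb:.2f} MB")   # ~2.6 MB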


Re: [bitcoin-dev] Bitcoin Node Speed Test

2015-07-23 Thread Peter Todd via bitcoin-dev



On 23 July 2015 10:19:59 GMT-04:00, slurms--- via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:
This does not support the theory that the network has the available
bandwidth for increased block sizes, as in its current state 37% of
nodes would fail to upload a 20MB block to a single peer in under 20
seconds (referencing a number quoted by Gavin). If the bar for
suitability is placed at taking only 1% of the block time (6 seconds)
to upload one block to one peer, then 69% of the network fails for 20MB
blocks. For comparison, only 10% fail this metric for 1MB blocks.

Note how due to bandwidth being generally asymmetric your findings are probably 
optimistic - you've measured download capacity. On top of that upload is 
further reduced by the fact that multiple peers at once need to be sent blocks 
for reliability.

Secondly you're measuring a network that isn't under attack - we need 
significant additional margin to resist attack as performance is 
consensus-critical.




Re: [bitcoin-dev] Bitcoin Node Speed Test

2015-07-23 Thread Slurms MacKenzie via bitcoin-dev

The library used isn't open source, so unfortunately not. It shouldn't be too hard to replicate in python-bitcoinlib or bitcoinj though.



Sent: Thursday, July 23, 2015 at 6:55 PM
From: Jameson Lopp jameson.l...@gmail.com
To: slu...@gmx.us
Cc: bitcoin-dev@lists.linuxfoundation.org
Subject: Re: [bitcoin-dev] Bitcoin Node Speed Test


Are you willing to share the code that you used to run the test?


- Jameson



On Thu, Jul 23, 2015 at 10:19 AM, slurms--- via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org wrote:

On this day, the Bitcoin network was crawled and reachable nodes surveyed to find their maximum throughput in order to determine if it can safely support a faster block rate. Specifically this is an attempt to prove or disprove the common statement that 1MB blocks were only suitable for slower internet connections in 2009 when Bitcoin launched, and that connection speeds have improved to the point of obviously supporting larger blocks.


The testing methodology is as follows:

* Nodes were randomly selected from a peers.dat; 5% of the reachable nodes in the network were contacted.

* A random selection of blocks was downloaded from each peer.

* There is some bias towards higher connection speeds; very slow connections (under 30KB/s) timed out in order to run the test at a reasonable rate.

* The connecting node was in Amsterdam with a 1 Gbit NIC.


Results:

* 37% of connected nodes failed to upload blocks faster than 1MB/s.

* 16% of connected nodes uploaded blocks faster than 10MB/s.

* Raw data, one line per connected node, kilobytes per second http://pastebin.com/raw.php?i=6b4NuiVQ


This does not support the theory that the network has the available bandwidth for increased block sizes, as in its current state 37% of nodes would fail to upload a 20MB block to a single peer in under 20 seconds (referencing a number quoted by Gavin). If the bar for suitability is placed at taking only 1% of the block time (6 seconds) to upload one block to one peer, then 69% of the network fails for 20MB blocks. For comparison, only 10% fail this metric for 1MB blocks.








Re: [bitcoin-dev] Electrum Server Speed Test

2015-07-23 Thread Joseph Gleason ⑈ via bitcoin-dev
I have concerns about the performance of the Electrum server software as
well.  It seems to load data one block at a time (which normally makes
sense) and I think it is even single-threaded on transactions inside the
block.

To try to address these issues, I made my own implementation of the
electrum server.  It doesn't support UTXO (yet) but happily interacts with
all the clients I've tested.  It is heavily multithreaded, uses mongodb as
a key-value store and bitcoinj for block and transaction parsing.

https://github.com/fireduck64/jelectrum

You can hit a running instance at:
b.1209k.com:50002:s
or
b.1209k.com:50001:t

A synced node uses 347G of mongodb storage.

Here are the recent blocks imported, with number of transactions and import
time.
http://pastebin.com/cfW3C2L6
These times are based on having mongodb on SSD.
The CPU is 8 core Intel(R) Xeon(R) CPU E5430  @ 2.66GHz

I'd be happy to help with anything you need to evaluate it.
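
For anyone who wants to poke at it, a minimal sketch of a raw query against
the TCP endpoint above; the Electrum wire protocol is newline-delimited
JSON-RPC, but treat the exact method and framing here as a best-effort
reconstruction rather than a spec:

import json, socket

def electrum_call(host, port, method, params=()):
    """Send one JSON-RPC request and read the one-line JSON reply."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(json.dumps(
            {"id": 0, "method": method, "params": list(params)}
        ).encode() + b"\n")
        buf = b""
        while not buf.endswith(b"\n"):
            buf += sock.recv(4096)
    return json.loads(buf)["result"]

# the ':t' suffix above marks the plain-TCP port:
print(electrum_call("b.1209k.com", 50001, "server.version"))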


On Thu, Jul 23, 2015 at 9:01 AM Slurms MacKenzie via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 Similar to the Bitcoin Node Speed Test, this is a quick quantitative look
 at how the Electrum server software handles under load. The Electrum wallet
 is extremely popular, and the distributed servers which power it are all
 hosted by volunteers without budget. The server requires a fully indexed
  Bitcoin Core daemon running, and produces a sizable external index in order
 to allow SPV clients to quickly retrieve their history.

 3.9G    electrum/utxo
 67M     electrum/undo
 19G     electrum/hist
 1.4G    electrum/addr
 24G     electrum/

 Based on my own logs produced by the electrum-server console, it takes
 this server (Xeon, lots of memory, 7200 RPM RAID) approximately 3.7 minutes
 per megabyte of block to process into the index. This seems to hold true
 through the 10 or so blocks I have in my scroll buffer; the contents of
 blocks seem to impose approximately the same processing load. Continuing
 this trend with the current inter-block time of 9.8 minutes, an
 electrum-server instance running on a modest-to-high-end dedicated server is
 able to support up to 2.64 MB block sizes before permanently falling behind
 the chain.



Re: [bitcoin-dev] Electrum Server Speed Test

2015-07-23 Thread Matt Whitlock via bitcoin-dev
Great data points, but isn't this an argument for improving Electrum Server's 
database performance, not for holding Bitcoin back?

(Nice alias, by the way. Whimmy wham wham wozzle!)


On Thursday, 23 July 2015, at 5:56 pm, Slurms MacKenzie via bitcoin-dev wrote:
 Similar to the Bitcoin Node Speed Test, this is a quick quantitative look at 
 how the Electrum server software handles under load. The Electrum wallet is 
 extremely popular, and the distributed servers which power it are all hosted 
 by volunteers without budget. The server requires a fully indexed Bitcoin 
  Core daemon running, and produces a sizable external index in order to allow 
 SPV clients to quickly retrieve their history. 
 
  3.9G    electrum/utxo
  67M     electrum/undo
  19G     electrum/hist
  1.4G    electrum/addr
  24G     electrum/
 
  Based on my own logs produced by the electrum-server console, it takes this 
  server (Xeon, lots of memory, 7200 RPM RAID) approximately 3.7 minutes per 
  megabyte of block to process into the index. This seems to hold true through 
  the 10 or so blocks I have in my scroll buffer; the contents of blocks seem 
  to impose approximately the same processing load. Continuing this trend with 
  the current inter-block time of 9.8 minutes, an electrum-server instance 
  running on a modest-to-high-end dedicated server is able to support up to 
  2.64 MB block sizes before permanently falling behind the chain. 


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Eric Lombrozo via bitcoin-dev

 On Jul 23, 2015, at 11:10 AM, Jameson Lopp jameson.l...@gmail.com wrote:
 
 Larger block sizes don't scale the network, they merely increase how much 
 load we allow the network to bear.

Very well put, Jameson. And the cost of bearing this load must be paid for. And 
unless we’re willing to accept that computational resources are finite and 
subject to the same economic issues as any other finite resource, our incentive 
model collapses and the security of the network will be significantly at risk. 
Whatever your usability concerns may be regarding fees, when the security 
model's busted, usability issues are moot.

Larger blocks support more transactions…but they also incur Ω(n) overhead in 
bandwidth, CPU, and space. These are finite resources that must be paid for 
somehow…and as we all already know miners are willing to cut corners on all 
this and push the costs onto others (not to mention wallets and online block 
explorers). And who can really blame them? It’s rational behavior given the 
skewed incentives.

 On the flip side, the scalability proposals will still require larger blocks 
 if we are ever to support anything close to resembling mainstream usage. 
 This is not an either/or proposition - we clearly need both.

Mainstream usage of cryptocurrency will be enabled primarily by direct 
party-to-party contract negotiation…with the use of the blockchain primarily as 
a dispute resolution mechanism. The block size isn’t about scaling but about 
supply and demand of finite resources. As demand for block space increases, we 
can address it either by increasing computational resources (block size) or by 
increasing fees. But to do the former we need a way to offset the increase in 
cost by making sure that those who contribute said resources have incentive to 
do so.




Re: [bitcoin-dev] BIP draft: Hardfork bit

2015-07-23 Thread jl2012 via bitcoin-dev


Quoting Tier Nolan via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org:


On Thu, Jul 23, 2015 at 5:23 PM, jl2012 via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:


2) Full nodes and SPV nodes following original consensus rules may not be
aware of the deployment of a hardfork. They may stick to an
economic-minority fork and unknowingly accept devalued legacy tokens.



This change means that they are kicked off the main chain immediately when
the fork activates.

The change is itself a hard fork.  Clients have to be updated to get the
benefits.


I refrain from calling it the "main chain". I use "original chain" and
"new chain" instead, as I make no assumption about the distribution of
mining power. This BIP still works when we have a 50/50 hardfork. The  
main point is to protect all users on both chains, and allow them to  
make an informed choice.




3) In the case in which the original consensus rules are also valid under the

new consensus rules, users following the new chain may unexpectedly reorg
back to the original chain if it grows faster than the new one. People may
find their confirmed transactions becoming unconfirmed and lose money.



I don't understand the situation here.  Is the assumption that a group of
miners suddenly switches (for example, because they realise that they didn't
intend to support the new rules)?



Again, as I make no assumption about the mining power distribution,  
the new chain may actually have less miner support. Without any  
protection (and AFAIK BIP100, 101, and 102, for example, have none),  
the weaker new chain will constantly get 51%-attacked by the original chain.





Flag block is constructed in a way that nodes with the original consensus
rules must reject. On the other hand, nodes with the new consensus rules
must reject a block if it is not a flag block while it is supposed to be.
To achieve these goals, the flag block must 1) have the hardfork bit
set to 1, 2) include a short predetermined unique description of the
hardfork anywhere in its coinbase, and 3) follow any other rules required
by the hardfork. If these conditions are not fully satisfied, upgraded
nodes shall reject the block.



Ok, so set the bit and then include BIP-GIT-HASH of the canonical BIP on
github in the coinbase?


I guess the git hash is not known until the code is written? (correct  
me if I'm wrong) As the coinbase message is consensus-critical, it  
must be part of the source code, and therefore you can't use any kind  
of hash of the code itself (a chicken-and-egg problem).
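
To make the flag-block rule concrete, here is a minimal Python sketch of
what an upgraded node might check (simplified block structure; the bit
position and marker string are hypothetical, and the hardfork bit is
assumed to be chosen so that original-rules nodes reject any block that
sets it):

HARDFORK_BIT = 1 << 31                 # hypothetical bit position
HARDFORK_MARKER = b"EXAMPLE-HARDFORK"  # predetermined string, fixed in source

def is_flag_block(version, coinbase_script):
    # Hardfork bit set AND the predetermined description in the coinbase.
    return bool(version & HARDFORK_BIT) and HARDFORK_MARKER in coinbase_script

def accept_block(version, coinbase_script, height, flag_height):
    if height == flag_height:
        # At activation, the block MUST be a valid flag block.
        return is_flag_block(version, coinbase_script)
    # Elsewhere the usual consensus rules apply (omitted in this sketch).
    return True

print(accept_block(HARDFORK_BIT | 4, b"...EXAMPLE-HARDFORK...", 400000, 400000))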



Since it is a hard fork, the version field could be completely
re-purposed.  Set the bit and add the BIP number as the lower bits in the
version field.  This lets SPV clients check if they know about the hard
fork.


This may not be compatible with the other version bits voting mechanisms.


The network protocol could be updated to add getdata support for asking
for a coinbase only merkleblock.  This would allow SPV clients to obtain
the coinbase.


Yes
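
(As an aside, such a coinbase-only merkleblock would be cheap for SPV
clients to verify: the coinbase is always the leftmost leaf, so the proof
is one sibling hash per tree level. A toy sketch, assuming Bitcoin's
double-SHA256 merkle tree and glossing over endianness and serialization
details:)

import hashlib

def dsha256(b):
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_coinbase_branch(coinbase_txid, branch, merkle_root):
    # The coinbase is the first transaction, so it is the LEFT child
    # at every level of the tree.
    h = coinbase_txid
    for sibling in branch:
        h = dsha256(h + sibling)
    return h == merkle_root

# Toy two-transaction block: root = H(H(coinbase) || H(tx1))
cb, tx1 = dsha256(b"coinbase"), dsha256(b"tx1")
assert verify_coinbase_branch(cb, [tx1], dsha256(cb + tx1))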



Automatic warning system: When a flag block is found on the network, full

nodes and SPV nodes should look into its coinbase. They should alert their
users and/or stop accepting incoming transactions if it is an unknown
hardfork. It should be noted that the warning system could become a DoS
vector if the attacker is willing to give up the block reward. Therefore,
the warning may be issued only if a few blocks are built on top of the flag
block in a reasonable time frame. This will in turn increase the risk in
case of a real planned hardfork so it is up to the wallet programmers to
decide the optimal strategy. Human warning system (e.g. the emergency alert
system in Bitcoin Core) could fill the gap.



If the rule was that hard forks only take effect 100 blocks after the flag
block, then this problem is eliminated.

Emergency hard forks may still have to take effect immediately though, so
it would have to be a custom, not a rule.


The flag block itself is a hardfork already and old miners will not  
mine on top of the flag block. So your suggestion won't be helpful in  
this situation.


To make it really meaningful, we need to consume one more bit of the  
'version' field (a notice bit). Supporting miners would turn on the  
notice bit and include a message in the coinbase (a notice block). When a  
full node/SPV node finds many notice blocks with the same coinbase  
message, it could bet that the subsequent flag block is a legit one  
(see the sketch below). However, an attacker may still troll you by  
injecting an invalid flag block after many legit notice blocks, so I'm  
not sure it is worth the added complexity.
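
A rough sketch of that counting heuristic (the window size, threshold,
and bit position are made-up parameters):

from collections import Counter

NOTICE_BIT = 1 << 30  # hypothetical second repurposed version bit

def flag_block_looks_legit(recent_blocks, flag_marker, window=100, threshold=75):
    """Count notice blocks among the last `window` blocks carrying the
    same coinbase marker as the flag block; treat the flag block as
    credible only if enough miners signalled it in advance."""
    signals = Counter(b["coinbase_marker"] for b in recent_blocks[-window:]
                      if b["version"] & NOTICE_BIT)
    return signals[flag_marker] >= threshold

blocks = [{"version": NOTICE_BIT | 4, "coinbase_marker": b"EXAMPLE-HARDFORK"}] * 80
print(flag_block_looks_legit(blocks, b"EXAMPLE-HARDFORK"))  # True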






Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Eric Lombrozo via bitcoin-dev

 On Jul 23, 2015, at 10:14 AM, Robert Learney via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 That’s not exactly what’s happened though, is it Cipher? Gavin put forward 
 20MB, then after analysis and discussion moved to 8MB, whereas the other 
 camp of core developers is firmly stuck in the ‘1MB or bust’ group.

The issue isn’t really whether it’s 1MB or 2MB or 4MB or 8MB or whatever. First 
of all, the burden of justifying this change should be on those proposing a 
hardfork; the default is to not have a hard fork. Second, it’s not 
really about *whether* the block size is increased…but about *when* and *how* 
it is increased. There’s a good argument to be made that right now it is more 
important to address issues such as the fact that validation is so expensive 
(which, as others and I have pointed out, has led to a collapse of the 
security model in the past, requiring manual intervention to temporarily 
“fix” it)…and the fact that we don’t yet have great solutions for dealing with 
fees, which are a crucial component of the design of the protocol.




Re: [bitcoin-dev] Electrum Server Speed Test

2015-07-23 Thread Slurms MacKenzie via bitcoin-dev
That's purely the wall-clock time for electrum-server; validation in bitcoind 
happens beforehand. As ThomasV has pointed out, it is significantly faster with 
a solid-state disk (but much more expensive to operate); if we get to that 
point, only expensive servers with lots of SSD space will be able to keep up 
with the current software. 

I was mostly trying to make a point about other software being impacted in ways 
which aren't really discussed, rather than taking a specific dig at 
electrum-server. I should have made that clearer. 


 Sent: Thursday, July 23, 2015 at 9:21 PM
 From: Eric Voskuil e...@voskuil.org
 To: Slurms MacKenzie slu...@gmx.us, bitcoin-dev@lists.linuxfoundation.org
 Subject: Re: [bitcoin-dev] Electrum Server Speed Test

 Does "to process into the index" include time for transport and/or
 block validation (presumably by bitcoind), or is this exclusively the
 time for Electrum Server to index a validated block?
 
 e
 


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Jorge Timón via bitcoin-dev
On Thu, Jul 23, 2015 at 7:14 PM, Robert Learney via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
 That’s not exactly what’s happened though, is it Cipher? Gavin put forward
 20MB, then after analysis and discussion moved to 8MB, whereas the other
 camp of core developers is firmly stuck in the ‘1MB or bust’ group.

His proposals actually end up at 20 GB and 8 GB respectively. I'm
not sure if you count me in the ‘1MB or bust’ group, but I'm not
firmly stuck anywhere.
I've never said that the block size should never be increased, that it
shouldn't change now, that 8 MB is too much, or anything like that,
because I simply don't have the data (and I don't think anybody has
it). I invite people to collect that data, and I've written a patch for
Bitcoin Core to facilitate that task.
Do you really think that's an obstructionist attitude?

My position could be summarized like this:

- Them: "We're going to hit the limit tomorrow, and Bitcoin will fail when we do."
- Me: "I'm not so sure we will hit the limit tomorrow, but even accepting
the premise, this is a non sequitur. Fees will probably rise, but
that's not necessarily a bad thing. A limit that is meaningful in
practice must happen eventually, mustn't it? If not now, when are we
planning to let that 'disaster' happen?"
- Them: "That's too far in the future to worry about."
- Me: "Does that mean waiting, say, 4 more subsidy halvings? 8? 10?"
- Them: "Just don't worry about it."

I'm not opposing anything; I'm just patiently waiting for some
answers that never seem to arrive.
If people interpret my questions as opposition, or take it as
opposition that when people use fallacious arguments I like to identify
the concrete fallacy they're using and state it publicly (I do it for
sport, and against all sides), I don't really think I can do anything
about it.


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Eric Lombrozo via bitcoin-dev

 On Jul 23, 2015, at 9:28 AM, Gavin Andresen via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 I'd really like to move from "IMPOSSIBLE because..." ("electrum hasn't been 
 optimized" (by the way: you should run on SSDs, LevelDB isn't designed for 
 spinning disks), "what if the network is attacked?" (attacked HOW???), "the 
 current p2p network is using the simplest, stupidest possible block 
 propagation algorithm"...)
 
 ... to "let's work together and work through the problems and scale it up."

Let’s be absolutely clear about one thing - block size increases are *not* 
about scaling the network. Can we please stop promoting this falsehood? It 
doesn’t matter by what number we multiply the block size…we can NEVER satisfy 
the full demand if we insist on every single transaction from every single 
person everywhere in the world being on the blockchain…it’s just absurd.
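
Ballpark arithmetic makes the point; assuming an average transaction of
roughly 500 bytes (a rough 2015-era figure):

block_bytes = 1000000        # 1 MB cap
avg_tx_bytes = 500           # ballpark average transaction size
seconds_per_block = 600
tps = block_bytes / float(avg_tx_bytes) / seconds_per_block
print(tps)   # ~3.3 tx/s at 1 MB; 20 MB buys ~66 tx/s, still orders of
             # magnitude short of global payment-network demand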

Increasing the block size only temporarily addresses one significant issue - how 
to postpone having to deal with transaction fees, which, by design, are 
ultimately how the cost of operating the Bitcoin network (which is already very 
expensive) is supposed to be paid. Suggesting we avoid dealing with this 
constitutes a new economic policy - dealing with it is the default economic 
policy we’ve all known about from the beginning…so please stop claiming 
otherwise.

 On Jul 23, 2015, at 9:50 AM, cipher anthem via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 Why not help on a project that actually seems to offer great scalability, like 
 the lightning network? There has been great progress there.


Exactly. There’s been tremendous progress here in addressing scalability, yet I 
don’t see you participating in that discussion, Gavin.

 On Jul 23, 2015, at 5:17 AM, Jorge Timón via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 But it seems to me that the "not now" side has no centralization
 concerns at all and their true position is "not ever hit the blocksize
 limit"; that's the only explanation I can find for their lack of
 answers to the question "when do you think we should allow users to
 notice that there's a limit in the blocksize to guarantee that the
 system can be decentralized?"

I agree with what you’re saying, Jorge…but it’s even worse than that. The July 
4th fork illustrated that the security model of the network itself could be at 
risk from the increasing costs of validation causing people to rely on others 
to validate for them…and increasing the block size only makes the problem worse.

- Eric Lombrozo




Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Jorge Timón via bitcoin-dev
On Thu, Jul 23, 2015 at 6:17 PM, Tom Harding via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
 On 7/23/2015 5:17 AM, Jorge Timón via bitcoin-dev wrote:

 If the user expectation is that a price would never arise because
 supply is going to be increased ad infinitum and they will always be
 able to send fast in-chain bitcoin transactions for free, just like
 they breathe air (an abundant resource) for free, then we should
 change that expectation as soon as possible.

 No.  We should accept that reality may change, and we should promote
 understanding of that fact.

 We should not artificially manipulate the market as soon as possible,
 since we ourselves don't know much at all about how the market will
 unfold in the future.

We know perfectly well that the system will eventually need to be
sustained by fees.
We should stop misinforming new users by telling them that bitcoin
transactions are free, because they're clearly not.

 the criteria for the consensus block size should be purely based on
 technological capacity (propagation benchmarking, etc) and
 centralization concerns

 Right, purely these.  There is no place for artificially manipulating
 expectations.

Am I artificially manipulating expectations?

 they will simply advance the front and start another battle, because
 their true hidden faction is the "not ever" side. Please, Jeff, Gavin,
 Mike, show me that I'm wrong on this point. Please, answer my question
 this time. If not now, then when?

 Bitcoin has all the hash power.  The merkle root has effectively
 infinite capacity.  We should be asking HOW to scale the supporting
 information propagation system appropriately, not WHEN to limit the
 capacity of the primary time-stamping machine.

Timestamping data using the blockchain is not the same as including
the data in the blockchain itself, because the latter is a scarce
resource.
The timestamping space is already unlimited today with no changes:
you can use a bitcoin transaction to timestamp an unbounded amount of
external data using exactly 0 extra bytes in your transaction!
Here's the code: https://github.com/Blockstream/contracthashtool
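
The trick behind it is the pay-to-contract construction: tweak a public key
by the hash of the data being committed, so the commitment hides inside an
ordinary-looking output. A self-contained toy sketch of the idea (this is
not the actual contracthashtool code; school-book secp256k1 arithmetic,
illustration only, not safe for real use):

import hashlib

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p, q):
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0:
        return None  # point at infinity
    if p == q:
        lam = 3 * p[0] * p[0] * pow(2 * p[1], P - 2, P) % P
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], P - 2, P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def ec_mul(k, p):
    r = None
    while k:
        if k & 1:
            r = ec_add(r, p)
        p = ec_add(p, p)
        k >>= 1
    return r

def tweak(pubkey, data):
    """t = H(serialized pubkey || data), reduced mod the curve order."""
    ser = pubkey[0].to_bytes(32, "big") + pubkey[1].to_bytes(32, "big")
    return int.from_bytes(hashlib.sha256(ser + data).digest(), "big") % N

def commit(pubkey, data):
    """Pay-to-contract: P' = P + t*G. Zero extra bytes on-chain; anyone
    holding P and the data can recompute P' and verify the commitment."""
    return ec_add(pubkey, ec_mul(tweak(pubkey, data), G))

priv = 12345
pub = ec_mul(priv, G)
data = b"document to timestamp"
# The owner can still spend: the tweaked key's secret is priv + t.
assert commit(pub, data) == ec_mul((priv + tweak(pub, data)) % N, G)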

And I'm very interested in scaling Bitcoin; I just disagree that
changing a constant is a scaling solution.

On Thu, Jul 23, 2015 at 6:28 PM, Gavin Andresen via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
 On Thu, Jul 23, 2015 at 12:17 PM, Tom Harding via bitcoin-dev
 We haven't tried yet.  I can't answer for the people you asked, but
 personally I haven't thought much about when we should declare failure.


 Yes! Let's plan for success!

I strongly disagree that having a block limit is failure. It's a
design decision to protect the system against centralization (and one
we will be able to raise as we solve the technical and centralization
problems we have today).
But thank you for being clearer about it now, Gavin. You won't stop
at an 8GB or 32GB limit, because you think having ANY limit would be a
failure.
Is that correct?
If not, can you please answer clearly when and why you think the
blocksize should be lower than demand (that is, when you will be OK
with bitcoin users having to pay fees for the service they're
enjoying)?
If your answer is "never", I would prefer to hear it from you rather
than just conclude it from the lack of an answer.


Re: [bitcoin-dev] BIP draft: Hardfork bit

2015-07-23 Thread Tier Nolan via bitcoin-dev
On Thu, Jul 23, 2015 at 5:23 PM, jl2012 via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 2) Full nodes and SPV nodes following original consensus rules may not be
 aware of the deployment of a hardfork. They may stick to an
 economic-minority fork and unknowingly accept devalued legacy tokens.


This change means that they are kicked off the main chain immediately when
the fork activates.

The change is itself a hard fork.  Clients have to be updated to get the
benefits.

3) In the case in which the original consensus rules are also valid under the
 new consensus rules, users following the new chain may unexpectedly reorg
 back to the original chain if it grows faster than the new one. People may
 find their confirmed transactions becoming unconfirmed and lose money.


I don't understand the situation here.  Is the assumption that a group of
miners suddenly switches (for example, because they realise that they didn't
intend to support the new rules)?


 Flag block is constructed in a way that nodes with the original consensus
 rules must reject. On the other hand, nodes with the new consensus rules
 must reject a block if it is not a flag block while it is supposed to be.
 To achieve these goals, the flag block must 1) have the hardfork bit
 set to 1, 2) include a short predetermined unique description of the
 hardfork anywhere in its coinbase, and 3) follow any other rules required
 by the hardfork. If these conditions are not fully satisfied, upgraded
 nodes shall reject the block.


Ok, so set the bit and then include BIP-GIT-HASH of the canonical BIP on
github in the coinbase?

Since it is a hard fork, the version field could be completely
re-purposed.  Set the bit and add the BIP number as the lower bits in the
version field.  This lets SPV clients check if they know about the hard
fork.
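
One possible encoding of that suggestion, sketched in Python (bit
positions are illustrative only, not part of any actual proposal):

HARDFORK_BIT = 1 << 31

def encode_version(bip_number):
    # Hardfork bit on, BIP number in the low bits.
    return HARDFORK_BIT | bip_number

def decode_bip(version):
    # SPV check: if the hardfork bit is set, the low bits name the BIP.
    return version & 0xFFFF if version & HARDFORK_BIT else None

assert decode_bip(encode_version(101)) == 101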

The network protocol could be updated to add getdata support for asking
for a coinbase only merkleblock.  This would allow SPV clients to obtain
the coinbase.

Automatic warning system: When a flag block is found on the network, full
 nodes and SPV nodes should look into its coinbase. They should alert their
 users and/or stop accepting incoming transactions if it is an unknown
 hardfork. It should be noted that the warning system could become a DoS
 vector if the attacker is willing to give up the block reward. Therefore,
 the warning may be issued only if a few blocks are built on top of the flag
 block in a reasonable time frame. This will in turn increase the risk in
 case of a real planned hardfork so it is up to the wallet programmers to
 decide the optimal strategy. Human warning system (e.g. the emergency alert
 system in Bitcoin Core) could fill the gap.


If the rule was that hard forks only take effect 100 blocks after the flag
block, then this problem is eliminated.

Emergency hard forks may still have to take effect immediately though, so
it would have to be a custom, not a rule.


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Jorge Timón via bitcoin-dev
On Thu, Jul 23, 2015 at 1:42 AM, Cory Fields via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
 I'm not sure why Bitcoin Core and the rules and policies that it
 enforces are being conflated in this thread. There's nothing stopping
 us from adding the ability for the user to decide what their consensus
 parameters should be at runtime. In fact, that's already in use:
 ./bitcoind -testnet. As mentioned in another thread, the chain params
 could even come from a config file that the user could edit without
 touching the code.

For what it's worth, here's yet another piece of code from the "doing
nothing" side:

https://github.com/bitcoin/bitcoin/pull/6382

It allows you to create a regtest-like testchain with a maximum block
size chosen at run time.
Rusty used a less generic testchain for testing 8 MB blocks:

http://rusty.ozlabs.org/?p=509

Unfortunately I don't know of anybody that has used my patch to test
any other size (maybe there's not that much interest in testing other
sizes after all?).

I'm totally in favor of preemptively adapting the code so that when a
new blocksize is to be deployed, adapting the code is not a problem.
Developers can agree on many changes in the code without users having
to agree on a concrete block size first.
I offer my help to do that. That's what I'm trying to do in #6382 and
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/008961.html
but to my surprise that gets disregarded as "doing nothing" and as
"having a negative attitude", when not simply ignored.


Re: [bitcoin-dev] Bitcoin Node Speed Test

2015-07-23 Thread Pindar Wong via bitcoin-dev
This looks like the beginnings of some great analysis.

Per Peter's remarks, I think it would be productive to run the test(s) on a
simulated network with worst case network failure(s) so that we can
determine the safety margin needed.

I have potential access to h/w resources that would be available for
running such tests at the necessary scales.

Cheers,

p.


On Fri, Jul 24, 2015 at 1:14 AM, Slurms MacKenzie via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 The library used isn't open source, so unfortunately not. It shouldn't be
 too hard to replicate in python-bitcoinlib or bitcoinj though.

 *Sent:* Thursday, July 23, 2015 at 6:55 PM
 *From:* Jameson Lopp jameson.l...@gmail.com
 *To:* slu...@gmx.us
 *Cc:* bitcoin-dev@lists.linuxfoundation.org
 *Subject:* Re: [bitcoin-dev] Bitcoin Node Speed Test
  Are you willing to share the code that you used to run the test?

 - Jameson

 On Thu, Jul 23, 2015 at 10:19 AM, slurms--- via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 On this day, the Bitcoin network was crawled and reachable nodes surveyed
 to find their maximum throughput in order to determine if it can safely
 support a faster block rate. Specifically this is an attempt to prove or
 disprove the common statement that 1MB blocks were only suitable for slower
 internet connections in 2009 when Bitcoin launched, and that connection
 speeds have improved to the point of obviously supporting larger blocks.


 The testing methodology is as follows:

  * Nodes were randomly selected from a peers.dat, 5% of the reachable
 nodes in the network were contacted.

  * A random selection of blocks was downloaded from each peer.

  * There is some bias towards higher connection speeds, very slow
 connections (under 30KB/s) timed out in order to run the test at a reasonable
 rate.

  * The connecting node was in Amsterdam with a 1 Gbit NIC.


 Results:

  * 37% of connected nodes failed to upload blocks faster than 1MB/s.

  * 16% of connected nodes uploaded blocks faster than 10MB/s.

  * Raw data, one line per connected node, kilobytes per second
 http://pastebin.com/raw.php?i=6b4NuiVQ


 This does not support the theory that the network has the available
 bandwidth for increased block sizes, as in its current state 37% of nodes
 would fail to upload a 20MB block to a single peer in under 20 seconds
 (referencing a number quoted by Gavin). If the bar for suitability is
 placed at taking only 1% of the block time (6 seconds) to upload one block
 to one peer, then 69% of the network fails for 20MB blocks. For comparison,
 only 10% fail this metric for 1MB blocks.
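
(To re-derive these percentages from the raw dump, a small sketch; it
assumes the pastebin file is one KB/s figure per line as described, and
takes 1 MB = 1000 KB, matching the post:)

import urllib.request

RAW_URL = "http://pastebin.com/raw.php?i=6b4NuiVQ"  # raw data from the post

def fetch_speeds(url=RAW_URL):
    # One node per line, throughput in kilobytes per second.
    with urllib.request.urlopen(url) as r:
        return [float(tok) for tok in r.read().decode().split()]

def pct_failing(speeds, block_mb, seconds):
    """Share of nodes unable to upload a block_mb block to one peer
    within `seconds`."""
    need_kbs = block_mb * 1000.0 / seconds
    return 100.0 * sum(s < need_kbs for s in speeds) / len(speeds)

speeds = fetch_speeds()
print(pct_failing(speeds, 20, 20))  # 20 MB in 20 s (needs 1 MB/s)
print(pct_failing(speeds, 20, 6))   # 20 MB in 1% of block time
print(pct_failing(speeds, 1, 6))    # 1 MB in 1% of block time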






Re: [bitcoin-dev] Electrum Server Speed Test

2015-07-23 Thread Eric Voskuil via bitcoin-dev
Does "to process into the index" include time for transport and/or
block validation (presumably by bitcoind), or is this exclusively the
time for Electrum Server to index a validated block?

e

On 07/23/2015 08:56 AM, Slurms MacKenzie via bitcoin-dev wrote:

 Similar to the Bitcoin Node Speed Test, this is a quick quantitative
look at how the Electrum server software holds up under load. The
Electrum wallet is extremely popular, and the distributed servers
which power it are all hosted by volunteers without a budget. The server
requires a fully indexed Bitcoin Core daemon running, and produces a
sizable external index in order to allow SPV clients to quickly
retrieve their history.
 
 
 3.9G   electrum/utxo
  67M   electrum/undo
  19G   electrum/hist
 1.4G   electrum/addr
  24G   electrum/
 
 
 Based on my own logs produced by the electrum-server console, it
takes this server (Xeon, lots of memory, 7200 RPM RAID) approximately
3.7 minutes per megabyte of block to process into the index. This
seems to hold true through the 10 or so blocks I have in my scroll
buffer; the contents of blocks seem to impose approximately the same
processing load. Continuing this trend with the current inter-block
time of 9.8 minutes, an electrum-server instance running on a
modest-to-high-end dedicated server can support block sizes of up to
roughly 2.65 MB (9.8 min / 3.7 min per MB) before permanently falling
behind the chain.




Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Gavin Andresen via bitcoin-dev
On Thu, Jul 23, 2015 at 12:17 PM, Tom Harding via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 On 7/23/2015 5:17 AM, Jorge Timón via bitcoin-dev wrote:
  they will simply advance the front and start another battle, because
  their true hidden faction is the not ever side. Please, Jeff, Gavin,
  Mike, show me that I'm wrong on this point. Please, answer my question
  this time. If not now, then when?

 Bitcoin has all the hash power.  The merkle root has effectively
 infinite capacity.  We should be asking HOW to scale the supporting
 information propagation system appropriately, not WHEN to limit the
 capacity of the primary time-stamping machine.

 We haven't tried yet.  I can't answer for the people you asked, but
 personally I haven't thought much about when we should declare failure.


Yes! Let's plan for success!

I'd really like to move from "IMPOSSIBLE because..." ("electrum hasn't been
optimized" (by the way: you should run on SSDs, LevelDB isn't designed for
spinning disks), "what if the network is attacked?" (attacked HOW???), "the
current p2p network is using the simplest, stupidest possible block
propagation algorithm"...)

... to "let's work together and work through the problems and scale it up."

I'm frankly tired of all the negativity here; so tired of it I've decided
to mostly ignore all the debate for a while, not respond to misinformation I
see being spread (like "miners have some incentive to create
slow-to-propagate blocks"), work with people like Tom and Mike who have a
"let's get it done" attitude, and focus on what it will take to scale up.

-- 
--
Gavin Andresen


Re: [bitcoin-dev] Bitcoin Node Speed Test

2015-07-23 Thread Leo Wandersleb via bitcoin-dev
Thank you a lot for doing this test!

Two questions:

1) A node is typically connected to many peers that would all download a new
block in parallel. In your test, did you measure how fast new blocks are
uploaded while they are presumably also being uploaded in parallel to all
those other peers? Or did you download blocks while those nodes were
basically idle?

2) What is your percentage of the very slow connections?

On 07/23/2015 11:19 AM, slurms--- via bitcoin-dev wrote:
 On this day, the Bitcoin network was crawled and reachable nodes surveyed to 
 find their maximum throughput in order to determine if it can safely support 
 a faster block rate. Specifically this is an attempt to prove or disprove the 
 common statement that 1MB blocks were only suitable for slower internet 
 connections in 2009 when Bitcoin launched, and that connection speeds have 
 improved to the point of obviously supporting larger blocks.


 The testing methodology is as follows:

  * Nodes were randomly selected from a peers.dat, 5% of the reachable nodes 
 in the network were contacted.

  * A random selection of blocks was downloaded from each peer.

  * There is some bias towards higher connection speeds, very slow connections 
 (under 30KB/s) timed out in order to run the test at a reasonable rate.

  * The connecting node was in Amsterdam with a 1 Gbit NIC. 

  
 Results:

  * 37% of connected nodes failed to upload blocks faster than 1MB/s.

  * 16% of connected nodes uploaded blocks faster than 10MB/s.

  * Raw data, one line per connected node, kilobytes per second 
 http://pastebin.com/raw.php?i=6b4NuiVQ


 This does not support the theory that the network has the available bandwidth 
 for increased block sizes, as in its current state 37% of nodes would fail to 
 upload a 20MB block to a single peer in under 20 seconds (referencing a 
 number quoted by Gavin). If the bar for suitability is placed at taking only 
 1% of the block time (6 seconds) to upload one block to one peer, then 69% of 
 the network fails for 20MB blocks. For comparison, only 10% fail this metric 
 for 1MB blocks.








Re: [bitcoin-dev] Bitcoin Node Speed Test

2015-07-23 Thread Slurms MacKenzie via bitcoin-dev

I was testing against otherwise idle nodes, fetching blocks back from the tip of the chain in an attempt to eliminate any unfair effects of caching. During the time my crawler was running there were no new blocks on the network (luck more than design), so other than initially syncing nodes and transaction broadcasts there should have been no traffic from these peers other than me.

There's unfortunately not enough granularity in my log to tell the difference between nodes which returned bad results (pruned nodes, perhaps) and those that timed out. The total number of those was around 10 of 202 successful handshakes, which is fairly insignificant anyway. I'll retool at some point soon and run this a second time with better logging and some other tweaks I've since realised would help get more, cleaner data.

As Peter Todd has pointed out, my numbers are blue-sky optimism and should be taken with a grain of salt as far as justifying larger blocks. I'm finding the ceiling of a node pushing a block to a single peer (which is unrealistic on the network), leaving little headroom for anything else.





Sent: Thursday, July 23, 2015 at 7:36 PM
From: Leo Wandersleb via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org
To: bitcoin-dev@lists.linuxfoundation.org
Subject: Re: [bitcoin-dev] Bitcoin Node Speed Test

Thank you a lot for doing this test!

Two questions:

1) A node is typically connected to many peers that would all download a new
block in parallel. In your test, did you measure how fast new blocks are
uploaded while they are presumably also being uploaded in parallel to all
those other peers? Or did you download blocks while those nodes were
basically idle?

2) What is your percentage of the very slow connections?

On 07/23/2015 11:19 AM, slurms--- via bitcoin-dev wrote:
 On this day, the Bitcoin network was crawled and reachable nodes surveyed to find their maximum throughput in order to determine if it can safely support a faster block rate. Specifically this is an attempt to prove or disprove the common statement that 1MB blocks were only suitable for slower internet connections in 2009 when Bitcoin launched, and that connection speeds have improved to the point of obviously supporting larger blocks.


 The testing methodology is as follows:

 * Nodes were randomly selected from a peers.dat, 5% of the reachable nodes in the network were contacted.

 * A random selection of blocks was downloaded from each peer.

 * There is some bias towards higher connection speeds, very slow connections (under 30KB/s) timed out in order to run the test at a reasonable rate.

 * The connecting node was in Amsterdam with a 1 Gbit NIC.


 Results:

 * 37% of connected nodes failed to upload blocks faster than 1MB/s.

 * 16% of connected nodes uploaded blocks faster than 10MB/s.

 * Raw data, one line per connected node, kilobytes per second http://pastebin.com/raw.php?i=6b4NuiVQ


 This does not support the theory that the network has the available bandwidth for increased block sizes, as in its current state 37% of nodes would fail to upload a 20MB block to a single peer in under 20 seconds (referencing a number quoted by Gavin). If the bar for suitability is placed at taking only 1% of the block time (6 seconds) to upload one block to one peer, then 69% of the network fails for 20MB blocks. For comparison, only 10% fail this metric for 1MB blocks.






