Re: [Bitcoin-development] Why are we bleeding nodes?

2014-05-20 Thread bitcoingrant

Recently China has updated its firewall, blocking bitcoin sites and pools. Whether this is a simple blacklist or more sophisticated packet targeting is uncertain; however, this update did specifically target VPN handshakes.





Sent: Monday, April 07, 2014 at 1:07 PM
From: Drak d...@zikula.org
To: Mike Hearn m...@plan99.net
Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] Why are we bleeding nodes?


For what it's worth, the number of nodes rose dramatically during the China bull run (I recall 45k in China alone) and dropped as dramatically as the price after the first PBOC announcement designed to cool down bitcoin trading in China.


On 7 April 2014 12:34, Mike Hearn m...@plan99.net wrote:


At the start of February we had 10,000 bitcoin nodes. Now we have 8,500 and still falling:


 http://getaddr.bitnodes.io/dashboard/chart/?days=60



I know all the reasons why people *might* stop running a node (uses too much disk space, bandwidth, lost interest, etc.). But does anyone have any idea how we might get more insight into what's really going on? It'd be convenient if the subVer contained the operating system, as then we could tell if the bleed was mostly from desktops/laptops (Windows/Mac), which would be expected, or from virtual servers (Linux), which would be more concerning.
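The subVer idea could look something like the sketch below. The /Name:Version/ convention is the real BIP 14 style; the trailing OS tag and the function name are hypothetical illustrations of the proposal, not an existing protocol field:

```python
import platform

def os_tagged_subver(client="Satoshi", version="0.9.1"):
    # /Name:Version/ follows the BIP 14 convention; the "(OS)" suffix is
    # the hypothetical extension discussed above, not part of the protocol.
    return "/%s:%s(%s)/" % (client, version, platform.system())

print(os_tagged_subver())  # e.g. "/Satoshi:0.9.1(Linux)/"
```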



When you set up a Tor node, you can add your email address to the config file and the Tor project sends you emails from time to time about things you should know about. If we did the same, we could have a little exit survey: if your node disappears for long enough, we could email the operator and ask why they stopped.
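The exit-survey mechanism could be sketched crawler-side like this; the data structure, grace period, and function name are all made up for illustration:

```python
import time

# Hypothetical sketch of the exit survey: the crawler records when each
# node (keyed by an opt-in operator contact address) was last seen, and
# flags operators whose node has been gone longer than a grace period.
GRACE_SECONDS = 14 * 86400  # two weeks; an arbitrary choice

def operators_to_survey(last_seen, now=None):
    """last_seen maps contact email -> unix timestamp of last sighting."""
    now = time.time() if now is None else now
    return sorted(email for email, ts in last_seen.items()
                  if now - ts > GRACE_SECONDS)

seen = {"a@example.org": 0, "b@example.org": 100 * 86400}
print(operators_to_survey(seen, now=100 * 86400))  # ['a@example.org']
```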


--
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test & Deployment
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees_APR
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development











--
Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE
Instantly run your Selenium tests across 300+ browser/OS combos.
Get unparalleled scalability from the best Selenium testing platform available
Simple to use. Nothing to install. Get started now for free.
http://p.sf.net/sfu/SauceLabs___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Why are we bleeding nodes?

2014-05-20 Thread Mike Hearn
Yeah I'm expecting port 8333 to go away in China at some point. Actually I
was expecting that years ago and was kind of surprised that the suppression
was being done via banks. Guess the GFW operators were just slow to catch
up.
On 20 May 2014 10:16, bitcoingr...@gmx.com wrote:

 Recently China has updated its firewall, blocking bitcoin sites and pools.
 Whether this is a simple blacklist or more sophisticated packet targeting
 is uncertain; however, this update did specifically target VPN handshakes.

  *Sent:* Monday, April 07, 2014 at 1:07 PM
 *From:* Drak d...@zikula.org
 *To:* Mike Hearn m...@plan99.net
 *Cc:* Bitcoin Dev bitcoin-development@lists.sourceforge.net
 *Subject:* Re: [Bitcoin-development] Why are we bleeding nodes?
  For what it's worth, the number of nodes rose dramatically during the
 China bull run (I recall 45k in China alone) and dropped as dramatically as
 the price after the first PBOC announcement designed to cool down bitcoin
 trading in China.

 On 7 April 2014 12:34, Mike Hearn m...@plan99.net wrote:

 At the start of February we had 10,000 bitcoin nodes. Now we have 8,500
 and still falling:

http://getaddr.bitnodes.io/dashboard/chart/?days=60

 I know all the reasons why people *might* stop running a node (uses too
 much disk space, bandwidth, lost interest etc). But does anyone have any
 idea how we might get more insight into what's really going on? It'd be
 convenient if the subVer contained the operating system, as then we could
 tell if the bleed was mostly from desktops/laptops (Windows/Mac), which
 would be expected, or from virtual servers (Linux), which would be more
 concerning.

 When you set up a Tor node, you can add your email address to the config
 file and the Tor project sends you emails from time to time about things
 you should know about. If we did the same, we could have a little exit
 survey: if your node disappears for long enough, we could email the
 operator and ask why they stopped.




  





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-05-20 Thread Eugen Leitl
On Tue, May 20, 2014 at 10:15:44AM +0200, bitcoingr...@gmx.com wrote:
Recently China has updated its firewall, blocking bitcoin sites and pools.
Whether this is a simple blacklist or more sophisticated packet targeting is
uncertain; however, this update did specifically target VPN handshakes.

Could a blockchain fork due to network split happen?
 



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-05-20 Thread Gmail
Unlikely. I doubt any significant portion of miners in China will continue to
mine on a China-specific chain, since it will certainly be out-mined by
non-Chinese miners and will be orphaned eventually.

More likely is that mining interests in China will make special arrangements to
circumvent the GFwOC.

Users who can't access the worldwide blockchain will notice horrendously slow
confirmation times and other side effects.

 On May 20, 2014, at 10:37, Eugen Leitl eu...@leitl.org 
 
 Could a blockchain fork due to network split happen?
 




Re: [Bitcoin-development] Why are we bleeding nodes?

2014-05-20 Thread Andy Alness
Has there ever been serious discussion of extending the protocol to
support UDP transport? That would allow NAT traversal and let many more
people run effective nodes. I'm also curious whether it could be made to
improve block propagation time.
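To make the idea concrete, here is a minimal sketch of framing a P2P message as a UDP datagram. The 24-byte header layout (magic, command, length, double-SHA256 checksum) is the real TCP wire format; carrying it over UDP, as asked above, is hypothetical, and the function name is invented:

```python
import hashlib
import struct

MAGIC = 0xD9B4BEF9  # mainnet message magic

def udp_datagram(command, payload=b""):
    # The 24-byte header (magic, command, length, checksum) mirrors the
    # real P2P wire format; sending it as a single UDP datagram is the
    # hypothetical part. Anything larger than one datagram (blocks, big
    # transactions) would need its own fragmentation layer.
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    header = struct.pack("<I12sI4s", MAGIC, command.ljust(12, b"\x00"),
                         len(payload), checksum)
    return header + payload

print(len(udp_datagram(b"inv")))  # 24 -- an empty inv fits in one datagram
```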

On Tue, May 20, 2014 at 7:52 AM, Gmail will.ya...@gmail.com wrote:
 Unlikely. I doubt any significant portion of miners in china will continue to 
 mine on a china-specific chain, since it will certainly be outmined by 
 non-Chinese miners, and will be orphaned eventually.

 More likely is that mining interests in china will make special arrangements 
 to circumvent the GFwOC.

 Users who can't access the worldwide blockchain will notice horrendously slow 
 confirmation times and other side effects.

 On May 20, 2014, at 10:37, Eugen Leitl eu...@leitl.org

 Could a blockchain fork due to network split happen?






-- 
Andy Alness
Software Engineer
Coinbase
San Francisco, CA



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-05-20 Thread Jeff Garzik
Yes, I spec'd out the UDP traversal of the P2P protocol. It seems
reasonable, especially for inv messages.

On Tue, May 20, 2014 at 2:46 PM, Andy Alness a...@coinbase.com wrote:
 Has there ever been serious discussion on extending the protocol to
 support UDP transport? That would allow for NAT traversal and for many
 more people to run effective nodes. I'm also curious if it could be
 made to improve block propagation time.

 On Tue, May 20, 2014 at 7:52 AM, Gmail will.ya...@gmail.com wrote:
 Unlikely. I doubt any significant portion of miners in china will continue 
 to mine on a china-specific chain, since it will certainly be outmined by 
 non-Chinese miners, and will be orphaned eventually.

 More likely is that mining interests in china will make special arrangements 
 to circumvent the GFwOC.

 Users who can't access the worldwide blockchain will notice horrendously 
 slow confirmation times and other side effects.

 On May 20, 2014, at 10:37, Eugen Leitl eu...@leitl.org

 Could a blockchain fork due to network split happen?






 --
 Andy Alness
 Software Engineer
 Coinbase
 San Francisco, CA




-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.  https://bitpay.com/



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-05-20 Thread Isidor Zeuner
 
  In my opinion, the number of full nodes doesn't matter (as long as
  it's enough to satisfy demand by other nodes).
 

 Correct. Still, a high number of nodes has a few other benefits:

 1) The more nodes there are, the cheaper it should be to run each one,
 given that the bandwidth and CPU for serving the chain will be spread over
 more people.

 2) It makes Bitcoin *seem* bigger, more robust and more decentralised,
 because there are more people uniting to run it. So there's a psychological
 benefit.


Weighing psychological benefit against effective benefit carries the danger of
destroying trust in the Bitcoin network if hard facts show non-robustness
while the node count looks big. Therefore, it may make sense to establish
better metrics.

 Also, we don't have a good way to measure capacity vs demand at the moment.
 Whether we have enough capacity is rather a shot in the dark right now.


  What matters is how hard it is to run one.
 

 Which is why I'm interested to learn the reason behind the drop. Is it
 insufficient interest, or is running a node too painful?

 For this purpose I'd like to exclude people running Bitcoin Core on laptops
 or non-dedicated desktops. I don't think full nodes will ever make sense
 for consumer wallets again, and I see the bleeding off of those people as
 natural and expected (as Satoshi did). But if someone feels it's too hard
 to run on a cheap server then that'd concern me.


In my opinion, the ability to make use of non-dedicated nodes should be
regarded as an advantage of the Bitcoin protocol, not something to get rid
of. Nodes that contribute this way may add even more robustness than
decentralization alone, as they can do so without exposing a fixed
address that could be attacked.

Best regards,

Isidor



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-05-20 Thread Andy Alness
Awesome! I'm assuming this is it:
https://bitcointalk.org/index.php?topic=156769.0

It would be interesting (at least to me) to take this a step further
and offer UDP as a full TCP replacement capable of STUN-assisted NAT
traversal and possibly swarmed blockchain syncs. It would require open
TCP nodes to facilitate connection establishment. It is obviously a
non-trivial amount of work but would be an interesting experiment.
Maybe BitTorrent's µTP protocol could be leveraged.

On Tue, May 20, 2014 at 12:17 PM, Jeff Garzik jgar...@bitpay.com wrote:
 Yes, I spec'd out the UDP traversal of the P2P protocol. It seems
 reasonable, especially for inv messages.

 On Tue, May 20, 2014 at 2:46 PM, Andy Alness a...@coinbase.com wrote:
 Has there ever been serious discussion on extending the protocol to
 support UDP transport? That would allow for NAT traversal and for many
 more people to run effective nodes. I'm also curious if it could be
 made to improve block propagation time.

 On Tue, May 20, 2014 at 7:52 AM, Gmail will.ya...@gmail.com wrote:
 Unlikely. I doubt any significant portion of miners in china will continue 
 to mine on a china-specific chain, since it will certainly be outmined by 
 non-Chinese miners, and will be orphaned eventually.

 More likely is that mining interests in china will make special 
 arrangements to circumvent the GFwOC.

 Users who can't access the worldwide blockchain will notice horrendously 
 slow confirmation times and other side effects.

 On May 20, 2014, at 10:37, Eugen Leitl eu...@leitl.org

 Could a blockchain fork due to network split happen?






 --
 Andy Alness
 Software Engineer
 Coinbase
 San Francisco, CA




 --
 Jeff Garzik
 Bitcoin core developer and open source evangelist
 BitPay, Inc.  https://bitpay.com/



-- 
Andy Alness
Software Engineer
Coinbase
San Francisco, CA



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-05-20 Thread Jeff Garzik
Indeed -- you must reinvent TCP over UDP, ultimately, to handle blocks
and large TXs.
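The point can be illustrated with a toy fragmentation layer: a block bigger than one datagram must be split into sequenced chunks and reassembled. A real design would also need acks and retransmission (omitted here); all names and the chunk tuple format are hypothetical:

```python
MTU_PAYLOAD = 1400  # conservative per-datagram payload budget

def chunk(msg_id, data):
    # Split data into sequenced (msg_id, seq, total, payload) tuples.
    pieces = [data[i:i + MTU_PAYLOAD] for i in range(0, len(data), MTU_PAYLOAD)]
    return [(msg_id, seq, len(pieces), p) for seq, p in enumerate(pieces)]

def reassemble(chunks):
    chunks = sorted(chunks, key=lambda c: c[1])  # datagrams arrive out of order
    if len(chunks) != chunks[0][2]:
        raise ValueError("missing chunks -- would trigger a retransmit request")
    return b"".join(p for _, _, _, p in chunks)

block = b"\x01" * 5000
wire = chunk(42, block)
print(len(wire), reassemble(list(reversed(wire))) == block)  # 4 True
```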


On Tue, May 20, 2014 at 4:09 PM, Andy Alness a...@coinbase.com wrote:
 Awesome! I'm assuming this is it:
 https://bitcointalk.org/index.php?topic=156769.0

 It would be interesting (at least to me) to take this a step further
 and offer UDP as a full TCP replacement capable of STUN-assisted NAT
 traversal and possibly swarmed blockchain syncs. It would require open
 TCP nodes to facilitate connection establishment. It is obviously a
 non-trivial amount of work but would be an interesting experiment.
 Maybe BitTorrent's µTP protocol could be leveraged.

 On Tue, May 20, 2014 at 12:17 PM, Jeff Garzik jgar...@bitpay.com wrote:
 Yes, I spec'd out the UDP traversal of the P2P protocol. It seems
 reasonable, especially for inv messages.

 On Tue, May 20, 2014 at 2:46 PM, Andy Alness a...@coinbase.com wrote:
 Has there ever been serious discussion on extending the protocol to
 support UDP transport? That would allow for NAT traversal and for many
 more people to run effective nodes. I'm also curious if it could be
 made to improve block propagation time.

 On Tue, May 20, 2014 at 7:52 AM, Gmail will.ya...@gmail.com wrote:
 Unlikely. I doubt any significant portion of miners in china will continue 
 to mine on a china-specific chain, since it will certainly be outmined by 
 non-Chinese miners, and will be orphaned eventually.

 More likely is that mining interests in china will make special 
 arrangements to circumvent the GFwOC.

 Users who can't access the worldwide blockchain will notice horrendously 
 slow confirmation times and other side effects.

 On May 20, 2014, at 10:37, Eugen Leitl eu...@leitl.org

 Could a blockchain fork due to network split happen?






 --
 Andy Alness
 Software Engineer
 Coinbase
 San Francisco, CA




 --
 Jeff Garzik
 Bitcoin core developer and open source evangelist
 BitPay, Inc.  https://bitpay.com/



 --
 Andy Alness
 Software Engineer
 Coinbase
 San Francisco, CA



-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.  https://bitpay.com/



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-09 Thread Wladimir
On Wed, Apr 9, 2014 at 12:38 PM, Wendell w...@hivewallet.com wrote:

 On that note, I think we have every possibility to make desktop and mobile
 wallets mind-numbingly simple -- and perhaps even do one better. Is this
 now a community priority? If so, I would really appreciate some additional
 contributors to Hive!


How does that relate to the nodes issue?

Would it be an option to package an optional bitcoind with your wallet,
automatically managed in the background, so that users can run a full node
if they want?
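A minimal sketch of what that supervision might look like; the -datadir and -listen flags are real bitcoind options, but the path, function name, and overall approach are assumptions, and a real integration would also handle shutdown, crash restarts, and RPC credentials:

```python
import subprocess

def start_managed_node(bitcoind="bitcoind", datadir="/tmp/wallet-node"):
    # Hypothetical: the wallet bundles a bitcoind binary and supervises it
    # in the background, so the user gets a full node without noticing.
    cmd = [bitcoind, "-datadir=" + datadir, "-listen=1"]
    return subprocess.Popen(cmd)  # wallet keeps the handle to stop the node later
```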

Wladimir


Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-08 Thread Jean-Paul Kogelman
Isn't that just conceding that p2p protocol A is better than p2p protocol B?

Can't Bitcoin Core's block fetching be improved to get performance similar to a
torrent + import?

Currently it's hard to go wide on data fetching because headers-first is still
pretty 'beefy'. The headers could be compressed, which would get you about 50%
savings.

Also, maybe add a layer that groups block headers under a single hash (say,
2016 headers at a time), and then allow fetching those (possibly compressed)
header 'blocks' from multiple sources in parallel. And fan out block fetches
even further, favoring fast nodes.

Just thinking out loud.
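The grouping idea above could be sketched like this. The 80-byte header size is real; the per-group digest, its name, and the use of plain SHA256 over the concatenation are illustrative assumptions, not an existing protocol feature:

```python
import hashlib

HEADER_SIZE = 80   # real serialized block header size
GROUP = 2016       # one difficulty period, as suggested above

def header_group_hashes(headers_blob):
    # Hypothetical "header block" digests: one hash per run of 2016
    # headers, so peers could fetch and verify groups in parallel.
    step = HEADER_SIZE * GROUP
    return [hashlib.sha256(headers_blob[i:i + step]).hexdigest()
            for i in range(0, len(headers_blob), step)]

blob = b"\x00" * (HEADER_SIZE * GROUP * 2)  # two full groups of dummy headers
print(len(header_group_hashes(blob)))  # 2
```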

jp

 On Apr 7, 2014, at 8:44 PM, Jeff Garzik jgar...@bitpay.com wrote:
 
 Being "Mr. Torrent," I've held open the 80%-serious suggestion to
 simply refuse to serve blocks older than X (3 months?).
 
 That forces download by other means (presumably torrent).
 
 I do not feel it is productive for any nodes on the network to waste
 time/bandwidth/etc. serving static, ancient data.  There remain, of
 course, issues of older nodes and getting the word out that prevents
 this switch from being flipped on tomorrow.
 
 
 
 On Mon, Apr 7, 2014 at 2:49 PM, Gregory Maxwell gmaxw...@gmail.com wrote:
 On Mon, Apr 7, 2014 at 11:35 AM, Tamas Blummer ta...@bitsofproof.com 
 wrote:
 BTW, did we already agree on the service bits for an archive node?
 
 I'm still very concerned that a binary archive bit will cause extreme
 load hot-spotting, and the kind of binary "use lots of resources: YES or
 NO" behavior I think we're currently suffering from somewhat, but at that
 point enshrined in the protocol.
 
 It would be much better to extend the addr messages so that nodes can
 indicate a range or two of blocks that they're serving, so that all
 nodes can contribute fractionally according to their means. E.g. if
 you want to offer up 8 GB of distributed storage and contribute to the
 availability of the blockchain, without having to swallow the whole
 20, 30, 40 ... gigabyte pill.
 
 Already we need that kind of distributed storage for the most recent
 blocks to prevent extreme bandwidth load on archives, so extending it
 to arbitrary ranges is only more complicated because there is
 currently no room to signal it.
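The fractional-storage idea reduces to nodes advertising which block-height ranges they keep. A tiny sketch, where the (start, end) tuple encoding and the helper name are invented for illustration:

```python
# Instead of one archive yes/no bit, a node advertises the block-height
# ranges it stores, so storage load spreads fractionally across peers.
def serves(ranges, height):
    return any(lo <= height <= hi for lo, hi in ranges)

# e.g. a node keeping recent blocks plus one older slice it volunteered for
my_ranges = [(210000, 250000), (295000, 300000)]
print(serves(my_ranges, 240000), serves(my_ranges, 10000))  # True False
```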
 
 
 
 
 -- 
 Jeff Garzik
 Bitcoin core developer and open source evangelist
 BitPay, Inc.  https://bitpay.com/
 



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-08 Thread Mike Hearn

 Multi-sig requires infrastructure.  It isn't a magic wand that we can
 wave to make everyone secure.  The protocols and techniques necessary
 don't exist yet, and apparently no one has much of an incentive to
 create them.


It is starting to happen. If you're OK with using a specific web wallet
there's BitGo and greenaddress.it already, though I think their risk
analysis is just sending you an SMS code. I wrote up an integration plan
for bitcoinj a few days ago:

 https://groups.google.com/forum/#!topic/bitcoinj/Uxl-z40OLuQ

but guess what? It's quite complicated. As with all these features.


Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-08 Thread Tamas Blummer
Specialization of nodes is ongoing, most prominently with SPV wallets and mining.

There is a need I see in my own business for software that is able to serve
multiple wallets and is multi-tiered, so the world-facing P2P node can sit in a
DMZ. I target this with a hybrid model that is SPV plus mempool transaction
validation against the UTXO set, and use 'reference' implementations as border
routers. I think this setup will be common for enterprises, hence my push here
for a stripped-down 'reference' border router without wallet, payment protocol,
GUI, or RPC calls.

That border router could also serve as an archive node, possibly also offering
blocks in bulk, e.g. over HTTP. Enterprises that run a multi-tiered environment
have the bandwidth to serve as archives.

Tamas Blummer
http://bitsofproof.com

On 08.04.2014, at 05:44, Jeff Garzik jgar...@bitpay.com wrote:

 Being "Mr. Torrent," I've held open the 80%-serious suggestion to
 simply refuse to serve blocks older than X (3 months?).
 
 That forces download by other means (presumably torrent).
 
 I do not feel it is productive for any nodes on the network to waste
 time/bandwidth/etc. serving static, ancient data.  There remain, of
 course, issues of older nodes and getting the word out that prevents
 this switch from being flipped on tomorrow.
 
 
 
 On Mon, Apr 7, 2014 at 2:49 PM, Gregory Maxwell gmaxw...@gmail.com wrote:
 On Mon, Apr 7, 2014 at 11:35 AM, Tamas Blummer ta...@bitsofproof.com wrote:
 BTW, did we already agree on the service bits for an archive node?
 
 I'm still very concerned that a binary archive bit will cause extreme
 load hot-spotting, and the kind of binary "use lots of resources: YES or
 NO" behavior I think we're currently suffering from somewhat, but at that
 point enshrined in the protocol.
 
 It would be much better to extend the addr messages so that nodes can
 indicate a range or two of blocks that they're serving, so that all
 nodes can contribute fractionally according to their means. E.g. if
 you want to offer up 8 GB of distributed storage and contribute to the
 availability of the blockchain, without having to swallow the whole
 20, 30, 40 ... gigabyte pill.
 
 Already we need that kind of distributed storage for the most recent
 blocks to prevent extreme bandwidth load on archives, so extending it
 to arbitrary ranges is only more complicated because there is
 currently no room to signal it.
 
 
 
 
 -- 
 Jeff Garzik
 Bitcoin core developer and open source evangelist
 BitPay, Inc.  https://bitpay.com/
 





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-08 Thread Jesus Cea
On 07/04/14 15:50, Gregory Maxwell wrote:
 Bitcoin.org recommends people away from running Bitcoin-QT now, so I'm
 not sure that we should generally find that trend surprising.

What options are out there for people who care about the 20 GB blockchain?
Depending on a third-party server is not an option.

-- 
Jesús Cea Avión _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
Twitter: @jcea_/_/_/_/  _/_/_/_/_/
jabber / xmpp:j...@jabber.org  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-08 Thread Andrew LeCody
My node (based in Dallas, TX) has about 240 connections and is using a
little under 4 Mbps in bandwidth right now.

According to my hosting provider, I'm at 11.85 Mbps for this week, using
95th percentile billing. The report from my provider includes my other
servers, though.


On Mon, Apr 7, 2014 at 12:39 PM, Chris Williams ch...@icloudtools.netwrote:

 I’m afraid this is a highly simplistic view of the costs of running a full
 node.

 My node consumes fantastic amounts of data traffic, which is a real cost.

 In the 30 days ending April 6, my node:

 * Received 36.8 GB of data
 * Sent 456.5 GB of data

 At my geographic service location (Singapore), this cost about $90 last
 month for bandwidth alone. It would be slightly cheaper if I was hosted in
 the US of course.

 But anyone can understand that moving half a terabyte of data around in a
 month will not be cheap.


 On Apr 7, 2014, at 8:53 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

  On Mon, Apr 7, 2014 at 8:45 AM, Justus Ranvier justusranv...@gmail.com
 wrote:
  1. The resource requirements of a full node are moving beyond the
  capabilities of casual users. This isn't inherently a problem - after
  all most people don't grow their own food, tailor their own clothes, or
  keep blacksmith tools handy to forge their own horseshoes either.
 
  Right now running a full node consumes about $1 in disk space
  non-reoccurring and costs a couple cents in power per month.
 
  This isn't to say things are all ducky. But if you're going to say the
  resource requirements are beyond the capabilities of casual users I'm
  afraid I'm going to have to say: citation needed.
 
 







Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Jameson Lopp
I'm glad to see that I'm not the only one concerned about the consistent 
dropping of nodes. Though I think that the fundamental question should be: how 
many nodes do we really need? Obviously more is better, but it's difficult to 
say how concerned we should be without more information. I posted my thoughts 
last month: http://coinchomp.com/2014/03/19/bitcoin-nodes-many-enough/

I have begun working on my node monitoring project and will post updates if it 
results in me gaining any new insights about the network.

- Jameson

On 04/07/2014 07:34 AM, Mike Hearn wrote:
 At the start of February we had 10,000 bitcoin nodes. Now we have 8,500 and
 still falling:
 
http://getaddr.bitnodes.io/dashboard/chart/?days=60
 
 I know all the reasons why people *might* stop running a node (uses too
 much disk space, bandwidth, lost interest etc). But does anyone have any
 idea how we might get more insight into what's really going on? It'd be
 convenient if the subVer contained the operating system, as then we could
 tell if the bleed was mostly from desktops/laptops (Windows/Mac), which
 would be expected, or from virtual servers (Linux), which would be more
 concerning.
 
 When you set up a Tor node, you can add your email address to the config
 file and the Tor project sends you emails from time to time about things
 you should know about. If we did the same, we could have a little exit
 survey: if your node disappears for long enough, we could email the
 operator and ask why they stopped.
 
 
 
 



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Pieter Wuille
On Mon, Apr 7, 2014 at 2:19 PM, Jameson Lopp jameson.l...@gmail.com wrote:
 I'm glad to see that I'm not the only one concerned about the consistent 
 dropping of nodes. Though I think that the fundamental question should be: 
 how many nodes do we really need? Obviously more is better, but it's 
 difficult to say how concerned we should be without more information. I 
 posted my thoughts last month: 
 http://coinchomp.com/2014/03/19/bitcoin-nodes-many-enough/

In my opinion, the number of full nodes doesn't matter (as long as
it's enough to satisfy demand by other nodes).

What matters is how hard it is to run one. If someone is interested
in verifying that nobody is cheating on the network, can they, and can
they do so without significant investment? Whether they actually will
depends also on how interesting the currency and its digital transfers
are.

 On 04/07/2014 07:34 AM, Mike Hearn wrote:
 At the start of February we had 10,000 bitcoin nodes. Now we have 8,500 and
 still falling:

http://getaddr.bitnodes.io/dashboard/chart/?days=60

My own network crawler (which feeds my DNS seeder) hasn't seen any
significant drop that I remember, but I don't have actual logs. It's
seeing around 6000 well reachable nodes currently, which is the
highest number I've ever seen (though it's been around 6000 for quite
a while now).

-- 
Pieter



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Mike Hearn

 In my opinion, the number of full nodes doesn't matter (as long as
 it's enough to satisfy demand by other nodes).


Correct. Still, a high number of nodes has a few other benefits:

1) The more nodes there are, the cheaper it should be to run each one,
given that the bandwidth and CPU for serving the chain will be spread over
more people.
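Point 1 can be illustrated with toy numbers (all assumed, purely hypothetical): if the aggregate demand for serving chain data were fixed, the average load carried by each full node would fall in inverse proportion to the node count.

```python
# Toy model (assumed figures, purely illustrative): with a fixed aggregate
# demand for serving the chain, each additional full node lowers the
# average load every other node must carry.

TOTAL_DEMAND_GB_PER_DAY = 500.0  # assumed network-wide chain-serving demand

def per_node_load(node_count):
    """Average GB/day served per node if demand is spread evenly."""
    return TOTAL_DEMAND_GB_PER_DAY / node_count

for n in (1_000, 8_500, 10_000):
    print(n, round(per_node_load(n), 3))  # load shrinks as nodes join
```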

2) It makes Bitcoin *seem* bigger, more robust and more decentralised,
because there are more people uniting to run it. So there's a psychological
benefit.

Also, we don't have a good way to measure capacity vs demand at the moment.
Whether we have enough capacity is rather a shot in the dark right now.


 What matters is how hard it is to run one.


Which is why I'm interested to learn the reason behind the drop. Is it
insufficient interest, or is running a node too painful?

For this purpose I'd like to exclude people running Bitcoin Core on laptops
or non-dedicated desktops. I don't think full nodes will ever make sense
for consumer wallets again, and I see the bleeding off of those people as
natural and expected (as Satoshi did). But if someone feels it's too hard
to run on a cheap server then that'd concern me.


 My own network crawler (which feeds my DNS seeder) hasn't seen any
 significant drop


It would be good to explain the difference, but I suspect your definition
of "well reachable" excludes people running Core at home. From the diurnal
cycle we see in Addy's graphs it's clear some nodes are being shut down
when people go to bed. So if we have 6000 nodes on servers and 2000 at
home, then I'd expect Addy's graphs and yours to slowly come into alignment
as people give up using Core as a consumer wallet.


Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Jameson Lopp
On 04/07/2014 08:26 AM, Pieter Wuille wrote:
 In my opinion, the number of full nodes doesn't matter (as long as
 it's enough to satisfy demand by other nodes).

I agree, but if we don't quantify demand then we are practically blind. What 
is the plan? To wait until SPV clients start lagging / timing out because their 
requests cannot be handled by the nodes?

For all I know, the network would run just fine on 100 nodes. But not knowing 
really irks me as an engineer.

- Jameson



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Andreas Schildbach
They're _not_ phasing out into SPV wallets, from what I can tell. Around
the 24th of February there was a sharp change in the current-installs
graph of Bitcoin Wallet. That number used to grow by about 20,000 per
month; since that date it has barely moved.

My guess is that a large number of users have lost interest after they
lost their money in MtGox. The 24th of February coincides with the
final shutdown, according to

http://en.wikipedia.org/wiki/Mt._Gox#February_2014_shutdown_and_bankruptcy


On 04/07/2014 02:17 PM, Ricardo Filipe wrote:
 phasing out of bitcoinqt into spv wallets?
 
 2014-04-07 12:34 GMT+01:00 Mike Hearn m...@plan99.net:
 At the start of February we had 10,000 bitcoin nodes. Now we have 8,500 and
 still falling:

http://getaddr.bitnodes.io/dashboard/chart/?days=60

 I know all the reasons why people might stop running a node (uses too much
 disk space, bandwidth, lost interest etc). But does anyone have any idea how
 we might get more insight into what's really going on? It'd be convenient if
 the subVer contained the operating system, as then we could tell if the
 bleed was mostly from desktops/laptops (Windows/Mac), which would be
 expected, or from virtual servers (Linux), which would be more concerning.

 When you set up a Tor node, you can add your email address to the config
 file and the Tor project sends you emails from time to time about things you
 should know about. If we did the same, we could have a little exit survey:
 if your node disappears for long enough, we could email the operator and ask
 why they stopped.


 
 




Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Gregory Maxwell
On Mon, Apr 7, 2014 at 4:34 AM, Mike Hearn m...@plan99.net wrote:
 At the start of February we had 10,000 bitcoin nodes. Now we have 8,500 and
 still falling:

FWIW, a few months before that we had even fewer than 8,500 by the bitnodes count.

Bitcoin.org recommends people away from running Bitcoin-QT now, so I'm
not sure that we should generally find that trend surprising.



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Gregory Maxwell
On Mon, Apr 7, 2014 at 6:50 AM, Gregory Maxwell gmaxw...@gmail.com wrote:
 FWIW, a few months before that we had even fewer than 8,500 by the bitnodes
 count.

Gah, accidentally sent. I wanted to continue here that it was fewer
than 8,500 and had been falling pretty consistently for months,
basically since the bitcoin.org change.  Unfortunately it looks like
the old bitnodes.io data isn't available anymore, so I'm going off my
memory here.

The Bitnodes counts have always been somewhat higher than my or sipa's
node counts too, fwiw.



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Jameson Lopp
The Bitnodes project updated their counting algorithm a month or so ago. It 
used to be slower and less accurate - prior to their update, it was reporting 
in excess of 100,000 nodes.

- Jameson

On 04/07/2014 09:53 AM, Gregory Maxwell wrote:
 On Mon, Apr 7, 2014 at 6:50 AM, Gregory Maxwell gmaxw...@gmail.com wrote:
 FWIW, a few months before that we had even fewer than 8,500 by the bitnodes
 count.
 
 Gah, accidentally sent. I wanted to continue here that it was fewer
 than 8,500 and had been falling pretty consistently for months,
 basically since the bitcoin.org change.  Unfortunately it looks like
 the old bitnodes.io data isn't available anymore, so I'm going off my
 memory here.
 
 The Bitnodes counts have always been somewhat higher than my or sipa's
 node counts too, fwiw.
 
 



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Gregory Maxwell
On Mon, Apr 7, 2014 at 6:58 AM, Jameson Lopp jameson.l...@gmail.com wrote:
 The Bitnodes project updated their counting algorithm a month or so ago. It 
 used to be slower and less accurate - prior to their update, it was reporting 
 in excess of 100,000 nodes.

Nah.  It reported multiple metrics. The 100,000 number was a mostly
useless metric that just counted the number of distinct addr messages
floating around the network, which contains a lot of junk.  They also
previously reported an actual connectable-node count, and while they
may have tweaked things here and there, as far as I can tell it has
been consistent with the numbers they are using in the headlines now.



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Mike Hearn

 My guess is that a large number of users have lost interest after they
 lost their money in MtGox. The 24th of February coincides with the
 final shutdown


Sigh. It would not be surprising if MtGox has indeed dealt the community a
critical blow in this regard. TX traffic is down since then too:

https://blockchain.info/charts/n-transactions-excluding-popular?timespan=60daysshowDataPoints=falsedaysAverageString=1show_header=truescale=0address=

Judging from comments and the leaked user db, it seems a lot of well known
people lost money there (not me, fortunately). I wish I could say people
have learned but from the size of the deposit base at Bitstamp they clearly
have not. A lot of Bitcoin users don't seem to be ready to be their own
bank, yet still want to own some on the assumption everyone else either is
or soon will be. So it's really only a matter of time until something goes
wrong with some large bitbank again, either Bitstamp or Coinbase.

Some days I wonder if Bitcoin will be killed off by people who just refuse
to use it properly before it ever gets a chance to shine. The general
public doesn't distinguish between Bitcoin users who deposit with a third
party and the real Bitcoin users who don't.


Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Mike Hearn
Indeed, fully agreed. The only way to really make progress here is to make
the UX of being your own bank not only as good as trusting a third party,
but better.

I've been encouraged by the rise of risk analysis services, but we need to
integrate them into wallets more widely for them to have much impact.
Otherwise people get to pick between a variety of wallets, none of which
have *all* the features they want. And TREZOR is cool, albeit, something
that's going to be for committed users only.



On Mon, Apr 7, 2014 at 4:15 PM, Eric Martindale e...@ericmartindale.comwrote:

 We need to make it so mind-numbingly simple to run Bitcoin correctly
 that the average user doesn't find reasons not to do so in the course of
 normal use.  Right now, Coinbase and Bitstamp are winning the user
 experience battle, which technically endangers the user, and by proxy the
 Bitcoin network.

 Multi-sig as a default is a start.  It won't succeed unless the user
 experience is simply better than trusted third parties, but we need to
 start the education process with the very basic fundamental: trusting a
 third-party with full access to your Bitcoin is just replacing one
 centralized banking system with another.

 Eric Martindale
 Developer Evangelist, BitPay
 +1 (919) 374-2020
 On Apr 7, 2014 7:05 AM, Mike Hearn m...@plan99.net wrote:

  My guess is that a large number of users have lost interest after they
 lost their money in MtGox. The 24th of February coincides with the
 final shutdown


 Sigh. It would not be surprising if MtGox has indeed dealt the community
 a critical blow in this regard. TX traffic is down since then too:


 https://blockchain.info/charts/n-transactions-excluding-popular?timespan=60daysshowDataPoints=falsedaysAverageString=1show_header=truescale=0address=

 Judging from comments and the leaked user db, it seems a lot of well
 known people lost money there (not me, fortunately). I wish I could say
 people have learned but from the size of the deposit base at Bitstamp they
 clearly have not. A lot of Bitcoin users don't seem to be ready to be their
 own bank, yet still want to own some on the assumption everyone else either
 is or soon will be. So it's really only a matter of time until something
 goes wrong with some large bitbank again, either Bitstamp or Coinbase.

 Some days I wonder if Bitcoin will be killed off by people who just
 refuse to use it properly before it ever gets a chance to shine. The
 general public doesn't distinguish between Bitcoin users who deposit with
 a third party and the real Bitcoin users who don't.






Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Eric Martindale
We need to make it so mind-numbingly simple to run Bitcoin correctly that
the average user doesn't find reasons not to do so in the course of normal
use.  Right now, Coinbase and Bitstamp are winning the user experience
battle, which technically endangers the user, and by proxy the Bitcoin
network.

Multi-sig as a default is a start.  It won't succeed unless the user
experience is simply better than trusted third parties, but we need to
start the education process with the very basic fundamental: trusting a
third-party with full access to your Bitcoin is just replacing one
centralized banking system with another.

Eric Martindale
Developer Evangelist, BitPay
+1 (919) 374-2020
On Apr 7, 2014 7:05 AM, Mike Hearn m...@plan99.net wrote:

 My guess is that a large number of users have lost interest after they
 lost their money in MtGox. The 24th of February coincides with the
 final shutdown


 Sigh. It would not be surprising if MtGox has indeed dealt the community a
 critical blow in this regard. TX traffic is down since then too:


 https://blockchain.info/charts/n-transactions-excluding-popular?timespan=60daysshowDataPoints=falsedaysAverageString=1show_header=truescale=0address=

 Judging from comments and the leaked user db, it seems a lot of well known
 people lost money there (not me, fortunately). I wish I could say people
 have learned but from the size of the deposit base at Bitstamp they clearly
 have not. A lot of Bitcoin users don't seem to be ready to be their own
 bank, yet still want to own some on the assumption everyone else either is
 or soon will be. So it's really only a matter of time until something goes
 wrong with some large bitbank again, either Bitstamp or Coinbase.

 Some days I wonder if Bitcoin will be killed off by people who just refuse
 to use it properly before it ever gets a chance to shine. The general
 public doesn't distinguish between Bitcoin users who deposit with a third
 party and the real Bitcoin users who don't.






Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Jameson Lopp
I would point to bandwidth as the most important issue to the casual user who 
runs a node at home. Few casual users have the know-how to set up QoS rules and 
thus become quite annoyed when their Internet connection is discernibly slowed.

- Jameson

On 04/07/2014 11:53 AM, Gregory Maxwell wrote:
 On Mon, Apr 7, 2014 at 8:45 AM, Justus Ranvier justusranv...@gmail.com 
 wrote:
 1. The resource requirements of a full node are moving beyond the
 capabilities of casual users. This isn't inherently a problem - after
 all most people don't grow their own food, tailor their own clothes, or
 keep blacksmith tools handy to forge their own horseshoes either.
 
 Right now running a full node consumes about $1 in disk space
 non-reoccurring and costs a couple cents in power per month.
 
 This isn't to say things are all ducky. But if you're going to say the
 resource requirements are beyond the capabilities of casual users I'm
 afraid I'm going to have to say: citation needed.
 
 



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Mark Friedenbach
Right now running a full node on my home DSL connection (1 Mbps) makes
other internet activity periodically unresponsive. I think we've already
hit a point where resource requirements are pushing out casual users,
although of course we can't be certain that accounts for all lost nodes.

On 04/07/2014 08:53 AM, Gregory Maxwell wrote:
 On Mon, Apr 7, 2014 at 8:45 AM, Justus Ranvier justusranv...@gmail.com 
 wrote:
 1. The resource requirements of a full node are moving beyond the
 capabilities of casual users. This isn't inherently a problem - after
 all most people don't grow their own food, tailor their own clothes, or
 keep blacksmith tools handy to forge their own horseshoes either.
 
 Right now running a full node consumes about $1 in disk space
 non-reoccurring and costs a couple cents in power per month.
 
 This isn't to say things are all ducky. But if you're going to say the
 resource requirements are beyond the capabilities of casual users I'm
 afraid I'm going to have to say: citation needed.
 
 



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Gregory Maxwell
On Mon, Apr 7, 2014 at 9:27 AM, Mark Friedenbach m...@monetize.io wrote:
 Right now running a full-node on my home DSL connection (1Mbps) makes
 other internet activity periodically unresponsive. I think we've already
 hit a point where resource requirements are pushing out casual users,
 although of course we can't be certain that accounts for all lost nodes.

That is an implementation issue, mostly one that arises as an indirect
consequence of not having headers-first and parallel fetch, not a
requirements issue.

Under the current Bitcoin validity rules it should be completely
reasonable to run a full contributing node with no more than 30 kbit/s
inbound (receiving two copies of everything, blocks + transactions) and
60 kbit/s outbound (sending out four copies of everything). (So long
as you're sending out >= what you're taking in, you're contributing to
the network's capacity.) Throw in a factor of two for bursting, though
not every node needs to be contributing super-low-latency capacity.

This is absolutely not the case with the current implementation, but
it's not a requirements thing.
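As a sanity check on the figures above (a sketch using Maxwell's stated numbers; the ~15 kbit/s unique-stream rate is inferred from them, not stated in the message):

```python
# Back-of-envelope check of the rates above (assumptions, not measurements):
# a ~15 kbit/s unique stream of blocks + transactions is inferred from the
# stated figures; two copies received and four copies relayed reproduce them.

UNIQUE_STREAM_KBIT_S = 15            # assumed unique data rate

inbound = 2 * UNIQUE_STREAM_KBIT_S   # receive two copies of everything
outbound = 4 * UNIQUE_STREAM_KBIT_S  # send out four copies of everything

SECONDS_PER_MONTH = 30 * 24 * 3600
gb_in = inbound * SECONDS_PER_MONTH / 8 / 1e6    # kbit -> GB
gb_out = outbound * SECONDS_PER_MONTH / 8 / 1e6

print(inbound, outbound)                  # -> 30 60 (kbit/s, as in the text)
print(round(gb_in, 1), round(gb_out, 1))  # -> 9.7 19.4 (GB per month)
```

At those rates a contributing node moves roughly 30 GB/month in total, an order of magnitude below the half-terabyte figures reported elsewhere in this thread, which supports the point that the gap is an implementation matter rather than a protocol requirement.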



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Drak
For what it's worth, the number of nodes rose dramatically during the China
bullrun (I recall 45k in China alone) and dropped as dramatically as the
price after the first PBOC announcement designed to cool down bitcoin
trading in China.


On 7 April 2014 12:34, Mike Hearn m...@plan99.net wrote:

 At the start of February we had 10,000 bitcoin nodes. Now we have 8,500
 and still falling:

http://getaddr.bitnodes.io/dashboard/chart/?days=60

 I know all the reasons why people *might* stop running a node (uses too
 much disk space, bandwidth, lost interest etc). But does anyone have any
 idea how we might get more insight into what's really going on? It'd be
 convenient if the subVer contained the operating system, as then we could
 tell if the bleed was mostly from desktops/laptops (Windows/Mac), which
 would be expected, or from virtual servers (Linux), which would be more
 concerning.

 When you set up a Tor node, you can add your email address to the config
 file and the Tor project sends you emails from time to time about things
 you should know about. If we did the same, we could have a little exit
 survey: if your node disappears for long enough, we could email the
 operator and ask why they stopped.






Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Gregory Maxwell
On Mon, Apr 7, 2014 at 10:01 AM, Mark Friedenbach m...@monetize.io wrote:
 On 04/07/2014 09:57 AM, Gregory Maxwell wrote:
 That is an implementation issue— mostly one that arises as an indirect
 consequence of not having headers first and the parallel fetch, not a
 requirements issue.

 Oh, absolutely. But the question "why are people not running full
 nodes?" has to do with the current implementation, not abstract
 capabilities of a future version of the bitcoind code base.

The distinction is very important because it's a matter of things we
can and should fix vs things that cannot be fixed except by changing
goals/incentives!  Opposite approaches to handling them.

When I read "resource requirements of a full node are moving beyond" I
didn't extract from that that there are implementation issues that
need to be improved to make it work better for low-resource users, due
to the word "requirements".



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Mark Friedenbach
On 04/07/2014 09:57 AM, Gregory Maxwell wrote:
 That is an implementation issue— mostly one that arises as an indirect
 consequence of not having headers first and the parallel fetch, not a
 requirements issue.

Oh, absolutely. But the question "why are people not running full
nodes?" has to do with the current implementation, not abstract
capabilities of a future version of the bitcoind code base.



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Brent Shambaugh
How difficult would it be to set up a node? Using lots of electricity at
home (if required) could be an issue, but I do have a Webfaction account.


On Mon, Apr 7, 2014 at 12:16 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Mon, Apr 7, 2014 at 10:01 AM, Mark Friedenbach m...@monetize.io
 wrote:
  On 04/07/2014 09:57 AM, Gregory Maxwell wrote:
  That is an implementation issue-- mostly one that arises as an indirect
  consequence of not having headers first and the parallel fetch, not a
  requirements issue.
 
  Oh, absolutely. But the question why are people not running full
  nodes? has to do with the current implementation, not abstract
  capabilities of a future version of the bitcoind code base.

 The distinction is very important because it's a matter of things we
 can and should fix vs things that cannot be fixed except by changing
 goals/incentives!  Opposite approaches to handling them.

 When I read resource requirements of a full node are moving beyond I
 didn't extract from that that there are implementation issues that
 need to be improved to make it work better for low resource users due
 to the word requirements.





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Mike Hearn
It uses ~no electricity, it's not like mining.

The primary resources it needs are disk space and bandwidth, after an
intensive initial day or two of building the database.

Actually, I wonder if we should start shipping (auditable) pre-baked
databases calculated up to the last checkpoint so people can download them
and boot up their node right away. Recalculating the entire thing from
scratch every time isn't sustainable in the long run anyway.


On Mon, Apr 7, 2014 at 7:35 PM, Brent Shambaugh
brent.shamba...@gmail.comwrote:

 How difficult would it be to set up a node? Using lots of electricity at
 home (if required) could be an issue, but I do have a Webfaction account.


 On Mon, Apr 7, 2014 at 12:16 PM, Gregory Maxwell gmaxw...@gmail.comwrote:

 On Mon, Apr 7, 2014 at 10:01 AM, Mark Friedenbach m...@monetize.io
 wrote:
  On 04/07/2014 09:57 AM, Gregory Maxwell wrote:
  That is an implementation issue— mostly one that arises as an indirect
  consequence of not having headers first and the parallel fetch, not a
  requirements issue.
 
  Oh, absolutely. But the question why are people not running full
  nodes? has to do with the current implementation, not abstract
  capabilities of a future version of the bitcoind code base.

 The distinction is very important because it's a matter of things we
 can and should fix vs things that cannot be fixed except by changing
 goals/incentives!  Opposite approaches to handling them.

 When I read resource requirements of a full node are moving beyond I
 didn't extract from that that there are implementation issues that
 need to be improved to make it work better for low resource users due
 to the word requirements.










Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Gregory Maxwell
On Mon, Apr 7, 2014 at 10:40 AM, Mike Hearn m...@plan99.net wrote:
 Actually, I wonder

The actual validation isn't really the problem today. The slowness of
the IBD is almost exclusively due to the lack of parallel fetching and
the existence of slow peers. And making the signature gate adaptive
(and deploying the 6x faster ECDSA code) would improve that further.

Go grab sipa's headers-first branch; it has no problem saturating a
20 Mbit/sec pipe while syncing up.
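To illustrate why headers-first enables parallel fetching, here is a toy sketch (not the real P2P protocol, and not sipa's branch): once the header chain is known and validated, block bodies can be requested from several peers concurrently and reassembled in header order for sequential validation. The function names and the simulated in-memory peers are invented for this example.

```python
from concurrent.futures import ThreadPoolExecutor

def sync(headers, peers):
    """Fetch block bodies in parallel, then return them in chain order.

    headers: ordered list of block hashes (already validated).
    peers:   list of callables mapping a hash to a block body.
    """
    blocks = {}
    with ThreadPoolExecutor(max_workers=len(peers)) as pool:
        # Round-robin block requests across the available peers.
        futures = {pool.submit(peers[i % len(peers)], h): h
                   for i, h in enumerate(headers)}
        for fut, h in futures.items():
            blocks[h] = fut.result()
    # Header order defines the chain, so validation can proceed sequentially.
    return [blocks[h] for h in headers]

# Simulated peers serving block data keyed by hash.
chain = [f"hash{i}" for i in range(6)]
store = {h: f"block-{h}" for h in chain}
result = sync(chain, [store.get, store.get])
print(result[0], result[-1])
```

The key point is that download order no longer has to match validation order, so one slow peer no longer stalls the whole initial block download.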



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tamas Blummer
I'd rather prefer to start with SPV and upgrade to a full node, if desired.

Tamas Blummer
http://bitsofproof.com

On 07.04.2014, at 19:40, Mike Hearn m...@plan99.net wrote:

 
 Actually, I wonder if we should start shipping (auditable) pre-baked 
 databases calculated up to the last checkpoint so people can download them 
 and boot up their node right away. Recalculating the entire thing from 
 scratch every time isn't sustainable in the long run anyway.
 





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Justus Ranvier
On 04/07/2014 05:16 PM, Gregory Maxwell wrote:
 When I read "resource requirements of a full node are moving
 beyond" I didn't extract from that that there are implementation
 issues that need to be improved to make it work better for low
 resource users due to the word "requirements".

In order to prevent future confusion: whenever I talk about
requirements (or generally use the present tense), I'm talking about
reality as it currently exists.

If I ever decide to talk about hypothetical future requirements in
some imaginary world of Platonic forms, as opposed to the requirements
imposed by the software that's actually available for casual users to
download today, I'll mention that specifically.

-- 
Support online privacy by using email encryption whenever possible.
Learn how here: http://www.youtube.com/watch?v=bakOKJFtB-k




Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Brent Shambaugh
Okay, awesome. It seems like I set up a Litecoin node without knowing it
(because it was like this:
https://bitcointalk.org/index.php?topic=128122.0). I was able to bootstrap
it (https://litecoin.info/).


On Mon, Apr 7, 2014 at 12:40 PM, Mike Hearn m...@plan99.net wrote:

 It uses ~no electricity, it's not like mining.

 The primary resources it needs are disk space and bandwidth, after an
 intensive initial day or two of building the database.

 Actually, I wonder if we should start shipping (auditable) pre-baked
 databases calculated up to the last checkpoint so people can download them
 and boot up their node right away. Recalculating the entire thing from
 scratch every time isn't sustainable in the long run anyway.


 On Mon, Apr 7, 2014 at 7:35 PM, Brent Shambaugh brent.shamba...@gmail.com
  wrote:

 How difficult would it be to set up a node? Using lots of electricity at
 home (if required) could be an issue, but I do have a Webfaction account.


 On Mon, Apr 7, 2014 at 12:16 PM, Gregory Maxwell gmaxw...@gmail.comwrote:

 On Mon, Apr 7, 2014 at 10:01 AM, Mark Friedenbach m...@monetize.io
 wrote:
  On 04/07/2014 09:57 AM, Gregory Maxwell wrote:
  That is an implementation issue-- mostly one that arises as an indirect
  consequence of not having headers first and the parallel fetch, not a
  requirements issue.
 
  Oh, absolutely. But the question why are people not running full
  nodes? has to do with the current implementation, not abstract
  capabilities of a future version of the bitcoind code base.

 The distinction is very important because it's a matter of things we
 can and should fix vs things that cannot be fixed except by changing
 goals/incentives!  Opposite approaches to handling them.

 When I read resource requirements of a full node are moving beyond I
 didn't extract from that that there are implementation issues that
 need to be improved to make it work better for low resource users due
 to the word requirements.











Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Chris Williams
I’m afraid this is a highly simplistic view of the costs of running a full node.

My node consumes fantastic amounts of data traffic, which is a real cost.

In the 30 days ending April 6, my node:

* Received 36.8 GB of data
* Sent 456.5 GB of data

At my geographic service location (Singapore), this cost about $90 last month 
for bandwidth alone. It would be slightly cheaper if I was hosted in the US of 
course.

But anyone can understand that moving a half-terabyte of data around in a 
month will not be cheap.
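As a sanity check on the figures above (the per-GB price is inferred from the quoted numbers, not a published hosting rate):

```python
# Traffic for the 30 days ending April 6, as reported above.
received, sent = 36.8, 456.5      # GB
total_gb = received + sent        # ~493 GB: roughly half a terabyte

# Implied bandwidth price at the quoted $90/month (an inference,
# not the provider's actual rate card).
implied_per_gb = 90 / total_gb

print(f"{total_gb:.1f} GB total, ~${implied_per_gb:.2f}/GB")
```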


On Apr 7, 2014, at 8:53 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Mon, Apr 7, 2014 at 8:45 AM, Justus Ranvier justusranv...@gmail.com 
 wrote:
 1. The resource requirements of a full node are moving beyond the
 capabilities of casual users. This isn't inherently a problem - after
 all most people don't grow their own food, tailor their own clothes, or
 keep blacksmith tools handy to forge their own horseshoes either.
 
 Right now running a full node consumes about $1 in disk space
 non-reoccurring and costs a couple cents in power per month.
 
 This isn't to say things are all ducky. But if you're going to say the
 resource requirements are beyond the capabilities of casual users I'm
 afraid I'm going to have to say: citation needed.
 





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Justus Ranvier
On 04/07/2014 05:40 PM, Mike Hearn wrote:
 The primary resources it needs are disk space and bandwidth, after
 an intensive initial day or two of building the database.

Check out the kind of hardware casual users are running these days.

The bottleneck is not bulk disk space, but rather IOPS.

Most users don't have spare machines to dedicate to the task of
running a full node, nor is it acceptable for them to not be able to
use their device for other tasks while the node is bootstrapping.

-- 
Support online privacy by using email encryption whenever possible.
Learn how here: http://www.youtube.com/watch?v=bakOKJFtB-k




Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Mike Hearn

 * Sent 456.5 gb data

 At my geographic service location (Singapore), this cost about $90 last
 month for bandwidth alone.


One of the reasons I initiated the (now stalled) PayFile project was in
anticipation of this problem:

https://github.com/mikehearn/PayFile
http://www.youtube.com/watch?v=r0BXnWlnIi4&feature=youtu.be

At some point if you want to actually download and validate the full block
chain from scratch, you will have to start paying for it, I'm sure.

In the meantime:

   1. Getting headers-first implemented and rolled out everywhere would
   reduce the amount of redundant downloading and hopefully reduce transmit
   traffic network-wide.
   2. Implementing chain pruning would allow people to control upload
   bandwidth consumption by reducing the amount of disk storage they allow.


Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tamas Blummer
Headers-first loading allows the node to run SPV from the very first minutes,
and it can converge to a full node over time.
This is, BTW, how the newest versions of BOP can work.

Pruning, however, disqualifies the node as a source for bootstrapping another
full node.
BTW, did we already agree on the service bits for an archive node?


Tamas Blummer
http://bitsofproof.com

On 07.04.2014, at 20:23, Mike Hearn m...@plan99.net wrote:

 * Sent 456.5 gb data
 
 At my geographic service location (Singapore), this cost about $90 last month 
 for bandwidth alone.
 
 One of the reasons I initiated the (now stalled) PayFile project was in 
 anticipation of this problem:
 
 https://github.com/mikehearn/PayFile
 http://www.youtube.com/watch?v=r0BXnWlnIi4&feature=youtu.be
 
 At some point if you want to actually download and validate the full block 
 chain from scratch, you will have to start paying for it I'm sure.
 
 In the meantime:
 Getting headers-first implemented and rolled out everywhere would reduce the 
 amount of redundant downloading and hopefully reduce transmit traffic 
 network-wide.
 Implementing chain pruning would allow people to control upload bandwidth 
 consumption by reducing the amount of disk storage they allow.
 





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Gregory Maxwell
On Mon, Apr 7, 2014 at 11:35 AM, Tamas Blummer ta...@bitsofproof.com wrote:
 BTW, did we already agree on the service bits for an archive node?

I'm still very concerned that a binary archive bit will cause extreme
load hot-spotting and the kind of binary "use lots of resources, YES or
NO" choice I think we're currently suffering some from, but at that
point enshrined in the protocol.

It would be much better to extend the addr messages so that nodes can
indicate a range or two of blocks that they're serving, so that all
nodes can contribute fractionally according to their means. E.g. if
you want to offer up 8 GB of distributed storage and contribute to the
availability of the blockchain, without having to swallow the whole
20, 30, 40 ... gigabyte pill.

Already we need that kind of distributed storage for the most recent
blocks to prevent extreme bandwidth load on archives, so extending it
to arbitrary ranges is only more complicated because there is
currently no room to signal it.
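A minimal sketch of what such a range advertisement might look like on the wire. The layout, field sizes, and function names here are invented for illustration; no such addr extension exists in the actual protocol.

```python
import struct

def encode_ranges(ranges):
    """Pack up to two (start_height, block_count) ranges.

    Hypothetical format: 1-byte count, then little-endian uint32 pairs.
    """
    assert len(ranges) <= 2
    payload = struct.pack("<B", len(ranges))
    for start, count in ranges:
        payload += struct.pack("<II", start, count)
    return payload

def decode_ranges(payload):
    (n,) = struct.unpack_from("<B", payload, 0)
    return [struct.unpack_from("<II", payload, 1 + 8 * i) for i in range(n)]

# A node offering ~8 GB of storage might advertise, say, a historical
# range plus a window near the tip (heights are made up for the example).
adv = encode_ranges([(250000, 30000), (290000, 5000)])
print(decode_ranges(adv))  # [(250000, 30000), (290000, 5000)]
```

The appeal of ranges over a single bit is that each node states exactly what it can serve, so load spreads fractionally instead of splitting the network into "archive" and "non-archive" nodes.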



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tamas Blummer
Once a single transaction is pruned from a block, the block is no longer
eligible to be served to other nodes.
Which transactions are pruned can be rather custom, e.g. even depending on
the wallet(s) of the node;
therefore I guess it is handier to return some bitmap of pruned/full blocks
than ranges.

Tamas Blummer
http://bitsofproof.com

On 07.04.2014, at 20:49, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Mon, Apr 7, 2014 at 11:35 AM, Tamas Blummer ta...@bitsofproof.com wrote:
 BTW, did we already agree on the service bits for an archive node?
 
 I'm still very concerned that a binary archive bit will cause extreme
 load hot-spotting and the kind of binary Use lots of resources YES or
 NO I think we're currently suffering some from, but at that point
 enshrined in the protocol.
 
 It would be much better to extend the addr messages so that nodes can
 indicate a range or two of blocks that they're serving, so that all
 nodes can contribute fractionally according to their means. E.g. if
 you want to offer up 8 GB of distributed storage and contribute to the
 availability of the blockchain, without having to swallow the whole
 20, 30, 40 ... gigabyte pill.
 
 Already we need that kind of distributed storage for the most recent
 blocks to prevent extreme bandwidth load on archives, so extending it
 to arbitrary ranges is only more complicated because there is
 currently no room to signal it.
 





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Gregory Maxwell
On Mon, Apr 7, 2014 at 12:00 PM, Tamas Blummer ta...@bitsofproof.com wrote:
 Once a single transaction is pruned from a block, the block is no longer
 eligible to be served to other nodes.
 Which transactions are pruned can be rather custom e.g. even depending on
 the wallet(s) of the node,
 therefore I guess it is more handy to return some bitmap of pruned/full
 blocks than ranges.

This isn't at all how pruning works in Bitcoin-QT (nor is it how I
expect pruning to work for any mature implementation). Pruning can
work happily on a whole-block-at-a-time basis regardless of whether all
the transactions in it are spent or not.



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Gregory Maxwell
On Mon, Apr 7, 2014 at 12:00 PM, Tamas Blummer ta...@bitsofproof.com wrote:
 therefore I guess it is more handy to return some bitmap of pruned/full
 blocks than ranges.

A bitmap also means high overhead and— if it's used to advertise
non-contiguous blocks— poor locality, since blocks are fetched
sequentially.
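To put rough numbers on the overhead argument (the chain height of ~300,000 blocks is an assumption, roughly the 2014 figure):

```python
blocks = 300_000                 # assumed chain height, circa 2014

# A per-block bitmap costs one bit per block in every advertisement.
bitmap_bytes = blocks // 8       # 37,500 bytes

# A contiguous (start_height, count) range fits in two uint32s.
range_bytes = 2 * 4              # 8 bytes

print(bitmap_bytes, range_bytes)  # 37500 8
```

And since initial sync fetches blocks in height order anyway, a contiguous range matches the access pattern, while a sparse bitmap advertises blocks a syncing peer can't use efficiently.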



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Mark Friedenbach
On 04/07/2014 12:00 PM, Tamas Blummer wrote:
 Once a single transaction is pruned from a block, the block is no longer
 eligible to be served to other nodes.
 Which transactions are pruned can be rather custom e.g. even depending
 on the wallet(s) of the node,
 therefore I guess it is more handy to return some bitmap of pruned/full
 blocks than ranges.

The point is that the node has decided not to prune transactions from
that block, so that it is capable of returning full blocks within that
range.



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tamas Blummer
Maybe it is not a question of the maturity of the implementation but that of
the person making presumptions about it.

I consider a fully pruned blockchain equivalent to the UTXO set. Blocks that
hold no more unspent transactions are reduced to a header. There is, however,
no harm if more is retained.

Tamas Blummer
http://bitsofproof.com

On 07.04.2014, at 21:02, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Mon, Apr 7, 2014 at 12:00 PM, Tamas Blummer ta...@bitsofproof.com wrote:
 Once a single transaction is pruned from a block, the block is no longer
 eligible to be served to other nodes.
 Which transactions are pruned can be rather custom e.g. even depending on
 the wallet(s) of the node,
 therefore I guess it is more handy to return some bitmap of pruned/full
 blocks than ranges.
 
 This isn't at all how pruning works in Bitcoin-QT  (nor is it how I
 expect pruning to work for any mature implementation). Pruning can
 work happily on a whole block at a time basis regardless if all the
 transactions in it are spent or not.
 





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tamas Blummer

Once headers are loaded first, there is no reason for sequential loading.

Validation has to be sequential, but that step can be deferred until the
blocks before a point are loaded and contiguous.

Tamas Blummer
http://bitsofproof.com

On 07.04.2014, at 21:03, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Mon, Apr 7, 2014 at 12:00 PM, Tamas Blummer ta...@bitsofproof.com wrote:
 therefore I guess it is more handy to return some bitmap of pruned/full
 blocks than ranges.
 
 A bitmap also means high overhead and— if it's used to advertise
 non-contiguous blocks— poor locality, since blocks are fetched
 sequentially.
 





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Mark Friedenbach


On 04/07/2014 12:20 PM, Tamas Blummer wrote:
 Validation has to be sequential, but that step can be deferred until the
 blocks before a point are loaded and contiguous.

And how do you find those blocks?

I have a suggestion: have nodes advertise which range of full blocks
they possess; then you can perform synchronization from the advertised ranges!



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tamas Blummer
You have the trunk defined by the headers. Once a range from genesis to block n
is fully downloaded, you may validate up to block n. Furthermore, after
validation you can prune transactions spent up to block n.

You would approach the highest block with validation and stop pruning, say, 100
blocks before it, to leave room for reorgs.
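A minimal sketch of this scheme follows; the function and variable names are hypothetical, and actual block validation is stubbed out:

```python
REORG_BUFFER = 100  # keep the last ~100 blocks unpruned, per the text above

def advance(blocks: dict[int, bytes], validated_up_to: int,
            best_header_height: int) -> tuple[int, int]:
    """Validate as far as the downloaded blocks are contiguous; return the
    new validated height and the height below which spent txs may be pruned."""
    h = validated_up_to
    while h + 1 in blocks:
        h += 1               # a real node would run full validation here
    prune_below = max(0, min(h, best_header_height - REORG_BUFFER))
    return h, prune_below
```

With blocks 1 and 2 present but 3 missing, validation stops at height 2, and nothing is pruned until the header tip is more than 100 blocks past it.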

Tamas Blummer
http://bitsofproof.com

On 07.04.2014, at 21:13, Mark Friedenbach m...@monetize.io wrote:

 
 
 On 04/07/2014 12:20 PM, Tamas Blummer wrote:
 Validation has to be sequential, but that step can be deferred until the
 blocks before a point are loaded and contiguous.
 
 And how do you find those blocks?
 
 I have a suggestion: have nodes advertise which range of full blocks
 they possess; then you can perform synchronization from the advertised ranges!
 





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Paul Lyon
I hope I'm not thread-jacking here, apologies if so, but that's the approach 
I've taken with the node I'm working on.
Headers can be downloaded and stored in any order, it'll make sense of what the 
winning chain is. Blocks don't need to be downloaded in any particular order 
and they don't need to be saved to disk, the UTXO is fully self-contained. That 
way the concern of storing blocks for seeding (or not) is wholly separated from 
syncing the UTXO. This allows me to do the initial blockchain sync in ~6 hours 
when I use my SSD. I only need enough disk space to store the UTXO, and then 
whatever amount of block data the user would want to store for the health of 
the network.
This project is a bitcoin learning exercise for me, so I can only hope I don't 
have any critical design flaws in there. :)

From: ta...@bitsofproof.com
Date: Mon, 7 Apr 2014 21:20:31 +0200
To: gmaxw...@gmail.com
CC: bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] Why are we bleeding nodes?


Once headers are loaded first, there is no reason for sequential loading.
Validation has to be sequential, but that step can be deferred until the blocks
before a point are loaded and contiguous.
Tamas Blummer
http://bitsofproof.com




Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Troy Benjegerdes
I understand the theoretical benefits of multi-sig. But if you want
to make this mind-numbingly simple, do it on the *existing* single-sig.

But why in the world do we not have a *business* that offers bitcoin
wallet insurance? The bitcoin world (and this list) ran around blaming
MtGox and users for being 'stupid' to trust MtGox.

So start a multi-level marketing business that offers *insurance* so
if your bitcoin wallet gets hacked/stolen/whatever, your 'upstream'
or whomever sold you the wallet comes to your house with a new 
computer or installs the new wallet software, or whatever, or just
makes it good.

Now, if the **insurance underwriter** decides that multisig will 
reduce fraud, and **tests it**, then I'd say we do multi-sig. But right
now we are just a bunch of technology wizards trying to force our own
opinions about what's right and 'simple' for end users without ever
asking the damn end-users.

And then we call the end-users idiots because some scammer calls them
and says I'm calling from Microsoft and your computer is broken, please
download this software to fix it.

Multi-sig is more magical moon-math that scammers will exploit to con
your grandma out of bitcoin, and then your friends will call her a stupid
luddite for falling for it.

Fix the cultural victim-blaming bullshit and you'll fix the node bleeding
problem.

On Mon, Apr 07, 2014 at 10:15:15AM -0400, Eric Martindale wrote:
 We need to make it so mind-numbingly simple to run Bitcoin correctly that
 the average user doesn't find reasons not to do so in the course of normal
 use.  Right now, Coinbase and Bitstamp are winning in the user experience
 battle, which technically endangers the user, and by proxy the Bitcoin
 network.
 
 Multi-sig as a default is a start.  It won't succeed unless the user
 experience is simply better than trusted third parties, but we need to
 start the education process with the very basic fundamental: trusting a
 third-party with full access to your Bitcoin is just replacing one
 centralized banking system with another.
 
 Eric Martindale
 Developer Evangelist, BitPay
 +1 (919) 374-2020
 On Apr 7, 2014 7:05 AM, Mike Hearn m...@plan99.net wrote:
 
  My guess is that a large number of users have lost interest after they
  lost their money in MtGox. The 24th of February coincides with the
  final shutdown
 
 
  Sigh. It would not be surprising if MtGox has indeed dealt the community a
  critical blow in this regard. TX traffic is down since then too:
 
 
  https://blockchain.info/charts/n-transactions-excluding-popular?timespan=60days&showDataPoints=false&daysAverageString=1&show_header=true&scale=0&address=
 
  Judging from comments and the leaked user db, it seems a lot of well-known
  people lost money there (not me, fortunately). I wish I could say people
  have learned but from the size of the deposit base at Bitstamp they clearly
  have not. A lot of Bitcoin users don't seem to be ready to be their own
  bank, yet still want to own some on the assumption everyone else either is
  or soon will be. So it's really only a matter of time until something goes
  wrong with some large bitbank again, either Bitstamp or Coinbase.
 
  Some days I wonder if Bitcoin will be killed off by people who just refuse
  to use it properly before it ever gets a chance to shine. The general
  public doesn't distinguish between Bitcoin users who deposit with a third
  party and the real Bitcoin users who don't.
 
 
 
 



-- 

Troy Benjegerdes 'da hozer'  ho...@hozed.org
7 elements  earth::water::air::fire::mind::spirit::soulgrid.coop

  Never pick a fight with someone who buys ink by the barrel,
 nor try buy a hacker who makes money by the megahash



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tamas Blummer
You have to load headers sequentially to be able to connect them and determine
the longest chain.

Blocks can be loaded in random order once you have their order given by the
headers.
Computing the UTXO, however, will force you to at least temporarily store the
blocks unless you have plenty of RAM.
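The sequential-headers point follows from each header committing to its predecessor's hash. A toy sketch (hashes abbreviated to strings, nothing protocol-accurate):

```python
def order_headers(next_of: dict[str, str], genesis: str) -> list[str]:
    """Walk parent -> child links to recover chain order; a header received
    out of order can only be placed once its parent is known."""
    chain, cur = [genesis], genesis
    while cur in next_of:
        cur = next_of[cur]
        chain.append(cur)
    return chain
```

A real node would index headers by hash as they arrive and attach each one to the tree as soon as its parent shows up, which is effectively the same walk done incrementally.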

Regards,

Tamas Blummer
http://bitsofproof.com

On 07.04.2014, at 21:30, Paul Lyon pml...@hotmail.ca wrote:

 I hope I'm not thread-jacking here, apologies if so, but that's the approach 
 I've taken with the node I'm working on.
 
 Headers can be downloaded and stored in any order, it'll make sense of what 
 the winning chain is. Blocks don't need to be downloaded in any particular 
 order and they don't need to be saved to disk, the UTXO is fully 
 self-contained. That way the concern of storing blocks for seeding (or not) 
 is wholly separated from syncing the UTXO. This allows me to do the initial 
 blockchain sync in ~6 hours when I use my SSD. I only need enough disk space 
 to store the UTXO, and then whatever amount of block data the user would want 
 to store for the health of the network.
 
 This project is a bitcoin learning exercise for me, so I can only hope I 
 don't have any critical design flaws in there. :)
 





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Ricardo Filipe
Or have blocks distributed through pruned nodes as a DHT.

2014-04-07 20:13 GMT+01:00 Mark Friedenbach m...@monetize.io:


 On 04/07/2014 12:20 PM, Tamas Blummer wrote:
 Validation has to be sequential, but that step can be deferred until the
 blocks before a point are loaded and contiguous.

 And how do you find those blocks?

 I have a suggestion: have nodes advertise which range of full blocks
 they possess; then you can perform synchronization from the advertised ranges!




Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tier Nolan
On Mon, Apr 7, 2014 at 8:50 PM, Tamas Blummer ta...@bitsofproof.com wrote:

 You have to load headers sequantially to be able to connect them and
 determine the longest chain.


That isn't strictly true.  If you are connected to some honest nodes, then
you could download portions of the chain and then connect the various
sub-chains together.

The protocol doesn't support it, though.  There is no way to ask for the
block headers of the main-chain block at a given height.

Finding one high bandwidth peer to download the entire header chain
sequentially is pretty much forced.  The client can switch if there is a
timeout.

Other peers could be used to download the block chain in parallel while the
header chain is downloading.  Even if the header download stalled, it
wouldn't be that big a deal.

 Blocks can be loaded in random order once you have their order given by
the headers.
 Computing the UTXO however will force you to at least temporarily store
the blocks unless you have plenty of RAM.

You only need to store the UTXO set, rather than the entire block chain.

It is possible to generate the UTXO set without doing any signature
verification.

A lightweight node could just verify the UTXO set and then do random
signature verifications.

This keeps disk space and CPU reasonably low.  If an illegal transaction is
added to a block, then a proof could be provided for the bad transaction.

The only slightly difficult thing is ruling out inflation.  That can be
checked on a block-by-block basis when downloading the entire block chain.
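The UTXO-without-signatures idea can be sketched on a simplified transaction representation; the tuple format and function name here are illustrative, not anything from the protocol:

```python
def apply_block(utxo: set, txs) -> set:
    """Update a UTXO set from simplified (txid, spent_outpoints, n_outputs)
    tuples, skipping all script/signature verification; signatures could be
    spot-checked later on a random sample, as suggested above."""
    for txid, spends, n_outputs in txs:
        for outpoint in spends:
            utxo.discard(outpoint)  # full validation would reject unknown inputs
        for i in range(n_outputs):
            utxo.add((txid, i))
    return utxo
```

Only hashing and set bookkeeping are needed per transaction, which is why skipping signatures makes building the set so much cheaper than full validation.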



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Gregory Maxwell
On Mon, Apr 7, 2014 at 2:48 PM, Tier Nolan tier.no...@gmail.com wrote:
 Blocks can be loaded in random order once you have their order given by
 the headers.
 Computing the UTXO however will force you to at least temporarily store
 the blocks unless you have plenty of RAM.
 You only need to store the UTXO set, rather than the entire block chain.

The comment was specifically in the context of an out-of-order fetch.
Verification must be in order. If you have fetched blocks out of order,
you must preserve them at least long enough to reorder them. That's
all.



Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Tier Nolan
On Mon, Apr 7, 2014 at 10:55 PM, Paul Lyon pml...@hotmail.ca wrote:

  I actually ask for headers from each peer I'm connected to and then dump
 them into the backend to be sorted out... is this abusive to the network?


I think downloading from a subset of the peers and switching out any slow
ones is a reasonable compromise.

Once you have a chain, you can quickly check that all peers have the same
main chain.

Your backend system could have a method that gives you the hash of the last
10 headers on the longest chain it knows about.  You can use the block
locator hash system.

This can be used with the getheaders message and if the new peer is on a
different chain, then it will just send you the headers starting at the
genesis block.

If that happens, you need to download the entire chain from that peer and
see if it is better than your current best.
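The block-locator construction referred to above works roughly like this, shown with heights only (Bitcoin Core sends the corresponding block hashes):

```python
def locator_heights(tip: int) -> list[int]:
    """Dense for the last ~10 blocks, then exponentially sparser steps
    back to genesis, so a peer can find the fork point cheaply."""
    heights, step, h = [], 1, tip
    while h > 0:
        heights.append(h)
        if len(heights) >= 10:
            step *= 2
        h -= step
    heights.append(0)
    return heights
```

The list stays logarithmic in chain length, so even for a deep fork the peer receiving a getheaders can locate the last common block in O(log n) entries.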


*From:* Tier Nolan tier.no...@gmail.com
*Sent:* Monday, April 07, 2014 6:48 PM
*To:* bitcoin-development@lists.sourceforge.net


On Mon, Apr 7, 2014 at 8:50 PM, Tamas Blummer ta...@bitsofproof.com wrote:

 You have to load headers sequantially to be able to connect them and
 determine the longest chain.


The isn't strictly true.  If you are connected to a some honest nodes, then
you could download portions of the chain and then connect the various
sub-chains together.

The protocol doesn't support it though.  There is no system to ask for
block headers for the main chain block with a given height,

Finding one high bandwidth peer to download the entire header chain
sequentially is pretty much forced.  The client can switch if there is a
timeout.

Other peers could be used to parallel download the block chain while the
main chain is downloading.  Even if the header download stalled, it
wouldn't be that big a deal.

 Blocks can be loaded in random order once you have their order given by
the headers.
 Computing the UTXO however will force you to at least temporarily store
the blocks unless you have plenty of RAM.

You only need to store the UTXO set, rather than the entire block chain.

It is possible to generate the UTXO set without doing any signature
verification.

A lightweight node could just verify the UTXO set and then do random
signature verifications.

The keeps disk space and CPU reasonably low.  If an illegal transaction is
added to be a block, then proof could be provided for the bad transaction.

The only slightly difficult thing is confirming inflation.  That can be
checked on a block by block basis when downloading the entire block chain.

 Regards,
 Tamas Blummer
 http://bitsofproof.com http://bitsofproof.com

On 07.04.2014, at 21:30, Paul Lyon pml...@hotmail.ca wrote:

I hope I'm not thread-jacking here, apologies if so, but that's the
approach I've taken with the node I'm working on.

Headers can be downloaded and stored in any order, it'll make sense of what
the winning chain is. Blocks don't need to be downloaded in any particular
order and they don't need to be saved to disk, the UTXO is fully
self-contained. That way the concern of storing blocks for seeding (or not)
is wholly separated from syncing the UTXO. This allows me to do the initial
blockchain sync in ~6 hours when I use my SSD. I only need enough disk
space to store the UTXO, and then whatever amount of block data the user
would want to store for the health of the network.

This project is a bitcoin learning exercise for me, so I can only hope I
don't have any critical design flaws in there. :)

--
From: ta...@bitsofproof.com
Date: Mon, 7 Apr 2014 21:20:31 +0200
To: gmaxw...@gmail.com
CC: bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] Why are we bleeding nodes?


Once headers are loaded first there is no reason for sequential loading.

Validation has to be sequantial, but that step can be deferred until the
blocks before a point are loaded and continous.

Tamas Blummer
http://bitsofproof.com

On 07.04.2014, at 21:03, Gregory Maxwell gmaxw...@gmail.com wrote:

On Mon, Apr 7, 2014 at 12:00 PM, Tamas Blummer ta...@bitsofproof.com
wrote:

therefore I guess it is more handy to return some bitmap of pruned/full
blocks than ranges.


A bitmap also means high overhead and-- if it's used to advertise
non-contiguous blocks-- poor locality, since blocks are fetched
sequentially.



--
Put Bad Developers to Shame Dominate Development with Jenkins Continuous
Integration Continuously Automate Build, Test  Deployment Start a new
project now. Try Jenkins in the cloud.http://p.sf.net/sfu/13600_Cloudbees

___ Bitcoin-development mailing
list Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development




 --
 Put Bad Developers to Shame
 Dominate Development with Jenkins Continuous Integration

Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread kjj
Multi-sig requires infrastructure.  It isn't a magic wand that we can 
wave to make everyone secure.  The protocols and techniques necessary 
don't exist yet, and apparently no one has much of an incentive to 
create them.

I mean no offense, and I don't mean to pick on you.  Your post stuck out 
while I was reading.  Secure multi-sig is what we all want, but wanting 
apparently isn't enough to make it happen.

Other random notes from reading this 50+ post thread:

Perhaps we should have a config flag to prevent a node from serving IBD 
to new nodes.  IBD crushes marginal machines, particularly those with 
spinning disks.  This has been extensively discussed elsewhere.

The ideal IBD hosts are serving the blockchain out of a RAM disk. Is 
there any interest in setting up a network of volunteers to host 
expensive servers with fast connections?  It doesn't look too terribly 
difficult to figure out when a node has stopped asking for blocks in 
bulk, so we could add another config flag to eject nodes once they are 
done booting.

Even ignoring IBD, I think that we are gradually outgrowing cheapass 
hosting options.  Personally, I long ago gave up on answering forum 
questions about running nodes on virtual servers and VPSs.  It is 
certainly still possible to run bitcoind on small boxes, but it isn't 
trivial any more.  (Anyone running on less than my Athlon XP 1800+ with 
896 MB RAM?)  If we want those nodes back, we need to optimize the hell 
out of the memory use, and even that might not be enough.


Eric Martindale wrote:

  We need to make it so mind-numbingly simple to run Bitcoin correctly 
  that the average user doesn't find reasons not to do so in the course of 
  normal use.  Right now, Coinbase and Bitstamp are winning in the user 
  experience battle, which technically endangers the user, and by proxy 
  the Bitcoin network.

 Multi-sig as a default is a start.  It won't succeed unless the user 
 experience is simply better than trusted third parties, but we need to 
 start the education process with the very basic fundamental: trusting 
 a third-party with full access to your Bitcoin is just replacing one 
 centralized banking system with another.






Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Jeff Garzik
Being Mr. Torrent, I've held open the 80% serious suggestion to
simply refuse to serve blocks older than X (3 months?).

That forces download by other means (presumably torrent).

I do not feel it is productive for any nodes on the network to waste
time/bandwidth/etc. serving static, ancient data.  There remain, of
course, issues of older nodes and getting the word out that prevents
this switch from being flipped on tomorrow.



On Mon, Apr 7, 2014 at 2:49 PM, Gregory Maxwell gmaxw...@gmail.com wrote:
 On Mon, Apr 7, 2014 at 11:35 AM, Tamas Blummer ta...@bitsofproof.com wrote:
 BTW, did we already agree on the service bits for an archive node?

 I'm still very concerned that a binary archive bit will cause extreme
 load hot-spotting, and the kind of binary "use lots of resources: YES or
 NO" choice I think we're currently suffering from somewhat, but at that
 point enshrined in the protocol.

 It would be much better to extend the addr messages so that nodes can
 indicate a range or two of blocks that they're serving, so that all
 nodes can contribute fractionally according to their means. E.g. if
 you want to offer up 8 GB of distributed storage and contribute to the
 availability of the blockchain, without having to swallow the whole
 20, 30, 40 ... gigabyte pill.

 Already we need that kind of distributed storage for the most recent
 blocks to prevent extreme bandwidth load on archives, so extending it
 to arbitrary ranges is only more complicated because there is
 currently no room to signal it.
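One way a node might turn a storage budget into an advertisable range, sketched with hypothetical names and an assumed average block size (none of this is specified anywhere):

```python
import random

AVG_BLOCK_MB = 1.0  # illustrative average block size assumption

def choose_span(budget_gb: float, chain_height: int, seed: int) -> tuple[int, int]:
    """Pick a contiguous block span fitting a storage budget, placed
    pseudo-randomly so many independent nodes jointly cover the chain."""
    n = min(int(budget_gb * 1024 / AVG_BLOCK_MB), chain_height)
    start = random.Random(seed).randrange(0, chain_height - n + 1)
    return start, start + n
```

With enough nodes each picking an independent random span, coverage of every block follows from the usual coupon-collector argument, without any coordination beyond the addr-style advertisement Gregory describes.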




-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.  https://bitpay.com/
