Indeed -- you must reinvent TCP over UDP, ultimately, to handle blocks
and large TXs.
On Tue, May 20, 2014 at 4:09 PM, Andy Alness wrote:
> Awesome! I'm assuming this is it:
> https://bitcointalk.org/index.php?topic=156769.0
>
> It would be interesting (at least to me) to take this a step further
Awesome! I'm assuming this is it:
https://bitcointalk.org/index.php?topic=156769.0
It would be interesting (at least to me) to take this a step further
and offer UDP as a full TCP replacement capable of STUN-assisted NAT
traversal and possibly swarmed blockchain syncs. It would require open
TCP no
> >
> > In my opinion, the number of full nodes doesn't matter (as long as
> > it's enough to satisfy demand by other nodes).
> >
>
> Correct. Still, a high number of nodes has a few other benefits:
>
> 1) The more nodes there are, the cheaper it should be to run each one,
> given that the bandwidth
Yes, i spec'd out the UDP traversal of the P2P protocol. It seems
reasonable especially for "inv" messages.
On Tue, May 20, 2014 at 2:46 PM, Andy Alness wrote:
> Has there ever been serious discussion on extending the protocol to
> support UDP transport? That would allow for NAT traversal and fo
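An "inv" announcement is small enough to fit in a single UDP datagram, which is what makes it the natural first candidate for UDP traversal. Below is a minimal Python sketch of the framing; the message layout (magic, NUL-padded command, length, double-SHA256 checksum, payload) follows the Bitcoin wire protocol, while sending it over UDP rather than TCP is the hypothetical part.

```python
import hashlib
import struct

MAGIC = bytes.fromhex("f9beb4d9")  # Bitcoin mainnet network magic

def checksum(payload):
    """First 4 bytes of double-SHA256, as the Bitcoin wire protocol requires."""
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]

def varint(n):
    """Bitcoin variable-length integer; counts below 0xfd fit in one byte."""
    if n < 0xfd:
        return bytes([n])
    raise NotImplementedError("larger counts omitted from this sketch")

def build_inv(tx_hashes):
    """Frame an 'inv' message announcing transactions (type 1 = MSG_TX)."""
    payload = varint(len(tx_hashes))
    for h in tx_hashes:
        payload += struct.pack("<I", 1) + h  # little-endian type, then 32-byte hash
    header = (MAGIC
              + b"inv".ljust(12, b"\x00")        # command, NUL-padded to 12 bytes
              + struct.pack("<I", len(payload))  # payload length
              + checksum(payload))
    return header + payload

# Announcing one tx costs 24 (header) + 1 (count) + 36 (inv vector) = 61 bytes,
# comfortably inside a single UDP datagram.
msg = build_inv([b"\x00" * 32])
```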
Has there ever been serious discussion on extending the protocol to
support UDP transport? That would allow for NAT traversal and for many
more people to run effective nodes. I'm also curious if it could be
made to improve block propagation time.
On Tue, May 20, 2014 at 7:52 AM, Gmail wrote:
> Unlik
Unlikely. I doubt any significant portion of miners in China will continue to
mine on a China-specific chain, since it will certainly be outmined by
non-Chinese miners and will be orphaned eventually.
More likely, mining interests in China will make special arrangements to
circumvent t
On Tue, May 20, 2014 at 10:15:44AM +0200, bitcoingr...@gmx.com wrote:
>Recently China has updated its firewall blocking bitcoin sites and pools.
>Whether this is simple blacklist or more sophisticated packet targeting is
>uncertain, however this update did specifically target VPN handshak
> *To:* "Mike Hearn"
> *Cc:* "Bitcoin Dev"
> *Subject:* Re: [Bitcoin-development] Why are we bleeding nodes?
> For what it's worth, the number of nodes rose dramatically during the
> China bullrun (I recall 45k in China alone) and dropped as dramatically as
> the price after the
On Wed, Apr 9, 2014 at 12:38 PM, Wendell wrote:
> On that note, I think we have every possibility to make desktop and mobile
> wallets mind-numbingly simple -- and perhaps even do one better. Is this
> now a community priority? If so, I would really appreciate some additional
> contributors to Hive!
On that note, I think we have every possibility to make desktop and mobile
wallets mind-numbingly simple -- and perhaps even do one better. Is this now a
community priority? If so, I would really appreciate some additional
contributors to Hive!
https://github.com/hivewallet/hive-osx
https://git
My node (based in Dallas, TX) has about 240 connections and is using a
little under 4 Mbps in bandwidth right now.
According to the hosting provider I'm at 11.85 Mbps for this week, using 95th
percentile billing. The report from my provider includes my other servers
though.
On Mon, Apr 7, 2014 at 1
On 07/04/14 15:50, Gregory Maxwell wrote:
> Bitcoin.org recommends people away from running Bitcoin-QT now, so I'm
> not sure that we should generally find that trend surprising.
What options are out there for people who care about the 20GB blockchain?
Depending on a third-party server is not an option.
Specialization of nodes is ongoing, most prominently with SPV wallets and mining.
I see a need in my own business for software that is able to serve
multiple wallets and is multi-tiered,
so the world-facing P2P node can sit in a DMZ. I target that with a hybrid model
that is SPV plus mempo
>
> Multi-sig requires infrastructure. It isn't a magic wand that we can
> wave to make everyone secure. The protocols and techniques necessary
> don't exist yet, and apparently no one has much of an incentive to
> create them.
It is starting to happen. If you're OK with using a specific web wallet
Isn't that just conceding that p2p protocol A is better than p2p protocol B?
Can't Bitcoin Core's block fetching be improved to achieve performance similar
to a torrent + import?
Currently it's hard to go wide on data fetching because headers-first is still
pretty 'beefy'. The headers can be compre
Being Mr. Torrent, I've held open the "80% serious" suggestion to
simply refuse to serve blocks older than X (3 months?).
That forces download by other means (presumably torrent).
I do not feel it is productive for any nodes on the network to waste
time/bandwidth/etc. serving static, ancient data
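The "80% serious" policy above could be as simple as a timestamp check before answering a getdata request. A sketch, where the 3-month cutoff is the only number taken from the text and everything else is illustrative:

```python
import time

CUTOFF_SECONDS = 90 * 24 * 3600  # "X = 3 months", per the suggestion above

def should_serve(block_timestamp, now=None):
    """Serving policy: answer getdata only for blocks newer than the cutoff;
    older blocks must be fetched by other means (presumably torrent)."""
    now = int(time.time()) if now is None else now
    return now - block_timestamp <= CUTOFF_SECONDS
```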
Multi-sig requires infrastructure. It isn't a magic wand that we can
wave to make everyone secure. The protocols and techniques necessary
don't exist yet, and apparently no one has much of an incentive to
create them.
I mean no offense, and I don't mean to pick on you. Your post stuck out
w
From: ta...@bitsofproof.com
Date: Mon, 7 Apr 2014 21:20:31 +0200
To: gmaxw...@gmail.com
CC: bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] Why are we bleeding nodes?
Once headers are loaded first there is no reason for sequential loading.
Val
On Mon, Apr 7, 2014 at 2:48 PM, Tier Nolan wrote:
>> Blocks can be loaded in random order once you have their order given by
>> the headers.
>> Computing the UTXO however will force you to at least temporarily store
>> the blocks unless you have plenty of RAM.
> You only need to store the UTXO set
health of the network.
This project is a bitcoin learning exercise for me, so I can only hope I
don't have any critical design flaws in there. :)
--
From: ta...@bitsofproof.com
Date: Mon, 7 Apr 2014 21:20:31 +0200
To: gmaxw...@gmail.com
CC: bitcoin-developme
Or have blocks distributed through pruned nodes as a DHT.
2014-04-07 20:13 GMT+01:00 Mark Friedenbach:
>
>
> On 04/07/2014 12:20 PM, Tamas Blummer wrote:
>> Validation has to be sequential, but that step can be deferred until the
>> blocks before a point are loaded and continuous.
>
> And how do y
I understand the theoretical benefits of multi-sig. But if you want
to make this mind-numbingly simple, do it on the *existing* single-sig.
But why in the world do we not have a *business* that offers bitcoin
wallet insurance? The bitcoin world (and this list) ran around blaming
MtGox and users fo
You have the trunk defined by the headers. Once a range from genesis to block n
is fully downloaded,
you may validate up to block n. Furthermore, after validation you can prune
transactions spent until block n.
You would approach the highest block with validation and stop pruning say 100
blocks b
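The trunk/validate/prune flow described above can be sketched in a few lines of Python. The 100-block safety tail is taken from the text; the data structures are illustrative:

```python
def validated_height(headers_count, downloaded):
    """Highest block n such that every block from genesis (0) through n has
    been downloaded; blocks may arrive in any order, but validation only
    advances along this contiguous prefix."""
    n = -1
    while n + 1 < headers_count and (n + 1) in downloaded:
        n += 1
    return n

def prunable_heights(validated, keep_tail=100):
    """Blocks whose spent transactions may be pruned: everything validated
    except the most recent `keep_tail` blocks."""
    return range(0, max(0, validated - keep_tail + 1))
```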
On 04/07/2014 12:20 PM, Tamas Blummer wrote:
> Validation has to be sequential, but that step can be deferred until the
> blocks before a point are loaded and continuous.
And how do you find those blocks?
I have a suggestion: have nodes advertise which range of full blocks
they possess, then you
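The advertisement suggested here could be a simple (start, end) pair per peer. A hypothetical sketch of how a downloader might use it; the peer addresses and the shape of the advertisement are assumptions, not an existing protocol field:

```python
def pick_peer(peers, height):
    """Pick a peer whose advertised contiguous [start, end] range of full
    blocks covers the wanted height. Returns None if no peer qualifies."""
    for addr, (start, end) in peers.items():
        if start <= height <= end:
            return addr
    return None
```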
Once headers are loaded first there is no reason for sequential loading.
Validation has to be sequential, but that step can be deferred until the blocks
before a point are loaded and continuous.
Tamas Blummer
http://bitsofproof.com
On 07.04.2014, at 21:03, Gregory Maxwell wrote:
> On Mon, Ap
On Mon, Apr 7, 2014 at 8:03 PM, Gregory Maxwell wrote:
> A bitmap also means high overhead and-- if it's used to advertise
> non-contiguous blocks-- poor locality, since blocks are fetched
> sequentially.
>
A range seems like a great compromise. Putting it in the address is also
pretty cool.
Maybe it is not a question of the maturity of the implementation but that of
the person making presumptions of it.
I consider a fully pruned blockchain to be equivalent to the UTXO set. Blocks
that hold no more unspent transactions are reduced to a header. There is,
however, no harm if more is retained.
On 04/07/2014 12:00 PM, Tamas Blummer wrote:
> Once a single transaction is pruned in a block, the block is no longer
> eligible to be served to other nodes.
> Which transactions are pruned can be rather custom e.g. even depending
> on the wallet(s) of the node,
> therefore I guess it is more hand
On Mon, Apr 7, 2014 at 12:00 PM, Tamas Blummer wrote:
> therefore I guess it is more handy to return some bitmap of pruned/full
> blocks than ranges.
A bitmap also means high overhead and— if it's used to advertise
non-contiguous blocks— poor locality, since blocks are fetched
sequentially.
> The bottleneck is not bulk disk space, but rather IOPS.
Exactly. I stopped running a full node on both of my desktop machines
in the last month. Both systems were simply becoming very noticeably
(= unbearably) sluggish. I am also running dedicated nodes, which are
fine, but on a desktop latency
On Mon, Apr 7, 2014 at 12:00 PM, Tamas Blummer wrote:
> Once a single transaction is pruned in a block, the block is no longer
> eligible to be served to other nodes.
> Which transactions are pruned can be rather custom e.g. even depending on
> the wallet(s) of the node,
> therefore I guess it is
Once a single transaction is pruned in a block, the block is no longer eligible
to be served to other nodes.
Which transactions are pruned can be rather custom, e.g. even depending on the
wallet(s) of the node,
therefore I guess it is more handy to return some bitmap of pruned/full blocks
than r
On Mon, Apr 7, 2014 at 11:35 AM, Tamas Blummer wrote:
> BTW, did we already agree on the service bits for an archive node?
I'm still very concerned that a binary archive bit will cause extreme
load hot-spotting and the kind of binary "Use lots of resources YES or
NO" I think we're currently suffe
Headers-first loading allows the node to run SPV from the very first minutes,
and it can converge to a full node over time.
This is BTW how the newest versions of BOP can work.
Pruning, however, disqualifies the node as a source for bootstrapping another
full node.
BTW, did we already agree on the servic
>
> * Sent 456.5 GB of data
>
> At my geographic service location (Singapore), this cost about $90 last
> month for bandwidth alone.
One of the reasons I initiated the (now stalled) PayFile project was in
anticipation of this problem:
https://github.com/mikehearn/PayFile
http://www.youtube.com/watc
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 04/07/2014 05:40 PM, Mike Hearn wrote:
> The primary resources it needs are disk space and bandwidth, after
> an intensive initial day or two of building the database.
Check out the kind of hardware casual users are running these days.
The bottlen
I’m afraid this is a highly simplistic view of the costs of running a full node.
My node consumes fantastic amounts of data traffic, which is a real cost.
In the 30 days ending April 6, my node:
* Received 36.8 GB of data
* Sent 456.5 GB of data
At my geographic service location (Singapore), this c
Okay awesome. It seems like I set up a Litecoin node without knowing it
(because it was like this:
https://bitcointalk.org/index.php?topic=128122.0). I was able to bootstrap
it (https://litecoin.info/).
On Mon, Apr 7, 2014 at 12:40 PM, Mike Hearn wrote:
> It uses ~no electricity, it's not like m
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 04/07/2014 05:16 PM, Gregory Maxwell wrote:
> When I read "resource requirements of a full node are moving
> beyond" I didn't extract from that that "there are implementation
> issues that need to be improved to make it work better for low
> resourc
I rather prefer to start with SPV and upgrade to a full node, if desired.
Tamas Blummer
http://bitsofproof.com
On 07.04.2014, at 19:40, Mike Hearn wrote:
>
> Actually, I wonder if we should start shipping (auditable) pre-baked
> databases calculated up to the last checkpoint so people can downl
On Mon, Apr 7, 2014 at 10:40 AM, Mike Hearn wrote:
> Actually, I wonder
The actual validation isn't really the problem today. The slowness of
the IBD is almost exclusively the lack of parallel fetching and the
existence of slow peers. And making the signature gate adaptive (and
deploying the 6x
It uses ~no electricity, it's not like mining.
The primary resources it needs are disk space and bandwidth, after an
intensive initial day or two of building the database.
Actually, I wonder if we should start shipping (auditable) pre-baked
databases calculated up to the last checkpoint so people
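Auditability of such a pre-baked database could come from a published digest that anyone with a full node can reproduce independently. A sketch; the chunked serialization and the published-digest workflow are assumptions, not an existing Bitcoin Core feature:

```python
import hashlib

def snapshot_digest(chunks):
    """SHA-256 over the serialized chainstate snapshot, fed in chunks."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

def verify_snapshot(chunks, published_digest):
    """Accept a downloaded snapshot only if it matches the published digest."""
    return snapshot_digest(chunks) == published_digest
```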
How difficult would it be to set up a node? Using lots of electricity at
home (if required) could be an issue, but I do have a Webfaction account.
On Mon, Apr 7, 2014 at 12:16 PM, Gregory Maxwell wrote:
> On Mon, Apr 7, 2014 at 10:01 AM, Mark Friedenbach
> wrote:
> > On 04/07/2014 09:57 AM, Gr
On 04/07/2014 09:57 AM, Gregory Maxwell wrote:
> That is an implementation issue— mostly one that arises as an indirect
> consequence of not having headers first and the parallel fetch, not a
> requirements issue.
Oh, absolutely. But the question "why are people not running full
nodes?" has to do
On Mon, Apr 7, 2014 at 10:01 AM, Mark Friedenbach wrote:
> On 04/07/2014 09:57 AM, Gregory Maxwell wrote:
>> That is an implementation issue— mostly one that arises as an indirect
>> consequence of not having headers first and the parallel fetch, not a
>> requirements issue.
>
> Oh, absolutely. Bu
For what it's worth, the number of nodes rose dramatically during the China
bullrun (I recall 45k in China alone) and dropped as dramatically as the
price after the first PBOC announcement designed to cool down bitcoin
trading in China.
On 7 April 2014 12:34, Mike Hearn wrote:
> At the start of
On Mon, Apr 7, 2014 at 9:27 AM, Mark Friedenbach wrote:
> Right now running a full-node on my home DSL connection (<1Mbps) makes
> other internet activity periodically unresponsive. I think we've already
> hit a point where resource requirements are pushing out casual users,
> although of course w
Right now running a full-node on my home DSL connection (<1Mbps) makes
other internet activity periodically unresponsive. I think we've already
hit a point where resource requirements are pushing out casual users,
although of course we can't be certain that accounts for all lost nodes.
On 04/07/20
I would point to bandwidth as the most important issue to the casual user who
runs a node at home. Few casual users have the know-how to set up QoS rules and
thus become quite annoyed when their Internet connection is discernibly slowed.
- Jameson
On 04/07/2014 11:53 AM, Gregory Maxwell wrote:
On Mon, Apr 7, 2014 at 8:45 AM, Justus Ranvier wrote:
> 1. The resource requirements of a full node are moving beyond the
> capabilities of casual users. This isn't inherently a problem - after
> all most people don't grow their own food, tailor their own clothes, or
> keep blacksmith tools handy
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 04/07/2014 11:34 AM, Mike Hearn wrote:
> At the start of February we had 10,000 bitcoin nodes. Now we have 8,500 and
> still falling:
>
>http://getaddr.bitnodes.io/dashboard/chart/?days=60
>
> I know all the reasons why people *might* stop run
On 4/7/2014 7:05 AM, Mike Hearn wrote:
> Some days I wonder if Bitcoin will be killed off by people who just
> refuse to use it properly before it ever gets a chance to shine. The
> general public doesn't distinguish between "Bitcoin users" who deposit
> with a third party and the real Bitcoin u
We need to make it so mind-numbingly simple to "run Bitcoin correctly" that
the average user doesn't find reasons to avoid doing so in the course of
normal use. Right now, Coinbase and Bitstamp are winning the user experience
battle, which technically endangers the user, and by proxy the Bitcoin
network.
Indeed, fully agreed. The only way to really make progress here is to make
the UX of being your own bank not only as good as trusting a third party,
but better.
I've been encouraged by the rise of risk analysis services, but we need to
integrate them into wallets more widely for them to have much
>
> My guess is that a large number of users have lost interest after they
> lost their money in MtGox. The 24th of February coincides with the
> "final" shutdown
Sigh. It would not be surprising if MtGox has indeed dealt the community a
critical blow in this regard. TX traffic is down since then
On Mon, Apr 7, 2014 at 6:58 AM, Jameson Lopp wrote:
> The Bitnodes project updated their counting algorithm a month or so ago. It
> used to be slower and less accurate - prior to their update, it was reporting
> in excess of 100,000 nodes.
Nah. It reported multiple metrics. The "100,000" numbe
The Bitnodes project updated their counting algorithm a month or so ago. It
used to be slower and less accurate - prior to their update, it was reporting
in excess of 100,000 nodes.
- Jameson
On 04/07/2014 09:53 AM, Gregory Maxwell wrote:
> On Mon, Apr 7, 2014 at 6:50 AM, Gregory Maxwell wrote
On Mon, Apr 7, 2014 at 6:50 AM, Gregory Maxwell wrote:
> FWIW, A few months before that we had even less than 8500 by the bitnodes
> count.
Gah, accidentally sent. I wanted to continue here that it was less
than 8500 and had been falling pretty consistently for months,
basically since the bit
On Mon, Apr 7, 2014 at 4:34 AM, Mike Hearn wrote:
> At the start of February we had 10,000 bitcoin nodes. Now we have 8,500 and
> still falling:
FWIW, A few months before that we had even less than 8500 by the bitnodes count.
Bitcoin.org recommends people away from running Bitcoin-QT now, so I'm
They're _not_ phasing out into SPV wallets from what I can tell. At
around the 24th of February, there was a sharp change in the
"current installs" graph of Bitcoin Wallet. That number used to grow at
about 20,000 per month. From that date until now, the graph has stayed
essentially flat.
My guess
On 04/07/2014 08:26 AM, Pieter Wuille wrote:
> In my opinion, the number of full nodes doesn't matter (as long as
> it's enough to satisfy demand by other nodes).
I agree, but if we don't quantify "demand" then we are practically blind. What
is the plan? To wait until SPV clients start lagging /
>
> In my opinion, the number of full nodes doesn't matter (as long as
> it's enough to satisfy demand by other nodes).
>
Correct. Still, a high number of nodes has a few other benefits:
1) The more nodes there are, the cheaper it should be to run each one,
given that the bandwidth and CPU for se
On Mon, Apr 7, 2014 at 2:19 PM, Jameson Lopp wrote:
> I'm glad to see that I'm not the only one concerned about the consistent
> dropping of nodes. Though I think that the fundamental question should be:
> how many nodes do we really need? Obviously more is better, but it's
> difficult to say h
I'm glad to see that I'm not the only one concerned about the consistent
dropping of nodes. Though I think that the fundamental question should be: how
many nodes do we really need? Obviously more is better, but it's difficult to
say how concerned we should be without more information. I posted
Phasing out of Bitcoin-Qt into SPV wallets?
2014-04-07 12:34 GMT+01:00 Mike Hearn:
> At the start of February we had 10,000 bitcoin nodes. Now we have 8,500 and
> still falling:
>
>http://getaddr.bitnodes.io/dashboard/chart/?days=60
>
> I know all the reasons why people might stop running a no