[Bitcoin-development] Why do we need a MAX_BLOCK_SIZE at all?

2015-06-01 Thread Jim Phillips
OK, I understand at least some of the reasons that blocks have to be kept to
a certain size. I get that blocks which are too big will be hard for relays
to propagate. Miners will have more trouble uploading large blocks to the
network once they've found a hash. We need block size constraints to create
a fee economy for the miners.

But these all sound to me like issues that affect some, but not others. So
it seems to me like it ought to be a configurable setting. We've already
witnessed with last week's stress test that most miners aren't even
creating 1MB blocks but are still using the software defaults of 730k. If
there are configurable limits, why does there have to be a hard limit?
Can't miners just use the configurable limit to decide what size blocks
they can afford to and are thus willing to create? They could just as
easily use that to create a fee economy. If the miners with the most
hashpower are not willing to mine blocks larger than 1 or 2 megs, then they
are able to slow down confirmations of transactions. It may take several
blocks before a miner willing to include a particular transaction finds a
block. This would actually force miners to compete with each other and find
a block size naturally instead of having it forced on them by the protocol.
Relays would be able to participate in that process by restricting the
miners' ability to propagate large blocks. You know, like what happens in a
FREE MARKET economy, without burdensome regulation which can be manipulated
through politics? Isn't that what's really happening right now? Different
political factions with different agendas are fighting over how best to
regulate the Bitcoin protocol.
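
The configurable limit mentioned above already exists, for what it's worth:
Bitcoin Core exposes miner-side soft limits in bitcoin.conf. A sketch of the
relevant 0.10-era options (the defaults shown are from memory and should be
treated as assumptions):

    # bitcoin.conf -- miner-side soft limits (0.10-era option names;
    # defaults are approximate and should be treated as assumptions)
    blockmaxsize=750000       # largest block this node will mine, in bytes
    blockprioritysize=50000   # bytes reserved for high-priority/free txs
    blockminsize=0            # pad blocks to at least this many bytes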

I know the limit was originally put in place to prevent spamming. But that
was when we were mining with CPUs and just beginning to see the occasional
GPU which could take control over the network and maliciously spam large
blocks. But with ASIC mining now catching up to Moore's Law, that's not
really an issue anymore. No single malicious entity can really just take
over the network now without spending more money than it's worth -- and that
will only become more true with time as hashpower continues to grow. And
it's not like the hard limit really does anything anymore to prevent
spamming. If a spammer wants to create thousands or millions of
transactions, a hard limit on the block size isn't going to stop him. He'll
just fill up the mempool or UTXO database instead of someone's block
database. And block storage media is generally the cheapest storage. Blocks
could be written to tape and be just as valid as if they're stored in DRAM. Combine
that with pruning, and block storage costs are almost a non-issue for
anyone who isn't running an archival node.

And can't relay nodes just configure a limit on the size of blocks they
will relay? Sure, they'd still need to download a big block occasionally,
but that's not really that big a deal, and they're under no obligation to
propagate it. Even if it's a 2GB block, it'll get downloaded eventually.
It's only if it gets to the point where the average home connection is too
slow to keep up with the transaction and block flow that there's any real
issue there, and that would happen regardless of how big the blocks are. I
personally would much prefer to see hardware limits act as the bottleneck
rather than introduce an artificial bottleneck into the protocol that has to be
adjusted regularly. The software and protocol are TECHNICALLY capable of
scaling to handle the world's entire transaction set. The real issue with
scaling to this size is limitations on hardware, which are regulated by
Moore's Law. Why do we need arbitrary soft limits? Why can't we allow
Bitcoin to grow naturally within the ever-increasing limits of our
hardware? Is it because nobody will ever need more than 640k of RAM?

Am I missing something here? Is there some big reason that I'm overlooking
why there has to be some hard-coded limit on the block size that affects
the entire network and creates ongoing issues in the future?

--

*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*


Re: [Bitcoin-development] No Bitcoin For You

2015-05-26 Thread Jim Phillips
I think all the suggestions recommending cutting the block time down also
suggest reducing the rewards to compensate.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Tue, May 26, 2015 at 12:43 AM, gabe appleton gapplet...@gmail.com
wrote:

 Sync time wouldn't be longer compared to 20MB, it would (eventually) be
 longer under either setup.

 Also, and this is probably a silly concern, but wouldn't changing block
 time change the supply curve? If we cut the block time by a factor of two
 (or any power of two), that affects essentially nothing, but if we want to
 keep it in round numbers, we need to divide by 10, 5, or 2. I feel like
 most people would back 10 or 5, both of which change the supply curve due
 to truncation.

 Again, it's a trivial concern, but probably one that should be addressed.
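
 The truncation concern is easy to check numerically. A minimal sketch, as a
 model of the concern rather than code from the thread, assuming
 integer-satoshi subsidies and the real 50 BTC / 210,000-block schedule:
 divide both the block interval and the subsidy by k and compare total
 issuance to the original.

    COIN = 100_000_000            # satoshis per BTC
    HALVING_INTERVAL = 210_000    # blocks per halving period today

    def total_supply(k):
        subsidy = 50 * COIN // k              # per-block subsidy, truncated
        blocks_per_period = HALVING_INTERVAL * k
        total = 0
        while subsidy > 0:
            total += subsidy * blocks_per_period
            subsidy //= 2                     # integer halving drops fractions
        return total

    base = total_supply(1)
    for k in (2, 5, 10):
        print(f"k={k:2d}: supply drifts by {total_supply(k) - base:+,} satoshis")

 Dividing by two drifts least; 5 and 10 truncate more, which is the effect
 described above.
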
 On May 25, 2015 11:52 PM, Jim Phillips j...@ergophobia.org wrote:

 Incidentally, even once we have the Internet of Things brought on by
 21, Inc. or whoever beats them to it, I would expect the average home to
 have only a single full node hub receiving the blockchain and
 broadcasting transactions created by all the minor SPV connected devices
 running within the house. The in-home full node would be peered with high
 bandwidth full-node relays running at the ISP or in the cloud. There are
 more than enough ISPs and cloud compute providers in the world such that
 there should be no concern at all about centralization of relays. Full
 nodes could some day become as ubiquitous on the Internet as authoritative
 DNS servers. And just like DNS servers, if you don't trust the nodes your
 ISP runs, or they're too slow or censor transactions, there's nothing
 preventing you from peering with nodes hosted by the Googles or OpenDNSs
 out there, or running your own if you're really paranoid and have a few
 extra bucks for a VPS.

 --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts
 http://www.linkedin.com/in/ergophobe

 *Don't bunt. Aim out of the ball park. Aim for the company of
 immortals. -- David Ogilvy*

  *This message was created with 100% recycled electrons. Please think
 twice before printing.*

 On Mon, May 25, 2015 at 10:23 PM, Jim Phillips j...@ergophobia.org
 wrote:

 I don't see how the fact that my 2Mbps connection causes me to not be a
 very good relay has any bearing on whether or not the network as a whole
 would be negatively impacted by a 20MB block. My inability to rapidly
 propagate blocks doesn't really harm the network. It's only if MOST relays
 are as slow as mine that it creates an issue. I'm one node in thousands
 (potentially tens or hundreds of thousands if/when Bitcoin goes
 mainstream). And I'm an individual. There's no reason at all for me to run
 a full node from my home, except to have my own trusted and validated copy
 of the blockchain on a computer I control directly. I don't need to act as
 a relay for that and as long as I can download blocks faster than they are
 created I'm fine. Also, I can easily afford a VPS server or several to run
 full nodes as relays if I am feeling altruistic. It's actually cheaper for
 me to lease a VPS than to keep my own home PC on 24/7, which is why I have
 2 of them.

 And as a business, the cost of a server and bandwidth to run a full node
 is a drop in the bucket. I'm involved in several projects where we have
 full nodes running on leased servers with multiple 1Gbps connections. It's
 an almost zero cost. Those nodes could handle 20MB blocks today without
 thinking about it, and I'm sure our nodes are just a few amongst thousands
 just like them. I'm not at all concerned about the network being too
 centralized.

 What concerns me is the fact that we are using edge cases like my home
 PC as a lame excuse to debate expanding the capacity of the network.

 --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts
 http://www.linkedin.com/in/ergophobe

 *Don't bunt. Aim out of the ball park. Aim for the company of
 immortals. -- David Ogilvy*

  *This message was created with 100% recycled electrons. Please think
 twice before printing.*

 On Mon, May 25, 2015 at 10:02 PM, Thy Shizzle thyshiz...@outlook.com
 wrote:

  Indeed Jim, your internet connection makes a good reason why I don't
 like 20MB blocks (right now). It would take you well over a minute to
 download the block before you could even relay it on -- so much slowdown in
 propagation! Yes, I do see how decreasing the time to create blocks is a bit
 of a band-aid fix, and to use the term I've seen mentioned here, "kicking
 the can down the road", I agree that this is doing this. However, as you
 say, bandwidth is our biggest enemy right now, and so hopefully by the time
 we exceed the capacity gained by the decrease

Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Jim Phillips
On Mon, May 25, 2015 at 1:36 PM, Mike Hearn m...@plan99.net wrote:

This meme about datacenter-sized nodes has to die. The Bitcoin wiki is down
 right now, but I showed years ago that you could keep up with VISA on a
 single well specced server with today's technology. Only people living in a
 dreamworld think that Bitcoin might actually have to match that level of
 transaction demand with today's hardware. As noted previously, too many
 users is simply not a problem Bitcoin has -- and may never have!


... And will certainly NEVER have if we can't solve the capacity problem
SOON.

In a former life, I was a capacity planner for Bank of America's mid-range
server group. We had one hard and fast rule. When you are typically
exceeding 75% of capacity on a given metric, it's time to expand capacity.
Period. You don't do silly things like adjusting the business model to
disincentivize use. Unless there's some flaw in the system and it's leaking
resources, if usage has increased to the point where you are at or near the
limits of capacity, you expand capacity. It's as simple as that, and I've
found that same rule fits quite well in a number of systems.

In Bitcoin, we're not leaking resources. There's no flaw. The system is
performing as intended. Usage is increasing because it works so well, and
there is huge potential for future growth as we identify more uses and
attract more users. There might be a few technical things we can do to
reduce consumption, but the metric we're concerned with right now is how
many transactions we can fit in a block. We've broken through the 75%
marker and are regularly bumping up against the 100% limit.

It is time to stop debating this and take action to expand capacity. The
only questions that should remain are how much capacity do we add, and how
soon can we do it. Given that most existing computer systems and networks
can easily handle 20MB blocks every 10 minutes, and given that that will
increase capacity 20-fold, I can't think of a single reason why we can't go
to 20MB as soon as humanly possible. And in a few years, when the average
block size is over 15MB, we bump it up again to as high as we can go then
without pushing typical computers or networks beyond their capacity. We can
worry about ways to slow down growth without affecting the usefulness of
Bitcoin as we get closer to the hard technical limits on our capacity.

And you know what else? If miners need higher fees to accommodate the costs
of bigger blocks, they can configure their nodes to only mine transactions
with higher fees. Let the miners decide how to charge enough to pay for
their costs. We don't need to cripple the network just for them.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*


Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Jim Phillips
Frankly I'm good with either way. I'm definitely in favor of faster
confirmation times.

The important thing is that we need to increase the amount of transactions
that get into blocks over a given time frame to a point that is in line
with what current technology can handle. We can handle WAY more than we are
doing right now. The Bitcoin network is not currently Disk, CPU, or RAM
bound.. Not even close. The metric we're closest to being restricted by
would be Network bandwidth. I live in a developing country. 2Mbps is a
typical broadband speed here (although 5Mbps and 10Mbps connections are
affordable). That equates to about 17MB per minute, or 170x more capacity
than what I need to receive a full copy of the blockchain if I only talk to
one peer. If I relay to say 10 peers, I can still handle 17x larger block
sizes on a slow 2Mbps connection.
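
That arithmetic is easy to verify. A back-of-the-envelope sketch (assuming a
flat, fully utilized 2Mbps link, 1MB blocks every 10 minutes, and upload
volume scaling linearly with peer count):

    link_mbps = 2
    mb_per_min = link_mbps / 8 * 60          # = 15 MB per minute
    need_mb_per_min = 1 / 10                 # one 1 MB block per 10 minutes
    headroom = mb_per_min / need_mb_per_min  # = 150x
    print(f"{mb_per_min:.0f} MB/min available vs {need_mb_per_min} MB/min "
          f"needed: {headroom:.0f}x headroom, ~{headroom / 10:.0f}x with 10 peers")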

Also, even if we reduce the difficulty so that we're doing 1MB blocks every
minute, that's still only 10MB every 10 minutes. Eventually we're going to
have to increase that, and we can only reduce the confirmation period so
much. I think someone once said 30 seconds or so is about the shortest
period you can practically achieve.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 9:30 PM, Thy Shizzle thyshiz...@outlook.com wrote:

  Nah, don't make blocks 20MB; then you are slowing down block propagation
 and blowing out confirmation times as a result. Just decrease the time it
 takes to make a 1MB block; then you still see the same propagation times
 as today and just increase the transaction throughput.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: 26/05/2015 12:27 PM
 To: Mike Hearn m...@plan99.net
 Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Subject: Re: [Bitcoin-development] No Bitcoin For You


 On Mon, May 25, 2015 at 1:36 PM, Mike Hearn m...@plan99.net wrote:

   This meme about datacenter-sized nodes has to die. The Bitcoin wiki is
 down right now, but I showed years ago that you could keep up with VISA on
 a single well specced server with today's technology. Only people living in
 a dreamworld think that Bitcoin might actually have to match that level of
 transaction demand with today's hardware. As noted previously, too many
  users is simply not a problem Bitcoin has -- and may never have!


  ... And will certainly NEVER have if we can't solve the capacity problem
 SOON.

  In a former life, I was a capacity planner for Bank of America's
 mid-range server group. We had one hard and fast rule. When you are
 typically exceeding 75% of capacity on a given metric, it's time to expand
 capacity. Period. You don't do silly things like adjusting the business
 model to disincentivize use. Unless there's some flaw in the system and
 it's leaking resources, if usage has increased to the point where you are
 at or near the limits of capacity, you expand capacity. It's as simple as
 that, and I've found that same rule fits quite well in a number of systems.

  In Bitcoin, we're not leaking resources. There's no flaw. The system is
 performing as intended. Usage is increasing because it works so well, and
 there is huge potential for future growth as we identify more uses and
 attract more users. There might be a few technical things we can do to
 reduce consumption, but the metric we're concerned with right now is how
 many transactions we can fit in a block. We've broken through the 75%
 marker and are regularly bumping up against the 100% limit.

  It is time to stop debating this and take action to expand capacity. The
 only questions that should remain are how much capacity do we add, and how
 soon can we do it. Given that most existing computer systems and networks
 can easily handle 20MB blocks every 10 minutes, and given that that will
 increase capacity 20-fold, I can't think of a single reason why we can't go
 to 20MB as soon as humanly possible. And in a few years, when the average
 block size is over 15MB, we bump it up again to as high as we can go then
 without pushing typical computers or networks beyond their capacity. We can
 worry about ways to slow down growth without affecting the usefulness of
 Bitcoin as we get closer to the hard technical limits on our capacity.

  And you know what else? If miners need higher fees to accommodate the
 costs of bigger blocks, they can configure their nodes to only mine
 transactions with higher fees. Let the miners decide how to charge enough
 to pay for their costs. We don't need to cripple the network just for them.

  --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts

 *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
 -- David Ogilvy

Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Jim Phillips
I don't see how the fact that my 2Mbps connection causes me to not be a
very good relay has any bearing on whether or not the network as a whole
would be negatively impacted by a 20MB block. My inability to rapidly
propagate blocks doesn't really harm the network. It's only if MOST relays
are as slow as mine that it creates an issue. I'm one node in thousands
(potentially tens or hundreds of thousands if/when Bitcoin goes
mainstream). And I'm an individual. There's no reason at all for me to run
a full node from my home, except to have my own trusted and validated copy
of the blockchain on a computer I control directly. I don't need to act as
a relay for that and as long as I can download blocks faster than they are
created I'm fine. Also, I can easily afford a VPS server or several to run
full nodes as relays if I am feeling altruistic. It's actually cheaper for
me to lease a VPS than to keep my own home PC on 24/7, which is why I have
2 of them.

And as a business, the cost of a server and bandwidth to run a full node is
a drop in the bucket. I'm involved in several projects where we have full
nodes running on leased servers with multiple 1Gbps connections. It's an
almost zero cost. Those nodes could handle 20MB blocks today without
thinking about it, and I'm sure our nodes are just a few amongst thousands
just like them. I'm not at all concerned about the network being too
centralized.

What concerns me is the fact that we are using edge cases like my home PC
as a lame excuse to debate expanding the capacity of the network.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 10:02 PM, Thy Shizzle thyshiz...@outlook.com
wrote:

  Indeed Jim, your internet connection makes a good reason why I don't
 like 20MB blocks (right now). It would take you well over a minute to
 download the block before you could even relay it on -- so much slowdown in
 propagation! Yes, I do see how decreasing the time to create blocks is a bit
 of a band-aid fix, and to use the term I've seen mentioned here, "kicking
 the can down the road", I agree that this is doing this. However, as you
 say, bandwidth is our biggest enemy right now, and so hopefully by the time
 we exceed the capacity gained by the decrease in block time, we can then
 look to bump up block size, because hopefully 20Mbps connections will be
 baseline by then, etc.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: 26/05/2015 12:53 PM
 To: Thy Shizzle thyshiz...@outlook.com
 Cc: Mike Hearn m...@plan99.net; Bitcoin Dev
 bitcoin-development@lists.sourceforge.net

 Subject: Re: [Bitcoin-development] No Bitcoin For You

  Frankly I'm good with either way. I'm definitely in favor of faster
 confirmation times.

  The important thing is that we need to increase the amount of
 transactions that get into blocks over a given time frame to a point that
 is in line with what current technology can handle. We can handle WAY more
 than we are doing right now. The Bitcoin network is not currently disk,
 CPU, or RAM bound. Not even close. The metric we're closest to being
 restricted by would be network bandwidth. I live in a developing country.
 2Mbps is a typical broadband speed here (although 5Mbps and 10Mbps
 connections are affordable). That equates to about 15MB per minute, or 150x
 more capacity than what I need to receive a full copy of the blockchain if
 I only talk to one peer. If I relay to, say, 10 peers, I can still handle
 15x larger block sizes on a slow 2Mbps connection.

  Also, even if we reduce the difficulty so that we're doing 1MB blocks
 every minute, that's still only 10MB every 10 minutes. Eventually we're
 going to have to increase that, and we can only reduce the confirmation
 period so much. I think someone once said 30 seconds or so is about the
 shortest period you can practically achieve.

  --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts
 http://www.linkedin.com/in/ergophobe

 *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
 -- David Ogilvy *

   *This message was created with 100% recycled electrons. Please think
 twice before printing.*

 On Mon, May 25, 2015 at 9:30 PM, Thy Shizzle thyshiz...@outlook.com
 wrote:

  Nah, don't make blocks 20MB; then you are slowing down block propagation
 and blowing out confirmation times as a result. Just decrease the time it
 takes to make a 1MB block; then you still see the same propagation times
 as today and just increase the transaction throughput.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: 26/05/2015 12:27 PM
 To: Mike Hearn m...@plan99.net
 Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Subject: Re

Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Jim Phillips
Incidentally, even once we have the Internet of Things brought on by 21,
Inc. or whoever beats them to it, I would expect the average home to have
only a single full node hub receiving the blockchain and broadcasting
transactions created by all the minor SPV connected devices running within
the house. The in-home full node would be peered with high bandwidth
full-node relays running at the ISP or in the cloud. There are more than
enough ISPs and cloud compute providers in the world such that there should
be no concern at all about centralization of relays. Full nodes could some
day become as ubiquitous on the Internet as authoritative DNS servers. And
just like DNS servers, if you don't trust the nodes your ISP runs, or
they're too slow or censor transactions, there's nothing preventing you from
peering with nodes hosted by the Googles or OpenDNSs out there, or running
your own if you're really paranoid and have a few extra bucks for a VPS.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 10:23 PM, Jim Phillips j...@ergophobia.org wrote:

 I don't see how the fact that my 2Mbps connection causes me to not be a
 very good relay has any bearing on whether or not the network as a whole
 would be negatively impacted by a 20MB block. My inability to rapidly
 propagate blocks doesn't really harm the network. It's only if MOST relays
 are as slow as mine that it creates an issue. I'm one node in thousands
 (potentially tens or hundreds of thousands if/when Bitcoin goes
 mainstream). And I'm an individual. There's no reason at all for me to run
 a full node from my home, except to have my own trusted and validated copy
 of the blockchain on a computer I control directly. I don't need to act as
 a relay for that and as long as I can download blocks faster than they are
 created I'm fine. Also, I can easily afford a VPS server or several to run
 full nodes as relays if I am feeling altruistic. It's actually cheaper for
 me to lease a VPS than to keep my own home PC on 24/7, which is why I have
 2 of them.

 And as a business, the cost of a server and bandwidth to run a full node
 is a drop in the bucket. I'm involved in several projects where we have
 full nodes running on leased servers with multiple 1Gbps connections. It's
 an almost zero cost. Those nodes could handle 20MB blocks today without
 thinking about it, and I'm sure our nodes are just a few amongst thousands
 just like them. I'm not at all concerned about the network being too
 centralized.

 What concerns me is the fact that we are using edge cases like my home PC
 as a lame excuse to debate expanding the capacity of the network.

 --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts
 http://www.linkedin.com/in/ergophobe

 *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
 -- David Ogilvy*

  *This message was created with 100% recycled electrons. Please think
 twice before printing.*

 On Mon, May 25, 2015 at 10:02 PM, Thy Shizzle thyshiz...@outlook.com
 wrote:

  Indeed Jim, your internet connection makes a good reason why I don't
 like 20MB blocks (right now). It would take you well over a minute to
 download the block before you could even relay it on -- so much slowdown in
 propagation! Yes, I do see how decreasing the time to create blocks is a bit
 of a band-aid fix, and to use the term I've seen mentioned here, "kicking
 the can down the road", I agree that this is doing this. However, as you
 say, bandwidth is our biggest enemy right now, and so hopefully by the time
 we exceed the capacity gained by the decrease in block time, we can then
 look to bump up block size, because hopefully 20Mbps connections will be
 baseline by then, etc.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: 26/05/2015 12:53 PM
 To: Thy Shizzle thyshiz...@outlook.com
 Cc: Mike Hearn m...@plan99.net; Bitcoin Dev
 bitcoin-development@lists.sourceforge.net

 Subject: Re: [Bitcoin-development] No Bitcoin For You

  Frankly I'm good with either way. I'm definitely in favor of faster
 confirmation times.

  The important thing is that we need to increase the amount of
 transactions that get into blocks over a given time frame to a point that
 is in line with what current technology can handle. We can handle WAY more
 than we are doing right now. The Bitcoin network is not currently disk,
 CPU, or RAM bound. Not even close. The metric we're closest to being
 restricted by would be network bandwidth. I live in a developing country.
 2Mbps is a typical broadband speed here (although 5Mbps and 10Mbps
 connections are affordable). That equates to about 15MB per minute, or 150x
 more capacity than what I need to receive

Re: [Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Jim Phillips
Do any wallets actually do this yet?
On May 25, 2015 11:37 PM, Matt Whitlock b...@mattwhitlock.name wrote:

 This is very simple to do. Just ping the "all nodes" multicast address
 (ff02::1) and try connecting to TCP port 8333 of each node that responds.
 Shouldn't take more than a few milliseconds on any but the most densely
 populated LANs.
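
 A minimal sketch of that probe (assumptions: a Linux-style ping6 on the
 PATH, IPv6 enabled on the LAN interface, and that an open TCP port 8333
 means a listening bitcoind; lan_full_nodes is an illustrative name, not an
 existing API):

    import re, socket, subprocess

    def lan_full_nodes(iface="eth0", timeout=1.0):
        # Ping the IPv6 all-nodes multicast group and collect responders.
        out = subprocess.run(["ping6", "-c", "2", f"ff02::1%{iface}"],
                             capture_output=True, text=True).stdout
        responders = set(re.findall(r"from ([0-9a-f:]+)", out))
        nodes = []
        for addr in responders:
            try:
                # Scoped literal (addr%iface) is needed for link-local IPv6.
                with socket.create_connection((f"{addr}%{iface}", 8333),
                                              timeout=timeout):
                    nodes.append(addr)   # port open: candidate full node
            except OSError:
                pass                     # host is up, but no bitcoind here
        return nodes

    print(lan_full_nodes())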


 On Monday, 25 May 2015, at 11:06 pm, Jim Phillips wrote:
  Is there any work being done on using some kind of zero-conf service
  discovery protocol so that lightweight clients can find a full node on
 the
  same LAN to peer with rather than having to tie up WAN bandwidth?
 
  I envision a future where lightweight devices within a home use SPV over
  WiFi to connect with a home server which in turn relays the transactions
  they create out to the larger and faster relays on the Internet.
 
  In a situation where there are hundreds or thousands of small SPV devices
  in a single home (if 21, Inc. is successful) monitoring the blockchain,
  this could result in lower traffic across the slow WAN connection.  And
  yes, I realize it could potentially take a LOT of these devices before
 the
  total bandwidth is greater than downloading a full copy of the
 blockchain,
  but there's other reasons to host your own full node -- trust being one.
 
  --
  *James G. Phillips IV*
  https://plus.google.com/u/0/113107039501292625391/posts
  http://www.linkedin.com/in/ergophobe
 
  *Don't bunt. Aim out of the ball park. Aim for the company of
 immortals.
  -- David Ogilvy*
 
   *This message was created with 100% recycled electrons. Please think
 twice
  before printing.*



[Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Jim Phillips
Is there any work being done on using some kind of zero-conf service
discovery protocol so that lightweight clients can find a full node on the
same LAN to peer with rather than having to tie up WAN bandwidth?

I envision a future where lightweight devices within a home use SPV over
WiFi to connect with a home server which in turn relays the transactions
they create out to the larger and faster relays on the Internet.

In a situation where there are hundreds or thousands of small SPV devices
in a single home (if 21, Inc. is successful) monitoring the blockchain,
this could result in lower traffic across the slow WAN connection.  And
yes, I realize it could potentially take a LOT of these devices before the
total bandwidth is greater than downloading a full copy of the blockchain,
but there's other reasons to host your own full node -- trust being one.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-10 Thread Jim Phillips
I feel your pain. I've had the same thing happen to me in the past. And I
agree it's more likely to occur with my proposed scheme, but I think with HD
wallets there will still be UTXOs left unspent after most transactions,
since, for privacy's sake, the wallet looks for the smallest set of
addresses that can be linked.
On May 9, 2015 9:11 PM, Matt Whitlock b...@mattwhitlock.name wrote:

 Minimizing the number of UTXOs in a wallet is sometimes not in the best
 interests of the user. In fact, quite often I've wished for a configuration
 option like "Try to maintain _[number]_ UTXOs in the wallet." This is
 because I often want to make multiple spends from my wallet within one
 block, but spends of unconfirmed inputs are less reliable than spends of
 confirmed inputs, and some wallets (e.g., Andreas Schildbach's wallet)
 don't even allow it - you can only spend confirmed UTXOs. I can't tell you
 how aggravating it is to have to tell a friend, "Oh, oops, I can't pay you
 yet. I have to wait for the last transaction I did to confirm first." All
 the more aggravating because I know, if I have multiple UTXOs in my wallet,
 I can make multiple spends within the same block.


 On Saturday, 9 May 2015, at 12:09 pm, Jim Phillips wrote:
  Forgive me if this idea has been suggested before, but I made this
  suggestion on reddit and I got some feedback recommending I also bring it
  to this list -- so here goes.
 
  I wonder if there isn't perhaps a simpler way of dealing with UTXO
 growth.
  What if, rather than deal with the issue at the protocol level, we deal
  with it at the source of the problem -- the wallets. Right now, the
 typical
  wallet selects only the minimum number of unspent outputs when building a
  transaction. The goal is to keep the transaction size to a minimum so
 that
  the fee stays low. Consequently, lots of unspent outputs just don't get
  used, and are left lying around until some point in the future.
 
  What if we started designing wallets to consolidate unspent outputs? When
  selecting unspent outputs for a transaction, rather than choosing just
 the
  minimum number from a particular address, why not select them ALL? Take
 all
  of the UTXOs from a particular address or wallet, send however much needs
  to be spent to the payee, and send the rest back to the same address or a
  change address as a single output? Through this method, we should wind up
  shrinking the UTXO database over time rather than growing it with each
  transaction. Obviously, as Bitcoin gains wider adoption, the UTXO
 database
  will grow, simply because there are 7 billion people in the world, and
  eventually a good percentage of them will have one or more wallets with
  spendable bitcoin. But this idea could limit the growth at least.
 
  The vast majority of users are running one of a handful of different
 wallet
  apps: Core, Electrum; Armory; Mycelium; Breadwallet; Coinbase; Circle;
  Blockchain.info; and maybe a few others. The developers of all these
  wallets have a vested interest in the continued usefulness of Bitcoin,
 and
  so should not be opposed to changing their UTXO selection algorithms to
 one
  that reduces the UTXO database instead of growing it.
 
  From the miners' perspective, even though these types of transactions
 would
  be larger, the fee could stay low. Miners actually benefit from them in
  that it reduces the amount of storage they need to dedicate to holding
 the
  UTXO. So miners are incentivized to mine these types of transactions
 with a
  higher priority despite a low fee.
 
  Relays could also get in on the action and enforce this type of behavior
 by
  refusing to relay or deprioritizing the relay of transactions that don't
  use all of the available UTXOs from the addresses used as inputs. Relays
  are not only the ones who benefit the most from a reduction of the UTXO
  database, they're also in the best position to promote good behavior.
 
  --
  *James G. Phillips IV*
  https://plus.google.com/u/0/113107039501292625391/posts
 
  *Don't bunt. Aim out of the ball park. Aim for the company of
 immortals.
  -- David Ogilvy*
 
   *This message was created with 100% recycled electrons. Please think
 twice
  before printing.*



Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-09 Thread Jim Phillips
Makes sense.. So with that said, I'd propose the following criteria for
selecting UTXOs:

1. Select the smallest possible set of addresses that can be linked in
order to come up with enough BTC to send to the payee.
2. Given multiple possible sets, select the one that has the largest number
of UTXOs.
3. Given multiple possible sets, choose the one that contains the largest
amount of total BTC.
4. Given multiple possible sets, select the one that destroys the most
bitcoin days.
5. If there's still multiple possible sets, just choose one at random.

Once the final set of addresses has been identified, use ALL UTXOs from
that set, sending appropriate outputs to the recipient(s), a new change
address, and a mining fee.
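
In code, that policy might look like the following sketch (illustrative
only, not wallet code; the brute-force search over address subsets is for
clarity, not efficiency):

    from itertools import combinations
    import random

    # addr_utxos maps address -> [(value_satoshis, age_blocks), ...]
    def select_addresses(addr_utxos, amount):
        addrs = list(addr_utxos)
        for k in range(1, len(addrs) + 1):     # 1: fewest linked addresses
            viable = [c for c in combinations(addrs, k)
                      if sum(v for a in c for v, _ in addr_utxos[a]) >= amount]
            if not viable:
                continue
            def rank(c):
                utxos = [u for a in c for u in addr_utxos[a]]
                return (len(utxos),                        # 2: most UTXOs
                        sum(v for v, _ in utxos),          # 3: most total BTC
                        sum(v * age for v, age in utxos))  # 4: most coin-days
            best = max(rank(c) for c in viable)
            return random.choice([c for c in viable if rank(c) == best])  # 5
        raise ValueError("insufficient funds")

The wallet would then spend every UTXO belonging to the returned addresses,
splitting the total between the payee, a change address, and the fee.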

Miners should be cognisant of, and reward, the fact that the user is making
an effort to consolidate UTXOs. They can easily spot these transactions by
looking at whether all possible UTXOs from each input address have been
used. Since most miners use Bitcoin Core and its defaults, this test can
be built into Bitcoin Core's logic for determining which transactions to
include when mining a block.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Sat, May 9, 2015 at 3:38 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:

 Miners do not care about the age of a UTXO entry, apart from two
 exceptions. It is also economically irrelevant.
 * There is a free transaction policy, which sets a small portion of block
 space aside for transactions which do not pay sufficient fee. This is
 mostly an altruistic way of encouraging Bitcoin adoption. As a DoS
 prevention mechanism, there is a requirement that these free transactions
 are of sufficient priority (computed as BTC-days-destroyed per byte),
 essentially requiring these transactions to consume another scarce
 resource, even if not money.
 * Coinbase transaction outputs can, as a consensus rule, only be spent
 after 100 confirmations. This is to prevent random reorganisations from
 invalidating transactions that spend young coinbase transactions (which
 can't move to the new chain). In addition, wallets also select more
 confirmed outputs first to consume, for the same reason.
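
 For concreteness, the priority metric works roughly like this (the
 constants are recalled from 2015-era Bitcoin Core and should be treated as
 assumptions):

    COIN = 100_000_000

    def priority(inputs, tx_size_bytes):
        # inputs: [(value_satoshis, confirmations), ...]
        # priority = sum(value * age in blocks) / transaction size
        return sum(v * conf for v, conf in inputs) / tx_size_bytes

    # Core's "free" threshold: 1 BTC aged one day (144 blocks) in a
    # 250-byte transaction, i.e. COIN * 144 / 250 = 57,600,000.
    FREE_THRESHOLD = COIN * 144 / 250

    tx_inputs = [(50_000_000, 300)]   # one 0.5 BTC input, 300 confirmations
    p = priority(tx_inputs, 226)      # a typical one-input, two-output size
    print(p > FREE_THRESHOLD)         # True: old/large inputs ride for free
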
 On May 9, 2015 1:20 PM, Raystonn rayst...@hotmail.com wrote:

 That policy is included in Bitcoin Core.  Miners use it because it is the
 default.  The policy was likely intended to help real transactions get
 through in the face of spam.  But it favors those with more bitcoin, as the
 priority is determined by amount spent multiplied by age of UTXOs.  At the
 very least the amount spent should be removed as a factor, or fees are
 unlikely to ever be paid by those who can afford them.  We can reassess the
 role age plays later.  One change at a time is better.
  On 9 May 2015 12:52 pm, Jim Phillips j...@ergophobia.org wrote:

 On Sat, May 9, 2015 at 2:43 PM, Raystonn rayst...@hotmail.com wrote:

 How about this as a happy medium default policy: Rather than select UTXOs
 based solely on age and limiting the size of the transaction, we select as
 many UTXOs as possible from as few addresses as possible, prioritizing
 which addresses to use based on the number of UTXOs it contains (more being
 preferable) and how old those UTXOs are (in order to reduce the fee)?

 If selecting older UTXOs gives higher priority for a lesser (or at least
 not greater) fee, that is an incentive for a rational user to use the older
 UTXOs.  Such policy needs to be defended or removed.  It doesn't support
 privacy or a reduction in UTXOs.

 Before starting this thread, I had completely forgotten that age was even
 a factor in determining which UTXOs to use. Frankly, I can't think of any
 reason why miners care how old a particular UTXO is when determining what
 fees to charge. I'm sure there is one, I just don't know what it is. I just
 tossed it in there as an homage to Andreas, who pointed out to me that it was
 still part of the selection criteria.




Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-09 Thread Jim Phillips
On Sat, May 9, 2015 at 1:45 PM, Peter Todd p...@petertodd.org wrote:

 On Sat, May 09, 2015 at 12:09:32PM -0500, Jim Phillips wrote:
  The vast majority of users are running one of a handful of different
 wallet
  apps: Core, Electrum; Armory; Mycelium; Breadwallet; Coinbase; Circle;
  Blockchain.info; and maybe a few others. The developers of all these
  wallets have a vested interest in the continued usefulness of Bitcoin,
 and
  so should not be opposed to changing their UTXO selection algorithms to
 one
  that reduces the UTXO database instead of growing it.

 You can't assume that UTXO growth will be driven by wallets at all; the
 UTXO set's global consensus functionality is incredibly useful and will
 certainly be used by all manner of applications, many having nothing to
 do with Bitcoin.


You're correct on this point. Future UTXO growth will be coming from all
directions. But I'm a believer in the idea that whatever can be done should
be done. If we get Bitcoin devs into the mindset now that UTXOs are
expensive to those that have to store them, and that they should be good
netizens and do what they can to limit them, then hopefully that ideal
will be passed down to future developers. I don't believe consolidating
UTXOs in the wallet is the only solution. I just think it is a fairly easy
one to implement, and can only help keep the problem from getting worse in
the future.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-09 Thread Jim Phillips
On Sat, May 9, 2015 at 2:00 PM, Andreas Schildbach andr...@schildbach.de
wrote:

 Actually your assumption is wrong. Bitcoin Wallet (and I think most, if
 not all, other bitcoinj based wallets) picks UTXO by age, in order to
 maximize priority. So it keeps the number of UTXOs low, though not as
 low as if it would always pick *all* UTXOs.

Is it not fair to say, though, that UTXO database growth is not considered
when selecting the UTXOs to use? And that transaction size is a priority,
if not the top priority?


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-09 Thread Jim Phillips
On Sat, May 9, 2015 at 2:06 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:

 It's a very complex trade-off, which is hard to optimize for all use
 cases. Using more UTXOs requires larger transactions, and thus more fees in
 general.

Unless the miner determines that the reduction in UTXO storage requirements
is worth the lower fee. There's no protocol-level enforcement of a fee as
far as I understand it. It's enforced by the miners and their willingness
to include a transaction in a block.

 In addition, it results in more linkage between coins/addresses used, so
 lower privacy.

Not if you only select all the UTXOs from a single address. A wallet that
is geared more towards privacy minded individuals may want to reduce the
amount of address linkage, but a wallet geared towards the general masses
probably won't have to worry so much about that.

 The only way you can guarantee an economical reason to keep the UTXO set
 small is by actually having a consensus rule that punishes increasing its
 size.

There's an economic reason right now to keep the UTXO set small. The
smaller it is, the easier it is for the individual to run a full node. The
easier it is to run a full node, the faster Bitcoin will spread to the
masses. The faster it spreads to the masses, the more valuable it becomes.


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-09 Thread Jim Phillips
On Sat, May 9, 2015 at 2:12 PM, Patrick Mccorry (PGR) 
patrick.mcco...@newcastle.ac.uk wrote:

   Not necessarily. If you want to ensure privacy, you could limit the
 selection of UTXOs to a single address, and even go so far as to send
 change back to that same address. This wouldn't be as effective as
 combining the UTXOs from multiple addresses, but it would help. The key is
 to do everything that can be done when building a transaction to ensure
 that as many inputs as possible are consolidated into as few outputs as
 possible.


  I would agree if you have multiple utxo for a single address then it
 makes sense since there is no privacy loss. However sending the change back
 to the same address would damage privacy (Hive does this) as it is then
 obvious from looking at the transaction which output is change and which
 output is sending funds.


I tend to agree with you here. But the change output could just as easily
be sent to a new change address.

  Also not everyone is concerned with their own privacy, and I'm not aware
 of any HD-wallet implementations that won't already combine inputs from
 multiple addresses within that wallet without user input.


  For people who do not care about privacy, it would work fine. But
 adding it into the wallet as default behaviour would deter those who do
 care about privacy - and making it a customisable option just adds
 complexity for the users. Wallets do need to combine UTXOs at times to
 spend bitcoins, which is how people can be tracked today; using the
 minimum set of UTXOs tries to reduce that risk.

 Different wallets are targeted at different demographics. Some are geared
towards more mainstream users (for whom the privacy issue is less of a
concern) and some (such as DarkWallet) are geared more towards privacy
advocates. These wallets may choose to set their defaults at opposite ends
of the spectrum as to how they select and link addresses and
UTXOs, but they can all improve on their current algorithms and promote
some degree of consolidation.

   Additionally, large wallets that have lots of addresses owned by
 multiple users like exchanges, blockchain.info, and Coinbase can
 consolidate UTXOs very effectively when building transactions


  That's true - I'm not sure how they would feel about it though. I
 imagine they probably already do, to minimise key management.

 That's what these discussions are for. Hopefully this thread will be seen
by developers of these wallets and give them something to consider.




Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-09 Thread Jim Phillips
On Sat, May 9, 2015 at 2:25 PM, Raystonn rayst...@hotmail.com wrote:

 Lack of privacy is viral.  We shouldn't encourage policy in most wallets
 that discourages privacy.  It adversely affects privacy across the entire
 network.

How about this as a happy medium default policy: Rather than select UTXOs
based solely on age and limiting the size of the transaction, we select as
many UTXOs as possible from as few addresses as possible, prioritizing
which addresses to use based on the number of UTXOs it contains (more being
preferable) and how old those UTXOs are (in order to reduce the fee)?


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-09 Thread Jim Phillips
On Sat, May 9, 2015 at 2:43 PM, Raystonn rayst...@hotmail.com wrote:

 How about this as a happy medium default policy: Rather than select UTXOs
 based solely on age and limiting the size of the transaction, we select as
 many UTXOs as possible from as few addresses as possible, prioritizing
 which addresses to use based on the number of UTXOs it contains (more being
 preferable) and how old those UTXOs are (in order to reduce the fee)?

 If selecting older UTXOs gives higher priority for a lesser (or at least
 not greater) fee, that is an incentive for a rational user to use the older
 UTXOs.  Such policy needs to be defended or removed.  It doesn't support
 privacy or a reduction in UTXOs.

Before starting this thread, I had completely forgotten that age was even a
factor in determining which UTXOs to use. Frankly, I can't think of any
reason why miners care how old a particular UTXO is when determining what
fees to charge. I'm sure there is one, I just don't know what it is. I just
tossed it in there as an homage to Andreas, who pointed out to me that it was
still part of the selection criteria.