Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-25 Thread Andreas Schildbach
On 05/25/2015 10:03 PM, Matt Whitlock wrote:
 On Monday, 25 May 2015, at 8:41 pm, Mike Hearn wrote:
 some wallets (e.g., Andreas Schildbach's wallet) don't even allow it - you
 can only spend confirmed UTXOs. I can't tell you how aggravating it is to
 have to tell a friend, "Oh, oops, I can't pay you yet. I have to wait for
 the last transaction I did to confirm first." All the more aggravating
 because I know, if I have multiple UTXOs in my wallet, I can make multiple
 spends within the same block.

 Andreas' wallet hasn't done that for years. Are you repeating this from
 some very old memory or do you actually see this issue in reality?

 The only time you're forced to wait for confirmations is when you have an
 unconfirmed inbound transaction, and thus the sender is unknown.
 
 I see this behavior all the time. I am using the latest release, as far as I 
 know. Version 4.30.
 
 The same behavior occurs in the Testnet3 variant of the app. Go in there with 
 an empty wallet and receive one payment and wait for it to confirm. Then send 
 a payment and, before it confirms, try to send another one. The wallet won't 
 let you send the second payment. It'll say something like, "You need x.xx 
 more bitcoins to make this payment." But if you wait for your first payment 
 to confirm, then you'll be able to make the second payment.
 
 If it matters, I configure the app to connect only to my own trusted Bitcoin 
 node, so I only ever have one active connection at most. I notice that 
 outgoing payments never show as "Sent" until they appear in a block, 
 presumably because the app never sees the transaction come in over any 
 connection.

Yes, that's the issue. Because you're connecting only to one node, you
don't get any instant confirmations -- due to a Bitcoin protocol
limitation you can only get them from nodes you don't post the tx to.






Re: [Bitcoin-development] Scaling Bitcoin with Subchains

2015-05-25 Thread Mike Hearn
Hi Andrew,

Your belief that Bitcoin has to be constrained by the assumption that hardware
will never improve is extremist, but regardless, your concerns are easy to
assuage: there is no requirement that the block chain be stored on hard
disks. As you note yourself, the block chain is used for building/auditing
the ledger. Random access to it is not required if all you care about is
running a full node.

Luckily this makes it a great fit for tape backup. Technology that can
store 185 terabytes *per cartridge* has already been developed:

http://www.itworld.com/article/2693369/sony-develops-tape-tech-that-could-lead-to-185-tb-cartridges.html

As you could certainly share costs of a block chain archive with other
people, the cost would not be a major concern even today. And it's
virtually guaranteed that humanity will not hit a storage technology wall
in 2015.

If your computer is compromised then all bets are off. Validating the chain
on a compromised host is meaningless.


Re: [Bitcoin-development] Long-term mining incentives

2015-05-25 Thread Mike Hearn
Hi Thomas,

My problem is that this seems to lack a vision.


Are you aware of my proposal for network assurance contracts?

There is a discussion here:


https://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07552.html

But I agree with Gavin that attempting to plan for 20 years from now is
ambitious at best. Bitcoin might not even exist 20 years from now, or might
be an abandoned backwater a la USENET.


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-25 Thread Mike Hearn

 some wallets (e.g., Andreas Schildbach's wallet) don't even allow it - you
 can only spend confirmed UTXOs. I can't tell you how aggravating it is to
 have to tell a friend, "Oh, oops, I can't pay you yet. I have to wait for
 the last transaction I did to confirm first." All the more aggravating
 because I know, if I have multiple UTXOs in my wallet, I can make multiple
 spends within the same block.


Andreas' wallet hasn't done that for years. Are you repeating this from
some very old memory or do you actually see this issue in reality?

The only time you're forced to wait for confirmations is when you have an
unconfirmed inbound transaction, and thus the sender is unknown.


[Bitcoin-development] alternatives to the 20MB block limit, measure first!

2015-05-25 Thread Ron
Hello all,

With all the discussion about the Block size limit, I thought it would be 
interesting to measure, in some sense, the average Tx size. Then given a fixed 
average block period (Bp) of 10 minutes (i.e. 600 seconds), all one needs to do 
to estimate an average block size is ask the question: what average transaction 
rate (tps) do you want?

So for tps ~ 10 (Tx/sec) and an average transaction size (avgTxSz) of 612 Bytes 
(last ten blocks up to block 357998, 2:05pm EDT 5/25/2015), we have a block size 
of 612 * 10 * 600 = 3,672,000 Bytes.

Alternatively, given an avgTxSz ~612 and maxBl = 1,000,000, (maxBl / 
avgTxSz) / Bp is the actual current max tps, which is ~2.72 tps.

The avgTxSz for the 10 blocks up to block # 357999 is ~576 Bytes, so the 
current possible tps is ~2.89 and the maxBl for a tps = 10 is 3,456,000 bytes.

So I think one should state one's assumed tps and a measured or presumed 
avgTxSz before saying what a maxBl should be. So for a maxBl ~20,000,000 Bytes 
and a current avgTxSz ~600 Bytes, the tps is ~55.5, FWIW.
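
For concreteness, the arithmetic above scripts up directly (a quick sketch;
the byte figures are the measurements quoted here, not protocol constants):

    # Block size implied by a target tps, and max tps implied by a size cap.
    BP = 600  # average block period in seconds (10 minutes)

    def block_size_for_tps(tps, avg_tx_sz):
        return avg_tx_sz * tps * BP

    def max_tps(max_bl, avg_tx_sz):
        return (max_bl / avg_tx_sz) / BP

    print(block_size_for_tps(10, 612))  # 3,672,000 Bytes
    print(max_tps(1000000, 612))        # ~2.72 tps
    print(max_tps(1000000, 576))        # ~2.89 tps
    print(max_tps(20000000, 600))       # ~55.5 tps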

Ron (aka old c coder)



Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Mike Hearn

 If capacity grows, fewer individuals would be able to run full nodes.


Hardly. Nobody is currently exhausting the CPU capacity of even a normal
computer, and even if we saw a 20x increase in load overnight, that still
wouldn't even warm up most machines specced to be always on.

The reasons full nodes are unpopular to run seem to be:

1. Uncontrollable bandwidth usage from sending people the chain
2. People don't run them all the time, then don't want to wait for them to
catch up

The first can be fixed with better code (you can already easily opt out of
uploading the chain, it's just not as fine-grained as desirable), and the
second is fundamental to what full nodes do and how people work. Merchants,
who are the most important demographic we want to be using full nodes, can
just keep one running all the time. No biggie.


 Therefore miners and other full nodes would depend on
 it, which is rather critical as those nodes grow closer to data-center
 proportions.


This meme about datacenter-sized nodes has to die. The Bitcoin wiki is down
right now, but I showed years ago that you could keep up with VISA on a
single well-specced server with today's technology. Only people living in a
dreamworld think that Bitcoin might actually have to match that level of
transaction demand with today's hardware. As noted previously, too many
users is simply not a problem Bitcoin has -- and may never have!


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-25 Thread Mike Hearn
Wallets are incentivised to do a better job with defragmentation already,
as if you have lots of tiny UTXOs then your fees end up being huge when
trying to make a payment.

The reason they largely don't is just one of manpower. Nobody is working on
it.

As a wallet developer myself, one way I'd like to see this issue fixed is
by making free transactions more reliable. Then wallets can submit free
transactions to the network to consolidate UTXOs together, e.g. at night
when the user is sleeping. They would then fit into whatever space is
available in the block during periods of low demand, like on Sunday.

If we don't do this then wallets won't automatically defragment, as we'd be
unable to explain to the user why their money is slowly leaking out of
their wallet without them doing anything. Trying to explain the existing
transaction fees is hard enough already ("I thought bitcoin doesn't have
banks", etc.).

There is another way: as the fee is based on a rounded 1kb calculation, once
you go into the next fee band, adding some more outputs or making a bigger
change output becomes free for another output or two. But wallets don't
exploit this today.
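
To make the fee-band effect concrete, a toy sketch (the 10,000 satoshi/kB
rate is an assumption for round numbers; 34 bytes is the usual P2PKH output
size):

    import math

    FEE_PER_KB = 10000  # satoshis per started kB -- an assumed rate

    def fee(tx_size_bytes):
        # The fee is charged per started kB, so 1001 bytes pays for 2 kB.
        return math.ceil(tx_size_bytes / 1000) * FEE_PER_KB

    print(fee(1001))           # 20000 satoshis
    print(fee(1001 + 2 * 34))  # still 20000: two extra outputs ride free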


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-25 Thread Matt Whitlock
On Monday, 25 May 2015, at 8:41 pm, Mike Hearn wrote:
  some wallets (e.g., Andreas Schildbach's wallet) don't even allow it - you
  can only spend confirmed UTXOs. I can't tell you how aggravating it is to
  have to tell a friend, "Oh, oops, I can't pay you yet. I have to wait for
  the last transaction I did to confirm first." All the more aggravating
  because I know, if I have multiple UTXOs in my wallet, I can make multiple
  spends within the same block.
 
 Andreas' wallet hasn't done that for years. Are you repeating this from
 some very old memory or do you actually see this issue in reality?
 
 The only time you're forced to wait for confirmations is when you have an
 unconfirmed inbound transaction, and thus the sender is unknown.

I see this behavior all the time. I am using the latest release, as far as I 
know. Version 4.30.

The same behavior occurs in the Testnet3 variant of the app. Go in there with 
an empty wallet and receive one payment and wait for it to confirm. Then send a 
payment and, before it confirms, try to send another one. The wallet won't let 
you send the second payment. It'll say something like, "You need x.xx more 
bitcoins to make this payment." But if you wait for your first payment to 
confirm, then you'll be able to make the second payment.

If it matters, I configure the app to connect only to my own trusted Bitcoin 
node, so I only ever have one active connection at most. I notice that outgoing 
payments never show as "Sent" until they appear in a block, presumably because 
the app never sees the transaction come in over any connection.



Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-25 Thread Peter Todd
On Mon, May 25, 2015 at 10:29:26PM +0200, Andreas Schildbach wrote:
  I see this behavior all the time. I am using the latest release, as far as 
  I know. Version 4.30.
  
  The same behavior occurs in the Testnet3 variant of the app. Go in there 
  with an empty wallet and receive one payment and wait for it to confirm. 
  Then send a payment and, before it confirms, try to send another one. The 
  wallet won't let you send the second payment. It'll say something like, 
  "You need x.xx more bitcoins to make this payment." But if you wait for 
  your first payment to confirm, then you'll be able to make the second 
  payment.
  
  If it matters, I configure the app to connect only to my own trusted 
  Bitcoin node, so I only ever have one active connection at most. I notice 
  that outgoing payments never show as "Sent" until they appear in a block, 
  presumably because the app never sees the transaction come in over any 
  connection.
 
 Yes, that's the issue. Because you're connecting only to one node, you
 don't get any instant confirmations -- due to a Bitcoin protocol
 limitation you can only get them from nodes you don't post the tx to.

Odd, I just tried the above as well - with multiple peers connected -
and had the exact same problem.

-- 
'peter'[:-1]@petertodd.org
0e83c311f4244e4eefb54aa845abb181e46f16d126ab21e1




Re: [Bitcoin-development] Virtual Notary.

2015-05-25 Thread Mike Hearn
Very nice Emin! This could be very useful as a building block for oracle
based services. If only there were opcodes for working with X.509 ;)

I'd suggest at least documenting in the FAQ how to extract the data from
the certificate:

openssl pkcs12 -in virtual-notary-cert-stocks-16070.p12 -nodes -passin pass: \
  | openssl x509 -text | less

That's good enough to get started, but I note two issues:


   1. X.509 is kind of annoying to work with: example code in popular
   languages/frameworks to extract the statement would be useful.

   2. The stock price plugin, at least, embeds the data as text inside the
   X.509 certificate. That's also not terribly developer friendly and risks
   parsing errors undermining security schemes built on it.

   The way I'd solve this is to embed either a protocol buffer or DER
   encoded structure inside the extension, so developers can extract the
   notarised data directly, without needing to do any additional parsing.
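
On point 1, a minimal sketch of that extraction in Python (assuming the
`cryptography` package and the same password-less .p12 as the openssl command
above):

    from cryptography.hazmat.primitives.serialization import pkcs12

    with open("virtual-notary-cert-stocks-16070.p12", "rb") as f:
        key, cert, extra_certs = pkcs12.load_key_and_certificates(f.read(), None)

    print(cert.subject)
    for ext in cert.extensions:
        # The notarised statement is embedded in one of these extensions.
        print(ext.oid, ext.value)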
--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


[Bitcoin-development] Cost savings by using replace-by-fee, 30-90%

2015-05-25 Thread Peter Todd
On Tue, May 26, 2015 at 12:03:09AM +0200, Mike Hearn wrote:
 CPFP also solves it just fine.

CPFP is a significantly more expensive way of paying fees than RBF,
particularly for the use-case of defragmenting outputs, with cost
savings ranging from 30% to 90%


Case 1: CPFP vs. RBF for increasing the fee on a single tx
--

Creating and spending a P2PKH output uses 34 bytes of txout, and 148
bytes of txin, 182 bytes total.

Let's suppose I have a 1 BTC P2PKH output and I want to pay 0.1 BTC to
Alice. This results in a 1in/2out transaction t1 that's 226 bytes in size.
I forget to click on the priority fee option, so it goes out with the
minimum fee of 2.26uBTC. Whoops! I use CPFP to spend that output,
creating a new transaction t2 that's 192 bytes in size. I want to pay
1mBTC/KB for a fast confirmation, so I'm now paying 418uBTC of
transaction fees.

On the other hand, had I used RBF, my wallet would have simply
rebroadcast t1 with the change address decreased. The rules require you
to pay 2.26uBTC for the bandwidth consumed broadcasting it, plus the new
fee level, or 218uBTC of fees in total.

Cost savings: 48%


Case 2: Paying multiple recipients in succession


Suppose that after I pay Alice, I also decide to pay Bob for his hard
work demonstrating cryptographic protocols. I need to create a new
transaction t2 spending t1's change address. Normally t2 would be
another 226 bytes in size, resulting in 226uBTC additional fees.

With RBF on the other hand I can simply double-spend t1 with a
transaction paying both Alice and Bob. This new transaction is 260 bytes
in size. I have to pay 2.6uBTC additional fees to pay for the bandwidth
consumed broadcasting it, resulting in an additional 36uBTC of fees.

Cost savings: 84%


Case 3: Paying multiple recipients from a 2-of-3 multisig wallet


The above situation gets even worse with multisig. t1 in the multisig
case is 367 bytes; t2 another 367 bytes, costing an additional 367uBTC
in fees. With RBF we rewrite t1 with an additional output, resulting in
a 399 byte transaction, with just 36uBTC in additional fees.

Cost savings: 90%


Case 4: Dust defragmentation


My wallet has two transaction outputs that it wants to combine into
one for the purpose of UTXO defragmentation. It broadcasts transaction
t1 with two inputs and one output, size 340 bytes, paying zero fees.

Prior to the transaction confirming I find I need to spend those funds
for a priority transaction at the 1mBTC/KB fee level. This transaction,
t2a, has one input and two outputs, 226 bytes in size. However it needs
to pay fees for both transactions at once, resulting in a combined total
fee of 556uBTC. If this situation happens frequently, defragmenting
UTXOs is likely to cost more in additional fees than it saves.

With RBF I'd simply doublespend t1 with a 2-in-2-out transaction 374
bytes in size, paying 374uBTC. Even better, if one of the two inputs is
sufficiently large to cover my costs I can doublespend t1 with a
1-in-2-out tx just 226 bytes in size, paying 226uBTC.

Cost savings: 32% to 59%, or even infinite if defragmentation w/o RBF
  costs you more than you save
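
For reference, a sketch of the size model behind these cases (the ~10 bytes
of fixed overhead is my assumption to make the totals match; fee rates as in
Case 1):

    TXIN, TXOUT, OVERHEAD = 148, 34, 10  # P2PKH sizes used above

    def tx_size(n_in, n_out):
        return n_in * TXIN + n_out * TXOUT + OVERHEAD

    FEE = 1.0     # uBTC per byte, i.e. the 1mBTC/KB priority rate
    RELAY = 0.01  # uBTC per byte, paid again to broadcast a replacement

    t1 = tx_size(1, 2)                 # 226 bytes
    cpfp = (t1 + tx_size(1, 1)) * FEE  # 418 uBTC: child pays for parent too
    rbf = t1 * FEE + t1 * RELAY        # ~228 uBTC: replace t1 outright
    print(1 - rbf / cpfp)              # savings in the region quoted above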

-- 
'peter'[:-1]@petertodd.org
134ce6577d4122094479f548b997baf84367eaf0c190bc9f




Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Jim Phillips
On Mon, May 25, 2015 at 1:36 PM, Mike Hearn m...@plan99.net wrote:

This meme about datacenter-sized nodes has to die. The Bitcoin wiki is down
 right now, but I showed years ago that you could keep up with VISA on a
 single well specced server with today's technology. Only people living in a
 dreamworld think that Bitcoin might actually have to match that level of
 transaction demand with today's hardware. As noted previously, too many
 users is simply not a problem Bitcoin has -- and may never have!


... And will certainly NEVER have if we can't solve the capacity problem
SOON.

In a former life, I was a capacity planner for Bank of America's mid-range
server group. We had one hard and fast rule. When you are typically
exceeding 75% of capacity on a given metric, it's time to expand capacity.
Period. You don't do silly things like adjusting the business model to
disincentivize use. Unless there's some flaw in the system and it's leaking
resources, if usage has increased to the point where you are at or near the
limits of capacity, you expand capacity. It's as simple as that, and I've
found that same rule fits quite well in a number of systems.

In Bitcoin, we're not leaking resources. There's no flaw. The system is
performing as intended. Usage is increasing because it works so well, and
there is huge potential for future growth as we identify more uses and
attract more users. There might be a few technical things we can do to
reduce consumption, but the metric we're concerned with right now is how
many transactions we can fit in a block. We've broken through the 75%
marker and are regularly bumping up against the 100% limit.

It is time to stop debating this and take action to expand capacity. The
only questions that should remain are how much capacity do we add, and how
soon can we do it. Given that most existing computer systems and networks
can easily handle 20MB blocks every 10 minutes, and given that that will
increase capacity 20-fold, I can't think of a single reason why we can't go
to 20MB as soon as humanly possible. And in a few years, when the average
block size is over 15MB, we bump it up again to as high as we can go then
without pushing typical computers or networks beyond their capacity. We can
worry about ways to slow down growth without affecting the usefulness of
Bitcoin as we get closer to the hard technical limits on our capacity.

And you know what else? If miners need higher fees to accommodate the costs
of bigger blocks, they can configure their nodes to only mine transactions
with higher fees. Let the miners decide how to charge enough to pay for
their costs. We don't need to cripple the network just for them.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-25 Thread Mike Hearn
CPFP also solves it just fine.


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-25 Thread Peter Todd
On Mon, May 25, 2015 at 08:44:18PM +0200, Mike Hearn wrote:
 Wallets are incentivised to do a better job with defragmentation already,
 as if you have lots of tiny UTXOs then your fees end up being huge when
 trying to make a payment.
 
 The reason they largely don't is just one of manpower. Nobody is working on
 it.
 
 As a wallet developer myself, one way I'd like to see this issue fixed is
 by making free transactions more reliable. Then wallets can submit free
 transactions to the network to consolidate UTXOs together, e.g. at night
 when the user is sleeping. They would then fit into whatever space is
 available in the block during periods of low demand, like on Sunday.

This can cause problems as until those transactions confirm, even more
of the user's outputs are unavailable for spending, causing confusion as
to why they can't send their full balance. It's also inefficient, as in
the case where the user does try to send a small payment that could be
satisfied by one or more of these small UTXOs, the wallet has to use a
larger UTXO.

With replace-by-fee however this problem goes away, as you can simply
double-spend the pending defragmentation transactions instead if they
are still unconfirmed when you need to use them.

-- 
'peter'[:-1]@petertodd.org
0aa9033c06c10d6131eafa3754c3157d74c2267c1dd2ca35




Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Thy Shizzle
Nah don't make blocks 20MB, then you are slowing down block propagation and 
blowing out conf times as a result. Just decrease the time it takes to make a 
1MB block, then you still see the same propagation times today and just 
increase the transaction throughput.

From: Jim Phillips j...@ergophobia.org
Sent: 26/05/2015 12:27 PM
To: Mike Hearn m...@plan99.net
Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] No Bitcoin For You

On Mon, May 25, 2015 at 1:36 PM, Mike Hearn m...@plan99.net wrote:

This meme about datacenter-sized nodes has to die. The Bitcoin wiki is down
 right now, but I showed years ago that you could keep up with VISA on a
 single well specced server with today's technology. Only people living in a
 dreamworld think that Bitcoin might actually have to match that level of
 transaction demand with today's hardware. As noted previously, too many
 users is simply not a problem Bitcoin has -- and may never have!


... And will certainly NEVER have if we can't solve the capacity problem
SOON.

In a former life, I was a capacity planner for Bank of America's mid-range
server group. We had one hard and fast rule. When you are typically
exceeding 75% of capacity on a given metric, it's time to expand capacity.
Period. You don't do silly things like adjusting the business model to
disincentivize use. Unless there's some flaw in the system and it's leaking
resources, if usage has increased to the point where you are at or near the
limits of capacity, you expand capacity. It's as simple as that, and I've
found that same rule fits quite well in a number of systems.

In Bitcoin, we're not leaking resources. There's no flaw. The system is
performing as intended. Usage is increasing because it works so well, and
there is huge potential for future growth as we identify more uses and
attract more users. There might be a few technical things we can do to
reduce consumption, but the metric we're concerned with right now is how
many transactions we can fit in a block. We've broken through the 75%
marker and are regularly bumping up against the 100% limit.

It is time to stop debating this and take action to expand capacity. The
only questions that should remain are how much capacity do we add, and how
soon can we do it. Given that most existing computer systems and networks
can easily handle 20MB blocks every 10 minutes, and given that that will
increase capacity 20-fold, I can't think of a single reason why we can't go
to 20MB as soon as humanly possible. And in a few years, when the average
block size is over 15MB, we bump it up again to as high as we can go then
without pushing typical computers or networks beyond their capacity. We can
worry about ways to slow down growth without affecting the usefulness of
Bitcoin as we get closer to the hard technical limits on our capacity.

And you know what else? If miners need higher fees to accommodate the costs
of bigger blocks, they can configure their nodes to only mine transactions
with higher fees. Let the miners decide how to charge enough to pay for
their costs. We don't need to cripple the network just for them.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*


Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread gabe appleton
But don't you see the same trade-off in the end there? You're still
propagating the same amount of data over the same amount of time, so unless
I misunderstand, the costs of such a move should be approximately the same,
just in different areas. The risks as I understand them are as follows:

20MB:


   1. Longer per-block propagation (eventually)
   2. Longer processing time (eventually)
   3. Longer sync time

1 Minute:

   1. Weaker individual confirmations (approx. equal per confirmation*time)
   2. Higher orphan rate (immediately)
   3. Longer sync time

That risk-set makes me want a middle-ground approach. Something where the
immediate consequences aren't all that strong, and where we have some idea
of what to do in the future. Is there any chance we can get decent network
simulations at various configurations (5MB/4min, etc)? Perhaps
re-appropriate the testnet?

On Mon, May 25, 2015 at 10:30 PM, Thy Shizzle thyshiz...@outlook.com
wrote:

  Nah don't make blocks 20MB, then you are slowing down block propagation
 and blowing out conf times as a result. Just decrease the time it takes to
 make a 1MB block, then you still see the same propagation times today and
 just increase the transaction throughput.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: ‎26/‎05/‎2015 12:27 PM
 To: Mike Hearn m...@plan99.net
 Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Subject: Re: [Bitcoin-development] No Bitcoin For You


 On Mon, May 25, 2015 at 1:36 PM, Mike Hearn m...@plan99.net wrote:

   This meme about datacenter-sized nodes has to die. The Bitcoin wiki is
 down right now, but I showed years ago that you could keep up with VISA on
 a single well specced server with today's technology. Only people living in
 a dreamworld think that Bitcoin might actually have to match that level of
 transaction demand with today's hardware. As noted previously, too many
 users is simply not a problem Bitcoin has -- and may never have!


  ... And will certainly NEVER have if we can't solve the capacity problem
 SOON.

  In a former life, I was a capacity planner for Bank of America's
 mid-range server group. We had one hard and fast rule. When you are
 typically exceeding 75% of capacity on a given metric, it's time to expand
 capacity. Period. You don't do silly things like adjusting the business
 model to disincentivize use. Unless there's some flaw in the system and
 it's leaking resources, if usage has increased to the point where you are
 at or near the limits of capacity, you expand capacity. It's as simple as
 that, and I've found that same rule fits quite well in a number of systems.

  In Bitcoin, we're not leaking resources. There's no flaw. The system is
 performing as intended. Usage is increasing because it works so well, and
 there is huge potential for future growth as we identify more uses and
 attract more users. There might be a few technical things we can do to
 reduce consumption, but the metric we're concerned with right now is how
 many transactions we can fit in a block. We've broken through the 75%
 marker and are regularly bumping up against the 100% limit.

  It is time to stop debating this and take action to expand capacity. The
 only questions that should remain are how much capacity do we add, and how
 soon can we do it. Given that most existing computer systems and networks
 can easily handle 20MB blocks every 10 minutes, and given that that will
 increase capacity 20-fold, I can't think of a single reason why we can't go
 to 20MB as soon as humanly possible. And in a few years, when the average
 block size is over 15MB, we bump it up again to as high as we can go then
 without pushing typical computers or networks beyond their capacity. We can
 worry about ways to slow down growth without affecting the usefulness of
 Bitcoin as we get closer to the hard technical limits on our capacity.

  And you know what else? If miners need higher fees to accommodate the
 costs of bigger blocks, they can configure their nodes to only mine
 transactions with higher fees. Let the miners decide how to charge enough
 to pay for their costs. We don't need to cripple the network just for them.

  --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts

 *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
 -- David Ogilvy *

   *This message was created with 100% recycled electrons. Please think
 twice before printing.*




Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Jim Phillips
Frankly I'm good with either way. I'm definitely in favor of faster
confirmation times.

The important thing is that we need to increase the amount of transactions
that get into blocks over a given time frame to a point that is in line
with what current technology can handle. We can handle WAY more than we are
doing right now. The Bitcoin network is not currently disk, CPU, or RAM
bound. Not even close. The metric we're closest to being restricted by
would be network bandwidth. I live in a developing country. 2Mbps is a
typical broadband speed here (although 5Mbps and 10Mbps connections are
affordable). That equates to about 15MB per minute, or 150x more capacity
than what I need to receive a full copy of the blockchain if I only talk to
one peer. If I relay to say 10 peers, I can still handle 15x larger block
sizes on a slow 2Mbps connection.
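
To make that arithmetic explicit, a quick sketch (the baseline assumes
today's 1MB block every 10 minutes):

    def headroom(mbps, relay_peers=1, block_mb=1.0, block_period_min=10.0):
        mb_per_min = mbps / 8 * 60                # link rate in MB/minute
        chain_rate = block_mb / block_period_min  # chain traffic in MB/minute
        return mb_per_min / (chain_rate * relay_peers)

    print(headroom(2))                  # ~150x headroom, downloading only
    print(headroom(2, relay_peers=10))  # ~15x if also uploading to 10 peers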

Also, even if we reduce the difficulty so that we're doing 1MB blocks every
minute, that's still only 10MB every 10 minutes. Eventually we're going to
have to increase that, and we can only reduce the confirmation period so
much. I think someone once said 30 seconds or so is about the shortest
period you can practically achieve.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 9:30 PM, Thy Shizzle thyshiz...@outlook.com wrote:

  Nah don't make blocks 20MB, then you are slowing down block propagation
 and blowing out conf times as a result. Just decrease the time it takes to
 make a 1MB block, then you still see the same propagation times today and
 just increase the transaction throughput.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: ‎26/‎05/‎2015 12:27 PM
 To: Mike Hearn m...@plan99.net
 Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Subject: Re: [Bitcoin-development] No Bitcoin For You


 On Mon, May 25, 2015 at 1:36 PM, Mike Hearn m...@plan99.net wrote:

   This meme about datacenter-sized nodes has to die. The Bitcoin wiki is
 down right now, but I showed years ago that you could keep up with VISA on
 a single well specced server with today's technology. Only people living in
 a dreamworld think that Bitcoin might actually have to match that level of
 transaction demand with today's hardware. As noted previously, too many
 users is simply not a problem Bitcoin has -- and may never have!


  ... And will certainly NEVER have if we can't solve the capacity problem
 SOON.

  In a former life, I was a capacity planner for Bank of America's
 mid-range server group. We had one hard and fast rule. When you are
 typically exceeding 75% of capacity on a given metric, it's time to expand
 capacity. Period. You don't do silly things like adjusting the business
 model to disincentivize use. Unless there's some flaw in the system and
 it's leaking resources, if usage has increased to the point where you are
 at or near the limits of capacity, you expand capacity. It's as simple as
 that, and I've found that same rule fits quite well in a number of systems.

  In Bitcoin, we're not leaking resources. There's no flaw. The system is
 performing as intended. Usage is increasing because it works so well, and
 there is huge potential for future growth as we identify more uses and
 attract more users. There might be a few technical things we can do to
 reduce consumption, but the metric we're concerned with right now is how
 many transactions we can fit in a block. We've broken through the 75%
 marker and are regularly bumping up against the 100% limit.

  It is time to stop debating this and take action to expand capacity. The
 only questions that should remain are how much capacity do we add, and how
 soon can we do it. Given that most existing computer systems and networks
 can easily handle 20MB blocks every 10 minutes, and given that that will
 increase capacity 20-fold, I can't think of a single reason why we can't go
 to 20MB as soon as humanly possible. And in a few years, when the average
 block size is over 15MB, we bump it up again to as high as we can go then
 without pushing typical computers or networks beyond their capacity. We can
 worry about ways to slow down growth without affecting the usefulness of
 Bitcoin as we get closer to the hard technical limits on our capacity.

  And you know what else? If miners need higher fees to accommodate the
 costs of bigger blocks, they can configure their nodes to only mine
 transactions with higher fees. Let the miners decide how to charge enough
 to pay for their costs. We don't need to cripple the network just for them.

  --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts

 *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
 -- David Ogilvy 

Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Thy Shizzle
I wouldn't say it's the same trade-off, because you need the whole 20MB block 
before you can start to use it, whereas a 1MB block can be used quicker, thus 
transactions are found in the block quicker, etc. As for the higher rate of 
orphans, I think this would be complemented by a faster correction rate: if 
you're pumping out blocks at a rate of 1 per minute, and we get a fork where 
the next block comes in 10 minutes and is the decider, it took 10 minutes to 
determine which block is the orphan. But at a rate of 1 block per minute it 
only takes 1 minute to resolve the orphan (obviously this is very simplified), 
so I'm not so sure that orphan rate is a big issue here. Indeed you would need 
to draw upon more confirmations given easier block creation, but surely that 
is not an issue?

Why would sync time be longer as opposed to 20MB blocks?

From: gabe appleton gapplet...@gmail.com
Sent: 26/05/2015 12:41 PM
To: Thy Shizzle thyshiz...@outlook.com
Cc: Jim Phillips j...@ergophobia.org; Mike Hearn m...@plan99.net; Bitcoin 
Dev bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] No Bitcoin For You

But don't you see the same trade-off in the end there? You're still
propagating the same amount of data over the same amount of time, so unless
I misunderstand, the costs of such a move should be approximately the same,
just in different areas. The risks as I understand them are as follows:

20MB:


   1. Longer per-block propagation (eventually)
   2. Longer processing time (eventually)
   3. Longer sync time

1 Minute:

   1. Weaker individual confirmations (approx. equal per confirmation*time)
   2. Higher orphan rate (immediately)
   3. Longer sync time

That risk-set makes me want a middle-ground approach. Something where the
immediate consequences aren't all that strong, and where we have some idea
of what to do in the future. Is there any chance we can get decent network
simulations at various configurations (5MB/4min, etc)? Perhaps
re-appropriate the testnet?

On Mon, May 25, 2015 at 10:30 PM, Thy Shizzle thyshiz...@outlook.com
wrote:

  Nah don't make blocks 20MB, then you are slowing down block propagation
 and blowing out conf times as a result. Just decrease the time it takes to
 make a 1MB block, then you still see the same propagation times today and
 just increase the transaction throughput.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: ‎26/‎05/‎2015 12:27 PM
 To: Mike Hearn m...@plan99.net
 Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Subject: Re: [Bitcoin-development] No Bitcoin For You


 On Mon, May 25, 2015 at 1:36 PM, Mike Hearn m...@plan99.net wrote:

   This meme about datacenter-sized nodes has to die. The Bitcoin wiki is
 down right now, but I showed years ago that you could keep up with VISA on
 a single well specced server with today's technology. Only people living in
 a dreamworld think that Bitcoin might actually have to match that level of
 transaction demand with today's hardware. As noted previously, too many
 users is simply not a problem Bitcoin has -- and may never have!


  ... And will certainly NEVER have if we can't solve the capacity problem
 SOON.

  In a former life, I was a capacity planner for Bank of America's
 mid-range server group. We had one hard and fast rule. When you are
 typically exceeding 75% of capacity on a given metric, it's time to expand
 capacity. Period. You don't do silly things like adjusting the business
 model to disincentivize use. Unless there's some flaw in the system and
 it's leaking resources, if usage has increased to the point where you are
 at or near the limits of capacity, you expand capacity. It's as simple as
 that, and I've found that same rule fits quite well in a number of systems.

  In Bitcoin, we're not leaking resources. There's no flaw. The system is
 performing as intended. Usage is increasing because it works so well, and
 there is huge potential for future growth as we identify more uses and
 attract more users. There might be a few technical things we can do to
 reduce consumption, but the metric we're concerned with right now is how
 many transactions we can fit in a block. We've broken through the 75%
 marker and are regularly bumping up against the 100% limit.

  It is time to stop debating this and take action to expand capacity. The
 only questions that should remain are how much capacity do we add, and how
 soon can we do it. Given that most existing computer systems and networks
 can easily handle 20MB blocks every 10 minutes, and given that that will
 increase capacity 20-fold, I can't think of a single reason why we can't go
 to 20MB as soon as humanly possible. And in a few years, when the average
 block size is over 15MB, we bump it up again to as high as we can go then
 without pushing typical computers or networks beyond their capacity. We can
 worry about ways to slow down growth without 

Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Thy Shizzle
Indeed Jim, your internet connection makes a good case for why I don't like 20MB 
blocks (right now). It would take you well over a minute to download the block 
before you could even relay it on -- a big slowdown in propagation! Yes, I do 
see how decreasing the time to create blocks is a bit of a band-aid fix, and to 
use the term I've seen mentioned here, "kicking the can down the road", I agree 
that this is doing that. However, as you say, bandwidth is our biggest enemy 
right now, and so hopefully by the time we exceed the capacity gained by the 
decrease in block time, we can then look to bump up block size, because 
hopefully 20Mbps connections will be baseline by then, etc.

From: Jim Phillips j...@ergophobia.org
Sent: 26/05/2015 12:53 PM
To: Thy Shizzle thyshiz...@outlook.com
Cc: Mike Hearn m...@plan99.net; Bitcoin 
Dev bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] No Bitcoin For You

Frankly I'm good with either way. I'm definitely in favor of faster
confirmation times.

The important thing is that we need to increase the amount of transactions
that get into blocks over a given time frame to a point that is in line
with what current technology can handle. We can handle WAY more than we are
doing right now. The Bitcoin network is not currently disk, CPU, or RAM
bound. Not even close. The metric we're closest to being restricted by
would be network bandwidth. I live in a developing country. 2Mbps is a
typical broadband speed here (although 5Mbps and 10Mbps connections are
affordable). That equates to about 15MB per minute, or 150x more capacity
than what I need to receive a full copy of the blockchain if I only talk to
one peer. If I relay to say 10 peers, I can still handle 15x larger block
sizes on a slow 2Mbps connection.

Also, even if we reduce the difficulty so that we're doing 1MB blocks every
minute, that's still only 10MB every 10 minutes. Eventually we're going to
have to increase that, and we can only reduce the confirmation period so
much. I think someone once said 30 seconds or so is about the shortest
period you can practically achieve.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 9:30 PM, Thy Shizzle thyshiz...@outlook.com wrote:

  Nah don't make blocks 20MB, then you are slowing down block propagation
 and blowing out conf times as a result. Just decrease the time it takes to
 make a 1MB block, then you still see the same propagation times today and
 just increase the transaction throughput.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: ‎26/‎05/‎2015 12:27 PM
 To: Mike Hearn m...@plan99.net
 Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Subject: Re: [Bitcoin-development] No Bitcoin For You


 On Mon, May 25, 2015 at 1:36 PM, Mike Hearn m...@plan99.net wrote:

   This meme about datacenter-sized nodes has to die. The Bitcoin wiki is
 down right now, but I showed years ago that you could keep up with VISA on
 a single well specced server with today's technology. Only people living in
 a dreamworld think that Bitcoin might actually have to match that level of
 transaction demand with today's hardware. As noted previously, too many
 users is simply not a problem Bitcoin has -- and may never have!


  ... And will certainly NEVER have if we can't solve the capacity problem
 SOON.

  In a former life, I was a capacity planner for Bank of America's
 mid-range server group. We had one hard and fast rule. When you are
 typically exceeding 75% of capacity on a given metric, it's time to expand
 capacity. Period. You don't do silly things like adjusting the business
 model to disincentivize use. Unless there's some flaw in the system and
 it's leaking resources, if usage has increased to the point where you are
 at or near the limits of capacity, you expand capacity. It's as simple as
 that, and I've found that same rule fits quite well in a number of systems.

  In Bitcoin, we're not leaking resources. There's no flaw. The system is
 performing as intended. Usage is increasing because it works so well, and
 there is huge potential for future growth as we identify more uses and
 attract more users. There might be a few technical things we can do to
 reduce consumption, but the metric we're concerned with right now is how
 many transactions we can fit in a block. We've broken through the 75%
 marker and are regularly bumping up against the 100% limit.

  It is time to stop debating this and take action to expand capacity. The
 only questions that should remain are how much capacity do we add, and how
 soon can we do it. Given that most existing computer systems and networks
 can 

Re: [Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Matt Whitlock
On Tuesday, 26 May 2015, at 1:15 am, Peter Todd wrote:
 On Tue, May 26, 2015 at 12:52:07AM -0400, Matt Whitlock wrote:
  On Monday, 25 May 2015, at 11:48 pm, Jim Phillips wrote:
   Do any wallets actually do this yet?
  
  Not that I know of, but they do seed their address database via DNS, which 
  you can poison if you control the LAN's DNS resolver. I did this for a 
  Bitcoin-only Wi-Fi network I operated at a remote festival. We had well 
  over a hundred lightweight wallets, all trying to connect to the Bitcoin 
  P2P network over a very bandwidth-constrained Internet link, so I poisoned 
  the DNS and rejected all outbound connection attempts on port 8333, to 
  force all the wallets to connect to a single local full node, which had 
  connectivity to a single remote node over the Internet. Thus, all the 
  lightweight wallets at the festival had Bitcoin network connectivity, but 
  we only needed to backhaul the Bitcoin network's transaction traffic once.
 
 Interesting!
 
 What festival was this?

The Porcupine Freedom Festival (PorcFest) in New Hampshire last summer. I 
strongly suspect that it's the largest gathering of Bitcoin users at any event 
that is not specifically Bitcoin-themed. There's a lot of overlap between the 
Bitcoin and liberty communities. PorcFest draws somewhere around 1000-2000 
attendees, a solid quarter of whom have Bitcoin wallets on their mobile devices.

The backhaul was a 3G cellular Internet connection, and the local Bitcoin node 
and network router were hosted on a Raspberry Pi with some Netfilter tricks to 
restrict connectivity. The net result was that all Bitcoin nodes (lightweight 
and heavyweight) on the local Wi-Fi network were unable to connect to any 
Bitcoin nodes except for the local node, which they discovered via DNS. I also 
had provisions in place to allow outbound connectivity to the API servers for 
Mycelium, Blockchain, and Coinbase wallets, by feeding the DNS resolver's 
results in real-time into a whitelisting Netfilter rule utilizing IP Sets.

For your amusement, here's the graphic for the banner that I had made to 
advertise the network at the festival (*chuckle*): 
http://www.mattwhitlock.com/bitcoin_wifi.png



Re: [Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Matt Whitlock
Who would be performing a Sybil attack against themselves? We're talking about 
a LAN here. All the nodes would be under the control of the same entity. In 
that case, you actually want them all connecting solely to a central hub node 
on the LAN, and the hub node should connect to "diverse and unpredictable" 
other nodes on the Bitcoin network.


On Monday, 25 May 2015, at 9:46 pm, Kevin Greene wrote:
 This is something you actually don't want. In order to make it as difficult
 as possible for an attacker to perform a sybil attack, you want to choose a
 set of peers that is as diverse, and unpredictable as possible.
 
 
 On Mon, May 25, 2015 at 9:37 PM, Matt Whitlock b...@mattwhitlock.name
 wrote:
 
  This is very simple to do. Just ping the "all nodes" address (ff02::1) and
  try connecting to TCP port 8333 of each node that responds. Shouldn't take
  more than a few milliseconds on any but the most densely populated LANs.
 
 
  On Monday, 25 May 2015, at 11:06 pm, Jim Phillips wrote:
   Is there any work being done on using some kind of zero-conf service
   discovery protocol so that lightweight clients can find a full node on
  the
   same LAN to peer with rather than having to tie up WAN bandwidth?
  
   I envision a future where lightweight devices within a home use SPV over
   WiFi to connect with a home server which in turn relays the transactions
   they create out to the larger and faster relays on the Internet.
  
   In a situation where there are hundreds or thousands of small SPV devices
   in a single home (if 21, Inc. is successful) monitoring the blockchain,
   this could result in lower traffic across the slow WAN connection.  And
   yes, I realize it could potentially take a LOT of these devices before
  the
   total bandwidth is greater than downloading a full copy of the
  blockchain,
   but there's other reasons to host your own full node -- trust being one.
  
   --
   *James G. Phillips IV*
   https://plus.google.com/u/0/113107039501292625391/posts
   http://www.linkedin.com/in/ergophobe
  
   *Don't bunt. Aim out of the ball park. Aim for the company of
  immortals.
   -- David Ogilvy*
  
*This message was created with 100% recycled electrons. Please think
  twice
   before printing.*
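
The ff02::1 probe described above is small enough to sketch (untested;
assumes Linux with iputils ping, and the interface name is hypothetical):

    import re
    import socket
    import subprocess

    IFACE = "eth0"  # hypothetical interface name
    PORT = 8333

    def lan_full_nodes(iface=IFACE, timeout=0.5):
        # Ping the IPv6 all-nodes multicast address to enumerate the link.
        out = subprocess.run(["ping", "-6", "-c", "2", "ff02::1%" + iface],
                             capture_output=True, text=True).stdout
        nodes = []
        for addr in set(re.findall(r"from (fe80:[0-9a-f:]+)", out)):
            try:
                # Link-local addresses need the interface scope appended.
                with socket.create_connection((addr + "%" + iface, PORT),
                                              timeout=timeout):
                    nodes.append(addr)
            except OSError:
                pass  # host is up, but no full node listening
        return nodes

    print(lan_full_nodes())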
 
 



Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Jim Phillips
I don't see how the fact that my 2Mbps connection causes me to not be a
very good relay has any bearing on whether or not the network as a whole
would be negatively impacted by a 20MB block. My inability to rapidly
propagate blocks doesn't really harm the network. It's only if MOST relays
are as slow as mine that it creates an issue. I'm one node in thousands
(potentially tens or hundreds of thousands if/when Bitcoin goes
mainstream). And I'm an individual. There's no reason at all for me to run
a full node from my home, except to have my own trusted and validated copy
of the blockchain on a computer I control directly. I don't need to act as
a relay for that and as long as I can download blocks faster than they are
created I'm fine. Also, I can easily afford a VPS server or several to run
full nodes as relays if I am feeling altruistic. It's actually cheaper for
me to lease a VPS than to keep my own home PC on 24/7, which is why I have
2 of them.

And as a business, the cost of a server and bandwidth to run a full node is
a drop in the bucket. I'm involved in several projects where we have full
nodes running on leased servers with multiple 1Gbps connections. It's an
almost zero cost. Those nodes could handle 20MB blocks today without
thinking about it, and I'm sure our nodes are just a few amongst thousands
just like them. I'm not at all concerned about the network being too
centralized.

What concerns me is the fact that we are using edge cases like my home PC
as a lame excuse to debate expanding the capacity of the network.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 10:02 PM, Thy Shizzle thyshiz...@outlook.com
wrote:

  Indeed Jim, your internet connection makes a good reason why I don't
 like 20mb blocks (right now). It would take you well over a minute to
 download the block before you could even relay it on, so much slow down in
 propagation! Yes I do see how decreasing the time to create blocks is a bit
 of a band-aid fix, and to use the term I've seen mentioned here, "kicking
 the can down the road", I agree that this is doing this; however, as you say,
 bandwidth is our biggest enemy right now and so hopefully by the time we
 exceed the capacity gained by the decrease in block time, we can then look
 to bump up block size because hopefully 20mbps connections will be baseline
 by then etc.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: ‎26/‎05/‎2015 12:53 PM
 To: Thy Shizzle thyshiz...@outlook.com
 Cc: Mike Hearn m...@plan99.net; Bitcoin Dev
 bitcoin-development@lists.sourceforge.net

 Subject: Re: [Bitcoin-development] No Bitcoin For You

  Frankly I'm good with either way. I'm definitely in favor of faster
 confirmation times.

  The important thing is that we need to increase the amount of
 transactions that get into blocks over a given time frame to a point that
 is in line with what current technology can handle. We can handle WAY more
 than we are doing right now. The Bitcoin network is not currently Disk,
 CPU, or RAM bound.. Not even close. The metric we're closest to being
 restricted by would be Network bandwidth. I live in a developing country.
 2Mbps is a typical broadband speed here (although 5Mbps and 10Mbps
 connections are affordable). That equates to about 17MB per minute, or 170x
 more capacity than what I need to receive a full copy of the blockchain if
 I only talk to one peer. If I relay to say 10 peers, I can still handle 17x
 larger block sizes on a slow 2Mbps connection.
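
 A rough sanity check of those numbers (treating 2Mbps as decimal megabits;
 the ~17MB per minute figure presumably rounds a little differently):

     # 2 Mbps link vs. 1 MB of new blocks every 10 minutes
     mb_per_minute = 2_000_000 / 8 * 60 / 1e6    # ~15 MB/min of capacity
     chain_mb_per_minute = 1 / 10                # 1 MB block per 10 minutes
     print(mb_per_minute / chain_mb_per_minute)  # ~150x headroom
     # Relaying to 10 peers still leaves roughly 15x headroom.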

  Also, even if we reduce the difficulty so that we're doing 1MB blocks
 every minute, that's still only 10MB every 10 minutes. Eventually we're
 going to have to increase that, and we can only reduce the confirmation
 period so much. I think someone once said 30 seconds or so is about the
 shortest period you can practically achieve.

  --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts
 http://www.linkedin.com/in/ergophobe

 *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
 -- David Ogilvy *

   *This message was created with 100% recycled electrons. Please think
 twice before printing.*

 On Mon, May 25, 2015 at 9:30 PM, Thy Shizzle thyshiz...@outlook.com
 wrote:

  Nah don't make blocks 20mb, then you are slowing down block propagation
 and blowing out conf times as a result. Just decrease the time it takes to
 make a 1mb block, then you still see the same propagation times today and
 just increase the transaction throughput.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: ‎26/‎05/‎2015 12:27 PM
 To: Mike Hearn m...@plan99.net
 Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Subject: Re: 

Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread Jim Phillips
Incidentally, even once we have the Internet of Things brought on by 21,
Inc. or whoever beats them to it, I would expect the average home to have
only a single full node hub receiving the blockchain and broadcasting
transactions created by all the minor SPV connected devices running within
the house. The in-home full node would be peered with high bandwidth
full-node relays running at the ISP or in the cloud. There are more than
enough ISPs and cloud compute providers in the world such that there should
be no concern at all about centralization of relays. Full nodes could some
day become as ubiquitous on the Internet as authoritative DNS servers. And
just like DNS servers, if you don't trust the nodes your ISP creates or
it's too slow or censors transactions, there's nothing preventing you from
peering with nodes hosted by the Googles or OpenDNSs out there, or running
your own if you're really paranoid and have a few extra bucks for a VPS.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*

On Mon, May 25, 2015 at 10:23 PM, Jim Phillips j...@ergophobia.org wrote:

 I don't see how the fact that my 2Mbps connection causes me to not be a
 very good relay has any bearing on whether or not the network as a whole
 would be negatively impacted by a 20MB block. My inability to rapidly
 propagate blocks doesn't really harm the network. It's only if MOST relays
 are as slow as mine that it creates an issue. I'm one node in thousands
 (potentially tens or hundreds of thousands if/when Bitcoin goes
 mainstream). And I'm an individual. There's no reason at all for me to run
 a full node from my home, except to have my own trusted and validated copy
 of the blockchain on a computer I control directly. I don't need to act as
 a relay for that and as long as I can download blocks faster than they are
 created I'm fine. Also, I can easily afford a VPS server or several to run
 full nodes as relays if I am feeling altruistic. It's actually cheaper for
 me to lease a VPS than to keep my own home PC on 24/7, which is why I have
 2 of them.

 And as a business, the cost of a server and bandwidth to run a full node
 is a drop in the bucket. I'm involved in several projects where we have
 full nodes running on leased servers with multiple 1Gbps connections. It's
 an almost zero cost. Those nodes could handle 20MB blocks today without
 thinking about it, and I'm sure our nodes are just a few amongst thousands
 just like them. I'm not at all concerned about the network being too
 centralized.

 What concerns me is the fact that we are using edge cases like my home PC
 as a lame excuse to debate expanding the capacity of the network.

 --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts
 http://www.linkedin.com/in/ergophobe

 *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
 -- David Ogilvy*

  *This message was created with 100% recycled electrons. Please think
 twice before printing.*

 On Mon, May 25, 2015 at 10:02 PM, Thy Shizzle thyshiz...@outlook.com
 wrote:

  Indeed Jim, your internet connection makes a good reason why I don't
 like 20mb blocks (right now). It would take you well over a minute to
 download the block before you could even relay it on, so much slow down in
 propagation! Yes I do see how decreasing the time to create blocks is a bit
 of a band-aid fix, and to use the term I've seen mentioned here, "kicking
 the can down the road", I agree that this is doing this; however, as you say,
 bandwidth is our biggest enemy right now and so hopefully by the time we
 exceed the capacity gained by the decrease in block time, we can then look
 to bump up block size because hopefully 20mbps connections will be baseline
 by then etc.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: ‎26/‎05/‎2015 12:53 PM
 To: Thy Shizzle thyshiz...@outlook.com
 Cc: Mike Hearn m...@plan99.net; Bitcoin Dev
 bitcoin-development@lists.sourceforge.net

 Subject: Re: [Bitcoin-development] No Bitcoin For You

  Frankly I'm good with either way. I'm definitely in favor of faster
 confirmation times.

  The important thing is that we need to increase the amount of
 transactions that get into blocks over a given time frame to a point that
 is in line with what current technology can handle. We can handle WAY more
 than we are doing right now. The Bitcoin network is not currently Disk,
 CPU, or RAM bound.. Not even close. The metric we're closest to being
 restricted by would be Network bandwidth. I live in a developing country.
 2Mbps is a typical broadband speed here (although 5Mbps and 10Mbps
 connections are affordable). That equates to about 17MB per minute, or 170x
 more capacity than what I need to receive a 

Re: [Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Kevin Greene
This is something you actually don't want. In order to make it as difficult
as possible for an attacker to perform a sybil attack, you want to choose a
set of peers that is as diverse and unpredictable as possible.


On Mon, May 25, 2015 at 9:37 PM, Matt Whitlock b...@mattwhitlock.name
wrote:

 This is very simple to do. Just ping the "all nodes" address (ff02::1) and
 try connecting to TCP port 8333 of each node that responds. Shouldn't take
 more than a few milliseconds on any but the most densely populated LANs.


 On Monday, 25 May 2015, at 11:06 pm, Jim Phillips wrote:
  Is there any work being done on using some kind of zero-conf service
  discovery protocol so that lightweight clients can find a full node on
  the same LAN to peer with rather than having to tie up WAN bandwidth?

  I envision a future where lightweight devices within a home use SPV over
  WiFi to connect with a home server which in turn relays the transactions
  they create out to the larger and faster relays on the Internet.

  In a situation where there are hundreds or thousands of small SPV devices
  in a single home (if 21, Inc. is successful) monitoring the blockchain,
  this could result in lower traffic across the slow WAN connection. And
  yes, I realize it could potentially take a LOT of these devices before
  the total bandwidth is greater than downloading a full copy of the
  blockchain, but there's other reasons to host your own full node -- trust
  being one.

  --
  *James G. Phillips IV*
  https://plus.google.com/u/0/113107039501292625391/posts
  http://www.linkedin.com/in/ergophobe

  *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
  -- David Ogilvy*

  *This message was created with 100% recycled electrons. Please think
  twice before printing.*





Re: [Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Luke Dashjr
On Tuesday, May 26, 2015 4:46:22 AM Kevin Greene wrote:
 This is something you actually don't want. In order to make it as difficult
 as possible for an attacker to perform a sybil attack, you want to choose a
 set of peers that is as diverse and unpredictable as possible.

It doesn't hurt to have a local node or two, though. Might as well, to improve
propagation, while maintaining the other peers to avoid sybil attacks.

Luke



Re: [Bitcoin-development] No Bitcoin For You

2015-05-25 Thread gabe appleton
Sync time wouldn't be longer compared to 20MB; it would (eventually) be
longer under either setup.

Also, and this is probably a silly concern, but wouldn't changing the block
time change the supply curve? If we cut the rate in half, or by any power of
two, that affects nothing; but if we want to keep the block interval in round
numbers, we need to do it by 10, 5, or 2. I feel like most people would opt
for 10 or 5, both of which change the supply curve due to truncation.
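
A quick illustration of that truncation, assuming the subsidy continues to
be computed Bitcoin Core-style as an integer right-shift in satoshis (the
10x schedule here is hypothetical):

    # Total issuance under the current schedule vs. a 10x-faster schedule
    # (1/10th the subsidy, 10x the halving interval), with integer halving.
    def total_supply(initial_subsidy_sats, halving_interval):
        total, subsidy = 0, initial_subsidy_sats
        while subsidy > 0:
            total += subsidy * halving_interval
            subsidy >>= 1  # truncates odd satoshi amounts
        return total

    ten_minute = total_supply(50 * 10**8, 210_000)
    one_minute = total_supply(5 * 10**8, 2_100_000)
    print(ten_minute - one_minute)  # nonzero: roughly 0.25 BTC less issued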

Again, it's a trivial concern, but probably one that should be addressed.
On May 25, 2015 11:52 PM, Jim Phillips j...@ergophobia.org wrote:

 Incidentally, even once we have the Internet of Things brought on by 21,
 Inc. or whoever beats them to it, I would expect the average home to have
 only a single full node hub receiving the blockchain and broadcasting
 transactions created by all the minor SPV connected devices running within
 the house. The in-home full node would be peered with high bandwidth
 full-node relays running at the ISP or in the cloud. There are more than
 enough ISPs and cloud compute providers in the world such that there should
 be no concern at all about centralization of relays. Full nodes could some
 day become as ubiquitous on the Internet as authoritative DNS servers. And
 just like DNS servers, if you don't trust the nodes your ISP creates or
 it's too slow or censors transactions, there's nothing preventing you from
 peering with nodes hosted by the Googles or OpenDNSs out there, or running
 your own if you're really paranoid and have a few extra bucks for a VPS.

 --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts
 http://www.linkedin.com/in/ergophobe

 *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
 -- David Ogilvy*

  *This message was created with 100% recycled electrons. Please think
 twice before printing.*

 On Mon, May 25, 2015 at 10:23 PM, Jim Phillips j...@ergophobia.org wrote:

 I don't see how the fact that my 2Mbps connection causes me to not be a
 very good relay has any bearing on whether or not the network as a whole
 would be negatively impacted by a 20MB block. My inability to rapidly
 propagate blocks doesn't really harm the network. It's only if MOST relays
 are as slow as mine that it creates an issue. I'm one node in thousands
 (potentially tens or hundreds of thousands if/when Bitcoin goes
 mainstream). And I'm an individual. There's no reason at all for me to run
 a full node from my home, except to have my own trusted and validated copy
 of the blockchain on a computer I control directly. I don't need to act as
 a relay for that and as long as I can download blocks faster than they are
 created I'm fine. Also, I can easily afford a VPS server or several to run
 full nodes as relays if I am feeling altruistic. It's actually cheaper for
 me to lease a VPS than to keep my own home PC on 24/7, which is why I have
 2 of them.

 And as a business, the cost of a server and bandwidth to run a full node
 is a drop in the bucket. I'm involved in several projects where we have
 full nodes running on leased servers with multiple 1Gbps connections. It's
 an almost zero cost. Those nodes could handle 20MB blocks today without
 thinking about it, and I'm sure our nodes are just a few amongst thousands
 just like them. I'm not at all concerned about the network being too
 centralized.

 What concerns me is the fact that we are using edge cases like my home PC
 as a lame excuse to debate expanding the capacity of the network.

 --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts
 http://www.linkedin.com/in/ergophobe

 *Don't bunt. Aim out of the ball park. Aim for the company of
 immortals. -- David Ogilvy*

  *This message was created with 100% recycled electrons. Please think
 twice before printing.*

 On Mon, May 25, 2015 at 10:02 PM, Thy Shizzle thyshiz...@outlook.com
 wrote:

  Indeed Jim, your internet connection makes a good reason why I don't
 like 20mb blocks (right now). It would take you well over a minute to
 download the block before you could even relay it on, so much slow down in
 propagation! Yes I do see how decreasing the time to create blocks is a bit
 of a band-aid fix, and to use the term I've seen mentioned here, "kicking
 the can down the road", I agree that this is doing this; however, as you say,
 bandwidth is our biggest enemy right now and so hopefully by the time we
 exceed the capacity gained by the decrease in block time, we can then look
 to bump up block size because hopefully 20mbps connections will be baseline
 by then etc.
  --
 From: Jim Phillips j...@ergophobia.org
 Sent: ‎26/‎05/‎2015 12:53 PM
 To: Thy Shizzle thyshiz...@outlook.com
 Cc: Mike Hearn m...@plan99.net; Bitcoin Dev
 bitcoin-development@lists.sourceforge.net

 Subject: Re: [Bitcoin-development] No Bitcoin For You

  Frankly I'm good with either way. I'm definitely in favor of faster
 confirmation times.

  The 

Re: [Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Matt Whitlock
This is very simple to do. Just ping the "all nodes" address (ff02::1) and try
connecting to TCP port 8333 of each node that responds. Shouldn't take more
than a few milliseconds on any but the most densely populated LANs.
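
A hypothetical sketch of that loop (it assumes a Linux host with iputils
ping; "eth0" is an illustrative interface name, and reply formatting varies
between ping versions):

    #!/usr/bin/env python3
    # Solicit ICMPv6 echo replies from the all-nodes multicast address,
    # then probe each responder on the Bitcoin port.
    import re, socket, subprocess

    IFACE = "eth0"  # illustrative; use the LAN-facing interface

    out = subprocess.run(["ping", "-6", "-c", "2", "ff02::1%" + IFACE],
                         capture_output=True, text=True).stdout
    # Replies look like "64 bytes from fe80::...%eth0: icmp_seq=1 ..."
    hosts = set(re.findall(r"from ([0-9a-f:]+%\w+)", out, re.IGNORECASE))

    full_nodes = []
    for host in hosts:
        try:
            # getaddrinfo accepts the %eth0 scope suffix on link-local
            # addresses, so the probe goes out the right interface.
            with socket.create_connection((host, 8333), timeout=0.25):
                full_nodes.append(host)
        except OSError:
            pass

    print("Bitcoin nodes on this LAN:", full_nodes)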


On Monday, 25 May 2015, at 11:06 pm, Jim Phillips wrote:
 Is there any work being done on using some kind of zero-conf service
 discovery protocol so that lightweight clients can find a full node on the
 same LAN to peer with rather than having to tie up WAN bandwidth?
 
 I envision a future where lightweight devices within a home use SPV over
 WiFi to connect with a home server which in turn relays the transactions
 they create out to the larger and faster relays on the Internet.
 
 In a situation where there are hundreds or thousands of small SPV devices
 in a single home (if 21, Inc. is successful) monitoring the blockchain,
 this could result in lower traffic across the slow WAN connection.  And
 yes, I realize it could potentially take a LOT of these devices before the
 total bandwidth is greater than downloading a full copy of the blockchain,
 but there's other reasons to host your own full node -- trust being one.
 
 --
 *James G. Phillips IV*
 https://plus.google.com/u/0/113107039501292625391/posts
 http://www.linkedin.com/in/ergophobe
 
 *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
 -- David Ogilvy*
 
  *This message was created with 100% recycled electrons. Please think twice
 before printing.*



Re: [Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Jim Phillips
Do any wallets actually do this yet?
On May 25, 2015 11:37 PM, Matt Whitlock b...@mattwhitlock.name wrote:

 This is very simple to do. Just ping the "all nodes" address (ff02::1) and
 try connecting to TCP port 8333 of each node that responds. Shouldn't take
 more than a few milliseconds on any but the most densely populated LANs.


 On Monday, 25 May 2015, at 11:06 pm, Jim Phillips wrote:
  Is there any work being done on using some kind of zero-conf service
  discovery protocol so that lightweight clients can find a full node on
  the same LAN to peer with rather than having to tie up WAN bandwidth?

  I envision a future where lightweight devices within a home use SPV over
  WiFi to connect with a home server which in turn relays the transactions
  they create out to the larger and faster relays on the Internet.

  In a situation where there are hundreds or thousands of small SPV devices
  in a single home (if 21, Inc. is successful) monitoring the blockchain,
  this could result in lower traffic across the slow WAN connection. And
  yes, I realize it could potentially take a LOT of these devices before
  the total bandwidth is greater than downloading a full copy of the
  blockchain, but there's other reasons to host your own full node -- trust
  being one.

  --
  *James G. Phillips IV*
  https://plus.google.com/u/0/113107039501292625391/posts
  http://www.linkedin.com/in/ergophobe

  *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
  -- David Ogilvy*

  *This message was created with 100% recycled electrons. Please think
  twice before printing.*



[Bitcoin-development] First-Seen-Safe Replace-by-Fee

2015-05-25 Thread Peter Todd
Summary
-------

First-seen-safe replace-by-fee (FSS RBF) does the following:

1) Give users effective ways of getting stuck transactions unstuck.
2) Use blockchain space efficiently.

without:

3) Changing the status quo with regard to zeroconf.

The current Bitcoin Core implementation has first-seen mempool
behavior. Once transaction t1 has been accepted, the transaction is
never removed from the mempool until mined, or double-spent by a
transaction in a block. The author's previously proposed replace-by-fee
replaced this behavior with simply accepting the transaction paying the
highest fee.

FSS RBF is a compromise between these two behaviors. Transactions may be
replaced by higher-fee paying transactions, provided that all outputs in
the previous transaction are still paid by the replacement. While not as
general as standard RBF, and with higher costs than standard RBF, this
still allows fees on a transaction to be increased after the fact with
less cost and higher efficiency than child-pays-for-parent in many
common situations; in some situations CPFP is unusable, leaving RBF as
the only option.


Semantics
---------

For reference, standard replace-by-fee has the following criteria for
determining whether to replace a transaction.

1) t2 pays more fees than t1

2) The delta fees paid by t2, t2.fee - t1.fee, are >= the minimum fee
   required to relay t2 (t2.size * min_fee_per_kb)

3) t2 pays more fees/kb than t1

FSS RBF adds the following additional criteria to replace-by-fee before
allowing a transaction t1 to be replaced with t2:

1) All outputs of t1 exist in t2 and pay >= the value in t1.

2) All outputs of t1 are unspent.

3) The order of outputs in t2 is the same as in t1 with additional new
   outputs at the end of the output list.

4) t2 only conflicts with a single transaction, t1

5) t2 does not spend any outputs of t1 (which would make it an invalid
   transaction, impossible to mine)
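
A minimal sketch of those additional checks, over illustrative transaction
objects rather than Bitcoin Core's actual data types (the standard RBF fee
criteria above are assumed to be checked separately):

    from typing import List, NamedTuple

    class TxOut(NamedTuple):
        value: int    # satoshis
        script: bytes

    class Tx(NamedTuple):
        txid: str
        inputs: List[str]  # outpoints spent, as "txid:n"
        outputs: List[TxOut]

    def fss_rbf_ok(t1: Tx, t2: Tx, spent: set, conflicts: set) -> bool:
        # Criteria 1 and 3: every t1 output reappears, in order, paying at
        # least the old value; t2 may only append new outputs at the end.
        if len(t2.outputs) < len(t1.outputs):
            return False
        for old, new in zip(t1.outputs, t2.outputs):
            if new.script != old.script or new.value < old.value:
                return False
        # Criterion 2: none of t1's outputs may already be spent.
        if any("%s:%d" % (t1.txid, n) in spent
               for n in range(len(t1.outputs))):
            return False
        # Criterion 4: t2 conflicts only with t1.
        if conflicts != {t1.txid}:
            return False
        # Criterion 5: t2 must not spend t1's own outputs.
        if any(op.startswith(t1.txid + ":") for op in t2.inputs):
            return False
        return True

A replacement that only adds outputs, or raises the value of existing ones,
passes; anything that drops, reorders, or reduces a previous output fails.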

These additional criteria respect the existing first-seen behavior of
the Bitcoin Core mempool implementation, such that once an address is
paid some amount of BTC, all subsequent replacement transactions will
pay an equal or greater amount. In short, FSS-RBF is zeroconf safe and
has no effect on the ability of attackers to doublespend (beyond, of
course, the fact that any changes whatsoever to mempool behavior are
potential zeroconf doublespend vulnerabilities).


Implementation
--------------

Pull-req for git HEAD: https://github.com/bitcoin/bitcoin/pull/6176

A backport to v0.10.2 is pending.

An implementation of fee bumping respecting FSS rules is available at:

https://github.com/petertodd/replace-by-fee-tools/blob/master/bump-fee.py


Usage Scenarios
---------------

Case 1: Increasing the fee on a single tx
-

We start with a 1-in-2-out P2PKH transaction t1, 226 bytes in size
with the minimal relay fee, 2.26uBTC. Increasing the fee while
respecting FSS-RBF rules requires the addition of one more txin, with
the change output value increased appropriately, resulting in
transaction t2, size 374 bytes. If the change txout is sufficient for
the fee increase, increasing the fee via CPFP requires a second
1-in-1-out transaction, 192 bytes, for a total of 418 bytes; if another
input is required, CPFP requires a 2-in-1-out tx, 340 bytes, for a total
of 566 bytes.

Benefits: 11% to 34%+ cost savings, and RBF can increase fees even in
  cases where the original transaction didn't have a change
  output.
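
The Case 1 arithmetic, using the usual approximate P2PKH sizes (~148 bytes
per input, ~34 per output, ~10 of overhead):

    tx_in, tx_out, overhead = 148, 34, 10
    t1    = 1*tx_in + 2*tx_out + overhead         # 226 bytes
    fss   = 2*tx_in + 2*tx_out + overhead         # 374 bytes
    cpfp1 = t1 + (1*tx_in + 1*tx_out + overhead)  # 226 + 192 = 418 bytes
    cpfp2 = t1 + (2*tx_in + 1*tx_out + overhead)  # 226 + 340 = 566 bytes
    print(1 - fss/cpfp1, 1 - fss/cpfp2)           # ~11% and ~34% savings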


Case 2: Paying multiple recipients in succession
------------------------------------------------

We have a 1-in-2-out P2PKH transaction t1, 226 bytes, that pays Alice.
We now need to pay Bob. With plain RBF we'd just add a new output and
reduce the value of the change address, a 90% savings. However with FSS
RBF, decreasing the value is not allowed, so we have to add an input.

If the change of t1 is sufficient to pay Bob, a second 1-in-2-out tx can
be created, 2*226=452 bytes in total. With FSS RBF we can replace t1
with a 2-in-3-out tx paying both, increasing the value of the change
output appropriately, resulting in a 408-byte transaction, saving 10%.

Similar to the above example, in the case where the change address of t1
is insufficient to pay Bob, the end result is one less transaction output
in the wallet, defragmenting it. Spending these outputs later on would
require two 148-byte inputs compared to one with RBF, resulting in an
overall savings of 25%.
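
And the Case 2 numbers, on the same size assumptions:

    tx_in, tx_out, overhead = 148, 34, 10
    two_txs = 2 * (1*tx_in + 2*tx_out + overhead)  # 226 + 226 = 452 bytes
    fss     = 2*tx_in + 3*tx_out + overhead        # 408 bytes
    print(1 - fss/two_txs)                         # ~10% saving up front
    # Later spends: two 148-byte inputs for the two-tx wallet versus one
    # for the FSS-RBF wallet, giving the overall figure.
    print(1 - (fss + 1*tx_in) / (two_txs + 2*tx_in))  # ~25% overall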


Case 3: Paying the same recipient multiple times
------------------------------------------------

For example, consider the situation of an exchange, Acme Bitcoin Sales,
that keeps the majority of coins in cold storage. Acme wants to move
funds to cold storage at the lowest possible cost, taking advantage of
periods of higher capacity (inevitable due to the Poisson nature of
block creation). At the same time they would like to defragment their
incoming outputs to keep redemption costs low, 

Re: [Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Kevin Greene
This is true, but the device doesn't know if the LAN it's on is a safe
network or a hotel wifi, for example. So there would be a tricky UX there.
You'd have to ask the user during setup whether this is a trusted LAN or
not, or something like that. That may not be an issue though depending on the
nature of the product. For example, Chromecast doesn't need any security
protections against trolls on the same LAN. I guess it just depends on what
you're planning to build.

On Mon, May 25, 2015 at 9:56 PM, Matt Whitlock b...@mattwhitlock.name
wrote:

 Who would be performing a Sybil attack against themselves? We're talking
 about a LAN here. All the nodes would be under the control of the same
 entity. In that case, you actually want them all connecting solely to a
 central hub node on the LAN, and the hub node should connect to diverse
 and unpredictable other nodes on the Bitcoin network.


 On Monday, 25 May 2015, at 9:46 pm, Kevin Greene wrote:
  This is something you actually don't want. In order to make it as
  difficult as possible for an attacker to perform a sybil attack, you want
  to choose a set of peers that is as diverse and unpredictable as possible.
 
 
  On Mon, May 25, 2015 at 9:37 PM, Matt Whitlock b...@mattwhitlock.name
  wrote:
 
   This is very simple to do. Just ping the "all nodes" address (ff02::1)
   and try connecting to TCP port 8333 of each node that responds. Shouldn't
   take more than a few milliseconds on any but the most densely populated
   LANs.
  
  
   On Monday, 25 May 2015, at 11:06 pm, Jim Phillips wrote:
    Is there any work being done on using some kind of zero-conf service
    discovery protocol so that lightweight clients can find a full node on
    the same LAN to peer with rather than having to tie up WAN bandwidth?

    I envision a future where lightweight devices within a home use SPV over
    WiFi to connect with a home server which in turn relays the transactions
    they create out to the larger and faster relays on the Internet.

    In a situation where there are hundreds or thousands of small SPV devices
    in a single home (if 21, Inc. is successful) monitoring the blockchain,
    this could result in lower traffic across the slow WAN connection. And
    yes, I realize it could potentially take a LOT of these devices before
    the total bandwidth is greater than downloading a full copy of the
    blockchain, but there's other reasons to host your own full node --
    trust being one.

    --
    *James G. Phillips IV*
    https://plus.google.com/u/0/113107039501292625391/posts
    http://www.linkedin.com/in/ergophobe

    *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
    -- David Ogilvy*

    *This message was created with 100% recycled electrons. Please think
    twice before printing.*
  
  
  



Re: [Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Peter Todd
On Tue, May 26, 2015 at 12:52:07AM -0400, Matt Whitlock wrote:
 On Monday, 25 May 2015, at 11:48 pm, Jim Phillips wrote:
  Do any wallets actually do this yet?
 
 Not that I know of, but they do seed their address database via DNS, which 
 you can poison if you control the LAN's DNS resolver. I did this for a 
 Bitcoin-only Wi-Fi network I operated at a remote festival. We had well over 
 a hundred lightweight wallets, all trying to connect to the Bitcoin P2P 
 network over a very bandwidth-constrained Internet link, so I poisoned the 
 DNS and rejected all outbound connection attempts on port 8333, to force all 
 the wallets to connect to a single local full node, which had connectivity to 
 a single remote node over the Internet. Thus, all the lightweight wallets at 
 the festival had Bitcoin network connectivity, but we only needed to backhaul 
 the Bitcoin network's transaction traffic once.

Interesting!

What festival was this?

-- 
'peter'[:-1]@petertodd.org
03ce9f2f90736ab7bd24d29f40346057f9e217b3753896bb




[Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Jim Phillips
Is there any work being done on using some kind of zero-conf service
discovery protocol so that lightweight clients can find a full node on the
same LAN to peer with rather than having to tie up WAN bandwidth?

I envision a future where lightweight devices within a home use SPV over
WiFi to connect with a home server which in turn relays the transactions
they create out to the larger and faster relays on the Internet.

In a situation where there are hundreds or thousands of small SPV devices
in a single home (if 21, Inc. is successful) monitoring the blockchain,
this could result in lower traffic across the slow WAN connection.  And
yes, I realize it could potentially take a LOT of these devices before the
total bandwidth is greater than downloading a full copy of the blockchain,
but there's other reasons to host your own full node -- trust being one.

--
*James G. Phillips IV*
https://plus.google.com/u/0/113107039501292625391/posts
http://www.linkedin.com/in/ergophobe

*Don't bunt. Aim out of the ball park. Aim for the company of immortals.
-- David Ogilvy*

 *This message was created with 100% recycled electrons. Please think twice
before printing.*


Re: [Bitcoin-development] Zero-Conf for Full Node Discovery

2015-05-25 Thread Matt Whitlock
On Monday, 25 May 2015, at 11:48 pm, Jim Phillips wrote:
 Do any wallets actually do this yet?

Not that I know of, but they do seed their address database via DNS, which you 
can poison if you control the LAN's DNS resolver. I did this for a Bitcoin-only 
Wi-Fi network I operated at a remote festival. We had well over a hundred 
lightweight wallets, all trying to connect to the Bitcoin P2P network over a 
very bandwidth-constrained Internet link, so I poisoned the DNS and rejected 
all outbound connection attempts on port 8333, to force all the wallets to 
connect to a single local full node, which had connectivity to a single remote 
node over the Internet. Thus, all the lightweight wallets at the festival had 
Bitcoin network connectivity, but we only needed to backhaul the Bitcoin 
network's transaction traffic once.



 On May 25, 2015 11:37 PM, Matt Whitlock b...@mattwhitlock.name wrote:
 
  This is very simple to do. Just ping the "all nodes" address (ff02::1) and
  try connecting to TCP port 8333 of each node that responds. Shouldn't take
  more than a few milliseconds on any but the most densely populated LANs.
 
 
  On Monday, 25 May 2015, at 11:06 pm, Jim Phillips wrote:
   Is there any work being done on using some kind of zero-conf service
   discovery protocol so that lightweight clients can find a full node on
   the same LAN to peer with rather than having to tie up WAN bandwidth?

   I envision a future where lightweight devices within a home use SPV over
   WiFi to connect with a home server which in turn relays the transactions
   they create out to the larger and faster relays on the Internet.

   In a situation where there are hundreds or thousands of small SPV devices
   in a single home (if 21, Inc. is successful) monitoring the blockchain,
   this could result in lower traffic across the slow WAN connection. And
   yes, I realize it could potentially take a LOT of these devices before
   the total bandwidth is greater than downloading a full copy of the
   blockchain, but there's other reasons to host your own full node -- trust
   being one.

   --
   *James G. Phillips IV*
   https://plus.google.com/u/0/113107039501292625391/posts
   http://www.linkedin.com/in/ergophobe

   *Don't bunt. Aim out of the ball park. Aim for the company of immortals.
   -- David Ogilvy*

   *This message was created with 100% recycled electrons. Please think
   twice before printing.*
 
