Re: [Bitcoin-development] Scaling Bitcoin with Subchains

2015-06-16 Thread Andrew
On Tue, Jun 16, 2015 at 6:17 PM, Peter Todd p...@petertodd.org wrote:

 Merge-mined sidechains are not a scaling solution any more than SPV is a
 scaling solution because they don't solve the scaling problem for
 miners.

 Some kind of treechain like sidechain / subchains where what part of the
 tree miners can mine is constrained to preserve fairness could be both a
 scaling solution, and decentralized, but no-one has come up with a solid
 design yet that's ready for production. (my treechains don't qualify for
 transactions yet; maybe for other proof-of-publication uses)


Well, doesn't my proposal solve the miner decentralization problem? Only the
direct parent and children chains are merge mined. To be more clear, let
the top chain have level 1. Each chain that is a child of a chain of
level n has level n+1. For any chain C, a block is accepted if the hash of
its header has an appropriate number of trailing zeros (as usual). It can
also be accepted via special transactions, as I will explain. Let C be a
chain of level n. Let C0,C1,...,C9 be its children (each of level n+1).
For any i in {0,1,...,9}, any solution to the mining problem of C can be
inserted as a special transaction inside Ci, and this enables the block to
be accepted in Ci (so C and C0,C1,...,C9 are merge mined). But, for any i in
{0,1,...,9} and any j in {0,1,...,9}, a solution to the mining problem of
C cannot be inserted as a special transaction inside the child Cij of Ci. So
that means not all of the chains are merge mined together, only localised
parts, right?
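To spell out the rule, here is a rough sketch in Python (all names here are
mine and hypothetical; a "solution" is just a header hash meeting a chain's
target):

    # Rough sketch of the acceptance rule described above; not a real
    # implementation, just the shape of the check.
    def meets_target(header_hash: int, target: int) -> bool:
        return header_hash <= target

    def block_accepted(chain, block) -> bool:
        # Usual rule: the block meets its own chain's proof-of-work target.
        if meets_target(block.header_hash, chain.target):
            return True
        # Merge-mining rule: a special transaction embedding a valid
        # solution for the DIRECT parent chain C makes a block acceptable
        # on a child Ci -- but a solution for C is never accepted on a
        # grandchild Cij, so merge mining stays local to one level.
        return any(tx.solved_chain is chain.parent and
                   meets_target(tx.solved_header_hash, chain.parent.target)
                   for tx in block.special_txs)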

By the way, we can eventually get rid of the 1 MB block size limit by
requiring more than just the header to be hashed, but that can be done in
the future as a soft fork with sidechains, and is a side topic.


-- 
PGP: B6AC 822C 451D 6304 6A28  49E9 7DB7 011C D53B 5647


Re: [Bitcoin-development] Scaling Bitcoin with Subchains

2015-06-13 Thread Andrew
First of all, I added more info to bitcointalk.org:
https://bitcointalk.org/index.php?topic=1083345.0

On Sat, Jun 13, 2015 at 2:39 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:


 In your proposal, transactions go to a chain based on the addresses involved.
 We can reasonably assume that different people's wallets will tend to be
 distributed uniformly over several sidechains to hold their transactions
 (if they're not, there is no scaling benefit anyway...). That means that
 for an average transaction, you will need a cross-chain transfer in order
 to get the money to the recipient (as their wallet will usually be
 associated to a chain that is different from your own). Either you use an
 atomic swap (which actually means you end up briefly with coins in the
 destination chain, and require multiple transactions and a medium delay),
 or you use the 2way peg transfer mechanism (which is very slow, and reduces
 the security the recipient has to SPV).

 Whatever you do, the result will be that most transactions are:
 * Slower (a bit, or a lot, depending on what mechanism you use).
 * More complex, with more failure modes.
 * Require more and larger transactions (causing a total net extra load on
 all verifiers together).

 And either:
 * Less secure (because you rely on a third party to do an atomic swap
 with, or because of the 2 way peg transfer mechanism which has SPV security)
 * Doesn't offer any scaling benefit (because the recipient needs to fully
 validate both his own and the receiver chain).

 In short, you have not added any scaling at all, or reduced the security
 of the system significantly, as well as made it significantly less
 convenient to use.

 So no, sidechains are not a direct means for solving any of the scaling
 problems Bitcoin has. What they offer is a mechanism for easier
 experimentation, so that new technology can be built and tested without
 needing to introduce a new currency first (with the related speculative and
 network effect problems). That experimentation could eventually lead us to
 discover mechanisms for better scaling, or for more scalability/security
 tradeoffs (see for example the Witness Segregation that Elements Alpha has).


Thanks, Pieter, for your reply. The chain a transaction goes to does not
have to be based on the address (there can be a way for the protocol to
choose), but yes, the address scheme can be a good default. As I said, there
will be an incentive for empty chains to fill up, since they will require
lower fees (so the scaling benefit isn't dependent on a uniform distribution
of addresses).

The rule I mentioned is that at most 2 different chains can be involved in
one transaction. A transfer within a single chain is easy. A transfer from
a parent or grandparent chain to its child or grandchild chain is also easy,
since the child/grandchild always trusts its parent/grandparent. A transfer
from a child/grandchild to a parent/grandparent is also easy (no delay),
since the parent/grandparent commits to its children (which recursively
commit to their children). As mentioned, I am just doing a form of the block
extensions that Adam Back described; the chains are not independent. For a
transfer from one chain to another chain at the same level (sibling chains),
the transaction is recorded on both sibling chains (yes, there is some
duplication, but this is limited by allowing at most 2 sibling chains to
participate in a transaction). They both have to be consistent, and this
will be ensured by the miners of their parent chain (those miners will
commit to their blocks).
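Roughly, in Python (the helper predicates are hypothetical, passed in here
just to keep the sketch self-contained):

    # Sketch of the transfer rules above; names are illustrative only.
    def chains_involved(tx):
        return {i.chain for i in tx.inputs} | {o.chain for o in tx.outputs}

    def transfer_allowed(tx, is_ancestor, recorded_on):
        chains = chains_involved(tx)
        if len(chains) == 1:
            return True                   # within a single chain: easy
        if len(chains) != 2:
            return False                  # rule: at most 2 chains per tx
        a, b = chains
        if is_ancestor(a, b) or is_ancestor(b, a):
            return True                   # parent/grandparent <-> descendant
        if a.parent is b.parent:
            # Sibling chains: the tx must be recorded on BOTH chains, and
            # the parent chain's miners enforce their consistency.
            return recorded_on(tx, a) and recorded_on(tx, b)
        return False                      # unrelated chains: not allowed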

So no, I don't see how it's slower, except that there needs to be some
delay for communication between children/grandchildren and
parents/grandparents, proportional to the number of levels (O(log n) in the
total number of chains n). Even a small number of levels corresponds to a
large transaction volume: 5 levels correspond to the equivalent of 625 MB
blocks.
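For the record, the capacity arithmetic as a quick check (note the 625 MB
figure works out if each chain has 5 children, i.e. 5^(5-1) = 625; with 10
children per chain it would be 10^4 = 10,000 MB):

    # Back-of-the-envelope capacity: with n levels and f children per chain,
    # the deepest level alone holds f**(n-1) one-megabyte chains.
    def deepest_level_mb(levels: int, fanout: int, block_mb: int = 1) -> int:
        return block_mb * fanout ** (levels - 1)

    assert deepest_level_mb(5, 5) == 625       # the figure quoted above
    assert deepest_level_mb(5, 10) == 10_000   # with 10 children per chain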

Security-wise, it is true that the top-level chain will likely have higher
security (more hash power), but at least you can fine-tune the fees you pay
according to the level of security that is acceptable to you, and as Bitcoin
grows, level 2, 3, and 4 chains can be regarded as almost as secure as the
level 1 chain, since there will still be a lot of hash power on them. And
anyone can run a full node on their chains of interest, so there is no
SPV-level security here; it is full-level security.

Transactions are not significantly more complex. Miners just have to deal
with child chains, and if there is a scaling benefit, we should not be
scared of some complexity. This is probably the simplest way of scaling I
can think of.

The recipient will validate their own chain fully and will just need the
headers of the relevant parent chains to see whether an output from the
other chain involved in a transaction is really valid. They can also get
the headers of the sibling chain involved in the transaction if they want
to validate the work of the miners on those parent chains. They don't need
the full blocks of the parent and sibling chains involved.

Re: [Bitcoin-development] Scaling Bitcoin with Subchains

2015-05-27 Thread Andrew
? Or is
it a combination of both things? You should disclose this to the people
following your words because they trust you as an experienced professional
with a good reputation, and it would be dishonest to not disclose this to
them. (same goes for Gavin)

Overall, I think this is the only system I have heard of that can scale
without a block size increase while preserving decentralization. Lightning
by itself, for example, requires a block size increase that depends on how
many such Lightning contracts are being made, so it relies on people
changing the protocol, which is obviously less secure and robust than a
fixed protocol. But I am not ruling out any other possibilities, so other
things should also be considered. Eventually, though, we may have to decide
how to scale without knowing for sure whether the chosen scaling method is
the ultimate one. I think this is a good candidate for that, and it can also
be reversed later on without changing the original protocol as it stood
before the softfork. Actually, we can just make nodes advertise whether they
support the soft fork or not, and if a better scaling protocol comes along,
those nodes can switch to advertising the better one. So it is quite a
harmless soft fork to make, in my opinion.


On Mon, May 25, 2015 at 6:15 PM, Mike Hearn m...@plan99.net wrote:

 Hi Andrew,

 Your belief that Bitcoin has to be constrained by the assumption that
 hardware will never improve is extremist, but regardless, your concerns are
 easy to
 assuage: there is no requirement that the block chain be stored on hard
 disks. As you note yourself the block chain is used for building/auditing
 the ledger. Random access to it is not required, if all you care about is
 running a full node.

 Luckily this makes it a great fit for tape backup. Technology that can
 store 185 terabytes *per cartridge* has already been developed:


 http://www.itworld.com/article/2693369/sony-develops-tape-tech-that-could-lead-to-185-tb-cartridges.html

 As you could certainly share costs of a block chain archive with other
 people, the cost would not be a major concern even today. And it's
 virtually guaranteed that humanity will not hit a storage technology wall
 in 2015.

 If your computer is compromised then all bets are off. Validating the
 chain on a compromised host is meaningless.




-- 
PGP: B6AC 822C 451D 6304 6A28  49E9 7DB7 011C D53B 5647


[Bitcoin-development] Scaling Bitcoin with Subchains

2015-05-19 Thread Andrew
Hi

I briefly mentioned something about this in the bitcoin-dev IRC room. In
general, it seems experts (like sipa, i.e. Pieter) are against using
sidechains as a way of scaling. As I only have a high-level understanding
of the Bitcoin protocol, I cannot be sure whether what I want to do is
actually defined as a sidechain, but let me just propose it, and please let
me know whether it can work, and if not, why not (I'm not scared of digging
into more technical resources in order to fully understand). I do have a
good academic/practical background for Bitcoin, and I'm ready to contribute
code if needed (one of my contributions is a paper wallet creator written
in C).

The main problem I see with increasing the block size limit is that it
increases the amount of storage required to fully validate all transactions
for, say, 100 years (a person's life). With 1 MB blocks, you can store all
lifetime transactions on a 5 TB drive, which basically any regular user can
manage. With 10 MB blocks, you need a 50 TB drive, which is not accessible
to regular users. Yes, it's possible that in the future hard drive
technology will get cheaper and smaller, but that hasn't happened yet, so we
can't just say it "should be doable at the rate of Moore's law, etc."; we
need to know that it is accessible for everyone, now. Also, don't forget
that human life expectancy can increase with time as well. I know, it sounds
silly to use a human lifetime as a measurement of how far back each user
should be able to store transactions, but what is a better measurement? This
is a technology made for people (i.e., humans), and the important part is
that it is for regular people and not just well-privileged people. You can
search my last four emails for some more calculations.

What sipa told me on the IRC channel is that Bitcoin Core does not care
about old transactions; it only looks at the current blocks. Yes, that
makes sense, but how do you know that your machine wasn't compromised when
validating the previous blocks? And what if you want to check some old
transactions (assuming you didn't index everything)? What if some of your
old transaction data was lost or corrupted? I think it is clear that it is
useful to be able to validate all blocks (covering 100 years) rather than
just a pruned part. It empowers people to have as much information about
Bitcoin transactions as large data centers do; transactions that may include
government or corporate corruption. This is the key to how Bitcoin enables
transparency for those who should be transparent (individual users with
private addresses can still remain anonymous). Also, 5 TB takes about 20
days to sync starting fresh on a regular computer, so it allows easy entry
into the system.

So assuming we agree that people should be able to store roughly a lifetime
of transactions, we need 1 MB blocks. But of course, this leads to huge
transaction costs, and small purchases will be out of reach. So to fix
this, I propose adding 10 1 MB chains below the main chain (sorry, on the
IRC channel I said 10 10 MB chains by mistake), so effectively you have a
new 10 MB chain that is partitioned into 10 parts. You can also add a third
level: 100 1 MB chains, and keep going like that. The idea is that when you
make a large transaction, you put it through the top chain. When you make a
medium-sized transaction, you put it through one of the middle chains, where
it will be verified (mined) by that middle chain, and the top chain will
verify the aggregate transactions of the middle chain. If you have a small
transaction, you put it through one of the bottom chains: the bottom chain
verifies it, the middle chain verifies the aggregate transactions of the
bottom chain, and the top chain verifies the aggregate transactions of the
middle chain. By aggregate transaction, I mean the net result of multiple
transactions, and I suppose it could be 20 transactions belonging to one
sibling chain for level 2, or 200 transactions for level 3, etc...
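To make the tiering concrete, here is a minimal sketch (the thresholds are
made up for illustration; the protocol would fix its own):

    # Illustrative routing rule: large transactions go to the top chain,
    # smaller ones descend to deeper, cheaper levels. Level 1 = top chain.
    def choose_level(amount_btc: float, thresholds=(100.0, 1.0)) -> int:
        for level, cutoff in enumerate(thresholds, start=1):
            if amount_btc >= cutoff:
                return level
        return len(thresholds) + 1   # smallest payments: bottom level

    assert choose_level(250.0) == 1  # large: top chain
    assert choose_level(5.0) == 2    # medium: a middle chain
    assert choose_level(0.01) == 3   # small: a bottom chain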

Now, how does the system decide which of the 10 chains a medium-sized
transaction goes to? I propose just taking some simple function of the
input addresses mod 10. That way, you can keep randomly generating a wallet
until you get one whose addresses all map to one of the 10 chains (the
distribution is even), so someone can choose one of the 10 chains and store
only the transactions that belong to that chain. They should also choose a
chain from level 3, etc... So in effect, they will be storing a chain with
block size O(n), where n is the number of levels. They may store multiple
sibling chains at one level if they want to keep track of other people's
transactions, such as those of their government MP, or perhaps because they
want to have a separate, more anonymous identity with a separate series of
sibling chains. This will increase the storage size, but the increase will
be proportional to the number of things you want to keep track of (at least
this kind of system
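The exact mapping function is left open above; as one concrete (purely
hypothetical) possibility:

    # One way to realize "some simple function of the input addresses
    # mod 10": hash the address and reduce modulo the fanout.
    import hashlib

    def chain_index(address: str, fanout: int = 10) -> int:
        digest = hashlib.sha256(address.encode()).digest()
        return int.from_bytes(digest, "big") % fanout

    # Wallet generation as described: keep generating addresses until they
    # all map to the sibling chain you have chosen to store.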

Re: [Bitcoin-development] Block Size Increase

2015-05-09 Thread Andrew
The nice thing about 1 MB is that you can store ALL bitcoin transactions
relevant to your lifetime (~100 years) on one 5 TB hard drive (1 MB x 6
blocks/hour x 24 hours x 365 days x 100 years = 5,256,000 MB). Any regular
person can run a full node and store this 5 TB hard drive easily at their
home. With 10 MB blocks you need a 50 TB drive just for your bitcoin
transactions! This is not doable for most regular people due to space and
monetary constraints. Being able to review all transactions relevant to your
lifetime is one of the key properties of Bitcoin. How else can people audit
the financial transactions of companies and governments that are using the
Bitcoin blockchain? How else can we achieve this level of transparency that
is essential to keeping corrupt governments/companies in check? How else can
we keep track of our own personal transactions without relying on others to
keep track of them for us? As time passes, storage technology may improve,
but so may human life expectancy. So yes, in this sense, 1 MB just may be
the magic number.
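For reference, the arithmetic in code form:

    # The arithmetic behind the figures above (6 blocks per hour).
    def lifetime_storage_mb(block_mb: int, years: int = 100) -> int:
        return block_mb * 6 * 24 * 365 * years

    assert lifetime_storage_mb(1) == 5_256_000     # ~5.3 TB with 1 MB blocks
    assert lifetime_storage_mb(10) == 52_560_000   # ~53 TB with 10 MB blocks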

Assuming that we have a perfectly functional off-chain transaction system,
what do we actually gain by going from 1 MB to 1000 MB (my approximate
limit for regular users having enough processing power)? If there is no
clear and substantial gain, then it is foolish to venture into this
territory, i.e. KEEP IT AT 1 MB! For example, Angel said he wants to see
computers transacting with computers at super speeds. Why do you need to do
this on the main chain? You would lose all the transparency of the current
system, an essential feature.


On Fri, May 8, 2015 at 10:36 PM, Angel Leon gubat...@gmail.com wrote:

 I believe 100 MB is still very conservative; I think that's barely 666 tps.

 I also find it not very creative that people are imagining these limits
 for 10 billion people using bitcoin, I think bitcoin's potential is
 realized with computers transacting with computers, which can eat those 666
 tps in a single scoop (what if bittorrent developers got creative with
 seeding, or someone created a decentralized paid itunes on top of bitcoin,
 or the openbazaar developers actually pulled a decentralized amazon with no
 off-chain transaction since they want the thing to be fully decentralized,
 bitcoin would collapse right away)

 I truly hope people see past regular people running nodes at home, that's
 never going to happen. This should be about the miners' networking, storage
 and cpu capacity. They will have gigabit access, they will have a shitload of
 storage, and they already have plenty of processing power, all of which are
 only going to get cheaper.

 In order to have the success we all dream we'll need gigabit blocks. Let's
 hope adoption remains slow.

 http://twitter.com/gubatron

 On Fri, May 8, 2015 at 1:51 PM, Alan Reiner etothe...@gmail.com wrote:

 Actually I believe that side chains and off-main-chain transactions will
 be a critical part for the overall scalability of the network.  I was
 actually trying to make the point that (insert some huge block size here)
 will be needed to even accommodate the reduced traffic.

 I believe that it is definitely over 20MB. If it was determined to be 100
 MB ten years from now, that wouldn't surprise me.

 Sent from my overpriced smartphone
 On May 8, 2015 1:17 PM, Andrew onelinepr...@gmail.com wrote:



 On Fri, May 8, 2015 at 2:59 PM, Alan Reiner etothe...@gmail.com wrote:


 This isn't about everyone's coffee.  This is about an absolute
 minimum amount of participation by people who wish to use the network.   If
 our goal is really for bitcoin to really be a global, open transaction
 network that makes money fluid, then 7 tps is already a failure.  If even 5%
 of the world (350M people) was using the network for 1 tx per month
 (perhaps to open payment channels, or shift money between side chains),
 we'll be above 100 tps.  And that doesn't include all the non-individuals
 (organizations) that want to use it.


 The goals of a global transaction network and everyone must be able
 to run a full node with their $200 dell laptop are not compatible.  We
 need to accept that a global transaction system cannot be fully/constantly
 audited by everyone and their mother.  The important feature of the network
 is that it is open and anyone *can* get the history and verify it.  But not
 everyone is required to.   Trying to promote a system where the history
 can be forever handled by a low-end PC is already falling out of reach,
 even with our minuscule 7 tps. Clinging to that goal needlessly limits the
 capability for the network to scale to be a useful global payments system


 These are good points and they got me thinking (but I think you're wrong). If
 we really want each of the 10 billion people soon using bitcoin once per
 month, that will require 500MB blocks. That's about 2 TB per month. And if
 you relay it to 4 peers, it's 10 TB per month. Which I suppose is doable
 for a home desktop, so you can just run a pruned full node with all
 transactions

Re: [Bitcoin-development] Block Size Increase

2015-05-09 Thread Andrew
On Sat, May 9, 2015 at 12:53 PM, Justus Ranvier justusranv...@riseup.net
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 05/09/2015 02:02 PM, Andrew wrote:
  The nice thing about 1 MB is that you can store ALL bitcoin
  transactions relevant to your lifetime (~100 years) on one 5 TB
  hard drive (1*6*24*365*100=5256000). Any regular person can run a
  full node and store this 5 TB hard drive easily at their home. With
  10 MB blocks you need a 50 TB drive just for your bitcoin
  transactions! This is not doable for most regular people due to
  space and monetary constraints. Being able to review all
  transactions relevant to your lifetime is one of the key important
  properties of Bitcoin. How else can people audit the financial
  transactions of companies and governments that are using the
  Bitcoin blockchain? How else can we achieve this level of
  transparency that is essential to keeping corrupt
  governments/companies in check? How else can we keep track of our
  own personal transactions without relying on others to keep track
  of them for us? As time passes, storage technology may increase,
  but so may human life expectancy. So yes, in this sense, 1 MB just
  may be the magic number.

 How many individuals and companies do you propose will ever use
 Bitcoin (order of magnitude estimates are fine)

 Whatever number you select above, please describe approximately how
 many lifetime Bitcoin transactions each individual and company will be
 capable of performing with a 1 MB block size limit.


I would expect at least 10 billion people (directly or indirectly) to be
using it at once for at least 100 years. But I think it's pointless to
guess how many will use it; better to make the system ready for 10 billion
people. The point is that small transactions will be done
off-chain. The actual Bitcoin blockchain will only show very large
transactions (such as a military purchasing a new space shuttle) or
aggregate transactions (i.e. a transaction consisting of multiple smaller
transactions done off-chain). There can also be multiple layers of chains
creating a tree-like structure. Each chain above will validate the
aggregate transactions of the chain below. You can think of the Bitcoin
blockchain as the hypervisor that manages all the other chains. While
your coffee purchase 4 days ago may not be directly visible within the
Bitcoin blockchain (the main chain), you can trace it down the sequence of
chains until you find it. Same with that fancy dinner your government MP
paid for using public funds. You don't have to store a copy of all
transactions that occurred for each chain in existence, but rather just the
transactions for the chains that you use or are relevant to you.

As you see, this kind of system is totally transparent to all users and
totally flexible (you can choose your sub chains). The flexibility also
allows you to have arbitrarily fast transactions (choose a chain or
lightning channel attached to that chain that supports it), and you can
enjoy a wide variety of features from other chains, like using one chain
that is known to have good anonymity properties.








-- 
PGP: B6AC 822C 451D 6304 6A28  49E9 7DB7 011C D53B 5647

Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Andrew
On Fri, May 8, 2015 at 2:59 PM, Alan Reiner etothe...@gmail.com wrote:


 This isn't about everyone's coffee.  This is about an absolute minimum
 amount of participation by people who wish to use the network.   If our
 goal is really for bitcoin to really be a global, open transaction network
 that makes money fluid, then 7 tps is already a failure.  If even 5% of the
 world (350M people) was using the network for 1 tx per month (perhaps to
 open payment channels, or shift money between side chains), we'll be above
 100 tps.  And that doesn't include all the non-individuals (organizations)
 that want to use it.


 The goals of a global transaction network and everyone must be able to
 run a full node with their $200 dell laptop are not compatible.  We need
 to accept that a global transaction system cannot be fully/constantly
 audited by everyone and their mother.  The important feature of the network
 is that it is open and anyone *can* get the history and verify it.  But not
 everyone is required to.   Trying to promote a system where the history
 can be forever handled by a low-end PC is already falling out of reach,
 even with our minuscule 7 tps. Clinging to that goal needlessly limits the
 capability for the network to scale to be a useful global payments system


These are good points and they got me thinking (but I think you're wrong). If we
really want each of the 10 billion people soon using bitcoin once per
month, that will require 500MB blocks. That's about 2 TB per month. And if
you relay it to 4 peers, it's 10 TB per month. Which I suppose is doable
for a home desktop, so you can just run a pruned full node with all
transactions from the past month. But how do you sync all those
transactions if you've never done this before or it's been a while since
you did? I think it currently takes at least 3 hours to fully sync 30 GB of
transactions. So 2 TB will take 8 days, then you take a bit more time to
sync the days that passed while you were syncing. So that's doable, but at
a certain point, like 10 TB per month (still only 5 transactions per month
per person), you will need 41 days to sync that month, so you will never
catch up. So I think in order to keep the very important property of anyone
being able to start clean and verify the whole thing, we need to think of
bitcoin as a system that does transactions for a large number of users at
once in one transaction, and not as a system on which each person makes a
~monthly transaction. We therefore need to rely on sidechains,
treechains, lightning channels, etc...
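In code form, the catch-up arithmetic above (assuming my rough 30 GB / 3
hours, i.e. ~10 GB/hour, sync rate):

    # If syncing a month of blocks takes longer than a month, you never
    # catch up.
    def days_to_sync(tb_per_month: float, gb_per_hour: float = 10.0) -> float:
        return tb_per_month * 1000 / gb_per_hour / 24

    assert int(days_to_sync(2)) == 8     # 2 TB/month: ~8 days, feasible
    assert int(days_to_sync(10)) == 41   # 10 TB/month: 41 days > 31, hopeless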

I'm not a bitcoin wizard and this is just my second post on this mailing
list, so I may be missing something. So please someone, correct me if I'm
wrong.




 On 05/07/2015 03:54 PM, Jeff Garzik wrote:

  On Thu, May 7, 2015 at 3:31 PM, Alan Reiner etothe...@gmail.com wrote:


  (2) Leveraging fee pressure at 1MB to solve the problem is actually
 really a bad idea.  It's really bad while Bitcoin is still growing, and
 relying on fee pressure at 1 MB severely impacts attractiveness and
 adoption potential of Bitcoin (due to high fees and unreliability).  But
 more importantly, it ignores the fact that 7 tps is pathetic for a
 global transaction system.  It is a couple orders of magnitude too low for
 any meaningful commercial activity to occur.  If we continue with a cap of
 7 tps forever, Bitcoin *will* fail.  Or at best, it will fail to be
 useful for the vast majority of the world (which probably leads to
 failure).  We shouldn't be talking about fee pressure until we hit 700 tps,
 which is probably still too low.

  [...]

  1) Agree that 7 tps is too low

  2) Where do you want to go?  Should bitcoin scale up to handle all the
 world's coffees?

  This is hugely unrealistic.  700 tps is 100MB blocks, 14.4 GB/day --
 just for a single feed.  If you include relaying to multiple nodes, plus
 serving 500 million SPV clients en grosse, who has the capacity to run such
 a node?  By the time we get to fee pressure, in your scenario, our network
 node count is tiny and highly centralized.

  3) In RE fee pressure -- Do you see the moral hazard to a software-run
 system?  It is an intentional, human decision to flood the market with
 supply, thereby altering the economics, forcing fees to remain low in the
 hopes of achieving adoption.  I'm pro-bitcoin and obviously want to see
 bitcoin adoption - but I don't want to sacrifice every decentralized
 principle and become a central banker in order to get there.





Re: [Bitcoin-development] Block Size Increase

2015-05-07 Thread Andrew
I'm mainly just an observer on this; I mostly agree with Pieter. Also, I
think the main reason why people like Gavin and Mike Hearn are trying to
rush this through is that they have some kind of apps that depend on
zero-conf instant transactions, which would of course require more traffic
on the blockchain. I think people like Gavin or Mike should state clearly
what kind of (rigorous) system for instant transactions would be
satisfactory for use in their applications. Be it Lightning or something
similar, what is good enough? (And no, zero conf is not a real, secure
system.) Then once we know what is good enough for them (and everyone else),
we can implement it as a soft fork into the protocol, and it's a win-win
situation for both sides (we can also benefit from all the new users people
like Mike are trying to bring in).

On Thu, May 7, 2015 at 10:52 AM, Jorge Timón jti...@jtimon.cc wrote:

 On Thu, May 7, 2015 at 11:25 AM, Mike Hearn m...@plan99.net wrote:
  I observed to Wladimir and Gavin in private that this timeline meant a
 change to the block size was unlikely to get into 0.11, leaving only 0.12,
 which would give everyone only a few months to upgrade in order to fork the
 chain by the end of the winter growth season. That seemed tight.

 Can you please elaborate on what terrible things will happen if we
 don't increase the block size by winter this year?
 I assume that you are expecting full blocks by then, have you used any
 statistical technique to come up with that date or is it just your
 guess?
 Because I love wild guesses and mine is that full 1 MB blocks will not
 happen until June 2017.

  What we need to see right now is leadership and a plan, that fits in the
  available time window.
 
 
  Certainly a consensus in this kind of technical community should be a
  basic requirement for any serious commitment to blocksize increase.
 
 
  I'm afraid I have come to disagree. I no longer believe this community
 can
  reach consensus on anything protocol related. Some of these arguments
 have
  dragged on for years. Consensus isn't even well defined - consensus of
 who?
  Anyone who shows up? And what happens when, inevitably, no consensus is
  reached? Stasis forever?

 We've successfully reached consensus on several softfork proposals
 already.
 I agree with others that hardforks need to be uncontroversial and there
 should be consensus about them.
 If you have other ideas for the criteria for hardfork deployment, I'm all
 ears.
 I just hope that by "what we need to see right now is leadership" you
 don't mean something like "when Gavin and Mike agree it's enough to
 deploy a hardfork" when you go from vague to concrete.


  Long-term incentive compatibility requires that there be some fee
  pressure, and that blocks be relatively consistently full or very nearly
  full.
 
 
  I disagree. When the money supply eventually dwindles I doubt it will be
 fee
  pressure that funds mining, but as that's a long time in the future, it's
  very hard to predict what might happen.

 Oh, so your answer to "bitcoin will eventually need to live on fees, and
 we would like to know more about what that will look like" is "bitcoin is
 broken long term, but that's far away in the future, so let's just worry
 about the present".
 I agree that it's hard to predict that future, but having some
 competition for block space would actually help us get more data on a
 similar situation to be able to predict that future better.
 What you want to avoid at all cost (the block size actually being
 used), I see as the best opportunity we have to look into the future.

  What we see today are
  transactions enjoying next-block confirmations with nearly zero pressure
  to include any fee at all (though many do because it makes wallet code
  simpler).
 
 
  Many do because free transactions are broken - the relay limiter means
  whether a free transaction actually makes it across the network or not is
  basically pot luck and there's no way for a wallet to know, short of
 either
  trying it or actually receiving every single transaction and repeating
 the
  calculations. If free transactions weren't broken for all non-full nodes
  they'd probably be used a lot more.

 Free transactions are a gift from miners that run an altruistic policy.
 That's great but we shouldn't rely on them for the future. They will
 likely disappear at some point and that's ok.
 In any case, he's not complaining about the lack of free transactions,
 more like the opposite.
 He is saying that's very easy to get free transactions in the next
 block and blocks aren't full so there's no incentive to include fees
 to compete for the space.
 We can talk a lot about a fee market and build a theoretically
 perfect fee estimator but we won't actually have a fee market until
 there's some competition for space.
 Nobody will pay for space that's abundant just like people don't pay
 for the air they breathe.

  What I don't see from you yet is a specific and 

Re: [Bitcoin-development] Request for comments on hybrid, PoW/PoS enhancement for Bitcoin

2015-02-25 Thread Andrew Lapp
Having stakeholders endorse blocks has, according to you, the benefits
of increasing the number of full nodes and making a 51% attack more
expensive. It seems to me it would have the opposite effects, plus other
negative side effects. Any stakeholder that has won could just be running
an SPV client, be informed by a full node that they have won, and then
cooperate to collect the reward: you are mistaking proof of stake for
proof that you are running a full node. At the same time, the network
becomes cheaper to attack in proportion to the amount of the block reward
that is paid to endorsers. Another side effect is that miners would have a
bigger economy of scale: the more stake a miner has, the more they can
endorse their own blocks and not others' blocks. I recommend reading this:
https://download.wpsoftware.net/bitcoin/pos.pdf

-Andrew Lapp



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-21 Thread Andrew Poelstra

I've read this and it looks A-OK to me.

Andrew



On Tue, Jan 20, 2015 at 07:35:49PM -0500, Pieter Wuille wrote:
 Hello everyone,
 
 We've been aware of the risk of depending on OpenSSL for consensus
 rules for a while, and were trying to get rid of this as part of BIP
 62 (malleability protection), which was however postponed due to
 unforeseen complexities. The recent events (see the thread titled
 "OpenSSL 1.0.0p / 1.0.1k incompatible, causes blockchain rejection."
 on this mailing list) have made it clear that the problem is very
 real, however, and I would prefer to have a fundamental solution for
 it sooner rather than later.
 
 I therefore propose a softfork to make non-DER signatures illegal
 (they've been non-standard since v0.8.0). A draft BIP text can be
 found on:
 
 https://gist.github.com/sipa/5d12c343746dad376c80
 
 The document includes motivation and specification. In addition, an
 implementation (including unit tests derived from the BIP text) can be
 found on:
 
 https://github.com/sipa/bitcoin/commit/bipstrictder
 
 Comments/criticisms are very welcome, but I'd prefer keeping the
 discussion here on the mailing list (which is more accessible than on
 the gist).
 
 -- 
 Pieter
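For concreteness, a strict-DER check along the lines the draft describes
would look roughly like this (a sketch from my reading of the gist, not the
reference code; it takes the signature with the trailing sighash byte
included):

    def is_valid_der(sig: bytes) -> bool:
        # Layout: 0x30 [total-len] 0x02 [len R] [R] 0x02 [len S] [S] [sighash]
        if not 9 <= len(sig) <= 73:
            return False
        if sig[0] != 0x30 or sig[1] != len(sig) - 3:
            return False
        len_r = sig[3]
        if 5 + len_r >= len(sig):
            return False
        len_s = sig[5 + len_r]
        if len_r + len_s + 7 != len(sig):
            return False
        if sig[2] != 0x02 or len_r == 0 or sig[4] & 0x80:
            return False                  # R must be a positive integer
        if len_r > 1 and sig[4] == 0x00 and not sig[5] & 0x80:
            return False                  # no unnecessary leading zero in R
        if sig[len_r + 4] != 0x02 or len_s == 0 or sig[len_r + 6] & 0x80:
            return False                  # S must be a positive integer
        if len_s > 1 and sig[len_r + 6] == 0x00 and not sig[len_r + 7] & 0x80:
            return False                  # no unnecessary leading zero in S
        return True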
 
 

-- 
Andrew Poelstra
Mathematics Department, University of Texas at Austin
Email: apoelstra at wpsoftware.net
Web:   http://www.wpsoftware.net/andrew

If they had taught a class on how to be the kind of citizen Dick Cheney
 worries about, I would have finished high school.   --Edward Snowden





Re: [Bitcoin-development] side-chains 2-way pegging (Re: is there a way to do bitcoin-staging?)

2014-11-03 Thread Andrew Poelstra
On Mon, Nov 03, 2014 at 06:01:46PM +0200, Alex Mizrahi wrote:
 
 Yes, but harder isn't same as unlikely.


We are aware of the distinction between hardness (expected work) and
likelihood of successful attack -- much of Appendix B talks about this,
in the context of producing compact SPV proofs which are (a) hard to
forge, and (b) very unlikely to be forgeries.

We did spend some time formalizing this but due to space constraints
(and it being somewhat beside the point of the whitepaper beyond "we
believe it is possible to do"), we did not explore this in as great
depth as we'd have liked.
 
 Another problem with this section is that it only mentions reorganizations.
 But a fraudulent transfer can happen without a reorganization, as an
 attacker can produce an SPV proof which is totally fake. So this is not
 similar to double-spending, attacker doesn't need to own coins to perform
 an attack.
 

Well, even in the absence of a reorganization, the attacker's false proof
will just be invalidated by a proof of longer work on the real chain.
And there is still a real cost to producing the false proof.


-- 
Andrew Poelstra
Mathematics Department, University of Texas at Austin
Email: apoelstra at wpsoftware.net
Web:   http://www.wpsoftware.net/andrew





Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-08 Thread Andrew LeCody
My node (based in Dallas, TX) has about 240 connections and is using a
little under 4 Mbps in bandwidth right now.

According to the hosting provider, I'm at 11.85 Mbps for this week, using 95th
percentile billing. The report from my provider includes my other servers
though.


On Mon, Apr 7, 2014 at 12:39 PM, Chris Williams ch...@icloudtools.netwrote:

 I’m afraid this is a highly simplistic view of the costs of running a full
 node.

 My node consumes fantastic amounts of data traffic, which is a real cost.

 In the 30 days ending April 6, my node:

 * Received 36.8 gb of data
 * Sent 456.5 gb of data

 At my geographic service location (Singapore), this cost about $90 last
 month for bandwidth alone. It would be slightly cheaper if I was hosted in
 the US of course.

 But anyone can understand that moving a half-terabyte of data around in a
 month will not be cheap.


 On Apr 7, 2014, at 8:53 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

  On Mon, Apr 7, 2014 at 8:45 AM, Justus Ranvier justusranv...@gmail.com
 wrote:
  1. The resource requirements of a full node are moving beyond the
  capabilities of casual users. This isn't inherently a problem - after
  all most people don't grow their own food, tailor their own clothes, or
  keep blacksmith tools handy to forge their own horseshoes either.
 
  Right now running a full node consumes about $1 in disk space
  non-recurring, and costs a couple of cents in power per month.
 
  This isn't to say things are all ducky. But if you're going to say the
  resource requirements are beyond the capabilities of casual users I'm
  afraid I'm going to have to say: citation needed.
 
 







Re: [Bitcoin-development] moving the default display to mbtc

2014-03-14 Thread Andrew Smith
Well, not sure I wanted to subscribe to the mbtc vs ubtc list... it's a
default, not a big deal.


Re: [Bitcoin-development] Ultimate Blockchain Compression w/ trust-free lite node

2012-06-19 Thread Andrew Miller
 Peter Todd wrote:
 My solution was to simply state that vertexes that happened to cause the
 tree to be unbalanced would be discarded, and set the depth of imbalance
 such that this would be extremely unlikely to happen by accident. I'd
 rather see someone come up with something better though.

Here is a simpler solution. (most of this message repeats the content
of my reply to the forum)

Suppose we were talking about a binary search tree, rather than a
Merkle tree. It's important to balance a binary search tree, so that
the worst-case maximum length from the root to a leaf is bounded by
O(log N). AVL trees were the original algorithm to do this, Red-Black
trees are also popular, and there are many similar methods. All
involve storing some form of 'balancing metadata' at each node. In a
RedBlack tree, this is a single bit (red or black). Every operation on
these trees, including search, inserting, deleting, and rebalancing,
requires a worst-case effort of O(log N).

Any (acyclic) recursive data structure can be Merkle-ized, simply by
adding a hash of the child node alongside each link/pointer. This way,
you can verify the data for each node very naturally, as you traverse
the structure.

In fact, as long as a lite-client knows the O(1) root hash, the rest
of the storage burden can be delegated to an untrusted helper server.
Suppose a lite-client wants to insert and rebalance its tree. This
requires accessing at most O(log N) nodes. The client can request only
the data relevant to these nodes, and it knows the hash for each chunk
of data in advance of accessing it. After computing the updated root
hash, the client can even discard the data it processed.

This technique has been well discussed in the academic literature,
e.g. [1,2], although, since I am not aware of any existing
implementation, I made my own, intended as an explanatory aid:
https://github.com/amiller/redblackmerkle/blob/master/redblack.py
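To illustrate just the Merkle-izing step itself (a toy sketch in the same
spirit, not the code from the repository above):

    # Store a hash alongside each child link, so any traversal can be
    # checked against the O(1) root hash.
    import hashlib

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    EMPTY = H(b"empty")   # hash standing in for an absent child

    def node_hash(key: bytes, left_hash: bytes, right_hash: bytes) -> bytes:
        # Each node commits to its key and both subtree hashes, so the root
        # hash commits to the entire contents and shape of the tree.
        return H(b"node" + key + left_hash + right_hash)

    # A lite client holding only the root hash can fetch the O(log N) nodes
    # on a search path from an untrusted server, checking each node against
    # the child hash supplied by its already-verified parent.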


[1] Certificate Revocation and Update
Naor and Nissim. 1998

http://static.usenix.org/publications/library/proceedings/sec98/full_papers/nissim/nissim.pdf

[2] A General Model for Authenticated Data Structures
Martel, Nuckolls, Devanbu, Michael Gertz, Kwong, Stubblebine. 2004
http://truthsayer.cs.ucdavis.edu/algorithmica.pdf

--
Andrew Miller



Re: [Bitcoin-development] Ultimate Blockchain Compression w/ trust-free lite node

2012-06-19 Thread Andrew Miller
Alan Reiner wrote:
 A PATRICIA tree/trie would be ideal, in my mind, as it also has a
 completely deterministic structure, and is an order-of-magnitude more
 space-efficient.  Insert, delete and query times are still O(1).
 However, it is not a trivial implementation.  I have occasionally looked
 for implementations, but not found any that were satisfactory.

PATRICIA Tries (aka Radix trees) have worst-case O(k), where k is the
number of bits in the key. Notice that since we would be storing k-bit
hashes, the number of elements must be less than 2^k, or else by the
birthday paradox we would have a hash collision! So O(log N) = O(k).
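To spell out the bound (a quick numeric check with illustrative values):

    # A trie path over k-bit keys has at most k edges, and N distinct
    # k-bit hashes force N <= 2**k, hence log2(N) <= k.
    import math

    k = 256          # e.g. storing 256-bit hashes
    N = 10 ** 8      # a hundred million elements
    assert N <= 2 ** k
    assert math.log2(N) <= k   # O(log N) and O(k) coincide here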

You're right, though, that such a trie would have the property that
any two trees containing the same data (leaves) will be identical. I
can't think of any reason why this is useful, although I am hoping we
can figure out what is triggering your intuition to desire this! I am
indeed assuming that the tree will be incrementally constructed
according to the canonical (blockchain) ordering of transactions, and
that the balancing rules are agreed on as part of the protocol.

-- 
Andrew Miller
