As an independently verifiable, decentralized store of public information, the 
Bitcoin block tree and transaction DAG do have an advantage over systems such 
as Visa. The store is just a cache. There is no need to implement reliability 
in storage or in communications. It is sufficient to be able to detect 
invalidity. And even if a subset of nodes fails to do so, the system as a 
whole compensates.
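
A minimal sketch of what such detection amounts to, assuming one already has 
the serialized 80-byte block header and the target expanded from its nBits 
field (both elided here):

    import hashlib

    def pow_is_valid(header80: bytes, target: int) -> bool:
        # Bitcoin hashes block headers with double SHA-256; the digest,
        # read as a little-endian integer, must not exceed the target.
        digest = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
        return int.from_bytes(digest, "little") <= target

Any node can run this check on its own; nothing about it requires trusting, 
or even reliably reaching, the peer that supplied the header.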

As such, the architecture of a Bitcoin node and its supporting hardware 
requirements are very different from those of an unverifiable, centralized 
store of private information. So in that sense the comparison below is not 
entirely fair. Many, if not most, of the high costs of a Visa datacenter do 
not apply, because of Bitcoin's information architecture.

However, if the system cannot remain decentralized, these architectural 
advantages will not hold. At that point your considerations below are 
entirely valid. Once the information is centralized it necessarily becomes 
private and fragile; conversely, once it becomes private it necessarily 
becomes centralized and fragile. Managing this fragility requires 
significant investment by the central authority.

So, as has been said, we can have decentralization and its benefit of 
trustlessness, or we can have Visa. We already have Visa. Making another is 
entirely uninteresting.

e 

> On Mar 31, 2017, at 11:23 AM, David Vorick via bitcoin-dev 
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> 
> Sure, your math is pretty much irrelevant, because scaling systems to 
> massive sizes doesn't work that way.
> 
> At 400B transactions per year we're looking at block sizes of 4.5 GB, and a 
> database size of petabytes. How much RAM do you need to process blocks like 
> that? Can you fit that much RAM into a single machine? No, you can't. So 
> you have to rework the code to operate on a computer cluster.
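
A quick back-of-envelope check of those figures, assuming one block per ten 
minutes and an assumed average transaction size of roughly 600 bytes:

    TX_PER_YEAR = 400e9
    BLOCKS_PER_YEAR = 6 * 24 * 365      # one block per 10 minutes
    AVG_TX_BYTES = 600                  # assumed average size

    tx_per_block = TX_PER_YEAR / BLOCKS_PER_YEAR
    print(f"{tx_per_block:,.0f} tx/block")                # ~7,610,350
    print(f"{tx_per_block * AVG_TX_BYTES / 1e9:.1f} GB")  # ~4.6 GB/block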
> 
> Already we've hit a significant problem. You aren't going to rewrite Bitcoin 
> to do block validation on a computer cluster overnight. Further, are 
> storage, bandwidth, RAM, and CPU costs consistent when we're talking about 
> setting up clusters? No, they aren't. Clusters are a lot more expensive to 
> set up per-resource, because the machines need to talk to each other and 
> synchronize with each other, and you have a LOT more parts, so you have to 
> build in redundancies that aren't necessary in non-clusters.
> 
> Also worth pointing out that peak transaction volumes are typically 20-50x 
> average transaction volumes. So your cluster isn't going to be planned 
> around 15k transactions per second; you're really looking at more like 200k 
> or even 500k transactions per second to handle peak volumes. And if it 
> can't keep up, you're still going to see full blocks.
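
A rough sanity check on those peak numbers, reusing the 400B transactions 
per year figure from above with the quoted 20x and 50x peak ratios:

    SECONDS_PER_YEAR = 365 * 24 * 3600
    avg_tps = 400e9 / SECONDS_PER_YEAR        # ~12,700 tx/s on average
    for ratio in (20, 50):
        print(f"{ratio}x peak: {avg_tps * ratio / 1e3:,.0f}k tx/s")
    # -> 20x peak: 254k tx/s
    # -> 50x peak: 634k tx/s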
> 
> You'd need a handful of experts just to maintain such a thing. Disks are 
> going to be failing every day when you are storing multiple PB, so you can't 
> just count a flat cost of $20/TB and expect that to work. You're going to 
> need redundancy and fault tolerance so that you don't lose the system when 
> a few of your hard drives all fail within minutes of each other. And you 
> need a way to rebuild everything without taking the system offline.
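
Expected drive failures scale linearly with drive count; a rough sketch with 
assumed figures (10 PB of data, 3x replication, 10 TB drives, and a 3% 
annualized failure rate, none of which come from the message above):

    RAW_TB = 10_000 * 3          # assumed: 10 PB of data, 3x replication
    DRIVE_TB = 10                # assumed drive size
    AFR = 0.03                   # assumed annualized failure rate

    drives = RAW_TB / DRIVE_TB
    print(f"{drives:.0f} drives, ~{drives * AFR:.0f} failures/year")
    # -> 3000 drives, ~90 failures/year, before counting rebuild load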
> 
> This isn't even my area of expertise. I'm sure there are a dozen other 
> significant issues that one of the Visa architects could tell you about when 
> dealing with mission-critical data at this scale.
> 
> --------
> 
> Massive systems operate very differently and are much more costly per-unit 
> than tiny systems. Once we grow the blocksize large enough that a single 
> computer can't do all the processing by itself, we get into a world of much 
> harder, much more expensive scaling problems. Especially because we're 
> talking about a distributed system where the nodes don't even trust each 
> other. And transaction processing is largely non-parallel: you have to 
> check each transaction against every other transaction to make sure that 
> they aren't double-spending each other. This takes synchronization and 
> prevents 500 CPUs from all crunching the data concurrently. You have to be 
> a lot more clever than that to get things working and consistent.
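
A toy illustration of that serialization point: every transaction has to 
consult and update one shared set of spent outpoints, in order. Real 
validation also checks scripts, amounts, and the UTXO set, all elided here; 
tx.inputs as (txid, vout) pairs is an assumed simplification:

    def block_has_no_double_spends(txs) -> bool:
        spent = set()
        for tx in txs:                    # order matters: shared state
            for outpoint in tx.inputs:    # (txid, vout) pairs
                if outpoint in spent:
                    return False          # double spend detected
                spent.add(outpoint)
        return True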
> 
> When talking about scalability problems, you should ask yourself what other 
> systems in the world operate at the scales you are talking about. None of 
> them have cost structures as low as six digits, and I'd bet (without 
> actually knowing) that none of them have cost structures as low as seven 
> digits either.
> In fact I know from working in a related industry that the cost structures 
> for the datacenters (plus the support engineers, plus the software 
> management, etc.) that do airline ticket processing are above $5 million per 
> year for the larger airlines. Visa is probably even more expensive than that 
> (though I can only speculate).
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
