Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal
I am in favor of a more gradual (longer) period and a softforking solution... that is, more than 30 days of grace period (some period between 60 days and a year), and given the number of valid softforking proposals out there it seems to me that it would be rather simple to see one or more (e.g. Cameron Garnham's dynamic block size adjustment, needing a soft fork only) in a BIP. It is also worth adding that some of the softforking proposals, and I believe Garnham's is one of them, are not incompatible with proposals such as Jeff Garzik's BIP 100; that is to say, there is nothing keeping you from doing the Garnham dynamic block size adjustment (softfork) right now, examining its progress and effect, while also preparing for a Garzik (BIP 100), for example.

- Odinn

On 08/02/2015 12:16 AM, jl2012 via bitcoin-dev wrote:

Pieter Wuille wrote on 2015-08-01 16:45:

On Fri, Jul 31, 2015 at 10:39 AM, jl2012 via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org wrote:

2. Starting date: 30 days after 75% miner support, but not before 2016-01-12 00:00 UTC. Rationale: A 30-day grace period is given to make sure everyone has enough time to follow. This is a compromise between 14 days in BIP101 and 1 year in BIP103. I tend to agree with BIP101. Even if 1 year is given, people will just do it on the 364th day if they opt to procrastinate.

Given the time recent softforks have taken to deploy, I think that's too soon.

Since I'm using 30 days after 75% miner support, the actual deployment period will be longer than 30 days. Anyway, if all major exchanges and merchants agree to upgrade, people are forced to upgrade immediately or they will follow a worthless chain.

3. The block size at 2016-01-12 will be 1,414,213 bytes, and will be multiplied by 1.414213 every 2^23 seconds (97 days) until exactly 8MB is reached on 2017-05-11. Rationale: Instead of jumping to 8MB, I suggest increasing it gradually to 8MB over 16 months.
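As a cross-check on the arithmetic of point 3 (a sketch of my own, not code from the proposal): reading 1.414213 as truncated sqrt(2) and the 97-day period as exactly 2^23 seconds, the ramp can be written as a closed-form function of time. The function name and the truncating rounding are my assumptions.

```python
from datetime import datetime

def max_block_size(t, start=datetime(2016, 1, 12)):
    # Point 3 as I read it: 1,000,000 * sqrt(2) bytes (truncated to
    # 1,414,213) at the starting date, multiplied by sqrt(2) every
    # 2^23 seconds (~97.1 days), capped at exactly 8,000,000 bytes.
    steps = max(0.0, (t - start).total_seconds() // 2**23)
    return int(min(8_000_000, 1_000_000 * 2 ** ((steps + 1) / 2)))

print(max_block_size(datetime(2016, 1, 12)))  # 1414213
print(max_block_size(datetime(2016, 4, 20)))  # 2000000 (one ~97-day step later)
print(max_block_size(datetime(2017, 6, 1)))   # 8000000 (the cap)
```

Five sqrt(2) steps of 2^23 seconds each is about 485.4 days, which from 2016-01-12 does land the final step to exactly 8MB on 2017-05-11, consistent with the quoted date.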
8MB should not be particularly painful to run even with current equipment (you may see my earlier post on bitcointalk: https://bitcointalk.org/index.php?topic=1054482.0 ). 8MB is also agreed to by Chinese miners, who control 60% of the network.

I have considered suggesting a faster ramp-up in the beginning, but I don't think there is indisputable evidence that we can currently deal with significantly larger blocks. I don't think painful is the right criterion either; I'm sure my equipment can handle 20 MB blocks too, but with a huge impact on network propagation speed, and even more people choosing to outsource their full nodes. Regarding reasonable, I have a theory. What if we had had 8 MB blocks from the start? My guess is that some more people would have decided to run their high-transaction-rate use cases on chain, that we'd regularly see 4-6 MB blocks, there would be more complaints about low full node counts, maybe 90% instead of 60% of the hash rate would have SPV mining agreements with each other, we'd somehow have accepted that even worse reality, but people would still be complaining about the measly 25 transactions per second that Bitcoin could handle on-chain, and be demanding a faster ramp-up to a more reasonable 64 MB block size as well.

Since the block reward is miners' major income source, no rational miner would create mega blocks unless the fee could cover the extra orphaning risk. Blocks were not constantly full until recent months, and many miners are still keeping the 750kB soft limit. This strongly suggests that we wouldn't have 4MB blocks now even if Satoshi had set an 8MB limit. I don't have the data now, but I believe the Satoshi Dice model failed not primarily due to the 1MB cap, but due to the rise in the BTC/USD rate. Since the minting reward is a fixed value in BTC, the tx fee must also be valued in BTC, as it is primarily for compensating the extra orphaning risk.
As the BTC/USD rate increases, the tx fee measured in USD would also increase, making micro-payments (measured in USD) unsustainable. We might have fewer full nodes, but it was Satoshi's original plan: "At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. A server farm would only need to have one node on the network and the rest of the LAN connects with that one node." Theoretically, we only require one honest full node to prove wrongdoing on the blockchain and tell every SPV node to blacklist the invalid chain. I think SPV mining existed long before 1MB blocks became full, and I don't think we could stop this trend by artificially suppressing the block size. Miners should just do it properly, e.g. stop mining until the grandparent block is verified, which would make sure an invalid fork won't grow beyond 2 blocks. Global bandwidth is expected to
Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal
+1 on every point, sipa

On 08/02/2015 05:32 PM, Pieter Wuille via bitcoin-dev wrote: [...]
Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal
2. Starting date: 30 days after 75% miner support, but not before 2016-01-12 00:00 UTC. Rationale: A 30-day grace period is given to make sure everyone has enough time to follow. This is a compromise between 14 days in BIP101 and 1 year in BIP103. I tend to agree with BIP101. Even if 1 year is given, people will just do it on the 364th day if they opt to procrastinate.

Given the time recent softforks have taken to deploy, I think that's too soon.

Since I'm using 30 days after 75% miner support, the actual deployment period will be longer than 30 days. Anyway, if all major exchanges and merchants agree to upgrade, people are forced to upgrade immediately or they will follow a worthless chain.

If we don't want it to go fast, why let them? A hardfork is a means for the community to agree on the rules that different parties have to obey.

3. The block size at 2016-01-12 will be 1,414,213 bytes, and will be multiplied by 1.414213 every 2^23 seconds (97 days) until exactly 8MB is reached on 2017-05-11. Rationale: Instead of jumping to 8MB, I suggest increasing it gradually to 8MB over 16 months. 8MB should not be particularly painful to run even with current equipment (you may see my earlier post on bitcointalk: https://bitcointalk.org/index.php?topic=1054482.0 ). 8MB is also agreed to by Chinese miners, who control 60% of the network.

I have considered suggesting a faster ramp-up in the beginning, but I don't think there is indisputable evidence that we can currently deal with significantly larger blocks. I don't think painful is the right criterion either; I'm sure my equipment can handle 20 MB blocks too, but with a huge impact on network propagation speed, and even more people choosing to outsource their full nodes. Regarding reasonable, I have a theory. What if we had had 8 MB blocks from the start?
My guess is that some more people would have decided to run their high-transaction-rate use cases on chain, that we'd regularly see 4-6 MB blocks, there would be more complaints about low full node counts, maybe 90% instead of 60% of the hash rate would have SPV mining agreements with each other, we'd somehow have accepted that even worse reality, but people would still be complaining about the measly 25 transactions per second that Bitcoin could handle on-chain, and be demanding a faster ramp-up to a more reasonable 64 MB block size as well.

Since the block reward is miners' major income source, no rational miner would create mega blocks unless the fee could cover the extra orphaning risk. Blocks were not constantly full until recent months, and many miners are still keeping the 750kB soft limit. This strongly suggests that we wouldn't have 4MB blocks now even if Satoshi had set an 8MB limit.

I disagree. I think demand is strongly influenced by the knowledge of space that looks available. If you look at historic block sizes, you see that they follow a series of step functions, not nice organic growth. My theory is that this is changed defaults in software, new services appearing suddenly, and people reacting to them. Demand fills the available space. Also, SPV mining has nearly zero orphaning risk, only a brief chance of losing fees as income.

I don't have the data now, but I believe the Satoshi Dice model failed not primarily due to the 1MB cap, but due to the rise in the BTC/USD rate. Since the minting reward is a fixed value in BTC, the tx fee must also be valued in BTC, as it is primarily for compensating the extra orphaning risk. As the BTC/USD rate increases, the tx fee measured in USD would also increase, making micro-payments (measured in USD) unsustainable.

I agree, but how does that matter? I don't think high fees and full blocks should be the goal, but I think it would be a healthier outcome than what we have now.
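The fee-versus-orphaning-risk argument above can be made quantitative with a standard back-of-envelope model (my own sketch, not from the thread; the ~10 ms-per-transaction propagation delay and the block value are illustrative assumptions): with Poisson block arrivals, a transaction that delays block propagation by an extra dt seconds adds roughly 1 - exp(-dt/600) to the orphan probability, so a rational miner wants a fee of at least that fraction of the block's value. Note the break-even is denominated in BTC, which is jl2012's point.

```python
import math

def breakeven_fee_btc(extra_delay_s, block_value_btc, block_interval_s=600.0):
    # Probability that a competing block is found during the extra
    # propagation delay caused by including one more transaction
    # (Poisson arrivals, 10-minute mean interval).
    p_orphan = 1.0 - math.exp(-extra_delay_s / block_interval_s)
    # Expected loss if the whole block is orphaned = minimum fee worth charging.
    return p_orphan * block_value_btc

# Illustrative numbers: ~10 ms extra propagation per transaction,
# 25 BTC block reward (the 2015 subsidy):
print(breakeven_fee_btc(0.01, 25.0))  # ~0.0004 BTC per transaction
```

Whatever the exact delay figure, the structure of the model shows why the break-even fee scales with the block value in BTC, not in USD.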
We might have fewer full nodes, but it was Satoshi's original plan: "At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. A server farm would only need to have one node on the network and the rest of the LAN connects with that one node." Theoretically, we only require one honest full node to prove wrongdoing on the blockchain and tell every SPV node to blacklist the invalid chain.

Theoretically, we also only need one central bank, then? Sorry, if the outcome is one (or just a few) entities that keep the system in check, I think it loses any benefit it has over other systems, while still keeping its costs and disadvantages (confirmation speed, mining infrastructure, subsidy...). I know that 8 MB blocks do not immediately mean such a dramatic outcome. But I do believe that as a community, setting the block size based on observed demand (which you do by saying 8 is a more reasonable size than 2 as an argument) is the wrong way. What do you do when your 8 MB starts to look full, before your schedule says it can increase? The block size and its costs -
Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal
It will help to assume that there is at least one group of evil people who are investing in Bitcoin's demise. Not because there are, but because there might be. So let's assume they are making a set of a billion transactions, or a trillion, and maintaining currently-legitimately-used hashing power. When the block size is large enough to frustrate other miners, this hash power (or some piece of it) will be shifted to solving a block containing an internally consistent subset of the prepared transactions to fill it - experimentally at first, but on the active Bitcoin network. One seemingly random, bloated, useless (except for the universal timestamp) block will be created, and the evil group will measure the effect on the mining community - client takedowns, market exits, and whatever else interests them. Then they lie in wait, perhaps let out one more to do another experiment, but with the goal of eventually catching us unawares and doing as much damage to morale as possible.

Good concrete descriptions of the threats against which we want to guard will be very helpful. Maybe there are already unit tests for such things, or requests for miners' reactions to them (as opposed to just the software's behavior). My description might be a bit too long and perhaps not a very good example, but do we have a place where such examples can be constructed? While we will do our best to guard against such nightmares, it's also helpful to imagine what we will do if and when one of them ever actually occurs. Yes, I'm paranoid; because those who like to control everything are losing it.

Dave

On Sun, Aug 2, 2015 at 3:38 AM, Venzen Khaosan via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org wrote:

+1 on every point, sipa

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal
On Fri, Jul 31, 2015 at 10:39 AM, jl2012 via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org wrote:

There is a summary of the proposals in my previous mail at https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009808.html

1. Initiation: BIP34-style voting, with support of 750 out of the last 1000 blocks. The hardfork bit mechanism might be used: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009576.html

This is fine, I think. I believe we shouldn't proceed with a hardfork without having a reasonable expectation that it will be deployed by everyone in time, while we can only measure miner acceptance. Still, as belt-and-suspenders this won't hurt.

2. Starting date: 30 days after 75% miner support, but not before 2016-01-12 00:00 UTC. Rationale: A 30-day grace period is given to make sure everyone has enough time to follow. This is a compromise between 14 days in BIP101 and 1 year in BIP103. I tend to agree with BIP101. Even if 1 year is given, people will just do it on the 364th day if they opt to procrastinate.

Given the time recent softforks have taken to deploy, I think that's too soon.

2016-01-12 00:00 UTC is Monday evening in the US and Tuesday morning in China. Most pool operators and devs should be back from the new year holiday and not sleeping. (If the initiation is delayed, we may require that it must be UTC Tuesday midnight.)

That's an interesting thing to take into account.

3. The block size at 2016-01-12 will be 1,414,213 bytes, and will be multiplied by 1.414213 every 2^23 seconds (97 days) until exactly 8MB is reached on 2017-05-11. Rationale: Instead of jumping to 8MB, I suggest increasing it gradually to 8MB over 16 months. 8MB should not be particularly painful to run even with current equipment (you may see my earlier post on bitcointalk: https://bitcointalk.org/index.php?topic=1054482.0). 8MB is also agreed to by Chinese miners, who control 60% of the network.
I have considered suggesting a faster ramp-up in the beginning, but I don't think there is indisputable evidence that we can currently deal with significantly larger blocks. I don't think painful is the right criterion either; I'm sure my equipment can handle 20 MB blocks too, but with a huge impact on network propagation speed, and even more people choosing to outsource their full nodes. Regarding reasonable, I have a theory. What if we had had 8 MB blocks from the start? My guess is that some more people would have decided to run their high-transaction-rate use cases on chain, that we'd regularly see 4-6 MB blocks, there would be more complaints about low full node counts, maybe 90% instead of 60% of the hash rate would have SPV mining agreements with each other, we'd somehow have accepted that even worse reality, but people would still be complaining about the measly 25 transactions per second that Bitcoin could handle on-chain, and be demanding a faster ramp-up to a more reasonable 64 MB block size as well.

4. After 8MB is reached, the block size will be increased by 6.714% every 97 days, which is equivalent to exactly octupling (8x) every 8.5 years, or doubling every 2.9 years, or +27.67% per year. Growth stops at 4096MB on 2042-11-17. Rationale: This is a compromise between the 17.7% p.a. of BIP103 and the 41.4% p.a. of BIP101. This will take us almost 8 years from now just to go back to the original 32MB size (4 years for BIP101 and 22 years for BIP103). SSD prices are expected to drop by 50%/year in the coming years. In 2020, we will only need to pay 2% of the current price for SSD. A 98% price reduction is enough for 40 years of 27.67% growth. Source: http://wikibon.org/wiki/v/Evolution_of_All-Flash_Array_Architectures

I know many technologies have had faster growth, but I believe that global bandwidth accessibility is the bottleneck, so the growth rate in my proposal is based on that.
Global bandwidth is expected to grow by 37%/year until 2021, so 27.67% should be safe at least for the coming 10 years. Source: https://www.telegeography.com/research-services/global-bandwidth-forecast-service/

I'd rather be conservative here. My primary purpose is trying to create an uncontroversial proposal that introduces an expectation of growth with technology.

--
Pieter
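The growth figures in point 4 can be sanity-checked in a few lines (my arithmetic, not from the post; the small gap between the computed ~27.7%/year and the quoted 27.67% comes from the truncated percentage):

```python
import math

RATE, PERIOD_DAYS = 1.06714, 97  # +6.714% every 97 days, per point 4

annual = RATE ** (365.25 / PERIOD_DAYS)          # ~1.277, i.e. ~+27.7%/yr
octuple = RATE ** (8.5 * 365.25 / PERIOD_DAYS)   # ~8.0: octuples every 8.5 years
# 8 MB -> 4096 MB is a 512x = 8^3 increase, i.e. three octuplings:
years_to_cap = math.log(4096 / 8, RATE) * PERIOD_DAYS / 365.25  # ~25.5 years
# 2017-05 plus ~25.5 years lands in late 2042, matching the 2042-11-17 cap date.
print(round(annual, 3), round(octuple, 2), round(years_to_cap, 1))
```

So the three headline figures in point 4 (the per-period rate, the 8.5-year octupling, and the 2042 cap date) are mutually consistent.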
Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal
On 8/1/2015 1:45 PM, Pieter Wuille via bitcoin-dev wrote:

Regarding reasonable, I have a theory. What if we had had 8 MB blocks from the start? My guess is that some more people would have decided to run their high-transaction-rate use cases on chain, that we'd regularly see 4-6 MB blocks,

You've proposed scaling the cap based on technology growth. There's still a cap to stop bad things from happening. Once that is done, why worry so much about whether the uses are efficient? Let people work in the space created. Let them figure out how to make good things happen in the application space.
Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal
That's all well and fine. But the pattern of your argument, I would say, is arguing security down, i.e. saying something is not secure anyway, nothing is secure, everything could be hacked, so let's forget that and give up, so that what is left is basically no decentralisation security. It is not paranoid to take decentralisation security seriously; it is necessary, because it is critical to Bitcoin. Security in depth means taking what security you can get from the available defences.

Adam

On 31 July 2015 at 15:07, jl2...@xbt.hk wrote: [...]
Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal
Yes, data-center operators are bound to follow laws, including NSLs and gag orders. How about your ISP? Is it bound to follow laws, including NSLs and gag orders? https://edri.org/irish_isp_introduces_blocking/

Do you think everyone should run a full node behind Tor? No way, your repressive government could just block Tor: http://www.technologyreview.com/view/427413/how-china-blocks-the-tor-anonymity-network/ Or they could raid your home and seize your Raspberry Pi if they couldn't read your encrypted internet traffic. You will have a hard time proving you are not using Tor for child porn or cocaine. https://en.wikipedia.org/wiki/Encryption_ban_proposal_in_the_United_Kingdom If you are living in a country like this, running Bitcoin on an offshore VPS could be much easier. Anyway, Bitcoin shouldn't be the first thing you worry about. Revolution is probably your only choice.

Data-centers would get hacked. How about your Raspberry Pi? A corrupt data-center employee is probably the only valid concern. However, there is nothing (except cost) to stop you from establishing multiple full nodes all over the world. If your Raspberry Pi at home could no longer fully validate the chain, it could become a header-only node to make sure your VPS full nodes are following the correct chain tip. You may even buy hourly charged cloud hosting in different countries to run header-only nodes at negligible cost. There is no single point of failure in a decentralized network. Having multiple nodes will also save you from Sybil attacks and geopolitical risks.

Again, if all data-centres and governments in the world are turning against Bitcoin, it is delusional to think we could fight them without using any real weapon. By the way, I'm quite confident that my current full node at home is capable of running at 8MB blocks.

Quoting Adam Back a...@cypherspace.org:

I think the "trust the data-center" logic obviously fails, and I was talking about this scenario in the post you are replying to. You are trusting the data-center operator, period. If one could trust data-centers to run verified code, to not get hacked, to filter traffic, to respond to court orders without notifying you, etc., that would be great, but that's unfortunately not what happens. Data-center operators are bound to follow laws, including NSLs and gag orders. They also get hacked, employ humans who can be corrupt or blackmailed, and are themselves centralisation points for policy attack. Snowden-related disclosures and keeping aware of security show this is very real. This isn't much about Bitcoin even; it's just the security reality for hosting anything intended to be secure via decentralisation, or just hosting in general while at risk of political or policy attack.

Adam
Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal
Here are some books that will help more people understand why Adam's concern is important: Kicking the Dragon (by Larken Rose) and The State (by Franz Oppenheimer). Like he said, it isn't much about Bitcoin. Our crypto is just one of the defenses we've created, and understanding what it defends will help us maintain its value.

Dave

On Fri, Jul 31, 2015 at 6:16 AM, Adam Back via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org wrote: [...]