Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
On Tue, Dec 08, 2015 at 05:21:18AM +, Gregory Maxwell via bitcoin-dev wrote:
> On Tue, Dec 8, 2015 at 4:58 AM, Anthony Towns via bitcoin-dev wrote:
> > Having a cost function rather than separate limits does make it easier
> > to build blocks (approximately) optimally, though (ie, just divide the
> > fee by (base_bytes+witness_bytes/4) and sort). Are there any other
> > benefits?
>
> Actually being able to compute fees for your transaction: If there are
> multiple limits that are "at play" then how you need to pay would
> depend on the entire set of other candidate transactions, which is
> unknown to you.

Isn't that solvable in the short term, if miners just agree to order
transactions via a cost function, without enforcing it at consensus
level until a later hard fork that can also change the existing limits
to enforce that balance? That is, start with (1MB base + 3MB witness +
20k sigops) under segwit initially, then move to something like

  B + W + 200*U + 40*S < 5e6

where B is base bytes, W is witness bytes, U is the number of UTXOs
added (or removed) and S is the number of sigops, or whatever factors
actually make sense.

I guess segwit does allow soft-forking more sigops immediately -- segwit
transactions only add sigops into the segregated witness, which doesn't
get counted for existing consensus. So it would be possible to take the
opposite approach, and make the rule immediately be something like:

  50*S < 1M
  B + W/4 + 25*S' < 1M

(where S is sigops in base data, and S' is sigops in witness) and just
rely on S trending to zero (or soft-fork in a requirement that
non-segregated-witness transactions have fewer than B/50 sigops) so that
there's only one (linear) equation to optimise when deciding fees or
creating a block. (I don't see how you could safely set the coefficient
for S' too much smaller, though.)

B + W/4 + 25*S' for a 2-in/2-out p2pkh would still be 178 + 206/4 + 25*2
= 280 though, which would allow about 3570 transactions per block,
versus about 2700 now -- only a 32% increase...
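The single-linear-cost idea above can be sketched in a few lines: with one cost function, block building reduces to sorting candidates by fee per unit of cost. This is a minimal illustration of the B + W/4 + 25*S' rule discussed here, not anyone's actual implementation; the dict-based transaction shape is an assumption for the example.

```python
def tx_cost(base_bytes, witness_bytes, witness_sigops):
    """Single linear cost from the rule above: B + W/4 + 25*S'."""
    return base_bytes + witness_bytes / 4 + 25 * witness_sigops

def build_block(candidates, limit=1_000_000):
    """Greedy (approximately optimal) block building: sort candidates by
    fee per unit cost, then fill the block up to the single limit."""
    ordered = sorted(
        candidates,
        key=lambda tx: tx["fee"] / tx_cost(tx["base"], tx["witness"], tx["sigops"]),
        reverse=True,
    )
    block, used = [], 0.0
    for tx in ordered:
        cost = tx_cost(tx["base"], tx["witness"], tx["sigops"])
        if used + cost <= limit:
            block.append(tx)
            used += cost
    return block

# The 2-in/2-out p2pkh figure from the text: 178 + 206/4 + 25*2
print(tx_cost(178, 206, 2))  # 279.5, i.e. the ~280 quoted above
```

With a single cost like this, a wallet can also compute its own fee rate (fee divided by cost) without knowing anything about the rest of the mempool, which is the property being argued for.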
> These don't, however, apply all that strongly if only one limit is
> likely to be the limiting limit... though I am unsure about counting
> on that; after all if the other limits wouldn't be limiting, why have
> them?

Sure, but, at least for now, there are already two limits being hit.
Having one is *much* better than two, but I don't think two is a lot
better than three? (Also, the ratio between the parameters doesn't
necessarily seem like a constant; it's not clear to me that hardcoding a
formula with a single limit is actually better than hardcoding separate
limits, and letting miners/the market work out coefficients that match
the sort of contracts that are actually being used.)

> > That seems kinda backwards.
>
> It can seem that way, but all limiting schemes have pathological cases
> where someone runs up against the limit in the most costly way. Keep
> in mind that casual pathological behavior can be suppressed via
> IsStandard-like rules without baking them into consensus; so long as
> the candidate attacker isn't miners themselves. Doing so where
> possible can help avoid cases like the current sigops limiting which
> is just ... pretty broken.

Sure; it just seems to be halving the increase in block space (60%
versus 100% extra for p2pkh, 100% versus 200% for 2/2 multisig p2sh) for
what doesn't actually look like that much of a benefit in fee
comparisons? I mean, as far as I'm concerned, segwit is great even if it
doesn't buy any improvement in transactions/block, so even a 1% gain is
brilliant. I'd just rather the 100%-200% gain I was expecting. :)

Cheers,
aj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[bitcoin-dev] Capacity increases for the Bitcoin system.
The Scaling Bitcoin Workshop in HK is just wrapping up. Many fascinating
proposals were presented. I think this would be a good time to share my
view of the near term arc for capacity increases in the Bitcoin system.
I believe we're in a fantastic place right now and that the community is
ready to deliver on a clear forward path with a shared vision that
addresses the needs of the system while upholding its values.

I think it's important to first clearly express some of the relevant
principles that I think should guide the ongoing development of the
Bitcoin system.

Bitcoin is P2P electronic cash that is valuable over legacy systems
because of the monetary autonomy it brings to its users through
decentralization. Bitcoin seeks to address the root problem with
conventional currency: all the trust that's required to make it work.
Not that justified trust is a bad thing, but trust makes systems
brittle, opaque, and costly to operate. Trust failures result in
systemic collapses, trust curation creates inequality and monopoly
lock-in, and naturally arising trust choke-points can be abused to deny
access to due process. Through the use of cryptographic proof and
decentralized networks Bitcoin minimizes and replaces these trust costs.

With the available technology, there are fundamental trade-offs between
scale and decentralization. If the system is too costly people will be
forced to trust third parties rather than independently enforcing the
system's rules. If the Bitcoin blockchain's resource usage, relative to
the available technology, is too great, Bitcoin loses its competitive
advantages compared to legacy systems because validation will be too
costly (pricing out many users), forcing trust back into the system. If
capacity is too low and our methods of transacting too inefficient,
access to the chain for dispute resolution will be too costly, again
pushing trust back into the system.
Since Bitcoin is an electronic cash, it _isn't_ a generic database; the
demand for cheap highly-replicated perpetual storage is unbounded, and
Bitcoin cannot and will not satisfy that demand for non-ecash
(non-Bitcoin) usage, and there is no shame in that. Fortunately, Bitcoin
can interoperate with other systems that address other applications,
and--with luck and hard work--the Bitcoin system can and will satisfy
the world's demand for electronic cash.

Fortunately, a lot of great technology is in the works that makes
navigating the trade-offs easier.

First up: after several years in the making, Bitcoin Core has recently
merged libsecp256k1, which results in a huge increase in signature
validation performance. Combined with other recent work, we're now
getting ConnectTip performance 7x higher in 0.12 than in prior versions.
This has been a long time coming, and without its anticipation and
earlier work such as headers-first I probably would have been arguing
for a block size decrease last year. This improvement in the state of
the art for widely available production Bitcoin software sets a stage
for some capacity increases while still catching up on our
decentralization deficit. It shifts the bottlenecks off of CPU and more
strongly onto propagation latency and bandwidth.

Versionbits (BIP9) is approaching maturity and will allow the Bitcoin
network to have multiple in-flight soft-forks. Up until now we've had to
completely serialize soft-fork work, and also had no real way to handle
a soft-fork that was merged in Core but rejected by the network. All
that is solved in BIP9, which should allow us to pick up the pace of
improvements in the network. It looks like versionbits will be ready for
use in the next soft-fork performed on the network.

The next thing is that, at Scaling Bitcoin Hong Kong, Pieter Wuille
presented on bringing Segregated Witness to Bitcoin.
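As a rough illustration of the versionbits signaling described above: BIP9 reserves the top three bits of the block version (they must be 001) and lets each remaining bit signal one in-flight deployment. A minimal sketch of that check (the deployment bit number here is illustrative):

```python
# Minimal sketch of BIP9-style version-bit signaling.
TOP_BITS_MASK = 0xE0000000
TOP_BITS = 0x20000000  # top three bits must be 001 for BIP9 semantics

def signals_deployment(version: int, bit: int) -> bool:
    """True if this block version uses BIP9 semantics and sets `bit`."""
    return (version & TOP_BITS_MASK) == TOP_BITS and bool(version & (1 << bit))

# A version with top bits 001 and bit 1 set signals deployment bit 1:
v = TOP_BITS | (1 << 1)
print(signals_deployment(v, 1))  # True
print(signals_deployment(v, 0))  # False
```

Because each deployment gets its own bit, several soft-forks can signal and activate independently in the same window, which is what removes the need to serialize them.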
What is proposed is a _soft-fork_ that increases Bitcoin's scalability
and capacity by reorganizing data in blocks to handle the signatures
separately, and in doing so takes them outside the scope of the current
blocksize limit. The particular proposal amounts to a 4MB blocksize
increase at worst.

The separation allows new security models, such as skipping downloading
data you're not going to check, and improved performance for lite
clients (especially ones with high privacy). The proposal also includes
fraud proofs, which make violations of the Bitcoin system provable with
a compact proof. This completes the vision of "alerts" described in the
"Simplified Payment Verification" section of the Bitcoin whitepaper, and
would make it possible for lite clients to enforce all the rules of the
system (under a new strong assumption that they're not partitioned from
someone who would generate the proofs).

The design has numerous other features, like making further enhancements
safer and eliminating signature malleability problems. If widely used
this proposal gives a 2x capacity increase (more if multisig is widely
used), but most importantly it makes that additional
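The "2x, more if multisig is widely used" claim follows from the witness discount: witness bytes count at a quarter of the rate of base bytes, so transaction types with a larger witness share gain more capacity. A back-of-the-envelope sketch (the byte counts are rough illustrative figures, not exact serializations):

```python
def virtual_size(base_bytes, witness_bytes):
    """Effective cost of a transaction under the segwit discount:
    base bytes count in full, witness bytes at one quarter."""
    return base_bytes + witness_bytes / 4

def capacity_gain(base_bytes, witness_bytes):
    """How many more transactions of this shape fit per block, compared
    to counting every byte equally against the limit."""
    full_size = base_bytes + witness_bytes
    return full_size / virtual_size(base_bytes, witness_bytes)

# Illustrative shapes: p2pkh, and a multisig-style tx whose witness
# (signatures + script) is a larger share of the total.
print(capacity_gain(178, 206))   # 2-in/2-out p2pkh
print(capacity_gain(190, 430))   # multisig-heavy: bigger witness, bigger gain
```

The larger the witness fraction, the closer the gain approaches the 4x discount ceiling, which is why multisig-heavy usage pushes the capacity increase above 2x.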
Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
On Mon, Dec 7, 2015 at 4:02 PM, Gregory Maxwell wrote:
> The Scaling Bitcoin Workshop in HK is just wrapping up. Many
> fascinating proposals were presented. I think this would be a good
> time to share my view of the near term arc for capacity increases in
> the Bitcoin system. I believe we're in a fantastic place right now and
> that the community is ready to deliver on a clear forward path with a
> shared vision that addresses the needs of the system while upholding
> its values.

ACK.

One of the interesting take-aways from the workshops for me has been
that there is a large discrepancy between what developers are doing and
what's more widely known. When I was doing initial research and work for
my keynote at the Montreal conference (
http://diyhpl.us/~bryan/irc/bitcoin/scalingbitcoin-review.pdf -- an
attempt at being exhaustive, prior to seeing the workshop proposals ),
what I was most surprised by was the discrepancy between what we think
is being talked about versus what has been emphasized or socially
processed (lots of proposals appear in text, but review efforts are
sometimes "hidden" in corners of github pull request comments, for
example). As another example, the libsecp256k1 testing work reached a
level unseen except perhaps in the aerospace industry, but these sorts
of details are not apparent if you are reading the bitcoin-dev archives.

It is very hard to listen to all ideas and find great ideas. Sometimes
our time can be almost completely exhausted by evaluating inefficient
proposals, so it's not surprising that rough consensus building could
take time. I suspect we will see consensus moving in positive directions
around the proposals you have highlighted.

When Satoshi originally released the Bitcoin whitepaper, practically
everyone -- somehow with the exception of Hal Finney -- didn't look,
because the cost of evaluating cryptographic system proposals is so high
and everyone was jaded and burned out from the past umpteen decades.
(I have IRC logs from January 10th, 2009, where I immediately dismissed
Bitcoin after I had seen its announcement on the p2pfoundation mailing
list; perhaps in retrospect I should not let family tragedy so greatly
impact my evaluation of proposals...)

It's hard to evaluate these proposals. Sometimes it may feel like random
proposals are review-resistant, or designed to burn our time up. But I
think this is more reflective of the simple fact that consensus takes
effort, and it's hard work, and this is to be expected in this sort of
system design.

Your email contains a good summary of recent scaling progress and of
efforts presented at the Hong Kong workshop. I like summaries. I have
previously recommended making more summaries and posting them to the
mailing list. In general, it would be good if developers were to write
summaries of recent work and efforts and post them to the bitcoin-dev
mailing list. BIP drafts are excellent. Long-term proposals are
excellent. Short-term coordination happens over IRC, and that makes
sense to me. But I would point out that many of the developments even
from, say, the Montreal workshop were notably absent from the mailing
list. Unless someone was paying close attention, they wouldn't have
noticed some of those efforts which, in some cases, haven't been
mentioned since. I suspect most of this is a matter of attention, review
and keeping track of loose ends, which can be admittedly difficult.

Short (or even long) summaries in emails are helpful because they
increase the ability of the community to coordinate and figure out
what's going on. Often I will write an email that summarizes some
content simply because I estimate that I am going to forget the details
in the near future, and if I am going to forget them then it seems
likely that others might as well. This creates a broad base of proposals
and content to build from when we're doing development work in the
future, making for a much richer community as a consequence.
The contributions from the scalingbitcoin.org workshops are a welcome
addition, and the proposal outlined in the above email contains a good
summary of recent progress. We need more of this sort of synthesis;
we're richer for it. I am excitedly looking forward to the impending
onslaught of Bitcoin progress.

- Bryan
http://heybryan.org/
1 512 203 0507
[bitcoin-dev] Coalescing Transactions BIP Draft
I made a post a few days ago where I laid out a scheme for implementing
"coalescing transactions" using a new opcode. I have since come to the
realization that an opcode is not the best way to do this. A much better
approach, I think, is a new "transaction type" field that is split off
from the version field. Other uses can come out of this type field;
wildcard inputs is just the first one.

There are two unresolved issues.

First, there might need to be a limit on how many inputs are included in
the "coalesce". Let's say you have an address that has 100,000,000
UTXOs. If you were to coalesce them all into one single input, that
means that every node has to iterate over all 100,000,000 of those
UTXOs, which could take a long time. But then again, the total number of
inputs a wildcard can cover is limited to the actual number of UTXOs in
the pool, which is very much a finite/constrained number. One solution
is to limit all wildcard inputs to, say, 10,000 items. If you have more
inputs than you want coalesced in one go, you have to do it in
10,000-item chunks, starting from the beginning. I want wildcard inputs
to look as much like normal inputs as possible to facilitate
implementation, so I don't think embedding a "max search" inside the
transaction is the best idea. I think if there is going to be a limit,
it should be implied.

The other issue is limiting wildcard inputs to inputs that have been
confirmed for a fixed number of blocks. Sort of like how a coinbase
output has to reach a certain age before it can be spent, maybe wildcard
inputs should only work on inputs older than a certain block age.
Someone brought up in the last thread that re-orgs can cause problems. I
don't quite see how that could happen, as re-orgs don't really affect
address balances, only block header values, which coalescing
transactions have nothing to do with.
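The two rules above -- an implied per-wildcard cap, and a minimum confirmation depth -- can be sketched together. This is purely hypothetical selection logic for the proposal as described; the 10,000 cap comes from the text, while the depth value, the dict shape, and the "oldest first" ordering are illustrative assumptions.

```python
# Hypothetical sketch of the wildcard-input selection described above.
MAX_WILDCARD = 10_000  # implied cap from the text; not in the transaction itself
MIN_DEPTH = 100        # illustrative confirmation-depth requirement

def coalescable_utxos(utxos, tip_height):
    """Select the UTXOs one wildcard input would sweep: only outputs
    buried at least MIN_DEPTH blocks deep, taken oldest-first ("starting
    from the beginning"), at most MAX_WILDCARD per wildcard input."""
    eligible = [u for u in utxos if tip_height - u["height"] + 1 >= MIN_DEPTH]
    eligible.sort(key=lambda u: u["height"])
    return eligible[:MAX_WILDCARD]

utxos = [{"txid": f"tx{i}", "height": h} for i, h in enumerate([10, 500, 990])]
# At tip height 1000, the output at height 990 is too recent to sweep:
print([u["txid"] for u in coalescable_utxos(utxos, tip_height=1000)])  # ['tx0', 'tx1']
```

Because every node applies the same implied cap and ordering, all nodes would deterministically agree on which UTXOs a given wildcard input consumes, without any per-transaction "max search" field.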
Here is the draft:
https://github.com/priestc/bips/blob/master/bip-coalesc-wildcard.mediawiki