Hi Matt, I agree that starting a discussion on how to approach this problem is necessary; it's difficult to take positions without details on what is being discussed.
A simple hard 20-megabyte increase will likely create perverse incentives; perhaps a method can exist with some safe transition. Ultimately, I think the underlying tension in this discussion is about the relative power of miners. Any blocksize-increase transition will increase the influence of miners, and the question is understanding the tradeoffs of each possible approach.

On Thu, May 07, 2015 at 10:02:09PM +0000, Matt Corallo wrote:
> * I'd like to see some better conclusions to the discussion around
> long-term incentives within the system. If we're just building Bitcoin
> to work in five years, great, but if we want it all to keep working as
> subsidy drops significantly, I'd like a better answer than "we'll deal
> with it when we get there" or "it will happen, all the predictions based
> on people's behavior today say so" (which are hopefully invalid thanks
> to the previous point). Ideally, I'd love to see some real fee pressure
> already on the network starting to develop when we commit to hardforking
> in a year. Not just full blocks with some fees because wallets are
> including far greater fees than they really need to, but software which
> properly handles fees across the ecosystem, smart fee increases when
> transactions arent confirming (eg replace-by-fee, which could be limited
> to increase-in-fees-only for those worried about double-spends).

I think the long-term fee incentive structure needs to be significantly more granular. We've all seen miners and pools take the path of least resistance; often they just blindly do whatever the community tells them. While this status quo can change in the future, I think designing sane defaults is a good path for any possible transition. It seems especially reasonable to maintain fee pressure for normal transactions during a hard-fork transition. It's possible to do so using some kind of soft-cap structure.
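To make the soft-cap idea concrete, here is a minimal illustrative sketch (not Bitcoin Core code; all names and numbers are hypothetical): consensus enforces only the raised hard cap, while a default miner policy keeps building blocks under 1MB, preserving fee pressure:

```python
# Hypothetical sketch: a default soft cap beneath a raised consensus hard cap.
# The 20 MB figure is the hard-fork size under discussion; the 1 MB soft cap
# is the suggested default miner policy.

HARD_CAP = 20_000_000   # bytes; consensus rule after a hypothetical hard fork
SOFT_CAP = 1_000_000    # bytes; default mining policy, preserving fee pressure

def consensus_valid(block_size: int) -> bool:
    """Every full node enforces only the hard cap."""
    return block_size <= HARD_CAP

def default_policy_accepts(block_size: int) -> bool:
    """A default-configured miner builds blocks only up to the soft cap."""
    return block_size <= SOFT_CAP
```

Under this sketch, a 5 MB block would be consensus-valid but would not be produced by default-configured miners, so ordinary transactions still compete on fees.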
Building in a default soft-cap of 1 megabyte for some far-future scheduled fork would seem like a sane thing to do for bitcoin-core. It also seems viable to be far more aggressive. What's your (and the community's) opinion on some kind of coinbase voting protocol for soft-cap enforcement? It's possible to write messages into the coinbase for an enforceable soft-cap that orphans any block which violates these rules. It seems safest to have the first hardforked block be above 1MB, with the next block defaulting to an enforced 1MB cap; if miners agree to go above this, they must vote in their coinbase to do so. There's a separate discussion about this starting on: cae-z3oxnjayluehbu0hdwu5pkrj6fpj7yptgbmq7hkxg3sj...@mail.gmail.com

Defaulting to some kind of mechanism that reads the coinbase seems like a good idea; left alone, miners may not do so. That way, it's possible to have your cake and eat it too: fee pressure will still exist, while block sizes can increase (provided it's in the miners' greater interests to do so).

The Lightning Network's security model in the long term may rely on a multi-tier soft-cap, but I'm not sure. If 2nd-order systemic miner incentives were not a concern, a system which has an enforced soft-cap and permits breaching that soft-cap with some agreed-upon much higher fee would work best. LN works without this, but it seems to be more secure if some kind of miner consensus rule is reached regarding prioritizing the behavior of 2nd-layer consensus states. No matter how it's done, certain aspects of the security model of something like Lightning are reliant upon having block-space availability for transactions to enter into the blockchain in a timely manner (since "deprecated" channel states become valid again after some agreed-upon block-time).

I think pretty much everyone agrees that the 1MB block cap will eventually be a problem.
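One way the coinbase-voting idea could look, as a hedged sketch (the window size, threshold, and vote encoding are all illustrative assumptions, not a proposal from the thread): miners signal a proposed cap in their coinbase, and a block above the enforced soft cap is orphaned unless a supermajority of recent coinbases voted to raise it.

```python
# Hypothetical sketch of coinbase voting for soft-cap enforcement.
# A block larger than the enforced soft cap is orphaned unless enough
# recent coinbases carried a "raise" vote. Window/threshold are made up.

SOFT_CAP = 1_000_000      # bytes; default enforced soft cap
VOTE_WINDOW = 1000        # look back over the last 1000 coinbases
VOTE_THRESHOLD = 0.75     # fraction of coinbases that must vote to raise

def soft_cap_for(coinbase_votes):
    """coinbase_votes: list of (voted_to_raise: bool, proposed_cap: int)
    parsed from the last VOTE_WINDOW coinbases. Returns the enforced cap."""
    raises = [cap for voted, cap in coinbase_votes if voted]
    if len(raises) / max(len(coinbase_votes), 1) >= VOTE_THRESHOLD:
        # Enforce the most conservative proposal among the raising votes.
        return min(raises)
    return SOFT_CAP

def block_orphaned(block_size, coinbase_votes):
    """A block violating the currently voted soft cap is orphaned."""
    return block_size > soft_cap_for(coinbase_votes)
```

For example, if 800 of the last 1000 coinbases voted to raise the cap to 2MB, a 1.5MB block would be accepted; with no votes, the same block would be orphaned under the 1MB default.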
While people may disagree about when that will be and how it will play out, I think we're all in agreement that discussing it is a good idea, especially when it comes to resolving blocking concerns. Starting a discussion on how a hypothetical blocksize increase would occur, and on the necessary blocking/nice-to-have features and tradeoffs, seems like a great way to approach this problem.

The needs of the Lightning Network may be best met by being able to prioritize a large mass of timeout transactions at once (when a well-connected node stops communicating).

--
Joseph Poon

_______________________________________________
Bitcoin-development mailing list
https://lists.sourceforge.net/lists/listinfo/bitcoin-development