> If you don't like the reduction of the block subsidy, well that's a much 
> bigger problem.
It is reversible, because you can also increase the block subsidy with another
kind of soft-fork. For example, you can create spendable outputs with zero
satoshis. Old nodes will accept those silently, but new nodes can check
something more, because the "real" amount can be committed to somewhere else.
If all nodes eventually upgrade, you end up with a network where every
transaction spends zero-satoshi inputs, creates zero-satoshi outputs, and pays
zero fees as far as old nodes can tell. Old nodes would accept all of that, but
new nodes would see what is really going on: they would check that all the new
rules are met and that the subsidy is, for example, increased 1000x (which
could lead to the same situation as moving from satoshis to millisatoshis with
some hard-fork, but doing that kind of change with a soft-fork is safer).
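To make the mechanics concrete, here is a deliberately simplified sketch of
the idea. The field names and the location of the commitment are placeholders
for illustration, not a concrete proposal or any real Bitcoin Core structure:

    # Hypothetical sketch of the zero-satoshi soft-fork idea above. The
    # field names and the commitment location are illustrative only.

    def old_node_value(txout):
        # A pre-fork node only sees the legacy value field: zero satoshis.
        return txout["nominal_value"]  # always 0 under the new rules

    def new_node_value(txout):
        # An upgraded node reads the "real" amount (denominated in, say,
        # millisatoshis) from an extra commitment that old nodes never
        # validate.
        return txout["committed_value"]

    def new_node_check(tx):
        # Upgraded nodes enforce the stricter rules: legacy values must be
        # zero and committed amounts must balance (no hidden inflation).
        assert all(o["nominal_value"] == 0 for o in tx["outputs"])
        in_total = sum(i["committed_value"] for i in tx["inputs"])
        out_total = sum(o["committed_value"] for o in tx["outputs"])
        assert in_total >= out_total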
On 2021-12-31 10:35:06, Keagan McClelland via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> But whether or not it is a basic principle of general software engineering
> kind of misses the point. Security critical software clearly isn't
> engineered in the same way as a new social media app. Bugs are easily
> reverted in a new social media app. On top of that we aren't just dealing
> with security critical software. One of the most important objectives is to
> keep all the nodes on the network in consensus. Introducing a consensus
> change before we are comfortable there is community consensus for it is
> effectively a massive bug in itself. The network can split in multiple ways,
> e.g. part of the network disagrees on whether to activate the consensus
> change, part of the network disagrees on how to resist that consensus
> change, part of the network disagrees on how to activate that consensus
> change, etc.
 
>  A consensus change is extremely hard to revert and probably requires a hard 
>fork, a level of central coordination we generally attempt to avoid and a 
>speed of deployment that we also attempt to avoid.
 
This seems to assert that all soft forks are the same: they are not. For
instance, a soft fork lowering the block subsidy is completely different from
changing the semantics of an OP_NOP so that it may reject a subset of the
witnesses that attest to a transaction's permissibility. As a result,
reversion means two entirely different things in these contexts. While a
strict reversion of either soft fork is by definition a hard fork, the need to
revert in response to undesired behavior is not the same. In the case of
opcodes, there is almost never a requirement to revert: if you don't like the
way the opcodes behave, you just don't use them. If you don't like the
reduction of the block subsidy, well, that's a much bigger problem.
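To illustrate that asymmetry, here is a rough sketch modeled loosely on how
BIP65 redefined OP_NOP2 as OP_CHECKLOCKTIMEVERIFY. This is not the real script
interpreter, just the shape of the rule change:

    # Old rule: OP_NOP2 does nothing, so every witness that reaches it
    # remains valid.
    def old_node_execute_nop2(stack, tx):
        return True

    # New rule: the opcode imposes an extra condition, rejecting a subset
    # of previously valid witnesses. Everything the new rule accepts, the
    # old rule also accepts -- hence a soft fork, and "reverting" it in
    # practice just means nobody uses the opcode.
    def new_node_execute_nop2(stack, tx):
        if not stack:
            return False
        required_locktime = int.from_bytes(stack[-1], "little")
        return tx["locktime"] >= required_locktime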
 
I make this point to elucidate the idea that we cannot treat SoftForks™ as a
single monolithic idea. Perhaps we need to come up with better terminology to
be specific about what each fork actually is. The soft vs. hard distinction is
a critical one, but it is not enough: it lumps noninvasive soft forks, such as
OP_NOP tightenings, together with far more invasive ones. Finer-grained
terminology has been proposed before [1], and while I do not think the terms
cited there are necessarily complete, they demonstrate the low resolution of
our current terminology.
 
> Soft fork features can (and should) obviously be tested thoroughly on 
> testnet, signet, custom signets, sidechains etc on a standalone basis and a 
> bundled basis.
 
I vehemently disagree that any consensus changes should be bundled, especially 
when it comes to activation parameters. When we start to bundle things, we 
amplify the community resources needed to do review, not reduce them. I suspect 
your opinion here is largely informed by your frustration with the Taproot 
Activation procedure that you underwent earlier this year. This is 
understandable. However, let me present the alternative case. If we start to 
bundle features, the review of the features gets significantly harder. As the 
Bitcoin project scales, the ability of any one developer to understand the 
entire codebase declines. Bundling changes reduces the number of people who are 
qualified to review a particular proposal, and even worse, intimidates people 
who may be willing and able to review logically distinct portions of the 
proposal, resulting in lower amounts of review overall. This will likely have 
the opposite effect of what you seem to desire. BIP8 and BIP9 give us the 
ability to have multiple independent soft forks in flight at once. Choosing to 
bundle them instead makes little sense when we do not have to. Bundling them 
will inevitably degenerate into political horse trading and everyone will be 
worse off for it.
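For reference, this is exactly what the version-bits machinery is for: each
deployment signals on its own bit of the block version, so miners can signal
for any subset of pending forks independently. A minimal sketch, with made-up
deployment names and bit assignments:

    # BIP9-style signaling check; the deployments and bits here are
    # hypothetical, not real assignments.
    TOP_BITS_MASK = 0xE0000000
    TOP_BITS = 0x20000000  # top three version bits must be 001 to signal

    DEPLOYMENTS = {"fork_a": 0, "fork_b": 1, "fork_c": 2}

    def signals(block_version, deployment):
        # Each deployment reads only its own bit, so several independent
        # soft forks can be in flight at once without bundling.
        if (block_version & TOP_BITS_MASK) != TOP_BITS:
            return False
        return bool(block_version & (1 << DEPLOYMENTS[deployment]))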
 
> part of the network disagrees on whether to activate the consensus change, 
> part of the network disagrees on how to resist that consensus change, part of 
> the network disagrees on how to activate that consensus change etc
 
Disagreements, and by extension forks, are a part of Bitcoin. What is
important is that they are well defined and clean. This is the reason the
mandatory signaling period exists in BIP8/9: clients that intend to reject the
soft fork change have a very easy means of doing so in a clean break where
consensus is clearly divergent. In accordance with this, consensus changes
should be sequenced so that people can decide which side of each fork they
want to follow and the economic reality can reorganize around that. If we
choose to bundle them instead, you get one of two outcomes: either consensus
atomizes into a mist where people have different ideas of which subsets of a
soft fork bundle they want to adopt, or what likely comes after is a
reconvergence on the old client with none of the soft fork rules in place. The
latter will lead to significantly more confusion, given that with sufficient
miner support some of the rules may stick anyway even if the rest of the user
base reconverges on the old client.
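To sketch what that clean break looks like under BIP8 with
lockinontimeout=true: during the MUST_SIGNAL window an upgraded node rejects
non-signaling blocks, so clients that reject the fork diverge at a
well-defined height instead of drifting apart ambiguously. The heights and the
bit below are illustrative, not real parameters:

    FORK_BIT = 2                 # hypothetical version bit for the fork
    MUST_SIGNAL_START = 800_000  # hypothetical retarget-period bounds
    MUST_SIGNAL_END = 802_016

    def block_valid_for_upgraded_node(height, version):
        # Inside the MUST_SIGNAL window, non-signaling blocks are invalid
        # to upgraded nodes; outside it, the legacy rules apply.
        if MUST_SIGNAL_START <= height < MUST_SIGNAL_END:
            return bool(version & (1 << FORK_BIT))
        return True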
 
It is quite likely less damaging to consensus to have frequent but strictly
sequenced soft forks, so that if one of the new rules is contentious the break
can happen cleanly. That said, if Core or any other client wishes to ship the
activation parameters for several forks in a single software release, that is
a significantly more palatable state of affairs, as you can still pipeline
signaling and activation. However, the protocol itself adopting a tendency to
activate unrelated proposals in bundles is a recipe for disaster.
 
 
Respectfully,
Keagan
 
 
[1] https://www.truthcoin.info/blog/protocol-upgrade-terminology
On Sat, Oct 16, 2021 at 12:57 PM Michael Folkson via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> Interesting discussion. Correct me if I'm wrong: but putting too many
> features together in one shot can only make things harder to debug in
> production if something very unexpected happens. It's a basic principle of
> software engineering.
 
Soft fork features can (and should) obviously be tested thoroughly on testnet,
signet, custom signets, sidechains etc on a standalone basis and a bundled
basis. But whether or not it is a basic principle of general software
engineering kind of misses the point. Security critical software clearly isn't
engineered in the same way as a new social media app. Bugs are easily reverted
in a new social media app. A consensus change is extremely hard to revert and
probably requires a hard fork, a level of central coordination we generally
attempt to avoid and a speed of deployment that we also attempt to avoid. On
top of that we aren't just dealing with security critical software. One of the
most important objectives is to keep all the nodes on the network in
consensus. Introducing a consensus change before we are comfortable there is
community consensus for it is effectively a massive bug in itself. The network
can split in multiple ways, e.g. part of the network disagrees on whether to
activate the consensus change, part of the network disagrees on how to resist
that consensus change, part of the network disagrees on how to activate that
consensus change, etc.
 
In addition, a social media app can experiment in production to learn whether
Feature A works, whether Feature B works, or whether Features A and B work
best together. In Bitcoin, if we activate consensus Feature A, later decide we
want consensus Feature B, and then find out that by previously activating
Feature A we can't have Feature B (it is now unsafe to activate), or that its
design now has to be suboptimal because we have to ensure it can safely work
in the presence of Feature A, then we made a mistake by activating Feature A
in the first place. Decentralized, security critical consensus changes are an
emerging field in themselves and really can't be treated like any other
software project. I'm sure this will become universally understood over time.
 
 
--
Michael Folkson
Email: michaelfolkson at protonmail.com
Keybase: michaelfolkson
PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
 
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, October 15th, 2021 at 1:43 AM, Felipe Micaroni Lalli via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:
Interesting discussion. Correct me if I'm wrong: but putting too many features
together in one shot can only make things harder to debug in production if
something very unexpected happens. It's a basic principle of software
engineering.
 
Change. Deploy. Nothing bad happened? Change it a little more. Deploy.
Or: change, change, change. Deploy. Did something bad happen? Which change
caused the problem?
On Thu, Oct 14, 2021 at 8:53 PM Anthony Towns via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:
On Mon, Oct 11, 2021 at 12:12:58PM -0700, Jeremy via bitcoin-dev wrote:
> > ... in this post I will argue against frequent soft forks with a single
> > or minimal set of features and instead argue for infrequent soft forks
> > with batches of features.
> I think this type of development has been discussed in the past and has been
> rejected.
> AJ: - improvements: changes might not make everyone better off, but we
>    don't want changes to screw anyone over either -- pareto
>    improvements in economics, "first, do no harm", etc. (if we get this
>    right, there's no need to make compromises and bundle multiple
>    flawed proposals so that everyone's an equal mix of happy and
>    miserable)
I don't think your conclusion above matches my opinion, for what it's
worth.
If you've got two features, A and B, where the game theory is:
 If A happens, I'm +100, You're -50
 If B happens, I'm -50, You're +100
then even though A+B is +50, +50, I do think the answer should
generally be "think harder and come up with better proposals" rather than
"implement A+B as a bundle that makes us both +50".
_But_ if the two features are more like:
  If C happens, I'm +100, You're +/- 0
  If D happens, I'm +/- 0, You're +100
then I don't have a problem with bundling them together as a single
simultaneous activation of both C and D.
Also, you can have situations where things are better together,
that is:
  If E happens, we're both at +100
  If F happens, we're both at +50
  If E+F both happen, we're both at +9000
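(To spell out the bookkeeping with the numbers above -- a toy sketch,
nothing more:)

    payoffs = {
        "A": (+100, -50), "B": (-50, +100),  # redistributive pair
        "C": (+100, 0),   "D": (0, +100),    # pareto improvements
    }

    def bundle(*features):
        # Sum each party's payoff across the bundled features.
        return tuple(map(sum, zip(*(payoffs[f] for f in features))))

    assert bundle("A", "B") == (50, 50)    # positive in aggregate, yet each
                                           # component screws one side over
    assert bundle("C", "D") == (100, 100)  # no one is made worse off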
In general, I think combining proposals when the combination is better
than the individual proposals were is obviously good; and combining
related proposals into a single activation can be good if it is easier
to think about the ideas as a set.
It's only when you'd be rejecting the proposal on its own merits that
I think combining it with others is a bad idea in principle.
For specific examples, we bundled schnorr, Taproot, MAST, OP_SUCCESSx
and CHECKSIGADD together because they do have synergies like that; we
didn't bundle ANYPREVOUT and graftroot despite the potential synergies
because those features needed substantially more study.
The nulldummy soft-fork (BIP 147) was deployed concurrently with
the segwit soft-fork (BIPs 141, 143), but I don't think there was any
particular synergy or need for those things to be combined; it just
reduced the overhead of two sets of activation signalling to one.
Note that the implementation code for nulldummy had already been merged
and applied as relay policy well before activation parameters were
defined (May 2014 via PR#3843 vs Sep 2016 for PR#8636), let alone before
it became an active soft fork.
Cheers,
aj
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev