Then I would suggest working on payment channel networks. No decrease in
the interblock time will ever compete with the approximately instant time
it takes to validate a micropayment channel payment.
On Fri, Aug 7, 2015 at 4:08 PM, Sergio Demian Lerner
sergio.d.ler...@gmail.com wrote:
In some rare
I would like very much to know how it is that we're supposed to be making
money off of lightning, and therefore how it represents a conflict of
interest. Apparently there is tons of money to be made in releasing
open-source protocols! I would hate to miss out on that.
We are working on lightning
Levin, it is a complicated issue for which there isn't an easy answer. Part
of the issue is that block size doesn't actually measure resource usage
very reliably. It is possible to support a much higher volume of typical
usage transactions than transactions specifically constructed to cause DoS
No, the nVersion would be >= 4, so that we don't waste any version values.
On Thu, Aug 20, 2015 at 10:32 AM, jl2012 via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
On 2015-08-19 01:50, Peter Todd via bitcoin-dev wrote:
2) nVersion mask, with IsSuperMajority()
In this option the
You are absolutely correct! My apologies for the oversight in editing. If
you could dig up the link though that would be really helpful.
On Tue, Aug 18, 2015 at 6:04 PM, Peter Todd via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
On Tue, Aug 18, 2015 at 02:22:10AM +0100, Thomas
We can use nVersion 0x8 to signal support, while keeping the consensus
rule as nVersion >= 4, right? That way we don't waste a bit after this all
clears up.
On Aug 18, 2015 10:50 PM, Peter Todd via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
Deployment of the proposed CLTV, CSV,
A power of 2 would be far more efficient here. The key question is how long
of a relative block time do you need? Figure out what the maximum should be
( I don't know what that would be, any ideas?) and then see how many bits
you have left over.
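To make that trade-off concrete, here is a quick back-of-the-envelope calculation (my own illustration; the candidate maximums are hypothetical, not values from the thread):

```python
import math

def bits_needed(max_value):
    """Bits required to represent every value in 0..max_value."""
    return max(1, math.ceil(math.log2(max_value + 1)))

# Illustrative maximum relative lock-times, in blocks at a 10-minute target:
for label, blocks in [("1 week", 7 * 144), ("1 month", 30 * 144), ("1 year", 365 * 144)]:
    print(f"{label}: {blocks} blocks -> {bits_needed(blocks)} bits")
```

A power-of-2 maximum then falls out naturally: whatever bits are left over after the flag bits define the largest representable lock-time.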
On Aug 23, 2015 7:23 PM, Jorge Timón
Does it matter even in the slightest why the block size limit was put in
place? It does not. Bitcoin is a decentralized payment network, and the
relationship between utility (block size) and decentralization is
empirical. Why the 1MB limit was put in place at the time might be a
historically
Actually I gave a cached answer earlier which on further review may need
updating. (Bad Mark!)
I presume that by "what's more likely to matter is seconds" you are referencing
point of sale. As you mention yourself, lightning network or green address
style payment escrow obviates the need for short
On Mon, Aug 10, 2015 at 10:34 PM, Thomas Zander via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
So, while LN is written, rolled out and tested, we need to respond with
bigger
blocks. 8Mb - 8Gb sounds good to me.
This is where things diverge. It's fine to pick a new limit or
On Mon, Aug 10, 2015 at 11:31 PM, Thomas Zander via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
On Monday 10. August 2015 23.03.39 Mark Friedenbach wrote:
This is where things diverge. It's fine to pick a new limit or growth
trajectory. But defend it with data and reasoned
Surely you have some sort of empirical measurement demonstrating the
validity of that statement? That is to say you've established some
technical criteria by which to determine how much centralization pressure
is too much, and shown that Pieter's proposal undercuts expected progress
in that area?
Please don't put words into Pieter's mouth. I guarantee you everyone
working on Bitcoin in their heart of hearts would prefer everyone in the
world being able to use the Bitcoin ledger for whatever purpose, if there
were no cost.
But like any real world engineering issue, this is a matter of
Tom, you appear to be misunderstanding how lightning network and
micropayment hub-and-spoke models in general work.
But neither can Bob receive money, unless the payment hub has
advanced it to the channel (or (2) below applies). Nothing requires the
payment hub to do this.
On the contrary the
Adam, there is really no justification I can see to lower the interblock
interval on the Bitcoin blockchain, primarily due to the effects of network
latency. Lowering the interblock interval and raising the block size are
not equal alternatives - you can always get more throughput in bitcoin by
So I've created 2 new repositories with changed rules regarding
sequencenumbers:
https://github.com/maaku/bitcoin/tree/sequencenumbers2
This repository inverts (un-inverts?) the sequence number. nSequence=1
means 1 block relative lock-height. nSequence=LOCKTIME_THRESHOLD means 1
second relative
To follow up on this, let's say that you want to be able to have up to 1
year relative lock-times. This choice is somewhat arbitrary and what I
would like some input on, but I'll come back to this point.
* 1 bit is necessary to enable/disable relative lock-time.
* 1 bit is necessary to
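For concreteness, the bit budget for a 1-year maximum works out as follows (my own arithmetic; the 512-second line shows one possible coarser granularity, not something fixed in the text above):

```python
import math

BLOCKS_PER_YEAR = 365 * 144            # 52,560 blocks at a 10-minute target
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds

print(math.ceil(math.log2(BLOCKS_PER_YEAR)))          # 16 bits, block-based
print(math.ceil(math.log2(SECONDS_PER_YEAR)))         # 25 bits at 1-second granularity
print(math.ceil(math.log2(SECONDS_PER_YEAR // 512)))  # 16 bits at 512-second granularity
```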
Are you aware of the payment protocol?
On Sep 10, 2015 2:12 PM, "essofluffy . via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Hi Everyone,
>
> An issue I'm sure everyone here is familiar with is the problem concerning
> the fact that Bitcoin addresses are too complex to
Note that this violates present assumptions about transaction validity,
unless a constraint also exists that any output of such an expiry block is
not spent for at least 100 blocks.
Do you have a clean way of ensuring this?
On Thu, Sep 17, 2015 at 2:41 PM, jl2012 via bitcoin-dev <
Correction of a correction, in-line:
On Wed, Sep 16, 2015 at 5:51 PM, Matt Corallo via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> > - Many interested or at least willing to accept a "short term bump", a
> > hard fork to modify block size limit regime to be cost-based via
> >
You don't need to appeal to human psychology. At 75% threshold, it takes
only 25.01% of the hashpower to report but not actually enforce the fork to
cause the majority hashpower to remain on the old chain, but for upgraded
clients to start rejecting the old chain. With 95% the same problem exists
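The arithmetic behind that scenario can be checked directly (a sketch of the numbers in the paragraph above):

```python
threshold = 0.75          # reported signaling reaches the activation threshold
false_signalers = 0.2501  # signal support but do not actually enforce

enforcing = threshold - false_signalers  # hashpower really enforcing: ~0.4999
not_enforcing = 1.0 - enforcing          # hashpower mining under old rules: ~0.5001

# The old chain retains majority hashpower, yet upgraded clients reject it:
assert enforcing < not_enforcing
print(f"enforcing {enforcing:.4f} < not enforcing {not_enforcing:.4f} -> chain split")
```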
This mailing list was never meant to be a place "to hold the bitcoin
development community accountable for its actions [sic]." I know other
developers that have switched to digest-only or unsubscribed. I know if
this became a channel for PR and populist venting as you describe, I would
leave as
Agree with all CLTV and nVersionBits points. We should deploy a lock-time
soft-fork ASAP, using the tried and true IsSuperMajority test.
However your information regarding BIPs 68 (sequence numbers), 112
(checksequenceverify) and 113 (median time past) is outdated. Debate
regarding semantics has
Alex, decreasing granularity is a soft-fork, increasing it is a hard-fork.
Therefore I've kept the highest possible precision (1 second, 1 block), with
the expectation that at some point in the future, if we need more low-order
bits, we can soft-fork them to other purposes by decreasing granularity
Replying to this specific email only because it is the most recent in my
mail client.
Does this conversation have to happen on-list? It seems to have wandered
incredibly far off-topic.
On Sun, Sep 20, 2015 at 5:25 AM, Mike Hearn via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>
The builds made by Travis are for the purpose of making sure that the
source code compiles and tests run successfully on all supported platforms.
The binaries are not used anywhere else because Travis is not a trusted
platform.
The binaries on bitcoin.org are built using the gitian process and
Well the gitian builds are made available on bitcoin.org. If you mean a
build server where gitian builds are automatically done and made available,
well that rather defeats the point of gitian.
The quorum signatures are accumulated here:
https://github.com/bitcoin/gitian.sigs (it's a manual
It is in their individual interests when the larger block that is allowed
for them grants them more fees.
On Aug 28, 2015 4:35 PM, Chris Pacia via bitcoin-dev
bitcoin-dev@lists.linuxfoundation.org wrote:
When discussing this with Matt Whitlock earlier we basically concluded the
block size will
I realize now that this is not what Greg Maxwell proposed (aka
flexcap): this is just miners' voting
My apologies for the apparent miscommunication earlier. It is of interest
to me that the soft-fork be done which is necessary to put a commitment in
the most efficient spot possible, in part because that commitment could be
used for other data such as the merged mining auxiliary blocks, which are
Greg, if you have actual data showing that putting the commitment in the
last transaction would be disruptive, and how disruptive, that would be
appreciated. Of the mining hardware I have looked at, none of it cared at
all what transactions other than the coinbase are. You need to provide a
path
There are many reasons to support segwit beyond it being a soft-fork. For
example:
* the limitation of non-witness data to no more than 1MB makes the
quadratic scaling costs in large transaction validation no worse than they
currently are;
* redeem scripts in witness use a more accurate cost
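A rough model of the first bullet's quadratic-hashing point (the 180-byte per-input size is an illustrative assumption, not a protocol constant):

```python
def legacy_sighash_bytes(n_inputs, input_size=180):
    """Each input's signature hashes (roughly) the whole transaction: O(n^2)."""
    tx_size = n_inputs * input_size
    return n_inputs * tx_size

def segwit_sighash_bytes(n_inputs, input_size=180):
    """BIP143-style hashing amortizes to (roughly) constant work per input: O(n)."""
    return n_inputs * input_size

for n in (10, 100, 1000):
    print(n, legacy_sighash_bytes(n), segwit_sighash_bytes(n))
```

Capping non-witness data at 1MB therefore caps the worst-case quadratic term at today's level, while witness inputs scale linearly.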
A segwit supporting server would be required to support relaying segwit
transactions, although a non-segwit server could at least inform a wallet
of segwit txns observed, even if it doesn't relay all information necessary
to validate.
Non segwit servers and wallets would continue operations as if
I am fully in support of the plan laid out in "Capacity increases for the
bitcoin system".
This plan provides real benefit to the ecosystem in solving a number of
longstanding problems in bitcoin. It improves the scalability of bitcoin
considerably.
Furthermore it is time that we stop
Not entirely correct, no. Edge cases also matter. Segwit is described as
4MB because that is the largest possible combined block size that can be
constructed. BIP 102 + segwit would allow a maximum relay of 8MB. So you
have to be confident that an 8MB relay size would be acceptable, even if a
Looks like I'm the long dissenting voice here? As the originator of the
name CHECKSEQUENCEVERIFY, perhaps I can explain why the name was
appropriately chosen and why the proposed alternatives don't stand up.
First, the names are purposefully chosen to illustrate what they do:
What does
Wouldn't this be entirely on topic for #bitcoin-dev? It's probably better
not to fragment the communication channels and associated infrastructure
(logs, bots, etc.)
On Tue, Jan 19, 2016 at 3:54 AM, Eric Lombrozo via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Hello, folks.
>
>
Greg,
If I understand correctly, the crux of your argument against BIP148 is that
it requires the segwit BIP9 activation flag to be set in every block after
Aug 1st, until segwit activates. This will cause miners which have not
upgraded and indicated support for BIP141 (the segwit BIP) to find
Jonas, I think his proposal is to enable extending the P2P layer, e.g.
adding new message types. Are you suggesting having externalized
message processing? That could be done via RPC/ZMQ while opening up a
much more narrow attack surface than dlopen, although I imagine such
an interface would
A fun exercise to be sure, but perhaps off topic for this list?
> On Aug 22, 2017, at 1:06 PM, Erik Aronesty via bitcoin-dev
> wrote:
>
> > The initial message I replied to stated:
>
> Yes, 3 years is silly. But coin expiration and quantum resistance is
Lock time transactions have been valid for over a year now I believe. In any
case we can't scan the block chain for usage patterns in UTXOs because P2SH
puts the script in the signature on spend.
> On Aug 22, 2017, at 4:29 PM, Thomas Guyot-Sionnest via bitcoin-dev
>
> > have
>>>> > to choose between BIP148 and Segwit2x if they want to activate Segwit.
>>>>
>>>> Miners can simply continue signaling segwit, which will leave them
>>>> at least soft-fork compatible with BIP148 and BIP91 (and god knows
>> start orphaning
>> their own blocks because they are failing to signal segwit.
>>
>> I don't think the rejection of segwit2x from Bitcoin's developers
>> could be any more resolute than what we've already seen:
>> https://en.bitcoin.it/wiki/Segwit_support
>>
It is essential that BIP-148 nodes connect to at least two other BIP-148 nodes
to prevent a network partition on August 1st. The temporary service bit is how
such nodes are able to detect each other.
> On Jun 19, 2017, at 12:46 PM, Tom Zander via bitcoin-dev
>
I think it is very naïve to assume that any shift would be temporary.
We have a hard enough time getting miners to proactively upgrade to
recent versions of the reference bitcoin daemon. If miners interpret
the situation as being forced to run non-reference software in order
to prevent a chain
The 1MB classic block size is not redundant after segwit activation.
Segwit prevents the quadratic hashing problems, but only for segwit
outputs. The 1MB classic block size prevents quadratic hashing
problems from being any worse than they are today.
Mark
On Tue, May 30, 2017 at 6:27 AM, Jorge
> On Sep 18, 2017, at 8:09 PM, Luke Dashjr <l...@dashjr.org> wrote:
>
> On Tuesday 19 September 2017 12:46:30 AM Mark Friedenbach via bitcoin-dev
> wrote:
>> After the main discussion session it was observed that tail-call semantics
>> could still be maint
There is no harm in the value being a maximum off by a few bytes.
> On Sep 22, 2017, at 2:54 PM, Sergio Demian Lerner
> wrote:
>
> If the variable size increase is only a few bytes, then three possibilities
> arise:
>
> - one should allow signatures to be zero
You generally know the witness size to within a few bytes right before signing.
Why would you not? You know the size of ECDSA signatures. You can be told the
size of a hash preimage by the other party. It takes some contriving to come up
with a scheme where one party has variable-length
> On Sep 22, 2017, at 1:32 PM, Sergio Demian Lerner
> wrote:
>
>
>
> There are other solutions to this problem that could have been taken
> instead, such as committing to the number of items or maximum size of
> the stack as part of the sighash data, but cleanstack
Over the past few weeks I've been explaining the MERKLEBRANCHVERIFY
opcode and tail-call execution semantics to a variety of developers,
and it's come to my attention that the BIPs presentation of the
concept is not as clear as it could be. Part of this is the fault of
standards documents being
> On Sep 19, 2017, at 10:13 PM, Johnson Lau wrote:
>
> If we don’t want this ugliness, we could use a new script version for every
> new op code we add. In the new BIP114 (see link above), I suggest to move the
> script version to the witness, which is cheaper.
To be clear, I
Bech32 and WIF payload format are mostly orthogonal issues. You can design a
new wallet import format now and later switch it to Bech32.
> On Sep 17, 2017, at 7:42 AM, AJ West via bitcoin-dev
> wrote:
>
> Hi I have a small interjection about the point on
As some of you may know, the MAST proposal I sent to the mailing list
on September 6th was discussed at the in-person CoreDev meetup in
San Francisco. In this email I hope to summarize the outcome of that
discussion. As Chatham House rules were in effect, I will refrain from
attributing names to
TL;DR I'll be updating the fast Merkle-tree spec to use a different
IV, using (for infrastructure compatibility reasons) the scheme
provided by Peter Todd.
This is a specific instance of a general problem where you cannot
trust scripts given to you by another party. Notice that we run
I would like to propose two new script features to be added to the
bitcoin protocol by means of soft-fork activation. These features are
a new opcode, MERKLE-BRANCH-VERIFY (MBV) and tail-call execution
semantics.
In brief summary, MERKLE-BRANCH-VERIFY allows script authors to force
redemption to
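The essence of a Merkle branch check can be sketched as follows. Note this uses plain double-SHA256 for illustration; the actual proposal specifies a distinct fast Merkle tree hash with different details:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double-SHA256, standing in for the proposal's hash function."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_branch(leaf: bytes, path, root: bytes) -> bool:
    """path: list of (sibling_hash, node_is_left) pairs from leaf to root."""
    node = h(leaf)
    for sibling, node_is_left in path:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

# Two-leaf example: the root commits to both leaves, the proof reveals only one.
a, b = h(b"leaf-a"), h(b"leaf-b")
root = h(a + b)
assert verify_branch(b"leaf-a", [(b, True)], root)
assert not verify_branch(b"leaf-x", [(b, True)], root)
```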
On Sep 12, 2017, at 1:55 AM, Johnson Lau wrote:
> This is ugly and actually broken, as different script path may
> require different number of stack items, so you don't know how many
> OP_TOALTSTACK do you need. Easier to just use a new witness version
DEPTH makes this relatively
> (I was schooled by Peter Todd by a similar issue in the past.)
>
>> On Wed, Sep 6, 2017 at 8:38 PM, Mark Friedenbach via bitcoin-dev
>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> Fast Merkle Trees
>> BIP: https://gist.github.com/maak
This article by Ron Lavi, Or Sattath, and Aviv Zohar was forwarded to
me and is of interest to this group:
"Redesigning Bitcoin's fee market"
https://arxiv.org/abs/1709.08881
I'll briefly summarize before providing some commentary of my own,
including transformation of the proposed
Only if your keys are online and the transaction is self-signed. It wouldn’t
let you pre-sign a transaction for a third party to broadcast and have it clear
at just the market rate in the future. Like a payment channel refund, for
example.
> On Sep 28, 2017, at 7:17 PM, Nathan Wilcox via
The CLEANSTACK rule should be eliminated, and instead the number of items on
the stack should be incorporated into the signature hash. That way any script
with a CHECKSIG is protected from witness extension malleability, and those
rare ones that do not use signature operations can have a “DEPTH
Clean stack should be eliminated for other possible future uses, the most
obvious of which is recursive tail-call for general computation capability. I’m
not arguing for that at this time, just arguing that we shouldn’t prematurely
cut off an easy implementation of such should we want to. Clean
I would also suggest that the 520 byte push limitation be removed for v1
scripts as well. MERKLEBRANCHVERIFY in particular could benefit from larger
proof sizes. To do so safely would require reworking script internals to use
indirect pointers and reference counting for items on stack, but this
> On Oct 1, 2017, at 12:41 PM, Russell O'Connor wrote:
>
> Creating a Bitcoin script that does not allow malleability is difficult and
> requires wasting a lot of bytes to do so, typically when handling issues
> around non-0-or-1 witness values being used with OP_IF,
> On Oct 1, 2017, at 12:05 PM, Russell O'Connor wrote:
>
> Given the proposed fixed signature size, It seems better to me that we create
> a SIGHASH_WITNESS_WEIGHT flag as opposed to SIGHASH_WITNESS_DEPTH.
For what benefit? If your script actually uses all the items on
> On Oct 1, 2017, at 2:32 PM, Johnson Lau wrote:
>
> So there are 3 proposals with similar goal but different designs. I try to
> summarise some questions below:
>
> 1. How do we allow further upgrade within v1 witness? Here are some options:
> a. Minor version in witness.
This is correct. Under the assumption of a continuous mempool model, however,
this should be considered the outlier behavior, other than a little bit of
empty space at the end now and then. A maximum fee rate calculated as a filter
over past block rates could constrain this outlier behavior from
> On Sep 28, 2017, at 7:02 PM, Peter Todd <p...@petertodd.org> wrote:
>
>> On Thu, Sep 28, 2017 at 06:06:29PM -0700, Mark Friedenbach via bitcoin-dev
>> wrote:
>> Unlike other proposed fixes to the fee model, this is not trivially
>> broken by paying th
While there is a lot that I would like to comment on, for the moment I will
just mention that you should consider using the 17 bit relative time format
used in CSV as an offset from the birthdate of the address, a field all
addresses should also have.
This would also mean that addresses cannot
First, there’s been no discussion so far for address expiration to be part of
“the protocol” which usually means consensus rules or p2p. This is purely about
wallets and wallet information exchange protocols.
There’s no way for the sender to know whether an address has been used without
a
> On Aug 28, 2017, at 8:29 AM, Alex Nagy via bitcoin-dev
> wrote:
>
> If Alice gives Bob 1MsHWS1BnwMc3tLE8G35UXsS58fKipzB7a, is there any way Bob
> can safely issue Native P2WPKH outputs to Alice?
>
No, and the whole issue of compressed vs uncompressed
The problem of fast acting but non vulnerable difficulty adjustment algorithms
is interesting. I would certainly like to see this space further explored, and
even have some ideas myself.
However without commenting on the technical merits of this specific proposal, I
think it must be said
Here’s an additional (uncontroversial?) idea due to Russell O’Connor:
Instead of requiring that the last item popped off the stack in a CHECKMULTISIG
be zero, have it instead be required that it is a bitfield specifying which
pubkeys are used, or more likely the complement thereof. This allows
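A sketch of the suggested semantics (the bitfield encoding here is hypothetical; the post does not fix an exact format):

```python
def checkmultisig_bitfield(sigs, pubkeys, bitfield, verify):
    """Bitfield bit i set => pubkeys[i] is used (hypothetical encoding).

    Each signature is checked against exactly one key, so there is no
    trial-and-error matching and no dummy element is needed.
    """
    selected = [pk for i, pk in enumerate(pubkeys) if bitfield & (1 << i)]
    if len(selected) != len(sigs):
        return False
    return all(verify(sig, pk) for sig, pk in zip(sigs, selected))

verify = lambda sig, pk: sig == pk  # toy: a "signature" is valid iff equal to the key
assert checkmultisig_bitfield(["A", "C"], ["A", "B", "C"], 0b101, verify)
assert not checkmultisig_bitfield(["A", "C"], ["A", "B", "C"], 0b011, verify)
```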
> " - is this a correct
> interpretation?
>
> In fact, beyond a no, it seems like a "no, and I disagree with the idea of
> creating one".
>
> So if Bitcoin comes under successful 51%, the project, in your vision, has
> simply failed?
>
> Ben Kloester
>
>
> On Oct 12, 2017, at 3:40 AM, ZmnSCPxj via bitcoin-dev
> wrote:
>
> As most Core developers hodl vast amounts, it is far more likely that any
> hardfork that goes against what Core wishes will collapse, simply by Core
> developers acting in their
As good of an idea as it may or may not be to remove this feature from the code
base, actually doing so would be crossing a boundary that we have not
previously been willing to do except under extraordinary duress. The nature of
bitcoin is such that we do not know and cannot know what
protocol to object, which should be
> sufficient to reconsider such a soft-fork.
>
> Independently, making them non-standard is a good change on its own, and
> if nothing else should better inform discussion about the possibility of
> anyone using these things.
>
> Matt
>
> On 11/1
Why would I send you coins to anything other than the address you provided to
me? If you send me a bech32 address I use the native segwit scripts. If you
send me an old address, I do what it specifies instead. The recipient has
control over what type of script the payment is sent to, without
Sign-to-contract enables some interesting protocols, none of which are in wide
use as far as I’m aware. But if they were (and arguably this is an area that
should be more developed), then SPV nodes validating these protocols will need
access to witness data. If a node is performing IBD with
For what it’s worth, I think it would be quite easy to do better than the
implied solution of rejiggering the message signing system to support non-P2PKH
scripts. Instead, have the signature be an actual bitcoin transaction with
inputs that have the script being signed. Use the salted hash of
Addresses are entirely a user-interface issue. They don’t factor into the
bitcoin protocol at all.
The bitcoin protocol doesn’t have addresses. It has a generic programmable
signature framework called script. Addresses are merely a UI convention for
representing common script templates. 1..
To reiterate, none of the current work focuses on Bitcoin integration, and many
architectures are possible.
However the Jets would have to be specified and agreed to upfront for costing
reasons, and so they would be known to all validators. There would be no reason
to include anything more
Yes, if you use a witness script version you can save about 40 witness bytes by
templating the MBV script, which I think is equivalent to what you are
suggesting. 32 bytes from the saved hash, plus another 8 bytes or so from
script templates and more efficient serialization.
I believe the
So enthused that this is public now! Great work.
Sent from my iPhone
> On Oct 30, 2017, at 8:22 AM, Russell O'Connor via bitcoin-dev
> wrote:
>
> I've been working on the design and implementation of an alternative to
> Bitcoin Script, which I call
I was just making a factual observation/correction. This is Russell’s project
and I don’t want to speak for him. Personally I don’t think the particulars of
bitcoin integration design space have been thoroughly explored enough to
predict the exact approach that will be used.
It is possible to
Script versions makes this no longer a hard-fork to do. The script version
would implicitly encode which jets are optimized, and what their optimized cost
is.
> On Oct 30, 2017, at 2:42 PM, Matt Corallo via bitcoin-dev
> wrote:
>
> I admittedly haven't
Nit, but if you go down that specific path I would suggest making just
the jet itself fail-open. That way you are not so limited in requiring
validation of the full contract -- one party can verify simply that
whatever condition they care about holds on reaching that part of the
contract. E.g.
I don’t think you need to set an order of operations, just treat the jet as
TRUE, but don’t stop validation. Order of operations doesn’t matter. Either way
it’ll execute both branches and terminate of the understood conditions don’t
hold.
But maybe I’m missing something here.
> On Oct 31,
You could do that today, with one of the 3 interoperable Lightning
implementations available. Lowering the block interval on the other hand comes
with a large number of centralizing downsides documented elsewhere. And getting
down to 1sec or less on a global network is simply impossible due to
I have completed updating the three BIPs with all the feedback that I have
received so far. In short summary, here is an incomplete list of the changes
that were made:
* Modified the hashing function fast-SHA256 so that an internal node cannot be
interpreted simultaneously as a leaf.
* Changed
Every transaction is replace-by-fee capable already. Opt-in replace by fee as
specified in BIP 125 is a fiction that held sway only while the income from
fees or fee replacement was so much smaller than subsidy.
> On Dec 21, 2017, at 3:35 PM, Paul Iverson via bitcoin-dev
>
The use of the alt stack is a hack for segwit script version 0 which has the
clean stack rule. Anticipated future improvements here are to switch to a
witness script version, and a new segwit output version which supports native
MAST to save an additional 40 or so witness bytes. Either approach
The downsides could be mitigated somewhat by only making the dual
interpretation apply to outputs older than a cutoff time after the activation
of the new feature. For example, five years after the initial activation of the
sigagg soft-fork, the sigagg rules will apply to pre-activation UTXOs
> On Jan 22, 2018, at 11:01 AM, Ilan Oh via bitcoin-dev
> wrote:
>
> The chain with the most mining power will tend to have more value.
I believe you have the causality on that backwards. The tokens which are worth
more will attract more mining
However users can’t know with any certainty whether transactions will “age out”
as indicated, since this is only relay policy. Exceeding the specified timeout
doesn’t prevent a miner from including it in the chain, and therefore doesn’t
really provide any actionable information.
> On Dec 28,
>
> (Of course, signed messages will verify better usually with plain text and
> not HTML interpreted email - need a switch for outlook.com to send plaintext.)
> From: bitcoin-dev-boun...@lists.linuxfoundation.org
> <bitcoin-dev-boun...@lists.linu
I had the opposite response in private, which I will share here. As recently as
Jan 9th feedback on BIP 117 was shared on this list by Pieter Wuille and others
suggesting we adopt native MAST template instead of the user programmable
combination of BIPs 116 and 117. Part of my response then
When Satoshi wrote the first version of bitcoin, s/he made what was almost
certainly an unintentional mistake. A bug in the original CHECKMULTISIG
implementation caused an extra item to be popped off the stack upon completion.
This extra value is not used in any way, and has no consensus
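The off-by-one is easy to see in a toy stack model (my own illustration, not the real interpreter; signature matching is simplified):

```python
def op_checkmultisig(stack, verify):
    """Toy model of the original CHECKMULTISIG's stack behavior."""
    n = stack.pop()                            # pubkey count
    pubkeys = [stack.pop() for _ in range(n)]  # popped top-first
    m = stack.pop()                            # required signature count
    sigs = [stack.pop() for _ in range(m)]
    dummy = stack.pop()                        # the unintentional extra pop; never used
    it = iter(pubkeys)
    return all(any(verify(sig, pk) for pk in it) for sig in sigs)

# Script authors must therefore supply one extra, ignored item at the bottom:
verify = lambda sig, pk: sig == pk  # toy: a "signature" is valid iff equal to the key
stack = ["IGNORED-DUMMY", "pkA", 1, "pkA", "pkB", 2]
assert op_checkmultisig(stack, verify)
```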