Re: [bitcoin-dev] maximum block height on transaction

2021-04-15 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,


> I've come across this argument before, and it seems kind of like Satoshi's 
> word here is held as gospel. I haven't heard any deep discussion of this 
> topic, and I even asked a question on the bitcoin SE about it. Sorry to 
> hijack this conversation, but I'm very curious if there's something more to 
> this or if the thinking was simply decided that OP_BLOCKNUMBER wasn't useful 
> enough to warrant the (dubious) potential footgun of people accepting 
> sub-6-block transactions from a transaction that uses an expired spend-path?

Another argument I have encountered has to do with the implementation of 
Bitcoin Core.

As an optimization, SCRIPT is evaluated only when a transaction enters the 
mempool.
It is not evaluated at any other time.
Indeed, when accepting a new block, if a transaction in that block is in the 
mempool, its SCRIPT is not re-evaluated.

If the max-blockheight constraint is implemented as a SCRIPT opcode, then at 
each block, every SCRIPT of every transaction in the mempool must be 
re-evaluated, since a SCRIPT that accepted earlier might now reject.
During times of high chain bloat, there will be large numbers of transactions 
in the mempool, and only a tiny fraction will be removed at each block before 
the mempool finally clears, leading to effectively O(n^2) CPU time spent (n 
blocks are needed to empty a mempool with n transactions, and each block 
triggers re-evaluation of the SCRIPTs of all n transactions still in the 
mempool).
That O(n^2) assumes evaluating a single SCRIPT is O(1), which is not strictly 
true either (though it is approached in practice, as most transactions are 
simple singlesig or `OP_CHECKMULTISIG` affairs).
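
To illustrate the quadratic blow-up, here is a minimal Python sketch 
(illustrative only, not Bitcoin Core code; the `max_height` field and the 
mining loop are hypothetical):

    def script_accepts(tx, height):
        # Hypothetical height-sensitive SCRIPT, e.g. an OP_BLOCKNUMBER-style
        # max-blockheight constraint: the verdict depends on the current height.
        return height <= tx["max_height"]

    def on_new_block(mempool, height):
        # Because any SCRIPT may flip from accept to reject, ALL n entries
        # must be re-evaluated at every block: O(n) work per block.
        return [tx for tx in mempool if script_accepts(tx, height)]

    # Draining n transactions over ~n blocks costs O(n) evaluations per
    # block times n blocks = O(n^2) evaluations in total.
    mempool = [{"max_height": 1000 + i} for i in range(1000)]
    height = 0
    while mempool:
        height += 1
        mempool = on_new_block(mempool, height)[1:]  # one tx mined per block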

That is, the mempool assumes that once a SCRIPT accepts, it will always accept 
in the future.
Thus, no SCRIPT opcode may change from "accept" (because at the current 
blockheight the max-block is not yet reached) to "reject" (because the 
max-block constraint is now violated).

Thus, we cannot use an opcode to impose the max-block constraint.

The alternative is to add a new field `maxtime` to the transaction.
Then possibly, we can have an `OP_CHECKMAXTIMEVERIFY` opcode that checks that 
the field has a particular value.
Then the mempool can have a separate index according to `maxtime` fields, where 
it can remove the indexed transactions at each block.
Insertion into the index would likely be O(log n), and the filtering would 
total O(n log n) across the entire drain (each transaction is removed from the 
index exactly once), which is an improvement.
Note in particular that the index itself would require O(n) storage.
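
A rough sketch of such an index (Python; the `maxtime` field and txid strings 
are hypothetical), using a min-heap so that each block touches only the 
entries that actually expire:

    import heapq

    class MaxtimePool:
        def __init__(self):
            self.index = []  # min-heap of (maxtime, txid) pairs: O(n) storage

        def add(self, txid, maxtime):
            heapq.heappush(self.index, (maxtime, txid))  # O(log n) insertion

        def expire(self, height):
            # Pop only the entries whose maxtime has passed; each transaction
            # is popped exactly once over its lifetime, so the filtering
            # totals O(n log n) across the whole drain rather than O(n^2).
            expired = []
            while self.index and self.index[0][0] <= height:
                expired.append(heapq.heappop(self.index)[1])
            return expired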

However, adding a new field to the transaction format would require techniques 
similar to what was used in SegWit, i.e. post-maxtime nodes have to "baby talk" 
to pre-maxtime nodes and pretend transactions do not have this field, in much 
the same way post-SegWit nodes "baby talk" to pre-SegWit nodes and pretend 
transactions do not have a `witness` field.
We would then need a third Merkle Tree to hold the "really real" transaction ID 
that contains the `maxtime` field as well.
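
Sketched concretely (Python; the `maxtime` serialization here is purely 
hypothetical), the problem is that two distinct transaction IDs must coexist, 
just as txid and wtxid coexist under SegWit:

    import hashlib

    def dsha256(b):
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def legacy_txid(tx_bytes):
        # What pre-maxtime nodes compute: the serialization they are shown
        # omits the maxtime field entirely.
        return dsha256(tx_bytes)

    def full_txid(tx_bytes, maxtime):
        # The "really real" ID also commits to maxtime; it would need its
        # own Merkle tree committed in the block, as witness data does.
        return dsha256(tx_bytes + maxtime.to_bytes(4, "little"))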

Thus, it seems to me that the tradeoffs are simply not good enough, when you 
can get 99% of what you need using just another transaction with `nLockTime`:

* Using an opcode would greatly increase CPU usage because the script cache 
would need to be reworked (and probably cannot be made to work).
* Adding a field would greatly increase the code complexity to the level of 
SegWit, without all the important bugfixes+features (tx malleability, quadratic 
sighash, well-defined extensible outputs) that SegWit provides.
* You can do what you want with a second `nLockTime`d transaction that spends 
the output anyway (a sketch of this pattern follows below).
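
A sketch of that third option (Python; the field names and values are 
illustrative, not a real transaction format):

    REFUND_HEIGHT = 700000  # the desired "expiry" height H (hypothetical)

    tx_a = {"inputs": ["outpoint0"], "outputs": ["shared_output"],
            "nLockTime": 0}              # valid immediately

    tx_b = {"inputs": ["shared_output"], "outputs": ["refund_output"],
            "nLockTime": REFUND_HEIGHT}  # unmineable before height H
    # Nodes already enforce nLockTime at the mempool/consensus layer, so
    # tx_b is simply not final until the chain reaches REFUND_HEIGHT; no
    # SCRIPT ever needs to be re-evaluated as blocks arrive.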

Indeed, it is helpful to realize *why* `OP_CHECKLOCKTIMEVERIFY` and 
`OP_CHECKSEQUENCEVERIFY` are implemented the way they are.
They are typically discussed and described as if they imposed time-based 
constraints, but the *real* implementation only imposes constraints on the 
`nLockTime` and `nSequence` fields --- the SCRIPT interpreter itself does not 
look at the block the transaction is in (that block is not available, as the 
SCRIPT interpreter is invoked at mempool entry, when the transaction is 
contained in no block at all).
There is instead a separate layer (the entry into the mempool) that implements 
the *actual* time-based constraints, based on the fields and not the SCRIPT 
opcodes.
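
In very simplified Python (ignoring the height-vs-time split, sequence 
finality, and other BIP65/BIP112 details), the two layers look like this:

    def op_checklocktimeverify(stack_top, tx_nlocktime):
        # The opcode only compares against the transaction's OWN nLockTime
        # field; it never consults the chain, so its verdict cannot change
        # after mempool entry.
        if tx_nlocktime < stack_top:
            raise ValueError("SCRIPT rejects")

    def is_final(tx_nlocktime, next_block_height):
        # The *actual* time-based constraint is enforced outside the SCRIPT
        # interpreter, when deciding whether the transaction may be mined.
        return tx_nlocktime < next_block_height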

Regards,
ZmnSCPxj

>
> On Fri, Apr 9, 2021 at 5:55 AM Jeremy via bitcoin-dev 
>  wrote:
>
> > You could accomplish your rough goal by having:
> >
> > tx A: desired expiry at H
> > tx B: nlocktime H, use same inputs as A, create outputs equivalent to 
> > inputs (have to be sure no relative timelocks)
> >
> > Thus after a timeout the participants in A can cancel the action using TX B.
> >
> > The difference is the coins have to move; without knowing your use case 
> > this may or may not help you. 
> >
> > On Fri, Apr 9, 2021, 4:40 AM Russell O'Connor via bitcoin-dev 
> >  wrote:
> >
> > > From https://bitcointalk.org/index.php?topic=1786.msg22119#msg22119:
> > >

Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-04-15 Thread ZmnSCPxj via bitcoin-dev
Good morning LL,

> On Tue, 16 Mar 2021 at 11:25, David A. Harding via bitcoin-dev 
>  wrote:
>
> > I'm curious about whether anyone informed about ECC and QC
> > knows how to create output scripts with lower difficulty that could be
> > used to measure the progress of QC-based EC key cracking.  E.g.,
> > NUMS-based ECDSA- or taproot-compatible scripts with a security strength
> > equivalent to 80, 96, and 112 bit security.
>
> Hi Dave,
>
> This is actually relatively easy if you are willing to use a trusted setup. 
> The trusted party takes a secp256k1 secret key and verifiably encrypts it 
> under a NUMS public key from the weaker group. Therefore if you can crack the 
> weaker group's public key you get the secp256k1 secret key. 
> Camenisch-Damgard[1] cut-and-choose verifiable encryption works here.
> People then pay the secp256k1 public key funds to create the bounty. As long 
> as the trusted party deletes the secret key afterwards the scheme is secure.
>
> Splitting the trusted setup among several parties where only one of them 
> needs to be honest looks doable but would take some engineering and analysis 
> work.

To simplify this, perhaps `OP_CHECKMULTISIG` is sufficient?
Simply have the N parties generate individual private keys, verifiably encrypt 
each of them to the NUMS pubkey from the weaker group, then pay out to an 
N-of-N `OP_CHECKMULTISIG` address over all the participants' keys.
Then a single honest participant is enough to ensure security of the bounty.

Knowing the privkey from the weaker group would then be enough to extract all 
of the secp256k1 privkeys that would unlock the funds in Bitcoin.

This should reduce the need for analysis and engineering.
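
For concreteness, a rough sketch of assembling the N-of-N script (Python, raw 
script bytes, assuming compressed 33-byte pubkeys; a real deployment would 
wrap this in P2SH/P2WSH and respect standardness limits):

    def n_of_n_checkmultisig_script(pubkeys):
        n = len(pubkeys)
        assert 1 <= n <= 16  # OP_1..OP_16 small-integer opcodes
        script = bytes([0x50 + n])           # OP_n: required signatures
        for pk in pubkeys:
            script += bytes([len(pk)]) + pk  # direct push of each pubkey
        script += bytes([0x50 + n])          # OP_n: total keys
        script += bytes([0xae])              # OP_CHECKMULTISIG
        return script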

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Prediction Markets and Bitcoin

2021-04-15 Thread ZmnSCPxj via bitcoin-dev
Good morning Prayank,


> I think prediction markets or such tokens might help in adding to the 
> information we already have however they don't decide or replace anything. 
> Bitcoin development should impact such markets and not the other way around. 

"Human behavior is economic behavior. The particulars may vary, but competition 
for limited resources remains a constant. Need as well as greed have followed 
us to the stars, and the rewards of wealth still await those wise enough to 
recognize this deep thrumming of our common pulse." -- CEO Nwabudike Morgan, 
"The Centauri Monopoly", *Sid Meier's Alpha Centauri*

This is the tension between the necessary freedom of discovering strange new 
techniques, and the exigencies of life, where every joule of negentropy is a 
carefully measured resource.

Of course development must be free to do what is best technically, and to 
experiment and see what other techniques are possible or workable.
Thus the market must follow development.

Of course, the people ultimately funding the development must impose the 
direction that development takes; after all, it is their money that is being 
modified.
Thus development must follow the market.

It is the negotiation of the two that is difficult.

Overall, I think a lot of the developer arguments are reasonably clear --- what 
is unclear is what the market wants; thus I think prediction markets are needed 
in order for the negotiation between these two aspects to advance.

Regards,
ZmnSCPxj