[bitcoin-dev] Extension to BIP Format for Multiple Required SigHash Flags

2021-01-10 Thread Jeremy via bitcoin-dev
*# The Issue:*
Currently the PSBT BIP has a slight "conceptual gap" where it is possible
to both:

1) Have a PSBT which obtains multiple signatures per-input with any set
of SIGHASH flags
2) Have a PSBT which obtains multiple signatures per-input with a fixed
sighash type applying to all signatures.

But not possible to:

3) Have a PSBT which obtains multiple signatures per-input with a fixed
sighash type applying to a specific key's signature
4) Gracefully handle a PSBT with multiple uses of the same key, but
different code-separators.

To solve this we should introduce a new key type compatible with the
existing PSBT Spec (no V2 Requirement).

(I'm not convinced that 4 needs to be fully supported, but I believe it
makes sense to lay the groundwork for it to be supported, as the handling
of per-key sighash flag requests and of multiple uses of the same key with
different code separators should happen from the same field.)

Excerpted relevant BIP text:
```

* Type: Partial Signature PSBT_IN_PARTIAL_SIG = 0x02
** Key: The public key which corresponds to this signature.
*** {0x02}|{public key}
** Value: The signature as would be pushed to the stack from a
scriptSig or witness.
*** {signature}

* Type: Sighash Type PSBT_IN_SIGHASH_TYPE = 0x03
** Key: None. The key must only contain the 1 byte type.
*** {0x03}
** Value: The 32-bit unsigned integer specifying the sighash type to
be used for this input. Signatures for this input must use the sighash
type, finalizers must fail to finalize inputs which have signatures
that do not match the specified sighash type. Signers who cannot
produce signatures with the sighash type must not provide a signature.
*** {sighash type}

```

*# Motivation*:
As an example where it may be relevant to cleanly support this, consider
the script:

`2 <key 1> <key 2> 2 CHECKMULTI`

Under such a script, we might have two HSMs, one operating each key. Key 2
is used first and verifies only internal business logic about the
permissibility of spending an output, but does not sign off on any other
logic. Key 1 is used last and checks that the transaction sends only to the
currently allowed addresses. In such an example (discussion of this
particular application is off topic; this is a contrived example to
demonstrate the technical issue), it is not possible to express that Key 1
will sign with SIGHASH_ALL | ANYONECANPAY and Key 2 will sign with
SIGHASH_NONE | ANYONECANPAY. It would be impossible to finalize a PSBT with
the sighash type set, because the sighashes conflict. And while it is
possible to leave the sighash type unset and successfully finalize, this
fails to capture the relevant information about which sighash types are
supposed to be used.

(Why the example is contrived: one could argue that such business logic
servers should just *always* use SIGHASH_ALL, but there are technical
reasons, e.g., dynamically adding a change input or output with
SIGHASH_SINGLE, that such adjustments might have to be made post-hoc.)

*# A Solution*
To address this I propose to add a new key type
PSBT_IN_SIGHASH_PER_KEY_TYPE (e.g., 0x0e) whose key data is a public key,
followed (in the value) by an 8-bit bool (must be 0 or 1) indicating
whether the next field is a sighash flag, optionally a 32-bit unsigned
integer representing the sighash type, and a compact size integer
representing the code separator position + 1 (so that 0 may represent no
code separator) in the scriptPubKey. If a code separator is set, the redeem
script (plus the witness script, if witness) must be present.

Finalizers should verify that each requested signature is available.

PSBT_IN_SIGHASH_PER_KEY_TYPE is fully compatible with existing PSBT as long
as PSBT_IN_SIGHASH_TYPE is not set (or, trivially, if it is set and all
PSBT_IN_SIGHASH_PER_KEY_TYPE's match it).

Finalizers could deduce which code separator was used, if multiple
PSBT_IN_PARTIAL_SIGs are delivered, by process of elimination; thus a new
PSBT_IN_PARTIAL_SIG type to specify the code separator is not required.
However, in the case of multiple signatures from the same key,
PSBT_IN_PARTIAL_SIG would lead to a duplicated key-pair specification
error, so we should also introduce the type PSBT_IN_PARTIAL_SIG_EXTRA,
which has a key of a public key followed by a compact size integer code
separator position (n.b. no +1 value, to exclude the default!), and a
signature as a value. Finalizers shall check that the
PSBT_IN_PARTIAL_SIG_EXTRA values match the corresponding
PSBT_IN_SIGHASH_PER_KEY_TYPE requests. Compatibility: PSBT_IN_PARTIAL_SIG
does not overlap with PSBT_IN_PARTIAL_SIG_EXTRA, as '_EXTRA must specify a
code separator. Thus, as long as no repeated key/code-separator pairs are
used, the new PSBT remains fully backwards compatible.
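
Similarly, a sketch of the PSBT_IN_PARTIAL_SIG_EXTRA layout under the same
assumptions (a hypothetical 0x0f type byte; only the single-byte compact
size case shown):

```
#include <cstdint>
#include <utility>
#include <vector>

// key   = {0x0f} | {public key} | {code separator position, compact size, no +1}
// value = {signature}
std::pair<std::vector<uint8_t>, std::vector<uint8_t>> MakePartialSigExtraKV(
    const std::vector<uint8_t>& pubkey, uint8_t codesep_pos,
    const std::vector<uint8_t>& sig) {
    std::vector<uint8_t> key{0x0f};
    key.insert(key.end(), pubkey.begin(), pubkey.end());
    key.push_back(codesep_pos);  // single-byte compact size case only
    return {key, sig};
}
```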

--
@JeremyRubin 


Re: [bitcoin-dev] New PSBT version proposal

2021-01-01 Thread Jeremy via bitcoin-dev
One thing I think should be added in V2 is the ability to specify sighash
flags per-key as opposed to per-input.

The per-input restriction is unfitting given that there are circumstances
where multisig signers may validate heterogeneous logic.
--
@JeremyRubin 



On Wed, Dec 23, 2020 at 1:37 PM Andrew Chow via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi,
>
> On 12/22/20 10:30 PM, fiatjaf wrote:
> > Hi Andrew.
> >
> > I'm just a lurker here and I have not much experience with PSBTs, but
> still let me pose this very obvious question and concern: isn't this change
> going to create a compatibility nightmare, with some software supporting
> version 1, others supporting version 2, and the ones that care enough about
> UX and are still maintained being forced to support both versions -- and
> for no very important reason except some improvements in the way data is
> structured?
> No, it is not just "improvements in the way data is structured."
>
> The primary reason for these changes is to allow PSBT to properly
> support adding inputs and outputs. This is a feature that many people
> have requested, and the ways that people have been doing it are honestly
> just hacks and not really the right way to be doing that. These changes
> allow for that feature to be supported well.
>
> Furthermore, it is possible to downgrade and upgrade PSBTs between the
> two versions, once all inputs and outputs have been decided. Since
> PSBTv2 is essentially just taking all of the normal transaction fields
> and grouping them all with the rest of the data for those inputs and
> outputs, it is easy to reconstruct a global unsigned transaction and
> turn a PSBTv2 into a PSBTv0. It is likewise just as easy to go the other
> way and break apart the global unsigned tx to turn a PSBTv0 into a
> PSBTv2. Originally, I had considered requiring that once a transaction
> was fully constructed it must be downgraded to a PSBTv0, but the
> structure changes that were made do make it easier to work with PSBT so
> I decided not to add this requirement.
>
> Perhaps to maintain compatibility PSBT_GLOBAL_UNSIGNED_TX shouldn't be
> disallowed in PSBTv2 once the transaction is constructed? It would make
> things much more confusing though as it would no longer be a clean break.
>
>
> Andrew Chow
>
> > Ultimately I don't think it should matter if some data is structured in
> not-the-best-possible way, as long as it is clear enough for the computer
> and for the libraries already written to deal with it.
> Backwards-compatibility and general interoperability is worth much more
> than anything else in these cases.
> >
> > Also let me leave this article here, which I find very important (even
> if for some reason it ends up not being relevant to this specific case):
> http://scripting.com/2017/05/09/rulesForStandardsmakers.html
> >
> >    On Tue, 22 Dec 2020 17:12:22 -0300 Andrew Chow via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote 
> >   > Hi All,
> >   >
> >   > I have some updates on this after speaking with some people off-list.
> >   >
> >   > Firstly, the version number will be set to 2. In most discussions,
> this
> >   > proposal was being referred to as PSBT version 2, so it'll be easier
> and
> >   > clearer to set the version number to 2.
> >   >
> >   > For lock times, instead of a single  PSBT_IN_REQUIRED_LOCKTIME field,
> >   > there will be 2 of them, one for a time based lock time, and the
> other
> >   > for height based. These will be:
> >   > * PSBT_IN_REQUIRED_TIME_LOCKTIME = 0x10
> >   >    * Key: empty
> >   >    * Value: 32 bit unsigned little endian integer greater than or equal
> >   > to 500000000 representing the minimum Unix timestamp that this input
> >   > requires to be set as the transaction's lock time. Must be omitted in
> >   > PSBTv0, and may be omitted in PSBTv2.
> >   > * PSBT_IN_REQUIRED_HEIGHT_LOCKTIME = 0x11
> >   >    * Key: empty
> >   >    * Value: 32 bit unsigned little endian integer less than 500000000
> >   > representing the minimum block height that this input requires to be set
> >   > as the transaction's lock time. Must be omitted in PSBTv0, and may be
> >   > omitted in PSBTv2.
> >   >
> >   > Having two lock time fields is necessary due to the behavior where
> all
> >   > inputs must use the same type of lock time (height or time). Thus if
> an
> >   > input requires a particular type of lock time, it must set the
> requisite
> >   > field. Any new inputs being added must be able to accommodate all
> >   > existing inputs' lock time type. This means they either must not
> have a
> >   > lock time specified (i.e. no OP_CLTV involved), or have branches that
> >   > allow the acceptance of either type. If an input has a lock time type
> >   > that is incompatible with the rest of the transaction, it must not
> be added.
> >   >
> >   > PSBT_GLOBAL_PREFERRED_LOCKTIME is changed to purely be 

Re: [bitcoin-dev] Floating-Point Nakamoto Consensus

2020-09-25 Thread Jeremy via bitcoin-dev
If I understand correctly, this is purely a policy level decision to accept
first-seen or a secondary deterministic test, but the most-work chain is
still always better than a "more fit" but less work chain.

In any case, I'm skeptical of the properties of this change. First-seen has
a nice property that once you receive a block, you have a substantially
reduced incentive to try to orphan it, because the rest of the network is
going to work on building on that block. With fitness, if I mine a
competing block I have a 50% shot of mine being accepted, so floating-point
fitness would have the effect of destabilizing consensus convergence at the
tip.

I could see using a fitness rule like this be useful if you see both blocks
within some very small window, e.g., 10 seconds, as it could decrease
partition risk if it's likely the orphan was mined within close range of
the other.


Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-23 Thread Jeremy via bitcoin-dev
Hi Suhas,

Thanks for your thoughtful response!

Overall I'll boil down my thoughts to the following:

If we can eventually come up with something clever at the user+policy layer
to emulate a sponsor like mechanism, I would still greatly prefer to expose
that sort of functionality directly and in a fully-abstracted usable way
for the minimum amount of mempool attack risk in 2nd layer protocols, even
at the expense of some base layer complexity. It's better to pay a security
sensitive engineering cost once, than to have to pay it repeatedly and
perhaps insufficiently.

Specific responses inline below:

Best,

Jeremy

>> The Sponsor Vector TXIDs must also be in the block the transaction is
validated in, with no restriction on order or on specifying a TXID more
than once.
> That means that if a transaction is confirmed in a block without its
sponsor, the sponsor is no longer valid.  This breaks a design principle
that has been discussed many times over the years, which is that once a
valid transaction is created, it should not become invalid later on unless
the inputs are double-spent.  This principle has some logical consequences
that we've come to accept, such as transaction chains being valid across
small reorgs in the absence of malicious (double-spend) behavior.

*Certainly, this property is strictly broken by this proposal. It does not
break the weaker property that the transactions can be reorged onto another
chain, however (like OP_GETBLOCKHASH or similar would), which is important
to note. It's also important to note this property is not preserved against
reorgs longer than 100 blocks.*

> I think that this principle is a useful one and that there should be a
high bar for doing away with it.  And it seems to me that this proposal
doesn't clear that bar -- the fee bumping improvement that this proposal
aims at is really coming from the policy change, rather than the consensus
change.

*I think this is possibly correct.*

*IMO the ability to implement the policy changes is purely derived from the
consensus changes. The consensus changes add a way for third parties to a
transaction to specify economic interest in the resolution of that
transaction. This requires a consensus change to work generically and
without forethought.*


*It's possible that with specific planning or opt-in, you can make
something roughly equivalent. But such a design might also consume more
bandwidth on-chain as you would likely have to e.g. always include a CPFP
hook output.*


> But if policy changes are the direction we're going to solve these
problems, we could instead just propose new policy rules for the existing
types of transaction chaining that we have, rather than couple them to a
new transaction type.
>
> My understanding of the main benefit of this approach is that this allows
3rd parties to participate in fee bumping.  But that behavior strikes me as
also problematic, because it introduces the possibility of 3rd party
griefing, to the extent that sponsor transactions in any way limit chains
of transactions that would be otherwise permitted.  If Alice sends Bob some
coins, and Alice and Bob are both honest and cooperating, Mallory shouldn't
be able to interfere with their low-feerate transaction by (eg) pinning it
with a large transaction that "sponsors" it (ie a large transaction that is
just above the feerate of the parent, which prevents additional child
transactions and makes it more expensive to RBF).

*It's possible to modify my implementation of the policy such that there is
no ability to interfere with the otherwise permitted limits, it just
requires a little bit more work to always discount sponsors on the
descendant counting.*


*W.r.t. griefing, the proposed amendment to limit sponsors to 1000 bytes
minimizes this concern. Further, pinning in this context is mainly an issue
if Alice and Bob are intending to RBF a transaction; at a policy level we
could make sponsoring require that the transaction be RBF opted-out (or
sponsor opted-in).*


> This last issue of pinning could be improved in this proposal by
requiring that a sponsor transaction bring the effective feerate of its
package up to something which should be confirmed soon (rather than just
being a higher feerate than the tx it is sponsoring).  However, we could
also carve out a policy rule just like that today, without any consensus
changes needed, to help with pinning (which is probably a good idea!  I
think this would be useful work).  So I don't think that approaches in that
direction would be unique to this proposal.

*I agree this is useful work and something that Ranked indexes would help
with if I understand them correctly, and can be worked on independently of
Sponsors. Overall I am skeptical that we want to accept any child if it
puts something into an upper percentile as we still need to mind our DoS
budgets (which the sponsors implementation keeps a tight bound on). *


>> We allow one Sponsor to replace another subject to normal replacement

Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-21 Thread Jeremy via bitcoin-dev
Responses Inline:

> Would it make sense that, instead of sponsor vectors
> pointing to txids, they point to input outpoints?  E.g.:
>
> 1. Alice and Bob open a channel with funding transaction 0123...cdef,
>output 0.
>
> 2. After a bunch of state updates, Alice unilaterally broadcasts a
>commitment transaction, which has a minimal fee.
>
> 3. Bob doesn't immediately care whether or not Alice tried to close the
>channel in the latest state---he just wants the commitment
>transaction confirmed so that he either gets his money directly or he
>can send any necessary penalty transactions.  So Bob broadcasts a
>sponsor transaction with a vector of 0123...cdef:0
>
> 4. Miners can include that sponsor transaction in any block that has a
>transaction with an input of 0123...cdef:0.  Otherwise the sponsor
>transaction is consensus invalid.
>
> (Note: alternatively, sponsor vectors could point to either txids OR
> input outpoints.  This complicates the serialization of the vector but
> seems otherwise fine to me.)
>

*This seems like a fine suggestion and I think addresses Antoine's issue.*


*I think there are likely some cases where you do want TXID and not
Outpoint (e.g., if you are sponsoring a payment to your locktime'd cold
storage wallet (no CPFP) from an untrusted third party (no RBF), they can
grift you into paying for an unrelated payment). This isn't a concern when
the root utxo is multisig & you are a participant.*

*The serialization to support both, while slightly more complicated, can be
done in a manner that permits future extensibility as well if there are
other modes people require.*
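
As a sketch of what such an extensible serialization might look like, here
is a hypothetical tagged encoding; the mode tags and layout are
illustrative assumptions, not part of the proposal. Unknown tags could be
left unconstrained for future upgrades:

```
#include <cstdint>
#include <vector>

// Hypothetical one-byte mode tags; values are illustrative only.
enum class SponsorMode : uint8_t { TXID = 0x00, OUTPOINT = 0x01 };

void AppendTxidEntry(std::vector<uint8_t>& vec, const uint8_t (&txid)[32]) {
    vec.push_back(static_cast<uint8_t>(SponsorMode::TXID));
    vec.insert(vec.end(), txid, txid + 32);
}

void AppendOutpointEntry(std::vector<uint8_t>& vec, const uint8_t (&txid)[32],
                         uint32_t vout_index) {
    vec.push_back(static_cast<uint8_t>(SponsorMode::OUTPOINT));
    vec.insert(vec.end(), txid, txid + 32);
    // little-endian output index
    for (int i = 0; i < 4; ++i) vec.push_back((vout_index >> (8 * i)) & 0xff);
}
```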



>
> > If we want to solve the hard cases of pinning, I still think mempool
> > acceptance of a whole package only on the merits of feerate is the
> easiest
> > solution to reason on.
>
> I don't think package relay based only on feerate solves RBF transaction
> pinning (and maybe also doesn't solve ancestor/dependent limit pinning).
> Though, certainly, package relay has the major advantage over this
> proposal (IMO) in that it doesn't require any consensus changes.
> Package relay is also very nice for fixing other protocol rough edges
> that are needed anyway.
>
> -Dave
>

*I think it's important to keep in mind this is not a rival to package
relay; I think you also want package relay in addition to this, as they
solve different but related problems.*


*Where you might be able to simplify package relay with sponsors is by
doing a sponsor-only package relay, which is always limited to 2
transactions, 1 sponsor, 1 sponsoree. This would not have some of the
challenges with arbitrary-package package-relay, and would (at least from a
ux perspective) allow users to successfully get parents with insufficient
fee into the mempool.*


Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-19 Thread Jeremy via bitcoin-dev
Antoine,

Yes I think you're a bit confused on where the actual sponsor vector is. If
you have a transaction chain A->B->C and a sponsor S_A, S_A commits to txid
A and A is unaware of S.


W.r.t your other points, I fully agree that the 1-to-N sponsored case is
very compelling. The consensus rules are clear that sponsor commitments are
non-rival, so there's no issue with allowing as many sponsors as possible
and including them in aggregate. E.g., if S_A and S'_A both sponsor A with
feerate(S*) > feerate(A), there's no reason not to include all of them in a
block. The only issue is denial of service in the mempool. In the future,
it would definitely be desirable to figure out rules that allow mempools to
track both multiple sponsors and multiple sponsor targets. But in the
interest of KISS, the current policy rules are designed to be minimally
invasive and maximally functional.

In terms of location for the sponsor vector, I'm relatively indifferent.
The annex is a possible location, but it's a bit odd as we really only need
to allow one such vector per tx, not one per input, and one per input would
enable some new use cases (maybe good, maybe bad). Further, being in the
witness space would mean that if two parties create a 2 input transaction
with a desired sponsor vector they would both need to specify it as you
can't sign another input's witness data. I wholeheartedly agree with the
sentiment though; there could be a more efficient place to put this data,
but nothing jumps out to me as both efficient and simple in implementation
(a new tx-level field sounds like a lot of complexity).


> n >=1 ? I think you can have at least one vector and this is matching the
code

yes, this has been fixed in the gist (cred to Dmitry Petukhov for pointing
it out first), but is correct in the code. Thank you for your careful
reading.


Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-19 Thread Jeremy via bitcoin-dev
Hi David!

Thanks for taking a look, and great question.

> Is this in the reference implementation?

It is indeed in the reference implementation. Please see
https://github.com/bitcoin/bitcoin/compare/master...JeremyRubin:subsidy-tx#diff-24efdb00bfbe56b140fb006b562cc70bR741-R743

There is no requirement that there be any input in common, just that the
sponsor vectors are identical (keep in mind that we limit our sponsor
vector by policy to 1 element, because, as you rightfully point out,
multiple sponsors is more complex to implement).


> In the second case, I think Mallory can use an existing pinning
> technique to make it expensive for Bob to fee bump.  The normal
> replacement policies require a replacement to pay an absolute higher fee
> than the original transaction, so Mallory can create a 100,000 vbyte
> transaction with a single-vector sponsor at the end pointing to Bob's
> transaction.  This sponsor transaction pays the same feerate as Bob's
> transaction---let's say 50 nBTC/vbyte, so 5 mBTC total fee.  In order
> for Bob to replace Mallory's sponsor transaction with his own sponsor
> transaction, Bob needs to pay the incremental relay feerate (10
> nBTC/vbyte) more, so 6 mBTC total ($66 at $11k/BTC).

Yup, I was aware of this limitation but I'm not sure how practical it is as
an attack because it's quite expensive for the attacker. But there are a
few simple policies that can eliminate it:

1) A Sponsoring TX never needs to be more than, say, 2 inputs and 2
outputs. Restricting this via policy would help, or, more flexibly,
limiting the total size of a sponsoring transaction to 1000 bytes.
2) Make a Sponsoring TX not need to pay more absolute fee; it just needs to
increase the feerate (perhaps with a constant relay fee bump to prevent
spam).

I think 1) is simpler and should allow full use of the sponsor mechanism
while mostly preventing this class of issue.
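
For illustration, a minimal sketch of the shape check in 1), using a
stand-in summary struct rather than Bitcoin Core's actual transaction
types:

```
#include <cstddef>

// Stand-in summary; not Bitcoin Core's CTransaction.
struct TxSummary {
    size_t num_inputs;
    size_t num_outputs;
    size_t vbytes;
};

// Policy 1): a sponsor may have at most 2 inputs and 2 outputs and at most
// 1000 bytes, so it cannot meaningfully pin its target.
bool IsAllowedSponsorShape(const TxSummary& tx) {
    return tx.num_inputs <= 2 && tx.num_outputs <= 2 && tx.vbytes <= 1000;
}
```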

What do you think?


Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-19 Thread Jeremy via bitcoin-dev
Hi Cory!

Thanks for taking a look. CC nopara as I think your questions are the same.

I think there are a few reason we won't see functionally worse privacy:

1. RBF/CPFP may require the use of inputs external to the original
transaction to pay sufficient fee.
2. RBF/CPFP may leak which address was the change and which was the payment.

In addition, I think there is a benefit in that:

1. RBF/CPFP requires access to keys in the same 'security zone' as the
payment you made (e.g., a multi-sig to multi-sig payment requires m of n to
CPFP or RBF, whereas a sponsor could be anyone).
2. Sponsors can be a fully separate arbitrary wallet.
3. You can continually coinjoin the funds in your fee-paying wallet without
tainting your main funds.
4. You can keep those funds in a lightning channel and pay your fees via
loop outs.


[bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-18 Thread Jeremy via bitcoin-dev
Hi Bitcoin Devs,


I'd like to share with you a draft proposal for a mechanism to replace
CPFP and RBF for
increasing fees on transactions in the mempool that should be more
robust against attacks.

A reference implementation demonstrating these rules is available
[here](https://github.com/bitcoin/bitcoin/compare/master...JeremyRubin:subsidy-tx)
for those who
prefer to not read specs.

Should the mailing list formatting be bungled, it is also available as
a gist 
[here](https://gist.github.com/JeremyRubin/92a9fc4c6531817f66c2934282e71fdf).

Non-Destructive TXID Dependencies for Fee Sponsoring


This BIP proposes a general purpose mechanism for expressing
non-destructive (i.e., not requiring
the spending of a coin) dependencies on specific transactions being in
the same block that can be
used to sponsor fees of remote transactions.

Motivation
==

The mempool has a variety of protections and guards in place to ensure
that miners are economic and
to protect the network from denial of service.

The rough surface of these policies has some unintended consequences
for second layer protocol
developers. Applications are either vulnerable to attacks (such as
transaction pinning) or must go
through great amounts of careful protocol engineering to guard against
known mempool attacks.

This is insufficient because if new attacks are found, there is
limited ability to deploy fixes for
them against deployed contract instances (such as open lightning
channels). What is required is a
fully abstracted primitive that requires no special structure from an
underlying transaction in
order to increase fees to confirm the transactions.

Consensus Specification
===

If a transaction's last output's scriptPubKey is of the form OP_VER
followed by n*32 bytes, where n >= 1, it is interpreted as a vector of
TXIDs (Sponsor Vector). The Sponsor Vector TXIDs must also be in the block
the transaction is validated in, with no restriction on order or on
specifying a TXID more than once. This can be accomplished simply with the
following patch:


```diff
+
+// Extract all required fee dependencies
+std::unordered_set<uint256, SaltedTxidHasher> dependencies;
+
+const bool dependencies_enabled = VersionBitsState(pindex->pprev, chainparams.GetConsensus(),
+        Consensus::DeploymentPos::DEPLOYMENT_TXID_DEPENDENCY, versionbitscache) == ThresholdState::ACTIVE;
+if (dependencies_enabled) {
+    for (const auto& tx : block.vtx) {
+        // dependency output is if the last output of a txn is OP_VER followed
+        // by a sequence of 32*n bytes
+        // vout.back() must exist because it is checked in CheckBlock
+        const CScript& dependencies_script = tx->vout.back().scriptPubKey;
+        // empty scripts are valid, so be sure we have at least one byte
+        if (dependencies_script.size() && dependencies_script[0] == OP_VER) {
+            const size_t size = dependencies_script.size() - 1;
+            if (size % 32 == 0 && size > 0) {
+                for (auto start = dependencies_script.begin() + 1, stop = start + 32;
+                        start < dependencies_script.end(); start = stop, stop += 32) {
+                    uint256 txid;
+                    std::copy(start, stop, txid.begin());
+                    dependencies.emplace(txid);
+                }
+            }
+            // No rules applied otherwise, open for future upgrades
+        }
+    }
+    if (dependencies.size() > block.vtx.size()) {
+        return state.Invalid(BlockValidationResult::BLOCK_CONSENSUS,
+                             "bad-dependencies-too-many-target-txid");
+    }
+}
+
 for (unsigned int i = 0; i < block.vtx.size(); i++)
 {
     const CTransaction& tx = *(block.vtx[i]);
+    if (!dependencies.empty()) {
+        dependencies.erase(tx.GetHash());
+    }

     nInputs += tx.vin.size();

@@ -2190,6 +2308,9 @@ bool CChainState::ConnectBlock(const CBlock& block, BlockValidationState& state,
     }
     UpdateCoins(tx, view, i == 0 ? undoDummy : blockundo.vtxundo.back(), pindex->nHeight);
 }
+if (!dependencies.empty()) {
+    return state.Invalid(BlockValidationResult::BLOCK_CONSENSUS,
+                         "bad-dependency-missing-target-txid");
+}
```

### Design Motivation
The final output of a transaction is an unambiguous location to attach
metadata to a transaction
such that the data is available for transaction validation. This data
could be committed to anywhere,
with added implementation complexity, or in the case of Taproot
annexes, incompatibility with
non-Taproot addresses (although this is not a concern for sponsoring a
transaction that does not use
Taproot).

A bare scriptPubKey prefixed with OP_VER is defined to be invalid in
any context, and is trivially
provably unspendable and therefore pruneable.
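
For concreteness, a minimal sketch of constructing such an output's
scriptPubKey; the OP_VER byte value (0x62) is an assumption here, and the
txids are raw 32-byte strings with no push opcodes, matching the parsing in
the patch above:

```
#include <array>
#include <cstdint>
#include <vector>

// Build the sponsor-vector scriptPubKey: OP_VER followed by n*32 raw bytes.
std::vector<uint8_t> MakeSponsorScript(const std::vector<std::array<uint8_t, 32>>& txids) {
    std::vector<uint8_t> script{0x62};  // OP_VER (byte value assumed)
    for (const auto& txid : txids) {
        script.insert(script.end(), txid.begin(), txid.end());
    }
    return script;  // attach as the scriptPubKey of the transaction's last output
}
```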

If there is another convenient place to put the TXID vector, that's fine too.

As the output type is 

Re: [bitcoin-dev] BIP OP_CHECKTEMPLATEVERIFY

2020-09-03 Thread Jeremy via bitcoin-dev
It's also not something that's trivial to set up in any scheme, because
you need an ordering: the tx intended to be the inverse lock must be set up
before you create the tx using it.


--
@JeremyRubin 



On Thu, Sep 3, 2020 at 10:34 AM Jeremy  wrote:

> CTV does not enable this afaiu because it does not commit to the inputs
> (otherwise there's a hash cycle for predicting the output's TXID).
>
>
> --
> @JeremyRubin 
> 
>
>
> On Thu, Sep 3, 2020 at 7:39 AM Dmitry Petukhov  wrote:
>
>> Just had an idea that an "inverse timelock" can be made
>> almost-certainly automatic: a revocation UTXO shall become
>> anyone-can-spend after a timeout, and bear some non-dust amount.
>>
>> Before the timelock expiration, it shall be spendable only along with
>> the covenant-locked 'main' UTXO (via a signature or mutual covenant)
>>
>> This way, after a timeout expires, a multitude of entities will be
>> incentivized to spend this UTXO, because this would be free money for
>> them. It will probably be spend by a miner, as they can always replace
>> the spending transaction with their own and claim the amount.
>>
>> After the revocation UTXO is spent, the covenant path that commits to
>> having it in the inputs will be unspendable, and this would effectively
>> constitute an "inverse timelock".
>>
>>
>>


Re: [bitcoin-dev] BIP OP_CHECKTEMPLATEVERIFY

2020-09-03 Thread Jeremy via bitcoin-dev
CTV does not enable this afaiu because it does not commit to the inputs
(otherwise there's a hash cycle for predicting the output's TXID).


--
@JeremyRubin 



On Thu, Sep 3, 2020 at 7:39 AM Dmitry Petukhov  wrote:

> Just had an idea that an "inverse timelock" can be made
> almost-certainly automatic: a revocation UTXO shall become
> anyone-can-spend after a timeout, and bear some non-dust amount.
>
> Before the timelock expiration, it shall be spendable only along with
> the covenant-locked 'main' UTXO (via a signature or mutual covenant)
>
> This way, after a timeout expires, a multitude of entities will be
> incentivized to spend this UTXO, because this would be free money for
> them. It will probably be spent by a miner, as they can always replace
> the spending transaction with their own and claim the amount.
>
> After the revocation UTXO is spent, the covenant path that commits to
> having it in the inputs will be unspendable, and this would effectively
> constitute an "inverse timelock".
>
>
>


Re: [bitcoin-dev] reviving op_difficulty

2020-09-02 Thread Jeremy via bitcoin-dev
Yep this is a good example construction. I'd also point out that, modulo a
privacy improvement, you can also script it as something like:

`IF IF <height> CLTV DROP <B> CHECKSIG ELSE <time> CLTV DROP <A> CHECKSIG
ENDIF ELSE 2 <A> <B> 2 CHECKMULTI ENDIF`

This way you equivalently have cooperative closing / early closing
positions, but you make the redeem script non-interactive to set up, which
enables someone to pay into one of these contracts without doing
pre-signeds. This is unfortunate for privacy as the script is then visible,
but in a taproot world it's fine.

Of course the non-interactivity goes away if you want non-binary outcomes
(e.g., Alice gets 1.5 coin and Bob gets .5 coin in case A; Bob gets 1.5
coin and Alice gets .5 coin in case B).

And it's also possible to mix relative and absolute time locks for some
added fun behavior (e.g., you win if > Time and > Blocks)


A while back I put together some python code which handles these embedded
in basic channels between two parties (no routing). This enables you to
high-frequency update and model a hashrate perpetual swap, assuming your
counterparty is online.


The general issue with this construction family is that the contracts are
metastable. E.g., if you're targeting a 100 block deficit, that means you
have 100 blocks of time to claim the funds before either party can win. So
there are some minimum times and hashrate moves to play with, and the less
"clearly correct" you were, the less clearly correct the execution will be.
This makes the channel version of the contract compelling, as you can
update and revoke frequently on further-out contracts.
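
For concreteness, a toy sketch of the locktime pair behind the presigned
variant of this bet (the construction Dave describes below); the heights
and times are illustrative, and 500000000 is the consensus threshold
separating height locks from time locks:

```
#include <cstdint>
#include <cstdio>

int main() {
    const uint32_t kLockTimeThreshold = 500000000;  // below: height; above: unix time
    const uint32_t current_height = 649000;         // illustrative values only
    const uint32_t current_time = 1600500000;
    const uint32_t alice_locktime = current_height + 2016;        // height-based spend
    const uint32_t bob_locktime = current_time + 2016 * 10 * 60;  // time-based spend
    // If hashrate rises, 2016 blocks arrive in under two weeks and Alice's
    // spend matures first; if it falls, Bob's time lock expires first.
    std::printf("alice nLockTime (height): %u\n", alice_locktime);
    std::printf("bob   nLockTime (time):   %u (>= %u)\n", bob_locktime, kLockTimeThreshold);
    return 0;
}
```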


--
@JeremyRubin 



On Sat, Aug 22, 2020 at 9:47 AM David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Sun, Aug 16, 2020 at 11:41:30AM -0400, Thomas Hartman via bitcoin-dev
> wrote:
> > First, I would like to pay respects to tamas blummer, RIP.
> >
> >
> https://bitcoinmagazine.com/articles/remembering-tamas-blummer-pioneering-bitcoin-developer
>
> RIP, Tamas.
>
> > Tamas proposed an additional opcode for enabling bitcoin difficulty
> > futures, on this list at
> >
> >
> https://www.mail-archive.com/bitcoin-dev@lists.linuxfoundation.org/msg07991.html
>
> Subsequent to Blummer's post, I heard from Jeremy Rubin about a
> scheme[1] that allows difficulty futures without requiring any changes
> to Bitcoin.  In short, it takes advantage of the fact that changes in
> difficulty also cause a difference in maturation time between timelocks
> and height-locks.  As an simple example:
>
> 1. Alice and Bob create an unsigned transaction that deposits their
>money into a 2-of-2 multisig.
>
> 2. They cooperate to create and sign two conflicting spends from the
> multisig:
>
> a. Pays Alice with an nLockTime(height) of CURRENT_HEIGHT + 2016 blocks
>
> b. Pays Bob with an nLockTime(time) of CURRENT_TIME + 2016 * 10 * 60
> seconds
>
> 3. After both conflicting spends are signed, Alice and Bob sign and
>broadcast the deposit transaction from #1.
>
> 4. If hashrate increases during the subsequent period, the spend that
>pays Alice will mature first, so she broadcasts it and receives that
>money.  If hashrate decreases, the spend to Bob matures first, so he
>receives the money.
>
> Of course, this basic formula can be tweaked to create other contracts,
> e.g. a contract that only pays if hashrate goes down more than 25%.
>
> As far as I can tell, this method should be compatible with offchain
> commitments (e.g. payments within channels) and could be embedded in a
> taproot commitment using OP_CLTV or OP_CSV instead of nLockTime.
>
> -Dave
>
> [1] https://powswap.com/


Re: [bitcoin-dev] New tipe of outputs that saves space and give more privacy

2020-08-25 Thread Jeremy via bitcoin-dev
You may wish to review BIP-119 CheckTemplateVerify, as it is designed to
support something very similar to what you've described. You can see more
at https://utxos.org

On Tue, Aug 25, 2020, 6:48 AM Jule.Adka via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hey, there! I have a new proposal to help Bitcoin’s scalability, while
> helping privacy.
>
> *Motivation*
>
> All transactions in the Bitcoin network have a header, an input list and
> an output list. Every transaction must consume some previous outputs and
> create new ones; this creates huge amounts of data through the years, and
> creates scalability problems. With segwit we solved some problems by moving
> part of the data to a separate structure that stores data useful to verify
> the transaction itself, but not its state or the state of the whole
> blockchain[1]. But we still have a problem with the output list: some
> transactions create many outputs, generating much data and increasing
> the size of the set of unspent transaction outputs (UTXOs) that is held by
> every full node in the network.
>
> Another problem with this approach is the fact that all outputs are
> recorded, disclosed and accessible to everyone that looks at the
> transaction. This creates various privacy problems that are exploited by
> chain-analysis companies and governments to track individuals and link
> coins to their identities.
>
> *Description*
>
> I propose a new type of output, called the Merkleized Output Set, and the
> p2mos (pay to Merkleized Output Set) standard. Instead of listing the whole
> output set, as in an ordinary transaction, Alice only specifies a Merkle
> root, and only when she tries to spend the coin does she show a path
> in the Merkle tree from her transaction to the recorded root (a.k.a. the
> Merkle Path), proving that her output really exists.
>
> The extra data (the path) is stored in the witness structure, and can
> be stripped after verification. Since the size of the witness structure is
> discounted when calculating the block size, this gives more space for
> transactions in a single block, without increasing its actual size. It
> also decreases the UTXO set's size, taking fewer resources from validating
> nodes.
>
> An ordinary (the current standard) p2wpkh transaction with one output has
> 8 bytes for the amount, a 1-9 byte varInt for the locking-script size, and
> 22 bytes of script (OP_0 OP_PUSHBYTES_20 <20-bytes-hash>): at most 39 bytes
> for each output[2]. If we use sha256 to encode the Merkle root, we need
> only 32 bytes of script data, 49 in total; 10 bytes more than an ordinary
> transaction with one output. But usually transactions have 2 outputs (the
> actual payment and a change) or more. If the transaction has 2 outputs, we
> only record one commitment and the two outputs stay hidden until they are
> spent (also the UTXO set has one entry instead of 2); the 2 outputs would
> require 78 bytes to record, and we can do it with the same 49 bytes. For a
> 12-output[3] transaction, it would require 468 bytes, and so on…
>
> Using p2mos saves space by reducing any transaction to a 49-byte-wide
> output set, no matter how many outputs actually exist. Also, since only the
> peers are able to know the number and the value of the outputs, a third
> party has no way to know the ownership of the remaining coins; many of the
> privacy troubles associated with outputs, like Equal-output CoinJoin and
> different output types[4], are solved.
>
> *An example*
>
> When Alice's wallet creates a transaction, sending 5 bitcoins to Bob and
> spending from a 10 bitcoin output (forget the fees for a while), Alice
> must send 5 bitcoins to Bob and 5 back to herself as change. When Bob's
> wallet creates the invoice to be paid by Alice, he gives an output to
> Alice, and she adds the outputs together into a Merkle tree, takes the
> root, and builds a transaction paying to this hash. Alice's wallet then
> sends a path into the tree to prove to Bob that his output is really in
> the transaction and is fully spendable by Bob's wallet. Bob now looks at
> the mempool (and the chain, of course) to find transactions that pay to
> the given Merkle root.
>
> Now let's see how Bob spends from this UTXO. His wallet knows the path
> taken from his transaction to the top, and the wallet reveals it to the
> network before evaluating the output. Bob sends the actual output, the
> path to the root of the tree, as well as the data to solve the lockscript
> on it (note that "actual output" means the output that stays hidden from
> the world until Bob spends it). After checking that Bob's output really
> exists, a node can evaluate it exactly in the same way as an ordinary
> transaction; the output will look like any other.
>
> Alice's wallet does the same to spend her 5 BTC, but presents a totally
> different output, which she spends from a script that only she can solve;
> if they use p2wpkh she must present the public key and a valid
> signature. After evaluation, 

Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-24 Thread Jeremy via bitcoin-dev
On Mon, Aug 24, 2020 at 1:17 PM Eric Voskuil  wrote:

> I said security, not privacy. You are in fact exposing the feature to any
> node that wants to negotiate for it. if you don’t want to expose the buggy
> feature, then disable it. Otherwise you cannot prevent peers from accessing
> it. Presumably peers prefer the new feature if they support it, so there is
> no need for this complexity.
>
>
>
I interpreted "This seems to imply a security benefit (I can't discern
any other rationale for this complexity). It should be clear that this is
no more than trivially weak obfuscation and not worth complicating the
protocol to achieve." to be about obfuscation and therefore privacy.

The functionality that I'm mentioning might not be buggy, it might just not
support peers who don't support another feature. You can always disconnect
a peer who sends a message that you didn't handshake on (or maybe we should
elbow bump given the times).


Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-24 Thread Jeremy via bitcoin-dev
> On 8/21/20 5:17 PM, Jeremy wrote:
> > As for an example of where you'd want multi-round, you could imagine a
> > scenario where you have a feature A which gets bugfixed by the
> > introduction of feature B, and you don't want to expose that you support
> > A unless you first negotiate B. Or if you can negotiate B you should
> > never expose A, but for old nodes you'll still do it if B is unknown to
> > them.
>
> This seems to imply a security benefit (I can't discern any other
> rationale for this complexity). It should be clear that this is no more
> than trivially weak obfuscation and not worth complicating the protocol
> to achieve.


The benefit is not privacy oriented and I didn't intend to imply as such.
The benefit is that you may only wish to expose functionality to peers
which support some other set of features. For example, with wtxid relay, I
might want to expose some additional functionality after establishing that
my peer supports it, which peers that do not have wtxid relay should not be
allowed to use. The benefit over just exposing all functionality is that a
node might otherwise be programmed to support the new feature but not wtxid
relay, which can lead to some incompatibilities.

You cannot implement this logic as a purely post-hoc "advertise all and
then figure out what is allowed" approach, because then you require strict
consistency between peers of that post-hoc feature-availability implication
map.


Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-21 Thread Jeremy via bitcoin-dev
As for an example of where you'd want multi-round, you could imagine a
scenario where you have a feature A which gets bugfixed by the introduction
of feature B, and you don't want to expose that you support A unless you
first negotiate B. Or if you can negotiate B you should never expose A, but
for old nodes you'll still do it if B is unknown to them. An example of
this would be (were it not already out without a feature negotiation
existing) WTXID/TXID relay.

The SYNC primitive simply codifies what order messages should be in and
when you're done with a phase of negotiation offering something. It can be
done without, but then you have to be more careful to broadcast in the
correct order, and it's not clear when/if you should wait for more time
before responding.


On Fri, Aug 21, 2020 at 2:08 PM Jeremy  wrote:

> Actually we already have service bits (which are sadly limited) which
> allow negotiation of non bilateral feature support, so this would supercede
> that.
> --
> @JeremyRubin 
> 
>


Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-21 Thread Jeremy via bitcoin-dev
Actually we already have service bits (which are sadly limited) which allow
negotiation of non-bilateral feature support, so this would supersede that.
--
@JeremyRubin 



On Fri, Aug 21, 2020 at 1:45 PM Matt Corallo 
wrote:

> This seems to be pretty overengineered. Do you have a specific use-case in
> mind for anything more than simply continuing
> the pattern we've been using of sending a message indicating support for a
> given feature? If we find some in the future,
> we could deploy something like this, though the current proposal makes it
> possible to do it on a per-feature case.
>
> The great thing about Suhas' proposal is the diff is about -1/+1 (not
> including tests), while still getting all the
> flexibility we need. Even better, the code already exists.
>
> Matt
>
> On 8/21/20 3:50 PM, Jeremy wrote:
> > I have a proposal:
> >
> > Protocol >= 70016 cease to send or process VERACK, and instead use
> HANDSHAKEACK, which is completed after feature
> > negotiation.
> >
> > This should make everyone happy/unhappy, as in a new protocol number
> it's fair game to change these semantics to be
> > clear that we're acking more than version.
> >
> > I don't care about when or where these messages are sequenced overall,
> it seems to have minimal impact. If I had free
> > choice, I slightly agree with Eric that verack should come before
> feature negotiation, as we want to divorce the idea
> > that protocol number and feature support are tied.
> >
> > But once this is done, we can supplant Verack with HANDSHAKENACK or
> HANDSHAKEACK to signal success or failure to agree
> > on a connection. A NACK reason (version too high/low or an important
> feature missing) could be optional. Implicit NACK
> > would be disconnecting, but is discouraged because a peer doesn't know
> if it should reconnect or the failure was
> > intentional.
> >
> > --
> >
> > AJ: I think I generally do prefer to have a FEATURE wrapper as you
> suggested, or a rule that all messages in this period
> > are interpreted as features (and may be redundant with p2p message types
> -- so you can literally just use the p2p
> > message name w/o any data).
> >
> > I think we would want a semantic (which could be based just on message
> names, but first-class support would be nice) for
> > ACKing that a feature is enabled. This is because a transcript of:
> >
> > NODE0:
> > FEATURE A
> > FEATURE B
> > VERACK
> >
> > NODE1:
> > FEATURE A
> > VERACK
> >
> > It remains unclear if Node 1 ignored B because it's an unknown feature,
> or because it is disabled. A transcript like:
> >
> > NODE0:
> > FEATURE A
> > FEATURE B
> > FEATURE C
> > ACK A
> > VERACK
> >
> > NODE1:
> > FEATURE A
> > ACK A
> > NACK B
> > VERACK
> >
> > would make it clear that A and B are known, B is disabled, and C is
> unknown. C has 0 support, B Node 0 should support
> > inbound messages but knows not to send to Node 1, and A has full
> bilateral support. Maybe instead it could a message
> > FEATURE SEND A and FEATURE RECV A, so we can make the split explicit
> rather than inferred from ACK/NACK.
> >
> >
> > --
> >
> > I'd also propose that we add a message which is SYNC, which indicates
> the end of a list of FEATURES and a request to
> > send ACKS or NACKS back (which are followed by a SYNC). This allows
> multi-round negotiation where based on the presence
> > of other features, I may expand the set of features I am offering. I
> think you could do without SYNC, but there are more
> > edge cases and the explicitness is nice given that this already
> introduces future complexity.
> >
> > This multi-round makes it an actual negotiation rather than a pure
> announcement system. I don't think it would be used
> > much in the near term, but it makes sense to define it correctly now.
> Build for the future and all...
> >
> >
> >
> > --
> > @JeremyRubin <https://twitter.com/JeremyRubin>
>


Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-21 Thread Jeremy via bitcoin-dev
I have a proposal:

Protocol versions >= 70016 cease to send or process VERACK, and instead use
HANDSHAKEACK, which is completed after feature negotiation.

This should make everyone happy/unhappy, as in a new protocol number it's
fair game to change these semantics to be clear that we're acking more than
version.

I don't care about when or where these messages are sequenced overall, it
seems to have minimal impact. If I had free choice, I slightly agree with
Eric that verack should come before feature negotiation, as we want to
divorce the idea that protocol number and feature support are tied.

But once this is done, we can supplant Verack with HANDSHAKENACK or
HANDSHAKEACK to signal success or failure to agree on a connection. A NACK
reason (version too high/low or an important feature missing) could be
optional. Implicit NACK would be disconnecting, but is discouraged because
a peer doesn't know if it should reconnect or the failure was intentional.

--

AJ: I think I generally do prefer to have a FEATURE wrapper as you
suggested, or a rule that all messages in this period are interpreted as
features (and may be redundant with p2p message types -- so you can
literally just use the p2p message name w/o any data).

I think we would want a semantic (which could be based just on message
names, but first-class support would be nice) for ACKing that a feature is
enabled. This is because a transcript of:

NODE0:
FEATURE A
FEATURE B
VERACK

NODE1:
FEATURE A
VERACK

It remains unclear if Node 1 ignored B because it's an unknown feature, or
because it is disabled. A transcript like:

NODE0:
FEATURE A
FEATURE B
FEATURE C
ACK A
VERACK

NODE1:
FEATURE A
ACK A
NACK B
VERACK

would make it clear that A and B are known, B is disabled, and C is
unknown. C has no support; for B, Node 0 should support inbound messages
but knows not to send them to Node 1; and A has full bilateral support.
Maybe instead it could be a message FEATURE SEND A and FEATURE RECV A, so
we can make the split explicit rather than inferred from ACK/NACK.


--

I'd also propose that we add a message which is SYNC, which indicates the
end of a list of FEATURES and a request to send ACKS or NACKS back (which
are followed by a SYNC). This allows multi-round negotiation where based on
the presence of other features, I may expand the set of features I am
offering. I think you could do without SYNC, but there are more edge cases
and the explicitness is nice given that this already introduces future
complexity.

This multi-round makes it an actual negotiation rather than a pure
announcement system. I don't think it would be used much in the near term,
but it makes sense to define it correctly now. Build for the future and
all...
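
For example, a hypothetical multi-round transcript (message names
illustrative) might run as follows, with NODE0 offering C only once A is
agreed on:

NODE0:
FEATURE A
SYNC

NODE1:
FEATURE A
FEATURE B
ACK A
SYNC

NODE0:
ACK A
NACK B
FEATURE C
SYNC

NODE1:
ACK C
SYNC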



--
@JeremyRubin 



Re: [bitcoin-dev] Generalizing feature negotiation when new p2p connections are setup

2020-08-16 Thread Jeremy via bitcoin-dev
Concept ack!

It might be nice to include a few negotiation utility functions either in
this bip or at the same time in a separate bip. An example we might want to
include is a "polite disconnect", whereby a node can register that you
don't want to connect in the future due to incompatibility.

It also might be nice to standardize some naming convention or negotiation
message type so that we don't end up with different negotiation systems.
Then we can also limit the bip so that we're only defining negotiation
message types as ignorable vs. some other message type (which can also be
ignored, but maybe we want to do something else in the future).

This also makes it easier for old (but newer than this bip) nodes to apply
some generic rules around reporting/rejecting/responding to unknown feature
negotiation vs. an untagged message which might be a negotiation or
something else.


Re: [bitcoin-dev] CoinPool, exploring generic payment pools for Fun and Privacy

2020-06-11 Thread Jeremy via bitcoin-dev
Stellar work Antoine and Gleb! Really excited to see designs come out on
payment pools.

I've also been designing some payment pools (I have some not ready code I
can share with you guys off list), and I wanted to share what I learned
here in case it's useful.

In my design of payment pools, I don't think the following requirement: "A
CoinPool must satisfy the following *non-interactive any-order withdrawal*
property: at any point in time and any possible sequence of previous
CoinPool events, a participant should be able to move their funds from the
CoinPool to any address the participant wants without cooperation with
other CoinPool members." is desirable in O(1) space. I think it's much
better to set the requirement to O(log(n)), and this isn't just because of
wanting to use CTV, although it does help.

Let me describe a quick CTV based payment pool:

Build a payment pool for N users as N/2 channels between participants
created in a payment tree with a radix of R, where every node has a
multisig path for being used as a multi-party channel and the CTV branch
has a preset timeout. E.g., with radix 2:

                 Channel(a,b,c,d,e,f,g,h)
                /                        \
       Channel(a,b,c,d)             Channel(e,f,g,h)
        /          \                 /           \
Channel(a,b)  Channel(c,d)    Channel(e,f)  Channel(g,h)


All of these channels can be constructed and set up non-interactively using
CTV, and updated interactively. By default, payments can happen with
minimal coordination of parties via standard lightning channel updates at
the leaf nodes, and channels can be rebalanced at higher layers with more
participation.

Now let's compare the first-person exit non-cooperative scenario across
pools:

CTV-Pool:
Wait time: Log(N). At each branch, you must wait for a timeout, and you
have to go through log N levels to make sure there are no updated states.
You can trade off wait time/fees by picking different radixes.
TXN Size: Log(N). 1000 people with radix 4 --> 5 wait periods, 5*4 txn
size. Radix 20 --> 2 wait periods, 2*20 txn size.

Accumulator-Pool:
Wait Time: O(1)
TXN Size: Depending on the accumulator: O(1), O(log N), or O(N) bits. Let's
be favorable to accumulators and assume O(1), but keep in mind the constant
may be somewhat large and operations might be expensive in validation for
updates.


This *seems* like a clear win for Accumulators. But not so fast. Let's look
at the case where *everyone* exits non cooperatively from a payment pool.
What is the total work and time?

CTV Pool:
Wait time: Log(N)
Txn Size: O(N) (no worse than 2x factor overhead with radix 2, higher
radixes dramatically less overhead)

Accumulator Pool:
Wait time: O(N)
Txn Size: O(N) (bear in mind *maybe* O(N^2) or O(N log N) if we use a
sub-optimal accumulator, or validation work may be expensive depending on
the new primitive)


So in this context, CTV Pool has a clear benefit. The last recipient can
always clear in Log(N) time whereas in the accumulator pool, the last
recipient has to wait much much longer. There's no asymptotic difference in
Tx Size, but I suspect that CTV is at least as good or cheaper since it's
just one tx hash and doesn't depend on implementation.

Another property that is nice about the CTV pool style is the bisecting
property. Every time you have to do an uncooperative withdrawal, you split
the group into R groups. If your group is not cooperating because one
person is permanently offline, then Accumulator pools *guarantee* you need
to go through a full on-chain redemption. Not so with a CTV-style pool: if
you have a single failure among channels [1,2,3,4,5,6,7,8,9,10] (say
channel 8 fails), then with a radix-4 setup your next steps are:
[1,2,3,4,5,6,7,8,9,10]
[1,2,3,4,5,6,7,X,9,10]
[1,2,3,4] [5,6,7,X] [9,10]
[1,2,3,4] 5 6 7 X [9,10]

So you only need to do Log(N) chain work to exit the bad actor, but then it
amortizes! A future failure (let's say of 5) only causes 5 to have to close
their channel, and does not affect anyone else.
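
A toy simulation of this bisection/amortization behavior (Python; my own
illustration, with an arbitrary grouping policy rather than anything
normative):

```
def evict(groups, bad, radix):
    """One non-cooperative round: only the group containing the failed
    member is split on chain, into at most `radix` subgroups."""
    out = []
    for g in groups:
        if bad not in g:
            out.append(g)        # untouched: no chain work for this branch
            continue
        step = max(1, -(-len(g) // radix))
        out.extend(g[i:i + step] for i in range(0, len(g), step))
    return out

groups = [list(range(1, 11))]    # channels 1..10; member 8 is offline
while any(8 in g and len(g) > 1 for g in groups):
    groups = evict(groups, 8, radix=4)
    print(groups)
```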

With an accumulator-based pool, if you re-pool after one failure, a second
failure causes another O(N) of work, so total work in that case is O(N^2).
You can improve the design with an evict-in-any-order option, such that
you can *kick out* a member (rather than them opting to leave); that helps
solve some of this nastiness. But I'm unclear how to make this safe w.r.t.
updated states. You could also, perhaps, allow any number of participants
to leave simultaneously in one tx. I'm also not sure how to do that.



Availability:
With CTV Pools, you can make a payment if just your immediate counterparty
is online in your channel. Opportunistically, if people above you are
online, you can make channel updates higher up in the tree which have
better timeout properties. You can also create new channels, binding
yourself to different parties if 

[bitcoin-dev] [was BIP OP_CHECKTEMPLATEVERIFY] Fee Bumping Operation

2020-06-08 Thread Jeremy via bitcoin-dev
Broke out to a separate thread.

At its core, the reason why this method *might* work is that it's
essentially just CPFP, but we can guarantee that the link we're examining
is always exactly one hop away, so we get rid of most of the CPFP
graph-traversal issues.

Your description largely matches my thinking for how something like this
could work (pay for neighbor). The issue is that the extant CPFP logic is
somewhat brittle and doesn't work as expected (Child, not Children, which
is problematic for multiple PFNs).

> PFN transaction would still be valid if some of 'ghost parents' are
already confirmed, so the miners could have more fees than strictly
necessary. But this is the same as with CPFP.

This is problematic and can't be done as it requires a new index of all
past txns for consensus.

My thinking is that a Fee Bump transaction can name a list of TXIDs (or
one TXID, which implies all of its ancestors) that it wishes to be
included in a block with. It must be included in that block. A Fee Bump
transaction may have no unconfirmed ancestors nor any children.
Potentially, it also may not be RBF'd. You treat the Fee Bump transaction
as the lowest descendant of whatever it targets and then set its
feerate/total fee based on the package that would have to co-confirm for
it to be worth mining. This makes it sort like a normal transaction for
inclusion. You can also require some minimums for it to be admitted to the
mempool at all.
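
As a sketch of that sorting rule (Python; the `(fee, vsize)` package shape
is a hypothetical stand-in for real mempool entries, illustration only):

```
def bump_effective_feerate(bump_fee, bump_vsize, target_package):
    """Treat the fee-bump as the lowest descendant of its target: its
    feerate for block inclusion is the total fee of everything that must
    co-confirm (the target's unconfirmed package plus the bump itself)
    over the total size. `target_package` is a list of (fee, vsize)."""
    total_fee = bump_fee + sum(f for f, _ in target_package)
    total_vsize = bump_vsize + sum(s for _, s in target_package)
    return total_fee / total_vsize

# A 40,000-sat bump of 200 vB dragging along a 2,000 vB, 2,000 sat target:
print(bump_effective_feerate(40_000, 200, [(2_000, 2_000)]))  # ~19.1 sat/vB
```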

If its target is confirmed or replaced, it should be dropped from the mempool.

Transactions in the mempool may set a flag that opts out of CPFP for
descendants / blocks any descendants. Channel protocols should set this
bit to prevent pinning, and then use the Fee Bump to add fees to whatever
txns need to go through. If done right, you can also layer a coinswap
protocol with the fee-bumping txn's change so that you are getting a
privacy benefit at the same time.

BTW the annex *could* be used for this purpose, but it would also be
acceptable to have it be in some kind of anyone-can-spend output. Then it
would just be an anyone-can-spend tx with OP_CHECK_TXID_IN_BLOCK (or
OP_CHECK_UTXO_SPENT_IN_BLOCK), and a miner could claim all such outputs at
the end of the block. This is worse in terms of on-chain overheads, but
nice in that it's the minimal semantic change & introduces some
general-purpose functionality.

But my thoughts are still pretty loose at the moment around it. I suspect
that to make fee bumping work nicely would require removing CPFP entirely,
but I don't know that to be the case concretely.

--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Sun, Jun 7, 2020 at 11:02 PM Dmitry Petukhov  wrote:

> В Sun, 7 Jun 2020 15:45:16 -0700
> Jeremy via bitcoin-dev  wrote:
>
> > What I think we'll eventually land on is a way of doing a tx
> > that contributes fee to another tx chain as a passive observer to
> > them. While this breaks one abstraction around how dependencies
> > between transactions are processed, it also could help resolve some
> > really difficult challenges we face with application-DoS (pinning and
> > other attacks) in the mempool beyond CTV. I have a napkin design for
> > how this could work, but nothing quite ready to share yet.
>
> I had an idea of 'Pay for neighbor' transaction where a transaction
> that is not directly a child of some other transaction can specify that
> it wants to pay the fee for that other transaction(s). It can become
> like a 'ghost child' transaction for them, in that it cannot be mined
> unless its 'ghost parents' are confirmed, too. It will be like CPFP,
> but without direct dependency via inputs. Such 'PFN' transaction would
> not spend any coins beside what it specifies in its own inputs, of
> course.
>
> The idea required a hardfork at first, but Anthony Towns suggested
> a way to make it into a soft fork (past-taproot) by putting the txids of
> 'ghost parents' into taproot annex.
>
> PFN transaction would still be valid if some of 'ghost parents' are
> already confirmed, so the miners could have more fees than strictly
> necessary. But this is the same as with CPFP.
>
> Looking at the mempool code, it seems that only the way parent/child
> transaction relationships are established will need to be adjusted to
> account for these 'ghost relationships', and once established, other
> logic will work as with CPFP. There could be complications regarding
> transaction package size. But I cannot claim that I understand that
> code enough to say something about this with certainty.
>
>


Re: [bitcoin-dev] BIP OP_CHECKTEMPLATEVERIFY

2020-06-07 Thread Jeremy via bitcoin-dev
What I think we'll eventually land on is a way of doing a tx
that contributes fee to another tx chain as a passive observer to them.
While this breaks one abstraction around how dependencies between
transactions are processed, it also could help resolve some really
difficult challenges we face with application-DoS (pinning and other
attacks) in the mempool beyond CTV. I have a napkin design for how this
could work, but nothing quite ready to share yet.

3) Hopefully 2 solves pinning :)
--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Sun, Jun 7, 2020 at 9:51 AM Joachim Strömbergson <
joachim...@protonmail.com> wrote:

> Hello everyone,
>
> regarding OP_CTV, I am considering the scaling use case, specifically an
> exchange (or similar) who wants to batch pay to OP_CTV to many users, and I
> wonder
>
> 1) How do you expect the exchange to communicate the proof of the payment
> to the user wallets such that they are able to construct the follow up
> transactions and accept the payment. This is a UI question. Do you expect
> exchanges to provide a certain importable file/blob that the wallet will
> allow you to enter?
>
> 2) Who pays the fees and how for the transaction within the structure that
> OP_CTVed output is committed to? Say there is a tree structure and I want
> to get the coin out. Someone needs to send log(N) transactions to the chain
> in order for me to get access to the final UTXO I am interested in. Who can
> construct such transaction path and what do they need for it and who pays
> fees on that (which input)?
>
> 3) Depending on 2) above, is it not possible for a malicious entity who is
> among the many users being paid, but who has very small UTXO there relative
> to others, to construct this middle transaction and use a very small fee
> rate in order to DoS other participants. Is it even possible for this
> attacker to create the middle transaction with RBF disabled?
>
> Thank you,
> Joachim
>
>
>
> Sent with ProtonMail <https://protonmail.com> Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Tuesday, November 26, 2019 1:50 AM, Jeremy via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Bitcoin Developers,
>
> Pleased to announce refinements to the BIP draft for
> OP_CHECKTEMPLATEVERIFY (replaces previous OP_SECURETHEBAG BIP). Primarily:
>
> 1) Changed the name to something more fitting and acceptable to the
> community
> 2) Changed the opcode specification to use the argument off of the stack
> with a primitive constexpr/literal tracker rather than script lookahead
> 3) Permits future soft-fork updates to loosen or remove "constexpr"
> restrictions
> 4) More detailed comparison to alternatives in the BIP, and why
> OP_CHECKTEMPLATEVERIFY should be favored even if a future technique may
> make it semi-redundant.
>
> Please see:
> BIP: https://github.com/JeremyRubin/bips/blob/ctv/bip-ctv.mediawiki
> Reference Implementation:
> https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify
>
> I believe this addresses all outstanding feedback on the design of this
> opcode, unless there are any new concerns with these changes.
>
> I'm also planning to host a review workshop in Q1 2020, most likely in San
> Francisco. Please fill out the form here
> https://forms.gle/pkevHNj2pXH9MGee9 if you're interested in participating
> (even if you can't physically attend).
>
> And as a "but wait, there's more":
>
> 1) RPC functions are under preliminary development, to aid in testing and
> evaluation of OP_CHECKTEMPLATEVERIFY. The new command `sendmanycompacted`
> shows one way to use OP_CHECKTEMPLATEVERIFY. See:
> https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-rpcs.
> `sendmanycompacted` is still under early design. Standard practices for
> using OP_CHECKTEMPLATEVERIFY & wallet behaviors may be codified into a
> separate BIP. This work generalizes even if an alternative strategy is used
> to achieve the scalability techniques of OP_CHECKTEMPLATEVERIFY.
> 2) Also under development are improvements to the mempool which will, in
> conjunction with improvements like package relay, help make it safe to lift
> some of the mempool's restrictions on longchains specifically for
> OP_CHECKTEMPLATEVERIFY output trees. See: 
> https://github.com/bitcoin/bitcoin/pull/17268
> This work offers an improvement irrespective of OP_CHECKTEMPLATEVERIFY's
> fate.
>
>
> Neither of these are blockers for proceeding with the BIP, as they are
> ergonomics and usability improvements needed once/if the BIP is activated.
>
> See prior mailing list discussions here:
>
> *
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016934.html
> *
> https://lists.linuxfoun

Re: [bitcoin-dev] BIP-341: Committing to all scriptPubKeys in the signature message

2020-05-01 Thread Jeremy via bitcoin-dev
At the end of the day I don't really care that much; I just prefer
something that doesn't throw taproot in for another review cycle.

A side effect of this proposal is that it would seem to make it impossible
to produce a signature for a transaction without having access to the
inputs. This is limiting for a number of cases where you don't care about
that data. There is a litany of use cases where you don't want
SIGHASH_ALL behavior, and having to sign the scriptPubKeys breaks that. So
at the very least it should respect other flags.

I also don't really understand the exact attack. So you submit a
transaction to the wallet asking them to sign input 10. They sign. They've
committed to the signature being bound to the specific COutpoint and input
index, so I don't see how they wouldn't be required to sign a second
signature with the other output too? Is there an attack you can describe
end-to-end relying on this behavior?

If you look at the TXID hash, the vouts are among the last fields
serialized. This makes it possible (at least, I think) to do a midstate
proof, so that all you provide is the hash midstate, the relevant
transaction output, the sibling outputs after it, and the locktime. So you
get to skip all the input data, the witness data, and most of the output
data.
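
Roughly, the verification step would look like the following
(Python-flavored pseudocode: hashlib exposes no midstate API, so treat
`midstate.clone()`/`.update()` as hypothetical, and note a real SHA256
midstate only exists at a 64-byte block boundary, a detail elided here):

```
from hashlib import sha256

def verify_output_via_midstate(midstate, output_bytes, later_outputs,
                               locktime_bytes, expected_txid):
    """`midstate` covers nVersion, all inputs, the output count, and any
    outputs before the one we care about. The verifier appends the claimed
    output, the remaining outputs, and nLockTime, then finishes the double
    SHA256 and compares against the txid from the outpoint."""
    h = midstate.clone()                 # hypothetical midstate API
    h.update(output_bytes)
    for out in later_outputs:
        h.update(out)
    h.update(locktime_bytes)
    return sha256(h.digest()).digest() == expected_txid
```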

This sort of data can easily go into the proprietary-use area in PSBT
(maybe becoming well defined if there's a standardization push), so that
hardware devices can get easy access to it. All they have to do to verify
is finalize the hash against that buffer and match it to the correct
input.


As an alternative proposal, I think you can just make a separate BIP for
some new sighash flags that can be reviewed separately from taproot.
There's a lot of value in investing in figuring out more granular controls
over what signature hash you sign, which may have some exciting
contracting implications!
--
@JeremyRubin 



On Fri, May 1, 2020 at 5:26 AM Greg Sanders via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> For what it's worth this measure had been discussed as a lightweight way
> of informing offline signers if inputs were segwit or not for malleability
> analysis reasons. So there's at least a couple direct use-cases it seems.
>
> On Fri, May 1, 2020, 8:23 AM Russell O'Connor via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> While I'm not entirely convinced yet that ascertaining non-ownership of
>> an input is a robust method of solving the problem here, I also see little
>> reason not to amend BIP-341 as proposed. The ScriptPubKeys in question are
>> already indirectly covered through the outpoints, so it is just a matter of
>> optimization. Furthermore, in the consensus code, the ScriptPubKeys are
>> part of the UTXO data set, and it is already being retrieved as part of the
>> transaction checking process, so it is readily available.
>>
>> I'm not sure how much my opinion on the topic matters, but I did include
>> this kind of functionality in my design for Simplicity on Elements, and I
>> have been leaning towards adding this kind of functionality in my Bitcoin
>> demo application of Simplicity.
>>
>> Regarding specifics, I personally think it would be better to keep the
>> hashes of the ScriptPubKeys separate from the hashes of the input values.
>> This way anyone only interested in input values does not need to wade
>> through what are, in principle, arbitrarily long ScriptPubKeys in order to
>> check the input values (which are each of fixed size). To that end, I would also
>> (and independently) propose separating the hashing of the output values
>> from the output ScriptPubKeys in `sha_outputs` so again, applications
>> interested only in summing the values of the outputs (for instance to
>> compute fees) do not have to wade through those arbitrarily long
>> ScriptPubKeys in the outputs.
>>
>> On Thu, Apr 30, 2020 at 4:22 AM Andrew Kozlik via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hi everyone,
>>>
>>> In the current draft of BIP-0341 [1] the signature message commits to
>>> the scriptPubKey of the output being spent by the input. I propose that the
>>> signature message should commit to the scriptPubKeys of *all* transaction
>>> inputs.
>>>
>>> In certain applications like CoinJoin, a wallet has to deal with
>>> transactions containing external inputs. To calculate the actual amount
>>> that the user is spending, the wallet needs to reliably determine for each
>>> input whether it belongs to the wallet or not. Without such a mechanism an
>>> adversary can fool the wallet into displaying incorrect information about
>>> the amount being spent, which can result in theft of user funds [2].
>>>
>>> In order to ascertain non-ownership of an input which is claimed to be
>>> external, the wallet needs the scriptPubKey of the previous output spent by
>>> this input. It must acquire the full 

Re: [bitcoin-dev] BIP-341: Committing to all scriptPubKeys in the signature message

2020-05-01 Thread Jeremy via bitcoin-dev
Hi Andrew,

If you use SIGHASH_ALL, it signs the COutPoints of all inputs, which
commit to the scriptPubKeys of the txn.

Thus the 341 hash doesn't need to sign any additional data.

As a metadata protocol you can provide all input transactions to check the
scriptPubKeys.

Best,

Jeremy
--
@JeremyRubin 


On Thu, Apr 30, 2020 at 1:22 AM Andrew Kozlik via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi everyone,
>
> In the current draft of BIP-0341 [1] the signature message commits to the
> scriptPubKey of the output being spent by the input. I propose that the
> signature message should commit to the scriptPubKeys of *all* transaction
> inputs.
>
> In certain applications like CoinJoin, a wallet has to deal with
> transactions containing external inputs. To calculate the actual amount
> that the user is spending, the wallet needs to reliably determine for each
> input whether it belongs to the wallet or not. Without such a mechanism an
> adversary can fool the wallet into displaying incorrect information about
> the amount being spent, which can result in theft of user funds [2].
>
> In order to ascertain non-ownership of an input which is claimed to be
> external, the wallet needs the scriptPubKey of the previous output spent by
> this input. It must acquire the full transaction being spent and verify its
> hash against that which is given in the outpoint. This is an obstacle in
> the implementation of lightweight air-gapped wallets and hardware wallets
> in general. If the signature message would commit to the scriptPubKeys of
> all transaction inputs, then the wallet would only need to acquire the
> scriptPubKey of the output being spent without having to acquire and verify
> the hash of the entire previous transaction. If an attacker would provide
> an incorrect scriptPubKey, then that would cause the wallet to generate an
> invalid signature message.
>
> Note that committing only to the scriptPubKey of the output being spent is
> insufficient for this application, because the scriptPubKeys which are
> needed to ascertain non-ownership of external inputs are precisely the ones
> that would not be included in any of the signature messages produced by the
> wallet.
>
> The obvious way to implement this is to add another hash to the signature
> message:
> sha_scriptPubKeys (32): the SHA256 of the serialization of all
> scriptPubKeys of the previous outputs spent by this transaction.
>
> Cheers,
> Andrew Kozlik
>
> [1]
> https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#common-signature-message
> [2]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html


Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Jeremy via bitcoin-dev
Hi everyone,

Sorry to just be getting to a response here. Hadn't noticed it till now.

*(Plug: If anyone or their organizations would like to assist in funding
the work described below for a group of developers, I've been working to
put resources together for funding the above for a few months now, and I
think it would be high leverage towards seeing this through. There are a
lot of unsexy tasks to do that aren't coming up with a solution
(e.g., writing a myriad of mempool stress-test scenarios) and that can be
a well-defined full-time job for someone to do.)*

I've been working on exactly this problem in the mempool for months now.
I'm deeply familiar with the issues here and the types of pinning possible.
I think everyone can recognize that with my work on OP_CTV I want nothing
more than the mempool to be able to accept whatever long chains we can
throw at it, but I'm pretty well steeped at this point in the obstacles to
doing that.

I don't think that we should be entertaining further carve outs at the
moment, unless it is really trivial. Every new carve out rule added to the
way that the mempool operates is removing complexity invariants we aim to
preserve in the mempool in order to keep nodes operational. Many of these
invariants are well documented, some are not. I'm happy to go off list for
a more thorough discussion with anyone qualified to have it; this isn't the
best venue for that discussion.

From my point of view the path forward here is to dedicate more development
resources towards finishing the mempool project I began. You can see the
outstanding work here: https://github.com/bitcoin/bitcoin/projects/14,
contributing review towards moving those PRs forward will greatly improve
our ability to consider a stopgap carve out measure.

The current focus of this work is primarily on:

1) Testing Construction to better test & catch regressions or
vulnerabilities introduced or extant in mempool
2) Refactoring algorithms in mempool to reduce constant factors &
asymptotics
3) Package Relay


None of these fix the exact problem at hand though, but here's part of how
they can help us:

If we finish up the algorithmic refactors I've been working on, it seems
plausible to do a one-off increase of the descendant limit to, say, 100
descendants with no further restriction. However, we could use the
opportunity to reserve the 75 additional descendant slots exclusively for
a new carve-out, and apply some new, stricter rules in that extra space.
There are a few anti-pinning countermeasures that you can apply in that
space that you would not generally want in the mempool. An example of one
is that any new transaction must pay a higher feerate and absolute fee
than every child in that space. Or that only the highest-fee-paying branch
of the excess transactions is mineable, and no others. Another would be
disabling RBF past that watermark. In all likelihood, different subsystems
interacting with the mempool will each require a different set of
restrictions with the current architecture; I don't think there's a magic
bullet.
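
For concreteness, that first countermeasure might look something like this
(Python sketch; the `(fee, vsize)` transaction shape is hypothetical, and
this is an illustration of the rule, not an implementation):

```
from collections import namedtuple

Tx = namedtuple("Tx", "fee vsize")  # fee in sats, size in vbytes

def accept_into_carveout(new_tx, txs_in_carveout):
    """A tx entering the extra descendant space must beat every tx already
    there on BOTH absolute fee and feerate, so nobody can pin by parking a
    large, low-feerate child in the carve-out."""
    rate = new_tx.fee / new_tx.vsize
    return all(new_tx.fee > t.fee and rate > t.fee / t.vsize
               for t in txs_in_carveout)

print(accept_into_carveout(Tx(5_000, 200), [Tx(1_000, 400), Tx(800, 100)]))
```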

Package relay is a promising approach for a future pinning solution as
there are opportunities to attach to packages compact proofs of improved
fee efficiency for pinned transactions. But the ground work for package
relay needs to come first. This is theoretically possible with our current
architecture of the mempool and can probably address much of the pinning
concerns by replacing pinning with more rational eviction policies.

Longer term I've been working on plans and designs to completely re-do the
mempool's architecture to make it behave for arbitrary cases. It's possible
to one day lift all preemptively enforced (e.g., before acceptance)
descendants limits, which can solve this problem for good. There is more
than one potentially good solution here, and a conjunction of them can be
used as they affect independent sub systems. But this work will probably
take years to complete to the point where restrictions can realistically be
lifted.

If developers would like to coordinate resources around completing this
work and making more regular progress on it I'm happy to help point people
to specific tasks that need to be done in order to accelerate this and help
serialize the work so that we can not get into rebase hell.

Originally I had the plug at the top as a closing note, but I figured
people might miss it.

Best,

Jeremy


--
@JeremyRubin 


Re: [bitcoin-dev] Removing Single Point of Failure with Seed Phrase Storage

2020-02-26 Thread Jeremy via bitcoin-dev
As a replacement for paper, something like this makes sense vs. what you
do with a Ledger presently.

However, Shamir's shares notoriously have the issue that the key does
exist in plaintext on a device at some point.

Non-interactive multisig has the benefit of being able to sign
transactions without the keys ever being in the same room/place/device.
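
To make the plaintext-exposure point concrete, here is a textbook Shamir
split/recombine sketch (Python, over a demo prime field; parameters chosen
to match the 2-of-4 scheme described in the quoted mail -- note that both
`split` and `combine` necessarily hold the secret in plaintext on whatever
device runs them):

```
import secrets

P = 2**127 - 1  # a Mersenne prime; fine for a demo field

def split(secret: int, k: int, n: int):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]   # secret is plaintext here

def combine(shares):
    acc = 0
    for i, (xi, yi) in enumerate(shares):         # Lagrange interpolation at 0
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, P - 2, P)) % P
    return acc                                    # ...plaintext again here

shares = split(0xC0FFEE, k=2, n=4)
assert combine(shares[:2]) == 0xC0FFEE
```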
--
@JeremyRubin 



On Wed, Feb 26, 2020 at 9:14 AM Contact Team via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Everyone,
> Seed phrase security has been a subject of discussion for a long time now.
> Though there are varying opinions on the subject, the conflict usually
> arises due to different security models used by different individuals. The
> general practice in the space has been to use paper or metal engraving
> options to secure the seed phrase, but those too act as a single point of
> failure where secure storage is concerned. Hardware wallets, whether they
> use a secure element or not, can be hacked either through basic glitching
> or through bigger schemes like state-enforced backdoors in the
> closed-source SE used.
>
> The option that Cypherock (Cypherock X1 Wallet) is working on removes a
> single point of failure when it comes to storage of seed phrases. It uses 2
> of 4 (with the option of setting up a custom threshold limit) Shamir Secret
> Sharing to split the seed phrase into 4 different shares. Each share gets
> stored in a PIN (hardware enforced) card with an EAL 6+ secure element.
> The user would need any 2 of these 4 cyCards to recover the seed or make a
> transaction. Ideally they should all be stored at different locations and
> this added security through distribution makes losing seed phrase highly
> improbable. We have decoupled storage and computation aspect of a hardware
> wallet. More information can be obtained from cypherock.com. The purpose
> of this mail is to get feedback from the community. Let us know if there is
> any feedback, we would love it.
>
> Thanks


Re: [bitcoin-dev] Taproot public NUMS optimization (Re: Taproot (and graftroot) complexity)

2020-02-14 Thread Jeremy via bitcoin-dev
I am working on CTV, which has cases where it's plausible you'd want a
taproot tree with a NUMS point.

The need for NUMS points is a little bit annoying. There are a few reasons
you would want to use them instead of multisig:

1) Cheaper to verify/create.
If I have a protocol with 1000 people in it, if I add a multisig N of N to
verify I need a key for all those people, and the probability of use seems
low.
I then also need to prove to each person in the tree that their key is
present. My memory on MuSig is a bit rusty, but I think the key
aggregation requires sending all the public keys and re-computing. (Maybe
you can compress this to O(log n) using a Merkle tree for the tweak L?)
Further, these keys can't just be the addresses provided for those 1000
people, as if those addresses are themselves N of Ns or scripts it gets
complicated, fast (and potentially broken). Instead we should ask that each
participant give us a list of keys to include in the top-level. We'd also
want each participant to provide
two signatures with that key of some piece of non-txn data (so as to prove
that it itself wasn't a NUMS point -- otherwise we may as well skip all
this and just use a top-level NUMS point).
2) Auditable.
If I set up an inheritance scheme, like an annuity or something, and the
IRS wants me to pay taxes on what I've received, adverse inference will
tell them to assume that my parent gave me a secret get all the money path
and this is a tax dodge. With a NUMS point, heirs can prove there was no
top-level N of N.
3) I simply don't want to spend it without a script condition, e.g.,
timelock.


Now, assuming you do want a NUMS, there is basically 4 ways to make one
(that I could think of):

1) Public NUMS -- this is a constant, HashToCurve("I am a NUMS Point").
Anyone scanning the chain can see spends are using this constant. Hopefully
everyone uses the same constant (or everyone uses 2,3,4) so that "what type
of NUMS you are using" isn't a new fingerprint.
2) Mostly Public NUMS -- I take the hash of some public data (like maybe
the txid) in some well-defined protocol, and use that. Anyone scanning the
chain and doing an EC operation per txid can see I'm using a constant --
maybe my HashToCurve takes 10 seconds (perhaps through a VDF to make it
extra annoying for anyone who hasn't been sent the shortcut), but in
practice it's no better than 1.
3) Interactive NUMS -- I swap H(Rx), H(Ry) with the other participant and
then NUMS with H(Rx || Ry). This is essentially equivalent to using a MuSig
key setup where one person's key is a NUMS. Now no one passively scanning
can see that it's NUMS, but I can prove to an auditor later.
4) 1/2 RTT Async-Interactive NUMS -- I take some public salt -- say the
txid T, and hash it with a piece of random data R and then HashToCurve(T ||
R)... I think this is secure? Not clear the txid adds any security. Now I
can prove to you that the hash was based on the txid, but I've blinded it
with R to stop passive observers. But I also need to send you data out of
band for R (but I already had to do this for Taproot maybe?)

The downside with 3/4 is that if you lose your setup, you lose your
ability to spend / to prove it's private (maybe R can be generated from a
seed?). So better hold on to those tightly! Or use a public NUMS.

Only 3,4 provide any "real" privacy benefit and at a small hit to
likelihood of losing funds (more non-deterministic data to store). I guess
the question becomes how likely are we to have support for generating a
bunch of NUMS points?
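
For reference, option 1 is straightforward to implement with
try-and-increment hash-to-curve (Python sketch against the secp256k1
field; the tag string and root-selection convention are arbitrary choices
of mine, not any standard):

```
import hashlib

p = 2**256 - 2**32 - 977        # secp256k1 field prime; p % 4 == 3

def nums_point(tag: bytes):
    """Try-and-increment: hash to an x coordinate, bump a counter until
    x^3 + 7 is a quadratic residue. Anyone can re-run this and convince
    themselves nobody knows a discrete log for the result."""
    ctr = 0
    while True:
        h = hashlib.sha256(tag + ctr.to_bytes(4, "big")).digest()
        x = int.from_bytes(h, "big") % p
        y2 = (pow(x, 3, p) + 7) % p
        y = pow(y2, (p + 1) // 4, p)   # valid sqrt since p % 4 == 3
        if y * y % p == y2:
            return x, min(y, p - y)    # pick one of the two roots by convention
        ctr += 1

print(nums_point(b"I am a NUMS Point"))
```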

Comparing with this proposal which removes the NUMS requirement:

1) NUMS/Taproot anonymity set *until* spend, MAST set after spend
2) No complexity around NUMS generation/storage
3) If people don't have ecosystem-wide consistent NUMS practices, leads to
additional privacy leak v.s. bare MAST which would be equivalent to case 1
(Public NUMS)
4) Slightly less chain overhead (32 bytes/8 vbytes).
5) Slightly faster chain validation (an EC point tweak is, what, 10,000 -
100,000 times slower than a hash?)


Matt raises an interesting point in the other thread, which is that if we
put in the option for a more private NUMS thing, someone will eventually
write software for it. But that seems to be true irrespective of whether
we make no-NUMS an option for bare MAST spends.

Overall I think this is a reasonable proposal. It effectively introduces
bare MAST only to prevent the metadata leak that arises when people use a
few different Public NUMS points, by giving everyone the same incentive to
use the same one: none at all. Using a private NUMS is unaffected
incentive-wise, as it's essentially just paying a bit more to be in the
larger anonymity set. I think it makes some class of users better off, and
no one else worse off, so this change seems Pareto-improving.

Thus I'm in favor of adding a rule like this.

I think reasonable alternative responses to accepting this proposed change
would be to:

1) Add a BIP for a standard Public NUMS Point exported through secp256k1 to
head off people defining their own 

Re: [bitcoin-dev] Taproot (and graftroot) complexity (reflowed)

2020-02-14 Thread Jeremy via bitcoin-dev
Dave,

I think your point:

    When schnorr and taproot are done together, all of the following
    transaction types can be part of the same set:

    - single-sig spends (similar to current use of P2PKH and P2WPKH)
    - n-of-n spends with musig or equivalent (similar to current use of
      P2SH and P2WSH 2-of-2 multisig without special features as used by
      Blockstream Green and LN mutual closes)
    - k-of-n (for low values of n) using the most common k signers
      (similar to BitGo-style 2-of-3 where the keys involved are
      alice_hot, alice_cold, and bob_hot and almost all transactions are
      expected to be signed by {alice_hot, bob_hot}; that common case
      can be the key-path spend and the alternatives {alice_hot,
      alice_cold} and {alice_cold, bob_hot} can be script-path spends)
    - contract protocols that can sometimes result in all parties
      agreeing on an outcome (similar to LN mutual closes, cross-chain
      atomic swaps, and same-chain coinswaps)

is the same if we have Schnorr + Merkle branches without the Taproot
optimization, unless I'm missing something in one of the cases? I guess
there's a distinction between "can" and "are likely"?


Jonas,

That's a really interesting point about k-of-n systems making the most
likely k-of-k the taproot key. (For the uninitiated, MuSig can do N-of-N
aggregation non-interactively, but K-of-N requires interaction.) I think
this works with small (N choose K), but as (N choose K) increases it seems
the probability of picking the correct one goes down? (E.g., 2-of-3 has
only 3 candidate 2-of-2 sets, while 5-of-10 already has 252 5-of-5 sets.)

I guess the critical question is if cases where there's not some timelock
will be mandatory across all signing paths.


cheers,

jeremy

--
@JeremyRubin 



On Mon, Feb 10, 2020 at 9:16 AM Jonas Nick via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I agree with most of the comments so far, but the group brings up an often
> overlooked point with respect to the privacy benefits of taproot. In the
> extreme
> case, if there would be no policies that have both a key and a script spend
> path, then taproot does not improve anonymity sets compared to the "Taproot
> Public NUMS Optimization" proposal (which saves 8 vbytes in a
> script-spend). (*)
>
> In fact, the cases where scripts would have to be used given usage of
> Bitcoin
> today are be rare because threshold policies, their conjunctions and
> disjunctions can be expressed with a single public key. Even if we
> disregard
> speculation that timelocks, ANYPREVOUT/NOINPUT and other interesting
> scripts
> will be used in the future (which can be added through the leaf or key
> versions
> without affecting key-spend anonymity sets), not all of today's
> applications are
> able to be represented single public keys because there are applications
> that
> can not deal with interactive key setups or interactive signing. For
> applications where this is possible it will be a gradual change because of
> the
> engineering challenges involved. For example, k-of-n threshold policies
> could
> have the most likely k-of-k in the taproot output key and other k-of-k in
> the
> leaves, instead of going for a k-of-n taproot output key immediately.
>
> Given that anonymity sets in Bitcoin are permanent and software tends to be
> deployed longer than anyone would expect at the time of deployment,
> realistically Taproot is superior to the "Public NUMS Optimization" and "An
> Alternative Deployment Path".
>
> (*) One could argue that the little plausible deniability gained by a very
> small
> probability of the change of a script-spend being a key-spend and vice
> versa is
> significantly better than no probability at all.
>
> On 2/9/20 8:47 PM, Bryan Bishop via bitcoin-dev wrote:
> > Apologies for my previous attempt at relaying the message- it looks like
> > the emails got mangled on the archive. I am re-sending them in this
> > combined email with what I hope will be better formatting. Again this is
> > from some nym that had trouble posting to this mailing list; I didn't see
> > any emails in the queue so I couldn't help to publish this sooner.
> >
> > SUBJECT: Taproot (and Graftroot) Complexity
> >
> > This email is the first of a collection of sentiments from a group of
> > developers who in aggregate prefer to remain anonymous. These emails have
> > been sent under a pseudonym so as to keep the focus of discussion on the
> > merits of the technical issues, rather than miring the discussion in
> > personal politics.  Our goal isn't to cause a schism, but rather to help
> > figure out what the path forward is with Taproot. To that end, we:
> >
> > 1) Discuss the merits of Taproot's design versus simpler alternatives
> (see
> > thread subject, "Taproot (and Graftroot) Complexity").
> >
> > 2) Propose an alternative path to deploying the technologies described in
> > BIP-340, BIP-341, and BIP-342 (see thread subject, "An Alternative
> > Deployment Path for Taproot Technologies").
> >

Re: [bitcoin-dev] BIP OP_CHECKTEMPLATEVERIFY

2020-02-14 Thread Jeremy via bitcoin-dev
s Slack channel).
>
> As far as I can tell, this principle can be violated with the use of
> RBF: "(tx) that was included in branch A and then RBF-ed (tx') in branch
> B and then branch A wins -> children of (tx') can't be replayed"
>
> Some may hold an opinion that introducing new rules that violate that
> principle should be done with caution.
>
> The 'revocation utxo' feature enabled by OP_CTV essentially introduces
> a manually triggered 'inverse timelock' -  normal timelocks make tx
> invalid until certain point in time, and inverse timelock make tx
> invalid _after_ certain point in time, in this case by spending an
> unrelated UTXO.
>
> In a reorg, one branch can have that UTXO spent before the OP_CTV
> transaction that depends on it is included in the block, and the OP_CTV
> transaction and its children can't be replayed.
>
> This is the same issue as an 'automatic inverse timelock' that could
> be enforced by the structure of the transaction itself, if there was
> appropriate mechanism, with the difference that 'revocation utxo' is
> manually triggered.
>
> The absence of an 'automatic inverse timelock' mechanism in Bitcoin hints
> that it was not seen as desirable historically. I was not able to find
> the relevant discussions, though.
>
> I would like to add that the behaviour enabled by inverse timelocks
> could be usable in various schemes with covenants, like the vaults
> with access revocable by spending the 'revocation utxo', or in the
> trustless lending schemes where the covenant scripts can enforce
> different amounts of interest paid to lender based on the point in time
> when the loan is returned - the obsolete script paths (with smaller
> interest paid) can be disabled by inverse timelock.
>
> В Fri, 13 Dec 2019 23:37:19 -0800
> Jeremy  wrote:
>
> > That's a cool use case. I've thought previously about an
> > OP_CHECKINPUT, as a separate extension. Will need to think about if
> > your construction introduces a hash cycle (unless
> > SIGHASH_ALL|SIGHASH_ANYONECANPAY is used it seems likely).
> >
> > Also re signatures I think it's definitely possible to pick a
> > (signature, message) pair and generate a pk from it, but in general
> > the Bitcoin message commits to the pk so forging isn't possible.
> >
> > On Fri, Dec 13, 2019, 11:25 PM Dmitry Petukhov 
> > wrote:
> >
> > > Another idea for smart vaults:
> > >
> > > The ability to commit to scriptSig of a non-segwit input could be
> > > used for on-chain control of spending authorization (revoking the
> > > spending authorization), where CTV ensures that certain input is
> > > present in the transaction.
> > >
> > > scriptSig of that input can contain a signature that commits to
> > > certain prevout. Unless it is possible to forge an identical
> > > signature (and I don't know how strong are guarantees of that),
> > > such an input can only be valid if that prevout was not spent.
> > >
> > > Thus spending such prevout makes it impossible to spend the input
> > > with CTV that commits to such scriptSig, in effect revoking an
> > > ability to spend this input via CTV path, and alternate spending
> > > paths should be used (like, another taproot branch)
> > >
> > >
> > > В Fri, 13 Dec 2019 15:06:59 -0800
> > > Jeremy via bitcoin-dev 
> > > пишет:
> > > > I've prepared a draft of the changes noted above (some small
> > > > additional modifications on the StandardTemplateHash described in
> > > > the BIP), but have not yet updated the main branches for the BIP
> > > > to leave time for any further feedback.
> > > >
> > > > See below:
> > > >
> > > > BIP:
> > > > https://github.com/JeremyRubin/bips/blob/ctv-v2/bip-ctv.mediawiki
> > > > Implementation:
> > > > https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-v2
> > > >
> > > > Thank you for your feedback,
> > > >
> > > > Jeremy
> > > > --
> > > > @JeremyRubin <https://twitter.com/JeremyRubin>
> > > > <https://twitter.com/JeremyRubin>
> > >
> > >
>
>


Re: [bitcoin-dev] CTV through SIGHASH flags

2020-02-03 Thread Jeremy via bitcoin-dev
I think these ideas shows healthy review of how OP_CTV is specified against
alternatives, but I think most of the ideas presented are ill advised.

--
@JeremyRubin 



On Sat, Feb 1, 2020 at 2:15 PM Bob McElrath via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> We propose that OP_CHECKTEMPLATEVERIFY should behave more like CHECKSIG,
> including a flags byte that specify what is hashed. This unifies the ways a
> SigHash is computed, differing only in the final checksig which is omitted
> in
> favor of chacking the hash directly. Having two paths to create a
> signature hash
> introduces extra complexity, especially as concerns potential future
> SIGHASH
> flag upgrades.



> I believe the above may be possible *without* a new opcode and simply with
> a
> sighash flag. That is, consider a flag SIGHASH_NOSIG which behaves as
> follows:
> The stack is expected to contain  , where the hash to be
> checked is
>  and is in the place where you'd normally put a pubkey. The byte
> 
> is the second thing on the stack. This is intended to be an empty
> "signature"
> with the flags byte appended (which must contain SIGHASH_NOSIG to succeed).
>


I've previously brought this up in IRC
http://gnusha.org/bitcoin-wizards/2019-11-28.log


AFAIK, using an actual CheckSig SIGHASH flag as-is is a bad idea because
then you need to include an encoding-valid signature and pubkey that map
onto the hash to check. This is not just 11 extra bytes of data (33 bytes
pubkey + 9 bytes signature + 2 pushes - 32 bytes - 1 byte push), it's
also a very awkward API. I don't think you can soft-fork around these
encoding rules. But you're right that it's possible to add this as a
SIGHASH flag. I don't think doing CTV as a sighash flag is worth
considering further.

I get your point that CTV is kind of a signature hash, and that we might
want to not have a separate path. This ignores, however, that the current
SIGHASH code-path is kind of garbage and literally no one likes it and it
has been the source of nasty issues previously. Thus I posit that a
separate path creates less complexity, as we don't need to worry about
accidentally introducing a weird interaction with other sighash flags.




> CTV omits inputs as part of its semantics, so CTV-type functionality using
> CHECKSIG is also achievable if some form of NOINPUT flag is also deployed.
> With
> NOINPUT alone, a standard CHECKSIG can be used to implement a covenant --
> though
> it uses an unnecessarily large number of bytes just to check a 32-byte
> hash.
> Therefore, any pitfalls CTV intends to evade can be evaded by using a
> CHECKSIG,
> if NOINPUT is deployed in some form, adding new flexibility.  Beyond what's
> possible with NOINPUT/ANYPREVOUT, CTV additionally commits to:
>
> »···1. Number of inputs
> »···2. Number of outputs
> »···3. Index of input
>

NOINPUT as specified here
https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-anyprevout.mediawiki
(is this the latest?) isn't a great surrogate for CTV because CTV commits
to the input index, which prevents the half-spend issue. NOINPUT as
proposed also requires an additional chaperone signature to fix it to a
specific output.

This adds a lot of complexity and space to using CTV. Maybe NOINPUT could
make changes to special-case CTV, but then we're back to CTV again.



>
> The justification given for committing to the number of inputs and outputs
> is
> that "it makes CTV hashes easier to compute with script", however doing so
> would
> require OP_CAT. It's noted that both of these are actually redundant
> commitments. Since the constexpr requirement was removed, if OP_CAT were
> enabled, this commitment to the input index could be evaded by computing
> the CTV
> hash within the script, modifying the input index using data taken from the
> witness. Therefore committing to the input index is a
> sender-specified-policy
> choice, and not an anti-footgun measure for the redeemer. As such, it's
> appropriate to consider committing to the input index using a flag instead.
>
> This is incorrect almost entirely.

1. There is a semantic difference between the *commitment* being strictly
redundant, which has more to do with malleation, and it being redundant
from a feature perspective. I could maybe do a better job of expanding
what "easier" means here -- there are actually some scripts which are
quite difficult or impossible to write without this. I've described this
in a couple of places outside of the BIP, but essentially it allows you to
pin the number of inputs/outputs separately from the hashes themselves. So
if you're trying to build the template in script, you might want to allow
the sequences to be set to any value, passing them in via a hash. But from
a hash alone you can't check the validity of the length. An external
length commitment lets you do this; without it you would have to pass in
the sequences directly.
2. The constexpr requirement was 

Re: [bitcoin-dev] op_checktemplateverify and number of inputs

2020-01-26 Thread Jeremy via bitcoin-dev
Hi Billy,

Restricting the number of inputs is necessary to preclude TXID
malleability. Committing to all of the information required necessitates
that the number of inputs be committed.

This allows us to build non-interactive layer 2 protocols which depend on
TXID non-malleability (most of them at writing).

You raise a good point that allowing *any number* of inputs is an
interesting case, which I had discussed offline with a few different
people. I think the conclusion was that that flexibility is better left
outside of the OP directly.

If you want an any number of inputs template, and we enable something like
OP_CAT (e.g., OP_CAT, OP_SHA256STREAM) then you can spend to something like:

 OP_SWAP OP_CAT OP_SWAP OP_CAT  OP_CAT OP_SHA256 OP_CTV

And then pass in the # of inputs and sequences hash as arguments to the
function.
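
To illustrate why the separable count commitments matter, here's a sketch
of a CTV-style template hash where the counts are fields of their own
(Python; the field order and serialization here are illustrative
assumptions, not the normative StandardTemplateHash from the BIP):

```
import hashlib, struct

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def template_hash(version, locktime, n_inputs, sequences_hash,
                  n_outputs, outputs_hash, input_index):
    """Because n_inputs is committed separately from sequences_hash, a
    script can pin the counts as constants while taking sequences_hash
    from the witness -- which is what the OP_CAT construction above does."""
    return sha256(
        struct.pack("<i", version) +
        struct.pack("<I", locktime) +
        struct.pack("<I", n_inputs) + sequences_hash +
        struct.pack("<I", n_outputs) + outputs_hash +
        struct.pack("<I", input_index))

# Example: a 1-input, 2-output template spent at input index 0.
seq_hash = sha256(struct.pack("<I", 0xFFFFFFFF))
out_hash = sha256(b"...serialized outputs...")
print(template_hash(2, 0, 1, seq_hash, 2, out_hash, 0).hex())
```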

I can respond separately to your bitcointalk post as you ask a different
set of questions there.

Best,

Jeremy
--
@JeremyRubin 



On Sun, Jan 26, 2020 at 8:59 AM Billy via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I have a question about op_ctv related to the requirement to specify the
> number of inputs. I don't quite see why its necessary, but most
> importantly, I don't see why we want to *require* the user of the op to
> specify the number of inputs, tho I see the reasoning why one would want to
> specify it. If the op allowed both cases (specifying a number of inputs and
> allowing any number), it seems like the best of both worlds. I started a
> discussion on bitcointalk.org:
>
> https://bitcointalk.org/index.php?topic=5220520


Re: [bitcoin-dev] OP_CTV Workshop & CFP February 1st, 2020

2020-01-17 Thread Jeremy via bitcoin-dev
It's not too late to sign up to attend the workshop, but we are
approaching capacity!

Please fill out the form if you'd like to participate as soon as possible
so that we can plan accordingly.

Feel free to forward this posting to people who don't follow this list but
you think should attend.
--
@JeremyRubin 



On Sat, Jan 4, 2020 at 5:58 PM Jeremy  wrote:

> Dear Bitcoin Developers,
>
> On February 1st, 2020 in San Francisco (location to be shared with
> attendees only) I will be hosting a workshop to aid in reviewing and
> advancing OP_CHECKTEMPLATEVERIFY.
>
> The workshop will be from 10am-5pm. The basic schedule of events (subject
> to change) is in the footer of this email.
>
> If you would like to attend, please fill out the form
> https://forms.gle/ex2WLYS319HFdpJYA . We should have capacity for
> everyone who wants to come, but I'll need to know by January 15th if you
> plan to attend. The primary audience for the event is Bitcoin developers,
> ecosystem engineers (i.e., mining pools, wallets, exchanges, etc), and
> researchers.
>
> If you have research or projects related to OP_CTV you would be interested
> in presenting, please indicate in the application form with a brief
> summary of your topic.
>
> I may be able to sponsor travel for a few developers who would otherwise
> be unable to attend. Please indicate on the form if you require such
> support.
>
> If you're able to sponsor the event (for lunch/dinner, or for travel
> subsidies), please reach out or indicate on the form.
>
> If you cannot attend, I'll make a best effort to make all materials from
> the event available online. The channel ##ctv-bip-review is also available
> for general discussion about OP_CTV.
>
> Happy New Year!
>
> Jeremy
>
> 10:00 AM - 10:30 AM: coffee & registration
>
> BIP SESSION
> 10:30 AM - 11:00 AM: CTV BIP Design Walkthrough & Basic Motivation
> 11:00 AM - 11:30 AM: Small Group Discussion & BIP Reading
> 11:30 AM - 12:00 PM: BIP Q&A
>
> 12pm: Lunch
>
> IMPLEMENTATION SESSION
> 1:00 PM - 1:30 PM: BIP Implementation Walkthrough
> 1:30 PM - 2:00 PM: Q&A + silent implementation review time
>
> DEPLOYMENT SESSION
> 2:00 PM - 2:15 PM: Deployment Plan Proposal
> 2:15 PM - 2:45 PM: Deployment Plan Discussion
>
> 2:45-3pm: BREAK
>
> ECOSYSTEM SUPPORT SESSION
> 3:00 PM - 3:30 PM: Mempool Updates Presentation & Discussion
> 3:30 PM - 4:00 PM: Package Relay Informational Updates
>
> DEMO SESSION & APPLICATION TALKS
> 4:00 PM - 4:10 PM: SENDMANYCOMPACTED Demo
> 4:10 PM - 4:20 PM: Vault Wallet Demo
> 4:20 PM: - 4:30 PM: TBA
> 4:30 PM - 4:40PM: TBA
> 4:40 PM - 4:50 PM: TBA
>
> WRAP UP
> 4:50 PM - 5:00 PM
>
> DINNER:
> 5:00 PM - 7:00 PM Dinner & Drinks
>
>
> --
> @JeremyRubin 
> 
>


Re: [bitcoin-dev] Coins: A trustless sidechain protocol

2020-01-13 Thread Jeremy via bitcoin-dev
https://utxos.org/uses/

Yes, you should check out the material at the link above. Specifically non
interactive channels solve this problem of one sided opens, where the other
party is passive/offline.


On Mon, Jan 13, 2020, 12:42 PM Joachim Strömbergson via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> > Instead of using sidechains, just use channel factories.
>
> I am not familiar enough with the latest advancements in this field. Is it
> possible using LN/channel factories to achieve off-line-like participation
> user experience without previous registration with any kind of gateway
> provider? For example, can you go online, join the network [somehow
> instantly], generate address/invoice and then put it somewhere for others
> to later use it when you are off-line? Can you also participate while being
> off-line for very long periods of time without relying on third party
> providers to secure your channels? If not, is using sidechains really
> equally replaceable with LN/CF constructions?
>


Re: [bitcoin-dev] Modern Soft Fork Activation

2020-01-10 Thread Jeremy via bitcoin-dev
It's not at a "directly implementable policy state", but I think you might
be interested in checking out the spork protocol upgrade model I proposed
a while back: https://www.youtube.com/watch?v=J1CP7qbnpqA

It has some interesting behavior with respect to the 5 properties you've
mentioned.

1) Avoid activating in the face of significant, reasonable, and directed
objection. Period.

Up to miners to orphan spork-activating blocks.

2) Avoid activating within a timeframe which does not make high
node-level-adoption likely.

A mandatory minimum flag day for spork initiation makes even earlier
activation statistically improbable/impossible.

3) Don't (needlessly) lose hashpower to un-upgraded miners.

Difficulty adjustments make the missing spork'd block "go away" over time;
the additional difficulty of *not activating* a rejected spork fills in as
additional PoW.


4) Use hashpower enforcement to de-risk the upgrade process, wherever
possible.

Miners choose to activate or build on activating blocks.

5) Follow the will of the community, irrespective of individuals or
unreasoned objection, but without ever overruling any reasonable
objection.

Honest signalling forces people to "put their money where their mouth is"
on what the community wants.
--
@JeremyRubin 



[bitcoin-dev] OP_CTV Workshop & CFP February 1st, 2020

2020-01-04 Thread Jeremy via bitcoin-dev
Dear Bitcoin Developers,

On February 1st, 2020 in San Francisco (location to be shared with
attendees only) I will be hosting a workshop to aid in reviewing and
advancing OP_CHECKTEMPLATEVERIFY.

The workshop will be from 10am-5pm. The basic schedule of events (subject
to change) is in the footer of this email.

If you would like to attend, please fill out the form
https://forms.gle/ex2WLYS319HFdpJYA . We should have capacity for everyone
who wants to come, but I'll need to know by January 15th if you plan to
attend. The primary audience for the event is Bitcoin developers, ecosystem
engineers (i.e., mining pools, wallets, exchanges, etc), and researchers.

If you have research or projects related to OP_CTV you would be interested
in presenting, please indicate in the application form with a brief summary
of your topic.

I may be able to sponsor travel for a few developers who would otherwise be
unable to attend. Please indicate on the form if you require such support.

If you're able to sponsor the event (for lunch/dinner, or for travel
subsidies), please reach out or indicate on the form.

If you cannot attend, I'll make a best effort to make all materials from
the event available online. The channel ##ctv-bip-review is also available
for general discussion about OP_CTV.

Happy New Year!

Jeremy

10:00 AM - 10:30 AM: coffee & registration

BIP SESSION
10:30 AM - 11:00 AM: CTV BIP Design Walkthrough & Basic Motivation
11:00 AM - 11:30 AM: Small Group Discussion & BIP Reading
11:30 AM - 12:00 PM: BIP Q&A

12pm: Lunch

IMPLEMENTATION SESSION
1:00 PM - 1:30 PM: BIP Implementation Walkthrough
1:30 PM - 2:00 PM: Q&A + silent implementation review time

DEPLOYMENT SESSION
2:00 PM - 2:15 PM: Deployment Plan Proposal
2:15 PM - 2:45 PM: Deployment Plan Discussion

2:45-3pm: BREAK

ECOSYSTEM SUPPORT SESSION
3:00 PM - 3:30 PM: Mempool Updates Presentation & Discussion
3:30 PM - 4:00 PM: Package Relay Informational Updates

DEMO SESSION & APPLICATION TALKS
4:00 PM - 4:10 PM: SENDMANYCOMPACTED Demo
4:10 PM - 4:20 PM: Vault Wallet Demo
4:20 PM: - 4:30 PM: TBA
4:30 PM - 4:40PM: TBA
4:40 PM - 4:50 PM: TBA

WRAP UP
4:50 PM - 5:00 PM

DINNER:
5:00 PM - 7:00 PM Dinner & Drinks


--
@JeremyRubin 



Re: [bitcoin-dev] BIP OP_CHECKTEMPLATEVERIFY

2019-12-19 Thread Jeremy via bitcoin-dev
I've updated the main branch (ctv) to match ctv-v2, and pushed branches
ctv-v1 which points at the prior versions.

Thanks to Dmitry Petukhov for helping me fix several typos and errors.

I also wanted to share some "non-technical" tax analysis covering the use
of OP_CTV for batched payments. See here:
https://utxos.org/analysis/taxes/

As an aside, the site https://utxos.org/ generally is a repository of
information & material on OP_CTV, its design, applications, and analysis.
If you're interested in contributing any content please let me know!

Best,

Jeremy
--
@JeremyRubin 



On Fri, Dec 13, 2019 at 3:06 PM Jeremy  wrote:

> I've prepared a draft of the changes noted above (some small additional
> modifications on the StandardTemplateHash described in the BIP), but have
> not yet updated the main branches for the BIP to leave time for any further
> feedback.
>
> See below:
>
> BIP: https://github.com/JeremyRubin/bips/blob/ctv-v2/bip-ctv.mediawiki
> Implementation:
> https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-v2
>
> Thank you for your feedback,
>
> Jeremy
> --
> @JeremyRubin 
> 
>
>


Re: [bitcoin-dev] BIP OP_CHECKTEMPLATEVERIFY

2019-12-13 Thread Jeremy via bitcoin-dev
I've prepared a draft of the changes noted above (some small additional
modifications on the StandardTemplateHash described in the BIP), but have
not yet updated the main branches for the BIP to leave time for any further
feedback.

See below:

BIP: https://github.com/JeremyRubin/bips/blob/ctv-v2/bip-ctv.mediawiki
Implementation:
https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-v2

Thank you for your feedback,

Jeremy
--
@JeremyRubin 



Re: [bitcoin-dev] BIP OP_CHECKTEMPLATEVERIFY

2019-12-10 Thread Jeremy via bitcoin-dev
Three changes I would like to make to the OP_CTV draft. I think this should
put the draft in a very good place w.r.t. outstanding feedback.

The changes can all be considered/merged independently, though they are
written below assuming all of them are reasonable.


1) *Make the hash commit to the INPUT_INDEX of the executing scriptPubKey.*

*Motivation:* As previously specified, a CTV template constructed
specifying a transaction with two or more inputs has a "half-spend" issue
whereby if the template script is paid to more than once (a form of
key-reuse), they can be joined in a single transaction leading to half of
the intended outputs being created.
*Example:*
Suppose I have a UTXO with a CTV requiring two inputs. The first is set to
be the CTV template, and the input has enough money to pay for all the
outputs. The second input is added to allow attaching a fee-only UTXO.
Now suppose someone creates a similar UTXO with this same CTV (even within
the same transaction).


TxA {vin: [/*elided...*/], vout: [TxOut{1 BTC, <H> CTV},
TxOut {1 BTC, <H> CTV}]}

*Intended Behavior:*
TxB0 {vin: [Outpoint{TxA.hash(), 0}, /*arbitrary fee utxo*/], vout :
[TxOut {1 BTC, /* arbitrary scriptPubKey */}]}
TxB1 {vin: [Outpoint{TxA.hash(), 1}, /*arbitrary fee utxo*/], vout :
[TxOut {1 BTC, /* arbitrary scriptPubKey */}]}
*Possible Unintended Behaviors:*
*Half-Spend:*
TxB {vin: [Outpoint{TxA.hash(), 1}, Outpoint{TxA.hash(), 0}], vout
: [TxOut {1 BTC, /* arbitrary scriptPubKey */}]}
*Order-malleation:*
TxB0 {vin: [/*arbitrary fee utxo*/, Outpoint{TxA.hash(), 0}], vout
: [TxOut {1 BTC, /* arbitrary scriptPubKey */}]}
TxB1 {vin: [Outpoint{TxA.hash(), 1}, /*arbitrary fee utxo*/], vout
: [TxOut {1 BTC, /* arbitrary scriptPubKey */}]}

With the new rule, the CTV commits to the index in the vin array at which
it will appear. This prevents both the half-spend issue and the
order-malleation issue.

Thus, the only execution possible is:

*Intended Behavior:*
TxB0 {vin: [Outpoint{TxA.hash(), 0}, /*arbitrary fee utxo*/], vout :
[TxOut {1 BTC, /* arbitrary scriptPubKey */}]}
TxB1 {vin: [Outpoint{TxA.hash(), 1}, /*arbitrary fee utxo*/], vout :
[TxOut {1 BTC, /* arbitrary scriptPubKey */}]}

*Impact of Change:*
This behavior change is minor -- in most cases we are expecting templates
with a single input, so committing the input index has no effect.

When we do specify multiple inputs, committing the INPUT_INDEX has the
side effect of making reused keys not susceptible to the "half-spend" issue.

This change doesn't limit the technical capabilities of OP_CTV by much
because cases where the half-spend construct is desired can be specified by
selecting the correct inputs for the constituent transactions for the
transaction-program. In the future, Taproot can make it easier to express
contracts where the input can appear at any index by committing to a tree
of positions.

This change also has the benefit of reducing the miner-caused TXID
malleability in certain applications (e.g., in a wallet vault you can
reduce malleability from your deposit flow, preventing an exponential
blow-up). However in such constructions the TXIDs are still malleable if
someone decides to pay you Bitcoin that wasn't previously yours through a
withdrawal path (a recoverable error, and on the bright side, someone paid
you Bitcoin to do it).

This change also has a minor impact on the cacheability of OP_CTV. In the
reference implementation we currently precompute and store a single hash for
the StandardTemplateHash of the entire transaction. Making the hash vary
per-input means that we would need to precompute one hash per input, which
is impractical. Given that we expect the 0-index to be the exceedingly
common case, and it's not horribly expensive if the value isn't cached (a
constant-sized SHA-256), the spec will be updated to precompute and cache
only the hash for the 0th index. (The hash is also changed slightly to make
it more efficient for un-cached values, as noted in change 3.)
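
To make the committed fields concrete, here is a minimal sketch of how a
per-input template hash might be computed. This is illustrative only: the
exact field set, ordering, and serialization are defined in the BIP, and
the helper names below are assumptions.

```
import hashlib
import struct

def sha256(data):
    return hashlib.sha256(data).digest()

def standard_template_hash(version, locktime, sequences,
                           serialized_outputs, input_index):
    # Sketch: commit to the fields discussed above, now including the
    # vin position at which the CTV-encumbered input must appear.
    r = struct.pack("<i", version)
    r += struct.pack("<I", locktime)
    r += sha256(b"".join(struct.pack("<I", s) for s in sequences))
    r += struct.pack("<I", len(sequences))      # number of inputs
    r += sha256(b"".join(serialized_outputs))   # hash of serialized TxOuts
    r += struct.pack("<I", input_index)         # new: prevents half-spend
    return sha256(r)
```

In the half-spend TxB above, both templates commit to input index 0, but
one of the two CTV inputs necessarily sits at index 1, so its check fails
and the combined transaction is invalid.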


*2) Remove Constexpr restriction*
*Changes:*
Currently it is checked that the template hash argument was not 'computed',
but came from a preceding push. Remove all this logic and accept any
argument.
*Motivation:*
I've had numerous conversations with Bitcoin developers (see above, see
#bitcoin-wizards on Nov 28th 2019, in person at local meetups, and in
private chats with ecosystem developers) about the constexpr restriction in
OP_CTV. There have been a lot of folks asking to remove the template
constexpr restriction, for a few reasons:

a) Parsing Simplification / no need for special-casing in optimizers like
miniscript
b) The types of script it disables aren't dangerous
c) There are exciting things you could do were it not there, if other
features were also enabled (e.g., OP_CAT)
d) Without other features (like OP_CAT), there's not really too much you
can do

No one has expressed any strong justification to keep it.

The main motivation for the constexpr restriction 

Re: [bitcoin-dev] BIP OP_CHECKTEMPLATEVERIFY

2019-11-28 Thread Jeremy via bitcoin-dev
Thanks for the feedback Russell, now and earlier. It deeply informed the
version I'm proposing here.

I weighed this carefully when selecting the design, and thought it would be
an acceptable tradeoff after our discussion, but I recognize this isn't
exactly what you had argued for.

First off, with respect to the 'global state' issue, I figured it was
reasonable with this choice of constexpr rule given that a reasonable tail
recursive parser might look something like:

parse (code : rest) stack alt_stack just_pushed =
match code with
OP_PUSH x => parse rest (x:stack) alt_stack True
OP_DUP => parse rest (head stack : stack) alt_stack False
// ...

So we're only adding one parameter which is a bool, and we only need to
ever set it to an exact value based on the current code path, no
complicated rules. I'm sensitive to the complexity added when formally
modeling script, but I think because it is only ever a literal, you could
re-write it as co-recursive:

parse_non_constexpr (code : rest) stack alt_stack =
match code with
OP_PUSH x => parse_constexpr rest (x:stack) alt_stack
OP_DUP => parse_non_constexpr rest (head stack : stack) alt_stack
// ...

parse_constexpr (code : rest) stack alt_stack =
match code with
OP_CTV => ...
_ => parse_non_constexpr (code : rest) stack alt_stack


If I recall, this should help a bit with the proof automatability as it's
easier in the case by case breakdown to see the unconditional code paths.
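
To make the "only one extra bool" point concrete, a toy model in Python (a
sketch only, not the actual interpreter; check_template and execute are
assumed helpers):

```
def run(script, stack, alt_stack, check_template, execute):
    just_pushed = False  # the single extra bit of interpreter state
    for op in script:
        if isinstance(op, bytes):  # a literal push
            stack.append(op)
            just_pushed = True
        elif op == "OP_CTV":
            # constexpr rule: the argument must come from a literal push
            if not just_pushed:
                raise ValueError("OP_CTV argument was computed, not pushed")
            check_template(stack[-1])
            just_pushed = False
        else:
            execute(op, stack, alt_stack)
            just_pushed = False
```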


In terms of upgrade-ability, one of the other reasons I liked this design
is that if we do enable OP_CTV for non-constexpr arguments, the issue
basically goes away and the OP becomes "pure" without any state tracking.
(I think the switching on argument size is much less a concern because we
already use similar upgrade mechanisms elsewhere, and it doesn't add
parsing context).


It's also possible, and I think *should be done*, for tooling to treat an
unbalanced OP_CTV as a parsing error. This will always produce
consensus-valid scripts! However by keeping the consensus rules more
relaxed we keep our upgrade-ability paths open for OP_CTV, which as I
understand from speaking with other users is quite desirable.


Best (and happy thanksgiving to those celebrating),

Jeremy

--
@JeremyRubin 



On Thu, Nov 28, 2019 at 6:33 AM Russell O'Connor 
wrote:

> Thanks for this work Jeremy.
>
> I know we've discussed this before, but I'll restate my concerns with
> adding a new "global" state variable to the Script interpreter for tracking
> whether the previous opcode was a push-data operation or not.  While it
> isn't so hard to implement this in Bitcoin Core's Script interpreter,
> adding a new global state variable adds that much more complexity to anyone
> trying to formally model Script semantics.  Perhaps one can argue that
> there is already (non-stack) state in Script, e.g. to deal with
> CODESEPARATOR, so why not add more?  But I'd argue that we should avoid
> making bad problems worse.
>
> If we instead make the CHECKTEMPLATEVERIFY operation fail if it isn't
> preceded by (or alternatively followed by) an appropriate sized
> (canonical?) PUSHDATA constant, even in an unexecuted IF branch, then we
> can model the Script semantics by considering the
> PUSHDATA-CHECKTEMPLATEVERIFY pair as a single operation.  This allows
> implementations to consider improper use of CHECKTEMPLATEVERIFY as a
> parsing error (just as today unbalanced IF-ENDIF pairs can be modeled as a
> parsing error, even though that isn't how it is implemented in Bitcoin
> Core).
>
> I admit we would lose your soft-fork upgrade path to reading values off
> the stack; however, in my opinion, this is a reasonable tradeoff.  When we
> are ready to add programmable covenants to Script, we'll do so by adding
> CAT and operations to push transaction data right onto the stack, rather
> than posting a preimage to this template hash.
>
> Pleased to announce refinements to the BIP draft for
>> OP_CHECKTEMPLATEVERIFY (replaces previous OP_SECURETHEBAG BIP). Primarily:
>>
>> 1) Changed the name to something more fitting and acceptable to the
>> community
>> 2) Changed the opcode specification to use the argument off of the stack
>> with a primitive constexpr/literal tracker rather than script lookahead
>> 3) Permits future soft-fork updates to loosen or remove "constexpr"
>> restrictions
>> 4) More detailed comparison to alternatives in the BIP, and why
>> OP_CHECKTEMPLATEVERIFY should be favored even if a future technique may
>> make it semi-redundant.
>>
>> Please see:
>> BIP: https://github.com/JeremyRubin/bips/blob/ctv/bip-ctv.mediawiki
>> Reference Implementation:
>> https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify
>>
>> I believe this addresses all outstanding feedback on the design of this
>> opcode, unless there are any new concerns with these changes.
>>
>> I'm also planning to host a review workshop 

[bitcoin-dev] BIP OP_CHECKTEMPLATEVERIFY

2019-11-25 Thread Jeremy via bitcoin-dev
Bitcoin Developers,

Pleased to announce refinements to the BIP draft for OP_CHECKTEMPLATEVERIFY
(replaces previous OP_SECURETHEBAG BIP). Primarily:

1) Changed the name to something more fitting and acceptable to the
community
2) Changed the opcode specification to use the argument off of the stack
with a primitive constexpr/literal tracker rather than script lookahead
3) Permits future soft-fork updates to loosen or remove "constexpr"
restrictions
4) More detailed comparison to alternatives in the BIP, and why
OP_CHECKTEMPLATEVERIFY should be favored even if a future technique may
make it semi-redundant.

Please see:
BIP: https://github.com/JeremyRubin/bips/blob/ctv/bip-ctv.mediawiki
Reference Implementation:
https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify

I believe this addresses all outstanding feedback on the design of this
opcode, unless there are any new concerns with these changes.

I'm also planning to host a review workshop in Q1 2020, most likely in San
Francisco. Please fill out the form here https://forms.gle/pkevHNj2pXH9MGee9
if you're interested in participating (even if you can't physically attend).

And as a "but wait, there's more":

1) RPC functions are under preliminary development, to aid in testing and
evaluation of OP_CHECKTEMPLATEVERIFY. The new command `sendmanycompacted`
shows one way to use OP_CHECKTEMPLATEVERIFY. See:
https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-rpcs.
`sendmanycompacted` is still under early design. Standard practices for
using OP_CHECKTEMPLATEVERIFY & wallet behaviors may be codified into a
separate BIP. This work generalizes even if an alternative strategy is used
to achieve the scalability techniques of OP_CHECKTEMPLATEVERIFY.
2) Also under development are improvements to the mempool which will, in
conjunction with improvements like package relay, help make it safe to lift
some of the mempool's restrictions on longchains specifically for
OP_CHECKTEMPLATEVERIFY output trees. See:
https://github.com/bitcoin/bitcoin/pull/17268
This work offers an improvement irrespective of OP_CHECKTEMPLATEVERIFY's
fate.


Neither of these are blockers for proceeding with the BIP, as they are
ergonomics and usability improvements needed once/if the BIP is activated.

See prior mailing list discussions here:

*
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016934.html
*
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/016997.html

Thanks to the many developers who have provided feedback on iterations of
this design.

Best,

Jeremy

--
@JeremyRubin 



Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-27 Thread Jeremy via bitcoin-dev
Johan,

The issues with mempool limits for OP_SECURETHEBAG are related, but have
distinct solutions.

There are two main categories of mempool issues at stake. One is relay
cost, the other is mempool walking.

In terms of relay cost, if an ancestor can be replaced, it will invalidate
all its children, meaning that no one paid for broadcasting them. This can
be fixed by appropriately assessing Replace By Fee update fees to
encapsulate all descendants, but there are some tricky edge cases that make
this non-obvious to do.

The other issue is walking the mempool -- many of the algorithms we use in
the mempool can be N log N or N^2 in the number of descendants. (Simple
example: an input chain of length N to a fan-out of N outputs that are all
spent is O(N^2) to look up ancestors per child, unless we're caching.)

The other sort of walking issue is where the indegree or outdegree for a
transaction is high. Then when we are computing descendants or ancestors we
will need to visit it multiple times. To avoid re-expanding a node, we
currently cache it with a set. This uses O(N) extra memory and makes
O(N log N) comparisons (we use std::set, not unordered_set).

I just opened a PR which should help with some of the walking issues by
allowing us to cheaply cache which nodes we've visited on a run. It makes a
lot of previously O(N log N) stuff O(N) and doesn't allocate as much new
memory. See: https://github.com/bitcoin/bitcoin/pull/17268.
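
For illustration, the core idea can be sketched as follows (a Python sketch
only; the actual PR is in C++ and differs in detail): tag each entry with
the epoch of the traversal that last touched it, so starting a new
traversal is O(1) and each visited-check is a constant-time comparison
instead of a set insertion.

```
class MempoolGraph:
    def __init__(self):
        self.epoch = 0
        self.last_touched = {}  # txid -> epoch when last visited
        self.children = {}      # txid -> list of child txids

    def visited(self, txid):
        # Check-and-mark in O(1), replacing per-traversal set insertions.
        if self.last_touched.get(txid) == self.epoch:
            return True
        self.last_touched[txid] = self.epoch
        return False

    def descendants(self, txid):
        self.epoch += 1         # O(1) "clear" of all visited markers
        self.visited(txid)
        stack, out = [txid], []
        while stack:
            for child in self.children.get(stack.pop(), []):
                if not self.visited(child):
                    out.append(child)
                    stack.append(child)
        return out
```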


Now, for OP_SECURETHEBAG we want a particular property that is very
different from lightning HTLCs (as they are today). We want that an unlimited
number of child OP_SECURETHEBAG txns may extend from a confirmed
OP_SECURETHEBAG, and then at the leaf nodes, we want the same rule as
lightning (one dangling unconfirmed to permit channels).

OP_SECURETHEBAG can help with the LN issue by putting all HTLCs into a tree
where they are individualized leaf nodes with a preceding CSV. Then, the
above fix would ensure each HTLC always has time to close properly, as they
would have individualized lockpoints. This is desirable for some additional
reasons and not for others, but it should "work".



--
@JeremyRubin 



On Fri, Oct 25, 2019 at 10:31 AM Matt Corallo 
wrote:

> I don’t see how? Let’s imagine Party A has two spendable outputs, now
> they stuff the package size on one of their spendable outputs until it is
> right at the limit, add one more on their other output (to meet the
> Carve-Out), and now Party B can’t do anything.
>
> On Oct 24, 2019, at 21:05, Johan Torås Halseth  wrote:
>
> 
> It essentially changes the rule to always allow CPFP-ing the commitment as
> long as there is an output available without any descendants. It changes
> the commitment from "you always need at least, and exactly, one non-CSV
> output per party. " to "you always need at least one non-CSV output per
> party. "
>
> I realize these limits are there for a reason though, but I'm wondering if
> we could relax them, especially now that jeremyrubin has expressed problems
> with the current mempool limits.
>
> On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo 
> wrote:
>
>> I may be missing something, but I'm not sure how this changes anything?
>>
>> If you have a commitment transaction, you always need at least, and
>> exactly, one non-CSV output per party. The fact that there is a size
>> limitation on the transaction that spends for carve-out purposes only
>> effects how many other inputs/outputs you can add, but somehow I doubt
>> its ever going to be a large enough number to matter.
>>
>> Matt
>>
>> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
>> > Reviving this old thread now that the recently released RC for bitcoind
>> > 0.19 includes the above mentioned carve-out rule.
>> >
>> > In an attempt to pave the way for more robust CPFP of on-chain contracts
>> > (Lightning commitment transactions), the carve-out rule was added in
>> > https://github.com/bitcoin/bitcoin/pull/15681. However, having worked
>> on
>> > an implementation of a new commitment format for utilizing the Bring
>> > Your Own Fees strategy using CPFP, I’m wondering if the special case
>> > rule should have been relaxed a bit, to avoid the need for adding a 1
>> > CSV to all outputs (in case of Lightning this means HTLC scripts would
>> > need to be changed to add the CSV delay).
>> >
>> > Instead, what about letting the rule be
>> >
>> > The last transaction which is added to a package of dependent
>> > transactions in the mempool must:
>> >   * Have no more than one unconfirmed parent.
>> >
>> > This would of course allow adding a large transaction to each output of
>> > the unconfirmed parent, which in effect would allow an attacker to
>> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
>> > this a problem with the current mempool acceptance code in bitcoind? I
>> > would imagine evicting transactions based on feerate when the max
>> > mempool size is met handles 

Re: [bitcoin-dev] [Lightning-dev] OP_CAT was Re: Continuing the discussion about noinput / anyprevout

2019-10-04 Thread Jeremy via bitcoin-dev
Interesting point.

The script is under your control, so you should be able to ensure that you
are always using a correctly constructed midstate, e.g., something like:

scriptPubKey: <-1> OP_SHA256STREAM DEPTH OP_SHA256STREAM <-2>
OP_SHA256STREAM <hash> OP_EQUALVERIFY

would hash all the elements on the stack and compare to a known hash.
How is that sort of thing weak to midstate attacks?


--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>


On Fri, Oct 4, 2019 at 4:16 AM Peter Todd  wrote:

> On Thu, Oct 03, 2019 at 10:02:14PM -0700, Jeremy via bitcoin-dev wrote:
> > Awhile back, Ethan and I discussed having, rather than OP_CAT, an
> > OP_SHA256STREAM that uses the streaming properties of a SHA256 hash
> > function to allow concatenation of an unlimited amount of data, provided
> > the only use is to hash it.
> >
> > You can then use it perhaps as follows:
> >
> > // start a new hash with item
> > OP_SHA256STREAM  (-1) -> [state]
> > // Add item to the hash in state
> > OP_SHA256STREAM n [item] [state] -> [state]
> > // Finalize
> > OP_SHA256STREAM (-2) [state] -> [Hash]
> >
> > <-1> OP_SHA256STREAM<3> OP_SHA256STREAM
> <-2>
> > OP_SHA256STREAM
>
> One issue with this is the simplest implementation where the state is
> just raw bytes would expose raw SHA256 midstates, allowing people to use
> them directly; preventing that would require adding types to the stack.
> Specifically I could write a script that rather than initializing the
> state correctly from the official IV, instead takes an untrusted state as
> input.
>
> SHA256 isn't designed to be used in situations where adversaries control
> the initialization vector. I personally don't know one way or the other
> if anyone has analyzed this in detail, but I'd be surprised if that's
> secure. I considered adding midstate support to OpenTimestamps but
> decided against it for exactly that reason.
>
> I don't have the link handy but there's even an example of an experienced
> cryptographer on this very list (bitcoin-dev) proposing a design that falls
> victim to this attack. It's a subtle issue and we probably don't want to
> encourage it.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org


Re: [bitcoin-dev] [Lightning-dev] OP_CAT was Re: Continuing the discussion about noinput / anyprevout

2019-10-04 Thread Jeremy via bitcoin-dev
Good point -- in our discussion, we called it OP_FFS -- Fold Functional
Stream, and it could be initialized with a different integer to select for
different functions. Therefore the stream processing opcodes would be
generic, but extensible.
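
As a toy model of that genericity (the tag values and function table here
are assumptions, not a spec), the initialization integer simply selects
which incremental hasher backs the stream:

```
import hashlib

# Hypothetical tag -> function table for an OP_FFS-style opcode.
HASHERS = {0: hashlib.sha256, 1: hashlib.sha512, 2: hashlib.blake2b}

def op_ffs_init(tag):
    # Unknown tags could be reserved for functions added by soft fork.
    if tag not in HASHERS:
        raise ValueError("unknown stream function")
    return HASHERS[tag]()

state = op_ffs_init(0)    # a SHA256-backed stream
state.update(b"item")     # fold data into the stream
digest = state.digest()   # finalize
```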
--
@JeremyRubin 



On Fri, Oct 4, 2019 at 12:00 AM ZmnSCPxj via Lightning-dev <
lightning-...@lists.linuxfoundation.org> wrote:

> Good morning Jeremy,
>
> > Awhile back, Ethan and I discussed having, rather than OP_CAT, an
> OP_SHA256STREAM that uses the streaming properties of a SHA256 hash
> function to allow concatenation of an unlimited amount of data, provided
> the only use is to hash it.
> >
> > You can then use it perhaps as follows:
> >
> > // start a new hash with item
> > OP_SHA256STREAM  (-1) -> [state]
> > // Add item to the hash in state
> > OP_SHA256STREAM n [item] [state] -> [state]
> > // Finalize
> > OP_SHA256STREAM (-2) [state] -> [Hash]
> >
> > <-1> OP_SHA256STREAM<3> OP_SHA256STREAM
> <-2> OP_SHA256STREAM
> >
> > Or it coul
> >
>
> This seems a good idea.
>
> Though it brings up the age-old tension between:
>
> * Generically-useable components, but due to generalization are less
> efficient.
> * Specific-use components, which are efficient, but which may end up not
> being useable in the future.
>
> In particular, `OP_SHA256STREAM` would no longer be useable if SHA256
> eventually is broken, while the `OP_CAT` will still be useable in the
> indefinite future.
> In the future a new hash function can simply be defined and the same
> technique with `OP_CAT` would still be useable.
>
>
> Regards,
> ZmnSCPxj
>
> > --
> > @JeremyRubin
> >
> > On Thu, Oct 3, 2019 at 8:04 PM Ethan Heilman  wrote:
> >
> > > I hope you are having an great afternoon ZmnSCPxj,
> > >
> > > You make an excellent point!
> > >
> > > I had thought about doing the following to tag nodes
> > >
> > > || means OP_CAT
> > >
> > > `node = SHA256(type||SHA256(data))`
> > > so a subnode would be
> > > `subnode1 = SHA256(1||SHA256(subnode2||subnode3))`
> > > and a leaf node would be
> > > `leafnode = SHA256(0||SHA256(leafdata))`
> > >
> > > Yet, I like your idea better. Increasing the size of the two inputs to
> > > OP_CAT to be 260 Bytes each where 520 Bytes is the maximum allowable
> > > size of object on the stack seems sensible and also doesn't special
> > > case the logic of OP_CAT.
> > >
> > > It would also increase performance. SHA256(tag||subnode2||subnode3)
> > > requires 2 compression function calls whereas
> > > SHA256(1||SHA256(subnode2||subnode3)) requires 2+1=3 compression
> > > function calls (due to padding).
> > >
> > > >Or we could implement tagged SHA256 as a new opcode...
> > >
> > > I agree that tagged SHA256 as an op code that would certainty be
> > > useful, but OP_CAT provides far more utility and is a simpler change.
> > >
> > > Thanks,
> > > Ethan
> > >
> > > On Thu, Oct 3, 2019 at 7:42 PM ZmnSCPxj 
> wrote:
> > > >
> > > > Good morning Ethan,
> > > >
> > > >
> > > > > To avoid derailing the NO_INPUT conversation, I have changed the
> > > > > subject to OP_CAT.
> > > > >
> > > > > Responding to:
> > > > > """
> > > > >
> > > > > -   `SIGHASH` flags attached to signatures are a misdesign, sadly
> > > > > retained from the original BitCoin 0.1.0 Alpha for Windows
> design, on
> > > > > par with:
> > > > > [..]
> > > > >
> > > > > -   `OP_CAT` and `OP_MULT` and `OP_ADD` and friends
> > > > > [..]
> > > > > """
> > > > >
> > > > > OP_CAT is an extremely valuable op code. I understand why it
> was
> > > > > removed as the situation at the time with scripts was dire.
> However
> > > > > most of the protocols I've wanted to build on Bitcoin run into
> the
> > > > > limitation that stack values can not be concatenated. For
> instance
> > > > > TumbleBit would have far smaller transaction sizes if OP_CAT
> was
> > > > > supported in Bitcoin. If it happens to me as a researcher it is
> > > > > probably holding other people back as well. If I could wave a
> magic
> > > > > wand and turn on one of the disabled op codes it would be
> OP_CAT. Of
> > > > > course with the change that size of each concatenated value
> must be 64
> > > > > Bytes or less.
> > > >
> > > > Why 64 bytes in particular?
> > > >
> > > > It seems obvious to me that this 64 bytes is most suited for
> building Merkle trees, being the size of two SHA256 hashes.
> > > >
> > > > However we have had issues with the use of Merkle trees in Bitcoin
> blocks.
> > > > Specifically, it is difficult to determine if a hash on a Merkle
> node is the hash of a Merkle subnode, or a leaf transaction.
> > > > My understanding is that this is the reason for now requiring
> transactions to be at least 80 bytes.
> > > >
> > > > The obvious fix would be to prepend the type of the hashed object,
> i.e. add at least one byte to determine this type.
> > > > Taproot for example uses tagged hash 

Re: [bitcoin-dev] [Lightning-dev] OP_CAT was Re: Continuing the discussion about noinput / anyprevout

2019-10-03 Thread Jeremy via bitcoin-dev
Awhile back, Ethan and I discussed having, rather than OP_CAT, an
OP_SHA256STREAM that uses the streaming properties of a SHA256 hash
function to allow concatenation of an unlimited amount of data, provided
the only use is to hash it.

You can then use it perhaps as follows:

// start a new hash with item
OP_SHA256STREAM <item> (-1) -> [state]
// Add item to the hash in state
OP_SHA256STREAM n [item] [state] -> [state]
// Finalize
OP_SHA256STREAM (-2) [state] -> [Hash]

<-1> OP_SHA256STREAM <3> OP_SHA256STREAM <-2>
OP_SHA256STREAM
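
For intuition, these stack semantics map directly onto an incremental
hasher. A toy model in Python of the semantics above (not a proposed
implementation; -1 and -2 are the start/finalize sentinels):

```
import hashlib

def op_sha256stream(stack):
    # Toy model: the stack holds bytes, ints, and in-progress hash states.
    arg = stack.pop()
    if arg == -1:        # start a new hash with the next item
        h = hashlib.sha256()
        h.update(stack.pop())
        stack.append(h)
    elif arg == -2:      # finalize: replace the state with its digest
        stack.append(stack.pop().digest())
    else:                # fold `arg` further items into the state
        h = stack.pop()
        for _ in range(arg):
            h.update(stack.pop())
        stack.append(h)
    return stack

stack = op_sha256stream([b"first", -1])            # -> [state]
stack = op_sha256stream([b"second", stack[0], 1])  # -> [state]
stack = op_sha256stream([stack[0], -2])
# stack[0] == hashlib.sha256(b"firstsecond").digest()
```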


Or it coul



--
@JeremyRubin 



On Thu, Oct 3, 2019 at 8:04 PM Ethan Heilman  wrote:

> I hope you are having an great afternoon ZmnSCPxj,
>
> You make an excellent point!
>
> I had thought about doing the following to tag nodes
>
> || means OP_CAT
>
> `node = SHA256(type||SHA256(data))`
> so a subnode would be
> `subnode1 = SHA256(1||SHA256(subnode2||subnode3))`
> and a leaf node would be
> `leafnode = SHA256(0||SHA256(leafdata))`
>
> Yet, I like your idea better. Increasing the size of the two inputs to
> OP_CAT to be 260 Bytes each where 520 Bytes is the maximum allowable
> size of object on the stack seems sensible and also doesn't special
> case the logic of OP_CAT.
>
> It would also increase performance. SHA256(tag||subnode2||subnode3)
> requires 2 compression function calls whereas
> SHA256(1||SHA256(subnode2||subnode3)) requires 2+1=3 compression
> function calls (due to padding).
>
> >Or we could implement tagged SHA256 as a new opcode...
>
> I agree that tagged SHA256 as an op code that would certainty be
> useful, but OP_CAT provides far more utility and is a simpler change.
>
> Thanks,
> Ethan
>
> On Thu, Oct 3, 2019 at 7:42 PM ZmnSCPxj  wrote:
> >
> > Good morning Ethan,
> >
> >
> > > To avoid derailing the NO_INPUT conversation, I have changed the
> > > subject to OP_CAT.
> > >
> > > Responding to:
> > > """
> > >
> > > -   `SIGHASH` flags attached to signatures are a misdesign, sadly
> > > retained from the original BitCoin 0.1.0 Alpha for Windows design,
> on
> > > par with:
> > > [..]
> > >
> > > -   `OP_CAT` and `OP_MULT` and `OP_ADD` and friends
> > > [..]
> > > """
> > >
> > > OP_CAT is an extremely valuable op code. I understand why it was
> > > removed as the situation at the time with scripts was dire. However
> > > most of the protocols I've wanted to build on Bitcoin run into the
> > > limitation that stack values can not be concatenated. For instance
> > > TumbleBit would have far smaller transaction sizes if OP_CAT was
> > > supported in Bitcoin. If it happens to me as a researcher it is
> > > probably holding other people back as well. If I could wave a magic
> > > wand and turn on one of the disabled op codes it would be OP_CAT.
> Of
> > > course with the change that size of each concatenated value must
> be 64
> > > Bytes or less.
> >
> > Why 64 bytes in particular?
> >
> > It seems obvious to me that this 64 bytes is most suited for building
> Merkle trees, being the size of two SHA256 hashes.
> >
> > However we have had issues with the use of Merkle trees in Bitcoin
> blocks.
> > Specifically, it is difficult to determine if a hash on a Merkle node is
> the hash of a Merkle subnode, or a leaf transaction.
> > My understanding is that this is the reason for now requiring
> transactions to be at least 80 bytes.
> >
> > The obvious fix would be to prepend the type of the hashed object, i.e.
> add at least one byte to determine this type.
> > Taproot for example uses tagged hash functions, with a different tag for
> leaves, and tagged hashes are just
> prepend-this-32-byte-constant-twice-before-you-SHA256.
> >
> > This seems to indicate that to check merkle tree proofs, an `OP_CAT`
> with only 64 bytes max output size would not be sufficient.
> >
> > Or we could implement tagged SHA256 as a new opcode...
> >
> > Regards,
> > ZmnSCPxj
> >
> >
> > >
> > > On Tue, Oct 1, 2019 at 10:04 PM ZmnSCPxj via bitcoin-dev
> > > bitcoin-dev@lists.linuxfoundation.org wrote:
> > >
> > >
> > > > Good morning lists,
> > > > Let me propose the below radical idea:
> > > >
> > > > -   `SIGHASH` flags attached to signatures are a misdesign, sadly
> retained from the original BitCoin 0.1.0 Alpha for Windows design, on par
> with:
> > > > -   1 RETURN
> > > > -   higher-`nSequence` replacement
> > > > -   DER-encoded pubkeys
> > > > -   unrestricted `scriptPubKey`
> > > > -   Payee-security-paid-by-payer (i.e. lack of P2SH)
> > > > -   `OP_CAT` and `OP_MULT` and `OP_ADD` and friends
> > > > -   transaction malleability
> > > > -   probably many more
> > > >
> > > > So let me propose the more radical excision, starting with SegWit v1:
> > > >
> > > > -   Remove `SIGHASH` from signatures.
> > > > -   Put `SIGHASH` on public keys.
> > > >
> > > > Public keys are now encoded as either 33-bytes (implicit
> 

Re: [bitcoin-dev] OP_SECURETHEBAG (supersedes OP_CHECKOUTPUTSVERIFY)

2019-10-03 Thread Jeremy via bitcoin-dev
I've updated the BIP to no longer be based on Taproot, and instead based on
a OP_NOP upgrade. The example implementation and tests have also been
updated.

BIP:
https://github.com/JeremyRubin/bips/blob/op-secure-the-bag/bip-secure-the-bag.mediawiki
Implementation:
https://github.com/bitcoin/bitcoin/compare/master...JeremyRubin:securethebag_master

The BIP defines OP_NOP4 with the same semantics as previously presented.
This enables OP_SECURETHEBAG for segwit and bare script, but not p2sh
(because of hash cycle, it's impossible to put the redeemscript on the
scriptSig without changing the bag hash). The implementation also makes a
bare OP_SECURETHEBAG script standard as that is a common use case.

To address Russel's feedback, once Tapscript is fully prepared (with more
thorough script parsing improvements), multibyte opcodes can be more
cleanly specified.

Best,

Jeremy

n.b. the prior BIP version remains at
https://github.com/JeremyRubin/bips/blob/op-secure-the-bag-taproot/bip-secure-the-bag.mediawiki
--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>


On Mon, Jul 8, 2019 at 3:25 AM Dmitry Petukhov  wrote:

> If you make ANYPREVOUTANYSCRIPT signature not the only signature that
> controls this UTXO, but use it solely for restricting the spending
> conditions such as the set of outputs, and require another signature
> that would commit to the whole transaction, you can eliminate
> malleability, for the price of additional signature, of course.
>
> <control-P> CHECKSIG <P> CHECKSIG
>
> (CHECKMULTISIG/CHECKSIGADD might be used instead)
>
> where control-P can even be a pubkey of a key that is publicly known,
> and the whole purpose of control-sig would be to restrict the outputs
> (control-sig would be created with flags meaning ANYPREVOUTANYSCRIPT).
> Because control-sig does not depend on the script and on the current
> input, there should be no circular dependency, and it can be part of
> the redeem script.
>
> P would be the pubkey of the actual key that is needed to spend this
> UTXO, and the signature of P can commit to all the inputs and outputs,
> preventing malleability.
>
> I would like to add that it may make sense to just have 2 additional
> flags for sighash: NOPREVOUT and NOSCRIPT.
>
> NOPREVOUT would mean that previous output is not committed to, and when
> combined with ANYONECANPAY, this will mean ANYPREVOUT/NOINPUT:
> ANYONECANPAY means exclude all inputs except the current, and NOPREVOUT
> means exclude the current input. Thus NOPREVOUT|ANYONECANPAY = NOINPUT
>
> With NOPREVOUT|ANYONECANPAY|NOSCRIPT you would have ANYPREVOUTANYSCRIPT
>
> with NOPREVOUT|NOSCRIPT you can commit to "all the inputs beside the
> current one", which would allow to create a spending restriction like
> "this UTXO, and also one (or more) other UTXO", which might be useful
> to retroactively revoke or transfer the rights to spend certain UTXO
> without actually moving it:
>
> think of a 'vault' UTXO that is controlled by Alice, but requires an
> additional 'control' UTXO to spend. Alice has keys for both the 'vault'
> UTXO and the 'control' UTXO, but Bob has only the key for the 'control'
> UTXO.
>
> If Bob learns that Alice's vault UTXO key is at risk of compromise,
> he spends the control UTXO, and then Alice's ability to spend the vault
> UTXO vanishes.
>
> You can use this mechanism to transfer this right to spend if you
> prepare a number of taproot branches with different pairs of (vault,
> control) keys and a chain of transactions that each spend the previous
> control UTXO such that the newly created UTXO becomes controlled by the
> control key of the next pair, together with vault key in that pair.
>
On Sat, 22 Jun 2019 23:43:22 -0700
> Jeremy via bitcoin-dev  wrote:
>
> > This is insufficient: sequences must be committed to because they
> > affect TXID. As with scriptsigs (witness data fine to ignore). NUM_IN
> > too.
> >
> > Any malleability makes this much less useful.
> > --
> > @JeremyRubin <https://twitter.com/JeremyRubin>
> > <https://twitter.com/JeremyRubin>
> >
> >
> > On Fri, Jun 21, 2019 at 10:31 AM Anthony Towns via bitcoin-dev <
> > bitcoin-dev@lists.linuxfoundation.org> wrote:
> >
> > > On Tue, Jun 18, 2019 at 04:57:34PM -0400, Russell O'Connor wrote:
> > > > So with regards to OP_SECURETHEBAG, I am also "not really seeing
> > > > any
> > > reason to
> > > > complicate the spec to ensure the digest is precommitted as part
> > > > of the opcode."
> > >
> > > Also, I think you can simulate OP_SECURETHEBAG with an ANYPREVOUT
> > > (NOINPUT) sighash (Johnson Lau's mentioned this before,

Re: [bitcoin-dev] OP_SECURETHEBAG (supersedes OP_CHECKOUTPUTSVERIFY)

2019-06-25 Thread Jeremy via bitcoin-dev
I agree in principle, but I think that's just a bit of 'how things are'
versus how they should be.

I disagree that we get composability semantics because of OP_IF. E.g., the
scripts "OP_IF ..." and "OP_ENDIF" are two scripts that separately are
invalid as parsed, but together are valid. OP_IF already imposes some
lookahead functionality... but as I understand it, it may be feasible to
get rid of OP_IF for tapscripts anyways. Also in this bucket are P2SH and
segwit, which I think break this because the concat of two p2sh scripts or
segwit scripts is not the same as them severally.

I also think that the OP_SECURETHEBAG use of pushdata is a backwards
compatible hack: we can always later redefine the parser to parse
OP_SECURETHEBAG as the 34 byte opcode, recapturing the purity of the
semantics. We can also fix it to not use an extra byte in a future tapleaf
version.



In any case, I don't disagree with figuring out what patching the parser to
handle multibyte opcodes would look like. If that sort of upgrade-path were
readily available when I wrote this, it's how I would have done it. There
are two approaches I looked at mostly:

1) Adding flags to GetOp to change how it parses
  a) Most of the same code paths used for new and old script
  b) Higher risk of breaking something in old script style/downstream
  c) Cleans up only one issue (multibyte opcodes), leaving other warts in
place
  d) less bikesheddable design (mostly same as old script)
  e) code not increased in size
2) Adding a completely new interpreter for Tapscript
  a) Fork the existing interpreter code
  b) For all places where scripts are run, switch based on if it is
tapscript or not
  c) Can clean up various semantics, can even do fancier things like
huffman encode opcodes to less than a byte
  d) Can clearly separate parsing the script from executing it
  e) Can improve versioning techniques
  f) Low risk of breaking something in old script style/downstream
  g) Increases amount of code substantially
  h) Bikesheddable design (everything is on the table).
  i) probably a better general mechanism for future changes to script
parsing, less consensus risk
  j) More compatible with templated script as well.

If not clear, I think that 2 is probably a better approach, but I'm worried
that 2.h means this would take a much longer time to implement.

2 can be segmented into two components:

1) the architecture of script parser versioning
2) the actual new script version

I think that component 1 can be relatively non controversial, thankfully,
using tapleaf versions (the architecture question is more around code
structure). A proof of concept of this would be to have a fork that uses
two independent, but identical, script parsers.

Part two of this plan would be to modify one of the versions substantially.
I'm not sure what exists on the laundry list, but I think it would be
possible to pick a few worthwhile cleanups. E.g.:

1) Multibyte opcodes
2) Templated scripts
3) Huffman Encoding opcodes
4) OP_IF handling (maybe just get rid of it in favor of conditional Verify
semantics)

And make it clear that because we can add future script versions fairly
easily, this is a sufficient step.


Does that seem in line with your understanding of how this might be done?


Re: [bitcoin-dev] OP_SECURETHEBAG (supersedes OP_CHECKOUTPUTSVERIFY)

2019-06-25 Thread Jeremy via bitcoin-dev
Do you think the following hypothesis is more or less true:

H: There is no set of pure extensions* to script E such that enabling E and
OP_SECURETHEBAG as proposed enables recursive covenants, but E alone does
not enable recursive covenants?

* Of course there are things that are specifically designed to switch on
whether OP_SECURETHEBAG is present, so "pure" means normal things like
OP_CAT that are a function of the arguments on the stack or hashed txn data.

This is the main draw of the design I proposed, it should be highly
improbable or impossible to accidentally introduce more behavior than
intended with a new opcode.

I think that given that H is not true for the stack reading version of the
opcode, we should avoid doing it unless strongly motivated, so as to permit
more flexibility for which opcodes we can add in the future without
introducing recursion unless it is explicitly intended.



On Mon, Jun 24, 2019, 7:35 AM Russell O'Connor 
wrote:

> OP_SECURETHEBAG doesn't include the script being executed (i.e the
> scriptPubKey specified in the output that is redeemed by this input) in its
> hash like ordinary signatures do
> <https://github.com/bitcoin/bitcoin/blob/master/src/script/interpreter.cpp#L1271>.
> Of course, this ScriptPubKey is indirectly committed to through the input's
> prevoutpoint.  However Script isn't able to reconstruct this script being
> executed from the prevoutpoint in tapscript without an implementation of
> public key tweeking in Bitcoin Script.
>
> On Sun, Jun 23, 2019 at 2:41 AM Jeremy Rubin 
> wrote:
>
>> Can you clarify this comment?
>>
>> We do in fact commit to the script and scriptsig itself (not the witness
>> stack) in OP_SECURETHEBAG (unless I'm missing what you meant)?
>>
>> On Thu, Jun 20, 2019, 10:59 AM Russell O'Connor via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Just to be clear, while OP_CHECKTXDIGESTVERIFY would enable this style
>>> of covenants if it pulled data from the stack, the OP_SECURETHEBAG
>>> probably cannot create covenants even if it were to pull the data from the
>>> stack unless some OP_TWEEKPUBKEY operation is added to Script because the
>>> "commitment of the script itself" isn't part of the OP_SECURETHEBAG.
>>>
>>> So with regards to OP_SECURETHEBAG, I am also "not really seeing any
>>> reason to complicate the spec to ensure the digest is precommitted as part
>>> of the opcode."
>>>
>>> On Thu, Jun 6, 2019 at 3:33 AM ZmnSCPxj via bitcoin-dev <
>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
>>>> Good morning aj,
>>>>
>>>>
>>>> Sent with ProtonMail Secure Email.
>>>>
>>>> ‐‐‐ Original Message ‐‐‐
>>>> On Wednesday, June 5, 2019 5:30 PM, Anthony Towns via bitcoin-dev <
>>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>>
>>>> > On Fri, May 31, 2019 at 10:35:45PM -0700, Jeremy via bitcoin-dev
>>>> wrote:
>>>> >
>>>> > > OP_CHECKOUTPUTSHASHVERIFY is retracted in favor of OP_SECURETHEBAG*.
>>>> >
>>>> > I think you could generalise that slightly and make it fit in
>>>> > with the existing opcode naming by calling it something like
>>>> > "OP_CHECKTXDIGESTVERIFY" and pull a 33-byte value from the stack,
>>>> > consisting of a sha256 hash and a sighash-byte, and adding a new
>>>> sighash
>>>> > value corresponding to the set of info you want to include in the
>>>> hash,
>>>> > which I think sounds a bit like "SIGHASH_EXACTLY_ONE_INPUT |
>>>> SIGHASH_ALL"
>>>> >
>>>> > FWIW, I'm not really seeing any reason to complicate the spec to
>>>> ensure
>>>> > the digest is precommitted as part of the opcode.
>>>> >
>>>>
>>>> I believe in combination with `OP_LEFT` and `OP_CAT` this allows
>>>> Turing-complete smart contracts, in much the same way as
>>>> `OP_CHECKSIGFROMSTACK`?
>>>>
>>>> Pass in the spent transaction (serialised for txid) and the spending
>>>> transaction (serialised for sighash) as part of the witness of the spending
>>>> transaction.
>>>>
>>>> Script verifies that the spending transaction witness value is indeed
>>>> the spending transaction by `OP_SHA256  OP_SWAP OP_CAT
>>>> OP_CHECKTXDIGESTVERIFY`.
>>>> Script verifies the spent transaction witness value is indeed the spent
>>>> transaction

Re: [bitcoin-dev] OP_SECURETHEBAG (supersedes OP_CHECKOUTPUTSVERIFY)

2019-06-23 Thread Jeremy via bitcoin-dev
This is insufficient: sequences must be committed to because they affect
TXID. As with scriptsigs (witness data fine to ignore). NUM_IN too.

Any malleability makes this much less useful.
--
@JeremyRubin 



On Fri, Jun 21, 2019 at 10:31 AM Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Tue, Jun 18, 2019 at 04:57:34PM -0400, Russell O'Connor wrote:
> > So with regards to OP_SECURETHEBAG, I am also "not really seeing any
> reason to
> > complicate the spec to ensure the digest is precommitted as part of the
> > opcode."
>
> Also, I think you can simulate OP_SECURETHEBAG with an ANYPREVOUT
> (NOINPUT) sighash (Johnson Lau's mentioned this before, but not sure if
> it's been spelled out anywhere); ie instead of constructing
>
>   X = Hash_BagHash( version, locktime, [outputs], [sequences], num_in )
>
> and having the script be " OP_SECURETHEBAG" you calculate an
> ANYPREVOUT sighash for SIGHASH_ANYPREVOUTANYSCRIPT | SIGHASH_ALL:
>
>   Y = Hash_TapSighash( 0, 0xc1, version, locktime, [outputs], 0,
>amount, sequence)
>
> and calculate a signature sig = Schnorr(P,m) for some pubkey P, and
> make your script be "  CHECKSIG".
>
> That loses the ability to commit to the number of inputs or restrict
> the nsequence of other inputs, and requires a bigger script (sig and P
> are ~96 bytes instead of X's 32 bytes), but is otherwise pretty much the
> same as far as I can tell. Both scripts are automatically satisfied when
> revealed (with the correct set of outputs), and don't need any additional
> witness data.
>
> If you wanted to construct "X" via script instead of hardcoding a value
> because it got you generalised covenants or whatever; I think you could
> get the same effect with CAT,LEFT, and RIGHT: you'd construct Y in much
> the same way you construct X, but you'd then need to turn that into a
> signature. You could do so by using pubkey P=G and nonce R=G, which
> means you need to calculate s=1+hash(G,G,Y)*1 -- calculating the hash
> part is easy, multiplying it by 1 is easy, and to add 1 you can probably
> do something along the lines of:
>
> OP_DUP 4 OP_RIGHT 1 OP_ADD OP_SWAP 28 OP_LEFT OP_SWAP OP_CAT
>
> (ie, take the last 4 bytes, increment it using 4-byte arithmetic,
> then cat the first 28 bytes and the result. There's overflow issues,
> but I think they can be worked around either by allowing you to choose
> different locktimes, or by more complicated script)
>
> Cheers,
> aj
>


Re: [bitcoin-dev] OP_SECURETHEBAG (supersedes OP_CHECKOUTPUTSVERIFY)

2019-06-03 Thread Jeremy via bitcoin-dev
Hi Russell,

Thanks for the response. I double-checked my work in drafting my reply
and realized I hadn't addressed all the malleability concerns; I believe I
have now (fingers crossed) addressed all points of malleability.

*The malleability concerns are as follows:*

A TXID is computed as:

def txid(self):
 r = b""
 r += struct.pack("
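
(The message is truncated here in the archive. For reference, a sketch of
the legacy, non-witness TXID serialization it begins to spell out --
double-SHA256 over version, inputs with their scriptSigs and sequences,
outputs, and locktime -- reconstructed from the standard format rather than
quoted from the original:)

```
import hashlib
import struct

def compact_size(n):
    # Bitcoin's CompactSize length encoding.
    if n < 0xfd:
        return struct.pack("<B", n)
    if n <= 0xffff:
        return b"\xfd" + struct.pack("<H", n)
    if n <= 0xffffffff:
        return b"\xfe" + struct.pack("<I", n)
    return b"\xff" + struct.pack("<Q", n)

def txid(version, vin, vout, locktime):
    # vin: list of (prevout_hash, prevout_index, script_sig, sequence)
    # vout: list of (value_in_sats, script_pubkey)
    r = struct.pack("<i", version)
    r += compact_size(len(vin))
    for (prev_hash, prev_n, script_sig, sequence) in vin:
        r += prev_hash + struct.pack("<I", prev_n)
        r += compact_size(len(script_sig)) + script_sig
        # nSequence is serialized here, which is why a template that does
        # not commit to sequences leaves the TXID malleable.
        r += struct.pack("<I", sequence)
    r += compact_size(len(vout))
    for (value, script_pubkey) in vout:
        r += struct.pack("<q", value)
        r += compact_size(len(script_pubkey)) + script_pubkey
    r += struct.pack("<I", locktime)
    return hashlib.sha256(hashlib.sha256(r).digest()).digest()[::-1]
```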


[bitcoin-dev] OP_SECURETHEBAG (supersedes OP_CHECKOUTPUTSVERIFY)

2019-06-01 Thread Jeremy via bitcoin-dev
Hi All,

OP_CHECKOUTPUTSHASHVERIFY is retracted in favor of OP_SECURETHEBAG*.
OP_SECURETHEBAG does more or less the same thing, but fixes malleability
issues and lifts the single output restriction to a known number of inputs
restriction.

OP_CHECKOUTPUTSVERIFY had some issues with malleability of version and
locktime. OP_SECURETHEBAG commits to both of these values.

OP_SECURETHEBAG also lifts the restriction that OP_CHECKOUTPUTSVERIFY had
to be spent as only a single input, and instead just commits to the number
of inputs. This allows for more flexibility, but keeps it easy to get the
same single output restriction.

BIP:
https://github.com/JeremyRubin/bips/blob/op-secure-the-bag/bip-secure-the-bag.mediawiki
Implementation: https://github.com/JeremyRubin/bitcoin/tree/secure_the_bag

A particularly useful topic of discussion is how best to eliminate the
PUSHDATA and treat OP_SECURETHEBAG like a pushdata directly. I thought
about how the interpreter works and is implemented and couldn't come up
with something noninvasive.

Thank you for your review and discussion,

Jeremy

* Plus the name is better


Re: [bitcoin-dev] An alternative: OP_CAT & OP_CHECKSIGFROMSTACK

2019-05-25 Thread Jeremy via bitcoin-dev
What do you think about having it be OP_CHECK_TXID_TEMPLATE_DATA where the
hash checked is the TXID of the transaction with the inputs set to ...
(maybe appended to the fee paid)?

This allows for a variable number of inputs to be allowed (e.g., one, two,
etc). This also fixes potential bugs around TXID malleability for lightning
like setups (Greg and I discussed in wizards about version malleability).

Allowing multiple inputs is great for structuring more complex contracts
with multiple nodes paying into the same covenanted transaction.

Also I personally prefer a RISC+CISC approach -- we should enable the
common paths easily as they are known (didn't you come up with jets?) and
improve security for API users, but also piecemeal enable features in
script to allow for experimentation or custom contracts.
--
@JeremyRubin 



On Fri, May 24, 2019 at 4:15 PM Russell O'Connor 
wrote:

> In order of escalating scope of amendments to OP_COSHV, I suggest
>
> 1) Peeking at surrounding data should definitely be
> replaced by a pushdata-like op-code that uses the subsequent 32-bytes
> directly.  The OP_SUCCESSx upgrade path specifically allows for this, and
> avoids complicating the semantics Bitcoin Script.
> 2) Furthermore, the number-of-input-verification and the
> outputhash-verification operations ought to be split into different opcodes
> as they are logically unrelated.
> 3) Better still, we should instead implement the transaction reflection
> operations of OP_PUSHOUTPUTHASH and OP_NUMINPUTS that puts the outputhash
> and number of inputs respectively onto the stack.  Recursive covenants
> appear to be effectively impossible without either an OP_TWEEKPUBKEY or an
> OP_PUSHSCRIPTPUBKEY so the effort your proposal goes through to guard
> against placing an arbitrary outputhash onto the stack appears to be wasted
> effort to me.
> 4) If we anticipate adding OP_CHECKSIGFROMSTACKVERIFY, then we should most
> definitely prefer (3) instead of OP_COSHV, if we still feel the need to do
> anything at all.  It is probably best to have both
> OP_CHECKSIGFROMSTACKVERIFY and transaction reflection operations of
> OP_PUSHOUTPUTHASH and OP_NUMINPUTS but I think I would be fine with just
> OP_CHECKSIGFROMSTACKVERIFY as well.
>
> On the other hand, if we are serious about preferring less per-block
> bandwidth over reusable primitive opcodes for programming, then we should
> instead abandon the RISC-style Bitcoin Script and instead add an
> alternative CISC-style taproot leaf type that directly provides (a
> conjunction of) the various popular common policies: channel opening,
> channel factories, coinjoins, hashlocks, timelocks, congestion control
> etc.  Segwit v0 already implements this CISC-style for the single most
> popular policy: single signature verification.
>
> On Fri, May 24, 2019 at 4:51 PM Jeremy  wrote:
>
>> Hi Russell,
>>
>> Thanks for this detailed comparison. The COSHV BIP does include a brief
>> comparison to OP_CHECKSIGFROMSTACKVERIFY and ANYPREVOUT, but this is more
>> detailed.
>>
>>
>> I think that the power from CHECKSIGFROMSTACKVERIFY is awesome. It's
>> clearly one of the more flexible options available and would enable a
>> multitude of new use cases.
>>
>> When I originally presented my work on congestion control at Jan 2017
>> BPASE, I also discussed it as an option for covenants. Unfortunately I
>> think it may be on the edge of too powerful -- there are a lot of use cases
>> and implications from having a potentially recursive covenant. If you see
>> my response to Matt in the OP_COSHV BIP thread I classify it as enabling a
>> non-computationally enumerable set of restrictions.
>>
>> I think also from a developer point of view working with OP_COSHV is much
>> much simpler (maybe this can be abstracted) which will lead to increased
>> adoption. OP_COSHV also uses less per-block bandwidth which also makes it
>> preferable for a measure intended to decongest blocks. Do you know the
>> exact byte cost for OP_CHECKSIGFROMSTACK? OP_COSHV scripts, with templating
>> changes to taproot, can be a single byte. OP_COSHV also has less potential
>> to have a negative interaction with future opcodes we may want like
>> OP_PUBKEYTWEAK. While we're getting to an exact spec for the features we
>> want in Bitcoin scripting, it's hard to sign on to OP_CHECKSIGFROMSTACK
>> unless there's an exact specification which makes us confident we're
>> hitting all the points.
>>
>> If the main complaint about OP_COSHV is that it peeks at surrounding
>> data, it's also possible to implement it more closely to a multi-byte
>> pushdata opcode or do the template optimization.
>>
>> Lastly, as I have previously noted, OP_LEFT is probably safer to
>> implement than OP_CAT and should be more efficient for OP_CHECKSIGFROMSTACK
>> scripts.
>>
>>

Re: [bitcoin-dev] Congestion Control via OP_CHECKOUTPUTSHASHVERIFY proposal

2019-05-25 Thread Jeremy via bitcoin-dev
ZmnSCPxj,

I think you're missing the general point, so I'm just going to respond to
one point to see if that helps your understanding of why OP_COSHV is better
than just pre-signed.

The reason why MuSig and other distributed signing solutions are not
acceptable for this case is that they all require interaction to guarantee
payout.

In contrast, I can use a OP_COSHV Taproot key to request a withdrawal from
an exchange which some time later pays out to a lot of people, rather than
having to withdraw multiple times and then pay. The exchange doesn't have
to know this is what I did. They also don't have to tell me the exact
inputs they'll spend to me or if I'm batched or not (batching is largely
incompatible with pre-signing unless anyprevout is available).

The exchange can take my withdrawal request and aggregate it to other
payees into a tree as well, without requiring permission from the
recipients.

They can also -- without my permission -- make the payment not directly
into me, but into a payment channel between me and the exchange, allowing
me to undo the withdrawal by routing money back to the exchange over
lightning.

The exchange can take some inbound payments to their hot wallet and move
them into cold storage with pre-set spending paths. They don't need to use
ephemeral keys (how was that entropy created?) nor do they need to bring on
their cold storage keys to pre-sign the spending paths.

None of this really works well with just pre-signing because you need to
ask for permission first in order to do these operations, but with OP_COSHV
you can, just as the payer without talking to anyone else, or just as the
recipient commit your funds to a complex txn structure.

Lastly, think about this in terms of DoS. You have a set of N users who
request a payment. You build the tree, collect signatures, and then at the
LAST step of building the tree, one user drops out. You restart, excluding
that user. Then a different user drops. Meanwhile you've had to keep your
funds locked up to guarantee those inputs for the txn when it finalizes.

In contrast, once you receive the requests with OP_COSHV, there's nothing
else to do. You just issue the transaction and move on.


Does that make sense as to why a user would prefer this, even if there is
an emulation with pre-signed txns?


Re: [bitcoin-dev] Safety of committing only to transaction outputs

2019-05-25 Thread Jeremy via bitcoin-dev
Hi Johnson,

As noted on the other thread, witness replay-ability can be helped by
salting the taproot key or the taproot leaf script at the last stage of a
congestion control tree.

I also think that chaperone signatures should be opt-in; there are cases
where we may not want them. OP_COSHV is compatible with an additional
checksig operation.

There are also other mechanisms that can improve the safety. Proposed below:

OP_CHECKINPUTSHASHVERIFY -- allows checking that the hash of the inputs is a
particular value. The top level of a congestion control tree can check that
the inputs match the desired inputs for that spend, and default to
requiring N of N otherwise. This is replay-proof! This is useful for other
applications too.

OP_CHECKFEEVERIFY -- an explicit commitment to the exact fee amount limits
replay to txns which were funded with exactly the same amount as the prior
one. If there's a mismatch, an alternative branch can be used. This is a
generally useful mechanism, but it means that transactions using it must
have all inputs/outputs set.
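
A sketch of what the first of these would check (mine; the outpoint
serialization is an assumption for illustration only):

```python
import hashlib

def inputs_hash(outpoints):
    """Hash a list of (txid, vout) outpoints; stand-in serialization."""
    ser = b"".join(txid + vout.to_bytes(4, "little") for txid, vout in outpoints)
    return hashlib.sha256(ser).digest()

# Committed when the congestion control tree is built:
committed = inputs_hash([(b"\x11" * 32, 0)])
# At spend time, OP_CHECKINPUTSHASHVERIFY would enforce that the spending
# txn's actual inputs hash to the committed value, so the witness cannot
# be replayed against any other funding outpoint.
assert inputs_hash([(b"\x11" * 32, 0)]) == committed
```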

Best,

Jeremy
--
@JeremyRubin 



On Fri, May 24, 2019 at 7:40 AM Johnson Lau via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> This is a meta-discussion for any approach that allows the witness
> committing to only transaction outputs, but not inputs.
>
> We can already do the following things with the existing bitcoin script
> system:
> * commit to both inputs and outputs: SIGHASH_ALL or SIGHASH_SINGLE, with
> optional SIGHASH_ANYONECANPAY
> * commit to only inputs but not outputs: SIGHASH_NONE with optional
> SIGHASH_ANYONECANPAY
> * not commit to any input nor output: not using any sigop; using a trivial
> private key; using the SIGHASH_SINGLE bug in legacy script
>
> The last one is clearly unsafe as any relay/mining node may redirect the
> payment to any output it chooses. The witness/scriptSig is also replayable,
> so any future payment to this script will likely be swept immediately
>
> SIGHASH_NONE with ANYONECANPAY also allows redirection of payment, but the
> signature is not replayable
>
> But it’s quite obvious that not committing to outputs is inherently
> insecure
>
> The existing system doesn’t allow committing only to outputs, and we now
> have 3 active proposals for this function:
>
> 1. CAT and CHECKSIGFROMSTACK (CSFS):
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016946.html
> 2. ANYPREVOUT (aka NOINPUT):
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016929.html
> 3. CHECKOUTPUTSHASHVERIFY (COHV):
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016934.html
>
> With outputs committed, redirecting payment is not possible. On the other
> hand, not committing to any input means the witness is replayable without
> the consent of address owner. Whether replayability is acceptable is
> subject to controversy, but the ANYPREVOUT proposal fixes this by requiring
> a chaperone signature that commits to input. However, if the rationale for
> chaperone signature stands, it should be applicable to all proposals listed
> above.
>
> A more generic approach is to always require a “safe" signature that
> commits to at least one input. However, this interacts poorly with the
> "unknown public key type” upgrade path described in bip-tapscript (
> https://github.com/sipa/bips/blob/bip-schnorr/bip-tapscript.mediawiki),
> since it’d be a hardfork to turn an “unknown type sig” into a “safe sig”.
> But we could still use a new “leaf version” every time we introduce new
> sighash types, so we could have a new definition for “safe sig”. I expect
> this would be a rare event and it won’t consume more than a couple leaf
> versions. By the way, customised sighash policies could be done with
> CAT/CSFS.


Re: [bitcoin-dev] An alternative: OP_CAT & OP_CHECKSIGFROMSTACK

2019-05-25 Thread Jeremy via bitcoin-dev
Hi Russell,

Thanks for this detailed comparison. The COSHV BIP does include a brief
comparison to OP_CHECKSIGFROMSTACKVERIFY and ANYPREVOUT, but this is more
detailed.


I think that the power from CHECKSIGFROMSTACKVERIFY is awesome. It's
clearly one of the more flexible options available and would enable a
multitude of new use cases.

When I originally presented my work on congestion control at Jan 2017
BPASE, I also discussed it as an option for covenants. Unfortunately I
think it may be on the edge of too powerful -- there are a lot of use cases
and implications from having a potentially recursive covenant. If you see
my response to Matt in the OP_COSHV BIP thread I classify it as enabling a
non-computationally enumerable set of restrictions.

I think also that, from a developer point of view, working with OP_COSHV is
much, much simpler (maybe this can be abstracted), which will lead to
increased adoption. OP_COSHV also uses less per-block bandwidth, which
makes it preferable for a measure intended to decongest blocks. Do you know
the exact byte cost for OP_CHECKSIGFROMSTACK? OP_COSHV scripts, with
templating changes to taproot, can be a single byte. OP_COSHV also has less
potential for negative interaction with future opcodes we may want, like
OP_PUBKEYTWEAK. While we're still converging on an exact spec for the
features we want in Bitcoin scripting, it's hard to sign on to
OP_CHECKSIGFROMSTACK unless there's an exact specification which makes us
confident we're hitting all the points.

If the main complaint about OP_COSHV is that it peeks at surrounding data,
it's also possible to implement it more closely to a multi-byte pushdata
opcode or do the template optimization.

Lastly, as I have previously noted, OP_LEFT is probably safer to implement
than OP_CAT and should be more efficient for OP_CHECKSIGFROMSTACK scripts.


Re: [bitcoin-dev] Congestion Control via OP_CHECKOUTPUTSHASHVERIFY proposal

2019-05-25 Thread Jeremy via bitcoin-dev
Hi Johnson,

Thanks for the review. I do agree that OP_COSHV is a restricted subset
(note the pluralization -- it would also be possible to do an OP_COHV
<index> <hash> to commit to specific outputs).

I think the point of OP_COSHV is that something like ANYPREVOUT is much
more controversial. OP_COSHV is a subset by design. The "if" on ANYPREVOUT
is substantial: discussion I've seen shows that the safety of ANYPREVOUT is
far from fully agreed. (I'll respond to your other email on the subject
too.) OP_COSHV is also proposed specifically as a congestion control
mechanism, and so keeping it very easy to verify, with minimal data
(optimizations allow reducing it to just OP_COSHV with no 32-byte argument),
suggests this approach is preferable.

In an earlier version, rather than have it be a first-input restriction,
I had implemented it as an only-one-input restriction. This makes it easier
to work with SIGHASH_SINGLE. This works by having the PrecomputedData carry
an atomic test_flag. However, I felt that the statefulness between
verifications was not great, and so I simplified it.

There actually is a reason to require minimal push -- maybe we can change
the rule so that non-minimal pushes are ignored, because we can later extend
it with a different rule. This seems a little error prone. There's also no
reason not to just treat OP_COSHV as a pushdata-32 itself, and drop the
extra byte if we don't care about versioning later.

Requiring a signature actually makes COSHV less useful, so I'm against that
-- such a signature prevents using OP_COSHV for non-interactive or
uncoordinated setups where the txids are unstable. It also makes building
the trees more expensive. If you want this feature, a better thing to do
would be to always tweak the leaf nodes of the tx tree with entropy so that
each is unique per key; this doesn't impose extra data at every node, only
at the leaves of the expansion tree.


--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Fri, May 24, 2019 at 12:13 PM Johnson Lau  wrote:

> Functionally, COHV is a proper subset of ANYPREVOUT (NOINPUT). The only
> justification to do both is better space efficiency when making covenant.
>
> With eltoo as a clear usecase of ANYPREVOUT, I’m not sure if we really
> want a very restricted opcode like COHV. But these are my comments, anyway:
>
> 1. The “one input” rule could be relaxed to “first input” rule. This
> allows adding more inputs as fees, as an alternative to CPFP. In case the
> value is insufficient to pay the required outputs, it is also possible to
> rescue the UTXO by adding more inputs.
>
> 2. While there is no reason to use non-minimal push, there is neither a
> reason to require minimal push. Since minimal push is never a consensus
> rule, COHV shouldn’t be a special case.
>
> 3. As I suggested in a different post (
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016963.html),
> the argument for requiring a prevout binding signature may also be
> applicable to COHV
>
> On 21 May 2019, at 4:58 AM, Jeremy via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Hello bitcoin-devs,
>
> Below is a link to a BIP Draft for a new opcode,
> OP_CHECKOUTPUTSHASHVERIFY. This opcode enables an easy-to-use trustless
> congestion control techniques via a rudimentary, limited form of covenant
> which does not bear the same technical and social risks of prior covenant
> designs.
>
> Congestion control allows Bitcoin users to confirm payments to many users
> in a single transaction without creating the UTXO on-chain until a later
> time. This therefore improves the throughput of confirmed payments, at the
> expense of latency on spendability and increased average block space
> utilization. The BIP covers this use case in detail, and a few other use
> cases lightly.
>
> The BIP draft is here:
>
> https://github.com/JeremyRubin/bips/blob/op-checkoutputshashverify/bip-coshv.mediawiki
>
> The BIP proposes to deploy the change simultaneously with Taproot as an
> OPSUCCESS, but it could be deployed separately if needed.
>
> An initial reference implementation of the consensus changes and  tests
> which demonstrate how to use it for basic congestion control is available
> at https://github.com/JeremyRubin/bitcoin/tree/congestion-control.  The
> changes are about 74 lines of code on top of sipa's Taproot reference
> implementation.
>
> Best regards,
>
> Jeremy Rubin


Re: [bitcoin-dev] Congestion Control via OP_CHECKOUTPUTSHASHVERIFY proposal

2019-05-22 Thread Jeremy via bitcoin-dev
> * I do not think CoinJoin is much improved by this opcode.
>   Typically, you would sign off only if one of the outputs of the
CoinJoin transaction is yours, and this does not really improve this
situation.

Coinjoin benefits a lot I think.


Coinjoin is improved because you can fit more participants into the
protocol and create many more outputs at lower cost. Ideally a coinjoin
creates a lot of outputs so that the ownership is smeared more, but this
has a cost at the time of the coinjoin.

Coinjoin is also improved because you don't reveal the outputs created by
the coinjoin until some time, perhaps very far in the future, when you need
the coin. In fact, you only need to reveal where you're moving the coins to
the participants in your subtree, because participants need only verify
their own branch.

It also makes the protocol more stable with respect to input choice. This
is because, similar to how NOINPUT may work, OP_COSHV outputs are spendable
without knowing what the TXID will be. Therefore if someone changes their
input or non segwit spend script, it won't break the presigned txns. This
also means that all the inputs can be ANYONECANPAY, so there is no need to
reveal your inputs before anyone else.

This culminates in being able to open channels from a coinjoin safely, I
believe this is difficult/impossible to do currently.




> * Using this for congestion control increases blockchain usage by one TXO
and one input, ending up with *more* bytes onchain, and a UTXO that will be
removed later in (we hope) short time.
>   I do not know if this is a good idea, to increase congestion by making
unnecessary intermediate transaction outputs, at times when congestion is a
problem.

This is a good idea because it improves QoS for most users.

For receiving money, a payment that is confirmed but pending spendability
(i.e. a certified check) is superior to having unconfirmed funds.

For sending money, being able to clear all liabilities in a single txn
decreases business exposure to fee variance and confirmation time variance.
E.g., if I'm doing payroll in Bitcoin I will pay big fines if I am a day
late. If I have 10,000 employees this might be painful if fees are
currently high.

It also helps to have a backlog of low priority txns to support the fee
market.

Overall block bandwidth utilization is fairly spikey, so long-term,
well-known outputs that are not time sensitive can be used to better
utilize bandwidth.

The total extra bandwidth btw is really small given the expansion factor
optimizations available.


> * I cannot find a way to implement Decker-Russell-Osuntokun (or any
offchain update mechanism) on top of this opcode, so I cannot support
replacing `SIGHASH_NOINPUT` with this opcode.
>   In particular, while the finite loop support by this opcode appears (at
first glance) to be useable as the "stepper" for an offchain update
mechanism, I cannot find a good way to short-circuit the transaction chain
without `SIGHASH_NOINPUT` anyway.

I'm not deeply familiar with DRO channels. This opcode isn't a replacement
for SIGHASH_NOINPUT -- SIGHASH_NOINPUT is mentioned merely to contrast
using SIGHASH_NOINPUT for the uses presented in this BIP.

Lastly, there's no 'replacing'. Neither NOINPUT nor COSHV has been accepted
by the community at large yet, and they do different things.


> * Channel factories created by this opcode do not, by themselves, support
updates to the channel structure.
>   But such simple "close only" channel factories can be done using n-of-n
and a pre-signed offchain transaction (especially since the entities
interested in the factory are known and enumerable, and thus can be induced
to sign in order to enter the factory).

I'm not really an expert on Bitcoin Lightning, but this basic mechanism
should work.

Imagine the script at a leaf node:

Taproot([Alice, Bob], [OP_COSHV <uncooperative txn outputs hash>])

where the uncooperative script is:

Taproot([Alice, Bob], ["1 week" CHECKSEQUENCEVERIFY DROP OP_COSHV <settlement outputs hash>])

Cooperative closing skips the extra transactions. Updates are signed
against the uncooperative script with repudiation. E.g.:

HASH160 <revocation hash> EQUAL
IF
<revocation key>
ELSE
"1 week" CHECKSEQUENCEVERIFY DROP
<update key>
ENDIF
CHECKSIG

It can even be optimized by having the uncooperative script branches in
the leaf assign blame to Alice or Bob.

Does that not work?


Re: [bitcoin-dev] Congestion Control via OP_CHECKOUTPUTSHASHVERIFY proposal

2019-05-22 Thread Jeremy via bitcoin-dev
Morning,

Yes, in general, Bitcoin does not do anything to prevent users from
discarding their keys.

I don't think this will be fixed anytime soon.

There are some protocols, though, where knowing that a key was once known
to the recipient may make it legally valid to inflict a punitive measure
(e.g., via HTLC), whereas if the key was never known, that might be a
breach of contract for the payment provider.

Best,

Jeremy

On Tue, May 21, 2019 at 7:52 PM ZmnSCPxj  wrote:

> Good morning Jeremy,
>
> >If a sender needs to know the recipient can remove the covenant before
> spending, they may request a signature of an challenge string from the
> recipients
>
> The recipients can always choose to destroy the privkey after providing
> the above signature.
> Indeed, the recipients can always insist on not cooperating to sign using
> the taproot branch and thus force spending via the
> `OP_CHECKOUTPUTSHASHVERIFY`.
>
> Regards,
> ZmnSCPxj
>


Re: [bitcoin-dev] Congestion Control via OP_CHECKOUTPUTSHASHVERIFY proposal

2019-05-22 Thread Jeremy via bitcoin-dev
I agree a little bit, but I think that logic is somewhat infectious. If
we're going to do covenants, we should also do it as a part of a more
comprehensive new scripting system that gives us other strong benefits for
our ability to template scripts. And so on. I'm excited to see what's
possible!

Given that this is very simple to implement and has obvious deployable big
wins with few controversial drawbacks, it makes more sense to streamline
adoption of something like this for now and work on a more comprehensive
solution without urgency.

The design is also explicitly versioned, so short of an eventual full
redesign it should be easy enough to add more flexible features piecemeal
as they come up and as their use cases are strongly justified, as I have
shown here for certified post-dated UTXO creation.

Lastly, I think that while these are classifiable as covenants in
implementation, they are closer in use to multisig pre-signed scripts,
without the requirement of interactive setup. We should think of these as
'certified checks' instead, which can also describe a pre-signed design
satisfactorily. With true covenants we don't want to require the satisfying
conditions to be 'computationally enumerable' (e.g., we can't, within
computational limits, enumerate all public keys if the covenant expresses
that a spend must be to a public key). And if the covenant is
computationally enumerable, then we should use this construct and put the
spending paths into a Huffman-encoded taproot tree.
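
A sketch of that last step (mine; the weights are assumed usage
probabilities, not anything specified here):

```python
import heapq
from itertools import count

def huffman_depths(paths):
    """paths: {script: weight} -> {script: merkle depth}. Likelier spending
    paths end up nearer the tree's root, so their proofs are smaller."""
    tie = count()  # tie-breaker so heapq never has to compare dicts
    heap = [(w, next(tie), {name: 0}) for name, w in paths.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {k: v + 1 for d in (d1, d2) for k, v in d.items()}
        heapq.heappush(heap, (w1 + w2, next(tie), merged))
    return heap[0][2]

print(huffman_depths({"cooperative": 0.90, "timeout": 0.09, "revoke": 0.01}))
# {'cooperative': 1, 'timeout': 2, 'revoke': 2}
```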

On Tue, May 21, 2019, 12:41 PM Matt Corallo 
wrote:

> If we're going to do covenants (and I think we should), then I think we
> need to have a flexible solution that provides more features than just
> this, or we risk adding it only to go through all the effort again when
> people ask for a better solution.
>
> Matt
>
> On 5/20/19 8:58 PM, Jeremy via bitcoin-dev wrote:
> > Hello bitcoin-devs,
> >
> > Below is a link to a BIP Draft for a new opcode,
> > OP_CHECKOUTPUTSHASHVERIFY. This opcode enables an easy-to-use trustless
> > congestion control techniques via a rudimentary, limited form of
> > covenant which does not bear the same technical and social risks of
> > prior covenant designs.
> >
> > Congestion control allows Bitcoin users to confirm payments to many
> > users in a single transaction without creating the UTXO on-chain until a
> > later time. This therefore improves the throughput of confirmed
> > payments, at the expense of latency on spendability and increased
> > average block space utilization. The BIP covers this use case in detail,
> > and a few other use cases lightly.
> >
> > The BIP draft is here:
> >
> https://github.com/JeremyRubin/bips/blob/op-checkoutputshashverify/bip-coshv.mediawiki
> >
> > The BIP proposes to deploy the change simultaneously with Taproot as an
> > OPSUCCESS, but it could be deployed separately if needed.
> >
> > An initial reference implementation of the consensus changes and  tests
> > which demonstrate how to use it for basic congestion control is
> > available at
> > https://github.com/JeremyRubin/bitcoin/tree/congestion-control.  The
> > changes are about 74 lines of code on top of sipa's Taproot reference
> > implementation.
> >
> > Best regards,
> >
> > Jeremy Rubin
> >


[bitcoin-dev] Congestion Control via OP_CHECKOUTPUTSHASHVERIFY proposal

2019-05-21 Thread Jeremy via bitcoin-dev
Hello bitcoin-devs,

Below is a link to a BIP Draft for a new opcode, OP_CHECKOUTPUTSHASHVERIFY.
This opcode enables an easy-to-use trustless congestion control techniques
via a rudimentary, limited form of covenant which does not bear the same
technical and social risks of prior covenant designs.

Congestion control allows Bitcoin users to confirm payments to many users
in a single transaction without creating the UTXO on-chain until a later
time. This therefore improves the throughput of confirmed payments, at the
expense of latency on spendability and increased average block space
utilization. The BIP covers this use case in detail, and a few other use
cases lightly.

The BIP draft is here:
https://github.com/JeremyRubin/bips/blob/op-checkoutputshashverify/bip-coshv.mediawiki

The BIP proposes to deploy the change simultaneously with Taproot as an
OPSUCCESS, but it could be deployed separately if needed.

An initial reference implementation of the consensus changes and  tests
which demonstrate how to use it for basic congestion control is available
at https://github.com/JeremyRubin/bitcoin/tree/congestion-control.  The
changes are about 74 lines of code on top of sipa's Taproot reference
implementation.

Best regards,

Jeremy Rubin


Re: [bitcoin-dev] Graftroot: Private and efficient surrogate scripts under the taproot assumption

2018-02-08 Thread Jeremy via bitcoin-dev
I'm also highly interested in the case where you sign a delegate
conditional on another delegate being signed, e.g. a bilateral agreement.

In order for this to work nicely you also need internally something like
segwit so that you can refer to one side's delegation by a signature-stable
identity.

I don't have a suggestion of a nice way to do this at this time, but will
stew on it.

--
@JeremyRubin 


On Thu, Feb 8, 2018 at 11:29 PM, Jeremy  wrote:

> This might be unpopular because of bad re-org behavior, but I believe the
> utility of this construction can be improved if we introduce functionality
> that makes a script invalid after a certain time (correct me if I'm
> wrong, I believe all current timelocks are valid after a certain time and
> invalid before, this is the inverse).
>
> Then you can exclude old delegates by timing/block height arguments, or
> even pre-sign delegates for different periods of time (e.g., if this
> happens in the next 100 blocks require y, before the next 1000 blocks but
> after the first 100 require z, etc).
>
>
>
> --
> @JeremyRubin 
> 
>
> On Mon, Feb 5, 2018 at 11:58 AM, Gregory Maxwell via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Mon, Feb 5, 2018 at 3:56 PM, Ryan Grant 
>> wrote:
>> > Am I reading correctly that this allows unilateral key rotation (to a
>> > previously unknown key), without invalidating the interests of other
>> > parties in the existing multisig (or even requiring any on-chain
>> > transaction), at the cost of storing the signed delegation?
>>
>> Yes, though I'd avoid the word rotation because as you note it doesn't
>> invalidate the interests of any key, the original setup remains able
>> to sign.  You could allow a new key of yours (plus everyone else) to
>> sign, assuming the other parties agree... but the old one could also
>> still sign.


Re: [bitcoin-dev] Graftroot: Private and efficient surrogate scripts under the taproot assumption

2018-02-08 Thread Jeremy via bitcoin-dev
This might be unpopular because of bad re-org behavior, but I believe the
utility of this construction can be improved if we introduce functionality
that makes a script invalid after a certain time (correct me if I'm wrong,
I believe all current timelocks are valid after a certain time and invalid
before, this is the inverse).

Then you can exclude old delegates by timing/block height arguments, or
even pre-sign delegates for different periods of time (e.g., if this
happens in the next 100 blocks require y, before the next 1000 blocks but
after the first 100 require z, etc).



--
@JeremyRubin 


On Mon, Feb 5, 2018 at 11:58 AM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Mon, Feb 5, 2018 at 3:56 PM, Ryan Grant  wrote:
> > Am I reading correctly that this allows unilateral key rotation (to a
> > previously unknown key), without invalidating the interests of other
> > parties in the existing multisig (or even requiring any on-chain
> > transaction), at the cost of storing the signed delegation?
>
> Yes, though I'd avoid the word rotation because as you note it doesn't
> invalidate the interests of any key, the original setup remains able
> to sign.  You could allow a new key of yours (plus everyone else) to
> sign, assuming the other parties agree... but the old one could also
> still sign.


Re: [bitcoin-dev] UTXO growth scaling solution proposal

2017-07-21 Thread Jeremy via bitcoin-dev
Hi Major,

I think that you'll enjoy Peter Todd's blogpost on TXO commitments[1]. It
offers a better scalability improvement with fewer negative consequences.

Best,

Jeremy



[1] https://petertodd.org/2016/delayed-txo-commitments


--
@JeremyRubin 


On Fri, Jul 21, 2017 at 3:28 PM, Major Kusanagi via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> I have a scaling solution idea that I would be interested in getting some
> feedback on. I’m new to the mailing list and have not been in the Bitcoin
> space as long as some have been, so I don’t know if anyone has thought of
> this idea.
>
> Arguably the biggest scaling problem for Bitcoin is the unbounded UTXO
> growth. Current scaling solutions like Segregated Witness, Lighting
> Network, and larger blocks does not address this issue. As more and more
> blocks are added to the block chain the size of the UTXO set that miners
> have to maintain continues to grow. This is the case even if the block size
> were to remain at 1 megabyte. There is no way out of solving this
> fundamental scaling problem other than to limit the maximum size of the
> UTXO set.
>
> The following soft fork solution is proposed. Any UTXO that is not spent
> within a set number of blocks is considered invalid. What this means for
> miners and nodes in the Bitcoin network is that they only have to ever
> store that set number of blocks. In other words the block chain will never
> be larger than the set number of blocks and the size of the block chain is
> capped.
>
> But what this means for users is that bitcoins that have not been spent
> for a long time are “lost” forever. This proposed solution is likely a
> difficult thing for Bitcoin users to accept. What Bitcoin users will
> experience is that all of a sudden their bitcoins are spendable one moment
> and the next moment they are not. The experience that they get is that all
> of a sudden their old bitcoins are gone forever.
>
> The solution can be improved by adding a new mechanism to Bitcoin that
> I will call luster. UTXOs that are less than X blocks old have not lost any
> luster and have a luster value of 1. As UTXOs get older, the luster value
> will continuously decrease until the UTXOs become Z blocks old (where Z >
> X), have lost all their luster, and have a luster value of 0. UTXOs that
> are in between X and Z blocks old have a luster value between 0 and 1. The
> luster value is then used to compute the amount of bitcoins that must be
> burned in order for a transaction with that UTXO to be included in a block.
> So for example, a UTXO with a luster value of 0.5 must burn at least 50
> percent of its bitcoin value, a UTXO with a luster value of 0.25 must burn
> at least 75 percent of its bitcoin value, and a UTXO with a luster value of
> 0 must burn 100 percent of its bitcoin value. Thus the coins/UTXOs that
> have a luster value of 0 have no monetary value, and it would be
> safe for bitcoin nodes to drop those UTXOs from the set they maintain.
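>
> A sketch of this schedule (an illustration; the decay curve between X and
> Z is not pinned down above, so linear is assumed, and the X and Z values
> are arbitrary):
>
> ```python
> BLOCKS_PER_YEAR = 6 * 24 * 365          # one block every 10 minutes
> X = 10 * BLOCKS_PER_YEAR                # full luster below this age
> Z = 150 * BLOCKS_PER_YEAR               # zero luster at or past this age
>
> def luster(age: int) -> float:
>     if age <= X:
>         return 1.0
>     if age >= Z:
>         return 0.0
>     return 1.0 - (age - X) / (Z - X)    # assumed linear decay
>
> def required_burn(value: int, age: int) -> int:
>     """Minimum amount to burn for a txn spending this UTXO to be valid."""
>     return round(value * (1.0 - luster(age)))
>
> assert required_burn(1000, X) == 0               # full luster: no burn
> assert required_burn(1000, (X + Z) // 2) == 500  # luster 0.5: burn 50%
> assert required_burn(1000, Z) == 1000            # luster 0: valueless
> ```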
>
> The idea is that coins that are continuously being used in the Bitcoin
> economy will never lose their luster. But coins that are old and not
> circulating will start to lose their luster, up until all luster is lost
> and they become valueless. Or they reenter the economy and regain all
> their luster.
>
> But at what point should coins start losing their luster? A goal would be
> that we want to minimize the scenarios in which coins start losing their
> luster. One reasonable answer is that coins should only start losing their
> luster after the lifespan of the average human. The idea being that a
> person will eventually have to spend all his coins before he dies,
> otherwise they will get lost anyways (assuming that only the dying person
> has the ability to spend those coins). Otherwise there are few cases where
> a person would never spend their bitcoins in their human lifetime. One
> example is the case of inheritance, where a dying person does not want to
> spend his remaining coins and has another person take them over. But with
> this proposed scaling solution, coins can still be inherited; it would
> just have to be an on-chain inheritance. The longest lifespan of a human
> currently is about 120 years. So a blockchain that stores the last 150
> years of history seems like one reasonable option.
>
> Then the question of how large blocks should be is simply a matter of what
> is the disk size requirement for a full node. For simplicity, assuming that
> a block is created every 10 minutes, the blockchain size in terabytes can
> be expressed as follows.
> blockSize MB * 6 * 24 * 365 * years / 10^6 = blockchainSize TB
>
> Example values:
> blockSize = 1MB, years = 150 -> blockchainSize = 7.884 TB
> blockSize = 2MB, years = 150 -> blockchainSize = 15.768 TB
>
> So if we don’t want the block chain to be bigger than 8 TB, then we should
> have a block size of 1 MB.

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-28 Thread Jeremy via bitcoin-dev
I think it's probably safer to have a fork-to-minimum (e.g. minimal
coinbase+header) after a certain date than to fork up at a certain date. At
least in that case, the default isn't breaking consensus, but you still get
the same pressure to fork to a permanent solution.

I don't endorse the above proposal, but remark on it for the sake of
guiding the argument you are making.


--
@JeremyRubin 


On Tue, Mar 28, 2017 at 1:31 PM, Wang Chun via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> The basic idea is, let's stop the debate for whether we should upgrade
> to 2MB, 8MB or 32MiB. 32MiB is well above any proposals' upper limit,
> so any final decision would be a soft fork to this already deployed
> release. If by 2020, we still agree 1MB is enough, it can be changed
> back to the 1MB limit, and it would also be a soft fork on top of that.
>
> On Wed, Mar 29, 2017 at 1:23 AM, Alphonse Pace 
> wrote:
> > What meeting are you referring to?  Who were the participants?
> >
> > Removing the limit but relying on the p2p protocol is not really a true
> > 32MiB limit, but a limit of whatever transport methods provide.  This can
> > lead to differing consensus if alternative layers for relaying are used.
> > What you seem to be asking for is an unbound block size (or at least
> > determined by whatever miners produce).  This has the possibility (and
> even
> > likelihood) of removing many participants from the network, including
> many
> > small miners.
> >
> > 32MB in less than 3 years also appears to be far beyond limits of safety
> > which are known to exist far sooner, and we cannot expect hardware and
> > networking layers to improve by those amounts in that time.
> >
> > It also seems like it would be much better to wait until SegWit
> activates in
> > order to truly measure the effects on the network from this increased
> > capacity before committing to any additional increases.
> >
> > -Alphonse
> >
> >
> >
> > On Tue, Mar 28, 2017 at 11:59 AM, Wang Chun via bitcoin-dev
> >  wrote:
> >>
> >> I've proposed this hard fork approach last year in Hong Kong Consensus
> >> but immediately rejected by coredevs at that meeting, after more than
> >> one year it seems that lots of people haven't heard of it. So I would
> >> post this here again for comment.
> >>
> >> The basic idea is, as many of us agree, hard fork is risky and should
> >> be well prepared. We need a long time to deploy it.
> >>
> >> Despite spam tx on the network, the block capacity is approaching its
> >> limit, and we must think ahead. Shall we code a patch right now, to
> >> remove the block size limit of 1MB, but not activate it until far in
> >> the future. I would propose to remove the 1MB limit at the next block
> >> halving in spring 2020, only limit the block size to 32MiB which is
> >> the maximum size the current p2p protocol allows. This patch must be
> >> in the immediate next release of Bitcoin Core.
> >>
> >> With this patch in core's next release, Bitcoin works just as before,
> >> no fork will ever occur, until spring 2020. But everyone knows there
> >> will be a fork scheduled. Third party services, libraries, wallets and
> >> exchanges will have enough time to prepare for it over the next three
> >> years.
> >>
> >> We don't yet have an agreement on how to increase the block size
> >> limit. There have been many proposals over the past years, like
> >> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
> >> on. These hard fork proposals, with this patch already in Core's
> >> release, they all become soft fork. We'll have enough time to discuss
> >> all these proposals and decide which one to go. Take an example, if we
> >> choose to fork to only 2MB, since 32MiB already scheduled, reduce it
> >> from 32MiB to 2MB will be a soft fork.
> >>
> >> Anyway, we must code something right now, before it becomes too late.


Re: [bitcoin-dev] Script Abuse Potential?

2017-01-05 Thread Jeremy via bitcoin-dev
@Russell: Appreciate the historical note, but as that op code was
simultaneously disabled in that patch I don't think we can look back to how
it was non-functionally changed (that number means nothing... maybe Satoshi
was trying it out with 520 bytes but then just decided to all-out disable
it and accidentally included that code change? Hard to say what the intent
was.).

@Jorge:
That's one part of it that is worth hesitation and consideration. I'm not a
fan of the 520 byte limit as well. My gut feeling is that the "right"
answer is to compute the memory weight of the entire stack before/after
each operation and reasonably bound it.

The text below is from the Chain Core documentation:

"""
Most instructions use only the data stack by removing some items and then
placing some items back on the stack. For these operations, we define the
standard memory cost applied as follows:

Instruction’s memory cost value is set to zero.
For each item removed from the data stack, instruction’s memory cost is
decreased by 8+L where L is the length of the item in bytes.
For each item added to the data stack the cost is increased by 8+L where L
is the length of the item in bytes.
​​
Every instruction has a cost that affects VM run limit. Total instruction
cost consists of execution costand memory cost. Execution cost always
reduces remaining run limit, while memory usage cost can be refunded
(increasing the run limit) when previously used memory is released during
VM execution.
"""

Is there a reason to favor one approach over the other? I think one reason
to favor a direct limit on OP_CAT is that it favors what I'll dub "context
free" analysis, where the performance doesn't depend on what else is on the
stack (perhaps by passing very large arguments to a script you can cause
bad behavior with a general memory limit?). On the other hand, the reason I
prefer the general memory limit is that it solves the problem for all
future memory-risky opcodes (or present-day memory risks!). Further, OP_CAT
is also a bit leaky, in that you could be catting onto a passed-in large
string. The chief argument I'm aware of against a general memory limit is
that it is tricky to make a non-implementation-dependent memory limit
(e.g., you can't just call DynamicMemoryUsage on the stack), but I don't
think this is a strong argument for several (semi-obvious? I can go into
them if need be) reasons.


--
@JeremyRubin <https://twitter.com/JeremyRubin>

On Wed, Jan 4, 2017 at 9:45 AM, Jorge Timón <jti...@jtimon.cc> wrote:

> I would assume that the controversial part of op_cat comes from the fact
> that it enables covenants. Are there more concerns than that?
>
> On 4 Jan 2017 04:14, "Russell O'Connor via bitcoin-dev" <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> For the record, the OP_CAT limit of 520 bytes was added by Satoshi
>> <https://github.com/bitcoin/bitcoin/commit/4bd188c4383d6e614e18f79dc337fbabe8464c82#diff-8458adcedc17d046942185cb709ff5c3R425>
>> on the famous August 15, 2010 "misc" commit, at the same time that OP_CAT
>> was disabled.
>> The previous limit was 5000 bytes.
>>
>> On Tue, Jan 3, 2017 at 7:13 PM, Jeremy via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Sure, was just upper bounding it anyways. Even less of a problem!
>>>
>>>
>>> RE: OP_CAT, not as OP_CAT was specified, which is why it was disabled.
>>> As far as I know, the elements alpha proposal to reenable a limited op_cat
>>> to 520 bytes is somewhat controversial...
>>>
>>>
>>>
>>> --
>>> @JeremyRubin <https://twitter.com/JeremyRubin>
>>>
>>> On Mon, Jan 2, 2017 at 10:39 PM, Johnson Lau <jl2...@xbt.hk> wrote:
>>>
>>>> No, there can be no more than 201 opcodes in a script. So you
>>>> may have 198 OP_2DUP at most, i.e. 198 * 520 * 2 = 206kB
>>>>
>>>> For OP_CAT, just check if the returned item is within the 520 bytes
>>>> limit.
>>>>
>>>> On 3 Jan 2017, at 11:27, Jeremy via bitcoin-dev <
>>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>>
>>>> It is an unfortunate script, but can't actually do that much it seems.
>>>> The MAX_SCRIPT_ELEMENT_SIZE = 520 Bytes. Thus, it would seem the worst
>>>> you could do with this would be to (1e4 - 520*2)*520*2 bytes ~=~ 10 MB.
>>>>
>>>> ​Much more concerning would be the op_dup/op_cat style bug, which under
>>>>

Re: [bitcoin-dev] Script Abuse Potential?

2017-01-02 Thread Jeremy via bitcoin-dev
It is an unfortunate script, but can't actually do that much it seems. The
MAX_SCRIPT_ELEMENT_SIZE = 520 Bytes. Thus, it would seem the worst you
could do with this would be to (1e4 - 520*2)*520*2 bytes ~=~ 10 MB.

Much more concerning would be the op_dup/op_cat style bug, which under a
similar script would certainly cause out of memory errors :)
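
(A quick check, mine, of how this interacts with the opcode limit raised in
the replies: under the 201-opcode limit the growth is linear and far below
the script-size bound above.)

```python
MAX_ELEMENT = 520
stack = [b"\x00" * MAX_ELEMENT] * 2       # two max-size items to duplicate
for _ in range(198):                      # at most ~198 OP_2DUPs fit
    stack.extend(stack[-2:])              # OP_2DUP re-pushes the top two items
print(sum(len(i) for i in stack))         # 206960 bytes, i.e. ~206 kB
```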



--
@JeremyRubin 


On Mon, Jan 2, 2017 at 4:39 PM, Steve Davis via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> Suppose someone were to use the following pk_script:
>
> [op_2dup, op_2dup, op_2dup, op_2dup, op_2dup, ...(to limit)...,
> op_2dup, op_hash160, <pubkey hash>, op_equalverify, op_checksig]
>
> This still seems to be valid AFAICS, and may be a potential attack vector?
>
> Thanks.
>
>


Re: [bitcoin-dev] Implementing Covenants with OP_CHECKSIGFROMSTACKVERIFY

2016-11-07 Thread Jeremy via bitcoin-dev
I think the following implementation may be advantageous. It uses the same
number of opcodes, without OP_CAT.

Avoiding use of OP_CAT is still desirable, as I think it will be more
difficult to agree on semantics for OP_CAT (given the necessary measures to
prevent memory abuse) than for OP_LEFT. Another option I would be in
support of would be to have signature flags apply to OP_CHECKSIGFROMSTACK,
with all OP_CHECKSIG flags being ignored if they aren't meaningful...

1. OP_DUP
2. OP_CHECKSIGVERIFY
3. OP_SHA256 OP_ROT OP_SIZE OP_1SUB OP_LEFT
4. OP_SWAP OP_ROT OP_CHECKSIGFROMSTACKVERIFY (with same argument order)



--
@JeremyRubin 


On Fri, Nov 4, 2016 at 7:35 AM, Tim Ruffing via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Not a covenant but interesting nevertheless: _One_ of OP_CAT and
> OP_CHECKSIGFROMSTACKVERIFY alone is enough to implement "opt-in miner
> takes double-spend" [1]:
>
> You can create an output, which is spendable by everybody if you ever
> double-spend the output with two different transactions. Then the next
> miner will probably take your money (double-spending against your two
> or more contradicting transactions again).
>
> If you spend such an output, then the recipient may be willing to
> accept a zero-conf transaction, because he knows that you'll lose the
> money when you attempt double-spending (unless you are the lucky
> miner). See the discussion in [1] for details.
>
> The implementation using OP_CHECKSIGFROMSTACKVERIFY is straightforward.
> You add a case to the script which allows spending if two valid
> signatures on different messages under the public key of the output are
> given.
>
> What is less known I think:
> The same functionality can be achieved in a simpler way just using
> OP_CAT, because it's possible to turn Bitcoin's ECDSA to an "opt-in
> one-time signature scheme". With OP_CAT, you can create an output that
> is only spendable using a signature (r,s) with a specific already fixed
> first part r=x_coord(kG). Basically, the creator of this output commits
> on r (and k) already when creating the output. Now, signing two
> different transaction with the same r allows everybody to extract the
> secret key from the two signatures.
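>
> (A small numeric illustration of that extraction, added here as a sketch:
> textbook ECDSA over secp256k1, with the nonce k reused deliberately.)
>
> ```python
> p = 2**256 - 2**32 - 977                # secp256k1 field
> n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
> G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
>      0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
>
> def add(P, Q):
>     if P is None: return Q
>     if Q is None: return P
>     if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
>     l = ((3 * P[0] ** 2) * pow(2 * P[1], -1, p) if P == Q
>          else (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p)) % p
>     x = (l * l - P[0] - Q[0]) % p
>     return (x, (l * (P[0] - x) - P[1]) % p)
>
> def mul(k, P):
>     R = None
>     while k:
>         if k & 1: R = add(R, P)
>         P, k = add(P, P), k >> 1
>     return R
>
> def sign(d, z, k):                      # deliberately reusing the nonce k
>     r = mul(k, G)[0] % n
>     return r, pow(k, -1, n) * (z + r * d) % n
>
> d, k, z1, z2 = 0xC0FFEE, 0x123456789, 111, 222
> r, s1 = sign(d, z1, k)
> _, s2 = sign(d, z2, k)
> k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n    # recover the nonce
> d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n  # then the private key
> assert d_rec == d                              # double-spending leaks d
> ```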
>
> The drawbacks of the implementation with OP_CAT is that it's not
> possible to make a distinction between legitimate or illegitimate
> double-spends (yet to be defined) but just every double-spend is
> penalized. Also, it's somewhat hackish and the signer must store k (or
> create it deterministically but that's a good idea anyway).
>
> [1] https://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07122.html
>
> Best,
> Tim
>
> On Thu, 2016-11-03 at 07:37 +, Daniel Robinson via bitcoin-dev
> wrote:
> > Really cool!
> >
> > How about "poison transactions," the other covenants use case
> > proposed by Möser, Eyal, and Sirer? (I think
> > OP_CHECKSIGFROMSTACKVERIFY will also make it easier to check fraud
> > proofs, the other prerequisite for poison transactions.)
> >
> > Seems a little wasteful to do those two "unnecessary" signature
> > checks, and to have to construct the entire transaction data
> > structure, just to verify a single output in the transaction. Any
> > plans to add more flexible introspection opcodes to Elements, such as
> > OP_CHECKOUTPUTVERIFY?
> >
> > Really minor nit: "Notice that we have appended 0x83 to the end of
> > the transaction data"—should this say "to the end of the signature"?
> >
> > On Thu, Nov 3, 2016 at 12:28 AM Russell O'Connor via bitcoin-dev <bitcoin-...@lists.linuxfoundation.org> wrote:
> > > Right.  There are minor trade-offs to be made with regards to that
> > > design point of OP_CHECKSIGFROMSTACKVERIFY.  Fortunately this
> > > covenant construction isn't sensitive to that choice and can be
> > > made to work with either implementation of
> > > OP_CHECKSIGFROMSTACKVERIFY.
> > >
> > > On Wed, Nov 2, 2016 at 11:35 PM, Johnson Lau  wrote:
> > > > Interesting. I have implemented OP_CHECKSIGFROMSTACKVERIFY in a
> > > > different way from the Elements. Instead of hashing the data on
> > > > stack, I directly put the 32 byte hash to the stack. This should
> > > > be more flexible as not every system are using double-SHA256
> > > >
> > > > https://github.com/jl2012/bitcoin/commits/mast_v3_master
> > > >
> > > >
> > > >
> > > > > On 3 Nov 2016, at 01:30, Russell O'Connor via bitcoin-dev <bitcoin-...@lists.linuxfoundation.org> wrote:
> > > > >
> > > > > Hi all,
> > > > >
> > > > > It is possible to implement covenants using two script
> > > > > extensions: OP_CAT and OP_CHECKSIGFROMSTACKVERIFY.  Both of
> > > > > these op codes are already available in the Elements Alpha
> > > > > sidechain, so it is possible to construct covenants in 

[bitcoin-dev] 1 Year bitcoin-dev Moderation Review

2016-10-09 Thread Jeremy via bitcoin-dev
Hi bitcoin-dev,

I'm well aware that discussion of moderation on bitcoin-dev is
discouraged*. However, I think that we should, as a year of moderation
approaches, discuss openly as a community what the impact of such policy
has been. Making such a post now is timely given that people will have the
opportunity to discuss in-person as well as online as Scaling Bitcoin is
currently underway. On the suggestion of others, I've also CC'd
bitcoin-discuss on this message.

Below, I'll share some of my own personal thoughts as a starter, but would
love to hear others feelings as well.

For me, the bitcoin-dev mailing list was a place where I started
frequenting to learn a lot about bitcoin and the development process and
interact with the community. Since moderation has begun, it seems that the
messages/day has dropped drastically. This may be a nice outcome overall
for our sanity, but I think that it has on the whole made the community
less accessible. I've heard from people (a > 1 number, myself included)
that they now self-censor because they think they will put a lot of work
into their email only for it to get moderated away as trolling/spam. Thus,
while we may not observe a high rate of moderated posts, the "chilling
effect" of moderation still manifests -- people not writing emails because
they think they may be moderated reduces the rate of people writing emails,
which is a generally valuable thing, as writing offers people a vehicle
through which to think through and communicate their ideas in detail.

Overall, I think that at the time that moderation was added to the list, it
was probably the right thing to do. We're in a different place as a
community now, so I feel we should attempt to open up this valuable
communication channel once again. My sentiment is that we enacted
moderation to protect a resource that we all felt was valuable, but in the
process, the value of the list was damaged, but not irreparably so.

Best,

Jeremy


* From the email introducing the bitcoin-dev moderation policy, "Generally
discouraged: shower thoughts, wild speculation, jokes, +1s, non-technical
bitcoin issues, rehashing settled topics without new data, moderation
 concerns."


--
@JeremyRubin 



[bitcoin-dev] Call for Proposals for Scaling Bitcoin Hong Kong

2015-11-05 Thread Jeremy via bitcoin-dev
The second Scaling Bitcoin Workshop will take place December 6th-7th at the
Cyberport in Hong Kong. We are accepting technical proposals for improving
Bitcoin performance including designs, experimental results, and
comparisons against other proposals. The goals are twofold: 1) to present
potential solutions to scalability challenges while identifying key areas
for further research, and 2) to provide a venue where researchers, developers,
and miners can communicate about Bitcoin development.

We are accepting two types of proposals: one in which accepted authors will
have an opportunity to give a 20-30 minute presentation at the workshop,
and another where accepted authors can run an hour-long interactive
workshop.

Topics of interest include:

Improving Bitcoin throughput
Layer 2 ideas (i.e. payment channels, etc.)
Security and privacy
Incentives and fee structures
Testing, simulation, and modeling
Network resilience
Anti-spam measures
Block size proposals
Mining concerns
Community coordination


All as related to the scalability of Bitcoin.

Important Dates

November 9th - Last day for submission
November 16th - Last day for notification of acceptance and feedback

Formatting

We are doing rolling acceptance, so submit your proposal as soon as you
can. Proposals may be submitted as a BIP or as a 1-2 page extended abstract
describing ideas, designs, and expected experimental results. Indicate in
the proposal whether you are interested in speaking, running an interactive
workshop, or both. If you are interested in running an interactive
workshop, please include an agenda.

Proposals should be submitted to propos...@scalingbitcoin.org by November
9th.

All talks will be livestreamed and published online, including slide decks.

--
@JeremyRubin 
