Re: [bitcoin-dev] CTV dramatically improves DLCs

2022-01-28 Thread Jeremy via bitcoin-dev
Lloyd,

This is an excellent write up, the idea and benefits are clear.

Is it correct that in the case of a 3-of-5 threshold it is a total 10x * 30x
= 300x improvement? Quite impressive.

I have a few notes of possible added benefits / features of DLCs with CTV:

1) CTV also enables a "trustless timeout" branch, whereby you can have a
failover claim that returns funds to both sides.

There are a few ways to do this:

A) The simplest is just an oracle-free  CTV whereby the
timeout transaction has an absolute/relative timelock after the creation of
the DLC in question.

B) An alternative approach I like is to have the base DLC have a branch
` CTV` which pays into a DLC that is exactly the same
except that it removes the just-used branch and replaces it with ` CTV`, which contains a relative timelock R for the desired amount of
time to resolve. This has the advantage of always guaranteeing the
participating parties at least R amount of time to "return funds" after the
Oracles have been claimed to be non-live.


2) CTV DLCs can be created non-interactively, asynchronously, and
unilaterally by a third party.

What I mean by this is that it is possible for a single party to create a
DLC on behalf of another user since there is no required per-instance
pre-signing or randomly generated state. E.g., if Alice wants to create a
DLC with Bob, and knows the contract details, oracles, and a key for Bob,
she can create the contract and pay to it unilaterally as a payment to Bob.

This enables use cases like pay-to-DLC addresses. Pay-to-DLC addresses can
also be constructed and then sent (along with a specific amount) to a third
party service (such as an exchange or Lightning node) to create DLCs
without requiring the third party service to do anything other than make
the payment as requested.
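
As a sketch of why pay-to-DLC addresses are possible: with no per-instance
pre-signing or fresh randomness, the contract address is a pure function of
public data. The helper below is hypothetical; a real implementation would
build the actual taproot output from CTV leaves rather than hash a string.

```python
import hashlib

def pay_to_dlc_address(contract_terms, oracle_pubkeys, key_a, key_b):
    # Anyone holding the contract details, oracle keys, and both parties'
    # keys derives the same address, so Alice (or an exchange paying on her
    # behalf) can fund the DLC unilaterally as an ordinary payment.
    preimage = "|".join(
        [contract_terms, *sorted(oracle_pubkeys), *sorted([key_a, key_b])]
    )
    return "dlc:" + hashlib.sha256(preimage.encode()).hexdigest()[:32]
```

Both parties (or any third-party service) compute the address independently
and can verify a payment to it without any communication round.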


3) CTV DLCs can be composed in interesting ways

Options over DLCs open up many exciting types of instrument where Alice can
do things like:
A) Create an Option expiring in 1 week where Bob can add funds to pay a
premium and "Open" a DLC on an outcome closing in 1 year.
B) Create an Option expiring in 1 week where one-of-many Bobs can pay the
premium (an on-chain DEX?).

See https://rubin.io/bitcoin/2021/12/20/advent-23/ for more concrete stuff
around this.

There are also opportunities for perpetual-like contracts where you could
combine 12 DLCs, closing one per month, into one logical DLC that can
either be paid out all at once at the end of the year, or have profit
pulled out partially at any time earlier.

4) This satisfies (I think?) my request to make DLCs expressible as Sapio
contracts in https://rubin.io/bitcoin/2021/12/20/advent-23/

5) An additional performance improvement can be had for iterative DLCs in
Lightning where you might trade over a fixed set of attestation points with
variable payout curves (e.g., just modifying some set of the CTV points).
I defer to you on performance, but this could help enable some more HFT-y
experiences for DLCs in LN.

Best,

Jeremy
--
@JeremyRubin 



On Mon, Jan 24, 2022 at 3:04 AM Lloyd Fournier via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi dlc-dev and bitcoin-dev,
>
> tl;dr OP_CTV simplifies and improves performance of DLCs by a factor of *a 
> lot*.
>
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [dlc-dev] CTV dramatically improves DLCs

2022-01-28 Thread Jeremy via bitcoin-dev
Thibaut,

CSFS might have independent benefits, but in this case CTV is not being
used in the Oracle part of the DLC; it's being used in the user-generated
mapping of Oracle result to Transaction Outcome.

So it'd only be complementary if you came up with something CSFS-based for
the Oracles.

Best,

Jeremy


On Thu, Jan 27, 2022 at 12:59 AM Thibaut Le Guilly via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi,
>
> Lloyd, thanks for this excellent writeup. I must say that indeed using CTV
> seems like it would very much lower the complexity of the DLC protocol (and
> it seems like APO would also work, thanks Jonas for pointing that out).
> Though thinking about it, I can't help wondering if the ideal op code for
> DLC wouldn't actually be CHECKSIGFROMSTACK? It feels to me that this would
> give the most natural way of doing things. If I'm not mistaken, this would
> enable simply requiring an oracle signature over the outcome, without any
> special trick, and without even needing the oracle to release a nonce in
> advance (the oracle could sign `event_outcome + event_id` to avoid
> signature reuse). I must say that I haven't studied covenant opcodes in
> detail yet so is that line of thinking correct or am I missing something?
>
> Cheers,
>
> Thibaut
>


Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-01-28 Thread Jeremy via bitcoin-dev
I probably need to reset it -- I ran into some issues with the IBD latch
bug IIRC and had difficulty producing new blocks.

I sent funds as a manual faucet to at least one person... not aware of
anyone else finding use for the signet. In part this is due to the fact
that in order to run a signet, you also kind of need to run some kind of
faucet on it, which wasn't readily available when I launched it previously.
I think I can use https://github.com/jsarenik/bitcoin-faucet-shell now
though.

Usually people use Regtest to play around with CTV, less so Signet. There
is value in a signet, but I don't think that "there's not a signet for it"
is a blocking issue vs. a nice-to-have.
--
@JeremyRubin 



On Fri, Jan 28, 2022 at 6:18 AM Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Fri, Jan 28, 2022 at 01:14:07PM +, Michael Folkson via bitcoin-dev
> wrote:
> > There is not even a custom signet with CTV (as far as I know)
>
> https://twitter.com/jeremyrubin/status/1339699281192656897
>
>
> signetchallenge=512102946e8ba8eca597194e7ed90377d9bbebc5d17a9609ab3e35e706612ee882759351ae
> addnode=50.18.75.225
>
> But I think there's only been a single coinbase consolidation tx, and no
> actual CTV transactions?
>
> Cheers,
> aj
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>


Re: [bitcoin-dev] Improving RBF Policy

2022-01-27 Thread Jeremy via bitcoin-dev
Gloria,

This is a brilliant post! Great job systematizing many of the issues. Quite
a lot to chew on & I hope other readers of this list digest the post fully.

Three things come to mind as partial responses:

under:

- **DoS Protection**: Limit two types of DoS attacks on the node's
>   mempool: (1) the number of times a transaction can be replaced and
> (2) the volume of transactions that can be evicted during a
> replacement.


I'd more simply put it:

Limiting the amount of work that must be done to consider the replacement.

We don't particularly care about goal (1) or goal (2), we care about how
much it costs to do (1) or (2). And there are scenarios where the (1) or
(2) might not be particularly high, but the total work still might be. I
can give you some examples to consider if needed. There are also scenarios
where (1) and (2) might be high, but the cost is low overall. Therefore it
makes sense to be a little more general with what the anti-DoS goal is.




An issue I'd like to toss into the mix is that of iterative / additive
batching. E.g., https://bitcoinops.org/en/cardcoins-rbf-batching/

This is where a business puts a txn in the mempool that pays to N users,
and as they see additional requests for payouts they update it to N+1,
N+2, ... N+M payouts. This iterative batching can be highly efficient
because the number of transactions per business per 10 minutes is 1 (with a
variable number of outputs).

One issue with this approach today is that because of the feerate rule, if
you go from N to N+1 you need to pay an additional 1 sat/byte over the
whole txn. Applied M times, you have to increase fees quadratically with
this approach. Therefore the less efficient long-chain-of-batches model
ends up being 'rational' with respect to mempool policy and irrational with
respect to "optimally packing blocks with transactions".

If the absolute fee rule is dropped, but feerate remains, one thing you
might see is businesses doing iterative batches with N+2M outputs whereby
they drop 2 outputs for every input they add, allowing the iterative batch
to always increase the fee-rate but possibly not triggering the quadratic
feerate issue since the transaction gets smaller over time.
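
A toy size model of that N+2M strategy (illustrative P2WPKH-ish constants,
not exact): with ~68 vB inputs and ~34 vB outputs, adding one input while
dropping two outputs keeps the transaction from growing, so each bump can
raise the feerate without triggering the quadratic escalation.

```python
def n_plus_2m_sizes(m, n, overhead=11, in_size=68, out_size=34):
    # Start with 1 input and N + 2M outputs; each replacement adds one
    # input and drops two outputs. With these constants (68 vs 2*34
    # vbytes) the txn never grows across the M replacements.
    sizes = []
    inputs, outputs = 1, n + 2 * m
    for _ in range(m + 1):
        sizes.append(overhead + inputs * in_size + outputs * out_size)
        inputs += 1
        outputs -= 2
    return sizes
```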

Another possible solution to this would be to allow relaying "txdiffs"
which only require re-relay of signatures + new/modified outputs, and not
the entire tx.

I think this iterative batching is pretty desirable to support, and so I'd
like to see a RBF model which doesn't make it "unfairly" expensive.

(I'll spare everyone the details on how CTV batching also solves this, but
feel free to ask elsewhere.)

A counterargument to additive batching is that if you instead do
non-iterative batches every minute, and you have 100 txns that arrive
uniformly, you'd end up with 10 batches of size 10 on average. The bulk of
the benefit under this model is in the non-batched to batched transition,
and the iterative part only saves on space/fees marginally after that point.
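
The counterargument can be checked with back-of-the-envelope numbers
(assumed, rough vbyte constants):

```python
def txn_vbytes(n_inputs, n_outputs, overhead=11, in_size=68, out_size=34):
    # Rough P2WPKH-ish size model; real sizes vary by script type.
    return overhead + n_inputs * in_size + n_outputs * out_size

def chain_vbytes(payouts):
    # One txn per payout (payout output + change output each).
    return payouts * txn_vbytes(1, 2)

def single_batch_vbytes(payouts):
    # One txn carrying every payout plus one change output.
    return txn_vbytes(1, payouts + 1)
```

For 100 payouts this gives 14,700 vB unbatched, 3,513 vB as one batch, and
4,530 vB as ten batches of ten, so most of the savings do indeed come from
the non-batched to batched transition.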



A final point is that a verifiable delay function could be used over, e.g.,
each of the N COutPoints individually to rate-limit transaction
replacement. The VDF period can be made shorter / eliminated depending on
the feerate increase. E.g., always consider a much higher feerate txn
whenever available; for things of equal feerate, only consider 1 per
minute. A VDF is like proof-of-work that doesn't parallelize, in case you
are unfamiliar: no matter how many computers you have, it would take about
the same amount of time. (You could parallelize across N outputs, of
course, but you're still bound minimally to the time it takes to replace
one output; doing all outputs individually is just the most flexible
option.)
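
For the unfamiliar, a minimal sketch of the sequential-work property. This
only captures the delay half: a true VDF additionally has a succinct, fast
verification procedure, which a plain hash chain lacks.

```python
import hashlib

def sequential_delay(seed, iterations):
    # Iterated SHA-256: each step depends on the previous output, so the
    # work cannot be spread across machines -- the non-parallelizable
    # property described above. Keying the seed by outpoint would
    # rate-limit replacement of each output independently.
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h
```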


Cheers,

Jeremy


Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-01-26 Thread Jeremy via bitcoin-dev
Hi Russell,

Thanks for this email, it's great to see this approach described.

A few preliminary notes of feedback:

1) A Verify approach can be made to work for OP_TXHASH (even with CTV
as-is). E.g., suppose a semantic is added for a single-byte sighash flag at
stack[-1] to read the hash at stack[-2]; then the hash can be passed in
instead of put on the stack. This has the disadvantage of larger witnesses,
but the advantage of allowing undefined sighash flags to pass for any hash
type.
2) using the internal key for APO covenants is not an option because it
makes transaction construction interactive and precludes contracts with a
NUMS point taproot key. Instead, if you want similar savings, you should
advocate an OP_GENERATOR which puts G on the stack. Further, an untagged
APO variant which has split R and S values would permit something like
 OP_GENERATOR OP_GENERATOR CHECKSIGAPO, which would be only 2 more
bytes than CTV.
3) I count something like 20 different flags in your proposal. As long as
the flags are under 40 bytes (32, assuming we want it to be easy), this
should be feasible to manipulate on the stack programmatically without
upgrading math. This is ignoring some of the more flexible additions you
mention about picking which outputs/inputs are included. However, 20 flags
means that for testing we would want comprehensive tests and understanding
for ~1 million different flag combos and the behaviors they expose. I think
this necessitates a formal model of scripting and transaction validity
properties. Are there any combinations that might be undesirable?
4) Just hashing or not hashing isn't actually that flexible, because it
doesn't natively let you do things like (for example) TLUV. You really do
need tx operations for directly manipulating the data on the stack to
construct the hash if you want more flexible covenants. This happens to be
compatible with either a Verify or Push approach, since you either
destructure a pushed hash or build up a hash for a verify.
5) Flexible hashing has the potential for quadratic hashing bugs. The
fields you propose seem to be within a similar range to the work you could
cause with a regular OP_HASH256, although you'd want to be careful with
some of the proposed extensions that you don't create a risk of quadratic
hashing, which seems possible with an output-selecting opcode unless you
cache properly (which might be tricky to do). Overall, the fields
explicitly mentioned seem safe; the "possibles" seem to have some more
complex interactions. E.g., CTV with the ability to pick a subset of
outputs would be exposed to quadratic hashing.
6) Missing field: covering the annex or some sub-range of the annex
(quadratic hashing issues on the latter)
7) It seems simpler to, for many of these fields, push values directly (as
in OP_PUSHTXDATA from Johnson Lau) because the combo of flags to push the
hash of a single output's amount to emulate OP_AMOUNT looks 'general but
annoying'. It may make more sense to do the OP_PUSHTXDATA style opcode
instead. This also makes it simpler to think about the combinations of
flags, since it's really N independent multi-byte opcodes.
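
A quick model of the quadratic hashing concern from point 5 (toy constants;
it counts bytes hashed, assuming no caching):

```python
def hashing_work(n, select_outputs=True, out_size=34):
    # Bytes hashed validating a txn with n inputs and n outputs. If each
    # input's script can commit to its own subset of outputs and nothing is
    # cached, every input may re-hash up to all n outputs: O(n^2) work.
    if select_outputs:
        return n * (n * out_size)
    # With one fixed outputs-hash (as in plain CTV), it is computed once
    # and shared, so work stays linear in transaction size.
    return n * out_size
```

Doubling n quadruples the worst-case work in the output-selecting case,
which is why per-subset caching (or avoiding subset selection) matters.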


Ultimately if we had OP_TXHASH available "tomorrow", I would be able to
build out the use cases I care about for CTV (and more). So I don't have an
opposition on it with regards to lack of function.

However, if one finds the TXHASH approach acceptable, then you should also
be relatively fine doing APO, CTV, CSFS, and TXHASH in any order (whenever
each is "ready"), unless you are particularly sensitive to "technical debt"
and "soft fork processes". The only costs of doing something for CTV or APO
given an eventual TXHASH are perhaps a wasted key version or the 32-byte
argument of a NOP opcode, and some code to maintain.

Are there other costs I am missing?

However, as it pertains to actual rollout:

- OP_TXHASH+CSFSV doesn't seem to be the "full" set of things needed (we
still need e.g. OP_CAT, upgraded >=64-bit math, TLUV or OP_TWEAK
OP_TAPBRANCH OP_MANIPULATETAPTREE, and more) to fully realize the
covenanting power it intends to introduce.
- What sort of timeline would it take to ready something like TXHASH (and
desired friends) given greater scope of testing and analysis (standalone +
compared to CTV)?
- Is there opposition from the community to this degree of
general/recursive covenants?
- Does it make "more sense" to invest the research and development effort
that would go into proving TXHASH safe, for example, into Simplicity
instead?

Overall, *my opinion* is that:

- TXHASH is an acceptable theoretical approach, and I am happy to put more
thought into it and maybe draft a prototype of it.
- I prefer CTV as a first step for pragmatic engineering and availability
timeline reasons.
- If TXHASH were to take, optimistically, 2 years to develop and review,
and then 1 year to activate, the "path dependence of software" would put
Bitcoin in a much better place were we to have CTV within 1 year and
applications (that are to be 

[bitcoin-dev] BIP-119 CTV Meeting #2 Agenda for Tuesday January 25th at 12:00 PT

2022-01-23 Thread Jeremy via bitcoin-dev
Bitcoin Developers,

The 2nd instance of the recurring meeting is scheduled for Tuesday January
25th at 12:00 PT in channel ##ctv-bip-review in libera.chat IRC server.

The meeting should take approximately 2 hours.

The topics proposed to be discussed are agendized below. Please review the
agenda in advance of the meeting to make the best use of everyone's time.

If you have any feedback or proposed content changes to the agenda please
let me know.

See you Tuesday,

Jeremy

- Update on Bounty Program & Feedback (10 Min)
- Feedback Recap (20 Min)
  - In this section we'll review any recent feedback or review of CTV.
To expedite the meeting, a summary is provided below of the main
feedback received since the last meeting and responses to them so that the
time allotted may be devoted to follow up questions.
  - Luke Dashjr's feedback
- thread:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019776.html
- summary:
Dashjr notes that while CTV is not done, it may be nearly done.
Dashjr requests that some applications be made BIP-quality before
proceeding, amongst other smaller feedbacks.
Dashjr also expresses his concerns about activation logic.
Respondents debated the activation logic, and there was a general
sentiment to keep the discussion of CTV and activation logic somewhat
separate, as Activation is a general concern pertaining to all upgrades and
not CTV in particular.
Rubin responded asking if BIP-quality is required or if examples
like those in rubin.io/advent21 suffice.
  - James O'Beirne's feedback
- Github Link:
https://github.com/bitcoin/bitcoin/pull/21702#pullrequestreview-859718084
- summary:
O'Beirne tests the reindexing performance with the CTV patches and
finds a minor performance regression due to the cache precomputations.
Rubin responds with patches for an improved caching strategy that
precomputes the CTV hashes only when they are used, but it is a little more
complex to review.
Rubin also points out that the tested range is not representative
of "current" blocks which have a higher proportion of segwit.
  - Peter Todd's Feedback
- thread:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019738.html
- response:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019739.html
- summary:
Todd reviewed the BIP and an (outdated) implementation and was
disappointed to find that the testing was insufficient, the analysis of
validation resources was insufficient, and the quality of proof of concept
applications was insufficient.
Rubin responded by pointing Todd to the most up to date
implementation which has more tests, updated the link in the BIP to the PR,
updated the BIP to describe resource usage, and asked what the bar is
expected to be for applications.
Rubin further responded with an analysis of current congested
mempool behavior here:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019756.html
.
Todd is yet to respond.
- What is Sapio / How to think about Programming with CTV (15 Min)
  - Resources to review
- https://learn.sapio-lang.org/ch02-00-bip-119.html
- https://rubin.io/bitcoin/2021/12/06/advent-9/
- https://rubin.io/bitcoin/2021/12/15/advent-18/
  - Composability
  - What's all this "Non-Interactivity" Business?
- Vaults (20 Min)
  - Resources:
https://rubin.io/bitcoin/2021/12/07/advent-10/
https://rubin.io/bitcoin/2021/12/08/advent-11/
https://github.com/kanzure/python-vaults
- Congestion Control (20 Mins)
  - Resources:
https://rubin.io/bitcoin/2021/12/09/advent-12/

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019756.html
https://utxos.org/analysis/batching_sim/
https://utxos.org/analysis/bip_simulation/
- Payment Pools (20 Mins)
  - Resources:
https://rubin.io/bitcoin/2021/12/10/advent-13/
https://rubin.io/bitcoin/2021/12/15/advent-18/

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019419.html

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017964.html
- General Q (15 Mins)


--
@JeremyRubin 



Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-01-19 Thread Jeremy via bitcoin-dev
s the whole package and that can get really messy.
>>>>
>>>> more generally speaking, you could imagine a future where mempools
>>>> track many alternative things that might want to be in a transaction.
>>>>
>>>> suppose there are N inputs each with a weight and an amount of fee
>>>> being added and the sighash flags let me pick any subset of them. However,
>>>> for a txn to be standard it must be < 100k bytes and for it to be consensus
>>>> < 1mb. Now it is possible you have to solve a knapsack problem in order to
>>>> rationally bundle this transaction out of all possibilities.
>>>>
>>>> This problem can get even thornier, suppose that the inputs I'm adding
>>>> themselves are the outputs of another txn in the mempool, now i have to
>>>> track and propagate the feerates of that child back up to the parent txn
>>>> and track all these dependencies.
>>>>
>>>> perhaps with very careful engineering these issues can be tamed.
>>>> however it seems with sponsors or fee accounts, by separating the pays-for
>>>> from the participates-in concerns we can greatly simplify it to something
>>>> like: compute effective feerate for a txn, including all sponsors that pay
>>>> more than the feerate of the base txn. Mine that txn and its subsidies
>>>> using the normal algo. If you run out of space, all subsidies are
>>>> same-sized so just take the ones that pay the highest amount up until the
>>>> added marginal feerate is less than the next eligible txn.
>>>>
>>>>
>>>> --
>>>> @JeremyRubin <https://twitter.com/JeremyRubin>
>>>> <https://twitter.com/JeremyRubin>
>>>>
>>>>
>>>> On Tue, Jan 18, 2022 at 6:38 PM Billy Tetrud 
>>>> wrote:
>>>>
>>>>> I see, its not primarily to make it cheaper to append fees, but also
>>>>> allows appending fees in cases that aren't possible now. Is that right? I
>>>>> can certainly see the benefit of a more general way to add a fee to any
>>>>> transaction, regardless of whether you're related to that transaction or
>>>>> not.
>>>>>
>>>>> How would you compare the pros and cons of your account-based approach
>>>>> to something like a new sighash flag? Eg a sighash flag that says "I'm
>>>>> signing this transaction, but the signature is only valid if mined in the
>>>>> same block as transaction X (or maybe transactions LIST)". This could be
>>>>> named SIGHASH_EXTERNAL. Doing this would be a lot more similar to other
>>>>> bitcoin transactions, and no special account would need to be created. Any
>>>>> transaction could specify this. At least that's the first thought I would
>>>>> have in designing a way to arbitrarily bump fees. Have you compared your
>>>>> solution to something more familiar like that?
>>>>>
>>>>> On Tue, Jan 18, 2022 at 11:43 AM Jeremy  wrote:
>>>>>
>>>>>> Can you clarify what you mean by "improve the situation"?
>>>>>>
>>>>>> There's a potential mild bytes savings, but the bigger deal is that
>>>>>> the API should be much less vulnerable to pinning issues, fix dust 
>>>>>> leakage
>>>>>> for eltoo like protocols, and just generally allow protocol designs to be
>>>>>> fully abstracted from paying fees. You can't easily mathematically
>>>>>> quantify API improvements like that.
>>>>>> --
>>>>>> @JeremyRubin <https://twitter.com/JeremyRubin>
>>>>>> <https://twitter.com/JeremyRubin>
>>>>>>
>>>>>>
>>>>>> On Tue, Jan 18, 2022 at 8:13 AM Billy Tetrud 
>>>>>> wrote:
>>>>>>
>>>>>>> Do you have any back-of-the-napkin math on quantifying how much this
>>>>>>> would improve the situation vs existing methods (eg cpfp)?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sat, Jan 1, 2022 at 2:04 PM Jeremy via bitcoin-dev <
>>>>>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>>>>>
>>>>>>>> Happy new years devs,
>>>>>>>>
>>>>>>>> I figured I would share some thoughts for 

Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-01-18 Thread Jeremy via bitcoin-dev
gt;>> same block as transaction X (or maybe transactions LIST)". This could be
>>> named SIGHASH_EXTERNAL. Doing this would be a lot more similar to other
>>> bitcoin transactions, and no special account would need to be created. Any
>>> transaction could specify this. At least that's the first thought I would
>>> have in designing a way to arbitrarily bump fees. Have you compared your
>>> solution to something more familiar like that?
>>>
>>> On Tue, Jan 18, 2022 at 11:43 AM Jeremy  wrote:
>>>
>>>> Can you clarify what you mean by "improve the situation"?
>>>>
>>>> There's a potential mild bytes savings, but the bigger deal is that the
>>>> API should be much less vulnerable to pinning issues, fix dust leakage for
>>>> eltoo like protocols, and just generally allow protocol designs to be fully
>>>> abstracted from paying fees. You can't easily mathematically quantify API
>>>> improvements like that.
>>>> --
>>>> @JeremyRubin <https://twitter.com/JeremyRubin>
>>>> <https://twitter.com/JeremyRubin>
>>>>
>>>>
>>>> On Tue, Jan 18, 2022 at 8:13 AM Billy Tetrud 
>>>> wrote:
>>>>
>>>>> Do you have any back-of-the-napkin math on quantifying how much this
>>>>> would improve the situation vs existing methods (eg cpfp)?
>>>>>
>>>>>
>>>>>
>>>>> On Sat, Jan 1, 2022 at 2:04 PM Jeremy via bitcoin-dev <
>>>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>>>
>>>>>> Happy new years devs,
>>>>>>
>>>>>> I figured I would share some thoughts for conceptual review that have
>>>>>> been bouncing around my head as an opportunity to clean up the fee paying
>>>>>> semantics in bitcoin "for good". The design space is very wide on the
>>>>>> approach I'll share, so below is just a sketch of how it could work which
>>>>>> I'm sure could be improved greatly.
>>>>>>
>>>>>> Transaction fees are an integral part of bitcoin.
>>>>>>
>>>>>> However, due to quirks of Bitcoin's transaction design, fees are a
>>>>>> part of the transactions that they occur in.
>>>>>>
>>>>>> While this works in a "Bitcoin 1.0" world, where all transactions are
>>>>>> simple on-chain transfers, real world use of Bitcoin requires support for
>>>>>> things like Fee Bumping stuck transactions, DoS resistant Payment 
>>>>>> Channels,
>>>>>> and other long lived Smart Contracts that can't predict future fee rates.
>>>>>> Having the fees paid in band makes writing these contracts much more
>>>>>> difficult as you can't merely express the logic you want for the
>>>>>> transaction, but also the fees.
>>>>>>
>>>>>> Previously, I proposed a special type of transaction called a
>>>>>> "Sponsor" which has some special consensus + mempool rules to allow
>>>>>> arbitrarily appending fees to a transaction to bump it up in the mempool.
>>>>>>
>>>>>> As an alternative, we could establish an account system in Bitcoin as
>>>>>> an "extension block".
>>>>>>
>>>>>> *Here's how it might work:*
>>>>>>
>>>>>> 1. Define a special anyone can spend output type that is a "fee
>>>>>> account" (e.g. segwit V2). Such outputs have a redeeming key and an 
>>>>>> amount
>>>>>> associated with them, but are overall anyone can spend.
>>>>>> 2. All deposits to these outputs get stored in a separate UTXO
>>>>>> database for fee accounts
>>>>>> 3. Fee accounts can sign only two kinds of transaction: A: a fee
>>>>>> amount and a TXID (or Outpoint?); B: a withdraw amount, a fee, and
>>>>>> an address
>>>>>> 4. These transactions are committed in an extension block merkle
>>>>>> tree. While the actual signature must cover the TXID/Outpoint, the
>>>>>> committed data need only cover the index in the block of the transaction.
>>>>>> The public key for account lookup can be recovered from the message +
>>>>>> signature.
>>>>>> 5. In any block

Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-01-18 Thread Jeremy via bitcoin-dev
The issue with sighash flags is that because you make transactions third
party malleable it becomes possible to bundle and unbundle transactions.

This means there are circumstances where an attacker could e.g. see your
txn, and then add a lot of junk change/inputs + 25 descendants and strongly
anchor your transaction to the bottom of the mempool.

because of rbf rules requiring more fee and feerate, this means you have to
bump across the whole package and that can get really messy.

more generally speaking, you could imagine a future where mempools track
many alternative things that might want to be in a transaction.

suppose there are N inputs each with a weight and an amount of fee being
added and the sighash flags let me pick any subset of them. However, for a
txn to be standard it must be < 100k bytes and for it to be consensus <
1mb. Now it is possible you have to solve a knapsack problem in order to
rationally bundle this transaction out of all possibilities.

This problem can get even thornier, suppose that the inputs I'm adding
themselves are the outputs of another txn in the mempool, now i have to
track and propagate the feerates of that child back up to the parent txn
and track all these dependencies.

perhaps with very careful engineering these issues can be tamed. however it
seems with sponsors or fee accounts, by separating the pays-for from the
participates-in concerns we can greatly simplify it to something like:
compute effective feerate for a txn, including all sponsors that pay more
than the feerate of the base txn. Mine that txn and its subsidies using
the normal algo. If you run out of space, all subsidies are same-sized so
just take the ones that pay the highest amount up until the added marginal
feerate is less than the next eligible txn.


--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>


On Tue, Jan 18, 2022 at 6:38 PM Billy Tetrud  wrote:

> I see, its not primarily to make it cheaper to append fees, but also
> allows appending fees in cases that aren't possible now. Is that right? I
> can certainly see the benefit of a more general way to add a fee to any
> transaction, regardless of whether you're related to that transaction or
> not.
>
> How would you compare the pros and cons of your account-based approach to
> something like a new sighash flag? Eg a sighash flag that says "I'm signing
> this transaction, but the signature is only valid if mined in the same
> block as transaction X (or maybe transactions LIST)". This could be named
> SIGHASH_EXTERNAL. Doing this would be a lot more similar to other bitcoin
> transactions, and no special account would need to be created. Any
> transaction could specify this. At least that's the first thought I would
> have in designing a way to arbitrarily bump fees. Have you compared your
> solution to something more familiar like that?
>
> On Tue, Jan 18, 2022 at 11:43 AM Jeremy  wrote:
>
>> Can you clarify what you mean by "improve the situation"?
>>
>> There's a potential mild bytes savings, but the bigger deal is that the
>> API should be much less vulnerable to pinning issues, fix dust leakage for
>> eltoo like protocols, and just generally allow protocol designs to be fully
>> abstracted from paying fees. You can't easily mathematically quantify API
>> improvements like that.
>> --
>> @JeremyRubin <https://twitter.com/JeremyRubin>
>> <https://twitter.com/JeremyRubin>
>>
>>
>> On Tue, Jan 18, 2022 at 8:13 AM Billy Tetrud 
>> wrote:
>>
>>> Do you have any back-of-the-napkin math on quantifying how much this
>>> would improve the situation vs existing methods (eg cpfp)?
>>>
>>>
>>>
>>> On Sat, Jan 1, 2022 at 2:04 PM Jeremy via bitcoin-dev <
>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
>>>> Happy new years devs,
>>>>
>>>> I figured I would share some thoughts for conceptual review that have
>>>> been bouncing around my head as an opportunity to clean up the fee paying
>>>> semantics in bitcoin "for good". The design space is very wide on the
>>>> approach I'll share, so below is just a sketch of how it could work which
>>>> I'm sure could be improved greatly.
>>>>
>>>> Transaction fees are an integral part of bitcoin.
>>>>
>>>> However, due to quirks of Bitcoin's transaction design, fees are a part
>>>> of the transactions that they occur in.
>>>>
>>>> While this works in a "Bitcoin 1.0" world, where all transactions are
>>>> simple on-chain transfers, real world use of Bitcoin requires support for
>>>> things l

Re: [bitcoin-dev] CTV BIP review

2022-01-18 Thread Jeremy via bitcoin-dev
Thanks for the detailed review.

I'll withhold comment around activation logic and leave that for others to
discuss.

w.r.t. the language cleanups I'll make a PR that (I hope) clears up the
small nits later today or tomorrow. Some of it's kind of annoying because
the legal definition of covenant is "A formal agreement or promise, usually
included in a contract or deed, to do or not do a particular act; a compact
or stipulation made in writing or by parol." so I do think things like
CLTV/CSV are covenants since it's a binding promise to not spend before a
certain time... it might be out of scope for the BIP to fully define these
terms because it doesn't really matter what a covenant could be as much as
it matters what CTV is specifically.

On the topic of drafting BIPs for specific use cases, I agree that would be
valuable and can consider it.

However, I'm a bit skeptical of that approach overall as I don't
necessarily think that the applications *must be* standard, and I view BIPs
as primarily for standardization whereas part of the flexibility of
CTV/Sapio allows users to figure out how they want to use it.

E.g., we do not yet have a BIP for MuSig or even Multisig in Taproot;
there are some papers and example implementations, but nothing formal yet
(https://bitcoin.stackexchange.com/questions/111666/support-for-taproot-multisig-descriptors).
Perhaps this is an opportunity for CTV to lead on the amount of formal
application designs available before 'release'.

As a starting point, maybe you could review some of the application focused
posts in rubin.io/advent21 and let me know where they seem deficient?

Also a BIP describing how to build something like Sapio (and less so Sapio
itself, since it's still early days for that) might help for folks to be
able to think through how to compile to CTV contracts? But again, I'm
skeptical of the value of a BIP v.s. the documentation and examples
available in the code and https://learn.sapio-lang.org.

I think it's an interesting discussion too because as we've just seen the
LN ecosystem start the BLIP standards, would an example of non-interactive
channels be best written up as a BIP, a BLIP, or a descriptive blog/mailing
list post?

--
@JeremyRubin 



On Tue, Jan 18, 2022 at 1:19 PM Luke Dashjr via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> tl;dr: I don't think CTV is ready yet (but probably close), and in any
> case
> definitely not worth reviving BIP 9 with its known flaws and vulnerability.
>
> My review here is based solely on the BIP, with no outside context (aside
> from
> current consensus rules, of course). In particular, I have _not_ looked at
> the CTV code proposed for Bitcoin Core yet.
>
> >Covenants are restrictions on how a coin may be spent beyond key
> ownership.
>
> nit: Poorly phrased. Even simple scripts can do that already.
>
> >A few examples are described below, which should be the subject of future
> non-consensus standardization efforts.
>
> I would ideally like to see fully implemented BIPs for at least one of
> these
> (preferably the claimed CoinJoin improvements) before we move toward
> activation.
>
> >Congestion Controlled Transactions
>
> I think this use case hasn't been fully thought through yet. It seems like
> it
> would be desirable for this purpose, to allow any of the recipients to
> claim
> their portion of the payment without footing the fee for every other
> payment
> included in the batch. This is still a covenant-type solution, but one
> that
> BIP 119 cannot support as-is.
>
> (I realise this may be a known and accepted limitation, but I think it
> should
> be addressed in the BIP)
>
> >Payment Channels
>
> Why batch mere channel creation? Seems like the spending transaction
> should
> really be the channel closing.
>
> >CHECKTEMPLATEVERIFY makes it much easier to set up trustless CoinJoins
> than
> previously because participants agree on a single output which pays all
> participants, which will be lower fee than before.
>
> I don't see how. They still have to agree in advance on the outputs, and
> the
> total fees will logically be higher than not using CTV...?
>
> >Further Each participant doesn't need to know the totality of the outputs
> committed to by that output, they only have to verify their own sub-tree
> will
> pay them.
>
> I don't see any way to do this with the provided implementation.
>
> >Deployment could be done via BIP 9 VersionBits deployed through Speedy
> Trial.
>
> Hard NACK on this. BIP 9 at this point represents developers attempting to
> disregard and impose their will over community consensus, as well as an
> attempt to force a miner veto backdoor/vulnerability on deployment. It
> should
> never be used again.
>
> Speedy Trial implemented with BIP 8 made sense* as a possible neutral
> compromise between LOT=True and LOT=False (which could be deployed prior
> to
> or in parallel), but using BIP 9 would 

Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-01-18 Thread Jeremy via bitcoin-dev
Can you clarify what you mean by "improve the situation"?

There's a potential mild bytes savings, but the bigger deal is that the API
should be much less vulnerable to pinning issues, fix dust leakage for
eltoo like protocols, and just generally allow protocol designs to be fully
abstracted from paying fees. You can't easily mathematically quantify API
improvements like that.
--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>


On Tue, Jan 18, 2022 at 8:13 AM Billy Tetrud  wrote:

> Do you have any back-of-the-napkin math on quantifying how much this would
> improve the situation vs existing methods (eg cpfp)?
>
>
>
> On Sat, Jan 1, 2022 at 2:04 PM Jeremy via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Happy new years devs,
>>
>> I figured I would share some thoughts for conceptual review that have
>> been bouncing around my head as an opportunity to clean up the fee paying
>> semantics in bitcoin "for good". The design space is very wide on the
>> approach I'll share, so below is just a sketch of how it could work which
>> I'm sure could be improved greatly.
>>
>> Transaction fees are an integral part of bitcoin.
>>
>> However, due to quirks of Bitcoin's transaction design, fees are a part
>> of the transactions that they occur in.
>>
>> While this works in a "Bitcoin 1.0" world, where all transactions are
>> simple on-chain transfers, real world use of Bitcoin requires support for
>> things like Fee Bumping stuck transactions, DoS resistant Payment Channels,
>> and other long lived Smart Contracts that can't predict future fee rates.
>> Having the fees paid in band makes writing these contracts much more
>> difficult as you can't merely express the logic you want for the
>> transaction, but also the fees.
>>
>> Previously, I proposed a special type of transaction called a "Sponsor"
>> which has some special consensus + mempool rules to allow arbitrarily
>> appending fees to a transaction to bump it up in the mempool.
>>
>> As an alternative, we could establish an account system in Bitcoin as an
>> "extension block".
>>
>> *Here's how it might work:*
>>
>> 1. Define a special anyone-can-spend output type that is a "fee account"
>> (e.g. segwit V2). Such outputs have a redeeming key and an amount
>> associated with them, but are overall anyone-can-spend.
>> 2. All deposits to these outputs get stored in a separate UTXO database
>> for fee accounts
>> 3. Fee accounts can sign only two kinds of transaction: A: a fee amount
>> and a TXID (or Outpoint?); B: a withdraw amount, a fee, and an address
>> 4. These transactions are committed in an extension block merkle tree.
>> While the actual signature must cover the TXID/Outpoint, the committed data
>> need only cover the index in the block of the transaction. The public key
>> for account lookup can be recovered from the message + signature.
>> 5. In any block, any of the fee account deposits can be: released into
>> fees if there is a corresponding tx; consolidated together to reduce the
>> number of utxos (this can be just an OP_TRUE no metadata needed); or
>> released into fees *and paid back* into the requested withdrawal key
>> (encumbering a 100 block timeout). Signatures must be unique in a block.
>> 6. Mempool logic is updated to allow attaching of account fee spends to
>> transactions; the mempool can restrict an account from spending more than
>> its balance.
>>
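The balance rule in step 6 can be sketched in a few lines. This is a
hypothetical illustration only: the `FeeAccountLedger` name, its methods,
and the sat amounts are invented for the example and are not part of any
proposed implementation.

```python
# Hypothetical sketch of the mempool-side accounting described in step 6:
# fee-account spends may be attached to transactions, but an account must
# never commit more than its current balance.

class FeeAccountLedger:
    def __init__(self):
        self.balances = {}  # account key -> total confirmed deposits (sats)
        self.pending = {}   # account key -> sum of attached fee spends (sats)

    def deposit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def attach_fee(self, account, txid, amount):
        """Attach `amount` sats of fee from `account` to `txid`.

        Returns True if accepted, False if it would overdraw the account.
        """
        committed = self.pending.get(account, 0)
        if committed + amount > self.balances.get(account, 0):
            return False  # would spend more than its balance
        self.pending[account] = committed + amount
        return True

ledger = FeeAccountLedger()
ledger.deposit("alice", 10_000)
assert ledger.attach_fee("alice", "txid1", 6_000) is True
assert ledger.attach_fee("alice", "txid2", 6_000) is False  # overdraw
```

Since there is no rich state, deposits and fee attachments commute, which
matches the claim below that updates can be applied conflict-free in any
order.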
>> *But aren't accounts "bad"?*
>>
>> Yes, accounts are bad. But these accounts are not bad, because any funds
>> withdrawn from the fee extension are fundamentally locked for 100 blocks as
>> a coinbase output, so there should be no issues with any series of reorgs.
>> Further, since there is no "rich state" for these accounts, the state
>> updates can always be applied in a conflict-free way in any order.
>>
>>
>> *Improving the privacy of this design:*
>>
>> This design could likely be modified to implement something like
>> Tornado.cash or something else so that the fee account paying can be
>> unlinked from the transaction being paid for, improving privacy at the
>> expense of being a bit more expensive.
>>
>> Other operations could be added to allow a trustless mixing to be done by
>> miners automatically where groups of accounts with similar values are
>> trustlessly split into a common denominator and change, and keys are
>> derived via a verifiable stealth address like 

[bitcoin-dev] SASE Invoices

2022-01-17 Thread Jeremy via bitcoin-dev
Devs,

I was recently speaking with Casey R about some of the infrastructural
problems with addresses and felt it would be worth summarizing some notes
from that conversation for y'all to consider more broadly.

Currently, when you generate (e.g., a Taproot address):

- The key may or may not be a NUMS point
- Script paths might still be required for safety (e.g. a backup federation)
- There may be single use constructs (e.g. HTLC)
- The amount required to be sent might be specific (e.g., HTLC or a vault)

These issues exist in other address types as well, and covenants (such as
the kinds enabled by TLUV, APO, or CTV) make exact amounts also important.

As such, it may make sense to specify a new type of Invoice that's a bit
like a SASE, a "Self Addressed Stamped Envelope". SASEs simplify mail
processing because the processor just puts whatever was requested in the
exact envelope you provided, and that's "self authenticated".

A SASE Invoice for Bitcoin might look like an address *plus* a signature
covering that address and any metadata required for the payment to be
considered valid. For example, I might make a TR key and specify that it is
my hot wallet and therefore permitted only for amounts between 0 and 1
Bitcoin. Or I might specify that a covenant-containing address should hold
exactly 0.1234 Bitcoin. Other use cases might include "good for one payment only"
or "please do not use after  date, contact to renew". Some of these
might be perilous, so it's worth careful thought on what acceptable SASE
policies might be.
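The shape of such an invoice can be sketched as "address plus signed
policy". This is a toy illustration under loud assumptions: it uses an
HMAC as a stand-in for a real Bitcoin key signature, and the
`make_sase_invoice` name and policy fields are invented for the example,
not part of any proposed standard.

```python
import hashlib
import hmac
import json

def make_sase_invoice(address, policy, signing_key):
    """Bind an address to its payment policy so a payer can later prove
    the payment matched what was requested (toy HMAC in place of a real
    signature)."""
    payload = json.dumps({"address": address, "policy": policy},
                         sort_keys=True).encode()
    sig = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"address": address, "policy": policy, "sig": sig}

def verify_sase_invoice(invoice, signing_key):
    payload = json.dumps({"address": invoice["address"],
                          "policy": invoice["policy"]},
                         sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, invoice["sig"])

key = b"demo-key"
inv = make_sase_invoice("bc1p...", {"max_amount_sats": 100_000_000,
                                    "single_use": True}, key)
assert verify_sase_invoice(inv, key)
inv["policy"]["max_amount_sats"] = 1  # tampering invalidates the invoice
assert not verify_sase_invoice(inv, key)
```

A real spec would replace the HMAC with a signature from a key tied to the
address (or the hash-to-curve NUMS construction discussed below) so the
invoice is verifiable by anyone, not just the key holder.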

Businesses making payments might receive a SASE Invoice and save the SASE.
Then, in the future, a SASE can be used e.g. in dispute mediation to show
that the payment sent corresponded to the one requested by that address.
Businesses could even give users unique codes to put into their SASE
generator to bind the address for their own use / to ensure the usage right
of the address isn't transferrable.

If the top-level TR key is a NUMS point, and no signature can be produced
(as might happen for a covenant), then it could be a NUMS point derived
from the hash-to-curve of the SASE Invoice policy.

Such SASE Invoice standards would also go a long way towards
combating address reuse. If standard software does not produce reusable
SASE Invoices, then it would be clear to users that they should generate a
SASE with the expected amount per requested payment.

A well designed SASE spec could also cover things like EPKs and derivation
paths as well.

Previously, https://github.com/bitcoin/bips/blob/master/bip-0070.mediawiki
was designed in a similar problem space. A big distinction for SASE
Invoices would be a focus on generating fixed payment codes rather than
initiating an online protocol / complicated handshaking.

Cheers,

Jeremy

p.s.:

There's something that looks even *more* like a single-use SASE: you
might spend one of your existing UTXOs with the ANYONECANPAY and SINGLE
sighash flags to pay to an output which has the funds requested + the
funds in the output. A payer completing this transaction has no choice but
to pay you the correct amount/fees for the specific txn, and it clearly
cannot be reused. This is quite bizarre, but is noted here if anyone wants
something even closer to a physical SASE.

--
@JeremyRubin 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] bip39

2022-01-17 Thread Jeremy via bitcoin-dev
This is a good point, but can be addressed by writing a visible separator
character between words (e.g., win x estate).

Changing BIP39 would be hard since software expects a standard list. It
would also be possible to rejection-sample for seeds that do not contain
these pairs, though it's unclear how much entropy would be lost from that.
--
@JeremyRubin 



On Mon, Jan 17, 2022 at 2:26 PM Erik Aronesty via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> really don't like that art, work, and artwork are 3 different words
>
> would be nice to clean up adjacent ambiguity
>
> it's not a big deal, but it can lead to confusion when writing things down
>
>
> dup: ('canal', 'arm') ('can', 'alarm')
> dup: ('canal', 'one') ('can', 'alone')
> dup: ('canal', 'ready') ('can', 'already')
> dup: ('card', 'anger') ('car', 'danger')
> dup: ('card', 'ice') ('car', 'dice')
> dup: ('card', 'inner') ('car', 'dinner')
> dup: ('card', 'raw') ('car', 'draw')
> dup: ('cart', 'able') ('car', 'table')
> dup: ('cart', 'ask') ('car', 'task')
> dup: ('cart', 'hat') ('car', 'that')
> dup: ('cart', 'hen') ('car', 'then')
> dup: ('cart', 'issue') ('car', 'tissue')
> dup: ('cart', 'one') ('car', 'tone')
> dup: ('cart', 'own') ('car', 'town')
> dup: ('cart', 'rack') ('car', 'track')
> dup: ('cart', 'rain') ('car', 'train')
> dup: ('cart', 'win') ('car', 'twin')
> dup: ('catch', 'air') ('cat', 'chair')
> dup: ('erase', 'arch') ('era', 'search')
> dup: ('fatal', 'arm') ('fat', 'alarm')
> dup: ('fatal', 'one') ('fat', 'alone')
> dup: ('fatal', 'ready') ('fat', 'already')
> dup: ('feed', 'anger') ('fee', 'danger')
> dup: ('feed', 'ice') ('fee', 'dice')
> dup: ('feed', 'inner') ('fee', 'dinner')
> dup: ('feed', 'raw') ('fee', 'draw')
> dup: ('feel', 'earn') ('fee', 'learn')
> dup: ('feel', 'end') ('fee', 'lend')
> dup: ('gasp', 'act') ('gas', 'pact')
> dup: ('gasp', 'age') ('gas', 'page')
> dup: ('gasp', 'air') ('gas', 'pair')
> dup: ('gasp', 'ill') ('gas', 'pill')
> dup: ('gasp', 'raise') ('gas', 'praise')
> dup: ('gasp', 'rice') ('gas', 'price')
> dup: ('gasp', 'ride') ('gas', 'pride')
> dup: ('gasp', 'roof') ('gas', 'proof')
> dup: ('kite', 'merge') ('kit', 'emerge')
> dup: ('kite', 'motion') ('kit', 'emotion')
> dup: ('kite', 'state') ('kit', 'estate')
> dup: ('lawn', 'arrow') ('law', 'narrow')
> dup: ('lawn', 'either') ('law', 'neither')
> dup: ('lawn', 'ice') ('law', 'nice')
> dup: ('legal', 'arm') ('leg', 'alarm')
> dup: ('legal', 'one') ('leg', 'alone')
> dup: ('legal', 'ready') ('leg', 'already')
> dup: ('seat', 'able') ('sea', 'table')
> dup: ('seat', 'ask') ('sea', 'task')
> dup: ('seat', 'hat') ('sea', 'that')
> dup: ('seat', 'hen') ('sea', 'then')
> dup: ('seat', 'issue') ('sea', 'tissue')
> dup: ('seat', 'one') ('sea', 'tone')
> dup: ('seat', 'own') ('sea', 'town')
> dup: ('seat', 'rack') ('sea', 'track')
> dup: ('seat', 'rain') ('sea', 'train')
> dup: ('seat', 'win') ('sea', 'twin')
> dup: ('skin', 'arrow') ('ski', 'narrow')
> dup: ('skin', 'either') ('ski', 'neither')
> dup: ('skin', 'ice') ('ski', 'nice')
> dup: ('tent', 'able') ('ten', 'table')
> dup: ('tent', 'ask') ('ten', 'task')
> dup: ('tent', 'hat') ('ten', 'that')
> dup: ('tent', 'hen') ('ten', 'then')
> dup: ('tent', 'issue') ('ten', 'tissue')
> dup: ('tent', 'one') ('ten', 'tone')
> dup: ('tent', 'own') ('ten', 'town')
> dup: ('tent', 'rack') ('ten', 'track')
> dup: ('tent', 'rain') ('ten', 'train')
> dup: ('tent', 'win') ('ten', 'twin')
> dup: ('used', 'anger') ('use', 'danger')
> dup: ('used', 'ice') ('use', 'dice')
> dup: ('used', 'inner') ('use', 'dinner')
> dup: ('used', 'raw') ('use', 'draw')
> dup: ('wine', 'merge') ('win', 'emerge')
> dup: ('wine', 'motion') ('win', 'emotion')
> dup: ('wine', 'state') ('win', 'estate')
> dup: ('wing', 'host') ('win', 'ghost')
> dup: ('wing', 'love') ('win', 'glove')
> dup: ('wing', 'old') ('win', 'gold')
> dup: ('wing', 'own') ('win', 'gown')
> dup: ('wing', 'race') ('win', 'grace')
> dup: ('wing', 'rain') ('win', 'grain')
> dup: ('wink', 'now') ('win', 'know')
> dup: ('youth', 'under') ('you', 'thunder')
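Pairs like the ones listed above can be found mechanically from any
wordlist: two word pairs clash exactly when their concatenations are equal
but split differently. The function name and the tiny sample wordlist below
are illustrative only; running it over the full 2048-word BIP39 list would
reproduce the list above.

```python
def find_concat_ambiguities(words):
    """Find pairs (a, b) and (c, d) with a+b == c+d, i.e. word sequences
    that become ambiguous when written down without separators."""
    wordset = set(words)
    dups = []
    for a in words:
        for b in words:
            joined = a + b
            # try every alternative split point of the joined string
            for i in range(1, len(joined)):
                c, d = joined[:i], joined[i:]
                if (c, d) != (a, b) and c in wordset and d in wordset:
                    dups.append(((a, b), (c, d)))
    return dups

# tiny illustrative wordlist containing one of the clashes listed above
sample = ["can", "canal", "alarm", "arm"]
assert (("canal", "arm"), ("can", "alarm")) in find_concat_ambiguities(sample)
```

This also gives a concrete rejection-sampling predicate: discard any
generated seed whose adjacent word pairs appear in the dup list.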


Re: [bitcoin-dev] BIP proposal: Pay-to-contract tweak fields for PSBT (bip-psbt-p2c)

2022-01-16 Thread Jeremy via bitcoin-dev
High level feedback:

It would be nice if this field was not distinct from BIP32 derivation
descriptors so that you could have a single representation for the Extended
Key that doesn't need some additional field only in PSBT.

If I understood correctly, and this is just an arbitrary hash being
provably added (but has no direct cryptographic function), this can also
be done with no changes to BIP32 as I did in
https://github.com/sapio-lang/sapio/blob/master/ctv_emulators/src/lib.rs.

Best,

Jeremy


--
@JeremyRubin 



On Sun, Jan 16, 2022 at 1:00 PM Dr Maxim Orlovsky via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Dear Bitcoin dev community,
>
>
> In Mar 2019 Andrew Poelstra sent to bitcoin dev mail list a proposal
> for extending existing PSBT standard [6], which among other was suggesting
> adding a field for P2C tweaks:
>
> > (c) a map from public keys to 32-byte "tweaks" that are used in the
> > pay-to-contract construction. Selfishly I'd like this to be a
> > variable-length bytestring with the semantics that (a) the first
> > 33 bytes represent an untweaked pubkey; (b) the HMAC-SHA256 of
> > the whole thing, when multiplied by G and added to the untweaked
> > pubkey, result in the target key. This matches the algorithm in
> > [3] which is deployed in Blockstream's Liquid, but I'd be happy
> > with a more efficient scheme which e.g. used SHA256 rather than
> > HMAC-SHA256.
>
> This BIP proposal is an attempt to structure that idea into a more
> universal and standard form, following a discussion happened in
> https://github.com/bitcoin/bips/pull/1239. Specifically, it adds a PSBT
> input field for inputs spending UTXOs with previously created
> pay-to-contract (P2C) public key tweaks.
>
>
> ---
>
> 
>   BIP: ?
>   Layer: Applications
>   Title: Pay-to-contract tweak fields for PSBT
>   Author: Maxim Orlovsky ,
>   Andrew Poelstra 
>   Discussions-To: 
>   Comments-URI: 
>   Status: Draft
>   Type: Standards Track
>   Created: 2022-01-16
>   License: BSD-2-Clause
>   Requires: BIP-174
> 
>
> ==Introduction==
>
> ===Abstract===
>
> This document proposes additional fields for BIP 174 PSBTv0 and BIP 370
> PSBTv2 that allow pay-to-contract key tweaking data to be included in a
> PSBT of any version. These represent extra-transaction information
> required for the signer to produce valid signatures spending previous
> outputs.
>
> ===Copyright===
>
> This BIP is licensed under the 2-clause BSD license.
>
> ===Background===
>
> Key tweaking is a procedure for creating a cryptographic commitment to some
> message using elliptic curve properties. The procedure uses the discrete
> log
> problem (DLP) to commit to an extra-transaction message. This is done by
> adding
> to a public key (for which the output owner knows the corresponding
> private key)
> a hash of the message multiplied on the generator point G of the elliptic
> curve.
> This produces a tweaked public key, containing the commitment. Later, in
> order
> to spend an output containing P2C commitment, the same commitment should be
> added to the corresponding private key.
>
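The construction described above (tweaked pubkey P' = P + H(m)·G, tweaked
privkey p' = p + H(m) mod n) can be sketched with textbook secp256k1
arithmetic. This is a simplified illustration: real P2C protocols such as
those cited below also commit to a protocol tag and use specific hash
constructions (e.g. HMAC-SHA256), which this sketch does not reproduce.

```python
import hashlib

# secp256k1 curve parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(p, q):
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0: return None  # inverses
    if p == q:
        lam = 3 * p[0] * p[0] * pow(2 * p[1], -1, P) % P     # doubling
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], -1, P) % P    # addition
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def point_mul(k, p=G):
    r = None
    while k:                     # double-and-add scalar multiplication
        if k & 1: r = point_add(r, p)
        p = point_add(p, p)
        k >>= 1
    return r

def p2c_tweak(pubkey, message):
    # Simplified commitment hash over (pubkey, message); real protocols
    # also include a protocol-specific tag.
    data = pubkey[0].to_bytes(32, "big") + message
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

privkey = 12345
pubkey = point_mul(privkey)
t = p2c_tweak(pubkey, b"contract text")
tweaked_pub = point_add(pubkey, point_mul(t))   # P' = P + H(m)*G
tweaked_priv = (privkey + t) % N                # p' = p + H(m) mod n
assert point_mul(tweaked_priv) == tweaked_pub   # signer can spend P'
```

The final assertion is exactly the property the Motivation section relies
on: a signer who knows both the original private key and the tweak can
produce signatures for the tweaked output.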
> This type of commitment was originally proposed as a part of the pay to
> contract
> concept by Ilja Gerhardt and Timo Hanke in [1] and later used by Eternity
> Wall
> [2] for the same purpose. Since that time multiple different protocols for
> P2C
> has been developed, including OpenTimeStamps [3], Elements sidechain P2C
> tweaks
> [4] and LNPBP-1 [5], used in for constructing Peter Todd's
> single-use-seals [6]
> in client-side-validation protocols like RGB.
>
> ===Motivation===
>
> P2C outputs can be detected onchain and spent only if the output owner
> not only knows the corresponding original private key, but is also aware
> of the P2C tweak applied to the public key. In order to produce a valid
> signature, the same tweak value must be added (modulo the group order) to
> the original private key by a signer device. This represents a challenge
> for external signers, which may not have any information about such a
> commitment. This proposal addresses this issue by adding relevant fields
> to the PSBT input information.
>
> The proposal abstracts the details of specific P2C protocols and provides
> a universal method for spending previous outputs containing P2C tweaks,
> applied to the public key contained within any standard form of the
> scriptPubkey, including bare scripts and P2PK, P2PKH, P2SH,
> witness v0 P2WPKH, P2WSH, nested witness v0 P2WPKH-P2SH, P2WSH-P2SH and
> witness v1 P2TR outputs.
>
>
> ==Design==
>
> P2C-tweaked public keys are already exposed in the
> PSBT_IN_REDEEM_SCRIPT, PSBT_IN_WITNESS_SCRIPT,
> PSBT_IN_TAP_INTERNAL_KEY and PSBT_IN_TAP_LEAF_SCRIPT
> fields;
> the only information the signer needs is to recognize which keys it should
> sign

Re: [bitcoin-dev] Bitcoin Legal Defense Fund

2022-01-14 Thread Jeremy via bitcoin-dev
If I understand the intent of your message correctly, that's unfortunately
not how the law works.

If there is a case that is precedent setting, whether it directly involves
bitcoin or not, a bitcoin focused legal fund might want to either offer
representation or file an amicus brief to guide the court to making a
decision beneficial to Bitcoin Developers.

More than likely, some of these cases would involve developers of
alternative projects (as they might be "ahead of the curve" on legal
problems) and heading off a strong precedent for other communities would be
protective for Bitcoiners in general. As an example, were the developers
building Rollups on Ethereum to face a legal threat, since we might one day
want similar software for Bitcoin, ensuring a good outcome for them helps
Bitcoin.

That said, all organizations must at some point have a defined scope, and
it seems the BLDF is primarily focused for now on things impacting the
developers of Bitcoin or software for bitcoin specifically. I "trust" the
legal team behind BLDF will form a coherent strategy around what is
relevant to Bitcoin defense, even if the particulars of a case are not
directly about Bitcoin.

cheers,

Jeremy
--
@JeremyRubin 



On Fri, Jan 14, 2022 at 10:25 AM qmccormick13 via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I very much hope the fund will not finance lawsuits irrelevant to bitcoin.
>
> On Fri, Jan 14, 2022 at 5:23 PM Aymeric Vitte via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> (P2P?) Electronic Cash (Defense?) Fund or Electronic Cash Foundation ?
>> More neutral, potentially covering others than Bitcoin, mimicking a bit
>> EFF (even if as stated US is not the only target), referring to
>> Satoshi's paper where everything started
>>
>> Maybe I am not up to date but it would be good to know what are the
>> current procedures with the Tulip thing
>>
>> Aymeric
>>
>>
>> Le 13/01/2022 à 19:20, jack via bitcoin-dev a écrit :
>> > Hi Prayank,
>> >
>> >> On 13 Jan 2022, at 10:13, Prayank  wrote:
>> >> I had few suggestions and feel free to ignore them if they do not make
>> sense:
>> >>
>> >> 1.Name of this fund could be anything and 'The Bitcoin Legal Defense
>> Fund' can be confusing or misleading for newbies. There is nothing official
>> in Bitcoin however people believe things written in news articles and some
>> of them might consider it as an official bitcoin legal fund.
>> > Excellent point. Will come up with a better name.
>> >
>> >> 2.It would be better if people involved in such important funds do not
>> comment/influence soft fork related discussions. Example: Alex Morcos had
>> some opinions about activation mechanism during Taproot soft fork IIRC.
>> > Yes. Will think through this and board operating principles we can
>> share publicly, which would probably include criteria for how cases are
>> chosen, to protect against this board and fund influencing direction.
>> >
>> > Open to ideas and suggestions on all.
>> >
>> > jack
>>
>>
>>
>


[bitcoin-dev] Documenting the lifetime of a transaction during mempool congestion from the perspective of a rational user

2022-01-13 Thread Jeremy via bitcoin-dev
Devs,

This email is primarily about existing wallet behaviors and user
preferences, and not about CTV. However, towards the end I will describe
the relevance of CTV, but the email is worth reading even if you have no
interest in CTV as the problems described exist today.

One point of confusion I've seen while discussing CTV based congestion
control is that it requires a bunch of new wallet software.

Most of the software requirements that would make CTV work well are things
that either already exist in Bitcoin Core, or are 'bugs' (where bug is
defined as deviation from rational utility maximizing behavior) that should
be fixed *whether or not CTV exists.*

In this post, I am going to walk through what I expect rational behavior to
be for a not unlikely circumstance.

First, let's define what rational behavior for a wallet is. A rational
wallet should have a few goals:

1) Maximize 'fully trusted' balance (fully confirmed + unconfirmed change
outputs from our own txns)
2) Process payments requested by the owner within the "urgency budget"
requested by the user.
3) Maximize "privacy" (this is a vague goal, so we'll largely ignore it
here).

Rational wallet behavior may not be possible without metadata. For example,
a rational wallet might prompt the user for things like "how much do you
trust the sender of this payment to not RBF this transaction?", or "how
much do you trust this sender to not double spend you?". For example, a
self-transfer from cold wallet to hot wallet could have a trust score of 1,
whereas a payment from an anonymous source would have a trust score of 0.
Exchanges where you have a legal agreement to not RBF might sit somewhere
in between. Other pieces of exogenous information that could inform wallet
behavior include "has hashrate decreased recently, making longer reorgs
likely".

In the model above, a user does not request transactions, they request
payments. The rational wallet serves as an agent to assist the user in
completing these payments. For example, if I have a wallet with a single
unconfirmed output, and I spend from it to pay Alice, if the unconfirmed
gets replaced, my wallet should track that it was replaced and prompt me to
re-sign a new transaction. Rational wallets that maximize balance should be
careful to ensure that replaced payments are exclusive, guaranteed either
through sufficient confirmations or 'impossibility proofs' by reusing an
input (preventing double-send behavior).

-

Now that we've sketched out a basic framework for what a rational wallet
should be doing, we can describe what the process of receiving a payment is.

Suppose I have a wallet with a bevy of fully confirmed coins such that for
my future payments I am sufficiently funded.

Then, I receive a payment from a highly trusted source (e.g., self
transfer) that is unconfirmed.

I then seek to make an outgoing payment. I should have no preference
towards or against spending the unconfirmed transfer, I should simply
account for its cost in coin selection when CPFP-ing the parent transaction.
If fees are presently historically low, I may have a preference to spend it
so as to not have a higher fee later (consolidation).

Later, I receive payment from an untrusted source (e.g., an anonymous
donation to me). I have no reason to trust this won't be double spent.
Perhaps I can even observe that this output has been RBF'd many times
previously. I do not count on this money arriving. The feerate on the
transaction suggests it won't be confirmed immediately.

In order to maximize balance, I should prioritize spending from this output
(even if I don't have a payment I desire to make) in order to CPFP it to
the top of the mempool and guarantee I am paid. This is inherently "free"
since my cost to CPFP would be checked to be lower than the funds I am
receiving, and I have no expected value to receive the payment if it is not
confirmed. If I do have a transaction I desire to do, I should prioritize
spending this output at that time. If not, I would do a CPFP just in favor
of balance maximizing. Perhaps I can piggyback something useful, like
speculatively opening a lightning channel.
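The "CPFP cost lower than the funds received" check above is simple
arithmetic over package feerates. A hedged sketch follows; the function
name, the 1 sat/vB minimum-relay floor, and the example sizes are
illustrative assumptions, not wallet policy from any real implementation.

```python
def cpfp_rational(incoming_sats, parent_vsize, parent_fee,
                  child_vsize, target_feerate):
    """Return the child fee (sats) needed to lift the parent+child package
    to `target_feerate` (sats/vB), or None if paying that fee would exceed
    the incoming funds (i.e., bumping is not rational)."""
    package_vsize = parent_vsize + child_vsize
    child_fee = target_feerate * package_vsize - parent_fee
    # the child must still pay at least an assumed 1 sat/vB for itself
    child_fee = max(child_fee, child_vsize * 1)
    if child_fee >= incoming_sats:
        return None  # the fee would eat the payment; no expected gain
    return child_fee

# a 50,000 sat donation stuck at a low feerate is worth bumping...
assert cpfp_rational(50_000, 200, 200, 150, 20) == 6_800
# ...but a 5,000 sat one at the same target feerate is not
assert cpfp_rational(5_000, 200, 200, 150, 20) is None
```

This captures why the bump is "free" in expectation: the wallet only pays
when the recovered balance strictly exceeds the fee spent to confirm it.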

If I just self-spend to CPFP, it is very simple since the only party set up
for disappointment is myself (note: I patched the behavior in this case to
accurately *not* count this as a trusted balance in
https://github.com/bitcoin/bitcoin/pull/16766, since a parent could disrupt
this). However, if I try to make a payment, my wallet agent must somehow
prompt me to re-sign or automatically sign an alternative payment once it
is proven (e.g. 6 blocks) I won't receive the output, or to re-sign on a
mutually exclusive output (e.g., fee bumping RBF) such that issuing two
payments will not cause a double-send error. This adds complexity to both
the user story and logic, but is still rational.

Now, suppose that I receive a new payment from a **trusted** source that
is a part of a "long chain". A long chain is a 

Re: [bitcoin-dev] Bitcoin Legal Defense Fund

2022-01-13 Thread Jeremy via bitcoin-dev
A further point -- were it to become a norm that a contributor to something
like this is denied their full capacity for "free speech" by social
convention, it would either encourage anonymous funding (less accountable)
or disincentivize creating such initiatives in the future.

Both of those outcomes would be potentially bad, so I don't see limiting
speech on an unrelated topic as a valid action.

However, I think the inverse could have merit -- perhaps funders can
somehow commit to 'abstracting' themselves from involvement in cases / the
process of accepting prospective clients. As neither Alex nor Jack are
lawyers (afaict?), this should already be true to an extent as the legal
counsel would be bound to attorney client privilege.

Of course we live in a free country and however Jack and Alex determine
they should spend their own money is their god-given right, as much as it
is unfortunately the right of anyone to sue a developer for some alleged
infringement. I'm personally glad that Jack and Alex are using their money
to help developers and not harass them -- many thanks for that!

One question I have is how you might describe the differences between what
BLDF can accomplish and what e.g. EFF can accomplish. Having been
represented by the EFF on more than one occasion, they are fantastic. Do
you feel that the Bitcoin-specific focus of BLDF outweighs the more general
(but deeper experience/track record) of an organization like the EFF (or
others, like Berkman Cyberlaw Clinic, etc)? My main opinion is "the more
the merrier", so don't consider it a critique, more a question so that you
have the opportunity to highlight the unique strengths of this approach.

Best,

Jeremy
--
@JeremyRubin 



On Thu, Jan 13, 2022 at 10:50 AM Steve Lee via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I think the word "The" is important. The title of the email and the name
> of the fund is Bitcoin Legal Defense Fund. It is "a" legal defense fund;
> not THE Bitcoin Legal Defense Fund. There is room for other funds and
> strategies and anyone is welcome to create alternatives.
>
> I also don't see why Alex or anyone should be denied the opportunity to
> comment on future soft forks or anything about bitcoin. Alex should have no
> more or less right to participate and his comments should be judged on
> their merit, just like yours and mine.
>
> On Thu, Jan 13, 2022 at 9:37 AM Prayank via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi Jack,
>>
>>
>> > The main purpose of this Fund is to defend developers from lawsuits
>> regarding their activities in the Bitcoin ecosystem, including finding and
>> retaining defense counsel, developing litigation strategy, and paying legal
>> bills. This is a free and voluntary option for developers to take advantage
>> of if they so wish. The Fund will start with a corps of volunteer and
>> part-time lawyers. The board of the Fund will be responsible for
>> determining which lawsuits and defendants it will help defend.
>>
>> Thanks for helping the developers in legal issues. Appreciate your
>> efforts and I understand your intentions are to help Bitcoin in every
>> possible way.
>>
>>
>> Positives that I see in this initiative:
>>
>> 1. Developers don't need to worry about rich scammers and can focus on
>> development.
>>
>> 2. Financial help for developers, as legal issues can end up wasting a lot
>> of time and money.
>>
>> 3. People who have misused courts to target bitcoin developers will get
>> the response that they deserve.
>>
>>
>> I had a few suggestions and feel free to ignore them if they do not make
>> sense:
>>
>> 1. Name of this fund could be anything, and 'The Bitcoin Legal Defense
>> Fund' can be confusing or misleading for newbies. There is nothing official
>> in Bitcoin however people believe things written in news articles and some
>> of them might consider it as an official bitcoin legal fund.
>>
>> 2. It would be better if people involved in such important funds do not
>> comment/influence soft fork related discussions. Example: Alex Morcos had
>> some opinions about activation mechanism during Taproot soft fork IIRC.
>>
>>
>>
>> --
>> Prayank
>>
>> A3B1 E430 2298 178F
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_PUSH_KEY_* & BIP-118 0x01 Pun

2022-01-12 Thread Jeremy via bitcoin-dev
Note:

BIP-118 as-is enables something similar to OP_PUSH_KEY_INTERNAL_TAGGED via
the following script fragment:

witness:  
program: DUP 0x01 CHECKSIG SWAP DUP TOALTSTACK CHECKSIG FROMALTSTACK


It's unclear how useful this might be, since the signature already covers
the transaction.

--
@JeremyRubin 



On Wed, Jan 12, 2022 at 4:35 PM Jeremy  wrote:

> Hi Devs,
>
> Two small transaction introspection opcodes that are worth considering are
> OP_PUSH_KEY_INTERNAL or OP_PUSH_KEY_EXTERNAL which can return the taproot
> key for the current input.
>
> While the internal key could be included in the tree already, and this is
> just a performance improvement, the external key creates a hash cycle and
> is not possible to include directly.
>
> This came up as a potential nicety while looking at how BIP-118 "puns" a
> single 0x01 byte as a key argument to refer to the Internal key for
> compactness. It would be more general if instead of 0x01, there were an
> opcode that actually put the Internal key on the stack.
>
> There is a small incompatibility with BIP-118 with this approach, which is
> that keys are not tagged for APO-enablement. Thus, there should either be a
> version of this opcode for APO tagged or not, or, APO should instead define
> some CheckSig2 which has APO if tagging is still desired. (Or we could
> abandon tagging keys too...)
>
> It might be worth pursuing simplifying APO to use these OP_PUSH_KEY
> opcodes because future plans for more generalized covenant might benefit
> from being able to get the current key off the stack. For example, TLUV
> might be able to be decomposed into simpler (RISC) opcodes for getting the
> internal key, getting the current merkel path, and then manipulating it,
> then tweaking the internal key.
>
> The internal key might be useful for signing in a path not just for APO,
> but also because you might want to sign e.g. a transaction that is
> contingent on a HTLC scriptcode being satisfied. Because it is cheaper to
> use the 0x01 CHECKSIG than doing a separate key ( CHECKSIG), it also
> causes an unintended side effect from APO of incentivizing not using a
> unique key per branch (privacy loss) and incentivizing enabling an APO
> tagged key where one is not required (unless 0x00, as I've noted elsewhere
> is added to the 118 spec as a pun for an untagged key).
>
> Pushing the external key's use is less obvious, but with the development
> of future opcodes it would be helpful for some recursive covenants.
>
> Both opcodes are very design specific -- there's only one choice of what
> data they could push.
>
> Of course, we could keep 118 spec'd as is, and add these PUSH_KEYs later
> if ever desired redundantly with the Checksig puns.
>
> Cheers,
>
> Jeremy
>
>
>
>
>
>
>
> --
> @JeremyRubin 
> 
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] OP_PUSH_KEY_* & BIP-118 0x01 Pun

2022-01-12 Thread Jeremy via bitcoin-dev
Hi Devs,

Two small transaction introspection opcodes that are worth considering are
OP_PUSH_KEY_INTERNAL or OP_PUSH_KEY_EXTERNAL which can return the taproot
key for the current input.

While the internal key could be included in the tree already, and this is
just a performance improvement, the external key creates a hash cycle and
is not possible to include directly.

This came up as a potential nicety while looking at how BIP-118 "puns" a
single 0x01 byte as a key argument to refer to the Internal key for
compactness. It would be more general if instead of 0x01, there were an
opcode that actually put the Internal key on the stack.

There is a small incompatibility with BIP-118 with this approach, which is
that keys are not tagged for APO-enablement. Thus, there should either be a
version of this opcode for APO tagged or not, or, APO should instead define
some CheckSig2 which has APO if tagging is still desired. (Or we could
abandon tagging keys too...)

It might be worth pursuing simplifying APO to use these OP_PUSH_KEY opcodes
because future plans for more generalized covenant might benefit from being
able to get the current key off the stack. For example, TLUV might be able
to be decomposed into simpler (RISC) opcodes for getting the internal key,
getting the current merkle path, and then manipulating it, then tweaking
the internal key.

The internal key might be useful for signing in a path not just for APO,
but also because you might want to sign e.g. a transaction that is
contingent on a HTLC scriptcode being satisfied. Because it is cheaper to
use the 0x01 CHECKSIG than doing a separate key ( CHECKSIG), it also
causes an unintended side effect from APO of incentivizing not using a
unique key per branch (privacy loss) and incentivizing enabling an APO
tagged key where one is not required (unless 0x00, as I've noted elsewhere
is added to the 118 spec as a pun for an untagged key).

Pushing the external key's use is less obvious, but with the development of
future opcodes it would be helpful for some recursive covenants.

Both opcodes are very design specific -- there's only one choice of what
data they could push.

Of course, we could keep 118 spec'd as is, and add these PUSH_KEYs later if
ever desired redundantly with the Checksig puns.

Cheers,

Jeremy







--
@JeremyRubin 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Summary of BIP-119 Meeting #1 Tuesday January 11th

2022-01-12 Thread Jeremy via bitcoin-dev
Hi Devs,

Below you'll find my summary of the BIP-119 Meeting held earlier today.

Overall the meeting was pleasant although fast paced. Thank you all for
attending and participating. I look forward to seeing (more of) you next
time!

Meeting notes available here:
https://gnusha.org/ctv-bip-review/2022-01-11.log, please check my work that
I've accurately reflected the opinions expressed. You'll also find the
notes instructive to peruse if you wish to use it as a guide for reviewing
the BIP.

Cheers,

Jeremy

*In brief:*
- CTV's design seems relatively uncontroversial/easy to comprehend.
- The desirability / suitability of CTV for its use cases still seemed
uncertain to participants.
- Among participants, there seemed to be sentiment that if we are to stick
to taproot speedy-trial like timelines, Springtime ('22, '23, ...) seems to
make sense.
- For the next meeting, the sessions will focus more heavily on
applications and will be slower paced.

*Detailed summary:*

*First participants noted what they were excited for CTV to do.*

Among participants in the meeting:
There was strong interest expressed in the Vaults use case by a number of
individuals.
There was lighter interest in non-interactive channels and payment pools.
There was strong skepticism expressed about congestion control; it was
noted that non-interactive channels+congestion control was a strong
motivator.

*Then, BIP review began:*

While reviewing the BIP:
Quadratic hashing during validation, and how CTV should be immune to it via
caching, was reviewed.
The costs and lifetime of PrecomputedData caching was reviewed.
A question was asked as to why the witness data was not in the CTV hash,
which was explained that it could prevent signatures from being used with
CTV.
The half-spend problem was explained and CTV's mitigations against it were
reviewed.
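The digest construction reviewed above can be sketched roughly as follows (a non-authoritative sketch of BIP-119's DefaultCheckTemplateVerifyHash for the scriptSig-less case; consult the BIP for the exact serialization). Note that the sequences and outputs sub-hashes depend only on the spending transaction, so they can be computed once and cached across inputs, which is what keeps validation linear and immune to quadratic hashing:

```python
import hashlib
import struct


def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()


def ctv_template_hash(version: int, locktime: int, sequences: list,
                      serialized_outputs: list, input_index: int) -> bytes:
    """Sketch of the CTV template digest (no scriptSigs). The two inner
    sha256 calls are the cacheable, per-transaction sub-hashes."""
    r = struct.pack("<i", version)                    # nVersion
    r += struct.pack("<I", locktime)                  # nLockTime
    r += struct.pack("<I", len(sequences))            # number of inputs
    r += sha256(b"".join(struct.pack("<I", s) for s in sequences))
    r += struct.pack("<I", len(serialized_outputs))   # number of outputs
    r += sha256(b"".join(serialized_outputs))
    r += struct.pack("<I", input_index)               # index of this input
    return sha256(r)
```

Committing to the input index is part of the half-spend mitigation: the same template hash cannot satisfy two different inputs of one transaction.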

CTV's usage of a NOP was reviewed; after CTV, 6 upgradable NOPs would
remain. It was pointed out that a multibyte NOP10 could
extend the NOP space indefinitely (since CTV only uses 32-byte arguments,
CTV's NOP is only partly used).
Only adding CTV to tapscript (and not segwit v0, bare script, p2sh) was
discussed; no one expressed strongly that presence in legacy script was
problematic if there was a use for it -- i.e., let the market decide (since
some scripts are cheapest in bare script), although there seemed to be
agreement that you'd usually want Taproot.

It was clearly preferred that CTV use SHA256 and not RIPEMD160.

How big interactive protocols can get without DoS was discussed. CTV makes
non-interactive protocols possible; it is an open question whether that
matters. The bulk of the benefit of a batch is in the first 10 participants
(90%); additional participants may not matter as much. It was agreed that
CTV did at least
seem to make protocols asynchronous and non-interactive, once participants
agreed on what that terminology meant. A desire was expressed to see more
batched openings done in Lightning to see if/how CTV might help. *bonus:
Alex tweeted after the meeting about currently operational LN batched
opens, without congestion control
https://twitter.com/alexbosworth/status/1481104624085917699.*

*Then, Code Review began.*

First, the "main" BIP functionality commits were reviewed.

The current state of code coverage was discussed & improvements on that.
HandleMissingData remains difficult to code cover.
CTV's policy rules discouraging spends before activation were discussed, and
how this improves on prior soft forks, which did not discourage such spends.
The TODO in the caching heuristic was discussed. The history of the
PrecomputedData caching heuristics was discussed, and how they became more
aggressive under taproot and that this might be a minor regression. It was
explained by the author that "TODO" meant someone could do something in the
future, not that something must be done now.

The bare CTV script type was discussed. The difference between legacy
script validation and standardness was discussed.
Bare CTV script does not (yet?) get its own address type, but a standard
output type is still needed for relaying from internal services.
That it could be removed and added later was discussed, but that it causes
difficulty for testing was also mentioned.

Tests and test vectors were discussed.
A non-blocking action item was raised to make our hex JSON test vectors
(for all of Bitcoin) human readable (otherwise, how do we know what
they do?).

*Then, discussion of the bug bounty began.*

An offer was made to make the administration of the program through a tax
deductible 501c3, Lincoln Network.
Difficulties were discussed in practical administration of the program
funds.
Desire was expressed to reward more than just 'showstopping' bugs, but also
strong reviews/minor issues/longer term maintenance.
Desire was expressed for a covenants-only Scaling Bitcoin like event.
Some discussion was had about bounties based around mutation testing, but
it was 

Re: [bitcoin-dev] Stumbling into a contentious soft fork activation attempt

2022-01-10 Thread Jeremy via bitcoin-dev
Please see the following bips PRs which are follow ups to the concrete
actionables raised by Peter. Thanks for bringing these up, it certainly
improves the reviewability of the BIP.

https://github.com/bitcoin/bips/pull/1271
https://github.com/bitcoin/bips/pull/1272

--
@JeremyRubin 



On Mon, Jan 10, 2022 at 7:42 PM Jeremy  wrote:

> Hi Peter,
>
> Thank you for your review and feedback.
>
> Apologies for the difficulties in reviewing. The branch linked from the
> BIP is not the latest, the branch in the PR is what should be considered
> https://github.com/bitcoin/bitcoin/pull/21702 for review and has more
> thorough well documented tests and test vectors. The version you reviewed
> should still be compatible with the current branch as there have not been
> any spec changes, though.
>
> I'm not sure what best practice is w.r.t. linking to BIPs and
> implementations given need to rebase and respond to feedback with changes.
> Appreciate any pointers on how to better solve this. For the time being, I
> will suggest an edit to point it to the PR, although I recognize this is
> not ideal. I understand your preference for a commit hash and can do one
> if it helps. For what it's worth, the taproot BIPs do not link to a
> reference implementation of Taproot so I'm not sure what best practice is
> considered these days.
>
> One note that is unfortunate in your review is that there is a
> discrepancy between the BIP and the implementation (either the original
> reference or the current PR) in that caching and DoS are not addressed. This
> was an explicit design goal of CTV and for it not to be mentioned in the
> BIP (and just the reference) is an oversight on my part to not aid
> reviewers more explicitly. Compounding this, I accepted a third-party PR to
> make the BIP more clear as to what is required to implement it, but which
> does not include caching (only functional correctness); this exposes the
> issue if one implements the BIP directly rather than following the
> reference implementation. I
> have explained this in a review last year to pyskell
>  on
> the PR that caching is required for non-DoS. I will add a note to the BIP
> about the importance of caching to avoid DoS as that should make third
> party implementers aware of the issue.
>
> That said, this is not a mis-considered part of CTV. The reference
> implementation is specifically designed to not have quadratic hashing and
> CTV is designed to be friendly to caching to avoid denial of service. It's
> just a part of the BIP that can be more clear. I will make a PR to more
> clearly describe how that should happen.
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Stumbling into a contentious soft fork activation attempt

2022-01-10 Thread Jeremy via bitcoin-dev
Hi Peter,

Thank you for your review and feedback.

Apologies for the difficulties in reviewing. The branch linked from the BIP
is not the latest, the branch in the PR is what should be considered
https://github.com/bitcoin/bitcoin/pull/21702 for review and has more
thorough well documented tests and test vectors. The version you reviewed
should still be compatible with the current branch as there have not been
any spec changes, though.

I'm not sure what best practice is w.r.t. linking to BIPs and
implementations given need to rebase and respond to feedback with changes.
Appreciate any pointers on how to better solve this. For the time being, I
will suggest an edit to point it to the PR, although I recognize this is
not ideal. I understand your preference for a commit hash and can do one if
it helps. For what it's worth, the taproot BIPs do not link to a reference
implementation of Taproot so I'm not sure what best practice is considered
these days.

One note that is unfortunate in your review is that there is a
discrepancy between the BIP and the implementation (either the original
reference or the current PR) in that caching and DoS are not addressed. This
was an explicit design goal of CTV and for it not to be mentioned in the
BIP (and just the reference) is an oversight on my part to not aid
reviewers more explicitly. Compounding this, I accepted a third-party PR to
make the BIP more clear as to what is required to implement it, but which
does not include caching (only functional correctness); this exposes the
issue if one implements the BIP directly rather than following the
reference implementation. I
have explained this in a review last year to pyskell
 on
the PR that caching is required for non-DoS. I will add a note to the BIP
about the importance of caching to avoid DoS as that should make third
party implementers aware of the issue.

That said, this is not a mis-considered part of CTV. The reference
implementation is specifically designed to not have quadratic hashing and
CTV is designed to be friendly to caching to avoid denial of service. It's
just a part of the BIP that can be more clear. I will make a PR to more
clearly describe how that should happen.

--
use cases
--

One thing that's not clear to me is the amount of work a BIP needs to do
within itself to fully describe all applications and use cases. I don't
think it's appropriate for most BIPs to do so, but in some cases it is a
good idea. However, for CTV the applications actually are relatively
fleshed out, just outside the BIP. Further, the availability of generic
tooling through Sapio and its examples has demonstrated how one might
build a variety of applications. See rubin.io/advent21 for numerous worked
examples.


## Congestion Controlled Transactions

Generally, the existence of these transactions can be tracked using
existing wallets: if the transaction is seen in the mempool, it will be
marked as "mine" and can even be marked as "trusted". See
https://utxos.org/analysis/taxes/ which covers the legal obligations of
senders with respect to payees under congestion control. Generally, a
legally identifiable party such as an exchange sending a congestion control
payment must retain and serve it to the user to prove that they made
payment to the user. Users of said exchanges can either download a list of
their transactions at the time of withdrawal or they can wait to see it
e.g. in the mempool. This was also discussed at
https://diyhpl.us/wiki/transcripts/ctv-bip-review-workshop/ where you can
see notes/videos of what was discussed if the notes are hard to parse.

Lightning specific wallets such as Muun and LND particularly plan to use
CTV to batch-open a multitude of channels for users, using both congestion
control and non-interactive batching. Channels have to be opened on-chain
and if channels are to be the future so will on-chain opening of them.
These wallets can be built out to track and receive these opening proofs.

## Wallet Vaults

There exists at least 3 implementations of Vaults using CTV (one by me in
C++, one by me in Sapio, another by Bryan Bishop in python), and there
exist oracles as you mention for emulating it.

## Payment Channels

Actually taking advantage of them is quite simple and has been discussed
and reviewed with a number of independent lightning developers.

You can see here a rudimentary implementation and description of how it can
work https://rubin.io/bitcoin/2021/12/11/advent-14/.

This is composable with any `impl Revokable` channel update specification
so generalizes to Lightning.

Of course, making it production grade requires a lot of work, but the
concept is sound.


## CoinJoin


CTV trees may mean more transactions, not fewer, but feerates are not
monotonic, and CTV allows you to defer the utilization of chainspace.

CTV CoinJoins also open the opportunity to cooperation through payment
pools (which can be opened via a coinjoin), which 

[bitcoin-dev] BIP-119 Meeting Reminder and Prelim Agenda

2022-01-09 Thread Jeremy via bitcoin-dev
Hi all,

As a reminder the first meeting for CTV will be this Tuesday at 12:00PM PT.

Based on feedback, I have included a preliminary agenda and time allocation
for the meeting at the end of this email. The main part of the meeting will
run for 1.5 hours, and will be followed by a post meeting discussion of
length 30 minutes for discussing broader next steps and consensus seeking
processes (this is separate to break up the technical review from the
metaphysics of consensus discussion and allow those who do not wish to
discuss a polite exit).

The agenda does not thoroughly cover motivations or use cases for CTV, such
as congestion control, vaults, payment pools, or non-interactive contract
openings. Those can be found in a multitude of sources (such as
https://rubin.io/advent21, https://learn.sapio-lang.org, https://utxos.org,
or https://github.com/kanzure/python-vaults/tree/master/vaults). Specific
applications built on CTV will be best reviewed in follow up meetings as
technical evaluation of how well CTV works for use cases requires a deep
understanding of how the CTV primitive works.

For similar reasons, this agenda does not do a deep dive into alternatives
to CTV. That discussion can be best had following a thorough review of CTV
itself. Helpful links for deepening understanding of covenant properties,
proposals, and varieties included below in a (loosely) recommended reading
order:
https://rubin.io/bitcoin/2021/12/04/advent-7/
https://rubin.io/bitcoin/2021/12/05/advent-8/
https://rubin.io/blog/2021/07/02/covenants/
https://utxos.org/alternatives/
https://arxiv.org/abs/2006.16714
https://rubin.io/bitcoin/2021/12/24/advent-27/
https://github.com/bitcoin/bips/blob/master/bip-0119.mediawiki#feature-redundancy
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019424.html


If you feel particular topics important to you are not represented in this
agenda or if I can make any improvements otherwise, please drop me a note
and I will endeavor to ensure they are either slotted into this meeting or
included in a second meeting.

That the meeting is tightly scheduled is by design: I want to respect
everyone's time and ensure that the meeting is highly productive. There is
always room for follow ups or further exploration at future meetings or as
mailing list follow ups.

Looking forward to discussing with you on tuesday,

Jeremy




*#topic Overview of BIP & Q (40 Mins)*
#subtopic what does CTV do? (5 minutes)

#subtopic which fields are in the digest? (5 minutes)

#subtopic the order / structure of fields in the digest? (5 minutes)

#subtopic the half-spend problem/solution? (5 minutes)

#subtopic using a NOP v.s. successX / legacy script types? (5 minutes)

#subtopic using sha256 v.s. Ripemd160 (5 minutes)

#subtopic general q (10 minutes)


*#topic Overview of Implementation & Testing (30 Minutes)*
#subtopic implementation walkthrough (15 minutes)

#subsubtopic validation burdens & caching (5 minutes)

#subtopic vectors: tx_valid.json + tx_invalid.json + transaction hashes
checking (2 minutes)

#subtopic functional test walkthrough (8 minutes)

*#topic Proposed Timeline Technical Feasibility (not advisability) (10
minutes)*


*#topic Feedback on how to Structure Bounty Program (10 minutes)*
#post-meeting


*#topic open-ended feedback (is this meeting helpful, what could be better,
etc) (10 minutes)*

*#topic What's required to get consensus / next steps? (20 minutes)*
#subtopic Discussion of "soft signals" utxos.org/signals (10 minutes)
#subtopic Discussion of activation mechanisms (10 minutes)



--
@JeremyRubin 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Why CTV, why now? Was RE: Stumbling into a contentious soft fork activation attempt

2022-01-05 Thread Jeremy via bitcoin-dev
Hi Devs,

There's a lot of noise in the other thread and it's hard to parse out what
merits a response or not without getting into a messy quagmire, so I
figured a separate email with high level points was the best way to respond.

Covenants are an important part of Bitcoin's future, not for "adding use
cases" but for making the fundamental pillars underlying Bitcoin more
robust. For example, covenants play a central role in privacy, scalability,
self custody, and decentralization (as I attempted to show in
https://rubin.io/advent21).

Bitcoin researchers have known about covenants conceptually for a long
time, but the implications and problems with them led to them being viewed
with heavy skepticism and concern for many years.

CTV was an output of my personal "research program" on how to make simple
covenant types without undue validation burdens. It is designed to be the
simplest and least risky covenant specification you can do that still
delivers sufficient flexibility and power to build many useful applications.

CTV has been under development for multiple years and the spec has been
essentially unmodified for 2 years (since the BIP was assigned a number).

CTV's specification is highly design specific to being a pre-committed
transaction. It'd be difficult to engineer an alternative for what it does
in a substantially different way.

CTV composes with potential future upgrades, such as OP_AMOUNT, CAT, CSFS,
TLUV. (See https://rubin.io/blog/2021/07/02/covenants/ and
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019423.html
)

CTV is non-rival (that means "both can happen") with any other upgrade
(e.g. APO, TLUV).

During the last 2 years, CTV has been reviewed by a wide range of folks and
there have not been (any?) conceptual or concrete NACKs for CTV to have or
introduce any risk or vulnerability to Bitcoin.

The main complaints about CTV are that we might come up with something
better eventually, a better system of things, or that CTV is not flexible
or general enough to make interesting applications, and it would be
unfortunate to go through with using up the 32 byte argument version of an
OP_NOP and the pains of any soft fork for something that we may eventually
know how to do better, replacing CTV.

More general approaches (e.g., based on CAT+CSFS), while more powerful in
capability, have limitations given large script sizes and difficulty in
manipulating transactions and their outputs (e.g., Taproot outputs require
some OP_TWEAK as well), and are harder to reason about given higher degrees
of malleability.

During the last 2 years, while some other interesting concepts have arisen
(such as IIDs or TLUV), nothing in particular has fully overlapped CTV's
functionality, the closest being APO and they would both be valuable tools
to have independently.

During the last 2 years, no other proposal has reached the level of
"technical maturity" as CTV in terms of spec, implementation, testing,
tooling (rust miniscript integration, Sapio, python-vaults), and the
variety of applications demonstrated possible. As the saying goes, one in
the hand is worth two in the bush.

Many current users (not just end users, but businesses and protocol
developers as well) see CTV as delivering useful functionality for existing
applications despite its limitations (and some of those limitations emerge
as strengths). In particular, CTV is helpful for Lightning Network
companies to deliver non-custodial channels to more users and generally
improving wallet vault custody software.

Applications that are improved/enabled by CTV and not used today, like
Payment Pools, deliver strong privacy benefits. Privacy is something that
the longer we exist in a worse state, the harder it becomes to improve.
This is unlike e.g. scalability or self custody where improvements can be
made independent of previous activity. On the other hand, information leaks
from records of transactions are forever. There is more benefit from
reducing privacy leaks sooner than later. In other words, privacy is a path
dependent property not immediately upgradable to whatever current
technology provides.

Software Development is also path dependent. Many have remarked that there
is not great alternative research on other covenant proposals, but not many
application builders or protocol researchers are investing deep time and
expertise on producing alternative paths to covenants either. Accepting an
upgrade for limited covenants, like CTV, will give rise to many application
builders including covenants in their stack (e.g. for batching or vaults or
other applications) and will encourage more developers to contribute to
generic tooling (Sapio can be improved!) and also to -- via market
processes -- determine what other types of covenant would be safe and high
value for those already using CTV.

In my advocacy, I published the essay "Roadmap or Load o' Crap" (
https://rubin.io/bitcoin/2021/12/24/advent-27/), which presents a
hypothetical 

[bitcoin-dev] [Pre-BIP] Fee Accounts

2022-01-01 Thread Jeremy via bitcoin-dev
Happy new years devs,

I figured I would share some thoughts for conceptual review that have been
bouncing around my head as an opportunity to clean up the fee paying
semantics in bitcoin "for good". The design space is very wide on the
approach I'll share, so below is just a sketch of how it could work which
I'm sure could be improved greatly.

Transaction fees are an integral part of bitcoin.

However, due to quirks of Bitcoin's transaction design, fees are a part of
the transactions that they occur in.

While this works in a "Bitcoin 1.0" world, where all transactions are
simple on-chain transfers, real world use of Bitcoin requires support for
things like Fee Bumping stuck transactions, DoS resistant Payment Channels,
and other long lived Smart Contracts that can't predict future fee rates.
Having the fees paid in-band makes writing these contracts much more
difficult: you can't merely express the logic you want for the
transaction, you must also express the fees.

Previously, I proposed a special type of transaction called a "Sponsor"
which has some special consensus + mempool rules to allow arbitrarily
appending fees to a transaction to bump it up in the mempool.

As an alternative, we could establish an account system in Bitcoin as an
"extension block".

*Here's how it might work:*

1. Define a special anyone-can-spend output type that is a "fee account"
(e.g. segwit V2). Such outputs have a redeeming key and an amount
associated with them, but are, overall, anyone-can-spend.
2. All deposits to these outputs get stored in a separate UTXO database for
fee accounts
3. Fee accounts can sign only two kinds of transaction: A: a fee amount and
a TXID (or Outpoint?); B: a withdraw amount, a fee, and an address
4. These transactions are committed in an extension block merkle tree.
While the actual signature must cover the TXID/Outpoint, the committed data
need only cover the index in the block of the transaction. The public key
for account lookup can be recovered from the message + signature.
5. In any block, any of the fee account deposits can be: released into fees
if there is a corresponding tx; consolidated together to reduce the number
of UTXOs (this can be just an OP_TRUE, no metadata needed); or released
into fees *and paid back* into the requested withdrawal key (encumbered by
a 100-block timeout). Signatures must be unique in a block.
6. Mempool logic is updated to allow attaching account fee spends to
transactions; the mempool can enforce that an account does not spend more
than its balance.
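The steps above can be sketched as simple mempool-side bookkeeping. This is a hypothetical illustration only -- the class, names, and structure are mine, not part of the proposal, and the real design space (signature format, extension-block commitment, withdrawal locks) is much wider:

```python
# Hypothetical bookkeeping sketch of steps 1-6: a mempool-side view of
# fee-account balances. All names and structures are illustrative only.

class FeeAccountSet:
    def __init__(self):
        self.balances = {}  # redeeming pubkey -> sats available for fees

    def deposit(self, pubkey, amount):
        # Step 2: deposits to the special output type accrue to the
        # account in a separate UTXO database.
        self.balances[pubkey] = self.balances.get(pubkey, 0) + amount

    def attach_fee(self, pubkey, txid, fee):
        # Step 6 / type-A message: attach `fee` sats to `txid`; the
        # mempool rejects spends beyond the account's balance.
        if fee > self.balances.get(pubkey, 0):
            raise ValueError("fee exceeds account balance")
        self.balances[pubkey] -= fee
        return (txid, fee)

acct = FeeAccountSet()
acct.deposit("alice_key", 10_000)
acct.attach_fee("alice_key", "ff" * 32, 3_000)
assert acct.balances["alice_key"] == 7_000
```

Because each account's balance updates commute with every other account's, a set of such updates can be applied in any order, which is the conflict-free property noted below.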

*But aren't accounts "bad"?*

Yes, accounts are bad. But these accounts are not bad, because any funds
withdrawn from the fee extension are fundamentally locked for 100 blocks as
a coinbase output, so there should be no issues with any series of reorgs.
Further, since there is no "rich state" for these accounts, the state
updates can always be applied in a conflict-free way in any order.


*Improving the privacy of this design:*

This design could likely be modified to implement something like Tornado
Cash so that the fee-paying account can be unlinked from the transaction
being paid for, improving privacy at the expense of being a bit more
expensive.

Other operations could be added to allow trustless mixing to be done by
miners automatically, where groups of accounts with similar values are
trustlessly split into a common denominator plus change, and keys are
derived via a verifiable stealth-address-like protocol (so fee balances
can be discovered by tracing the posted updates). These updates could also
be produced by individuals rather than miners, and miners could simply
honor them, with better privacy. While a miner generating an update would
be able to deanonymize its own mixes, having your account mixed several
times by independent miners could add sufficient privacy.

The LN can also be used with PTLCs to, in theory, pay another individual
to sponsor a transaction on your behalf only if they reveal a valid
signature from their fee-paying account, although under this model it's
hard to ensure that the owner doesn't pay a fee and then 'cancel' by
withdrawing the rest. However, this could be partly solved by using
reputable fee accounts (reputation could be measured in a somewhat
decentralized way by the longevity of the account and the transactions it
has paid for historically).

*Scalability*

This design is fundamentally 'decent' for scalability because adding fees
to a transaction does not require adding inputs or outputs and does not
require tracking substantial amounts of new state.

Paying someone else to pay for you via the LN also helps make this more
efficient if the withdrawal issues can be fixed.

*Lightning:*

This type of design works really well for channels because the addition of
fees to e.g. a channel state does not require any sort of pre-planning
(e.g. anchors) or transaction flexibility (SIGHASH flags). This sort of
design is naturally immune to pinning issues since 

[bitcoin-dev] BIP-119 Deployment and Review Workshops

2021-12-30 Thread Jeremy via bitcoin-dev
BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:BIP-119 Events
X-WR-TIMEZONE:UTC
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
X-LIC-LOCATION:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:19700308T02
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:19701101T02
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220111T12
DTEND;TZID=America/Los_Angeles:20220111T13
RRULE:FREQ=WEEKLY;WKST=MO;INTERVAL=2;BYDAY=TU
DTSTAMP:20211230T201324Z
UID:1pfkhrdl3sm03kn71jmn9r7...@google.com
CREATED:20211230T195758Z
DESCRIPTION:Event will be in ##ctv-bip-review on IRC Libera Chat\, logs ava
 ilable.
LAST-MODIFIED:20211230T195900Z
LOCATION:
SEQUENCE:0
STATUS:CONFIRMED
SUMMARY:BIP-119 Deployment Workshop
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR


BIP-119 Events_9k5gabum1lca4vs00rsk9bcrhk@group.calendar.google.com.ics
Description: application/ics
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Bitcoin Advent Calendar] Derivatives and Options

2021-12-24 Thread Jeremy via bitcoin-dev
On Fri, Dec 24, 2021, 8:42 AM Prayank  wrote:

> Hi Jeremy,
>
> > Wheres the info come from? Well, multiple places. We could get it from a
> third party (maybe using an attestation chain of some sort?), or there are
> certain ways it could be self-referential (like for powswap
> ).
>
> > Now let’s define a threshold oracle – we wouldn’t want to trust just one
> lousy oracle, so let’s trust M out of N of them!
>
> Similar approach is used in discreet log contracts for multi oracles.
> There is even a project for P2P derivatives but it was not used for any
> real trades on mainnet or further developed. What difference would OP_CTV
> make in this project if its implemented in Bitcoin?
>
> https://github.com/p2pderivatives/p2pderivatives-client
>
> https://github.com/p2pderivatives/p2pderivatives-server
>
> https://github.com/p2pderivatives/p2pderivatives-oracle
>

Discussed a bit here
https://twitter.com/JeremyRubin/status/1473175356366458883?t=7U4vI4CYIM82vNc8T8n6_g=19


A core benefit is unilateral opens. I.e. you can pay someone into a
derivative without them being online.


For example, you want to receive your payment in a Bitcoin backed Magnesium
risk reversal in exchange for some phys magnesium. I can create the
contract with your signing keys offline.

>
>
> > Does this NEED CTV?
> No, not in particular. Most of this stuff could be done with online signer
> server federation between you and counterparty. CTV makes some stuff nicer
> though, and opens up new possibilities for opening these contracts
> unilaterally.
>
> Nicer? How would unilateral derivatives work because my understanding was
> that you always need a peer to take the other side of the trade. I wish we
> could discuss this topic in a trading community with some Bitcoiners that
> even had some programming knowledge.
>
> Derivatives are interesting and less explored or used in Bitcoin projects.
> They could be useful in solving lot of problems.
>
>
I have a decent understanding of a bit of the trading world and can answer
most questions you have, or point you to someone else who would.


The way a unilateral option would work is that I can create a payment to
you paying you into an Option expiring next week that gives you the right
to purchase from me a magnesium risk reversal contract that settles next
month.



An example where this type of pattern must be used is in conjunction with
DCFMP and PowSwap, where miners could commit to 'trade specs' instead of
just keys, and an automatic market maker inside the DCFMP could attempt to
match that miner to a counterparty who wants the opposite hashrate hedge.
The need to exchange signatures would make this unviable otherwise.


Re: [bitcoin-dev] [Bitcoin Advent Calendar] Decentralized Autonomous Organizations (DAOs) Will Save Bitcoin

2021-12-23 Thread Jeremy via bitcoin-dev
Oscar,

Sapio is essentially a 'compiler toolchain': you run it once and then send
money to the contract. This is like Solidity in Ethereum.

Sapio Studio is a GUI for interacting with the outputs of a Sapio contract.
This is like Metamask/web3.js in Ethereum.

It's really not comparable to Lightning.

I recommend starting with learn.sapio-lang.org :)

Cheers,

Jeremy


Re: [bitcoin-dev] [Bitcoin Advent Calendar] Decentralized Coordination Free Mining Pools

2021-12-23 Thread Jeremy via bitcoin-dev
> If you introduce signing into mining, then you will have cases, where
> someone is powerful enough to produce blocks, but cannot, because signing
> is needed. Then, your consensus is no longer "the heaviest chain", but "the
> heaviest signed chain". That means, your computing power is no longer
> enough by itself (as today), because to make a block, you also need some
> kind of "permission to mine", because first you sign things (like in
> signet) and then you mine them. That kind of being "reliably unreliable"
> may be ok for testing, but not for the main network.


This is a really great point worth underscoring. It is the 'key
ingredient' of DCFMP: there is no signing or other network system 'in the
way' of normal Bitcoin mining, just an opt-in set of rules for sharing the
bounties of your block in exchange for future shares.


[bitcoin-dev] [Bitcoin Advent Calendar] History and Future of Sapio

2021-12-23 Thread Jeremy via bitcoin-dev
Hi devs,

This post details a little on the origins of Sapio as well as features that
are in development this year (other than bugfixes).

https://rubin.io/bitcoin/2021/12/23/advent-26/

cheers,

Jeremy

--
@JeremyRubin 



[bitcoin-dev] [Bitcoin Advent Calendar] Decentralized Autonomous Organizations (DAOs) Will Save Bitcoin

2021-12-22 Thread Jeremy via bitcoin-dev
Hi Devs,

Enjoy! https://rubin.io/bitcoin/2021/12/22/advent-25/

I'm really excited about opportunities for capital formation to happen
natively in Bitcoin. This is actually a really big deal and something (I
think) to pay close attention to. This is basically like running a little
company with shareholders inside of Bitcoin, which to me really helps us
inhabit the "be your own bank" part of Bitcoin. None of this particularly
requires CTV, but it does require the type of composable and flexible
software that I aspire to deliver with Sapio.

business matter:

There are two more posts, and they will both be focused on getting this
stuff out into the wild more. If you particularly have thoughts on BIP-119
activation I would love to hear them publicly, or at your preference,
privately.

If you like or dislike BIP-119 and wish to "soft-signal" yes or no
publicly, you may do so on https://utxos.org/signals by editing the
appropriate file(s) and making a PR. Alternatively, comment somewhere
publicly I can link to, send it to me, and I will make the edits.

edit links:
- for individuals/devs:
https://github.com/JeremyRubin/utxos.org/edit/master/data/devs.yaml
- organizations:
https://github.com/JeremyRubin/utxos.org/edit/master/data/bizs.yaml
- miners/pools:
https://github.com/JeremyRubin/utxos.org/edit/master/data/hashratesnapshot.json

Best,

Jeremy

--
@JeremyRubin 



[bitcoin-dev] [Bitcoin Advent Calendar] POWSWAP: Oracle Free Bitcoin Hashrate Derivatives

2021-12-21 Thread Jeremy via bitcoin-dev
Hi devs,

Today's post details how to make fully trustless hashrate derivative
contracts that can be embedded on-chain, inside of channels, options, or
inside of DCFMPs. These contracts can be used today without CTV, but they
obviously get better with CTV :)

enjoy: https://rubin.io/bitcoin/2021/12/21/advent-24/

I have not done any work to analyze the profitability of these contracts or
how you might price and risk them, or if a two sided market among miners
actually exists. That's not really my expertise.

But maybe someone can figure that out and let us all know :)

cheers,

Jeremy

--
@JeremyRubin 



[bitcoin-dev] [Bitcoin Advent Calendar] Derivatives and Options

2021-12-20 Thread Jeremy via bitcoin-dev
Hi Devs,

Today's post is on building options/derivatives in Sapio!

https://rubin.io/bitcoin/2021/12/20/advent-23

Enjoy!

Cheers,

Jeremy



--
@JeremyRubin 



[bitcoin-dev] [Bitcoin Advent Calendar] NFTs Part Two: Auctions, Royalties, Mints, Generative, Game Items

2021-12-19 Thread Jeremy via bitcoin-dev
Hi Devs!

More on NFTs today! Code demos of dutch auctions of NFTs + royalties, and
then discussion of a few other concepts I'm excited about.

https://rubin.io/bitcoin/2021/12/19/advent-22/

Particularly novel is the combination of attestation chains, lightning
invoices, and NFTs to create off-chain updatable and on-chain sellable
in-game items.

Till tomorrow!

Cheers,

Jeremy

--
@JeremyRubin 



[bitcoin-dev] [Bitcoin Advent Calendar] Packaging Sapio Applications

2021-12-18 Thread Jeremy via bitcoin-dev
hi devs,

today's topic is packaging Sapio applications. maybe a bit more annoying
than usual, but important.

https://rubin.io/bitcoin/2021/12/18/advent-21/


I think WASM is really really cool! It's definitely been very helpful for
Sapio. It'd be kinda neat if at some point software like Bitcoin Core could
run Sapio modules natively and offer users extended functionality based on
that. For now I'm building out the wallet as Sapio Studio, but a boy can
dream. I know there are some bitcoiners (in particular, the rust-bitcoiners
& rust-lightning) who like WASM for shipping stuff to browsers!

WASM is also something I've been thinking about w.r.t. how we ship
consensus upgrades. It would be kinda groovy if we could implement the
semantics of pieces of Bitcoin code as WASM modules; e.g., if pieces of
consensus could be compiled to and run through a WASM system, it would
help guarantee that those pieces of the code are entirely deterministic.
Maybe something for Simplicity to consider: WASM as the host language for
JET extensions!


Cheers,

Jeremy

--
@JeremyRubin 



Re: [bitcoin-dev] Proposal: Full-RBF in Bitcoin Core 24.0

2021-12-18 Thread Jeremy via bitcoin-dev
Small idea:

ease into full-RBF by keeping the flag working, but make enforcement of
non-replaceability something that begins only n seconds after a
transaction is first seen.
this reduces the ability to partition the mempools by broadcasting
irreplaceable conflicts all at once, and slowly eases clients off of
relying on non-RBF.

we might start with 60 seconds, and then double every release till we get
to 600 at which point we disable it.
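A minimal sketch of that policy check, as I understand it (function name and structure are illustrative, not an actual Bitcoin Core interface):

```python
# Sketch of the proposed ease-in: a tx signaling opt-out of RBF remains
# replaceable for its first window_secs in the mempool; the opt-out is
# only enforced after that. The window doubles each release (60, 120,
# 240, 480), and at 600 the flag is disabled entirely (full RBF).

def replacement_allowed(signals_rbf, now, first_seen, window_secs):
    if signals_rbf:
        return True  # BIP-125 opt-in: always replaceable
    # Opt-out is honored only once the tx has sat for window_secs;
    # early conflicting broadcasts can still be reconciled by fee.
    return now - first_seen < window_secs

assert replacement_allowed(False, now=30, first_seen=0, window_secs=60)
assert not replacement_allowed(False, now=61, first_seen=0, window_secs=60)
assert replacement_allowed(True, now=1000, first_seen=0, window_secs=60)
```

Under this reading, simultaneously broadcast irreplaceable conflicts all land in the replaceable window, so mempools converge on the highest-fee version instead of partitioning.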
--
@JeremyRubin 



On Tue, Jun 15, 2021 at 10:00 AM Antoine Riard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi,
>
> I'm writing to propose deprecation of opt-in RBF in favor of full-RBF as
> the Bitcoin Core's default replacement policy in version 24.0. As a
> reminder, the next release is 22.0, aimed for August 1st, assuming
> agreement is reached, this policy change would enter into deployment phase
> a year from now.
>
> Even if this replacement policy has been deemed as highly controversial a
> few years ago, ongoing and anticipated changes in the Bitcoin ecosystem are
> motivating this proposal.
>
> # RBF opt-out as a DoS Vector against Multi-Party Funded Transactions
>
> As explained in "On Mempool Funny Games against Multi-Party Funded
> Transactions'', 2nd issue [0], an attacker can easily DoS a multi-party
> funded transactions by propagating an RBF opt-out double-spend of its
> contributed input before the honest transaction is broadcasted by the
> protocol orchester. DoSes are qualified in the sense of either an attacker
> wasting timevalue of victim's inputs or forcing exhaustion of the
> fee-bumping  reserve.
>
> This affects a series of Bitcoin protocols such as Coinjoin, onchain DLCs
> and dual-funded LN channels. As those protocols are still in the early
> phase of deployment, it doesn't seem to have been executed in the wild for
> now.  That said, considering that dual-funded are more efficient from a
> liquidity standpoint, we can expect them to be widely relied on, once
> Lightning enters in a more mature phase. At that point, it should become
> economically rational for liquidity service providers to launch those DoS
> attacks against their competitors to hijack user traffic.
>
> Beyond that, presence of those DoSes will complicate the design and
> deployment of multi-party Bitcoin protocols such as payment
> pools/multi-party channels. Note, Lightning Pool isn't affected as there is
> a preliminary stage where batch participants are locked-in their funds
> within an account witnessScript shared with the orchestrer.
>
> Of course, even assuming full-rbf, propagation of the multi-party funded
> transactions can still be interfered with by an attacker, simply
> broadcasting a double-spend with a feerate equivalent to the honest
> transaction. However, it tightens the attack scenario to a scorched earth
> approach, where the attacker has to commit equivalent fee-bumping reserve
> to maintain the pinning and might lose the "competing" fees to miners.
>
> # RBF opt-out as a Mempools Partitions Vector
>
> A longer-term issue is the risk of mempools malicious partitions, where an
> attacker exploits network topology or divergence in mempools policies to
> partition network mempools in different subsets. From then a wide range of
> attacks can be envisioned such as package pinning [1], artificial
> congestion to provoke LN channels closure or manipulation of
> fee-estimator's feerate (the Core's one wouldn't be affected as it relies
> on block confirmation, though other fee estimators designs deployed across
> the ecosystem are likely going to be affected).
>
> Traditionally, mempools partitions have been gauged as a spontaneous
> outcome of a distributed systems like Bitcoin p2p network and I'm not aware
> it has been studied in-depth for adversarial purposes. Though, deployment
> of second-layer
> protocols, heavily relying on sanity of a local mempool for fee-estimation
> and robust propagation of their time-sensitive transactions might lead to
> reconsider this position. Acknowledging this, RBF opt-out is a low-cost
> partitioning tool, of which the existence nullifies most of potential
> progresses to mitigate malicious partitioning.
>
>
> To resume, opt-in RBF doesn't suit well deployment of robust second-layers
> protocol, even if those issues are still early and deserve more research.
> At the same time, I believe a meaningful subset of the ecosystem  are still
> relying
> on 0-confs transactions, even if their security is relying on far weaker
> assumptions (opt-in RBF rule is a policy rule, not a consensus one) [2] A
> rapid change of Core's mempool rules would be harming their quality of
> services and should be
> weighed carefully. On the other hand, it would be great to nudge them
> towards more secure handling of their 0-confs flows [3]
>
> Let's examine what could be deployed ecosystem-wise as enhancements to the
> 0-confs security model.
>
> # Proactive 

Re: [bitcoin-dev] [Bitcoin Advent Calendar] Oracles, Bonds, and Attestation Chains

2021-12-17 Thread Jeremy via bitcoin-dev
Yep, these are great points. There is no way to punish signing the wrong
thing directly; you can only ensure that answers can't be changed without
risk to funds.

One of the interesting things is that upon a single equivocation you get
unbounded equivocation by 3rd parties, e.g., you can completely rewrite the
entire signature chain!

Another interesting point: if you use a musig key for your staking key that
is musig(a,b,c) you can sign with a until you equivocate once, then switch
to b, then c. Three strikes and you're out! IDK what that could be used for.
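One way to picture that rotation (pure bookkeeping sketch of my own -- no real MuSig cryptography is performed here):

```python
# Illustrative sketch of the "three strikes" staking key: the key is
# musig(a, b, c); each equivocation burns the current component key and
# the staker rotates to the next one. Names are hypothetical.

class StakingKey:
    def __init__(self, component_keys):
        self.keys = list(component_keys)
        self.strikes = 0

    def current(self):
        if self.strikes >= len(self.keys):
            raise RuntimeError("struck out: no unburned component keys left")
        return self.keys[self.strikes]

    def record_equivocation(self):
        self.strikes += 1  # burn the current component key

k = StakingKey(["a", "b", "c"])
assert k.current() == "a"
k.record_equivocation()
assert k.current() == "b"
```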

Lastly, while you can't punish lying, you could say "only the stakers who
sign with the majority get allocated reward tokens for that slot". You
could then equivocate to switch and get tokens, but you'd burn your
collateral for them. This does create an incentive for the stakers to try
to sign the "correct" statement in line with their peers.


[bitcoin-dev] Globally Broadcasting Workshares to Improve Finality Heuristics

2021-12-17 Thread Jeremy via bitcoin-dev
An interesting concept occurred to me today while chatting with Nic Carter.

If we set Bitcoin Core up to gossip headers for work shares (e.g., an
expected 500 headers per block would have roughly 20kb of overhead,
assuming we don't need to send the prev hash), we'd have more accurate
finality estimates and warnings if we see hashrate abandoning our chain
tip. This is observable regardless of whether dishonest miners choose not
to publish their work on non-tip shares, since you can notice the missing
work.
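For context, a quick back-of-envelope check of the overhead figure (the share count and header sizes are assumptions taken from the text; the exact size depends on what per-share data is actually gossiped):

```python
# Back-of-envelope check of the bandwidth claim: 500 work-share headers
# per block, 80-byte block headers, optionally omitting the 32-byte
# prev-hash field. All numbers are assumptions from the email.

HEADER_BYTES = 80
PREVHASH_BYTES = 32
SHARES_PER_BLOCK = 500

full = SHARES_PER_BLOCK * HEADER_BYTES                        # 40,000 bytes
trimmed = SHARES_PER_BLOCK * (HEADER_BYTES - PREVHASH_BYTES)  # 24,000 bytes

print(f"full: {full / 1000:.0f} kB, without prev hash: {trimmed / 1000:.0f} kB")
```

Dropping the prev hash lands in the ballpark of the ~20 kB figure cited; further trimming (e.g., shared nBits or timestamps) would shrink it more.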

In the GUI, we could give users an additional warning that they might want
to wait longer if they are accepting a payment during a sudden hashrate
decrease.

Has this been discussed before?

Cheers,

Jeremy

--
@JeremyRubin 



[bitcoin-dev] [Bitcoin Advent Calendar] Oracles, Bonds, and Attestation Chains

2021-12-17 Thread Jeremy via bitcoin-dev
Today's post is pretty cool: it details how covenants like CTV can be used
to improve on-chain bitcoin signing oracles by solving the timeout/rollover
issue and solving the miner/oracle collusion issue on punishment. This
issue is similar to the Blockstream Liquid Custody Federation rollover bug
from a while back (which this type of design also helps to fix).

https://rubin.io/bitcoin/2021/12/17/advent-20/

It also describes:
- how a protocol on top can make 'branch free' attestation chains where if
you equivocate your funds get burned.
- lightly, various uses for these chained attestations

In addition, Robin Linus has a great whitepaper he put out getting much
more in the weeds on the concepts described in the post, it's linked in the
first bit of the post.

cheers,

Jeremy

--
@JeremyRubin 



[bitcoin-dev] [Bitcoin Advent Calendar] Part One: Implementing NFTs in Sapio

2021-12-16 Thread Jeremy via bitcoin-dev
I know NFTs are controversial, but here's my take on them in Sapio:

https://rubin.io/bitcoin/2021/12/16/advent-19/

If you don't like NFTs, don't worry: the results and techniques are
entirely generalizable here and can apply to many other types of things
that aren't stupid JPGs.

E.g.,

- If you squint, Lightning Channels are NFTs: I have a channel with someone
and I can't transfer it to a third party fungibly because both the
remaining side and entering side want to know about the counterparty
reputation.
- DLCs are NFTs because I want to know not just counterparties, but also
which oracles.
- Colored Coins/Tokens, definitionally, are not NFTs, but fractional shares
of an NFT are Colored Coins, so NFT research might yield new results for
Colored Coins.

Advancing the state of the art for NFTs advances the state of the art for
all sorts of other purposes, while letting us have a little fun. This is a
strong callback to https://rubin.io/bitcoin/2021/12/14/advent-17/ and
https://rubin.io/bitcoin/2021/12/03/advent-6/ if you want to read more on
why things like NFTs are cool even if JPGs are lame.

Cheers,

Jeremy



--
@JeremyRubin 



Re: [bitcoin-dev] [Bitcoin Advent Calendar] Decentralized Coordination Free Mining Pools

2021-12-16 Thread Jeremy via bitcoin-dev
high level response:

including a small number of block headers (10?) directly as op_return
metadata (or something) doesn't have that high overhead necessarily, but
could be super effective at helping miners participate with lower hashrate.
the reason to include this as on-chain data is so that the mining pool
doesn't require any external network software.

this would balance out the issues if the data is somewhat bounded (e.g., 10
headers). what's nice is this data has no consensus meaning as it's client
side validated by the DCFMP block filter.

interestingly, the participating pools could 'vote' on how difficult shares
should be as a metaparameter to the pool over blocks... but analysis gets
more complex with that.

cheers,

jeremy


Re: [bitcoin-dev] [Bitcoin Advent Calendar] Decentralized Coordination Free Mining Pools

2021-12-15 Thread Jeremy via bitcoin-dev
I could add a comparison to p2pool if you want, but bear in mind this is a
blog post designed to introduce a complex topic to a wide audience, not a
literature review of all possible designs and prior art.

In particular, while P2Pool and DCFMP share a goal (decentralizing
mining), their approaches bear very little similarity: DCFMP is focused on
making the pooling a pure client-side-validatable function of the existing
chain, and on not creating a major mining-centralization risk through
reliance on a new network running on top of Bitcoin. DCFMP also lacks the
core value prop of P2Pool, which is higher resolution on share assignment.

Further, DCFMP's core innovations are Payment-Pool-based and
non-interactive channel-based payouts, something P2Pool does not have but
could adopt, in theory, to solve its payout problems[^note]. I still
believe that making a unified layer of networked software that all miners
run on top of Bitcoin, in the loop of mining, is a major risk and an
architecturally bad idea, hence my advocacy for doing such designs as
micro-pools inside a DCFMP; it would be possible to make the "micropools"
run on P2Pool-like software, and the DCFMP allows smaller P2Pools to
aggregate their hashrate trustlessly with the main DCFMP shares.



[^note]: for what it's worth, I was not familiar with p2pool very much
before I came up with DCFMP. The lineage of my conceptual work was
determinism, payment pools, and then realizing they could do something for
mining.
--
@JeremyRubin 



On Wed, Dec 15, 2021 at 1:11 PM  wrote:

> How does this differ from p2pool?
>
> If you've just re-invented p2pool, shouldn't you credit their prior art?
>
> Monero is doing their implementation of p2pool. They have viable solo
> mining, as far as I understand. The basic idea is you have several
> P2pools. If you have a block time of 10 minutes, p2pool has 20% of
> hashrate, and there's 100 p2pool chains, each chain gets 0.2% of net
> hash. If you're OK with 20s block times (orphans aren't really a big
> problem), you need (20/600) * (0.02/100) = 0.00067% of network hash to
> get a payout every 10m.
>


Re: [bitcoin-dev] [Bitcoin Advent Calendar] Decentralized Coordination Free Mining Pools

2021-12-15 Thread Jeremy via bitcoin-dev
Hi Billy!

Thanks for your response. Some replies inline:


On Wed, Dec 15, 2021 at 10:01 AM Billy Tetrud via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Looks like an interesting proposal, but it doesn't seem to quite match the
> goals you mentioned. As you do mention, this mining pool coordination
> doesn't get rid of the need for mining pools in the first place. So it
> doesn't satisfy item 1 on your goal list afaict.
>

It does, actually :) Point 1 was

   1. Funds should not be centrally custodied, ever, if at all

And for top-level pool participants there is never any central custody.
What the windows are (100 blocks, 2016, 4032, etc.) is up to the specific
implementation, which sets limits on how small you can be and still
participate.

Further, for the entities that are too small:

from the article:
> The blocks that they mine should use a taproot address/key which is a
> multisig of some portion of the workshares, that gets included in the
> top-level pool as a part of a Payment Pool.

The micro-pools embed a multisig of top-contributors, 'reputable' members,
or on a rotating basis, as a leaf node to the parent. They then opt-out of
having their leaf channel-ized, as noted.

This would be fully non-custodial if we always included all miners. The
issue is that opens up DoS if one miner goes away, so you do want to anchor
around a few.

In this mode, you can set the protocol up such that immediately after
getting a reward in a block, you should see the chosen nodes for multi-sigs
distribute the spoils according to the schedule that is agreed on in the
block causing the share to be granted.

The main issue is data availability: without extra in-band storage, local
mining pools have to track the work shares (which can be committed to in a
block) locally for auditing.

This is not fully non-custodial, but it doesn't have to be centrally
custodied by one party. We can multisig immediately after every block (and
nodes should quit their pool if they don't get sigs quickly perhaps).
Further, nodes can hash into multiple pools dividing their risk (modulo
sybil attack) across many pools.

If we had stronger covenants (CAT, AMOUNT, DIVIDE/MUL), we could make every
leaf node commit to payment pools that operate on percents instead of fixed
amounts and we'd be able to handle this in a manner that the payment pools
work no matter what amount is assigned to them.
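A toy contrast between fixed-amount and percent-based pool leaves, in plain arithmetic (the covenant opcodes named above that would enforce the percent version in Script are hypothetical, and so are these function names):

```python
# Fixed-amount payment pool leaves break when the assigned amount changes;
# percent-based leaves scale to whatever reward the block actually pays.

def fixed_split(total, fixed_amounts):
    # Fixed leaves only work if the reward matches what was pre-committed.
    if sum(fixed_amounts) != total:
        raise ValueError("pre-committed amounts don't match the reward")
    return fixed_amounts

def percent_split(total, shares):
    # Percent leaves divide any amount pro rata by work-share weight.
    weight = sum(shares)
    return [total * s // weight for s in shares]

reward = 6_250_000_000  # 6.25 BTC in sats
assert percent_split(reward, [1, 1, 2]) == [
    1_562_500_000, 1_562_500_000, 3_125_000_000]
```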



> The primary benefits over what we have today that I can see are:
> 1. increased payout regularity, which lowers the viable size of mining
> pools, and
> 2. Lower on chain footprint through combining pay outs from multiple pools.
>
> Am I missing some?
>
> These are interesting benefits, but it would be nice if your post was
> clearer on that, since the goals list is not the same as the list of
> potential benefits of this kind of design.
>

I think I hit all the benefits mentioned:

1. Funds should not be centrally custodied, ever, if at all.
See above -- we can do better for smaller miners, but we hit this for
miners above the threshold.

2. No KYC/AML.
See above -- payouts are done in a 'decentralized' fashion by every miner
mining to the payout.

3. No “Extra network” software required.
You need the WASM, but you do not need any networked software to
participate, so there are no DoS concerns from participating.

You do need extra software to e.g. use channels or cut through multiple
pools, but only after the fact of mining.

4. No blockchain bloat.

Very little, if cut-through + LN works.


5. No extra infrastructure.

Not much needed, if anything. I don't really know what 'infrastructure'
means, but I kind of imagined it to mean 'big expensive things' that would
make it hard to partake.


6. The size of a viable pool should be smaller. Remember our singer -- if
you just pool with one other songwriter, it doesn't bring your expected
time till payout within your lifetime. So the bigger the pool, the more
regular the payouts. We want the smallest possible "units of control" with
the most regular payouts possible.

I think this works, roughly?
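For intuition on goal 6, here is a back-of-the-envelope sketch (the
hashrate fractions are illustrative numbers I chose, not from the post) of
why bigger pools mean more regular payouts:

```python
# Block finding is roughly a Poisson process: a miner with fraction p of
# network hashrate expects p * 144 blocks per day, so the mean wait for a
# first payout is 1 / (p * 144) days.

BLOCKS_PER_DAY = 144

def expected_days_to_first_block(hashrate_fraction: float) -> float:
    return 1.0 / (hashrate_fraction * BLOCKS_PER_DAY)

solo = expected_days_to_first_block(1e-9)    # a tiny solo miner
pooled = expected_days_to_first_block(1e-3)  # a pool with 0.1% of hashrate
print(f"solo: ~{solo / 365:.0f} years; pooled: ~{pooled:.1f} days")
```

The smaller the "unit of control", the longer the expected wait, which is
exactly the pressure this design tries to relieve without giant pools.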


> As far as enabling solo mining, what if this concept were used off chain?
> Have a public network of solo miners who publish "weak blocks" to that
> network, and the next 100 (or 1000 etc) nice miners pay you out as long as
> you're also being nice by following the protocol? All the nice
> optimizations you mentioned about eg combined taproot payouts would apply i
> think. The only goals this wouldn't satisfy are 3 and 5 since an extra
> network is needed, but to be fair, your proposal requires pools which all
> need their own extra network anyways.
>
> The missing piece here would be an ordering of weak blocks to make the
> window possible. Or at least a way to determine what blocks should
> definitely be part of a particular block's pay out. I could see this being
> done by a separate ephemeral blockchain (which starts fresh after each
> Bitcoin block) that keeps track of which weak blocks have been submitted,
> potentially 

[bitcoin-dev] [Bitcoin Advent Calendar] Sapio Studio Payment Pool Walkthrough

2021-12-15 Thread Jeremy via bitcoin-dev
Hi Devs,

Today's post is showing off how the Sapio Studio, the GUI smart contract
composer for Sapio, functions.
https://rubin.io/bitcoin/2021/12/15/advent-18/

In contrast to other posts this is mostly pictures.

This is a part of the project that could definitely use some development
assistance if anyone is interested in pushing the frontier of bitcoin
wallet functionality :)

Best,

Jeremy

--
@JeremyRubin 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Bitcoin Advent Calendar] Decentralized Coordination Free Mining Pools

2021-12-14 Thread Jeremy via bitcoin-dev
I've received some confused messages that whatever I was replying to didn't
come through, I've reproduced Bob's e-mail below that I was responding to
for context:

*This, quite simply, is not a "pool". A pool is by definition a tool to
reduce profit variance by miners by collecting "weak blocks" that do not
meet the difficulty target, so as to get a better statistical measure of
each miner's hashrate, which is used to subdivide profits. These are
called "shares" and are entirely absent here.

The only available information here to decide payouts is the blocks
themselves; I do not have any higher statistics measurement to subdivide
payments. If I expect to earn 3 blocks within the window, sometimes I will
earn 2 and sometimes I will earn 4. Whether I keep the entire coinbase in
those 2-4 blocks, or I have 100 other miners paying me 1/100 as much 100
times, my payment is the same and must be proportional to the number of
blocks I mine in the window. My variance is not reduced.

Further, by making miners pay other miners within the window N, this
results in N^2 payments to miners which otherwise would have had N
coinbase payments. So, this is extremely block-space inefficient for no
good reason. P2Pool had the same problem and generated giant coinbases
which competed with fee revenue. "Congestion control" makes this somewhat
worse since it is an absolute increase in the block space consumed for
these N^2 payments.

The only thing this proposal does do is smooth out fee revenue. While
hedging on fee revenue is valuable, this is an extremely complicated and
expensive way to go about it, that simultaneously *reduces* fee revenue
due to all the extra block space used for miner payouts.*



Re: [bitcoin-dev] [Bitcoin Advent Calendar] Decentralized Coordination Free Mining Pools

2021-12-14 Thread Jeremy via bitcoin-dev
Bitcoin didn't invent the concept of pooling:
https://en.wikipedia.org/wiki/Pooling_(resource_management). This is a
Bitcoin Mining Pool, although it may not be your favorite kind, which is
fixated on specific properties of computing contributions before finding a
block. Pooling is just a general technique for aggregating resources to
accomplish something. If you have another name like pooling that is in
common use for this type of activity I would be more than happy to adopt it.

This sort of pool can hedge not only against fee rates but also against
increases in hashrate since your historical rate 'carries' into the future
as a function of the window. Further, windows and reward functions can be
defined in a myriad of ways that could, e.g., pay less to blocks found in
more rapid succession, contributing to the smoothing functionality.
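As one hypothetical instance of such a reward function (my own toy choice,
not something specified in the post): weight each block by its time gap
from the previous block, discounting blocks found in rapid succession.

```python
def payout_weights(block_gaps_sec, half_gap=600):
    """Toy window reward function: a block found instantly earns ~0,
    one at the 600s target spacing earns 0.5, and slow blocks
    asymptotically approach a full weight of 1."""
    return [g / (g + half_gap) for g in block_gaps_sec]

print(payout_weights([30, 600, 1800]))  # rapid-succession block is discounted
```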

With respect to sub-block pooling, as described in the article, this sort
of design also helps with micro-pools being able to split resources
non-custodially in every block as a part of the higher order DCFMP. The
point is not, as noted, to enable solo mining an S9, but to decrease the
size of the minimum viable pool. It's also possible to add, without much
validation or data, an 'uncle block' type mechanism in an incentive
compatible way (e.g., add 10 pow-heavy headers on the last block at a cost
of 48 bytes of header + 32 bytes of payout key each), such that there's an
incentive to include the heaviest ones you've seen, not just your own.
Such mechanisms are worth further study and consideration (particularly
because they're non-consensus, only for opt-in participation in the pool).
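Taking the figures in the text at face value, the byte cost of that
uncle-header idea works out as follows:

```python
# 10 uncle headers, each a 48-byte compact header plus a 32-byte payout key.
N_UNCLES, HEADER_BYTES, PAYOUT_KEY_BYTES = 10, 48, 32
total = N_UNCLES * (HEADER_BYTES + PAYOUT_KEY_BYTES)
print(total)  # 800 bytes of opt-in, non-consensus data per block
```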

With respect to space usage, it seems you wholly reject the viability of a
payment pool mechanism to cut-through chain space. Is this a critique that
holds for all Payment Pools, or just in the context of mining? Is there a
particular reason why you think it infeasible that "strongly online"
counterparties would be able to coordinate more efficiently? Is it
preferable for miners, the nexus of decentralization for Bitcoin, to use
custodial services for pooling (which may require KYC/AML) over bearing
the cost of some extra potential chain load?

Lastly, with respect to complexity, the proposal is actually incredibly
simple when you take it in a broader context. Non Interactive Channels and
Payment Pools are useful by themselves, so are the operations to merge them
and swap balance across them. Therefore most of the complexity in this
proposal is relying on tools we'll likely see in everyday use in any case,
DCFMP or no.

Jeremy


[bitcoin-dev] [Bitcoin Advent Calendar] A Defense of Having Fun (and maybe staying poor)

2021-12-14 Thread Jeremy via bitcoin-dev
Hi Devs,

Today's post is a little more philosophical and less technical. Based on
the private feedback I received (from >1 persons, perhaps surprisingly)
I'll continue to syndicate the remaining posts to this list.

Here it is: https://rubin.io/bitcoin/2021/12/14/advent-17/

To having a little fun every now and again, as a treat,

Jeremy



[bitcoin-dev] [Bitcoin Advent Calendar] Composability in Sapio Contracts

2021-12-13 Thread Jeremy via bitcoin-dev
Devs,

Here's today's post: https://rubin.io/bitcoin/2021/12/13/advent-16/

It covers how you can use Sapio modules composably. This is an active area
of research for the Sapio platform, so definitely welcome and appreciate
ideas and feedback.

One area I'm particularly happy with but also unhappy with is the
"JSONSchema Type System". It is remarkably flexible, which is useful, but a
better type system would be able to enforce guarantees more strongly. Of
course, comparing to things like ERC-20, Eth interfaces aren't particularly
binding (functions could do anything) so maybe it's OK. If you have
thoughts on better ways to accomplish this, would love to think it through
more deeply. I'm particularly excited about ways to introduce more formal
correctness.

Cheers,

Jeremy

p.s. -- feel free to send me any general feedback on the series out
of band. There's a couple posts in the pipeline that are a bit less
development focused like the earlier posts I excluded, and I could filter
them if folks are feeling like it's too much information, but I'd bias
towards posting the remaining pieces as they come for continuity. Let me
know if you feel strongly about a couple posts that might be a topical
reach for this list.



Re: [bitcoin-dev] [Bitcoin Advent Calendar] Decentralized Coordination Free Mining Pools

2021-12-12 Thread Jeremy via bitcoin-dev
Hey there!

Thanks for your response!

One of the reasons to pick a longer window of, say, a couple difficulty
periods would be that you can make participation in the pool hedge you
against hashrate changes.

You're absolutely spot on to think about the impact of pooling w.r.t.
variance when fees > subsidy. That's not really in the analysis I had in
the (old) post, but when the block revenues swing, dcfmp over longer
periods can really smooth out the revenues for miners in a great way. This can
also help with the "mind the gap" problem when there isn't a backlog of
transactions, since producing an empty block still has some value (in
order to incentivize mining transactions at all and not cheating, we need
to reward txn inclusion, as I think you're trying to point out).

Sadly, I've read the rest of your email a couple times and I don't really
get what you're proposing at all. It jumps right into "things you could
compute". Can you maybe try stating the goals of your payout function, and
then demonstrate how what you're proposing meets that? E.g., we want to pay
more to miners that do x?


[bitcoin-dev] [Bitcoin Advent Calendar] Decentralized Coordination Free Mining Pools

2021-12-12 Thread Jeremy via bitcoin-dev
Howdy, welcome to day 15!

Today's post covers a form of a mining pool that can be operated as sort of
a map-reduce over blocks without any "infrastructure".

https://rubin.io/bitcoin/2021/12/12/advent-15/

There's still some really open-ended questions (perhaps for y'all to
consider) around how to select and analyze the choice of window and payout
functions, but something like this could alleviate a lot of the
centralization pressures typically faced by pools.

Notably, compared to previous attempts, combining the payment pool payout
with this concept means that there is practically very little on-chain
overhead from this approach as the chain-load
for including payouts in every block is deferred for future cooperation
among miners. Although that can be considered cooperation itself, if you
think of it like a pipeline, the cooperation happens out of band from
mining and block production so it really is coordination free to mine.


Cheers,

Jeremy



[bitcoin-dev] [Bitcoin Advent Calendar] Payment Channels in a CTV+Sapio World

2021-12-11 Thread Jeremy via bitcoin-dev
hola devs,

This post details more formally a basic version of payment channels built
on top of CTV/Sapio and the implications of having non-interactive channel
creation.

https://rubin.io/bitcoin/2021/12/11/advent-14/

I'm personally incredibly bullish on where this concept can go since it
would make channel opening much more efficient, especially when paired with
the payment pool concept shared the other day.

Best,

Jeremy



[bitcoin-dev] [Bitcoin Advent Calendar] Payment Pools/ Coin Pools

2021-12-10 Thread Jeremy via bitcoin-dev
This post showcases building payment pools / coin pools* in Sapio!

https://rubin.io/bitcoin/2021/12/10/advent-13/

There will be many more posts in the series that will take this concept a
lot further and showcase some more advanced things that can be built.

I think that payment pools are incredibly exciting -- we know that it's
going to be tough to give every human a UTXO, even with Lightning. Payment
Pools promise to help compress that chain load into single utxos so that
users can be perfectly secure with a proof root and just need to do some
transactions to recover their coins. While channels could live inside of
payment pools, scaling via payment pools without nested channels can be
nice because there is no degradation of assumptions for the coins inside
being able to broadcast transactions quickly.

Payment pools in Sapio also provide a natural evolution path for things
like Rollups (they're essentially federated rollups with unilateral exits),
where state transitions in pools could one day be enforced by either
covenants or some sort of ZK system in place of N-of-N signatures.

Hopefully this stimulates some folks to muck around with Sapio and
experiment creating their own custom Payment Pools! I'd love to see someone
hack some kind of EVM into the state transition function of a payment pool
;)

Cheers,

Jeremy

* we should probably nail down some terminology -- I think Payment Pools /
Coin Pools are kinda "generic" names for the technique, but we should give
specific protocols more specific names like payment channels : lightning
network.



[bitcoin-dev] [Bitcoin Advent Calendar]: Congestion Control

2021-12-09 Thread Jeremy via bitcoin-dev
Today's post is a follow up to some older content about congestion control
& CTV.

It's written (as with the rest of the series) to be a bit more approachable
than technical, but there are code samples in Sapio of constructing a
payout tree.

today's post:
https://rubin.io/bitcoin/2021/12/09/advent-12/

older posts:
- https://utxos.org/analysis/bip_simulation/
- https://utxos.org/analysis/batching_sim/

Generally, I think the importance and potential of congestion control is
currently understated. The next couple posts will build on this with Coin
Pools, Mining Pools, and Lighting which also leverage congestion control
structures with multi-party opt-outs for added punch. But even in the base
case, these congestion control primitives can be really important for large
volume large value businesses to close out liabilities reliably without
being impacted too much by transient chain weather. Those types of demand
(high volume, high value) aren't served well by the lightning network
(ever) since the large values of flows would be difficult to route and
might prefer being deposited directly into cold storage given the amounts
at stake.

best,

Jeremy



[bitcoin-dev] [Bitcoin Advent Calendar] Inheritance Schemes

2021-12-08 Thread Jeremy via bitcoin-dev
Devs,

For today's post, something near and dear to our hearts: giving our sats to
our loved ones after we kick the bucket.

see: https://rubin.io/bitcoin/2021/12/08/advent-11/

Some interesting primitives, hopefully enough to spark a discussion around
different inheritance schemes that might be useful.

One note I think is particularly discussion worthy is how the UTXO model
makes inheritance backups sort of fundamentally difficult to do v.s. a
monolithic account model.

Cheers,

Jeremy



Re: [bitcoin-dev] [Lightning-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Jeremy via bitcoin-dev
IMO this is not a big problem. The problem is not whether a 0 value ever
enters the mempool, it's whether it is never spent. And even if C2/P1 goes
in, C1 can still be spent. In fact, P1's confirmation increases C1's
feerate, so it's somewhat likely it would go in. Further, C2 has to be
pretty expensive compared to C1 in order to be mined when C1 would not be,
so the user trying to do this has to pay for it.

If we're worried it might never be spent again since no incentive, it's
rational for miners *and users who care about bloat* to save to disk the
transaction spending it to resurrect it. The way this can be broken is if
the txn has two inputs and that input gets spent separately.

That said, I think if we can say that taking advantage of keeping the 0
value output will cost you more than if you just made it above dust
threshold, it shouldn't be economically rational to not just do a dust
threshold value output instead.

So I'm not sure to what extent we should bend over backwards to make 0
value outputs impossible vs. making them inconvenient enough not to be
popular.



-
Consensus changes below:
-

Another possibility is to have a utxo with drop semantics; if UTXO X with
some flag on it is not spent in the block it is created, it expires and can
never be spent. This is essentially an inverse timelock, but severely
limited to one block and mempool evictions can be handled as if a conflict
were mined.
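A toy model of the drop semantics described above (the data layout here is
hypothetical and purely for illustration, not actual Bitcoin Core
structures): a flagged output is valid only if some transaction in the same
block spends it.

```python
def block_respects_drop_semantics(block_txs):
    """Each tx is {'txid': str, 'expiring_outputs': set of vout indices,
    'spends': set of (txid, vout) pairs}. Every flagged output created in
    this block must also be spent within it; otherwise it expires and a
    block leaving it unspent is invalid."""
    spent = set()
    for tx in block_txs:
        spent |= tx["spends"]
    return all((tx["txid"], vout) in spent
               for tx in block_txs
               for vout in tx["expiring_outputs"])

good = [
    {"txid": "a", "expiring_outputs": {0}, "spends": set()},
    {"txid": "b", "expiring_outputs": set(), "spends": {("a", 0)}},
]
print(block_respects_drop_semantics(good))  # True: the anchor is consumed
```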

These types of 0 value outputs could be present just for attaching fee in
the mempool but be treated like an op_return otherwise. We could add two
cases for this: one bare segwit version (just the number, no data) and one
that's equivalent to taproot. This covers OP_TRUE anchors very efficiently
and ones that require a signature as well.

This is relatively similar to how Transaction Sponsors works, but without
full tx graph de-linkage... obviously I think if we'll entertain a
consensus change, sponsors makes more sense, but expiring utxos doesn't
change as many properties of the tx-graph validation so might be simpler.


Re: [bitcoin-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Jeremy via bitcoin-dev
Bastien,

The issue is that with Decker Channels you either use SIGHASH_ALL / APO and
don't allow adding outs (this protects against certain RBF pinning on the
root with bloated wtxid data) and have anchor outputs or you do allow them
and then are RBF pinnable (but can have change).

Assuming you use anchor outs, then you really can't use dust-threshold
outputs as it either breaks the ratcheting update validity (if the specific
amount paid to output matters) OR it allows many non-latest updates to
fully drain the UTXO of any value.

You can get around needing N of them by having a congestion-control tree
setup in theory; then you only need log(n) data for one bumper, and (say)
1.25x the data if all N want to bump. This can be a nice trade-off between
letting everyone bump and not. Since these could be chains of IUTXOs, they
don't need to carry any weight directly.
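The log(n)/1.25x figures can be sanity-checked with a toy node count. I
assume a radix-5 payout tree here (the radix isn't specified above); in a
radix-r tree the internal nodes add roughly N/(r-1) transactions on top of
the N leaves:

```python
def one_bumper_txs(n_leaves, radix):
    """Transactions one participant must publish: the root-to-leaf path."""
    depth, span = 0, 1
    while span < n_leaves:
        span *= radix
        depth += 1
    return depth

def all_bumpers_txs(n_leaves, radix):
    """Transactions if every participant bumps: every node in the tree."""
    total, layer = 0, n_leaves
    while layer > 1:
        total += layer
        layer = (layer + radix - 1) // radix  # ceil division to parent layer
    return total + 1  # plus the root

n, r = 625, 5
print(one_bumper_txs(n, r))       # 4: log_5(625) txs for a single bumper
print(all_bumpers_txs(n, r) / n)  # ~1.25x total data if all 625 bump
```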

The carve out would just be to ensure that CPFP 0 values are known how to
be spent.







[bitcoin-dev] Take 2: Removing the Dust Limit

2021-12-07 Thread Jeremy via bitcoin-dev
Bitcoin Devs (+cc lightning-dev),

Earlier this year I proposed allowing 0 value outputs and that was shot
down for various reasons, see
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-August/019307.html

I think that there can be a simple carve out now that package relay is
being launched based on my research into covenants from 2017
https://rubin.io/public/pdfs/multi-txn-contracts.pdf.

Essentially, if we allow 0 value outputs BUT require as a matter of policy
(or consensus, but policy has major advantages) that the output be used as
an Intermediate Output (that is, for the transaction creating it to be in
the mempool, it must be spent by another tx), with the additional rule
that the parent must have a higher feerate after CPFP'ing than the parent
alone, we can:

1) Allow 0 value outputs for things like Anchor Outputs (very good for not
getting your eltoo/Decker channels pinned by junk witness data using Anchor
Inputs, very good for not getting your channels drained by at-dust outputs)
2) Not allow 0 value utxos to proliferate for long
3) It still being valid for a 0 value that somehow gets created to be spent
by the fee paying txn later
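The feerate condition in the rule above can be sketched numerically; the
fee and size figures below are illustrative only.

```python
def package_ok(parent_fee, parent_vsize, child_fee, child_vsize):
    """Accept the 0-value-creating parent only if the CPFP package's
    feerate beats the parent's own feerate, i.e. the child genuinely
    pays for the parent rather than riding along."""
    parent_rate = parent_fee / parent_vsize
    package_rate = (parent_fee + child_fee) / (parent_vsize + child_vsize)
    return package_rate > parent_rate

print(package_ok(0, 200, 1000, 150))   # True: child pays for both txs
print(package_ok(500, 200, 100, 150))  # False: child dilutes the feerate
```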

Just doing this as a mempool policy also has the benefits of not
introducing any new validation rules. Although in general the IUTXO concept
is very attractive, it complicates mempool :(

I understand this may also be really helpful for CTV based contracts (like
vault continuation hooks) as well as things like spacechains.

Such a rule -- if it's not clear -- presupposes a fully working package
relay system.

I believe that this addresses all the issues with allowing 0 value outputs
to be created for the narrow case of immediately spendable outputs.

Cheers,

Jeremy

p.s. why another post today? Thank Greg
https://twitter.com/JeremyRubin/status/1468390561417547780




Re: [bitcoin-dev] [Bitcoin Advent Calendar] What's Smart about Smart Contracts

2021-12-07 Thread Jeremy via bitcoin-dev

Hi!

On Tue, Dec 7, 2021 at 4:33 PM ZmnSCPxj  wrote:

> Good morning Jeremy,
>
> >
> > Here's the day 6 post: https://rubin.io/bitcoin/2021/12/03/advent-6/,
> the topic is why smart contracts (in extended form) may be a critical
> precursor to securing Bitcoin's future rather than something we should do
> after making the base layer more robust.
>
>
> *This* particular post seems to contain more polemic than actual content.
> This is the first post I read of the series, so maybe it is just a
> "breather" post between content posts?
>

The series in general is intended to be a bit more on the approachable side
than hardcore detail.



>
> In any case, given the subject line, it seems a waste not to discuss the
> actual "smart" in "smart" contract...
>
>
Yeah maybe a better title would be "The Case for Enhanced Functionality in
Bitcoin" -- it's not really about smart contracts per se, but the thing
that people are calling smart contracts in the broader community. This gets
down to prescriptive v.s. descriptive lingo and it's not really a debate I
care much for :)




> ## Why would a "Smart" contract be "Smart"?
>
> A "smart" contract is simply one that somehow self-enforces rather than
> requires a third party to enforce it.
> It is "smart" because its execution is done automatically.
>

There are no automatic executing smart contracts on any platform I'm aware
of. Bitcoin requires TX submission, same with Eth.

Enforcement and execution are different subjects.


> Consider the humble HTLC.
> **
> This is why the reticence of Bitcoin node operators to change the
> programming model is a welcome feature of the network.
> Any change to the programming model risks the introduction of bugs to the
> underlying virtual machine that the Bitcoin network presents to contract
> makers.
> And without that strong reticence, we risk utterly demolishing the basis
> of the "smart"ness of "smart" contracts --- if a "smart" contract cannot
> reliably be executed, it cannot self-enforce, and if it cannot
> self-enforce, it is no longer particularly "smart".
>

I don't think that anywhere in the post I advocated for playing fast and
loose with the rules to introduce any sort of unreliability.

What I'm saying is more akin to we can actually improve the "hardware" that
Bitcoin runs on to the extent that it actually does give us better ability
to adjudicate the transfers of value, and we should absolutely and
aggressively pursue that rather than keeping Bitcoin running on a set
mechanisms that are insufficient to reach the scale, privacy, self custody,
and decentralization goals we have.



> ## The N-of-N Rule
>
> What is a "contract", anyway?
>
> A "contract" is an agreement between two or more parties.
> You do not make a contract to yourself, since (we assume) you are
> completely a single unit (in practice, humans are internally divided into
> smaller compute modules with slightly different incentives (note: I did not
> get this information by *personally* dissecting the brains of any humans),
> hence the "we assume").



> Thus, a contract must by necessity require N participants


This is getting too pedantic about contracts. If you want to go there,
you're also missing "consideration".

Smart Contracts are really just programs. And you absolutely can enter
smart contracts with yourself solely, for example, Vaults (as covered in
day 10) are an example where you form a contract where you are intended to
be the only party.

You could make the claim that a vault is just an open contract between you
and some future would be hacker, but the intent is that the contract is
there to just safeguard you and those terms should mostly never execute. +
you usually want to define contract participants as not universally
quantified...

>
> This is of interest since in a reliability perspective, we often accept
> k-of-n.
> 
> But with an N-of-N, *you* are a participant and your input is necessary
> for the execution of the smart contract, thus you can be *personally*
> assured that the smart contract *will* be executed faithfully.
>
>
Yes I agree that N-N or K-N have uses -- Sapio is designed to work with
arbitrary thresholds in lieu of CTV/other covenant proposals which can be
used to emulate arbitrary business logic :)


However, the benefit of the contracts without that is non-interactivity of
sending. Having everyone online is a major obstacle for things like
decentralized coordination free mining pools (kinda, the whole coordination
free part). So if you just say "always do N-of-N" you basically lose the
entire thread of "smart contract capabilities improving the four pillars"
(covered in earlier posts), which solidifies bitcoin's adjudication of
transfers of value.


[bitcoin-dev] [Bitcoin Advent Calendar] Contract Primitives and Upgrades to Bitcoin

2021-12-07 Thread Jeremy via bitcoin-dev
This post is a mini high level SoK covering basic details of a number of
different new proposed primitives that folks might find useful -- I think
there's less to discuss around this post, since it is at a higher level and
the parts contained here could be discussed separately.

If something isn't on this list, it's an oversight by me and I'd love to
add it. The subjective criteria for inclusion/exclusion is if it seems
something the community is actively considering and is relatively well
researched.

Post here: https://rubin.io/bitcoin/2021/12/05/advent-8/

best,

Jeremy

(sorry it's out of order sent from the wrong email so it bounced, this is
day 8)



[bitcoin-dev] [Bitcoin Advent Calendar] Vaults

2021-12-07 Thread Jeremy via bitcoin-dev
Last one for today -- sorry for the overload, I had meant to post as the
series kicked off...

This post covers building various vaults/better cold storage using sapio
https://rubin.io/bitcoin/2021/12/07/advent-10/.

In an earlier post I motivated why self-custody is so critical (see
https://rubin.io/bitcoin/2021/11/30/advent-3/); this post demonstrates how
Sapio + CTV can dramatically enhance what users can do.

Cheers, you'll see me in the inbox tomorrow,

Jeremy



[bitcoin-dev] [Bitcoin Advent Calendar] Sapio Primer

2021-12-07 Thread Jeremy via bitcoin-dev
This post covers a basic intro to Sapio and links to more complete docs.
https://rubin.io/bitcoin/2021/12/06/advent-9/

I've previously shared Sapio on this list, and there's been a lot of
progress since then! I think Sapio is a fantastic system to express Bitcoin
ideas in, even if you don't want to use it for your production
implementation. Most of the future posts in the series will make heavy use
of Sapio so it's worth getting comfortable with, at least for reading.

Cheers,

Jeremy



[bitcoin-dev] [Bitcoin Advent Calendar] Review of Smart Contract Concepts

2021-12-07 Thread Jeremy via bitcoin-dev
This post covers some high-level smart contract concepts that different
opcodes or proposals could have (or not).

https://rubin.io/bitcoin/2021/12/04/advent-7/

Interested to hear about other properties that you think are relevant!

Best,

Jeremy



[bitcoin-dev] [Bitcoin Advent Calendar] What's Smart about Smart Contracts

2021-12-07 Thread Jeremy via bitcoin-dev
Hi!

Over the next month I'm doing a one-a-day blog post series till Christmas,
and I think some of the posts might be appropriate for discussion here.

Unfortunately I forgot to start the calendar series syndicated here too...
The first few posts are less bitcoin development related and philosophical,
so I think we could skip them and start around Day 6 and I'll post the rest
up to Day 10 here today (and do every day starting tomorrow). You can see
an archive of all posts at https://rubin.io/archive/. Every post will have
[Bitcoin Advent Calendar] if you wish to filter it :(.

-

Here's the day 6 post: https://rubin.io/bitcoin/2021/12/03/advent-6/, the
topic is why smart contracts (in extended form) may be a critical precursor
to securing Bitcoin's future rather than something we should do after
making the base layer more robust.

Cheers,

Jeremy Rubin



Re: [bitcoin-dev] On the regularity of soft forks

2021-10-11 Thread Jeremy via bitcoin-dev
*> ... in this post I will argue against frequent soft forks with a single
or minimal*
*> set of features and instead argue for infrequent soft forks with batches*
*> of features.*

I think this type of development has been discussed in the past and has
been rejected.


from: Matt Corallo's post:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017547.html


Matt: Follow the will of the community, irrespective of individuals or
unreasoned objection, but without ever overruling any reasonable
objection. Recent history also includes "objection" to soft forks in the
form of "this is bad because it doesn't fix a different problem I want
fixed ASAP". I don't think anyone would argue this qualifies as a
reasonable objection to a change, and we should be in a place, as a
community (never as developers or purely one group), to ignore such
objections and make forward progress in spite of them. We don't make
good engineering decisions by "bundling" unrelated features together to
enable political football and compromise.

AJ: - improvements: changes might not make everyone better off, but we
    don't want changes to screw anyone over either -- pareto improvements
    in economics, "first, do no harm", etc. (if we get this right, there's
    no need to make compromises and bundle multiple flawed proposals so
    that everyone's an equal mix of happy and miserable)


I think Matt and AJ's PoV is widely reflected in the community that
bundling changes leads to the inclusion of suboptimal features.

This also has strong precedent in other important technical bodies, e.g.
from https://datatracker.ietf.org/doc/html/rfc7282 On Consensus and Humming
in the IETF.

   Even worse is the "horse-trading" sort of compromise: "I object to
   your proposal for such-and-so reasons.  You object to my proposal for
   this-and-that reason.  Neither of us agree.  If you stop objecting to
   my proposal, I'll stop objecting to your proposal and we'll put them
   both in."  That again results in an "agreement" of sorts, but instead
   of just one outstanding unaddressed issue, this sort of compromise
   results in two, again ignoring them for the sake of expedience.

   These sorts of "capitulation" or "horse-trading" compromises have no
   place in consensus decision making.  In each case, a chair who looks
   for "agreement" might find it in these examples because it appears
   that people have "agreed".  But answering technical disagreements is
   what is needed to achieve consensus, sometimes even when the people
   who stated the disagreements no longer wish to discuss them.


If you would like to advocate that bitcoin development run counter to
these engineering norms, you should provide a much stronger refutation
of them.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] Inherited IDs - A safer, more powerful alternative to BIP-118 (ANYPREVOUT) for scaling Bitcoin

2021-09-24 Thread Jeremy via bitcoin-dev
John let me know that he's posted some responses in his Github repo
https://github.com/JohnLaw2/btc-iids

probably easiest to respond to him via e.g. a github issue or something.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Inherited IDs - A safer, more powerful alternative to BIP-118 (ANYPREVOUT) for scaling Bitcoin

2021-09-17 Thread Jeremy via bitcoin-dev
Bitcoin & LN Devs,

The below is a message that was shared with me by an anon account on Telegram
(nym: John Law). You can chat with them directly in the https://t.me/op_ctv
or https://t.me/bips_activation group. I'm reproducing it here at their
request as they were unsure of how to post to the mailing list without
compromising their identity (perhaps we should publish a guideline on how
to do so?).

Best,

Jeremy


Hi,

I'd like to propose an alternative to BIP-118 [1] that is both safer and
more
powerful. The proposal is called Inherited IDs (IIDs) and is described in a
paper that can be found here [2]. The paper presents IIDs and Layer 2
protocols
using IIDs that are far more scalable and usable than those proposed for
BIP-118
(including eltoo [3]).

Like BIP-118, IIDs are a proposal for a softfork that changes the rules for
calculating certain signatures. BIP-118 supports signatures that do not
commit to the transaction ID of the parent transaction, thus allowing
"floating
transactions". In contrast, the IID proposal does not allow floating
transactions, but it does allow an output to specify that child transaction
signatures commit to the parent transaction's IID, rather than its
transaction
ID.

IID Definitions
===
* If T is a transaction, TXID(T) is the transaction ID of T.
* An output is an "IID output" if it is a native SegWit output with
  version 2 and a 32-byte witness program, and is a "non-IID output"
  otherwise.
* A transaction is an "IID transaction" if it has at least one IID output.
* If T is a non-IID transaction, or a coinbase transaction,
  IID(T) = TXID(T).
* If T is a non-coinbase IID transaction, first_parent(T) = F is the
  transaction referenced by the OutPoint in T's input 0, and
  IID(T) = hash(IID(F) || F_idx), where F_idx is the index field in the
  OutPoint in T's input 0 (that is, T's input 0 spends F's output F_idx).

IID Signature Validation

* Signatures that spend IID outputs commit to signature messages in which
  IIDs replace transaction IDs in all OutPoints of the child transaction
  that spend IID outputs.

Note that IID(T) can be calculated from T (if it is a non-IID or a coinbase
transaction) or from T and F (otherwise). Therefore, as long as nodes store
(or
calculate) the IID of each transaction in the UTXO set, they can validate
signatures of transactions that spend IID outputs. Thus, the IID proposal
fits
Bitcoin's existing UTXO model, at the small cost of adding a 32-byte IID
value
for certain unspent outputs. Also, note that the IID of a transaction may
not
commit to the exact contents of the transaction, but it does commit to how
the
transaction is related to some exactly-specified transaction (such as being
the
first child of the second child of a specific transaction). As a result, a
transaction that is signed using IIDs cannot be used more than once or in an
unanticipated location, thus making it much safer than a floating
transaction.
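The IID recursion defined above is small enough to sketch directly. A minimal illustration follows; the dict-based transaction stand-in, the choice of SHA-256, and the 4-byte little-endian index encoding are my assumptions for illustration, not details fixed by the paper:

```python
import hashlib


def h(data: bytes) -> bytes:
    # The paper does not pin down the hash function here; SHA-256 is an assumption.
    return hashlib.sha256(data).digest()


def iid(tx: dict, parent_iid: bytes = None) -> bytes:
    """Sketch of IID(T) per the definitions above.

    `tx` is a simplified stand-in for a transaction: a dict with 'txid'
    (bytes), 'is_iid' (bool), 'is_coinbase' (bool), and 'input0_index'
    (the index field of the OutPoint in input 0)."""
    if not tx['is_iid'] or tx['is_coinbase']:
        # Non-IID or coinbase transaction: IID(T) = TXID(T)
        return tx['txid']
    # Non-coinbase IID transaction: IID(T) = hash(IID(F) || F_idx)
    f_idx = tx['input0_index'].to_bytes(4, 'little')
    return h(parent_iid + f_idx)
```

Note that, as the text says, the child's IID only needs the parent's IID and the spent output index, which is why a node can maintain IIDs alongside the UTXO set.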

2-Party Channel Protocols
=
BIP-118 supports the eltoo protocol [3] for 2-party channels, which improves
upon the Lightning protocol for 2-party channels [4] by:
1) simplifying the protocol,
2) eliminating penalty transactions, and
3) supporting late determination of transaction fees [1, Sec. 4.1.5].

The IID proposal does not support the eltoo protocol. However, the IID
proposal
does support a 2-party channel protocol, called 2Stage [2, Sec. 3.3], that
is
arguably better than eltoo. Specifically, 2Stage achieves eltoo's 3
improvements
listed above, plus it:
4) eliminates the need for watchtowers [2, Sec. 3.6], and
5) has constant (rather than linear) worst-case on-chain costs [2, Sec.
3.4].

Channel Factories
=
In general, an on-chain transaction is required to create or close a 2-party
channel. Multi-party channel factories have been proposed in order to allow
a
fixed set of parties to create and close numerous 2-party channels between
them,
thus amortizing the on-channel costs of those channels [5]. BIP-118 also
supports simple and efficient multi-party channel factories via the eltoo
protocol [1, Sec. 5.2] (which are called "multi-party channels" in that
paper).

While the IID proposal does not support the eltoo protocol, it does support
channel factories that are far more scalable and powerful than any
previously-
proposed channel factories (including eltoo factories). Specifically, IIDs
support a simple factory protocol in which not all parties need to sign the
factory's funding transaction [2, Sec. 5.3], thus greatly improving the
scale
of the factory (at the expense of requiring an on-chain transaction to
update
the set of channels created by the factory). These channel factories can be
combined with the 2Stage protocol to create trust-free and watchtower-free
channels including very large numbers of casual users.

Furthermore, IIDs support channel factories with an unbounded number of
parties
that allow all of the channels in the factory to be 

Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-09 Thread Jeremy via bitcoin-dev
I'm a bit skeptical of the safety of the control byte. Have you considered
the following issues?



> The low bit of C indicates the parity of X; if it's 0, X has even y,
> if it's 1, X has odd y.
>
> The next bit of C indicates whether the current script is dropped from
> the merkle path, if it's 0, the current script is kept, if it's 1 the
> current script is dropped.
>
> The remaining bits of C (ie C >> 2) are the number of steps in the merkle
> path that are dropped. (If C is negative, behaviour is to be determined
> -- either always fail, or always succeed and left for definition via
> future soft-fork)
>
> For example, suppose we have a taproot utxo that had 5 scripts
> (A,B,C,D,E), calculated as per the example in BIP 341 as:
>
> AB = H_TapBranch(A, B)
> CD = H_TapBranch(C, D)
> CDE = H_TapBranch(CD, E)
> ABCDE = H_TapBranch(AB, CDE)
>
> And we're spending using script E, in that case the control block includes
> the script E, and the merkle path to it, namely (AB, CD).
>
> So here's some examples of what you could do with TLUV to control how
> the spending scripts can change, between the input sPK and the output sPK.
>
> At it's simplest, if we used the script "0 0 0 TLUV", then that says we
> keep the current script, keep all steps in the merkle path, don't add
> any new ones, and don't change the internal public key -- that is that
> we want to resulting sPK to be exactly the same as the one we're spending.
>
> If we used the script "0 F 0 TLUV" (H=F, C=0) then we keep the current
> script, keep all the steps in the merkle path (AB and CD), and add
> a new step to the merkle path (F), giving us:
>
> EF = H_TapBranch(E, F)
> CDEF =H_TapBranch(CD, EF)
> ABCDEF = H_TapBranch(AB, CDEF)
>
> If we used the script "0 F 2 TLUV" (H=F, C=2) then we drop the current
> script, but keep all the other steps, and add a new step (effectively
> replacing the current script with a new one):
>
> CDF = H_TapBranch(CD, F)
> ABCDF = H_TapBranch(AB, CDF)
>
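To make sure we are reading the control number the same way, here is how I understand the quoted semantics of C, as a sketch (the negative-C behavior is left undefined in the quote, so I reject it here as an assumption):

```python
def decode_tluv_control(c: int):
    """Decode the TLUV control number C per the quoted description.

    Returns (x_has_odd_y, drop_current_script, merkle_steps_dropped)."""
    if c < 0:
        # Behaviour for negative C is to-be-determined in the proposal.
        raise ValueError("behaviour for negative C is undefined")
    x_has_odd_y = bool(c & 1)      # low bit: parity of X
    drop_script = bool(c & 2)      # next bit: drop the current script?
    steps_dropped = c >> 2         # remaining bits: merkle path steps dropped
    return x_has_odd_y, drop_script, steps_dropped

# Matching the quoted examples:
# C=0: keep script, drop nothing; C=2: drop current script;
# C=4: keep script, drop one merkle step (replace the sibling).
```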

If we recursively apply this rule, would it not be possible to repeatedly
apply it and end up burning out path E beyond the 128 Taproot depth limit?

Suppose we protect against this by checking that after adding F the depth
is not more than 128 for E.

The E path that adds F could also be burned for future use once the depth
is hit, and if adding F is necessary for correctness, then we're burned
anyways.

I don't see a way to protect against this generically.

Perhaps it's OK: E can always approve burning E?




>
> If we used the script "0 F 4 TLUV" (H=F, C=4) then we keep the current
> script, but drop the last step in the merkle path, and add a new step
> (effectively replacing the *sibling* of the current script):
>
> EF = H_TapBranch(E, F)
> ABEF = H_TapBranch(AB, EF)


> If we used the script "0 0 4 TLUV" (H=empty, C=4) then we keep the current
> script, drop the last step in the merkle path, and don't add anything new
> (effectively dropping the sibling), giving just:
>
> ABE = H_TapBranch(AB, E)
>
>
>
Is C = 4 stable across all state transitions? I may be missing something,
but it seems that the location of C would not be stable across transitions.


E.g., What happens when, C and E are similar scripts and C adds some
clauses F1, F2, F3, then what does this sibling replacement do? Should a
sibling not be able to specify (e.g., by leaf version?) a NOREPLACE flag
that prevents siblings from modifying it?

What happens when E adds a bunch of F's F1 F2 F3, is C still in the same
position as when E was created?

Especially since nodes are lexicographically sorted, it seems hard to
create stable path descriptors even if you index from the root downwards.

Identifying nodes by Hash is also not acceptable because of hash cycles,
unless you want to restrict the tree structure accordingly (maybe OK
tradeoff?).
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-09 Thread Jeremy via bitcoin-dev
I like this proposal, I think it has interesting use cases! I'm quick to
charitably read Matt's comment, "I’ve been saying we need more covenants
research and proposals before we move forward with one", as before we move
forward with *any*. I don't think that these efforts are rival -- different
opcodes for different nodes, as they say.

I've previously done some analysis comparing Coin / Payment Pools with CTV
to TapLeafUpdate which makes CTV come out favorably in terms of chain load
and privacy.

On the "anyone can withdraw themselves in O(1) transactions" front: if you
contrast a CTV-style tree, the withdrawals are O(log n) but E[O(1)] per
participant, i.e. summing over the entire tree as it splits to evict a bad
actor ends up being O(n) total work over n participants. So you do have to
look at the exact transactions that come out w.r.t. script size to
determine which Payment Pool has less overall chain work to trustlessly
withdraw. This is compounded by the fact that a Taproot for n participants
uses an O(log n) witness.


Let's do out that basic math. First, let's assume we have 30 participants.
The basic script for each node would be:

TLUV: Taproot(Tweaked Key, { <pk> DUP "" 1 TLUV CHECKSIGVERIFY
IN_OUT_AMOUNT SUB <amount> GREATERTHANOREQUAL, ...})

(the <pk> and <amount> placeholders were stripped by the archive; they are
reconstructed here from the byte accounting below)

Under this, the first withdrawal for TLUV would require, in the witness
stack (assume an average amount of 0.005 BTC, enough for ~4.2B users;
500,000 sats needs about 18.9 bits, i.e. a 3-byte push):

1 signature (1+64 bytes) + (1 Script = (+ 1 1 32 1 1 1 1 1 1 1 3 1 1) = 46
bytes) + (1 taproot path = 2 + 33 + log2(N)*32)
= 146+log2(N)*32.

now, because we delete the key, we need to sum this from N=1 to N=30:

>>> sum([65+46+35+math.log(N,2)*32 for N in range(1, 31)])
7826.690154943152 bytes of witness data

Each transaction should have 1 input (40 bytes), 2 outputs (2* (34+8) =
84), 4 bytes locktime, 4 bytes version, 2 byte witness flag, 1 byte in
counter 1 byte out counter  = 136 bytes (we already count witnesses above)


136 * 30 + 7827 = 11907 bytes to withdraw all trustlessly

Now for CTV:

CTV: Taproot(MuSigKey(subparties), <H> CTV)

(sidebar: why radix 4? A while ago, I did the math out and a radix of 4
or 5 was optimal for bare script... assuming this result holds with
taproot.)


balance holders: 0..30
you have a base set of transactions paying out: 0..4 4..8 8..12 12..16
16..20 20..24 24..27 27..30
interior nodes covering: 0..16 16..30
root node covering: 0..30

The witness for each of these looks like:

(Taproot Script = 1+1+32+1) + (Taproot Control = 33) = 68 bytes

A transaction with two outputs should have 1 input (40 bytes), 2 outputs
(2* (34+8) = 84), 4 bytes locktime, 4 bytes version, 2 byte witness flag, 1
byte in counter 1 byte out counter  = 136 bytes + 68 bytes witness = 204
A transaction with three outputs should have 1 input (40 bytes), 3 outputs
(3* (34+8) = 126), 4 bytes locktime, 4 bytes version, 2 byte witness flag,
1 byte in counter 1 byte out counter  = 178 bytes + 68 bytes witness = 246
A transaction with 4 outputs should have 1 input (40 bytes), 4 outputs (4*
(34+8) = 168), 4 bytes locktime, 4 bytes version, 2 byte witness flag, 1
byte in counter 1 byte out counter  = 220 bytes + 68 bytes witness = 288

204 + 288*6 + 246*2 = 2424 bytes

Therefore the CTV style pool is, in this example, about 5x more efficient
in block space utilization as compared to TLUV at trustlessly withdrawing
all participants. This extra space leaves lots of headroom to e.g.
including things like OP_TRUE anchor outputs (12*10) = 120 bytes total for
CPFP; an optional script path with 2 inputs for a gas-paying input (cost is
around 32 bytes for taproot?). The design also scales beyond 30
participants, where the advantage grows further (iirc, sum i = 0 to n log i
is relatively close to n log n).

In the single withdrawal case, the cost to eject a single participant with
CTV is 204+288 = 492 bytes, compared to 65+46+35+math.log(30,2)*32+136 =
439 bytes. The cost to eject a second participant in CTV is much smaller as
it amortizes -- worst case is 288, best case is 0 (already expanded),
whereas in TLUV there is limited amortization so it would be about 438
bytes.
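For anyone who wants to check the arithmetic, the totals above can be reproduced in a few lines, using the per-transaction byte counts worked out in this email:

```python
import math

# TLUV: per-withdrawal witness = sig (65) + script (46) + taproot control
# prefix (35) + 32 bytes per merkle path step (log2(n) steps), summed as
# the pool shrinks from 30 participants down to 1.
tluv_witness = sum(65 + 46 + 35 + math.log2(n) * 32 for n in range(1, 31))
tluv_total = 136 * 30 + round(tluv_witness)  # tx skeletons + witness data

# CTV radix-4 tree: one 2-output root, six 4-output nodes, two 3-output nodes.
ctv_total = 204 + 288 * 6 + 246 * 2

print(tluv_total, ctv_total)  # 11907 2424, roughly a 5x difference
```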

The protocols are identical in the cooperative case.

In terms of privacy, the CTV version is a little bit worse. At every
splitting, radix of the root nodes total value gets broadcast. So to eject
a participant, you end up leaking a bit more information. However, it might
be a reasonable assumption that if one of your counterparties is
uncooperative, they might dox you anyways. CTV trees are also superior
during updates for privacy in the cooperative case. With the TLUV pool, you
must know all tapleafs and the corresponding balances. Whereas in CTV
trees, you only need to know the balances of the nodes above you. E.g., we
can update the balances

from: [[1 Alice, 2 Bob], [3 Carol, 4 Dave]]
to: [[2.5 Alice, 0.5 Bob], [3 Carol, 4 Dave]]

without informing Carol or Dave about the updates in our subtree, just that
our slice of participants signed off 

Re: [bitcoin-dev] Note on Sequence Lock Upgrades Defect

2021-09-08 Thread Jeremy via bitcoin-dev
> ... create a
> higher level of commitment by the base layer software instead of a pure
> communication on the ML/GH, which might not be concretized in the announced
> release due to slow review process/feature freeze/rebase conflicts...
> Reversing the process and asking for Bitcoin applications/higher layers to
> update first might get us in the trap of never doing the change, as someone
> might have a small use-case in the corner relying on a given policy
> behavior.
>
> That said, w.r.t to the proposed policy change in #22871, I think it's
> better to deploy full-rbf first, then give a time buffer to higher
> applications to free up the `nSequence` field and finally start to
> discourage the usage. Otherwise, by introducing new discouragement waivers,
> e.g not rejecting the usage of the top 8 bits, I think we're moving away
> from the policy design principle we're trying to establish (separation of
> mempool policies signaling from consensus data)
>
> Le ven. 3 sept. 2021 à 23:32, Jeremy via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> a écrit :
>
>> Hi Bitcoin Devs,
>>
>> I recently noticed a flaw in the Sequence lock implementation with
>> respect to upgradability. It might be the case that this is protected
>> against by some transaction level policy (didn't see any in policy.cpp, but
>> if not, I've put up a blogpost explaining the defect and patching it
>> https://rubin.io/bitcoin/2021/09/03/upgradable-nops-flaw/
>>
>> I've proposed patching it here
>> https://github.com/bitcoin/bitcoin/pull/22871, it is proper to widely
>> survey the community before patching to ensure no one is depending on the
>> current semantics in any live application lest this tightening of
>> standardness rules engender a confiscatory effect.
>>
>> Best,
>>
>> Jeremy
>>
>> --
>> @JeremyRubin <https://twitter.com/JeremyRubin>
>> <https://twitter.com/JeremyRubin>
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-07 Thread Jeremy via bitcoin-dev
If you make the to be reorged flag 2 bits, 1 bit can mark final block and
the other can mark to be reorged.

That way the nodes opting into reorg can see the reorg and ignore the final
blocks (until a certain time? Or until it's via a reorg?), and the nodes
wanting not to see reorgs get continuous service without disruption
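Concretely, the acceptance rule I have in mind could look like this sketch (the bit positions are illustrative assumptions, not a spec):

```python
TO_BE_REORGED_BIT = 1 << 0  # set on blocks that will be reorged out
FINAL_BIT = 1 << 1          # set on blocks of the branch that stays


def accept_block(version_bits: int, wants_reorgs: bool) -> bool:
    """Should this node process a block with these signalling bits set?"""
    to_be_reorged = bool(version_bits & TO_BE_REORGED_BIT)
    final = bool(version_bits & FINAL_BIT)
    if wants_reorgs:
        # Reorg-following nodes ignore the "final" blocks so they
        # actually experience the reorg.
        return not final
    # Nodes opting out ignore the to-be-reorged branch and get
    # continuous service on the final chain.
    return not to_be_reorged
```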

On Tue, Sep 7, 2021, 9:12 AM 0xB10C via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hello,
>
> tl;dr: We want to make reorgs on SigNet a reality and are looking for
> feedback on approach and parameters.
>
> One of the ideas for SigNet is the possibility for it to be reliably
> unreliable, for example, planned chain reorganizations. These have not
> been implemented yet.
>
> My summerofbitcoin.org mentee Nikhil Bartwal and I have been looking at
> implementing support for reorgs on SigNet. We are looking for feedback
> on which approach and parameters to use. Please consider answering the
> questions below if you or your company is interested in chain
> reorganizations on SigNet.
>
> With feedback from AJ and Kalle Alm (thanks again!), we came up with two
> scenarios that could be implemented in the current SigNet miner script
> [0]. Both would trigger automatically in a fixed block interval.
> Scenario 1 simulates a race scenario where two chains compete for D
> blocks. Scenario 2 simulates a chain rollback where the top D blocks get
> replaced by a chain that outgrows the earlier branch.
>
> AJ proposed to allow SigNet users to opt-out of reorgs in case they
> explicitly want to remain unaffected. This can be done by setting a
> to-be-reorged version bit flag on the blocks that won't end up in the
> most work chain. Node operators could choose not to accept to-be-reorged
> SigNet blocks with this flag set via a configuration argument.
>
> The reorg-interval X very much depends on the user's needs. One could
> argue that there should be, for example, three reorgs per day, each 48
> blocks apart. Such a short reorg interval allows developers in all time
> zones to be awake during one or two reorgs per day. Developers don't
> need to wait for, for example, a week until they can test their reorgs
> next. However, too frequent reorgs could hinder other SigNet users.
>
> We propose that the reorg depth D is deterministically random between a
> minimum and a maximum based on, e.g., the block hash or the nonce of the
> last block before the reorg. Compared to a local randint() based
> implementation, this allows reorg-handling tests and external tools to
> calculate the expected reorg depth.
>
> # Scenario 1: Race between two chains
>
> For this scenario, at least two nodes and miner scripts need to be
> running. An always-miner A continuously produces blocks and rejects
> blocks with the to-be-reorged version bit flag set. And a race-miner R
> that only mines D blocks at the start of each interval and then waits X
> blocks. A and R both have the same hash rate. Assuming both are well
> connected to the network, it's random which miner will first mine and
> propagate a block. In the end, the A miner chain will always win the race.
>
> # Scenario 2: Chain rollback
>
> This scenario only requires one miner and Bitcoin Core node but also
> works in a multiminer setup. The miners mine D blocks with the
> to-be-reorged version bit flag set at the start of the interval. After
> allowing the block at height X+D to propagate, they invalidate the block
> at height X+1 and start mining on block X again. This time without
> setting the to-be-reorged version bit flag. Non-miner nodes will reorg
> to the new tip at height X+D+1, and the first-seen branch stalls.
>
> # Questions
>
> 1. How do you currently test your applications reorg handling? Do
>the two discussed scenarios (race and chain rollback) cover your
>needs? Are we missing something you'd find helpful?
>
> 2. How often should reorgs happen on the default SigNet? Should
>there be multiple reorgs a day (e.g., every 48 or 72 blocks
>assuming 144 blocks per day) as your engineers need to be awake?
>Do you favor less frequent reorgs (once per week or month)? Why?
>
> 3. How deep should the reorgs be on average? Do you want to test
>deeper reorgs (10+ blocks) too?
>
>
> # Next Steps
>
> We will likely implement Scenario 1, the race between two chains, first.
> We'll set up a public test-SigNet along with a faucet, block explorer,
> and a block tree visualization. If there is interest in the second
> approach, chain rollbacks can be implemented too. Future work will add
> the possibility to include conflicting transactions in the two branches.
> After enough testing, the default SigNet can start to do periodical
> reorgs, too.
>
> Thanks,
> 0xB10C
>
> [0]: https://github.com/bitcoin/bitcoin/blob/master/contrib/signet/miner
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> 

Re: [bitcoin-dev] Note on Sequence Lock Upgrades Defect

2021-09-05 Thread Jeremy via bitcoin-dev
BIP 68 says >= 2:

  This specification defines the meaning of sequence numbers for
  transactions with an nVersion greater than or equal to 2 for which the
  rest of this specification relies on.

BIP 112 says not < 2:

  // Fail if the transaction's version number is not set high
  // enough to trigger BIP 68 rules.
  if (static_cast<uint32_t>(txTo->nVersion) < 2) return false;

A further proof that this needs a fix: the flawed upgradable semantic exists
in script as well as in the transaction nSequence. We can't really control
the transaction version an output will be spent with in the future, so it
would be weird/bad to change the semantic in transaction version 3.
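For reference, the BIP 68 semantics being discussed can be sketched as follows (the constants come from BIP 68 itself; this is a simplification for illustration, not consensus code):

```python
# BIP 68 nSequence bit layout.
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31  # bit 31: relative lock disabled
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22     # bit 22: time-based (512s units) vs height-based
SEQUENCE_LOCKTIME_MASK = 0x0000ffff       # low 16 bits: the lock value


def relative_locktime(n_sequence: int, n_version: int):
    """Return None if no BIP 68 lock applies, else ('time'|'blocks', value)."""
    if n_version < 2:
        return None  # BIP 68 only applies to tx version >= 2
    if n_sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return None  # disable flag set: sequence carries no consensus meaning
    value = n_sequence & SEQUENCE_LOCKTIME_MASK
    kind = 'time' if n_sequence & SEQUENCE_LOCKTIME_TYPE_FLAG else 'blocks'
    return kind, value
```

The upgradability concern is precisely about the bits that this function ignores when the disable flag is set.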

--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>


On Sun, Sep 5, 2021 at 7:36 PM David A. Harding  wrote:

> On Fri, Sep 03, 2021 at 08:32:19PM -0700, Jeremy via bitcoin-dev wrote:
> > Hi Bitcoin Devs,
> >
> > I recently noticed a flaw in the Sequence lock implementation with
> respect
> > to upgradability. It might be the case that this is protected against by
> > some transaction level policy (didn't see any in policy.cpp, but if not,
> > I've put up a blogpost explaining the defect and patching it
> > https://rubin.io/bitcoin/2021/09/03/upgradable-nops-flaw/
>
> Isn't this why BIP68 requires using tx.version=2?  Wouldn't we just
> deploy any new nSequence rules with tx.version>2?
>
> -Dave
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Note on Sequence Lock Upgrades Defect

2021-09-04 Thread Jeremy via bitcoin-dev
In working on resolving this issue, one issue that has come up is what
sequence values get used by wallet implementations?

E.g., in Bitcoin Core a script test says

BIP125_SEQUENCE_NUMBER = 0xfffffffd  # Sequence number that is rbf-opt-in
(BIP 125) and csv-opt-out (BIP 68)

Are any other numbers currently expected by any wallet software to be
broadcastable with the DISABLE flag set? Does anyone use *this* number? Is
there any advantage of this number v.s. just 0? Do people commonly use
0xfffffffd? 0xfffffffe is special, but it seems the former has the
alternative of either a 0-valued sequence lock (1<<22 or 0).

Are there any other sequence numbers that are not defined in a BIP that
might be used somewhere?
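For concreteness, here is my reading of how these values combine under BIP 125 and BIP 68 (a sketch, not normative):

```python
# Bitcoin Core's test framework constant for an rbf-opt-in, csv-opt-out input.
BIP125_SEQUENCE_NUMBER = 0xfffffffd


def seq_flags(seq: int) -> dict:
    return {
        # BIP 125: a tx signals replaceability if any input's
        # nSequence is below 0xfffffffe.
        'rbf_opt_in': seq < 0xfffffffe,
        # BIP 68: setting bit 31 disables the relative lock-time.
        'csv_disabled': bool(seq & (1 << 31)),
    }

print(seq_flags(BIP125_SEQUENCE_NUMBER))  # both flags True
print(seq_flags(0))  # rbf opt-in, but a 0-valued relative lock applies
```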

Cheers,

Jeremy
--
@JeremyRubin 



On Fri, Sep 3, 2021 at 8:32 PM Jeremy  wrote:

> Hi Bitcoin Devs,
>
> I recently noticed a flaw in the Sequence lock implementation with respect
> to upgradability. It might be the case that this is protected against by
> some transaction level policy (didn't see any in policy.cpp, but if not,
> I've put up a blogpost explaining the defect and patching it
> https://rubin.io/bitcoin/2021/09/03/upgradable-nops-flaw/
>
> I've proposed patching it here
> https://github.com/bitcoin/bitcoin/pull/22871, it is proper to widely
> survey the community before patching to ensure no one is depending on the
> current semantics in any live application lest this tightening of
> standardness rules engender a confiscatory effect.
>
> Best,
>
> Jeremy
>
> --
> @JeremyRubin 
> 
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Note on Sequence Lock Upgrades Defect

2021-09-03 Thread Jeremy via bitcoin-dev
Hi Bitcoin Devs,

I recently noticed a flaw in the Sequence lock implementation with respect
to upgradability. It might be the case that this is protected against by
some transaction level policy (didn't see any in policy.cpp, but if not,
I've put up a blogpost explaining the defect and patching it
https://rubin.io/bitcoin/2021/09/03/upgradable-nops-flaw/

I've proposed patching it here https://github.com/bitcoin/bitcoin/pull/22871,
it is proper to widely survey the community before patching to ensure no
one is depending on the current semantics in any live application lest this
tightening of standardness rules engender a confiscatory effect.

Best,

Jeremy

--
@JeremyRubin 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Is there a tool like Ethereum EVM at present for Bitcoin script?

2021-08-26 Thread Jeremy via bitcoin-dev
Will update those soon / in November. Sapio needs the rust Bitcoin taproot
ecosystem to mature, as well as a spec for miniscript taproot (altho we can
kinda monkey patch one in without it).

To be honest, I had some technical difficulties with getting Libera to work
and I gave up... But perhaps I can retry getting it to work again. IRC
infra struggles...


On Thu, Aug 26, 2021, 6:10 AM Michael Folkson 
wrote:

> The "No Taproot" section of the Sapio docs need updating :) What are
> your plans to take advantage of Taproot with Sapio? It would have been
> interesting to see what a Taproot emulator would have looked like,
> although no need for it now. It seems to me Taproot would have been
> harder to emulate than CTV though I could be wrong.
>
> https://learn.sapio-lang.org/ch05-02-taproot.html
>
> Also there have been a number of people asking questions about Sapio
> and CTV on the Libera equivalents of Freenode channels #sapio and
> ##ctv-bip-review over the past months. Do you plan to join and claim
> those channels?
>
> Date: Thu, 26 Aug 2021 03:26:23 -0700
> From: Jeremy 
> To: Andrew Poelstra , Bitcoin Protocol
> Discussion 
> Subject: Re: [bitcoin-dev] Is there a tool like Ethereum EVM at
> present for Bitcoin script?
>
> This has actually never been true (Sapio assumes extensions).
>
> If the extensions are not present, you can stub them out with a signing
> federation instead, configurable as flags, and you can also write many
> contracts that do not use the ctv based components at all.
>
> The protocol for emulation is a bit clever (if I do say so myself) since it
> ensures that contract compilation is completely offline and the oracles are
> completely stateless.
>
> Relevant links:
>
> https://learn.sapio-lang.org/ch05-01-ctv-emulator.html
> https://learn.sapio-lang.org/ch03-02-finish.html
>
> Cheers,
>
> Jeremy
>
> --
> Michael Folkson
> Email: michaelfolk...@gmail.com
> Keybase: michaelfolkson
> PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Is there a tool like Ethereum EVM at present for Bitcoin script?

2021-08-26 Thread Jeremy via bitcoin-dev
This has actually never been true (Sapio assumes extensions).

If the extensions are not present, you can stub them out with a signing
federation instead, configurable as flags, and you can also write many
contracts that do not use the ctv based components at all.

The protocol for emulation is a bit clever (if I do say so myself) since it
ensures that contract compilation is completely offline and the oracles are
completely stateless.

Relevant links:

https://learn.sapio-lang.org/ch05-01-ctv-emulator.html
https://learn.sapio-lang.org/ch03-02-finish.html

Cheers,

Jeremy

On Tue, Aug 24, 2021, 6:19 AM Andrew Poelstra via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> Simplicity does not compile to Bitcoin Script, and Sapio assumes extensions
> to Bitcoin Script that are not currently part of the consensus code.
>
>
> On Tue, Aug 24, 2021 at 03:36:29PM +0800, Gijs van Dam via bitcoin-dev
> wrote:
> > Hi,
> >
> >
> > Bitcoin does not have a virtual machine. But you do have [Miniscript][1],
> > [Min.sc][2], [Simplicity][3] and [Sapio][4]. These are all higher level
> > languages that compile to Bitcoin Script. Sapio is "just" Rust, so that
> > might fit your setting best.
> >
> > By the way, this question also has an answer on [Bitcoin
> Stackexchange][5]
> > which is a great resource for questions like this.
> >
> > [1]: http://bitcoin.sipa.be/miniscript/
> > [2]: https://min.sc/
> > [3]: https://github.com/ElementsProject/simplicity
> > [4]: https://learn.sapio-lang.org/
> > [5]:
> >
> https://bitcoin.stackexchange.com/questions/108261/is-there-a-tool-like-ethereum-evm-at-present-for-bitcoin-script
> >
> > On Tue, Aug 24, 2021 at 2:55 PM Null Null via bitcoin-dev <
> > bitcoin-dev@lists.linuxfoundation.org> wrote:
> >
> > > Hi all,
> > >
> > > Is there a tool like Ethereum EVM at present? Users can write bitcoin
> > > scripts in a syntax just like python(or like other programming
> language);
> > > through this tool, they can be translated into bitcoin original
> scripts; it
> > > sounds like a new programming language has been invented.
> > >
> > > In my opinion, Bitcoin script programming is based on reverse Polish
> > > expression; this is not friendly to programmers;
> > >
> > > In fact, Bitcoin's opcode expression ability is very rich, and it may
> be
> > > unfriendly, which has affected the promotion of Bitcoin in the
> technical
> > > community.
> > >
> > > Hope for hearing some voice about this.
> > >
> > > Best wish.
> > >
> > > ___
> > > bitcoin-dev mailing list
> > > bitcoin-dev@lists.linuxfoundation.org
> > > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> > >
>
> > ___
> > bitcoin-dev mailing list
> > bitcoin-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
> --
> Andrew Poelstra
> Director of Research, Blockstream
> Email: apoelstra at wpsoftware.net
> Web:   https://www.wpsoftware.net/andrew
>
> The sun is always shining in space
> -Justin Lewis-Webster
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-19 Thread Jeremy via bitcoin-dev
one interesting point that came up at the bitdevs in austin today favors
removal, and i believe it is new to this discussion (it was new to me):

the argument can be reduced to:

- dust limit is a per-node relay policy.
- it is rational for miners to mine dust outputs given their cost of
maintenance (storing the output potentially forever) is lower than their
immediate reward in fees.
- if txn relaying nodes censor something that a miner would mine, users
will seek a private/direct relay to the miner and vice versa.
- if direct relay to miner becomes popular, it is both bad for privacy and
decentralization.
- therefore the dust limit, should there be demand to create dust at
prevailing mempool feerates, creates an immediate incentive toward greater
network centralization

the tradeoff is whether a short-term incentive to promote network
centralization is better or worse than long-term node operator overhead.


///

my take is that:

1) having a dust limit is worse, since we'd rather not have an incentive to
produce or roll out centralizing software; not having a dust limit instead
creates a mild incentive for node operators to adopt decentralizing
software like utreexo.
2) it's hard to quantify the magnitude of these incentives, which does matter.


Re: [bitcoin-dev] src/httprpc.cpp InterruptHTTPRPC

2021-08-12 Thread Jeremy via bitcoin-dev
This is probably best opened as an issue on GitHub!
--
@JeremyRubin 



On Thu, Aug 12, 2021 at 11:03 AM Ali Sherief via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I am using Bitcoin Core's HTTP RPC server as a basis for my own
> application. While browsing the source code of src/httprpc.cpp, I notice
> that the InterruptHTTPRPC function
> https://github.com/bitcoin/bitcoin/blob/7fcf53f7b4524572d1d0c9a5fdc388e87eb02416/src/httprpc.cpp#L310-L314
>  just
> calls LogPrint() without doing anything else.
>
> Does the HTTP RPC server support interrupting the event loop at this time,
> or is this method a stub?
>
> - Ali
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-09 Thread Jeremy via bitcoin-dev
You might be interested in https://eprint.iacr.org/2017/1066.pdf which
claims that you can make CT computationally hiding and binding, see section
4.6.

with respect to utreexo, you might review
https://github.com/mit-dci/utreexo/discussions/249?sort=new which discusses
tradeoffs between different accumulator designs. With a swap tree, old
things that never move more or less naturally "fall leftward", although
there are reasons to prefer alternative designs.




Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-08 Thread Jeremy via bitcoin-dev
some additional answers/clarifications



> Question for Jeremy: would you also allow zero-value outputs?  Or would
> you just move the dust limit down to a fixed 1-sat?
>

I would remove it entirely -- i don't think there's a difference between
the two realistically.



>
> Allowing 0-value or 1-sat outputs minimizes the cost for polluting the
> UTXO set during periods of low feerates.
>
>
Maybe that incentivizes people to make better use of the low
feerate periods to do more important work like consolidations so that
others do not have the opportunity to pollute (therefore eliminating the
low fee period ;)



> If your stuff is going to slow down my node and possibly reduce my
> censorship resistance, how is that not my business?
>

You don't know that's what I'm doing, it's a guess as to my future behavior.

If it weren't worth it to me, I wouldn't be doing it. The market will
solve what is worth it vs. what is not.



>
> > 2) dust outputs can be used in various authentication/delegation smart
> > contracts
>
> All of which can also use amounts that are economically rational to
> spend on their own.  If you're gonna use the chain for something besides
> value transfer, and you're already wiling to pay X in fees per onchain
> use, why is it not reasonable for us to ask you to put up something on
> the order of X as a bond that you'll actually clean up your mess when
> you're no longer interested in your thing?
>

These authentication/delegation smart contracts can be a part of value
transfer e.g. some type of atomic swaps or other escrowed payment.

A bond to clean it up is a fair reason; but perhaps in a protocol it might
not make sense to clean up the utxo otherwise, so you're creating a
cleanup transaction (which potentially has to be presigned in a way that
can't be done as a consolidation) and then some future consolidation to
make the dusts+eps aggregately convenient to spend. So you'd be trading a
decent amount more chainspace vs. just ignoring the output, writing it to
disk, and maybe eventually into a utreexo (e.g. imagine utreexo where the
last N years of outputs are held in memory, but eventually things get
tree'd up), so the long-term costs need not be entirely borne in permanent
storage.


>
> Nope, nothing is forced.  Any LN node can simply refuse to accept/route
> HTLCs below the dust limit.
>

I'd love to hear some broad thoughts on the impact of this on routing (cc
Tarun who thinks about these things a decent amount) as this means for
things like multipath routes you have much stricter constraints on which
nodes you can route payments through. The impact on capacity from every
user's pov might not be insubstantial.



>
> I also doubt your proposed solution fixes the problem.  Any LN node that
> accepts an uneconomic HTLC cannot recover that value, so the money is
> lost either way.  Any sane regulation would treat losing value to
> transaction fees the same as losing value to uneconomical conditions.
>
> Finally, if LN nodes start polluting the UTXO set with no economic way
> to clean up their mess, I think that's going to cause tension between
> full node operators and LN node operators.
>



My anticipation is that the LN operators would stick the uneconomic HTLCs
aggregately into a fan-out utxo and try to cooperate, but failing that
would only pollute the chain by O(1) for O(n) non-economic HTLCs. There is
a difference between losing money and knowing exactly where it is but not
claiming it.


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-08 Thread Jeremy via bitcoin-dev
Under no circumstances do I think we should *increase* the dust limit. That
would have a mildly confiscatory effect on current Lightning Channel
operators, among others.

Generally, the UTXO set will grow. We should work to accommodate the worst
case scenario under current consensus rules. I think this points to using
things like Utreexo or similar rather than meddling in the user's business.

I am skeptical that 0-value outputs are a real spam problem given the cost
to create them. Generally one creates an output when one believes it will
make sense to redeem it in the future. So surely this is a market problem:
if people want them, they can pay what it is worth for them to have them.
Again, it's not my business.

Matt proposes that people might use a nominal amount of bitcoin on a
zero-value output so that it doesn't look like dust. What Matt is asking
for is that in any protocol you pay for your space not via fees, but
instead via an assurance bond that you will eventually redeem it and clean
the state up. In my opinion, this is worse than just allowing a zero-value
output, since then you might accrue the need for an additional change
output to which the bond's collateral must be returned.

With respect to the check in the mail analogy, cutting down trees for paper
is bad for everyone and shipping things using fossil fuels contributes to
climate change. Therefore it's a cost borne by society in some respects.
Still, if someone else decides it's worth sending a remittance of whichever
value, it is still not my business.

With respect to CT and using the range proofs to exclude dust, I'm aware
that can be done (hence compromising allowed transfers). Again, I don't
think it's quite our business what people do, but on a technical level,
this would have the impact of shrinking the anonymity set so is also
suspect to me.

---

If we really want to create incentives for state clean up, I think it's a
decent design space to consider.

e.g., we could set up a bottle deposit program whereby miners contribute an
amount of funds from fee revenue from creating N outputs to a "rolling
utxo" (e.g., a coinbase utxo that gets spent each block op_true to op_true
under some miner rules) and the rolling utxo can either disperse funds to
the miner reward or soak up funds from the fees in order to encourage
blocks which have a better ratio of inputs to outputs than the mean. Miners
can then apply this rule in the mempool to prioritize transactions that
help their block's ratio. This is all without directly interfering with the
user's intent to create whatever outputs they want; it just provides a way
of paying miners to clean up the public commons.
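A hypothetical sketch of the bottle-deposit accounting described above. The names, the linear payout rule, and all constants are invented for illustration only; nothing here is a concrete proposal:

```python
def rolling_utxo_delta(n_inputs: int, n_outputs: int,
                       target_ratio: float = 1.0,
                       rate_sats: int = 10) -> int:
    """Sats the rolling utxo pays the miner (positive) or soaks up from
    fees (negative) for a block, based on how the block's input/output
    ratio compares to a target (e.g. the recent mean)."""
    return round((n_inputs - target_ratio * n_outputs) * rate_sats)

# a block that nets out 50 utxos draws from the deposit pool...
assert rolling_utxo_delta(150, 100) == 500
# ...while a utxo-creating block pays in
assert rolling_utxo_delta(100, 150) == -500
```

Miners could apply the same function per-transaction in the mempool to prioritize utxo-reducing transactions, as the text suggests.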

Gas Token by Daian et al comes to mind, from Eth, w.r.t. many pitfalls
arbing these state space freeing return curves, but it's worth thinking
through nonetheless.


[bitcoin-dev] Removing the Dust Limit

2021-08-08 Thread Jeremy via bitcoin-dev
We should remove the dust limit from Bitcoin. Five reasons:

1) it's not our business what outputs people want to create
2) dust outputs can be used in various authentication/delegation smart
contracts
3) dust sized htlcs in lightning (
https://bitcoin.stackexchange.com/questions/46730/can-you-send-amounts-that-would-typically-be-considered-dust-through-the-light)
force channels to operate in a semi-trusted mode which has implications
(AFAIU) for the regulatory classification of channels in various
jurisdictions; agnostic treatment of fund transfers would simplify this
(like getting a 0.01 cent dividend check in the mail)
4) thinly divisible colored coin protocols might make use of sats as value
markers for transactions.
5) should we ever do confidential transactions we can't prevent it without
compromising privacy / allowed transfers

The main reasons I'm aware of to not allow dust creation are:

1) dust is spam
2) dust fingerprinting attacks

1 is (IMO) not valid given the 5 reasons above, and 2 is preventable by
well-behaved wallets declining to redeem outputs that cost more in fees
than they are worth.
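For reference, a rough sketch of how a default relay node computes the dust threshold today (mirroring the shape of Bitcoin Core's GetDustThreshold; the sizes and the 3 sat/vB dust relay feerate are standardness defaults, not consensus rules, and the exact serialization accounting here is simplified):

```python
def dust_threshold(output_size: int, spend_input_size: int,
                   dust_relay_feerate: int = 3) -> int:
    """Smallest output value (in sats) a default node will relay: the
    fee, at the dust relay feerate, to both create the output and spend
    it later."""
    return (output_size + spend_input_size) * dust_relay_feerate

# P2PKH: 34-byte output, ~148-byte spending input -> 546 sats
assert dust_threshold(34, 148) == 546
# P2WPKH: 31-byte output, ~67-vbyte spending input -> 294 sats
assert dust_threshold(31, 67) == 294
```

By this accounting, a well-behaved wallet can simply refuse to redeem any output worth less than its own spend cost, which is the fingerprinting defense argued for here.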

cheers,

jeremy

--
@JeremyRubin 



Re: [bitcoin-dev] Covenant opcode proposal OP_CONSTRAINDESTINATION (an alternative to OP_CTV)

2021-07-28 Thread Jeremy via bitcoin-dev
High level feedback:

you should spec out the opcodes as separate pieces of functionality as it
sounds like OP_CD is really 3 or 4 opcodes in one (e.g., amounts to
outputs, output addresses, something with fees).

One major drawback of your approach is that all transactions are twice as
large as they might otherwise need to be for simple things like congestion
control trees, since you have to repeat all of the output data twice.


Re: [bitcoin-dev] [Lightning-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-12 Thread Jeremy via bitcoin-dev
On Sun, Jul 11, 2021 at 10:01 PM Anthony Towns  wrote:

> On Thu, Jul 08, 2021 at 08:48:14AM -0700, Jeremy wrote:
> > This would disallow using a relative locktime and an absolute
> locktime
> > for the same input. I don't think I've seen a use case for that so
> far,
> > but ruling it out seems suboptimal.
> > I think you meant disallowing a relative locktime and a sequence
> locktime? I
> > agree it is suboptimal.
>
> No? If you overload the nSequence for a per-input absolute locktime
> (well in the past for eltoo), then you can't reuse the same input's
> nSequence for a per-input relative locktime (ie CSV).
>
> Apparently I have thought of a use for it now -- cut-through of PTLC
> refunds when the timeout expires well after the channel settlement delay
> has passed. (You want a signature that's valid after a relative locktime
> of the delay and after the absolute timeout)
>

Ah -- I didn't mean a per-input abs locktime, I meant the tx-global
locktime.

I agree that at some point we should just separate all locktime types per
input so we get rid of all weirdness/overlap.



>
> > What do you make of sequence tagged keys?
>
> I think we want sequencing restrictions to be obvious from some (simple)
> combination of nlocktime/nsequence/annex so that you don't have to
> evaluate scripts/signatures in order to determine if a transaction
> is final.
>
> Perhaps there's a more general principle -- evaluating a script should
> only return one bit of info: "bool tx_is_invalid_script_failed"; every
> other bit of information -- how much is paid in fees (cf ethereum gas
> calculations), when the tx is final, if the tx is only valid in some
> chain fork, if other txs have to have already been mined / can't have
> been mined, who loses funds and who gets funds, etc... -- should already
> be obvious from a "simple" parsing of the tx.
>
> Cheers,
> aj
>
>
I don't think we have this property as is.

E.g. consider the transaction:

TX:
   locktime: None
   sequence: 100
   scriptpubkey: <101> CSV

How will you tell it is able to be included without running the script?

I agree this is a useful property, but I don't think we can do it
practically.

What's nice is the transaction in this form cannot go from invalid to valid
-- once invalid it is always invalid for a given UTXO.
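A toy model of that example, using simplified BIP 68/112 semantics (ignoring the disable bit and the value-type flags):

```python
def csv_satisfied(required: int, input_nsequence: int) -> bool:
    # OP_CHECKSEQUENCEVERIFY fails unless the spending input's
    # nSequence is at least the required relative-locktime value
    # (simplified: same value type, disable bit unset)
    return input_nsequence >= required

# the transaction above: nSequence = 100, script demands <101> CSV;
# nSequence is committed in the signed transaction, so this input is
# invalid now and forever -- invalidity is monotone
assert not csv_satisfied(required=101, input_nsequence=100)
assert csv_satisfied(required=101, input_nsequence=101)
```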

sequence tagged keys have this property -- a txn is either valid or invalid
and that never changes w/o any external information needing to be passed up.


Re: [bitcoin-dev] OP_CAT Makes Bitcoin Quantum Secure [was CheckSigFromStack for Arithmetic Values]

2021-07-09 Thread Jeremy via bitcoin-dev
I thought about this, but at the time of writing I couldn't come up with
something I thought was substantially better. I spent a few more cycles
thinking on it -- you can definitely do better. It's not clear how much
better Winternitz might be, or whether it would be secure in this context.
Here's some exploration...

maybe you can do something like:

   || IF SWAP HASH SWAP ELSE HASH FROMALTSTACK
<2**n> TOALTSTACK ADD ENDIF CAT

you can process this (assume HASH160) into chunks of 26 bits, cat them all
together, and then stash that hash. You would need 6 gadgets, and then 1
overflow + 4 bare hashes for the final key hash (e.g. your tree looks like)
H(H(26x20) || H(26x20)...H(bit)|| H(bit) || H(bit) || H(bit)). It doesn't
make sense to have a "nice" merkle tree, just fit in as much data as
possible per call (520 bytes). If OP_SHASTREAM, this is even better since
you can ignore structuring...

This would bring your cost down by about 20 bytes per bit, for 160 bits, so
around a savings of 3200 bytes... not bad! 1/3 cheaper.

Script is about 15x160 = 2400 and change, witness is 43x160 = 6880

If you were to convert to 3-ary, you could cut this down to 101 gates with
a script like:

witnesses:
<0> 
 <1> 
  <2> 

script:
HASH SWAP
IFDUP
NOTIF # 0 passed in (0)
SWAP CAT
ELSE
<3**n> TOALT
1SUB
IF # 2 passed in (+1)
FROMALT # do nothing
ELSE # 1 passed in (T)
SWAP # Swaps H(xT) to back
FROMALT NEGATE # negate
ENDIF
FROMALT ADD TOALT # add to accumulator
ENDIF
CAT


you would end up having to publish ~64x101 data in the witness, so only
6464 total (and about 24x101 = 2424 and change for the script)

Making the script smaller also means that choice of hash160/sha256 doesn't
change script size much, just witness. And the witnesses are free to
provide their own preimages, so it would be OK to use something > 20 bytes,
< 32 for more variable security/length tradeoff.


At the cost of marginally bigger script (by about 6x101 bytes), you can
save 20x101 off the witness stack by making each key H(H(xT) || H(x0)) ||
H(x1). 43x101 + 30x101 = 7373 + change for the final grouping.

witnesses:
<0> 
  <1> 
  <2> 

script:
HASH SWAP
IFDUP
NOTIF # 0 passed in (0)
ROT SWAP CAT HASH
ELSE
<3**n> TOALT
1SUB
IF # 2 passed in (+1)
FROMALT # do nothing
ELSE # 1 passed in (T)
TOALTSTACK CAT HASH FROMALTSTACK SWAP # Swaps H(xT) to back
FROMALT NEGATE # negate
ENDIF
FROMALT ADD TOALT # add to accumulator
ENDIF
CAT
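The back-of-envelope byte counts above can be checked mechanically. The per-element costs (43/15 bytes per binary gadget for witness/script, 64/24 per ternary gadget) are the rough estimates from the text, not measured values:

```python
import math

BITS = 160  # signing a HASH160 digest

# number of base-3 digits needed to cover 2**160
trits = math.ceil(BITS / math.log2(3))
assert trits == 101

binary = {"witness": 43 * BITS, "script": 15 * BITS}
ternary = {"witness": 64 * trits, "script": 24 * trits}

assert binary == {"witness": 6880, "script": 2400}
assert ternary == {"witness": 6464, "script": 2424}
```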


--
@JeremyRubin 



On Fri, Jul 9, 2021 at 12:03 PM Ethan Heilman  wrote:

> >Yes, quite neat indeed, too bad Lamport signatures are so huge (a couple
> kilobytes)... blocksize increase *cough*
>
> Couldn't you significantly compress the signatures by using either
> Winternitz OTS or by using OP_CAT to build a merkle tree so that the
> full signature can be derived during script execution from a much
> shorter set of seed values?
>
> On Thu, Jul 8, 2021 at 4:12 AM ZmnSCPxj via bitcoin-dev
>  wrote:
> >
> >
> > Good morning Jeremy,
> >
> > Yes, quite neat indeed, too bad Lamport signatures are so huge (a couple
> kilobytes)... blocksize increase *cough*
> >
> > Since a quantum computer can derive the EC privkey from the EC pubkey
> and this scheme is resistant to that, I think you can use a single
> well-known EC privkey, you just need a unique Lamport keypair for each UTXO
> (uniqueness being mandatory due to Lamport requiring preimage revelation).
> >
> > Regards,
> > ZmnSCPxj
> >
> >
> > > Dear Bitcoin Devs,
> > >
> > > As mentioned previously, OP_CAT (or similar operation) can be used to
> make Bitcoin "quantum safe" by signing an EC signature. This should work in
> both Segwit V0 and Tapscript, although you have to use HASH160 for it to
> fit in Segwit V0.
> > >
> > > See [my blog](https://rubin.io/blog/2021/07/06/quantum-bitcoin/) for
> the specific construction, reproduced below.
> > >
> > > Yet another entry to the "OP_CAT can do that too" list.
> > >
> > > Best,
> > >
> > > Jeremy
> > > -
> > >
> > > I recently published [a blog
> > > post](https://rubin.io/blog/2021/07/02/signing-5-bytes/) about
> signing up to a
> > > 5 byte value using Bitcoin script arithmetic and Lamport signatures.
> > >
> > > By itself, this is neat, but a little limited. What if we could sign
> longer
> > > messages? If we can sign up to 20 bytes, we could sign a HASH160
> digest which
> > > is most likely quantum safe...
> > >
> > > What would it mean if we signed the HASH160 digest of a signature?
> What the
> > > what? Why would we do that?
> > >
> > > Well, as it turns out, even if a quantum computer were able to crack
> ECDSA, it
> > > would yield revealing the private key but not the ability to malleate
> the
> > > content of what was actually signed.  I asked my good friend and
> cryptographer
> > > [Madars Virza](https://madars.org/) if my intuition was 

Re: [bitcoin-dev] Taproot Fields for PSBT

2021-07-08 Thread Jeremy via bitcoin-dev
Suggestion:

It should be allowed that different keys can specify different sighash
flags.

As an example, if chaperone signatures were desired with anyprevout, it
would be required to specify that the anyprevout key sign with APO and the
chaperone sign with ALL. As another example, Sapio emulator oracles sign
with SIGHASH_ALL whereas other signatories might be instructed to sign with
a different flag.

The current sighashtype key is per-input:
- If a sighash type is provided, the signer must check that the sighash is
acceptable. If unacceptable, they must fail.
- If a sighash type is not provided, the signer should sign using
SIGHASH_ALL, but may use any sighash type they wish.

So a new per-key mapping can be added safely.
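A sketch of the resolution order such a per-key hint could imply. The per-key mapping itself is hypothetical (no such PSBT key type exists yet); only the per-input fallback behavior is taken from the spec text quoted above:

```python
SIGHASH_ALL = 0x01

def resolve_sighash(pubkey, per_key_hints, per_input_type=None):
    # hypothetical: a per-key hint overrides the per-input field
    if pubkey in per_key_hints:
        return per_key_hints[pubkey]
    # existing PSBT behavior: honor the per-input type if present,
    # otherwise default to SIGHASH_ALL
    return per_input_type if per_input_type is not None else SIGHASH_ALL

# e.g. a chaperone key pinned to ALL while the anyprevout key follows
# the per-input field (0x41 = SIGHASH_ALL | ANYPREVOUT, illustrative)
apo_key, chaperone_key = b"\x02" * 33, b"\x03" * 33
hints = {chaperone_key: SIGHASH_ALL}
assert resolve_sighash(chaperone_key, hints, per_input_type=0x41) == SIGHASH_ALL
assert resolve_sighash(apo_key, hints, per_input_type=0x41) == 0x41
```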

I have no strong opinions on the format for said per-key sighash hints.

Why do this now? Well, I requested it when spec'ing V2 as well, but it
would be nice to get it spec'd and implemented.


--
@JeremyRubin 



On Mon, Jun 28, 2021 at 1:32 PM Salvatore Ingala via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Andrew,
>
> Thanks for the clarification, I was indeed reading it under the mistaken
> assumption that only one leaf would be added to the PSBT.
>
> En passant, for the less experienced readers, it might be helpful if the
> key types that are possibly present multiple times (with different keydata)
> were somehow labeled in the tables.
>
> Best,
> Salvatore Ingala
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>


Re: [bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-08 Thread Jeremy via bitcoin-dev
>
> This would disallow using a relative locktime and an absolute locktime
> for the same input. I don't think I've seen a use case for that so far,
> but ruling it out seems suboptimal.


I think you meant disallowing a relative locktime and a sequence locktime?
I agree it is suboptimal.


What do you make of sequence tagged keys?


[bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-07 Thread Jeremy via bitcoin-dev
I made a comment on
https://github.com/bitcoin/bips/pull/943#issuecomment-876034559 but it
occurred to me it is more ML appropriate.

In general, one thing that strikes me is that when anyprevout is used for
eltoo you're generally doing a script like:

```
IF
10 CSV DROP
1::musigkey(As,Bs) CHECKSIG
ELSE
 CLTV DROP
   1::musigkey(Au,Bu) CHECKSIG
ENDIF
```

This means that you're overloading the CLTV clause, which makes it
impossible to use Eltoo together with an absolute locktime; it also means you
have to use fewer than a billion sequences, and if you pick a random # to
mask how many payments you've done / pick random gaps let's say that
reduces your numbers in half. That may be enough, but is still relatively
limited. There is also the issue that multiple inputs cannot be combined
into a transaction if they have signed on different locktimes.
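Roughly, the state-number budget if updates are sequenced via past timestamps in nLockTime (a common eltoo framing; the numbers are illustrative and land near the order-of-a-billion figure mentioned above):

```python
LOCKTIME_THRESHOLD = 500_000_000  # nLockTime >= this is a unix timestamp

def eltoo_state_budget(now: int, random_gaps: bool = False) -> int:
    # state numbers must encode as timestamps in (threshold, now]
    budget = now - LOCKTIME_THRESHOLD
    # picking random gaps to mask the update count roughly halves
    # the usable range, per the text
    return budget // 2 if random_gaps else budget

assert eltoo_state_budget(1_600_000_000) == 1_100_000_000
assert eltoo_state_budget(1_600_000_000, random_gaps=True) == 550_000_000
```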

Since Eltoo is the primary motivation for ANYPREVOUT, it's worth making
sure we have all the parts we'd need bundled together to see it be
successful.

A few options come to mind that might be desirable in order to better serve
the eltoo usecase

1) Define a new CSV type (e.g. define (1<<31 | 1<<30) as being dedicated
to eltoo sequences). This has the benefit of giving a per input sequence,
but the drawback of using a CSV bit. Because there's only 1 CSV per input,
this technique cannot be used with a sequence tag.
2) CSFS -- it would be possible to take a signature from stack for an
arbitrary higher number, e.g.:
```
IF
10 CSV DROP
1::musigkey(As,Bs) CHECKSIG
ELSE
DUP musigkey(Aseq, BSeq) CSFSV  GTE VERIFY
   1::musigkey(Au,Bu) CHECKSIG
ENDIF
```
Then, posession of a higher signed sequence would allow for the use of the
update path. However, the downside is that there would be no guarantee that
the new state provided for update would be higher than the past one without
a more advanced covenant.
3) Sequenced Signature: It could be set up such that ANYPREVOUT keys are
tagged with an N-byte sequence (instead of 1), and part of the process of
signature verification includes hashing a sequence on the signature itself.

E.g.

```
IF
10 CSV DROP
1::musigkey(As,Bs) CHECKSIG
ELSE
   ::musigkey(Au,Bu) CHECKSIG
ENDIF
```
To satisfy this clause, a signature `::S` would be required. When
validating the signature S, the APO digest would have to include the value
. It is non cryptographically checked that N+1 > N.
4) Similar to 3, but look at more values off the stack. This is also OK,
but violates the principle of not making opcodes take variable numbers of
things off the stack. Verify semantics on the extra data fields could
ameliorate this concern, and it might make sense to do it that way.
5) Something in the Annex: It would also be possible to define a new
generic place for lock times in the annex (to permit dual height/time
relative/absolute, all per input). The pro of this approach is that it
would be solving an outstanding problem for script that we want to solve
anyway; the downside is that the Annex is totally undefined presently, so
it's unclear that this is an appropriate use for it.
6) Do Nothing :)


Overall I'm somewhat partial to option 3 as it seems to be closest to
making ANYPREVOUT more precisely designed to support Eltoo. It would also
be possible to make it such that if the tag N=1, then the behavior is
identical to the proposal currently.

--
@JeremyRubin 



Re: [bitcoin-dev] Unlimited covenants, was Re: CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-07 Thread Jeremy via bitcoin-dev
Hah -- ZmnSCPxj, that post's a doozy -- but it more or less makes sense of
the argument you're making in favor of permitting recursion at the
transaction level.

One part that's less clear is if you can make a case against being
recursive in Script fragments themselves -- ignoring bitcoin script for the
moment, what would be wrong with a small VM that a spender is able to
"purchase" a number of cycles and available memory via the annex, and the
program must execute and halt within that time? Then, per block/txn, you
can enforce a total cycle and memory limit. This still isn't quite the EVM,
since there's no cross object calling convention and the output is still
UTXOs. What are the arguments against this model from a safety perspective?



One of my general concerns with recursive covenants is the ability to "go
wrong" in surprising ways. Consider the following program (Sapio
pseudocode), which is a non-recursive
covenant (i.e., doable today with presigning oracles) that demonstrates the
issue.

struct Pool {
    members: Vec<(Amount, Key)>,
}
impl Pool {
    then! {
        fn withdraw(self, ctx) {
            let mut builder = ctx.template();
            for (a, k) in self.members.iter() {
                builder = builder.add_output(a, k.into(), None)?;
            }
            builder.into()
        }
    }
    guard! {
        fn all_signed(self, ctx) {
            Clause::And(self.members.iter().map(|(a,k)|
                Clause::Key(k.clone())).into())
        }
    }
    finish! {
        guarded_by: [all_signed]
        fn add_member(self, ctx, o_member: Option<(Amount, Key)>) {
            let member = o_member.into()?;
            let mut new_members = self.members.clone();
            new_members.push(member.clone());
            ctx.template().add_output(ctx.funds() + member.0,
                Pool { members: new_members }, None)?.into()
        }
    }
}

Essentially this is a recursive covenant that allows either Complete via
the withdraw call or Continue via add_member, while preserving the same
underlying code. In this case, all_signed must be signed by all current
participants to admit a new member.

This type of program is subtly "wrong" because the state transition of
add_member does not verify that the Pool's future withdraw call will be
valid. E.g., we could add more than 1MB of outputs, and then our program
would be "stuck". So it's critical that in our "production grade" covenant
system we do some static analysis before proceeding to a next step to
ensure that all future txns are valid. This is a strength of the CTV/Sapio
model presently, you always output a list of future txns to aid static
analysis.

However, when we make the leap to "automatic" covenants, I posit that it
will be *incredibly* difficult to prove that recursive covenants don't have
a "premature termination" where a state transition that should be valid in
an idealized setting is accidentally invalid in the actual bitcoin
environment and the program reaches a untimely demise.

For instance, OP_CAT has this footgun -- by only permitting 520 bytes, you
hit covenant limits at around 13 outputs assuming you are length checking
each one and not permitting bare script. We can avoid this specific footgun
some of the time by using SHA256STREAM instead, of course.
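One plausible accounting for that ~13-output figure (the 40-byte per-output size is an assumption for illustration: roughly an 8-byte amount plus a 32-byte script commitment per output):

```python
MAX_SCRIPT_ELEMENT_SIZE = 520  # limit on a single stack element, in bytes

# assumed per-output commitment: 8-byte amount + 32-byte hash, no bare script
BYTES_PER_OUTPUT = 8 + 32

# a covenant CATing output commitments into one element hits a wall here
assert MAX_SCRIPT_ELEMENT_SIZE // BYTES_PER_OUTPUT == 13
```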

However, it is generally very difficult to avoid all sorts of issues. E.g.,
with the ability to generate/update tapscript trees, what happens when
through updating a well formed tapscript tree 128 times you bump an
important clause past the 128-depth limit?

I don't think that these sorts of challenges mean that we shouldn't enable
covenants or avoid enabling them, but rather that as we explore we should
add primitives in a methodical way and give users/toolchain builders
primitives that enable and or encourage safety and good program design.

My personal view is that CTV/Sapio, with its AOT compilation of automated
state transitions and ability to statically analyze is a concept that can
mature and be used in production in the near term. But the tooling to
safely do recursive computations at the txn level will take quite a bit
longer to mature, and we should be investing effort in producing
compilers/frameworks for emitting well formed programs before we get too in
the weeds on things like OP_TWEAK. (side note -- there's an easy path for
adding this sort of experimental feature to Sapio if anyone is looking for
a place to start)
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] OP_CAT Makes Bitcoin Quantum Secure [was CheckSigFromStack for Arithmetic Values]

2021-07-06 Thread Jeremy via bitcoin-dev
Dear Bitcoin Devs,

As mentioned previously, OP_CAT (or a similar operation) can be used to make
Bitcoin "quantum safe" by Lamport-signing an EC signature. This should work
in both Segwit V0 and Tapscript, although you have to use HASH160 for it to
fit in Segwit V0.

See [my blog](https://rubin.io/blog/2021/07/06/quantum-bitcoin/) for the
specific construction, reproduced below.

Yet another entry to the "OP_CAT can do that too" list.

Best,

Jeremy
-


I recently published [a blog
post](https://rubin.io/blog/2021/07/02/signing-5-bytes/) about signing up
to a
5 byte value using Bitcoin script arithmetic and Lamport signatures.

By itself, this is neat, but a little limited. What if we could sign longer
messages? If we can sign up to 20 bytes, we could sign a HASH160 digest
which
is most likely quantum safe...

What would it mean if we signed the HASH160 digest of a signature? What the
what? Why would we do that?

Well, as it turns out, even if a quantum computer were able to crack ECDSA, it
would reveal the private key but not the ability to malleate the
content of what was actually signed.  I asked my good friend and
cryptographer
cryptographer
[Madars Virza](https://madars.org/) if my intuition was correct, and he
confirmed that it should be sufficient, but it's definitely worth closer
analysis before relying on this. While the ECDSA signature can be malleated
to a
different, negative form, if the signature is otherwise made immalleable
there
should only be one value the commitment can be opened to.

If we required the ECDSA signature be signed with a quantum proof signature
algorithm, then we'd have a quantum proof Bitcoin! And the 5 byte signing
scheme
we discussed previously is a Lamport signature, which is quantum secure.
Unfortunately, we need at least 20 contiguous bytes... so we need some sort
of
OP\_CAT like operation.

OP\_CAT can't be directly soft forked to Segwit v0 because it modifies the
stack, so instead we'll (for simplicity) also show how to use a new opcode
that
uses verify semantics, OP\_SUBSTRINGEQUALVERIFY that checks a splice of a
string
for equality.

```
... FOR j in 0..=5
<0>
... FOR i in 0..=31
SWAP hash160 DUP <image_1> EQUAL IF DROP <2**i> ADD ELSE
<image_0> EQUALVERIFY ENDIF
... END FOR
TOALTSTACK
... END FOR

DUP HASH160

... IF CAT AVAILABLE
FROMALTSTACK
... FOR j in 0..=5
FROMALTSTACK
CAT
... END FOR
EQUALVERIFY
... ELSE SUBSTRINGEQUALVERIFY AVAILABLE
... FOR j in 0..=5
FROMALTSTACK <0+j*4> <4+j*4> SUBSTRINGEQUALVERIFY DROP DROP DROP
...  END FOR
DROP
... END IF

<pk> CHECKSIG
```

That's a long script... but will it fit? We need to verify 20 bytes of
message; each bit takes around 10 bytes of script, an average of 3.375 bytes
per number (counting pushes), and two 21-byte keys = 55.375 bytes of program
space and 21 bytes of witness element per bit.

It fits! `20*8*55.375 = 8860`, which leaves 1140 bytes under the 10,000 byte
limit for the rest of the logic, which is plenty (around 15-40 bytes required
for the rest of the logic, leaving 1100 free for custom signature checking).
The stack size is 160 elements for the hash gadget, 3360 bytes.

This can probably be made a bit more efficient by expanding to a ternary
representation.

```
SWAP hash160 DUP <image_0> EQUAL IF DROP ELSE <3**i> SWAP DUP
<image_1> EQUAL IF DROP SUB ELSE <image_2> EQUALVERIFY ADD ENDIF
ENDIF
```

This should bring it up to roughly 85 bytes per trit, and there should be 101
trits (`log(2**160)/log(3) == 100.94`), so about 8560 bytes... a bit cheaper!
But the witness stack is "only" `2121` bytes...

As a homework exercise, maybe someone can prove the optimal choice of radix
for
this protocol... My guess is that base 4 is optimal!
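For anyone tempted by the homework problem, here is a toy cost model (the per-digit constants are rough guesses for illustration, so the optimum it finds proves nothing about the real answer):

```python
import math

# Toy model: signing a 160-bit value in radix r takes ceil(160/log2(r))
# digits, and each digit gadget costs a fixed overhead plus one hash
# image (~21 bytes + glue) per possible digit value. The constants are
# assumptions, not measurements; only the shape of the tradeoff matters.
FIXED_PER_DIGIT = 10
BYTES_PER_DIGIT_VALUE = 27

def script_cost(radix: int, msg_bits: int = 160) -> int:
    digits = math.ceil(msg_bits / math.log2(radix))
    return digits * (FIXED_PER_DIGIT + radix * BYTES_PER_DIGIT_VALUE)

costs = {r: script_cost(r) for r in range(2, 9)}
best = min(costs, key=costs.get)
print(best, costs[best])
```

Under these assumed constants the minimum happens to land at base 3; with different per-value costs, base 4 (or base 2) can win, which is exactly what makes it a nice homework problem.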

## Taproot?

What about Taproot? As far as I'm aware the commitment scheme (`Q = pG +
hash(pG
|| m)G`) can be securely opened to m even with a quantum computer (finding
`q`
such that `qG = Q` might be trivial, but suppose the key path was disabled;
then finding m and p such that the taproot equation holds should be difficult
because of the hash, but I'd need to certify that claim better).  Therefore
this script can nest inside a Tapscript path -- Tapscript also does not
impose a length limit, so 32-byte hashes could be used as well.

Further, to make keys reusable, there could be many Lamport keys committed
inside a taproot tree so that an address could be used thousands of times
before expiring. This could be used as a measure to protect against
accidental reuse rather than to support it.

Lastly, Schnorr actually has a stronger non-malleability property than ECDSA:
the signatures will be binding to the approved transaction, and once Lamport
signed, even a quantum computer could not steal the funds.






--
@JeremyRubin 



Re: [bitcoin-dev] CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-06 Thread Jeremy via bitcoin-dev
I don't think Elements engineering decisions or management timelines should
have any bearing on what Bitcoin adopts, beyond learning what
works/doesn't. Same as litecoin, dogecoin, or bitcoin cash :)

With my understanding of Elements, it makes sense that you wouldn't want to
break compatibility from script version to script version, although it seems
inevitable that you will need to either hard fork or break compatibility if
you want to fix the bug whereby CHECKSIGFROMSTACK has verify semantics. But
perhaps that's a smaller change than the number of stack elements popped? Given
CAT, it makes sense that adding a split CSFS wouldn't be a priority. However,
I'd suggest that as far as Elements is concerned, if the bitcoin community
decides on something that is incompatible, Elements can use up some
additional opcodes or a keytype to add CSFS_BITCOIN_COMPAT ops.


Re: [bitcoin-dev] Unlimited covenants, was Re: CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-06 Thread Jeremy via bitcoin-dev
heh -- I pointed out these evil multisig covenants in 2015 :)
https://medium.com/@jeremyrubin/regulating-bitcoin-by-mining-the-regulator-miner-attack-c8fd51185b78
I'm relatively unconcerned by it except to the extent that mining
centralizes to the point of censoring other traffic.

Overall, I think this is a great conversation to be having.

However, I want to push back on David's claim that  "Respecting the
concerns of others doesn't require lobotomizing useful tools.".

CHECKSIGFROMSTACK is a primitive and the opcode is not being nerfed in any
way, shape, or form. The argument here is that doing CSFS and not CAT is
nerfing CSFS... but CSFS is an independently useful and cool opcode that
has many merits of its own.

Further, as described in my [blog post](
https://rubin.io/blog/2021/07/02/covenants/), CSFS has very high "design
specificity"... that is, there aren't *that* many design choices that could
possibly go into it. It's checking a signature. From the stack. That's all
folks! There are no design compromises in it. No lobotomy.

OP_CAT is more or less completely unrelated to CSFS. As Andrew has
[demonstrated](
https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-i.html),
*just* OP_CAT alone (no CSFS) gives you covenants (albeit in a hacky way)
with Schnorr.

I think roconnor agrees that CAT(+CSFS?) are not really a "fantastic" way
to do covenants, and that there are more direct approaches that will be
better or necessary, such as TWEAK or UPDATETAPLEAF. Let's work on those! But
let's also not hold up progress on other useful things while those are
brewing.

Non-redundancy should be a non-goal for script -- although we strive to be
minimal, redundancy is inevitable. For example, OP_SWAP has semantics
identical to <1> ROLL, but SWAP is common enough that it is pragmatic to
assign it its own opcode, while OP_ROLL provides distinctly enhanced
functionality. Similarly, even if we add CAT, we will surely come up with
saner ways to implement covenant logic than Andrew's Schnorr tricks.

CTV in particular is designed to be a part of that story -- enough
functionality w/o OP_CAT to work *today* and serve a purpose long into the
future, but with OP_CAT (or shastream, preferably) it gains useful
functionality, and with introspection opcodes (perhaps like those being
developed by Elements) it gains further functionality still. Perhaps the
functionality available today will be redundant with a future way of doing
things, but we can only see so far into the future. However, we can see
that there are good things to build with it today.

It's the inverse of a lobotomy. Independent components that can come
together for a newer greater purpose rather than parts being torn apart
irreparably.

In the future when we have specific use cases in mind that *aren't* served
well (either efficiently or at all) by the existing primitives, it's
completely acceptable to add something new even if it makes an existing
feature redundant. APO, for example, will be redundant (afaict) with Glen
Willen's [Bitmask SigHash Flags](
https://bc-2.jp/archive/season2/materials/0203_NewElementsFeaturesEn.pdf)
should we ever get those.

--
@JeremyRubin 



Re: [bitcoin-dev] CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-06 Thread Jeremy via bitcoin-dev
Re-threading Sanket's comment on split R value:

> I also am in general support of the `OP_CHECKSIGFROMSTACK` opcode. We
> would need to update the suggestion to BIP340, and add it to the sigops
> budget. I have no strong preference for splitting R and s values or
> variable-length messages.

Back to my comment:


I see a few options:

1) Making a new 64 byte PK standard which is (R, PK)
2) Splitting (R,S)
3) Different opcodes
4) CAT

The drawback of option 1 is that it's designed to support only very
specific use cases. The main drawback of splitting via option 2 is that you
entail an extra push byte for every use. Option 3 wastes opcodes. CAT has
the general drawbacks of CAT, but it's worth noting that CAT will likely
eventually land, making the splitting feature redundant.


Before getting too in the weeds, it might be worth listing out interesting
script fragments that people are aware of with split R/S so we can see how
useful it might be?

Use a specific R Value
-   ||  SWAP  CSFS

Reuse arbitrary R for a specific M (pay to leak key)
-  ||  DUP2 EQUAL NOT VERIFY 2 PICK SWAP  DUP TOALTSTACK
CSFSV FROMALTSTACK CSFS

Verify 2 different messages reuse the same R.
-  ||  2 PICK EQUAL NOT VERIFY 3 PICK  DUP
TOALTSTACK CSFSV FROMALTSTACK CSFS

Use a R Value signed by an oracle:
-  || DUP TOALTSTACK  CSFSV
FROMALTSTACK SWAP  CSFS


Re: [bitcoin-dev] CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-04 Thread Jeremy via bitcoin-dev
>
> Do you have concerns about sophisticated covenants, and if so, would you
> mind describing them?


Personally, I'm not particularly worried about arbitrary covenants, as I
think that: (1) validation costs can be kept in check; (2) you're free to
burn your coins if you want to.

I *do* care that when we enable covenants we don't make people jump through
too many hoops, but I also respect Russell's point that we can enable
functionality and then later figure out how to make it more efficient or
useful, guided by use cases.


However, I think the broader community is unconvinced by the cost benefit
of arbitrary covenants. See
https://medium.com/block-digest-mempool/my-worries-about-too-generalized-covenants-5eff33affbb6
as a recent example. Therefore as a critical part of building consensus on
various techniques I've worked to emphasize that specific additions do not
entail risk of accidentally introducing more than was bargained for to
respect the concerns of others.


>
> I'm a fan of CSFS, even mentioning it on zndtoshi's recent survey[2],
> but it seems artificially limited without OP_CAT.  (I also stand by my
> answer on that survey of believing there's a deep lack of developer
> interest in CSFS at the moment.  But, if you'd like to tilt at that
> windmill, I won't stop you.)


Well, if you're a fan of it, I'm a fan of it, Russell's a fan of it, and
Sanket's a fan of it, that sounds like a good amount of dev interest :) I
know Olaoluwa is also a fan of it and has some cool L2 protocols using
it.

I think it might not be *hyped* because it's been around a while and has
always been bundled with CAT, so it's been a non-starter for the reasons
above. I think, as an independent non-bundle, it's exciting and acceptable
to a number
of devs. I also believe upgrades can be developed and tracked in parallel
so I'm taking on the windmill tilting personally to spearhead that -- on
the shoulders of Giants who have been creating specs for this already of
course.

Best,

Jeremy

P.s. icymi https://rubin.io/blog/2021/07/02/covenants/ covers my current
thinking about how to proceed w.r.t. deploying and developing covenant
systems for bitcoin


Re: [bitcoin-dev] CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-04 Thread Jeremy via bitcoin-dev
I don't really see the point of CHECKSIGFROMSTACKADD since it's not bound
to the txdata? When might you use this?

And yes -- "Add OP_CHECKSIGFROMSTACK and OP_CHECKSIGFROMSTACKVERIFY to
follow the semantics from bip340-342 when witness program is v1." is a bit
light on detail for what the BIP would end up looking like. If you're able
to open up the design process a bit more on that it would be good as I
think there are some topics worth discussing at large before things proceed
with Elements (assuming feature compatibility remains a goal).

The non-prehashed message argument seems OK (at the cost of an extra
byte...), but are there specific applications for arguments != 32 bytes? I
can't think of a particular one beyond perhaps efficiency. Can we safely use
0-520 byte arguments?

Also, do you have thoughts on the other questions I posed above? E.g.,
splitting R/S could be helpful w/o CAT.

--
@JeremyRubin 



On Sat, Jul 3, 2021 at 1:13 PM Russell O'Connor 
wrote:

> There is one line written at
> https://github.com/ElementsProject/elements/pull/949/files#r660130155.
>


Re: [bitcoin-dev] CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-03 Thread Jeremy via bitcoin-dev
Awesome to hear that!

Actually I don't think I did know (or I forgot/didn't catch it) that there
was an updated spec for elements, I searched around for what I could find
and came up empty handed. Do you have any links for that? That sounds
perfect to me.


On Sat, Jul 3, 2021, 10:50 AM Russell O'Connor 
wrote:

> Hi Jeremy,
>
> As you are aware, we, and by we I mean mostly Sanket, are developing an
> updated OP_CHECKSIGFROMSTACK implementation for tapscript on elements.  The
> plan here would be to effectively support an interface to the
> variable-length extension of BIP-0340 schnorr signatures.
>
> BIP-0340 would dispense with DER encoding (good riddance).
> BIP-0340 signatures are batch verifiable along with other BIP-0340
> transaction signatures and taproot tweak verification.
> Support for variable length messages in BIP-0340 has been discussed in <
> https://github.com/sipa/bips/issues/207> and an implementation has
> recently been merged in <
> https://github.com/bitcoin-core/secp256k1/pull/844>.  The BIP has not yet
> been updated but the difference is that the message m does not have to be
> 32 bytes (it is recommended that the message be a 32-byte tagged hash or a
> message with a 64-byte application-specific prefix). The CHECKSIGFROMSTACK
> operation (in tapscript) would use a stack item for this m value to
> BIP-0340 signature verification and would not necessarily have to be 32
> bytes.
>
> I think this design we are aiming for would be perfectly suited for
> Bitcoin as well.
>
> On Sat, Jul 3, 2021 at 12:32 PM Jeremy via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Reproduced below is the BIP text from Bitcoin Cash's (MIT-Licensed)
>> specification for "CheckDataSig", more or less the same thing as
>> CHECKSIGFROMSTACK
>> https://github.com/bitcoincashorg/bitcoincash.org/blob/master/spec/op_checkdatasig.md.
>> In contrast to Element's implementation, it does not have Element's bugs
>> around verify semantics and uses the nullfail rule, and there is a
>> specification document so it seemed like the easiest starting point for
>> discussion v.s. drafting something from scratch.
>>
>> Does anyone have any issue with adapting this exact text and
>> implementation to a BIP for Bitcoin using 2 OP_SUCCESSX opcodes?
>>
>> Note that with *just* CheckSigFromStack, while you can do some very
>> valuable use cases, but without OP_CAT it does not enable sophisticated
>> covenants (and as per
>> https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-i.html
>> just CAT alone enables such uses).
>>
>> Design questions worth considering as modifications:
>>
>> 1. Should CSFS require some sort of tagged hash? Very likely answer is no
>> – tags interfere with certain use cases
>> 2. Should CSFS split the signature’s R & S value stack items for some
>> applications that otherwise may require OP_CAT? E.g. using a pinned R value
>> allows you to extract a private key if ever double signed, using 2 R values
>> allows pay-to-reveal-key contracts. Most likely answer is no, if that is
>> desired then OP_CAT can be introduced
>> 3. Should CSFS support a cheap way to reference the taproot internal or
>> external key? Perhaps, can be handled with undefined upgradeable keytypes.
>> One might want to use the internal key, if the signed data should be valid
>> independent of the tapscript tree. One might want to use the external key,
>> if the data should only be valid for a single tapscript key + tree.
>> 4. Should invalid public keys types be a NOP to support future extended
>> pubkey types?
>>
>>
>>
>> Best,
>>
>>
>> Jeremy
>>
>>
>> ---
>> layout: specification
>> title: OP_CHECKDATASIG and OP_CHECKDATASIGVERIFY Specification
>> category: spec
>> date: 2018-08-20
>> activation: 1542300000
>> version: 0.6
>> ---
>>
>> OP_CHECKDATASIG
>> ===
>>
>> OP_CHECKDATASIG and OP_CHECKDATASIGVERIFY check whether a signature is valid 
>> with respect to a message and a public key.
>>
>> OP_CHECKDATASIG permits data to be imported into a script, and have its 
>> validity checked against some signing authority such as an "Oracle".
>>
>> OP_CHECKDATASIG and OP_CHECKDATASIGVERIFY are designed to be implemented 
>> similarly to OP_CHECKSIG [1]. Conceptually, one could imagine OP_CHECKSIG 
>> functionality being replaced by OP_CHECKDATASIG, along with a separate Op 
>> Code to create a hash from the transaction based on the SigHash algorithm.
>>
>> OP_CHECKDATASIG Specification

[bitcoin-dev] CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-03 Thread Jeremy via bitcoin-dev
Reproduced below is the BIP text from Bitcoin Cash's (MIT-Licensed)
specification for "CheckDataSig", more or less the same thing as
CHECKSIGFROMSTACK
https://github.com/bitcoincashorg/bitcoincash.org/blob/master/spec/op_checkdatasig.md.
In contrast to Element's implementation, it does not have Element's bugs
around verify semantics and uses the nullfail rule, and there is a
specification document so it seemed like the easiest starting point for
discussion v.s. drafting something from scratch.

Does anyone have any issue with adapting this exact text and implementation
to a BIP for Bitcoin using 2 OP_SUCCESSX opcodes?

Note that with *just* CheckSigFromStack, while you can do some very
valuable use cases, but without OP_CAT it does not enable sophisticated
covenants (and as per
https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-i.html just
CAT alone enables such uses).

Design questions worth considering as modifications:

1. Should CSFS require some sort of tagged hash? Very likely answer is no –
tags interfere with certain use cases
2. Should CSFS split the signature’s R & S value stack items for some
applications that otherwise may require OP_CAT? E.g. using a pinned R value
allows you to extract a private key if ever double signed, using 2 R values
allows pay-to-reveal-key contracts. Most likely answer is no, if that is
desired then OP_CAT can be introduced
3. Should CSFS support a cheap way to reference the taproot internal or
external key? Perhaps, can be handled with undefined upgradeable keytypes.
One might want to use the internal key, if the signed data should be valid
independent of the tapscript tree. One might want to use the external key,
if the data should only be valid for a single tapscript key + tree.
4. Should invalid public keys types be a NOP to support future extended
pubkey types?



Best,


Jeremy


---
layout: specification
title: OP_CHECKDATASIG and OP_CHECKDATASIGVERIFY Specification
category: spec
date: 2018-08-20
activation: 1542300000
version: 0.6
---

OP_CHECKDATASIG
===

OP_CHECKDATASIG and OP_CHECKDATASIGVERIFY check whether a signature is
valid with respect to a message and a public key.

OP_CHECKDATASIG permits data to be imported into a script, and have
its validity checked against some signing authority such as an
"Oracle".

OP_CHECKDATASIG and OP_CHECKDATASIGVERIFY are designed to be
implemented similarly to OP_CHECKSIG [1]. Conceptually, one could
imagine OP_CHECKSIG functionality being replaced by OP_CHECKDATASIG,
along with a separate Op Code to create a hash from the transaction
based on the SigHash algorithm.

OP_CHECKDATASIG Specification
-

### Semantics

OP_CHECKDATASIG fails immediately if the stack is not well formed. To
be well formed, the stack must contain at least three elements
[`<sig>`, `<msg>`, `<pubKey>`] in this order where `<pubKey>` is the
top element and
  * `<pubKey>` must be a validly encoded public key
  * `<msg>` can be any string
  * `<sig>` must follow the strict DER encoding as described in [2]
and the S-value of `<sig>` must be at most the curve order divided by
2 as described in [3]

If the stack is well formed, then OP_CHECKDATASIG pops the top three
elements [`<sig>`, `<msg>`, `<pubKey>`] from the stack and pushes true
onto the stack if `<sig>` is valid with respect to the raw
single-SHA256 hash of `<msg>` and `<pubKey>` using the secp256k1
elliptic curve. Otherwise, it pops three elements and pushes false
onto the stack in the case that `<sig>` is the empty string and fails
in all other cases.

Nullfail is enforced the same as for OP_CHECKSIG [3]. If the signature
does not match the supplied public key and message hash, and the
signature is not an empty byte array, the entire script fails.
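A sketch of these semantics in Python (the encoding checks and secp256k1 verification are abstracted into caller-supplied helpers, which are assumptions here; this is illustrative, not the reference implementation):

```python
import hashlib

class ScriptError(Exception):
    pass

def op_checkdatasig(stack, is_valid_pubkey, is_strict_der, verify_ecdsa):
    # Stack top is the end of the list: [..., <sig>, <msg>, <pubKey>].
    # The three callables stand in for real encoding checks and
    # secp256k1 verification (assumed helpers, not real APIs).
    if len(stack) < 3:
        raise ScriptError("stack not well formed")
    pubkey = stack.pop()
    msg = stack.pop()
    sig = stack.pop()
    if not is_valid_pubkey(pubkey):
        raise ScriptError("invalid public key encoding")
    if sig != b"" and not is_strict_der(sig):
        raise ScriptError("signature is not strict DER / low-S")
    digest = hashlib.sha256(msg).digest()  # raw single-SHA256 of <msg>
    if sig != b"" and verify_ecdsa(pubkey, digest, sig):
        stack.append(b"\x01")              # push true
    elif sig == b"":
        stack.append(b"")                  # push false for the empty signature
    else:
        raise ScriptError("NULLFAIL: non-empty signature failed validation")

# Demo with stub helpers standing in for real crypto:
stack = [b"der-sig", b"message", b"pubkey"]
op_checkdatasig(stack, lambda pk: True, lambda s: True, lambda pk, d, s: True)
assert stack == [b"\x01"]
```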

### Opcode Number

OP_CHECKDATASIG uses the previously unused opcode number 186 (0xba in
hex encoding)

### SigOps

Signature operations accounting for OP_CHECKDATASIG shall be
calculated the same as OP_CHECKSIG. This means that each
OP_CHECKDATASIG shall be counted as one (1) SigOp.

### Activation

Use of OP_CHECKDATASIG, unless occurring in an unexecuted OP_IF branch,
will make the transaction invalid if it is included in a block where
the median timestamp of the prior 11 blocks is less than 1542300000.

### Unit Tests

 - `<sig> <msg> <pubKey> OP_CHECKDATASIG` fails if 15 November 2018
protocol upgrade is not yet activated.
 - `<sig> <msg> OP_CHECKDATASIG` fails if there are fewer than 3 items on stack.
 - `<sig> <msg> <pubKey> OP_CHECKDATASIG` fails if `<pubKey>` is not a
validly encoded public key.
 - `<sig> <msg> <pubKey> OP_CHECKDATASIG` fails if `<sig>` is not a
validly encoded signature with strict DER encoding.
 - `<sig> <msg> <pubKey> OP_CHECKDATASIG` fails if signature `<sig>`
is not empty and does not pass the Low S check.
 - `<sig> <msg> <pubKey> OP_CHECKDATASIG` fails if signature `<sig>`
is not empty and does not pass signature validation of `<msg>` and
`<pubKey>`.
 - `<sig> <msg> <pubKey> OP_CHECKDATASIG` pops three elements and
pushes false onto the stack if `<sig>` is an empty byte array.
 - `<sig> <msg> <pubKey> OP_CHECKDATASIG` pops three elements and
pushes true onto the stack if `<sig>` is a valid signature of `<msg>`
with respect to `<pubKey>`.


[bitcoin-dev] Templates, Eltoo, and Covenants, Oh My!

2021-07-03 Thread Jeremy via bitcoin-dev
Dear Bitcoin Devs,

I recently put a blog post up which is of interest for this list. Post
available here: https://rubin.io/blog/2021/07/02/covenants/ (text
reproduced below for archives).

The main technical points of interest for this list are:

1) There's a similar protocol to Eltoo built with CSFS + CTV
2) There may be a similar protocol to Eltoo with exclusively CSFS

I'm curious if there's any sentiment around if a soft fork enabling CSFS is
controversial? Or if there are any thoughts on the design questions posed
below (e.g., splitting r and s value).

Best,

Jeremy



If you've been following The Discourse, you probably know that Taproot is
merged, locked in, and will activate later this November. What you might not
know is what's coming next... and you wouldn't be alone in that. There are a
number of fantastic proposals floating around to further improve Bitcoin,
but
there's no clear picture on what is ready to be added next and on what
timeline. No one -- core developer, technically enlightened individuals,
power
users, or plebs -- can claim to know otherwise.


In this post I'm going to describe 4 loosely related possible upgrades to
Bitcoin -- SH_APO (BIP-118), OP_CAT, OP_CSFS, and OP_CTV (BIP-119). These
four
upgrades all relate to how the next generation of stateful smart contracts
can
be built on top of bitcoin. As such, there's natural overlap -- and
competition
-- for mindshare for review and deployment. This post is my attempt to
stitch
together a path we might take to roll them out and why that ordering makes
sense. This post is for developers and engineers building in the Bitcoin
space,
but is intended to be followable by anyone technical or not who has a keen
interest in Bitcoin.


## Bitcoin Eschews Roadmaps and Agendas.


I provide this maxim to make clear that this document is by no means an
official roadmap, narrative, or prioritization. However, it is my own
assessment of what the current most pragmatic approach to upgrading Bitcoin
is,
based on my understanding of the state of outstanding proposals and their
interactions.


My priorities in producing this are to open a discussion on potential new
features, risk minimization, and pragmatic design for Bitcoin.


### Upgrade Summaries


Below follows summaries of what each upgrade would enable and how it works.
You
might be tempted to skip it if you're already familiar with the upgrades,
but I
recommend reading in any case as there are a few non obvious insights.


 APO: SIGHASH_ANYPREVOUT, SIGHASH_ANYPREVOUTANYSCRIPT


Currently proposed as
[BIP-118](
https://github.com/bitcoin/bips/blob/d616d5492bc6e6566af1b9f9e43b660bcd48ca29/bip-0118.mediawiki).



APO provides two new signature digest algorithms: one that does not commit
to the coin being spent, and one that additionally does not commit to the
current script. Essentially, this allows scripts to use outputs that didn't
exist at the time the script was made. This
would be
a new promise enforced by Bitcoin (ex. “You can close this Lightning channel
and receive these coins if you give me the right proof. If a newer proof
comes
in later I’ll trust that one instead.”).


APO’s primary purpose is to enable off chain protocols like
[Eltoo](https://blockstream.com/2018/04/30/en-eltoo-next-lightning/), an
improved non-punitive payment channel protocol.


APO can also
[emulate](
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017038.html
)
some of the main features of CTV and could be made to work with Sapio,
partially. See the complimentary upgrades section for more detail.


 CAT (+ variants)


Currently no BIP. However, CAT exists in
[Elements](
https://github.com/ElementsProject/elements/blob/bd2e2d5c64d38286b2ca0519f1215bed228e4dcf/src/script/interpreter.cpp#L914-L933
)
and [Bitcoin
Cash](
https://github.com/bitcoincashorg/bitcoincash.org/blob/3e2e6da8c38dab7ba12149d327bc4b259aaad684/spec/may-2018-reenabled-opcodes.md
)
as a 520 byte limited form, so a proposal for Bitcoin can crib heavily from
either.


CAT enables appending data onto other pieces of data. Diabolically simple
functionality that has many advanced use cases by itself and in concert with
other opcodes. There are many "straightforward" use cases of cat like
requiring
sighash types, requiring specific R values, etc, but there are too many
devious
use cases to list here.  Andrew Poelstra has a decent blogpost series ([part
1](https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-i.html) and
[part
ii](https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-ii.html))
if
you're interested to read more. In particular, with much cleverness, it
seems
possible one could implement full covenants with just CAT, which covers
(inefficiently) most of the other techniques discussed in this post.


 CSFS: CHECKSIGFROMSTACK



Re: [bitcoin-dev] CheckSigFromStack for Arithmetic Values

2021-07-02 Thread Jeremy via bitcoin-dev
Yep -- sorry for the confusing notation but seems like you got it. C++
templates have this issue too btw :)

One cool thing is that if you have op_add for arbitrary width integers or
op_cat, you can also make a quantum proof signature by Lamport-signing the
signature made with checksig.

There are a couple of gotchas wrt crypto assumptions on that, but I'll write
it up soon. It also works better in segwit V0 because there's no keypath
spend -- a keypath spend breaks the quantum-proofness of this scheme.
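For readers following along, a minimal Python sketch of the bitwise Lamport number-signing under discussion (toy code for intuition, not the in-script construction):

```python
import hashlib
import os

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen(bits: int = 16):
    # One preimage pair per bit: slot 0 signs a 0-bit, slot 1 a 1-bit.
    secrets = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pubkey = [(sha256(s0), sha256(s1)) for s0, s1 in secrets]
    return secrets, pubkey

def sign(value: int, secrets):
    # Reveal one preimage per bit of the value.
    return [secrets[i][(value >> i) & 1] for i in range(len(secrets))]

def recover(witness, pubkey) -> int:
    # Recompute the signed number from the preimages, as the script would.
    acc = 0
    for i, preimage in enumerate(witness):
        h = sha256(preimage)
        if h == pubkey[i][1]:
            acc += 1 << i
        elif h != pubkey[i][0]:
            raise ValueError(f"invalid preimage for bit {i}")
    return acc

secrets, pubkey = keygen()
assert recover(sign(12345, secrets), pubkey) == 12345
```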

On Fri, Jul 2, 2021, 4:58 PM ZmnSCPxj  wrote:

> Good morning Jeremy,
>
> > Dear Bitcoin Devs,
> >
> > It recently occurred to me that it's possible to do a lamport signature
> in script for arithmetic values by using a binary expanded representation.
> There are some applications that might benefit from this and I don't recall
> seeing it discussed elsewhere, but would be happy for a citation/reference
> to the technique.
> >
> > blog post here, https://rubin.io/blog/2021/07/02/signing-5-bytes/, text
> reproduced below
> >
> > There are two insights in this post:
> > 1. to use a bitwise expansion of the number
> > 2. to use a lamport signature
> > Let's look at the code in python and then translate to bitcoin script:
> > ```python
> > def add_bit(idx, preimage, image_0, image_1):
> >     s = sha256(preimage)
> >     if s == image_1:
> >         return (1 << idx)
> >     if s == image_0:
> >         return 0
> >     else:
> >         assert False
> >
> > def get_signed_number(witnesses: List[Hash], keys: List[Tuple[Hash, Hash]]):
> >     acc = 0
> >     for (idx, preimage) in enumerate(witnesses):
> >         acc += add_bit(idx, preimage, keys[idx][0], keys[idx][1])
> >     return acc
> > ```
> > So what's going on here? The signer generates a key which is a list of
> pairs of
> > hash images to create the script.
> > To sign, the signer provides a witness of a list of preimages that match
> one or the other.
> > During validation, the network adds up a weighted value per preimage and
> checks
> > that there are no left out values.
> > Let's imagine a concrete use case: I want a third party to post-hoc sign
> a sequence lock. This is 16 bits.
> > I can form the following script:
> > ```
> > <pubkey> checksigverify
> > 0
> > SWAP sha256 DUP <H(K_0_1)> EQUAL IF DROP <1> ADD ELSE <H(K_0_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_1_1)> EQUAL IF DROP <1<<1> ADD ELSE <H(K_1_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_2_1)> EQUAL IF DROP <1<<2> ADD ELSE <H(K_2_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_3_1)> EQUAL IF DROP <1<<3> ADD ELSE <H(K_3_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_4_1)> EQUAL IF DROP <1<<4> ADD ELSE <H(K_4_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_5_1)> EQUAL IF DROP <1<<5> ADD ELSE <H(K_5_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_6_1)> EQUAL IF DROP <1<<6> ADD ELSE <H(K_6_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_7_1)> EQUAL IF DROP <1<<7> ADD ELSE <H(K_7_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_8_1)> EQUAL IF DROP <1<<8> ADD ELSE <H(K_8_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_9_1)> EQUAL IF DROP <1<<9> ADD ELSE <H(K_9_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_10_1)> EQUAL IF DROP <1<<10> ADD ELSE <H(K_10_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_11_1)> EQUAL IF DROP <1<<11> ADD ELSE <H(K_11_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_12_1)> EQUAL IF DROP <1<<12> ADD ELSE <H(K_12_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_13_1)> EQUAL IF DROP <1<<13> ADD ELSE <H(K_13_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_14_1)> EQUAL IF DROP <1<<14> ADD ELSE <H(K_14_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_15_1)> EQUAL IF DROP <1<<15> ADD ELSE <H(K_15_0)> EQUALVERIFY ENDIF
> > CHECKSEQUENCEVERIFY
> > ```
>
> This took a bit of thinking to understand, mostly because you use the `<<`
> operator in a syntax that uses `< >` as delimiters, which was mildly
> confusing --- at first I thought you were pushing some kind of nested
> SCRIPT representation. In any case, replacing it with the actual numbers is
> a little less confusing on the syntax front, and I think (hope?) most
> people who can understand `1<<1` have also memorized the first few powers
> of 2.
>
> > ```
> > <pubkey> checksigverify
> > 0
> > SWAP sha256 DUP <H(K_0_1)> EQUAL IF DROP <1> ADD ELSE <H(K_0_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_1_1)> EQUAL IF DROP <2> ADD ELSE <H(K_1_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_2_1)> EQUAL IF DROP <4> ADD ELSE <H(K_2_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_3_1)> EQUAL IF DROP <8> ADD ELSE <H(K_3_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_4_1)> EQUAL IF DROP <16> ADD ELSE <H(K_4_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_5_1)> EQUAL IF DROP <32> ADD ELSE <H(K_5_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_6_1)> EQUAL IF DROP <64> ADD ELSE <H(K_6_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_7_1)> EQUAL IF DROP <128> ADD ELSE <H(K_7_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_8_1)> EQUAL IF DROP <256> ADD ELSE <H(K_8_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_9_1)> EQUAL IF DROP <512> ADD ELSE <H(K_9_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_10_1)> EQUAL IF DROP <1024> ADD ELSE <H(K_10_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_11_1)> EQUAL IF DROP <2048> ADD ELSE <H(K_11_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_12_1)> EQUAL IF DROP <4096> ADD ELSE <H(K_12_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_13_1)> EQUAL IF DROP <8192> ADD ELSE <H(K_13_0)> EQUALVERIFY ENDIF
> > SWAP sha256 DUP <H(K_14_1)> EQUAL IF DROP <16384> ADD ELSE <H(K_14_0)> EQUALVERIFY ENDIF
> > SWAP 

[bitcoin-dev] CheckSigFromStack for Arithmetic Values

2021-07-02 Thread Jeremy via bitcoin-dev
Dear Bitcoin Devs,

It recently occurred to me that it's possible to do a Lamport signature in
script for arithmetic values by using a binary expanded representation.
There are some applications that might benefit from this and I don't recall
seeing it discussed elsewhere, but would be happy for a citation/reference
to the technique.

blog post here, https://rubin.io/blog/2021/07/02/signing-5-bytes/, text
reproduced below

There are two insights in this post:

1. to use a bitwise expansion of the number
2. to use a lamport signature

Let's look at the code in python and then translate to bitcoin script:

```python
def add_bit(idx, preimage, image_0, image_1):
    s = sha256(preimage)
    if s == image_1:
        return (1 << idx)
    if s == image_0:
        return 0
    else:
        assert False

def get_signed_number(witnesses : List[Hash], keys : List[Tuple[Hash, Hash]]):
    acc = 0
    for (idx, preimage) in enumerate(witnesses):
        acc += add_bit(idx, preimage, keys[idx][0], keys[idx][1])
    return acc
```

So what's going on here? The signer generates a key, which is a list of
pairs of hash images, and uses it to create the script.

To sign, the signer provides as a witness a list of preimages, each matching
one image or the other.

During validation, the network adds up a weighted value per preimage and
checks that no values are left out.
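
The verification sketch above can be rounded out into a runnable end-to-end
example. Note that `keygen` and `sign` are my own naming, and the 32-byte
random preimages are an assumption; the post only shows the verification
side:

```python
import hashlib
import secrets

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen(bits: int = 16):
    # Secret key: per bit index, a pair of preimages (one committing to 0,
    # one committing to 1). Public key: the pair of their hash images.
    secret = [(secrets.token_bytes(32), secrets.token_bytes(32))
              for _ in range(bits)]
    public = [(sha256(s0), sha256(s1)) for (s0, s1) in secret]
    return secret, public

def sign(value: int, secret):
    # Reveal, per bit index, the preimage matching that bit of the value.
    return [secret[i][(value >> i) & 1] for i in range(len(secret))]

def add_bit(idx, preimage, image_0, image_1):
    s = sha256(preimage)
    if s == image_1:
        return 1 << idx
    if s == image_0:
        return 0
    raise ValueError("preimage matches neither image")

def get_signed_number(witnesses, keys):
    acc = 0
    for idx, preimage in enumerate(witnesses):
        acc += add_bit(idx, preimage, keys[idx][0], keys[idx][1])
    return acc

sk, pk = keygen()
assert get_signed_number(sign(53593, sk), pk) == 53593
```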

Let's imagine a concrete use case: I want a third party to post-hoc sign a
sequence lock. This is 16 bits.
I can form the following script:


```
<pubkey> checksigverify
0
SWAP sha256 DUP <H(K_0_1)> EQUAL IF DROP <1> ADD ELSE <H(K_0_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_1_1)> EQUAL IF DROP <1<<1> ADD ELSE <H(K_1_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_2_1)> EQUAL IF DROP <1<<2> ADD ELSE <H(K_2_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_3_1)> EQUAL IF DROP <1<<3> ADD ELSE <H(K_3_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_4_1)> EQUAL IF DROP <1<<4> ADD ELSE <H(K_4_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_5_1)> EQUAL IF DROP <1<<5> ADD ELSE <H(K_5_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_6_1)> EQUAL IF DROP <1<<6> ADD ELSE <H(K_6_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_7_1)> EQUAL IF DROP <1<<7> ADD ELSE <H(K_7_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_8_1)> EQUAL IF DROP <1<<8> ADD ELSE <H(K_8_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_9_1)> EQUAL IF DROP <1<<9> ADD ELSE <H(K_9_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_10_1)> EQUAL IF DROP <1<<10> ADD ELSE <H(K_10_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_11_1)> EQUAL IF DROP <1<<11> ADD ELSE <H(K_11_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_12_1)> EQUAL IF DROP <1<<12> ADD ELSE <H(K_12_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_13_1)> EQUAL IF DROP <1<<13> ADD ELSE <H(K_13_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_14_1)> EQUAL IF DROP <1<<14> ADD ELSE <H(K_14_0)> EQUALVERIFY ENDIF
SWAP sha256 DUP <H(K_15_1)> EQUAL IF DROP <1<<15> ADD ELSE <H(K_15_0)> EQUALVERIFY ENDIF
CHECKSEQUENCEVERIFY
```
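
The gadget is regular enough to generate mechanically. A sketch in Python;
the `<pubkey>` placeholder and `<...>` push formatting are my own, not a
canonical script encoding:

```python
def lamport_script(pub, pk_push="<pubkey>"):
    # pub[i] = (image_0, image_1): the hash images committing bit i to 0 / 1.
    # Emits one SWAP/SHA256/DUP gadget per bit, mirroring the script above.
    lines = [f"{pk_push} CHECKSIGVERIFY", "0"]
    for i, (img0, img1) in enumerate(pub):
        lines.append(
            f"SWAP SHA256 DUP <{img1}> EQUAL "
            f"IF DROP <{1 << i}> ADD ELSE <{img0}> EQUALVERIFY ENDIF"
        )
    lines.append("CHECKSEQUENCEVERIFY")
    return "\n".join(lines)

# Two-bit toy key; real images would be 32-byte hashes.
demo = lamport_script([("img_0_0", "img_0_1"), ("img_1_0", "img_1_1")])
```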

In order to sign a 16-bit value V, the owner of K simply puts on the stack
the preimages from K selected by the binary representation of V. E.g., to
sign `53593`, first expand to binary `0b1101000101011001`, then put the
appropriate K values on the stack.

```
K_15_1
K_14_1
K_13_0
K_12_1
K_11_0
K_10_0
K_9_0
K_8_1
K_7_0
K_6_1
K_5_0
K_4_1
K_3_1
K_2_0
K_1_0
K_0_1

```
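
The ordering can be checked mechanically -- bit i of V selects which of
K\_i\_0 / K\_i\_1 is revealed, and the listing above is written
deepest-stack-first (K\_15 pushed first, K\_0 on top for the first SWAP):

```python
V = 53593
assert V == 0b1101000101011001

# Bit i of V selects which preimage K_i_0 / K_i_1 is revealed.
labels = [f"K_{i}_{(V >> i) & 1}" for i in range(16)]

# Reverse index order reproduces the listing above (deepest-first).
stack_listing = list(reversed(labels))
assert stack_listing[0] == "K_15_1"
assert stack_listing[-1] == "K_0_1"
```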


This technique is kind of bulky! It's around 80x16 = 1280 bytes for the
gadget, and 528 bytes for the witnesses. So it is _doable_, if a bit
expensive. There might be some more efficient scripts for this -- would a
trinary representation be more efficient?

The values that can be signed can be range-limited either post-hoc (using
OP\_WITHIN) or internally, as was done with the 16-bit circuit above, where
it's impossible to represent more than 16 bits.

Keys *can* be reused across scripts, but a signature may only ever be
constructed once, because a third party could take two signed messages and
combine their preimages into an unintended value (e.g., if you sign both 4
and 2, a third party could construct 6).

There are certain applications where this could be used to good effect --
for example, an oracle might have a bonding contract whereby possessing any
K\_i\_0 and K\_i\_1 allows the burning of funds.

--
@JeremyRubin 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal: Full-RBF in Bitcoin Core 24.0

2021-06-26 Thread Jeremy via bitcoin-dev
If the parties trust each other, RBF is still opt-in. Just don't do it?

On Sat, Jun 26, 2021, 9:30 AM Billy Tetrud via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> >  services providers are offering zero-conf channels, where you can start
> to spend instantly [0]. I believe that's an interesting usage
>
> I agree those are interesting and useful cases. I suppose I should clarify
> that when I asked if bitcoin should continue supporting 0-conf
> transactions, I meant: should we make design decisions based on whether it
> makes raw 0-conf transactions more or less difficult to double spend on? I
> do think 0-conf transactions can be useful in situations where there is
> some level of trust (either direct trust between the interacting parties,
> or dispersed trust that most people won't try to double spend, perhaps
> because the transaction is small or their identity is tied to it). Fidelity
> bonds sound like an interesting way to mitigate sybil attacks in a
> reputation system.
>
> On Thu, Jun 24, 2021 at 5:23 PM Antoine Riard 
> wrote:
>
>> > Do we as a community want to support 0-conf payments in any way at this
>> > point? It seems rather silly to make software design decisions to
>> > accommodate 0-conf payments when there are better mechanisms for fast
>> > payments (ie lightning).
>>
>> Well, we have zero-conf LN channels? Admittedly, Lightning channel funding
>> transactions should be buried under a few blocks, though a few service
>> providers are offering zero-conf channels, where you can start to spend
>> instantly [0]. I believe that's an interesting usage, though IMHO, as
>> mentioned, we can explore different security models to make 0-conf safe
>> (reputation/fidelity-bond).
>>
>> > One question I have is: how does software generally inform the user
>> > about 0-conf payment detection?
>>
>> Yes, generally it's something like an "Unconfirmed" annotation on incoming
>> txns; at least, this is what Blockstream Green and Electrum do.
>>
>> > But I
>> suppose it would depend on how often 0-conf is used in the bitcoin
>> ecosystem at this point, which I don't have any data on.
>>
>> There are a few Bitcoin services well-known to rely on 0-conf. How much of
>> the Bitcoin traffic is tied to 0-conf is a harder question; a lot of
>> 0-conf service providers are going to be reluctant to share that
>> information, for a really good reason: you would learn a subset of their
>> business volumes.
>>
>> I'll see if I can come up with a Fermi estimation on this front.
>>
>> [0] https://www.bitrefill.com/thor-turbo-channels/
>>
>> Le mer. 16 juin 2021 à 20:58, Billy Tetrud  a
>> écrit :
>>
>>> Russel O'Connor recently opined
>>> 
>>> that RBF should be standard treatment of all transactions, rather than as a
>>> transaction opt-in/out. I agree with that. Any configuration in a
>>> transaction that has not been committed into a block yet simply can't be
>>> relied upon. Miners also have a clear incentive to ignore RBF rules and
>>> mine anything that passes consensus. At best opting out of RBF is a weak
>>> defense, and at worst it's simply a false sense of security that is likely
>>> to actively lead to theft events.
>>>
>>> Do we as a community want to support 0-conf payments in any way at this
>>> point? It seems rather silly to make software design decisions to
>>> accommodate 0-conf payments when there are better mechanisms for fast
>>> payments (ie lightning).
>>>
>>> One question I have is: how does software generally inform the user
>>> about 0-conf payment detection? Does software generally tell the user
>>> something along the lines of "This payment has not been finalized yet. All
>>> recipients should wait until the transaction has at least 1 confirmation,
>>> and most recipients should wait for 6 confirmations" ? I think unless we
>>> pressure software to be very explicit about what counts as finality, users
>>> will simply continue to do what they've always done. Rolling out this
>>> policy change over the course of a year or two seems fine, no need to rush.
>>> But I suppose it would depend on how often 0-conf is used in the bitcoin
>>> ecosystem at this point, which I don't have any data on.
>>>
>>> On Tue, Jun 15, 2021 at 10:00 AM Antoine Riard via bitcoin-dev <
>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
 Hi,

 I'm writing to propose deprecation of opt-in RBF in favor of full-RBF
 as the Bitcoin Core's default replacement policy in version 24.0. As a
 reminder, the next release is 22.0, aimed for August 1st, assuming
 agreement is reached, this policy change would enter into deployment phase
 a year from now.

 Even if this replacement policy has been deemed as highly controversial
 a few years ago, ongoing and anticipated changes in the Bitcoin ecosystem
 are motivating this proposal.

 # RBF opt-out as a DoS Vector against 
