Re: [bitcoin-dev] Does Bitcoin require or have an honest majority or a rational one? (re rbf)

2022-10-20 Thread Jeremy Rubin via bitcoin-dev
The difference between honest majority and longest chain is that the
longest chain bug was something acknowledged by Satoshi & patched
https://github.com/bitcoin/bitcoin/commit/40cd0369419323f8d7385950e20342e998c994e1#diff-623e3fd6da1a45222eeec71496747b31R420
.


OTOH, we have more explicit references that the honest majority really
should be thought of as good guys vs bad guys... e.g.
>
> Thanks for bringing up that point.
> I didn't really make that statement as strong as I could have. The
> requirement is that the good guys collectively have more CPU power than any
> single attacker.
> There would be many smaller zombie farms that are not big enough to
> overpower the network, and they could still make money by generating
> bitcoins. The smaller farms are then the "honest nodes". (I need a better
> term than "honest") The more smaller farms resort to generating bitcoins,
> the higher the bar gets to overpower the network, making larger farms also
> too small to overpower it so that they may as well generate bitcoins too.
> According to the "long tail" theory, the small, medium and merely large
> farms put together should add up to a lot more than the biggest zombie farm.
> Even if a bad guy does overpower the network, it's not like he's instantly
> rich. All he can accomplish is to take back money he himself spent, like
> bouncing a check. To exploit it, he would have to buy something from a
> merchant, wait till it ships, then overpower the network and try to take
> his money back. I don't think he could make as much money trying to pull a
> carding scheme like that as he could by generating bitcoins. With a zombie
> farm that big, he could generate more bitcoins than everyone else combined.
> The Bitcoin network might actually reduce spam by diverting zombie farms
> to generating bitcoins instead.
> Satoshi Nakamoto



There is clearly a notion that Satoshi categorizes bad guys and good guys
as, respectively, people interested in double spending and people who aren't.

Sure, Satoshi's writings don't *really* matter in the context of what
Bitcoin is / can be, and I've acknowledged that repeatedly. For you to call
it misleading is more misleading than for me to quote from it!

There's a reason I'm citing it. Not reading the original source material
that pulled the community together leaves one ignorant of why there is
resistance to something like RBF. There are still elements of the community
who expect the rules that good-phenotype node operators run to be the ones
maximally friendly to resolving transactions on a first-seen basis, so that
there aren't double spends. This is a view you can derive directly from
these early writings about what one should expect of node operators.

The burden rests on the community, who has undertaken a project to adopt a
different security model from the original "social contract" generated by
the early writings of Satoshi, to demonstrate why damaging one group's
reliance interest on a property derived from the honest majority assumption
is justified.

I do think the case can be fairly made for full RBF, but if you don't grok
the above, maybe you won't have as much empathy for people who built a
business around particular aspects of the Bitcoin network that they feel
are now being changed. They have every right to be mad about that, to make
their disagreements known, and to argue for why we should preserve these
properties. As someone who wants Bitcoin to be a system which doesn't
arbitrarily change rules based on the whims of others, I think it important
that we can steelman and provide strong cases for why our actions might be
in the wrong, so that our justifications are not only sound, but can be
communicated clearly to all participants in a global value network.

--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Thu, Oct 20, 2022 at 3:28 PM Peter Todd  wrote:

> On Sun, Oct 16, 2022 at 01:35:54PM -0400, Jeremy Rubin via bitcoin-dev
> wrote:
> > The Bitcoin white paper says:
> >
> > The proof-of-work also solves the problem of determining representation
> > in majority decision making. If the majority were based on
> > one-IP-address-one-vote, it could be subverted by anyone able to
> > allocate many IPs. Proof-of-work is essentially one-CPU-one-vote. The
> > majority decision is represented by the longest chain, which has the
> > greatest proof-of-work effort invested in it. If a majority of CPU power
> > is controlled by honest nodes, the honest chain will grow the fastest
> > and outpace any competing chains. To modify a past block, an attacker
> > would have to redo the proof-of-work of the block and all blocks after
> > it and then catch up with and surpass the

Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-10-19 Thread Jeremy Rubin via bitcoin-dev
If they do this to you, and the delta is substantial, can't you sweep all
such abusers with a CPFP transaction that replaces their package and gives
you the original txn?
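
A rough sketch of the fee floor such a sweep must clear, assuming
BIP125-style rules, Bitcoin Core's default 1 sat/vB incremental relay
feerate, and made-up transaction sizes:

```
INCREMENTAL_RELAY_FEERATE = 1  # sat/vB; Bitcoin Core's default, assumed here

def min_sweep_fee(replaced_fees_sat: int, sweep_vsize_vb: int) -> int:
    """Floor on the absolute fee a replacement package (original txn plus
    the merchant's CPFP child) must pay to evict the abuser's package:
    beat the evicted fees, plus relay feerate times its own size."""
    return replaced_fees_sat + INCREMENTAL_RELAY_FEERATE * sweep_vsize_vb

# e.g. the customer's cancellation pays 2,000 sats; the merchant's CPFP
# child is ~150 vB on top of the original ~225 vB payment transaction:
print(min_sweep_fee(2000, 150 + 225))  # 2375 sats
```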

On Wed, Oct 19, 2022, 7:33 AM Sergej Kotliar via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> Chiming in on this thread as I feel like the real dangers of RBF as
> default policy aren't sufficiently elaborated here. It's not only about the
> zero-conf (I'll get to that) but there is an even bigger danger called the
> American call option, which risks endangering the entirety of the BIP21 "Scan
> this QR code with your wallet to buy this product" model that I believe
> we've all come to appreciate. Specifically, in a scenario with high
> volatility and many transactions in the mempools (which is where RBF would
> come in handy), a user can make a low-fee transaction and then wait for
> hours, days or even longer, and see whether BTCUSD moves. If BTCUSD moves
> up, the user can cancel his transaction and make a new, cheaper one. The
> biggest risk in accepting bitcoin payments is in fact not zeroconf risk
> (it's actually quite easily managed), it's FX risk as the merchant must
> commit to a certain BTCUSD rate ahead of time for a purchase. Over time
> some transactions lose money to FX and others earn money - that evens out
> in the end. But if there is an _easily accessible in the wallet_ feature to
> "cancel transaction" that means it will eventually get systematically
> abused. A risk of X% loss on many payments that's easy to systematically
> abuse is more scary than a rare risk of losing 100% of one occasional
> payment. It's already possible to execute this form of abuse with opt-in
> RBF, which may lead to us at some point refusing those payments (even with
> confirmation) or cumbersome UX to work around it, such as crediting the
> bitcoin to a custodial account.
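
A toy Monte Carlo of this option's value, with made-up volatility and
purchase numbers (nothing here is actual merchant data):

```
import random

def option_value_usd(purchase_usd=100.0, daily_vol=0.04, days=2,
                     cancel_cost_usd=0.50, trials=100_000) -> float:
    """Average USD extracted per payment by cancelling via RBF whenever
    BTCUSD has risen enough to beat the cost of cancelling."""
    total = 0.0
    for _ in range(trials):
        move = 1.0
        for _ in range(days):
            move *= 1.0 + random.gauss(0.0, daily_vol)
        # BTC set aside at checkout is now worth purchase_usd * move; if it
        # rose, cancel, re-buy for purchase_usd, and pocket the difference.
        total += max(purchase_usd * (move - 1.0) - cancel_cost_usd, 0.0)
    return total / trials

print(f"~${option_value_usd():.2f} expected per $100 payment")
```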
>
> To compare zeroconf risk with FX risk: I think we've had one incident in 8
> years of operation where a user successfully fooled our server to accept a
> payment that in the end didn't confirm. To successfully fool (non-RBF)
> zeroconf one needs to have access to mining infrastructure and probability
> of success is the % of hash rate controlled. This is simply due to the fact
> that the network currently won't propagate the replacement transaction to
> the miner, which is what's being discussed here. American call option risk
> would however be available to 100% of all users, needs nothing beyond the
> wallet app, and has no cost to the user - only upside.
>
> Bitrefill currently processes 1500-2000 onchain payments every day. For
> us, a world where bitcoin becomes de facto RBF by default, means that we
> would likely turn off the BIP21 model for onchain payments, instruct
> Bitcoin users to use Lightning or deposit onchain BTC to a custodial
> account that we have.
> This option is however not available for your typical
> BTCPayServer/CoinGate/Bitpay/IBEX/OpenNode et al. Would be great to hear
> from other merchants or payment providers how they see this new behavior
> and how they would counteract it.
>
> Currently Lightning is somewhere around 15% of our total bitcoin payments.
> This is very much not nothing, and all of us here want Lightning to grow,
> but I think it warrants a serious discussion on whether we want Lightning
> adoption to go to 100% by means of disabling on-chain commerce. For me
> personally it would be an easier discussion to have when Lightning is at
> 80%+ of all bitcoin transactions. Currently far too many bitcoin users
> simply don't have access to Lightning, and of those that do and hold their
> own keys Muun is the biggest wallet per our data, not least due to their
> ease-of-use which is under threat per the OP. It's hard to assess how many
> users would switch to Lightning in such a scenario, the communication
> around it would be hard. My intuition says that the majority of the current
> 85% of bitcoin users that pay onchain would just not use bitcoin anymore,
> probably shift to an alt. The benefits of Lightning are many and obvious,
> we don't need to limit onchain to make Lightning more appealing. As an
> anecdote, we did experiment with defaulting to bech32 addresses some years
> back. The result was that simply users of the wallets that weren't able to
> pay to bech32 didn't complete the purchase, no support ticket or anything,
> just "it didn't work 路‍♂️" and user moved on. We rolled it back, and later
> implemented a wallet selector to allow modern wallets to pay to bech32
> while other wallets can pay to P2SH. This type of thing is clunky, and
> requires a certain level of scale to be able to do; we certainly wouldn't
> have had the manpower for that when we were starting out. This is why I'm
> cautious about introducing more such clunkiness vectors as they are
> centralizing factors.
>
> I'm well aware of the reason for this policy being suggested and the
> potential pinning attack vector for LN and other 

Re: [bitcoin-dev] Ephemeral Anchors: Fixing V3 Package RBF against package limit pinning

2022-10-18 Thread Jeremy Rubin via bitcoin-dev
Excellent proposal and I agree it does capture much of the spirit of
sponsors w.r.t. how they might be used for V3 protocols.

The only drawbacks I see are that they don't work for lower-tx-version
contracts, so there's still something to be desired there, and that
sweeping the output must be incentive compatible for the miner, or else
they won't enforce it (passing the buck onto future bitcoiners). The
Ephemeral UTXO concept can be a consensus rule (see
https://rubin.io/public/pdfs/multi-txn-contracts.pdf "Intermediate UTXO")
that we add later on in lieu of managing them by incentive, so maybe it's a
cleanup one can punt.

One question I have is: if V3 is designed for lightning, and this is
designed for lightning, is there any sense in requiring these outputs for
V3? That might help with, e.g., the anonymity set, as well as potentially
keep the V3 surface smaller.

On Tue, Oct 18, 2022 at 11:51 AM Greg Sanders via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> > does that effectively mark output B as unspendable once the child gets
> confirmed?
>
> Not at all. It's a normal spend like before, since the parent has been
> confirmed. It's completely unrestricted, not being bound to any
> V3/ephemeral anchor restrictions on size, version, etc.
>
> On Tue, Oct 18, 2022 at 11:47 AM Arik Sosman via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi Greg,
>>
>> Thank you very much for sharing your proposal!
>>
>> I think there's one thing about the second part of your proposal that I'm
>> missing. Specifically, assuming the scenario of a v3 transaction with three
>> outputs, A, B, and the ephemeral anchor OP_TRUE. If a child transaction
>> spends A and OP_TRUE, does that effectively mark output B as unspendable
>> once the child gets confirmed? If so, isn't the implication therefore that
>> to safely spend a transaction with an ephemeral anchor, all outputs must be
>> spent? Thanks!
>>
>> Best,
>> Arik
>>
>> On Tue, Oct 18, 2022, at 6:52 AM, Greg Sanders via bitcoin-dev wrote:
>>
>> Hello Everyone,
>>
>> Following up on the "V3 Transaction" discussion here
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-September/020937.html
>> , I would like to elaborate a bit further on some potential follow-on work
>> that would make pinning severely constrained in many setups.
>>
>> V3 transactions may solve bip125 rule#3 and rule#5 pinning attacks under
>> some constraints[0]. This means that when a replacement is to be made and
>> propagated, it costs the expected amount of fees to do so. This is a great
>> start. What's left in this subset of pinning is *package limit* pinning. In
>> other words, a fee-paying transaction cannot enter the mempool due to the
>> existing mempool package it is being added to already being too large in
>> count or vsize.
>>
>> Zooming into the V3 simplified scenario for sake of discussion, though
>> this problem exists in general today:
>>
>> V3 transactions restrict the package limit of a V3 package to one parent
>> and one child. If the parent transaction includes two outputs which can be
>> immediately spent by separate parties, this allows one party to disallow a
>> spend from the other. In Gloria's proposal for ln-penalty, this is worked
>> around by reducing the number of anchors per commitment transaction to 1,
>> and each version of the commitment transaction has a unique party's key on
>> it. The honest participant can spend their version with their anchor and
>> package RBF the other commitment transaction safely.
>>
>> What if there's only one version of the commitment transaction, such as
>> in other protocols like duplex payment channels, eltoo? What about multi
>> party payments?
>>
>> In the package RBF proposal, if the parent transaction is identical to an
>> existing transaction in the mempool, the parent will be detected and
>> removed from the package proposal. You are then left with a single V3 child
>> transaction, which is then proposed for entry into the mempool. In the case
>> of another parent output already being spent, this is simply rejected,
>> regardless of feerate of the new child.
>>
>> I have two proposed solutions, of which I strongly prefer the latter:
>>
>> 1) Expand a carveout for "sibling eviction", where if the new child is
>> paying "enough" to bump spends from the same parent, it knocks its sibling
>> out of the mempool and takes the one child slot. This would solve it, but
>> is a new eviction paradigm that would need to be carefully worked through.
>>
>> 2) Ephemeral Anchors (my real policy-only proposal)
>>
>> Ephemeral Anchors is a term meaning an output that is watermarked as one
>> that MUST be spent in a V3 package. We mark this anchor by making it the
>> bare script `OP_TRUE`, and of course we make these outputs standard to
>> relay and spend with empty witness data.
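
A minimal sketch of what such an anchor output looks like on the wire, with
hand-rolled serialization for illustration (value shown as zero here; the
anchor can instead carry the value a CPFP child turns into fee):

```
OP_TRUE = 0x51

def ephemeral_anchor_txout() -> bytes:
    """Serialize a zero-value output whose scriptPubKey is the single
    byte OP_TRUE, matching the bare-script anchor described above."""
    value = (0).to_bytes(8, "little")          # 8-byte little-endian amount
    script = bytes([OP_TRUE])                  # bare script: just OP_TRUE
    return value + len(script).to_bytes(1, "little") + script

print(ephemeral_anchor_txout().hex())  # 00000000000000000151
```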
>>
>> Also as a simplifying assumption, we require the parent transaction with
>> such an output to be 0-fee. This makes mempool 

Re: [bitcoin-dev] Does Bitcoin require or have an honest majority or a rational one? (re rbf)

2022-10-18 Thread Jeremy Rubin via bitcoin-dev
I think the issue with the following:

I still think it is misguided to think that the "honest" (i.e. rule
> following) majority is to just be accepted as an axiom and if it is
> violated, well, then sorry.  The rules need to be incentive compatible for
> the system to be functional.  The honest majority is only considered an
> assumption because even if following the rules were clearly the 100%
> dominant strategy, this doesn't prove that the majority is honest, since
> mathematics cannot say what is happening in the real world at any given
> time.  Still, we must have a reason to think that the majority would be
> honest, and that reasoning should come from an argument that the rule set
> is incentive compatible.


is, epistemically, that even within the game in which you prove the
dominant strategy, you can't be certain you've captured the actual
incentives of all players (except maybe through clever use of exogenous
parameters, which reduces to the same thing as % honest). For example, you
would need to capture the existence of large hegemonic governments
defending their legacy currencies by attacking bitcoin.


I think we may be talking past each other: decreasing the assumptions that
Bitcoin rests on, to make it more secure than it is as defined in the
whitepaper, is an exercise of tremendous value. My point is that those
things are aspirational (aspirations that perhaps we should absolutely
achieve?), but to the extent that we need to fix things like the fee
market, selfish mining, mind-the-gap, etc., those are modifications to make
Bitcoin secure (or more fair, which is perhaps another way to look at it)
in the presence of deviations from a hypothesized "incentive compatible
Bitcoin", which is a different thing than "whitepaper Bitcoin". I largely
fall in the camp -- as evidenced by some past conversations I won't rehash
-- that all of Bitcoin should be incentive compatible, and that we should
fix it if it is not. But from those conversations I also learned that there
are large swaths of the community who don't share that value, or only share
it up to a point, and who do feel comfortable resting on honest-majority
assumptions at one layer of the stack or another. And I think that prior /
axiom is a pretty central one to debug or comprehend when dealing with, as
is happening now, a fight over something that seems obviously not incentive
compatible.

--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Tue, Oct 18, 2022 at 10:30 AM Russell O'Connor 
wrote:

> On Tue, Oct 18, 2022 at 9:07 AM Jeremy Rubin via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>>
>> However, what *is* important about what Satoshi wrote is that it is sort
>> of the "social contract" of what Bitcoin is that we can all sort of
>> minimally agree to. This makes it clear, when we try to describe Bitcoin
>> with differing assumptions than in the whitepaper, what the changes are and
>> why we think the system might support those claims. But if we can't prove
>> the new description sound, such as showing tip mining to be rational in a
>> fully adversarial model, it doesn't mean Bitcoin doesn't work as promised,
>> since all that was promised originally is functioning under an honest
>> majority. Caveat Emptor!
>>
>
> I still think it is misguided to think that the "honest" (i.e. rule
> following) majority is to just be accepted as an axiom and if it is
> violated, well, then sorry.  The rules need to be incentive compatible for
> the system to be functional.  The honest majority is only considered an
> assumption because even if following the rules were clearly the 100%
> dominant strategy, this doesn't prove that the majority is honest, since
> mathematics cannot say what is happening in the real world at any given
> time.  Still, we must have a reason to think that the majority would be
> honest, and that reasoning should come from an argument that the rule set
> is incentive compatible.
>
> The stability of mining, i.e. the incentives to mine on the most work
> chain, is actually a huge concern, especially in a future low subsidy
> environment.  There is actually much fretting about this issue, and rightly
> so.  We don't actually know that Bitcoin can function in a low subsidy
> environment because we have never tested it.  Bitcoin could still end up a
> failure if that doesn't work out.  My current understanding/guess is that
> with a "thick mempool" (that is lots of transactions without large gaps in
> fee rates between them) and/or miners rationally leaving behind
> transactions to encourage mining on their block (after all it is in a
> miner's own interest not to have their block orphaned), that mining will be
> stable.  But I don't know this 

Re: [bitcoin-dev] Does Bitcoin require or have an honest majority or a rational one? (re rbf)

2022-10-17 Thread Jeremy Rubin via bitcoin-dev
Building on the most work chain is perhaps not rational in many normal
circumstances that can come up today under the stated reference strategy:

1) Take highest paying transactions that fit
2) Mine on tips

E.g., suppose:

Block N: Fees = 10, reward = 1

Mempool: Fees = 2

Mining block N+1 on this mempool yields a reward of 2 + 1 = 3; reorging
block N yields up to 10 + 1 + c (c < 2, where c is the fee from the extra
transactions that fit). Suppose that instead of taking the maximum you take
a reward of 8, deliberately leaving 3 + c on the table.

If you assume all other miners are tip miners, and there are two
conflicting tips, they should pick the one that is more profitable for
them, which is the new one you made as a non-tip miner, since you "shared"
some fee.
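
Spelling out that arithmetic (with c assumed to be 1.5 just to make it
concrete):

```
subsidy = 1.0
block_n_fees = 10.0   # fees captured by the current tip, block N
mempool_fees = 2.0    # all that's left for an honest block N+1
c = 1.5               # extra mempool fee that still fits when re-mining N (c < 2)

tip_reward = mempool_fees + subsidy        # 3.0 for honest tip mining
max_snipe = block_n_fees + subsidy + c     # up to 12.5 by re-mining block N
your_reward = 8.0                          # take 8, deliberately leaving...
left_for_others = max_snipe - your_reward  # ...3 + c to lure tip miners over

assert your_reward > tip_reward            # sniping dominates in this regime
print(tip_reward, your_reward, left_for_others)
```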

You aren't particularly more likely to remine block N or N+1, before
someone builds on it, as opposed to deeper reorgs (which require larger
incentive).


However, as many have pointed out, perhaps not following the simple "honest
tip mining" strategy is bad for bitcoin, so maybe we should expect it not
to happen often? Or perhaps other strategies will emerge around selecting
transactions so that the next M blocks have a similar fee profile, as
opposed to greedily picking for the next block.


--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Sun, Oct 16, 2022 at 3:03 PM  wrote:

> The proof-of-work also solves the problem of determining
> representation in majority decision
> making. If the majority were based on one-IP-address-one-vote, it
> could be subverted by anyone
> able to allocate many IPs. Proof-of-work is essentially
> one-CPU-one-vote. The majority
> decision is represented by the longest chain, which has the greatest
> proof-of-work effort invested
> in it. If a majority of CPU power is controlled by honest nodes, the
> honest chain will grow the
> fastest and outpace any competing chains. To modify a past block, an
> attacker would have to
> redo the proof-of-work of the block and all blocks after it and then
> catch up with and surpass the
> work of the honest nodes. We will show later that the probability of a
> slower attacker catching up
> diminishes exponentially as subsequent blocks are added.
>
>
> It's interesting that Nash Equilibrium isn't mentioned here.  Since each
> miner has the option to either contribute to the longest chain or not, even
> if the miners know what strategy the other miners will use, they still
> wouldn't change their decision to contribute to the majority.
>
>
> For example, if I run a shop that takes rain checks, but I sell an
> item to a higher bidder who didn't have a hold on the item, that is
> not honest, but it may be selfish profit maximizing.
>
>
> It would be honest if the store policy said ahead of time they are allowed
> to sell rain checks for more in such an occurrence.  Although this is a
> good example of the difference between honest and rational.  I think this
> means it's not a Nash Equilibrium if we needed to rely on the store owner
> to be honest.
>
>
> Satoshi said an honest majority is required for the chain to be
> extended. Honest is not really defined though. Honesty, in my
> definition, is that you follow a pre specified rule, rational or not.
>
>
> My take is that "rational" is probably a better word than honest.  In
> terms of a Nash Equilibrium, each participant is simply trying to maximize
> their outcome and honesty doesn't matter (only that participants are
> rational).
>
>
> It seems a lot of the RBF controversy is that Protocol developers have
> aspired to make the honest behavior also be the rational behavior.
> This is maybe a good idea because, in theory, if the honest behavior
> is rational then we can make a weaker assumption of selfishness
> maximizing a parameter.
>
>
> I'm curious, can RBF be described by a Nash Equilibrium?  If yes, then
> it also shouldn't matter if participants are honest?
>
>
> Overall, it might be nice to more tightly document what bitcoins
> assumptions are in practice and what those assumptions do in terms of
> properties of Bitcoin, as well as pathways to weakening the
> assumptions without compromising the behaviors users expect the
> network to have.  An "extended white paper" if you will.
>
>
> White paper 1.1 :D
>
>
> A last reflection is that Bitcoin is specified with an honest majority
> assumption, but also has a rational dishonest minority assumption over
> both endogenous (rewards) and exogenous (electricity) costs. Satoshi
> did not suggest, at least as I read it, that Bitcoin works with a
> rational majority assumption. (If anyone thinks these three are
> similar properties you can make some trivial counterexamples)
>
>
> My take is the opposite unless I'm missing something.  Participants are
> always in

Re: [bitcoin-dev] Does Bitcoin require or have an honest majority or a rational one? (re rbf)

2022-10-17 Thread Jeremy Rubin via bitcoin-dev
by, for example, a requirement that users delete their signing
> keys.  If Bitcoin relied on users acting against their own interest to
> function, I doubt Bitcoin would be in operation today.  Certainly I would
> have no interest in it.
>
> While it doesn't really matter, I do believe Satoshi was also aware that
> the rules cannot just be arbitrary, with no incentive to follow them.
> After all, he did note that it was designed to be in the miner's self
> interest to build upon the longest (most work) chain, even if that point
> ended up being rather involved.  That is to say, I don't think that an
> "honest" (i.e rule following) majority is meant to be taken as an
> assumption, rather it is something that ought to be a consequence of the
> design.
>
> Anyhow, the above is simply a comment on "honest majority", and I'm not
> trying to make a specific claim about RBF here, though I do have my
> opinions and I do see how it is related.
>
> On Sun, Oct 16, 2022 at 1:36 PM Jeremy Rubin via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> The Bitcoin white paper says:
>>
>> The proof-of-work also solves the problem of determining representation
>> in majority decision
>> making. If the majority were based on one-IP-address-one-vote, it could
>> be subverted by anyone
>> able to allocate many IPs. Proof-of-work is essentially one-CPU-one-vote.
>> The majority
>> decision is represented by the longest chain, which has the greatest
>> proof-of-work effort invested
>> in it. If a majority of CPU power is controlled by honest nodes, the
>> honest chain will grow the
>> fastest and outpace any competing chains. To modify a past block, an
>> attacker would have to
>> redo the proof-of-work of the block and all blocks after it and then
>> catch up with and surpass the
>> work of the honest nodes. We will show later that the probability of a
>> slower attacker catching up
>> diminishes exponentially as subsequent blocks are added.
>>
>>
>> Thus, Satoshi (who doesn't really matter anyway, I guess?) claimed that
>> for Bitcoin to function properly you need a majority of honest nodes.
>>
>> There are multiple behaviors one can describe as honest, and economically
>> rational or optimizing behavior is not necessarily honest.
>>
>> For example, if I run a shop that takes rain checks, but I sell an item
>> to a higher bidder who didn't have a hold on the item, that is not honest,
>> but it may be selfish profit maximizing.
>>
>> Satoshi said an honest majority is required for the chain to be extended.
>> Honest is not really defined, though. Honesty, in my definition, is that
>> you follow a pre-specified rule, rational or not.
>>
>> It seems a lot of the RBF controversy is that Protocol developers have
>> aspired to make the honest behavior also be the rational behavior. This is
>> maybe a good idea because, in theory, if the honest behavior is rational,
>> then we can make the weaker assumption of selfishness (maximizing a parameter).
>>
>> However, Satoshi did not particularly bound what aspects of honesty are
>> important for the network, because there isn't a spec defining exactly what
>> is honest or not. And also as soon as people are honest, you can rely on
>> that assumption for good effect.
>>
>> And sometimes, defining an honest behavior can be creating a higher
>> utility system because most people are "law abiding citizens" who might not
>> be short term rational. For example, one might expect that miners would be
>> interested in making sure lightning closes are "accurate" because
>> increasing the utility of lightning is good for Bitcoin, even if it is
>> irrational.
>>
>> It seems that the NoRBF crowd want to rely on an honest majority
>> assumption where the honest behavior is not doing replacement if not
>> requested. This is really not much different than trying to close lightning
>> channels "the right way".
>>
>> However, where it may be different, is that even in the presence of
>> honest majority, the safety of 0conf isn't assured given the potential of
>> race conditions in the mempool. Therefore it's not clear to me that 0conf
>> working well is something you can derive from the Honest Majority Assumption
>> (where honest includes first seen).
>>
>>
>> Overall, it might be nice to more tightly document what bitcoins
>> assumptions are in practice and what those assumptions do in terms of
>> properties of Bitcoin, as well as pathways to weakening the assumptions
>> without compromising the behav

[bitcoin-dev] Does Bitcoin require or have an honest majority or a rational one? (re rbf)

2022-10-16 Thread Jeremy Rubin via bitcoin-dev
The Bitcoin white paper says:

The proof-of-work also solves the problem of determining representation in
majority decision making. If the majority were based on
one-IP-address-one-vote, it could be subverted by anyone able to allocate
many IPs. Proof-of-work is essentially one-CPU-one-vote. The majority
decision is represented by the longest chain, which has the greatest
proof-of-work effort invested in it. If a majority of CPU power is
controlled by honest nodes, the honest chain will grow the fastest and
outpace any competing chains. To modify a past block, an attacker would
have to redo the proof-of-work of the block and all blocks after it and
then catch up with and surpass the work of the honest nodes. We will show
later that the probability of a slower attacker catching up diminishes
exponentially as subsequent blocks are added.


Thus, Satoshi (who doesn't really matter anyway, I guess?) claimed that for
Bitcoin to function properly you need a majority of honest nodes.

There are multiple behaviors one can describe as honest, and economically
rational or optimizing behavior is not necessarily honest.

For example, if I run a shop that takes rain checks, but I sell an item to
a higher bidder who didn't have a hold on the item, that is not honest, but
it may be selfish profit maximizing.

Satoshi said an honest majority is required for the chain to be extended.
Honest is not really defined, though. Honesty, in my definition, is that
you follow a pre-specified rule, rational or not.

It seems a lot of the RBF controversy is that Protocol developers have
aspired to make the honest behavior also be the rational behavior. This is
maybe a good idea because, in theory, if the honest behavior is rational,
then we can make the weaker assumption of selfishness (maximizing a parameter).

However, Satoshi did not particularly bound what aspects of honesty are
important for the network, because there isn't a spec defining exactly what
is honest or not. And also as soon as people are honest, you can rely on
that assumption for good effect.

And sometimes, defining an honest behavior can be creating a higher utility
system because most people are "law abiding citizens" who might not be
short term rational. For example, one might expect that miners would be
interested in making sure lightning closes are "accurate" because
increasing the utility of lightning is good for Bitcoin, even if it is
irrational.

It seems that the NoRBF crowd want to rely on an honest majority assumption
where the honest behavior is not doing replacement if not requested. This
is really not much different than trying to close lightning channels "the
right way".

However, where it may be different is that, even in the presence of an
honest majority, the safety of 0conf isn't assured given the potential for
race conditions in the mempool. Therefore it's not clear to me that 0conf
working well is something you can derive from the Honest Majority
Assumption (where honest includes first-seen).
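
A toy illustration: even with every node honestly enforcing first-seen, two
conflicting spends broadcast near-simultaneously partition the network by
arrival order, so "first seen" is not a global fact:

```
import random

def merchant_loses_race(n_nodes=1001, trials=2000) -> float:
    """Fraction of trials where the merchant's tx is NOT what most
    first-seen nodes keep, given a simultaneous conflicting broadcast."""
    losses = 0
    for _ in range(trials):
        # Each honest node keeps whichever conflicting tx reached it first.
        sees_merchant_first = sum(random.random() < 0.5 for _ in range(n_nodes))
        if sees_merchant_first <= n_nodes // 2:
            losses += 1
    return losses / trials

print(merchant_loses_race())  # ~0.5: first-seen gives no global ordering
```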


Overall, it might be nice to more tightly document what bitcoins
assumptions are in practice and what those assumptions do in terms of
properties of Bitcoin, as well as pathways to weakening the assumptions
without compromising the behaviors users expect the network to have.  An
"extended white paper" if you will.


It's somewhat clear to me that we shouldn't weaken assumptions that seem
local to one subsystem of Bitcoin if doing so ends up destabilizing another
subsystem. In particular, things that decrease "transaction utility" for
end users decrease the demand for transactions, which hurts the fee
market's longer-term viability, even if we feel good about turning an
honest policy assumption into a self-interested policy assumption.

A last reflection is that Bitcoin is specified with an honest majority
assumption, but also has a rational dishonest minority assumption over both
endogenous (rewards) and exogenous (electricity) costs. Satoshi did not
suggest, at least as I read it, that Bitcoin works with a rational majority
assumption. (If anyone thinks these three are similar properties, you can
make some trivial counterexamples.)


Cheers,

Jeremy


[bitcoin-dev] Spookchains: Drivechain Analog with One-Time Trusted Setup & APO

2022-09-14 Thread Jeremy Rubin via bitcoin-dev
Also available on my blog with nicer formatting:
https://rubin.io/bitcoin/2022/09/14/drivechain-apo/

This post draws heavily from Zmnscpxj's fantastic post showing how to
make drivechains with recursive covenants. In this post, I will show
tricks that can accomplish something similar using ANYPREVOUT with a
one-time trusted setup ceremony.

This post presents general techniques that could be applied to many
different types of covenant.

# Peano Counters

The first component we need to build is a Peano counter graph. Instead
of using sha-256, as in Zmnscpxj's scheme, we will use keys and build a
simple 1-to-5 counter that has inc / dec.

Assume keys K1...K5, and a point NUMS which is e.g.
HashToCurve("Spookchains").

Generate scripts as follows:

```
<1 || K1> CHECKSIG
...
<1 || K5> CHECKSIG
```

Now generate 2 signatures under Ki with flags `SIGHASH_SINGLE |
SIGHASH_ANYONECANPAY | SIGHASH_ANYPREVOUT`.


## Rule Increment
For each Ki, when `i < 5`, create a signature that covers a
transaction described as:

```
Amount: 1 satoshi
Key: Tr(NUMS, {<1 || K{i+1}> CHECKSIG})
```

## Rule Decrement
For each Ki, when `i > 1` The second signature should cover:
```
Amount: 1 satoshi
Key: Tr(NUMS, {<1 || K{i-1}> CHECKSIG})
```



_Are these really Peano?_ Sort of. While a traditional Peano numeral
is defined as a structural type, e.g. `Succ(Succ(Zero))`, here we
define them via an Inc / Dec transaction operator, and we have to
explicitly bound these Peano numbers since we need a unique key per
element. They're at least spiritually similar.

## Instantiation
Publish a booklet of all the signatures for the Increment and
Decrement rules.

Honest parties should destroy the secret key sets `k`.


To create a counter, simply spend to output C:

```
Amount: 1 satoshi
Key: Tr(NUMS, {<1 || K1> CHECKSIG})
```


The signature from K1 can be bound to C to 'transition' it to (+1):

```
Amount: 1 satoshi
Key: Tr(NUMS, {<1 || K2> CHECKSIG})
```

Which can then transition to (+1):

```
Amount: 1 satoshi
Key: Tr(NUMS, {<1 || K3> CHECKSIG})
```

Which can then transition (-1) to:

```
Amount: 1 satoshi
Key: Tr(NUMS, {<1 || K2> CHECKSIG})
```

This can repeat indefinitely.


We can generalize this technique from `1...5` to `1...N`.
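
A toy state machine mirroring the counter, where state i stands for the
output `Tr(NUMS, {<1 || Ki> CHECKSIG})` and the published signatures permit
exactly the +1 / -1 moves:

```
N = 5  # counter states 1..5, one key K_i per state

def transition(i: int, op: str) -> int:
    """Apply a published increment/decrement signature, if one exists."""
    if op == "inc" and i < N:
        return i + 1   # K_i's signature binds only to the K_{i+1} output
    if op == "dec" and i > 1:
        return i - 1   # likewise for the K_{i-1} output
    raise ValueError("no presigned transition exists for this move")

state = 1
for op in ("inc", "inc", "dec"):   # the walkthrough above: K1 -> K2 -> K3 -> K2
    state = transition(state, op)
print(state)  # 2
```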



# Handling Arbitrary Deposits / Withdrawals


One issue with the design presented previously is that it does not
handle arbitrary deposits well.

One simple way to handle this is to instantiate the protocol for every
amount you'd like to support.

This is not particularly efficient and requires a lot of storage
space.

Alternatively, divide (using base 2 or another base) the deposit
amount into a counter utxo per bit.

For each bit, instead of creating outputs with 1 satoshi, create
outputs with 2^i satoshis.

Instead of using keys `K1...KN`, create keys `K^i_j`, where i indexes
the amount (2^i sats) and j the counter position. Multiple keys are
required per amount, otherwise the signatures would be valid for
burning funds.
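
A sketch of the per-bit decomposition, purely illustrative:

```
def per_bit_counters(deposit_sats: int) -> list[int]:
    """Denominations 2^i whose counter instances (keys K^i_j) a deposit
    of `deposit_sats` would be split across, one UTXO per set bit."""
    return [1 << i for i in range(deposit_sats.bit_length())
            if deposit_sats & (1 << i)]

print(per_bit_counters(1337))  # [1, 8, 16, 32, 256, 1024]
```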

## Splitting and Joining

For each `K^i_j`, it may also be desirable to allow splitting or
joining.

Splitting can be accomplished by pre-signing, for every `K^i_j`, where
`i!=0`, with `SIGHASH_ALL | SIGHASH_ANYPREVOUT`:

```
Input: 2^i sats with key K^i_j
Outputs:
- 2^i-1 sats to key K^{i-1}_j
- 2^i-1 sats to key K^{i-1}_j
```

Joining can be accomplished by pre-signing, for every `K^i_j`, where
`i!=MAX`, with `SIGHASH_ALL | SIGHASH_ANYPREVOUT`:

```
Inputs:
- 2^i sats with key K^i_j
- 2^i sats with key K^i_j
Outputs:
- 2^i+1 sats to key K^{i+1}_j
```

N.B.: Joining allows third parties to deposit money externally that is
not part of the covenant.


The splitting and joining behavior means that spookchain operators
would be empowered to consolidate UTXOs to a smaller number, while
allowing arbitrary deposits.


# One Vote Per Block

To enforce that only one vote is allowed per block mined, ensure that
all signatures set the input nSequence to 1 block. No CSV is required
because nSequence is already committed by the signatures.

# Terminal States / Thresholds

When a counter reaches the Nth state, it represents a certain amount
of accumulated work over a period where progress was agreed on for
some outcome.

There should be some viable state transition at this point.

One solution would be to have the money at this point sent to an
`OP_TRUE` output, which the miner incrementing that state is
responsible for following the rules of the spookchain. Or, it could be
specified to be some administrator key / federation for convenience,
with a N block timeout that degrades it to fewer signers (eventually
0) if the federation is dead to allow recovery.

This would look like, from any `K^i_j`, a signature for a transaction
putting it into an `OP_TRUE` and immediately spending it. Other
spookchain miners would be expected to orphan that miner otherwise.


# Open States / Proposals

From a state `K^i_1`, 

Re: [bitcoin-dev] More uses for CTV

2022-08-19 Thread Jeremy Rubin via bitcoin-dev
Presigned transactions have to use an N-of-N (2-of-2 for LN, more for
pools) multisignature which is computed over the network, whereas
in-script commitments can be done with 1 key that is a non-secret point
(e.g., just the generator, I think, works).

For large protocol trees (e.g., of size N) the savings can be substantial!
It also reduces the amount of state that needs to be stored since the
in-script sigs can be deterministic.

Rene has some nice work demonstrating that latency in generating state
transitions has a very substantial cost to the efficiency of routing;
maybe he can chime in further.


You can also do a "back-filling" where you get the best of both: after you
commit to the quick-to-generate in-script version, you lazily backfill with
an equivalent p2wpkh version. If you have a channel in "burst mode", you
can cancel the slower-to-generate p2wpkh version when newer states come in
(a data hazard / bypass, in pipelining terms).

With respect to mining pools and size constraints,
https://rubin.io/bitcoin/2021/12/12/advent-15/ shows how paying into
batches of channels can be used to trustlessly compress payouts without
custodial relationship.


--
@JeremyRubin 

On Fri, Aug 19, 2022 at 11:53 AM David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On 2022-08-19 06:33, James O'Beirne via bitcoin-dev wrote:
> > Multiple parties could
> > trustlessly collaborate to settle into a single CTV output using
> > SIGHASH_ALL | ANYONECANPAY. This requires a level of interaction
> > similar to coinjoins.
>
> Just to make sure I understand, is the reason for SH_ALL|SH_ACP so that
> any of the parties can subsequently RBF fee bump the transaction?
>
> > Conceptually, CTV is the most parsimonious way to do such a scheme,
> > since you can't really get smaller than a SHA256 commitment
>
> What's the advantage of CTV here compared to presigned transactions?  If
> multiple parties need to interact to cooperatively sign a transaction,
> no significant overhead is added by having them simultaneously sign a
> second transaction that spends from the output of the first transaction.
>   Presigned transactions actually have two small benefits I can think of:
>
> 1. The payment from the first transaction (containing the spends from
> the channel setup transactions) can be sent to a P2WPKH output, which is
> actually smaller than a SHA256 commitment.  Though this probably does
> require an extra round of communication for commit-and-reveal to prevent
> a collision attack on the P2WPKH address.[1]
>
> 2. Having the first transaction pay either a P2WPKH or bech32m output
> and the second transaction spend from that UTXO may blend in better with
> other transactions, enhancing privacy.  This advantage probably isn't
> compatible with SH_ALL|SH_ACP, though, and it would require other
> privacy upgrades to LN.
>
> > direct-from-coinbase payouts seem like a
> > desirable feature which avoids some trust in pools.
> > [...]
> > If the payout was instead a single OP_CTV output, an arbitrary number
> > of pool participants could be paid out "atomically" within a single
> > coinbase.  One limitation is
> > the size of the coinbase outputs owed to constituent miners; this
> > limits the number of participants in the pool.
>
> I'm confused by this.  What is the size limitation on coinbase outputs,
> how does it limit the number of participants in a pool, and how does CTV
> fix that?
>
> Thanks,
>
> -Dave
>
> [1]
>
> https://bitcoinops.org/en/newsletters/2020/06/24/#reminder-about-collision-attack-risks-on-two-party-ecdsa


Re: [bitcoin-dev] [PROPOSAL] OP_TX: generalized covenants reduced to OP_CHECKTEMPLATEVERIFY

2022-06-24 Thread Jeremy Rubin via bitcoin-dev
I can't find a link, but I've discussed this before somewhere a while
ago... perhaps one of the IRC meetings? I'll see if I can't turn something
up.

The main reason not to was validation performance -- we already usually
compute the flat hash, so the merkle tree would be extra work for just CTV.

However, from an API perspective, I agree that a merkle tree could be
superior for CTV. It does depend on the use case. If you have just, say, 3
outputs, a merkle tree probably just 'gets in the way' compared to the
concatenation. It is only when you have many outputs and you need to do a
random-index insertion that it adds value. In many applications, you might
be biased toward editing the last output (e.g., change outputs?), and then
SHASTREAM would allow you to edit the tail in O(1).
Best,

Jeremy

On Thu, Jun 23, 2022 at 11:06 PM Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Tue, May 10, 2022 at 08:05:54PM +0930, Rusty Russell via bitcoin-dev
> wrote:
>
> > OPTX_SEPARATELY: treat fields separately (vs concatenating)
> > OPTX_UNHASHED: push on the stack without hashing (vs SHA256 before push)
>
> > OPTX_SELECT_OUTPUT_AMOUNT32x2*: sats out, as a high-low u31 pair
> > OPTX_SELECT_OUTPUT_SCRIPTPUBKEY*: output scriptpubkey
>
> Doing random pie-in-the-sky contract design, I had a case where I
> wanted to be able to say "update the CTV hash from commiting to outputs
> [A,B,C,D,E] to outputs [A,B,X,D,E]". The approach above and the one CTV
> takes are somewhat awkward for that:
>
>  * you have to include all of A,B,D,E in order to generate both hashes,
>which seems less efficient than a merkle path
>
>  * proving that you're taking an output in its entirety, rather than,
>say, the last 12 bytes of C and the first 30 bytes of D, seems hard.
>Again, it seems like a merkle path would be better?
>
> This is more of an upgradability concern I think -- ie, only relevant if
> additional features like CAT or TLUV or similar are added; but both OP_TX
> and CTV seem to be trying to take upgradability into account in advance,
> so I thought this was worth raising.
>
> Cheers,
> aj


[bitcoin-dev] CTV Meeting #9 Reminder + Agenda (Tuesday, May 17th, 12:00 PT / 7PM UTC)

2022-05-16 Thread Jeremy Rubin via bitcoin-dev
Developers,

A reminder that the regularly scheduled CTV Meeting is tomorrow at 12:00
Pacific Time in ##ctv-bip-review in Libera.

In terms of agenda, we'll keep it as an open forum for discussion guided by
the participants. We'll try to go over, minimally:

- Rusty's OP_TX
- Adding OP_CAT / CSFS

Feel free to propose meeting topics in the IRC in advance of the meeting to
aid in allocating time to things that you would like to have discussed.

Best,

Jeremy

--
@JeremyRubin 


Re: [bitcoin-dev] Adding SIGHASH to TXID

2022-05-07 Thread Jeremy Rubin via bitcoin-dev
Have you seen the inherited ID proposal from John Law on this list?

It's a pretty thorough treatment of this type of proposal, curious if you
think it overlaps what you had in mind?

Honestly, I've yet to fully load in exactly how the applications of it
work, but I'd be interested to hear your thoughts.

On Sat, May 7, 2022, 4:55 AM vjudeu via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> For now, we have txid:vout as a previous transaction output. This means
> that to have a stable TXID, we are forced to use SIGHASH_ALL somewhere,
> just to prevent the transaction modifications that can happen when inputs
> and outputs are added. But it seems that new sighashes could be far more
> powerful than we expected: it is technically possible to do more than just
> remove the previous transaction output, as SIGHASH_ANYPREVOUT does. We can
> do it better: we could decide how to calculate this txid at all!
>
> So, something like SIGHASH_PREVOUT_NONE would be similar to SIGHASH_NONE
> (applied to the previous transaction, taken from txid). To get
> SIGHASH_ANYPREVOUT, we would need to remove absolutely everything; I don't
> know of any such sighash, because even SIGHASH_NONE | SIGHASH_ANYONECANPAY
> will commit to at least some fields, for example the locktime. But if we
> introduce SIGHASH_PREVOUT_XYZ flags for all existing sighashes, we would
> have this:
>
> SIGHASH_PREVOUT_NONE
> SIGHASH_PREVOUT_SINGLE
> SIGHASH_PREVOUT_ALL
> SIGHASH_PREVOUT_ANYONECANPAY
>
> Then, the procedure is as follows: we use txid:vout to find our previous
> transaction. Then, we apply those sighashes to this previous transaction,
> to form a new txid, that will be checked during every OP_CHECKSIG-based
> opcode. In this way, our txid:vout is used just to do transaction lookup,
> after that, sighashes can be applied to the previous transaction, so our
> txid could remain stable, even if someone will add some inputs and outputs.
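
A sketch of that lookup-then-rehash idea; the field handling here is
illustrative, not an actual serialization spec:

```
from hashlib import sha256

def masked_prev_id(prev_tx: dict, vout: int, mode: str) -> bytes:
    """Which previous-tx fields feed the 'stable txid' under each
    hypothetical SIGHASH_PREVOUT_* mode (toy encoding)."""
    parts = [str(i).encode() for i in prev_tx["inputs"]]   # inputs always commit
    if mode == "PREVOUT_ALL":
        parts += [str(o).encode() for o in prev_tx["outputs"]]
    elif mode == "PREVOUT_SINGLE":
        parts.append(str(prev_tx["outputs"][vout]).encode())
    elif mode == "PREVOUT_NONE":
        pass                                               # no outputs commit
    return sha256(b"|".join(parts)).digest()

tx = {"inputs": ["in0"], "outputs": [5000, 7000]}
stable = masked_prev_id(tx, 0, "PREVOUT_SINGLE")
tx["outputs"].append(9000)   # someone adds an output to the previous tx...
assert masked_prev_id(tx, 0, "PREVOUT_SINGLE") == stable   # ...id unchanged
```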
>
> By default, we could use SIGHASH_PREVOUT_ALL, that would mean our
> txid:vout remains unchanged. Then, SIGHASH_PREVOUT_SINGLE would obviously
> mean, that we want to commit only to this particular previous transaction
> output. That would allow adding any new outputs to the previous
> transaction, without affecting our replaced txid, but also without blindly
> accepting any txid, because some data of the previous transaction would be
> still hashed.
>
> Then, SIGHASH_PREVOUT_NONE is an interesting case, because it would mean
> that no outputs of the previous transaction are checked. But still, the
> inputs will be! That would mean: "I don't care about in-between addresses,
> but I care that it was initiated from these inputs". In this case, it is
> possible to choose some input without those flags, and then apply
> SIGHASH_PREVOUT_NONE many times, to make sure that everything started from
> that input, but everything in-between can be anything.
>
> All of those three SIGHASH_PREVOUT_XYZ flags could be combined with
> SIGHASH_PREVOUT_ANYONECANPAY. That would mean all inputs of the previous
> transaction are discarded, except from the input number matching "vout". Or
> we could just use SIGHASH_PREVOUT_ANY instead and discard all inputs from
> that previous transaction, that could also be combined with other sighashes.
>
> So, to sum up, by applying sighashes to the previous transaction, instead
> of allowing for any transaction, we could still have some control of our
> txid, and I think it could be better than just saying "give me any txid, I
> will accept that". I think in most cases we don't want to allow any txid:
> we want to only "control the flow", just to make sure that our signatures
> will sign what we want and will not be invalidated by changing some
> transaction inputs and outputs, unrelated to the currently-checked
> signature.


Re: [bitcoin-dev] ANYPREVOUT in place of CTV

2022-05-03 Thread Jeremy Rubin via bitcoin-dev
Antoine,

One high level reason to not prefer APO is that it gets 'dangerously close'
to fully recursive covenants.

E.g., just by tweaking APO to use a Schnorr signature without PK
commitment, Pubkey Recovery would be possible, and fully recursive
covenants could be done.

Short of that type of modification, you can still do a "trusted setup" key
deletion covenant with APO and have a fully recursive covenant set up. E.g.

<1 || N-N MuSig> APO

where the N-N MuSig pregenerates a signature of a transaction that commits
to an output with itself, e.g., using SIGHASH_SINGLE.

By itself, this is not super useful, but does create the type of thing that
people might worry about with a recursive covenant since after
initialization it is autonomous.

One use case for this might be, for example, a spacechain backbone that
infinitely iterates, so it isn't entirely useless.
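
A conceptual sketch, with no real cryptography, of how that pregenerated
signature loops after setup:

```
SETUP_SIG = object()   # stands in for the pregenerated SIGHASH_SINGLE|APO sig

COVENANT = "<1 || N-of-N MuSig> CHECKSIG"   # the script from the post

def step(utxo_script: str, sig) -> str:
    """The replayed signature only validates against the covenant script
    and only authorizes an output carrying that same script."""
    assert utxo_script == COVENANT and sig is SETUP_SIG
    return COVENANT                  # the covenant reproduces itself

tip = COVENANT
for _ in range(3):
    tip = step(tip, SETUP_SIG)       # iterates autonomously after key deletion
```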

If other opcodes are added, such as OP_IN_OUT_AMOUNT, then you can get all
sorts of interesting recursive covenant stuff on top of that, since you
could pre-sign, e.g. for a quantized vault, a number of different
deposit/withdraw programs, as well as increasing balances depending on the
timeout waited.


Therefore, I think reasonable people might distinguish the "complexity
class" of the design space available with just CTV vs. APO.

In contrast, the approach of smaller independent steps:

1) Adding CTV
2) Adding CSFS (enables APO-like behavior, sufficient for Eltoo)
3) Adding flags to CTV, similar to TXHASH, or just adding TXHASH (enables
full covenants)
4) Ergonomic OPCodes for covenants like TLUV, EcTweak, MAST building, etc
(enables efficient covenants)

is a much more granular path where we are able to cleanly 'level up' into
each covenant complexity class only if we deem it to be safe.

comment about timelines to produce a modified APO

Best,

Jeremy

--
@JeremyRubin 

On Fri, Apr 22, 2022 at 4:23 AM darosior via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I would like to know people's sentiment about doing (a very slightly
> tweaked version of) BIP118 in place of
> (or before doing) BIP119.
>
> SIGHASH_ANYPREVOUT and its precedent iterations have been discussed for
> over 6 years. It presents proven and
> implemented usecases, that are demanded and (please someone correct me if
> i'm wrong) more widely accepted than
> CTV's.
>
> SIGHASH_ANYPREVOUTANYSCRIPT, if its "ANYONECANPAY" behaviour is made
> optional [0], can emulate CTV just fine.
> Sure then you can't have bare or Segwit v0 CTV, and it's a bit more
> expensive to use. But we can consider CTV
> an optimization of APO-AS covenants.
>
> CTV advocates have been presenting vaults as the flagship usecase.
> Although as someone who've been trying to
> implement practical vaults for the past 2 years i doubt CTV is necessary
> nor sufficient for this (but still
> useful!), using APO-AS covers it. And it's not a couple dozen more virtual
> bytes that are going to matter for
> a potential vault user.
>
> If after some time all of us who are currently dubious about CTV's stated
> usecases are proven wrong by onchain
> usage of a less efficient construction to achieve the same goal, we could
> roll-out CTV as an optimization.  In
> the meantime others will have been able to deploy new applications
> leveraging ANYPREVOUT (Eltoo, blind
> statechains, etc..[1]).
>
>
> Given the interest in, and demand for, both simple covenants and better
> offchain protocols it seems to me that
> BIP118 is a soft fork candidate that could benefit more (if not most of)
> Bitcoin users.
> Actually i'd also be interested in knowing if people would oppose the
> APO-AS part of BIP118, since it enables
> CTV's features, for the same reason they'd oppose BIP119.
>
>
> [0] That is, to not commit to the other inputs of the transaction (via
> `sha_sequences` and maybe also
> `sha_amounts`). Cf
> https://github.com/bitcoin/bips/blob/master/bip-0118.mediawiki#signature-message
> .
>
> [1] https://anyprevout.xyz/ "Use Cases" section


[bitcoin-dev] CTV Meeting #8 Reminder + Agenda (Tuesday, May 3rd, 12:00 PT / 7PM UTC)

2022-05-02 Thread Jeremy Rubin via bitcoin-dev
Developers,

A reminder that the regularly scheduled CTV Meeting is tomorrow at 12:00
Pacific Time in ##ctv-bip-review in Libera.

In terms of agenda, we'll keep it as an open forum for discussion guided by
the participants. Feel free to propose meeting topics in the IRC in advance
of the meeting to aid in allocating time to things that you would like to
have discussed.

Best,

Jeremy

--
@JeremyRubin 


Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-05-02 Thread Jeremy Rubin via bitcoin-dev
Ok, got it. Won't waste anyone's time on terminology pedantry.


The model that I proposed above is simply what *any* correct timestamping
service must do. If OTS does not follow that model, then I suspect that
whatever OTS is, it is provably incorrect or, in this context, unreliable,
even when servers and clients are honest. Unreliable might mean different
things to different people; I'm happy to detail the types of unreliability
issues that arise if you do not conform to the model I presented above
(linearizability is one way to address them; there are others that still
implement epoch-based recommitting and could be conceptually sound without
requiring linearizability).
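
For reference, a minimal sketch of the epoch-recommitting model as I
understand it (my reading, not OTS's actual design): batch pending digests
per epoch, and let the chain of epoch commitments totally order them:

```
from hashlib import sha256

def commit_epoch(prev_commitment: bytes, digests: list[bytes]) -> bytes:
    """Fold this epoch's pending digests into one commitment; the chain of
    commitments totally orders every digest across epochs."""
    acc = prev_commitment
    for d in sorted(digests):    # deterministic order within the epoch
        acc = sha256(acc + d).digest()
    return acc

c0 = sha256(b"genesis").digest()
c1 = commit_epoch(c0, [sha256(b"doc A").digest()])
c2 = commit_epoch(c1, [sha256(b"doc B").digest(), sha256(b"doc C").digest()])
# Anything in epoch 1 is provably committed before anything in epoch 2.
```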

Do you have any formal proof of what guarantees OTS provides against which
threat model? This is likely difficult to produce without a formal model of
what OTS is, but perhaps you can give your best shot at producing one and
we can carry the conversation on productively from there.


[bitcoin-dev] On The Drama

2022-05-01 Thread Jeremy Rubin via bitcoin-dev
Developers,

I know that some of you may be interested in hearing my perspective on what
happened and why. I still do not know exactly what happened and why.
However, I can offer a brief explanation of what I perceived my main
actions to be and the response to them:

1. I published and shared to this list a blog post encouraging review of
the viability of having a Speedy Trial (ST) with signalling beginning around
3.5 weeks out (May 12th), in line with previously communicated materials.
2. I held a regularly scheduled meeting to discuss the viability of an
activation attempt, "The Agenda for the meeting will be an open discussion
on the possibility of activating CTV[CheckTemplateVerify] in 2022, why we
may or may not wish to do that, if we did want to do that what would need
to be done, what the path might look like if we do not do that."
3. If ST was deemed viable, I provided a pathway for sufficient review to
occur, and I also wrote User Resisted Soft Fork (URSF) software to be used
such that miners are not unilaterally in control, as well as encouragement
for someone to author a User Activated Soft Fork (UASF) as a follow-up if
miners "vetoed".
4. If ST was not viable, I gave encouragement to more thoroughly "re-evaluate
the design of CTV against alternatives that would take more time to prepare
engineering wise (e.g., more general covenants, small tweaks to CTV)"
5. I made clear that CTV activation was "not a must. All actors must decide
if it’s in their own rational self-interest to have the soft-fork proceed."
6. I provided a review of rationale for why I thought this to be the right
next step for CTV, and for future soft forks to follow.

Since I posted my blog, there has been a flurry of inaccurate claims
lobbed at me across various platforms: that I am trying to route around
consensus, force miners to do a ST, force users to accept a patch they
don't want, calls for me to face various repercussions, attacks on my
character, and more. Anyone is free to read the material I actually
communicated myself and evaluate the claims of bad-faith being made. I
accept responsibility that ultimately I may not have communicated these
things clearly enough.

I've kept my word to listen to feedback on parameters before any release:

- I've not released binaries for a ST CTV client in May, and won't be.
- I've kept my promise not to run a UASF process.

I hope you can believe me that I am not trying to do anything wanton to
Bitcoin. I am trying to do my best to accurately communicate my exact
intentions and plans along the way, and learn from the ways I fell short.

I cannot thank enough the (majority!) of individuals who understand this
and have provided overwhelming amounts of personal support to me through
these last weeks. While I do not mistake that personal support for support
of my views, I wanted to share the depth of support and appreciation that
the community has for the difficult tasks developers engage in. This isn't
specific to me; the community has immense respect for the sacrifices every
developer makes in choosing to work on Bitcoin. The hate may be loud and
public on the shallow surface, but the love and support runs deep.

At the same time, it has been eye opening for me to see the processes by
which a kernel of disinformation blossoms into a panic across the Bitcoin
community. For any Bitcoin contributor who might engage in consensus
processes: Agree or disagree with the quality of my actions, it's worth
spending a little time to trace how the response to my proposal was
instigated so that you harden your own defenses against such disinformation
campaigns in the future. I encourage you to look closely at what various
"respected members of the community" have lobbied for because they
represent dangerous precedents for all Bitcoin developers. I've yet to
fully form my thoughts around this.

If you do not think that my actions lived up to my perception of them,
feel free to give me, either publicly or privately, any feedback on how I
can do better going forward.

With respect to this thread, I'll read whatever you send, but I won't be
reply-all'ing here as I view this as largely off-topic for this list,
unless anyone feels strongly otherwise.

Best,

Jeremy



--
@JeremyRubin 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Working Towards Consensus

2022-05-01 Thread Jeremy Rubin via bitcoin-dev
Developers,

There is much to say about the events of the last two weeks and the
response to them. I've been searching for the right words to share here,
but I think it best that, short of a more thoughtful writeup, I start with
a timely small step: the comments below.

First, let me be clear: I am not advancing a Speedy Trial(ST) activation of
Bitcoin Improvement Proposal-119 (BIP-119) CheckTemplateVerify (CTV) at
this time.

I'm skipping any discussion of the drama here. Most of you are interested
in developing Bitcoin, not drama. Let's try to keep this thread focused on
the actual work. I'll make some limited comments on the drama in a separate
thread, for those who care to hear from me on the subject directly.

I believe that the disinformation spread around my post ("7 Theses on a
next step for BIP-119"[0]) created three main negative outcomes within the
Bitcoin community:

1. Confusion about how Bitcoin's "technical consensus" works and how
changes are "approved".
2. Fear about the safety of CTV and covenants more broadly.
3. Misunderstandings around the properties of Speedy Trial, User Activated
Soft Fork (UASF), User Resisted Soft Fork (URSF), Soft Forks, Hard Forks,
and more.

While I cannot take responsibility for the spread of the disinformation, I
do apologize to anyone dealing with it for the role my actions have had in
leading to the current circumstance.

I personally take some solace in knowing that the only way out of this is
through it. The conversations happening now seem to have been more or less
inevitable; this has brought them to the surface, and as a technical
community we are able to address them head-on if -- as individuals and
collectively -- we choose to.
conversations represent incredibly important opportunities to participate
in defining the future of Bitcoin that would not be happening otherwise.
Ultimately, I am grateful to live in a time where I am able to play a small
role in such an important process. This is the work.

In the coming months, I expect the discourse to be messy, but I think the
work we should undertake is clear cut; at a minimum:

1. Make great efforts to better document how Bitcoin's technical consensus
process works today, how it can be improved, and how changes may be
formally reviewed while still being unofficially advanced.
2. Work diligently to address the concerns many in the community have
around the negative potential of covenants and better explain the
trade-offs between levels of functionality.
3. Renew conversations about activation and release mechanisms and
re-examine our priors around why Speedy Trial may have been acceptable for
Taproot but was not acceptable for BIP-119, and may not be optimal long
term[1], and work towards processes that better capture the Bitcoin
network's diverse interests and requirements.
4. Work towards thoroughly systematizing knowledge around covenant
technologies so that in the coming months we may work towards delivering a
coherent pathway for the Bitcoin technical community to evaluate and put up
for offer to the broader community an upgrade or set of upgrades to improve
Bitcoin's capabilities for self sovereignty, privacy, scalability, and
decentralization.

This may not be the easiest path to take, but I believe that this work is
critical to the future of Bitcoin. I welcome all reading this to share your
thoughts with this list on how we might work towards consensus going
forward, including any criticisms of my observations and recommendations
above. While I would expect nothing less than passionate debate when it
comes to Bitcoin, remember that at the end of the day we all largely share
a mission to make the world a freer place, even if we disagree about how we
get there.

Yours truly,

Jeremy

[0]: https://rubin.io/bitcoin/2022/04/17/next-steps-bip119/
[1]: http://r6.ca/blog/20210615T191422Z.html I quite enjoyed Roconnor's
detailed post on Speedy Trial

--
@JeremyRubin 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV Signet Parameters

2022-04-28 Thread Jeremy Rubin via bitcoin-dev
Sorry I didn't see this snippet fully earlier, but I caught it in Optech
(cc harding)


> (I didn't think DROP/1 is necessary here? Doesn't leaving the 32 byte
> hash on the stack evaluate as true? I guess that means everyone's using
> sapio to construct the txs?)


Not quite: it would mean that everyone is using *sapio-miniscript*, which
may or may not be via Sapio, or they are using a different miniscript
implementation that is compatible with sapio-miniscript's CTV fragment
(which is sort of the most obvious way to implement it), or they are
hand-writing the script and still using that fragment.

E.g., you can see
https://min.sc/nextc/#gist=001cf1fcb0e24ca9f3614c4db9bfe57d:2 or
https://min.sc/nextc/#gist=001cf1fcb0e24ca9f3614c4db9bfe57d:0 both of these
might "look" like sapio, but are built using minsc.

The underlying point might still stand, but using miniscript seems
different than using Sapio.
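
For intuition on the DROP/1 question quoted above, here is a toy stack
simulation in plain Python. This is not consensus code: the helper is a
simplified stand-in for script truthiness (it ignores edge cases like
negative zero), and the DROP/1 tail is presumably what the
sapio-miniscript fragment emits. It shows the bare 32-byte hash is already
truthy, while DROP/1 just normalizes the result to an explicit 1:

    import hashlib

    def cast_to_bool(item: bytes) -> bool:
        # Simplified script truthiness: any non-zero byte => true.
        return any(b != 0 for b in item)

    template_hash = hashlib.sha256(b"some transaction template").digest()

    # Final stack for "<H> OP_CHECKTEMPLATEVERIFY": the hash remains.
    assert cast_to_bool(template_hash)  # a real sha256 is never all zeros

    # Final stack for "<H> OP_CHECKTEMPLATEVERIFY OP_DROP OP_1": explicit 1.
    assert cast_to_bool(b"\x01")
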
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Towards a means of measuring user support for Soft Forks

2022-04-27 Thread Jeremy Rubin via bitcoin-dev
Generally speaking, I'm not too fond of these mechanisms, for reasons
others have expounded upon, but I will point out the following:

Taproot means that top-level keys can be used in a ring signature scheme to
collect coin votes from, e.g., all individual coins above a certain value
at a certain time without revealing the particulars of who signed.

This capability helps with some of the chainalysis concerns.

However, note that many thoughtful individuals do not currently have any
taproot outputs on mainchain AFAIK because wallets are not yet 'upgraded',
so it's more of a future possibility.

One thing that might be nice is if there were a way to sign with a NUMS
point for ring signature purposes, but not for transactions. Otherwise, if
NUMS points are common, these ring signature protocols might not be too
useful for collecting signals (even if they remain useful for covering a
set that includes the NUMS-pointed taproot outputs).

--
@JeremyRubin 

On Tue, Apr 26, 2022 at 1:12 PM Keagan McClelland via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> Alongside the debate with CTV right now there's a second debate that was
> not fully hashed out in the activation of Taproot. There is a lot of
> argument around what Speedy Trial is or isn't, what BIP8 T/F is or isn't
> etc. A significant reason for the breakdown in civility around this debate
> is that because we don't have a means of measuring user support for
> proposed soft-fork changes, it invariably devolves into people claiming that
> their circles support/reject a proposal, AND that their circles are more
> broadly representative of the set of Bitcoin users as a whole.
>
> It seems everyone in this forum has at one point or another said "I would
> support activation of <X> if there was consensus on it, but there isn't".
> This statement, in order to be true, requires that there exist a set of
> conditions that would convince you that there is consensus. People have
> tried to dodge this question by saying "it's obvious", but the reality is
> that it fundamentally isn't. My bubble has a different "obvious" answer
> than any of yours.
>
> Secondly, due to the trauma of the block size wars, no one wants to utter
> a statement that could imply that miners have any influence over what
> rulesets get activated or don't. As such "miner signaling" is consistently
> devalued as a signal for market demand. I don't think this is reasonable
> since, following the events of '17, miners are aware that they have a
> strong incentive to understand market demand. Nevertheless, as it
> stands right now the only signal we have to work with is miner signaling,
> which I think is rightly frustrating to a lot of people.
>
> So how can we measure User Support for a proposed rule change?
>
> I've had this idea floating around in the back of my head for a while, and
> I'd like to solicit some feedback here. Currently, all forms of activation
> that are under consideration involve miner signaling in one form or
> another. What if we could make it such that users could more directly
> pressure miners to act on their behalf? After all, if miners are but the
> humble servants of user demands, this should be in alignment with how
> people want Bitcoin to behave.
>
> Currently, the only means users have of influencing miner decisions are A.
> rejection of blocks that don't follow rules and B. paying fees for
> transaction inclusion. I suggest we combine these in such a way that
> transactions themselves can signal for upgrade. I believe (though am not
> certain) that there are "free" bits in the version field of a transaction
> that are presently ignored. If we could devise a mapping between some of
> those free bits, and the signaling bits in the block header, it would be
> possible to have rules as follows:
>
> - A transaction signaling in the affirmative MUST NOT be included in a
> block that does not signal in the affirmative
> - A transaction that is NOT signaling MAY be included in a block
> regardless of that block's signaling vector
> - (Optional) A transaction signaling in the negative MUST NOT be included
> in a block that signals in the affirmative
>
> Under this set of conditions, a user has the means of sybil-resistant
> influence over miner decisions. If a miner cannot collect the fees for a
> transaction without signaling, the user's fee becomes active economic
> pressure for the miner to signal (or not, if we include some variant of the
> negative clause). In this environment, miners could have a better view into
> what users do want, as would the Bitcoin network at large.
>
> Some may take issue with the idea that people can pay for the outcome they
> want and may try to compare a method like this to Proof of Stake, but there
> are only 3 sybil resistant mechanisms I am aware of, and any "real" view
> into what social consensus looks like MUST be sybil resistant:
>
> - Hashpower
> - Proof of personhood (KYC)

Re: [bitcoin-dev] ANYPREVOUT in place of CTV

2022-04-26 Thread Jeremy Rubin via bitcoin-dev
I can't find all of my earlier references around this (I thought I made a
thread on it), but as a reminder, my thoughts for mild tweaks to APO that
make it a bit less hacky are as follows:

- Remove OP_1 key punning and replace it with OP_GENERATOR and
OP_INTERNALKEY (maybe OP_EXTERNALKEY too?). The key punning is useful
generically, because I may want to reuse the internal key in conjunction
with a script path in some circumstances.
- Add an additional sequence field that is specific to a signature with no
other consensus meaning, so APO can be used with absolute timelocks. For
example, this makes it impossible for more than one ratchet to be
aggregated within a single transaction under any circumstance if their
sequences differ (not sure this is a good example, but an example
nonetheless).
- Replace tagged keys for APO with either a Checksig2 or a separate feature
flag that enables or disables APO behavior, so that we can have programmatic
control over whether APO is allowed for a given key (e.g., OP_IF <n> CSV
DROP CHECKSIG2 OP_ELSE CHECKSIG OP_ENDIF enables APO to be turned on after a
certain time, perhaps for a pre-approved backup transaction).

Overall, this would make eltoo ratchets look something like this:

  <sig> OP_1 OP_INTERNALKEY OP_CHECKSIG2VERIFY <n> OP_GREATERTHAN

where checksig2 leaves seq on the stack, which can be used to enforce the
ratchet.

and covenants like:

 <sig> OP_1 OP_1 OP_GENERATOR OP_CHECKSIG2VERIFY
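
To make the intended semantics concrete, a toy Python model of the ratchet
rule above, assuming (as described) that CHECKSIG2 leaves the signed
sequence/state number on the stack for the trailing <n> OP_GREATERTHAN;
signature checking itself is elided:

    def ratchet_spend_ok(committed_n: int, signed_seq: int) -> bool:
        # "... OP_CHECKSIG2VERIFY <n> OP_GREATERTHAN" succeeds only if the
        # sequence committed by the signature exceeds the state constant n,
        # so newer channel states always supersede older ones.
        return signed_seq > committed_n

    assert ratchet_spend_ok(committed_n=5, signed_seq=6)      # newer: ok
    assert not ratchet_spend_ok(committed_n=5, signed_seq=5)  # replay: fails
    assert not ratchet_spend_ok(committed_n=5, signed_seq=4)  # older: fails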







On Fri, Apr 22, 2022 at 4:23 AM darosior via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I would like to know people's sentiment about doing (a very slightly
> tweaked version of) BIP118 in place of
> (or before doing) BIP119.
>
> SIGHASH_ANYPREVOUT and its precedent iterations have been discussed for
> over 6 years. It presents proven and
> implemented usecases, that are demanded and (please someone correct me if
> i'm wrong) more widely accepted than
> CTV's.
>
> SIGHASH_ANYPREVOUTANYSCRIPT, if its "ANYONECANPAY" behaviour is made
> optional [0], can emulate CTV just fine.
> Sure then you can't have bare or Segwit v0 CTV, and it's a bit more
> expensive to use. But we can consider CTV
> an optimization of APO-AS covenants.
>
> CTV advocates have been presenting vaults as the flagship usecase.
> Although as someone who's been trying to
> implement practical vaults for the past 2 years i doubt CTV is necessary
> nor sufficient for this (but still
> useful!), using APO-AS covers it. And it's not a couple dozen more virtual
> bytes that are going to matter for
> a potential vault user.
>
> If after some time all of us who are currently dubious about CTV's stated
> usecases are proven wrong by onchain
> usage of a less efficient construction to achieve the same goal, we could
> roll-out CTV as an optimization.  In
> the meantime others will have been able to deploy new applications
> leveraging ANYPREVOUT (Eltoo, blind
> statechains, etc..[1]).
>
>
> Given the interest in, and demand for, both simple covenants and better
> offchain protocols it seems to me that
> BIP118 is a soft fork candidate that could benefit more (if not most of)
> Bitcoin users.
> Actually i'd also be interested in knowing if people would oppose the
> APO-AS part of BIP118, since it enables
> CTV's features, for the same reason they'd oppose BIP119.
>
>
> [0] That is, to not commit to the other inputs of the transaction (via
> `sha_sequences` and maybe also
> `sha_amounts`). Cf
> https://github.com/bitcoin/bips/blob/master/bip-0118.mediawiki#signature-message
> .
>
> [1] https://anyprevout.xyz/ "Use Cases" section
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] What to expect in the next few weeks

2022-04-26 Thread Jeremy Rubin via bitcoin-dev
Thanks, this is good feedback.

I think the main thing to add to forkd, then, would be some sort of seed
node set of other forkd runners that you can peer with? And have forkd be
responsible for making sure you addnode them?

wrt the generation of other problems, my understanding of the *summons
rusty's bat signal i wonder if he'll see this* triumvirate in this context
is that it's essentially, in this case:

- Dev proposes
- Miners may signal
- Users may credibly threaten that, if miners signal, miners will lose
consensus with a sufficient portion of the economy.


And that it's really, AFAIU, the *threat* of the outcome that ensures that
miners don't signal, and the follow-through is intentionally messy. If it's
*not* messy, then it is actually less effective and people just 'go their
separate ways', but if the intent is to drive consensus, it must be messy.

This is similar to Nuclear Deterrence game theory, whereby it's clearly not
the right call to use nukes, but paired with an irrational leader, the
credible threat serves to force a system of more relative peace. So the
pairing of ST + Users able to reject, albeit messily, does form a
relatively stable configuration.

Kudos to NVK for explaining the nuance to me.
--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Tue, Apr 26, 2022 at 3:47 AM Anthony Towns  wrote:

> On Mon, Apr 25, 2022 at 10:48:20PM -0700, Jeremy Rubin via bitcoin-dev
> wrote:
> > Further, you're representing the state of affairs as if there's a great
> > need to scramble to generate software for this, whereas there already are
> > scripts to support a URSF that work with the source code I pointed to
> from
> > my blog. This approach is a decent one, even though it requires two
> things,
> > because it is simple. I think it's important that people keep this in
> mind
> > because that is not a joke, the intention was that the correct set of
> check
> > and balance tools were made available. I'd be eager to learn what,
> > specifically, you think the advantages are of a separate binary release
> > rather than a binary + script that can handle both cases?
>
> The point of running a client with a validation requirement of "blocks
> must (not) signal" is to handle the possiblity of there being a chain
> split, where your preferred ruleset ends up on the less-work side.
>
> Ideally that will be a temporary situation and other people will come to
> your side, switch their miners over etc, and your chain will go back to
> having the most work, and anyone who wasn't running a client with the
> opposite signalling requirement will reorg to your chain and ruleset.
>
> But forkd isn't quite enough to do that reliably -- instead, you'll
> start disconnecting nodes who forward blocks to you that were built on
> the block you disconnected, and you'll risk ending up isolated: that's
> why bip8 recommends clients "should either use parameters that do not
> risk there being a higher work alternative chain, or specify a mechanism
> for implementations that support the deployment to preferentially peer
> with each other".
>
> Also, in order to have other nodes reorg to your chain when it has
> more work, you don't want to exclusively connect to likeminded peers.
> That's less of a big deal though, since you only need one peer to
> forward the new chain to the compatible network to trigger all of them
> to reorg.
>
> Being able to see the other chain has more work might be valuable in
> order to add some sort of user warning signal though: "the other chain
> appears to have maintained 3x as much hash power as the chain you are
> following".
>
> In theory, using the `BLOCK_RECENT_CONSENSUS_CHANGE` flag to indicate
> unwanted signalling might make sense; then you could theoretically
> trigger on that to avoid disconnecting inbound peers that are following
> the wrong chain. There's already some code along those lines; but while I
> haven't checked recently, I think it ends up failing relatively quickly
> once an invalid chain has been extended by a few blocks, since they'll
> result in `BLOCK_INVALID_PREV` errors instead. The segwit UASF client
> took some care to try to make this work, fwiw.
>
> (As it stands, I think RECENT_CONSENSUS_CHANGE only really helps with
> avoiding disconnections if there's one or maybe two invalid blocks in
> a row from a random miner that's doing strange things, rather than if
> there's an active conflict resulting in a deliberate chain split).
>
> On the other hand, if there is a non-trivial chain split, then everyone
> has to deal with splitting their coins across the different chains,
> presuming they don't want to just consider one or the other a complete
> write-off. That's already annoying; but for lightning funds I think

Re: [bitcoin-dev] What to expect in the next few weeks

2022-04-26 Thread Jeremy Rubin via bitcoin-dev
I'm a bit confused here. The "personal blog" in question was sent to this
list with an archive link, and you saw and replied to it.

The proposal to make an alternative path hadn't gotten sufficient buy-in
from those iterating, and given the propensity of people on this list to
blow things out of proportion, I wanted to be sure a follow-up plan carried
some buy-in before wider dissemination.

On Tue, Apr 26, 2022, 6:53 AM Michael Folkson 
wrote:

> Jeremy
>
> > The reason there was not a mailing list post is because that's not a
> committed plan, it was offered up for discussion to a public working group
> for feedback as a potential plan.
>
> In the interests of posterity from your personal blog on April 17th [1]:
>
> "Within a week from today, you’ll find software builds for a CTV Bitcoin
> Client for all platforms linked here:
>
>- Mac OSX TODO:
>- Windows TODO:
>- Linux TODO:
>
> These will be built using GUIX, which are reproducible for verification."
>
> Doesn't sound to me that this was being "offered up for discussion". A
> week from April 17th would have been Sunday April 24th (2 days ago).
> Readers of this mailing list would have had no idea of these plans.
>
> > You've inaccurately informed the list on something no one has
> communicated committed intent for.
>
> I'll let readers assess from the above who is accurately informing the
> mailing list and who is using personal blog posts and messaging apps to
> give a completely different impression to one set of people versus readers
> of this mailing list.
>
> I like to give people the benefit of the doubt and assume incompetence
> rather than malice but when it comes to potential chain splits it doesn't
> really matter which it is. It has the same effect and poses the same
> network risk. If and when you try something like this again I hope this is
> remembered.
>
> The Binance hack rollback suggestion, the NACKing then coin flip
> suggestion on Taproot activation and now this. It seems like this trillion
> dollar industry is a joke to you. I know we aren't supposed to get personal
> on this mailing list but honestly if you are going to continue with these
> stunts I'd rather you do them on a different blockchain.
>
> [1]: https://rubin.io/bitcoin/2022/04/17/next-steps-bip119/
>
> --
> Michael Folkson
> Email: michaelfolkson at protonmail.com
> Keybase: michaelfolkson
> PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
>
> --- Original Message ---
> On Tuesday, April 26th, 2022 at 6:48 AM, Jeremy Rubin <
> jeremy.l.ru...@gmail.com> wrote:
>
> The reason there was not a mailing list post is because that's not a
> committed plan, it was offered up for discussion to a public working group
> for feedback as a potential plan. You've inaccurately informed the list on
> something no one has communicated committed intent for. This was an
> alternative discussed in the telegram messaging app but did not seem to
> strike the correct balance so was not furthered.
>
> I was hoping to be able to share something back to this list sooner rather
> than later, but I have not been able to get, among those interested to
> discuss in that venue, coherence on a best next step. I communicated
> as much on the bird app
> https://twitter.com/JeremyRubin/status/1518347793903017984
> https://twitter.com/JeremyRubin/status/1518477022439247872, but do not
> have a clear next step and am poring over all the fantastic feedback I
> received so far.
>
> Further, you're representing the state of affairs as if there's a great
> need to scramble to generate software for this, whereas there already are
> scripts to support a URSF that work with the source code I pointed to from
> my blog. This approach is a decent one, even though it requires two things,
> because it is simple. I think it's important that people keep this in mind
> because that is not a joke, the intention was that the correct set of check
> and balance tools were made available. I'd be eager to learn what,
> specifically, you think the advantages are of a separate binary release
> rather than a binary + script that can handle both cases? I'm asking
> sincerely because I would make the modifications to the release I prepared
> to support that as well, if they do not entail substantial technical risk.
> Personally, were I aligned with your preferences, I'd be testing the forkd
> script and making sure it is easy to use as the simplest and most effective
> way to achieve your ends.
>
> regards,
>
> Jeremy
>
> --
> @JeremyRubin 
>
> On Mon, Apr 25, 2022 at 3:44 PM Michael Folkson via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> The latest I'm hearing (this mailing list appears to be being bypassed in
>> favor of personal blogs and messaging apps) is that Speedy Trial miner
>> signaling for the contentious CTV soft fork is no longer going to start on
>> May 5th (as previously communicated [1]) and may instead now start around
>> August 1st 

Re: [bitcoin-dev] What to expect in the next few weeks

2022-04-25 Thread Jeremy Rubin via bitcoin-dev
The reason there was not a mailing list post is because that's not a
committed plan, it was offered up for discussion to a public working group
for feedback as a potential plan. You've inaccurately informed the list on
something no one has communicated committed intent for. This was an
alternative discussed in the telegram messaging app but did not seem to
strike the correct balance so was not furthered.

I was hoping to be able to share something back to this list sooner rather
than later, but I have not been able to get, among those interested to
discuss in that venue, coherence on a best next step. I communicated
as much on the bird app
https://twitter.com/JeremyRubin/status/1518347793903017984
https://twitter.com/JeremyRubin/status/1518477022439247872, but do not have
a clear next step and am poring over all the fantastic feedback I
received so far.

Further, you're representing the state of affairs as if there's a great
need to scramble to generate software for this, whereas there already are
scripts to support a URSF that work with the source code I pointed to from
my blog. This approach is a decent one, even though it requires two things,
because it is simple. I think it's important that people keep this in mind
because that is not a joke, the intention was that the correct set of check
and balance tools were made available. I'd be eager to learn what,
specifically, you think the advantages are of a separate binary release
rather than a binary + script that can handle both cases? I'm asking
sincerely because I would make the modifications to the release I prepared
to support that as well, if they do not entail substantial technical risk.
Personally, were I aligned with your preferences, I'd be testing the
forkd script and making sure it is easy to use as the simplest and most
effective way to achieve your ends.

regards,

Jeremy

--
@JeremyRubin 

On Mon, Apr 25, 2022 at 3:44 PM Michael Folkson via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> The latest I'm hearing (this mailing list appears to be being bypassed in
> favor of personal blogs and messaging apps) is that Speedy Trial miner
> signaling for the contentious CTV soft fork is no longer going to start on
> May 5th (as previously communicated [1]) and may instead now start around
> August 1st 2022.
>
> Hence for now the drama seems to have been averted. I am deeply skeptical
> that in the next 3 months this soft fork activation attempt will obtain
> community consensus and will no longer be contentious (although I guess
> theoretically it is possible). As a result I suspect we'll be in the exact
> same situation with a URSF effort required 2-3 months down the line.
>
> If we are I'll try to keep the mailing list informed. It is important
> there is transparency and ample time to research and prepare before making
> decisions on what software to run. Obviously I have no control over what
> others choose to do. Please don't be rushed into running things you don't
> understand the implications of and please only signal for a soft fork if
> you are convinced it has community consensus (what should precede signaling
> as it did for Taproot) and you are ready to activate a soft fork.
>
> [1]: https://rubin.io/bitcoin/2022/04/17/next-steps-bip119/
>
> --
> Michael Folkson
> Email: michaelfolkson at protonmail.com
> Keybase: michaelfolkson
> PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
>
> --- Original Message ---
> On Saturday, April 23rd, 2022 at 11:03 AM, Michael Folkson via bitcoin-dev
>  wrote:
>
> As I said in my post:
>
> "If you care about Bitcoin's consensus rules I'd request you pay
> attention so you can make an informed view on what to run and what to
> support."
>
> Ideally everyone would come to an informed view independently.
> Unfortunately many people don't have the time to follow Bitcoin drama 24/7
> and hence struggle to separate noise from signal. In this case simple
> heuristics are better than nothing. One heuristic is to listen to those in
> the past who showed good judgment and didn't seek to misinform. Of course
> it is an imperfect heuristic. Ideally the community would be given
> sufficient time to come to an informed view independently on what software
> to run and not be rushed into making decisions. But it appears they are not
> being afforded that luxury.
>
> >  I fear you risk losing respect in the community
>
> I appreciate your concern.
>
> --
> Michael Folkson
> Email: michaelfolkson at protonmail.com
> Keybase: michaelfolkson
> PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
>
> --- Original Message ---
> On Saturday, April 23rd, 2022 at 6:10 AM, Billy Tetrud <
> billy.tet...@gmail.com> wrote:
>
> > assuming people pay attention and listen to the individuals who were
> trusted during that period
>
> Bitcoin is not run by a group of authorities of olde. By asking people to
> trust "those.. around in 2015-2017" you're asking people 

Re: [bitcoin-dev] CTV Signet Parameters

2022-04-22 Thread Jeremy Rubin via bitcoin-dev
Small note: it's a savings of 34 or 67 bytes *per histogram bucket* to have
bare CTV vs. v0/v1. The interesting thing is that, by making it cheaper
byte-wise, it might enable one to have, for the same byte budget, more
buckets, which would make the feerate savings for the user even greater.
E.g., assume user priorities are exponential, like:

[10, 12, 14, 17, 20, 24, 29, 35, 42, 51]

suppose binning into 4 groups yields:

[10, 12, 14], [17, 20, 24], [29, 35, 42], [51]

then the feerate of each group, summarized by the max times the bin count, is:

[14 x 3], [24 x 3], [42 x 3], [51 x 1] = 291

suppose binning into 5 groups yields:

[10, 12], [14, 17], [20, 24], [29, 35], [42, 51]

[12 x 2], [17 x 2], [24 x 2], [35 x 2], [51 x 2] = 278

so it's clear that 5 bins yield a discount, and the marginal cost
difference of 5 bins vs 4 can be more than "paid for" by switching to bare
instead of segwit v0.

E.g., 4 segwits = 4*34 additional
5 bares = 1 extra output (34 bytes) + 1 extra input (41 bytes) + extra tx
body (~10 bytes?) = ~2.5 x 34 additional weight

So while in this particular case the savings mean that 4 would likely be a
better binning than 5 even if bare were available, if you imagine the groups
scaled to more elements under the same distribution, eventually the savings
(291-278)*S > 2.5*34 make it worth switching to 5 bins sooner than you would
if the bins were more expensive.
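
A quick sketch to check the arithmetic, summarizing each bin by its max
feerate times its size; the 2.5 x 34 figure is the marginal weight of a
5th bare-CTV bin from the estimate above:

    priorities = [10, 12, 14, 17, 20, 24, 29, 35, 42, 51]

    def bin_cost(bins):
        # Everyone in a bin pays the bin's max priority (feerate).
        return sum(max(b) * len(b) for b in bins)

    four = [[10, 12, 14], [17, 20, 24], [29, 35, 42], [51]]
    five = [[10, 12], [14, 17], [20, 24], [29, 35], [42, 51]]
    assert sorted(sum(four, [])) == priorities

    print(bin_cost(four))  # 291
    print(bin_cost(five))  # 278
    print(34 + 41 + 10)    # 85 ~= 2.5 * 34, the extra weight of a 5th bin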

Kinda hard to perfectly characterize this type of knock-on effect, but it's
also cool to think about how the cheapness of the nodes in the graph changes
the optimal graph, which means you can't just do a simple comparison of how
much bigger a is than b.





On Thu, Apr 21, 2022 at 7:58 PM Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Thu, Apr 21, 2022 at 10:05:20AM -0500, Jeremy Rubin via bitcoin-dev
> wrote:
> > I can probably make some show up sometime soon. Note that James' vault
> uses
> > one at the top-level https://github.com/jamesob/simple-ctv-vault, but I
> > think the second use of it (since it's not segwit wrapped) wouldn't be
> > broadcastable since it's nonstandard.
>
> The whole point of testing is so that bugs like "wouldn't be broadcastable
> since it's nonstandard" get fixed. If these things are still in the
> "interesting thought experiment" stage, but nobody but Jeremy is
> interested enough to start making them consistent with the proposed
> consensus and policy rules, it seems very premature to be changing
> consensus or policy rules.
>
> > One case where you actually use less space is if you have a few different
> > sets of customers at N different fee priority levels. Then, you might need
> > to have N independent batches, or risk overpaying against the customer's
> > priority level. Imagine I have 100 tier 1 customers and 1000 tier 2
> > customers. If I batch tier 1 with tier 2, to provide tier 1 guarantees
> > I'd need to pay tier 1 rate for 10x the customers. With CTV, I can combine
> > my batch into a root and N batch outputs. This eliminates the need for
> > inputs, signatures, change outputs, etc per batch, and can be slightly
> > smaller. Since the marginal benefit on that is still pretty small, having
> > bare CTV improves the margin of byte-wise savings.
>
> Bare CTV only saves bytes when *spending* -- but this is when you're
> creating the 1100 outputs, so an extra 34 or 67 bytes of witness data
> seems fairly immaterial (0.05% extra vbytes?). It doesn't make the small
> commitment tx any smaller.
>
> ie, scriptPubKey looks like:
>  - bare ctv: [push][32 bytes][op_nop4]
>  - p2wsh: [op_0][push][32 bytes]
>  - p2tr: [op_1][push][32 bytes]
>
> while witness data looks like:
>  - bare ctv: empty scriptSig, no witness
>  - pw2sh: empty scriptSig, witness = "[push][32 bytes][op_nop4]"
>  - p2tr: empty scriptSig, witness = 33B control block,
>  "[push][32 bytes][op_nop4]"
>
> You might get more a benefit from bare ctv if you don't pay all 1100
> outputs in a single tx when fees go lower; but if so, you're also wasting
> quite a bit more block space in that case due to the intermediate
> transactions you're introducing, which makes it seem unlikely that
> you care about the extra 9 or 17 vbytes bare CTV would save you per
> intermediate tx...
>
> I admit that I am inclined towards micro-optimising things to save
> those bytes if it's easy, which does incline me towards bare CTV; but
> the closest thing we have to real user data suggests that nobody's going
> to benefit from that possibility anyway.
>
> > Even if we got rid of bare ctv, segwit v0 CTV would still exist, so we
> > couldn't use OP_SUCCESSx there either. segwitv0 might be desired if
> someone
> >

Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-21 Thread Jeremy Rubin via bitcoin-dev
I think I've discussed this type of concept previously somewhere but cannot
find a link to where.

Largely, I think the following:

1) This doesn't reduce burden of maintenance and risk of consensus split,
it raises it:
   A) as we now have a bunch of tricky code around reorgs and mempool
around the time of rule de-activation.
   B) we need to permanently maintain the rule to validate old blocks fully
2) Most of the value of a 'temporary soft fork' is more safely captured by
use of a CTV emulation server (or servers), which has a more graceful
degradation property: the servers simply shut down and stop authorizing new
contracts, but funds are not vulnerable to theft. The model here is trust,
as opposed to a timeout.
   2A) The way I implemented the oracles in CTV was such that, if we wanted
to, we could actually soft-fork the rules for the oracle's keys such that
they would *have to* only sign CTV-valid transactions (e.g., the keys could
be made public). Pretty weird model, but cool that it would enable
after-the-fact trust model improvements. This could be generalized for any
opcode: emulator -> consensus-guaranteed emulator -> non-signature-based
opcode.
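
A minimal sketch of the emulation-server idea in (2), with a simplified
stand-in for the real BIP-119 template hash (which commits to more fields
than just the outputs); "sign" is a placeholder, not a real signature:

    import hashlib

    def template_hash(outputs):
        # outputs: list of (value_sats, scriptPubKey_bytes) pairs.
        ser = b"".join(v.to_bytes(8, "little") + spk for v, spk in outputs)
        return hashlib.sha256(ser).digest()

    class EmulationOracle:
        def __init__(self, committed_hash: bytes):
            self.committed_hash = committed_hash

        def sign(self, spending_outputs) -> bytes:
            # Only authorize spends matching the committed template; if the
            # server shuts down, it simply stops authorizing new contracts.
            if template_hash(spending_outputs) != self.committed_hash:
                raise ValueError("outputs deviate from committed template")
            return b"placeholder-signature"

    outs = [(50_000, b"\x00\x14" + b"\x11" * 20)]
    oracle = EmulationOracle(template_hash(outs))
    oracle.sign(outs)  # ok; any other output set raises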

Although I will note that I like the spirit of this, and encourage thinking
more creatively about other ways to have temporary forks in Bitcoin like
this.

Best,

Jeremy

--
@JeremyRubin 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV Signet Parameters

2022-04-21 Thread Jeremy Rubin via bitcoin-dev
Hi Russell,

Thank you for your feedback here.



> However, I'm still skeptical of the bare-CTV part of BIP-119 (and I'm told
> that bare-CTV hasn't even appeared on the CTV signet).  Unlike the general
> smart-contracting case, bare-CTV does not have any branches.  All it can do
> is commit to a subsequent transaction's outputs.  At first glance this
> appears to be a waste because, for less bandwidth, that transaction could
> just realize those outputs immediately, so why would anyone want to delay
> the inevitable?
>

I can probably make some show up sometime soon. Note that James' vault uses
one at the top-level https://github.com/jamesob/simple-ctv-vault, but I
think the second use of it (since it's not segwit wrapped) wouldn't be
broadcastable since it's nonstandard.




>
> One reason might be that you want to commit to the output early during a
> high-fee time, and then complete the transaction later during a low-fee
> time.  While there are fee-rate situations where this could result in lower
> fees than committing to the outputs all at once, it would be even cheaper
> still to just wait to do the payout at the low-fee time.  I'm struggling to
> understand the advantages of the advanced commitment, along with all the
> overhead that entails.  Doesn't it just cause more blockspace to be used
> overall?
>

One case where you actually use less space is if you have a few different
sets of customers at N different fee priority levels. Then, you might need
to have N independent batches, or risk overpaying against the customer's
priority level. Imagine I have 100 tier 1 customers and 1000 tier 2
customers. If I batch tier 1 with tier 2, to provide tier 1 guarantees
I'd need to pay tier 1 rate for 10x the customers. With CTV, I can combine
my batch into a root and N batch outputs. This eliminates the need for
inputs, signatures, change outputs, etc per batch, and can be slightly
smaller. Since the marginal benefit on that is still pretty small, having
bare CTV improves the margin of byte-wise savings.
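
A rough cost model of that comparison, with illustrative vbyte numbers
assumed rather than measured: each independent batch pays for its own
signed input and change output, while the CTV expansions spend bare-CTV
outputs with no witness and no change:

    OUTPUT = 34          # one payment output, vbytes
    SIGNED_INPUT = 148   # input with a signature (ballpark)
    BARE_CTV_INPUT = 41  # input spending a bare CTV output, empty scriptSig
    OVERHEAD = 10        # per-transaction serialization overhead

    def independent_batches(batch_sizes):
        # One signed tx per tier, each with its own change output.
        return sum(OVERHEAD + SIGNED_INPUT + OUTPUT * (n + 1)
                   for n in batch_sizes)

    def ctv_batches(batch_sizes):
        # One signed root paying a CTV output per tier, then unsigned expansions.
        root = OVERHEAD + SIGNED_INPUT + OUTPUT * (len(batch_sizes) + 1)
        return root + sum(OVERHEAD + BARE_CTV_INPUT + OUTPUT * n
                          for n in batch_sizes)

    print(independent_batches([100, 1000]))  # 37784
    print(ctv_batches([100, 1000]))          # 37762, slightly smaller
    # The root alone can pay the tier-1 feerate; each expansion then pays
    # its own tier's rate.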

I give this as an example where CTV uses less space, it is detailed more
here: https://utxos.org/analysis/batching_sim/. This benefit might be
marginal and absurd, given these are already big transactions, but it may
_also_ be absurd that feerates only ever go up and congestion control is
not valuable.

Another example where this arises is where you have a transaction set you
need to pay top-of-mempool rate for the soonest confirmation you can get.
CTV has a decongesting effect, because your top-of-mempool transaction is
small, which doesn't trigger as much rivalrous behavior with other
transactors. Concretely, the current policy max txn size is 100kb, or 10%
of a block. If you bump out of next block window 10% of the mempool, then
if those transactors care to maintain their positioning, they will need to
put themselves into a higher percentile with e.g. RBF or CPFP. Whereas if
you put in a transaction that is just 100 bytes, you only bump out 100
bytes of rivals (0.01%), not 10%.

Lastly, perhaps a more realistic scenario is where I am batching to 100
customers who all wish to do something else after I pay them. E.g., open a
lightning channel. Being able to use CTV noninteractive channels cuts
through the extra hop transaction (unless dual funded channels, unless the
channels are opened between two customers, then they can be dual funded
again). So using CTV here also saves in net blockspace (although, note,
this is sort of orthogonal to using CTV over the batch itself, just a good
example for the related question of 'doesn't ctv generally use more
blockspace').


> There are some other proposed use cases for bare-CTV.  A bare-CTV can be
> used to delay a "trigger"-transaction.  Some contracts, such as vaults, use
> a relative-locktime as part of their construction and it could make sense
> to make an output commitment but not realize those outputs yet until you
> are ready to start your relative-time lock clock.  But bare-CTV doesn't
> support any co-signing ability here, so you are relying entirely on keeping
> the transaction data secret to prevent a third-party from triggering your
> relative-lock clock.  More specifically for a vault scheme, since
> bare-CTV's are currently unaddressable, and AFAIK, there is no address
> format proposal yet, it is impossible to receive funds directly into a
> vault.  You must shuffle received funds into your vault yourself, which
> seems very likely to negate the cost savings of using bare-CTV in the first
> place (please correct me if I'm wrong).  Better to receive funds directly
> into a taproot-CTV vault, which has an address, and while you are at it,
> you could place the cold-key as the taproot key to save even more when
> using the cold-key to move vault funds.
>
>
This is not quite true: you can receive funds into a bare-CTV from your
vault software, and you can send into one from your vault software. What
doesn't work is exporting or creating an address for 

Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-21 Thread Jeremy Rubin via bitcoin-dev
> You'd be confiscating your own funds by making an absurd spending
> condition.
> By this argument, ALL softforks would have to be ruled out.

The argument is that transactions which can be relayed, accepted into the
mempool, and then confirmed should not ever be restricted.

This is so that old nodes' mempools don't produce invalid blocks after an
upgrade.

This is what a good chunk of policy is for, and we (being core) do bounce
these txns to make clear what might be upgraded.

Changing the detail you mentioned represents a tweak that could make old
nodes mine invalid blocks. That's all I'm ruling out.



> > In preparing it I just used what was available in Core now, surely the
> last
> > year you could have gotten the appropriate patches done?
>
> They were done, reviewed, and deployed in time for Taproot. You personally
>
> played a part in sabotaging efforts to get it merged into Core, and
> violating
> the community's trust in it by instead merging your BIP9 ST without
> consensus. Don't play dumb. You have nobody to blame but yourself.
>


Even if I accept full responsibility for BIP9 ST without consensus, you
still had the last year to convince the rest of the maintainers to review
and merge your activation code, which you did not do.

Don't confuse consensus-seeking with preference. My preference was to leave
versionbits entirely.

Nor am I blame seeking. I'm simply asking why, if this is _the_ most
important thing for Bitcoin (as I've heard some BIP8 LOT=true people
remark), did you not spend the last year improving your advocacy. And I'm
suggesting that you redouble those efforts by, e.g., opening a new PR for
Core with logic you find acceptable and continuing to drive the debate
forward. None of these things happen without advocacy.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV Signet Parameters

2022-04-21 Thread Jeremy Rubin via bitcoin-dev
Missed one for part 2:

Shesek's social recovery wallet using CTV to enforce timelocks without
expiry, using his Minsc toolchain:

https://twitter.com/shesek/status/1511619296367153153
https://docs.google.com/presentation/d/1B59CdMIXW-wSW6CaLSgo7y4kvgrEwVgfY14IW2XV_MA/edit#slide=id.g1235f9ffb79_0_81
https://github.com/shesek/plebfi2022-social-recovery
--
@JeremyRubin 


On Thu, Apr 21, 2022 at 1:16 AM Jeremy Rubin 
wrote:

> This probably merits a more thorough response, but I wanted to respond on the
> framework above:
>
>
>  1a) can you make transactions using the new feature with bitcoin-cli,
>  eg createrawtransaction etc? *(YES)*
>
> since ~Feb 2020, this has existed:
> https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-feb1-workshop
>
> CTV hasn't changed so this code should work un-rebased. The transaction
> outputs may need to be manually submitted to the network, but the covenant
> is enforced. This covers congestion control and vaults.
>
>
>  1b) can you make transactions using the new feature with some other
>  library? *(YES)*
> Sapio, Test Framework, also https://min.sc/nextc/ produced independently
> by Shesek
>
>  1c) can you make transactions using the new feature with most common
>  libraries? *(YES, kinda)*
>
> Yes, https://crates.io/crates/sapio-miniscript and
> https://crates.io/crates/sapio-bitcoin have been maintained for about 1
> year, and are now taproot compatible.
>
> Sapio's use of these libraries has even helped find bugs in the release
> process of Taproot for rust-bitcoin.
>
> kinda: It's not _most_ common libraries, it's _a_ common library. It's
> also not upstreamed, because the patches would not be accepted were it to
> be.
>
>  2) has anyone done a usable prototype of the major use cases of the new
> feature? *(YES)*
>
> In addition to https://github.com/jamesob/simple-ctv-vault, there is also
> https://github.com/kanzure/python-vaults, although it has an interesting
> bug.
>
> There's also a myriad of uses shown in
> https://github.com/sapio-lang/sapio/tree/master/sapio-contrib/src/contracts
> and in https://github.com/sapio-lang/sapio/tree/master/plugin-example.
> While these aren't quite "usable" as an end-to-end application, e.g.,
> something you'd want to put real money on, they are a part of a *massive*
> infrastructure investment in general purpose smart contract tooling for
> covenant design with CTV. That CTV can be targeted with a compiler to
> generate a wide variety of composable use cases *is* one of the use cases
> for CTV, since it enables people to design many different types of thing
> relatively easily. That is a feature of CTV! It's not just for one use case.
>
> The suite of Sapio apps are less "production ready" than they could be for
> a few reasons:
>
> 1) I've been working hard at pushing the limits of what is possible & the
> theory of it v.s. making it production ready
> 2) I prioritized supporting Taproot v.s. legacy script, and much of the
> taproot tooling isn't production ready
> 3) Sapio is a really ambitious undertaking, and it will take time to make
> it production grade
>
> That said, https://rubin.io/bitcoin/2022/03/22/sapio-studio-btc-dev-mtg-6/
> tutorial was completed by people who weren't me, and at the
> pleb.fi/miami2022 one of the projects was able to use sapio congestion
> control transactions as well, so it does "work". As it matures, we'll get a
> number of implemented use cases people have been excited about like DLCs,
> which are implemented here
> https://github.com/sapio-lang/sapio/blob/master/sapio-contrib/src/contracts/derivatives/dlc.rs.
> You can see the test case shows how to construct one.
>
> Why did I not focus on production grade? Well, production grade can always
> happen later, and I don't think it takes as much imagination. But the main
> critique I'd heard of CTV was that no one could see it being used for
> anything but one or two use cases. So I built Sapio, in part, to show how
> CTV could be used for an incredibly wide and diverse set of applications,
> as opposed to the polish on them.
>
> If I knew the bar to surpass was to be polish, I probably could have taken
> a less ambitious approach with Sapio and shown like 1-2 applications
> working end-to-end. But because the main feedback I got was that CTV wasn't
> powerful enough, I opted to build a very general framework for covenants
> and demonstrate how CTV fits that.
>
>
>
>
>
> --
> @JeremyRubin 
>
> On Thu, Apr 21, 2022 at 12:05 AM Anthony Towns via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Wed, Apr 20, 2022 at 05:13:19PM +, Buck O Perley via bitcoin-dev
>> wrote:
>> > All merits (or lack thereof depending on your view) of CTV aside, I
>> find this topic around decision making both interesting and important.
>> While I think I sympathize with the high level concern about making sure
>> there are use cases, interest, and sufficient testing 

Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-21 Thread Jeremy Rubin via bitcoin-dev
> While reverting Segwit wouldn't be possible, it IS entirely possible to
do an
> additional softfork to either weigh witness data at the full 4 WU/Byte
rate
> (same as other data), or to reduce the total weight limit so as to extend
the
> witness discount to non-segwit transactions (so scriptSig is similarly
> discounted).

What if I pre-signed a transaction which was valid under the discounted
weighting, but the increase in weight would make it invalid? This change
would serve to confiscate those funds. Let's not do that.
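
Illustrating with made-up numbers, and using Bitcoin Core's standardness
weight ceiling for concreteness; the same argument applies to any fixed
weight limit a presigned transaction was built against:

    MAX_STANDARD_TX_WEIGHT = 400_000  # Bitcoin Core policy limit, weight units

    base_size, witness_size = 60_000, 150_000  # bytes; hypothetical presigned tx

    current_weight = 4 * base_size + witness_size     # witness at 1 WU/byte
    reverted_weight = 4 * (base_size + witness_size)  # witness at 4 WU/byte

    print(current_weight)   # 390000: fits today
    print(reverted_weight)  # 840000: no longer fits after such a soft fork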



> Furthermore, the variant of Speedy Trial being used (AFAIK) is the BIP9
> variant which has no purpose other than to try to sabotage parallel UASF
> efforts.

Why didn't you upstream the code that was used for the actual activation
into Bitcoin Core in the last year?

In preparing it, I just used what was available in Core now; surely in the
last year you could have gotten the appropriate patches done?


--
@JeremyRubin 

On Thu, Apr 21, 2022 at 12:57 AM Luke Dashjr via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Thursday 21 April 2022 03:10:02 alicexbt wrote:
> > @DavidHarding
> >
> > Interesting proposal to revert consensus changes. Is it possible to do
> this
> > for soft forks that are already activated?
>
> Generally, no. Reverting a softfork without a built-in expiry would be a
> hardfork.
>
> > Example: Some users are not okay with witness discount in segwit
> > transactions
> >
> > https://nitter.net/giacomozucco/status/1513614380121927682
>
> While reverting Segwit wouldn't be possible, it IS entirely possible to do
> an
> additional softfork to either weigh witness data at the full 4 WU/Byte
> rate
> (same as other data), or to reduce the total weight limit so as to extend
> the
> witness discount to non-segwit transactions (so scriptSig is similarly
> discounted).
>
> > @LukeDashjr
> >
> > > The bigger issue with CTV is the miner-decision route. Either CTV has
> > > community support, or it doesn't. If it does, miners shouldn't have the
> > > ability to veto it. If it doesn't, miners shouldn't have the ability to
> > > activate it (making it a 51% attack more than a softfork).
> >
> > Agree. UASF client compatible with this speedy trial release for BIP 119
> > could be a better way to activate CTV. Users can decide if they prefer
> > mining pools to make the decision for them or they want to enforce it
> > irrespective of how many mining pools signal for it. I haven't seen any
> > arguments against CTV from mining pools yet.
>
> We had that for Taproot, and now certain people are trying to say Speedy
> Trial
> activated Taproot rather than the BIP8 client, and otherwise creating
> confusion and ambiguity.
>
> Furthermore, the variant of Speedy Trial being used (AFAIK) is the BIP9
> variant which has no purpose other than to try to sabotage parallel UASF
> efforts.
>
> At this point, it is probably better for any Speedy Trial attempts to be
> rejected by the community and fail outright. Perhaps even preparing a real
> counter-softfork to invalidate blocks signalling for it.
>
> Luke
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV Signet Parameters

2022-04-21 Thread Jeremy Rubin via bitcoin-dev
This probably merits a more thorough response, but I wanted to respond on the
framework above:


 1a) can you make transactions using the new feature with bitcoin-cli,
 eg createrawtransaction etc? *(YES)*

since ~Feb 2020, this has existed:
https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-feb1-workshop

CTV hasn't changed so this code should work un-rebased. The transaction
outputs may need to be manually submitted to the network, but the covenant
is enforced. This covers congestion control and vaults.


 1b) can you make transactions using the new feature with some other
 library? *(YES)*
Sapio, Test Framework, also https://min.sc/nextc/ produced independently by
Shesek

 1c) can you make transactions using the new feature with most common
 libraries? *(YES, kinda)*

Yes, https://crates.io/crates/sapio-miniscript and
https://crates.io/crates/sapio-bitcoin have been maintained for about 1
year, and are now taproot compatible.

Sapio's use of these libraries has even helped find bugs in the release
process of Taproot for rust-bitcoin.

kinda: It's not _most_ common libraries, it's _a_ common library. It's also
not upstreamed, because the patches would not be accepted were it to be.

 2) has anyone done a usable prototype of the major use cases of the new
feature? *(YES)*

In addition to https://github.com/jamesob/simple-ctv-vault, there is also
https://github.com/kanzure/python-vaults, although it has an interesting
bug.

There's also a myriad of uses shown in
https://github.com/sapio-lang/sapio/tree/master/sapio-contrib/src/contracts
and in https://github.com/sapio-lang/sapio/tree/master/plugin-example.
While these aren't quite "usable" as an end-to-end application, e.g.,
something you'd want to put real money on, they are a part of a *massive*
infrastructure investment in general purpose smart contract tooling for
covenant design with CTV. That CTV can be targeted with a compiler to
generate a wide variety of composable use cases *is* one of the use cases
for CTV, since it enables people to design many different types of things
relatively easily. That is a feature of CTV! It's not just for one use case.

The suite of Sapio apps is less "production ready" than it could be for
a few reasons:

1) I've been working hard at pushing the limits of what is possible & the
theory of it vs. making it production ready
2) I prioritized supporting Taproot vs. legacy script, and much of the
taproot tooling isn't production ready
3) Sapio is a really ambitious undertaking, and it will take time to make it
production ready

That said, the tutorial at
https://rubin.io/bitcoin/2022/03/22/sapio-studio-btc-dev-mtg-6/ was
completed by people who weren't me, and at pleb.fi/miami2022 one of the
projects was able to use Sapio congestion control transactions as well, so
it does "work". As it matures, we'll get a number of implemented use cases
people have been excited about, like DLCs, which are implemented here:
https://github.com/sapio-lang/sapio/blob/master/sapio-contrib/src/contracts/derivatives/dlc.rs.
The test case shows how to construct one.

Why did I not focus on production grade? Well, production grade can always
happen later, and I don't think it takes as much imagination. But the main
critique I'd heard of CTV was that no one could see it being used for
anything but one or two use cases. So I built Sapio, in part, to show how
CTV could be used for an incredibly wide and diverse set of applications,
rather than to polish any one of them.

If I had known the bar to surpass would be polish, I probably could have
taken a less ambitious approach with Sapio and shown 1-2 applications
working end-to-end. But because the main feedback I got was that CTV wasn't
powerful enough, I opted to build a very general framework for covenants
and demonstrate how CTV fits it.





--
@JeremyRubin 

On Thu, Apr 21, 2022 at 12:05 AM Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Wed, Apr 20, 2022 at 05:13:19PM +, Buck O Perley via bitcoin-dev
> wrote:
> > All merits (or lack thereof depending on your view) of CTV aside, I find
> this topic around decision making both interesting and important. While I
> think I sympathize with the high level concern about making sure there are
> use cases, interest, and sufficient testing of a particular proposal before
> soft forking it into consensus code, it does feel like the attempt to
> attribute hard numbers in this way is somewhat arbitrary.
>
> Sure. I included the numbers for falsifiability mostly -- so people
> could easily check if my analysis was way off the mark.
>
> > For example, I think it could be reasonable to paint the list of
> examples you provided where CTV has been used on signet in a positive
> light. 317 CTV spends “out in the wild” before there’s a known activation
> date is quite a lot
>
> Not really? Once you can make one transaction, it's trivial to make
> hundreds. It's more 

[bitcoin-dev] 7 Theses on a next step for BIP-119

2022-04-19 Thread Jeremy Rubin via bitcoin-dev
Devs,

In advance of the CTV meeting today, I wanted to share what my next step is
in advocating for CTV, as well as 7 theses for why I believe it to be the
right course of action to take at this time.

Please see the post at
https://rubin.io/bitcoin/2022/04/17/next-steps-bip119/.

As always, open to hear any and all feedback,

Jeremy


archived at:
https://web.archive.org/web/20220419172825/https://rubin.io/bitcoin/2022/04/17/next-steps-bip119/


[bitcoin-dev] CTV Meeting #7 Reminder + Agenda (Tuesday, April 19th, 12:00 PT / 7PM UTC)

2022-04-18 Thread Jeremy Rubin via bitcoin-dev
Devs,

Apologies for the delay in posting the reminder. As noted on March 22nd,
the 7th meeting was postponed to the time of the 8th meeting given the
Miami conference scheduling conflicts.

We'll hold the meeting tomorrow at noon Pacific time as usual.

The agenda for the meeting will be an open discussion on the possibility of
activating CTV in 2022: why we may or may not wish to do that; if we did
want to do that, what would need to be done; and what the path might look
like if we do not do that.

I will try to publish some written thoughts ahead of the meeting for
reference.

If you are unable to attend, you may leave a comment in response below and
I will reference it in the minutes of the meeting.

Best,

Jeremy

--
@JeremyRubin 


Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-04-17 Thread Jeremy Rubin via bitcoin-dev
The 'lots of people' stuff (get confused, can't figure out what I'm
quoting, actually are reading this conversation) is an appeal to an
authority that doesn't exist. If something is unclear to you, let me know.
If it's unclear to a supposed existential person or set of persons, they
can let me know.


Concretely, I am confused by how OTS can both support RBF for updating to
larger commitments (the reason you're arguing with me) and not have an
epoch-based re-committing scheme and still be correct. My assumption now,
short of a coherent spec that's not just 'read the code', is that OTS
probably is not formally correct and has some holes in what is
committed to, or relies on clients re-requesting proofs if they fail to be
committed. In any case, you would be greatly aided by having an actual spec
for OTS, since I'm not interested in the specifics of OTS software, but I'm
willing to look at the protocol. So if you do that, maybe we can talk more
about the issue you see with how sponsors works.

Further, I think that if there is something that sponsors does that could
make a hypothetical OTS-like service work better, in a way that would be
opaque (read: soft-fork like wrt compatibility) to clients, then we should
just change what OTS is rather than committing ourselves to a worse design
in service of some unstated design goals. In particular, it seems that
OTS's servers can be linearized, and because old clients aren't looking for
linearization, the new linearization won't be a breaking change for
old clients, just calendar servers. And new clients can benefit from
linearization.
--
@JeremyRubin 


On Fri, Apr 15, 2022 at 7:52 AM Peter Todd  wrote:

> On Mon, Apr 11, 2022 at 09:18:10AM -0400, Jeremy Rubin wrote:
> > > nonsense marketing
> >
> > I'm sure the people who are confused about "blockchain schemes as \"world
> > computers\" and other nonsense
> > marketing" are avid and regular readers of the bitcoin devs mailing list
> so
> > I offer my sincerest apologies to all members of the intersection of
> those
> > sets who were confused by the description given.
>
> Of course, uninformed people _do_ read all kinds of technical materials.
> And
> more importantly, those technical materials get quoted by journalists,
> scammers, etc.
>
> > > useless work
> >
> > progress is not useless work, it *is* useful work in this context. you
> have
> > committed to some subset of data that you requested -- if it was
> 'useless',
> > why did you *ever* bother to commit it in the first place? However, it is
> > not 'maximally useful' in some sense. However, progress is progress --
> > suppose you only confirmed 50% of the commitments, is that not progress?
> If
> > you just happened to observe 50% of the commitments commit because of
> > proximity to the time a block was mined and tx propagation naturally
> would
> > you call it useless?
>
> Please don't trim quoted text to the point where all context is lost. Lots
> of
> people read this mailing list and doing that isn't helpful to them.
>
> > > Remember that OTS simply proves data in the past. Nothing more.
> > > OTS doesn't have a chain of transactions
> > Gotcha -- I've not been able to find an actual spec of Open Time Stamps
>
> The technical spec of OpenTimestamps is of course the normative validation
> source code, currently python-opentimestamps, similar to how the technical
> spec
> of Bitcoin is the consensus parts of the Bitcoin Core codebase. The
> explanatory
> docs are linked on https://opentimestamps.org under the "How It Works"
> section.
> It'd be good to take the linked post in that section and turn it into
> better
> explanatory materials with graphics (esp interactive/animated graphics).
>
> > anywhere, so I suppose I just assumed based on how I think it *should*
> > work. Having a chain of transactions would serve to linearize history of
> > OTS commitments which would let you prove, given reorgs, that knowledge
> of
> > commit A was before B a bit more robustly.
>
> I'll reply to this as a separate email as this discussion - while useful -
> is
> getting quite off topic for this thread.
>
> > >  I'd rather do one transaction with all pending commitments at a
> > particular time
> > rather than waste money on mining two transactions for a given set of
> > commitments
> >
> > This sounds like a personal preference v.s. a technical requirement.
> >
> > You aren't doing any extra transactions in the model i showed, what
> you're
> > doing is selecting the window for the next based on the prior conf.
>
> ...the model you showed is wrong, as there is no reason to have a
> linearized
> transaction history. OpenTimestamps proofs don't even have the concept of
> transactions: the proof format proves that data existed prior to a merkle
> root
> of a particular Bitcoin block. Not a Bitcoin transaction.
>
> > See the diagram below, you would have to (if OTS is correct) support this
> > sort of 'attempt/confirm' head that 

Re: [bitcoin-dev] A Calculus of Covenants

2022-04-12 Thread Jeremy Rubin via bitcoin-dev
A note of clarification:

This is from the perspective of a developer trying to build infrastructure
for covenants. From the perspective of bitcoin consensus, a
covenant-enforcing primitive would be something like OP_TLUV, and less so
its use in conjunction with other opcodes, e.g. OP_AMOUNT.

One must also analyze all the covenants that one *could* author using a
primitive, in some sense, to demonstrate that our understanding is
sufficient. As a trivial example, you could use
OP_DELETE_BITCOIN_ENTIRELY_IF_KNOWS_PREIMAGE_TO_X_OR_TLUV, and just because
you could use it safely for TLUV would not mean we should add that opcode
if there's some way of using it negatively.

Cheers,

Jeremy
--
@JeremyRubin 


On Tue, Apr 12, 2022 at 10:33 AM Jeremy Rubin 
wrote:

> Sharing below a framework for thinking about covenants. It is most useful
> for modeling local covenants, that is, covenants where only one coin must
> be examined, and not multi-coin covenants whereby you could have issues
> with protocol forking requiring a more powerful stateful prover. It's the
> model I use in Sapio.
>
> I define a covenant primitive as follows:
>
> 1) A set of sets of transaction intents (a *family*), potentially
> recursive or co-recursive (e.g., the types of state transitions that can be
> generated). These intents can also be represented by a language that
> generates the transactions, rather than the literal transactions
> themselves. We do the family rather than just sets at this level because to
> instantiate a covenant we must pick a member of the family to use.
> 2) A verifier generator function that generates a function that accepts an
> intent that is any element of one member of the family of intents and a
> proof for it and rejects others.
> 3) A prover generator function that generates a function that takes an
> intent that is any element of one member of the family and some extra data
> and returns either a new prover function, a finished proof, or a rejection
> (if not a valid intent).
> 4) A set of proofs that the Prover, Verifier, and a set of intents are
> "impedance matched", that is, all statements the prover can prove and all
> statements the verifier can verify are one-to-one and onto (or something
> similar), and that this also is one-to-one and onto with one element of the
> intents (a set of transactions) and no other.
> 5) A set of assumptions under which the covenant is verified (e.g., a
> multi-sig covenant with at least 1-n honesty, a multisig covenant with any
> 3-n honesty required, Sha256 collision resistance, DLog Hardness, a SGX
> module being correct).
>
> To instantiate a covenant, the user would pick a particular element of the
> set of sets of transaction intents. For example, in TLUV payment pool, it
> would be the set of all balance adjusting transactions and redemptions. *Note,
> we can 'cleave' covenants into separate bits -- e.g. one TLUV + some extra
> CTV paths can be 'composed', but the composition is not guaranteed to be
> well formed.*
>
> Once the user has a particular intent, they then must generate a verifier
> which can receive any member of the set of intents and accept it, and
> receive any transaction outside the intents and reject it.
>
> With the verifier in hand (or at the same time), the user must then
> generate a prover function that can make a proof for any intent that the
> verifier will accept. This could be modeled as a continuation system (e.g.,
> multisig requires multiple calls into the prover), or it could be
> considered to be wrapped as an all-at-once function. The prover could be
> done via a multi-sig in which case the assumptions are stronger, but it
> still should be well formed such that the signers can clearly and
> unambiguously sign all intents and reject all non intents, otherwise the
> covenant is not well formed.
>
> The proofs of validity of the first three parts and the assumptions for
> them should be clear, but do not require generation for use. However,
> covenants which do not easily permit proofs are less useful.
>
> We now can analyze three covenants under this: plain CTV, 2-3 online
> multisig, and 3-3 presigned + deleted.
>
> CTV:
> 1) Intent sets: the set of specific next transactions, with unbound inputs
> into it that can be mutated (but once the parent is known, can be filled in
> for all children).
> 2) Verifier: The transaction has the hash of the intent
> 3) Prover: The transaction itself and no other work
> 4) Proofs of impedance: trivial.
> 5) Assumptions: sha256
> 6) Composition: Any two CTVs can be OR'd together as separate leafs
>
> 2-3 Multisig:
> 1) Intent: All possible sets of transactions, one set selected per instance
> 2) Verifier: At least 2 signed the transition
> 3) Prover: Receive some 'state' in the form of business logic to enforce,
> only sign if that is satisfied. Produce a signature.
> 4) Impedance: The business logic must cover the instance's Intent set and
> must not be able to reach 

[bitcoin-dev] A Calculus of Covenants

2022-04-12 Thread Jeremy Rubin via bitcoin-dev
Sharing below a framework for thinking about covenants. It is most useful
for modeling local covenants, that is, covenants where only one coin must
be examined, and not multi-coin covenants whereby you could have issues
with protocol forking requiring a more powerful stateful prover. It's the
model I use in Sapio.

I define a covenant primitive as follows:

1) A set of sets of transaction intents (a *family*), potentially recursive
or co-recursive (e.g., the types of state transitions that can be
generated). These intents can also be represented by a language that
generates the transactions, rather than the literal transactions
themselves. We do the family rather than just sets at this level because to
instantiate a covenant we must pick a member of the family to use.
2) A verifier generator function that generates a function that accepts an
intent that is any element of one member of the family of intents and a
proof for it and rejects others.
3) A prover generator function that generates a function that takes an
intent that is any element of one member of the family and some extra data
and returns either a new prover function, a finished proof, or a rejection
(if not a valid intent).
4) A set of proofs that the Prover, Verifier, and a set of intents are
"impedance matched", that is, all statements the prover can prove and all
statements the verifier can verify are one-to-one and onto (or something
similar), and that this also is one-to-one and onto with one element of the
intents (a set of transactions) and no other.
5) A set of assumptions under which the covenant is verified (e.g., a
multi-sig covenant with at least 1-n honesty, a multisig covenant with any
3-n honesty required, Sha256 collision resistance, DLog Hardness, a SGX
module being correct).
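
To make the shape of this interface concrete, here is a rough Python
rendering (a sketch with hypothetical names, not Sapio's actual types),
including a CTV-like instantiation in which sha256 of the whole transaction
stands in for the real template hash:

    import hashlib
    from dataclasses import dataclass
    from typing import Callable, Iterable, Tuple

    Tx = bytes      # a serialized transaction (an element of an intent set)
    Proof = bytes   # whatever the verifier consumes

    @dataclass
    class Covenant:
        family: Iterable              # 1) family of intent sets
        make_verifier: Callable       # 2) intent set -> ((Tx, Proof) -> bool)
        make_prover: Callable         # 3) intent set -> ((Tx, extra) -> Proof)
        assumptions: Tuple[str, ...]  # 5) assumptions under which 4) holds

    def ctv_covenant(committed_hash: bytes) -> Covenant:
        # CTV: the chosen intent set is a single next transaction, the
        # proof is the transaction itself, verification is a hash check
        def make_verifier(intents):
            return lambda tx, proof: hashlib.sha256(proof).digest() == committed_hash
        def make_prover(intents):
            return lambda tx, extra=None: tx  # no extra work needed
        return Covenant(family=[], make_verifier=make_verifier,
                        make_prover=make_prover, assumptions=("sha256",))

The impedance-matching proofs (4) don't appear in code at all; they are
obligations on whoever instantiates the covenant.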

To instantiate a covenant, the user would pick a particular element of the
set of sets of transaction intents. For example, in TLUV payment pool, it
would be the set of all balance adjusting transactions and redemptions. *Note,
we can 'cleave' covenants into separate bits -- e.g. one TLUV + some extra
CTV paths can be 'composed', but the composition is not guaranteed to be
well formed.*

Once the user has a particular intent, they then must generate a verifier
which can receive any member of the set of intents and accept it, and
receive any transaction outside the intents and reject it.

With the verifier in hand (or at the same time), the user must then
generate a prover function that can make a proof for any intent that the
verifier will accept. This could be modeled as a continuation system (e.g.,
multisig requires multiple calls into the prover), or it could be
considered to be wrapped as an all-at-once function. The prover could be
done via a multi-sig in which case the assumptions are stronger, but it
still should be well formed such that the signers can clearly and
unambiguously sign all intents and reject all non intents, otherwise the
covenant is not well formed.

The proofs of validity of the first three parts and the assumptions for
them should be clear, but do not require generation for use. However,
covenants which do not easily permit proofs are less useful.

We now can analyze three covenants under this: plain CTV, 2-3 online
multisig, and 3-3 presigned + deleted.

CTV:
1) Intent sets: the set of specific next transactions, with unbound inputs
into it that can be mutated (but once the parent is known, can be filled in
for all children).
2) Verifier: The transaction has the hash of the intent
3) Prover: The transaction itself and no other work
4) Proofs of impedance: trivial.
5) Assumptions: sha256
6) Composition: Any two CTVs can be OR'd together as separate leafs

2-3 Multisig:
1) Intent: All possible sets of transactions, one set selected per instance
2) Verifier: At least 2 signed the transition
3) Prover: Receive some 'state' in the form of business logic to enforce,
only sign if that is satisfied. Produce a signature.
4) Impedance: The business logic must cover the instance's Intent set and
must not be able to reach any other non-intent
5) Assumptions: at least 2 parties are 'honest' for both liveness and for
correctness, and the usual suspects (sha256, schnorr, etc)
6) Composition: Any two groups can be OR'd together, if the groups have
different signers, then the assumptions expand

3-3 Presigned:
Same as CTV except:
5) Assumptions: at least one party deletes their key after signing


 You can also think through other covenants like TLUV in this model.

One useful question is the 'cardinality' of an intent set. The useful
notion of this is both magnitude and containment. Obviously, many of
these are infinite sets, but if one set 'contains' another then it is
definitionally more powerful. Also, if a set of transitions is 'bigger'
(work to do on what that means?) than another, it is potentially more
powerful.

Another question is around composition of different covenants inside of an
intent -- e.g., a TLUV that has a branch with a CTV 

Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-04-11 Thread Jeremy Rubin via bitcoin-dev
> nonsense marketing

I'm sure the people who are confused about "blockchain schemes as \"world
computers\" and other nonsense
marketing" are avid and regular readers of the bitcoin devs mailing list so
I offer my sincerest apologies to all members of the intersection of those
sets who were confused by the description given.

> useless work

progress is not useless work, it *is* useful work in this context. You have
committed to some subset of data that you requested -- if it was 'useless',
why did you *ever* bother to commit it in the first place? However, it is
not 'maximally useful' in some sense. Still, progress is progress --
suppose you only confirmed 50% of the commitments, is that not progress? If
you just happened to observe 50% of the commitments commit because of
proximity to the time a block was mined and natural tx propagation, would
you call it useless?

> Remember that OTS simply proves data in the past. Nothing more.
> OTS doesn't have a chain of transactions
Gotcha -- I've not been able to find an actual spec of OpenTimestamps
anywhere, so I suppose I just assumed based on how I think it *should*
work. Having a chain of transactions would serve to linearize the history
of OTS commitments, which would let you prove, given reorgs, that knowledge
of commit A came before B a bit more robustly.

> I'd rather do one transaction with all pending commitments at a
> particular time rather than waste money on mining two transactions for a
> given set of commitments

This sounds like a personal preference vs. a technical requirement.

You aren't doing any extra transactions in the model I showed; what you're
doing is selecting the window for the next based on the prior conf.

See the diagram below: you would have to (if OTS is correct) support this
sort of 'attempt/confirm' head that tracks attempted commitments and
confirmed ones, and 'rewinds' after a confirm so that the next commit
contains the prior attempts that didn't make it.

[.]
 --^ confirm head tx 0 at height 34
   ^ attempt head after tx 0
 ---^ confirm head tx 1 at height 35
  --^ attempt head after tx 1
    ^ confirm head tx 2 at height 36
 ---^ attempt head after tx 2
  ---^ confirm head tx 3 at height 37

You can compare this to a "spherical cow" model where RBF is always
perfect and inclusion is guaranteed:

[.]
 --^ confirm head tx 0 at height 34
   -^ confirm head tx 1 at height 35
   ---^ confirm head at tx 1, height 36
     -^ confirm head tx 3 at height 37

The same number of transactions gets used over the time period.
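
To spell out the bookkeeping, here's a small sketch of the
attempt/confirm-head model (hypothetical, my reading of the scheme above,
not OTS's actual design):

    class CommitQueue:
        """Pending commitments tracked across attempt and confirm heads.

        Each broadcast attempt covers everything pending at that time;
        when an attempt confirms, we 'rewind': drop what it covered and
        let the next attempt re-cover the stragglers."""
        def __init__(self):
            self.pending = []    # commitments not yet confirmed anywhere
            self.attempts = {}   # txid -> commitments that attempt covers

        def add(self, commitment):
            self.pending.append(commitment)

        def broadcast_attempt(self, txid):
            # attempt head: snapshot everything currently pending
            self.attempts[txid] = list(self.pending)

        def confirm(self, txid):
            # confirm head: remove what the winning tx covered, both from
            # pending and from any still-unconfirmed attempts
            covered = set(self.attempts.pop(txid, []))
            self.pending = [c for c in self.pending if c not in covered]
            self.attempts = {t: [c for c in cs if c not in covered]
                             for t, cs in self.attempts.items()}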


[bitcoin-dev] Pleb.fi/miami2022 Invitation + CTV Meeting #7 postponement

2022-03-22 Thread Jeremy Rubin via bitcoin-dev
Devs,

I warmly invite you to join pleb.fi/miami2022 if you are interested in
participating. It will be April 4th and 5th near Miami.

The focus of this pleb.fi event will be the ins and outs of building
bitcoin stuff in Rust, with a focus on Sapio, plus a hackathon.

As the CTV Meeting overlaps with the programming for pleb.fi, regrettably I
will be unable to host it.

We'll resume with meeting #7 at the time meeting #8 would otherwise have
been.

Best,

Jeremy

--
@JeremyRubin 


[bitcoin-dev] CTV BIP Meeting #6 Notes on Sapio Studio Tutorial

2022-03-22 Thread Jeremy Rubin via bitcoin-dev
Devs,

Tutorial: https://rubin.io/bitcoin/2022/03/22/sapio-studio-btc-dev-mtg-6/
Meeting Logs:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-March/020157.html

Summary:

The 6th CTV meeting was a Sapio Studio tutorial. Sapio Studio is a Bitcoin
wallet / IDE for playing with Bitcoin smart contracts. It is clearly "Alpha
Software", but it gets better and better!

The tutorial primarily covers setting up Sapio Studio and then using it to
create an instance of a Bitcoin Vault similar to the variety James O'Beirne
shared recently on this list.

Participants had trouble with:

1) Build System Stuff
2) Passing in Valid Arguments
3) Minrelay Fees
4) Minor GUI bugs in the software

But overall, the software was able to be used successfully, similar to the
screenshots in the tutorial, including restarting and resuming a session,
recompiling with effect updates (essentially a form of multisig-enforced
recursive covenant which can be made compatible with arbitrary covenant
upgrades), and more.

Based on the meeting, there are some clear improvements needed to make
this GUI more intuitive, which will be incorporated in the coming weeks.

Best,

Jeremy

--
@JeremyRubin 


Re: [bitcoin-dev] CTV dramatically improves DLCs

2022-03-15 Thread Jeremy Rubin via bitcoin-dev
I've created a prototype of this protocol in Sapio for your perusal:

https://github.com/sapio-lang/sapio/blob/master/sapio-contrib/src/contracts/derivatives/dlc.rs

Feel free to tweak the test and use it as a benchmark; I tested 1 oracle
with 100,000 different payouts and saw it take around 13s on a release
build.

I'll be playing around with this a bit (I doubt Sapio Studio can handle a
GUI for 100,000 nodes), but I figured it was worth a share.

Cheers,

Jeremy


Re: [bitcoin-dev] Speedy Trial

2022-03-15 Thread Jeremy Rubin via bitcoin-dev
Good morning bitcoin devs,

> A mechanism of soft-forking against activation exists. What more do you
> want?

Agreed -- that should be enough.



> Are we supposed to write the code on behalf of this hypothetical group of
> users who may or may not exist for them just so that they can have a node
> that remains stalled on Speedy Trial lockin?
>
> That simply isn't reasonable, but if you think it is, I invite you to
> create such a fork.

Disagree.

It is a reasonable ask.

I've done it in about 40 lines of python:
https://github.com/jeremyrubin/forkd
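
The gist, for anyone who doesn't want to click through (a sketch of the
idea rather than the actual forkd code; it assumes a local bitcoind and a
hypothetical deployment signalling on versionbit BIT):

    import json
    import subprocess
    import time

    BIT = 5  # hypothetical: the bit the deployment signals on

    def cli(*args):
        out = subprocess.check_output(["bitcoin-cli", *args]).strip()
        return json.loads(out) if out.startswith((b'{', b'[')) else out.decode()

    def signals(version, bit):
        # BIP9-style signalling: top three version bits 001, deployment bit set
        return (version >> 29) == 0b001 and (version >> bit) & 1 == 1

    while True:
        tip = cli("getbestblockhash")
        header = cli("getblockheader", tip)
        if signals(header["version"], BIT):
            # reject the signalling block: a soft fork against activation
            cli("invalidateblock", tip)
        time.sleep(10)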

Merry Christmas Jorge, please vet the code carefully before running.

Peace,

Jeremy


Re: [bitcoin-dev] Covenants and feebumping

2022-03-12 Thread Jeremy Rubin via bitcoin-dev
Hi Antoine,

I have a few high level thoughts on your post comparing these types of
primitive to an explicit soft fork approach:

1) Transaction sponsors *is* a type of covenant. Precisely, it is very
similar to an "Impossible Input" covenant in conjunction with an "IUTXO" I
defined in my 2017 workshop
https://rubin.io/public/pdfs/multi-txn-contracts.pdf (I know, I know...
self-citation, not cool, but helps with context).

However, for Sponsors itself we optimize the properties of how it works &
is represented, as well as "tighten the hatches" on binding to a specific
TX vs. merely a spend of the outputs (which wouldn't work as well with APO).

Perhaps thinking of something like sponsors as a form of covenant, rather
than a special purpose thing, is helpful?

There's a lot you could do with a general "observe other txns in {this
block, the chain}" primitive. The catch is that for sponsors we don't
*care* to enable people to use this as a "smart contracting primitive", we
want to use it for fee bumping. So we don't care about programmability, we
care about being able to use the covenant to bump fees.

2) On Chain Efficiency.


A) Precommitted Levels
As you've noted, an approach like precommitting different fee levels might
work, but it has substantial costs.

However, with sponsors, the minimum viable version of this (not quite what
is spec'd in my prior email, but it could be done this way if we care to
optimize for bytes) would require 1 in and 1 out with only 32 bytes extra.
So that's around 40 bytes outpoint + 64 bytes signature + 40 bytes output +
32 bytes metadata = 176 bytes per bump. Bumps in this way can also
amortize, so bumping >1 txn at the same time approaches a limit of 32
bytes + 144/n bytes per bump. You can imagine cases
where this might be popular, like "close >1 of my LN channels" or "start
withdrawals for 5 of my JamesOB vaulted coins"
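
(Back-of-envelope, using the byte counts above:)

    FIXED = 40 + 64 + 40  # outpoint + signature + output, shared per sponsor tx
    PER_BUMP = 32         # metadata committing to each sponsored txid

    def bytes_per_bump(n):
        """Sponsor bytes attributable to each of n transactions bumped at once."""
        return PER_BUMP + FIXED / n

    print(bytes_per_bump(1))  # 176.0 for a lone bump
    print(bytes_per_bump(5))  # 60.8, e.g. closing 5 LN channels together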

B) Fancy(er) Covenants

We might also have something with OP_CAT and CSFS where bumps are done as
some sort of covenant-y thing that lets you arbitrarily rewrite
transactions.

Not too much to say other than that it is difficult to get these down in
size as the scripts become more complex, not to mention the (hotly
discussed of late) ramifications of those covenants more generally.

Absent a concrete fancy covenant with fee bumping, I can't comment.

3) On Capital Efficiency

Something like a precommitted or covenant fee bump requires the fee capital
to be pre-committed inside the UTXO, whereas for something like Sponsors
you can use capital you get sometime later. In certain models -- e.g.,
channels -- where you might expect only log(N) of your channels to fail in
a given epoch, you don't need to allocate as much capital as if you were to
have to do it in-band. This is also true for vaults where you know you only
want to open 1 per month let's say, and not  per month,
which pre-committing requires.

4) On Protocol Design

It's nice that you can abstract away your protocol design concerns as a
"second tier composition check" v.s. having to modify your protocol to work
with a fee bumping thing.

There are a myriad of ways dynamic txns (e.g. for Eltoo) can lead to RBF
pinning and similar; Sponsor-type things allow you to design such protocols
to not have any native way of paying for fees inside the actual
"Transaction Intents" and to use an external system to create the intended
effect. It seems (to me) more robust that we can prove that a Sponsors
mechanism allows any transaction -- regardless of covenant stuff, bugs,
pinning, etc -- to move forward.

Still... careful protocol design may permit the use of optimized
constructions! For example, in a vault, rather than assigning *no fee*,
maybe you can have a single branch with a reasonable estimated fee. If you
are correct or overshot (let's say 50% chance?) then you don't need to add
a sponsor. If you undershot, not to worry, just add a sponsor. Adopted
broadly, this would cut the expected value of using sponsors by . This
basically enables all protocols to try to be more efficient, but backstops
that with a guaranteed-to-work safe mechanism.



There was something else I was going to say but I forgot about it... if it
comes to me I'll send a follow up email.

Cheers,

Jeremy

p.s.



> *Of course this makes for a perfect DoS: it would be trivial for a miner
> to infer that you are using a specific vault standard and guess other
> leaves and replace the witness to use the highest-feerate spending path.
> You could require a signature from any of the participants. Or, at the
> cost of an additional depth in the tree, you could "salt" each leaf by
> pairing it with -say- an OP_RETURN leaf.*



You don't need a salt; you just need a unique payout addr (e.g. hardened
derivation) per revocation txn, and then you cannot guess the branch.

--
@JeremyRubin 

On Sat, Mar 12, 2022 at 10:34 AM darosior via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> The idea 

[bitcoin-dev] Meeting Summary & Logs for CTV Meeting #5

2022-03-08 Thread Jeremy Rubin via bitcoin-dev
Logs here: https://gnusha.org/ctv-bip-review/2022-03-08.log

Notes:

1) Sapio Updates

Sapio has Experimental Taproot Support now.
See logs for how to help.
Rust-bitcoin can also use your help reviewing, e.g.
https://github.com/rust-bitcoin/rust-miniscript/pull/305
Adding MuSig support for the oracle servers would be really cool, if
someone wants a challenge.

2) Transaction Sponsors

What sponsors are vs. RBF/CPFP.
Why there's not a BIP # assigned (despite it being written up as a BIP+impl
in
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html;
it should only get a number if it seems like people agree).

3) James' Vaults Post

James' vaults are similar to prior art on recursive CTV vaults (Kanzure's /
Jeremy's), where the number of steps = 1.
It actually ends up being a very good design for many custody purposes;
might be a good "80% of the benefit, 20% of the work" type of thing.
People maybe want different things out of vaults... how customizable must
it be?

4) Mailing list be poppin'

Zmn shared a prepared remark which spurred a nice conversation.
General sentiment that we should be careful adding crazy amounts of power,
with great power comes great responsibility...
Maybe we shouldn't care though -- don't send to scripts you don't like?
Math is scary -- you can do all sorts of bizarre stuff with more power
(e.g., what if you made an EVM inside a bitcoin output).
Things like OP_EVICT should be bounded by design.
Problem X: Infrastructure issue for all more flexible covenants:
   1) generate a transition function you would like
   2) compile it into a script covenant
   3) request the transition/txn you want to have happen
   4) produce a satisfaction of the script covenant for that transaction
   5) prove the transition function *is* what you wanted/secure
Quantifying how hard X is for a given proposal is a good idea.
You can prototype covenants with federations in Sapio pretty easily... more
people should try this!

5) General discuss
People suck at naming things... give things more unique names for protocols!
Jeremy will name something the Hot Tub Coin Machine
Some discussion on forking: if there's any kind of consensus forming, doing
things like
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018833.html
How much a shot-on-goal costs / the unforced error of not making an
activating client available precluding being able to activate
luke-jr: never ST; ST is reason enough to oppose CTV
jamesob:  OP_DOTHETHING

best,

Jeremy

--
@JeremyRubin 


[bitcoin-dev] OP_AMOUNT Discussion

2022-03-08 Thread Jeremy Rubin via bitcoin-dev
Hi Devs,

Recently, I've been getting a lot of questions about OP_AMOUNT. It's also
come up in the context of "CTV is unsafe because it can't handle differing
amounts". Just sharing some preliminary thinking on it:

It could come in many variants:

OP_PUSHAMOUNT
OP_AMOUNTVERIFY
OP_PUSHAMOUNTSPLIT
OP_SPLITAMOUNTVERIFY

If we want to do a NOP upgrade, we may prefer the *VERIFY formats. If we
want to do a SUCCESSX upgrade, we could do the PUSH format.

The SplitAmount format is required because amounts are > 5 bytes (51 bits
needed max), unless we also do some sort of OP_UPGRADEMATH semantic whereby
presence of an Amount opcode enables 64 bit (or 256 bit?) math opcodes.
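
To see why some split form is needed: script's arithmetic operands are
limited to 4-byte numbers, while max-money amounts need 51 bits. One
encoding a SplitAmount-style opcode could present (purely hypothetical,
for illustration):

    MAX_MONEY = 21_000_000 * 100_000_000  # ~2.1e15 sats, needs 51 bits

    def split_amount(sats):
        """Split an amount into two values that each fit in script's
        31-bit math: whole bitcoins and the satoshi remainder."""
        assert 0 <= sats <= MAX_MONEY
        btc, rem = divmod(sats, 100_000_000)
        assert btc < 2**31 and rem < 2**31  # both fit in CScriptNum math
        return btc, rem

    print(split_amount(MAX_MONEY))  # (21000000, 0)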

And could be applied to the cross product of:

The Transaction
An Input
An Output
The fees total
The fees this input - this output
This Input
"This" Output

A lot of choices! The simplest version would be just this input and no
other (all that is required for many useful cases, e.g. single sig if less
than 1 BTC).


A while back I designed some logic for a split-amount verifying opcode here
(I don't love it, so I hadn't shared it widely):
https://gist.github.com/JeremyRubin/d9f146475f53673cd03c26ab46492504

There are some decent use cases for amount checking.

For instance, one could create a non-recursive covenant requiring that
there be an output which exactly matches the sats in the input at the same
index. This could be used for colored coins, statechains, TLUV/EVICT based
payment pools, etc.

Another use case could be to make a static address / descriptor that
disables low-security spends if more coins are in the input.

Yet another could be to enable pay-what-you-want options, where depending
on how much gets paid into an address different behaviors are permitted.

Lastly, as noted in BIP-119, you can make a belt-and-suspenders value check
in CTV contracts to enable a backup withdrawal should you send the wrong
amount to a vault.

Overall, I think the most straightforward path would be to work on this
only for tapscript (no legacy) and then spec out upgraded math operations;
with that, OP_PUSHAMOUNT is pretty straightforward & low technical risk.
Unfortunately, the upgraded math and exact semantics are highly
bikesheddable... If anyone is interested in working on this, I'd be happy
to review/advise on it. Otherwise, I would likely start working on this
sometime after I'm spending less effort on CTV.

Blockstream's Liquid has some work in this regard that may be copyable for
the math part, but likely not the amount opcode:
https://github.com/ElementsProject/elements/blob/master/doc/tapscript_opcodes.md
However, they chose to do only 64-bit arithmetic, and I personally think
that the community might prefer wider operations, the difficulty being in
not incidentally enabling OP_CAT as a size, bitshift, and add fragment (or
proving that OP_CAT is OK?).

see also: https://rubin.io/bitcoin/2021/12/05/advent-8/#OP_AMOUNT

Cheers,

Jeremy

--
@JeremyRubin 


Re: [bitcoin-dev] CTV Meeting #5 Agenda (Tuesday, March 7th, 12:00 PT)

2022-03-07 Thread Jeremy Rubin via bitcoin-dev
* Tuesday, March 8th.

I think noon PT == 8pm UTC?

But don't trust me, I can't even tell what day is what.
--
@JeremyRubin 


On Mon, Mar 7, 2022 at 6:50 PM Jeremy Rubin 
wrote:

> Hi all,
>
> There will be a CTV meeting tomorrow at noon PT. Agenda below:
>
> 1) Sapio Taproot Support Update / Request for Review (20 Minutes)
> - Experimental support for Taproot merged on master
> https://github.com/sapio-lang/sapio
> 2) Transaction Sponsoring v.s CPFP/RBF (20 Minutes)
> -
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019879.html
> -
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
> 3) Jamesob's Non-Recursive Vaults Post (20 minutes)
> -
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-March/020067.html
> 4) What the heck is everyone talking about on the mailing list all of the
> sudden (30 minutes)
> - EVICT, TLUV, FOLD, Lisp, OP_ANNEX, Drivechain Covenants, Jets, Etc
> 5) Q (30 mins)
>
> Best,
>
> Jeremy
>
>
> --
> @JeremyRubin 
>


[bitcoin-dev] CTV Meeting #5 Agenda (Tuesday, March 7th, 12:00 PT)

2022-03-07 Thread Jeremy Rubin via bitcoin-dev
Hi all,

There will be a CTV meeting tomorrow at noon PT. Agenda below:

1) Sapio Taproot Support Update / Request for Review (20 Minutes)
- Experimental support for Taproot merged on master
https://github.com/sapio-lang/sapio
2) Transaction Sponsoring v.s CPFP/RBF (20 Minutes)
-
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019879.html
-
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
3) Jamesob's Non-Recursive Vaults Post (20 minutes)
-
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-March/020067.html
4) What the heck is everyone talking about on the mailing list all of the
sudden (30 minutes)
- EVICT, TLUV, FOLD, Lisp, OP_ANNEX, Drivechain Covenants, Jets, Etc
5) Q (30 mins)

Best,

Jeremy


--
@JeremyRubin 


Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-06 Thread Jeremy Rubin via bitcoin-dev
Hi Christian,

For that purpose I'd recommend having a checksig variant, call it
"checksigextra", that allows N extra data items on the stack in addition
to the txn hash. This would allow signers to sign some additional
arguments, but it would not be an annex, since the values would not have
any consensus meaning (whereas the annex is designed to have one).


I've previously discussed this for eltoo with giving signatures an explicit
extra seqnum, but it can be generalized as above.



W.r.t. pinning, if the annex is a pure function of the script execution,
then there's no issue with letting it be mutable (e.g. for a validation
cost hint). But permitting both validation cost commitments and stack
readability is asking too much of the annex IMO.

On Sun, Mar 6, 2022, 1:13 PM Christian Decker 
wrote:

> One thing that we recently stumbled over was that we use CLTV in eltoo not
> for timelock but to have a comparison between two committed numbers coming
> from the spent and the spending transaction (ordering requirement of
> states). We couldn't use a number on the stack of the scriptSig as the
> signature doesn't commit to it, which is why we commandeered nLocktime
> values that are already in the past.
>
> With the annex we could have a way to get a committed to number we can
> pull onto the stack, and free the nLocktime for other uses again. It'd also
> be less roundabout to explain in classes :-)
>
> An added benefit would be that update transactions, being singlesig, can
> be combined into larger transactions by third parties or watchtowers to
> amortize some of the fixed cost of getting them confirmed, allowing
> on-path-aggregation basically (each node can group and aggregate
> transactions as they forward them). This is currently not possible since
> all the transactions that we'd like to batch would have to have the same
> nLocktime at the moment.
>
> So I think it makes sense to partition the annex into a global annex
> shared by the entire transaction, and one for each input. Not sure if one
> for outputs would also make sense as it'd bloat the utxo set and could be
> emulated by using the input that is spending it.
>
> Cheers,
> Christian
>
> On Sat, 5 Mar 2022, 07:33 Anthony Towns via bitcoin-dev, <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Fri, Mar 04, 2022 at 11:21:41PM +, Jeremy Rubin via bitcoin-dev
>> wrote:
>> > I've seen some discussion of what the Annex can be used for in Bitcoin.
>>
>>
>> https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html
>>
>> includes some discussion on that topic from the taproot review meetings.
>>
>> The difference between information in the annex and information in
>> either a script (or the input data for the script that is the rest of
>> the witness) is (in theory) that the annex can be analysed immediately
>> and unconditionally, without necessarily even knowing anything about
>> the utxo being spent.
>>
>> The idea is that we would define some simple way of encoding (multiple)
>> entries into the annex -- perhaps a tag/length/value scheme like
>> lightning uses; maybe if we add a lisp scripting language to consensus,
>> we just reuse the list encoding from that? -- at which point we might
>> use one tag to specify that a transaction uses advanced computation, and
>> needs to be treated as having a heavier weight than its serialized size
>> implies; but we could use another tag for per-input absolute locktimes;
>> or another tag to commit to a past block height having a particular hash.
>>
>> It seems like a good place for optimising SIGHASH_GROUP (allowing a group
>> of inputs to claim a group of outputs for signing, but not allowing inputs
>> from different groups to ever claim the same output; so that each output
>> is hashed at most once for this purpose) -- since each input's validity
>> depends on the other inputs' state, it's better to be able to get at
>> that state as easily as possible rather than having to actually execute
>> other scripts before your can tell if your script is going to be valid.
>>
>> > The BIP is tight lipped about it's purpose
>>
>> BIP341 only reserves an area to put the annex; it doesn't define how
>> it's used or why it should be used.
>>
>> > Essentially, I read this as saying: The annex is the ability to pad a
>> > transaction with an additional string of 0's
>>
>> If you wanted to pad it directly, you can do that in script already
>> with a PUSH/DROP combo.
>>
>> The point of doing it in the annex is you could have a short byte
>> string, perhaps something like "0x010201a4" saying &quo

Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-05 Thread Jeremy Rubin via bitcoin-dev
This may be of interest:

https://github.com/sapio-lang/sapio/blob/01830132bbbe39c3225e173e099f6e1a0611461c/sapio/examples/subscription.py

Basically, an (old, Python) Sapio contract whereby you can make cancellable
subscriptions: essentially a time-based autopay scheme where cancellation
gives the receiver time to claim the correct amount of money.

--
@JeremyRubin 

On Sat, Mar 5, 2022 at 10:58 PM ZmnSCPxj via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Good morning Chris,
>
> > I think this proposal describes arbitrary lines of pre-approved credit
> from a bitcoin wallet. The line can be drawn down with oracle attestations.
> You can mix in locktimes on these pre-approved lines of credit if you would
> like to rate limit, or ignore rate limiting and allow the full utxo to be
> spent by the borrower. It really is contextual to the use case IMO.
>
> Ah, that seems more useful.
>
> Here is an example application that might benefit from this scheme:
>
> I am commissioning some work from some unbranded workperson.
> I do not know how long the work will take, and I do not trust the
> workperson to accurately tell me how complete the work is.
> However, both I and the workperson trust a branded third party (the
> oracle) who can judge the work for itself and determine if it is complete
> or not.
> So I create a transaction whose signature can be completed only if the
> oracle releases a proper scalar and hand it over to the workperson.
> Then the workperson performs the work, then asks the oracle to judge if
> the work has been completed, and if so, the work can be compensated.
>
> On the other hand, the above, where the oracle determines *when* the fund
> can be spent, can also be implemented by a simple 2-of-3, and called an
> "escrow".
> After all, the oracle attestation can be a partial signature as well, not
> just a scalar.
> Is there a better application for this scheme?
>
> I suppose if the oracle attestation is intended to be shared among
> multiple such transactions?
> There may be multiple PTLCs, that are triggered by a single oracle?
>
> Regards,
> ZmnSCPxj


Re: [bitcoin-dev] One testnet to rule them all

2022-03-05 Thread Jeremy Rubin via bitcoin-dev
Signet degrades to a testnet if you make your key OP_TRUE.


It's not about needing 21M coins; it's about easily getting access to said
coins for testing, where it's kinda tricky to get testnet coins.

On Sat, Mar 5, 2022, 6:17 PM  wrote:

> > There's no point to pegging coins that are worthless into a system of
> also worthless coins, unless you want to test the mechanism of testing
> pegging.
>
> But testing pegging is what is needed if we ever want to introduce
> sidechains. On the other hand, even if we don't want sidechains, then the
> question still remains: why we need more than 21 million coins for testing,
> if we don't need more than 21 million coins for real transactions?
>
> > If anything I think we should permanently shutter testnet now that
> signet is available.
>
> Then, in that case, the "mainchain" can be our official signet and other
> signets can be pegged into that. Also, testnet3 is permissionless, so how
> signet can replace that? Because if you want to test mining and you cannot
> mine any blocks in signet, then it is another problem.
>
> On 2022-03-05 17:19:40 user Jeremy Rubin  wrote:
> There's no point to pegging coins that are worthless into a system of also
> worthless coins, unless you want to test the mechanism of testing pegging.
>
>
> As is, it's hard enough to get people set up on a signet; if they have to
> run two nodes and then scramble to find testnet coins and then peg them,
> we're just raising the barriers to entry for starting to use a signet for
> testing.
>
>
>
>
> If anything I think we should permanently shutter testnet now that signet
> is available.
>
>
> On Sat, Mar 5, 2022, 3:53 PM vjudeu via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
> In testnet3, anyone can become a miner, it is possible to even mine a
> block on some CPU, because the difficulty can drop to one. In signet, we
> create some challenge, for example 1-of-2 multisig, that can restrict who
> can mine, so that chain can be "unreliably reliable". Then, my question is:
> why signets are introducing new coins out of thin air, instead of forming
> two-way peg-in between testnet3 and signet?
>
> The lack of coins is not a bug, it is a feature. We have more halvings in
> testnet3 than in mainnet or signets, but it can be good, we can use this to
> see, what can happen with a chain after many halvings. Also, in testnet3
> there is no need to have any coins if we are mining. Miners can create,
> move and destroy zero satoshis. They can also extend the precision of the
> coins, so a single coin in testnet3 can be represented as a thousand of
> coins in some signet sidechain.
>
> Recently, there are some discussions regarding sidechains. Before they
> will become a real thing, running on mainnet, they should be tested.
> Nowadays, a popular way of testing new features is creating a new signet
> with new rules. But the question still remains: why we need new coins,
> created out of thin air? And even when some signet wants to do that, then
> why it is not pegged into testnet3? Then it would have as much chainwork
> protection as testnet3!
>
> It seems that testnet3 is good enough to represent the main chain during
> sidechain testing. It is permissionless and open, anyone can start mining
> sidechain blocks, anyone with a CPU can be lucky and find a block with the
> minimal difficulty. Also, because of blockstorms and regular chain reorgs,
> some extreme scenarios, like stealing all coins from some sidechain, can be
> tested in a public way, because that "unfriendly and unstable" environment
> can be used to test stronger attacks than in a typical chain.
>
> Putting that proposal into practice can be simple and require just
> creating one Taproot address per signet in testnet3. Then, it is possible
> to create one testnet transaction (every three months) that would move
> coins to and from testnet3, so the same coins could travel between many
> signets. New signets can be pegged in with 1:1 ratio, existing signets can
> be transformed into signet sidechains (the signet miners rule that chains,
> so they can enforce any transition rules they need).


Re: [bitcoin-dev] One testnet to rule them all

2022-03-05 Thread Jeremy Rubin via bitcoin-dev
There's no point to pegging coins that are worthless into a system of also
worthless coins, unless you want to test the mechanism of testing pegging.

As is, it's hard enough to get people set up on a signet; if they have to
run two nodes and then scramble to find testnet coins and then peg them,
we're just raising the barriers to entry for starting to use a signet for
testing.


If anything I think we should permanently shutter testnet now that signet
is available.

On Sat, Mar 5, 2022, 3:53 PM vjudeu via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> In testnet3, anyone can become a miner, it is possible to even mine a
> block on some CPU, because the difficulty can drop to one. In signet, we
> create some challenge, for example 1-of-2 multisig, that can restrict who
> can mine, so that chain can be "unreliably reliable". Then, my question is:
> why signets are introducing new coins out of thin air, instead of forming
> two-way peg-in between testnet3 and signet?
>
> The lack of coins is not a bug, it is a feature. We have more halvings in
> testnet3 than in mainnet or signets, but it can be good, we can use this to
> see, what can happen with a chain after many halvings. Also, in testnet3
> there is no need to have any coins if we are mining. Miners can create,
> move and destroy zero satoshis. They can also extend the precision of the
> coins, so a single coin in testnet3 can be represented as a thousand of
> coins in some signet sidechain.
>
> Recently, there are some discussions regarding sidechains. Before they
> will become a real thing, running on mainnet, they should be tested.
> Nowadays, a popular way of testing new features is creating a new signet
> with new rules. But the question still remains: why we need new coins,
> created out of thin air? And even when some signet wants to do that, then
> why it is not pegged into testnet3? Then it would have as much chainwork
> protection as testnet3!
>
> It seems that testnet3 is good enough to represent the main chain during
> sidechain testing. It is permissionless and open, anyone can start mining
> sidechain blocks, anyone with a CPU can be lucky and find a block with the
> minimal difficulty. Also, because of blockstorms and regular chain reorgs,
> some extreme scenarios, like stealing all coins from some sidechain, can be
> tested in a public way, because that "unfriendly and unstable" environment
> can be used to test stronger attacks than in a typical chain.
>
> Putting that proposal into practice can be simple and require just
> creating one Taproot address per signet in testnet3. Then, it is possible
> to create one testnet transaction (every three months) that would move
> coins to and from testnet3, so the same coins could travel between many
> signets. New signets can be pegged in with 1:1 ratio, existing signets can
> be transformed into signet sidechains (the signet miners rule that chains,
> so they can enforce any transition rules they need).


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-05 Thread Jeremy Rubin via bitcoin-dev
It seems like a decent concept for exploration.

AJ, I'd be interested to know what you've been able to build with Chia Lisp
and what your experience has been... e.g. what does the Lightning Network
look like on Chia?


One question that I have had is that it seems to me that neither
Simplicity nor Chia Lisp would be particularly well suited to a ZK
prover...

Were that the explicit goal, it would seem that we could pretty easily
adapt something like Cairo for Bitcoin transactions, and then we'd get a
big privacy benefit as well as enabling whatever programming paradigm you
find convenient (as it is compiled to a circuit verifier of some kind)...



Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-05 Thread Jeremy Rubin via bitcoin-dev
On Sat, Mar 5, 2022 at 5:59 AM Anthony Towns  wrote:

> On Fri, Mar 04, 2022 at 11:21:41PM +0000, Jeremy Rubin via bitcoin-dev
> wrote:
> > I've seen some discussion of what the Annex can be used for in Bitcoin.
>
>
> https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html
>
> includes some discussion on that topic from the taproot review meetings.
>
> The difference between information in the annex and information in
> either a script (or the input data for the script that is the rest of
> the witness) is (in theory) that the annex can be analysed immediately
> and unconditionally, without necessarily even knowing anything about
> the utxo being spent.
>

I agree that should happen, but there are cases where this would not work.
E.g., imagine OP_LISP_EVAL + OP_ANNEX... and then you do delegation via the
thing in the annex.

Now the annex can be executed as a script.



>
> The idea is that we would define some simple way of encoding (multiple)
> entries into the annex -- perhaps a tag/length/value scheme like
> lightning uses; maybe if we add a lisp scripting language to consensus,
> we just reuse the list encoding from that? -- at which point we might
> use one tag to specify that a transaction uses advanced computation, and
> needs to be treated as having a heavier weight than its serialized size
> implies; but we could use another tag for per-input absolute locktimes;
> or another tag to commit to a past block height having a particular hash.
>

Yes, this seems tough to do without redefining checksig to allow partial
annexes. Hence my thinking that we should make our current checksig behavior
require the annex to be 0; future operations should be engineered with a
specific structured annex in mind.



>
> It seems like a good place for optimising SIGHASH_GROUP (allowing a group
> of inputs to claim a group of outputs for signing, but not allowing inputs
> from different groups to ever claim the same output; so that each output
> is hashed at most once for this purpose) -- since each input's validity
> depends on the other inputs' state, it's better to be able to get at
> that state as easily as possible rather than having to actually execute
> other scripts before your can tell if your script is going to be valid.
>

I think SIGHASH_GROUP could be some sort of mutable stack value, not ANNEX.
You want to be able to compute what range you should sign, and then the
signature should cover the actual range, not the argument itself.

Why sign the annex literally?

Why require that all signatures in one output sign the exact same digest?
What if one wants to sign for value and another for value + change?



>
> > The BIP is tight-lipped about its purpose
>
> BIP341 only reserves an area to put the annex; it doesn't define how
> it's used or why it should be used.
>
>
It does define how it's used: CHECKSIG must commit to it. Were there no
opcodes dependent on it, I would agree, and that would be preferable.




> > Essentially, I read this as saying: The annex is the ability to pad a
> > transaction with an additional string of 0's
>
> If you wanted to pad it directly, you can do that in script already
> with a PUSH/DROP combo.
>

You cannot, because the push/drop would not be signed and would be
malleable.

The annex is not malleable, so it can be used for this as authenticated
padding.



>
> The point of doing it in the annex is you could have a short byte
> string, perhaps something like "0x010201a4" saying "tag 1, data length 2
> bytes, value 420" and have the consensus intepretation of that be "this
> transaction should be treated as if it's 420 weight units more expensive
> than its serialized size", while only increasing its witness size by
> 6 bytes (annex length, annex flag, and the four bytes above). Adding 6
> bytes for a 426 weight unit increase seems much better than adding 426
> witness bytes.
>
>
Yes, that's what I say in the next sentence,

*> Or, we might somehow make the witness a small language (e.g., run length
encoded zeros) such that we can very quickly compute an equivalent number
of zeros to 'charge' without actually consuming the space but still
consuming a linearizable resource... or something like that.*

so I think we concur on that.
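
For concreteness, here is a toy parser for the tag/length/value scheme
quoted above; the encoding is purely illustrative, since no such consensus
format is defined:

```
# Toy parser for the hypothetical tag/length/value annex encoding.
def parse_annex_tlv(annex: bytes):
    entries = []
    i = 0
    while i < len(annex):
        tag, length = annex[i], annex[i + 1]
        value = int.from_bytes(annex[i + 2:i + 2 + length], "big")
        entries.append((tag, value))
        i += 2 + length
    return entries

# "0x010201a4": tag 1, 2-byte value 420 -> "treat this tx as 420 WU heavier"
assert parse_annex_tlv(bytes.fromhex("010201a4")) == [(1, 420)]
```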



> > Introducing OP_ANNEX: Suppose there were some sort of annex pushing
> opcode,
> > OP_ANNEX which puts the annex on the stack
>
> I think you'd want to have a way of accessing individual entries from
> the annex, rather than the annex as a single unit.
>

Or OP_ANNEX + OP_SUBSTR + OP_POVARINTSTR? Then you can just do 2 pops for
the length and the tag and then get the data.


>
> > Now suppose that I have a computation that I am running in a script as
> > follows:

[bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-04 Thread Jeremy Rubin via bitcoin-dev
I've seen some discussion of what the Annex can be used for in Bitcoin. For
example, some people have discussed using the annex as a data field for
something like CHECKSIGFROMSTACK type stuff (additional authenticated data)
or for something like delegation (the delegation is to the annex). I think
before devs get too excited, we should have an open discussion about what
this is actually for, and figure out if there are any constraints to using
it however we may please.

The BIP is tight-lipped about its purpose, saying mostly only:

*What is the purpose of the annex? The annex is a reserved space for future
extensions, such as indicating the validation costs of computationally
expensive new opcodes in a way that is recognizable without knowing the
scriptPubKey of the output being spent. Until the meaning of this field is
defined by another softfork, users SHOULD NOT include annex in
transactions, or it may lead to PERMANENT FUND LOSS.*

*The annex (or the lack thereof) is always covered by the signature and
contributes to transaction weight, but is otherwise ignored during taproot
validation.*

*Execute the script, according to the applicable script rules[11], using
the witness stack elements excluding the script s, the control block c, and
the annex a if present, as initial stack.*

Essentially, I read this as saying: The annex is the ability to pad a
transaction with an additional string of 0's that contribute to the virtual
weight of a transaction, but has no validation cost itself. Therefore,
somehow, if you needed to validate more signatures than 1 per 50 virtual
weight units, you could add padding to buy extra gas. Or, we might somehow
make the witness a small language (e.g., run length encoded zeros) such
that we can very quickly compute an equivalent number of zeros to 'charge'
without actually consuming the space but still consuming a linearizable
resource... or something like that. We might also e.g. want to use the
annex to reserve something else, like the amount of memory. In general, we
are using the annex to express a resource constraint efficiently. This
might be useful for e.g. Simplicity one day.

Generating an Annex: One should write a tracing executor for a script, run
it, measure the resource costs, and then generate an annex that captures
any externalized costs.

---

Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode,
OP_ANNEX which puts the annex on the stack as well as a 0 or 1 (to
differentiate annex is 0 from no annex, e.g. 0 1 means annex was 0 and 0 0
means no annex). This would be equivalent to something based on <annex
flag> OP_TXHASH <annex present flag> OP_TXHASH.

Now suppose that I have a computation that I am running in a script as
follows:

OP_ANNEX
OP_IF
`some operation that requires annex to be <1>`
OP_ELSE
OP_SIZE
`some operation that requires annex to be len(annex) + 1 or does a
checksig`
OP_ENDIF

Now every time you run this, it requires one more resource unit than the
last time you ran it, which makes your satisfier use the annex as some sort
of "scratch space" for a looping construct, where you compute a new annex,
loop with that value, and see if that annex is now accepted by the program.

In short, it kinda seems like being able to read the annex off of the stack
makes witness construction somehow Turing complete, because we can use it
as a register/tape for some sort of computational model.
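
To illustrate, a minimal sketch of that satisfier loop (the acceptance
predicate is a stand-in for running the script with a candidate annex, not
any real validation API):

```
# Sketch: search for an annex the program accepts, using it as a register.
def find_valid_annex(script_accepts, max_iters=1_000_000):
    for annex in range(max_iters):
        if script_accepts(annex):
            return annex
    return None

# A contrived "program": accept only once the annex crosses a threshold.
print(find_valid_annex(lambda a: a * a >= 1000))  # -> 32
```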

---

This seems at odds with using the annex as something that just helps you
heuristically guess computation costs; now it's somehow something that
acts to make script satisfiers recursive.

Because the Annex is signed, and must be the same, this can also be
inconvenient:

Suppose that you have a Miniscript that is something like: and(or(PK(A),
PK(A')), X, or(PK(B), PK(B'))).

A or A' should sign with B or B'. X is some sort of fragment that might
require a value that is unknown (and maybe recursively defined?), so
therefore if we send the PSBT to A first, which commits to the annex, and
then X reads the annex and says it must be something else, A must sign
again. So you might say, run X first, and then sign with A and B (or B').
However, what if the script somehow detects the bitstring WHICH_A WHICH_B
and has a different Annex per selection (e.g., interpret the bitstring as a
int and annex must == that int). Now, given and(or(K1, K1'),... or(Kn,
Kn')) we end up with needing to pre-sign 2**n annex values somehow... this
seems problematic theoretically.

Of course this wouldn't be miniscript then. Because miniscript is just for
the well behaved subset of script, and this seems ill behaved. So maybe
we're OK?

But I think the issue still arises where suppose I have a simple thing
like: and(COLD_LOGIC, HOT_LOGIC) where both contains a signature, if
COLD_LOGIC and HOT_LOGIC can both have different costs, I need to decide
what logic each satisfier for the branch is going to use in advance, or
sign all possible sums of both our annex costs? 

[bitcoin-dev] BIP-119 CTV Meeting #4 Notes

2022-02-22 Thread Jeremy Rubin via bitcoin-dev
Today's meeting was a bit of a different format than usual, the prime focus
was on getting CTV Signet up and running and testing out some contracts.

In terms of discussion, there was some talk about what the goals of a
signet should be, but no conclusions were really reached. It is very good
that a signet exists, but it's unclear how much people will be interested in
a Signet with only CTV v.s. one with a lot of other forks to play with.
Further, other fork ideas are a lot greener w.r.t. available infrastructure.

In the tutorial section, we walked through the guide posted on the list.
There were a myriad of difficulties with local environments and brittle
bash scripts provided for the tutorial, as well as confusion around using
old versions of sapio-cli (spoiler: it's alpha software, you need to always
be on the latest version).

Despite difficulties, multiple participants finished the tutorial during
the session, some of their transactions can be seen below:

https://explorer.ctvsignet.com/tx/62292138c2f55713c3c161bd7ab36c7212362b648cf3f054315853a081f5808e
https://explorer.ctvsignet.com/tx/5ff08dcc8eb17979a22be471db1d9f0eb8dc49b4dd015fb08bac34be1ed03a10

In future weeks the tutorials will continue & more contracts can be tried
out. This tutorial was also focused on using the CLI, which is harder,
whereas future tutorials will use the GUI as well, though the GUI is not as
well suited for understanding all the "moving parts".

Best,

Jeremy

--
@JeremyRubin 


Re: [bitcoin-dev] BIP-119 CTV Meeting #4 Draft Agenda for Tuesday February 22nd at 12:00 PT

2022-02-22 Thread Jeremy Rubin via bitcoin-dev
Hi Devs,

As promised, a Sapio Tutorial. In this tutorial we'll walk through how to
use the Sapio CLI to generate contracts and play with them on the network.
We'll use a congestion control tree because it's very simple! We will walk
through this step-by-step during the meeting today.

-1. Install jq (a JSON manipulation tool) if you don't have it, plus the
other things needed to run a bitcoin node.
0. Set up a node as described above.  You'll likely want settings like this
in your bitcoin.conf too:
[signet]
# generate this yourself

rpcauth=generateme:fromtherpcauth.pyfile
txindex=1
signetchallenge=512102946e8ba8eca597194e7ed90377d9bbebc5d17a9609ab3e35e706612ee882759351ae

rpcport=18332
rpcworkqueue=1000
fallbackfee=0.0002

Get coins at https://faucet.ctvsignet.com/ or DM me an address.

1. Follow the install instructions on
https://learn.sapio-lang.org/ch01-01-installation.html You can skip the
sapio-studio part / pod part and just do the Local Quickstart up until
"Instantiate a contract from the plugin". You'll also want to run *cargo
build --release* from the root directory to build the sapio-cli.


2. Open up the site https://rjsf-team.github.io/react-jsonschema-form/
3. Run *sapio-cli contract api --file
plugin-example/target/wasm32-unknown-unknown/debug/sapio_wasm_plugin_example.wasm*
4. Copy the resulting JSON into the RJSF site
5. Fill out the form as you wish. You should see a JSON like
{
  "context": {
    "amount": 3,
    "network": "Signet",
    "effects": {
      "effects": {}
    }
  },
  "arguments": {
    "TreePay": {
      "fee_sats_per_tx": 1000,
      "participants": [
        {
          "address": "tb1pwqchwp3zur2ewuqsvg0mcl34pmcyxzqn9x8vn0p5a4hzckmujqpqp2dlma",
          "amount": 1
        },
        {
          "address": "tb1pwqchwp3zur2ewuqsvg0mcl34pmcyxzqn9x8vn0p5a4hzckmujqpqp2dlma",
          "amount": 1
        }
      ],
      "radix": 2
    }
  }
}

You may have to delete some extra fields (that site is a little buggy).

Optionally, just modify the JSON above directly.

6. Copy the JSON and paste it into a file ARGS.json
7. Find your sapio-cli config file (mine is at
~/.config/sapio-cli/config.json). Modify it to look like (enter your
rpcauth credentials):
{
  "main": null,
  "testnet": null,
  "signet": {
    "active": true,
    "api_node": {
      "url": "http://0.0.0.0:18332",
      "auth": {
        "UserPass": [
          "YOUR RPC NAME",
          "YOUR PASSWORD HERE"
        ]
      }
    },
    "emulator_nodes": {
      "enabled": false,
      "emulators": [],
      "threshold": 1
    },
    "plugin_map": {}
  },
  "regtest": null
}

8. Create a contract template:
*cat ARGS.json | ./target/release/sapio-cli contract create --file
plugin-example/target/wasm32-unknown-unknown/debug/sapio_wasm_plugin_example.wasm
| jq > UNBOUND.json*
9. Get a proposed funding & binding of the template to that utxo:

*cat UNBOUND.json | ./target/release/sapio-cli contract bind | jq >
BOUND.json*
10. Finalize the funding tx:

*cat BOUND.json | jq ".program[\"funding\"].txs[0].linked_psbt.psbt" |
xargs echo | xargs -I% ./bitcoin-cli -signet utxoupdatepsbt % |  xargs -I%
./bitcoin-cli -signet walletprocesspsbt % | jq ".psbt" | xargs -I%
./bitcoin-cli -signet finalizepsbt % | jq ".hex"*

11. Review the hex transaction/make sure you want this contract... and then
send to network:



*./bitcoin-cli -signet sendrawtransaction
0201015e69106b2eb00d668d945101ed3c0102cf35aba738ee6520fc2603bd60a872ea00feff02e8c5eb0b2200203d00d88fd664cbfaf8a1296d3f717625595d2980976bbf4feeb10ab090180ccdcb3faefd02002251208f7e5e50ce7f65debe036a90641a7e4d719d65d621426fd6589e5ec1c5969e200140a348a8711cb389bdb3cc0b1050961e588bb42cb5eb429dd0a415b7b9c712748fa4d5dfe2bb9c4dc48b31a7e3d1a66d9104bbb5936698f8ef8a92ac27a6506635*


12. Send the other transactions:

*cat BOUND.json| jq .program | jq ".[].txs[0].linked_psbt.psbt" | xargs -I%
./target/release/sapio-cli psbt finalize --psbt %  | xargs -I%
./bitcoin-cli -signet sendrawtransaction %*



Now what?

- Maybe load up the Sapio Studio and try it through the GUI?
- Modify the congestion control tree code and recompile it?
- How big of a tree can you make (I did about 6000 last night)?
- Try out other contracts?
--
@JeremyRubin 


On Mon, Feb 21, 2022 at 7:36 PM Jeremy Rubin 
wrote:

> Hi All,
>
> Apologies for the late posting of the agenda. The 4th CTV meeting will be
> held tomorrow at 12:00 PT in ##ctv-bip-review in Libera.chat.
>
> Tomorrow the conversation will be slightly more tutorial focused. If you
> have time in advance of the meeting, it might be good to do some of this in
> advance.
>
> 1) Discussion: What is the goal of Signet? (20 minutes)
> - Do we have a "decision function" of observations from a test network?
> - What applications should be prototyped/fleshed out?
> - What level of fleshed out matters?
> - Should we add other experiments in the mix on this net, like
> APO/Sponsors?
> - Should we get e.g. lightning working on this signet?
> 2) Connecting to CTV Signet Tutorial (10 mins)
>
> We'll make sure everyone who wants to be on it is on it & debug any issues.

[bitcoin-dev] BIP-119 CTV Meeting #4 Draft Agenda for Tuesday February 22nd at 12:00 PT

2022-02-21 Thread Jeremy Rubin via bitcoin-dev
Hi All,

Apologies for the late posting of the agenda. The 4th CTV meeting will be
held tomorrow at 12:00 PT in ##ctv-bip-review in Libera.chat.

Tomorrow the conversation will be slightly more tutorial focused. If you
have time in advance of the meeting, it might be good to do some of this in
advance.

1) Discussion: What is the goal of Signet? (20 minutes)
- Do we have a "decision function" of observations from a test network?
- What applications should be prototyped/fleshed out?
- What level of fleshed out matters?
- Should we add other experiments in the mix on this net, like
APO/Sponsors?
- Should we get e.g. lightning working on this signet?
2) Connecting to CTV Signet Tutorial (10 mins)

We'll make sure everyone who wants to be on it is on it & debug any issues.

*Ahead of Meeting: Build this branch:
https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-signet-23.0-alpha*

Connect to:
```
[signet]
signetchallenge=512102946e8ba8eca597194e7ed90377d9bbebc5d17a9609ab3e35e706612ee882759351ae
addnode=50.18.75.225
```

3) Receiving Coins / Sending Coins (5 mins)
There's now a faucet for this signet: https://faucet.ctvsignet.com
And also an explorer: https://explorer.ctvsignet.com

4) Sapio tutorial (25 minutes)

*Ahead of meeting, if you have time: skim https://learn.sapio-lang.org
& download/build the sapio cli & plugin examples*

We'll try to get everyone building and sending a basic application (e.g.
congestion control tree or vault) on the signet (instructions to be posted
before meeting).

We won't use Sapio Studio, just the Sapio CLI.

5) Sapio Q (30 mins)

After some experience playing with Sapio, more general discussion about the
project and what it may accomplish

6) General Discussion (30 minutes)


Best,

Jeremy

--
@JeremyRubin 


Re: [bitcoin-dev] CTV Signet Parameters

2022-02-21 Thread Jeremy Rubin via bitcoin-dev
There's also now a faucet:

https://faucet.ctvsignet.com

thanks 0x0ff!
--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Fri, Feb 18, 2022 at 3:13 AM 0x0ff <0x...@onsats.org> wrote:

> Good day,
>
> I've setup the explorer for CTV Signet which is now up and running at
> https://explorer.ctvsignet.com
>
> Best,
> @0x0ff <https://twitter.com/0x0ff_>
>
> --- Original Message ---
> On Thursday, February 17th, 2022 at 9:58 PM, Jeremy Rubin via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Hi devs,
>
> I have been running a CTV signet for around a year and it's seen little
> use. Early on I had some issues syncing new nodes, but I have verified
> syncability to this signet using
> https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-signet-23.0-alpha.
> Please use this signet!
>
> ```
> [signet]
>
> signetchallenge=512102946e8ba8eca597194e7ed90377d9bbebc5d17a9609ab3e35e706612ee882759351ae
> addnode=50.18.75.225
> ```
>
> This should be operational. Let me know if there are any issues you
> experience (likely with signet itself, but CTV too).
>
> Feel free to also email me an address and I can send you some signet coins
> -- if anyone is interested in running an automatic faucet I would love help
> with that and will send you a lot of coins.
>
> AJ Wrote (in another thread):
>
> > I'd much rather see some real
> > third-party experimentation *somewhere* public first, and Jeremy's CTV
> > signet being completely empty seems like a bad sign to me. Maybe that
> > means we should tentatively merge the feature and deploy it on the
> > default global signet though? Not really sure how best to get more
> > real world testing; but "deploy first, test later" doesn't sit right.
>
> I agree that real experimentation would be great, and think that merging
> the code (w/o activation) for signet would likely help users v.s. custom
> builds/parameters.
>
> I am unsure that "learning in public" is required -- personally I do
> experiments on regtest regularly and on mainnet (using emulators) more
> occasionally. I think some of the difficulty is that for setting up signet
> stuff you need to wait e.g. 10 minutes for blocks and stuff, source faucet
> coins, etc. V.s. regtest you can make tests that run automatically. Maybe
> seeing more regtest RPC test samples for regtests would be a sufficient
> in-between?
>
>
> Best,
>
> Jeremy
>
> --
> @JeremyRubin <https://twitter.com/JeremyRubin>
>
>
>


Re: [bitcoin-dev] [Lightning-dev] [Pre-BIP] Fee Accounts

2022-02-20 Thread Jeremy Rubin via bitcoin-dev
Morning!

>
> For the latter case, CPFP would work and already exists.
> **Unless** you are doing something complicated and offchain-y and involves
> relative locktimes, of course.
>
>
The "usual" design I recommend for Vaults contains something that is like:

{ CSV  CHECKSIG,  CHECKSIG}
or
{ CSV  CHECKSIG,  CHECKSIG)> CTV}


where after an output is created, it has to hit maturity before hot
spendable but can be kicked to recovery any time before (optional: use CTV
to actually transition on chain removing hot wallet, if cold key is hard to
access).
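
A minimal sketch of the two paths' logic (assuming a 100-block maturity;
this hypothetical helper is just a model, not wallet code):

```
# Model of the vault output above: hot path gated by <timeout> CSV maturity,
# cold/recovery path available at any time. TIMEOUT is an assumed parameter.
TIMEOUT = 100

def can_spend(path: str, confirmations: int) -> bool:
    if path == "hot":
        return confirmations >= TIMEOUT  # must hit maturity first
    if path == "cold":
        return True  # can always kick to recovery
    raise ValueError(f"unknown path: {path}")

assert not can_spend("hot", 10)  # still waiting out the CSV delay
assert can_spend("cold", 0)      # recovery works immediately
```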


Note that this means if you're waiting for one of these outputs to be
created on chain, you cannot spend from the hot key since it needs to
confirm on chain first. Spending from the cold key for CPFP'ing the hot is
an 'invalid move' (emergency key for a non-emergency sitch).

Thus in order to CPFP, you would need a separate output just for CPFPing
that is not subject to these restrictions, or some sort of RBF-able addable
input/output. Or, Sponsors.


Jeremy


Re: [bitcoin-dev] [Lightning-dev] [Pre-BIP] Fee Accounts

2022-02-20 Thread Jeremy Rubin via bitcoin-dev
opt-in or explicit tagging of fee account is a bad design IMO.

As pointed out by James O'Beirne in the other email, having an explicit key
required means you have to pre-plan. Suppose you're building a vault
meant to distribute funds over many years: do you really want a *specific*
precommitted key you have to maintain? What happens to your ability to bump
should it be compromised (which may be more likely if it's intended to be a
hot-wallet function for bumping)?

Furthermore, it's quite often the case that someone does a transaction
paying you that is low fee and that you want to bump, but they chose to
opt out... then what? It's better that you should always be able to fee
bump.


--
@JeremyRubin 


On Sun, Feb 20, 2022 at 6:24 AM ZmnSCPxj  wrote:

> Good morning DA,
>
>
> > Agreed, you cannot rely on a replacement transaction would somehow
> > invalidate a previous version of it, it has been spoken into the gossip
> > and exists there in mempools somewhere if it does, there is no guarantee
> > that anyone has ever heard of the replacement transaction as there is no
> > consensus about either the previous version of the transaction or its
> > replacement until one of them is mined and the block accepted. -DA.
>
> As I understand from the followup from Peter, the point is not "this
> should never happen", rather the point is "this should not happen *more
> often*."
>
> Regards,
> ZmnSCPxj
>


Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-20 Thread Jeremy Rubin via bitcoin-dev
--
@JeremyRubin 


On Sat, Feb 19, 2022 at 1:39 AM Peter Todd  wrote:

> On Fri, Feb 18, 2022 at 04:38:27PM -0800, Jeremy Rubin wrote:
> > > As I said, it's a new kind of pinning attack, distinct from other types
> > of pinning attack.
> >
> > I think pinning is "formally defined" as sequences of transactions which
> > prevent or make it less likely for you to make any progress (in terms of
> > units of computation proceeding).
>
> Mentioning "computation" when talking about transactions is misleading:
> blockchain transactions have nothing to do with computation.
>

It is in fact computation. Branding it as "misleading" is misleading... The
relevant literature is https://en.wikipedia.org/wiki/Non-blocking_algorithm;
sponsors help get rid of deadlocking so that any thread can be guaranteed
to make progress. E.g., this is critical in Eltoo, which is effectively a
coordinated multi-party computation on-chain to compute the highest
sequence number known by any worker.

That transactions are blobs of "verification" (which is itself a
computation) rather than dynamic computations is irrelevant to the fact
that series of transactions do represent computations.



> > Something that only increases possibility to make progress cannot be
> > pinning.
>
> It is incorrect to say that all use-cases have the property that any
> version of
> a transaction being mined is progress.
>

It is progress, tautologically. Progress is formally definable as a
transaction of any kind getting mined. Pinning prevents progress by an
adversarial worker. Sponsoring enables progress, but it may not be your
preferred interleaving. That's OK, but it's inaccurate to say it is not
progress.

> Your understanding of how OpenTimestamps calendars work appears to be
> incorrect. There is no chain of unconfirmed transactions. Rather, OTS
> calendars
> use RBF to _update_ the timestamp tx with a new merkle tip hash to all
> outstanding per-second commitments once per new block. In high fee
> situations
> it's normal for there to be dozens of versions of that same tx, each with a
> slightly higher feerate.
>

I didn't claim there to be a chain of unconfirmed transactions; I claimed
that there could be a single output chain that you're RBF'ing one step per
block.

E.g., it could be something like

A_0 -> {A_1 w/ CSV 1 block, OP_RETURN {blah, foo}}
A_1 -> {A_2 w/ CSV 1 block, OP_RETURN {bar}}

such that A_i provably can't have an unconfirmed descendant. The notion
would be that you're replacing one with another. E.g., if you're updating
the calendar like:


Version 0: A_0 -> {A_1 w/ CSV 1 block, OP_RETURN {blah, foo}}
Version 1: A_0 -> {A_1 w/ CSV 1 block, OP_RETURN {blah, foo, bar}}
Version 2: A_0 -> {A_1 w/ CSV 1 block, OP_RETURN {blah, foo, bar, delta}}

and version 1 gets mined, then in A_1's spend you simply shift delta to
that (next) calendar.

A_1 -> {A_2 w/ CSV 1 block, OP_RETURN {delta}}

Thus my claim that someone sponsoring an old version can only delay the
calendar commit by 1 block.
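
A toy model of that update flow (sets stand in for the merkle-committed
entries; purely illustrative):

```
# Each RBF'd version commits to a growing set of entries; whichever version
# gets mined, the unmined remainder shifts into the next calendar tx, so a
# necromanced older version delays those entries by at most one block.
versions = [
    {"blah", "foo"},                  # Version 0
    {"blah", "foo", "bar"},           # Version 1
    {"blah", "foo", "bar", "delta"},  # Version 2
]
mined = versions[1]            # suppose Version 1 is what gets mined
carry = versions[-1] - mined   # entries to shift into A_1's spend
print(carry)                   # -> {'delta'}
```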





> OTS calendars can handle any of those versions getting mined. But older
> versions getting mined wastes money, as the remaining commitments still
> need to
> get mined in a subsequent transaction. Those remaining commitments are also
> delayed by the time it takes for the next tx to get mined.
>
> There are many use-cases beyond OTS with this issue. For example, some
> entities
> use "in-place" replacement for update low-time-preference settlement
> transactions by adding new txouts and updating existing ones. Older
> versions of
> those settlement transactions getting mined rather than the newer version
> wastes money and delays settlement for the exact same reason it does in
> OTS.
>
>
> > Lastly, if you do get "necromanced" on an earlier RBF'd transaction by a
> > third party for OTS, you should be relatively happy because it cost you
> > less fees overall, since the undoing of your later RBF surely returned
> some
> > satoshis to your wallet.
>
> As I said above, no it doesn't.
>
>
It does save money: since you had to pay to RBF, the N+1st txn will be
paying a higher fee than the Nth. So if someone else sponsors an earlier
version, then you save whatever feerate/fee bumps you would have paid, and
the funds are again in your change output (or something). You can apply
those change output savings to your next batch, which can include any
entries that have been dropped.


Re: [bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-18 Thread Jeremy Rubin via bitcoin-dev
This is a fascinating post and I'm still chewing on it.

Chiming in with two points:

Point 1, note with respect to evictions, revivals, CTV, TLUV:

CTV enables 1 person to be evicted in O(log N) txns or one person to leave
in O(log N) txns. TLUV enables 1 person to leave in O(1) transactions of
O(log N) size, but evictions take (AFAICT?) O(N) transactions of O(log N)
size because the un-live party stays in the pool. Hence OP_EVICT also helps
make it so you can kick someone out, rather than everyone having to leave,
which is an improvement.

CTV rejoins work as follows:

suppose you have a pool with 1 failure, you need to do log N txns to evict
the failure, which creates R * log_R(N) outputs, which can then do a
transaction to rejoin.

For example, suppose I had 64 people in a radix 4 tree. You'd have at the
top level 4 groups of 16, then 4 groups of 4 people, and then 1 to 4 txns.
Kicking 1 person out would make you do 3 txns, and create 12 outputs total
(9 of which end up live and unspent). A transaction spending those 9 live
outputs would capture 63 people back into the tree, and with CISA would not
be terribly expensive. To be a bit more economical, you might prefer to just
join the 3 outputs with 16 people in them, and yield 48 people in one pool.
Alternatively, you can lazily re-join if fees make it worth it/piggybacking
another transaction, or operate independently or try to find new, better,
peers.
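
A back-of-the-envelope check of those counts, under the tree model described
above (a sketch, not consensus code):

```
# Cost of evicting one leaf from a CTV tree of n participants with radix r.
def eviction_cost(n: int, r: int):
    depth = 0                         # expansion txns needed to reach the leaf
    while r ** depth < n:
        depth += 1
    created = r * depth               # each expansion creates r outputs
    live = created - (depth - 1) - 1  # minus internally-spent + evicted output
    return depth, created, live

print(eviction_cost(64, 4))  # -> (3, 12, 9): 3 txns, 12 outputs, 9 live
```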

Overall this is the type of application that necessitates *exact* byte
counting. Oftentimes things with CTV seem inefficient, but when you crunch
the numbers it turns out not to be so terrible. OP_EVICT seems promising in
this regard compared to TLUV or accumulators.

Another option is to randomize the CTV trees with multiple outputs per
party (radix Q), then you need to do Q times the evictions, but you end up
with sub-pools that contain more people/fractional liquidity (this might
happen naturally if CTV Pools have channels in them, so it's good to model).


Point 2, on Eltoo:

One point of discomfort I have with Eltoo that I think is not universal,
but is shared by some others, is that non-punitive channels may not be good
for high-value channels as you do want, especially in a congested
blockspace world, punishments to incentivize correct behavior (otherwise
cheating may look like a free option).

Thus I'm reluctant to fully embrace designs which do not permit nested
traditional punitive channels in favor of Eltoo, when Eltoo might not have
product-market-fit for higher valued channels.

If someone had a punitive-eltoo variant, that would ameliorate this concern
almost entirely.

Cheers,

Jeremy



--
@JeremyRubin 

On Fri, Feb 18, 2022 at 3:40 PM ZmnSCPxj via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Good morning ariard,
>
>
> > > A statechain is really just a CoinPool hosted inside a
> > >  Decker-Wattenhofer or Decker-Russell-Osuntokun construction.
> >
> > Note, to the best of my knowledge, how to use LN-Penalty in the context
> of multi-party construction is still an unsolved issue. If an invalidated
> state is published on-chain, how do you guarantee that the punished output
> value is distributed "fairly" among the "honest" set of users ? At least
> > where fairness is defined as a reasonable proportion of the balances
> they owned in the latest state.
>
> LN-Penalty I believe is what I call Poon-Dryja?
>
> Both Decker-Wattenhofer (has no common colloquial name) and
> Decker-Russell-Osuntokun ("eltoo") are safe with N > 2.
> The former has bad locktime tradeoffs in the unilateral close case, and
> the latter requires `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT`.
>
>
> > > In principle, a set of promised outputs, if the owners of those
> > > outputs are peers, does not have *any* inherent order.
> > > Thus, I started to think about a commitment scheme that does not
> > > impose any ordering during commitment.
> >
> > I think we should dissociate a) *outputs publication ordering* from the
> b) *spends paths ordering* itself. Even if to each spend path a output
> publication is attached, the ordering constraint might not present the same
> complexity.
> >
> > Under this distinction, are you sure that TLUV imposes an ordering on
> the output publication ?
>
> Yes, because TLUV is based on tapleaf revelation.
> Each participant gets its own unique tapleaf that lets that participant
> get evicted.
>
> In Taproot, the recommendation is to sort the hashes of each tapleaf
> before arranging them into a MAST that the Taproot address then commits to.
> This sort-by-hash *is* the arbitrary ordering I refer to when I say that
> TLUV imposes an arbitrary ordering.
> (actually the only requirement is that pairs of scripts are
> sorted-by-hash, but it is just easier to sort the whole array by hash.)
>
> To reveal a single participant in a TLUV-based CoinPool, you need to
> reveal O(log N) hashes.
> It is the O(log N) space consumption I want to avoid with `OP_EVICT`, and
> I believe the reason for that O(log N) revelation 

Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-18 Thread Jeremy Rubin via bitcoin-dev
> As I said, it's a new kind of pinning attack, distinct from other types
of pinning attack.

I think pinning is "formally defined" as sequences of transactions which
prevent or make it less likely for you to make any progress (in terms of
units of computation proceeding).

Something that only increases possibility to make progress cannot be
pinning.

If you want to call it something else, with a negative connotation, maybe
call it "necromancing" (bringing back txns that would otherwise be
feerate/fee irrational).

I would posit that we should be wholly unconcerned with necromancing -- if
your protocol is particularly vulnerable to a third party necromancing then
your protocol is insecure and we shouldn't hamper Bitcoin's forward
progress on secure applications to service already insecure ones. Lightning
is particularly necromancy resistant by design, but pinning vulnerable.
This is also true with things like coinjoins which are necromancy resistant
but pinning vulnerable.

Necromancy in particular is something that is already present in
Bitcoin today, and things like package relay and elimination of pinning are
inherently at odds with preventing necromancy for CPFP use cases.

In particular, for the use case you mentioned "Eg a third party could mess
up OpenTimestamps calendars at relatively low cost by delaying the mining
of timestamp txs.", this is incorrect. A third party can only accelerate
the mining on the timestamp transactions, but they *can* accelerate the
mining of any such timestamp transaction. If you have a single output chain
that you're RBF'ing per block, then at most they can cause you to shift the
calendar commits forward one block. But again, they cannot pin you. If you
want to shift it back one block earlier, just offer a higher fee for the
later RBF'd calendar. Thus the interference is limited by how much you wish
to pay to guarantee your commitment is in this block as opposed to the next.

By the way, you can already do out-of-band transaction fees to a very
similar effect, google "BTC transaction accelerator". If the attack were at
all valuable to perform, it could happen today.

Lastly, if you do get "necromanced" on an earlier RBF'd transaction by a
third party for OTS, you should be relatively happy because it cost you
less fees overall, since the undoing of your later RBF surely returned some
satoshis to your wallet.

Best,

Jeremy


[bitcoin-dev] CTV Signet Parameters

2022-02-17 Thread Jeremy Rubin via bitcoin-dev
Hi devs,

I have been running a CTV signet for around a year and it's seen little
use. Early on I had some issues syncing new nodes, but I have verified
syncability to this signet using
https://github.com/JeremyRubin/bitcoin/tree/checktemplateverify-signet-23.0-alpha.
Please use this signet!

```
[signet]
signetchallenge=512102946e8ba8eca597194e7ed90377d9bbebc5d17a9609ab3e35e706612ee882759351ae
addnode=50.18.75.225
```

This should be operational. Let me know if there are any issues you
experience (likely with signet itself, but CTV too).

Feel free to also email me an address and I can send you some signet coins
-- if anyone is interested in running an automatic faucet I would love help
with that and will send you a lot of coins.

AJ Wrote (in another thread):

>  I'd much rather see some real
>   third-party experimentation *somewhere* public first, and Jeremy's CTV
>   signet being completely empty seems like a bad sign to me. Maybe that
>   means we should tentatively merge the feature and deploy it on the
>   default global signet though?  Not really sure how best to get more
>   real world testing; but "deploy first, test later" doesn't sit right.

I agree that real experimentation would be great, and think that merging
the code (w/o activation) for signet would likely help users v.s. custom
builds/parameters.

I am unsure that "learning in public" is required -- personally I do
experiments on regtest regularly and on mainnet (using emulators) more
occasionally. I think some of the difficulty is that for setting up signet
stuff you need to wait e.g. 10 minutes for blocks and stuff, source faucet
coins, etc. V.s. regtest you can make tests that run automatically. Maybe
seeing more regtest RPC test samples for regtests would be a sufficient
in-between?


Best,

Jeremy

--
@JeremyRubin 


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-15 Thread Jeremy Rubin via bitcoin-dev
The difference between sponsors and this issue is more subtle. The issue
Suhas raised was with a variant of sponsors trying to address a second
criticism, not sponsors itself, which is secure against this.

I think I can make this clear by defining a few different properties:

Strong Reorgability: The transaction graph can be arbitrarily reorged into
any series of blocks as long as dependency order/timelocks are respected.
Simple Existential Reorgability: The transaction graph can be reorged into
a different series of blocks, and it is not computationally difficult to
find such an ordering.
Epsilon-Strong Reorgability: The transaction graph can be arbitrarily
reorged into any series of blocks as long as dependency order/timelocks are
respected, up to Epsilon blocks.
Epsilon-Simple Existential Reorgability: The transaction graph can be
reorged into a different series of blocks, and it is not computationally
difficult to find such an ordering, up to epsilon blocks.
Perfect Reorgability: The transaction graph can be reorged into a different
series of blocks, but the transactions themselves are already locked in.

Perfect Reorgability doesn't exist in Bitcoin because unconfirmed
transactions can be double spent, which invalidates descendants. Notably,
for a subset of the graph which is CTV congestion control tree expansions,
perfect reorgability would exist, so it's not just a bullshit concept to
think about :)

The sponsors proposal is a change from Epsilon-Strong Reorgability to
Epsilon-Simple Existential (i.e., weak) Reorgability. It's not clear to me
that there is any functional reason to rely on Strongness when Bitcoin's
reorgability is already not Perfect, so a reorg generator with malicious
intent can already disturb the tx graph. Epsilon-Weak Reorgability seems
to be a sufficient property.

Do you disagree with that?

Best,

Jeremy

--
@JeremyRubin 

On Tue, Feb 15, 2022 at 12:25 PM Russell O'Connor via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
>
>> >> 2. (from Suhas) "once a valid transaction is created, it should not
>> become invalid later on unless the inputs are double-spent."
>> > This doesn't seem like a huge concern to me
>>
>> I agree that this shouldn't be a concern. In fact, I've asked numerous
>> people in numerous places what practical downside there is to transactions
>> that become invalid, and I've heard basically radio silence other than one
>> off hand remark by satoshi at the dawn of time which didn't seem to me to
>> have good reasoning. I haven't seen any downside whatsoever of transactions
>> that can become invalid for anyone waiting the standard 6 confirmations -
>> the reorg risks only exists for people not waiting for standard
>> finalization. So I don't think we should consider that aspect of a
>> sponsorship transaction that can only be mined with the transaction it
>> sponsors to be a problem unless a specific practical problem case can be
>> identified. Even if a significant such case was identified, an easy
>> solution would be to simply allow sponsorship transactions to be mined on
>> or after the sponsored transaction is mined.
>>
>
> The downside is that in a 6 block reorg any transaction that is moved past
> its expiration date becomes invalid and all its descendants become invalid
> too.
>
> The current consensus threshold for transactions to become invalid is a
> 100 block reorg, and I see no reason to change this threshold.  I promise
> to personally build a wallet that always creates transactions on the verge
> of becoming invalid should anyone ever implement a feature that violates
> this tx validity principle.


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-15 Thread Jeremy Rubin via bitcoin-dev
James,

Unfortunately, there are technical reasons for sponsors to not be monotone.
Mostly that it requires the maintenance of an additional permanent
TX-Index, making Bitcoin's state grow at a much worse rate. Instead, you
could introduce a time-bound for inclusion, e.g. 100 blocks. However, this
time-bounded version has the issue that Roconnor raised which is that
validity "stops" after a certain time, hurting reorganization.

However, If you wanted to map this conceptually onto existing tx indexes,
you could have an output with exactly the script `<100 blocks> OP_CSV` and
then allow sponsor references to be pruned after that output is "garbage
collected" by pruning it out of a block. This would be a way that
sponsorship would be opt-in (must have the flag output) and then sponsors
observations of txid existence would be only guaranteed to work for 100
blocks after which it could be garbage collected by a miner.
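
As a sketch of that rule (the 100-block window and the validity condition
are assumptions drawn from the paragraph above, not a specification):

```
# Toy rule: a sponsor may reference a target txid only while the target's
# flag output ("<100 blocks> OP_CSV") is within its window; past that, the
# index entry may be pruned ("garbage collected") by miners.
WINDOW = 100

def sponsor_reference_valid(target_height: int, tip_height: int) -> bool:
    return 0 <= tip_height - target_height <= WINDOW

assert sponsor_reference_valid(700_000, 700_050)
assert not sponsor_reference_valid(700_000, 700_101)  # may be pruned
```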

It's not a huge leap to say that this behavior should be made entirely
"virtual", as you are essentially arguing that there exists a transaction
graph we could construct that would be equivalent to the graph were we to
actually have such an output / spends relationship. Since the property we
care about is about all graphs, that a specific one could exist that has
the same dependency / invalidity relationships during a reorg is important
for the theory of bitcoin transaction execution.

So it really isn't clear to me that we're hurting the transaction graph
properties that severely with changes in this family. It's also not clear
to me that having a TXINDEX is a huge issue given that making a dust-out
per tx would have the same impact (and people might do it if it's
functionally useful, so just making it default behavior would at least help
us optimize it to be done through e.g. a separate witness space/utreexo-y
thing).

Another consideration is to make the outputs from sponsor txn subject to a
100 block cool-off period. E.g., so even if you have your inverse timelock,
adding a constraint that all outputs then have something similar to
fCoinbase set on them (for spending timelocks only) would mean that little
reorgs could not disturb the tx graph, although this poses a UX challenge
for wallets that aim to bump often (e.g., 1 bump per block would mean you
need to maintain 100 outputs).

Lastly, it's pretty clear from a UX perspective that I should not want to
pay miners who did *not* mine my transactions! Therefore, it would be
natural to see if you pay a high enough fee that users might want to cancel
their (now very desirable) stale fee bumps by replacing it with something
more useful to them. So allowing sponsors to be in subsequent blocks might
make it rational for users to do more transactions, which increases the
costs of such an approach.


All things considered, I favor the simple version of just having sponsors
only valid for the block their target is co-resident in.


Jeremy





--
@JeremyRubin 

On Tue, Feb 15, 2022 at 12:53 PM James O'Beirne via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> > The downside is that in a 6 block reorg any transaction that is moved
> > past its expiration date becomes invalid and all its descendants
> > become invalid too.
>
> Worth noting that the transaction sponsors design is no worse an
> offender on this count than, say, CPFP is, provided we adopt the change
> that sponsored txids are required to be included in the current block
> *or* prior blocks. (The original proposal allowed current block only).
>
> In other words, the sponsored txids are just "virtual inputs" to the
> sponsor transaction.
>
> This is a much different case than e.g. transaction expiry based on
> wall-clock time or block height, which I agree complicates reorgs
> significantly.


Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-15 Thread Jeremy Rubin via bitcoin-dev
Hi Rusty,

Please see my post in the other email thread
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019886.html

The differences in this regard are several, and worth understanding beyond
"you can iterate CTV". I'd note a few clear examples for showing that "CTV
is just as powerful" is not a valid claim:

1) CTV requires the contract to be fully enumerated and is non-recursive.
For example, a simple contract that allows n participants to take an action
in any order requires factorially many pre-computations, not just linear or
constant. For reference, 24! is about 2**80 (see the quick check after this
list). Whereas for a more interpretive covenant -- which is often introduced
with the features for recursion -- you can compute the programs for these
addresses in constant time.
2) CTV requires the contract to be fully enumerated: For example, a simple
contract one could write is "Output 0 script matches Output 1", and the set
of outcomes is again unbounded a-priori. With CTV you need to know the set
of pairs you'd like to be able to expand to a-priori
3) Combining 1 and 2, you could imagine recursing on an open-ended thing
like creating many identical outputs over time but not constraining what
those outputs are. E.g., Output 0 matches Input 0, Output 1 matches Output
2.
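
As a quick numeric check of the enumeration blowup in point 1 (plain
arithmetic; the one-template-per-ordering mapping is the assumption):

```
import math

# One precomputed CTV template per ordering of n participants => n! of them.
for n in (4, 10, 24):
    print(n, math.factorial(n))
# 24! ~= 6.2e23 ~= 2**79, i.e. the "about 2**80" figure cited above.
```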

I think for your point the inverse seems to hold: for the limited
situations we might want to set up, CTV often ends up being sufficient
because usually we can enumerate all the possible outcomes we'd like (or at
least find a mapping onto such a construction). CTV is indeed very
powerful, but as I demonstrated above, not powerful in the same way
("Complexity Class") that OP_TX or TXHASH might be.

At the very least we should clearly understand *what* and *why* we are
advocating for more sophisticated designs and have a thorough understanding
of the protocol complexity we are motivated to introduce the expanded
functionality. Further, if one advocates for TX/TXHASH on a featureful
basis, it's at least a technical ACK on the functionality CTV is
introducing (as it is a subset) and perhaps a disagreement on project
management, which I think is worth noting. There is a very wide gap between
"X is unsafe" and "I prefer Y, which X is a subset of".

I'll close by repeating: Whether that [the recursive/open-ended
properties] is an issue precluding this sort of design or not, I defer to
others.

Best,

Jeremy
--
@JeremyRubin 


On Tue, Feb 15, 2022 at 12:46 AM Rusty Russell 
wrote:

> Jeremy Rubin  writes:
> > Rusty,
> >
> > Note that this sort of design introduces recursive covenants similarly to
> > how I described above.
> >
> > Whether that is an issue or not precluding this sort of design or not, I
> > defer to others.
>
> Good point!
>
> But I think it's a distinction without meaning: AFAICT iterative
> covenants are possible with OP_CTV and just as powerful, though
> technically finite.  I can constrain the next 100M spends, for
> example: if I insist on those each having incrementing nLocktime,
> that's effectively forever.
>
> Thanks!
> Rusty.
>


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-10 Thread Jeremy Rubin via bitcoin-dev
I don't have a specific response to share at this moment, but I may make
one later.

But for the sake of elevating the discourse, I'd encourage people
responding this to read through
https://rubin.io/bitcoin/2021/12/04/advent-7/ as I think it has some
helpful terminology and categorizations.

I bring this up because I think that recursion is often given as a
shorthand for "powerful" because the types of operations that support
recursion typically also introduce open ended covenants, unless they are
designed specially not to. As a trivial example a covenant that makes a
coin spendable from itself to itself entirely with no authorization is
recursive but fully enumerated in a sense and not particularly interesting
or useful.

Therefore when responding you might be careful to distinguish whether it is
just recursion which you take issue with, or open-endedness, or some
combination of properties which severally might be acceptable but jointly
are not.

TL;DR there are different properties people might care about that get
lumped in with recursion, it's good to be explicit if it is a recursion
issue or something else.

Cheers,

Jeremy


On Thu, Feb 10, 2022, 4:55 PM David A. Harding  wrote:

> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev
> wrote:
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist).  Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.
>
> -Dave
>
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
> [2]
> http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
> (thanks to AJ who told me about stop energy one time when I was
> producing it)
>
>


Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-10 Thread Jeremy Rubin via bitcoin-dev
That's not really pinning; pinning usually refers to pinning something to
the bottom of the mempool, whereas these mechanisms make it easier to
guarantee that progress can be made on confirming the transactions you're
interested in.

Often times in these protocols "the call is coming from inside the house".
It's not a third party adding fees we are scared of, it's a direct party to
the protocol!

Sponsors or fee accounts would enable you to ensure the protocol you're
working on makes forward progress. For things like Eltoo the internal
ratchet makes this work well.

Protocols which depend on in mempool replacements before confirmation
already must be happy (should they be secure) with any prior state being
mined. If a third party pays the fee you might even be happier since the
execution wasn't on your dime.

Cheers,

Jeremy

On Wed, Feb 9, 2022, 10:59 PM Peter Todd via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Sat, Jan 01, 2022 at 12:04:00PM -0800, Jeremy via bitcoin-dev wrote:
> > Happy new years devs,
> >
> > I figured I would share some thoughts for conceptual review that have
> been
> > bouncing around my head as an opportunity to clean up the fee paying
> > semantics in bitcoin "for good". The design space is very wide on the
> > approach I'll share, so below is just a sketch of how it could work which
> > I'm sure could be improved greatly.
> >
> > Transaction fees are an integral part of bitcoin.
> >
> > However, due to quirks of Bitcoin's transaction design, fees are a part
> of
> > the transactions that they occur in.
> >
> > While this works in a "Bitcoin 1.0" world, where all transactions are
> > simple on-chain transfers, real world use of Bitcoin requires support for
> > things like Fee Bumping stuck transactions, DoS resistant Payment
> Channels,
> > and other long lived Smart Contracts that can't predict future fee rates.
> > Having the fees paid in band makes writing these contracts much more
> > difficult as you can't merely express the logic you want for the
> > transaction, but also the fees.
> >
> > Previously, I proposed a special type of transaction called a "Sponsor"
> > which has some special consensus + mempool rules to allow arbitrarily
> > appending fees to a transaction to bump it up in the mempool.
> >
> > As an alternative, we could establish an account system in Bitcoin as an
> > "extension block".
>
> 
>
> > This type of design works really well for channels because the addition
> of
> > fees to e.g. a channel state does not require any sort of pre-planning
> > (e.g. anchors) or transaction flexibility (SIGHASH flags). This sort of
> > design is naturally immune to pinning issues since you could offer to
> pay a
> > fee for any TXID and the number of fee adding offers does not need to be
> > restricted in the same way the descendant transactions would need to be.
>
> So it's important to recognize that fee accounts introduce their own kind
> of
> transaction pinning attacks: third parties would be able to attach
> arbitrary
> fees to any transaction without permission. This isn't necessarily a good
> thing: I don't want third parties to be able to grief my transaction
> engines by
> getting obsolete transactions confirmed in liu of the replacments I
> actually
> want confirmed. Eg a third party could mess up OpenTimestamps calendars at
> relatively low cost by delaying the mining of timestamp txs.
>
> Of course, there's an obvious way to fix this: allow transactions to
> designate
> a pubkey allowed to add further transaction fees if required. Which Bitcoin
> already has in two forms: Replace-by-Fee and Child Pays for Parent.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org


[bitcoin-dev] CTV Meeting Notes #3

2022-02-09 Thread Jeremy Rubin via bitcoin-dev
Bitcoin Developers,

The Third CTV meeting was held earlier today (Tuesday February 8th, 2022).
You can find the meeting log here:
https://gnusha.org/ctv-bip-review/2022-02-08.log

A best-effort summary:

- Not much new to report on the Bounty
- Non Interactive Lightning Channel Opens
  Non interactive lightning Channel opens seems to work!
  There are questions around being able to operate a channel in a "unipolar"
way for routing with the receiver's key offline, as HTLCs might require
sync revocation. This is orthogonal to the opening of the channels.
- DLCs w/ CTV
  CTV does seem to be a "key enabler" for DLCs.
  The non interactivity provides a dramatic speedup (30x - 300x depending
on multi-oracle setup)
  Changes to the client/server setup enable new use cases to explore, and
simplify the spec substantially.
  Backfilling lets clients commit to the DLC faster and lazily backfill at
cost of state storage.
  For M-N oracles, precompiling N choose M groups + musig'ing the
attestation points can possibly save some witness space because
log2(N)*32 + N*32 > log2(N*(N choose M))*32 for many values of N and M.
- Pathcoin
  Not well understood concretely yet.
  Seems like the API of "a coin that 1-of-N can spend" shared by N parties
is new/unique and not something LN can do (which always requires all N
online to sign txns).
  Binary expansion of coins could allow arbitrary value transfer (the
binary expansion can live in a CTV tree too).
  The best way to think of Pathcoin at this point is as an important
theoretical result that should open up new exploration/improvement.
- TXHash
  Main concerns: more complexity, potential for recursion, script size
overhead
- Soft Forks, Generally
  Big question: Are the fork processes themselves (e.g., BIP9/8/ST
activations) riskier than the upgrades (CTV)?
  On the one hand, validation rules are something we have to live with
forever, so they should be treated as the riskier part. Soft fork rules and
coordination might be bad, but after activation they go away.
  On the other hand, we can "prove" a technical upgrade correct, but
soft-fork signalling requires unprovable user behavior and coordination
(e.g., actually upgrading).
  If you perceive the forking mechanism as high risk, it makes sense to
make the upgrades have as much content as possible since you need to
justify the high risk.
  If you perceive the forking mechanism as low risk, it is fine to make the
upgrades smaller and easier to prove safe since there's not a high cost to
forking.
- Elements CTV Emulation
  Seems to be workable.
  Questionable if any of the use cases one might want CTV for (Lightning,
DLCs, Vaults) would have much demand on Liquid today.
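
As a sanity check of the witness-size inequality in the DLC notes above,
here is a small sketch that simply evaluates both sides of the quoted
formula for a few (N, M) pairs; the byte model is taken directly from the
formula as written, not re-derived:

from math import comb, log2

def separate_keys(n: int) -> float:
    # left side: log2(N)*32 (merkle path) + N*32 (individual oracle keys)
    return log2(n) * 32 + n * 32

def musig_groups(n: int, m: int) -> float:
    # right side: log2(N * C(N, M)) * 32, one aggregate key per leaf
    return log2(n * comb(n, m)) * 32

for n, m in [(3, 2), (5, 3), (10, 7)]:
    print(f"N={n}, M={m}: {separate_keys(n):.0f} > {musig_groups(n, m):.0f}"
          f" -> {separate_keys(n) > musig_groups(n, m)}")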

Feel free to correct me where I've not represented perspectives decently,
as always the logs are the only true summary.

Best,

Jeremy

--
@JeremyRubin 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-07 Thread Jeremy Rubin via bitcoin-dev
Rusty,

Note that this sort of design introduces recursive covenants similar to
what I described above.

Whether or not that is an issue that precludes this sort of design, I
defer to others.

Best,

Jeremy


On Mon, Feb 7, 2022 at 7:57 PM Rusty Russell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Russell O'Connor via bitcoin-dev 
> writes:
> > Given the overlap in functionality between CTV and ANYPREVOUT, I think it
> > makes sense to decompose their operations into their constituent pieces
> and
> > reassemble their behaviour programmatically.  To this end, I'd like to
> > instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
> >
> > OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
> > txhash in accordance with that flag, and push the resulting hash onto the
> > stack.
>
> It may be worth noting that OP_TXHASH can be further decomposed into
> OP_TX (and OP_TAGGEDHASH, or just reuse OP_SHA256).
>
> OP_TX would place the concatenated selected fields onto the stack
> (rather than hashing them). This is more compact for some tests
> (e.g. testing that the tx version is 2 is "OP_TX(version) 2 OP_EQUALS" vs
> "OP_TXHASH(version) 012345678...aabbccddeeff OP_EQUALS"), and also allows
> range testing (e.g. amount less than X or greater than X, or fewer than 3
> inputs).
>
> > I believe the difficulties with upgrading TXHASH can be mitigated by
> > designing a robust set of TXHASH flags from the start.  For example
> having
> > bits to control whether (1) the version is covered; (2) the locktime is
> > covered; (3) txids are covered; (4) sequence numbers are covered; (5)
> input
> > amounts are covered; (6) input scriptpubkeys are covered; (7) number of
> > inputs is covered; (8) output amounts are covered; (9) output
> scriptpubkeys
> > are covered; (10) number of outputs is covered; (11) the tapbranch is
> > covered; (12) the tapleaf is covered; (13) the opseparator value is
> > covered; (14) whether all, one, or no inputs are covered; (15) whether
> all,
> > one or no outputs are covered; (16) whether the one input position is
> > covered; (17) whether the one output position is covered; (18) whether
> the
> > sighash flags are covered or not (note: whether or not the sighash flags
> > are or are not covered must itself be covered).  Possibly specifying
> which
> > input or output position is covered in the single case and whether the
> > position is relative to the input's position or is an absolute position.
>
> These easily map onto OP_TX, "(1) the version is pushed as u32, (2) the
> locktime is pushed as u32, ...".
>
> We might want to push SHA256() of scripts instead of scripts themselves,
> to reduce possibility of DoS.
>
> I suggest, also, that 14 (and similarly 15) be defined two bits:
> 00 - no inputs
> 01 - all inputs
> 10 - current input
> 11 - pop number from stack, fail if >= number of inputs or no stack elems.
>
> Cheers,
> Rusty.
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
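
To make the flag-vector idea quoted above concrete, here is a sketch of one
possible bit layout in Python; the assignments are purely illustrative and
not a proposed encoding, with Rusty's two-bit input/output selectors
included:

from enum import IntFlag

class TxHashFlags(IntFlag):
    # illustrative bit assignments for fields (1)-(13) of the quoted list
    VERSION              = 1 << 0
    LOCKTIME             = 1 << 1
    TXIDS                = 1 << 2
    SEQUENCES            = 1 << 3
    INPUT_AMOUNTS        = 1 << 4
    INPUT_SCRIPTPUBKEYS  = 1 << 5
    NUM_INPUTS           = 1 << 6
    OUTPUT_AMOUNTS       = 1 << 7
    OUTPUT_SCRIPTPUBKEYS = 1 << 8
    NUM_OUTPUTS          = 1 << 9
    TAPBRANCH            = 1 << 10
    TAPLEAF              = 1 << 11
    OP_SEPARATOR_POS     = 1 << 12

# Rusty's two-bit selector for (14)/(15): which inputs/outputs are covered
SEL_NONE, SEL_ALL, SEL_CURRENT, SEL_FROM_STACK = 0b00, 0b01, 0b10, 0b11

# e.g. a CTV-like template: no txids, but version, locktime, sequences,
# input/output counts, and all output amounts + scriptPubKeys
ctv_like = (TxHashFlags.VERSION | TxHashFlags.LOCKTIME | TxHashFlags.SEQUENCES
            | TxHashFlags.NUM_INPUTS | TxHashFlags.NUM_OUTPUTS
            | TxHashFlags.OUTPUT_AMOUNTS | TxHashFlags.OUTPUT_SCRIPTPUBKEYS)
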
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP-119 CTV Meeting #3 Draft Agenda for Tuesday February 8th at 12:00 PT

2022-02-07 Thread Jeremy Rubin via bitcoin-dev
Reminder:

This is in ~24 hours.

There have been no requests to add content to the agenda.

Best,

Jeremy
--
@JeremyRubin 


On Wed, Feb 2, 2022 at 12:29 PM Jeremy Rubin 
wrote:

> Bitcoin Developers,
>
> The 3rd instance of the recurring meeting is scheduled for Tuesday
> February 8th at 12:00 PT in channel ##ctv-bip-review in libera.chat IRC
> server.
>
> The meeting should take approximately 2 hours.
>
> The topics proposed to be discussed are agendized below. Please review the
> agenda in advance of the meeting to make the best use of everyone's time.
>
> Please send me any feedback, proposed topic changes, additions, or
> questions you would like to pre-register on the agenda.
>
> I will send a reminder to this list with a finalized Agenda in advance of
> the meeting.
>
> Best,
>
> Jeremy
>
> - Bug Bounty Updates (10 Minutes)
> - Non-Interactive Lightning Channels (20 minutes)
>   + https://rubin.io/bitcoin/2021/12/11/advent-14/
>   + https://utxos.org/uses/non-interactive-channels/
> - CTV's "Dramatic" Improvement of DLCs (20 Minutes)
>   + Summary: https://zensored.substack.com/p/supercharging-dlcs-with-ctv
>   +
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019808.html
>   + https://rubin.io/bitcoin/2021/12/20/advent-23/
> - PathCoin (15 Minutes)
>   + Summary: A proposal of coins that can be transferred in an offline
> manner by pre-compiling chains of transfers cleverly.
>   + https://gist.github.com/AdamISZ/b462838cbc8cc06aae0c15610502e4da
>   +
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019809.html
> - OP_TXHASH (30 Minutes)
>   + An alternative approach to OP_CTV + APO's functionality via a
> programmable tx hash opcode.
>   + See discussion thread at:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html
> - Emulating CTV for Liquid (10 Minutes)
>   +
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019851.html
> - General Discussion (15 Minutes)
>
> Best,
>
> Jeremy
>
>
>
>
> --
> @JeremyRubin 
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV dramatically improves DLCs

2022-02-06 Thread Jeremy Rubin via bitcoin-dev
I'm not sure what is meant concretely by (5) but I think overall
performance is ok here. You will always have 10mins or so to confirm the
DLC so you can't be too fussy about performance!


I mean that if you think of the CIT points as being the X axis (or
independent axes if multivariate) of a contract, the Y axis is the
dependent variable represented by the CTV hashes.


For a DLC living inside a lightning channel, which might be updated between
parties e.g. every second, this means you only have to recompute the
cheaper part of the DLC when you update the payoff curve (y axis), and even
then you only have to update the points whose y value changes.
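
A minimal sketch of that update pattern, with a stand-in hash function
rather than the real BIP-119 template hash:

import hashlib

def ctv_hash(payout) -> bytes:
    # stand-in for the real BIP-119 template hash of the payout tx
    return hashlib.sha256(repr(payout).encode()).digest()

class DlcTable:
    """Cache of per-outcome CTV hashes; x = attestation point, y = payout."""
    def __init__(self, payoffs: dict):
        self.payoffs = dict(payoffs)
        self.hashes = {x: ctv_hash(y) for x, y in payoffs.items()}

    def update_payoffs(self, new_payoffs: dict):
        # recompute hashes only for the points whose y value actually changed
        for x, y in new_payoffs.items():
            if self.payoffs.get(x) != y:
                self.payoffs[x] = y
                self.hashes[x] = ctv_hash(y)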

For on chain DLCs this point is less relevant since the latency of block
space is larger.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] BIP-119 CTV Meeting #3 Draft Agenda for Tuesday February 8th at 12:00 PT

2022-02-02 Thread Jeremy Rubin via bitcoin-dev
Bitcoin Developers,

The 3rd instance of the recurring meeting is scheduled for Tuesday February
8th at 12:00 PT in channel ##ctv-bip-review in libera.chat IRC server.

The meeting should take approximately 2 hours.

The topics proposed to be discussed are agendized below. Please review the
agenda in advance of the meeting to make the best use of everyone's time.

Please send me any feedback, proposed topic changes, additions, or
questions you would like to pre-register on the agenda.

I will send a reminder to this list with a finalized Agenda in advance of
the meeting.

Best,

Jeremy

- Bug Bounty Updates (10 Minutes)
- Non-Interactive Lightning Channels (20 minutes)
  + https://rubin.io/bitcoin/2021/12/11/advent-14/
  + https://utxos.org/uses/non-interactive-channels/
- CTV's "Dramatic" Improvement of DLCs (20 Minutes)
  + Summary: https://zensored.substack.com/p/supercharging-dlcs-with-ctv
  +
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019808.html
  + https://rubin.io/bitcoin/2021/12/20/advent-23/
- PathCoin (15 Minutes)
  + Summary: A proposal of coins that can be transferred in an offline
manner by pre-compiling chains of transfers cleverly.
  + https://gist.github.com/AdamISZ/b462838cbc8cc06aae0c15610502e4da
  +
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019809.html
- OP_TXHASH (30 Minutes)
  + An alternative approach to OP_CTV + APO's functionality via a
programmable tx hash opcode.
  + See discussion thread at:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html
- Emulating CTV for Liquid (10 Minutes)
  +
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019851.html
- General Discussion (15 Minutes)

Best,

Jeremy




--
@JeremyRubin 
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] CTV Meeting #2 Summary & Minutes

2022-02-02 Thread Jeremy Rubin via bitcoin-dev
This meeting was held January 25th, 2022. The meeting logs are available
https://gnusha.org/ctv-bip-review/2022-01-25.log

Please review the agenda in conjunction with the notes:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019807.html

Feel free to make any corrections if I did not summarize accurately.

The next meeting is next Tuesday at 12:00 PT. I will attempt to circulate a
pre-meeting agenda draft shortly.

Best,

Jeremy

*Bug Bounty Update:*

   1. Basic Rules set, working to formalize the program.
   2. It turns out that 1 person allocating ~$10k is easy; a
   multi-stakeholder organization requires more formality.
   3. 501c3 status / tax deductibility available.
   4. See here for more details:
   
https://docs.google.com/document/d/1pN6YzQ6HlR8t_-ZZoEdTegt88w6gJzCkcA_a4IXpKAo/edit
   5. Rules still subject to change, but issues found under the current
   descriptions will be awarded in good faith by me/Ariel for now.



*Notes from Feedback Review:*

*Luke's Feedback:*

   1. Sentiment that activation / CTV should be discussed somewhat
   separately.
   2. Sentiment that having more clear-cut use cases is good; no agreement
   about what venue / type of document those should be (no disagreement
   really either, just that BIPs might be too formal, but blog posts might
   not be formal enough).


*James' Feedback:*

   1. Sentiment that a minor slowdown isn't problematic; we've done it
   before for other precomputations.
   2. James was to spend a bit more time on benchmarking a more modern
   segment of the chain (the range he used was slightly irrelevant given its
   low period of segwit adoption).
   3. *After meeting: James' benchmarks show updates for CTV don't cause any
   notable slowdown for current chain segments.*


*Peter's Feedback:*

   1. Denial-of-Service concerns seem largely addressed.
   2. Comment on tests was a result of reviewing an outdated branch, not the PR.
   3. Main feedback that "sticks" is wanting the use cases to be more clear:

I've seen some reviews that almost run into a kind of paradox of choice and
> are turned off by all the different potential applications. This isn't
> necessarily a bad thing. As we've seen with Taproot and now even CTV with
> the DLC post, there are going to be use cases and standards nobody's
> thought of yet, but having them out there all at once can be difficult for
> some to digest



*Sapio*

   1. Sapio can be used today, without CTV.
   2. Main change with CTV is more "non-interactivity".
   3. Need for more descriptive terms than "non-interactive", e.g.,
   "asynchronous non-blocking", "independently verifiable", "non-stallable".
   4. Composability is cool, but people aren't doing that much composable
   stuff anyways so it's probably under-appreciated.



*Vaults*

   1. Very strong positive sentiment for Vaults.
   2. CTV eliminates "toxic waste" from the setup of vaults with pre-signed
   txns / requirement for a RNG.
   3. CTV/Sapio composability makes vaults somewhat "BIP Resistant" because
   vaults could be customized heavily per user, depending on needs.
   4. CPFP for smart contracts is not in the best state; improving
   CPFP/package relay is important for these designs.
   5. The ability to *prove* vaults were constructed correctly w/o toxic
   waste, e.g., 30 years later, is pretty important for high security uses
   (as opposed to assuming correctness with presigned txns).
   6. More flexible vaults (e.g., withdraw up to X amount per step v.s.
   fixed X per step) are desirable, but can be emulated by withdrawing X and
   sending it somewhere else (e.g. another vault) without major loss of
   security properties or network load -- more flexible vault covenants have
   greater space/validation costs v.s. simpler CTV ones.



*Congestion Control*

   1. Sentiments shared that no one really cares about this issue and it's
   bad marketing.
   2. Layer 2 to 1 Index "21i": how long it takes an L2 (sidechain,
   exchange, mining pool, etc.) to clear all liabilities to end users. CTV
   improves this to 1 block; currently clearing out an exchange could take
   weeks and could also trigger "thundering herd" behaviors whereby, if the
   expected time to withdraw becomes too long, you then also need to withdraw.
   3. Anecdotally, Exchanges seem less interested in congestion control,
   Mining Pools and Lightning Channel openers seem more into it.


Main Issues & Answers:

Q: wallet complexity?
A: Wallets largely already need to understand most of the logic for CTV if
they are to be rational:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019756.html

Q: uses more space overall?
A: It doesn't use more space than existing incentive-compatible behavior
around how you might already split up txns, and even if it did, it's a small
constant factor more. See https://utxos.org/analysis/batching_sim/ for more
analysis.

Q: block space is cheap right now, why do we need this?
A: We do not want or expect blockspace to be cheap in the future; we should
plan for 

Re: [bitcoin-dev] Why CTV, why now?

2022-02-01 Thread Jeremy Rubin via bitcoin-dev
I agree this emulation seems sound, but I also tap out at how the CT stuff
works with this type of covenant as well.

Happy hacking!

On Tue, Feb 1, 2022, 5:29 PM Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Wed, Jan 05, 2022 at 02:44:54PM -0800, Jeremy via bitcoin-dev wrote:
> > CTV was an output of my personal "research program" on how to make simple
> > covenant types without undue validation burdens. It is designed to be the
> > simplest and least risky covenant specification you can do that still
> > delivers sufficient flexibility and power to build many useful
> applications.
>
> I believe the new elements opcodes [0] allow simulating CTV on the liquid
> blockchain (or liquid-testnet [1] if you'd rather use fake money but not
> use Jeremy's CTV signet). It's very much not as efficient as having a
> dedicated opcode, of course, but I think the following script template
> would work:
>
> INSPECTVERSION SHA256INITIALIZE
> INSPECTLOCKTIME SHA256UPDATE
> INSPECTNUMINPUTS SCRIPTNUMTOLE64 SHA256UPDATE
> INSPECTNUMOUTPUTS SCRIPTNUMTOLE64 SHA256UPDATE
>
> PUSHCURRENTINPUTINDEX SCRIPTNUMTOLE64 SHA256UPDATE
> PUSHCURRENTINPUTINDEX INSPECTINPUTSEQUENCE SCRIPTNUMTOLE64 SHA256UPDATE
>
> { for <n> in 0..<num outputs>
>  <n> INSPECTOUTPUTASSET CAT SHA256UPDATE
>  <n> INSPECTOUTPUTVALUE DROP SIZE SCRIPTNUMTOLE64 SWAP CAT SHA256UPDATE
>  <n> INSPECTOUTPUTNONCE SIZE SCRIPTNUMTOLE64 SWAP CAT SHA256UPDATE
>  <n> INSPECTOUTPUTSCRIPTPUBKEY SWAP SIZE SCRIPTNUMTOLE64 SWAP CAT CAT
> SHA256UPDATE
> }
>
> SHA256FINALIZE <expected CTV-style hash> EQUAL
>
> Provided NUMINPUTS is one, this also means the txid of the spending tx is
> fixed, I believe (since these are taproot-only opcodes, scriptSig
> malleability isn't possible); if NUMINPUTS is greater than one, you'd
> need to limit what other inputs could be used somehow, which would be
> application specific, I think.
>
> I think that might be compatible with confidential assets/values, but
> I'm not really sure.
>
> I think it should be possible to use a similar approach with
> CHECKSIGFROMSTACK instead of "<hash> EQUAL" to construct APO-style
> signatures on elements/liquid. Though you'd probably want to have the
> output inspection blocks wrapped with "INSPECTNUMOUTPUTS <n> GREATERTHAN
> IF .. ENDIF". (In that case, beginning with "PUSH[FakeAPOSig] SHA256
> DUP SHA256INITIALIZE SHA256UPDATE" might also be sensible, so you're
> not signing something that might be misused in a different context later)
>
>
> Anyway, since liquid isn't congested, and mostly doesn't have lightning
> channels built on top of it, probably the vaulting application is the
> only interesting one to build on liquid today? There's apparently
> about $120M worth of BTC and $36M worth of USDT on liquid, which seems
> like it could justify some vault-related work. And real experience with
> CTV-like constructs seems like it would be very informative.
>
> Cheers,
> aj
>
> [0]
> https://github.com/ElementsProject/elements/blob/master/doc/tapscript_opcodes.md
> [1] https://liquidtestnet.com/
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
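
For intuition, here is a rough Python rendering of the commitment the
quoted INSPECT/SHA256 script accumulates; the exact field widths and
serialization details are my guesses, not the Elements spec:

import hashlib

def ctv_style_commitment(version: int, locktime: int, n_inputs: int,
                         n_outputs: int, input_index: int, sequence: int,
                         outputs: list) -> bytes:
    # mirror the quoted script: stream each inspected field into one SHA256
    h = hashlib.sha256()
    h.update(version.to_bytes(4, "little"))    # INSPECTVERSION
    h.update(locktime.to_bytes(4, "little"))   # INSPECTLOCKTIME
    for n in (n_inputs, n_outputs, input_index, sequence):
        h.update(n.to_bytes(8, "little"))      # SCRIPTNUMTOLE64'd fields
    for asset, value, nonce, spk in outputs:   # per-output inspection loop
        h.update(asset)                        # fixed-size asset commitment
        for blob in (value, nonce, spk):       # length-prefixed, as in script
            h.update(len(blob).to_bytes(8, "little") + blob)
    return h.digest()
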
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-01-29 Thread Jeremy Rubin via bitcoin-dev
Perhaps there is some misunderstanding.  TXHASH + CSFSV doesn't allow for
complex or recursive covenants.  Typically CAT is needed, at minimum, to
create those sorts of things.  TXHASH still amounts to deploying a
non-recursive covenant construction.


This seems false to me.

<flags> txhash <flags> txhash equalverify

Is that not a recursive covenant? With a little extra work you can also
control for amounts and stuff.
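
One way to read that two-push script (my interpretation; the original
placeholder pushes were lost in archiving): the first flags select the
current input's scriptPubKey, the second select the output's, so the
equality check forces every spend to recreate the same script. A toy Python
model of that reading:

import hashlib

def txhash(tx: dict, field: str) -> bytes:
    # stand-in for the proposed TXHASH opcode restricted to a single field
    return hashlib.sha256(tx[field]).digest()

def covenant_holds(tx: dict) -> bool:
    # "<flags> txhash <flags> txhash equalverify", with the two flag pushes
    # selecting the input's and the output's scriptPubKey respectively
    return txhash(tx, "input_spk") == txhash(tx, "output_spk")

spend = {"input_spk": b"<this covenant script>",
         "output_spk": b"<this covenant script>"}
assert covenant_holds(spend)  # the output carries the covenant forward
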
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV dramatically improves DLCs

2022-01-28 Thread Jeremy Rubin via bitcoin-dev
Apologies for the double post*, but I just had a follow-up idea
that's pretty interesting to me.

You can make the close portion of a DLC be an "optimistic" execution with a
choice of justice scheme. This enables closing a DLC somewhat securely
without exposing the oracles on-chain at all.

Assuming honest oracles, the only cost of this mechanism over the previous
one is that you have to do a script path spend (but it can be a top-level
branch, since it's the "most likely" one).


For every DLC branch like:

<CET hash for outcome i> CHECKTEMPLATEVERIFY
<oracle 1 point for outcome i> CHECKSIG
<oracle 2 point for outcome i> CHECKSIGADD
<oracle 3 point for outcome i> CHECKSIGADD
2 EQUAL


add 2 branches:

<CET-hash-Alice> CHECKTEMPLATEVERIFY
<Alice key> CHECKSIG

<CET-hash-Bob> CHECKTEMPLATEVERIFY
<Bob key> CHECKSIG


This enables Alice or Bob to "lock in" a redemption of the contract
that becomes spendable by them after a timeout. CET-hash-* should
include an nLockTime/nSequence such that it matures at the same time as
when the attestation points should be known.


Where CET-hash-T sends funds to a DLC that has the following conditions:


(cooperate):

pk_internal = musig(Alice, Bob)

or (unilateral timeout):

<T's key> CHECKSIG <2 weeks> CSV

or (show oracles for this outcome):

<CET hash for outcome i> CHECKTEMPLATEVERIFY
<oracle 1 point for outcome i> CHECKSIG
<oracle 2 point for outcome i> CHECKSIGADD
<oracle 3 point for outcome i> CHECKSIGADD
2 EQUAL

or (justice with no punishment), forall j != i:

<CET hash for outcome j, refunding both parties> CHECKTEMPLATEVERIFY
<oracle 1 point for outcome j> CHECKSIG
<oracle 2 point for outcome j> CHECKSIGADD
<oracle 3 point for outcome j> CHECKSIGADD
2 EQUAL

or (justice with punishment), forall j != i:

<CET hash for outcome j, penalizing T> CHECKTEMPLATEVERIFY
<oracle 1 point for outcome j> CHECKSIG
<oracle 2 point for outcome j> CHECKSIGADD
<oracle 3 point for outcome j> CHECKSIGADD
2 EQUAL


Justice with punishment seems to me to be the better option since T is
actively choosing this resolution (the CTV transition is signed), but
justice with no punishment might be better if you think the oracles
might screw you over and collude to steal.

One interesting question is whether the justice transactions can be
"compressed" so there are fewer of them for a given outcome. I.e., if Bob
has claimed that the outcome is 35, and there are 100 total outcomes, do we
need 99 justice paths or is there a way to make fewer of them? Intuitively,
it would seem so, because if we have an 8-of-10 threshold for picking a
path, a 3-of-10 proof would be sufficient to prove Bob claimed the 8-of-10
falsely. However, that then means 3-of-10 could collude, vs.
the fraud proof requiring a full 8-of-10 counter. Things to think about!
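
A quick sanity check of that arithmetic, as a sketch (the "some oracle must
have signed twice" counting argument is spelled out in the comments):

n_outcomes, n_oracles, threshold = 100, 10, 8

# naive scheme: one justice path per outcome j != i that Bob did not claim
naive_paths = n_outcomes - 1          # 99

# counter-proof idea: if 8 oracles attested to Bob's claimed outcome, then
# 3 attestations to any other outcome give 8 + 3 = 11 > 10 signatures,
# so at least one oracle signed two outcomes -- proof of equivocation
counter_quorum = n_oracles - threshold + 1   # 3

print(naive_paths, counter_quorum)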


Best,


Jeremy


* this might actually be a triple or quadruple post depending on how
you count; I adjusted which email address was the subscriber on my mailing
list account and resultantly sent from the old address... sincere
apologies to those who were on the CC if you are seeing this message more
than once.

--
@JeremyRubin 



On Fri, Jan 28, 2022 at 9:21 AM Jeremy  wrote:

> Lloyd,
>
> This is an excellent write-up; the idea and benefits are clear.
>
> Is it correct that in the case of a 3/5th threshold it is a total 10x *
> 30x = 300x improvement? Quite impressive.
>
> I have a few notes of possible added benefits / features of DLCs with CTV:
>
> 1) CTV also enables a "trustless timeout" branch, whereby you can have a
> failover claim that returns funds to both sides.
>
> There are a few ways to do this:
>
> A) The simplest is just an oracle-free `<timeout tx hash> CTV` branch,
> whereby the timeout transaction has an absolute/relative timelock after
> the creation of the DLC in question.
>
> B) An alternative approach I like is to have the base DLC have a branch
> `<hash> CTV` which pays into a DLC that is the exact same except it
> removes the just-used branch and replaces it with `<hash of (timeout
> tx)> CTV`, which contains a relative timelock R for the desired amount of
> time to resolve. This has the advantage of always guaranteeing at least R
> amount of time to "return funds" to the participating parties once the
> Oracles have been claimed to be non-live.
>
>
> 2) CTV DLCs are non-interactive asynchronously third-party unilaterally
> creatable.
>
> What I mean by this is that it is possible for a single party to create a
> DLC on behalf of another user since there is no required per-instance
> pre-signing or randomly generated state. E.g., if Alice wants to create a
> DLC with Bob, and knows the contract details, oracles, and a key for Bob,
> she can create the contract and pay to it unilaterally as a payment to Bob.
>
> This enables use cases like pay-to-DLC addresses. Pay-to-DLC addresses can
> also be constructed and then sent (along with a specific amount) to a third
> party service (such as an exchange or Lightning node) to create DLCs
> without requiring the third party service to do anything other than make
> the payment as requested.
>
>
> 3) CTV DLCs can be composed in interesting ways
>
> Options over DLCs open up many exciting types of instrument where Alice
> can do things like:
> A) Create an Option expiring in 1 week where Bob can add funds to pay a
> premium and "Open" a DLC on an outcome closing in 1 year
> B) Create an Option expiring in 1 week where one-of-many Bobs can pay the
> premium (on-chain DEX?).
>
>  See https://rubin.io/bitcoin/2021/12/20/advent-23/ for more concrete
> stuff around this.
>
> 

[bitcoin-dev] BIP Draft: Minimum Viable TXIn Hash

2015-07-23 Thread Jeremy Rubin via bitcoin-dev
Please see the following draft BIP, which should decrease the number of
bytes needed per transaction. This is very much a draft BIP, as the design
space for this type of improvement is large.

This BIP can be rolled out by a soft fork.

Improvements are around 12% for a standard one-in, two-out txn, and even
more with more inputs.

https://gist.github.com/JeremyRubin/e175662d2b8bf814a688
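
For intuition on where a ~12% figure can come from, a back-of-envelope
sketch; the byte sizes and the 4-byte prefix are assumptions of mine, not
numbers taken from the gist:

# approximate serialized size of a legacy 1-input, 2-output P2PKH tx
TX_BYTES   = 226
TXID_FULL  = 32   # full previous-txid carried in each input today
TXID_SHORT = 4    # hypothetical "minimum viable" prefix length

saved = TXID_FULL - TXID_SHORT
print(f"{saved / TX_BYTES:.1%}")  # ~12.4% for one input; more with more inputs
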
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP: Short Term Use Addresses for Scalability

2015-07-22 Thread Jeremy Rubin via bitcoin-dev
I think the catch here is that under STUA (short term use addresses) there is
a strict incentive: you can reduce the transaction fee for these txns. This
also fits with the general model that you pay the miners for security. My
belief is that when there is a savings benefit to be had, large players will
prefer it at a minimum, and users will desire it.


Your analysis of the savings is inaccurate; it comes to be at or greater than
20 bytes because there are typically 2 UTXOs or more. I get that this is
still marginal, but when the margins are tight this is a pretty decent gain.


The security decrease is actually less extreme than it seems. This is for
multiple reasons:
1) You can select LEN_PARAM when you make the tx to be secure at that time.
Adding a byte or two gets much more security while still keeping it lean.
2) For a small transaction, the hash power is less rewarding than just
working on the blockchain or doing something else.
3) These addresses are only for short term use, not permanent storage. As
such, if you model the threat it isn't great (I'm using this address for
one day; someone would have to grind it in that time).
4) Because it is a UTXO saving, it reduces memory bloat.

It would be interesting to get a more exact analysis on the time needed to
run a brute force as it involves computing a valid keypair and hashing for
each attempt.
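
As a rough starting point for that analysis, a sketch that ignores the
per-attempt keypair-generation cost and assumes a hypothetical attacker
speed:

def brute_force_years(hash_bytes: int, attempts_per_sec: float) -> float:
    # expected preimage attempts for an n-byte address hash (half the space)
    attempts = 2 ** (8 * hash_bytes - 1)
    return attempts / attempts_per_sec / (3600 * 24 * 365)

# e.g. a 10-byte (80-bit) hash vs. a hypothetical 1e12 attempts/sec attacker
print(f"{brute_force_years(10, 1e12):,.0f} years")  # ~19,000 years expected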



On Thu, Jul 23, 2015 at 5:06 AM, Gavin Andresen via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 On Wed, Jul 22, 2015 at 4:34 PM, Tier Nolan via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 It also requires most clients to be updated to support the new address
 system.


 That's the killer: introducing Yet Another Type of Bitcoin Address takes a
 very long time and requires a lot of people to change their code. At least,
 that was the lesson learned when we introduced P2SH addresses.

 I think it's just not worth it for a very modest space savings (10 bytes,
 when scriptSig+scriptPubKey is about 120 bytes), especially with the
 extreme decrease in security (going from 2^160 to 2^80 to brute-force).

 --
 --
 Gavin Andresen


 ___
 bitcoin-dev mailing list
 bitcoin-dev@lists.linuxfoundation.org
 https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev