Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-05-02 Thread Jeremy Rubin
Ok, got it. Won't waste anyone's time on terminology pedantry.


The model that I proposed above is simply what *any* correct timestamping
service must do. If OTS does not follow that model, then I suspect whatever
OTS is, is provably incorrect or, in this context, unreliable, even when
servers and clients are honest. Unreliable might mean different things to
different people, I'm happy to detail the types of unreliability issue that
arise if you do not conform to the model I presented above (of which,
linearizability is one way to address it, there are others that still
implement epoch based recommitting that could be conceptually sound without
requiring linearizability).

Do you have any formal proof of what guarantees OTS provides against which
threat model? This is likely difficult to produce without a formal model of
what OTS is, but perhaps you can give your best shot at producing one and
we can carry the conversation on productively from there.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Removing the Sats From the Eltoo Ratchet

2022-04-30 Thread Jeremy Rubin
Devs,

One sketch of an idea on how to improve Eltoo-like constructions by making
the contract "optically isolated".


Create an output F with:

Amount: A, Key: MuSig(A,B)

Create a second output R with:

Amount: Dust, Key: MuSig(A', B')

and sign ratchet updates something like:

Amount: Dust, Key: tr(Musig(A', B'), {OP_1 CHECKSIG  CLTV,  CSV
OP_1 CHECKSIG 0 OP_CHECKINPUTOUTPOINT  EQUAL})

And also sign a Tx where {F, R using path with OP_CHECKINPUT} -> {A's
amount, B's amount}.
F's signature must commit to R's script for Ratchet with N, but not R's
TXID.
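To make the moving pieces concrete, here is a minimal Python data-model
sketch of the construction above. It is only an illustration: MuSig
aggregation, the hypothetical OP_CHECKINPUTOUTPOINT/OP_CHECKINPUT path, and
the sighash details are stubbed out as placeholder strings, and every name in
it (Output, ratchet_script, etc.) is invented for this sketch rather than
taken from any real library.

from dataclasses import dataclass
from typing import List, Tuple

DUST = 330  # sats; an illustrative dust-level amount for the ratchet output R

@dataclass
class Output:
    amount: int
    script: str  # descriptor-style placeholder, not real Script

def ratchet_script(n: int) -> str:
    # Placeholder for the taproot ratchet leaves sketched above; the real thing
    # needs opcodes (e.g. OP_CHECKINPUTOUTPOINT) that do not exist today.
    return "tr(musig(A',B'),ratchet_state_%d)" % n

def make_channel(amount_a: int, amount_b: int) -> Tuple[Output, Output]:
    funds = Output(amount_a + amount_b, "musig(A,B)")  # F: carries all the sats
    ratchet = Output(DUST, ratchet_script(0))          # R: carries only state
    return funds, ratchet

def update_ratchet(n: int) -> Output:
    # Each state update re-signs only the dust-valued ratchet output R.
    return Output(DUST, ratchet_script(n))

def settlement(funds: Output, amount_a: int, amount_b: int) -> List[Output]:
    # The tx spending {F, R via the OP_CHECKINPUT path} -> {A's amount, B's amount}.
    # F's signature commits to R's script for state N, but not to R's TXID.
    assert amount_a + amount_b == funds.amount
    return [Output(amount_a, "pk(A)"), Output(amount_b, "pk(B)")]

funds, ratchet = make_channel(40_000, 60_000)
ratchet = update_ratchet(7)
print(settlement(funds, 45_000, 55_000))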


Why go through the trouble of two UTXOs per channel? Is it even two
channels?

Here are some properties this 'flipped channel' might have. Are there
others you can think of?

1) Privacy: funds are unlinked from being a channel until the end of the
contested close period. All Ratchet txns look the same on the network, making
it harder for third parties to shake you down for more fees.
2) Reuse: the Ratchet can be reused if the channel cooperatively closes /
splits funds out
3) Cooperative close can't be pinned by past reveals of ratchet state for
M-N channels
4) The Ratchet can create multiple ratchet outputs at a time to drive multiple
channel balances -- updating the ratchet requires N-N, but each
subfund requires only M-M
5) Some types of issues in the ratchet protocol still permit recovery in
the custody layer
6) If you still want to carry value along the ratchet, you can splice
funds indirectly into that ratchet without linking the funds on-chain
(e.g., in a channel factory, you can use the trick in 4 to dynamically add a
sub M-M of the N-N for a new separate balance), only linked on
uncooperative closes.

I know this is handwavy WRT the sighash flags/opcodes required, but I'm
merely here to inspire and figured the idea of abstracting the ratchet was
novel.

Best,

Jeremy






--
@JeremyRubin <https://twitter.com/JeremyRubin>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-04-17 Thread Jeremy Rubin
The 'lots of people' stuff (get confused, can't figure out what I'm
quoting, actually are reading this conversation) is an appeal to an
authority that doesn't exist. If something is unclear to you, let me know.
If it's unclear to a supposed existential person or set of persons, they
can let me know.


Concretely, I am confused by how OTS can both support RBF for updating to
larger commitments (the reason you're arguing with me) and not have an
epoch-based re-committing scheme and still be correct. My assumption now,
short of a coherent spec that's not just 'read the code', is that OTS
probably is not formally correct and has some holes in what is
committed to, or relies on clients re-requesting proofs if they fail to be
committed. In any case, you would be greatly aided by having an actual spec
for OTS, since I'm not interested in the specifics of OTS software, but I'm
willing to look at the protocol. So if you do that, maybe we can talk more
about the issue you see with how sponsors works.

Further, I think that if there is something that sponsors does that could
make a hypothetical OTS-like service work better, in a way that would be
opaque (read: soft-fork-like wrt compatibility) to clients, then we should
just change what OTS is rather than committing ourselves to a worse design
in service of some unstated design goals. In particular, it seems that
OTS's servers can be linearized, and because old clients aren't looking for
linearization, the new linearization won't be a breaking change for
old clients, just calendar servers. And new clients can benefit from
linearization.
--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Fri, Apr 15, 2022 at 7:52 AM Peter Todd  wrote:

> On Mon, Apr 11, 2022 at 09:18:10AM -0400, Jeremy Rubin wrote:
> > > nonsense marketing
> >
> > I'm sure the people who are confused about "blockchain schemes as \"world
> > computers\" and other nonsense
> > marketing" are avid and regular readers of the bitcoin devs mailing list
> so
> > I offer my sincerest apologies to all members of the intersection of
> those
> > sets who were confused by the description given.
>
> Of course, uninformed people _do_ read all kinds of technical materials.
> And
> more importantly, those technical materials get quoted by journalists,
> scammers, etc.
>
> > > useless work
> >
> > progress is not useless work, it *is* useful work in this context. you
> have
> > committed to some subset of data that you requested -- if it was
> 'useless',
> > why did you *ever* bother to commit it in the first place? However, it is
> > not 'maximally useful' in some sense. However, progress is progress --
> > suppose you only confirmed 50% of the commitments, is that not progress?
> If
> > you just happened to observe 50% of the commitments commit because of
> > proximity to the time a block was mined and tx propagation naturally
> would
> > you call it useless?
>
> Please don't trim quoted text to the point where all context is lost. Lots
> of
> people read this mailing list and doing that isn't helpful to them.
>
> > > Remember that OTS simply proves data in the past. Nothing more.
> > > OTS doesn't have a chain of transactions
> > Gotcha -- I've not been able to find an actual spec of Open Time Stamps
>
> The technical spec of OpenTimestamps is of course the normative validation
> source code, currently python-opentimestamps, similar to how the technical
> spec
> of Bitcoin is the consensus parts of the Bitcoin Core codebase. The
> explanatory
> docs are linked on https://opentimestamps.org under the "How It Works"
> section.
> It'd be good to take the linked post in that section and turn it into
> better
> explanatory materials with graphics (esp interactive/animated graphics).
>
> > anywhere, so I suppose I just assumed based on how I think it *should*
> > work. Having a chain of transactions would serve to linearize history of
> > OTS commitments which would let you prove, given reorgs, that knowledge
> of
> > commit A was before B a bit more robustly.
>
> I'll reply to this as a separate email as this discussion - while useful -
> is
> getting quite off topic for this thread.
>
> > >  I'd rather do one transaction with all pending commitments at a
> > particular time
> > rather than waste money on mining two transactions for a given set of
> > commitments
> >
> > This sounds like a personal preference v.s. a technical requirement.
> >
> > You aren't doing any extra transactions in the model i showed, what
> you're
> > doing is selecting the window for the next based on the prior conf.
>
> ...the model you showed is wrong, as there is 

Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-04-11 Thread Jeremy Rubin
> nonsense marketing

I'm sure the people who are confused about "blockchain schemes as \"world
computers\" and other nonsense
marketing" are avid and regular readers of the bitcoin devs mailing list so
I offer my sincerest apologies to all members of the intersection of those
sets who were confused by the description given.

> useless work

Progress is not useless work; it *is* useful work in this context. You have
committed to some subset of data that you requested -- if it was 'useless',
why did you *ever* bother to commit it in the first place? However, it is
not 'maximally useful' in some sense. Still, progress is progress --
suppose you only confirmed 50% of the commitments, is that not progress? If
you just happened to observe 50% of the commitments commit because of
proximity to the time a block was mined and natural tx propagation, would
you call it useless?

> Remember that OTS simply proves data in the past. Nothing more.
> OTS doesn't have a chain of transactions
Gotcha -- I've not been able to find an actual spec of OpenTimestamps
anywhere, so I suppose I just assumed based on how I think it *should*
work. Having a chain of transactions would serve to linearize history of
OTS commitments which would let you prove, given reorgs, that knowledge of
commit A was before B a bit more robustly.

>  I'd rather do one transaction with all pending commitments at a
> particular time rather than waste money on mining two transactions for a
> given set of commitments

This sounds like a personal preference vs. a technical requirement.

You aren't doing any extra transactions in the model I showed; what you're
doing is selecting the window for the next based on the prior conf.

See the diagram below: you would have to (if OTS is correct) support this
sort of 'attempt/confirm' head that tracks attempted commitments and
confirmed ones, and 'rewinds' after a confirm so that the next commit
contains the prior attempts that didn't make it.

[.]
 --^ confirm head tx 0 at height 34
^ attempt head after tx 0
 ---^ confirm head tx 1 at height 35
  --^ attempt head after tx 1
  ^ confirm head tx 2 at height 36
 ---^ attempt head after tx 2
  ---^ confirm head tx 3 at height 37

you can compare this to a "spherical cow" model where RBF is always perfect
and guaranteed inclusion:


[.]
 --^ confirm head tx 0 at height 34
   -^ confirm head tx 1 at height 35
   ---^ confirm head at tx 1 height 36
   -^ confirm head tx 3 at height 37

The same number of transactions gets used over the time period.
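Here is a small Python sketch of that attempt/confirm bookkeeping, under an
idealized model and an invented data layout (lists of commitment strings, not
real OTS structures): pending commitments accumulate, each broadcast attempt
covers everything pending at that moment, and a confirmation "rewinds" so the
next commit carries whatever the confirmed tx missed.

pending = []     # commitments received but not yet covered by a confirmed tx
attempted = []   # (txid, commitments) pairs broadcast but not yet confirmed

def receive(commitment):
    pending.append(commitment)

def broadcast_attempt(txid):
    # Each attempt commits to everything pending at broadcast time.
    attempted.append((txid, list(pending)))

def on_confirm(txid):
    global pending
    confirmed = dict(attempted).get(txid, [])
    # "Rewind": anything pending that was not in the confirmed tx must be
    # carried into the next commitment.
    pending = [c for c in pending if c not in confirmed]
    attempted.clear()

for c in ["a", "b"]:
    receive(c)
broadcast_attempt("tx0")
receive("c")                  # arrives after tx0 was broadcast
broadcast_attempt("tx0-rbf")  # RBF'd version also covers "c"
on_confirm("tx0")             # the older version is mined; "c" stays pending
print(pending)                # ['c'] -> goes into the next block's commitment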
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-20 Thread Jeremy Rubin
Morning!

>
> For the latter case, CPFP would work and already exists.
> **Unless** you are doing something complicated and offchain-y and involves
> relative locktimes, of course.
>
>
The "usual" design I recommend for Vaults contains something that is like:

{ CSV  CHECKSIG,  CHECKSIG}
or
{ CSV  CHECKSIG,  CHECKSIG)> CTV}


where, after an output is created, it has to hit maturity before being
hot-spendable but can be kicked to recovery any time before that (optional:
use CTV to actually transition on chain, removing the hot wallet, if the
cold key is hard to access).
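For concreteness, a miniscript-style rendering of the same policy in Python;
TIMEOUT, hot_key and cold_key are placeholders, and the ctv(...) fragment is
not real miniscript, it just stands in for the <H(tx paying to cold)> CTV
clause above.

TIMEOUT = 144  # illustrative maturity, in blocks, before the hot key may spend

def vault_policy(hot_key, recovery_branch):
    # Branch 1: after TIMEOUT blocks of maturity, the hot key may spend.
    # Branch 2: at any time before that, the output can be kicked to recovery.
    return "or(and(older(%d),pk(%s)),%s)" % (TIMEOUT, hot_key, recovery_branch)

# Variant 1: recovery requires a cold-key signature.
print(vault_policy("hot_key", "pk(cold_key)"))

# Variant 2: recovery is a CTV commitment to a template paying a cold-key-only
# output, so the cold key never has to come online just to evict the hot wallet.
print(vault_policy("hot_key", "ctv(H_of_tx_paying_cold_key)"))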


Note that this means if you're waiting for one of these outputs to be
created on chain, you cannot spend from the hot key, since it needs to
confirm on chain first. Spending from the cold key to CPFP the hot output is
an 'invalid move' (emergency key for a non-emergency situation).

Thus in order to CPFP, you would need a separate output just for CPFPing
that is not subject to these restrictions, or some sort of RBF-able addable
input/output. Or, Sponsors.


Jeremy
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-20 Thread Jeremy Rubin
opt-in or explicit tagging of fee account is a bad design IMO.

As pointed out by James O'Beirne in the other email, having an explicit key
required means you have to pre-plan. Suppose you're building a vault
meant to distribute funds over many years: do you really want a *specific*
precommitted key you have to maintain? What happens to your ability to bump
should it be compromised (which may be more likely if it's intended to be a
hot-wallet function for bumping)?

Furthermore, it's quite often the case that someone might do a low-fee
transaction that pays you and that you want to bump, but they choose to
opt out... then what? It's better that you should always be able to fee
bump.


--
@JeremyRubin 


On Sun, Feb 20, 2022 at 6:24 AM ZmnSCPxj  wrote:

> Good morning DA,
>
>
> > Agreed, you cannot rely on a replacement transaction would somehow
> > invalidate a previous version of it, it has been spoken into the gossip
> > and exists there in mempools somewhere if it does, there is no guarantee
> > that anyone has ever heard of the replacement transaction as there is no
> > consensus about either the previous version of the transaction or its
> > replacement until one of them is mined and the block accepted. -DA.
>
> As I understand from the followup from Peter, the point is not "this
> should never happen", rather the point is "this should not happen *more
> often*."
>
> Regards,
> ZmnSCPxj
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-20 Thread Jeremy Rubin
--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Sat, Feb 19, 2022 at 1:39 AM Peter Todd  wrote:

> On Fri, Feb 18, 2022 at 04:38:27PM -0800, Jeremy Rubin wrote:
> > > As I said, it's a new kind of pinning attack, distinct from other types
> > of pinning attack.
> >
> > I think pinning is "formally defined" as sequences of transactions which
> > prevent or make it less likely for you to make any progress (in terms of
> > units of computation proceeding).
>
> Mentioning "computation" when talking about transactions is misleading:
> blockchain transactions have nothing to do with computation.
>

It is in fact computation. Branding it as "misleading" is misleading... The
relevant literature is https://en.wikipedia.org/wiki/Non-blocking_algorithm;
sponsors help get rid of deadlocking so that any thread can be guaranteed
to make progress. E.g., this is critical in Eltoo, which is effectively a
coordinated multi-party computation on-chain to compute the highest
sequence number known by any worker.

That transactions are blobs of "verification" (which is itself a
computation) rather than dynamic computations is irrelevant to the fact
that series of transactions do represent computations.



> > Something that only increases possibility to make progress cannot be
> > pinning.
>
> It is incorrect to say that all use-cases have the property that any
> version of
> a transaction being mined is progress.
>

It is progress, tautologically. Progress is formally definable as a
transaction of any kind getting mined. Pinning prevents progress by an
adversarial worker. Sponsoring enables progress, but it may not be your
preferred interleaving. That's OK, but it's inaccurate to say it is not
progress.

> Your understanding of how OpenTimestamps calendars work appears to be
> incorrect. There is no chain of unconfirmed transactions. Rather, OTS
> calendars
> use RBF to _update_ the timestamp tx with a new merkle tip hash for to all
> outstanding per-second commitments once per new block. In high fee
> situations
> it's normal for there to be dozens of versions of that same tx, each with a
> slightly higher feerate.
>

I didn't claim there to be a chain of unconfirmed transactions; I claimed
that there could be a single output chain that you're RBF'ing one step per
block.

E.g., it could be something like

A_0 -> {A_1 w/ CSV 1 block, OP_RETURN {blah, foo}}
A_1 -> {A_2 w/ CSV 1 block, OP_RETURN {bar}}

such that A_i provably can't have an unconfirmed descendant. The notion
would be that you're replacing one with another. E.g., if you're updating
the calendar like:


Version 0: A_0 -> {A_1 w/ CSV 1 block, OP_RETURN {blah, foo}}
Version 1: A_0 -> {A_1 w/ CSV 1 block, OP_RETURN {blah, foo, bar}}
Version 2: A_0 -> {A_1 w/ CSV 1 block, OP_RETURN {blah, foo, bar, delta}}

and version 1 gets mined, then in A_1's spend you simply shift delta to
that (next) calendar.

A_1 -> {A_2 w/ CSV 1 block, OP_RETURN {delta}}

Thus my claim that someone sponsoring an old version can only delay the
calendar commit by 1 block.
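A Python toy of the single-output-chain idea above, with invented names and
contents: each version of a link RBF-replaces the previous one with a bigger
OP_RETURN batch, and if an older version is the one that confirms, whatever
it missed simply rides in the next link's spend.

def make_version(link, batch):
    # A_link -> {A_{link+1} w/ CSV 1 block, OP_RETURN {batch}}
    return {"spends": "A_%d" % link, "creates": "A_%d" % (link + 1),
            "op_return": set(batch)}

batch = ["blah", "foo"]
versions = [make_version(0, batch)]      # Version 0
batch.append("bar")
versions.append(make_version(0, batch))  # Version 1 (RBF of version 0)
batch.append("delta")
versions.append(make_version(0, batch))  # Version 2 (RBF of version 1)

mined = versions[1]                         # suppose version 1 is what gets mined
leftover = set(batch) - mined["op_return"]  # {'delta'} was not committed
next_link = make_version(1, leftover)       # shift it into A_1's spend
print(next_link)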





> OTS calendars can handle any of those versions getting mined. But older
> versions getting mined wastes money, as the remaining commitments still
> need to
> get mined in a subsequent transaction. Those remaining commitments are also
> delayed by the time it takes for the next tx to get mined.
>
> There are many use-cases beyond OTS with this issue. For example, some
> entities
> use "in-place" replacement for update low-time-preference settlement
> transactions by adding new txouts and updating existing ones. Older
> versions of
> those settlement transactions getting mined rather than the newer version
> wastes money and delays settlement for the exact same reason it does in
> OTS.
>
>
> > Lastly, if you do get "necromanced" on an earlier RBF'd transaction by a
> > third party for OTS, you should be relatively happy because it cost you
> > less fees overall, since the undoing of your later RBF surely returned
> some
> > satoshis to your wallet.
>
> As I said above, no it doesn't.
>
>
It does save money: since you had to pay to RBF, the N+1st txn will be
paying a higher fee than the Nth. So if someone else sponsors an earlier
version, then you save whatever feerate/fee bumps you would have paid, and
the funds are again in your change output (or something). You can apply
those change output savings to your next batch, which can include any
entries that have been dropped.
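A toy calculation of that point with made-up numbers: whatever the difference
is between the fee of the latest RBF bump and the fee of the older version
that actually confirmed stays in your change output, at the cost of
re-batching the missed entries.

fees = {"v0": 10_000, "v1": 12_000, "v2": 15_000}  # hypothetical RBF versions (sats)

mined = "v0"    # a third party "necromances" the earliest version
latest = "v2"   # the version you would otherwise have paid for
savings = fees[latest] - fees[mined]
print("saved %d sats of fees; they remain in the change output" % savings)
print("cost: commitments only in v1/v2 must ride in the next batch instead")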
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev




Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-18 Thread Jeremy Rubin
> As I said, it's a new kind of pinning attack, distinct from other types
of pinning attack.

I think pinning is "formally defined" as sequences of transactions which
prevent or make it less likely for you to make any progress (in terms of
units of computation proceeding).

Something that only increases possibility to make progress cannot be
pinning.

If you want to call it something else, with a negative connotation, maybe
call it "necromancing" (bringing back txns that would otherwise be
feerate/fee irrational).

I would posit that we should be wholly unconcerned with necromancing -- if
your protocol is particularly vulnerable to a third party necromancing then
your protocol is insecure and we shouldn't hamper Bitcoin's forward
progress on secure applications to service already insecure ones. Lightning
is particularly necromancy resistant by design, but pinning vulnerable.
This is also true with things like coinjoins which are necromancy resistant
but pinning vulnerable.

Necromancy in particular is not something that is absent from Bitcoin
today, and things like package relay and the elimination of pinning are
inherently at odds with preventing necromancy, e.g. for CPFP use cases.

In particular, for the use case you mentioned "Eg a third party could mess
up OpenTimestamps calendars at relatively low cost by delaying the mining
of timestamp txs.", this is incorrect. A third party can only accelerate
the mining of the timestamp transactions, though they *can* accelerate the
mining of any version of such a timestamp transaction. If you have a single
output chain that you're RBF'ing per block, then at most they can cause you
to shift the
calendar commits forward one block. But again, they cannot pin you. If you
want to shift it back one block earlier, just offer a higher fee for the
later RBF'd calendar. Thus the interference is limited by how much you wish
to pay to guarantee your commitment is in this block as opposed to the next.

By the way, you can already do out-of-band transaction fees to a very
similar effect, google "BTC transaction accelerator". If the attack were at
all valuable to perform, it could happen today.

Lastly, if you do get "necromanced" on an earlier RBF'd transaction by a
third party for OTS, you should be relatively happy because it cost you
less fees overall, since the undoing of your later RBF surely returned some
satoshis to your wallet.

Best,

Jeremy
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-10 Thread Jeremy Rubin
That's not really pinning; pinning usually refers to pinning something to
the bottom of the mempool whereas these mechanisms make it easier to
guarantee that progress can be made on confirming the transactions you're
interested in.

Oftentimes in these protocols "the call is coming from inside the house":
it's not a third party adding fees we are scared of, it's a direct party to
the protocol!

Sponsors or fee accounts would enable you to ensure the protocol you're
working on makes forward progress. For things like Eltoo the internal
ratchet makes this work well.

Protocols which depend on in-mempool replacements before confirmation
already must be happy (should they be secure) with any prior state being
mined. If a third party pays the fee you might even be happier since the
execution wasn't on your dime.

Cheers,

Jeremy

On Wed, Feb 9, 2022, 10:59 PM Peter Todd via bitcoin-dev <
bitcoin-...@lists.linuxfoundation.org> wrote:

> On Sat, Jan 01, 2022 at 12:04:00PM -0800, Jeremy via bitcoin-dev wrote:
> > Happy new years devs,
> >
> > I figured I would share some thoughts for conceptual review that have
> been
> > bouncing around my head as an opportunity to clean up the fee paying
> > semantics in bitcoin "for good". The design space is very wide on the
> > approach I'll share, so below is just a sketch of how it could work which
> > I'm sure could be improved greatly.
> >
> > Transaction fees are an integral part of bitcoin.
> >
> > However, due to quirks of Bitcoin's transaction design, fees are a part
> of
> > the transactions that they occur in.
> >
> > While this works in a "Bitcoin 1.0" world, where all transactions are
> > simple on-chain transfers, real world use of Bitcoin requires support for
> > things like Fee Bumping stuck transactions, DoS resistant Payment
> Channels,
> > and other long lived Smart Contracts that can't predict future fee rates.
> > Having the fees paid in band makes writing these contracts much more
> > difficult as you can't merely express the logic you want for the
> > transaction, but also the fees.
> >
> > Previously, I proposed a special type of transaction called a "Sponsor"
> > which has some special consensus + mempool rules to allow arbitrarily
> > appending fees to a transaction to bump it up in the mempool.
> >
> > As an alternative, we could establish an account system in Bitcoin as an
> > "extension block".
>
> 
>
> > This type of design works really well for channels because the addition
> of
> > fees to e.g. a channel state does not require any sort of pre-planning
> > (e.g. anchors) or transaction flexibility (SIGHASH flags). This sort of
> > design is naturally immune to pinning issues since you could offer to
> pay a
> > fee for any TXID and the number of fee adding offers does not need to be
> > restricted in the same way the descendant transactions would need to be.
>
> So it's important to recognize that fee accounts introduce their own kind
> of
> transaction pinning attacks: third parties would be able to attach
> arbitrary
> fees to any transaction without permission. This isn't necessarily a good
> thing: I don't want third parties to be able to grief my transaction
> engines by
> getting obsolete transactions confirmed in liu of the replacments I
> actually
> want confirmed. Eg a third party could mess up OpenTimestamps calendars at
> relatively low cost by delaying the mining of timestamp txs.
>
> Of course, there's an obvious way to fix this: allow transactions to
> designate
> a pubkey allowed to add further transaction fees if required. Which Bitcoin
> already has in two forms: Replace-by-Fee and Child Pays for Parent.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
> ___
> bitcoin-dev mailing list
> bitcoin-...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] BIP-119 CTV Meeting #3 Draft Agenda for Tuesday February 8th at 12:00 PT

2022-02-07 Thread Jeremy Rubin
Reminder:

This is in ~24 hours.

There have been no requests to add content to the agenda.

Best,

Jeremy
--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Wed, Feb 2, 2022 at 12:29 PM Jeremy Rubin 
wrote:

> Bitcoin Developers,
>
> The 3rd instance of the recurring meeting is scheduled for Tuesday
> February 8th at 12:00 PT in channel ##ctv-bip-review in libera.chat IRC
> server.
>
> The meeting should take approximately 2 hours.
>
> The topics proposed to be discussed are agendized below. Please review the
> agenda in advance of the meeting to make the best use of everyone's time.
>
> Please send me any feedback, proposed topic changes, additions, or
> questions you would like to pre-register on the agenda.
>
> I will send a reminder to this list with a finalized Agenda in advance of
> the meeting.
>
> Best,
>
> Jeremy
>
> - Bug Bounty Updates (10 Minutes)
> - Non-Interactive Lightning Channels (20 minutes)
>   + https://rubin.io/bitcoin/2021/12/11/advent-14/
>   + https://utxos.org/uses/non-interactive-channels/
> - CTV's "Dramatic" Improvement of DLCs (20 Minutes)
>   + Summary: https://zensored.substack.com/p/supercharging-dlcs-with-ctv
>   +
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019808.html
>   + https://rubin.io/bitcoin/2021/12/20/advent-23/
> - PathCoin (15 Minutes)
>   + Summary: A proposal of coins that can be transferred in an offline
> manner by pre-compiling chains of transfers cleverly.
>   + https://gist.github.com/AdamISZ/b462838cbc8cc06aae0c15610502e4da
>   +
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019809.html
> - OP_TXHASH (30 Minutes)
>   + An alternative approach to OP_CTV + APO's functionality by
> programmable tx hash opcode.
>   + See discussion thread at:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html
> - Emulating CTV for Liquid (10 Minutes)
>   +
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019851.html
> - General Discussion (15 Minutes)
>
> Best,
>
> Jeremy
>
>
>
>
> --
> @JeremyRubin <https://twitter.com/JeremyRubin>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] BIP-119 CTV Meeting #3 Draft Agenda for Tuesday February 8th at 12:00 PT

2022-02-07 Thread Jeremy Rubin
Bitcoin Developers,

The 3rd instance of the recurring meeting is scheduled for Tuesday February
8th at 12:00 PT in channel ##ctv-bip-review in libera.chat IRC server.

The meeting should take approximately 2 hours.

The topics proposed to be discussed are agendized below. Please review the
agenda in advance of the meeting to make the best use of everyone's time.

Please send me any feedback, proposed topic changes, additions, or
questions you would like to pre-register on the agenda.

I will send a reminder to this list with a finalized Agenda in advance of
the meeting.

Best,

Jeremy

- Bug Bounty Updates (10 Minutes)
- Non-Interactive Lightning Channels (20 minutes)
  + https://rubin.io/bitcoin/2021/12/11/advent-14/
  + https://utxos.org/uses/non-interactive-channels/
- CTV's "Dramatic" Improvement of DLCs (20 Minutes)
  + Summary: https://zensored.substack.com/p/supercharging-dlcs-with-ctv
  +
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019808.html
  + https://rubin.io/bitcoin/2021/12/20/advent-23/
- PathCoin (15 Minutes)
  + Summary: A proposal of coins that can be transferred in an offline
manner by pre-compiling chains of transfers cleverly.
  + https://gist.github.com/AdamISZ/b462838cbc8cc06aae0c15610502e4da
  +
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019809.html
- OP_TXHASH (30 Minutes)
  + An alternative approach to OP_CTV + APO's functionality by programmable
tx hash opcode.
  + See discussion thread at:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html
- Emulating CTV for Liquid (10 Minutes)
  +
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019851.html
- General Discussion (15 Minutes)

Best,

Jeremy




--
@JeremyRubin <https://twitter.com/JeremyRubin>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-01-19 Thread Jeremy
SIGHASH_BUNDLE
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-April/015862.html

By cycles I meant that if you commit to the sponsors by TXID from the
witness, you could "sponsor yourself" directly or through a cycle involving
more than 1 txn.

With OP_VER I was talking about the proposal I linked here
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
which used OP_VER to indicate a txn sponsoring txn. Because the OP_VER is
in the output space, and uses TXIDs, it is cycle-free.


--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>


On Wed, Jan 19, 2022 at 8:52 AM Billy Tetrud  wrote:

> Hmm, I don't know anything about  SIGHASH_BUNDLE. The only references
> online I can find are just mentions (mostly from you). What is
> SIGHASH_BUNDLE?
>
> > unless you're binding a WTXID
>
> That could work, but it would exclude cases where you have a transaction
> that has already been partially signed and someone wants to, say, only sign
> that transaction if some 3rd party signs a transaction paying part of the
> fee for it. Kind of a niche use case, but it would be nice to support it if
> possible. If the transaction hasn't been signed at all yet, a new
> transaction can just be created that includes the prospective fee-payer,
> and if the transaction is fully signed then it has a WTXID to use.
>
> > then you can have fee bumping cycles
>
> What kind of cycles do you mean? You're saying these cycles would make it
> less robust to reorgs?
>
> > OP_VER
>
> I assume you mean something other than pushing the version onto the stack
> <https://bitcoin.stackexchange.com/questions/97258/given-op-ver-was-never-used-is-disabled-and-not-considered-useful-can-its-meani>?
> Is that related to your fee account idea?
>
>
> On Wed, Jan 19, 2022 at 1:32 AM Jeremy  wrote:
>
>> Ah my bad i misread what you were saying as being about SIGHASH_BUNDLE
>> like proposals.
>>
>> For what you're discussing, I previously proposed
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
>> which is similar.
>>
>> The benefit of the OP_VER output is that SIGHASH_EXTERNAL has the issue
>> that unless you're binding a WTXID (which is maybe too specific?) then you
>> can have fee bumping cycles. Doing OP_VER output w/ TXID guarantees that
>> you are acyclic.
>>
>> The difference between a fee account and this approach basically boils
>> down to the impact on e.g. reorg stability, where the deposit/withdraw
>> mechanism is a bit more "robust" for reorderings in reorgs than the in-band
>> transaction approach, although they are very similar.
>>
>> --
>> @JeremyRubin <https://twitter.com/JeremyRubin>
>> <https://twitter.com/JeremyRubin>
>>
>>
>> On Tue, Jan 18, 2022 at 8:53 PM Billy Tetrud 
>> wrote:
>>
>>> >  because you make transactions third party malleable it becomes
>>> possible to bundle and unbundle transactions.
>>>
>>> What I was suggesting doesn't make it possible to malleate someone
>>> else's transaction. I guess maybe my proposal of using a sighash flag
>>> might have been unclear. Imagine it as a script opcode that just says "this
>>> transaction must be mined with this other transaction" - the only
>>> difference being that you can use any output with any encumberance as an
>>> input for fee bumping. It doesn't prevent the original transaction from
>>> being mined on its own. So adding junk inputs would be no more of a problem
>>> than dust attacks already are. It would be used exactly like cpfp, except
>>> it doesn't spend the parent.
>>>
>>> I don't think what I was suggesting is as different from your proposal.
>>> All the problems of fee revenue optimization and feerate rules that you
>>> mentioned seem like they'd also exist for your proposal, or for cpfp. Let
>>> me know if I should clarify further.
>>>
>>> On Tue, Jan 18, 2022 at 8:51 PM Jeremy  wrote:
>>>
>>>> The issue with sighash flags is that because you make transactions
>>>> third party malleable it becomes possible to bundle and unbundle
>>>> transactions.
>>>>
>>>> This means there are circumstances where an attacker could e.g. see
>>>> your txn, and then add a lot of junk change/inputs + 25 descendants and
>>>> strongly anchor your transaction to the bottom of the mempool.
>>>>
>>>> because of rbf rules requiring more fee and feerate, this means you
>>>> have to bump acros

Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-01-18 Thread Jeremy
Ah, my bad, I misread what you were saying as being about SIGHASH_BUNDLE-like
proposals.

For what you're discussing, I previously proposed
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
which is similar.

The benefit of the OP_VER output is that SIGHASH_EXTERNAL has the issue
that unless you're binding a WTXID (which is maybe too specific?) then you
can have fee bumping cycles. Doing OP_VER output w/ TXID guarantees that
you are acyclic.

The difference between a fee account and this approach basically boils down
to the impact on e.g. reorg stability, where the deposit/withdraw mechanism
is a bit more "robust" for reorderings in reorgs than the in-band
transaction approach, although they are very similar.

--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>


On Tue, Jan 18, 2022 at 8:53 PM Billy Tetrud  wrote:

> >  because you make transactions third party malleable it becomes
> possible to bundle and unbundle transactions.
>
> What I was suggesting doesn't make it possible to malleate someone else's
> transaction. I guess maybe my proposal of using a sighash flag might have
> been unclear. Imagine it as a script opcode that just says "this
> transaction must be mined with this other transaction" - the only
> difference being that you can use any output with any encumberance as an
> input for fee bumping. It doesn't prevent the original transaction from
> being mined on its own. So adding junk inputs would be no more of a problem
> than dust attacks already are. It would be used exactly like cpfp, except
> it doesn't spend the parent.
>
> I don't think what I was suggesting is as different from your proposal.
> All the problems of fee revenue optimization and feerate rules that you
> mentioned seem like they'd also exist for your proposal, or for cpfp. Let
> me know if I should clarify further.
>
> On Tue, Jan 18, 2022 at 8:51 PM Jeremy  wrote:
>
>> The issue with sighash flags is that because you make transactions third
>> party malleable it becomes possible to bundle and unbundle transactions.
>>
>> This means there are circumstances where an attacker could e.g. see your
>> txn, and then add a lot of junk change/inputs + 25 descendants and strongly
>> anchor your transaction to the bottom of the mempool.
>>
>> because of rbf rules requiring more fee and feerate, this means you have
>> to bump across the whole package and that can get really messy.
>>
>> more generally speaking, you could imagine a future where mempools track
>> many alternative things that might want to be in a transaction.
>>
>> suppose there are N inputs each with a weight and an amount of fee being
>> added and the sighash flags let me pick any subset of them. However, for a
>> txn to be standard it must be < 100k bytes and for it to be consensus <
>> 1mb. Now it is possible you have to solve a knapsack problem in order to
>> rationally bundle this transaction out of all possibilities.
>>
>> This problem can get even thornier, suppose that the inputs I'm adding
>> themselves are the outputs of another txn in the mempool, now i have to
>> track and propagate the feerates of that child back up to the parent txn
>> and track all these dependencies.
>>
>> perhaps with very careful engineering these issues can be tamed. however
>> it seems with sponsors or fee accounts, by separating the pays-for from the
>> participates-in concerns we can greatly simplify it to something like:
>> compute effective feerate for a txn, including all sponsors that pay more
>> than the feerate of the base txn. Mine that txn and it's subsidies using
>> the normal algo. If you run out of space, all subsidies are same-sized so
>> just take the ones that pay the highest amount up until the added marginal
>> feerate is less than the next eligible txn.
>>
>>
>> --
>> @JeremyRubin <https://twitter.com/JeremyRubin>
>> <https://twitter.com/JeremyRubin>
>>
>>
>> On Tue, Jan 18, 2022 at 6:38 PM Billy Tetrud 
>> wrote:
>>
>>> I see, its not primarily to make it cheaper to append fees, but also
>>> allows appending fees in cases that aren't possible now. Is that right? I
>>> can certainly see the benefit of a more general way to add a fee to any
>>> transaction, regardless of whether you're related to that transaction or
>>> not.
>>>
>>> How would you compare the pros and cons of your account-based approach
>>> to something like a new sighash flag? Eg a sighash flag that says "I'm
>>> signing this transaction, but the signature is only valid if mined in the
&

Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-01-18 Thread Jeremy
The issue with sighash flags is that because you make transactions third
party malleable it becomes possible to bundle and unbundle transactions.

This means there are circumstances where an attacker could e.g. see your
txn, and then add a lot of junk change/inputs + 25 descendants and strongly
anchor your transaction to the bottom of the mempool.

because of rbf rules requiring more fee and feerate, this means you have to
bump across the whole package and that can get really messy.

more generally speaking, you could imagine a future where mempools track
many alternative things that might want to be in a transaction.

suppose there are N inputs each with a weight and an amount of fee being
added and the sighash flags let me pick any subset of them. However, for a
txn to be standard it must be < 100k bytes and for it to be consensus <
1mb. Now it is possible you have to solve a knapsack problem in order to
rationally bundle this transaction out of all possibilities.

This problem can get even thornier, suppose that the inputs I'm adding
themselves are the outputs of another txn in the mempool, now I have to
track and propagate the feerates of that child back up to the parent txn
and track all these dependencies.

perhaps with very careful engineering these issues can be tamed. however it
seems with sponsors or fee accounts, by separating the pays-for from the
participates-in concerns we can greatly simplify it to something like:
compute effective feerate for a txn, including all sponsors that pay more
than the feerate of the base txn. Mine that txn and its subsidies using
the normal algo. If you run out of space, all subsidies are same-sized so
just take the ones that pay the highest amount up until the added marginal
feerate is less than the next eligible txn.
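A rough Python rendering of that simplified rule, with invented numbers and a
fixed per-sponsor size (the text's "all subsidies are same-sized"); this is
only meant to show how pays-for separates from participates-in, not a real
block-template algorithm.

SPONSOR_SIZE = 100  # vbytes; all sponsor "subsidies" assumed the same size

def effective_package(base_fee, base_size, sponsor_fees, space_left):
    # Attach every sponsor whose feerate beats the base txn's feerate, highest
    # payers first, stopping when space runs out or a sponsor no longer raises
    # the package's marginal feerate.
    base_rate = base_fee / base_size
    eligible = sorted((f for f in sponsor_fees if f / SPONSOR_SIZE > base_rate),
                      reverse=True)
    fee, size = base_fee, base_size
    for f in eligible:
        if size + SPONSOR_SIZE > space_left:
            break
        if f / SPONSOR_SIZE <= fee / size:
            break
        fee, size = fee + f, size + SPONSOR_SIZE
    return fee / size, fee, size

print(effective_package(base_fee=1_000, base_size=1_000,
                        sponsor_fees=[900, 400, 50], space_left=1_300))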


--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>


On Tue, Jan 18, 2022 at 6:38 PM Billy Tetrud  wrote:

> I see, its not primarily to make it cheaper to append fees, but also
> allows appending fees in cases that aren't possible now. Is that right? I
> can certainly see the benefit of a more general way to add a fee to any
> transaction, regardless of whether you're related to that transaction or
> not.
>
> How would you compare the pros and cons of your account-based approach to
> something like a new sighash flag? Eg a sighash flag that says "I'm signing
> this transaction, but the signature is only valid if mined in the same
> block as transaction X (or maybe transactions LIST)". This could be named
> SIGHASH_EXTERNAL. Doing this would be a lot more similar to other bitcoin
> transactions, and no special account would need to be created. Any
> transaction could specify this. At least that's the first thought I would
> have in designing a way to arbitrarily bump fees. Have you compared your
> solution to something more familiar like that?
>
> On Tue, Jan 18, 2022 at 11:43 AM Jeremy  wrote:
>
>> Can you clarify what you mean by "improve the situation"?
>>
>> There's a potential mild bytes savings, but the bigger deal is that the
>> API should be much less vulnerable to pinning issues, fix dust leakage for
>> eltoo like protocols, and just generally allow protocol designs to be fully
>> abstracted from paying fees. You can't easily mathematically quantify API
>> improvements like that.
>> --
>> @JeremyRubin <https://twitter.com/JeremyRubin>
>> <https://twitter.com/JeremyRubin>
>>
>>
>> On Tue, Jan 18, 2022 at 8:13 AM Billy Tetrud 
>> wrote:
>>
>>> Do you have any back-of-the-napkin math on quantifying how much this
>>> would improve the situation vs existing methods (eg cpfp)?
>>>
>>>
>>>
>>> On Sat, Jan 1, 2022 at 2:04 PM Jeremy via bitcoin-dev <
>>> bitcoin-...@lists.linuxfoundation.org> wrote:
>>>
>>>> Happy new years devs,
>>>>
>>>> I figured I would share some thoughts for conceptual review that have
>>>> been bouncing around my head as an opportunity to clean up the fee paying
>>>> semantics in bitcoin "for good". The design space is very wide on the
>>>> approach I'll share, so below is just a sketch of how it could work which
>>>> I'm sure could be improved greatly.
>>>>
>>>> Transaction fees are an integral part of bitcoin.
>>>>
>>>> However, due to quirks of Bitcoin's transaction design, fees are a part
>>>> of the transactions that they occur in.
>>>>
>>>> While this works in a "Bitcoin 1.0" world, where all transactions are
>>>> simple on-chain transfers, real world use of Bitcoin requires support for
>>>> things l

Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-01-18 Thread Jeremy
Can you clarify what you mean by "improve the situation"?

There's a potential mild bytes savings, but the bigger deal is that the API
should be much less vulnerable to pinning issues, fix dust leakage for
eltoo like protocols, and just generally allow protocol designs to be fully
abstracted from paying fees. You can't easily mathematically quantify API
improvements like that.
--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>


On Tue, Jan 18, 2022 at 8:13 AM Billy Tetrud  wrote:

> Do you have any back-of-the-napkin math on quantifying how much this would
> improve the situation vs existing methods (eg cpfp)?
>
>
>
> On Sat, Jan 1, 2022 at 2:04 PM Jeremy via bitcoin-dev <
> bitcoin-...@lists.linuxfoundation.org> wrote:
>
>> Happy new years devs,
>>
>> I figured I would share some thoughts for conceptual review that have
>> been bouncing around my head as an opportunity to clean up the fee paying
>> semantics in bitcoin "for good". The design space is very wide on the
>> approach I'll share, so below is just a sketch of how it could work which
>> I'm sure could be improved greatly.
>>
>> Transaction fees are an integral part of bitcoin.
>>
>> However, due to quirks of Bitcoin's transaction design, fees are a part
>> of the transactions that they occur in.
>>
>> While this works in a "Bitcoin 1.0" world, where all transactions are
>> simple on-chain transfers, real world use of Bitcoin requires support for
>> things like Fee Bumping stuck transactions, DoS resistant Payment Channels,
>> and other long lived Smart Contracts that can't predict future fee rates.
>> Having the fees paid in band makes writing these contracts much more
>> difficult as you can't merely express the logic you want for the
>> transaction, but also the fees.
>>
>> Previously, I proposed a special type of transaction called a "Sponsor"
>> which has some special consensus + mempool rules to allow arbitrarily
>> appending fees to a transaction to bump it up in the mempool.
>>
>> As an alternative, we could establish an account system in Bitcoin as an
>> "extension block".
>>
>> *Here's how it might work:*
>>
>> 1. Define a special anyone can spend output type that is a "fee account"
>> (e.g. segwit V2). Such outputs have a redeeming key and an amount
>> associated with them, but are overall anyone can spend.
>> 2. All deposits to these outputs get stored in a separate UTXO database
>> for fee accounts
>> 3. Fee accounts can sign only two kinds of transaction: A: a fee amount
>> and a TXID (or Outpoint?); B: a withdraw amount, a fee, and an address
>> 4. These transactions are committed in an extension block merkle tree.
>> While the actual signature must cover the TXID/Outpoint, the committed data
>> need only cover the index in the block of the transaction. The public key
>> for account lookup can be recovered from the message + signature.
>> 5. In any block, any of the fee account deposits can be: released into
>> fees if there is a corresponding tx; consolidated together to reduce the
>> number of utxos (this can be just an OP_TRUE no metadata needed); or
>> released into fees *and paid back* into the requested withdrawal key
>> (encumbering a 100 block timeout). Signatures must be unique in a block.
>> 6. Mempool logic is updated to allow attaching of account fee spends to
>> transactions, the mempool can restrict that an account is not allowed more
>> spend more than it's balance.
>>
>> *But aren't accounts "bad"?*
>>
>> Yes, accounts are bad. But these accounts are not bad, because any funds
>> withdrawn from the fee extension are fundamentally locked for 100 blocks as
>> a coinbase output, so there should be no issues with any series of reorgs.
>> Further, since there is no "rich state" for these accounts, the state
>> updates can always be applied in a conflict-free way in any order.
>>
>>
>> *Improving the privacy of this design:*
>>
>> This design could likely be modified to implement something like
>> Tornado.cash or something else so that the fee account paying can be
>> unlinked from the transaction being paid for, improving privacy at the
>> expense of being a bit more expensive.
>>
>> Other operations could be added to allow a trustless mixing to be done by
>> miners automatically where groups of accounts with similar values are
>> trustlessly  split into a common denominator and change, and keys are
>> derived via a verifiable stealth address like 

[Lightning-dev] [Pre-BIP] Fee Accounts

2022-01-01 Thread Jeremy
lags). This sort of
design is naturally immune to pinning issues since you could offer to pay a
fee for any TXID and the number of fee adding offers does not need to be
restricted in the same way the descendant transactions would need to be.
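To make that concrete, a minimal Python sketch of fee offers indexed by the
TXID they subsidize; the data layout and names here are invented for
illustration and are not Bitcoin Core's.

from collections import defaultdict

fee_offers = defaultdict(list)  # txid -> offered fee amounts (sats)

def offer_fee(txid, amount):
    # Anyone can offer a fee for any TXID; offers never conflict with, or
    # restrict, the transaction's own descendant chain.
    fee_offers[txid].append(amount)

def effective_fee(txid, base_fee):
    return base_fee + sum(fee_offers[txid])

offer_fee("deadbeef", 5_000)
offer_fee("deadbeef", 2_500)  # a second, unrelated party bumps the same txid
print(effective_fee("deadbeef", base_fee=1_000))  # 8500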

*Without a fork?*

This type of design could be done as a federated network that bribes miners
-- potentially even retroactively after a block is formed. That might be
sufficient to prove the concept works before a consensus upgrade is
deployed, but such an approach does mean there is a centralizing layer
interfering with normal mining.


Happy new year!!

Jeremy

--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] [Bitcoin Advent Calendar] Payment Channels in a CTV+Sapio World

2021-12-11 Thread Jeremy
hola devs,

This post details more formally a basic version of payment channels built
on top of CTV/Sapio and the implications of having non-interactive channel
creation.

https://rubin.io/bitcoin/2021/12/11/advent-14/

I'm personally incredibly bullish on where this concept can go since it
would make channel opening much more efficient, especially when paired with
the payment pool concept shared the other day.

Best,

Jeremy

--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Jeremy
IMO this is not a big problem. The problem is not if a 0 value ever enters
the mempool, it's if it is never spent. And even if C2/P1 goes in, C1 still
can be spent. In fact, it increases its feerate with P1's confirmation so
it's somewhat likely it would go in. C2 further has to be pretty expensive
compared to C1 in order to be mined when C2 would not be, so the user
trying to do this has to pay for it.

If we're worried it might never be spent again since no incentive, it's
rational for miners *and users who care about bloat* to save to disk the
transaction spending it to resurrect it. The way this can be broken is if
the txn has two inputs and that input gets spent separately.

That said, I think if we can say that taking advantage of keeping the 0
value output will cost you more than if you just made it above dust
threshold, it shouldn't be economically rational to not just do a dust
threshold value output instead.

So I'm not sure the extent to which we should bend over backwards to make 0
value outputs impossible vs. making them inconvenient enough to not be
popular.



-
Consensus changes below:
-

Another possibility is to have a utxo with drop semantics; if UTXO X with
some flag on it is not spent in the block it is created, it expires and can
never be spent. This is essentially an inverse timelock, but severely
limited to one block and mempool evictions can be handled as if a conflict
were mined.

These types of 0 value outputs could be present just for attaching fee in
the mempool but be treated like an op_return otherwise. We could add two
cases for this: one bare segwit version (just the number, no data) and one
that's equivalent to taproot. This covers OP_TRUE anchors very efficiently
and ones that require a signature as well.

This is relatively similar to how Transaction Sponsors works, but without
full tx graph de-linkage... obviously I think if we'll entertain a
consensus change, sponsors makes more sense, but expiring utxos doesn't
change as many properties of the tx-graph validation so might be simpler.
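A tiny Python sketch of the drop-semantics idea, assuming a working
package-relay world and an invented dict-based transaction layout: a flagged
0-value output is only acceptable if something in the same package spends it,
otherwise it would expire unspent.

def accept_package(txs):
    # Every flagged 0-value "drop" output must be consumed within the package.
    flagged = {(t["txid"], i)
               for t in txs
               for i, out in enumerate(t["outputs"])
               if out.get("flag") == "drop" and out["value"] == 0}
    spent = {inp for t in txs for inp in t["inputs"]}
    return flagged <= spent

parent = {"txid": "p", "inputs": [("x", 0)],
          "outputs": [{"value": 0, "flag": "drop"}]}
child = {"txid": "c", "inputs": [("p", 0)], "outputs": [{"value": 0}]}
print(accept_package([parent, child]))  # True: the drop output is spent in-package
print(accept_package([parent]))         # False: it would expire unspent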
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Jeremy
Bastien,

The issue is that with Decker Channels you either use SIGHASH_ALL / APO and
don't allow adding outputs (this protects against certain RBF pinning on the
root with bloated wtxid data) and have anchor outputs, or you do allow them
and then are RBF-pinnable (but can have change).

Assuming you use anchor outputs, then you really can't use dust-threshold
outputs, as it either breaks the ratcheting update validity (if the specific
amount paid to the output matters) OR it allows many non-latest updates to
fully drain the UTXO of any value.

You can get around needing N of them by having a congestion-control
tree setup in theory; then you only need log(n) data for one bumper, and
(say) 1.25x the data if all N want to bump. This can be a nice trade-off
between letting everyone bump and not. Since these could be chains of
IUTXO, they don't need to carry any weight directly.

The carve-out would just be to ensure that it is known how 0-value CPFP
outputs are to be spent.





--
@JeremyRubin 
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Take 2: Removing the Dust Limit

2021-12-07 Thread Jeremy
Bitcoin Devs (+cc lightning-dev),

Earlier this year I proposed allowing 0 value outputs and that was shot
down for various reasons, see
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-August/019307.html

I think that there can be a simple carve out now that package relay is
being launched based on my research into covenants from 2017
https://rubin.io/public/pdfs/multi-txn-contracts.pdf.

Essentially, if we allow 0 value outputs BUT require as a matter of policy
(or consensus, but policy has major advantages) that the output be used as
an Intermediate Output (that is, in order for the transaction creating it to
be in the mempool, it must be spent by another tx), with the additional rule
that the parent must have a higher feerate after CPFP'ing than the parent
alone (a sketch of this check follows the list below), we can both:

1) Allow 0 value outputs for things like Anchor Outputs (very good for not
getting your eltoo/Decker channels pinned by junk witness data using Anchor
Inputs, very good for not getting your channels drained by at-dust outputs)
2) Not allow 0 value utxos to proliferate long
3) It still being valid for a 0 value that somehow gets created to be spent
by the fee paying txn later
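Here is a minimal Python sketch of that package-feerate check, with invented
numbers: the parent carrying the 0-value intermediate output is only
acceptable together with a child that spends it and strictly raises the
feerate over the parent alone.

def carve_out_ok(parent_fee, parent_size, child_fee, child_size,
                 child_spends_parent):
    # Policy sketch: the child must spend the parent's intermediate output AND
    # the package feerate must exceed the parent's feerate on its own.
    if not child_spends_parent:
        return False
    parent_rate = parent_fee / parent_size
    package_rate = (parent_fee + child_fee) / (parent_size + child_size)
    return package_rate > parent_rate

print(carve_out_ok(parent_fee=0, parent_size=200,
                   child_fee=2_000, child_size=150,
                   child_spends_parent=True))  # True: the CPFP raises the feerate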

Just doing this as a mempool policy also has the benefit of not
introducing any new validation rules. Although in general the IUTXO concept
is very attractive, it complicates the mempool :(

I understand this may also be really helpful for CTV based contracts (like
vault continuation hooks) as well as things like spacechains.

Such a rule -- if it's not clear -- presupposes a fully working package
relay system.

I believe that this addresses all the issues with allowing 0 value outputs
to be created for the narrow case of immediately spendable outputs.

Cheers,

Jeremy

p.s. why another post today? Thank Greg
https://twitter.com/JeremyRubin/status/1468390561417547780


--
@JeremyRubin <https://twitter.com/JeremyRubin>
<https://twitter.com/JeremyRubin>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Half-Delegation & Chaperones for Decker Channels

2021-11-29 Thread Jeremy
Hi ZmnSCPxj,

>
>> Just a minor curiosity I figured was worth mentioning on the composition
of delegations and anyprevout...
>>
>> DA: Let full delegation be a script S such that I can sign script R and
then R may sign for a transaction T.
>> DB: Let partial delegation be a script S such that I can sign a tuple
(script R, transaction T) and R may sign T.
>>
>> A simple version of this could be done for scriptless multisigs where S
signs T and then onion encrypts to the signers of R and distributes the
shares.
>
>Just to be clear, do you mean, "for the case where R is a scriptless
multisig"?
>And, "onion encrypts the signature"?


No. Let's suppose that R = 2 of 3 {a,b,c}.

S signs T. S distributes enc(a, enc(b, T)), enc(a, enc(c, T)), and enc(b,
enc(c, T)) and then R can 'sign' by decrypt and broadcast (of course you
have
an FLP issue here, but let's ignore that for now).

This is a "scriptless multisig with onion encryption" in this context.

Note: you don't have to encrypt T, just the witness to T technically.
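
As a toy illustration only (the cipher below is a stand-in; any secure
public-key encryption would be used in practice, and the keys/names are made
up), the distribution step for R = 2 of 3 {a,b,c} looks like:

```python
import hashlib
from itertools import combinations

def toy_enc(key: bytes, msg: bytes) -> bytes:
    # XOR stream keyed by hashing; encryption and decryption coincide.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(msg):
        stream += hashlib.sha256(stream).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

toy_dec = toy_enc

def distribute(witness: bytes, party_keys: dict) -> dict:
    # For each 2-subset {x, y}, store enc(x, enc(y, witness)): neither party
    # alone can recover it, any two together can peel the onion.
    shares = {}
    for x, y in combinations(sorted(party_keys), 2):
        shares[(x, y)] = toy_enc(party_keys[x], toy_enc(party_keys[y], witness))
    return shares

keys = {"a": b"key-a", "b": b"key-b", "c": b"key-c"}
shares = distribute(b"witness-for-T-signed-by-S", keys)
recovered = toy_dec(keys["b"], toy_dec(keys["a"], shares[("a", "b")]))
assert recovered == b"witness-for-T-signed-by-S"
```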

>Since part of the signature `(R, s)` would be a scalar modulo k, `s`,
another way would be to SSS that scalar and distribute the shares to the R
multisig signers, that may require less computation and would allow R to be
k-of-n.

Yep that works too! There are a lot of different things S can do here, was
just giving the simplest "it works" version v.s. focusing on efficiency.


>> However, under such a model, if T is signed by S with AnyPrevOut, then T
is now arbitrarily rebindable.
>>
>> Therefore let us define more strictly:
>>
>> DC: Let half-delegation be a script S such that I can sign a tuple
(script R, transaction T) and R may sign T and revealing T/R does grant
authorization to any other party.
>
>Do you mean "does *not* grant"?

Yes absolutely, that was a typo.


>If S is a delegator that intends to delegate to R, and creates a simple
Taproot with keypath S, and signs a spend from that using
`SIGHASH_ANYPREVOUT` and distributes shares of the signature to R, then
once the signature is revealed onchain, anyone (not just R) may rebind the
transaction to any other Taproot with keypath S, which I think is what you
wish to prevent with the stricter definition "does *not* grant
authorization to any other party"?

Correct.

>>
>> The signer of R could choose to sign with APO, in which case they make
the txn rebindable. They could also reveal the private keys for R similarly.
>> For "correct" use, R should sign with SIGHASH_ALL, binding the
transaction to a single instance.
>
>Well, for the limited case where R is a k-of-n multisig (including n-of-n)
it seems the "sign and SSS" would work similarly, for "correct" use R
should sign with `SIGHASH_ALL` anyway, so in the "sign and SSS" method S
should always sign with `SIGHASH_ALL`.

Correct.

>This does not work if the script S itself is hosted in some construction
that requires `SIGHASH_ANYPREVOUT` at the base layer, which I believe is
what you are concerned about?
>In that case all signers should really give fresh pubkeys, i.e. no address
reuse.

I don't think so? Not sure what you mean here.

>> Observation: a tuple script R + transaction T can, in many cases, be
represented by script R ||  CTV.
>> Corollary: half-delegation can be derived from full delegation and a
covenant.
>>
>> Therefore delegation + CTV + APO may be sufficient for making chaperone
signatures work, if they are desired by a user.
>
>Hmm what?
>Is there some other use for chaperone signatures other than to
artificially encumber `SIGHASH_ANYPREVOUT` or have definitions drifted over
time?

I don't know; but they are interesting. Of course you can just always write
a script like `<1||pk> checksig  checksig`, but where this is
unique is that you can accomplish post-hoc chaperoning which lets you
dynamically pick/rotate keys, for example.

>> Remarks:
>>
>> APO's design discussion should not revisit Chaperone signatures
(hopefully already a dead horse?) but instead consider how APO might
compose with Delegation proposals and CTV.
>
>no chaperones == good

:)
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Half-Delegation & Chaperones for Decker Channels

2021-11-29 Thread Jeremy
Just a minor curiosity I figured was worth mentioning on the composition of
delegations and anyprevout...

DA: Let full delegation be a script S such that I can sign script R and
then R may sign for a transaction T.
DB: Let partial delegation be a script S such that I can sign a tuple
(script R, transaction T) and R may sign T.

A simple version of this could be done for scriptless multisigs where S
signs T and then onion encrypts to the signers of R and distributes the
shares. However, under such a model, if T is signed by S with AnyPrevOut,
then T is now arbitrarily rebindable. Therefore let us define more strictly:
DC: Let half-delegation be a script S such that I can sign a tuple (script
R, transaction T) and R may sign T and revealing T/R does grant
authorization to any other party.

The signer of R could choose to sign with APO, in which case they make the
txn rebindable. They could also reveal the private keys for R similarly.
For "correct" use, R should sign with SIGHASH_ALL, binding the transaction
to a single instance.

Observation: a tuple script R + transaction T can, in many cases, be
represented by script R ||  CTV.
Corollary: half-delegation can be derived from full delegation and a
covenant.

Therefore delegation + CTV + APO may be sufficient for making chaperone
signatures work, if they are desired by a user.

Remarks:

APO's design discussion should not revisit Chaperone signatures (hopefully
already a dead horse?) but instead consider how APO might compose with
Delegation proposals and CTV.

--
@JeremyRubin 

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Inherited IDs - A safer, more powerful alternative to BIP-118 (ANYPREVOUT) for scaling Bitcoin

2021-09-24 Thread Jeremy
John let me know that he's posted some responses in his Github repo
https://github.com/JohnLaw2/btc-iids

probably easiest to respond to him via e.g. a github issue or something.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Inherited IDs - A safer, more powerful alternative to BIP-118 (ANYPREVOUT) for scaling Bitcoin

2021-09-17 Thread Jeremy
Bitcoin & LN Devs,

The below is a message that was shared to me by an anon account on Telegram
(nym: John Law). You can chat with them directly in the https://t.me/op_ctv
or https://t.me/bips_activation group. I'm reproducing it here at their
request as they were unsure of how to post to the mailing list without
compromising their identity (perhaps we should publish a guideline on how
to do so?).

Best,

Jeremy


Hi,

I'd like to propose an alternative to BIP-118 [1] that is both safer and
more
powerful. The proposal is called Inherited IDs (IIDs) and is described in a
paper that can be found here [2]. The paper presents IIDs and Layer 2
protocols
using IIDs that are far more scalable and usable than those proposed for
BIP-118
(including eltoo [3]).

Like BIP-118, IIDs are a proposal for a softfork that changes the rules for
calculating certain signatures. BIP-118 supports signatures that do not
commit to the transaction ID of the parent transaction, thus allowing
"floating
transactions". In contrast, the IID proposal does not allow floating
transactions, but it does allow an output to specify that child transaction
signatures commit to the parent transaction's IID, rather than its
transaction
ID.

IID Definitions
===
* If T is a transaction, TXID(T) is the transaction ID of T.
* An output is an "IID output" if it is a native SegWit output with version
2
  and a 32-byte witness program, and is a "non-IID output" otherwise.
* A transaction is an "IID transaction" if it has at least one IID output.
* If T is a non-IID transaction, or a coinbase transaction, IID(T) =
TXID(T).
* If T is a non-coinbase IID transaction, first_parent(T) = F is the
transaction
  referenced by the OutPoint in T's input 0, and IID(T) = hash(IID(F) ||
F_idx)
  where F_idx is the index field in the OutPoint in T's input 0 (that is,
T's
  input 0 spends F's output F_idx).
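
A rough, non-normative Python sketch of the IID computation defined above
(the hash function, field names, and serialization here are illustrative
assumptions; the paper [2] pins down the exact rules):

```python
import hashlib
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OutPoint:
    txid: bytes    # transaction being spent
    index: int     # output index within that transaction

@dataclass
class Output:
    is_iid: bool   # native SegWit v2 output with a 32-byte witness program

@dataclass
class Tx:
    txid: bytes
    is_coinbase: bool
    inputs: List[OutPoint] = field(default_factory=list)
    outputs: List[Output] = field(default_factory=list)

def compute_iid(tx: Tx, known_iids: Dict[bytes, bytes]) -> bytes:
    """known_iids maps txid -> IID for already-processed transactions,
    mirroring the per-UTXO IID storage described below."""
    if tx.is_coinbase or not any(o.is_iid for o in tx.outputs):
        return tx.txid                                    # IID(T) = TXID(T)
    first = tx.inputs[0]                                  # OutPoint spending F
    parent_iid = known_iids[first.txid]                   # IID(F)
    f_idx = first.index.to_bytes(4, "little")             # F_idx
    return hashlib.sha256(parent_iid + f_idx).digest()    # hash(IID(F) || F_idx)
```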

IID Signature Validation

* Signatures that spend IID outputs commit to signature messages in which
IIDs
  replace transaction IDs in all OutPoints of the child transaction that
spend
  IID outputs.

Note that IID(T) can be calculated from T (if it is a non-IID or a coinbase
transaction) or from T and F (otherwise). Therefore, as long as nodes store
(or
calculate) the IID of each transaction in the UTXO set, they can validate
signatures of transactions that spend IID outputs. Thus, the IID proposal
fits
Bitcoin's existing UTXO model, at the small cost of adding a 32-byte IID
value
for certain unspent outputs. Also, note that the IID of a transaction may
not
commit to the exact contents of the transaction, but it does commit to how
the
transaction is related to some exactly-specified transaction (such as being
the
first child of the second child of a specific transaction). As a result, a
transaction that is signed using IIDs cannot be used more than once or in an
unanticipated location, thus making it much safer than a floating
transaction.

2-Party Channel Protocols
=
BIP-118 supports the eltoo protocol [3] for 2-party channels, which improves
upon the Lightning protocol for 2-party channels [4] by:
1) simplifying the protocol,
2) eliminating penalty transactions, and
3) supporting late determination of transaction fees [1, Sec. 4.1.5].

The IID proposal does not support the eltoo protocol. However, the IID
proposal
does support a 2-party channel protocol, called 2Stage [2, Sec. 3.3], that
is
arguably better than eltoo. Specifically, 2Stage achieves eltoo's 3
improvements
listed above, plus it:
4) eliminates the need for watchtowers [2, Sec. 3.6], and
5) has constant (rather than linear) worst-case on-chain costs [2, Sec.
3.4].

Channel Factories
=
In general, an on-chain transaction is required to create or close a 2-party
channel. Multi-party channel factories have been proposed in order to allow
a
fixed set of parties to create and close numerous 2-party channels between
them,
thus amortizing the on-channel costs of those channels [5]. BIP-118 also
supports simple and efficient multi-party channel factories via the eltoo
protocol [1, Sec. 5.2] (which are called "multi-party channels" in that
paper).

While the IID proposal does not support the eltoo protocol, it does support
channel factories that are far more scalable and powerful than any
previously-
proposed channel factories (including eltoo factories). Specifically, IIDs
support a simple factory protocol in which not all parties need to sign the
factory's funding transaction [2, Sec. 5.3], thus greatly improving the
scale
of the factory (at the expense of requiring an on-chain transaction to
update
the set of channels created by the factory). These channel factories can be
combined with the 2Stage protocol to create trust-free and watchtower-free
channels including very large numbers of casual users.

Furthermore, IIDs support channel factories with an unbounded number of

Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-08-19 Thread Jeremy
one interesting point that came up at the bitdevs in Austin today that
favors removal, which I believe is new to this discussion (it was new to me):

the argument can be reduced to:

- dust limit is a per-node relay policy.
- it is rational for miners to mine dust outputs given their cost of
maintenance (storing the output potentially forever) is lower than their
immediate reward in fees.
- if txn relaying nodes censor something that a miner would mine, users
will seek a private/direct relay to the miner and vice versa.
- if direct relay to miner becomes popular, it is both bad for privacy and
decentralization.
- therefore the dust limit, should there be demand to create dust at
prevailing mempool feerates, causes an incentive to increase network
centralization (immediately)

the tradeoff is whether a short-term immediate incentive to promote network
centralization is better or worse than long-term node operator overhead.


///

my take is that:

1) having a dust limit is worse since we'd rather not have an incentive to
produce or roll out centralizing software, whereas not having a dust limit
creates a mild incentive for node operators to improve decentralizing
software like utreexo.
2) it's hard to quantify the magnitude of the incentives, which does matter.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-08-09 Thread Jeremy
You might be interested in https://eprint.iacr.org/2017/1066.pdf which
claims that you can make CT computationally hiding and binding, see section
4.6.

with respect to utreexo, you might review
https://github.com/mit-dci/utreexo/discussions/249?sort=new which discusses
tradeoffs between different accumulator designs. With a swap tree, old
things that never move more or less naturally "fall leftward", although
there are reasons to prefer alternative designs.


>>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Removing the Dust Limit

2021-08-08 Thread Jeremy
some additional answers/clarifications



> Question for Jeremy: would you also allow zero-value outputs?  Or would
> you just move the dust limit down to a fixed 1-sat?
>

I would remove it entirely -- i don't think there's a difference between
the two realistically.



>
> Allowing 0-value or 1-sat outputs minimizes the cost for polluting the
> UTXO set during periods of low feerates.
>
>
Maybe that incentivizes people to make better use of the low
feerate periods to do more important work like consolidations so that
others do not have the opportunity to pollute (therefore eliminating the
low fee period ;)



> If your stuff is going to slow down my node and possibly reduce my
> censorship resistance, how is that not my business?
>

You don't know that's what I'm doing, it's a guess as to my future behavior.

If it weren't worth it to me, I wouldn't be doing it. Market will solve
what is worth v.s. not worth.



>
> > 2) dust outputs can be used in various authentication/delegation smart
> > contracts
>
> All of which can also use amounts that are economically rational to
> spend on their own.  If you're gonna use the chain for something besides
> value transfer, and you're already wiling to pay X in fees per onchain
> use, why is it not reasonable for us to ask you to put up something on
> the order of X as a bond that you'll actually clean up your mess when
> you're no longer interested in your thing?
>

These authentication/delegation smart contracts can be a part of value
transfer e.g. some type of atomic swaps or other escrowed payment.

A bond to clean it up is a fair reason; but perhaps in a protocol it might
not make sense to clean up the utxo otherwise, so you're creating a cleanup
transaction (potentially one that has to be presigned in a way that it can't
be done as a consolidation) and then some future consolidation to make the
dusts+eps aggregately convenient to spend. So you'd be trading a decent
amount more chainspace v.s. just ignoring the output and writing it to disk
and maybe eventually into a utreexo (e.g. imagine utreexo where the last N
years of outputs are held in memory, but eventually things get tree'd up),
so the long term costs need not be entirely borne in permanent storage.


>
> Nope, nothing is forced.  Any LN node can simply refuse to accept/route
> HTLCs below the dust limit.
>

I'd love to hear some broad thoughts on the impact of this on routing (cc
Tarun who thinks about these things a decent amount) as this means for
things like multipath routes you have much stricter constraints on which
nodes you can route payments through. The impact on capacity from every
user's pov might not be insubstantial.



>
> I also doubt your proposed solution fixes the problem.  Any LN node that
> accepts an uneconomic HTLC cannot recover that value, so the money is
> lost either way.  Any sane regulation would treat losing value to
> transaction fees the same as losing value to uneconomical conditions.
>
> Finally, if LN nodes start polluting the UTXO set with no economic way
> to clean up their mess, I think that's going to cause tension between
> full node operators and LN node operators.
>



My anticipation is that the LN operators would stick the uneconomic HTLCs
aggregately into a fan out utxo and try to cooperate, but failing that only
pollute the chain by O(1) for O(n) non economic HTLCs. There is a
difference between losing money and knowing exactly where it is but not
claiming it.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Removing the Dust Limit

2021-08-08 Thread Jeremy
Under no circumstances do I think we should *increase* the dust limit. That
would have a mildly confiscatory effect on current Lightning Channel
operators, among others.

Generally, the UTXO set will grow. We should work to accommodate the worst
case scenario under current consensus rules. I think this points to using
things like Utreexo or similar rather than meddling in the user's business.

I am skeptical that 0 value outputs are a real spam problem given the cost
to create them. Generally one creates an output when one believes it would
make sense to redeem it in the future. So surely this is a market problem;
if people want them they can pay what it is worth for them to have it.
Again, it's not my business.

Matt proposes that people might use a nominal amount of bitcoin on a zero
value output so that it doesn't look like dust. What Matt is asking for is
that in any protocol you pay for your space not via fees, but instead via
an assurance bond that you will eventually redeem it and clean the state
up. In my opinion, this is worse than just allowing a zero value output,
since then you might accrue the need for an additional change output to
which the bond's collateral is returned.

With respect to the check in the mail analogy, cutting down trees for paper
is bad for everyone and shipping things using fossil fuels contributes to
climate change. Therefore it's a cost borne by society in some respects.
Still, if someone else decides it's worth sending a remittance of whichever
value, it is still not my business.

With respect to CT and using the range proofs to exclude dust, I'm aware
that can be done (hence compromising allowed transfers). Again, I don't
think it's quite our business what people do, but on a technical level,
this would have the impact of shrinking the anonymity set so is also
suspect to me.

---

If we really want to create incentives for state clean up, I think it's a
decent design space to consider.

e.g., we could set up a bottle deposit program whereby miners contribute an
amount of funds from fee revenue from creating N outputs to a "rolling
utxo" (e.g., a coinbase utxo that gets spent each block op_true to op_true
under some miner rules) and the rolling utxo can either disperse funds to
the miner reward or soak up funds from the fees in order to encourage
blocks which have a better ratio of inputs to outputs than the mean. Miners
can then apply this rule in the mempool to prioritize transactions that
help their block's ratio. This is all without directly interfering with the
user's intent to create whatever outputs they want; it just provides a way
of paying miners to clean up the public common.
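
As a purely hypothetical sketch of that flow (the deposit rate, the use of a
simple mean ratio, and all names here are illustrative assumptions, not a
concrete proposal), a block's contribution to or draw from the rolling utxo
could be computed like so:

```python
def rolling_utxo_delta(inputs_spent: int, outputs_created: int,
                       block_fees: int, mean_ratio: float,
                       deposit_rate: float = 0.1) -> int:
    """Positive: sats the block deposits into the rolling utxo.
    Negative: sats the rolling utxo pays back out to the miner."""
    if outputs_created == 0:
        return 0
    ratio = inputs_spent / outputs_created   # higher means the block shrinks the UTXO set
    # Blocks worse than the mean pay in; blocks better than the mean draw out.
    return int(deposit_rate * block_fees * (mean_ratio - ratio))
```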

Gas Token by Daian et al comes to mind, from Eth, w.r.t. many pitfalls
arbing these state space freeing return curves, but it's worth thinking
through nonetheless.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Removing the Dust Limit

2021-08-08 Thread Jeremy
We should remove the dust limit from Bitcoin. Five reasons:

1) it's not our business what outputs people want to create
2) dust outputs can be used in various authentication/delegation smart
contracts
3) dust sized htlcs in lightning (
https://bitcoin.stackexchange.com/questions/46730/can-you-send-amounts-that-would-typically-be-considered-dust-through-the-light)
force channels to operate in a semi-trusted mode which has implications
(AFAIU) for the regulatory classification of channels in various
jurisdictions; agnostic treatment of fund transfers would simplify this
(like getting a 0.01 cent dividend check in the mail)
4) thinly divisible colored coin protocols might make use of sats as value
markers for transactions.
5) should we ever do confidential transactions we can't prevent it without
compromising privacy / allowed transfers

The main reasons I'm aware of to not allow dust creation are that:

1) dust is spam
2) dust fingerprinting attacks

1 is (IMO) not valid given the 5 reasons above, and 2 is preventable by
well-behaved wallets declining to redeem outputs that cost more in fees than
they are worth.

cheers,

jeremy

--
@JeremyRubin <https://twitter.com/JeremyRubin>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Impact of eltoo loss of state

2021-07-27 Thread Jeremy
Just my 2 cents:

I think worrying about the size of a resolution during a contested close
scenario (too much) is not worth it. Encoding the state needed (e.g., in
op_return or whatever) is the safest option because then you guarantee the
availability of the closing transaction data in the protocol with no
external dependencies.

If you want to make it cheaper, then allow for Alice to choose to cooperate
with a contesting Bob to replace the transaction with something smaller
(quibble: we should perhaps get rid of the mempool absolute fee increase
rule for RBF... otherwise, this should be done as pre-broadcast negotiation)
after observing the state published by Bob, but make it mandatory to at
least reveal it if Bob wants to use the transaction unilaterally.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Impact of eltoo loss of state

2021-07-12 Thread Jeremy
Another option would be to somehow encrypt this data in, say, an OP_RETURN
for any update transaction for each participant (perhaps worth breaking
update symmetry for efficiency on this...). That way, if an update ever
happens on a state you don't have, you can use your static key to decrypt
the relevant data for what PK_si signed off on.


--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Mon, Jul 12, 2021 at 3:16 PM Jeremy  wrote:

> Without an exact implementation, one thing you could do to fix the lost
> state issue would be to make the scripts something like:
>
> [` CLTV DROP PKu CHECKSIGVERIFY GETLOCKTIME 
> BIP32DERIVE CHECKTRANSACTIONSIGNEDFROMSTACK`, `2016 CSV DROP PK_si
> CHECKSIG`]
>
> In order to upgrade to state M>= N+1 you'd have to publish a transaction
> signed with the BIP32 derived key for that update in the future.
>
> The downside is that you end up double publishing the txdata on the chain,
> but it at least ensure data availability.
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Impact of eltoo loss of state

2021-07-12 Thread Jeremy
Without an exact implementation, one thing you could do to fix the lost
state issue would be to make the scripts something like:

[` CLTV DROP PKu CHECKSIGVERIFY GETLOCKTIME 
BIP32DERIVE CHECKTRANSACTIONSIGNEDFROMSTACK`, `2016 CSV DROP PK_si
CHECKSIG`]

In order to upgrade to state M >= N+1 you'd have to publish a transaction
signed with the BIP32 derived key for that update in the future.

The downside is that you end up double publishing the txdata on the chain,
but it at least ensures data availability.
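
Several of the opcodes in that script are hypothetical, so as a very loose
sketch of just the "one key per state number" idea (the derivation below is
a hash-based stand-in, not real BIP32 derivation):

```python
import hashlib

def per_state_pubkey(master_key: bytes, state: int) -> bytes:
    # Stand-in for BIP32DERIVE: deterministically bind a key to the state
    # number, so a signature published for state M implies M's data was known.
    return hashlib.sha256(master_key + state.to_bytes(4, "big")).digest()
```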
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-12 Thread Jeremy
On Sun, Jul 11, 2021 at 10:01 PM Anthony Towns  wrote:

> On Thu, Jul 08, 2021 at 08:48:14AM -0700, Jeremy wrote:
> > This would disallow using a relative locktime and an absolute
> locktime
> > for the same input. I don't think I've seen a use case for that so
> far,
> > but ruling it out seems suboptimal.
> > I think you meant disallowing a relative locktime and a sequence
> locktime? I
> > agree it is suboptimal.
>
> No? If you overload the nSequence for a per-input absolute locktime
> (well in the past for eltoo), then you can't reuse the same input's
> nSequence for a per-input relative locktime (ie CSV).
>
> Apparently I have thought of a use for it now -- cut-through of PTLC
> refunds when the timeout expires well after the channel settlement delay
> has passed. (You want a signature that's valid after a relative locktime
> of the delay and after the absolute timeout)
>

Ah -- I didn't mean a per-input abs locktime, I mean the tx global
locktime.

I agree that at some point we should just separate all locktime types per
input so we get rid of all weirdness/overlap.



>
> > What do you make of sequence tagged keys?
>
> I think we want sequencing restrictions to be obvious from some (simple)
> combination of nlocktime/nsequence/annex so that you don't have to
> evaluate scripts/signatures in order to determine if a transaction
> is final.
>
> Perhaps there's a more general principle -- evaluating a script should
> only return one bit of info: "bool tx_is_invalid_script_failed"; every
> other bit of information -- how much is paid in fees (cf ethereum gas
> calculations), when the tx is final, if the tx is only valid in some
> chain fork, if other txs have to have already been mined / can't have
> been mined, who loses funds and who gets funds, etc... -- should already
> be obvious from a "simple" parsing of the tx.
>
> Cheers,
> aj
>
>
I don't think we have this property as is.

E.g. consider the transaction:

TX:
   locktime: None
   sequence: 100
   scriptpubkey: 101 CSV

How will you tell whether it is able to be included without running the script?

I agree this is a useful property, but I don't think we can do it
practically.

What's nice is the transaction in this form cannot go from invalid to valid
-- once invalid it is always invalid for a given UTXO.
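
A toy sketch of why the example needs script evaluation (simplified to the
block-based BIP-112 case; not real validation code):

```python
def csv_branch_satisfiable(script_csv_blocks: int, input_nsequence: int) -> bool:
    # CHECKSEQUENCEVERIFY-style comparison: the input's nSequence must be at
    # least the value pushed in the script.
    return input_nsequence >= script_csv_blocks

# The TX above commits to nSequence = 100, but the script demands 101 CSV.
# The 101 is only discoverable by evaluating the script, and since nSequence
# is fixed by the transaction itself, the result can never flip to valid.
print(csv_branch_satisfiable(101, 100))   # False
```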

sequence tagged keys have this property -- a txn is either valid or invalid
and that never changes w/o any external information needing to be passed up.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Eltoo Burst Mode & Continuations

2021-07-10 Thread Jeremy
Last thought:

suppose you make a Taproot tree with N copies (with different keys) of the
state update protocol.

Then what you can do is use the 1st copy until you hit MAX_STATE, and then
start signing with the 2nd copy back at state 0, but when you sign with the
2nd copy you *remove* the 1st copy from the taproot tree.

e.g.,

{A:0, B:0, C:0} -> {A:1, B:0, C:0} -> {A:MAX, B:0, C:0} -> {B:1, C:0}...

Then the cut-thru transition

{A:0, B:0, C:0} -> {B:1, C:0}

is valid, but the regression:

{B:N, C:0} -> {A:M, B:0, C:0} is not.
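
A rough sketch of that rotation, under the assumption that each leaf copy
carries the same MAX_STATE update budget (all names are illustrative):

```python
MAX_STATE = 2**32 - 1   # assumed per-leaf state budget

def leaf_and_state(update_count: int, leaves: list):
    """Map a global update count to (active leaf, its local state, leaves
    still present in the tree); earlier copies are pruned at each rollover."""
    idx, local_state = divmod(update_count, MAX_STATE)
    if idx >= len(leaves):
        raise ValueError("out of leaf copies: close or re-establish the channel")
    return leaves[idx], local_state, leaves[idx:]

# e.g. with leaves ["A", "B", "C"], update MAX_STATE + 1 lands on B at local
# state 1, with A already removed from the tree.
```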


You can take a random path through which leaf you are using so that, if
you're careful about how you construct your scripts (e.g., keeping the
trees the same size), you can be more private w.r.t. how many state updates
you performed throughout the protocol (i.e., you can see the low order bits
in the CLTV clause, but the high order bits of A, B, C's relationship are
not revealed if you traverse them in a deterministically permuted order).

The space downside of this approach v.s. the approach presented in the
prior email is that the prior approach achieves 64 bits with 2 txns, one of
which should be like 150 bytes, whereas a similar amount of data for the
script leaves may only get you 5 bits of added sequence space.


--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Sat, Jul 10, 2021 at 4:25 PM Jeremy  wrote:

> on further reflection, i think you can get around the restriction of CSV
> by signing a eltoo "trampoline".
>
> essentially, begin a burst session at pk1:N under pk2, but always include
> a third branch to go to any pk1:N+1.
>
> The scripts look like:
>
> Eltoo Layer 1 Regular at state i = N<<32:
> Or(And(After(N+1), Key(PK_u1)), And(Older(2016), Key(PK_s_i))
>
> Eltoo Layer 1 During Burst at state i = (N<<32) + M:
> Or(And(After(N+1), Key(PK_u1)), Key(PK_s_burst_N == PK_s_i))
>
> Eltoo Layer 2 During Burst at state i = (N<<32) + M:
> Or(And(After(N+1), Key(PK_u1)), And(After(M+1), Key(PK_u_burst_N)),
> And(Older(2016), Key(PK_s_i))
>
> i represents the 64 bit concatenation of two 32 bit locktimes, where N is
> MSBs and M is LSBs.
>
> During burst mode resolving on chain, either:
> 1) the published Layer 1 tx is at the tip state N
> a) The tip N is at inner tip state M, wait 2016 to close with
> And(Older(2016), Key(PK_s_i))
> b) The tip N is at inner tip state M - c, use path And(After(M+1),
> Key(PK_u_burst_N)) to jump to case 1.a
> 2) published Layer 1 tx is at non tip state N - c
> a) Layer 2 does not get published: use path And(After(N+1),
> Key(PK_u1)) to jump back to newest state known on parent, repeat from top
> b) Layer 2 gets published: use path And(After(N+1), Key(PK_u1)) to
> jump back to newest state known on parent, repeat from top
>
>
> This trampoline pattern should essentially be repeatable as many times as
> needed, although I think 2 layers is likely enough in practice.
>
> In terms of "state management", it grows at O(layers) but is otherwise
> constant. A node must only store:
>
> Key/Sigs for
> PK_s_i: to allow closing at the highest reachable state
> PK_u1
> During burst:
> PK_u_burst_N: to allow getting to current burst
> PK_s_burst_N: the same as PK_s_i just makes sense to think of it's
> distinct purpose from PK_s_i not requiring a sequence lock
>
> Note that the above scripts can be optimized to remove the Older clause as
> it can be a rule that all PK_s_i must sign with a sequence unless entering
> a burst.
> --
> @JeremyRubin <https://twitter.com/JeremyRubin>
> <https://twitter.com/JeremyRubin>
>
>
> On Sat, Jul 10, 2021 at 2:07 PM Jeremy  wrote:
>
>> Let's say you're about to hit your sequence limits on a Eltoo channel...
>> Do you have to go on chain?
>>
>> No, you could do a continuation where for your *final* update, you sign a
>> move to a new update key. E.g.,
>>
>> start at: IF "N+1" CLTV DROP  CHECKSIG  ELSE 2016 CSV DROP 
>> CHECKSIG ENDIF
>>
>> before N+1 = last, sign a txn with pk_s_last that moves coins to
>>
>> IF "1" CLTV DROP <*pk_u**2*> CHECKSIG  ELSE 2016 CSV DROP 
>> CHECKSIG ENDIF
>>
>> This essentially lets you do 32 bits worth of updates and then fwd to a
>> new contract by paying 1x extra transaction.
>>
>> This is potentially better than just directly closing because we keep it
>> off chain for longer.  However... this also adds an additional CSV.
>>
>> (We can get around this by modifying the script branch which ends a CLTV
>> domain with:
>>  CHECKSIG
>> since any updates past that point are done through the continuation
>> state... but let's 

Re: [Lightning-dev] Eltoo Burst Mode & Continuations

2021-07-10 Thread Jeremy
on further reflection, I think you can get around the restriction of CSV by
signing an eltoo "trampoline".

essentially, begin a burst session at pk1:N under pk2, but always include a
third branch to go to any pk1:N+1.

The scripts look like:

Eltoo Layer 1 Regular at state i = N<<32:
Or(And(After(N+1), Key(PK_u1)), And(Older(2016), Key(PK_s_i)))

Eltoo Layer 1 During Burst at state i = (N<<32) + M:
Or(And(After(N+1), Key(PK_u1)), Key(PK_s_burst_N == PK_s_i))

Eltoo Layer 2 During Burst at state i = (N<<32) + M:
Or(And(After(N+1), Key(PK_u1)), And(After(M+1), Key(PK_u_burst_N)),
And(Older(2016), Key(PK_s_i)))

i represents the 64 bit concatenation of two 32 bit locktimes, where N is
MSBs and M is LSBs.
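
For clarity, a tiny, purely illustrative sketch of that encoding:

```python
def pack_state(n: int, m: int) -> int:
    assert 0 <= n < 2**32 and 0 <= m < 2**32
    return (n << 32) | m            # N in the high 32 bits, M in the low 32 bits

def unpack_state(i: int):
    return i >> 32, i & 0xFFFFFFFF  # (N, M)

assert unpack_state(pack_state(5, 7)) == (5, 7)
```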

During burst mode resolving on chain, either:
1) the published Layer 1 tx is at the tip state N
a) The tip N is at inner tip state M, wait 2016 to close with
And(Older(2016), Key(PK_s_i))
b) The tip N is at inner tip state M - c, use path And(After(M+1),
Key(PK_u_burst_N)) to jump to case 1.a
2) published Layer 1 tx is at non tip state N - c
a) Layer 2 does not get published: use path And(After(N+1), Key(PK_u1))
to jump back to newest state known on parent, repeat from top
b) Layer 2 gets published: use path And(After(N+1), Key(PK_u1)) to jump
back to newest state known on parent, repeat from top


This trampoline pattern should essentially be repeatable as many times as
needed, although I think 2 layers is likely enough in practice.

In terms of "state management", it grows at O(layers) but is otherwise
constant. A node must only store:

Key/Sigs for
PK_s_i: to allow closing at the highest reachable state
PK_u1
During burst:
PK_u_burst_N: to allow getting to current burst
PK_s_burst_N: the same as PK_s_i; it just makes sense to think of its
distinct purpose, as it does not require a sequence lock like PK_s_i

Note that the above scripts can be optimized to remove the Older clause as
it can be a rule that all PK_s_i must sign with a sequence unless entering
a burst.
--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Sat, Jul 10, 2021 at 2:07 PM Jeremy  wrote:

> Let's say you're about to hit your sequence limits on a Eltoo channel...
> Do you have to go on chain?
>
> No, you could do a continuation where for your *final* update, you sign a
> move to a new update key. E.g.,
>
> start at: IF "N+1" CLTV DROP  CHECKSIG  ELSE 2016 CSV DROP 
> CHECKSIG ENDIF
>
> before N+1 = last, sign a txn with pk_s_last that moves coins to
>
> IF "1" CLTV DROP <*pk_u**2*> CHECKSIG  ELSE 2016 CSV DROP 
> CHECKSIG ENDIF
>
> This essentially lets you do 32 bits worth of updates and then fwd to a
> new contract by paying 1x extra transaction.
>
> This is potentially better than just directly closing because we keep it
> off chain for longer.  However... this also adds an additional CSV.
>
> (We can get around this by modifying the script branch which ends a CLTV
> domain with:
>  CHECKSIG
> since any updates past that point are done through the continuation
> state... but let's ignore that for the next part)
>
> What if we *always* used this every update? Then we'd essentially have 64
> bits of sequence space. Each layer of this trick adds 32 bytes.
>
> Doing layers like this inherently adds a bunch of CSV layers, so it
> increases resolution time linearly.
>
> One possibility to mitigate this is to do a "semitrusted burst mode" with
> a counterparty. Suppose you're at sequence M and it's a normal txn.
>
> Party A requests to Party B to initiate burst mode. A and B move to
> sequence M+1 where state M+1 passes through to a 2 step Eltoo update.
>
> This burst now has 32 bits of sequences to blow through.
>
> B or A then indicates to the other party to terminate the burst at
> "internal state number" Q. Then B and A sign M+2 where M+2 reflects the
> last state at internal state number Q. This gets rid of the temporary extra
> locking time for when parties are offline.
>
> This has a benefit for privacy as well because if this protocol is used,
> then top level state numbers do not reflect the # of payments strongly as
> they're more akin to how many burst mode payments were done.
>
> The semi trusted nature of this is that if a malicious peer induces you
> into starting this, you double your funds lockup time. There are some
> mitigations:
>
> 1) Only enter burst mode with long lived peers
> 2) Only enter burst mode when initiator has more funds in the channel than
> you (or has some ratio) which imposes an opportunity cost for attacking.
> 3) Only allow a certain % of liquidity to be moved during a burst -- e.g.,
> any time the delta in balance goes above a threshold, force a higher order
> channel state update

[Lightning-dev] Eltoo Burst Mode & Continuations

2021-07-10 Thread Jeremy
Let's say you're about to hit your sequence limits on a Eltoo channel... Do
you have to go on chain?

No, you could do a continuation where for your *final* update, you sign a
move to a new update key. E.g.,

start at: IF "N+1" CLTV DROP  CHECKSIG  ELSE 2016 CSV DROP 
CHECKSIG ENDIF

before N+1 = last, sign a txn with pk_s_last that moves coins to

IF "1" CLTV DROP <*pk_u**2*> CHECKSIG  ELSE 2016 CSV DROP  CHECKSIG
ENDIF

This essentially lets you do 32 bits worth of updates and then fwd to a new
contract by paying 1x extra transaction.

This is potentially better than just directly closing because we keep it
off chain for longer.  However... this also adds an additional CSV.

(We can get around this by modifying the script branch which ends a CLTV
domain with:
 CHECKSIG
since any updates past that point are done through the continuation
state... but let's ignore that for the next part)

What if we *always* used this for every update? Then we'd essentially have
64 bits of sequence space. Each layer of this trick adds 32 bytes.

Doing layers like this inherently adds a bunch of CSV layers, so it
increases resolution time linearly.

One possibility to mitigate this is to do a "semitrusted burst mode" with a
counterparty. Suppose you're at sequence M and it's a normal txn.

Party A requests to Party B to initiate burst mode. A and B move to
sequence M+1 where state M+1 passes through to a 2 step Eltoo update.

This burst now has 32 bits of sequences to blow through.

B or A then indicates to the other party to terminate the burst at
"internal state number" Q. Then B and A sign M+2 where M+2 reflects the
last state at internal state number Q. This gets rid of the temporary extra
locking time for when parties are offline.

This has a benefit for privacy as well because if this protocol is used,
then top level state numbers do not reflect the # of payments strongly as
they're more akin to how many burst mode payments were done.

The semi trusted nature of this is that if a malicious peer induces you
into starting this, you double your funds lockup time. There are some
mitigations:

1) Only enter burst mode with long lived peers
2) Only enter burst mode when initiator has more funds in the channel than
you (or has some ratio) which imposes an opportunity cost for attacking.
3) Only allow a certain % of liquidity to be moved during a burst -- e.g.,
any time the delta in balance goes above a threshold, force a higher order
channel state update.




Best,

Jeremy


--
@JeremyRubin <https://twitter.com/JeremyRubin>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-08 Thread Jeremy
>
> This would disallow using a relative locktime and an absolute locktime
> for the same input. I don't think I've seen a use case for that so far,
> but ruling it out seems suboptimal.


I think you meant disallowing a relative locktime and a sequence locktime?
I agree it is suboptimal.


What do you make of sequence tagged keys?
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-07 Thread Jeremy
I made a comment on
https://github.com/bitcoin/bips/pull/943#issuecomment-876034559 but it
occurred to me it is more ML appropriate.

In general, one thing that strikes me is that when anyprevout is used for
eltoo you're generally doing a script like:

```
IF
10 CSV DROP
1::musigkey(As,Bs) CHECKSIG
ELSE
 CLTV DROP
   1::musigkey(Au,Bu) CHECKSIG
ENDIF
```

This means that you're overloading the CLTV clause, which means it's
impossible to use Eltoo and use an absolute lock time; it also means you
have to use fewer than a billion sequences, and if you pick a random # to
mask how many payments you've done / pick random gaps, let's say that
reduces your numbers by half. That may be enough, but is still relatively
limited. There is also the issue that multiple inputs cannot be combined
into a transaction if they have signed on different locktimes.

Since Eltoo is the primary motivation for ANYPREVOUT, it's worth making
sure we have all the parts we'd need bundled together to see it be
successful.

A few options come to mind that might be desirable in order to better serve
the eltoo usecase

1) Define a new CSV type (e.g. define (1<<31 && 1<<30) as being dedicated
to eltoo sequences). This has the benefit of giving a per input sequence,
but the drawback of using a CSV bit. Because there's only 1 CSV per input,
this technique cannot be used with a sequence tag.
2) CSFS -- it would be possible to take a signature from stack for an
arbitrary higher number, e.g.:
```
IF
10 CSV DROP
1::musigkey(As,Bs) CHECKSIG
ELSE
DUP musigkey(Aseq, BSeq) CSFSV  GTE VERIFY
   1::musigkey(Au,Bu) CHECKSIG
ENDIF
```
Then, posession of a higher signed sequence would allow for the use of the
update path. However, the downside is that there would be no guarantee that
the new state provided for update would be higher than the past one without
a more advanced covenant.
3) Sequenced Signature: It could be set up such that ANYPREVOUT keys are
tagged with an N byte sequence (instead of 1), and a part of the process of
signature verification includes hashing a sequence on the signature itself.

E.g.

```
IF
10 CSV DROP
1::musigkey(As,Bs) CHECKSIG
ELSE
   ::musigkey(Au,Bu) CHECKSIG
ENDIF
```
To satisfy this clause, a signature `::S` would be required. When
validating the signature S, the APO digest would have to include the value
. It is non cryptographically checked that N+1 > N.
4) Similar to 3, but look at more values off the stack. This is also OK,
but violates the principle of not making opcodes take variable numbers of
things off the stack. Verify semantics on the extra data fields could
ameliorate this concern, and it might make sense to do it that way.
5) Something in the Annex: It would also be possible to define a new
generic place for lock times in the annex (to permit dual height/time
relative/absolute, all per input). The pro of this approach is that it would
be solving an outstanding problem for script that we want to solve anyways;
the downside is that the Annex is totally undefined presently so it's
unclear that this is an appropriate use for it.
6) Do Nothing :)


Overall I'm somewhat partial to option 3 as it seems to be closest to
making ANYPREVOUT more precisely designed to support Eltoo. It would also
be possible to make it such that if the tag N=1, then the behavior is
identical to the proposal currently.
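
As a toy illustration of option 3's intended check (all names below are made
up; the real rule would live inside signature validation, with the sig's tag
hashed into the APO digest):

```python
def sequenced_checksig(key_tag: int, sig_tag: int, verify_sig_over_digest) -> bool:
    # The key commits to key_tag (N); the signature carries sig_tag; the only
    # non-cryptographic rule is that the signature's tag must exceed the key's.
    if sig_tag <= key_tag:
        return False
    return verify_sig_over_digest(sig_tag)   # digest must commit to sig_tag

# e.g. sequenced_checksig(N, N + 1, lambda tag: schnorr_verify(sig, digest_with(tag)))
# where schnorr_verify / digest_with / sig are placeholders for the real checks.
```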

--
@JeremyRubin 

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] L2s Onchain Support IRC Workshop

2021-04-23 Thread Jeremy
I'd be excited to join. Recommend bumping the date to mid-June, if that's
ok, as many Americans will be at Bitcoin 2021.

I was thinking about reviving the sponsors proposal with a 100 block lock
on spending a sponsoring tx, which would hopefully make it less
controversial; this would be a great place to discuss those tradeoffs.

On Fri, Apr 23, 2021, 8:17 AM Antoine Riard  wrote:

> Hi,
>
> During the lastest years, tx-relay and mempool acceptances rules of the
> base layer have been sources of major security and operational concerns for
> Lightning and other Bitcoin second-layers [0]. I think those areas require
> significant improvements to ease design and deployment of higher Bitcoin
> layers and I believe this opinion is shared among the L2 dev community. In
> order to make advancements, it has been discussed a few times in the last
> months to organize in-person workshops to discuss those issues with the
> presence of both L1/L2 devs to make exchange fruitful.
>
> Unfortunately, I don't think we'll be able to organize such in-person
> workshops this year (because you know travel is hard those days...) As a
> substitution, I'm proposing a series of one or more irc meetings. That
> said, this substitution has the happy benefit to gather far more folks
> interested by those issues that you can fit in a room.
>
> # Scope
>
> I would like to propose the following 4 items as topics of discussion.
>
> 1) Package relay design or another generic L2 fee-bumping primitive like
> sponsorship [0]. IMHO, this primitive should at least solve mempools spikes
> making obsolete propagation of transactions with pre-signed feerate, solve
> pinning attacks compromising Lightning/multi-party contract protocol
> safety, offer an usable and stable API to L2 software stack, stay
> compatible with miner and full-node operators incentives and obviously
> minimize CPU/memory DoS vectors.
>
> 2) Deprecation of opt-in RBF toward full-rbf. Opt-in RBF makes it trivial
> for an attacker to partition network mempools in divergent subsets and from
> then launch advanced security or privacy attacks against a Lightning node.
> Note, it might also be a concern for bandwidth bleeding attacks against L1
> nodes.
>
> 3) Guidelines about coordinated cross-layers security disclosures.
> Mitigating a security issue around tx-relay or the mempool in Core might
> have harmful implications for downstream projects. Ideally, L2 projects
> maintainers should be ready to upgrade their protocols in emergency in
> coordination with base layers developers.
>
> 4) Guidelines about L2 protocols onchain security design. Currently
> deployed like Lightning are making a bunch of assumptions on tx-relay and
> mempool acceptances rules. Those rules are non-normative, non-reliable and
> lack documentation. Further, they're devoid of tooling to enforce them at
> runtime [2]. IMHO, it could be preferable to identify a subset of them on
> which second-layers protocols can do assumptions without encroaching too
> much on nodes's policy realm or making the base layer development in those
> areas too cumbersome.
>
> I'm aware that some folks are interested in other topics such as extension
> of Core's mempools package limits or better pricing of RBF replacement. So
> l propose a 2-week concertation period to submit other topics related to
> tx-relay or mempools improvements towards L2s before to propose a finalized
> scope and agenda.
>
> # Goals
>
> 1) Reaching technical consensus.
> 2) Reaching technical consensus, before seeking community consensus as it
> likely has ecosystem-wide implications.
> 3) Establishing a security incident response policy which can be applied
> by dev teams in the future.
> 4) Establishing a philosophy design and associated documentations (BIPs,
> best practices, ...)
>
> # Timeline
>
> 2021-04-23: Start of concertation period
> 2021-05-07: End of concertation period
> 2021-05-10: Proposition of workshop agenda and schedule
> late 2021-05/2021-06: IRC meetings
>
> As the problem space is savagely wide, I've started a collection of
> documents to assist this workshop : https://github.com/ariard/L2-zoology
> Still wip, but I'll have them in a good shape at agenda publication, with
> reading suggestions and open questions to structure discussions.
> Also working on transaction pinning and mempool partitions attacks
> simulations.
>
> If L2s security/p2p/mempool is your jam, feel free to get involved :)
>
> Cheers,
> Antoine
>
> [0] For e.g see optech section on transaction pinning attacks :
> https://bitcoinops.org/en/topics/transaction-pinning/
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
> [2] Lack of reference tooling make it easier to have bug slip in like
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002858.html
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> 

Re: [Lightning-dev] Disclosure of a fee blackmail attack that can make a victim loose almost all funds of a non Wumbo channel and potential fixes

2020-06-22 Thread Jeremy
Yes -- to be clear, most of the feature-wise benefits of CTV for Lightning
are only in the initial channel setup phase, lessening interactivity
requirements.

Everything else can be emulated via multisig layers, but that can add
either substantial latency in doing 2pECDSA for each layer, or on-chain &
storage overhead in the signature space. CTV helps here because it can be
both deterministic & compact, but is not adding a new feature to already
interactive protocols. This does end up helping in terms of the feasibility
of some of the HTLC indirection tree techniques though :).

--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Sun, Jun 21, 2020 at 6:20 PM Olaoluwa Osuntokun 
wrote:

> Hi Jeremy,
>
> The up-front costs can be further mitigated even without something like CTV
> (which makes things more efficient) by adding a layer of in-direction
> w.r.t how
> HTLCs are manifested within the commitment transactions. To do this, we
> add a
> new 2-of-2 multi-sig output (an HTLC indirect block) to the commitment
> transactions. This is then spent by a new transaction (the HTLC block) that
> actually manifests (creates the HTLC outputs) the HTLCs.
>
> With this change, the cost to have a commitment be mined in the chain is
> now
> _independent of the number of HTLCs in the channel_. In the past I've
> called
> this construction "coupe commitments" (lol).
>
> Other flavors of this technique are possible as well, allowing both sides
> to
> craft varying HTLC indirection trees (double layers of indirection are
> possible, etc) which may factor in traits like HTLC expiration time (HTLCs
> that
> expire later are further down in the tree).
>
> Something like CTV does indeed make this technique more powerful+efficient
> as
> it allows one to succinctly commit to all the relevant desirable
> combinations
> of HTLC indirect blocks, and HTLC fan-out transactions.
>
> -- Laolu
>
>
> On Sat, Jun 20, 2020 at 4:14 PM Jeremy  wrote:
>
>> I am not steeped enough in Lightning Protocol issues to get the full
>> design space, but I'm fairly certain BIP-119 Congestion Control trees would
>> help with this issue.
>>
>> You can bucket a tree by doing a histogram of HTLC size, so that all
>> small HTLCs live in a common CTV subtree and don't interfere with higher
>> value HTLCs. You can also play with sequencing to prevent those HTLCs from
>> getting longchains in the mempool until they're above a certain value.
>> --
>> @JeremyRubin <https://twitter.com/JeremyRubin>
>> <https://twitter.com/JeremyRubin>
>>
>>
>> On Thu, Jun 18, 2020 at 1:41 AM Antoine Riard 
>> wrote:
>>
>>> Hi Rene,
>>>
>>> Thanks for disclosing this vulnerability,
>>>
>>> I think this blackmail scenario holds but sadly there is a lower
>>> scenario.
>>>
>>> Both "Flood & Loot" and your blackmail attack rely on `update_fee`
>>> mechanism and unbounded commitment transaction size inflation. Though the
>>> first to provoke block congestion and yours to lockdown in-flight fees as
>>> funds hostage situation.
>>>
>>> > 1. The current solution is to just not use up the max value of
>>> htlc's. Eclaire and c-lightning by default only use up to 30 htlcs.
>>>
>>> As of today, yes I would recommend capping commitment size both for
>>> ensuring competitive propagation/block selection and limiting HTLC exposure.
>>>
>>> > 2. Probably the best fix (not sure if I understand the consequences
>>> correctly) is coming from this PR to bitcoin core (c.f.
>>> https://github.com/bitcoin/bitcoin/pull/15681 by @TheBlueMatt . If I
>>> get it correctly with that we could always have low fees and ask the person
>>> who want to claim their outputs to pay fees. This excludes overpayment and
>>> could happen at a later stage when fees are not spiked. Still the victim
>>> who offered the htlcs would have to spend those outputs at some time.
>>>
>>> It's a bit more complex, carve-out output, even combined with anchor
>>> output support on the LN-side won't protect against different flavors of
>>> pinning. I invite you to go through logs of past 2 LN dev meetings.
>>>
>>> > 3. Don't overpay fees in commitment transactions. We can't foresee the
>>> future anyway
>>>
>>> Once 2. is well-addressed we may deprecate `update_fee`.
>>>
>>> > 4. Don't add htlcs for which the on chain fee is higher than the HTLCs
>>> value (like we do with sub dust amounts and sub satoshi

Re: [Lightning-dev] Disclosure of a fee blackmail attack that can make a victim loose almost all funds of a non Wumbo channel and potential fixes

2020-06-21 Thread Jeremy
Hi ZmnSCPxj,

My understanding is that you can use the CTV deferral to also get
independent HTLC relative timelock start points per output. This would
help with this sort of issue, right?

And you're correct that there's overhead of indirection, but it's not super
large (minimally complicated; something like an extra 100 bytes per output,
if you were to have a flat array where each entry is a CTV output so that
each output gets its own clock).

Essentially something like this:

Chan
 |
 -------------------------------------------------------
 |        |        |        |                 |
CTV(A)   CTV(B)   CTV(C)   CTV(D)    (Optional CPFP Anchor?)
 |        |        |        |
1 block  1 block  1 block  1 block
 |        |        |        |
 A        B        C        D

Where A B C and D are all HTLCs.

Now because of the one-hop indirection, A B C and D can all expand
independently. It's also possible for the Channel Operator to do something
like:

Chan
 |
 -------------------------------------------------------
 |        |        |        |                 |
CTV(A)   CTV(B)   CTV(C)   CTV(D)    (Optional CPFP Anchor?)
 |        |        |        |
1 block  1 block  1 block  10 blocks
 |        |        |        |
 A        B        C        D

To make D have a further out resolution time to prevent the
simultaneous-ness issue (trees or a linear-chain rather than total fan-out
can also be used but I think it's a bit more confusing for a basic
example). The benefit of trees is that I can do something like:


Chan
 |
 --------------------------------------------------------------
 |        |        |        |                        |
CTV(A)   CTV(B)   CTV(C)   CTV(400 HTLC)     (Optional CPFP Anchor?)
 |        |        |        |           |
1 block  1 block  1 block  10 blocks    (Optional CPFP Anchor?)
 |        |        |         |
 A        B        C        / \
                           | . |

Which makes it so that the low-value new HTLCs can be deprioritized
fee-wise, so that the attack, which occurs during a fee spike, doesn't end
up *requiring* substantial fees to be added to the channel to support a
burst of HTLCs.

--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Sat, Jun 20, 2020 at 8:34 PM ZmnSCPxj  wrote:

> Good morning Jeremy,
>
> > I am not steeped enough in Lightning Protocol issues to get the full
> design space, but I'm fairly certain BIP-119 Congestion Control trees would
> help with this issue.
> >
> > You can bucket a tree by doing a histogram of HTLC size, so that all
> small HTLCs live in a common CTV subtree and don't interfere with higher
> value HTLCs. You can also play with sequencing to prevent those HTLCs from
> getting longchains in the mempool until they're above a certain value.
>
> If the attacker stops responding, then all HTLC rules need to be published
> onchain for enforcement of the HTLC rules.
> And that publication onchain is the problem: every HTLC published requires
> onchain space, which must be paid for.
>
> The most compact way to expose the HTLCs is as a flat array, i.e. outputs
> of a single transaction.
> Every tree structure is going to take up more space than a flat array.
>
> What CTV buys is to be able to defer *when* you reveal scripts, possibly
> to a later time when blockchain space is cheaper.
> But in case the victim owns the timelock branch of an outgoing HTLC, it is
> unsafe for the victim to defer: it has to enforce the locktime soon or it
> could end up losing both incoming and outgoing HTLC amounts.
> And to enforce the locktime it has to publish the HTLC.
>
> Now of course with CTV you could publish only the HTLC you have to enforce
> *now*, and keep the rest in an CTV output.
> The attacker can counter this by pushing 483 HTLCs with the same timelock
> at the victim, so that the victim has to publish all HTLCs simultaneously.
> And a flat array of outputs is cheaper than a tree.
>
> What *can* be done would be to bin by timelock rather than amount; tree
> leaves are a transaction that exposes all HTLCs with a particular timelock
> as a flat array of outputs, but different timelocks go to different tree
> branches.
> But the attacker can still do the same-timelock trick, and the tree
> structure is likely to take up more space in the end than just a non-treed
> flat array of outputs.
>
> Regards,
> ZmnSCPxj
>
>
> > --
> > @JeremyRubin
> >
> > On Thu, Jun 18, 2020 at 1:41 AM Antoine Riard 
> wrote:
> >
> > > Hi Rene,
> > > Thanks for disclosing this vulnerability,
> > >
> > > I 

Re: [Lightning-dev] Disclosure of a fee blackmail attack that can make a victim loose almost all funds of a non Wumbo channel and potential fixes

2020-06-20 Thread Jeremy
I am not steeped enough in Lightning Protocol issues to get the full design
space, but I'm fairly certain BIP-119 Congestion Control trees would help
with this issue.

You can bucket a tree by doing a histogram of HTLC size, so that all small
HTLCs live in a common CTV subtree and don't interfere with higher value
HTLCs. You can also play with sequencing to prevent those HTLCs from
getting longchains in the mempool until they're above a certain value.
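
For illustration, here is a minimal sketch of the histogram step (the bucket
boundaries are made up, and this only shows the grouping, not the CTV tree
construction itself):

# Minimal sketch: group HTLCs by value so small HTLCs end up in a common
# subtree and don't interfere with the higher-value ones.
from bisect import bisect_right
from collections import defaultdict

BUCKET_EDGES_SAT = [1_000, 10_000, 100_000]  # hypothetical boundaries

def bucket_htlcs(htlcs):
    buckets = defaultdict(list)
    for hid, value in htlcs.items():
        buckets[bisect_right(BUCKET_EDGES_SAT, value)].append(hid)
    return dict(buckets)

print(bucket_htlcs({"A": 500, "B": 5_000, "C": 50_000, "D": 500_000}))
# -> {0: ['A'], 1: ['B'], 2: ['C'], 3: ['D']}
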
--
@JeremyRubin 



On Thu, Jun 18, 2020 at 1:41 AM Antoine Riard 
wrote:

> Hi Rene,
>
> Thanks for disclosing this vulnerability,
>
> I think this blackmail scenario holds but sadly there is a lower scenario.
>
> Both "Flood & Loot" and your blackmail attack rely on `update_fee`
> mechanism and unbounded commitment transaction size inflation. Though the
> first to provoke block congestion and yours to lockdown in-flight fees as
> funds hostage situation.
>
> > 1. The current solution is to just not use up the max value of
> htlc's. Eclaire and c-lightning by default only use up to 30 htlcs.
>
> As of today, yes I would recommend capping commitment size both for
> ensuring competitive propagation/block selection and limiting HTLC exposure.
>
> > 2. Probably the best fix (not sure if I understand the consequences
> correctly) is coming from this PR to bitcoin core (c.f.
> https://github.com/bitcoin/bitcoin/pull/15681 by @TheBlueMatt . If I get
> it correctly with that we could always have low fees and ask the person who
> want to claim their outputs to pay fees. This excludes overpayment and
> could happen at a later stage when fees are not spiked. Still the victim
> who offered the htlcs would have to spend those outputs at some time.
>
> It's a bit more complex, carve-out output, even combined with anchor
> output support on the LN-side won't protect against different flavors of
> pinning. I invite you to go through logs of past 2 LN dev meetings.
>
> > 3. Don't overpay fees in commitment transactions. We can't foresee the
> future anyway
>
> Once 2. is well-addressed we may deprecate `update_fee`.
>
> > 4. Don't add htlcs for which the on chain fee is higher than the HTLCs
> value (like we do with sub dust amounts and sub satoshi amounts. This would
> at least make the attack expensive as the attacker would have to bind a lot
> of liquidity.
>
> Ideally we want dust_limit to be dynamic, dust cap should be based on HTLC
> economic value, feerate of its output, feerate of HTLC-transaction, feerate
> estimation of any CPFP to bump it. I think that's kind of worthy to do once
> we solved 3. and 4
>
> > 5. Somehow be able to aggregate htlc's. In a world where we use payment
> points instead of preimages we might be able to do so. It would be really
> cool if separate HTLC's could be combined to 1 single output. I played
> around a little bit but I have not come up with a scheme that is more
> compact in all cases. Thus I just threw in the idea.
>
> Yes we may encode all HTLC in some Taproot tree in the future. There are
> some wrinkles but for a high-level theoretical construction see my post on
> CoinPool.
>
> > 6. Split onchain fees differently (now the attacker would also lose fees
> by conducting this attack) - No I don't want to start yet another fee
> bikeshadding debate. (In particular I believe that a different split of
> fees might make the Flood & Loot attack economically more viable which
> relies on the same principle)
>
> Likely a bit more of fee bikeshedding is something we have to do to make
> LN secure... Switching fee from pre-committed ones to a single-party,
> dynamic one.
>
> > Independently I think we should have a hint in our readme file about
> where and how people can disclose attacks and vulnerabilities.
> Implementations have this but the BOLTs do not.
>
> I 100% agree, that's exactly
> https://github.com/lightningnetwork/lightning-rfc/pull/772, waiting for
> your feedback :)
>
> Cheers,
>
> Antoine
>
> Le mer. 17 juin 2020 à 09:41, ZmnSCPxj via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> a écrit :
>
>>
>> Good morning all,
>>
>> >
>> > Fee futures could help against this.
>> > I remember writing about this some time ago but cannot find where (not
>> sure if it was in lightning-dev or bitcoin-dev).
>>
>> `harding` found it:
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017601.html
>>
>> Regards,
>> ZmnSCPxj
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Jeremy
Hi everyone,

Sorry to just be getting to a response here. Hadn't noticed it till now.

*(Plug: if anyone, or their organizations, would like to assist in funding
the work described below for a group of developers, I've been putting
resources together for this for a few months now, and I think it would be
high leverage towards seeing this through. There are a lot of unsexy tasks
that aren't coming up with a solution (e.g., writing a myriad of mempool
stress-test scenarios) which could be a well-defined full-time job for
someone to do.)*

I've been working on exactly this problem in the mempool for months now.
I'm deeply familiar with the issues here and the types of pinning possible.
I think everyone can recognize that with my work on OP_CTV I want nothing
more than the mempool to be able to accept whatever long chains we can
throw at it, but I'm pretty well steeped at this point in the obstacles to
doing that.

I don't think we should be entertaining further carve-outs at the moment,
unless they are really trivial. Every new carve-out rule added to the way
the mempool operates removes complexity invariants we aim to preserve in
the mempool in order to keep nodes operational. Many of these invariants
are well documented; some are not. I'm happy to go off-list for a more
thorough discussion with anyone qualified to have it; this isn't the best
venue for that discussion.

From my point of view, the path forward here is to dedicate more
development resources towards finishing the mempool project I began. You
can see the outstanding work here:
https://github.com/bitcoin/bitcoin/projects/14. Contributing review towards
moving those PRs forward will greatly improve our ability to consider a
stopgap carve-out measure.

The current focus of this work is primarily on:

1) Testing Construction to better test & catch regressions or
vulnerabilities introduced or extant in mempool
2) Refactoring algorithms in mempool to reduce constant factors &
asymptotics
3) Package Relay


None of these fixes the exact problem at hand, but here's part of how they
can help us:

If we finish up the algorithmic refactors I've been working on, it seems
plausible to do a one-off increase of the descendant limit to, say, 100
descendants with no further restriction. However, we could use the
opportunity to reserve the extra 75 descendants exclusively for a new
carve-out and apply some new, stricter rules in that extra space. There are
a few anti-pinning countermeasures you can apply in that space that you
would not generally want in the mempool. One example is that any new
transaction must pay a higher feerate and a higher absolute fee than every
child in that space. Another is that only the highest fee-paying branch of
the excess transactions is mineable, and no others. Another would be
disabling RBF past that watermark. In all likelihood, with the current
architecture, different subsystems interacting with the mempool will each
require a different set of restrictions; I don't think there's a magic
bullet.
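
To sketch just the first of those countermeasures (a hypothetical policy,
not anything implemented in Bitcoin Core): a transaction entering the extra
descendant space would only be accepted if it beats every existing child in
that space on both feerate and absolute fee.

# Hypothetical policy check for the extra descendant space; not Core code.
from dataclasses import dataclass

@dataclass
class MempoolEntry:
    fee_sat: int
    vsize: int

    @property
    def feerate(self):
        return self.fee_sat / self.vsize

def allowed_in_carveout_space(new, existing):
    # Must pay a higher absolute fee AND a higher feerate than every child
    # already occupying the extra descendant space.
    return all(new.fee_sat > e.fee_sat and new.feerate > e.feerate for e in existing)

children = [MempoolEntry(1_000, 200), MempoolEntry(3_000, 1_000)]
print(allowed_in_carveout_space(MempoolEntry(3_500, 300), children))   # True
print(allowed_in_carveout_space(MempoolEntry(2_000, 100), children))   # False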

Package relay is a promising approach for a future pinning solution, as
there are opportunities to attach to packages compact proofs of improved
fee efficiency for pinned transactions. But the groundwork for package
relay needs to come first. This is theoretically possible with the
mempool's current architecture and can probably address many of the pinning
concerns by replacing pinning with more rational eviction policies.

Longer term, I've been working on plans and designs to completely redo the
mempool's architecture to make it behave well for arbitrary cases. It may
one day be possible to lift all preemptively enforced (e.g., checked before
acceptance) descendant limits, which would solve this problem for good.
There is more than one potentially good solution here, and a conjunction of
them can be used, as they affect independent subsystems. But this work will
probably take years to complete to the point where restrictions can
realistically be lifted.

If developers would like to coordinate resources around completing this
work and making more regular progress on it, I'm happy to help point people
to specific tasks that need to be done in order to accelerate this, and to
help serialize the work so that we don't end up in rebase hell.

Originally I had the plug at the top as a closing note, but I figured
people might miss it.

Best,

Jeremy


--
@JeremyRubin <https://twitter.com/JeremyRubin>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-27 Thread Jeremy
Johan,

The issues with mempool limits for OP_SECURETHEBAG are related, but have
distinct solutions.

There are two main categories of mempool issues at stake. One is relay
cost, the other is mempool walking.

In terms of relay cost, if an ancestor can be replaced, it will invalidate
all of its children, meaning that no one paid for broadcasting them. This
can be fixed by appropriately assessing Replace-By-Fee update fees so that
they encapsulate all descendants, but there are some tricky edge cases that
make this non-obvious to do.
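
As a back-of-the-envelope illustration of what "encapsulate all
descendants" means (a simplified sketch, not Bitcoin Core's actual
replacement logic; the feerate floor is a made-up constant):

# Simplified sketch: a replacement should pay for everything it evicts, i.e.
# the replaced ancestor plus all of its descendants, so their relay
# bandwidth doesn't go uncompensated. Not actual Bitcoin Core logic.
MIN_RELAY_FEERATE_SAT_PER_VB = 1.0  # hypothetical floor

def min_replacement_fee(evicted_fees_sat, evicted_vsizes):
    total_fees = sum(evicted_fees_sat)    # fees forfeited by eviction
    total_vsize = sum(evicted_vsizes)     # bandwidth already consumed
    return total_fees + total_vsize * MIN_RELAY_FEERATE_SAT_PER_VB

# Replacing a parent that has two children in the mempool:
print(min_replacement_fee([1_000, 500, 700], [150, 120, 110]))  # 2580.0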

The other issue is walking the mempool -- many of the algorithms we use in
the mempool can be O(N log N) or O(N^2) in the number of descendants.
(Simple example: an input chain of length N feeding a fan-out of N outputs
that are all spent is O(N^2) to look up ancestors per child, unless we're
caching.)

The other sort of walking issue is where the indegree or outdegree of a
transaction is high. Then, when we are computing descendants or ancestors,
we will need to visit it multiple times. To avoid re-expanding a node, we
currently cache it with a set. This uses O(N) extra memory and makes
O(N log N) comparisons (we use std::set, not unordered_set).

I just opened a PR which should help with some of the walking issues by
allowing us to cheaply cache which nodes we've visited on a run. It makes a
lot of previously O(N log N) stuff O(N) and doesn't allocate as much new
memory. See: https://github.com/bitcoin/bitcoin/pull/17268.
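
For intuition, here is a Python sketch of the general technique (the PR
itself is C++; the names here are illustrative): each traversal bumps an
epoch counter, and an entry counts as visited when its stamp equals the
current epoch, so the "have I seen this yet?" check is O(1) with no
per-walk set allocation.

# Sketch of epoch-based visited marking for a mempool-style graph walk.
class Entry:
    def __init__(self, txid, children=None):
        self.txid = txid
        self.children = children or []
        self.epoch = 0  # last epoch in which this entry was visited

class Mempool:
    def __init__(self):
        self.current_epoch = 0

    def descendants(self, root):
        self.current_epoch += 1  # start a new traversal
        out, stack = [], list(root.children)
        while stack:
            e = stack.pop()
            if e.epoch == self.current_epoch:
                continue  # already expanded during this walk
            e.epoch = self.current_epoch
            out.append(e)
            stack.extend(e.children)
        return out

c = Entry("c"); b = Entry("b", [c]); a = Entry("a", [b, c])
print([e.txid for e in Mempool().descendants(a)])  # 'c' is expanded only once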


Now, for OP_SECURETHEBAG we want a particular property that is very
different from lightning HTLCs (as they are today). We want an unlimited
number of child OP_SECURETHEBAG txns to be able to extend from a confirmed
OP_SECURETHEBAG, and then, at the leaf nodes, we want the same rule as
lightning (one dangling unconfirmed transaction to permit channels).

OP_SECURETHEBAG can help with the LN issue by putting all HTLCs into a tree
where they are individualized leaf nodes with a preceding CSV. Then the
above fix would ensure each HTLC always has time to close properly, as each
would have an individualized lockpoint. This is desirable for some
additional reasons and not for others, but it should "work".



--
@JeremyRubin 



On Fri, Oct 25, 2019 at 10:31 AM Matt Corallo 
wrote:

> I don't see how? Let's imagine Party A has two spendable outputs, now
> they stuff the package size on one of their spendable outputs until it is
> right at the limit, add one more on their other output (to meet the
> Carve-Out), and now Party B can't do anything.
>
> On Oct 24, 2019, at 21:05, Johan Torås Halseth  wrote:
>
> 
> It essentially changes the rule to always allow CPFP-ing the commitment as
> long as there is an output available without any descendants. It changes
> the commitment from "you always need at least, and exactly, one non-CSV
> output per party. " to "you always need at least one non-CSV output per
> party. "
>
> I realize these limits are there for a reason though, but I'm wondering
> if we could relax them, especially now that jeremyrubin has expressed
> problems with the current mempool limits.
>
> On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo 
> wrote:
>
>> I may be missing something, but I'm not sure how this changes anything?
>>
>> If you have a commitment transaction, you always need at least, and
>> exactly, one non-CSV output per party. The fact that there is a size
>> limitation on the transaction that spends for carve-out purposes only
>> effects how many other inputs/outputs you can add, but somehow I doubt
>> its ever going to be a large enough number to matter.
>>
>> Matt
>>
>> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
>> > Reviving this old thread now that the recently released RC for bitcoind
>> > 0.19 includes the above mentioned carve-out rule.
>> >
>> > In an attempt to pave the way for more robust CPFP of on-chain contracts
>> > (Lightning commitment transactions), the carve-out rule was added in
>> > https://github.com/bitcoin/bitcoin/pull/15681. However, having worked
>> on
>> > an implementation of a new commitment format for utilizing the Bring
>> > Your Own Fees strategy using CPFP, I’m wondering if the special case
>> > rule should have been relaxed a bit, to avoid the need for adding a 1
>> > CSV to all outputs (in case of Lightning this means HTLC scripts would
>> > need to be changed to add the CSV delay).
>> >
>> > Instead, what about letting the rule be
>> >
>> > The last transaction which is added to a package of dependent
>> > transactions in the mempool must:
>> >   * Have no more than one unconfirmed parent.
>> >
>> > This would of course allow adding a large transaction to each output of
>> > the unconfirmed parent, which in effect would allow an attacker to
>> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
>> > this a problem with the current mempool acceptance code in bitcoind? I
>> > would imagine evicting transactions based on feerate when the max
>> > mempool size is met handles 

Re: [Lightning-dev] [bitcoin-dev] OP_CAT was Re: Continuing the discussion about noinput / anyprevout

2019-10-10 Thread Jeremy
Interesting point.

The script is under your control, so you should be able to ensure that you
are always using a correctly constructed midstate, e.g., something like:

scriptPubKey: <-1> OP_SHA256STREAM DEPTH OP_SHA256STREAM <-2>
OP_SHA256STREAM
 OP_EQUALVERIFY

would hash all the elements on the stack and compare to a known hash.
How is that sort of thing weak to midstate attacks?
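
To pin down the semantics I'm assuming, here is a toy Python model of the
hypothetical opcode (not real Script; the mode encoding just follows the
-1 / n / -2 convention from the original sketch): -1 starts a fresh stream
from SHA256's standard IV, a positive n absorbs n stack items, and -2
finalizes. Because the script itself issues the start operation, the
midstate on the stack is never attacker-supplied in this pattern.

# Toy model of the proposed OP_SHA256STREAM stack semantics (hypothetical).
import hashlib

def op_sha256stream(stack, mode):
    if mode == -1:                     # start a new stream from the standard IV
        stack.append(hashlib.sha256())
    elif mode == -2:                   # finalize: replace the state with its digest
        stack.append(stack.pop().digest())
    else:                              # absorb `mode` items from below the state
        state = stack.pop()
        for _ in range(mode):
            state.update(stack.pop())
        stack.append(state)

stack = [b"item3", b"item2", b"item1"]   # top of stack is the last element
op_sha256stream(stack, -1)               # push fresh state
op_sha256stream(stack, 3)                # absorb the three items
op_sha256stream(stack, -2)               # finalize
print(stack[-1] == hashlib.sha256(b"item1" + b"item2" + b"item3").digest())  # True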


--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Fri, Oct 4, 2019 at 4:16 AM Peter Todd  wrote:

> On Thu, Oct 03, 2019 at 10:02:14PM -0700, Jeremy via bitcoin-dev wrote:
> > Awhile back, Ethan and I discussed having, rather than OP_CAT, an
> > OP_SHA256STREAM that uses the streaming properties of a SHA256 hash
> > function to allow concatenation of an unlimited amount of data, provided
> > the only use is to hash it.
> >
> > You can then use it perhaps as follows:
> >
> > // start a new hash with item
> > OP_SHA256STREAM  (-1) -> [state]
> > // Add item to the hash in state
> > OP_SHA256STREAM n [item] [state] -> [state]
> > // Finalize
> > OP_SHA256STREAM (-2) [state] -> [Hash]
> >
> > <-1> OP_SHA256STREAM<3> OP_SHA256STREAM
> <-2>
> > OP_SHA256STREAM
>
> One issue with this is the simplest implementation where the state is just
> raw
> bytes would expose raw SHA256 midstates, allowing people to use them
> directly;
> preventing that would require adding types to the stack. Specifically I
> could
> write a script that rather than initializing the state correctly from the
> official IV, instead takes an untrusted state as input.
>
> SHA256 isn't designed to be used in situations where adversaries control
> the
> initialization vector. I personally don't know one way or the other if
> anyone
> has analyzed this in detail, but I'd be surprised if that's secure. I
> considered adding midstate support to OpenTimestamps but decided against
> it for
> exactly that reason.
>
> I don't have the link handy but there's even an example of an experienced
> cryptographer on this very list (bitcoin-dev) proposing a design that falls
> victim to this attack. It's a subtle issue and we probably don't want to
> encourage it.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] OP_CAT was Re: [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-10 Thread Jeremy
Good point -- in our discussion, we called it OP_FFS -- Fold Functional
Stream -- and it could be initialized with a different integer to select
different functions. Therefore the stream-processing opcodes would be
generic, but extensible.
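
As a toy illustration of that dispatch (entirely hypothetical; the opcode
and the function registry are not concretely proposed anywhere), the
initialization integer would just pick which streaming function the
subsequent stream operations use:

# Illustrative only: an "OP_FFS"-style initializer selecting the stream
# function from an integer tag, so the stream opcodes stay generic.
import hashlib

STREAM_FUNCTIONS = {                 # hypothetical registry
    0: hashlib.sha256,
    1: hashlib.sha512,
    2: lambda: hashlib.blake2b(digest_size=32),
}

def op_ffs_init(function_id):
    """Start a new stream using the function selected by function_id."""
    return STREAM_FUNCTIONS[function_id]()

state = op_ffs_init(0)
state.update(b"hello")
print(state.hexdigest() == hashlib.sha256(b"hello").hexdigest())  # True
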
--
@JeremyRubin <https://twitter.com/JeremyRubin>


On Fri, Oct 4, 2019 at 12:00 AM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Jeremy,
>
> > Awhile back, Ethan and I discussed having, rather than OP_CAT, an
> OP_SHA256STREAM that uses the streaming properties of a SHA256 hash
> function to allow concatenation of an unlimited amount of data, provided
> the only use is to hash it.
> >
> > You can then use it perhaps as follows:
> >
> > // start a new hash with item
> > OP_SHA256STREAM  (-1) -> [state]
> > // Add item to the hash in state
> > OP_SHA256STREAM n [item] [state] -> [state]
> > // Finalize
> > OP_SHA256STREAM (-2) [state] -> [Hash]
> >
> > <-1> OP_SHA256STREAM<3> OP_SHA256STREAM
> <-2> OP_SHA256STREAM
> >
> > Or it coul
> >
>
> This seems a good idea.
>
> Though it brings up the age-old tension between:
>
> * Generically-useable components, but due to generalization are less
> efficient.
> * Specific-use components, which are efficient, but which may end up not
> being useable in the future.
>
> In particular, `OP_SHA256STREAM` would no longer be useable if SHA256
> eventually is broken, while the `OP_CAT` will still be useable in the
> indefinite future.
> In the future a new hash function can simply be defined and the same
> technique with `OP_CAT` would still be useable.
>
>
> Regards,
> ZmnSCPxj
>
> > --
> > @JeremyRubin
> >
> > On Thu, Oct 3, 2019 at 8:04 PM Ethan Heilman  wrote:
> >
> > > I hope you are having an great afternoon ZmnSCPxj,
> > >
> > > You make an excellent point!
> > >
> > > I had thought about doing the following to tag nodes
> > >
> > > || means OP_CAT
> > >
> > > `node = SHA256(type||SHA256(data))`
> > > so a subnode would be
> > > `subnode1 = SHA256(1||SHA256(subnode2||subnode3))`
> > > and a leaf node would be
> > > `leafnode = SHA256(0||SHA256(leafdata))`
> > >
> > > Yet, I like your idea better. Increasing the size of the two inputs to
> > > OP_CAT to be 260 Bytes each where 520 Bytes is the maximum allowable
> > > size of object on the stack seems sensible and also doesn't special
> > > case the logic of OP_CAT.
> > >
> > > It would also increase performance. SHA256(tag||subnode2||subnode3)
> > > requires 2 compression function calls whereas
> > > SHA256(1||SHA256(subnode2||subnode3)) requires 2+1=3 compression
> > > function calls (due to padding).
> > >
> > > >Or we could implement tagged SHA256 as a new opcode...
> > >
> > > I agree that tagged SHA256 as an op code that would certainty be
> > > useful, but OP_CAT provides far more utility and is a simpler change.
> > >
> > > Thanks,
> > > Ethan
> > >
> > > On Thu, Oct 3, 2019 at 7:42 PM ZmnSCPxj 
> wrote:
> > > >
> > > > Good morning Ethan,
> > > >
> > > >
> > > > > To avoid derailing the NO_INPUT conversation, I have changed the
> > > > > subject to OP_CAT.
> > > > >
> > > > > Responding to:
> > > > > """
> > > > >
> > > > > -   `SIGHASH` flags attached to signatures are a misdesign, sadly
> > > > > retained from the original BitCoin 0.1.0 Alpha for Windows
> design, on
> > > > > par with:
> > > > > [..]
> > > > >
> > > > > -   `OP_CAT` and `OP_MULT` and `OP_ADD` and friends
> > > > > [..]
> > > > > """
> > > > >
> > > > > OP_CAT is an extremely valuable op code. I understand why it
> was
> > > > > removed as the situation at the time with scripts was dire.
> However
> > > > > most of the protocols I've wanted to build on Bitcoin run into
> the
> > > > > limitation that stack values can not be concatenated. For
> instance
> > > > > TumbleBit would have far smaller transaction sizes if OP_CAT
> was
> > > > > supported in Bitcoin. If it happens to me as a researcher it is
> > > > > p

Re: [Lightning-dev] OP_CAT was Re: [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-10 Thread Jeremy
Awhile back, Ethan and I discussed having, rather than OP_CAT, an
OP_SHA256STREAM that uses the streaming properties of a SHA256 hash
function to allow concatenation of an unlimited amount of data, provided
the only use is to hash it.

You can then use it perhaps as follows:

// start a new hash with item
OP_SHA256STREAM  (-1) -> [state]
// Add item to the hash in state
OP_SHA256STREAM n [item] [state] -> [state]
// Finalize
OP_SHA256STREAM (-2) [state] -> [Hash]

<-1> OP_SHA256STREAM<3> OP_SHA256STREAM <-2>
OP_SHA256STREAM


Or it coul



--
@JeremyRubin 



On Thu, Oct 3, 2019 at 8:04 PM Ethan Heilman  wrote:

> I hope you are having an great afternoon ZmnSCPxj,
>
> You make an excellent point!
>
> I had thought about doing the following to tag nodes
>
> || means OP_CAT
>
> `node = SHA256(type||SHA256(data))`
> so a subnode would be
> `subnode1 = SHA256(1||SHA256(subnode2||subnode3))`
> and a leaf node would be
> `leafnode = SHA256(0||SHA256(leafdata))`
>
> Yet, I like your idea better. Increasing the size of the two inputs to
> OP_CAT to be 260 Bytes each where 520 Bytes is the maximum allowable
> size of object on the stack seems sensible and also doesn't special
> case the logic of OP_CAT.
>
> It would also increase performance. SHA256(tag||subnode2||subnode3)
> requires 2 compression function calls whereas
> SHA256(1||SHA256(subnode2||subnode3)) requires 2+1=3 compression
> function calls (due to padding).
>
> >Or we could implement tagged SHA256 as a new opcode...
>
> I agree that tagged SHA256 as an op code that would certainty be
> useful, but OP_CAT provides far more utility and is a simpler change.
>
> Thanks,
> Ethan
>
> On Thu, Oct 3, 2019 at 7:42 PM ZmnSCPxj  wrote:
> >
> > Good morning Ethan,
> >
> >
> > > To avoid derailing the NO_INPUT conversation, I have changed the
> > > subject to OP_CAT.
> > >
> > > Responding to:
> > > """
> > >
> > > -   `SIGHASH` flags attached to signatures are a misdesign, sadly
> > > retained from the original BitCoin 0.1.0 Alpha for Windows design,
> on
> > > par with:
> > > [..]
> > >
> > > -   `OP_CAT` and `OP_MULT` and `OP_ADD` and friends
> > > [..]
> > > """
> > >
> > > OP_CAT is an extremely valuable op code. I understand why it was
> > > removed as the situation at the time with scripts was dire. However
> > > most of the protocols I've wanted to build on Bitcoin run into the
> > > limitation that stack values can not be concatenated. For instance
> > > TumbleBit would have far smaller transaction sizes if OP_CAT was
> > > supported in Bitcoin. If it happens to me as a researcher it is
> > > probably holding other people back as well. If I could wave a magic
> > > wand and turn on one of the disabled op codes it would be OP_CAT.
> Of
> > > course with the change that size of each concatenated value must
> be 64
> > > Bytes or less.
> >
> > Why 64 bytes in particular?
> >
> > It seems obvious to me that this 64 bytes is most suited for building
> Merkle trees, being the size of two SHA256 hashes.
> >
> > However we have had issues with the use of Merkle trees in Bitcoin
> blocks.
> > Specifically, it is difficult to determine if a hash on a Merkle node is
> the hash of a Merkle subnode, or a leaf transaction.
> > My understanding is that this is the reason for now requiring
> transactions to be at least 80 bytes.
> >
> > The obvious fix would be to prepend the type of the hashed object, i.e.
> add at least one byte to determine this type.
> > Taproot for example uses tagged hash functions, with a different tag for
> leaves, and tagged hashes are just
> prepend-this-32-byte-constant-twice-before-you-SHA256.
> >
> > This seems to indicate that to check merkle tree proofs, an `OP_CAT`
> with only 64 bytes max output size would not be sufficient.
> >
> > Or we could implement tagged SHA256 as a new opcode...
> >
> > Regards,
> > ZmnSCPxj
> >
> >
> > >
> > > On Tue, Oct 1, 2019 at 10:04 PM ZmnSCPxj via bitcoin-dev
> > > bitcoin-...@lists.linuxfoundation.org wrote:
> > >
> > >
> > > > Good morning lists,
> > > > Let me propose the below radical idea:
> > > >
> > > > -   `SIGHASH` flags attached to signatures are a misdesign, sadly
> retained from the original BitCoin 0.1.0 Alpha for Windows design, on par
> with:
> > > > -   1 RETURN
> > > > -   higher-`nSequence` replacement
> > > > -   DER-encoded pubkeys
> > > > -   unrestricted `scriptPubKey`
> > > > -   Payee-security-paid-by-payer (i.e. lack of P2SH)
> > > > -   `OP_CAT` and `OP_MULT` and `OP_ADD` and friends
> > > > -   transaction malleability
> > > > -   probably many more
> > > >
> > > > So let me propose the more radical excision, starting with SegWit v1:
> > > >
> > > > -   Remove `SIGHASH` from signatures.
> > > > -   Put `SIGHASH` on public keys.
> > > >
> > > > Public keys are now encoded as either 33-bytes (implicit
>