### Re: [Lightning-dev] Solving the Price Of Anarchy Problem, Or: Cheap AND Reliable Payments Via Forwarding Fee Economic Rationality

```On Wed, Jun 29, 2022 at 12:38:17PM +, ZmnSCPxj wrote:
> > > ### Inverting The Filter: Feerate Cards
> > > Basically, a feerate card is a mapping between a probability-of-success
> > > range and a feerate.
> > > * 00%->25%: -10ppm
> > > * 26%->50%: 1ppm
> > > * 51%->75%: 5ppm
> > > * 76%->100%: 50ppm
> The economic insight here is this:
> * The payer wants to pay because it values a service / product more highly
> than the sats they are spending.

> * If payment fails, then the payer incurs an opportunity cost, as it is
> unable to utilize the difference in subjective value between the service /
> product and the sats being spent.

(If payment fails, the only opportunity cost they incur is that they
can't use the funds that they locked up for the payment. The opportunity
cost is usually considered to occur when the payment succeeds: at that
point you've lost the ability to use those funds for any other purpose)

>   * Thus, the subjective difference in value between the service / product
> being bought, and the sats to be paid, is the cost of payment failure.

If you couldn't successfully route the payment at any price, you never

> We can now use the left-hand side of the feerate card table, by multiplying
> `100% - middle_probability_of_success` (i.e. probability of failure) by the
> fee budget (i.e. cost of failure), and getting the
> cost-of-failure-for-this-entry.

I don't think that makes much sense; your expected gain if you just try
one option is:

(1-p)*0 + p*cost*(benefit/cost - fee)

where p is the probability of success that corresponds with the fee.

I don't think you can do that calculation with a range; if I fix the
probabilities as:

12.5%   -10ppm
27.5%     1ppm
62.5%     5ppm
87.5%    50ppm

then that approach chooses:

-10 ppm if the benefit/cost is in (-10ppm, 8.77ppm)
5 ppm if the benefit/cost is in [8.77ppm, 162.52ppm)
50 ppm if the benefit/cost is >= 162.52ppm

so for that policy, one of those entries is already irrelevant.
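# The single-try rule above, as a quick python sketch (an illustrative
# sketch, assuming the fixed midpoint probabilities; all values in ppm):

RATE_CARD = [(0.125, -10), (0.275, 1), (0.625, 5), (0.875, 50)]

def best_entry(benefit_ppm):
    # expected gain of one attempt at entry (p, fee) is p*(benefit - fee)
    return max(RATE_CARD, key=lambda pf: pf[0] * (benefit_ppm - pf[1]))

# scanning integer benefit/cost values confirms 1ppm is never chosen
chosen_fees = {best_entry(b)[1] for b in range(-9, 1000)}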

But that just feels super unrealistic to me. If your benefit is 8ppm,
and you try at -10ppm, and that fails, why wouldn't you try again at
5ppm? That means the real calculation is:

p1*(benefit/cost - fee1)
+ (p2-p1)*(benefit/cost - fee2 - retry_delay)
- (1-p2)*(2*retry_delay)

Which is:

p2*(benefit/cost)
- p1*fee1 - (p2-p1)*fee2
- (2-p1-p2)*retry_delay

My feeling is that the retry_delay factor's going to dominate...
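# That calculation as a python sketch (try fee1 first, retry once at
# fee2 on failure; retry_delay is a subjective waiting cost in ppm;
# the function name is illustrative, not from the thread):

def retry_ev(b, p1, fee1, p2, fee2, retry_delay):
    return (p1 * (b - fee1)
            + (p2 - p1) * (b - fee2 - retry_delay)
            - (1 - p2) * 2 * retry_delay)

# equivalently: p2*b - p1*fee1 - (p2-p1)*fee2 - (2-p1-p2)*retry_delay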

That's also only considering one hop; to get the entire path, you
need them all to succeed, giving an expected benefit (for a particular
combination of rate card entries) of:

(p1*p2*p3*p4*p5)*cost*(benefit/cost - (fee1 + fee2 + fee3 + fee4 + fee5))

And (p1*..*p5) is going to be pretty small in most cases -- 5 hops at
87.5% each already gets you down to only a 51% total chance of success.
And there's an exponential explosion of combinations, if each of the
5 hops has 4 options on their rate card, that's up to 1024 different
options to be evaluated...
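# The brute force over all those combinations, sketched in python
# (a sketch; the per-hop card is the hypothetical one from earlier):

from itertools import product
from math import prod

CARD = [(0.125, -10), (0.275, 1), (0.625, 5), (0.875, 50)]
combos = list(product(*[CARD] * 5))   # one card entry per hop: 4**5 = 1024

def combo_ev(benefit_ppm, combo):
    # every hop must succeed, and the fees add up along the path
    p_total = prod(p for p, _ in combo)
    fee_total = sum(f for _, f in combo)
    return p_total * (benefit_ppm - fee_total)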

> We then evaluate the fee card by plugging this in to each entry of the
> feerate card, and picking which entry gives the lowest total fee.

I don't think that combines hops correctly. For example, if the rate
cards for hop1 and hop2 are both:

10%  10ppm
100%  92ppm

and your expected benefit/cost is 200ppm (so 100ppm per hop), then
treated individually you get:

10%*(100ppm - 10ppm) = 9ppm  <-- this one!
100%*(100ppm - 92ppm) = 8ppm

but treated together, you get:

1%*(200ppm -  20ppm) =  1.8ppm
10%*(200ppm - 102ppm) =  9.8ppm (twice)
100%*(200ppm - 184ppm) = 16ppm <-- this one!
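# The same two-hop numbers in python (a sketch; 200ppm total benefit,
# treated as 100ppm per hop when each hop is optimised individually):

from itertools import product

card = [(0.10, 10), (1.00, 92)]
per_hop = max(card, key=lambda pf: pf[0] * (100 - pf[1]))
joint = max(product(card, repeat=2),
            key=lambda c: c[0][0] * c[1][0] * (200 - c[0][1] - c[1][1]))
# individually each hop picks (0.10, 10); jointly both pick (1.00, 92)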

> This is then added as a fee in payment algorithms, thus translated down to
> "just optimize for low fee".

You're not optimising for low fee though, you're optimising for
maximal expected value, assuming you can't retry. But you can retry,
and probably in reality also want to minimise the chance of failure up
to some threshold.

For example: if I buy a coffee with lightning every week day for a year,
that's 250 days, so maybe I'd like to choose a fee so that my payment
failure rate is <0.4%, to avoid embarrassment and holding up the queue.
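# Rough arithmetic for that target (a sketch; the 5-hop path is an
# assumption, not from the mail):

target_success = 1 - 1/250              # at most ~one failure per year
per_hop_needed = target_success ** (1/5)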

> * Nodes utilizing wall strategies and doing lots of rebalancing put low
> limits on the fee budget of the rebalancing cost.
>   * These nodes are willing to try lots of possible routes, hoping to nab the
> liquidity of a low-fee node on the cheap in order to resell it later.
>   * Such nodes are fine with low probability of success.

Sure. But in that case, they don't care about delays, so why wouldn't they
just try the lowest fee rates all the time, no matter what their expected
value is? They can retry once an hour indefinitely, and eventually they
should get lucky, if the rate card's even remotely accurate. (Though
chances are they won't get -10ppm lucky for the entire path)

Finding out that you're paying 50ppm at the exact same time someone else
is ```

### Re: [Lightning-dev] Solving the Price Of Anarchy Problem, Or: Cheap AND Reliable Payments Via Forwarding Fee Economic Rationality

```On Sun, Jun 05, 2022 at 02:29:28PM +, ZmnSCPxj via Lightning-dev wrote:

Just sharing my thoughts on this.

> Introduction
>
>            Optimize for reliability+
>          uncertainty+fee+drain+uptime...
>                   .--~~--.
>                  /        \
>                 /          \
>                /            \
>               /              \
>              /                \
>          _--'                  `--_
>      Just                          Just
>    optimize                      optimize
>      for                            for
>    low fee                        low fee

I think ideally you want to optimise for some combination of fee, speed
and reliability (both likelihood of a clean failure that you can retry
and of generating stuck payments). As Matt/Peter suggest in another
thread, maybe for some uses you can accept low speed for low fees,
while in others you'd rather pay more and get near-instant results. I
think drain should just go to fee, and uncertainty/uptime are just ways
of estimating reliability.

It might be reasonable to generate local estimates for speed/reliability
by regularly sending onion messages or designed-to-fail htlcs.

Sorry if that makes me a midwit :)

> Rene Pickhardt also presented the idea of leaking friend-of-a-friend
> balances, to help payers increase their payment reliability.

I think foaf (as opposed to global) gossip of *fee rates* is a very
interesting approach to trying to give nodes more *current* information,
without flooding the entire network with more traffic than it can
cope with.

> Now we can consider that *every channel is a marketplace*.
> What is being sold is the sats inside the channel.

(Really, the marketplace is a channel pair (the incoming channel and
the outgoing channel), and what's being sold is their relative balance)

> So my concrete proposal is that we can do the same friend-of-a-friend balance
> leakage proposed by Rene, except we leak it using *existing* mechanisms ---
> i.e. gossiping a `channel_update` with new feerates adjusted according to the
> supply on the channel --- rather than having a new message to leak
> friend-of-a-friend balance directly.

+42

> Because we effectively leak the balance of channels by the feerates on the
> channel, this totally leaks the balance of channels.

I don't think this is true -- you ideally want to adjust fees not to
maintain a balanced channel (50% on each side), but a balanced *flow*
(1:1 incoming/outgoing payment volume) -- it doesn't really matter if
you get the balanced flow that results in an average of a 50:50, 80:20
or 20:80 ratio of channel balances (at least, it doesn't as long as your
channel capacity is 10 or 100 times the payment size, and your variance
is correspondingly low).

Further, you have two degrees of freedom when setting fee rates: one
is how balanced the flows are, which controls how long your channel can
remain useful, but the other is how *much* flow there is -- if halving
your fee rate doubles the flow rate in sats/hour, then that will still
increase your profit. That also doesn't leak balance information.

> ### Inverting The Filter: Feerate Cards
> Basically, a feerate card is a mapping between a probability-of-success range
> and a feerate.
> * 00%->25%: -10ppm
> * 26%->50%: 1ppm
> * 51%->75%: 5ppm
> * 76%->100%: 50ppm

Feerate cards don't really make sense to me; "probability of success"
isn't a real measure the payer can use -- naively, if it were, they could
just retry at 1ppm 10 times and get to 95% chances of success. But if
they can afford to retry (background rebalancing?), they might as well
just try at -10ppm, 1ppm, 5ppm, 10ppm (or perhaps with a binary search?),
and see if they're lucky; but if they want a 1s response time, and can't
afford retries, what good is even a 75% chance of success if that's the
individual success rate on each hop of their five hop path?

And if you're not just going by odds of having to retry, then you need to
get some current information about the channel to plug into the formula;
but if you're getting *current* information, why not let that information
be the feerate directly?

> More concretely, we set some high feerate, impose some kind of constant
> "gravity" that pulls down the feerate over time, then we measure the relative
> loss of outgoing liquidity to serve as "lift" to the feerate.

If your current fee rate is F (ppm), and your current volume (flow) is V
(sats forwarded per hour), then your profit is FV. If dropping your fee
rate by dF (<0) results in an increase of V by dV (>0), then you want:

(F+dF)(V+dV) > FV
FV + VdF + FdV + dFdV > FV
FdV > -VdF
dV/dF < -V/F (flip the inequality because dF is negative)

(dV/V)/(dF/F) < -1  (fee-elasticity of volume is in the elastic
region)

(<-1 == elastic == flow changes more than the fee does == drop the fee
rate; >-1 == inelastic == flow changes less than the fee does == raise
the fee rate; =-1 == unit elastic == ```
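The elasticity condition derived at the end of that mail can be sketched in python (an illustrative sketch; the function name is mine, not from the thread):

```python
# Fee-elasticity of volume: (dV/V)/(dF/F). Profit increases by dropping
# the fee exactly when volume is in the elastic region (< -1).
def fee_advice(F, V, dF, dV):
    elasticity = (dV / V) / (dF / F)
    if elasticity < -1:
        return "drop the fee rate"    # elastic: F*dV > -V*dF
    if elasticity > -1:
        return "raise the fee rate"   # inelastic
    return "at the optimum"           # unit elastic
```

For the example in the mail, halving the fee (dF = -F/2) while doubling the flow (dV = +V) gives an elasticity of -2, so dropping the fee is profitable.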

### Re: [Lightning-dev] PTLCs early draft specification

```On Tue, Dec 21, 2021 at 04:25:41PM +0100, Bastien TEINTURIER wrote:
> The reason we have "toxic waste" with HTLCs is because we commit to the
> payment_hash directly inside the transaction scripts, so we need to
> remember all the payment_hash we've seen to be able to recreate the
> scripts (and spend the outputs, even if they are revoked).

I think "toxic waste" refers to having old data around that, if used,
could cause you to lose all the funds in your channel -- that's why it's
toxic. This is more just regular landfill :)

> *_anchor: dust, who cares -- might be better if local_anchor used key =
> > revkey
> I don't think we can use revkey,

musig(revkey, remote_key)
--> allows them to spend after you've revealed the secret for revkey
you can never spend because you'll never know the secret for
remote_key

but if you just say:

(revkey)

then you can spend (because you know revkey) immediately (because it's
an anchor output, so intended to be immediately spent) or they can spend
if it's an obsolete commitment and you've revealed the revkey secret.

> this would prevent us from bumping the
> current remote commitment if it appears on-chain (because we don't know
> the private revkey yet if this is the latest commitment). Usually the
> remote peer should bump it, but if they don't, we may want to bump it
> ourselves instead of publishing our own commitment (where our main
> output has a long CSV).

If we're going to bump someone else's commitment, we'll use the
remote_anchor they provided, not the local_anchor, so I think this is
fine (as long as I haven't gotten local/remote confused somewhere along
the way).

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

```

### Re: [Lightning-dev] PTLCs early draft specification

```On Wed, Dec 08, 2021 at 04:02:02PM +0100, Bastien TEINTURIER wrote:
> I updated my article [0], people jumping on the thread now may find it
> helpful to better understand this discussion.
> [0] https://github.com/t-bast/lightning-docs/pull/16

Since merged, so

So imagine that this proposal is finished and widely adopted/deployed
and someone adds an additional feature bit that allows a channel to
forward PTLCs only, no HTLCs.

Then suppose that you forget every old PTLC, because you don't like
having your channel state grow without bound. What happens if your

* the musig2 channel funding is irrelevant -- the funding tx has been
spent at this point

* the unspent commitment outputs pay to:
    to_local: ipk = musig(revkey, mykey) -- known; scripts also known
    to_remote: claimable in 1 block, would be better if ipk was also musig
    *_anchor: dust, who cares -- might be better if local_anchor used
        key = revkey
    *_htlc: irrelevant by definition
    local_ptlc: ipk = musig(revkey, mykey) -- known; scripts also known

* commitment outputs may be immediately spent via layered txs. if so,
their outputs are: ipk = musig(revkey, mykey); with fixed scripts,
that include a relative timelock

So provided you know the revocation key (which you do, because it's an
old transaction and that only requires log(states) data to reconstruct)
and your own private key, you can reconstruct all the scripts and use
key path spends for every output immediately (excepting the local_anchor,
and to_remote is delayed by a block).

So while this doesn't achieve eltoo's goal of "no toxic waste", I believe
it does achieve the goal of "state information is bounded no matter
how long you leave the channel open / how many transactions travel over
the channel".

(Provided you're willing to wait for the other party to attempt to claim
a htlc via their layered transaction, you can use this strategy for
htlcs as well as ptlcs -- however this leaves you the risk that they
never attempt to claim the funds, which may leave you out of pocket,
and may give them the opportunity to do an attack along the lines of
"you don't get access to the \$10,000 locked in old HTLCs unless you pay
me \$1,000".  So I don't think that's really a smart thing to do)

Cheers,
aj

```

### Re: [Lightning-dev] PTLCs early draft specification

```On Thu, Dec 09, 2021 at 12:34:00PM +1100, Lloyd Fournier wrote:
> I wanted to add a theoretical note that you might be aware of. The final
> message "Bob -> Alice: revoke_and_ack" is not strictly necessary. Alice
> does not care about Bob revoking a commit tx that gives her strictly more
> coins.

That's true if Alice is only sending new tx's paying Bob; and Rusty's
examples in the `option_simplified_update` proposal do only include new
HTLCs...

But I think it's intended to cover *all* update messages, and if Alice is
also including any `update_fulfill_htlc` or `update_fail_htlc` messages in
the commitment, she's potentially gaining funds, both for the amount of
fees she saves by avoiding extra transactions, but for the fulfill case,
potentially also because she doesn't need to worry about the fulfilled
htlc reaching its timeout.

Actually, as an alternative to the `option_simplified_update` approach,
has anyone considered an approach more like this:

* each node can unilaterally send various messages that always update
the state, eg:
+ new htlc/ptlc paying the other node (update_add_htlc)
+ secret reveal of htlc/ptlc paying self (update_fulfill_htlc)
+ rejection of htlc/ptlc paying self (update_fail_htlc)
+ timeout of htlc/ptlc paying the other node (not currently allowed?)
+ update the channel fee rate (update_fee)

* continue to allow these to occur at any time, asynchronously, but
to make it easier to keep track of them, add a uint64_t counter
to each message, that each peer increments by 1 for each message.

* each channel state (n) then corresponds to the accumulation of
updates from each peer, up to message (a) for Alice, and message
(b) for Bob.

* so when updating to a new commitment (n+1), the proposal message
should just include both update values (a') and (b')

* nodes can then track the state by having a list of
htlcs/ptlcs/balances, etc for state (n), and a list of unapplied
update messages for themselves and the other party (a+1,...,a') and
(b+1,...,b'), and apply them in order when constructing the new state
(n+1) for a new commitment signing round

I think that retains both the interesting async bits (anyone can queue
state updates immediately) but also makes it fairly simple to maintain
the state?

> Bob's new commit tx can use the same revocation key as the previous
> one

That's a neat idea, but I think the fail/fulfill messages break it.
_But_ I think that means it would still be an interesting technique to
use for fast forwards which get updated for every add message...

> Not sending messages you don't need to is usually
> both more performant and simpler

The extra message from Bob allows Alice to discard the adaptor sigs
associated with the old state, which I think is probably worthwhile
anyway?

Cheers,
aj

```

### Re: [Lightning-dev] PTLCs early draft specification

```On Tue, Dec 07, 2021 at 11:52:04PM +, ZmnSCPxj via Lightning-dev wrote:
> Alternately, fast-forwards, which avoid this because it does not change
> commitment transactions on the payment-forwarding path.
> You only change commitment transactions once you have enough changes to
> justify collapsing them.

I think the problem t-bast describes comes up here as well when you
collapse the fast-forwards (or, anytime you update the commitment
transaction even if you don't collapse them).

That is, if you have two PTLCs, one from A->B conditional on X, one
from B->A conditional on Y. Then if A wants to update the commitment tx,
she needs to

1) produce a signature to give to B to spend the funding tx
2) produce an adaptor signature to authorise B to spend via X from his
commitment tx
3) produce a signature to allow B to recover Y after timeout from his
commitment tx spending to an output she can claim if he cheats
4) *receive* an adaptor signature from B to be able to spend the Y output
if B posts his commitment tx using A's signature in (1)

The problem is, she can't give B the result of (1) until she's received
(4) from B.

It doesn't matter if the B->A PTLC conditional on Y is in the commitment
tx itself or within a fast-forward child-transaction -- any previous
adaptor sig will be invalidated because there's a new commitment
transaction, and if you allowed any way of spending without an adaptor
sig, B wouldn't be able to recover the secret and would lose funds.

It also doesn't matter if the commitment transaction that A and B will
publish is the same or different, only that it's different from the
commitment tx that previous adaptor sigs committed to. (So ANYPREVOUT
would fix this if it were available)

So I think this is still a relevant question, even if fast-forwards
make it a rare problem, that perhaps is only applicable to very heavily
used channels.

(I said the following in email to t-bast already)

I think doing a synchronous update of commitments to the channel state,
something like:

Alice -> Bob: propose_new_commitment
    channel id
    adaptor sigs for PTLCs to Bob

Bob -> Alice: agree_new_commitment
    channel id
    adaptor sigs for PTLCs to Alice
    sigs for Alice to spend HTLCs and PTLCs to Bob from her own
        commitment tx
    signature for Alice to spend funding tx

Alice -> Bob: finish_new_commitment_1
    channel id
    sigs for Bob to spend HTLCs and PTLCs to Alice from his own
        commitment tx
    signature for Bob to spend funding tx
    reveal old prior commitment secret
    new commitment nonce

Bob -> Alice: finish_new_commitment_2
    reveal old prior commitment secret
    new commitment nonce

would work pretty well.

This adds half a round-trip compared to now:

Alice -> Bob: commitment_signed
Bob -> Alice: revoke_and_ack, commitment_signed
Alice -> Bob: revoke_and_ack

The timings change like so:

Bob can use the new commitment after 1.5 round-trips (previously 0.5)

Alice can be sure Bob won't use the old commitment after 2 round-trips
(previously 1)

Alice can use the new commitment after 1 round-trip (unchanged)

Bob can be sure Alice won't use the old commitment after 1.5 round-trips
(unchanged -- note: this is what's relevant for forwarding)

Making the funding tx a musig setup would mean also supplying 64B
of musig2 nonces along with the "adaptor sigs" in one direction,
and providing the other side's 64B of musig2 nonces back along with the
(now partial) signature for spending the funding tx (a total of 256B of
nonce data, not 128B).

Because it keeps both peers' commitments synchronised to a single channel
state, I think the same protocol should work fine with the revocable
signatures on a single tx approach too, though I haven't tried working
through the details.

Fast forwards would then be reducing the 2 round-trip protocol to
update the state commitment to a 0.5 round-trip update, to reduce
latency when forwarding by the same amount as before (1.5 round-trips
to 0.5 round-trips).

Cheers,
aj

```

### Re: [Lightning-dev] Lightning over taproot with PTLCs

```On Sat, Oct 09, 2021 at 11:12:07AM +1000, Anthony Towns wrote:
> Here's my proposal for replacing BOLT#2 and BOLT#3 to take advantage of
> taproot and implement PTLCs.

I think the conclusion from the discussions at the in-person LN summit
was to split these features up and implement them gradually. I think that
would look like:

1) taproot funding/anchor output
   benefits:
    * LN utxos just look normal, so better privacy
    * mutual closes also look normal, and only need one sig and no
      script, better privacy and lower fees
    * doesn't require updating any HTLC scripts
   complexities:
    * requires implementing musig/musig2/similar for mutual
      closes and signing commitment txs
    * affects gossip, which wants to link channels with utxos so needs
      to understand the new utxo format
    * affects splicing -- maybe it's literally an update to the
      splicing spec, and takes effect only when you open new channels
      or splice existing ones?

2) update commitment outputs to taproot
   benefits:
    * slightly cheaper unilateral closes, maybe more private?
   complexities:
    * just need to support taproot script path spends

3) PTLC outputs
   benefits:
    * has a different "hash" at every hop, arguably better privacy
    * can easily do cool things with points/secrets that would require
      zkp's to do with hashes/secrets
    * no need to remember PTLCs indefinitely in case of old
   complexities:
    * needs a routing feature bit
    * not usable unless lots of the network upgrades to support PTLCs

4) symmetric commitment tx (revocation via signature info)
   benefits:
    * reduces complexity of layered txs?
    * reduces gamesmanship of who posts the commitment tx?
    * enables low-latency/offline payments?
   complexities:
    * requires careful nonce management?

5) low-latency payments?
   benefits:
    * for payments that have no problems, halves the time to complete
    * the latency introduced by synchronous commitment updates doesn't
      matter for successful payments, so peer protocol can be simplified
   complexities:
    * ?

6) offline receipt?

7) eltoo channels?

8) eltoo factories?

Cheers,
aj

```

### Re: [Lightning-dev] Lightning over taproot with PTLCs

```On Wed, Oct 13, 2021 at 03:15:14PM +1100, Lloyd Fournier wrote:
> If you're willing to accept that "worst case" happening more often, I
> think you could then retain the low latency forwarding, by having the
> transaction structure be:

So the idea here is that we have two channel parameters:

PD - the payment delay or payment timeout delta, say 40 blocks
RD - the channel recovery delay, say 2016 blocks

and the idea is that if you publish an old state, I have the longer delay
(RD) to correct that; *but* if the currently active state includes a
payment that I've forwarded to you, I may only have the shorter delay
(PD) in order to forward the payment claim details back in order to
avoid being out of pocket.

The goal is to keep that working while also allowing me to tell you about
a payment to you in such a way that you can safely forward it on *without*

It's not really a super-important goal; it could shave off 50% of
the time to accept a ln tx when everything goes right, and there's no
bottlenecks elsewhere in the implementation, but it can't do anything
more than that, and doesn't help the really slow cases when things go
wrong. Mostly, I just find it interesting.

Suppose that a payment is forwarded from Alice to Bob, Carol and finally
reaches Dave. Alice/Bob and Carol/Dave are both colocated in a data centre
and have high bandwidth and have 1ms (rtt) latency, but Bob/Carol are
on different continents (but not via tor) and have 100ms (rtt) latency.

With 1.5 round-trips before forwarding, we'd get:

t=0 Alice tells Bob
t=1.5   Bob tells Carol
t=151.5 Carol tells Dave
t=153   Dave reveals the secret to Carol
t=153.5 Carol reveals the secret to Bob
t=203.5 Bob reveals the secret to Alice
t=204   Alice knows the secret!

That's how things work now, with "X tells Y" being:

Y->X: commitment_signed, revoke_and_ack
X->Y: revoke_and_ack

and "X reveals the secret to Y" being:

X->Y: update_fulfill_htlc

However, if we could do it optimally we would have:

t=0 Alice tells Bob about the payment
t=0.5   Bob tells Carol about the payment
t=50.5  Carol tells Dave about the payment
t=51Dave accepts the payment and tells Carol the secret
t=51.5  Carol accepts the payment and tells Bob the secret
t=101.5 Bob accepts the payment and tells Alice the secret
t=102   Alice knows the secret!
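# Both totals follow from the per-link delays (a python sketch; rtt in
# ms for the Alice-Bob, Bob-Carol, Carol-Dave links as above):

links = [1, 100, 1]

def end_to_end(forward_rtts, reveal_rtts=0.5):
    # each hop pays forward_rtts to forward the payment and
    # reveal_rtts to pass the secret back
    return sum(rtt * (forward_rtts + reveal_rtts) for rtt in links)

# current protocol (1.5 rtt to forward): 204ms; optimal (0.5 rtt): 102ms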

Looking just at Bob/Carol we might also have the underlying commitment
updates:

t=50.5  Carol acks the payment to Bob (commitment_signed, revoke_and_ack)
t=100.5 Bob acks Carol's ack, revoking old state (revoke_and_ack)
t=150.5 Carol's safe with the new state including the payment

t=51.5  Carol reveals the secret and signs a new updated state
        (update_fulfill_htlc, commitment_signed)
t=101.5 Bob acks receipt of the secret (commitment_signed, revoke_and_ack)
t=151.5 Carol's safe with the new state with an increased balance
        (revoke_and_ack)
t=201.5 Bob's state is up to date

Note that the first of those doesn't complete until well after Alice
would know the secret in an optimal construction; and that as described
the second update overlaps the first, which might not be particularly
desirable.

> In my mind your "update the base channel state" idea seems to fix everything
> by
> itself.

Yeah -- if you're willing to do 1.5 round-trips (and thanks to musig2 this
doesn't blow out to 2.5 (?) round-trips) that does solve everything. The
challenge is to do it in 0.5 round-trips. :)

> So at T - to_self_delay (or a bit before) you say to your counterparty
> "can we lift this HTLC out of your in-flight tx into the 'balance tx' (which
> will go back to naming a 'commitment tx' since it doesn't just have balance
> outputs anymore) so I can use it too? -- otherwise I'll have to close the
> channel on chain now to force you to reveal it to me on time?". If they agree,
> after the revocation and new commit tx everything is back to (tx symmetric)
> Poon-Dryja so no need for extra CSVs.

Maybe? So the idea is that:

1) Bob gets a "low-latency" tx that spends Alice's balance and has a
bunch of outputs for really recent payments
2) In normal conditions, in 5 or 10 or 30 seconds, Alice/Bob renegotiate
the base commitment to move those payments out of the "low-latency"
tx
3) In abnormal conditions, with an active forwarded "low-latency" tx and
communications failure of length up to "PD", Alice closes the channel
on chain.
4) Bob then has "PD" period to post the "low-latency" tx, if he
doesn't, Alice can do a layered claim of her balance preventing Bob
from doing so.
5) If Bob does post his "low-latency" tx, then he'll also need to reveal
secrets prior to the payment timeout.

So taking the payment timeout as T, then he'll have to post ```

### Re: [Lightning-dev] Lightning over taproot with PTLCs

```On Tue, Oct 12, 2021 at 04:18:37AM +, ZmnSCPxj via Lightning-dev wrote:
> > A+P + max(0, B'-B)*0.1 to Alice
> > B-f - max(0, B'-B)*0.1 to Bob

> So, if what you propose is widespread, then a theft attempt is costless:

That's what the "max" part prevents -- if your current balance is B and
you try to claim an old state with B' > B for a profit of B'-B, Alice
will instead take 10% of that value.

(Except maybe all the funds they were trying to steal were in P' rather
than B'; so better might have been "A+P + max(0, min((B'+P'-B)*0.1, B))")

Eltoo would enable costless theft attempts (ignoring fees), particularly
for multiparty channels/factories, of course, so getting the game theory
right in advance of that seems worth the effort anyway.

Cheers,
aj

```

### Re: [Lightning-dev] Lightning over taproot with PTLCs

```On Mon, Oct 11, 2021 at 05:05:05PM +1100, Lloyd Fournier wrote:
> ### Scorched earth punishment
> Another thing that I'd like to mention is that using revocable signatures
> enables scorched earth punishments [2].

I kind-of think it'd be more interesting to simulate eltoo's behaviour.
If Alice's current state has balances (A, B) and P in in-flight
payments, and Bob posts an earlier state with (A', B') and P' (so A+B+P
= A'+B'+P'), then maybe Alice's justice transaction should pay:

A+P + max(0, B'-B)*0.1 to Alice
B-f - max(0, B'-B)*0.1 to Bob

(where "f" is the justice transaction fees)

Idea being that in an ideal world there wouldn't be a hole in your pocket
that lets all your coins fall out, but in the event that there is such
a hole, it's a *nicer* world if the people who find your coins give them
back to you out of the kindness of their heart.
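# The proposed split as a python sketch (amounts in sats; f is the
# justice tx fee; B_old is Bob's balance in the old state he posted):

def justice_split(A, B, P, B_old, f):
    penalty = max(0, B_old - B) * 0.1   # 10% of the attempted gain
    return (A + P + penalty,            # to Alice
            B - f - penalty)            # to Bob

# an honest-looking old state (B_old <= B) costs Bob only the tx fee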

> Note that we number each currently inflight transaction by "k",
> starting at 0. The same htlc/ptlc may have a different value for k
> between different inflight transactions.
> Can you expand on why "k" is needed in addition to "n" and "i". k sounds like
> the same thing as i to me.

"k" is used to distinguish the inflight payments (htlcs/ptlcs), not the
inflight state (which is "i").

> Also what does RP/2/k notation imply given the definition of RP you gave
> above?

I defined earlier that if P=musig(A,B) then P/x/y = musig(A/x/y,B/x/y);
so RP/2/k = musig(A/2/n/i/2/k,RB2(n,i)/2/k).

>  * if the inflight transaction contains a ptlc output, [...]
> What about just doing a scriptless PTLC to avoid this (just CSV input of
> presigned tx)? The cost is pre-sharing more nonces per PTLC message.

Precisely that reason. Means you have to share "k+1" nonce pairs in
advance of every inflight tx update. Not a show stopper, just seemed
easier to use a key path spend instead of a script path spend.

> This does not support option_static_remotekey, but compensates for that
> by allowing balances to be recovered with only the channel setup data
> even if all revocation data is lost.
> This is rather big drawback but is this really the case? Can't "in-flight"
> transactions send the balance of the remote party to their unencumbered static
> remote key?

They could, but there's no guarantee that there is an inflight
transaction, or that the other party will post it for you. In those case,
you have to be able to redeem your output from the balance tx directly,
and if you can do that, might as well have every possible address be
derived differently to minimise the amount of information any third
parties could glean.

Cheers,
aj

```

### Re: [Lightning-dev] Lightning over taproot with PTLCs

```On Mon, Oct 11, 2021 at 09:23:19PM +1100, Lloyd Fournier wrote:
> On Mon, 11 Oct 2021 at 17:30, Anthony Towns  wrote:
> I don't think the layering here quite works: if Alice forwarded a payment
> to Bob, with timeout T, then the only way she can be sure that she can
> either reclaim the funds or know the preimage by time T is to close the
> channel on-chain at time T-to_self_delay.
> This problem may not be as bad as it seems.

Maybe you can break it down a little bit further. Consider *three*
delays:

1) refund delay: how long you have before a payment attempt starts
getting refunded

2) channel recovery delay: how long you have to recover from node
failure to prevent an old state being committed to, potentially losing

3) payment recovery delay: how long you have to recover from node
failure to prevent losing funds due to a forwarded payment (eg,
Carol claimed the payment, while Alice claimed the refund, leaving
Bob out of pocket)

(Note that if you allow payments up to the total channel balance, there's
not really any meaningful distinction between (2) and (3), at least in
the worst case)

With layered transactions, (2) and (3) are different -- if Bob's node
fails near the timeout, then both Alice and Carol drop to the blockchain,
and while Carol knows the preimage, Bob may have as little as the channel
"delay" parameter to extract the preimage from Carol's layered commitment
tx and post a layered commitment on top of Alice's unilateral
close, to avoid being out of pocket.

(Note that that's a worst case -- Carol would normally reveal the preimage
onchain earlier than just before the timeout, giving Bob more time to
recover his node and claim the funds from Alice)

If you're willing to accept that "worst case" happening more often, I
think you could then retain the low latency forwarding, by having the
transaction structure be:

commitment tx
input:
funding tx
outputs:
Alice's balance
(others)

low-latency inflight tx:
input:
Alice's balance
output:
(1) or (2)
Alice's remaining balance

Bob claim:
input:
(1) [<...> CSV bob CHECKSIG]
output:
[<...> checksigverify <...> checksig
ifdup notif <...> csv endif]

Too-slow:
input:
(2) [<...> CLTV alice CHECKSIG]
output:
Alice

The idea being:

* Alice sends the low-latency inflight tx which Bob then forwards
immediately.

* Bob then tries to update the base channel state with Alice, so both
sides have a commitment to the new payment, and the low-latency
inflight tx is voided (since it's based on a revoked channel state)
If this succeeds, everything is fine as usual.

* If Alice is unavailable to confirm that update, Bob closes the
channel prior to (payment-timeout - payment-recovery-delay), posting
the low-latency inflight tx. After an additional payment recovery delay
(and prior to payment-timeout) Bob posts "Bob claim", ensuring that the
only way Alice can claim the funds is if he had posted a revoked state.

* In this case, Alice has at least one payment-recovery-delay period
prior to the payment-timeout to notice the transaction onchain and
recover the preimage.

* If Bob posted the low-latency inflight tx later than
(payment-timeout - payment-recovery-delay) then Alice will have
payment-recovery-delay time to notice and post the "too-slow" tx and
claim the funds via the timeout path.

* If Bob posted a revoked state, Alice can also claim the funds via
Bob claim, provided she notices within the channel-recovery-delay

That only allows one low-latency payment to be inflight though, which I'm
not sure is that interesting... It's also kinda complicated, and doesn't
cover both the low-latency and offline cases, which is disappointing...

Cheers,
aj

```

### Re: [Lightning-dev] Lightning over taproot with PTLCs

```On Sat, Oct 09, 2021 at 11:12:07AM +1000, Anthony Towns wrote:
>  2. The balance transaction - tracks the funding transaction, contains
> a "balance" output for each of the participants.
>  3. The inflight transactions - spends a balance output from the balance
> transaction and provides outputs for any inflight htlc/ptlc transactions.
>  4. Layered transactions - spends inflight htlc/ptlc outputs by revealing
> the preimage, while still allowing for the penalty path.

I don't think the layering here quite works: if Alice forwarded a payment
to Bob, with timeout T, then the only way she can be sure that she can
either reclaim the funds or know the preimage by time T is to close the
channel on-chain at time T-to_self_delay.

Any time later than that, say T-to_self_delay+x+1, would allow Bob to
post the inflight tx at T+x (prior to Alice being able to claim her
balance directly due to the to_self_delay) and then immediately post the
layered transaction (4, above) revealing the preimage, and preventing
Alice from claiming the refund.

Cheers,
aj

```

### Re: [Lightning-dev] Lightning over taproot with PTLCs

```On Sat, Oct 09, 2021 at 12:21:03PM +, Jonas Nick wrote:
> it seems like parts of this proposal rely on deterministic nonces in MuSig.

The "deterministic" nonces are combined with "recoverable" nonces via
musig2, so I think that part is a red herring?

They're "deterministic" in the sense that the person who generated the
nonce needs to be able to recover the secret/dlog for the nonce later,
without having to store unique randomness for it. Thinking about it,
I think you could make the "deterministic" nonce secrets be

H( private-key, msg, other-party's-nonce-pair, 1 )
H( private-key, msg, other-party's-nonce-pair, 2 )

because you only need to recover the secret if the other party posts a
sig for a revoked transaction, in which case you can lookup their nonce
directly anyway. And you're choosing your "deterministic" nonce after
knowing what their ("revocable") nonce is, so can include it in the hash.

As far as the revocable nonce goes, you should only be generating a
single signature based on that, since that's used to finish things off
and post the tx on chain.

> Generally, this is insecure unless combined with heavy machinery that proves
> correctness of the nonce derivation in zero knowledge. If one signer uses
> deterministic nonces and another signer uses random nonces, then two signing
> sessions will have different challenge hashes which results in nonce reuse by
> the first signer [0]. Is there a countermeasure against this attack in the
> proposal? What are the inputs to the function that derive DA1, DA2? Is the
> assumption that a signer will not sign the same message more than once?

I had been thinking DA1,DA2 = f(seed,n) where n increases each round, but I
think the above would work and be an improvement. ie:

Bob has a shachain based secret generator, producing secrets s_0 to
s_(2**48). If you've seen s_0 to s_n, you only need to keep O(log(n))
of those values to regenerate all of them.
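
As a sketch of how that storage trick can work -- this follows the BOLT 3
style of derivation, but the exact bit conventions here are my own toy
choice, not part of the proposal:

import hashlib

BITS = 48

def flip(b, bit):
    out = bytearray(b)
    out[bit // 8] ^= 1 << (bit % 8)
    return bytes(out)

def from_seed(seed, index, bits=BITS):
    # derive s_index by flipping each set bit (high to low) and hashing
    p = seed
    for b in range(bits - 1, -1, -1):
        if (index >> b) & 1:
            p = hashlib.sha256(flip(p, b)).digest()
    return p

def derive(parent, parent_index, index):
    # a stored secret whose index has z trailing zero bits can regenerate
    # any index that agrees with it above those z bits
    z = (parent_index & -parent_index).bit_length() - 1 if parent_index else BITS
    assert index >> z == parent_index >> z
    return from_seed(parent, index & ((1 << z) - 1), bits=z)

seed = hashlib.sha256(b"bob's seed").digest()
# s_0xAB00 regenerates s_0xAB42 without it being stored separately:
assert derive(from_seed(seed, 0xAB00), 0xAB00, 0xAB42) == from_seed(seed, 0xAB42)

So keeping just the secrets whose indices end in runs of zero bits is
enough to regenerate everything seen so far, which is where the O(log(n))
storage bound comes from.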

Bob generates RB1_n and RB2_n as H(s_n, 1)*G and H(s_n, 2)*G and sends
those values to Alice.

Alice determines the message (ie, the transaction), and sets da1_n
and da2_n as H(A_priv, msg, RB1_n, RB2_n, 1) and H(A_priv, msg, RB1_n,
RB2_n, 2). She then calculates k=H(da1_n, da2_n, RB1_n, RB2_n), and
signs with her nonce, which is da1_n+k*da2_n, then sends da1_n*G and
da2_n*G and the partial signature to Bob.

Bob checks and records Alice's musig2 derivation and partial signature,
but does not sign himself.

_If_ Bob wants to close the channel and publish the tx, he completes
the signature by signing with nonce RB1_n + k*RB2_n.

If you can convince Bob to close the channel repeatedly, using the
same nonce pair, then he'll have problems -- but if you can do that,
you can probably trick him into closing the channel with old state,
which gives him the same problems by design... Or that's my take.

> It may be worth pointing out that an adaptor signature scheme can not treat
> MuSig2 as a black box as indicated in the "Adaptor Signatures" section [1].

Hmm, you had me panicking that I'd been describing how to combine the
two despite having decided it wasn't necessary to combine them... :)

(I figured doing musig for k ptlcs for every update would get old fast --
if you maxed the channel out with ~400 inflight ptlcs you'd be exchanging
~800 nonces for every update. OTOH, I guess that's the only thing you'd
be saving, and the cost is ~176 bytes of extra witness data per ptlc...
Hmm...)

Cheers,
aj


```

### Re: [Lightning-dev] Lightning over taproot with PTLCs

```On Sat, Oct 09, 2021 at 01:49:38AM +, ZmnSCPxj wrote:
> A transaction is required, but I believe it is not necessary to put it
> *onchain* (at the cost of implementation complexity in the drop-onchain case).

The trick with that is that if you don't put it on chain, you need
to calculate the fees for it in advance so that they'll be sufficient
when you do want to put it on chain, *and* you can't update it without
going onchain, because there's no way to revoke old off-chain funding
transactions.

> This has the advantage of maintaining the historical longevity of the channel.
> Many pathfinding and autopilot heuristics use channel lifetime as a positive
> indicator of desirability,

Maybe that's a good reason for routing nodes to do shadow channels as
a matter of course -- call the currently established channel between
Alice and Bob "C1", and leave it as bolt#3 based, but establish a new
taproot based channel C2 also between Alice and Bob. Don't advertise C2
(making it a shadow channel), just say that C1 now supports PTLCs, but
secretly commit those PTLCs to C2 instead of C1. Once the C2 funding tx
is buried deep enough, move C1's funds across to that now sufficiently
buried funding transaction, and convert C1 to a shadow channel as well.

In particular, that setup allows you to splice funds into or out of the
shadow channel while retaining the positive longevity heuristics of the
public channel.

Cheers,
aj


```

### [Lightning-dev] Lightning over taproot with PTLCs

```Hi all,

Here's my proposal for replacing BOLT#2 and BOLT#3 to take advantage of
taproot and implement PTLCs.

It's particularly inspired by ZmnSCPxj's thoughts from Dec 2019 [0], and
some of his and Lloyd Fournier's posts since then (which are listed in
references) -- in particular, I think those allow us to drop the latency
for forwarding a payment *massively* (though refunding a payment still
requires roundtrips), and also support receiving via a mostly offline
lightning wallet, which seems kinda cool.

I somehow hadn't realised it prior to a conversation with @brian_trollz
via twitter DM, but I think switching to PTLCs, even without eltoo,
means that there's no longer any need to permanently store old payment
info in order to recover the entirety of the channel's funds. (Some brute
force is required to recover the timeout info, but in my testing I think
that's about 0.05 seconds of work per ptlc output via python+libsecp256k1)

This doesn't require any soft-forks, so I think we could start work on
it immediately, and the above benefits actually seem pretty worth it,
even ignoring any privacy/efficiency benefits from doing taproot key
path spends and forwarding PTLCs.

I've sketched up the musig/musig2 parts for the "balance" transactions
in python [1] and tested it a little on signet [2], which I think is
enough to convince me that this is implementable. There'll be a bit of
preliminary work needed in actually defining specs/BIPs for musig and
musig2 and adaptor signatures, I think.

Anyway, details follow. They're also up on github as a gist [3] if that's
easier to read.

[0]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-December/002375.html

[1]
https://github.com/ajtowns/bitcoin/blob/202109-ptlc-lnpenalty/test/functional/feature_ln_ptlc.py

[2] The balance transaction (spending the funding tx, with outputs
being Alice's and Bob's channel balances) is at:

[3] https://gist.github.com/ajtowns/12f58fa8a4dc9f136ed04ca2584816a2/

Goals
=====

1. Support HTLCs
2. Support PTLCs
3. Minimise long-term data storage requirements
4. Minimise latency when forwarding payments
5. Minimise latency when refunding payments
6. Support offline receiving
7. Minimise on-chain footprint
8. Minimise ability for third-parties to analyse

Setup
=====

We have two participants in the channel, Alice and Bob. They each have
bip32 private keys, a and b, and share the corresponding xpubs A and B
with each other.

Musig
-----

We will use musig to combine the keys, where P = musig(A,B) = H(A,B,1)*A
+ H(A,B,2)*B. We'll talk about subkeys of P, eg P/4/5/6, which are
calculated by taking subkeys of the input and then applying musig,
eg P/4/5/6 = musig(A/4/5/6, B/4/5/6). (Note that we don't use hardened
paths anywhere)

Musig2
------

We'll use musig2 to sign for these keys, that is both parties will
pre-share two nonce points each, NA1, NA2, NB1, NB2, and the nonce will be
calculated as: R=(NA1+NB1)+k(NA2+NB2), where k=Hash(P,NA1,NA2,NB1,NB2,m),
where P is the pubkey that will be signing and m is the message to be
signed. Note that NA1, NA2, NB1, NB2 can be calculated and shared prior
to knowing what message will be signed.

The partial sig by A for a message m with nonce R as above is calculated as:

sa = (na1+k*na2) + H(R,A+B,m)*a

where na1, na2, and a are the secrets generating NA1, NA2 and A respectively.
Calculating the corresponding partial signature for B,

sb = (nb1+k*nb2) + H(R,A+B,m)*b

gives a valid signature (R,sa+sb) for (A+B):

(sa+sb)G = R + H(R,A+B,m)*(A+B)
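
To sanity-check that algebra, here's a toy run over secp256k1 -- naive
affine arithmetic, a plain sha256 standing in for proper tagged hashes
and serialization, and the even-y negations ignored; enough to check the
math, nothing more:

import hashlib

# secp256k1 field prime, group order, and generator
FP = 2**256 - 2**32 - 977
N  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G  = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
      0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % FP == 0: return None
    if a == b:
        l = 3 * a[0] * a[0] * pow(2 * a[1], -1, FP) % FP
    else:
        l = (b[1] - a[1]) * pow(b[0] - a[0], -1, FP) % FP
    x = (l * l - a[0] - b[0]) % FP
    return (x, (l * (a[0] - x) - a[1]) % FP)

def mul(k, pt=G):
    r = None
    while k:
        if k & 1: r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

def H(*args):  # stand-in hash mapping anything printable to a scalar
    return int.from_bytes(hashlib.sha256(repr(args).encode()).digest(), 'big') % N

a, b = H('a'), H('b')                            # private keys
na1, na2, nb1, nb2 = H(1), H(2), H(3), H(4)      # nonce secrets
A, B = mul(a), mul(b)
NA1, NA2, NB1, NB2 = mul(na1), mul(na2), mul(nb1), mul(nb2)

m = 'balance tx'
k = H(add(A, B), NA1, NA2, NB1, NB2, m)
R = add(add(NA1, NB1), mul(k, add(NA2, NB2)))
e = H(R, add(A, B), m)

sa = (na1 + k * na2 + e * a) % N
sb = (nb1 + k * nb2 + e * b) % N

# (R, sa+sb) verifies as a signature for A+B: (sa+sb)G = R + H(R,A+B,m)*(A+B)
assert mul((sa + sb) % N) == add(R, mul(e, add(A, B)))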

Note that BIP340 specifies x-only pubkeys, so A+B and R implicitly have
even y; however, since those values are calculated via musig and musig2,
that's not guaranteed. If

H(A,B,1)*A + H(A,B,2)*B

does not have even y, we calculate:

P = (-H(A,B,1))*A + (-H(A,B,2))*B

instead, which will have even y. Similarly, if (NA1+NB1+k(NA2+NB2)) does
not have even y, when signing, we replace each partial nonce by its negation,
eg: sa = -(na1+k*na2) + H(R,A+B,m)*a.

An adaptor signature for P for secret X is calculated as:

s = r + H(R+X, P, m)*p

which gives:

(s+x)G = (R+X) + H(R+X, P, m)*P

so that (R+X,s+x) is a valid signature by P of m, and the preimage for
X can be calculated as the difference between the published sig and the
adaptor sig: x = (s+x) - s.

Note that if R+X does not have even y, we will need to negate both R and X,
and the recovered secret preimage will be -x instead of x.
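
The same sort of toy secp256k1 arithmetic (naive affine operations,
plain sha256, no even-y handling) checks the adaptor algebra too:

import hashlib

FP = 2**256 - 2**32 - 977
N  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G  = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
      0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % FP == 0: return None
    l = (3 * a[0] * a[0] * pow(2 * a[1], -1, FP) if a == b
         else (b[1] - a[1]) * pow(b[0] - a[0], -1, FP)) % FP
    x = (l * l - a[0] - b[0]) % FP
    return (x, (l * (a[0] - x) - a[1]) % FP)

def mul(k, pt=G):
    r = None
    while k:
        if k & 1: r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

def H(*args):
    return int.from_bytes(hashlib.sha256(repr(args).encode()).digest(), 'big') % N

p, r, x = H('p'), H('r'), H('x')   # key, nonce, and adaptor secrets
Pt, R, X = mul(p), mul(r), mul(x)
m = 'ptlc tx'

e = H(add(R, X), Pt, m)            # challenge commits to R+X, not R
s = (r + e * p) % N                # adaptor sig: s = r + H(R+X,P,m)*p

# completing it with x gives a valid signature (R+X, s+x) for P:
assert mul((s + x) % N) == add(add(R, X), mul(e, Pt))

# and anyone seeing both sigs recovers the secret: x = (s+x) - s
assert ((s + x) - s) % N == x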

Revocable Secrets
-----------------

Alice and Bob have shachain secrets RA(n) and RB(n) respectively,
and second level shachain secrets RA2(n,i) and RB2(n,i), with n and i
counting up from 0 to a maximum.

Summary
=======

We'll introduce four layers of transactions:

1. The funding transaction - used to establish the channel, provides
the utxo backing the channel. ```

### Re: [Lightning-dev] [bitcoin-dev] Inherited IDs - A safer, more powerful alternative to BIP-118 (ANYPREVOUT) for scaling Bitcoin

```On Fri, Sep 17, 2021 at 09:58:45AM -0700, Jeremy via bitcoin-dev wrote,
on behalf of John Law:

> I'd like to propose an alternative to BIP-118 [1] that is both safer and more
> powerful. The proposal is called Inherited IDs (IIDs) and is described in a
> paper that can be found here [2]. [...]

Pretty sure I've skimmed this before but hadn't given it a proper look.
Saying "X is more powerful" and then saying it can't actually do the
same stuff as the thing it's "more powerful" than always strikes me as
a red flag. Anyhoo..

I think the basic summary is that you add to each utxo a new resettable
"structural" tx id called an "iid" and identify input txs that way when
signing, so that if the details of the transaction change but not its
structure, the signature remains valid.

In particular, if you've got a tx with inputs tx1:n1, tx2:n2, tx3:n3, etc;
and outputs out1, out2, out3, etc, then its structural id is hash(iid(tx1),
n1) if any of its outputs are "tagged" and it's not a coinbase tx, and
otherwise it's just its txid.  (The proposed tagging is to use a segwit
v2 output in the tx, though I don't think that's an essential detail)
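
Here's a toy reading of that definition -- the dict-based tx
representation and field names are mine, not the paper's:

import hashlib

def iid(txid, txs):
    # structural id: for a tagged, non-coinbase tx, hash the first
    # input's (parent iid, output index); otherwise just the txid
    tx = txs[txid]
    if tx["tagged"] and not tx.get("coinbase"):
        parent_txid, n = tx["inputs"][0]
        data = iid(parent_txid, txs).encode() + n.to_bytes(4, "little")
        return hashlib.sha256(data).hexdigest()
    return tx["txid"]

txs = {
    "A":  {"txid": "A",  "tagged": False, "inputs": []},
    "B":  {"txid": "B",  "tagged": True,  "inputs": [("A", 0), ("A", 1)]},
    "B'": {"txid": "B'", "tagged": True,  "inputs": [("A", 0), ("A", 1)]},
    "C":  {"txid": "C",  "tagged": True,  "inputs": [("B", 0)]},
    "C'": {"txid": "C'", "tagged": True,  "inputs": [("B'", 0)]},
}

# replacing B with B' leaves the structural ids unchanged, so a signature
# binding C to iid(B) stays valid for spending from B':
assert iid("B", txs) == iid("B'", txs)
assert iid("C", txs) == iid("C'", txs)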

So if you have a tx A with 3 outputs, then tx B spends "A:0, A:1" and
tx C spends "B:0" and tx D spends "C:0", if you replace B with B',
then if both B and B' were tagged, and the signatures for C (and D,
assuming C was tagged) will still be valid for spending from B'.

So the question is what you can do with that.

The "2stage" protocol, proposed as an alternative to eltoo, is
essentially just:

a) funding tx gets dropped to the chain
b) closing state is proposed by one party
c) other party can immediately finalise by confirming a final state
that matches the proposed closing state, or was after it
d) if the other party's not around for whatever delay, the party that
proposed the close can finalise it

That doesn't work for more than two participants, because two of
the participants could collude to take the fast path in (c) with some
earlier state, robbing any other participants. That said, this is a fine
protocol for two participants, and might be better than doing the full
eltoo arrangement if you only have a two participant channel.

To make channel factories work in this model, I think the key step is
using invalidation trees to allow updating the split of funds between
groups of participants. I think invalidation trees introduce a tradeoff
between (a) how many updates you can make, and (b) how long you have to
notice a close is proposed and correct it, before an invalidated state
can be posted, and (c) how long it will take to be able to extract your
funds from the factory if there are problems initially. You reduce those
delays substantially (to a log() factor) by introducing a hierarchy of
update txs (giving you a log() number of txs), I think.

That's the "multisig factories" section anyway, if I'm
following correctly. The "timeout trees", "update-forest" and
"challenge-and-response" approaches all introduce a trusted user ("the
operator"), I think, so are perhaps more comparable to statechains
than eltoo?

So how does that compare, in my opinion?

If you consider special casing two-party channels with eltoo, then I
think eltoo-2party and 2stage are equally effective. Comparing
eltoo-nparty and the multisig iid factories approach, I think the
uncooperative case looks like:

ms-iid:
log(n) txs (for the invalidation tree)
log(n) time (?) (for the delays to ensure invalidated states don't
get published)

eltoo: 1 tx from you
1 block after you notice, plus the fixed csv delay

A malicious counterparty can post many old update states prior to you
posting the latest state, but those don't introduce extra csv delays
and you aren't paying the fees for those states, so I don't think it
makes sense to call that an O(n) delay or cost.

An additional practical problem with lightning is dealing with layered
commitments; that's a problem both for the delays while waiting for a
potential rejection in 2stage and for the invalidation tree delays in the
factory construction. But it's not a solved problem for eltoo yet, either.

As far as implementation goes, introducing the "iid" concept would mean
that info would need to be added to the utxo database -- if every utxo
got an iid, that would be perhaps a 1.4GB increase to the utxo db (going
by unique transaction rather than unique output), but presumably iid txs
would end up being both uncommon and short-lived, so the cost is probably
really mostly just in the additional complexity. Both iid and ANYPREVOUT
require changes to how signatures are evaluated and apps that use the
new feature are written, but ANYPREVOUT doesn't need changes beyond that.

(Also, the description of OP_CODESEPARATOR (footnote 13 on page 13,
ominous!) doesn't match its implementation in taproot. It also says BIP
118 introduces a new address type for floating transactions, but while
this was floated on the list, ```

### Re: [Lightning-dev] Do we really want users to solve an NP-hard problem when they wish to find a cheap way of paying each other on the Lightning Network?

```On Thu, Aug 26, 2021 at 04:33:23PM +0200, René Pickhardt via Lightning-dev
wrote:
> As we thought it was obvious that the function is not linear we only explained
> in the paper how the jump from f(0)=0 to f(1) = ppm+base_fee breaks convexity.

(This would make more sense to me as "f(0)=0 but f(epsilon)->b as
epsilon->0, so it's discontinuous")
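
A quick numeric check that the discontinuity breaks convexity (the
base/ppm values here are arbitrary):

def fee_msat(amount_msat, base=1000, ppm=100):
    # f(0) = 0, but f(epsilon) -> base as epsilon -> 0
    return 0 if amount_msat == 0 else base + amount_msat * ppm // 1_000_000

a = 100_000
# convexity would need f(midpoint) <= average of the endpoints, but:
assert fee_msat(a) > (fee_msat(0) + fee_msat(2 * a)) / 2   # 1010 > 510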

> "Do we really want users to solve an NP-hard problem when
> they wish to find a cheap way of paying each other on the Lightning Network?"

FWIW, my answer to this is "sure, if that's the way it turns out".

Another program which solves an NP-hard problem every time it runs is
"apt-get install" -- you can simulate 3SAT using Depends: and Conflicts:
relationships between packages. I worked on a related project in Debian
back in the day that did a slightly more complicated variant of that
problem, namely working out if updating a package in the distro would
render other packages uninstallable (eg due to providing a different
library version) -- as it turned out, that even actually managed to hit
some of the "hard" NP cases every now and then. But it was never really
that big a deal in practice: you just set an iteration limit and consider
it to "fail" if things get too complicated, and if it fails too often,
you re-analyse what's going on manually and add a new heuristic to cope
with it.

I don't see any reason to think you can't do roughly the same for
lightning; at worst just consider yourself as routing on log(N) different
networks: one that routes payments of up to 500msat at (b+0.5ppm), one
that routes payments of up to 1sat at (b+ppm), one that routes payments
of up to 2sat at (b+2ppm), one that routes payments of up to 4sat at
(b+4ppm), etc. Try to choose a route for all the funds; if that fails,
split it; repeat. In some cases that will fail despite there being a
possible successful multipath route, and in other cases it will choose a
moderately higher fee path than necessary, but if you're talking about paying
a 0.2% fee vs a 0.1% fee when the current state of the art is a 1% fee,
that's fine.
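
The "choose a route for all the funds; if that fails, split it" loop is
roughly the following, with find_route standing in for whatever
pathfinding you run over those log(N) networks:

def pay(amount, find_route, max_parts=16):
    # try to route the full amount; on failure, split in half and recurse,
    # giving up once we'd exceed max_parts partial payments
    r = find_route(amount)
    if r is not None:
        return [r]
    if max_parts <= 1:
        return None
    half = amount // 2
    left = pay(half, find_route, max_parts // 2)
    right = pay(amount - half, find_route, max_parts - max_parts // 2)
    if left is None or right is None:
        return None
    return left + right

# toy network that can only carry 300 sat per path:
parts = pay(1000, lambda amt: amt if amt <= 300 else None)
assert parts == [250, 250, 250, 250] and sum(parts) == 1000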

Cheers,
aj


```

### Re: [Lightning-dev] #zerobasefee

```On Tue, Aug 24, 2021 at 08:50:42PM -0700, Matt Corallo wrote:
> I feel like we're having two very, very different conversations here. On one
> hand, you're arguing that the base fee is of marginal use, and that maybe we
> can figure out how to average it out such that we can avoid needing it.

I'm not sure about the "we" in that sentence -- I'm saying node operators
shouldn't bother with it, not that lightning software devs shouldn't offer
it as a config option or take it into account when choosing routes. The
only software change that /might/ make sense is changing defaults from
1sat to 0msat, but it seems a bit early for that too, to me.

(I'm assuming comments like "We'll most definitely support #zerobasefee"
[0] just means "you can set it to zero if you like" which seems like a
weird thing to have to say explicitly...)

> On
> the other hand, I'm arguing that, yes, maybe you can, but ideally you
> wouldn't have to, because its still pretty nice to capture those costs
> sometimes.

I don't really think it captures costs at all, but I do agree it could
be nice (at least in theory) to have it available since then you might
be able to better optimise your fee income based on whatever demand
happens to be. That's to increase profits, not match costs though, and
I'm not convinced the theory will play out in practice presuming AMP is
often useful/necessary.

> Also, even if we can maybe do away with the base fee, that still
> doesn't mean we should start relying on the lack of any
> not-completely-linear-in-HTLC-value fees in our routing algorithms,

I mean, experimental/research routing algorithms should totally rely
on that if they feel like it? I just don't see any evidence that
anyone's thinking of moving that out of research and into production
until there's feedback from operators and a lot more results from the
research in general...

> as maybe
> we'll want to do upfront payments or some other kind of anti-DoS payment in
> the future to solve the gaping, glaring, giant DoS hole that is HTLCs taking
> forever to time out.

Until we've got an even vaguely workable scheme for that, I don't
think it's relevant to consider. (If my preferred scheme turns out
to be workable, I don't think it needs to be taken into account when
(multi)pathfinding at all)

> I'm not even sure that you're trying to argue, here, that we should start
> making key assumptions about the only fee being a proportional one in our
> routing algorithms, but that is what the topic at hand is, so I can't help
> but assume you are?

No, that's not the topic at hand, at all?

I mean, it's related, and interesting to talk about, but it's a digression
into "wild ideas that might happen in the future", not the topic... I
don't think anyone's currently advocating for node software to work that
way? (I do think having many/most channels have a zero base fee will make
multipath routing algos work better even when they *don't* assume the
base fee is zero)

I think I'm arguing for these things:

a) "everyone" should drop their base fee msat from the default,
probably to 0 because that's an easy fixed point that you don't need
to think about again as the price of btc changes, but anything at
or below 10msat would be much more reasonable than 1000msat.

b) if people are concerned about wasting resources forwarding super
small payments for correspondingly super small fees, they should
raise min_htlc_amount from 0 (or 1000) to compensate, instead of
raising their base fee.

c) software should dynamically increase min_htlc_amount as the
number of available htlc slots decreases, as a DoS mitigation measure.
(presuming this is a temporary increase, probably this wouldn't
be gossiped, and possibly not even communicated to the channel
counterparty -- just a way of immediately rejecting htlcs? I think
if something along these lines were implemented, (b) would almost
never be necessary)

d) the default base fee should be changed to 0, 1, or 10msat instead
of 1000msat

e) trivially: (I don't think anyone's saying otherwise)
- 0 base fee should be a supported config option
- research/experimental routing algorithms are great and should
be encouraged
- deploying new algorithms in production should only be done with
a lot of care
- changing the protocol should only be done with even more care
- proportional fees should be rounded up to the next msat and never
rounded down to 0
- research/experiments on charging for holding htlcs open should
continue (likewise research on other DoS prevention measures)

I'm not super sure about (c) or (d), and the "everyone" in (a) could
easily not really be everyone.
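
For (c), the sort of policy I have in mind is something as simple as
(numbers invented):

def dynamic_min_htlc_msat(base_min_msat, slots_free, slots_total):
    # toy DoS mitigation: scale the effective minimum up as free
    # HTLC slots run out; reject everything once none are left
    if slots_free <= 0:
        return None
    return base_min_msat * slots_total // slots_free

assert dynamic_min_htlc_msat(1000, 400, 400) == 1000    # idle: default min
assert dynamic_min_htlc_msat(1000, 10, 400) == 40000    # nearly full: 40x min
assert dynamic_min_htlc_msat(1000, 0, 400) is None      # full: reject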

> If you disagree with the above characterization I'm happy to go line-by-line
> tit-for-tat, but usually those kinds of tirades aren't exactly useful and
> end up being more about semantics than ```

### Re: [Lightning-dev] #zerobasefee

```On Mon, Aug 16, 2021 at 12:48:36AM -0400, Matt Corallo wrote:
> > The base+proportional fees paid only on success roughly match the *value*
> > of forwarding an HTLC, they don't match the costs particularly well
> > at all.
> Sure, indeed, there's some additional costs which are not covered by failed
> HTLCs, [...]
> Dropping base fee makes the whole situation a good chunk *worse*.

Can you justify that quantitatively?

Like, pick a realistic scenario, where you can make things profitable
with some particular base_fee, prop_fee, min_htlc_amount combination,
but can't reasonably pick another similarly profitable outcome with
base_fee=0?  (You probably need to have a bimodal payment distribution
with a micropayment peak and a regular payment peak, I guess, or perhaps
have particularly inelastic demand and highly competitive supply?)

> > And all those costs can be captured equally well (or badly) by just
> > setting a proportional fee and a minimum payment value. I don't know why
> > you keep ignoring that point.
> I didn't ignore this, I just disagree, and I'm not entirely sure why you're
> ignoring the points I made to that effect :).

I don't think I've seen you explicitly disagree with that previously,
nor explain why you disagree with it? (If I've missed that, a reference
appreciated; explicit re-explanation also appreciated)

> In all seriousness, I'm entirely unsure why you think proportional is just
> as good?

In principle, because fee structures already aren't a good match, and
a simple approximation is better that a complicated approximation.
Specifically, because you can set

min_htlc_msat=old_base_fee_msat * 1e6 / prop_fee_millionths

which still ensures every HTLC you forward offers a minimum fee of
old_base_fee_msat, and your fees still increase as the value transferred
goes up, which in the current lightning environment seems like it's just
as good an approximation as if you'd actually used "old_base_fee_msat".

For example, maybe you pay $40/month for your node, which is about 40msat
per second [0], and you really can only do one HTLC per second on average
[1]. Then instead of a base_fee of 40msat, pick your proportional rate,
say 0.03%, and calculate your min_htlc amount as above, ie 133sat. So if
someone sends 5c/133sat through you, they'll pay 40msat, and for every
extra sat they'll pay an extra 0.3msat. Your costs are then at least
covered, and provided your fee rate is competitive and there's traffic
on the network, you'll make your desired profit.

If your section of the lightning network is being used mainly for
microtransactions, and you're not competitive/profitable when limiting
yourself to >5c transactions, you could increase your proportional fee
and lower your min_htlc amount, eg to 1% and 4sat so that you'll get
your 40msat from a 4sat/0.16c HTLC, and increase at a rate of 10msat/sat
after that.
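
Spelling out those two configurations:

def min_htlc_msat(old_base_fee_msat, prop_fee_millionths):
    # smallest HTLC whose proportional fee still pays the old base fee
    return old_base_fee_msat * 1_000_000 // prop_fee_millionths

def prop_fee_msat(amount_msat, prop_fee_millionths):
    # proportional fee, rounded up so it never drops to 0
    return -(-amount_msat * prop_fee_millionths // 1_000_000)

# 0.03% (300 ppm) targeting a 40msat minimum fee -> ~133 sat minimum HTLC
assert min_htlc_msat(40, 300) == 133_333
assert prop_fee_msat(133_333, 300) == 40

# the microtransaction variant: 1% (10000 ppm) -> 4 sat minimum HTLC
assert min_htlc_msat(40, 10_000) == 4_000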

That at least matches the choices you're probably actually making as a
node operator: "I'm trying to be cheap at 0.03% and focus on relatively
large transfers" vs "I'm focussing on microtransactions by reducing the
minimum amount I'll support and being a bit expensive". I don't think
anyone's setting a base fee by calculating per-tx costs (and if they
were, per the footnote, I'm not convinced it'd even justify 1msat let
alone 1sat per tx).

OTOH, if you want to model an arbitrary concave fee function (because
you have some scheme that optimises fee income by discriminating against
smaller payments), you could do that by having multiple channels between
the same nodes, which is much more effective with (base, prop) fee pairs
than with (prop, min) pairs. (With (prop, min) pairs, you end up with
large ranges of the domain that would prefer to pay prop2*min2 rather
than prop1*x when x < min2.)

> As you note, the cost for nodes is a function of the opportunity
> cost of the capital, and opportunity cost of the HTLC slots. Lets say as a
> routing node I decide that the opportunity cost of one of my HTLC slots is
> generally 1 sat per second, and the average HTLC is fulfilled in one second.
> Why is it that a proportional fee captures this "equally well"?!

If I send an HTLC through you, I can pay your 1 sat fee, then keep the
HTLC open for a day, costing you 86,400 sats by your numbers. So I don't
think that's even remotely close to capturing the costs of the individual
HTLC that's paying the fee.

But if your averages are right, and enough people are nice despite me
being a PITA, then you can get the same minimum with a proportional fee;
if you're charging 0.1% you set the minimum amount to be 1000 sats.

(But 1sat per HTLC is ridiculously expensive, like at least 20x over
what your actual costs would be, even if your hardware is horribly slow
and horribly expensive)

> Yes, you could amortize it,

You're already amortizing it: that's what "generally 1 sat per second"
and "average HTLC is fulfilled in one second" is capturing.

> but that doesn't make it "equally" good, and
> there are
```

### Re: [Lightning-dev] #zerobasefee

```On Sun, Aug 15, 2021 at 10:21:52PM -0400, Matt Corallo wrote:
> On 8/15/21 22:02, Anthony Towns wrote:
> > > In
> > > one particular class of applicable routing algorithms you could use for
> > > lightning routing having a base fee makes the algorithm intractably slow,
> > I don't think of that as the problem, but rather as the base fee having
> > a multiplicative effect as you split payments.
> Yes, matching the real-world costs of forwarding an HTLC.

Actually, no, not at all.

The base+proportional fees paid only on success roughly match the *value*
of forwarding an HTLC, they don't match the costs particularly well
at all.

Why not? Because the costs are incurred on failed HTLCs as well, and
also depend on the time a HTLC lasts, and also vary heavily depending
on how many other simultaneous HTLCs there are.

> Yes. You have to pay the cost of a node. If we're really worried about this,
> we should be talking about upfront fees and/or refunds on HTLC fulfillment,
> not removing the fees entirely.

(I don't believe either of those are the right approach, but based on
previous discussions, I don't think anyone's going to realise I'm right
until I implement it and prove it, so *shrug*)

> > Being denominated in sats, the base fee also changes in value as the
> > bitcoin price changes -- c-lightning dropped the base fee to 1sat (from
> > 546 sat!) in Jan 2018, but the value of 1sat has increased about 4x
> > since then, and it seems unlikely the fixed costs of a successful HTLC
> > payment have likewise increased 4x.  Proportional fees deal with this
> > factor automatically, of course.
> This isn't a protocol issue, implementations can automate this without issue.

I don't think anyone's proposing the protocol be changed; just that node
operators set an option to a particular value?

Well, except that Lisa's maybe proposing that 0 not be allowed, and a
value >= 0.001 sat be required? I'm not quite sure.

> > > There's real cost to distorting the fee structures on the network away
> > > from
> > > the costs of node operators,
> > That's precisely what the base fee is already doing.
> Huh? For values much smaller than a node's liquidity, the cost for nodes is
> (mostly) a function of HTLCs, not the value.

Yes, the cost for nodes is a function of the requests that come in, not
how many succeed. The fees are proportional to how many succeed, which
is at best a distorted reflection of the number of requests that come in.

> The cost to nodes is largely [...]

The cost to nodes is almost entirely the opportunity cost of not being
able to accept other txs that would come in afterwards and would pay
higher fees.

And all those costs can be captured equally well (or badly) by just
setting a proportional fee and a minimum payment value. I don't know why
you keep ignoring that point.

> so I'd argue for many HTLCs forwarded
> today per-payment costs mirror the cost to a node much, much, much, much
> better than some proportional fees?

You're talking yourself into a *really* broken business model there.

> > Additionally, I don't think HTLC slot usage needs to be kept as a
> > limitation after we switch to eltoo;
> The HTLC slot limit is to keep transactions broadcastable. I don't see why
> this would change, you still get an output for each HTLC on the latest
> commitment in eltoo, AFAIU.

eltoo gives us the ability to have channel factories, where we divide
the overall factory balance amongst different channels, all updated
off-chain. It seems likely we'll want to do factories from day one,
so that we don't implicitly limit either the lifetime of the channel
or its update rate (>1 update/sec ~= <4 year lifetime otherwise if I
did the maths right). Once we're doing factories, if we have more than
however many htlcs for a channel, we can re-divide the factory balance
and add a new channel. If the limit is 500 HTLCs per tx, you'd have to
amortize 0.2% of the new tx across each HTLC, in addition to the cost
of the HTLC itself, but that seems trivial.

> > and in the meantime, I think it can
> > be better managed via adjusting the min_htlc_amount -- at least for the
> > scenario where problems are being caused by legitimate payment attempts,
> > which is also the only place base fee can help.
> Sure, we could also shift towards upfront fees or similar solutions,

Upfront fees seem extremely vulnerable to attacks, and are certainly a
(pretty large) protocol change.

> > > Instead, we should investigate how we can
> > > apply the ideas here with the more complicated fee structures we have.
> > Fee structures should be *simple* not complicated.
> > I mean, it's kind of great that we started off complicated -- if it
> > turns out base fee isn't necessary, it's easy to just set it to zero
```
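A quick numeric sketch (mine, not from the thread) of the two points argued above: a flat per-HTLC fee doesn't scale with how long a slot stays occupied, while a proportional fee plus a minimum amount still enforces the same minimum fee.

```python
# Figures from the exchange above: a routing node that values an HTLC slot
# at 1 sat/second, but is paid a flat 1 sat per-HTLC fee.
slot_cost_per_sec = 1        # sats/second, the routing node's own estimate
fee = 1                      # sats, flat per-HTLC fee
held_for = 24 * 3600         # seconds: an uncooperative sender holds it a day
print(slot_cost_per_sec * held_for - fee)     # 86_399 sats of uncompensated cost

# The equivalence aj keeps pointing at: a 0.1% proportional fee with a
# 1000 sat minimum amount enforces the same 1 sat minimum fee.
prop_ppm, min_amount_sat = 1000, 1000
print(min_amount_sat * prop_ppm // 1_000_000)  # 1 sat minimum fee
```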

### Re: [Lightning-dev] #zerobasefee

```On Sun, Aug 15, 2021 at 07:19:01AM -0500, lisa neigut wrote:
> My suggestion would be that, as a compromise, we set a network wide minimum
> fee
> at the protocol level of 1msat.

Is that different in any meaningful way to just saying "fees get rounded
up to the nearest msat" ? If the fee is 999.999msat, expecting to get
away with paying less than 1sat seems kinda buggy to me.

On Sun, Aug 15, 2021 at 08:04:52PM -0400, Matt Corallo wrote:
> I'm frankly still very confused why we're having these conversations now.

Because it's what people are thinking about. The bar for having a
conversation about something is very low...

> In
> one particular class of applicable routing algorithms you could use for
> lightning routing having a base fee makes the algorithm intractably slow,

I don't think of that as the problem, but rather as the base fee having
a multiplicative effect as you split payments.

If every channel has the same (base,proportional) fee pair, and you send a
payment along a single path, you're paying n*(base+k*proportional). If
you split the payment, and send half of it one way, and half the other
way, you're paying n*(2*base+k*proportional). If you split the payment
four ways, you're paying n*(4*base+k*proportional). Where's the value
to the network in penalising payment splitting?

Being denominated in sats, the base fee also changes in value as the
bitcoin price changes -- c-lightning dropped the base fee to 1sat (from
546 sat!) in Jan 2018, but the value of 1sat has increased about 4x
since then, and it seems unlikely the fixed costs of a successful HTLC
payment have likewise increased 4x.  Proportional fees deal with this
factor automatically, of course.

> There's real cost to distorting the fee structures on the network away from
> the costs of node operators,

That's precisely what the base fee is already doing. Yes, we need some
other way of charging fees to prevent using up too many slots or having
transactions not fail in a timely manner, but the base fee does not
do that.

> Imagine we find some great way to address HTLC slot flooding/DoS attacks (or
> just chose to do it in a not-great way) by charging for HTLC slot usage, now
> we can't fix a critical DoS issue because the routing algorithms we deployed
> can't handle the new costing.

I don't think that's true. The two things we don't charge for that can
be abused by probing spam are HTLC slot usage and channel balance usage;
both are problems only in proportion to the amount of time they're held
open, and the latter is also only a problem proportional to the value
being reserved. [0]

Additionally, I don't think HTLC slot usage needs to be kept as a
limitation after we switch to eltoo; and in the meantime, I think it can
be better managed via adjusting the min_htlc_amount -- at least for the
scenario where problems are being caused by legitimate payment attempts,
which is also the only place base fee can help.

[0] (Well, ln-penalty's requirement to permanently store HTLC information
in order to apply the penalty is in some sense a constant
cost, however the impact is also proportional to value, and for
sufficiently low value HTLCs can be ignored entirely if the HTLC
isn't included in the channel commitment)

> Instead, we should investigate how we can
> apply the ideas here with the more complicated fee structures we have.

Fee structures should be *simple* not complicated.

I mean, it's kind of great that we started off complicated -- if it
turns out base fee isn't necessary, it's easy to just set it to zero;
if we didn't have it, but needed it, it would be much more annoying to add it in later.

> Color me an optimist, but I'm quite confident with sufficient elbow grease
> and heuristics we can get 95% of the way there. We can and should revisit
> these conversations if such exploration is done and we find that its not
> possible, but until then this all feels incredibly premature.

Depends; I don't think it makes sense to try to ban nodes that don't have
a base fee of zero or anything, but random people on twitter advocating
that node operators should set it to zero and just worry about optimising
via the proportional fee and the min htlc amount seems fine.

For an experimental plugin that aggressively splits payments up, I think
either ignoring channels with >0 base fee entirely, or deciding that
you're happy to spend a total of X sats on base fees, and then ignoring
channels whose base fee is greater than X/paths/path-length sats is fine.

But long term, I also think that the base fee is an entirely unhelpful
complication that will eventually just be hardcoded to zero by everyone,
and eventually channels that propose non-zero base fees won't even be
gossiped. I don't expect that to happen any time soon though.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

```
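The split-payment arithmetic in the message above -- n*(base+k*proportional) for one path, n*(2*base+k*proportional) for two, and so on -- can be checked with a small sketch (the function names are mine):

```python
def path_fee_msat(amount_msat, n_hops, base_msat, ppm):
    """Fee for one path: every hop charges base_msat plus a ppm-proportional part."""
    return n_hops * (base_msat + amount_msat * ppm // 1_000_000)

def split_fee_msat(amount_msat, parts, n_hops, base_msat, ppm):
    """Total fee when the payment is split evenly across `parts` paths."""
    return parts * path_fee_msat(amount_msat // parts, n_hops, base_msat, ppm)

amt = 1_000_000_000   # a 1M sat payment, in msat
for parts in (1, 2, 4):
    # the proportional part stays constant; the base-fee part scales with `parts`
    print(parts, split_fee_msat(amt, parts, 5, 1000, 1000))
```

With zero base fee the totals for all three splits are identical, which is exactly the "no penalty for splitting" property.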

### [Lightning-dev] #zerobasefee

```Hey *,

There's been discussion on twitter and elsewhere advocating for
setting the BOLT#7 fee_base_msat value [0] to zero. I'm just writing
this to summarise my understanding in a place that's able to easily be
referenced later.

Setting the base fee to zero has a couple of benefits:

- it means you only have one value to optimise when trying to collect
the most fees, and one-dimensional optimisation problems are
obviously easier to write code for than two-dimensional optimisation
problems

- when finding a route, if all the fees on all the channels are
proportional only, you'll never have to worry about paying more fees
just as a result of splitting a payment; that makes routing easier
(see [1])

So what's the cost? The cost is that there's no longer a fixed minimum
fee -- so if you try sending a 1sat payment you'll pay 0.1% of the fee
to send a 1000sat payment, and there may be fixed costs that you have
in routing payments that you'd like to be compensated for (eg, the
computational work to update channel state, the bandwith to forward the
tx, or the opportunity cost for not being able to accept another htlc if
you've hit your max htlcs per channel limit).

But there's no need to explicitly separate those costs the way we do
now; instead of charging 1sat base fee and 0.02% proportional fee,
you can instead just set the 0.02% proportional fee and have a minimum
payment size of 5000 sats (htlc_minimum_msat=5e6, ~$2), since 0.02%
of that is 1sat. Nobody will be asking you to route without offering a
fee of at least 1sat, but all the optimisation steps are easier.

You could go a step further, and have the node side accept smaller
payments despite the htlc minimum setting: eg, accept a 3000 sat payment
provided it pays the same fee that a 5000 sat payment would have. That is,
treat the setting as minimum_fee=1sat, rather than minimum_amount=5000sat;
so the advertised value is just calculated from the real settings,
and that nodes that want to send very small values despite having to
pay high rates can just invert the calculation.

I think something like this approach also makes sense when your channel
becomes overloaded; eg if you have x HTLC slots available, and y channel
capacity available, setting a minimum payment size of something like
y/2/x**2 allows you to accept small payments (good for the network)
when your channel is not busy, but reserves the last slots for larger
payments so that you don't end up missing out on profits because you
ran out of capacity due to low value spam.

Two other aspects related to this:

At present, I think all the fixed costs are also incurred even when
a htlc fails, so until we have some way of charging failing txs for
incurring those costs, it seems a bit backwards to penalise successful
txs who at least pay a proportional fee for the same thing. Until we've
got a way of handling that, having zero base fee seems at least fair.

Lower value HTLCs don't need to be included in the commitment transaction
(if they're below the dust level, they definitely shouldn't be included,
and if they're less than 1sat they can't be included), and as such don't
incur all the same fixed costs that HTLCs included in the commitment do.
Having different base fees for microtransactions that incur fewer costs
would be annoying; so having that be "amortised" into the proportional
fee might help there too.

I think eltoo can help in two ways by reducing the fixed costs: you no
longer need to keep HTLC information around permanently, and if you do
a multilevel channel factory setup, you can probably remove the ~400
HTLCs per channel at any one time limit. But there's still other fixed
costs, so I think that would just lower the fixed costs, not remove them
altogether and isn't a fundamental change.

I think the fixed costs for forwarding a HTLC are very small; something
like:

0.02sats -- cost of permanently storing the HTLC info
(100 bytes, $500/TB/year, 1% discount rate)
0.04sats -- compute and bandwidth cost for updating an HTLC ($40/month
at linode, 1 second of compute)

The opportunity cost of having HTLC slots or Bitcoin locked up until
the HTLC succeeds/fails could be much more significant, though.

Cheers,
aj

[0]
https://github.com/lightningnetwork/lightning-rfc/blob/master/07-routing-gossip.md#the-channel_update-message
[1] https://basefee.ln.rene-pickhardt.de/


```
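The minimum-amount/minimum-fee trick described above can be sketched in code; names and numbers here are illustrative, not from any implementation:

```python
def htlc_minimum_msat(min_fee_msat, ppm):
    """Advertised minimum amount implied by a desired minimum fee: the
    1 sat / 0.02% example above gives htlc_minimum_msat = 5e6 (5000 sats)."""
    return -(-min_fee_msat * 1_000_000 // ppm)   # ceiling division

def fee_ok(amount_msat, offered_fee_msat, min_fee_msat, ppm):
    """Node-side inversion: accept a below-minimum amount (e.g. 3000 sats)
    provided it pays what a minimum-sized payment would have."""
    required = max(min_fee_msat, amount_msat * ppm // 1_000_000)
    return offered_fee_msat >= required

def busy_min_msat(slots_free, capacity_free_msat):
    """The y/2/x**2 idea: raise the advertised minimum as HTLC slots run out."""
    return capacity_free_msat // (2 * slots_free ** 2) if slots_free else None

print(htlc_minimum_msat(1_000, 200))          # 5_000_000 msat, i.e. 5000 sats
print(fee_ok(3_000_000, 1_000, 1_000, 200))   # True: 3000 sats paying the 1 sat fee
print(busy_min_msat(30, 10 ** 9), busy_min_msat(2, 10 ** 9))
```

Note how `busy_min_msat` stays tiny while many slots are free (~556 sats here) but jumps to an eighth of remaining capacity when only two slots remain.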

### Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

```On Tue, Aug 10, 2021 at 06:37:48PM -0400, Antoine Riard via bitcoin-dev wrote:
> Secondly, the trim-to-dust evaluation doesn't correctly match the lifetime of
> the HTLC.

Right: but that just means it's not something you should determine once
for the HTLC, but something you should determine each time you update the
channel commitment -- if fee rates are at 1sat/vb, then a 10,000 sat HTLC
that's going to cost 100 sats to create the utxo and eventually claim it
might be worth committing to, but if fee rates suddenly rise to 75sat/vb,
then the combined cost of 7500 sat probably isn't worthwhile (and it
certainly isn't worthwhile if fees rise to above 100sat/vb).

That's independent of dust limits -- those only give you a fixed
lower limit of about 305 sats for p2wsh outputs.

Things become irrational before they become uneconomic as well: ie the
100vb is perhaps 40vb to create then 60vb to spend, so if you create
the utxo anyway then the 40vb is a sunk cost, and redeeming the 10k sats
might still be marginally worthwhile up until about 167sat/vb fee rate.

But note the logic there: it's an uneconomic output if fees rise above
167sat/vb, but it was already economically irrational for the two parties
to create it in the first place when fees were at or above 100sat/vb. If
you're trying to save every sat, dust limits aren't your problem. If
you're not trying to save every sat, then just add 305 sats to your
output so you avoid the dust limit.

(And the dust limit is only preventing you from creating outputs that
would be irrational if they only required a pubkey reveal and signature
to spend -- so a HTLC that requires revealing a script, two hashes,
two pubkeys, a hash preimage and two signatures with the same dust
threshold value for p2wsh of ~305sats would already be irrational at
about 2.1sat/vb and uneconomic at 2.75 sat/vb).

> (From a LN viewpoint, I would say we're trying to solve a price discovery
> issue, namely the cost to write on the UTXO set, in a distributed system,
> where
> any deviation from the "honest" price means you trust more your LN
> counterparty)

At these amounts you're already trusting your LN counterparty to not just
close the channel unilaterally at a high fee rate time and waste your
funds in fees, vs doing a much for efficient mutual/cooperative close.

Cheers,
aj


```
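The break-even fee rates in the message above are simple arithmetic; spelling them out:

```python
# A 10,000 sat HTLC output that costs ~40 vbytes to create and ~60 vbytes
# to spend, as in the example above.
value_sat = 10_000
create_vb, spend_vb = 40, 60

print(75 * (create_vb + spend_vb))          # 7500 sats: combined cost at 75 sat/vb
print(value_sat / (create_vb + spend_vb))   # 100.0 sat/vb: irrational to create above this
print(value_sat / spend_vb)                 # ~166.7 sat/vb: sunk-cost redemption threshold
```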

### Re: [Lightning-dev] Impact of eltoo loss of state

```On Wed, Jul 14, 2021 at 04:44:24PM +0200, Christian Decker wrote:
> Not quite sure if this issue is unique to eltoo tbh. While in LN-penalty
> loss-of-state equates to loss-of-funds, in eltoo this is reduced to
> impact only funds that are in a PTLC at the time of the loss-of-state.

Well, the idea (in my head at least) is it should be "safe" to restore
an eltoo channel from a backup even if it's out of date, so the question
is what "safe" can actually mean. LN-penalty definitely isn't safe in
that scenario.

>  2) Use the peer-storage idea, where we deposit an encrypted bundle with
>  our peers, and which we expect the peers to return. by hiding the fact
>  that we forgot some state, until the data has been exchanged we can
>  ensure that peers always return the latest snapshot of whatever we gave
>  them.

I don't think you can reliably hide that you forgot some state? If you
_did_ forget your state, you'll have forgotten their latest bundle too,
and it seems like there's at least a 50/50 chance you'd have to send
them their bundle before they sent you yours?

Sharing with other peers has costs too -- if you can't commit to an
updated state with peer A until you've sent the updated data to peers
B and C as backup, then you've got a lot more latency on each channel,
for example. And if you commit first, then you've got the problem of
what happens if you crash before the update has made it to either B or C?

But I guess what I'm saying is sure -- those are great ideas, but they
only reduce the chance that you'll not have the latest state, they don't
eliminate it.

But it seems like it can probably be reduced enough that it's fine that
you're risking the balances in live HTLCs (or perhaps HTLCs that have
been initiated since your last state backup), as long as you're at least
able to claim your channel balance from whatever more recent state your
peers may have.

Cheers,
aj


```

### Re: [Lightning-dev] [bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

```On Mon, Jul 12, 2021 at 03:07:29PM -0700, Jeremy wrote:
> Perhaps there's a more general principle -- evaluating a script should
> only return one bit of info: "bool tx_is_invalid_script_failed"; every
> other bit of information -- how much is paid in fees (cf ethereum gas
> calculations), when the tx is final, if the tx is only valid in some
> chain fork, if other txs have to have already been mined / can't have
> been mined, who loses funds and who gets funds, etc... -- should already
> be obvious from a "simple" parsing of the tx.
> I don't think we have this property as is.
> E.g. consider the transaction:
> TX:
>    locktime: None
>    sequence: 100
>    scriptpubkey: 101 CSV

That tx will never be valid, no matter the state of the chain -- even if
it's 420 blocks after the utxo it's spending: it fails because "top stack
item is greater than the transaction input sequence" rule from BIP 112.

> How will you tell it is able to be included without running the script?

You have to run the script at some point, but you don't need to run the
script to differentiate between it being valid on one chain vs valid on
some other chain.

> What's nice is the transaction in this form cannot go from invalid to valid --
> once invalid it is always invalid for a given UTXO.

Huh? Timelocks always go from invalid to valid -- they're invalid prior
to some block height (IsFinal() returns false), then valid after.

Not going from valid to invalid is valuable because it limits the cases
where you have to remove txs (and their descendents) from the mempool.

Cheers,
aj


```
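The BIP 112 rule being applied to the `101 CSV` example can be sketched as follows. This is heavily simplified -- it ignores the type flag, the 16-bit value masking, and the tx-version requirement -- and only shows why nSequence=100 can never satisfy `101 CSV`:

```python
def csv_check(stack_value, input_sequence):
    """Simplified BIP 112 check: CSV fails when the top stack item is greater
    than the input's nSequence (type flags, masking, and the tx-version rule
    are omitted for brevity)."""
    SEQUENCE_DISABLE_FLAG = 1 << 31
    if input_sequence & SEQUENCE_DISABLE_FLAG:
        return False    # relative locktime disabled: CSV can never pass
    return stack_value <= input_sequence

print(csv_check(101, 100))   # False -- "101 CSV" with nSequence=100 never passes
print(csv_check(101, 420))   # True  -- a large enough nSequence satisfies it
```

The key point from the message: the first case fails regardless of chain state, so the tx never goes from invalid to valid.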

### [Lightning-dev] Impact of eltoo loss of state

```Hello world,

Suppose you have some payments going from Alice to Bob to Carol with
eltoo channels. Bob's lightning node crashes, and he recovers from an
old backup, and Alice and Carol end up dropping newer channel states
onto the blockchain.

Suppose the timeout for the payments is a few hours away, while the
channels have specified a week long CSV delay to rectify any problems
on-chain.

Then I think that that means that:

1) Carol will reveal the point preimages on-chain via adaptor
signatures, but Bob won't be able to decode those adaptor signatures
because those signatures will need to change for each state

2) Even if Bob knows the point preimages, he won't be able to
claim the PTLC payments on-chain, for the same reason: he needs
adaptor signatures matching the latest state

3) For any payments that timeout, Carol doesn't have any particular
incentive to make it easy for Bob to claim the refund, and Bob won't
have the adaptor signatures for the latest state to do so

4) But Alice will be able to claim refunds easily. This is working how
it's meant to, at least!

I think you could fix (3) by giving Carol (who does have all the adaptor
signatures for the latest state) the ability to steal funds that are
meant to have been refunded, provided she gives Bob the option of claiming
them first.

However fixing (1) and (2) aren't really going against Alice or Carol's
interests, so maybe you can just ask: Carol loses nothing by allowing
Bob to claim funds from Alice; and Alice has already indicated that
knowing P is worth more to her than the PTLC's funds -- otherwise she
wouldn't have forwarded the PTLC to Bob in the first place.

Likewise, everyone's probably incentivised to negotiate cooperative
closes instead of going on-chain -- better privacy, less fees, and less
delay before the funds can be used elsewhere.

FWIW, I think a similar flaw exists even in the original eltoo spec --
Alice could simply decline to publish the settlement transaction until
the timeout has been reached, preventing Bob from revealing the HTLC
preimage before Alice can claim the refund.

So I think that adds up to:

a) Nodes should share state on reconnection; if you find a node that
doesn't do this, close the channel and put the node on your enemies
list. If you disagree on what the current state is, share your most
recent state, and if the other guy's state is more recent, and all
the signatures verify, update your state to match theirs.

b) Always negotiate a mutual/cooperative close if possible, to avoid
actually using the eltoo protocol on-chain.

c) If you want to allow continuing the channel after restoring an old
state from backup, set the channel state index based on the real time,
first update after a restore from backup will ensure that any old
states that your channel partner may not have told you about are
invalidated.

d) Accept that if you lose connectivity to a channel partner, you will
have to pay any PTLCs that were going to them, and won't be able
to claim the PTLCs that were funding them. Perhaps limit the total
value of inbound PTLCs for forwarding that you're willing to accept
at any one time?

Also, layered commitments seem like they make channel factories
complicated too. Nobody came up with a way to avoid layered commitments
while I wasn't watching did they?

Cheers,
aj

```
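Point (c) above -- deriving the state index from real time so a restore-from-backup can't accidentally reuse an old state number -- might look like this (a sketch; `next_state_index` is a hypothetical helper, not an implementation):

```python
import time

def next_state_index(prev_index):
    """Floor the eltoo state counter at the current unix time: normal updates
    just increment, but a node restored from a stale backup automatically
    leapfrogs any states it signed (and forgot) since the backup was taken."""
    return max(prev_index + 1, int(time.time()))

# Normal operation: monotonic increment. After restoring a backup whose
# counter is behind wall-clock time, the first update jumps to "now".
print(next_state_index(0) > 1_600_000_000)   # True
```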

### Re: [Lightning-dev] [bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

```On Thu, Jul 08, 2021 at 08:48:14AM -0700, Jeremy wrote:
> This would disallow using a relative locktime and an absolute locktime
> for the same input. I don't think I've seen a use case for that so far,
> but ruling it out seems suboptimal.
> I think you meant disallowing a relative locktime and a sequence locktime? I
> agree it is suboptimal.

No? If you overload the nSequence for a per-input absolute locktime
(well in the past for eltoo), then you can't reuse the same input's
nSequence for a per-input relative locktime (ie CSV).

Apparently I have thought of a use for it now -- cut-through of PTLC
refunds when the timeout expires well after the channel settlement delay
has passed. (You want a signature that's valid after a relative locktime
of the delay and after the absolute timeout)

> What do you make of sequence tagged keys?

I think we want sequencing restrictions to be obvious from some (simple)
combination of nlocktime/nsequence/annex so that you don't have to
evaluate scripts/signatures in order to determine if a transaction
is final.

Perhaps there's a more general principle -- evaluating a script should
only return one bit of info: "bool tx_is_invalid_script_failed"; every
other bit of information -- how much is paid in fees (cf ethereum gas
calculations), when the tx is final, if the tx is only valid in some
chain fork, if other txs have to have already been mined / can't have
been mined, who loses funds and who gets funds, etc... -- should already
be obvious from a "simple" parsing of the tx.

Cheers,
aj


```

### Re: [Lightning-dev] Eltoo Burst Mode & Continuations

```On Sat, Jul 10, 2021 at 02:07:02PM -0700, Jeremy wrote:
> Let's say you're about to hit your sequence limits on a Eltoo channel... Do
> you
> have to go on chain?
> No, you could do a continuation where for your *final* update, you sign a move
> to a new update key. E.g.,

That adds an extra tx to the uncooperative path every 2**30 states.

> Doing layers like this inherently adds a bunch of CSV layers, so it increases
> resolution time linearly.

I don't think that's correct -- you should be using the CLTV path for
updating the state, rather than the CSV path; so CSV shouldn't matter.

On Sat, Jul 10, 2021 at 04:25:06PM -0700, Jeremy wrote:
> [...] signing a eltoo "trampoline".
> essentially, begin a burst session at pk1:N under pk2, but always include a
> third branch to go to any pk1:N+1.

I think this is effectively reinventing/special casing channel
factories? That is you start an eltoo channel factory amongst group
{A,B,C,...}, then if {A,B} want an eltoo channel, that's a single update
to the factory; that channel can get updated independently until A and
B get bored and want to close their channel, which is then a single
additional update to the factory. In this case, the factory just doesn't

On Sat, Jul 10, 2021 at 05:02:35PM -0700, Jeremy wrote:
> suppose you make a Taproot tree with N copies (with different keys) of the
> state update protocol.

This feels cleverer/novel to me -- but as you point out it's actually
more costly than the trampoline/factory approach so perhaps it's not
that great.

I think what you'd do is change from a single tapscript of "OP_1
CHECKSIG <500e6+i> CLTV" to a tree of tapscripts:

"<key_ij> CHECKSIG <500e6+j+1> CLTV"

so if your state is (i*2**30 + j) you're spending using <key_ij> with a
locktime of 500e6+j, and you're allowing later spends with the above script
filled in with (i,j) or (i',0) for i' > i.

> You can take a random path through which leaf you are using which, if you're
> careful about how you construct your scripts (e.g., keeping the trees the same
> size) you can be more private w.r.t. how many state updates you performed
> throughout the protocol (i.e., you can see the low order bits in the CLTV
> clause, but the high order bits of A, B, C's relationship is not revealed if
> you traverse them in a deterministically permuted order).

Tapscript trees are shuffled randomly based on the hashes of their
scripts, so I think that's a non-issue. You could keep the trees the
same size by adding scripts "<key_ij> CHECKSIG <500e6+j+1> RETURN".

> The space downside of this approach v.s. the approach presented in the prior
> email is that the prior approach achieves 64 bits with 2 txns one of which
> should be like 150 bytes, a similar amount of data for the script leaves may
> only gets you 5 bits of added sequence space.

You'd get 2**34 states (4 added bits of added sequence space) for
about 161 extra bytes (4 merkle branches at 32B each and revealing the
pubkey for 33B), compared to about 2**60 states (2**30 states for the
second tx, with a different second tx for each of the 2**30 states of
the first tx). Haven't done the math to check the 150 byte estimate,
but it seems the right ballpark.

Cheers,
aj


```
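The state/byte accounting from the last paragraph of the message above, spelled out:

```python
# A tapscript tree with 2**4 = 16 leaves adds 4 bits of sequence space at the
# cost of revealing a 4-deep merkle path plus the leaf's pubkey.
tree_bits = 4
extra_bytes = tree_bits * 32 + 33          # merkle branches + revealed pubkey
base_states = 2 ** 30                      # states per CLTV-based update layer

print(extra_bytes)                               # 161 extra bytes
print(base_states * 2 ** tree_bits == 2 ** 34)   # tapscript-tree approach
print(base_states * base_states == 2 ** 60)      # two-tx "trampoline" approach
```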

### Re: [Lightning-dev] [bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

```On Wed, Jul 07, 2021 at 06:00:20PM -0700, Jeremy via bitcoin-dev wrote:
> This means that you're overloading the CLTV clause, which means it's
> impossible
> to use Eltoo and use a absolute lock time,

It's already impossible to simultaneously spend two inputs if one
requires a locktime specified by mediantime and the other by block
height. Having per-input locktimes would satisfy both concerns.

> 1) Define a new CSV type (e.g. define (1<<31 && 1<<30) as being dedicated to
> eltoo sequences). This has the benefit of giving a per input sequence, but the
> drawback of using a CSV bit. Because there's only 1 CSV per input, this
> technique cannot be used with a sequence tag.

This would disallow using a relative locktime and an absolute locktime
for the same input. I don't think I've seen a use case for that so far,
but ruling it out seems suboptimal.

Adding a per-input absolute locktime to the annex is what I've had in
mind. That could also be used to cheaply add a commitment to an historical
block hash (eg "the block at height 650,000 ended in cc6a") in order to
disambiguate which branch of a chain split or reorg your tx is valid for.

Cheers,
aj


```

### [Lightning-dev] Lightning dice

```Hey all,

Here's a rough design for doing something like satoshi dice (ie, gambling
on "guess the number I'm thinking of" but provably fair after the fact
[0]) on lighting, at least once PTLCs exist.

[0]
https://bitcoin.stackexchange.com/questions/4609/how-can-a-wager-with-satoshidice-be-proven-to-be-fair

The security model is that if the casino cheats, you can prove they
cheated, but turning that proof into a way of getting your just rewards
is out of scope. (You could use the proof to discourage other people
from losing their money at the casino, or perhaps use it as evidence to
get a court or the police to act in your favour)

That we don't try to cryptographically guarantee the payout means we
can send both bets over lightning, but don't need to reserve the funds
for the bet payout for the lifetime of the bet. The idea is that's much
friendlier to the lightning network (you're not holding up funds of the
intermediary routing nodes) and it requires less capital to run the casino
than other approaches.

So the first thing to do is to set up a wager, between Bob the bettor
and Carol the casino, say. Carol offers a standard bet to anyone, say
1.8% chance of winning and a 50x payout, with up to 200 satoshi stake
(so 10k satoshi max payout).

We assume the bet is implemented as Bob and Carol both picking random
numbers (b and c, respectively), and who wins being decided based on
the relationship between those numbers.
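
As a sanity check (a quick sketch, not part of the protocol): the
winning condition 0 <= (b+c)%500 < 9 used in m1 below matches the
advertised odds:

b = 123                                   # any fixed choice of b
wins = sum(1 for c in range(500) if (b + c) % 500 < 9)
assert wins == 9                          # 9 winning residues out of 500
assert wins / 500 == 0.018                # the advertised 1.8% chance
assert 50 * 200 == 10_000                 # 50x payout on the 200 sat max stake
assert round(1 - 50 * 0.018, 2) == 0.10   # ~10% house edge in expectation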

We start off with two messages:

m1: "C owes B ${amount}, provided values b and c are given where
0 <= (b+c)%500 < 9 and b*G = ${Pb} and c*G = ${Pc}"

m2: "C has paid B ${amount} for the ${b} ${c} bet"

The first message, if signed by C, and accompanied by consistent values
for b and c, serves as proof that Bob took the bet and won. The second
message, if signed by B, serves as proof that Carol didn't cheat Bob.

So the idea then is that Bob should get a signature for the first message
as soon as he pays the lightning invoice for the bet, and Carol should
get a signature for the latter, as soon as she's gotten the payout
after winning.

PTLCs make this possible, because when verifying a Schnorr signature,
you want:

s*G = R + H(R,P,m)*P

but if you provide (R,P,m) initially, then you can calculate the right
hand side of the equation as the point, and then use a PTLC on that
point to pay for its preimage "s", at which point you have both s,R
which is the signature you were hoping for.

But you want to be even cleverer than this -- because as soon as Bob pays
Carol, Bob needs to not only have the signature but also have Carol's
"c". He can't have "c" before he pays, because that would allow him to
cheat (he could choose to bet only when the value of c guarantees he
wins). We can do that by making it an adaptor signature conditional on
c. That is, provide R,(s-c) as the adaptor signature instead of R,s.
Bob can verify "s-c" is correct, by verifying:

(s-c)*G = R + H(R,P,m)*P - C
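
To make the equations concrete, here's a quick sketch in Python (toy
secp256k1 arithmetic with made-up secrets and an illustrative challenge
hash -- not BIP340-compatible, just the group algebra):

import hashlib

FP = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % FP == 0: return None
    l = (3*a[0]*a[0] * pow(2*a[1], -1, FP) if a == b
         else (b[1]-a[1]) * pow(b[0]-a[0], -1, FP)) % FP
    x = (l*l - a[0] - b[0]) % FP
    return (x, (l*(a[0]-x) - a[1]) % FP)

def ec_mul(k, pt=G):
    r = None
    while k:
        if k & 1: r = ec_add(r, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return r

def ec_neg(pt): return (pt[0], (-pt[1]) % FP)

def chal(R, P, m):  # H(R,P,m) reduced to a scalar
    data = R[0].to_bytes(32, 'big') + P[0].to_bytes(32, 'big') + m
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % N

# Carol's signing key, nonce, and her secret bet value c
carol, r, c = 0xC0FFEE, 0x5EED, 0xD1CE
C, R, Pc = ec_mul(carol), ec_mul(r), ec_mul(c)
m1 = b'C owes B ...'

e = chal(R, C, m1)
s1 = (r + e*carol) % N             # the ordinary Schnorr s
adaptor = (s1 - c) % N             # Carol sends R, (s1 - c), Pc

# Bob's check: (s1-c)*G = R + H(R,C,m1)*C - Pc
assert ec_mul(adaptor) == ec_add(ec_add(R, ec_mul(e, C)), ec_neg(Pc))

# paying the PTLC for Pc reveals c; Bob completes the signature
s = (adaptor + c) % N
assert ec_mul(s) == ec_add(R, ec_mul(e, C))   # (R, s) now verifies m1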

So the protocol becomes:

1 -- Setup)
Bob has a pubkey B; picks random number b, calculates Pb = b*G.
Sends bet, B, Pb to Carol.

Carol decides she wants to accept the bet.
Carol picks c, calculates Pc = c*G.
Carol calculates m1(amount=50*bet, C, B, Pb, Pc), and generates a
signature R1,s1 for it.
Carol sends Pc,R1,(s1-c) to Bob, and a PTLC invoice for (bet,Pc)

Bob checks the adaptor signature -- (s1-c)*G = R1 + H(C,R1,m1)*C - Pc

2 -- Bet)
Bob pays the invoice, receiving "c".
Bob checks if (b+c)%500 < 9, and if it isn't stops, having lost the
bet.
Bob calculates m2(amount=50*bet, b, c) and produces a signature for
it, namely R2,s2.
Bob calculates S2=s2*G.
Bob sends b, R2 to Carol, and a PTLC invoice for (50*bet, S2)

3 -- Payout)
Carol checks b,c complies with the bet parameters.
Carol checks the signature -- S2 = R2 + H(R2,B,m2)*B
Carol pays the invoice, receiving s2

I think it's pretty straightforward to see how this meets the goals:
as soon as Bob puts up the bet money, he can prove to anyone whether or
not he won the bet; and as soon as Carol pays, she has proof that she
paid.

Note that Bob could abort the protocol with a winning bet before
requesting the payout from Carol -- he already has enough info to prove
he's won and claim Carol isn't paying him out at this point.

One way of dealing with this is to vet Bob's claim by sending b,R2 and a
PTLC invoice of (50*bet,S2) to Carol with yourself as the recipient -- you
can construct all that info from Bob's claim that Carol is cheating. If
Carol isn't cheating, she won't be able to tell you're not Bob and
will try paying the PTLC; at which point you know Carol's not cheating.
This protocol doesn't work without better spam defenses in lightning --
PTLC payments have to be serialised or Carol risks sending the payout
to Bob multiple times, and if many people want to verify Carol is(n't)
cheating, they can be delayed by just one verifier forcing Carol to wait
for the PTLC timeout to be reached.

Another way ```

### Re: [Lightning-dev] A proposal for up-front payments.

```On Mon, Feb 24, 2020 at 01:29:36PM +1030, Rusty Russell wrote:
> Anthony Towns  writes:
> > On Fri, Feb 21, 2020 at 12:35:20PM +1030, Rusty Russell wrote:
> >> And if there is a grace period, I can just gum up the network with lots
> >> of slow-but-not-slow-enough HTLCs.
> > Well, it reduces the "gum up the network for  blocks" to "gum
> > up the network for  seconds", which seems like a pretty
> > big win. I think if you had 20 hops each with a 1 minute grace period,
> > and each channel had a max_accepted_htlcs of 30, you'd need 25 HTLCs per
> > second to block 1000 channels (so 2.7% of the 36k channels 1ml reports),
> > so at the very least, successfully performing this attack would be
> > demonstrating lightning's solved bitcoin's transactions-per-second
> > limitation?
> But the comparison here is not with the current state, but with the
> "best previous proposal we have", which is:
>
> 1. Charge an up-front fee for accepting any HTLC.
> 2. Will hang-up after grace period unless you either prove a channel
>close, or gain another grace period by decrypting onion.

In general I don't really like comparing ideas that are still in
brainstorming mode; it's never clear whether there are unavoidable
pitfalls in one or the other that won't become clear until they're
actually implemented...

Specifically, I'm not a fan of either channel closes or peeling the onion
-- the former causes problems if you're trying to route across sidechains
or have lightning as a third layer above channel factories or similar,
and I'm not convinced even within Bitcoin "proving a channel close"
is that meaningful, and passing around decrypted onions seems like it
opens up privacy attacks.

Aside from those philosophical complaints, seems to me the simplest
attack would be:

* route 1000s of HTLCs from your node A1 to your node A2 via different,
long paths, using up the total channel capacity of your A1/A2 nodes,
with long timeouts
* have A2 offer up a transaction claiming that was the channel
close to A3; make it a real thing if necessary, but it's probably
fake-able
* then leave the HTLCs open until they time out, using up capacity
from all the nodes in your 1000s of routes. For every satoshi of
yours that's tied up, you should be able to tie up 10-20sat of other
people's funds

That increases the cost of the attack by one on-chain transaction per
timeout period, and limits the attack surface by how many transactions
you can get started/completed within whatever the grace period is, but
it doesn't seem a lot better than what we have today, unless onchain
fees go up a lot.

(If the up-front fee is constant, then A1 paid a fee, and A2 collected a
fee so it's a net wash; if it's not constant then you've got a lot of
hassle making it work with any privacy I think)

> >   A->B: here's a HTLC, locked in
> >   B->C: HTLC proposal
> >   C->B: sure: updated commitment with HTLC locked in
> >   B->C: great, corresponding updated commitment, plus revocation
> >   C->B: revocation
> Interesting; this adds a trip, but not in latency (since C can still
> count on the HTLC being locked in at step 3).
> I don't see how it helps B though?  It still ends up paying A, and C
> doesn't pay anything?

The updated commitment has C paying B onchain; if B doesn't receive that
by the time the grace period's about over, B can cancel the HTLC with A,
and then there's statemachine complexity for B to cancel it with C if
C comes alive again a little later.

> It forces a liveness check of C, but TBH I dread rewriting the state
> machine for this when we can just ping like we do now.

I'd be surprised if making musig work doesn't require a dread rewrite
of the state machine as well, and then there's PTLCs and eltoo...

> >> There's an old proposal to fast-fail HTLCs: Bob sends an new message "I
> >> would fail this HTLC once it's committed, here's the error"
> > Yeah, you could do "B->C: proposal, C->B: no way!" instead of "sure" to
> > fast fail the above too.
> > And I think something like that's necessary (at least with my view of how
> > this "keep the HTLC open" payment would work), otherwise B could send C a
> > "1 microsecond grace period, rate of 3e11 msat/minute, HTLC for 100 sat,
> > timeout of 2016 blocks" and if C couldn't reject it immediately would
> > owe B 50c per millisecond it took to cancel.
> Well, surely grace period (and penalty rate) are either fixed in the
> protocol or negotiated up-front, not per-HTLC.

I think the "keep open rate" should depend on how many nodes have
already been in the route (the more hops it's gone through, the more
funds/channels you're t```

### Re: [Lightning-dev] A proposal for up-front payments.

```On Fri, Feb 21, 2020 at 12:35:20PM +1030, Rusty Russell wrote:
> > I think the way it would end up working
> > is that the further the route extends, the greater the payments are, so:
> >   A -> B   : B sends A 1msat per minute
> >   A -> B -> C : C sends B 2msat per minute, B forwards 1msat/min to A
> >   A -> B -> C -> D : D sends C 3 msat, etc
> >   A -> B -> C -> D -> E : E sends D 4 msat, etc
> > so each node is receiving +1 msat/minute, except for the last one, who's
> > paying n msat/minute, where n is the number of hops to have gotten up to
> > the last one. There's the obvious privacy issue there, with fairly
> > obvious ways to fudge around it, I think.
> Yes, it needs to scale with distance to work at all.  However, it has
> the same problems with other upfront schemes: how does E know to send
> 4msat per minute?

D tells it "if you want this HTLC, you'll need to pay 4msat/minute after
the grace period of 65 seconds". Which also means A as the originator can
also choose whatever fees they like. The only consequence of choosing too
high a fee is that it's more likely one of the intermediate nodes will
say "screw that!" and abort the HTLC before it gets to the destination.

> > I think it might make sense for the payments to have a grace period --
> > ie, "if you keep this payment open longer than 20 seconds, you have to
> > start paying me x msat/minute, but if it fulfills or cancels before
> > then, it's all good".
> But whatever the grace period, I can just rely on knowing that B is in
> Australia (with a 1 second HTLC commit time) to make that node bleed
> satoshis.  I can send A->B->C, and have C fail the htlc after 19
> seconds for free.  But B has to send 1msat to A.  B can't blame A or C,
> since this attack could come from further away, too.

So A gives B a grace period of 35 seconds, B deducts 5 seconds
processing time and 10 seconds for latency, so gives C a grace period of
20 seconds; C rejects after 19 seconds, and B still has 15 seconds to
notify A before he has to start paying fees. Same setup as decreasing
timelocks when forwarding HTLCs.
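
In other words, the grace period shrinks at each hop the same way
timelocks do. A sketch (the 5s/10s deductions are just the numbers from
this example):

def forward_grace(incoming, processing=5, latency=10):
    # deduct our handling budget before offering the HTLC downstream
    return incoming - processing - latency

assert forward_grace(35) == 20   # A gives B 35s; B offers C 20s
# if C fails at t=19s and the failure takes ~1s to propagate back,
# B hears about it at t=20s and still has 35 - 20 = 15s to notify A
assert 35 - 20 == 15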

> And if there is a grace period, I can just gum up the network with lots
> of slow-but-not-slow-enough HTLCs.

Well, it reduces the "gum up the network for  blocks" to "gum
up the network for  seconds", which seems like a pretty
big win. I think if you had 20 hops each with a 1 minute grace period,
and each channel had a max_accepted_htlcs of 30, you'd need 25 HTLCs per
second to block 1000 channels (so 2.7% of the 36k channels 1ml reports),
so at the very least, successfully performing this attack would be
demonstrating lightning's solved bitcoin's transactions-per-second
limitation?
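
The arithmetic behind those numbers, as a quick sketch:

hops = 20              # channels each HTLC passes through
grace = 60             # seconds of free holding per hop
max_htlcs = 30         # max_accepted_htlcs per channel
channels = 1000        # channels the attacker wants to keep full

slots = channels * max_htlcs          # 30_000 HTLC slots to occupy
# each HTLC fills one slot on each of its 20 channels for 60 seconds
rate = slots / (hops * grace)
assert rate == 25.0                   # HTLCs per second to sustain the attack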

I think you could do better by having the acceptable grace period be
dynamic: both (a) requiring a shorter grace period the more funds a HTLC
locks up, which stops a single HTLC from gumming up the channel, and (b)
requiring a shorter grace period the more active HTLCs you have (or, the
more active HTLCs you have that are in the grace period, perhaps). That
way if the network is loaded, you're prioritising more efficient routes
(or at least ones that are willing to pay their way), and if it's under
attack, you're dynamically increasing the resources needed to maintain
the attack.

Anyway, that's my hot take; not claiming it's a perfect solution or
final answer, rather that this still seems worth brainstorming out.

My feeling is that this might interact nicely with the sender-initiated
upfront fee. Like, you could replace a grace period of 30 seconds at
2msat/minute by always charging 2msat/minute but doing a forward payment
of 1msat. But at this point I can't keep it all in my head at once to
figure out something that really makes sense.

> > Maybe this also implies a different protocol for HTLC forwarding,
> > something like:
> >   1. A sends the HTLC onion packet to B
> >   2. B decrypts it, makes sure it makes sense
> >   3. B sends a half-signed updated channel state back to A
> >   4. A accepts it, and forwards the other half-signed channel update to B
> > so that at any point before (4) Alice can say "this is taking too long,
> > I'll start losing money" and safely abort the HTLC she was forwarding to
> > Bob to avoid paying fees; while only after (4) can she start the time on
> > expecting Bob to start paying fees that she'll forward back. That means
> > 1.5 round-trips before Bob can really forward the HTLC on to Carol;
> > but maybe it's parallelisable, so Bob/Carol could start at (1) as soon
> > as Alice/Bob has finished (2).
> We added a ping-before-commit[1] to avoid the case where B has disconnected
> and we don't know yet; we have to assume an HTLC is stuck once we send
> commitment_signed.  This would be a formalization of that, but I don't
> think it's any better?

I don't think it's any better as things stand, but with the "B pays A
holding fees" I think it becomes necessary. If you've got a route
A->B->C then from B's perspective I think it ```

### Re: [Lightning-dev] A proposal for up-front payments.

```On Thu, Feb 20, 2020 at 03:42:39AM +, ZmnSCPxj wrote:
> A thought that arises here is, what happens if I have forwarded a payment,
> then the outgoing channel is dropped onchain and that peer disconnects from
> me?
>
> Since the onchain HTLC might have a timelock of, say, a few hundred blocks
> from now, the outgoing peer can claim it up until the timelock.
> If the peer does not claim it, I cannot claim it in my incoming as well.
> I also cannot safely fail my incoming, as the outgoing peer can still claim
> it until the timelock expires.

Suppose the channel state looks like:

Bob's balance:   $150
Carol's balance: $500
Bob to Carol: $50, hash X, timelock +2016 blocks

The pre-signed close transaction will have already deducted maybe $1 in
fees, say 50c from each balance.

At 5% pa, that's $50*0.05*2/52, so about 10 cents worth of "holding"
fees, so that seems like it's worth just committing to up-front, ie:

Bob's balance:   $149.60 (-.50+.10)
Carol's balance: $499.40 (-.50-.10)
Bob to Carol: $50, hash X, timelock +2016 blocks
Fees:  $1
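
The holding-fee arithmetic, spelled out (assuming one block per 10
minutes):

amount = 50.00                  # HTLC value, dollars
rate_pa = 0.05                  # 5% per annum
weeks = 2016 / (6 * 24 * 7)     # 2016 blocks ~= 2 weeks
fee = amount * rate_pa * weeks / 52
assert round(fee, 2) == 0.10    # ~10 cents committed up-front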

And that seems necessary anyway: if the channel does drop to the chain,
then the HTLC can't be cancelled, so if it never confirms, Bob will have
had to pay, say, 9.5c to Alice waiting for the timeout, and can then
immediately cancel the HTLC with Alice allowing it to finish unwinding.

So I think the idea would be not to accept a (rate, amount, timelock)
tuple for an incoming HTLC unless the rate*amount*timelock product
is substantially less than what you're putting towards the blockchain
fees anyway, as otherwise you've got bad incentives for the other guy to
drop to the chain.

Note the rate increases with number of hops, so if it's 1% pa per hop,
the 11th peer will be emitting 10% pa. I think that's probably okay,
because BTC's deflationary nature probably means you don't need to earn
much interest on it, and you can naturally choose the rate dynamically
based on how many HTLCs you currently have open and how much of your
channel funds are being used up by the HTLC?

Also, you'd presumably update your channel state every hundred blocks,
reducing the 10c by half a cent or so each time, so your risk reduces
over time. Maybe there could be some way of bumping the timelock across
a HTLC path so that the risk is capped, but if the HTLC is still being
paid for it doesn't have to be cancelled?

Cheers,
aj


```

### Re: [Lightning-dev] A proposal for up-front payments.

```On Tue, Feb 18, 2020 at 10:23:29AM +0100, Joost Jager wrote:
> A different way of mitigating this is to reverse the direction in which the
> bond is paid. So instead of paying to offer an htlc, nodes need to pay to
> receive an htlc. This sounds counterintuitive, but for the described jamming
> attack there is also an attacker node at the end of the route. The attacker
> still pays.

I think this makes a lot of sense. I think the way it would end up working
is that the further the route extends, the greater the payments are, so:

A -> B   : B sends A 1msat per minute
A -> B -> C : C sends B 2msat per minute, B forwards 1msat/min to A
A -> B -> C -> D : D sends C 3 msat, etc
A -> B -> C -> D -> E : E sends D 4 msat, etc

so each node is receiving +1 msat/minute, except for the last one, who's
paying n msat/minute, where n is the number of hops to have gotten up to
the last one. There's the obvious privacy issue there, with fairly
obvious ways to fudge around it, I think.
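
A sketch of the resulting net flows per node under this scheme:

def net_flows(route):
    # on hop i, the downstream node pays (i+1) msat/min upstream
    flows = {node: 0 for node in route}
    for i in range(len(route) - 1):
        flows[route[i + 1]] -= i + 1
        flows[route[i]] += i + 1
    return flows

assert net_flows("AB") == {'A': 1, 'B': -1}
assert net_flows("ABCDE") == {'A': 1, 'B': 1, 'C': 1, 'D': 1, 'E': -4}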

But that's rational, because that last node can either (a) collect the
payment, covering their cost; or (b) forward the payment, at which point
they'll start collecting funds rather than paying them; or (c) cancel
the payment releasing all the locked up funds all the way back.

I think it might make sense for the payments to have a grace period --
ie, "if you keep this payment open longer than 20 seconds, you have to
start paying me x msat/minute, but if it fulfills or cancels before
then, it's all good".

I'm not sure if there needs to be any enforcement for this beyond "this
peer isn't obeying the protocol, so I'm going to close the channel"; not
even sure it's something that needs to be negotiated as part of payment
routing -- it could just be something each peer does for HTLCs on their
channels? If that can be made to work, it doesn't need much crypto or
bitcoin consensus changes, or even much deployment coordination, all of
which would be awesome.

I think at $10k/BTC then 1msat is about the fair price for locking up $5
worth of BTC (so 50k sat) for 1 minute at a 1% pa interest rate, fwiw.
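
Checking that figure (a sketch; 1% pa on 50k sat for one minute):

locked_sat = 5 / 10_000 * 100_000_000   # $5 at $10k/BTC = 50_000 sat
assert locked_sat == 50_000
msat_per_min = locked_sat * 1000 * 0.01 / (365 * 24 * 60)
assert 0.9 < msat_per_min < 1.0         # ~0.95 msat/minute, call it 1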

Maybe this opens up some sort of an attack where a peer lies about the
time to make the "per minute" go faster, but if msats-per-minute is the
units, not sure that really matters.

Maybe this also implies a different protocol for HTLC forwarding,
something like:

1. A sends the HTLC onion packet to B
2. B decrypts it, makes sure it makes sense
3. B sends a half-signed updated channel state back to A
4. A accepts it, and forwards the other half-signed channel update to B

so that at any point before (4) Alice can say "this is taking too long,
I'll start losing money" and safely abort the HTLC she was forwarding to
Bob to avoid paying fees; while only after (4) can she start the time on
expecting Bob to start paying fees that she'll forward back. That means
1.5 round-trips before Bob can really forward the HTLC on to Carol;
but maybe it's parallelisable, so Bob/Carol could start at (1) as soon
as Alice/Bob has finished (2).

Cheers
aj


```

### [Lightning-dev] Layered commitments with eltoo

```Hi all,

At present, BOLT-3 commitment transactions provide a two-layer
pay-to-self path, so that you can reduce the three options:

1) pay-to-them due to revoked commitment
2) pay-to-me due to timeout (or: preimage known)
3) pay-to-them due to preimage known (or: timeout)

to just the two options:

1) pay-to-them due to revoked commitment
2) pay-to-me due to timeout (or: preimage known)

This allows the `to_self_delay` and the HTLC timeout (and hence the
`cltv_expiry_delta`) to be chosen independently.

As it stands, both the original eltoo proposal [0] and the
ANYPREVOUT-based sketch [1] don't have this property, which means that
either the `cltv_expiry_delta` needs to take the `to_self_delay` into
account, or you risk not being able to claim funds to cover payments
you forward.

[0] https://blockstream.com/eltoo.pdf
[1]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-May/001996.html

I think if we drop the commitment to the input value from
ANYPREVOUTANYSCRIPT signatures, it's possible to come up with a scheme
that preserves the other benefits of eltoo while also having the same
benefits BOLT-3 currently achieves. I think for eltoo it needs to be a
channel-wide "shared_delay" rather than a "to_self" delay, so I'll use
that.

Here's the setup. We have four types of transaction:

* funding transaction, posted on-chain as part of channel setup
* update transaction, posted on-chain to close the channel at a
given state
* revocable claim transaction, posted on-chain to reveal a preimage
or establish a timeout has passed
* settlement transaction, to actually claim funds

As with eltoo, if a stale update transaction is posted, it can be spent
via any subsequent update transaction with no penalty. The revocable
claim transactions have roughly the same goal as the second layer BOLT-3
transactions, that is going from:

1) spent by a later update transaction
2) pay-to-me due to timeout (or: preimage known)
3) pay-to-them due to preimage known (or: timeout)

to

1) spent by a later update transaction
2) pay-to-me due to timeout (or: preimage known)

In detail:

* Get a pubkey from each peer (A, B), and calculate P=MuSig(A,B).

* Each state update involves constructing and calculating signatures
for new update transactions, revocable claim transactions and
settlement transactions.

* The update transaction has k+2 outputs, where k is the number of open
PTLCs. Each PTLC output pays to P as the internal key, and:

IF CODESEP [i] ELSE [500e6+n+1] CLTV ENDIF DROP OP_1 CHECKSIG

as the script. i varies from 1..k; n is the state counter, starting
at 1 and counting up.

Each balance output pays to P as the internal key and:

IF CODESEP IF [balance_pubkey_n] [shared_delay] CSV ELSE OP_1 OP_0 ENDIF
ELSE OP_1 [500e6+n+1] CLTV ENDIF
DROP CHECKSIG

as the script.

The signature that allows an update tx to spend a previous update tx
is calculated using ALL|ANYPREVOUTANYSCRIPT, a locktime of 500e6+n,
with the key P, and codesep_pos=0x_.

* For each output of the update tx and each party that can spend it,
we also construct a revocable claim transaction. These are designed
to update a single output of each PTLC, and their output pays to P
as the internal key, and the script:

IF [i*P+p] CODESEP ELSE [500e6+n+1] CLTV ENDIF DROP OP_1 CHECKSIG

(swapping the position of the CODESEP opcode, and encoding both i and
p in the script -- P is the number of peers in the channel, so 2
here, and p is an identifier for each peer so either 0 or 1; i=1..k
for HTLCs, i=0 for the balances)

The signature that allows this tx to be applied to the update tx
is calculated as SINGLE|ANYPREVOUT, with the script committed and
codesep_pos=1. This signature should be made conditional for each
PTLC, either by being an adaptor signature requiring the point preimage
to be added, or by having a locktime given.

* For each revocable claim transaction, we also construct a settlement
transaction. The outputs of the settlement transactions are just

These are also done by SINGLE|ANYPREVOUT signatures, with nSequence
set to the shared_delay. There's no locktime or adaptor signatures
needed here, since they were taken care of for the revocable claim
transaction.  The signatures commit to the respective scripts, and
set codesep_pos to either 1 or 2 depending on whether a revocable
claim is being spent or not.

* The funding transaction pays to internal key P, with tapscript:

"OP_1 CHECKSIGVERIFY 500e6 CLTV"

Then: to spend from the funding transaction cooperatively, you make a
new SIGHASH_ALL signature based on the output key Q for the funding tx.

If you can't do that, you post two transactions: the latest update tx,
and another tx that includes any revocable claim tx's you can already
claim and an input to cover fees, and any change from ```

### Re: [Lightning-dev] Lightning in a Taproot future

```On Sun, Dec 15, 2019 at 03:43:07PM +, ZmnSCPxj via Lightning-dev wrote:
> For now, I am assuming the continued use of the existing Poon-Dryja update
> mechanism.
> Decker-Russell-Osuntokun requires `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT`, and
> its details seem less settled for now than taproot details.

Supporting PTLCs instead of HTLCs is a global upgrade in that you need
all nodes along your payment path to support it; moving from Poon-Dryja
to Decker-Russell-Osuntokun is only relevant to individual peers. So I
think it makes sense to do PTLCs first if the required features aren't
both enabled at the same time.

> Poon-Dryja with Schnorr
> ---

I think MuSig between the two peers is always superior to a NUMS point
for the taproot internal key; you definitely want to calculate a point
rather than use a constant, or you're giving away that it's lightning,
and if you're calculating you might as well calculate something that can
be used for a cooperative key path spend if you ever want to.

> A potential issue with MuSig is the increased number of communication rounds
> needed to generate signatures.

I think you can reduce this via an alternative script path. In
particular, if you want a script that the other guy can spend if they
reveal the discrete log of point X, with musig you do:

P = H(H(A,B),1)*A + H(H(A,B),2)*B
[exchange H(RA),H(RB),RA,RB]

[send X]

sb = rb + H(RA+RB+X,P,m)*H(H(A,B),2)*b

[wait for sb]

sa = ra + H(RA+RB+X,P,m)*H(H(A,B),1)*a

[store RA+RB+X, sa+sb, supply sa, watch for sig]

sig = (RA+RB+X, sa+sb+x)

So the 1.5 round trips are "I want to do a PTLC for X", "okay here's
sb", "great, here's sa".

But with taproot you can have a script path as well, so you could have a
script:

A CHECKSIGVERIFY B CHECKSIG

and supply a partial signature:

R+X,s,X where s = r + H(R+X,A,m)*a

to allow them to satisfy "A CHECKSIGVERIFY" if they know the discrete
log of X, and of course they can sign with B at any time. This is only
half a round trip, and can be done at the same time as sending the "I
want to do a PTLC for X" message to setup the (ultimately cheaper) MuSig
spend. It's an extra signature on the sender's side and an extra verification
on the receiver's side, but I think it works out fine.
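
The half-round-trip script-path spend can be sketched the same way (toy
secp256k1 arithmetic with made-up secrets and an illustrative challenge
hash -- not BIP340-compatible, just the group algebra):

import hashlib

FP = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % FP == 0: return None
    l = (3*a[0]*a[0] * pow(2*a[1], -1, FP) if a == b
         else (b[1]-a[1]) * pow(b[0]-a[0], -1, FP)) % FP
    x = (l*l - a[0] - b[0]) % FP
    return (x, (l*(a[0]-x) - a[1]) % FP)

def ec_mul(k, pt=G):
    r = None
    while k:
        if k & 1: r = ec_add(r, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return r

def ec_neg(pt): return (pt[0], (-pt[1]) % FP)

def chal(R, P, m):  # H(R,P,m) reduced to a scalar
    data = R[0].to_bytes(32, 'big') + P[0].to_bytes(32, 'big') + m
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % N

# sender A's key, nonce, and the PTLC point X = x*G
a, r, x = 0xA11CE, 0x0B0B, 0x5EC2E7
A, R, X = ec_mul(a), ec_mul(r), ec_mul(x)
m = b'script-path spend'

Rx = ec_add(R, X)
e = chal(Rx, A, m)
s = (r + e*a) % N                  # partial signature: (R+X, s, X)

# receiver's check: s*G = (R+X) + H(R+X,A,m)*A - X
assert ec_mul(s) == ec_add(ec_add(Rx, ec_mul(e, A)), ec_neg(X))

# knowing the discrete log x, the receiver completes it:
assert ec_mul((s + x) % N) == ec_add(Rx, ec_mul(e, A))   # (R+X, s+x) verifies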

> Pointlocked Timelocked Contracts
>
> First, I will discuss how to create a certain kind of PTLCs, which I call
> "purely scriptless" PTLCs.
> In particular, I would like to point out that what we *actually* use in current
> Poon-Dryja Lightning Network channels is *revocable* HTLCs, thus we need to
> have *revocable* PTLCs to replace them.
> * First, we must have a sender A, who is buying a secret scalar, and knows
> the point equivalent to that scalar.
> * Second, we have a receiver B, who knows this secret scalar (or can somehow
> learn this secret scalar).
> * A and B agree on the specifications of the PTLC: the point, the future
> absolute timelock, the value.
> * A creates (but *does not* sign or broadcast) a transaction that pays to a
> MuSig of A and B and shares the txid and output number with the relevant
> MuSig output.
> * A and B create a backout transaction.
>   * This backout has an `nLockTime` equal to the agreed absolute timelock.
>   * It spends the above MuSig output (this input must enable `nLockTime`,
> e.g. by setting `nSequence` to `0xFFFE`).
>   * It creates an output that is solely controlled by A.
> * A and B perform a MuSig ritual to sign the backout transaction.
> * A now signs and broadcast the first transaction, the one that has an output
> that represents the PTLC.
> * A and B wait for the above transaction to confirm deeply.
>   This completes the setup phase for the PTLC.
> * After this point, if the agreed-upon locktime is reached, A broadcasts the
> backout transaction and aborts the ritual.
> * A and B create a claim transaction.
>   * This has an `nLockTime` of 0, or a present or past blockheight, or
> disabled `nLockTime`.
>   * This spends the above MuSig output.
>   * This creates an output that is solely controlled by B.
> * A and B generate an adaptor signature for the claim transaction, which
> reveals the agreed scalar.
>   * This is almost entirely a MuSig ritual, except at `s` exchange, B
> provides `t + r + h(R | MuSig(A,B) | m) * MuSigTweak(A, B, B) * b` first,
> then demands `r + h(R | MuSig(A,B) | m) * MuSigTweak(A, B, A) * a` from A,
> then reveals `r + h(R | MuSig(A,B) | m) * MuSigTweak(A, B, B) * b` (or the
> completed signature, by publishing onchain), revealing the secret scalar `t`
> to A.
> * A is able to learn the secret scalar from the above adaptor signature
> followed by the full signature, completing the ritual.

(I think it makes more sense to provide "r + H(R+T, P, m)*b" instead of
"r+t + H(R,P,m)*b" -- you might not know "t" at the point you need to
start the signature exchange)

I think the setup can be similar to BOLT-3:

Funding ```

### Re: [Lightning-dev] eltoo towers and implications for settlement key derivation

```On Tue, Nov 26, 2019 at 03:41:14PM -0800, Conner Fromknecht wrote:
> I recently revisited the eltoo paper and noticed some things related
> watchtowers that might affect channel construction.
> In order to spend, however, the tower must also produce a witness
> script which when hashed matches the witness program of the input. To
> ensure settlement txns can only spend from exactly one update txn,
> each update txn uses unique keys for the settlement clause, meaning
> that each state has a _unique_ witness program.

I don't believe that's necessary with the ANYPREVOUT design, see

https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-May/001996.html

The design I'm thinking of uses a common taproot internal key
P=MuSig(A,B) for update transactions. The tapscript paths are
(with the chaperone sigs dropped):

Update n: [nLockTime = 500e6+n]
script: OP_1 CHECKSIGVERIFY [500e6+n+1] CLTV
witness: [ANYPREVOUTANYSCRIPT sig]

Settlement n: [nSequence = delay; nLockTime=500e6+n+1]
witness: [ANYPREVOUT sig]

(This relies on having the two variants of ANYPREVOUT, one of which
commits to the state number via committing to the [500e6+n+1] value in
the update tx's script, so that you don't need unique keys to ensure
settlement tx n can't spend settlement tx n+k)

With this you can tell which update was posted by subtracting 500e6 from
the nLocktime, and use that to calculate the tapscript the update tx used,
and the internal key is constant.
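
A sketch of the locktime bookkeeping (the CLTV rule is that the
spender's nLockTime must be at least the committed value):

LOCKTIME_EPOCH = 500_000_000          # update n carries nLockTime = 500e6+n

def posted_state(nlocktime):
    return nlocktime - LOCKTIME_EPOCH

def cltv_ok(script_cltv, spender_locktime):
    return spender_locktime >= script_cltv

n = 7
cltv = LOCKTIME_EPOCH + n + 1         # committed in update n's script
assert cltv_ok(cltv, LOCKTIME_EPOCH + n + 1)   # update n+1 can spend update n
assert not cltv_ok(cltv, LOCKTIME_EPOCH + n)   # update n can't respend itself
assert posted_state(LOCKTIME_EPOCH + n) == n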

The watchtower only needs to post the update tx -- as long as the latest
update is posted, the only tx that can spend it is the correct settlement,
so you can post that whenever you're back online, even if that's weeks
or months later, and likewise for actually claiming your funds from the
settlement tx's outputs.

Cheers,
aj


```

### Re: [Lightning-dev] A proposal for up-front payments.

```On Fri, Nov 08, 2019 at 01:08:04PM +1030, Rusty Russell wrote:
> Anthony Towns  writes:
> [ Snip summary, which is correct ]

Huzzah!

This correlates all the hops in a payment when the route reaches its end
(due to the final preimage getting propagated back for everyone to justify
the funds they claim). Maybe solvable by converting from hashes to ECC
as the trapdoor function?

The refund amount propagating back also reveals the path, probably.
Could that be obfuscated by somehow paying each intermediate node
both as the funds go out and come back, so the refund decreases on the
way back?

Oh, can we make the amounts work like the onion, where it stays constant?
So:

Alice wants to pay Dave via Bob, Carol. Bob gets 700 msat, Carol gets
400 msat, Dave gets 300 msat, and Alice gets 100 msat refunded.

Success:
Alice forwards 1500 msat to Bob   (-1500, +1500, 0, 0)
Bob forwards 1500 msat to Carol   (-1500, 0, +1500, 0)
Carol forwards 1500 msat to Dave  (-1500, 0, 0, +1500)
Dave refunds 1200 msat to Carol   (-1500, 0, +1200, +300)
Carol refunds 800 msat to Bob (-1500, +800, +400, +300)
Bob refunds 100 msat to Alice (-1400, +700, +400, +300)

Clean routing failure at Carol/Dave:
Alice forwards 1500 msat to Bob   (-1500, +1500, 0, 0)
Bob forwards 1500 msat to Carol   (-1500, 0, +1500, 0)
Carol says Dave's not talking
Carol refunds 1100 msat to Bob(-1500, +1100, +400, 0)
Bob refunds 400 msat to Alice (-1100, +700, +400, 0)

I think that breaks the correlation pretty well, so you just need a
decent way of obscuring path length?

In the uncooperative routing failure case, I wonder if using an ECC
trapdoor and perhaps scriptless scripts, you could make it so Carol
doesn't even get an updated state without revealing the preimage...

Cheers,
aj


```
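A quick sanity check of the constant-amount flow above; a hypothetical sketch where the tuples track each party's net balance change in msat, as in the tables in the post:

```python
def transfer(deltas, payer, payee, amt):
    """Apply one msat transfer to the running (Alice, Bob, Carol, Dave) deltas."""
    d = list(deltas)
    d[payer] -= amt
    d[payee] += amt
    return tuple(d)

A, B, C, D = range(4)
d = (0, 0, 0, 0)
d = transfer(d, A, B, 1500)  # Alice forwards 1500 msat to Bob
d = transfer(d, B, C, 1500)  # Bob forwards 1500 msat to Carol
d = transfer(d, C, D, 1500)  # Carol forwards 1500 msat to Dave
d = transfer(d, D, C, 1200)  # Dave refunds 1200, keeping his 300 msat
d = transfer(d, C, B, 800)   # Carol refunds 800, keeping 400 msat
d = transfer(d, B, A, 100)   # Bob refunds 100, keeping 700 msat
print(d)  # (-1400, 700, 400, 300), matching the success table above
```

The point of the scheme is visible in the first three lines: every forward moves the same 1500 msat, so a single hop's amounts don't reveal its position in the path.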

### Re: [Lightning-dev] A proposal for up-front payments.

```On Thu, Nov 07, 2019 at 02:56:51PM +1030, Rusty Russell wrote:
> > What you wrote to Zmn says "Rusty decrypts the onion, reads the prepay
> > field: it says 14, L." but Alice doesn't know anything other than
> > Z so can't put L in the onion?
> Alice created the onion.  Alice knows all the preimages, since she
> created the chain A..Z.

In your reply to Zmn, it was Rusty (Bob) preparing the nonce and creating
the chain ... -- so I was lost as to what you were proposing...

Here's what I now think you're saying, which I think mostly hangs
together:

Alice sends a HTLC with hash X and value V to Dave via Bob then Carol

Alice generates a nonce A, and calculates H^25(A) = Z.

Alice creates an onion, and sends the HTLC to Bob, revealing Z and
6,H^19(A) to Bob, along with 2500 msat (25 for the hashing ops between
A and Z, and *100 for round numbers). Bob calculates "6" is a
fair price.

Bob checks H^6(H^19(A))=Z. If not, Bob refunds the 2500 msat, and fails
the HTLC immediately. Otherwise, Bob passes the onion on to Carol, with
1900 msat and H^19(A); Carol unwraps the onion revealing 15,H^4(A). Carol
calculates "15" is a fair price.

Carol checks H^15(H^4(A))=H^19(A), and fails the route if not, refunding
1900msat to Bob. Otherwise, Carol passes the onion on to Dave, with 400
msat and H^4(A). Dave unwraps the onion, revealing 2,H^2(A), so can claim
200 msat as well as the HTLC amount, etc.

After the successful route, Dave passes 2,H^2(A) and 200msat back to Carol,
who validates and continues passing things back.

If Carol instead passes, say, 3, back, then she also has to refund
300msat to avoid Bob closing the channel, which would be fine, because
Bob can just pass that back too -- Carol's the only one losing money in
that case.

If Carol wants to close the channel anyway and collect the HTLC on
chain, then Bob's situation is:

channel with Alice: +2500 msat
channel with Carol: -1900 msat , -fees , -HTLC funds

If Carol isn't cooperative, Bob only definitely knows the value he was
handed, so to keep the channel open with Alice, he has to refund 1900msat, so:

channel with Alice:  +600 msat , +HTLC funds
channel with Carol: -1900 msat , -fees , -HTLC funds

(or Bob could keep the 2500 msat at the cost of Alice closing the channel
too:

channel with Alice: +2500 msat , -fees , +HTLC funds
channel with Carol: -1900 msat , -fees , -HTLC funds
)

So Bob can either keep the channel open but be out 1300 msat because of
Carol, or gain 600 msat at the cost of closing the channel with
Alice?

As far as the "fair price" goes, the spitballed formula is "16 - X/4"
where X is the number of zero bits in some PoW-y thing. The proposal is
that the thing is SHA256(blockhash|revealedonion), which works, and (I
think) means each step is individually grindable.

I think an alternative would be to use the prepayment hashes themselves,
so you generate the nonce A as the value you'll send to Dave, then hash
it repeatedly to get H(A), H^2(A), .., Z, then check if pow() of
successive values in the chain has 60 leading zero bits, or 56 leading
zero bits, etc. If you made pow(a,b) be SHA256(a,b,shared-onion-key)
I think it'd preserve privacy, but also mean you can't meaningfully
grind unfairly cheap routing except for very short paths?

If you don't grind and just go by luck, the average number of hashes
per hop is ~15.93 (if I got my maths right), so you should be able to
estimate path length pretty accurately by dividing claimed prepaid funds
by 15.93*25msat or whatever. If everyone grinds at each level independently,
I think you'd just subtract maybe 6 hops from that, but the maths would
mostly stay the same?

Though I think you could obfuscate that pretty well by moving
some of the value from the HTLC into the prepayment -- though you'd risk
losing that extra value if the payment made it all the way to the
recipient but they declined the HTLC.

> >> Does Alice lose everything on any routing failure?
> > That was my thought yeah; it seems weird to pay upfront but expect a
> > refund on failure -- the HTLC funds are already committed upfront and
> > refunded on failure.
> AFAICT you have to overpay, since anything else is very revealing of
> path length.  Which kind of implies a refund, I think.

I guess you'd want to pay for a path length of about 20 whether the
path is actually 17, 2, 10 or 5. But a path length of 20 is just paying
for bandwidth for maybe 200kB of total traffic which at $1/GB is 2% of
1 cent, which doesn't seem that worth refunding (except for really tiny
micropayments, where paying for payment bandwidth might not be feasible
at all).

If you're buying a $2 coffee and paying 500ppm in regular fees per hop
with 5 hops, then each routing attempt increases your fees by 4%, which
seems pretty easy to ignore to me.

Cheers,
aj


```
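Two of the calculations in the post above can be checked mechanically: the hash-chain verification each hop performs, and the ~15.93 expected hashes per hop under the "16 - X/4" pricing. This is a hypothetical sketch; the depths 6/15/2, the names A and Z, and the 100 msat/hash rate follow the walkthrough:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def hash_n(v: bytes, n: int) -> bytes:
    for _ in range(n):
        v = H(v)
    return v

# Alice picks a nonce A and computes Z = H^25(A); each hop is told a depth
# k and a chain value, and checks that k hashes of it reach the value it
# was handed, charging k * 100 msat.
A = b"\x01" * 32
Z = hash_n(A, 25)

assert hash_n(hash_n(A, 19), 6) == Z              # Bob: 6 hashes, 600 msat
assert hash_n(hash_n(A, 4), 15) == hash_n(A, 19)  # Carol: 15 hashes, 1500 msat
assert hash_n(hash_n(A, 2), 2) == hash_n(A, 4)    # Dave: 2 hashes, 200 msat

# Expected price per hop under "16 - X/4", with X the number of leading
# zero bits of a uniform hash: E[floor(X/4)] = sum_{k>=1} P(X >= 4k)
# = sum_{k>=1} 2^(-4k) = 1/15, so the expectation is 16 - 1/15 ~= 15.93.
expected = 16 - sum(2.0 ** (-4 * k) for k in range(1, 40))
print(round(expected, 2))  # 15.93
```

The expectation sum is where the "~15.93 (if I got my maths right)" figure comes from, assuming the discount is floored to whole units.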

### Re: [Lightning-dev] A proposal for up-front payments.

```On Wed, Nov 06, 2019 at 10:43:23AM +1030, Rusty Russell wrote:
> >> Rusty prepares a nonce, A and hashes it 25 times = Z.
> >> ZmnSCPxj prepares the onion, but adds extra fields (see below).
> > It would have made more sense to me for Alice (Zmn) to generate
> > the nonce, hash it, and prepare the onion, so that the nonce is
> > revealed to Dave (Rusty) if/when the message ever actually reaches its
> > destination. Otherwise Rusty has to send A to Zmn already so that
> > Zmn can prepare the onion?
> The entire point is to pay *up-front*, though, to prevent spam.

Hmm, I'm not sure I see the point of paying upfront but not
unconditionally -- you already commit the funds as part of the HTLC,
and if you're refunding some of them, you kind-of have to keep them
reserved or you risk finalising the HTLC causing a failure because you
don't have enough msats spare to do the refund?

If you refund on routing failure, why wouldn't a spammer just add a fake
"Ezekiel" at the end of the route after Dave, so that the HTLCs always
fail and all the fees are returned?

> Bob/ZmnSCPxj doesn't prepare anything in the onion.  They get handed the
> last hash directly: Alice is saying "I'll pay you 50msat for each
> preimage you can give me leading to this hash".

So my example was Alice paying Dave via Bob and Carol (so Alice/Bob,
Bob/Carol, Carol/Dave being the individual channels).

What you wrote to Zmn says "Rusty decrypts the onion, reads the prepay
field: it says 14, L." but Alice doesn't know anything other than
Z so can't put L in the onion?

Are you using Hornet so that every intermediary can communicate a nonce
back to the source of the route? If not, "Rusty" generating the nonce
seems like you're informing Rusty that you're actually the origin of the
HTLC, and not just innocently forwarding it along; if so, it seems like
you have independent nonces at each step, rather than values linked
in a direct chain.

> > I'm not sure why lucky hashing should result in a discount?
> Because the PoW adds noise to the amounts, otherwise the path length is
> trivially exposed, esp in the failure case.  It's weak protection
> though.

With a linear/exponential relationship you just get "half the time it's
1 unit, 25% of the time it's 2 units, 12% of the time it's 3 units", so
I don't think that's adding much noise?

> > You've only got two nonce choices -- the initial A and the depth
> > that you tell Bob and Carol to hash to as steps in the route;
> No, the sphinx construction allows for grinding, that was my intent
> here.  The prepay hashes are independent.

Oh, because you're also xoring with the onion packet, right, I see.

> > I think you could just make the scheme be:
> >   Alice sends HTLC(k,v) + 1250 msat to Bob
> >   Bob unwraps the onion and forwards HTLC(k,v) + 500 msat to Carol
> >   Carol unwraps the onion and forwards HTLC(k,v) + 250 msat to Dave
> >   Dave redeems the HTLC, claims an extra 300 msat and refunds 200 msat to
> > Carol

The math here doesn't add up. Let's assume I meant:

Bob keeps 500 msat, forwards 750 msat
Carol keeps 250 msat, forwards 500 msat
Dave keeps 300 msat, refunds 200 msat

> >   Carol redeems the HTLC and refunds 200 msat to Bob
> >   Bob redeems the HTLC and refunds 200 msat to Alice
> >
> > If there's a failure, Alice loses the 1250 msat, and someone in the
> > path steals the funds.
> This example confuses me.

Well, that makes us even at least? :)

> So, you're charging 250msat per hop?  Why is Bob taking 750?  Does Carol
> now know Dave is the last hop?

No, Alice is choosing to pay 500, 250 and 300 msat to Bob, Carol and
Dave respectively, as part of setting up the onion, and picks those
numbers via some magic algo trading off privacy and cost.

> Does Alice lose everything on any routing failure?

That was my thought yeah; it seems weird to pay upfront but expect a
refund on failure -- the HTLC funds are already committed upfront and
refunded on failure.

> If so, that is strong incentive for Alice to reduce path-length privacy
> by keeping payments minimal, which I was really trying to avoid.

Assuming v is much larger than 1250msat, and 1250 msat is much lower than
the cost to Bob of losing the channel with Alice, I don't think that's
a problem. 1250msat pays for 125kB of bandwidth under your assumptions
I think?

> > Does that miss anything that all the hashing achieves?
> It does nothing if Carol is the one who can't route.

If Carol can't route, then ideally she just refunds all the money and
everyone's happy.

If Carol tries to steal, then she can keep 750 msat instead of 250 msat.
This doesn't give any way for Bob to prove Carol cheated on him though;
but Bob could just refund the 1250 msat and write the 750 msat off as a
loss of dealing with cheaters like Carol.

Cheers,
aj

```

### Re: [Lightning-dev] A proposal for up-front payments.

```On Tue, Nov 05, 2019 at 07:56:45PM +1030, Rusty Russell wrote:
> Sure: for simplicity I'm sending a 0-value HTLC.
> ZmnSCPxj has balance 1msat in channel with Rusty, who has 1000msat
> in the channel with YAIjbOJa.

Alice, Bob and Carol sure seem simpler than Zmn YAI and Rusty...

> Rusty prepares a nonce, A and hashes it 25 times = Z.
> ZmnSCPxj prepares the onion, but adds extra fields (see below).

It would have made more sense to me for Alice (Zmn) to generate
the nonce, hash it, and prepare the onion, so that the nonce is
revealed to Dave (Rusty) if/when the message ever actually reaches its
destination. Otherwise Rusty has to send A to Zmn already so that
Zmn can prepare the onion?

> He then
> sends the HTLC to Rusty, but also sends Z, and 25x50 msat (ie. those
> fields are in the update_add_htlc msg).  His balance with Rusty is now
> 8750msat (ie. 25x50 to Rusty).
>
> Rusty decrypts the onion, reads the prepay field: it says 14, L.
> Rusty checks: the hash of the onion & block (or something) does indeed
> have the top 8 bits clear, so the cost is in fact 16 - 8/4 == 14.  He
> then hashes L 14 times, and yes, it's Z as ZmnSCPxj said it
> should be.

I'm not sure why lucky hashing should result in a discount? You're
giving a linear discount for exponentially more luck in hashing which
also seems odd.

You've only got two nonce choices -- the initial A and the depth
that you tell Bob and Carol to hash to as steps in the route; so the
incentive there seems to be to do a large depth, so you might hash
1000 times, and figure that you'll find eight leading 0's once
in the first 256 entries, then another by the time you get up to 512,
and another by the time you get to 768, which gets you discounts on
three intermediaries. But the cost there is that your intermediaries
collectively have to do the same amount of hashing you did, so it's not
proof-of-work, because it's as hard to verify as it is to generate.

I think you could just make the scheme be:

Alice sends HTLC(k,v) + 1250 msat to Bob
Bob unwraps the onion and forwards HTLC(k,v) + 500 msat to Carol
Carol unwraps the onion and forwards HTLC(k,v) + 250 msat to Dave
Dave redeems the HTLC, claims an extra 300 msat and refunds 200 msat to Carol
Carol redeems the HTLC and refunds 200 msat to Bob
Bob redeems the HTLC and refunds 200 msat to Alice

If there's a failure, Alice loses the 1250 msat, and someone in the
path steals the funds. You could make that accountable by having Alice
also provide "Hash(r, refund=200)" to everyone, encoding r in the
onion to Dave, and then each hop reveals r and refunds 200msat to
demonstrate their honesty.

Does that miss anything that all the hashing achieves?

I think the idea here is that you're paying tiny amounts for the
bandwidth, which when it's successful does in fact pay for the bandwidth;
and when it's unsuccessful results in a channel closure, which makes it
unprofitable to cheat the system, but doesn't change the economics of
lightning much overall because channel closures can happen anytime anyway.
I think that approach makes sense.

Cheers,
aj


```
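The corrected per-hop amounts in the simpler prepay scheme can be sketched as follows (hypothetical sketch; amounts are the ones from the posts above):

```python
fees = {"bob": 500, "carol": 250, "dave": 300}  # msat kept per hop
upfront = 1250                                  # msat Alice attaches to the HTLC

forwarded = upfront
for hop, fee in fees.items():
    forwarded -= fee
    print(f"{hop} keeps {fee} msat, passes on {forwarded} msat")
# bob keeps 500 msat, passes on 750 msat
# carol keeps 250 msat, passes on 500 msat
# dave keeps 300 msat, passes on 200 msat

refund = forwarded  # the 200 msat left over flows back to Alice on success
assert refund == upfront - sum(fees.values()) == 200
```

A cheating hop can keep whatever it was handed rather than forwarding, which is the "someone in the path steals the funds" failure mode discussed above.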

### Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

```On Thu, Oct 03, 2019 at 01:08:29PM +0200, Christian Decker wrote:
> >  * anyprevout signatures make the address you're signing for less safe,
> >which may cause you to lose funds when additional coins are sent to
> >the same address; this can be avoided if handled with care (or if you
> >don't care about losing funds in the event of address reuse)
> Excellent points, I had missed the hidden nature of the opt-in via
> pubkey prefix while reading your proposal. I'm starting to like that
> option more and more. In that case we'd only ever be revealing that we
> opted into anyprevout when we're revealing the entire script anyway, at
> which point all fungibility concerns go out the window anyway.
>
> Would this scheme be extendable to opt into all sighash flags the
> outpoint would like to allow (e.g., adding opt-in for sighash_none and
> sighash_anyonecanpay as well)? That way the pubkey prefix could act as a
> mask for the sighash flags and fail verification if they don't match.

For me, the thing that distinguishes ANYPREVOUT/NOINPUT as warranting
an opt-in step is that it affects the security of potentially many
UTXOs at once; whereas all the other combinations (ALL,SINGLE,NONE
cross ALL,ANYONECANPAY) still commit to the specific UTXO being spent,
so at most you only risk somehow losing the funds from the specific UTXO
you're working with (apart from the SINGLE bug, which taproot doesn't
support anyway).

Having a meaningful prefix on the taproot scriptpubkey (ie paying to
"[SIGHASH_SINGLE][32B pubkey]") seems like it would make it a bit easier
to distinguish wallets, which taproot otherwise avoids -- "oh this address
is going to be a SIGHASH_SINGLE? probably some hacker, let's ban it".

> > I think it might be good to have a public testnet (based on Richard Myers
> > et al's signet2 work?) where we have some fake exchanges/merchants/etc
> > and scheduled reorgs, and demo every weird noinput/anyprevout case anyone
> > can think of, and just work out if we need any extra code/tagging/whatever
> > to keep those fake exchanges/merchants from losing money (and write up
> > the weird cases we've found in a wiki or a paper so people can easily
> > tell if we missed something obvious).
> That'd be great, however even that will not ensure that every possible
> corner case is handled [...]

Well, sure. I'm thinking of it more as a *necessary* step than a
*sufficient* one, though. If we can't demonstrate that we can deal with
the theoretical attacks people have dreamt up in a "laboratory" setting,
then it doesn't make much sense to deploy things in a real world setting,
does it?

I think if it turns out that we can handle every case we can think of
easily, that will be good evidence that output tagging and the like isn't
necessary; and conversely if it turns out we can't handle them easily,
it at least gives us a chance to see how output tagging (or chaperone
sigs, or whatever else) would actually work, and if they'd provide any
meaningful protection at all. At the moment the best we've got is ideas
and handwaving...

Cheers,
aj


```

### Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

```On Wed, Oct 02, 2019 at 02:03:43AM +, ZmnSCPxj via Lightning-dev wrote:
> So let me propose the more radical excision, starting with SegWit v1:
> * Remove `SIGHASH` from signatures.
> * Put `SIGHASH` on public keys.
>   OP_SETPUBKEYSIGHASH

I don't think you could reasonably do this for key path spends -- if
you included the sighash as part of the scriptpubkey explicitly, that
would lose some of the indistinguishability of taproot addresses, and be
more expensive than having the sighash be in witness data. So I think
that means sighashes would still be included in key path signatures,
which would make the behaviour a little confusingly different between
signing for key path and script path spends.

> This removes the problems with `SIGHASH_NONE` `SIGHASH_SINGLE`, as they are
> allowed only if the output specifically says they are allowed.

I don't think the problems with NONE and SINGLE are any worse than using
SIGHASH_ALL to pay to "1*G" -- someone may steal the money you send,
but that's as far as it goes. NOINPUT/ANYPREVOUT is worse in that if
you use it, someone may steal funds from other UTXOs too -- similar
to nonce-reuse. So I think having to commit to enabling NOINPUT for an
address may make sense; but I don't really see the need for doing the
same for other sighashes generally.

FWIW, one way of looking at a transaction spending UTXO "U" to address
"A" is something like:

* "script" lets you enforce conditions on the transaction when you
create "A" [0]
* "sighash" lets you enforce conditions on the transaction when
you sign the transaction
* nlocktime, nsequence, taproot annex are ways you express conditions
on the transaction

In that view, "sighash" is actually an *extremely* simple scripting
language itself (with a total of six possible scripts).

That doesn't seem like a bad design to me, fwiw.

Cheers,
aj

[0] "graftroot" lets you update those conditions for address "A" after
the fact

```
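The "six possible scripts" counted above are the three base sighash modes, each with or without ANYONECANPAY; the flag byte values below are the standard Bitcoin encodings:

```python
SIGHASH_ALL, SIGHASH_NONE, SIGHASH_SINGLE = 0x01, 0x02, 0x03
SIGHASH_ANYONECANPAY = 0x80

# Three base modes, each optionally combined with ANYONECANPAY: six total.
combos = [base | acp
          for acp in (0, SIGHASH_ANYONECANPAY)
          for base in (SIGHASH_ALL, SIGHASH_NONE, SIGHASH_SINGLE)]
print([hex(c) for c in combos])
# ['0x1', '0x2', '0x3', '0x81', '0x82', '0x83']
assert len(set(combos)) == 6
```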

### Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

```On Mon, Sep 30, 2019 at 03:23:56PM +0200, Christian Decker via bitcoin-dev
wrote:
> With the recently renewed interest in eltoo, a proof-of-concept implementation
> [1], and the discussions regarding clean abstractions for off-chain protocols
> [2,3], I thought it might be time to revisit the `sighash_noinput` proposal
> (BIP-118 [4]), and AJ's `bip-anyprevout` proposal [5].

Hey Christian, thanks for the write up!

> ## Open questions
> The questions that remain to be addressed are the following:
> 1.  General agreement on the usefulness of noinput / anyprevoutanyscript /
> anyprevout[?]
> 2.  Is there strong support or opposition to the chaperone signatures[?]
> 3.  The same for output tagging / explicit opt-in[?]
> 4.  Shall we merge BIP-118 and bip-anyprevout. This would likely reduce the
> confusion and make for simpler discussions in the end.

I think there's an important open question you missed from this list:
(1.5) do we really understand what the dangers of noinput/anyprevout-style
constructions actually are?

My impression on the first 3.5 q's is: (1) yes, (1.5) not really,
(2) weak opposition for requiring chaperone sigs, (3) mixed (weak)
support/opposition for output tagging.

My thinking at the moment (subject to change!) is:

* anyprevout signatures make the address you're signing for less safe,
which may cause you to lose funds when additional coins are sent to
the same address; this can be avoided if handled with care (or if you

* being able to guarantee that an address can never be signed for with
an anyprevout signature is therefore valuable; so having it be opt-in
at the tapscript level, rather than a sighash flag available for
key-path spends is valuable (I call this "opt-in", but it's hidden
until use via taproot rather than "explicit" as output tagging
would be)

* receiving funds spent via an anyprevout signature does not involve any
qualitatively new double-spending/malleability risks.

(eltoo is unavoidably malleable if there are multiple update
transactions (and chaperone signatures aren't used or are used with
well known keys), but while it is better to avoid this where possible,
it's something that's already easily dealt with simply by waiting
for confirmations, and whether a transaction is malleable is always
under the control of the sender not the receiver)

* as such, output tagging is also unnecessary, and there is also no
need for users to mark anyprevout spends as "tainted" in order to
wait for more confirmations than normal before considering those funds
"safe"

I think it might be good to have a public testnet (based on Richard Myers
et al's signet2 work?) where we have some fake exchanges/merchants/etc
and scheduled reorgs, and demo every weird noinput/anyprevout case anyone
can think of, and just work out if we need any extra code/tagging/whatever
to keep those fake exchanges/merchants from losing money (and write up
the weird cases we've found in a wiki or a paper so people can easily
tell if we missed something obvious).

Cheers,
aj


```

### Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

```On Mon, Sep 30, 2019 at 11:28:43PM +, ZmnSCPxj via bitcoin-dev wrote:
> Suppose rather than `SIGHASH_NOINPUT`, we created a new opcode,
> `OP_CHECKSIG_WITHOUT_INPUT`.

I don't think there's any meaningful difference between making a new
opcode and making a new tapscript public key type; the difference is
just one of encoding:

3301AC   [CHECKSIG of public key type 0x01]
32B3 [CHECKSIG_WITHOUT_INPUT (replacing NOP4) of key]

> This new opcode ignores any `SIGHASH` flags, if present, on a signature,

(How sighash flags are treated can be redefined by new public key types.)

Cheers,
aj


```

### Re: [Lightning-dev] Selling timestamps (via payment points and scalars + Pedersen commitments ) [try2]

```On Wed, Sep 25, 2019 at 01:30:39PM +, ZmnSCPxj wrote:
> > Since it's off chain, you could also provide R and C and a zero knowledge
> > proof that you know an r such that:
> > R = SHA256( r )
> > C = SHA256( x || r )

> > in which case you could do it with lightning as it exists today.
> I can insist on paying only if the server reveals an `r` that matches some
> known `R` such that `R = SHA256(r)`, as currently in Lightning network.
> However, how would I prove, knowing only `R` and `x`, and that there exists
> some `r` such that `R = SHA256(r)`, that `C = SHA256(x || r)`?

If you know x and r, you can generate C and R and a zero knowledge proof
of the relationship between x,C,R that doesn't reveal r (eg, I think
you could do that with bulletproofs). Unfortunately that zkp already
proves that C was generated based on x, so you get your timestamp for
free. Ooops. :(

Cheers,
aj


```
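For reference, the R/C construction under discussion is straightforward to write down (hypothetical sketch; the zero-knowledge proof linking x, C and R is the hard part and is not shown):

```python
import hashlib

def make_commitment(x: bytes, r: bytes):
    """R commits to the nonce r; C commits to the document hash x via r."""
    R = hashlib.sha256(r).digest()
    C = hashlib.sha256(x + r).digest()
    return R, C

x = b"hash of the document being timestamped"
r = b"\x07" * 32
R, C = make_commitment(x, r)

# Once r is revealed (e.g. bought over a payment channel), anyone can
# check both relations directly, no zero-knowledge machinery needed:
assert hashlib.sha256(r).digest() == R
assert hashlib.sha256(x + r).digest() == C
```

The problem aj identifies is that proving the x/C/R relationship in zero knowledge *before* the sale already convinces the buyer that C commits to x, which is the very thing being sold.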

### Re: [Lightning-dev] Eltoo, anyprevout and chaperone signatures

```On Thu, May 16, 2019 at 09:55:57AM +0200, Bastien TEINTURIER wrote:
> before I joined this list so I'll go dig into the archive ;)

The discussion was on a different list anyway, I think, this might be

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016777.html

> > Specifically we can't make use of the collaborative path where
> > we override an `update_tx` with a newer one in taproot as far as I can
> > see, since the `update_tx` needs to be signed with noinput (for
> > rebindability) but there is no way for us to specify the chaperone key
> > since we're not revealing the committed script.
> Can you expand on that? Why do we need to "make use of the collaborative path"
> (maybe it's unclear to me what you mean by collaborative path here)?

I think Christian means the "key path" per the terminology in the
taproot bip. That's the path that just provides a signature, rather than
providing an internal key, a script, and signatures etc for the script.

> I feel like there will be a few other optimizations that are unlocked by
> taproot/tapscript, it will be interesting to dig into that.

I had a go at drafting up scripts, and passed them around privately to
some of the people on this list already. They're more "thought bubble"
than even "draft" yet, but for the sake of discussion:

---
FWIW, the eltoo scripts I'm imagining with this spec are roughly:

UPDATE TX n:
nlocktime: 500e6+n
nsequence: 0
output 0:
P = muSig(A,B)
scripts = [
"OP_1 CHECKSIGVERIFY X CHECKSIGVERIFY 500e6+n+1 CLTV"
]
witness:
sig(P,hash_type=SINGLE|ANYPREVOUTANYSCRIPT=0xc3)
sig(X,hash_type=0)

SETTLEMENT TX n:
nlocktime: 500e6+n+1
nsequence: [delay]
output 0: A
output 1: B
output n: (HTLC)
P = muSig(A,B)
scripts = [
"OP_1 CHECKSIGVERIFY X CHECKSIG"
"A CHECKSIGVERIFY  CLTV"
]
witness:
sig(P,hash_type=ALL|ANYPREVOUT=0x41)
sig(X,hash_type=0)

HTLC CLAIM (reveal secp256k1 preimage R):
witness:
hash-of-alternative-script
sig(P,hash_type=SINGLE|ANYPREVOUT,reveal R)
sig(X,hash_type=0)

HTLC REFUND (timeout):
witness:
hash-of-alternative-script
sig(A,hash_type=ALL)

Because "n" changes for each UPDATE tx, each ANYPREVOUT signature
(for the SETTLEMENT tx) commits to a specific UPDATE tx via both the
scriptPubKey commitment and the tapleaf_hash commitment.

So the witness data for both txs involve revealing:

33 byte control block
43 byte redeem script
65 byte anyprevout sig
64 byte sighash all sig

Compared to a 65 byte key path spend (if ANYPREVOUT worked for key paths),
that's an extra 143 WU or 35.75 vbytes, so about 217% more expensive. The
update tx script proposed in eltoo.pdf is (roughly):

"IF 2 Asi Bsi ELSE <500e6+n+1> CLTV DROP 2 Au Bu ENDIF 2 OP_CHECKMULTISIG"

148 byte redeem script
65 byte anyprevout sig by them
64 byte sighash all sig by us
"1" or "0" to control the IF

which I think would be about 282 WU total, or an extra 216 WU/54 vbytes
over a 65 byte key path spend, so about 327% more expensive. So at least
we're a lot better than where we were with BIP 118, ECDSA and p2wsh.

Depending on if you can afford generating a bunch more signatures you
could also have a SIGHASH_ALL key path spend for the common unilateral
case where only a single UPDATE TX is published.

UPDATE TX n (alt):
input: FUNDING TX
witness: sig(P,hash_type=0)
output 0:
P = muSig(A,B)
scripts = [
"OP_1 CHECKSIGVERIFY X CHECKSIGVERIFY 500e6+n+1 CLTV"
]

SETTLEMENT TX n (alt):
nsequence: [delay]
input: UPDATE TX n (alt)
witness: sig(P+H(P||scripts)*G,hash_type=0)
outputs: [as above]

(This approach can either use the same ANYPREVOUT sigs for the HTLC
claims, or could include an additional sig for each active HTLC for each
channel update to allow HTLC claims via SIGHASH_ALL scriptless scripts...)

Despite using SIGHASH_SINGLE, I don't think you can combine two UPDATE txs
generally, because the nlocktime won't match (this could possibly be fixed
in a future soft-fork by using the annex to support per-input absolute
locktimes). You can't combine SETTLEMENT txs, because the ANYPREVOUT
signature needs to commit to multiple outputs (one for my balance, one
for yours, one for each active HTLC). Combining HTLC refunds is kind-of
easy, but only possible in the first place if you've got a bunch expiring
at the same time, which might not be that likely. Combining HTLC claims
should be easy enough since they just need scriptless-script signatures.

For fees, because of ALL|ANYPREVOUT, you can add a new input and new
change output to bring-your-own-fees for the UPDATE tx; and while you
can't do that for the SETTLEMENT tx, you can immediately spend your
channel-balance output to add fees via CPFP.

As far as "X" goes, calculating the private key as a HD key using ECDH
between the peers
```
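The witness-weight comparison in the post checks out arithmetically, treating each witness item as its size plus a one-byte length prefix (an approximation; exact per-input overhead depends on serialization details):

```python
def witness_wu(item_sizes):
    """Witness weight: each item's bytes plus a 1-byte length prefix."""
    return sum(size + 1 for size in item_sizes)

keypath = witness_wu([65])                 # single 65-byte signature: 66 WU
scriptpath = witness_wu([33, 43, 65, 64])  # control block, script, two sigs

extra = scriptpath - keypath
print(extra, extra / 4, round(100 * extra / keypath))
# 143 WU extra, 35.75 vbytes, ~217% more expensive than the key path

eltoo_pdf = witness_wu([148, 65, 64, 1])   # old-style update spend per eltoo.pdf
print(eltoo_pdf, round(100 * (eltoo_pdf - keypath) / keypath))
# 282 WU, ~327% more than a key-path spend
```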

### Re: [Lightning-dev] More thoughts on NOINPUT safety

```On Fri, Mar 22, 2019 at 01:59:14AM +, ZmnSCPxj wrote:
> > If codeseparator is too scary, you could probably also just always
> > require the locktime (ie for settlement txs as well as update txs), ie:
> > OP_CHECKLOCKTIMEVERIFY OP_DROP
> >  OP_CHECKDLSVERIFY  OP_CHECKDLS
> > and have update txs set their timelock; and settlement txs set an absolute
> > timelock, relative timelock via sequence, and commit to the script code.
>
> I think the issue I have here is the lack of `OP_CSV` in the settlement
> branch.

You can enforce the relative timelock in the settlement branch simply
by refusing to sign a settlement tx that doesn't have the timelock set;
the OP_CSV is redundant.

> Consider a channel with offchain transactions update-1, settlement-1,
> update-2, and settlement-2.
> If update-1 is placed onchain, update-1 is also immediately spendable by
> settlement-1.

settlement-1 was signed by you, and when you signed it you ensured that
nsequence was set as per BIP-68, and NOINPUT sigs commit to nsequence,
so if anyone changed that after the fact the sig isn't valid. Because
BIP-68 is enforced by consensus, update-1 isn't immediately spendable
by settlement-1.

Cheers,
aj


```
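The consensus rule doing the work here is BIP-68's interpretation of nSequence; a sketch of the encoding (field layout per BIP 68):

```python
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31  # bit 31: relative locktime disabled
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22     # bit 22: time-based (512s units) vs blocks
SEQUENCE_LOCKTIME_MASK = 0xFFFF           # low 16 bits: the locktime value

def decode_bip68(nsequence: int):
    if nsequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return None  # consensus enforces no relative locktime
    value = nsequence & SEQUENCE_LOCKTIME_MASK
    if nsequence & SEQUENCE_LOCKTIME_TYPE_FLAG:
        return ("seconds", value * 512)
    return ("blocks", value)

print(decode_bip68(144))              # ('blocks', 144)
print(decode_bip68((1 << 22) | 675))  # ('seconds', 345600)
```

Since a NOINPUT sig commits to the nSequence value, tampering with these bits after signing invalidates the signature, which is why the OP_CSV in the settlement branch is redundant.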

### Re: [Lightning-dev] More thoughts on NOINPUT safety

```On Thu, Mar 21, 2019 at 10:05:09AM +, ZmnSCPxj wrote:
> > IF OP_CODESEPARATOR  OP_CHECKLOCKTIMEVERIFY OP_DROP ENDIF
> >  OP_CHECKDLSVERIFY  OP_CHECKDLS
> > Signing with NOINPUT,NOSCRIPT and codeseparatorpos=1 enforces CLTV
> > and allows binding to any prior update tx -- so works for an update tx
> > spending previous update txs; while signing with codeseparatorpos=-1
> > and NOINPUT but committing to the script code and nSequence (for the
> > CSV delay) allows binding to only that update tx -- so works for the
> > settlement tx. That's two pubkeys, two sigs, and the taproot point
> > reveal.
>
> Actually, the shared keys are different in the two branches above.

Yes, if you're not committing to the script code you need the separate
keys as otherwise any settlement transaction could be used with any
update transaction.

If you are committing to the script code, though, then each settlement
sig is already only usable with the corresponding update tx, so you
don't need to roll the keys. But you do need to make it so that the
update sig requires the CLTV; one way to do that is using codeseparator
to distinguish between the two cases.

> Also, I cannot understand `OP_CODESEPARATOR`, please no.

If codeseparator is too scary, you could probably also just always
require the locktime (ie for settlement txs as well as update txs), ie:

<locktime> OP_CHECKLOCKTIMEVERIFY OP_DROP
<A> OP_CHECKDLSVERIFY <B> OP_CHECKDLS

and have update txs set their timelock; and settlement txs set an absolute
timelock, relative timelock via sequence, and commit to the script code.

(Note that both those approaches (with and without codesep) assume there's
some flag that allows you to commit to the scriptcode even though you're
not committing to your input tx (and possibly not committing to the
scriptpubkey). BIP118 doesn't have that flexibility, so the A_s_i and
B_s_i key rolling is necessary)

Cheers,
aj


```
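The always-required CLTV in the no-codeseparator variant gives a total order on update states. A sketch of the comparison (my own illustration, using the 5e8+k locktime convention from the eltoo design described later in this thread):

```python
# Sketch: eltoo's always-required CLTV orders update states. State i's
# script demands a spender nLockTime of at least 5e8+i+1, and the
# update tx for state k carries nLockTime 5e8+k, so state k can
# replace state i exactly when k > i.

BASE = 500_000_000  # nLockTime values >= 5e8 are interpreted as unix time

def update_locktime(state: int) -> int:
    """nLockTime carried by the update tx for this state."""
    return BASE + state

def cltv_requirement(state: int) -> int:
    """Locktime demanded by OP_CHECKLOCKTIMEVERIFY in this state's script."""
    return BASE + state + 1

def can_replace(spending_state: int, spent_state: int) -> bool:
    """CLTV passes iff the spender's nLockTime meets the script's demand."""
    return update_locktime(spending_state) >= cltv_requirement(spent_state)

print(can_replace(5, 3))  # True: a later update replaces an earlier one
print(can_replace(3, 3))  # False: a state cannot replace itself
print(can_replace(2, 3))  # False: older states cannot replace newer ones
```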

### Re: [Lightning-dev] More thoughts on NOINPUT safety

```On Thu, Mar 14, 2019 at 05:22:59AM +, ZmnSCPxj via Lightning-dev wrote:
> output tagging somehow conflicting with Taproot, so I assumed Taproot is not
> useable in this case.

I'm thinking of tagged outputs as "taproot plus" (ie, plus noinput),
so if you used a tagged output, you could do everything normal taproot
address could, but also do noinput sigs for them.

So you might have:

funding tx -> cooperative claim

funding tx -> update 3 [TAGGED] -> settlement 3 -> claim

funding tx -> update 3 [TAGGED] ->
update 4 [TAGGED,NOINPUT] ->
settlement 4 [TAGGED,NOINPUT] ->
claim [NOINPUT]

In the cooperative case, no output tagging needed.

For the unilateral case, you need to tag all the update tx's, because
they *could* be spend by a later update with a NOINPUT sig, and if
that actually happens, then the settlement tx also needs to use a
NOINPUT sig, and if you're using scriptless scripts to resolve HTLCs,
claiming/refunding the HTLCs needs a partially-pre-signed tx which also
needs to be a NOINPUT sig, meaning the settlement tx also needs to be
tagged in that case.

You'd only need the script path for the last case where there actually
are multiple updates, but because you have to have a tagged output in the
second case anyway, maybe you've already lost privacy and always using
NOINPUT and the script path for update and settlement tx's would be fine.

> However, it is probably more likely that I simply misunderstood what you
> said, so if you can definitively say that it would be possible to hide the
> clause "or a NOINPUT sig from A with a non-NOINPUT sig from B" behind a
> Taproot then I am fine.

Yeah, that's my thinking.

> Minor pointless reactions:
> > 5.  if you're using scriptless scripts to do HTLCs, you'll need to
> > allow for NOINPUT sigs when claiming funds as well (and update
> > the partial signatures for the non-NOINPUT cases if you want to
> > maximise privacy), which is a bit fiddly
> If I remember accurately, we do not allow bilateral/cooperative close when
> HTLC is in-flight.
> However, I notice that later you point out that a non-cheating unilateral
> close does not need NOINPUT, so I suppose. the above thought applies to that
> case.

Yeah, exactly.

Trying to maximise privacy there has the disadvantage that you have to
do a new signature for every in-flight HTLC every time you update the
state, which could be a lot of signatures for very active channels.

Cheers,
aj


```

### Re: [Lightning-dev] More thoughts on NOINPUT safety

```On Wed, Mar 13, 2019 at 06:41:47AM +, ZmnSCPxj via Lightning-dev wrote:
> > -   alternatively, we could require every script to have a valid signature
> > that commits to the input. In that case, you could do eltoo with a
> > script like either:
> > <A> CHECKSIGVERIFY <B> CHECKSIG
> > or <P> CHECKSIGVERIFY <Q> CHECKSIG
> > where A is Alice's key and B is Bob's key, P is muSig(A,B) and Q is
> > a key they both know the private key for. In the first case, Alice
> > would give Bob a NOINPUT sig for the tx, and when Bob wanted to publish
> > Bob would just do a SIGHASH_ALL sig with his own key. In the second,
> > Alice and Bob would share partial NOINPUT sigs of the tx with P, and
> > finish that when they wanted to publish.
> At my point of view, if a NOINPUT sig is restricted and cannot be
> used to spend an "ordinary" 2-of-2, this is output tagging regardless
> of exact mechanism.

With taproot, you could always do the 2-of-2 spend without revealing a
script at all, let alone that it was meant to be NOINPUT capable. The
setup I'm thinking of in this scenario is something like:

0) my key is A, your key is B, we want to setup an eltoo channel

1) post a funding tx to the blockchain, spending money to an address
P = muSig(A,B)

2) we cycle through a bunch of states from 0..N, with "0" being the
refund state we establish before publishing the funding tx to
the blockchain. each state essentially has two corresponding tx's,
an update tx and a settlement tx.

3) the update tx for state k spends to an output Qk which is a
taproot address Qk = P + H(P,Sk)*G where Sk is the eltoo ratchet
condition:
Sk = (5e8+k+1) CLTV A CHECKDLS_NOINPUT B CHECKDLS_NOINPUT_VERIFY

we establish two partial signatures for update state k, one which
is a partial signature spending the funding tx with key P and
SIGHASH_ALL, the other is a NOINPUT signature via A (for you) and
via B (for me) with locktime set to (k+5e8), so that we can spend
any earlier state's update tx's, but not itself or any later
state's update tx's.

4) for each state we also have a settlement transaction,
Sk, which spends update tx k, to outputs corresponding to the state
of the channel, after a relative timelock delay.

we have two partial signatures for this transaction too, one with
SIGHASH_ALL assuming that we directly spent the funding tx with
update state k (so the input txid is known), via the key path with
key Qk; the other SIGHASH_NOINPUT via the Sk path. both partially
signed tx's have nSequence set to the required relative timelock
delay.

5) if you're using scriptless scripts to do HTLCs, you'll need to
allow for NOINPUT sigs when claiming funds as well (and update
the partial signatures for the non-NOINPUT cases if you want to
maximise privacy), which is a bit fiddly

6) when closing the channel the process is then:

- if you're in contact with the other party, negotiate a new
key path spend of the funding tx, publish it, and you're done.

- otherwise, if the funding tx hasn't been spent, post the latest
update tx you know about, using the "spend the funding tx via
key path" partial signature

- otherwise, trace the children of the funding tx, so you can see
the most recent published state:
- if that's newer than the latest state you know about, your
info is out of date (restored from an old backup?), and you
have to wait for your counterparty to post the settlement tx
- if it's equal to the latest state you know about, wait
- if it's older than the latest state, post the latest update
tx (via the NOINPUT script path sig), and wait

- once the CSV delay for the latest update tx has expired, post
the corresponding settlement tx (key path if the update tx
spent the funding tx, NOINPUT if the update tx spent an earlier
update tx)

- once the settlement tx is posted, claim your funds

So the cases look like:

mutual close:
funding tx -> claimed funds

-- only see one key via muSig, single signature, SIGHASH_ALL
-- if there are active HTLCs when closing the channel, and they
timeout, then the claiming tx will likely be one-in, one-out,
SIGHASH_ALL, with a locktime, which may be unusual enough to
indicate a lightning channel.

unilateral close, no cheating:
funding tx -> update N -> settlement N -> claimed funds

-- update N is probably SINGLE|ANYONECANPAY, so chain analysis
of accompanying inputs might reveal who closed the channel
-- settlement N has relative timelock
-- claimed funds may have timelocks if they claim active HTLCs via
the refund path
-- no NOINPUT signatures needed, and all signatures use the key path
so don't reveal any scripts

```
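The key derivation in step (3) can be sketched numerically. This is my own illustration with toy secp256k1 arithmetic (slow, not hardened, and not the exact BIP-taproot tweak serialization); `priv` and the script bytes are stand-ins:

```python
# Sketch of Qk = P + H(P, Sk)*G from step (3): a taproot-style tweak
# where P = muSig(A,B) and Sk is the state-k ratchet script. The
# serialization inside the hash is a simplification for illustration.

import hashlib

p = 2**256 - 2**32 - 977  # secp256k1 field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % p == 0: return None  # inverses
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, p) % p     # doubling
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, p) % p    # addition
    x = (lam * lam - a[0] - b[0]) % p
    return (x, (lam * (a[0] - x) - a[1]) % p)

def point_mul(k, pt):
    r = None
    while k:
        if k & 1: r = point_add(r, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return r

def tweak(P, script: bytes) -> int:
    ser = P[0].to_bytes(32, "big") + P[1].to_bytes(32, "big") + script
    return int.from_bytes(hashlib.sha256(ser).digest(), "big") % n

priv = 0xC0FFEE                       # stand-in for the muSig key's secret
P = point_mul(priv, G)
Sk = b"<state-k ratchet script>"      # placeholder for the eltoo script
t = tweak(P, Sk)
Qk = point_add(P, point_mul(t, G))

# Whoever can sign for P can sign for Qk's key path with priv + t:
assert point_mul((priv + t) % n, G) == Qk
print("key-path spend of Qk possible with priv+t")
```

This is why the key-path spend of Qk reveals nothing about Sk: the script only appears on chain if the NOINPUT script path is actually used.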

### [Lightning-dev] More thoughts on NOINPUT safety

```Hi all,

The following has some more thoughts on trying to make a NOINPUT
implementation as safe as possible for the Bitcoin ecosystem.

One interesting property of NOINPUT usage like in eltoo is that it
actually reintroduces the possibility of third-party malleability to
transactions -- ie, you publish transactions to the blockchain (tx A,
which is spent by tx B, which is spent by tx C), and someone can come
along and change A or B so that C is no longer valid. The way this works
is due to eltoo's use of NOINPUT to "skip intermediate states". If you
publish to the blockchain:

funding tx -> state 3 -> state 4[NOINPUT] -> state 5[NOINPUT] -> finish

then in the event of a reorg, state 4 could be dropped, state 5's
inputs adjusted to refer to state 3 instead (the sig remains valid
due to NOINPUT, so this can be done by anyone not just holders of some
private key), and finish would no longer be a valid tx (because the new
"state 5" tx has different inputs so a different txid, and finish uses
SIGHASH_ALL for the signature so committed to state 5's original txid).

There is a safety measure here though: if the "finish" transaction is
itself a NOINPUT tx, and has a CSV delay (this is the case in eltoo;
the CSV delay is there to give time for a hypothetical state 6 to be
published), then the only way to have a problem is for some SIGHASH_ALL tx
that spends finish, and a reorg deeper than the CSV delay (so that state
4 can be dropped, state 5 and finish can be altered). Since the CSV delay
is chosen by the participants, the above is still a possible scenario
in eltoo, though, and it means there's some risk for someone accepting
bitcoins that result from a non-cooperative close of an eltoo channel.

Beyond that, I think NOINPUT has two fundamental ways to cause problems
for the people doing NOINPUT sigs:

1) your signature gets applied to an unexpectedly different
script, perhaps making it look like you've been dealing
with some blacklisted entity. OP_MASK and similar solves
this.

2) your signature is applied to some transaction and works
perfectly; but then someone else sends money to the same address
and reuses your prior signature to forward it on to the same destination.

I still like OP_MASK as a solution to (1), but I can't convince myself that
the problem it solves is particularly realistic; it doesn't apply here,
since the address has to be different, and you could just short circuit the
whole thing by sending money from a blacklisted address to the target's
personal address directly. Further, if the sig's been seen on chain
before, that's probably good evidence that someone's messing with you;
and if it hasn't been seen on chain before, how is anyone going to tell
it's your sig to blame you for it?

I still wonder if there isn't a real problem hiding somewhere here,
but if so, I'm not seeing it.

For the second case, that seems a little more concerning. The nightmare
scenario is maybe something like:

* naive users do silly things with NOINPUT signatures, and end up
losing funds due to replays like the above

* initial source of funds was some major exchange, who decide it's
cheaper to refund the lost funds than deal with the customer complaints

* the lost funds end up costing enough that major exchanges just outright
ban sending funds to any address capable of NOINPUT, which also bans
legitimate uses like eltoo channels.

That's not super likely to happen by chance: NOINPUT sigs will commit
to the value being spent, so to lose money, you (Alice) have to have
done a NOINPUT sig spending a coin sent to your address X, to someone
(Bob) and then have to have a coin with the exact same value sent from
someone else again (Carol) to your address X (or if you did a script
path NOINPUT spend, to some related address Y with a script that uses the same
key). But because it involves losing money to others, bad actors might
trick people into having it happen more often than chance (or well
written software) would normally allow.

That "nightmare" could be stopped at either the first step or the
last step:

* if we "tag" addresses that can be spent via NOINPUT then having an
exchange ban those addresses doesn't also impact regular
taproot/schnorr addresses, though it does mean you can tell when
someone is using a protocol like eltoo that might need to make use
of NOINPUT signatures.  This way exchanges and wallets could simply
not provide NOINPUT capable addresses in the first place normally,
and issue very large warnings when asked to send money to one. That's
not a problem for eltoo, because all the NOINPUT-capable address eltoo
needs are internal parts of the protocol, and are spent automatically.

* or we could make it so NOINPUT signatures aren't replayable on
different transactions, at least by third parties. one way of doing
this might be to require ...
```
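The second failure mode can be modelled abstractly. This toy sketch (entirely my own, not real transaction code) shows why a replay needs the exact same value sent to the same spending conditions:

```python
# Toy model of NOINPUT replay: the "signature" commits to the value and
# the spending conditions (address) but not the specific coin, so an
# identical later deposit can be swept with the old signature by anyone
# who saw it on chain.

from dataclasses import dataclass

@dataclass(frozen=True)
class Coin:
    txid: str
    address: str
    value: int

def noinput_sig(address: str, value: int, dest: str):
    # Stand-in: the tuple is exactly what the sig commits to.
    return ("NOINPUT", address, value, dest)

def sig_valid_for(sig, coin: Coin, dest: str) -> bool:
    _, address, value, sig_dest = sig
    return (coin.address, coin.value, dest) == (address, value, sig_dest)

alice_addr = "X"
coin1 = Coin("txid-1", alice_addr, 100_000)
sig = noinput_sig(alice_addr, 100_000, "bob")
assert sig_valid_for(sig, coin1, "bob")      # the intended spend

coin2 = Coin("txid-2", alice_addr, 100_000)  # Carol reuses address X
assert sig_valid_for(sig, coin2, "bob")      # replay! same value + address

coin3 = Coin("txid-3", alice_addr, 99_000)   # different value
assert not sig_valid_for(sig, coin3, "bob")  # sig commits to the value
print("replay possible only for identical value to the same address")
```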

### Re: [Lightning-dev] Base AMP

```On Thu, Nov 15, 2018 at 11:54:22PM +, ZmnSCPxj via Lightning-dev wrote:
> The improvement is in a reduction in `fee_base_msat` in the C->D path.

I think reliability (and simplicity!) are the biggest things to improve
in lightning atm. Having the flag just be included in invoices and not
need to be gossiped seems simpler to me; and I think endpoint-only
merging is better for reliability too. Eg, if you find candidate routes:

A -> B -> M -- actual directed capacity \$6
A -> C -> M -- actual directed capacity \$5.50
M -> E -> F -- actual directed capacity \$6
A -> X -> F -- actual directed capacity \$7

and want to send \$9 from A to F, you might start by trying to send
\$5 via B and \$4 via C.

With endpoint-only merging you'd do:

\$5 via A,B,M,E,F -- partial success
\$4 via A,C,M,E -- failure
\$4 via A,X,F -- payment completion

whereas with in-route merging, you'd do:

\$5 via A,B,M -- held
\$4 via A,C,M -- to be continued
\$9 via M,E -- both partial payments fail

which seems a fair bit harder to incrementally recover from.

> Granted, current `fee_base_msat` across the network is very low currently.
> So I do not object to restricting merge points to ultimate payees.
> If fees rise later, we can revisit this.

So, while we already agree on the approach to take, I think the above
is still worth keeping in mind if fees rise.

Cheers,
aj


```
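The retry sequence above can be simulated directly. The routes and capacities come from the example in the mail; the code itself is my own sketch of endpoint-only merging, where each partial payment uses a full route to the payee and a failed part can be retried independently:

```python
# Directed capacities from the example: each key is a route segment.
cap = {("A","B","M"): 6.00, ("A","C","M"): 5.50,
       ("M","E","F"): 6.00, ("A","X","F"): 7.00}

def try_send(amount, *segments):
    """Attempt a payment along the given segments; all-or-nothing."""
    if all(cap[s] >= amount for s in segments):
        for s in segments:
            cap[s] -= amount
        return True
    return False

# $5 via A,B,M,E,F: fits both segments -> partial success at the payee
assert try_send(5.00, ("A","B","M"), ("M","E","F"))
# $4 via A,C,M,E,F: A-C-M is fine, but only $1 left on M-E-F -> failure
assert not try_send(4.00, ("A","C","M"), ("M","E","F"))
# retry $4 via A,X,F: an independent route -> payment completion
assert try_send(4.00, ("A","X","F"))
print("$9 delivered: $5 via B..E and $4 via X")
```

With in-route merging at M, the \$9 outgoing hop would instead fail both partial payments at once, which is what makes incremental recovery harder.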

### Re: [Lightning-dev] Packet switching via intermediary rendezvous node

```On Thu, Nov 15, 2018 at 07:24:29PM -0800, Olaoluwa Osuntokun wrote:
> > If I'm not mistaken it'll not be possible for us to have spontaneous
> > ephemeral key switches while forwarding a payment
> If this _was_ possible, then it seems that it would allow nodes to create
> unbounded path lengths (looks to other nodes as a normal packet), possibly
> by controlling multiple nodes in a route, thereby sidestepping the 20 hop
> limit all together.

If you control other nodes in the route you can trivially create a "path"
of more than 20 hops -- go 18 hops from your first node to your second
node, and have the second node trigger on the payment hash to create
an entirely new onion to go another 18 hops, repeating if necessary to
create an arbitrarily long route.

> This would be undesirable many reasons, the most dire of
> which being the ability to further amplify null-routing attacks.

That doesn't really *amplify* null-routing attacks -- even if its
circular, you're still locking additional funds up each time you
route through yourself.

Cheers,
aj


```

### [Lightning-dev] Probe cancellation

```PING,

It seems like ensuring reliability is going to involve nodes taking
active measures like probing routes fairly regularly. Maybe it would
be worth making that more efficient? For example, a risk of probing is
that if the probe discovers a failing node/channel, the probe HTLC will
get stuck, and have to gradually timeout, which at least uses up HTLC
slots and memory for each of the well-behaved nodes, but if the probe
has a realistic value rather than just a few (milli)satoshis, it might
lock up real money too.

It might be interesting to allow for cancelling stuck probes from
the sending direction as well as the receiving direction. eg if the
payment hash wasn't generated as SHA256("something") but rather as
SHA256("something") XOR 0xFF..FF or similar, then everyone can safely drop
the incoming transaction because they know that even if they forwarded
the tx, it will be refunded eventually anyway (or otherwise sha256 is
effectively broken and they're screwed anyway). So all I have to do is
send a packet saying this was a probe, and telling you the "something"
to verify, and I can free up the slot/funds from my probe, as can everyone
else except for the actual failing nodes.

From the perspective of the sending node:

generate 128b random number X
calculate H=bitwise-not(SHA256(X))
make probe payment over path P, hash H, amount V
wait for response:
- success: Y, s.t. SHA256(Y)=H=not(SHA256(X)) -- wtf, sha is broken
- error, unknown hash: path works
- routing failed:  mark failing node, reveal X cancelling HTLC
- timeout: mark path as failed (?), reveal X cancelling HTLC

Cheers,
aj


```
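The probe construction above, in code (my own illustration): H is the bitwise NOT of SHA256(X), so nobody can know a preimage of H unless SHA256 is broken, yet revealing X convinces every node on the path that the HTLC can safely be dropped.

```python
import hashlib, os

def bitwise_not(b: bytes) -> bytes:
    """Flip every bit: equivalent to XOR with 0xFF..FF."""
    return bytes(x ^ 0xFF for x in b)

X = os.urandom(16)                           # 128-bit random probe secret
H = bitwise_not(hashlib.sha256(X).digest())  # payment hash for the probe

def cancel_ok(claimed_X: bytes, payment_hash: bytes) -> bool:
    """A node may drop the HTLC iff the revealed X matches its hash."""
    return bitwise_not(hashlib.sha256(claimed_X).digest()) == payment_hash

assert cancel_ok(X, H)
print("probe cancellable by revealing X")
```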

### Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

```On Thu, Nov 08, 2018 at 05:32:01PM +1030, Olaoluwa Osuntokun wrote:
> > A node, via their node_announcement,
> Most implementations today will ignore node announcements from nodes that
> don't have any channels, in order to maintain the smallest routing set
> possible (no zombies, etc). It seems for this to work, we would need to undo
> this at a global scale to ensure these announcements propagate?

Having incoming capacity from a random node with no other channels doesn't
seem useful though? (It's not useful for nodes that don't have incoming
capacity of their own, either)

Cheers,
aj


```

### Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.

```On Wed, Nov 07, 2018 at 02:26:29AM +, Gert-Jaap Glasbergen wrote:
> > Otherwise, if you're happy accepting 652 satoshis, I don't see why you
> > wouldn't be happy accepting an off-chain balance of 652.003 satoshis;
> > you're no worse off, in any event.
> I wouldn’t be worse off when accepting the payment, I agree. I can safely
> ignore whatever fraction was sent if I don’t care about it anyway. The
> protocol is however expecting (if not demanding) me to also route payments
> with fractions, provided they are above the set minimum. In that case I’m
> also expected to send out fractions. Even though they don’t exist on-chain,
> if I send a fraction of a satoshi my new balance will be 1 satoshi lower
> on-chain since everything is rounded down.

But that's fine: suppose you want everything divided up into lots of
1 satoshi, and you see 357.719 satoshis coming in and 355.715 satoshis
going out. Would you have accepted 357 satoshis going in (rounded down)
and 356 satoshis going out (rounded up)? If so, you're set. If not,
reject the HTLC as not having a high enough fee.

Yes, you're still expected to send fractions of a satoshi around, but
that doesn't have to affect your accounting (except occasionally to
your benefit when you end up with a thousand millisatoshis).

I think you can set your fee_base_msat to 2000 msat to make sure every
HTLC you route pays you at least a satoshi, even with losses from
rounding. If you're willing to find yourself having routed payments for
free (after rounding), then setting it to 1000 msat should work too.

> > Everything in open source is configurable by end users: at worst, either
> > by them changing the code, or by choosing which implementation to use…
> Well, yes, in that sense it is. But the argument was made that it’s too
> complex for average users to understand: I agree there, [...]

Then it's not really a good thing for different implementations to have
as a differentiator...

Cheers,
aj


```
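The rounding rule suggested above can be stated as a simple check (my own sketch of that accounting): accept a forward iff you would accept the incoming amount rounded down and the outgoing amount rounded up, both to whole satoshis.

```python
import math

def accept_forward(in_msat: int, out_msat: int, min_fee_sat: int = 1) -> bool:
    """Worst-case whole-satoshi accounting for a forwarded HTLC."""
    in_sat = in_msat // 1000              # round the incoming amount down
    out_sat = math.ceil(out_msat / 1000)  # round the outgoing amount up
    return in_sat - out_sat >= min_fee_sat

# 357.719 sat in, 355.715 sat out -> worst case 357 in, 356 out: 1 sat fee
assert accept_forward(357_719, 355_715)

# fee_base_msat = 2000 guarantees >= 1 sat even after rounding losses:
fee_base_msat = 2000
amt = 355_715
assert accept_forward(amt + fee_base_msat, amt)
print("forward accepted with at least 1 sat after rounding")
```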

### Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.

```On Tue, Nov 06, 2018 at 10:22:56PM +, Gert-Jaap Glasbergen wrote:
> > On 6 Nov 2018, at 14:10, Christian Decker
> > wrote:
> > It should be pointed out here that the dust rules actually prevent us
> > from creating an output that is smaller than the dust limit (546
> > satoshis on Bitcoin). By the same logic we would be forced to treat the
> > dust limit as our atomic unit, and have transferred values and fees
> > always be multiples of that dust limit.
> I don’t follow the logic behind this.

I don't think it quite makes sense either fwiw.

> > 546 satoshis is by no means a tiny amount anymore, i.e., 546'000 times
> > the current minimum fee and value transferred. I think we will have to
> > deal with values that are not representable / enforceable on-chain
> > anyway, so we might as well make things more flexible by keeping
> > msatoshis.
> I can see how this makes sense. If you deviate from the realms of what is
> possible to enforce on chain,

What's enforcable on chain will vary though -- as fees rise, even if the
network will still relay your 546 satoshi output, it may no longer be
economical to claim it, so you might as well save fees by not including
it in the first place.

But equally, if you're able to cope with fees rising _at all_ then
you're already okay with losing a few dozen satoshis here and there, so
how much difference does it make if you're losing them because fees
rose, or because there was a small HTLC that you could've claimed in
theory (or off-chain) but just can't claim on-chain?

> Again, I am not advocating mandatory limitations to stay within base layer
> enforcement, I am advocating _not_ making it mandatory to depart from it.

That seems like it adds a lot of routing complexity for every node
(what is the current dust level? does it vary per node/channel? can I
get a path that accepts my microtransaction HTLC? do I pay enough less
in fees that it's better to bump it up to the dust level?), and routing

You could already get something like this behaviour by setting a high
"fee_base_msat" and a low "fee_proportional_millionths" so it's just
not economical to send small transactions via your channel, and a
corresponding "htlc_maximum_msat" to make sure you aren't too cheap at
the top end.

Otherwise, if you're happy accepting 652 satoshis, I don't see why you
wouldn't be happy accepting an off-chain balance of 652.003 satoshis;
you're no worse off, in any event.

> I would not envision this to be even configurable by end users. I am just
> advocating the options in the protocol so that an implementation can choose
> what security level it prefers.

Everything in open source is configurable by end users: at worst, either
by them changing the code, or by choosing which implementation to use...

Cheers,
aj


```
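The fee-schedule workaround mentioned above can be quantified. The numbers here are illustrative only (my own, not recommended settings): a high `fee_base_msat` with a low `fee_proportional_millionths` makes tiny HTLCs uneconomical without banning them outright, while `htlc_maximum_msat` caps the cheap top end.

```python
def routing_fee_msat(amount_msat: int, fee_base_msat: int, fee_ppm: int) -> int:
    """Standard BOLT-7 style fee: base plus proportional part."""
    return fee_base_msat + amount_msat * fee_ppm // 1_000_000

fee_base_msat, fee_ppm = 5000, 10   # illustrative values
htlc_maximum_msat = 100_000_000     # cap so big HTLCs aren't too cheap

for amount in (100, 10_000, 10_000_000):
    fee = routing_fee_msat(amount, fee_base_msat, fee_ppm)
    print(amount, fee)  # the base fee dwarfs tiny amounts

# a 100 msat HTLC pays a >100% fee; a 10k sat HTLC pays well under 1%
assert routing_fee_msat(100, fee_base_msat, fee_ppm) > 100
assert routing_fee_msat(10_000_000, fee_base_msat, fee_ppm) < 100_000
```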

### Re: [Lightning-dev] BOLT11 In the World of Scriptless Scripts

```On Sun, Nov 04, 2018 at 08:04:20PM +1030, Rusty Russell wrote:
> >> >  - just send multiple payments with the same hash:
> >> > works with sha256
> >> > privacy not improved much (some intermediary nodes no longer know
> >> >   full invoice value)
> >> > can claim partial payments as soon as they arrive
> >> > accepting any partial payment provides proof-of-payment
> >> Interestingly, if vendor takes part payment, rest can be stolen by
> >> intermediaries.
> > Or you could just see a \$5 bill, send \$0.50 through, and wait to see
> > if they take the partial payment immediately before even trying the
> > remaining \$4.50.
> Sure, that's true today, too?

Yeah, exactly. So to get correct behaviour vendors/payees need to check
the HTLC amount matches what they expect already... They could just
automatically pause instead of rejecting here to see if more payments
come through in the next n seconds via (presumably) different paths,
with no extra message bit required. (A bit in the invoice indicating
you'll do this would probably be useful though)

> >  Vendor -> *:"I sell widgets for 0.01 BTC, my pubkey is P"
> >  Customer -> Vendor: "I want to buy a widget"
> >  Vendor -> Customer: "Here's an R value"
> >  Customer: calculates S = R + H(P,R,"send \$me a widget at \$address")*P
> >  Customer -> Vendor: "here's 0.01 BTC for s corresponding to S, my
> >   details are R, \$me, \$address"
> >  Vendor: looks up r for R=r*G, calculates s = r + H(P,R,"send \$me a
> >  widget at \$address")*p, checks S=s*G
> >  Vendor -> Customer:
> >
> >  Customer -> Court: reveals the invoice ("send \$me a widget...") and the
> > signature by Vendor's pubkey P, (s,R)
> >
> > I think the way to do secp256k1 AMP with that is that when sending
> > through the payment is for the customer to send three payments to the
> > Vendor conditional on preimages for A,B,C calculated as:
> >
> >A = S + H(1,secret)*G
> >B = S + H(2,secret)*G
> >C = S + H(3,secret)*G
> Note: I prefer the construction H(,)
> which doesn't require an explicit order.

Yes, you're quite right.

> I'm not sure I see the benefit over just treating them independently,
> so I also think we should defer.

If you've got a path that merges, then goes for a few hops, you'd save
on the fee_base_msat fees, and allow the merged hops to have smaller
commitment transactions. Kinda neat, but the complexity in doing the
onion stuff means it definitely makes sense to defer IMO.

> >> [1] If we're not careful we're going to implement HORNET so we can pass
> >> arbitrary messages around, which means we want to start charging for
> >> them to prevent spam, which means we reopen the pre-payment debate, and
> >> need reliable error messages...
> > Could leave the interactivity to the "web store" layer, eg have a BOLT
> > 11 v1.1 "offer" include a url for the website where you go an enter your
> > name and address and whatever other info they need, and get a personalised
> > BOLT 11 v1.1 "invoice" back with payment-hash/nonce/signature/whatever?
> I think that's out-of-scope, and I generally dislike including a URL
> since it's an unsigned externality and in practice has horrible privacy
> properties.

Maybe... I'm not sure that it'll make sense to try to negotiate postage
and handling fees over lightning, rather than over https, though?

BTW, reviewing contract law terminology, I think the way lawyers would
call it is:

"invitation to treat" -- advertising that you'll sell widgets for \$x
"offer" -- I'll pay you \$3x for delivery of 3 widgets to my address
"acceptance" -- you agree, take my \$3x and give me a receipt
"consideration" -- you get my \$3x, I get 3 widgets

So it might be better to have the terms be "advertisement" and "invoice",
to match the contract-law terms. In any event, I think that would mean
the BOLT-11 terms and lightning payment process would map nicely into
contract law.

Oh! Post-Schnorr I think there's a good reason for the payee to
include their own crypto key in the invoice; so you can generate an
scriptless-script address for an on-chain fallback payment directly
between the payer/payee that reveals proof-of-payment on acceptance
(and allow refund on timeout via taproot I guess). At least, I think
all that might be theoretically feasible.

Cheers,
aj


```

### Re: [Lightning-dev] BOLT11 In the World of Scriptless Scripts

```On Mon, Nov 05, 2018 at 01:05:17AM +, ZmnSCPxj via Lightning-dev wrote:
> > And it just doesn't work unless you give over uniquely identifying
> > information. AJ posts to r/bitcoin demonstrating payment, demanding his
> > goods. Sock puppet says "No, I'm the AJ in Australia" and cut & pastes
> > the same proof.
> Technically speaking, all that AJ in Australia needs to show is that he or
> she knows, the private key behind the public key that is indicated on the
> invoice.

Interesting. I think what you're saying is that with secp256k1 preimages
(with decorrelation), if you have the payment hash Q, then the payment
preimage q (Q=q*G) is only known to the payee and the payer (and not
any intermediaries thanks to decorrelation), so if you see a statement

m="This invoice has been paid but not delivered as at 2018-11-05"

signed by "Q" (so, some s,R s.t. s*G = R + H(Q,R,m)*Q) then that means
either the payee signed it, in which case there's no dispute, or the
payer signed it... And that's publicly verifiable with only the original
invoice information (ie "Q").

(I don't think there's any need for multiple rounds of signatures)

FWIW, I don't see reddit as a particularly viable "court"; there's
no way for reddit to tell who's actually right in a dispute, eg if I
say blockstream didn't send stickers I paid for, and blockstream says
they did; ie there's no need for a sock puppet in the above scenario,
blockstream can just say "according to our records you signed for
delivery, stop whinging". (And if we both agree that it did or didn't
arrive, there's no need to post cryptographic proofs to reddit afaics)

I think there's maybe four sorts of "proof of payment" people might
desire:

0) no proof: "completely" deniable payments (donations?)

1) shared secret: ability to prove directly to the payee that an
invoice was paid (what we have now)

2) signed payment: ability to prove to a different business unit of
the payee that payment was made, so that you can keep all the
secrets in the payment-handling part, and have the service-delivery
part not be at risk for losing all your money

3) third-party verifiable: so you can associate a payment with real
world identity information, and take them to court (or reddit) as a
contract dispute; needs PKI infrastructure so you can be confident
the pubkey maps to the real world people you think it does, etc

Cheers,
aj


```

### Re: [Lightning-dev] BOLT11 In the World of Scriptless Scripts

```On Sun, Nov 04, 2018 at 01:30:48PM +1030, Rusty Russell wrote:
> I'm not sure.  Jonas Nick proposed a scheme, which very much assumes
> Schnorr AFAICT:
> Jonas Nick wrote:
> > How I thought it would work is that the invoice would contain a
> > Schnorr nonce R.

(Note this means the "invoice" must be unique for each payment)

> > Then the payer would construct s*G = R +
> > H(payee_pubkey,R,"I've bought 5 shirts shipped to Germany")*G. Then
> > the payer builds the scriptless script payment path such that when the
> > payee claims, the payer learns s and thus has a complete
> > signature. However, that doesn’t work with recurrent payments because
> > the payee can use the nonce only once.

So that's totally fine to do however you receive the "s" value -- the
message that's getting the Schnorr signature isn't a valid bitcoin
transaction, so it's something that only needs to be validated by
BOLT-aware courts.

I also think you can get recurrent payments easily by extending the
verification algorithm. Basically instead of Verify(m,P,sig) have
Verify(m,P,n,sig) to verify you've made n payments of the invoice "m".

Construct "m" to include the postimage X = H(pre,1000) which indicates
"pre" has been hashed 1000 times, so X = H(H(pre,1000-n),n).

Calculate the original signature as:

s = r + H(P,R,m+X)*p

and verify that n payments have been made by checking:

Verify(m,P,n,(s,R,rcpt)) :: s*G = R + H(P,R,m+H(rcpt,n))*P

You'd provide s,R,X when setting up the subscription, then reveal the
preimage to X, the preimage to the preimage of X etc on each payment.
(Maybe shachain would work here?)

I think that approach is independent of using sha256/secp256k1 for
preimages over lightning too.

> I would probably enhance this to include a nonce, which allows for AMP
> (you have to xor the AMP payments to get the nonce):
> R + H(payee_pubkey,R,"I've bought 5 shirts shipped to Germany",NONCE)*G

R is already a unique nonce under the hash here, so I don't think a
second one adds any value fwiw.

> > I think it makes sense to think of proof-of-payment in terms of a
> > verification algorithm (that a third party court could use), that takes:
> >
> >   m - the invoice details, eg
> >   "aj paid \$11 for stickers to be delivered to Australia"
> >   P - the pubkey of the vendor
> >   sig - some signature
> >
> > With the current SHA256 preimages, you can make sig=(R,s,pre)
> > where the sig is valid if:
> >
> >   s*G = R + H(P,R,m+SHA256(pre))*P
> >
> > If you share R,s,SHA256(pre) beforehand, the payer can tell they'll have
> > a valid signature if they pay to SHA256(pre). That's a 96B signature,
> > and it requires "pre" be different for each sale, and needs pre-payment
> > interactivity to agree on m and communicate R,s back to the payer.
> For current-style invoices (no payer-supplied data), the payee knows
> 'm', so no interactivity needed, which is nice.

I'm looking at it as needing interactivity to determine m prior to
the payment going through -- the payer needs to send through "aj" and
"Australia" in the example above, before the payee can generate s,R to
send back, at which point the payer can make the payment knowing they'll
either get a cryptographic proof of payment or a refund.

> In the payer-supplied data case, I think 'm' should include a signature
> for a key only the payer knows: this lets them prove *they* made the
> payment.

I don't object to that, but I think it's unnecessary; as long as there
was a payment for delivery of the widget to "aj" in "Australia" does it
matter if the payment was technically made by "aj" by "Visa on behalf
of aj" or by "Bank of America on behalf of Mastercard on behalf of aj's
friend who owed him some money" ?

> How does this interact with AMP, however?

The way I see it is they're separate: you have a way of getting the
preimage back over lightning (which is affected by AMP), and you have a
way of turning a preimage into a third-party-verifiable PoP (with
Schnorr or whatever).

(That might not be true if there's a clever way of safely feeding the
nonce R back, so that you can go straight from a generic offer to an
accepted payment with proof of payment)

> > With secp256k1 preimages, it's easy to reduce that to sig=(R,s),
> > and needing to communicate an R to the payer initially, who can then
> > calculate S and send "m" along with the payment.

Crap, do I need to give you proof of payment for it now? :)

> > Maybe it makes sense to disambiguate the term "invoice" -- when you don't
> > know who you might be giving the goods/service to, call it an "offer",
> > which can be a write-once/accept-by-anyone deal that you just leave on
> > a webpage or your email signature; but an "invoice" should be specific
> > to each individual payment, with a "receipt" provided once an invoice
> > is paid.
> "offer" is a good name, since I landed on the same one while thinking

Yay!

> > It seems to me like there are three levels that could be implemented:
```
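
The hash-chain half of the subscription idea in the message above can be sketched on its own. This is a toy illustration (my code, nothing from a BOLT): `X = H(pre,1000)` is committed inside "m", and the `Verify` check can only pass when the revealed `rcpt` hashes back to `X` in exactly `n` steps.

```python
# Toy sketch (illustration only) of the hash-chain counter above:
# X = H^1000(pre) is committed in the invoice message m; after the n-th
# payment the payee has revealed rcpt = H^(1000-n)(pre), and anyone can
# check that hashing rcpt n more times reproduces X.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain(seed: bytes, count: int) -> bytes:
    """Hash `seed` `count` times (the H(seed, count) notation above)."""
    out = seed
    for _ in range(count):
        out = h(out)
    return out

def verify_payment_count(X: bytes, rcpt: bytes, n: int) -> bool:
    """True iff `rcpt` proves n payments against the commitment X."""
    return chain(rcpt, n) == X

pre = b"subscription-secret"      # payee's secret
X = chain(pre, 1000)              # committed in the invoice message m
rcpt3 = chain(pre, 997)           # revealed after the 3rd payment
assert verify_payment_count(X, rcpt3, 3)
assert not verify_payment_count(X, rcpt3, 2)
```

As the mail suggests, shachain would let the payee avoid re-deriving `H^(1000-n)(pre)` from scratch for each payment.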

### Re: [Lightning-dev] BOLT11 In the World of Scriptless Scripts

```On Fri, Nov 02, 2018 at 03:45:58PM +1030, Rusty Russell wrote:
> Anthony Towns  writes:
> > On Fri, Nov 02, 2018 at 10:20:46AM +1030, Rusty Russell wrote:
> >> There's been some discussion of what the lightning payment flow
> >> might look like in the future, and I thought I'd try to look forwards so
> >> we can avoid painting ourselves into a corner now.  I haven't spent time
> >> on concrete design and implementation to be sure this is correct,
> >> however.
> > I think I'd like to see v1.1 of the lightning spec include
> > experimental/optional support for using secp256k1 public/private keys
> > for payment hashes/preimages. That assumes using either 2-party ECDSA
> > magic or script magic until it's viable to do it via Schnorr scriptless
> > scripts, but that seems like it's not totally infeasible?
> Not totally infeasible, but since every intermediary needs to support
> it, I think we'd need considerable buy-in before we commit to it in 1.1.

"every intermediary" just means "you have to find a path where every
channel supports it"; nodes/channels that aren't in the route you choose
aren't a problem, and can still pass on the gossiped announcements,
I think?

> > I think the
> > components would need to be:
> >  - invoices: will the preimage for the hash be a secp256k1 private key
> >or a sha256 preimage? (or payer's choice?)
> From BOLT11:
>The `p` field supports the current 256-bit payment hash, but future
>specs could add a new variant of different length, in which case
>writers could support both old and new, and old readers would ignore
>the one not the correct length.
> So the plan would be you provide two `p` fields in transition.

Yeah, that sounds workable.

> >  - channel announcements: do you support secp256k1 for hashes or just
> >sha256?
> Worse, it becomes "I support secp256k1 with ECDSA" then a new "I support
> secp256k1 with Schnorr".  You need a continuous path of channels with
> the same feature.

I don't think that's correct: whether it's 2p-ecdsa, Schnorr or script
magic only matters for the two nodes directly involved in the channel
(who need to be able to understand the commitment transactions they're
signing, and extract the private key from the on-chain tx if the channel
gets unilaterally closed). For everyone else, they just need to know that
they can put in a public key based HTLC, and get back the corresponding
private key when the HTLC goes through.

It's also (theoretically) upgradable afaics: if two nodes have a channel
that supports 2p-ecdsa, and eventually both upgrade to support segwit
v1 scriptless schnorr sigs or whatever, they just need to change the
addresses they use in new commitment txs, even for existing HTLCs.

> > Even if you calculate r differently, I don't think you can do this
> > without Bob and Alice interacting to get the nonce R prior to sending
> > the transaction, which seems effectively the same as having dynamic
> > invoice hashes, though.
> I know Andrew Poelstra thought it was possible, so I'm going to leave a
> response to him :)

AFAICT, in general, if you're going to have n signatures with a public
key P, you need to generate the n R=r*G values from n*32B worth of random data,
that's previously unknown to the signature recipients. If you've got
less than that, then you will have calculated each R by doing something
like [...]

> I think a general scheme is: payer creates a random group-marker, sends
> <group-marker><32-byte-randomness>[encrypted data...] in each payment.
> Recipient collects payments by <group-marker>, xoring the
> <32-byte-randomness>; if that xor successfully decrypts the data, you've
> got all the pieces.
>
> (For low-AMP, you use payment_hash as <group-marker>, and just use
> SHA256(<32-byte-randomness>) as the per-payment
> preimage so no [encrypted data] needed).

Hmm, right, I've got decorrelation and AMP combined in my head. I'm also
a bit confused about what exactly you mean by "low-AMP"...

Rereading through the AMP threads, Christian's post makes a lot of sense
to me:

https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001023.html

I'm not really seeing the benefits in complicated AMP schemes without
decorrelation...

It seems to me like there are three levels that could be implemented:

- laolu/conner: ("low AMP" ?)
works with sha256
some privacy improvement
loses proof-of-payment
can't claim unless all payments arrive

- just send multiple payments with the same hash:
works with sha256
privacy not improved much (some intermediary nodes no longer know
full invoice value)
can claim partial payments as soon as they arrive
accepting any partial payment provides proof-of-payment

```
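
The "can't claim unless all payments arrive" property of the laolu/conner scheme listed above comes from xor-splitting a secret. A minimal sketch in my own toy framing (not the proposal's actual wire format): the payer splits the secret into k random shares, one per partial payment, and the payee can only reconstruct it once every share has arrived.

```python
# Toy sketch of xor-splitting a secret into k shares (my own framing of
# the "low AMP" idea, not the actual proposal's encoding).
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, k: int) -> list[bytes]:
    """k-1 random shares plus one share that xors back to the secret."""
    shares = [os.urandom(len(secret)) for _ in range(k - 1)]
    shares.append(reduce(xor, shares, secret))
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor, shares)

secret = os.urandom(32)
shares = split_secret(secret, 4)
assert combine(shares) == secret          # all four shares recover it
assert combine(shares[:3]) != secret      # any subset reveals nothing
```

Missing even one share leaves the remaining xor uniformly random, which is why partial payments can't be claimed early.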

### Re: [Lightning-dev] BOLT11 In the World of Scriptless Scripts

```On Fri, Nov 02, 2018 at 10:20:46AM +1030, Rusty Russell wrote:
> There's been some discussion of what the lightning payment flow
> might look like in the future, and I thought I'd try to look forwards so
> we can avoid painting ourselves into a corner now.  I haven't spent time
> on concrete design and implementation to be sure this is correct,
> however.

I think I'd like to see v1.1 of the lightning spec include
experimental/optional support for using secp256k1 public/private keys
for payment hashes/preimages. That assumes using either 2-party ECDSA
magic or script magic until it's viable to do it via Schnorr scriptless
scripts, but that seems like it's not totally infeasible? I think the
components would need to be:

- invoices: will the preimage for the hash be a secp256k1 private key
or a sha256 preimage? (or payer's choice?)
- channel announcements: do you support secp256k1 for hashes or just
sha256?
- node features: how do you support secp256k1? not at all (default),
via 2p-ecdsa, via script magic, (eventually) via schnorr, ...?

I think this is (close to) a necessary precondition for payment
decorrelation, AMP, and third-party verifiable proof-of-payment.

> Desired Status
> --
> Ideally, you could create one invoice which could be paid arbitrary many
> times, by different individuals.  eg. "My donation invoice is on my web
> page", or "I've printed out the invoice for a widget and stuck it to the
> widget", or "Pay this invoice once a month please".
>
> Also, you should be able to prove you've paid, in a way I can't just
> copy the proof and claim I paid, too, even if I'm the merchant, and that
> you agreed to my terms, eg. "I'm paying for 500 widgets to be shipped to
> Rusty in Australia".

So, I think at a high level the logic here goes:

1. Alice: "Buy a t-shirt from me for \$5!"
2. Bob: "Alice, I want to buy a t-shirt from you, here's \$5"
3. Alice: "Receipt: Bob bought a t-shirt from me"
4. Bob: "Your Honour, here's my receipt from Alice for a t-shirt, please
make her deliver on it!"

Going backwards; for the last step to be useful, the receipt has to be
a signature with Alice's public key -- if it were anything short of
that, Alice will claim Bob could have just made up all the numbers. For a
Schnorr sig, that means (R,s) with the vendor choosing R and not revealing
R's preimage as that would reveal their private key.

If both vendor and customer know R, then to get the signature, you need
the private key holder to reveal s which is just revealing the secp256k1
private key corresponding to S, calculated as:

S = R + H(P,R,"Bob bought a \$5 t-shirt from me")*P

where P is Alice's public key. If R is calculated via the Schnorr BIP's
recommendation, then r = H(p, "Bob bought a \$5 t-shirt from me") -- ie,
based on the private key and the message being signed.

Even if you calculate r differently, I don't think you can do this
without Bob and Alice interacting to get the nonce R prior to sending
the transaction, which seems effectively the same as having dynamic
invoice hashes, though.

Maybe querying for a nonce through the lightning network would make
sense though, which would allow the "invoice" to be static, and all the
dynamic things would be via lightning p2p? That step could perhaps be
combined with the 0 satoshi payment probes that Fabrice proposes in

https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-October/001484.html

but I think replying with a public nonce value would need a new message
type of some sort?

I think AMP is independent, other than also using secp256k1 preimages
rather than SHA256. I think AMP splits and joins are just:

- if you're joining incoming payments, don't forward until you've
got all the HTLCs, and ensure you can generate the secret for each
incoming payment from the single outgoing payment

- if you're splitting an incoming payment into many outgoing payments,
ensure you can claim the incoming payment from *any* outgoing
payments' secret

Which I think in practice just means knowing x_i for each input, and
y_j for each output other than the first, and verifying:

I_i = O_1 + x_i*G
O_j = O_1 + y_j*G

(this gives I_i = O_j + (x_i-y_j)*G and the corresponding secret being
i_i = o_j + x_i - y_j) allowing you to claim all incoming HTLCs given
the secret from any outgoing HTLC)

Cheers,
aj


```
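
The split/join relations at the end of the message above are easy to check numerically. Toy sketch: I stand in for secp256k1 with the multiplicative group mod a Mersenne prime (`g^k` playing the role of `k*G`, modular multiplication playing the role of point addition); the algebra is the same, the security obviously is not.

```python
# Toy sketch of the AMP split/join algebra above, using a multiplicative
# group stand-in for secp256k1 (assumed parameters, demo only).
p = 2**127 - 1          # Mersenne prime, the toy group modulus
g = 3                   # assumed generator for the demo

def point(k: int) -> int:          # plays the role of k*G
    return pow(g, k, p)

def add(P: int, Q: int) -> int:    # plays the role of P + Q
    return (P * Q) % p

o1 = 123456789          # secret of the first outgoing payment, O_1 = o1*G
x1, y2 = 1111, 2222     # offsets known to the splitting/joining node

O1 = point(o1)
I1 = add(O1, point(x1))            # I_1 = O_1 + x_1*G
O2 = add(O1, point(y2))            # O_2 = O_1 + y_2*G

# Given the secret o2 = o1 + y2 from output 2, the node claims input 1
# with i1 = o2 + x1 - y2, matching I_1 = O_2 + (x_1 - y_2)*G:
o2 = o1 + y2
i1 = o2 + x1 - y2
assert point(i1) == I1
assert I1 == add(O2, point(x1 - y2))
```

So learning the secret of any one outgoing HTLC is enough to claim every incoming HTLC, exactly as the mail argues.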

### Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts

```On 13 October 2018 7:12:03 pm GMT+09:00, Christian Decker wrote:
>Great find ZmnSCPxj, we can also have an adaptive scheme here, in which
>we start with a single update transaction, and then at ~90% of the
>available range we add a second. This is starting to look a bit like
>the DMC invalidation tree :-)
>But realistically speaking I don't think 1B updates is going to be
>exhausted any time soon, but the adaptive strategy gets the best of
>both worlds.
>
>Cheers,
>Christian
>
>On Fri, Oct 12, 2018 at 5:21 AM ZmnSCPxj wrote:
>
>> Another way would be to always have two update transactions,
>> effectively creating a larger overall counter:
>>
>> [anchor] -> [update highbits] -> [update lobits] -> [settlement]
>>
>> We normally update [update lobits] until it saturates.  If lobits
>> saturates we increment [update highbits] and reset [update lobits] to
>> the lowest valid value.
>>
>> This will provide a single counter with 10^18 possible updates, which
>> should be enough for a while even without reanchoring.
>>
>> Regards,
>> ZmnSCPxj
>>
>>
>> Sent with ProtonMail <https://protonmail.com> Secure Email.
>>
>> ‐‐‐ Original Message ‐‐‐
>> On Friday, October 12, 2018 1:37 AM, Christian Decker <
>> decker.christ...@gmail.com> wrote:
>>
>> Thanks Anthony for pointing this out, I was not aware we could
>> roll keypairs to reset the state numbers.
>>
>> I basically thought that 1billion updates is more than I would
>> ever do, since with splice-in / splice-out operations we'd be
>> re-anchoring on-chain on a regular basis anyway.
>>
>>
>>
>> On Wed, Oct 10, 2018 at 10:25 AM Anthony Towns wrote:
>>
>>> On Mon, Apr 30, 2018 at 05:41:38PM +0200, Christian Decker wrote:
>>> > eltoo is a drop-in replacement for the penalty based invalidation
>>> > mechanism that is used today in the Lightning specification. [...]
>>>
>>> Maybe this is obvious, but in case it's not, re: the locktime-based
>>> sequencing in eltoo:
>>>
>>>  "any number above 0.500 billion is interpreted as a UNIX timestamp,
>>>   and with a current timestamp of ~1.5 billion, that leaves about
>>>   1 billion numbers that are interpreted as being in the past"
>>>
>>> [...] per second for 4 months?) I think you could reset the locktime
>>> by rolling over to use new update keys. When unilaterally closing
>>> you'd need to use an extra transaction on-chain to do that roll-over,
>>> but you'd save a transaction if you did a cooperative close.
>>>
>>> ie, rather than:
>>>
>>>   [funding] -> [coop close / re-fund] -> [update 23M] -> [HTLCs etc]
>>> or
>>>   [funding] -> [coop close / re-fund] -> [coop close]
>>>
>>> you could have:
>>>   [funding] -> [update 1B] -> [update 23,310,561 with key2] -> [HTLCs]
>>> or
>>>   [funding] -> [coop close]
>>>
>>> You could repeat this when you get another 1B updates, making
>>> unilateral closes more painful, but keeping cooperative closes cheap.
>>>
>>> Cheers,
>>> aj
>>>
>>>
>>

Hmm - the range grows by one every second though, so as long as you don't go
through a billion updates per second, you can go to 100% of the range, knowing
that by the time you have to increment, you'll have 115% of the original range
available, meaning you never need more than two transactions (until locktime
overflows anyway) for the commitment, even at 900MHz transaction rates...

Cheers,
aj

--
Sent from my phone.

```
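
The highbits/lobits pair described in ZmnSCPxj's message is just one counter in base 10^9. A small sketch (constants are my assumptions from the thread: locktimes above 0.5 billion are timestamps, and roughly 1 billion of those read as "in the past"):

```python
# Sketch of the two-transaction eltoo counter: treat the (highbits, lobits)
# update pair as a single base-1e9 number, giving ~1e18 states.
LOCKTIME_THRESHOLD = 500_000_000   # nLockTime above this is a timestamp
RANGE = 1_000_000_000              # usable "in the past" timestamp values

def counter_locktimes(n: int) -> tuple[int, int]:
    """Map update number n to the (highbits, lobits) nLockTime values."""
    assert 0 <= n < RANGE * RANGE
    hi, lo = divmod(n, RANGE)
    return LOCKTIME_THRESHOLD + hi, LOCKTIME_THRESHOLD + lo

assert counter_locktimes(0) == (500_000_000, 500_000_000)
assert counter_locktimes(RANGE + 7) == (500_000_001, 500_000_007)
```

Incrementing lobits past saturation carries into highbits, which is exactly the "increment [update highbits] and reset [update lobits]" rule quoted above.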

### Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts

```(bitcoin-dev dropped from cc)

On Mon, Apr 30, 2018 at 05:41:38PM +0200, Christian Decker wrote:
> eltoo is a drop-in replacement for the penalty based invalidation
> mechanism that is used today in the Lightning specification. [...]

I think you can simplify eltoo further, both in the way the transactions
work and in the game theory ensuring people play fair.

In essence: rather than having a funding transaction spending to address
"X", and a set of ratcheting states that spend from-and-to the same
address "X", I think it's feasible to have a simpler ratchet mechanism:

(1) funding address: multisig by A and B as usual

(2) commit to state >=N by A

(3a) commit to state N by A after delay D; or
(3b) commit to state M (M>=N) by B

I believe those transactions (while partially signed, before posting to
the blockchain) would look like:

(1) pay to "2 A1 B1 2 OP_CHECKMULTISIG"

(2) signed by B1, nlocktime set to (N+E)
pay to "(N+E) OP_CLTV OP_DROP 2 A2a B2a 2 OP_CHECKMULTISIG"

(3a) signed by B2a, nSequence set to the channel pay to self delay,
nlocktime set to (N+E)
pays to the channel balances / HTLCs, with no delays or
revocation clauses

(3b) signed by A2a with SIGHASH_NOINPUT_UNSAFE, nlocktime set to (M+E)
pays to the channel balances / HTLCs, with no delays or
revocation clauses

You spend (2)+delay+(3a)+[claim balance/HTLC] if your counterparty
goes away.  You spend (2) and your counterparty spends (3b) if you're
both monitoring the blockchain. (3a) and (3b) should have the same tx
size, fee rate and outputs.

(A1, A2a are keys held by A; B1, B2a are keys held by B; E is
LOCKTIME_THRESHOLD; N is the current state number)

That seems like it has a few nice features:

- txes at (3a) and (3b) can both pay current market fees with minimal
risk, and can be CPFPed by a tx spending your own channel balance

- txes at (2) can pay a non-zero fee, provided it's constant for the
lifetime of the channel (to conform with the NOINPUT rules)

- if both parties are monitoring the blockchain, then the channel
can be fully closed in a single block, by (2)+(3b)+[balance/HTLC
claims], and the later txes can do CPFP for tx (2).

- both parties can claim their funds as soon as the other can, no
matter who initiates the close

- you only need 3 pre-signed txes for the current state; the txes
for claiming HTLCs/balances don't need to be half-signed (unless
you're doing them via schnorr scriptless scripts etc)

The game theory looks fine to me. If you're posting transaction (2), then
you can choose between a final state F, paying you f and your counterparty
b-f, or some earlier state N, paying you n, and your counterparty b-n. If
f>n, it makes sense for you to choose F, in which case your counterparty
is also forced to choose state F for (3b) and you're forced to choose F
for (3a). If n>f, then if you choose N, your counterparty will either
choose state F because b-f>b-n and you will receive f as before, or
will choose some other state M>N, where b-m>b-f, and you will receive
m<f; either way you can't do better than f, so choosing F is optimal.

> eltoo addresses some of the issues we encountered while specifying and
> implementing the Lightning Network. For example outsourcing becomes very
> simple since old states becoming public can't hurt us anymore.

The scheme above isn't great for (untrusted) outsourcing, because if
you reveal enough for an adversary to post tx (3b) for state N, then
they can then collaborate with your channel counterparty to roll you
back from state N+1000 back to state N.

With eltoo if they do the same, then you have the opportunity to catch
them at it, and play state N+1000 to the blockchain -- but if you're
monitoring the blockchain carefully enough to catch that, why are you
outsourcing in the first place? If you're relying on multiple outsourcers
to keep each other honest, then I think you run into challenges paying
them to publish the txes for you.

Thoughts? Apart from still requiring NOINPUT and not working with
adversarial outsourcing, this seems like it works nicely to me, but
maybe I missed something...

Cheers,
aj


```
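
The game-theory argument above reduces to a tiny calculation (my toy numbers, not from the mail): the party posting tx (2) picks a state, the counterparty then settles at any state at least that recent, and since the final state F is always reachable, the poster can never do better than posting F directly.

```python
# Toy sketch of the eltoo closing game: the counterparty always responds
# with the reachable state that maximizes its own share, so the poster's
# payout is the minimum balance over states >= the one it chose.
def close_payout(states: dict[int, int], chosen: int) -> int:
    """states maps state number -> poster's balance at that state."""
    reachable = {k: v for k, v in states.items() if k >= chosen}
    return min(reachable.values())

states = {1: 7, 2: 4, 3: 5}   # poster's balance per state; final F = 3, f = 5
best = max(close_payout(states, c) for c in states)
assert best == close_payout(states, 3) == 5   # posting F is always optimal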

### Re: [Lightning-dev] New form of 51% attack via lightning's revocation system possible?

```On Tue, Mar 13, 2018 at 06:07:48PM +0100, René Pickhardt via Lightning-dev wrote:
> Hey Christian,
> I agree with you on almost anything you said. However I disagree that
> in the lightning case it produces just another double spending. I wish
> to emphasize my statement that in the case of lightning such a 51%
> attack can steal way more BTC than double spending my own funds.

I think you can get a simpler example:

* I setup a channel, funding it with 10 BTC (ie, balance is 100% on my side)

* Someone else sets up a channel with me, funding it with 5 BTC
(balance is 100% on their side)

* I route 5 BTC to myself from the first channel through the second:
aj -> X -> ... -> victim -> aj
* I save the state that says I own all 5BTC in the victim <-> aj channel

* I route 5 BTC to myself from the second channel through the first:
aj -> victim -> ... -> X -> aj
* At this point I'm back to having 10 BTC (minus some small amount
of lightning fees) in the first channel

* I use 51% hashing power to mine a secret chain that uses the saved
state to close the victim<->aj channel. Once that chain is long enough
that I can claim the funds I do so. Once I have claimed the funds on
my secret chain and the secret chain has more work than the public
chain, I publish it, causing a reorg.

* At this point I still have 10 BTC in the original channel, and I have
the victim's 5 BTC.

I can parallelise this attack as well: before doing any private mining or
closing the victim's channel, I can do the same thing with another victim,
allowing me to collect old states worth many multiples of up to 10 BTC, and
mine them all at once, leaving me with my original 10 BTC minus fees, plus
up to n*10 BTC stolen from victims.

Of course, this attack depends on there already being a miner with >51%
hashpower, who has financial
interests in seeing lightning fail...

The main limitation is that it still only allows a 51% miner to steal
funds from channels they participate in, so creating channels with
identifiable entities with whom you have an existing relationship (as
opposed to picking random anonymous nodes) is a defense against this
attack. Also, if 51% of hashpower is mining in secret for an extended
period, that may be detectable, which may allow countermeasures to
be taken?

You could also look at this the other way around: at the point when
lightning is widely deployed, this attack vector seems like it gives an
immediate, personal, financial justification for large economic actors
to ensure that hash rate is very decentralised.

> In particular I could run for a decade on stable payment channels
> storing old state and at some point realizing it would be a really big
> opportunity secretly cashing in all those old transactions which can't be
> revoked.

(I'd find it surprising if many channels stayed open for a decade; if
nothing else, I'd expect deflation over that time to cause people to
want to close channels)

Cheers,
aj


```
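
A toy balance sheet for the attack sequence above (numbers from the example, routing fees ignored) makes the profit explicit: the attacker keeps the routed-back funds and replays a stale claim on the victim's channel.

```python
# Toy bookkeeping of the reorg attack described above.
chan1, chan2 = 10, 0                     # attacker's side of each channel, BTC

chan1, chan2 = chan1 - 5, chan2 + 5      # route 5 BTC chan1 -> chan2
saved_state = chan2                      # save state: attacker owns 5 in chan2
chan1, chan2 = chan1 + 5, chan2 - 5      # route the 5 BTC back, chan2 -> chan1

# secretly mine a chain that closes chan2 at the saved (revoked) state:
stolen = saved_state
assert chan1 == 10 and stolen == 5
assert chan1 + stolen == 15              # original 10 BTC plus the victim's 5
```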

### Re: [Lightning-dev] Proof of payment (Re: AMP: Atomic Multi-Path Payments over Lightning)

```On Tue, Feb 13, 2018 at 09:23:37AM -0500, ZmnSCPxj via Lightning-dev wrote:
> Good morning Corne and Conner,
> Ignoring the practical matters that Corne rightly brings up, I think,
> it is possible to use ZKCP to provide a "stronger" proof-of-payment in
> the sense that Conner is asking for.

I think Schnorr scriptless scripts work for this (assuming HTLC payment
hashes are ECC points rather than SHA256 hashes). In particular:

- Alice agrees to pay Bob \$5 for a coffee.

- Bob calculates a lightning payment hash preimage r, and payment hash
R=r*G. Bob also prepares a receipt message, saying "I've been paid \$5
to give Alice a coffee", and calculates a partial Schnorr signature
of this receipt (n a signature nonce, N=n*G, s=n+H(R+N,B,receipt)*b),
and sends Alice (R, N, s)

- Alice verifies the partial signature:
s*G = N + H(R+N,B,receipt)*B

- Alice pays over lightning conditional on receiving the preimage r of R.

- Alice then has a valid signature of the receipt, signed by Bob:
(R+N, r+s)

The benefit over just getting a hash preimage, is that you can use this to
prove that you paid Bob, rather than Carol or Dave, at some later date,
including to a third party (a small-claims court, tax authorities,
a KYC/AML audit?).

The nice part is you get that just by doing some negotiation at the
start, it's not something the lightning protocol needs to handle at all
(beyond switching to ECC points for payment hashes).

>  Original Message
>  On February 13, 2018 10:33 AM, Corné Plooy via Lightning-dev wrote:
> >Hi Conner,
> > I do believe proof of payment is an important feature to have,
> > especially for the use case of a payer/payee pair that doesn't
> > completely trust each other, but does have the possibility to go to court.

Cheers,
aj

```
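
The partial-signature check in the message above can be verified numerically. Toy sketch, with the multiplicative group mod a prime standing in for secp256k1 (`g^k` for `k*G`, modular multiplication for point addition) and an ad-hoc hash-to-scalar; all parameters are demo assumptions.

```python
# Toy sketch of the scriptless-script proof-of-payment above: Bob gives a
# partial signature that becomes a full receipt signature once Alice
# learns the payment preimage r.  Group is a stand-in, not secp256k1.
import hashlib

p = 2**127 - 1                # toy group modulus (Mersenne prime)
g = 3                         # assumed generator
q = p - 1                     # toy scalar order

def point(k: int) -> int:     # plays the role of k*G
    return pow(g, k, p)

def H(*parts) -> int:         # ad-hoc hash-to-scalar for the demo
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

b_key = 0xB0B                 # Bob's secret key, B = b*G
r, n = 1234, 5678             # payment preimage r (R = r*G) and nonce n
B, R, N = point(b_key), point(r), point(n)
receipt = "I've been paid $5 to give Alice a coffee"

e = H(R * N % p, B, receipt)  # H(R+N, B, receipt)
s = (n + e * b_key) % q       # Bob's partial signature

# Alice verifies s*G = N + H(R+N,B,receipt)*B before paying:
assert point(s) == N * pow(B, e, p) % p
# Once she learns r over lightning, (R+N, r+s) is a full signature:
assert point((r + s) % q) == (R * N % p) * pow(B, e, p) % p
```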

### [Lightning-dev] Post-Schnorr lightning txes

```Hi *,

My understanding of lightning may be out of date, so please forgive
(or at least correct :) any errors on my behalf.

I was thinking about whether Greg Maxwell's graftroot might solve the
channel monitoring problem (spoiler: not really) and ended up with maybe
an interesting take on Schnorr. I don't think I've seen any specific
writeup of what that might look like, so hopefully at least some of this
is novel!

I'm assuming familiarity with current thinking on Schnorr sigs -- but all
you should need to know is the quick summary at footnote [0].

So I think there's four main scenarios for closing a lightning channel:

- both parties are happy to close, do so cooperatively, and can
sign a new unconditional transaction that they agree on. already fine.
(should happen almost all of the time, call it 80%)

- communications failure: one side has to close, but the other side
is happy to cooperate as far as they're able but can only do so via
the blockchain and maybe with some delay (maybe 15% of the time)

- disappearance, uncooperative: one side effectively completely
disappears so the other side has to fully close the channel on their
own (5% of the time)

- misbehaviour: one side tries publishing an old channel state due to
error or maliciousness, and the other collects the entire balance as
penalty (0% of the time)

With "graftroot" in mind, I was thinking that optimising for the last
case might be interesting -- despite expecting it to be vanishingly
rare. That would have to look something like:

(0) funding tx
(1) ...which is spent by a misbehaving commitment tx
(2) ...which is spent by a penalty tx

You do need 3 txes for that case, but you really only need 1 output
for each: so (0) is 2-in-1-out, (1) is 1-in-1-out, (2) is 1-in-1-out;
which could all be relatively cheap. (And (2) could be batched with other
txes making it 1 input in a potentially large tx)

For concreteness, I'm going to treat A as the one doing the penalising,
and B (Bad?) as the one that's misbehaving.

If you treat each of those txes as a muSig Schnorr pay-to-pubkey, the
structure looks like:

(0) funding tx pays to [A,B]
(1) commitment tx pays to [A(i),Revocation(B,i)]
(2) pays to A

(where i is a commitment id / counter for the channel state)

If B misbehaves by posting the commitment tx after revealing the
revocation secret, A can calculate A(i) and Revocation(B,i) and claim
all the funds immediately.

As far as the other cases go:

- In a cooperative close, you don't publish any commitment txes, you
just spend the funding to each party's preferred destinations
directly; so this is already great.

- Otherwise, you need to be able to actually commit to how the funds
get distributed.

But committing to distributing funds is easy: just jointly sign
a transaction with [A(i),Revocation(B,i)]. Since B is the one we're
worrying about misbehaving, it needs to hold a transaction with the
appropriate outputs that is:

- timelocked to `to_self_delay` blocks/seconds in advance via nSequence
- signed by A(i)

That ensures A has `to_self_delay` blocks/seconds to penalise misehaviour,
and that when closing properly, B can complete the signature using the
current revocation secret.

This means the "appropriate outputs" no longer need the OP_CSV step, which
should simplify the scripts a bit.

Having B have a distribution transaction isn't enough -- B could vanish
between publishing the commitment transaction and the distribution
transaction, leaving A without access to any funds. So A needs a
corresponding distribution transaction. But because that transaction can
only be published if B signs and publishes the corresponding commitment
transaction, the fact that it's published indicates both A and B are
happy with the channel close -- so this is a semi-cooperative close and
no delay is needed. So A should hold a partially signed transaction with
the same outputs:

- without any timelock
- signed by Revocation(B,i), waiting for signature by A(i)

Thus, if B does a non-cooperative close, either:

- A proves misbehaviour and claims all the funds immediately
- A agrees that the channel state is correct, signs and publishes
the un-timelocked distribution transaction, then claims A's outputs;
B can then immediately claim its outputs
- A does nothing, and B waits for the `to_self_delay` period, signs
and publishes its transaction, then claims B's outputs; A can eventually
claim its own outputs

In that case all of the transactions except the in-flight HTLCs just look
like simple pay-to-pubkey transactions.

Further, other than the historical secrets no old information needs
to be retained: misbehaviour can be dealt with (and can only be dealt
with) by creating a new transaction signed by your own secrets and the
revocation information.

None of that actually relies on Schnorr-multisig, I think -- it could
be done today with normal 2-of-2 multisig as far as ```
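
The penalty step above has a simple algebraic core. Toy sketch (multiplicative-group stand-in, and naive key addition rather than real muSig aggregation): once B reveals the revocation secret for state i, A holds both halves of the commitment output's aggregate key and can sweep the funds alone.

```python
# Toy sketch of the penalty claim: naive A(i)+Revocation(B,i) aggregation
# in a stand-in group, NOT actual muSig over secp256k1.
p = 2**127 - 1                 # toy group modulus
g = 3                          # assumed generator

def point(k: int) -> int:      # plays the role of k*G
    return pow(g, k, p)

def agg(P: int, Q: int) -> int:  # naive key aggregation (point addition)
    return (P * Q) % p

a_i = 111                      # A's per-state secret, A(i) = a_i*G
rev_b_i = 222                  # B's revocation secret for state i
commit_key = agg(point(a_i), point(rev_b_i))

# After B posts state i having already revealed rev_b_i, A knows both
# secrets and can sign for the aggregate key by itself:
assert point((a_i + rev_b_i) % (p - 1)) == commit_key
```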