Re: [Lightning-dev] Liquidity Ads and griefing subtleties

2023-12-14 Thread Bastien TEINTURIER
> Simple protocols are ideal -- easier implementation, fewer bugs, less
> attack surface, etc.

I couldn't agree more! Let's keep complexity for the cases where it's
really necessary and brings enough value. I don't think CLTVs in this
case bring enough value to make up for the additional complexity.

> I think Keagan's idea of streaming lease fees makes a lot of sense
> here -- it's an ongoing incentive to keep the channel open.

I agree, and it could be as simple as having the seller publish a bolt
12 offer with recurrence in their liquidity ads. The buyer can then
regularly pay that offer through the standard bolt 12 flow, and the
seller would take that into account in its analysis of whether to keep
the channel open or not, and potentially add more liquidity.
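
To make that concrete, here is a rough sketch of the buyer side, in
python-like pseudo-code. The `pay_offer` helper is hypothetical (it stands
in for whatever bolt 12 payment API the wallet exposes), so this only
illustrates the flow, not a real implementation:

    import time

    def stream_lease_fees(offer, amount_msat, period_seconds, periods):
        # Pay the seller's recurring bolt 12 offer once per period; the
        # seller correlates these payments with the liquidity lease and
        # decides whether to keep the channel open (or add liquidity).
        for period in range(periods):
            pay_offer(offer, amount_msat)  # hypothetical bolt 12 helper
            time.sleep(period_seconds)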

This can even be done in two phases. We can deploy a very simple version
of liquidity ads without any CLTV or recurring payments attached, and in
a second step, once bolt 12 is widely deployed, add an optional bolt 12
offer to liquidity ads rates.

Thanks,
Bastien

Le mer. 13 déc. 2023 à 20:47, Matt Morehouse  a
écrit :

> If timelocks don't give us substantial benefit, let's avoid them.
> Simple protocols are ideal -- easier implementation, fewer bugs, less
> attack surface, etc.
>
> Notably, the current network works without locking anyone into
> channels.  Alice can open a channel with outbound liquidity to Bob.
> If the channel doesn't generate enough fees for Bob, it costs him
> nothing to close the channel and reallocate any funds on his side.
> There is no mechanism to lock Bob into the channel, and AFAIK no one
> has requested such a mechanism to be implemented.
>
> Can an analogous approach work for inbound liquidity?  Alice wants to
> open a channel with inbound liquidity from Bob.  As above Alice is
> willing to pay all transaction fees, so the only cost to Bob is the
> opportunity cost of his liquidity.  Instead of locking Bob into the
> channel, Alice can incentivize him to keep it open by (1) routing
> payments through the channel and/or (2) paying Bob directly.
>
> I think Keagan's idea of streaming lease fees makes a lot of sense
> here -- it's an ongoing incentive to keep the channel open.  The
> amount and frequency of payments can be agreed to during channel open,
> to provide some predictability for Alice and Bob.  But at any later
> point, either one can back out of the agreement with minimal cost if
> it is no longer mutually beneficial.
>
> I think such an approach would simplify the protocol and still achieve
> good results in practice due to the incentive structure -- everyone
> benefits more from cooperating than from trying to cheat each other.
> This is very similar to how the current network works.
>
>
> On Wed, Dec 13, 2023 at 12:59 PM Bastien TEINTURIER 
> wrote:
> >
> > Hey Keagan,
> >
> > Thanks for your feedback!
> >
> > > The question I have before we even get to the starting line of
> > > implementation is "What is actually being bought?".
> >
> > I fully agree, that is exactly why I created this thread and the
> > question I was asking in the initial post. And it's not obvious
> > what we actually want to buy, because different scenarios seem to
> > need slightly different kinds of inbound liquidity guarantees (in
> > the ideal case).
> >
> > > What timelocks ensure is that *if* liquidity exists within the channel
> > > that it isn't worth it for the seller to close, but it does very
> > > little to actually incentivize that liquidity being there in the first
> > > place.
> >
> > Yes, I fully agree with that (and the rest of your post). I don't think
> > timelocks are the right solution here. If a buyer is generating enough
> > fees for the seller from their lightning usage, the seller will have an
> > incentive to regularly add inbound liquidity towards the buyer, when it
> > makes sense in terms of on-chain fees or other operations. That provides
> > more utility to the buyer than protecting the remaining liquidity with a
> > timelock. We should not force this to happen at defined times, because
> > that would be incompatible with the unpredictability of on-chain fee
> > fluctuations.
> >
> > I think we have to accept that in order for lightning to provide the
> > most utility, we may need to buy excess liquidity when on-chain fees
> > are low, and wait for good mempool conditions to opportunistically
> > re-allocate liquidity (and that liquidity providers will have to
> > allocate in a way that makes the most sense for them as a whole, not
> > to each individual buyer, even though they will strive to satisfy
> > both whenever possible).
> >
> >

Re: [Lightning-dev] The remote anchor of anchor channels is redundant

2023-12-13 Thread Bastien TEINTURIER
Hi Peter,

> it is more efficient to just open the channel with a dust-sized
> balance on the to_remote output.

Yes, that would work: basically, whenever the `to_remote` output would
disappear, you add an anchor output instead, paid by the channel
initiator. It isn't only at channel creation time that you'd need this
though, if you want to support 0-reserve channels (for mobile wallets).
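
Here is a minimal sketch of that rule (illustrative names and thresholds,
not taken from the spec):

    DUST_LIMIT_SAT = 354   # assumed dust threshold for the output type
    ANCHOR_SAT = 330       # anchor output amount from the anchors spec

    def main_outputs(to_local_sat, to_remote_sat):
        outputs = []
        if to_local_sat >= DUST_LIMIT_SAT:
            outputs.append(("to_local", to_local_sat))
        if to_remote_sat >= DUST_LIMIT_SAT:
            outputs.append(("to_remote", to_remote_sat))
        else:
            # The to_remote output would disappear: add an anchor instead,
            # paid by the channel initiator, so the remote peer can still
            # CPFP the commitment transaction.
            outputs.append(("to_remote_anchor", ANCHOR_SAT))
        return outputs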

One issue with that dust `to_remote` approach is that if the dust
`to_remote` output isn't claimed by the peer (because it's too small), it
cannot be claimed by anyone else, which ends up polluting the UTXO set
*forever*, whereas anchor outputs can be claimed by anyone after 16
blocks, and people regularly sweep them in batches when the mempool is
empty.

It isn't entirely clear-cut which option would really be better. But
hopefully v3 provides a much cleaner way of achieving those results!

Cheers,
Bastien

Le mer. 13 déc. 2023 à 16:28, Peter Todd  a écrit :

> On Wed, Dec 13, 2023 at 02:27:13PM +0100, Bastien TEINTURIER wrote:
> > Hi Peter,
> >
> > No, we currently cannot get rid of the remote anchor in favor of the
> > main remote output. That was considered during the design of anchor
> > outputs, but that would create the following vulnerability.
> >
> > Alice opens a channel to Bob: Bob doesn't have a main output in the
> > commit tx yet. Alice sends HTLCs to Bob. Bob still doesn't have a main
>
> Obviously if there isn't a to_remote output, you need a way to CPFP.
>
> But even then the to_remote_anchor output is *still* unnecessary: it is
> more efficient to just open the channel with a dust-sized balance on the
> to_remote output. Either way you are giving up the dust amount. But a
> straightforward pubkey output is more efficient to spend, if needed. And
> of course, this doesn't require the remote anchor output implementation.
>
> > output in the commit tx yet. Bob sends `update_fulfill_htlc` and his
> > corresponding `commit_sig`, but Alice doesn't send `commit_sig` back
> > and broadcasts her commit tx. Bob needs to be able to claim the HTLCs
> > on-chain before they time out. Bob thus needs to ensure that Alice's
> > commit tx confirms, which requires having a remote anchor in it.
> >
> > Note that Bob cannot simply broadcast his own commit tx and use the
> > local anchor on it, because its feerate is exactly the same as Alice's
> > commit tx. Since we don't have package relay and Alice was the first to
> > broadcast, it's likely that Alice's commit tx won the race in every
> > mempool, so CPFP-ing Bob's commit tx won't help it replace Alice's.
> >
> > The only way to get rid of this would have been to rework HTLCs to
> > allow using them as "anchors", but that was a more complex change
> > with its own set of drawbacks.
> >
> > I'd rather wait for v3 transactions and package relay to move to a
> > single ephemeral anchor, which fixes this issue altogether.
>
> Yes, I'm doing an analysis of v3 transactions, which is how I came across
> this issue to begin with.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
>


Re: [Lightning-dev] The remote anchor of anchor channels is redundant

2023-12-13 Thread Bastien TEINTURIER
Hi Peter,

No, we currently cannot get rid of the remote anchor in favor of the
main remote output. That was considered during the design of anchor
outputs, but that would create the following vulnerability.

Alice opens a channel to Bob: Bob doesn't have a main output in the
commit tx yet. Alice sends HTLCs to Bob. Bob still doesn't have a main
output in the commit tx yet. Bob sends `update_fulfill_htlc` and his
corresponding `commit_sig`, but Alice doesn't send `commit_sig` back
and broadcasts her commit tx. Bob needs to be able to claim the HTLCs
on-chain before they time out. Bob thus needs to ensure that Alice's
commit tx confirms, which requires having a remote anchor in it.
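
The state described above can be summarized as follows (a sketch with
assumed names, not an implementation). Note that with `option_anchors`,
non-anchor outputs carry a `1 OP_CHECKSEQUENCEVERIFY`, so Bob cannot use
them to CPFP an unconfirmed commitment:

    # Alice's commit tx, as seen by Bob in the scenario above:
    alice_commit_outputs = [
        ("to_local", "Alice's funds, CSV-delayed"),
        ("htlc", "claimable by Bob with the preimage, but CSV 1"),
        ("to_remote_anchor", "spendable by Bob while unconfirmed"),
    ]

    def bob_can_cpfp(outputs):
        # Only the remote anchor is spendable before the commit tx
        # confirms, so it is Bob's only CPFP handle.
        return any(name == "to_remote_anchor" for name, _ in outputs)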

Note that Bob cannot simply broadcast his own commit tx and use the
local anchor on it, because its feerate is exactly the same as Alice's
commit tx. Since we don't have package relay and Alice was the first to
broadcast, it's likely that Alice's commit tx won the race in every
mempool, so CPFP-ing Bob's commit tx won't help it replace Alice's.

The only way to get rid of this would have been to rework HTLCs to
allow using them as "anchors", but that was a more complex change
with its own set of drawbacks.

I'd rather wait for v3 transactions and package relay to move to a
single ephemeral anchor, which fixes this issue altogether.

Thanks,
Bastien

Le mer. 13 déc. 2023 à 13:59, Peter Todd  a écrit :

> As per BOLT #3,
> https://github.com/lightning/bolts/blob/8a64c6a1cef979b3f0cecb00ba7a48c2d28b3588/03-transactions.md#commitment-transaction-construction
>
> 9) If option_anchors applies to the commitment transaction:
>    * if to_local exists or there are untrimmed HTLCs, add a
>      to_local_anchor output
>    * if to_remote exists or there are untrimmed HTLCs, add a
>      to_remote_anchor output
>
> For reference, both the remote and local anchor outputs have the
> following form:
>
> <local_funding_pubkey/remote_funding_pubkey> OP_CHECKSIG OP_IFDUP
> OP_NOTIF
>     OP_16 OP_CHECKSEQUENCEVERIFY
> OP_ENDIF
>
> In the event that a CPFP fee bump is necessary, it is not possible to use
> the to_local output because of the CSV delay that gives the remote party a
> chance to use the revocation pubkey:
>
> OP_IF
>     # Penalty transaction
>     <revocationpubkey>
> OP_ELSE
>     `to_self_delay`
>     OP_CHECKSEQUENCEVERIFY
>     OP_DROP
>     <local_delayedpubkey>
> OP_ENDIF
> OP_CHECKSIG
>
> However the to_remote output in anchor channels has a much simpler form,
> almost, but not quite, allowing the funds to be spent in a CPFP while
> unconfirmed:
>
> <remotepubkey> OP_CHECKSIGVERIFY 1 OP_CHECKSEQUENCEVERIFY
>
> This delay has no justified purpose, and indeed, non-anchor channels
> simply use a P2WPKH output spendable by the remotepubkey. Functionally,
> the output is identical to the remote anchor output, making it redundant;
> rather than use the to_remote_anchor output for CPFP, the to_remote could
> have remained a P2WPKH, and the to_remote output could have been used for
> CPFP directly.
>
>
> # Conclusion
>
> Having both remote and local anchor outputs was a design flaw that
> needlessly wastes chain space when using anchor outputs. This design flaw
> is doubly wasteful due to the tendency of Lightning implementations to
> always spend the CSV-delayed to_remote output immediately in a separate
> transaction to move the funds to a "normal" scriptPubKey, rather than
> treating them as a normal wallet output.
>
> Further work: when HTLCs are in flight, it may also be possible to omit the
> to_local anchor at the cost of additional implementation complexity with
> careful consideration of exactly who has the ability to spend the HTLCs.
>
>
> Credit goes to Matt Corallo for discussing this flaw with me and
> confirming its existence.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org


Re: [Lightning-dev] Liquidity Ads and griefing subtleties

2023-12-13 Thread Bastien TEINTURIER
Hey Keagan,

Thanks for your feedback!

> The question I have before we even get to the starting line of
> implementation is "What is actually being bought?".

I fully agree, that is exactly why I created this thread and the
question I was asking in the initial post. And it's not obvious
what we actually want to buy, because different scenarios seem to
need slightly different kinds of inbound liquidity guarantees (in
the ideal case).

> What timelocks ensure is that *if* liquidity exists within the channel
> that it isn't worth it for the seller to close, but it does very
> little to actually incentivize that liquidity being there in the first
> place.

Yes, I fully agree with that (and the rest of your post). I don't think
timelocks are the right solution here. If a buyer is generating enough
fees for the seller from their lightning usage, the seller will have an
incentive to regularly add inbound liquidity towards the buyer, when it
makes sense in terms of on-chain fees or other operations. That provides
more utility to the buyer than protecting the remaining liquidity with a
timelock. We should not force this to happen at defined times, because
that would be incompatible with the unpredictability of on-chain fee
fluctuations.

I think we have to accept that in order for lightning to provide the
most utility, we may need to buy excess liquidity when on-chain fees
are low, and wait for good mempool conditions to opportunistically
re-allocate liquidity (and that liquidity providers will have to
allocate in a way that makes the most sense for them as a whole, not
to each individual buyer, even though they will strive to satisfy
both whenever possible).

Curious to see what other people on the list think as well.

Thanks,
Bastien

Le mer. 13 déc. 2023 à 03:24, Keagan McClelland 
a écrit :

> Hey y'all,
>
> When these sorts of debates arise it prompts me to want to take a step
> back and do a more fundamental analysis of what's going on. The question I
> have before we even get to the starting line of implementation is "What is
> actually being bought?".
>
> It appears to me that what the buyer *really* wants is the persistent
> ability to receive payments up to a certain size. What enables this is the
> seller maintaining a minimum liquidity provision.
>
> An idealized version of this product would be the seller doing a JIT
> splice-in of whatever liquidity gets depleted when the buyer receives a
> payment. The longer the time between the payment and the re-provisioning,
> the worse the quality of service. It's not enough to guarantee that the
> integrated average liquidity on the seller's side exceeds that amount
> since, until the world economy moves towards streaming smaller payments
> more continuously, the payment size is the primary issue, not the total
> volume.
>
> Ok, that's an idealized version, but JIT splices on every payment would be
> ridiculous, since chain fees don't scale with payment size, so if the buyer
> is receiving micropayments this gets out of hand almost immediately. So we
> have to live with a world where for some duration after the payment is made
> from the seller to the buyer, the leased liquidity dips below what is
> promised.
>
> So how do timelocks actually fare as a means of guaranteeing that the
> buyer gets what they really want? I'm not sure they do very well. What
> timelocks ensure is that *if* liquidity exists within the channel that it
> isn't worth it for the seller to close, but it does very little to actually
> incentivize that liquidity being there in the first place. Instead, a
> different scheme would be needed to incentivize the sellers to keep the
> liquidity there. This is partially accomplished by the promise of routing
> fees. If the channel demonstrates a decent volume moving in the direction
> of the buyer, then the opportunity cost of keeping the liquidity on that
> channel is less pronounced. The lease fee is really more of a way to
> compensate the seller for the risk of low routing fee revenue during an
> initial demonstration period.
>
> As others have pointed out, those offering these leases have far more
> reputation risk than those who are buying them, so the ideal scheme would
> be one that makes it trivial for the buyer to prove the seller's
> impropriety and not one that makes it easy for the buyer to lock up the
> seller's liquidity. The risk the buyer incurs could be mitigated by just
> streaming the lease fee over the demonstration period. If they default on a
> payment, then the seller just closes the channel and as long as the buyer
> is responsible for paying the closing fees, the seller could sidestep
> griefing opportunities while not being in the position to use a small
> amount of liquidity to scam a large number of users in rapid succession.

Re: [Lightning-dev] Liquidity Ads and griefing subtleties

2023-12-11 Thread Bastien TEINTURIER
Hey Matt,

Thanks for brainstorming this, that's an interesting variation of what
I proposed for option 1. It is indeed simpler for bookkeeping, but there
are still some additional complications to figure out.

Do you think this should support multiple concurrent leases? I think it
shouldn't because the additional complexity isn't worth it. Maybe buying
a new lease should simply override the existing one? That isn't entirely
trivial either, because while we're waiting for the splice transaction
that contains the new lease to confirm, we need to keep track of the old
lease as well on commitment transactions based on the previous funding
output.

What should we do when the lease ends? The seller doesn't want to keep
the additional output, because it's costly for them if they need to
force-close. But how do we synchronize to update the commit tx, since
nodes may not be at exactly the same block height? We'll need to
introduce a new message for that and negotiate a quiescence session.

Also, if the seller splices in, do you agree that the amount should go
to their unencumbered output?

I've been prototyping the proposal I made previously, and I'm not very
satisfied with it. It's a lot of additional complexity that interacts
with many parts of the codebase (e.g. splicing, force-close management,
channel reserve), mostly linked to the addition of a new output to the
commitment transaction.

I'm less and less convinced that we should go down that road: sellers
will always have ways of being dishonest (by not relaying HTLCs, force
closing regardless of the CLTV, or getting back some of their funds
through pending HTLCs). I'm afraid we'll be adding a lot of complexity
to the protocol (which in practice, means compatibility bugs and force
closes) without much benefit. It would be a whole lot simpler to not
add any CLTV on the seller side. Sure, they can still take their funds
out whenever they want, but that will be reflected in the price. And if
you're an interesting buyer that generates routing activity, they'll
keep that liquidity around (most likely longer than the lease you were
ready to pay for). That better matches the dynamics of how people want
to allocate their liquidity efficiently. And if you pay for liquidity
and don't get it long enough, then it's fine, just don't buy from that
node again, you only lost a small amount in that process!

I know this is controversial, but I think it's hard to appreciate the
additional complexity that these new CLTV-based outputs add. We need
more code that implements this thoroughly (on top of an implementation
that supports splicing) to have informed opinions on whether this
additional complexity really makes sense.

Cheers,
Bastien

Le ven. 8 déc. 2023 à 23:32, Matt Morehouse  a
écrit :

> Unless I'm missing something, we can make option 2 work with CLTV
> enforcement as well.  In fact, I think that makes the bookkeeping much
> simpler.
>
> Suppose the leased amount is X.  No counters are needed.  All we need
> is to ensure the seller's first X sats in the commitment are always
> encumbered by the CLTV.  Anything above X can go to a second output
> that's unencumbered.  That's it.
>
> Here's an example.  Alice leases 10k sats from Bob and also puts up
> 10k sats to make the channel balanced.  The initial commitment
> transaction looks like this:
>
> Alice: 10k sats
> Bob: 10k sats with CLTV
>
> Bob offers a 2k sats HTLC1 to Alice.  The commitment becomes:
>
> Alice: 10k sats
> Bob: 8k sats with CLTV
> HTLC1: 2k sats
>
> HTLC1 is fulfilled:
>
> Alice: 12k sats
> Bob: 8k sats with CLTV
>
> Alice offers a 4k sats HTLC2 to Bob:
>
> Alice: 8k sats
> Bob: 8k sats with CLTV
> HTLC2: 4k sats
>
> HTLC2 is fulfilled.  Since Bob now has a total balance greater than
> 10k sats, the excess goes to an unencumbered output:
>
> Alice: 8k sats
> Bob: 10k sats with CLTV
> Bob: 2k sats
>
> Bob offers a 1k sats HTLC3 to Alice.  The sats always come out of
> Bob's unencumbered output first:
>
> Alice: 8k sats
> Bob: 10k sats with CLTV
> Bob: 1k sats
> HTLC3: 1k sats
>
> Bob offers a 3k sats HTLC4 to Alice.  The sats always come out of
> Bob's unencumbered output first.  The remaining sats come out of the
> encumbered output:
>
> Alice: 8k sats
> Bob: 8k sats with CLTV
> HTLC3: 1k sats
> HTLC4: 3k sats
>
> HTLC3 is fulfilled and HTLC4 is failed.  Bob's total balance will
> increase to 11k sats, so the first 10k sats are encumbered and the
> last 1k are unencumbered:
>
> Alice: 9k sats
> Bob: 10k sats with CLTV
> Bob: 1k sats
>
>
> Alice can never lock up more than 10k sats on Bob's side, since that
> was the agreed lease amount.  Bob can still play games with circular
> routing or force closing with HTLCs outstanding to unlock some of his
> liquidity early, but that is also the ca

Re: [Lightning-dev] Liquidity Ads and griefing subtleties

2023-12-08 Thread Bastien TEINTURIER
Hey all,

I'd like to detail a bit more my statement from the last email.

> I personally think it has to be the first option, because the second
> one exposes the seller to griefing

That is my current conclusion *if we want to provide some kind of lease
enforcement via CLTV*, and we want to ensure it protects both the buyer
and the seller equally.

But we can look at it from a different angle: if what people want to
buy is option 2, then we should find a way to make option 2 work. In
my opinion, option 2 would be best without any CLTV enforcement of the
lease, and relying only on incentives (and thus letting the buyer take
the risk that the seller splices out or force-closes).

But maybe then it doesn't even make sense to have lease durations? Maybe
we only need to provide a feature to buy X amount of inbound liquidity
now, and let the seller decide whether they want to keep that liquidity
available for a long time or move it elsewhere.

Cheers,
Bastien

Le ven. 8 déc. 2023 à 09:00, Bastien TEINTURIER  a écrit :

> Hey Matt,
>
> > The question then is really: do operators want to buy/sell X sats of
> > inbound flow or Y days of an open channel with an initial inbound
> > balance of X sats?
>
> Agreed, this is what we need to decide. I personally think it has to be
> the first option, because the second one exposes the seller to griefing
> by the attack described in my first email, which makes it impossible for
> the seller to find the right price for that channel because they don't
> know how much liquidity will actually end up being locked.
>
> But that's definitely up for debate if people feel otherwise!
>
> Thanks,
> Bastien
>
> Le jeu. 7 déc. 2023 à 22:18, Matt Morehouse  a
> écrit :
>
>> On Mon, Dec 4, 2023 at 9:48 AM Bastien TEINTURIER 
>> wrote:
>> >
>> > But I've thought about it more, and the following strategy may work:
>> >
>> > - store three counters for the seller's funds:
>> >   - `to_local`: seller's funds that are not leased
>> >   - `to_local_leased`: seller's funds that are leased
>> >   - `to_local_leased_owed`: similar to `to_local_leased`, without taking
>> > into account pending HTLCs sent by the seller
>> > - when the seller sends HTLCs:
> >> >   - deduct the HTLC amounts from `to_local_leased` first
> >> >   - when `to_local_leased` reaches `0 sat`, deduct from `to_local`
>> >   - keep `to_local_leased_owed` unchanged
>> > - when HTLCs sent by the seller are fulfilled:
> >> >   - deduct the HTLC amounts from `to_local_leased_owed`
>> > - when HTLCs sent by the seller are failed:
>> >   - add the corresponding amount to `to_local_leased` first
>> >   - once `to_local_leased = to_local_leased_owed`, add the remaining
>> > amount to `to_local`
>> > - when creating commitment transactions:
>> >   - if `to_local_leased` is greater than dust, create a corresponding
>> > output with a CLTV matching the lease
>> >   - if `to_local` is greater than dust, create a corresponding output
>> > without any CLTV lease
>>
>> Neat idea.  This changes the meaning of liquidity ads slightly -- the
>> liquidity is only leased for the one-way trip, and any channel balance
>> that comes back to the seller is not encumbered.  In other words,
>> instead of the channel itself being leased, only the initial inbound
>> liquidity is.  Once the cumulative flow to the buyer meets the leased
>> amount, the seller can reclaim any balance on their side without
>> penalty.  Of course if there's enough bidirectional flow happening,
>> the seller may choose to keep the channel open to earn more fees.
>>
>> The question then is really: do operators want to buy/sell X sats of
>> inbound flow or Y days of an open channel with an initial inbound
>> balance of X sats?
>>
>> >
>> > This computation must occur when sending/receiving `commit_sig`. The
>> > order in which we evaluate those updates matters: we must start with
>> > the `update_fulfill_htlc` updates before the `update_fail_htlc` ones,
>> > because we need to start by updating `to_local_leased_owed`. I believe
>> > that works, but it needs more analysis. Please try to break it, and let
>> > me know what you find!
>>
>> On first look, I think this works.  I'll study it closer if we decide
>> this is the direction we want to go.
>>
>> >
>> > We also need to handle concurrent leases. We want to support the
>> > following scenario:
>> >
>> > - Alice buys 10k sats from Bob for one month
>> > - 1 week later, the on-chain fees are very low: Alice buys

Re: [Lightning-dev] Liquidity Ads and griefing subtleties

2023-12-08 Thread Bastien TEINTURIER
Hey Matt,

> The question then is really: do operators want to buy/sell X sats of
> inbound flow or Y days of an open channel with an initial inbound
> balance of X sats?

Agreed, this is what we need to decide. I personally think it has to be
the first option, because the second one exposes the seller to griefing
by the attack described in my first email, which makes it impossible for
the seller to find the right price for that channel because they don't
know how much liquidity will actually end up being locked.

But that's definitely up for debate if people feel otherwise!

Thanks,
Bastien

Le jeu. 7 déc. 2023 à 22:18, Matt Morehouse  a
écrit :

> On Mon, Dec 4, 2023 at 9:48 AM Bastien TEINTURIER 
> wrote:
> >
> > But I've thought about it more, and the following strategy may work:
> >
> > - store three counters for the seller's funds:
> >   - `to_local`: seller's funds that are not leased
> >   - `to_local_leased`: seller's funds that are leased
> >   - `to_local_leased_owed`: similar to `to_local_leased`, without taking
> > into account pending HTLCs sent by the seller
> > - when the seller sends HTLCs:
> >   - deduct the HTLC amounts from `to_local_leased` first
> >   - when `to_local_leased` reaches `0 sat`, deduct from `to_local`
> >   - keep `to_local_leased_owed` unchanged
> > - when HTLCs sent by the seller are fulfilled:
> >   - deduct the HTLC amounts from `to_local_leased_owed`
> > - when HTLCs sent by the seller are failed:
> >   - add the corresponding amount to `to_local_leased` first
> >   - once `to_local_leased = to_local_leased_owed`, add the remaining
> > amount to `to_local`
> > - when creating commitment transactions:
> >   - if `to_local_leased` is greater than dust, create a corresponding
> > output with a CLTV matching the lease
> >   - if `to_local` is greater than dust, create a corresponding output
> > without any CLTV lease
>
> Neat idea.  This changes the meaning of liquidity ads slightly -- the
> liquidity is only leased for the one-way trip, and any channel balance
> that comes back to the seller is not encumbered.  In other words,
> instead of the channel itself being leased, only the initial inbound
> liquidity is.  Once the cumulative flow to the buyer meets the leased
> amount, the seller can reclaim any balance on their side without
> penalty.  Of course if there's enough bidirectional flow happening,
> the seller may choose to keep the channel open to earn more fees.
>
> The question then is really: do operators want to buy/sell X sats of
> inbound flow or Y days of an open channel with an initial inbound
> balance of X sats?
>
> >
> > This computation must occur when sending/receiving `commit_sig`. The
> > order in which we evaluate those updates matters: we must start with
> > the `update_fulfill_htlc` updates before the `update_fail_htlc` ones,
> > because we need to start by updating `to_local_leased_owed`. I believe
> > that works, but it needs more analysis. Please try to break it, and let
> > me know what you find!
>
> On first look, I think this works.  I'll study it closer if we decide
> this is the direction we want to go.
>
> >
> > We also need to handle concurrent leases. We want to support the
> > following scenario:
> >
> > - Alice buys 10k sats from Bob for one month
> > - 1 week later, the on-chain fees are very low: Alice buys 50k sats
> >   from Bob for 6 months
> > - 1 more week later, she buys 10k sats from Bob for one week
> >
> > We thus have three concurrent leases, with overlapping lease durations.
> > That's costly for the channel initiator, because we must add three new
> > outputs to the commitment transactions. But that part should be ok, the
> > seller should price that in its decision to fund the lease or not.
> >
> > I think we would need to have three `to_local_leased_owed` buckets, one
> > per lease. How do we order them, to decide from which bucket we take
> > funds first? I think the option that makes the most sense is to order
> > them by lease expiry (not by lease start or lease amount, which could
> > be unfair to the buyer).
> >
> > Would that create scenarios where one side can cheat? Are there issues
> > with channel reserve because of the increased commit tx weight?
> >
> > Cheers,
> > Bastien
> >
> > Le sam. 2 déc. 2023 à 03:23, ZmnSCPxj  a écrit :
> >>
> >> Halfway through, I thought "is it not possible to have two Bob-owned
> >> outputs that sum up to the total Bob-owned amount, with a CLTV on up to
> >> the amount that was purchased and the rest (if above dust limit) without
> >> a CLTV?"

Re: [Lightning-dev] Liquidity Ads and griefing subtleties

2023-12-04 Thread Bastien TEINTURIER
Hi Matt, Zman,

You're both suggesting that we should explore in more detail the idea I
mentioned in my post, where the seller's funds would be split into two
outputs, one with the CLTV lease and the other without it. I've already
discussed that option with Lisa, and I was under the impression that it
could be gamed to bypass the lease.

But I've thought about it more, and the following strategy may work:

- store three counters for the seller's funds:
  - `to_local`: seller's funds that are not leased
  - `to_local_leased`: seller's funds that are leased
  - `to_local_leased_owed`: similar to `to_local_leased`, without taking
into account pending HTLCs sent by the seller
- when the seller sends HTLCs:
  - deduct the HTLC amounts from `to_local_leased` first
  - when `to_local_leased` reaches `0 sat`, deduct from `to_local`
  - keep `to_local_leased_owed` unchanged
- when HTLCs sent by the seller are fulfilled:
  - deduct the HTLC amounts from `to_local_leased_owed`
- when HTLCs sent by the seller are failed:
  - add the corresponding amount to `to_local_leased` first
  - once `to_local_leased = to_local_leased_owed`, add the remaining
amount to `to_local`
- when creating commitment transactions:
  - if `to_local_leased` is greater than dust, create a corresponding
output with a CLTV matching the lease
  - if `to_local` is greater than dust, create a corresponding output
without any CLTV lease

This computation must occur when sending/receiving `commit_sig`. The
order in which we evaluate those updates matters: we must start with
the `update_fulfill_htlc` updates before the `update_fail_htlc` ones,
because we need to start by updating `to_local_leased_owed`. I believe
that works, but it needs more analysis. Please try to break it, and let
me know what you find!
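
To make the bookkeeping easier to review, here is the strategy above
transcribed into a python-like sketch (names assumed, amounts in sats,
no error handling):

    class SellerFunds:
        def __init__(self, leased, unleased):
            self.to_local = unleased
            self.to_local_leased = leased
            self.to_local_leased_owed = leased

        def send_htlc(self, amount):
            # Deduct from the leased bucket first, then from to_local;
            # to_local_leased_owed is intentionally left unchanged.
            taken = min(amount, self.to_local_leased)
            self.to_local_leased -= taken
            self.to_local -= amount - taken

        def htlc_fulfilled(self, amount):
            self.to_local_leased_owed -= amount

        def htlc_failed(self, amount):
            # Refill the leased bucket until it matches what is owed,
            # then return the remainder to the unleased funds.
            refill = min(amount, self.to_local_leased_owed - self.to_local_leased)
            self.to_local_leased += refill
            self.to_local += amount - refill

        def commitment_outputs(self, dust_limit):
            outputs = []
            if self.to_local_leased >= dust_limit:
                outputs.append(("to_local_leased", self.to_local_leased, "CLTV"))
            if self.to_local >= dust_limit:
                outputs.append(("to_local", self.to_local, None))
            return outputs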

We also need to handle concurrent leases. We want to support the
following scenario:

- Alice buys 10k sats from Bob for one month
- 1 week later, the on-chain fees are very low: Alice buys 50k sats
  from Bob for 6 months
- 1 more week later, she buys 10k sats from Bob for one week

We thus have three concurrent leases, with overlapping lease durations.
That's costly for the channel initiator, because we must add three new
outputs to the commitment transactions. But that part should be ok, the
seller should price that in its decision to fund the lease or not.

I think we would need to have three `to_local_leased_owed` buckets, one
per lease. How do we order them, to decide from which bucket we take
funds first? I think the option that makes the most sense is to order
them by lease expiry (not by lease start or lease amount, which could
be unfair to the buyer).
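
A sketch of that ordering, extending the counters above to one bucket per
lease (taking from the earliest-expiring lease first is an assumption on
my part, the opposite order is also defensible):

    def send_htlc_from_buckets(buckets, to_local, amount):
        # buckets: list of {"expiry": block_height, "leased": sats}
        for bucket in sorted(buckets, key=lambda b: b["expiry"]):
            taken = min(amount, bucket["leased"])
            bucket["leased"] -= taken
            amount -= taken
        return to_local - amount  # remainder comes from unleased funds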

Would that create scenarios where one side can cheat? Are there issues
with channel reserve because of the increased commit tx weight?

Cheers,
Bastien

Le sam. 2 déc. 2023 à 03:23, ZmnSCPxj  a écrit :

> Halfway through, I thought "is it not possible to have two Bob-owned
> outputs that sum up to the total Bob-owned amount, with a CLTV on up to the
> amount that was purchased and the rest (if above dust limit) without a
> CLTV?"
>
> so e.g. if the purchased amount is 2 units but the total channel capacity
> is 12 units:
>
> * Bob owns 0 units: no Bob outputs
> * Bob owns 1 unit: Bob has a CLTV-encumbered output of 1 unit
> * Bob owns 2 units: Bob has a CLTV-encumbered output of 2 units
> * Bob owns 3 units (assuming 1 unit is above dust limit):  Bob has:
>   * A CLTV-encumbered output of 2 units
>   * An ordinary output of 1 unit
> * etc.
>
> This locks up only the agreed-upon amount but lets Bob keep any amount
> above the rest.
>
> Alternately, only allow CLTV-locking if the buyer is not providing its own
> funds (i.e. pure inbound purchase).
> This is still effectively my original idea as then any funds Alice wants
> to add would be in a separate, unencumbered channel.
>
> Regards,
> ZmnSCPxj
>
>
> Sent with Proton Mail secure email.
>
> On Friday, December 1st, 2023 at 5:45 PM, Bastien TEINTURIER <
> bast...@acinq.fr> wrote:
>
>
> > Good morning list,
> >
> > I've been thinking a lot about liquidity ads recently, and I want to
> > highlight some subtleties that should be taken into account in the
> > protocol design. This is a rather long post, but I believe this is
> > important to make sure we get it right and strike the right balance
> > between protocol complexity and incentives design. Strap in, grab a
> > coffee, and enjoy the ride.
> >
> > First of all, it's important to understand exactly what you are buying
> > when paying for a liquidity ad. There are two dimensions here: an amount
> > and a duration. If Alice buys 100 000 sats of inbound liquidity from Bob
> > for one month, what exactly does that mean? Obviously, it means that Bob
> > will immediately add 100 000 sats (or more) to his side of the channel.

[Lightning-dev] Liquidity Ads and griefing subtleties

2023-12-01 Thread Bastien TEINTURIER
Good morning list,

I've been thinking a lot about liquidity ads recently, and I want to
highlight some subtleties that should be taken into account in the
protocol design. This is a rather long post, but I believe this is
important to make sure we get it right and strike the right balance
between protocol complexity and incentives design. Strap in, grab a
coffee, and enjoy the ride.

First of all, it's important to understand exactly what you are buying
when paying for a liquidity ad. There are two dimensions here: an amount
and a duration. If Alice buys 100 000 sats of inbound liquidity from Bob
for one month, what exactly does that mean? Obviously, it means that Bob
will immediately add 100 000 sats (or more) to his side of the channel,
and Alice will pay a fee proportional to that amount *and* that duration
(because locking funds for one month should cost more than locking funds
for one day). But is that really the only thing that Alice is buying?
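
As an aside on the fee dimension, here is a strawman of what "proportional
to that amount *and* that duration" could look like (purely illustrative
numbers and formula, not the actual liquidity ads rates):

    def lease_fee_sat(amount_sat, duration_blocks,
                      base_fee_sat=500, rate_ppm_per_block=1):
        # Fee grows with both the leased amount and the lease duration.
        variable = amount_sat * rate_ppm_per_block * duration_blocks // 1_000_000
        return base_fee_sat + variable

    # 100 000 sats for one month (~4032 blocks) costs 500 + 403 = 903 sats.
    assert lease_fee_sat(100_000, 4032) == 903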

The current spec proposal adds a CLTV of one month to the seller's
output in the commitment transactions. Without that CLTV, the seller
could accept the trade, and immediately close the channel to reuse the
funds elsewhere. This CLTV protects the buyer from malicious sellers.
But it is actually doing a lot more: what Alice has bought is actually
*any* liquidity that ends up on Bob's side, for a whole month. And the
issue is that this is impossible for Bob to price correctly, and can be
used for liquidity griefing attacks against him.

Imagine that Alice opens a 1 BTC channel with Bob and asks him to add
10 000 sats of inbound liquidity for a month. This sounds interesting
for Bob, because Alice is bringing a lot of funds in, so he can expect
payments to flow towards him which will bring him routing fees. And she
isn't asking for a lot of liquidity, so Bob can definitely spare that.
But then Alice sends all the channel's funds through Bob to Carol. This
means that Bob now has 1 BTC locked for the whole month, while Alice
only paid for 10 000 sats! He earned some routing fees, but that isn't
enough to make up for the long duration of the lock. If payments keep
flowing in both directions with enough velocity, this is great for both
Bob and Alice. But if the channel is then unused, or Alice just goes
offline, this was a very bad investment for Bob. With splicing, Bob
cannot even know beforehand how much liquidity is potentially at risk,
because Alice may splice funds in after paying for the liquidity ad.
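
The numbers in that example make the asymmetry obvious (a quick sanity
check, nothing more):

    leased_sat = 10_000              # what Alice actually paid for
    locked_sat = 100_000_000         # Bob's side after Alice pushes 1 BTC out
    print(locked_sat // leased_sat)  # 10000: Bob locks 10 000x the lease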

If Alice pays for a 10 000 sats lease, we only want those 10 000 sats
to be encumbered with a CLTV. But this is actually not enforceable. We
could create a separate output in the commitment transaction with the
leased funds and a CLTV, while keeping the rest of the seller's funds in
a normal output that doesn't have a CLTV. But then what happens when
HTLCs are relayed and then failed? To which output should we add the
funds back? Any strategy we use here can be exploited either by the
seller to drain the leased funds back to its non-CLTV-locked output,
or by the buyer to keep funds in the CLTV-locked output forever.
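
To illustrate why, here are the two obvious refund policies for a failed
HTLC, in a python-like sketch (assumed names); each one is exploitable by
one side, as described above:

    def refund_to_unlocked(cltv_locked, unlocked, amount):
        # Failed HTLC amounts go back to the unencumbered output: the
        # seller can send HTLCs and let them fail to drain the leased
        # funds out of the CLTV-locked output.
        return cltv_locked, unlocked + amount

    def refund_to_locked(cltv_locked, unlocked, amount):
        # Failed HTLC amounts go back to the CLTV-locked output: the
        # seller's funds can end up trapped there forever, which the
        # buyer can engineer with circular payments.
        return cltv_locked + amount, unlocked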

Adding a CLTV thus protects the buyer at the expense of the seller. In
some cases this is ok: if you are a new seller who wants to attract
buyers, you may be willing to take that risk. But in most cases, the
seller is going to be more reputable than the buyer, and has more
incentives to behave correctly than the buyer. When buying liquidity,
you will likely look at the network's topology, and buy from nodes that
are well-connected, or known to be reliable. Or if you are a mobile
wallet user, you'll be buying from your LSPs, who have incentives to
behave honestly to earn fees and attract new users. In those scenarios,
the buyers will very often be new nodes, without any public channels,
which means that the seller has no way of knowing what their incentives
are beforehand. It thus makes more sense to protect the seller than to
protect the buyer. For those use-cases, the simplest solution is to not
add a CLTV at all: the buyer is taking the risk that the seller removes
the liquidity before the end of the lease. But if that happens, they'll
just mourn their lost fees, blacklist that node, and move on. There will
be a lot more buyers than sellers in that market, so I believe that this
model makes sense for most large public nodes.

I think that both use-cases should be supported, so I suggest making the
CLTV enforcement of the lease optional. It will be up to each seller to
decide whether they are willing to take the risk of locking their funds
to attract buyers or not. Sellers can (should) price that additional
risk in their advertised rates.

In the case where the CLTV is enforced, splicing brings in additional
subtleties. We should prevent the seller from making any splice-out,
otherwise they would effectively be bypassing the lease CLTV. But we
should allow the seller to splice-in, and the buyer should still be
allowed to splice-in and splice-out. 
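
The resulting policy is simple enough to state in code (a sketch with
assumed names):

    def splice_allowed(requester_is_seller, is_splice_out, lease_active):
        # During an active lease, only the seller's splice-outs must be
        # rejected: they would bypass the lease CLTV. Everything else
        # (seller splice-in, buyer splice-in/out) remains allowed.
        return not (lease_active and requester_is_seller and is_splice_out)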

Re: [Lightning-dev] Mailing List Future

2023-11-27 Thread Bastien TEINTURIER
Hey Matt,

That sounds good to me! Let's settle that during the next spec meeting.
Thank you for the option of hosting a mailing list instance, I'd be happy
to moderate it to share some of the burden.

Bastien

Le dim. 26 nov. 2023 à 17:51, Matt Corallo  a
écrit :

> During the last meeting it came up that the mailing list here will likely
> shut down somewhere around the end of the year. We listed basically the
> following options for future discussion forums:
>
> * google groups as a mailing list hoster. One question was whether it's
> friendly to subscribing without a gmail account, which may be limiting to
> some.
> * github discussions on the lightning org. One question is whether the
> moderation tools here are sufficient.
> * Someone (probably me) host a mailman instance and we use another
> mailing list. I dug into this a bit and am happy to do this, on the one
> condition that the ML remains fully moderated, though this doesn't seem
> like a substantial burden today. One question is if spam-foldering will
> be an issue, but with full moderation I'm pretty confident this will be
> tolerable, at least for those with email hosted anywhere but Microsoft
> lol.
> * A discourse instance (either we host one or we use delvingbitcoin,
> which AJ hosts and has previously offered to set up a lightning section
> on).
>
> There was some loose discussion, but I'm not sure there's going to be a
> clear conclusion. Thus, I think we should simply vote at the next meeting
> after a time-boxed minute or two discussion. If anyone has any thoughts or
> would like to have their voice heard, they can join the meeting in a week
> and a day or can respond here and I'll do my best to repeat the views
> expressed on the call.
>
> Matt


Re: [Lightning-dev] Liquidity Ads: Updated Spec Posted, please review

2023-11-22 Thread Bastien TEINTURIER
Hey Lisa,

> One drawback is that it adds another reason an RBF attempt
> might be rejected.

That's true, but not including it is also a reason for an RBF attempt
to be rejected! If the seller changed their mind (because the tx took
too much time to confirm and they want to allocate their liquidity
elsewhere), they're incentivized to reject the RBF attempt and wait for
an opportunity to double-spend. Whereas now they have an opportunity to
ask for higher rates or to remove liquidity from this new funding tx.

> - leave the hole, tell lessors to restrict their HTLC inflight value

I'd favor that option as well. I don't think we can fully solve the
issue of a liquidity seller not playing honestly. They could simply
refuse to relay HTLCs, and you couldn't catch them doing so.

Restricting exposure and good incentives to behave correctly are most
likely better than over-engineering a complex protocol that cannot fix
every hole anyway!

Cheers,
Bastien

Le mar. 21 nov. 2023 à 19:09, niftynei  a écrit :

> > Each RBF attempt renegotiates a potential liquidity purchase,
> > independently of what the previous attempts contained.
>
> You're right, I'm missing the RBF spec! Thanks for the reminder.
>
> I think having the lease renegotiated makes sense, it gives
> maximum flexibility and allows the API for an open/splice/rbf
> to all be equivalent.
>
> One drawback is that it adds another reason an RBF attempt
> might be rejected.
>
> > It could work if that 2nd-stage transaction did not require a signature
> from the remote peer, or if we could directly express the additional
> CLTV constraint in the output without requiring a 2nd-stage transaction
> at all. But I'm not sure this can be done...
>
> Ah I think unfortunately you're right about this. Needing a peer sig
> for *their* commitment tx isn't something we can do with the current
> protocol. Drats.
>
> That leaves a few options, I think
>
> - leave the hole, tell lessors to restrict their HTLC inflight value
> - extend any payment htlc through the channel by the lease length
> - rearchitect the commitment flow
>
>
> From a time management perspective, option 1 is the most expedient.
>
> We can always rearchitect the commitment flow to remove this hole in
> the future, perhaps along with another upgrade that simplifies or needs
> the same thing? (synchronous commitment flow; PTLCs; eltoo?)
>
> ~nifty
>
>
> On Tue, Nov 21, 2023 at 4:33 AM Bastien TEINTURIER 
> wrote:
>
>> Hey Lisa,
>>
>> Thanks for the update! I believe that moving to CLTV instead of CSV is
>> definitely the right move here.
>>
>> Regarding the newly added 2nd-stage lease locked transactions, I don't
>> think that works. The issue is that you don't have an opportunity to
>> receive signatures for those transactions with the current message flow.
>> When you send `commit_sig`, the remote node will have a new commitment
>> transaction that they can immediately broadcast, but you won't have
>> their signatures if you need to claim your leased HTLC outputs. PTLCs
>> have the same kind of issue, and we resolved them by adding new messages
>> to the protocol flow, which I think would be overkill here.
>>
>> It could work if that 2nd-stage transaction did not require a signature
>> from the remote peer, or if we could directly express the additional
>> CLTV constraint in the output without requiring a 2nd-stage transaction
>> at all. But I'm not sure this can be done...
>>
>> My other main feedback is about RBF. It currently isn't specified what
>> behavior RBF attempts should have: should they honor the previous rates
>> or not? I believe we should add the new funding tlv fields to the RBF
>> messages (`tx_init_rbf` and `tx_ack_rbf`). Each RBF attempt renegotiates
>> a potential liquidity purchase, independently of what the previous
>> attempts contained. That will work better with splicing, where an RBF
>> attempt may result in a very different liquidity allocation than the
>> other pending splice transactions. I detailed that a bit more in my
>> comment on the spec PR [1].
>>
>> I'm actively working on implementing this in eclair, as I believe this
>> is a very important feature for the network. Thanks again for pushing
>> this spec forward!
>>
>> Cheers,
>> Bastien
>>
>> [1] https://github.com/lightning/bolts/pull/878#issuecomment-1814006160
>>
>> Le lun. 20 nov. 2023 à 20:05, niftynei  a écrit :
>>
>>> Hi all.
>>>
>>> The original Liquidity Ads spec draft[1] was posted a few years ago and
>>> implemented and shipped in core-lightning's v0.10.1 (Aug 2021).
>>>

Re: [Lightning-dev] Lightning Address in a Bolt 12 world

2023-11-21 Thread Bastien TEINTURIER
Hi Andy,

> Also, we might want to make it explicit in the spec that you can't
> have duplicate records? Many DNS records allow multiple for
> redundancy. Is that desired here?

Agreed, this should be made explicit in the bLIP. I don't see a reason
to allow duplicate records, so we should require uniqueness.

> Is there any problem allowing a different user to have a different
> blinded path? This not only helps with scalability, but say someone
> want's to have a domain that is shared by say 5 users, but all those
> users want to run their own node.

I'm not sure which option you're commenting on here. In option 1, you
can't have different blinded paths per user, since you have a single
DNS record that just points to the domain owner's node. In option 3,
there is already one record per user, and the user chose the blinded
path themselves. If the end user (payment recipient) wants to handle
this with their own node and control the blinded path, I think they'll
need to have their own domain and use option 3.

I may be misunderstanding your point though, let me know if that seems
to be the case.

Thanks,
Bastien

Le lun. 20 nov. 2023 à 17:32, Matt Corallo  a
écrit :

>
>
> On 11/20/23 6:53 AM, Andy Schroder wrote:
> >>
> >>> - I would omit suggesting to use DoH from the spec. DoH seems a bit
> >>> centralized to me and that's up to the client to decide what to do.
> >>> DNS itself is a hierarchically distributed system, so there is
> >>> redundancy built into it (which has its flaws at the root nameserver /
> >>> ICANN level) and it seems to me like DoH is taking much of that
> >>> distributed design away. It seems like if you are concerned about your
> >>> ISP snooping your traffic, you should use a tunnel so that your traffic
> >>> is obfuscated; that way things are done at the IP level and not way up
> >>> at the HTTPS level. Are you resorting to DoH because many ISPs block
> >>> DNSSEC records traffic through their networks? Either way, how you
> >>> query DNS seems like something that should be left up to the client
> >>> and not really part of the spec.
> >>
> >> It is, but it's worth mentioning in large part because almost certainly
> >> ~all implementations will use it. While I agree that it'd be very nice
> >> to not use it, in order to do so clients would need to (a) actually be
> >> able to query TXT records, which isn't in standard operating system
> >> libraries, so would probably mean DoH to 127.0.0.53 or so, (b) trust the
> >> resolver's DNSSEC validation, which means having some confidence it's
> >> local, and not a coffee shop/etc.
> >>
> >> Given the level of trust you have to have here in the DNS resolution,
> >> it's almost certainly best to cross-validate with at least multiple DoH
> >> services, unless you are validating the DNSSEC chain yourself (which I'd
> >> really strongly prefer as the best solution here, but I'm unaware of any
> >> open source code to do so).
> >
> > delv, part of bind9, does recursive DNSSEC validation locally:
> > https://manpages.ubuntu.com/manpages/jammy/en/man1/delv.1.html
>
> Sadly this doesn't really solve the issue. Lightning nodes need to be able
> to get a DNSSEC tree in a cross-platform way (which "just call delv" is
> not), ideally without relying on sending UDP directly at all. What this
> really means is that we'll eventually want to use the RFC 9102 CHAIN
> serialization format and put that in the node_announcement, but to do that
> we need some kind of (cross-platform library) client software which can
> take a serialized CHAIN and validate it. I'm unaware of any such software,
> though in theory it shouldn't be that hard to write.
>
> Matt


Re: [Lightning-dev] Liquidity Ads: Updated Spec Posted, please review

2023-11-21 Thread Bastien TEINTURIER
Hey Lisa,

Thanks for the update! I believe that moving to CLTV instead of CSV is
definitely the right move here.

Regarding the newly added 2nd-stage lease locked transactions, I don't
think that works. The issue is that you don't have an opportunity to
receive signatures for those transactions with the current message flow.
When you send `commit_sig`, the remote node will have a new commitment
transaction that they can immediately broadcast, but you won't have
their signatures if you need to claim your leased HTLC outputs. PTLCs
have the same kind of issue, and we resolved them by adding new messages
to the protocol flow, which I think would be overkill here.

It could work if that 2nd-stage transaction did not require a signature
from the remote peer, or if we could directly express the additional
CLTV constraint in the output without requiring a 2nd-stage transaction
at all. But I'm not sure this can be done...
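
For reference, the problematic sequence looks like this (a simplified
sketch of the flow, assumed structure):

    flow = [
        ("Bob -> Alice", "commit_sig: signs Alice's new commitment tx"),
        ("Alice", "can broadcast that commitment immediately"),
        # Gap: nothing in the current flow delivers Alice's signatures
        # for 2nd-stage 'lease locked' txs spending her commitment.
        ("Alice -> Bob", "revoke_and_ack"),
    ]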

My other main feedback is about RBF. It currently isn't specified what
behavior RBF attempts should have: should they honor the previous rates
or not? I believe we should add the new funding tlv fields to the RBF
messages (`tx_init_rbf` and `tx_ack_rbf`). Each RBF attempt renegotiates
a potential liquidity purchase, independently of what the previous
attempts contained. That will work better with splicing, where an RBF
attempt may result in a very different liquidity allocation than the
other pending splice transactions. I detailed that a bit more in my
comment on the spec PR [1].

I'm actively working on implementing this in eclair, as I believe this
is a very important feature for the network. Thanks again for pushing
this spec forward!

Cheers,
Bastien

[1] https://github.com/lightning/bolts/pull/878#issuecomment-1814006160

Le lun. 20 nov. 2023 à 20:05, niftynei  a écrit :

> Hi all.
>
> The original Liquidity Ads spec draft[1] was posted a few years ago and
> implemented and shipped in core-lightning's v0.10.1 (Aug 2021).
>
> We received some great comments and feedback on the initial design,
> and I've since updated the spec to incorporate some of these changes.
>
> Big thanks to everyone that's already spent time reviewing it.
>
> The updated proposal hasn't been implemented in CLN yet, however I wanted
> to solicit some early feedback, particularly around the usage of CLTV and
> the introduction of a new 'second stage' transaction to help gate the
> leasor's funds for the duration of the lease.
>
> You can find the draft here: https://github.com/lightning/bolts/pull/878
>
> Here's an overview of notable changes.
>
> ### CSV to CLTV
> The original proposal used a constantly updated blockheight to lock up
> funds of the leasor with a CSV. We reused the CSV lock that was introduced
> by anchor outputs to add this.
>
> This created a dependency on anchor outputs, as well as added complexity
> around commitment transaction updates. It required constant updates to
> decrement the CSV lock as time went on.
>
> The HTLC outputs of the leasor in the remote's (leasee's) commitment
> transaction weren't encumbered with a timelock. This means that if the
> leasor convinced their peer into force closing the channel, any funds in
> inflight HTLCs would be available to them prior to the end of the agreed
> upon lease period.
>
> This new proposal switches from CSV to CLTV, and adds a CLTV lock on every
> output which goes to the leasor.
>
> For the above case of HTLCs in the leasee's commitment transaction, we
> can't add an additional CLTV directly to the script, as this would impact
> the timeout calculation for every payment routed through a leased channel.
> Instead, we introduce the concept of a "lease locked" transaction. These
> are almost identical to HTLC transactions, with the exception that they
> only exist on the commitment transaction where the leasor is remote.
>
> This change is more complex in terms of onchain handling, but it closes
> all possible avenues for the leasor gaining access to their funds onchain
> ahead of the lease end.
>
> Credit to @morehouse for identifying this and the proposed fix.
>
> For a more in-depth exploration of this change, please see the relevant
> proposed commit. [2]
>
> ### Variable Lease Terms
>
> Another change we've made is allowing the lease length to be specified by
> the node asking for the lease. Previously, all leases were fixed at about
> a month, or 4032 blocks.
>
> This allows for a more dynamic pricing mechanism on the seller's side, as
> they can tailor the rates that they return back in
> `accept_tlv.lease_rates` to account for the desired lease length of the
> opener. (Generally, I'd anticipate longer leases would command a higher
> price.)
>
> The channel fee cap commitments have been updated to specify a range of
> blocks for which the commitment is valid.
>
> ### Channel Fee Caps
>
> The channel fee caps were originally specified to be in increments of 1k
> ppm in the liquidity advertisement in the 

Re: [Lightning-dev] Lightning Address in a Bolt 12 world

2023-11-20 Thread Bastien TEINTURIER
> >
> > - I think we should say that we cannot verify the offer *if* Bob does
> > not self host and uses an LSP. If Bob self hosts, we know it's from Bob
> > if DNSSEC validates and the root nameservers and the tld nameservers
> > are honest.
> >
> > - I think there should be a QR code format that accompanies this so
> > that phone apps can easily validate the path (or for option 3 below,
> > the offer).
> >
> >
> > ## Option 2
> >
> >
> > - Seems to be a bad idea to me. You are relying on certificate
> > authorities to prove the ownership of a domain? The certificate
> > authorities are not an authority on domain ownership. Also, it seems to
> > me like certificate authorities are a major weak link because if *any*
> > certificate authority in your local trust database becomes faulty,
> > *all* certificates can no longer be trusted.
> >
> > - This approach seems *very* unscalable because it requires the
> > announcements for all domains to be gossiped to everyone? I think that
> > there needs to be a decentralized DNS that is created, but this seems
> > to be headed in the wrong direction. We should be able to learn from
> > some of the hierarchical features of legacy DNS and build a truly
> > decentralized "root", which will be more efficient.
> >
> >
> >
> >
> > ## Option 3
> >
> > - "The statement "Note that Alice cannot verify that the offer she
> receives is really from Bob" can
> > apply to this option too, right?
> >
> >
> > Andy Schroder
> >
> > On 11/16/23 08:51, Bastien TEINTURIER wrote:
> >> Good morning list,
> >>
> >> Most of you already know and love lightning addresses [1].
> >> I wanted to revisit that protocol, to see how we could improve it and
> >> fix its privacy drawbacks, while preserving the nice UX improvements
> >> that it brings.
> >>
> >> I have prepared a gist with three different designs that achieve those
> >> goals [2]. I'm attaching the contents of that gist below. I'd like to
> >> turn it into a bLIP once I collect enough feedback from the community.
> >>
> >> I don't think we should select and implement all three options. They
> >> show that we have a large enough design space, but I think we should
> >> aim for simplicity of implementation and deployment. My personal choice
> >> would be to do options 1 and 3: clients (mobile wallets) would first
> >> make a DNS request corresponding to option 3, and if that fails, they
> >> would fallback to option 1. Domain owners would implement only one of
> >> those two options, depending on their DNS capabilities.
> >>
> >> Curious to hear your thoughts!
> >>
> >> Many thanks to Rusty and Matt who reviewed early drafts of that gist.
> >>
> >> [1] https://lightningaddress.com/
> >> [2] https://gist.github.com/t-bast/78fd797a7da570d293a8663908d3339b
> >>
> >> # Lightning Address
> >>
> >> [Lightning Address](https://lightningaddress.com/) is a very popular
> protocol that brings UX improvements that users love.
> >> We'd like to provide those UX benefits without its privacy and security
> drawbacks.
> >>
> >> ## Issues with the current lightning address protocol
> >>
> >> As described [here](
> https://github.com/andrerfneves/lightning-address/blob/master/README.md),
> the lightning address protocol requires payment senders to make an HTTP
> request to the recipient's domain owner.
> >> This has some inconvenient side effects:
> >>
> >> 1. The payment sender reveals their IP address to the recipient's
> domain owner, who knows both the sender and the recipient.
> >> 2. The domain owner can swap invoices to steal some of the payment.
> >> 3. It introduces a dependency on DNS servers and the need for an HTTP
> stack on the sender side.
> >>
> >> We can do better and fix or mitigate some of these issues, without
> compromising on UX.
> >> We need two somewhat distinct mechanisms:
> >>
> >> 1. A way to privately obtain the `node_id` associated with a given
> domain.
> >> 2. A way to privately contact that domain to obtain the recipient's
> payment details.
> >>
> >> ## User story
> >>
> >> Alice wants to pay `b...@domain.com` without any other prior
> information.
> >> She doesn't want to reveal:
> >>
> >> * her identity to Bob (payment sender privacy)
> >> * her identity to 

Re: [Lightning-dev] Lightning Address in a Bolt 12 world

2023-11-17 Thread Bastien TEINTURIER
Hi Tony,

> For completeness, would you be willing to demonstrate what it might
> look like if it were bolt12 in the normal LNURL way?

Not sure that would provide "completeness", but I guess it would work
quite similarly: instead of putting data in DNS records, that data would
live in files on the service provider's web server and be fetched over
HTTPS, thus revealing the user's IP address and who they want to pay.

> At scale, that would be much more difficult for LNURL service
> providers to implement for their potentially thousands to millions
> of users.

Why would that be the case? I was told handling a few million entries in
a zonefile isn't a challenge at all. And it is only necessary if the
service provider absolutely wants to only support option 3. With option
1, the service provider has a single DNS record to create. If the
service provider doesn't need to hide its node_id, the blinded path can
be empty, which guarantees that the record never expires (unless they
want to change their node_id).

On the client-side, this is very simple as well: clients should use DoH,
so they simply make HTTPS requests (no need for deep integration in the
DNS stack). Clients should first try option 3, and if that query doesn't
return a result, they fallback to option 1. This only needs to happen
once in a while, after that they can save the offer in their contact
list and reuse it until it expires, at which point they make the queries
again.
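
To make that lookup order concrete, here is a minimal sketch (the
per-user record name and the `doh_txt_lookup` helper are hypothetical,
not final):

```python
# Sketch: resolve a lightning address by first trying a per-user record
# (option 3), then falling back to the domain-wide record (option 1).
# `doh_txt_lookup` is assumed to return the TXT value or None.
def resolve_address(address: str, doh_txt_lookup) -> tuple[str, str]:
    user, domain = address.split("@")
    offer = doh_txt_lookup(f"{user}.user._lnaddress.{domain}.")
    if offer is not None:
        return ("offer", offer)  # option 3: offer published directly in DNS
    path = doh_txt_lookup(f"_lnaddress.{domain}.")
    if path is not None:
        return ("blinded_path", path)  # option 1: contact the node via onion message
    raise ValueError(f"no lightning address records for {domain}")
```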

Cheers,
Bastien

Le jeu. 16 nov. 2023 à 18:52, Tony Giorgio  a
écrit :

> Bastien,
>
> For completeness, would you be willing to demonstrate what it might look
> like if it were bolt12 in the normal LNURL way? The concern is mostly what
> you brought up with relying on DNS entries instead of a typical web server.
> At scale, that would be much more difficult for LNURL service providers to
> implement for their potentially thousands to millions of users.
>
> Something like Oblivious HTTP could be promising to remove the knowledge
> of IP for some of the larger LNURL service providers.
>
> Tony
> On 11/16/23 7:51 AM, Bastien TEINTURIER wrote:
>
> Good morning list,
>
> Most of you already know and love lightning addresses [1].
> I wanted to revisit that protocol, to see how we could improve it and
> fix its privacy drawbacks, while preserving the nice UX improvements
> that it brings.
>
> I have prepared a gist with three different designs that achieve those
> goals [2]. I'm attaching the contents of that gist below. I'd like to
> turn it into a bLIP once I collect enough feedback from the community.
>
> I don't think we should select and implement all three options. They
> show that we have a large enough design space, but I think we should
> aim for simplicity of implementation and deployment. My personal choice
> would be to do options 1 and 3: clients (mobile wallets) would first
> make a DNS request corresponding to option 3, and if that fails, they
> would fallback to option 1. Domain owners would implement only one of
> those two options, depending on their DNS capabilities.
>
> Curious to hear your thoughts!
>
> Many thanks to Rusty and Matt who reviewed early drafts of that gist.
>
> [1] https://lightningaddress.com/
> [2] https://gist.github.com/t-bast/78fd797a7da570d293a8663908d3339b
>
> # Lightning Address
>
> [Lightning Address](https://lightningaddress.com/) is a very popular protocol 
> that brings UX improvements that users love.
> We'd like to provide those UX benefits without its privacy and security 
> drawbacks.
>
> ## Issues with the current lightning address protocol
>
> As described 
> [here](https://github.com/andrerfneves/lightning-address/blob/master/README.md),
>  the lightning address protocol requires payment senders to make an HTTP 
> request to the recipient's domain owner.
> This has some inconvenient side effects:
>
> 1. The payment sender reveals their IP address to the recipient's domain 
> owner, who knows both the sender and the recipient.
> 2. The domain owner can swap invoices to steal some of the payment.
> 3. It introduces a dependency on DNS servers and the need for an HTTP stack 
> on the sender side.
>
> We can do better and fix or mitigate some of these issues, without 
> compromising on UX.
> We need two somewhat distinct mechanisms:
>
> 1. A way to privately obtain the `node_id` associated with a given domain.
> 2. A way to privately contact that domain to obtain the recipient's payment 
> details.
>
> ## User story
>
> Alice wants to pay `b...@domain.com` without any other prior information.
> She doesn't want to reveal:
>
> * her identity to Bob (payment sender privacy)
> * her identity to the manager of `domain.com` (payment sender privacy)
> * the fact that sh

[Lightning-dev] Lightning Address in a Bolt 12 world

2023-11-16 Thread Bastien TEINTURIER
Good morning list,

Most of you already know and love lightning addresses [1].
I wanted to revisit that protocol, to see how we could improve it and
fix its privacy drawbacks, while preserving the nice UX improvements
that it brings.

I have prepared a gist with three different designs that achieve those
goals [2]. I'm attaching the contents of that gist below. I'd like to
turn it into a bLIP once I collect enough feedback from the community.

I don't think we should select and implement all three options. They
show that we have a large enough design space, but I think we should
aim for simplicity of implementation and deployment. My personal choice
would be to do options 1 and 3: clients (mobile wallets) would first
make a DNS request corresponding to option 3, and if that fails, they
would fallback to option 1. Domain owners would implement only one of
those two options, depending on their DNS capabilities.

Curious to hear your thoughts!

Many thanks to Rusty and Matt who reviewed early drafts of that gist.

[1] https://lightningaddress.com/
[2] https://gist.github.com/t-bast/78fd797a7da570d293a8663908d3339b

# Lightning Address

[Lightning Address](https://lightningaddress.com/) is a very popular
protocol that brings UX improvements that users love.
We'd like to provide those UX benefits without its privacy and
security drawbacks.

## Issues with the current lightning address protocol

As described 
[here](https://github.com/andrerfneves/lightning-address/blob/master/README.md),
the lightning address protocol requires payment senders to make an
HTTP request to the recipient's domain owner.
This has some inconvenient side effects:

1. The payment sender reveals their IP address to the recipient's
domain owner, who knows both the sender and the recipient.
2. The domain owner can swap invoices to steal some of the payment.
3. It introduces a dependency on DNS servers and the need for an HTTP
stack on the sender side.

We can do better and fix or mitigate some of these issues, without
compromising on UX.
We need two somewhat distinct mechanisms:

1. A way to privately obtain the `node_id` associated with a given domain.
2. A way to privately contact that domain to obtain the recipient's
payment details.

## User story

Alice wants to pay `b...@domain.com` without any other prior information.
She doesn't want to reveal:

* her identity to Bob (payment sender privacy)
* her identity to the manager of `domain.com` (payment sender privacy)
* the fact that she wants to pay `b...@domain.com` to her LSP (payment
recipient privacy)

## Option 1: use DNS records to link domains to nodes

A first proposal would be to use a DNS record to obtain the `node_id`
associated with a given domain.

### Obtain a blinded path to the node associated with a domain

Domain owners add a DNS `TXT` record for their domain containing a
blinded path to their node.
They may include an empty path if they wish to directly reveal their `node_id`.

| hostname               | record type | value | TTL         |
|------------------------|-------------|-------|-------------|
| _lnaddress.domain.com. | TXT         | path: | path expiry |

Alice can then make a DNS query to obtain that blinded path.

```text
Alice                                                      DNS server
  |                                                             |
  | dig TXT _lnaddress.domain.com                               |
  |------------------------------------------------------------>|
  |    _lnaddress.domain.com. IN TXT "path:c3056fb73aa623..."   |
  |<------------------------------------------------------------|
```

:question: What encoding should we use for the blinded path option?
Bech32m with the `lnp` prefix?

:warning: Alice should query that DNS record using
[DoH](https://datatracker.ietf.org/doc/html/rfc8484) for privacy.
She should also query multiple DoH servers to protect from malicious ones.

:warning: Alice should check the AD flag is correctly set (DNSSEC).
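
As a sketch of that client-side query (using Cloudflare's public DoH JSON
endpoint as an example; any DoH resolver exposing the JSON API works, and
the record name follows the table above):

```python
import requests

# Sketch: fetch the _lnaddress TXT record over DoH and require DNSSEC
# validation (the AD flag) before trusting the blinded path.
def fetch_lnaddress_record(domain: str) -> str:
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": f"_lnaddress.{domain}", "type": "TXT"},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    reply = resp.json()
    if not reply.get("AD"):
        raise ValueError("DNSSEC validation failed (AD flag not set)")
    answers = reply.get("Answer", [])
    if not answers:
        raise ValueError(f"no _lnaddress record for {domain}")
    return answers[0]["data"].strip('"')  # e.g. "path:c3056fb73aa623..."

# A careful client repeats this against several independent DoH servers
# and cross-checks the results, as noted above.
```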

### Obtain a Bolt 12 offer from the recipient

Now that Alice has a way to reach the node that owns Bob's domain, she
needs to contact them to obtain a Bolt 12 offer from Bob.
We use an `onion_message` for that, which has the following benefits:

* Alice doesn't reveal her identity (IP address or `node_id`) to Bob
or Bob's domain
* Alice doesn't reveal Bob's identity (IP address or `node_id`) to her LSP
* Alice doesn't even need to know the IP address for Bob's domain's
lightning node

```text
Alice            Alice's LSP             Bob's LSP             Bob
  |                   |                      |                  |
  |  onion_message    |                      |                  |
  |------------------>|    onion_message     |                  |
  |                   |--------------------->|                  |
  |                   |  get_offer_from = b...@domain.com       |
```

Re: [Lightning-dev] Proposal: Bundled payments

2023-11-10 Thread Bastien TEINTURIER
Hi Thomas,

I thought it would be interesting to specify this in detail, to figure out
the potential subtleties. I did that in a gist [1], which I plan to turn
into a bLIP after writing some prototype code for it.

Feel free to comment, either on the gist or here.

Cheers,
Bastien

[1] https://gist.github.com/t-bast/69018875f4f95e660ec2cbbc80f711a6


Le mar. 20 juin 2023 à 19:17, Steve Lee  a écrit :

>
>
> On Tue, Jun 20, 2023 at 6:17 AM Thomas Voegtlin 
> wrote:
>
>>
>>
>> We have not implemented BOLT-12 yet in Electrum. Would you care to
>> describe whether bundled payments already would work with the current
>> specification, or whether they would require changes to BOLT-12? We
>> are going to implement BOLT-12 support in Electrum in the coming
>> months, and I would be happy to help here.
>>
>>
> Fantastic news!
>
>
>> I believe that it will take years *after it is merged*, until BOLT-12
>> actually becomes the dominant payment method on Lightning. OTOH, if
>> this feature was adopted in BOLT-11, I think it could be deployed much
>> faster.
>>
>>
> Why do you think it will be adopted faster? History has shown that any
> upgrade requiring wallets to change takes years even if it is a small
> change to an existing design. For example, despite only requiring a tiny
> change, there is still not widespread bech32m support [1]. Bech32/native
> segwit support also took years.
>
> [1] http://whentaproot.com/
>
>
>>
>> cheers,
>>
>> Thomas
>>
>>
>>
>>
>>
>> On 15.06.23 11:01, Bastien TEINTURIER wrote:
>> > Hi Thomas,
>> >
>> > First of all, I'd like to highlight something that may not be obvious
>> > from your email, and is actually pretty important: your proposal
>> > requires *senders* to be aware that the payment will lead to a channel
>> > creation (or a splice) on the *receiver* end. In particular, it requires
>> > all existing software used by senders to be updated. For this reason, I
>> > think extending Bolt 12 (which requires new sender code anyway) makes
>> > more sense than updating Bolt 11.
>> >
>> > I see only three strategies to provide JIT liquidity (by opening a new
>> > channel or making a splice, I'll only use the open channel case below
>> > for simplicity):
>> >
>> > 1. Ask receiver for the preimage and a fee, then open a channel and
>> > push the HTLC amount minus the fee
>> > 2. Open a channel, then forward the HTLC amount minus a fee
>> > 3. Pre-pay fee, then open a channel and forward the whole HTLC amount
>> > on that channel
>> >
>> > What is currently deployed on the network is 1) and 2), while you're
>> > proposing 3). Both 1) and 2) have the advantages that the sender doesn't
>> > need to be aware that JIT liquidity is happening, and doesn't need to do
>> > anything special for that payment, which is the main reason those
>> > strategies were chosen.
>> >
>> > If all you're concerned about is trust and regulation, solution 2) works
>> > fine as long as the mempool isn't empty: if the user doesn't release the
>> > preimage after you've opened the channel, you should just blacklist that
>> > channel, reject payments made to it, and double-spend it whenever you
>> > have another on-chain transaction to make (and use 1 sat/byte for JIT
>> > liquidity transactions). Even if the mempool is empty, if your LSP has
>> > transactions to make at every block, it's likely that it will succeed
>> > at double-spending the faulty channel, and thus won't lose anything.
>> >
>> > But I agree that this only works when coupled with 0-conf. If we're not
>> > using 0-conf anymore, pre-paying fees would make more sense. But we will
>> > likely keep on using 0-conf at least until Bolt 12 is deployed, so it
>> > seems more reasonable to include this new feature in Bolt 12 rather than
>> > Bolt 11, since all implementations are actively working on this?
>> >
>> > Cheers,
>> > Bastien
>> >
>> > Le jeu. 15 juin 2023 à 10:52, Thomas Voegtlin  a
>> > écrit :
>> >
>> >> Hello Matt,
>> >>
>> >> I think it is not too late to add a new feature to BOLT-11. In any
>> >> case, the belief that BOLT-11 is ossified should not be a reason to
>> >> make interactive something that fundamentally does not require more
>> >> interactivity than what BOLT-11 already offers. Technical decisions
>> >> should be dictated by t

Re: [Lightning-dev] [lightning-dev] Batch exchange withdrawal to lightning requires covenants

2023-10-24 Thread Bastien TEINTURIER
Hi Dave,

> Is swap-in-potentiam[1] an option here?

While this could work, it does require more on-chain transaction bytes
since you first make a SiP transaction then settlement transactions.

My goal was to design something with the smallest on-chain footprint
possible.

> I think that's the same number (and approximate size) of transactions
> that you'll get from the SIGHASH_ANYPREVOUT|SIGHASH_SINGLE solution you
> outline

My proposal avoids the settlement transactions entirely, and produces
only one transaction. With your example participants, it would produce
a single splice transaction with 4 inputs and 3 outputs:

- E's funding input
- {A,D}'s current channel input
- {B,E}'s current channel input
- {C,F}'s current channel input
- {A,D}'s new channel output
- {B,E}'s new channel output
- {C,F}'s new channel output

While the number of inputs/outputs stays the same, we're paying for the
common transaction fields only once instead of N times (where N is the
size of the batch). For larger batches, in a high-fee future, I believe
this could be significant?
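
Back-of-the-envelope numbers for that intuition (the overhead constant is
an approximation of the shared transaction fields, not a spec figure):

```python
# Sketch: fee saved by batching N splices into one transaction instead of
# N separate ones, counting only the duplicated common fields (version,
# input/output counts, nLockTime, segwit marker/flag).
TX_OVERHEAD_VBYTES = 10.5

def batch_savings_sats(n_channels: int, feerate_sat_per_vbyte: float) -> float:
    # A batch pays for one set of common fields instead of n_channels sets.
    return (n_channels - 1) * TX_OVERHEAD_VBYTES * feerate_sat_per_vbyte

print(batch_savings_sats(3, 50))     # the 3-channel example above, ~1050 sats
print(batch_savings_sats(100, 200))  # grows with batch size and feerate
```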

Thanks,
Bastien

Le mar. 24 oct. 2023 à 06:41, David A. Harding  a écrit :

> On 2023-10-17 03:03, Bastien TEINTURIER via bitcoin-dev wrote:
> > Good morning list,
> >
> > I've been trying to design a protocol to let users withdraw funds from
> > exchanges directly into their lightning wallet in an efficient way
> > (with the smallest on-chain footprint possible).
>
> Hi, Bastien.
>
> Is swap-in-potentiam[1] an option here?  For example, Exchange E wants
> to pay users A, B, and C, who each have different counterparties.  Then:
>
> - E gets from each of A, B, C a public key for their separate
> counterparties (e.g., D, E, F)
> - E gets confirmed a transaction paying three swap-in-potentiam outputs,
> one each for {A,D}, {B,E}, {C,F}
> - Each of the parties then offchain spends the SiP outputs into a
> standard LN-penalty channel construction and starts using it
> - Ideally, before the SiP expires, each party is able to drain the
> channel into their other channels and mutually settle it with just an
> onchain spend of the SiP output
> - Non-ideally, the previously offchain spend of the SiP output that
> established the LN-penalty channel is put onchain
>
> In the best case, this involves four transactions:
>
> - E's one-input, four-output batch withdrawal (the fourth output is E's
> change)
> - Three separate one-input, one-output transactions to settle the SiP
> outputs
>
> I think that's the same number (and approximate size) of transactions
> that you'll get from the SIGHASH_ANYPREVOUT|SIGHASH_SINGLE solution you
> outline, although your solution allows the channels to remain open
> indefinitely, whereas the SiP solution has an expiry.
>
> -Dave
>
> [1]
>
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2023-January/003810.html
>  (I know Eclair already uses SiP; the above reference is for other
> readers)
>


Re: [Lightning-dev] Removing channel reserve for mobile wallet users

2023-10-24 Thread Bastien TEINTURIER
Hi Ziggie,

> Do we want to add this via a Blip?

Sure, I'll open a PR to the bLIP repository, that would be useful.

> (well, down to the dust limit because it doesn't work currently without
> it).
> I think this behaviour could/should be fixed, have you already reported
> the issue?

I'm really unsure what "it doesn't work currently without it" refers to
here. Is that an implementation-specific issue? If we remove the channel
reserve, there's no reason to care about the dust limit.

Cheers,
Bastien

Le lun. 23 oct. 2023 à 23:16, ziggie1984  a
écrit :

> Hi Bastien, hi list,
>
> Thank you for taking both perspectives on this topic. I am in favour of
> introducing the 0-channel reserve option into the protocol (via an optional
> feature bit). Do we want to add this via a Blip?
> Although it makes sense to have this reserve when you are running a
> routing node, it would be neat to at least have the option to open
> zero-reserve channels (pops up from time to time in noderunner groups). Of
> course this would have to be restricted via a channel interceptor so that
> not only by signaling zero-reserve feature bit, you would be accepting
> those channels (same approach as for zero-conf channels maybe, but thats
> implementation related imo).
>
> I find it also useful to include informations how one could prove possible
> cheating attacks in the spec/blip (as SomberNight already elaborated).
>
> We recently removed the reserve for our users in Bitkit (well, down to the
> dust limit because it doesn't work currently without it).
>
> I think this behaviour could/should be fixed, have you already reported
> the issue?
>
> Cheers,
> ziggie
>
> --- Original Message ---
> On Wednesday, October 18th, 2023 at 15:51, Bastien TEINTURIER <
> bast...@acinq.fr> wrote:
>
> Good morning list,
>
> I'd like to discuss the channel reserve requirement, and argue that it
> should be fine to get rid of it for channels between mobile wallet users
> and their service provider. I know, I know, your first reaction will be
> "but this is a security parameter, I don't want to hear about it", but
> please bear with me (and I'll be happy to hear thoughts on why we should
> *not* get rid of this requirement if you still feel strongly about that
> after reading this post).
>
> Let's start by explaining why we generally want a channel reserve. It
> ensures that both peers always have an output in the commit tx, which
> has two important consequences:
>
> - if a malicious node publishes a revoked commitment, they will always
> have some funds in it that the honest node can claim, so they risk
> losing money
> - nodes are disincentivized from force-closing channels, because they
> will need to pay on-chain fees to get their funds back (through a
> 2nd-stage transaction)
>
> I believe those are important properties for channels between normal
> routing nodes that don't provide paid services to each other. If we
> remove the channel reserve, and at one point in time, one of the nodes
> has nothing at stake in the channel, they will be incentivized to
> broadcast a revoked commit tx: if they get away with it, they win some
> money, and otherwise, they don't lose any (because they have nothing at
> stake in the latest channel state). This is particularly true for the
> non-initiator, who doesn't pay the on-chain fees for the commit tx,
> otherwise a malicious initiator would still lose on-chain fees.
>
> Now what are the drawbacks of having a channel reserve? The first one is
> capital efficiency, because this channel reserve is unused liquidity. If
> you are a routing node this is fine, because you actively manage your
> channels to only keep those that earn you enough routing fees. But if
> you are a wallet provider, this is a very different story: you need to
> keep at least one channel open with each of your users. For each of
> these channels, you must maintain a reserve of 1% of the channel
> capacity, even if all the funds are on their side. You thus have unused
> liquidity proportional to the number of users and the total amount of
> sats your users own. This doesn't scale very well.
>
> The second drawback is UX: users look at their channel state to figure
> out how much they can receive off-chain. It's really hard to explain
> why there is a large gap between what they think they should be able
> to receive and what they can actually receive.
>
> Now, why is it ok in this setting to remove the reserve on both sides?
> First of all, the service provider is the one paying the on-chain fees
> for the commit tx (at least that's what we do for Phoenix). That means
> that when publishing a revoked commit tx, even if the service provider
> doesn't have an output in 

Re: [Lightning-dev] [bitcoin-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-19 Thread Bastien TEINTURIER
preimage, before its subsequent mempool
>> replacement. The preimage can be
>> > extracted from the second-stage HTLC-preimage and used to fetch the
>> off-chain inbound HTLC with a
>> > cooperative message or go on-chain with it to claim the accepted HTLC
>> output.
>> >
>> > Implemented and deployed by Eclair and LND.
>> >
>> > CLTV Expiry Delta: With every jammed block comes an absolute fee cost
>> paid by the attacker, a risk
>> > of the HTLC-preimage being detected or discovered by the honest
>> lightning node, or the HTLC-timeout
>> > to slip in a winning block template. Bumping the default CLTV delta
>> hardens the odds of success of a
>> > simple replacement cycling attack.
>> >
>> > Default setting: Eclair 144, Core-Lightning 34, LND 80 and LDK 72.
>> >
>> > ## Affected Bitcoin Protocols and Applications
>> >
>> >  From my understanding the following list of Bitcoin protocols and
>> applications could be affected by
>> > new denial-of-service vectors under some level of network mempool
>> congestion. Neither tests nor
>> > advanced review of specifications (when available) have been conducted
>> for each of them:
>> > - on-chain DLCs
>> > - coinjoins
>> > - payjoins
>> > - wallets with time-sensitive paths
>> > - peerswap and submarine swaps
>> > - batch payouts
>> > - transaction "accelerators"
>> >
>> > Inviting their developers, maintainers and operators to investigate how
>> replacement cycling attacks
>> > might disrupt their in-mempool chain of transactions, or fee-bumping
>> flows at the shortest delay.
>> > Simple flows and non-multi-party transactions should not be affected to
>> the best of my understanding.
>> >
>> > ## Open Problems: Package Malleability
>> >
>> > Pinning attacks have been known for years as a practical vector to
>> compromise lightning channels
>> > funds safety, under different scenarios (cf. current bip331's
>> motivation section). Mitigations at
>> > the mempool level have been designed, discussed and are under
>> implementation by the community
>> > (ancestor package relay + nversion=3 policy). Ideally, they should
>> constrain a pinning attacker to
>> > always attach a high feerate package (commitment + CPFP) to replace the
>> honest package, or allow an
>> > honest lightning node to overbid a malicious pinning package and get
>> its time-sensitive transaction
>> > optimistically included in the chain.
>> >
>> > Replacement cycling attack seem to offer a new way to neutralize the
>> design goals of package relay
>> and its companion nversion=3 policy, where an attacker package-RBFs an
>> honest package out of the
>> > mempool to subsequently double-spend its own high-fee child with a
>> transaction unrelated to the
>> > channel. As the remaining commitment transaction is pre-signed with a
>> minimal relay fee, it can be
>> > evicted out of the mempool.
>> >
>> > A functional test exercising a simple replacement cycling of a
>> lightning channel commitment
>> > transaction on top of the nversion=3 code branch is available:
>> > https://github.com/ariard/bitcoin/commits/2023-10-test-mempool-2
>> > <https://github.com/ariard/bitcoin/commits/2023-10-test-mempool-2>
>> >
>> > ## Discovery
>> >
>> > In 2018, the issue of static fees for pre-signed lightning transactions
>> is made more widely known,
>> > the carve-out exemption in mempool rules to mitigate in-mempool package
>> limits pinning and the
>> > anchor output pattern are proposed.
>> >
>> > In 2019, bitcoin core 0.19 is released with carve-out support.
>> Continued discussion of the anchor
>> > output pattern as a dynamic fee-bumping method.
>> >
>> > In 2020, draft of anchor output submitted to the bolts. Initial finding
>> of economic pinning against
>> > lightning commitment and second-stage HTLC transactions. Subsequent
>> discussions of a
>> > preimage-overlay network or package-relay as mitigations. Public call
>> made to inquiry more on
>> > potential other transaction-relay jamming attacks affecting lightning.
>> >
>> > In 2021, initial work in bitcoin core 22.0 of package acceptance.
>> Continued discussion of the
>> > pinning attacks and shortcomings of current mempool rules during
>> community-wide online workshops.
>> > L

Re: [Lightning-dev] Removing channel reserve for mobile wallet users

2023-10-19 Thread Bastien TEINTURIER
Hi Tony, Kulpreet,

> The main concern on the LSP not keeping a reserve is that it's much
> easier for them to steal since the offline concern is on the mobile
> user. We still do not yet have reliable watch tower
> integrations/products to help mitigate this. Yes, there's reputation,
> but how does a user go about publishing a previous commitment? Is that
> something we should also solve for and expose to users?

If you're assuming that your non-custodial wallet is unable to react to
revoked commitments, then you have a lot of other problems in your trust
model anyway?

Note that the important delay here is the `to_self_delay` parameter,
which is usually set to two weeks. Your phone only needs to be online
once every two weeks to detect the revoked commit and react to it.
Mobile wallets should all run frequent background jobs to perform those
checks, and warn the user with a notification if they've been unable to
get enough CPU time to run those checks. That's what Phoenix does.
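
As a rough illustration of that cadence (simple arithmetic, assuming
10-minute average blocks; the safety factor is an arbitrary choice):

```python
# Sketch: how often a wallet's background job must run so a revoked
# commitment is detected well before to_self_delay expires.
BLOCKS_PER_DAY = 144  # ~10-minute blocks

def max_check_interval_days(to_self_delay_blocks: int,
                            safety_factor: float = 4.0) -> float:
    # Check several times within the delay window to tolerate missed jobs.
    return to_self_delay_blocks / BLOCKS_PER_DAY / safety_factor

print(max_check_interval_days(2016))  # two-week delay -> check every ~3.5 days
```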

> For dust reserves that still apply:

That doesn't apply here, this goes away when removing the reserve
requirement. There will always be at least one output since the channel
total capacity stays (much) greater than dust.

> From my nascent understanding this will require differentiating
> between types of participants.
> Will the above then add complications of participant type into the
> protocol at the time of creating commitments, forwarding HTLCs and
> also finding routes?

This doesn't add anything new, this distinction already exists today.
It doesn't add any complication to the protocol either, LSPs simply
add extensions for additional features. Negotiating 0-reserve is as
simple as setting a feature bit or adding a TLV to existing messages.
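
For illustration, a minimal sketch of such a TLV extension (the type
number is made up; a bLIP would assign the real one):

```python
# Sketch: encode an empty "zero reserve" TLV record to append to an
# existing message; presence of the record alone signals the feature.
def bigsize(n: int) -> bytes:
    # BigSize encoding as defined in BOLT 1.
    if n < 0xfd:
        return n.to_bytes(1, "big")
    if n < 0x10000:
        return b"\xfd" + n.to_bytes(2, "big")
    if n < 0x100000000:
        return b"\xfe" + n.to_bytes(4, "big")
    return b"\xff" + n.to_bytes(8, "big")

def zero_reserve_tlv(tlv_type: int = 32769) -> bytes:
    value = b""  # empty value: the record's presence is the signal
    return bigsize(tlv_type) + bigsize(len(value)) + value
```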

Cheers,
Bastien

Le jeu. 19 oct. 2023 à 09:28, Kulpreet Singh  a écrit :

> From my nascent understanding this will require differentiating between
> types of participants.
>
> Will the above then add complications of participant type into the
> protocol at the time of creating commitments, forwarding HTLCs and also
> finding routes?
>
> -kp
>
> --- Original Message ---
> On Wednesday, October 18th, 2023 at 3:51 PM, Bastien TEINTURIER <
> bast...@acinq.fr> wrote:
>
> Good morning list,
>
> I'd like to discuss the channel reserve requirement, and argue that it
> should be fine to get rid of it for channels between mobile wallet users
> and their service provider. I know, I know, your first reaction will be
> "but this is a security parameter, I don't want to hear about it", but
> please bear with me (and I'll be happy to hear thoughts on why we should
> *not* get rid of this requirement if you still feel strongly about that
> after reading this post).
>
> Let's start by explaining why we generally want a channel reserve. It
> ensures that both peers always have an output in the commit tx, which
> has two important consequences:
>
> - if a malicious node publishes a revoked commitment, they will always
> have some funds in it that the honest node can claim, so they risk
> losing money
> - nodes are disincentivized from force-closing channels, because they
> will need to pay on-chain fees to get their funds back (through a
> 2nd-stage transaction)
>
> I believe those are important properties for channels between normal
> routing nodes that don't provide paid services to each other. If we
> remove the channel reserve, and at one point in time, one of the nodes
> has nothing at stake in the channel, they will be incentivized to
> broadcast a revoked commit tx: if they get away with it, they win some
> money, and otherwise, they don't lose any (because they have nothing at
> stake in the latest channel state). This is particularly true for the
> non-initiator, who doesn't pay the on-chain fees for the commit tx,
> otherwise a malicious initiator would still lose on-chain fees.
>
> Now what are the drawbacks of having a channel reserve? The first one is
> capital efficiency, because this channel reserve is unused liquidity. If
> you are a routing node this is fine, because you actively manage your
> channels to only keep those that earn you enough routing fees. But if
> you are a wallet provider, this is a very different story: you need to
> keep at least one channel open with each of your users. For each of
> these channels, you must maintain a reserve of 1% of the channel
> capacity, even if all the funds are on their side. You thus have unused
> liquidity proportional to the number of users and the total amount of
> sats your users own. This doesn't scale very well.
>
> The second drawback is UX: users look at their channel state to figure
> out how much they can receive off-chain. It's really hard to explain
> why there is a large gap between what t

Re: [Lightning-dev] Batch exchange withdrawal to lightning requires covenants

2023-10-19 Thread Bastien TEINTURIER
Hi Antoine,

> If I'm correct, two users can cooperate maliciously against the batch
> withdrawal transactions by re-signing a CPFP from 2-of-2 and
> broadcasting the batch withdrawal as a higher-feerate package / high
> fee package and then evicting out the CPFP.

Yes, they can, and any user could also double-spend the batch using a
commit tx spending from the previous funding output. Participants must
expect that this may happen, that's what I mentioned previously that
you cannot use 0-conf on that splice transaction. But apart from that,
it acts as a regular splice: participants must watch for double-spends
(as discussed in the previous messages) while waiting for confirmations.

> If the batch withdrawal has been signed with 0-fee thanks to the
> nversion=3 policy exemption, it will be evicted out of the mempool.
> A variant of a replacement cycling attack.

I don't think this should use nVersion=3 and pay 0 fees. On the contrary,
this is a "standard" transaction that should use a reasonable feerate
and nVersion=2, which is why I don't think this comment applies.

Cheers,
Bastien

Le mer. 18 oct. 2023 à 20:04, Antoine Riard  a
écrit :

> Hi Bastien,
>
> Thanks for the answer.
>
> If I understand correctly the protocol you're describing you're aiming to
> enable batched withdrawals where a list of users are being sent funds from
> an exchange directly in a list of channel funding outputs ("splice-out").
> Those channels funding outputs are 2-of-2, between two lambda users or e.g
> a lambda user and a LSP.
>
> If I'm correct, two users can cooperate maliciously against the batch
> withdrawal transactions by re-signing a CPFP from 2-of-2 and broadcasting
> the batch withdrawal as a higher-feerate package / high fee package and
> then evicting out the CPFP.
>
> If the batch withdrawal has been signed with 0-fee thanks to the
> nversion=3 policy exemption, it will be evicted out of the mempool. A
> variant of a replacement cycling attack.
>
> I think this more or less matches the test I'm pointing to you which is on
> non-deployed package acceptance code:
>
> https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf
>
> Please correct me if I'm wrong or missing assumptions. Agree with you on
> the assumptions that the exchange does not have an incentive to
> double-spend its own withdrawal transactions, or if all the batched funding
> outputs are shared with a LSP, malicious collusion is less plausible.
>
> Best,
> Antoine
>
> Le mer. 18 oct. 2023 à 15:35, Bastien TEINTURIER  a
> écrit :
>
>> Hey Z-man, Antoine,
>>
>> Thank you for your feedback, responses inline.
>>
>> z-man:
>>
>> > Then if I participate in a batched splice, I can disrupt the batched
>> > splice by broadcasting the old state and somehow convincing miners to
>> > confirm it before the batched splice.
>>
>> Correct, I didn't mention it in my post but batched splices cannot use
>> 0-conf, the transaction must be confirmed to remove the risk of double
>> spends using commit txs associated with the previous funding tx.
>>
>> But interestingly, with the protocol I drafted, the LSP can finalize and
>> broadcast the batched splice transaction while users are offline. With a
>> bit of luck, when the users reconnect, that transaction will already be
>> confirmed so it will "feel 0-conf".
>>
>> Also, we need a mechanism like the one you describe when we detect that
>> a splice transaction has been double-spent. But this isn't specific to
>> batched transactions, 2-party splice transactions can also be double
>> spent by either participant. So we need that mechanism anyway? The spec
>> doesn't have a way of aborting a splice after exchanging signatures, but
>> you can always do it as an RBF operation (which actually just does a
>> completely different splice). This is what Greg mentioned in his answer.
>>
>> > part of the splice proposal is that while a channel is being spliced,
>> > it should not be spliced again, which your proposal seems to violate.
>>
>> The spec doesn't require that, I'm not sure what made you think that.
>> While a channel is being spliced, it can definitely be spliced again as
>> an RBF attempt (this is actually a very important feature), which double
>> spends the other unconfirmed splice attempts.
>>
>> ariard:
>>
>> > It is uncertain to me if secure fee-bumping, even with future
>> > mechanisms like package relay and nversion=3, is robust enough for
>> > multi-party transactions and covenant-enable constructions under usual
>> > risk models.
>>
>> I'm not entirely sure

Re: [Lightning-dev] Removing channel reserve for mobile wallet users

2023-10-18 Thread Bastien TEINTURIER
Hey Tony,

> But don't wallets & LSPs already have the option to provide this UX
> and have been doing it for years?

I'm not sure what other wallets do, but in Phoenix we've only gone half
way so far: we allow the wallet user to have no reserve, but we require
the LSP to meet the usual reserve requirements. The goal of my post is
to argue that we could also remove that requirement for the LSP side
without adding trust.

> Are you proposing a network wide switch away from reserves or just
> between mobile wallets and LSPs if they opt in?

I think the channel reserve is useful between routing nodes, because
they don't have a "service provider" relationship so there is more
incentive to always try cheating.

I'm only arguing for removing it between wallet users and their LSP
(partly because LSPs are *not* anonymous nodes who don't care about
their reputation).

> And what about the dust reserve limit too?

What do you mean by dust reserve limit?

Cheers,
Bastien

Le mer. 18 oct. 2023 à 16:33, Tony Giorgio  a
écrit :

> Bastien,
>
> ACK for this. But don't wallets & LSPs already have the option to provide
> this UX and have been doing it for years? Are you proposing a network wide
> switch away from reserves or just between mobile wallets and LSPs if they
> opt in? And what about the dust reserve limit too? From my understanding,
> all of the node implementations allow removing the 1% reserve requirement
> now but still keep the dust reserve.
>
> Tony Giorgio
>
>
>
>
>
>  Original Message 
> On Oct 18, 2023, 8:51 AM, Bastien TEINTURIER < bast...@acinq.fr> wrote:
>
>
> Good morning list,
>
> I'd like to discuss the channel reserve requirement, and argue that it
> should be fine to get rid of it for channels between mobile wallet users
> and their service provider. I know, I know, your first reaction will be
> "but this is a security parameter, I don't want to hear about it", but
> please bear with me (and I'll be happy to hear thoughts on why we should
> *not* get rid of this requirement if you still feel strongly about that
> after reading this post).
>
> Let's start by explaining why we generally want a channel reserve. It
> ensures that both peers always have an output in the commit tx, which
> has two important consequences:
>
> - if a malicious node publishes a revoked commitment, they will always
>   have some funds in it that the honest node can claim, so they risk
>   losing money
> - nodes are disincentivized from force-closing channels, because they
>   will need to pay on-chain fees to get their funds back (through a
>   2nd-stage transaction)
>
> I believe those are important properties for channels between normal
> routing nodes that don't provide paid services to each other. If we
> remove the channel reserve, and at one point in time, one of the nodes
> has nothing at stake in the channel, they will be incentivized to
> broadcast a revoked commit tx: if they get away with it, they win some
> money, and otherwise, they don't lose any (because they have nothing at
> stake in the latest channel state). This is particularly true for the
> non-initiator, who doesn't pay the on-chain fees for the commit tx,
> otherwise a malicious initiator would still lose on-chain fees.
>
> Now what are the drawbacks of having a channel reserve? The first one is
> capital efficiency, because this channel reserve is unused liquidity. If
> you are a routing node this is fine, because you actively manage your
> channels to only keep those that earn you enough routing fees. But if
> you are a wallet provider, this is a very different story: you need to
> keep at least one channel open with each of your users. For each of
> these channels, you must maintain a reserve of 1% of the channel
> capacity, even if all the funds are on their side. You thus have unused
> liquidity proportional to the number of users and the total amount of
> sats your users own. This doesn't scale very well.
>
> The second drawback is UX: users look at their channel state to figure
> out how much they can receive off-chain. It's really hard to explain
> why there is a large gap between what they think they should be able
> to receive and what they can actually receive.
>
> Now, why is it ok in this setting to remove the reserve on both sides?
> First of all, the service provider is the one paying the on-chain fees
> for the commit tx (at least that's what we do for Phoenix). That means
> that when publishing a revoked commit tx, even if the service provider
> doesn't have an output in the transaction, they still pay on-chain fees,
> so they lose *something*. For the wallet user, this is ok: they still
> get their funds back using penalty transactions,

Re: [Lightning-dev] Batch exchange withdrawal to lightning requires covenants

2023-10-18 Thread Bastien TEINTURIER
Hey Z-man, Antoine,

Thank you for your feedback, responses inline.

z-man:

> Then if I participate in a batched splice, I can disrupt the batched
> splice by broadcasting the old state and somehow convincing miners to
> confirm it before the batched splice.

Correct, I didn't mention it in my post but batched splices cannot use
0-conf, the transaction must be confirmed to remove the risk of double
spends using commit txs associated with the previous funding tx.

But interestingly, with the protocol I drafted, the LSP can finalize and
broadcast the batched splice transaction while users are offline. With a
bit of luck, when the users reconnect, that transaction will already be
confirmed so it will "feel 0-conf".

Also, we need a mechanism like the one you describe when we detect that
a splice transaction has been double-spent. But this isn't specific to
batched transactions, 2-party splice transactions can also be double
spent by either participant. So we need that mechanism anyway? The spec
doesn't have a way of aborting a splice after exchanging signatures, but
you can always do it as an RBF operation (which actually just does a
completely different splice). This is what Greg mentioned in his answer.

> part of the splice proposal is that while a channel is being spliced,
> it should not be spliced again, which your proposal seems to violate.

The spec doesn't require that, I'm not sure what made you think that.
While a channel is being spliced, it can definitely be spliced again as
an RBF attempt (this is actually a very important feature), which double
spends the other unconfirmed splice attempts.

ariard:

> It is uncertain to me if secure fee-bumping, even with future
> mechanisms like package relay and nversion=3, is robust enough for
> multi-party transactions and covenant-enable constructions under usual
> risk models.

I'm not entirely sure why you're bringing this up in this context...
I agree that we most likely cannot use RBF on those batched transactions;
we will need to rely on CPFP and potentially package relay. But why is
it different from non-multi-party transactions here?

> See test here:
>
https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf

I'd argue that this is quite different from the standard replacement
cycling attack, because in this protocol wallet users can only
unilaterally double-spend with a commit tx, on which they cannot set
the feerate. The only participant that can "easily" double-spend is
the exchange, and they wouldn't have an incentive to do so here: users are
only withdrawing funds, so there's no opportunity to steal funds?

Thanks,
Bastien

Le mar. 17 oct. 2023 à 21:10, Antoine Riard  a
écrit :

> Hi Bastien,
>
> > The naive way of enabling lightning withdrawals is to make the user
> > provide a lightning invoice that the exchange pays over lightning. The
> > issue is that in most cases, this simply shifts the burden of making an
> > on-chain transaction to the user's wallet provider: if the user doesn't
> > have enough inbound liquidity (which is likely), a splice transaction
> > will be necessary. If N users withdraw funds from an exchange, we most
> > likely will end up with N separate splice transactions.
>
> It is uncertain to me if secure fee-bumping, even with future mechanisms
> like package relay and nversion=3, is robust enough for multi-party
> transactions and covenant-enable constructions under usual risk models.
>
> See test here:
>
> https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf
>
> Appreciated expert eyes of folks understanding both lightning and core
> mempool on this.
> There was a lot of back and forth on nversion=3 design rules, though the
> test is normally built on glozow top commit of the 3 Oct 2023.
>
> Best,
> Antoine
>
> Le mar. 17 oct. 2023 à 14:03, Bastien TEINTURIER  a
> écrit :
>
>> Good morning list,
>>
>> I've been trying to design a protocol to let users withdraw funds from
>> exchanges directly into their lightning wallet in an efficient way
>> (with the smallest on-chain footprint possible).
>>
>> I've come to the conclusion that this is only possible with some form of
>> covenants (e.g. `SIGHASH_ANYPREVOUT` would work fine in this case). The
>> goal of this post is to explain why, and add this usecase to the list of
>> useful things we could do if we had covenants (insert "wen APO?" meme).
>>
>> The naive way of enabling lightning withdrawals is to make the user
>> provide a lightning invoice that the exchange pays over lightning. The
>> issue is that in most cases, this simply shifts the burden of making an
>> on-chain transaction to the user's wallet provider: if the user doesn't
>> have enough inbound

[Lightning-dev] Removing channel reserve for mobile wallet users

2023-10-18 Thread Bastien TEINTURIER
Good morning list,

I'd like to discuss the channel reserve requirement, and argue that it
should be fine to get rid of it for channels between mobile wallet users
and their service provider. I know, I know, your first reaction will be
"but this is a security parameter, I don't want to hear about it", but
please bear with me (and I'll be happy to hear thoughts on why we should
*not* get rid of this requirement if you still feel strongly about that
after reading this post).

Let's start by explaining why we generally want a channel reserve. It
ensures that both peers always have an output in the commit tx, which
has two important consequences:

- if a malicious node publishes a revoked commitment, they will always
  have some funds in it that the honest node can claim, so they risk
  losing money
- nodes are disincentivized from force-closing channels, because they
  will need to pay on-chain fees to get their funds back (through a
  2nd-stage transaction)

I believe those are important properties for channels between normal
routing nodes that don't provide paid services to each other. If we
remove the channel reserve, and at one point in time, one of the nodes
has nothing at stake in the channel, they will be incentivized to
broadcast a revoked commit tx: if they get away with it, they win some
money, and otherwise, they don't lose any (because they have nothing at
stake in the latest channel state). This is particularly true for the
non-initiator, who doesn't pay the on-chain fees for the commit tx,
otherwise a malicious initiator would still lose on-chain fees.

Now what are the drawbacks of having a channel reserve? The first one is
capital efficiency, because this channel reserve is unused liquidity. If
you are a routing node this is fine, because you actively manage your
channels to only keep those that earn you enough routing fees. But if
you are a wallet provider, this is a very different story: you need to
keep at least one channel open with each of your users. For each of
these channels, you must maintain a reserve of 1% of the channel
capacity, even if all the funds are on their side. You thus have unused
liquidity proportional to the number of users and the total amount of
sats your users own. This doesn't scale very well.
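
To make the scaling problem concrete (a toy calculation, numbers made up):

```python
# Sketch: liquidity a wallet provider must keep idle when every user
# channel carries a 1% reserve, however the balance is actually split.
RESERVE_RATIO = 0.01
SATS_PER_BTC = 100_000_000

def idle_reserve_btc(n_users: int, avg_channel_capacity_sats: int) -> float:
    return n_users * avg_channel_capacity_sats * RESERVE_RATIO / SATS_PER_BTC

# 100k users with 0.01 BTC channels -> 10 BTC of permanently idle liquidity.
print(idle_reserve_btc(100_000, 1_000_000))
```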

The second drawback is UX: users look at their channel state to figure
out how much they can receive off-chain. It's really hard to explain
why there is a large gap between what they think they should be able
to receive and what they can actually receive.

Now, why is it ok in this setting to remove the reserve on both sides?
First of all, the service provider is the one paying the on-chain fees
for the commit tx (at least that's what we do for Phoenix). That means
that when publishing a revoked commit tx, even if the service provider
doesn't have an output in the transaction, they still pay on-chain fees,
so they lose *something*. For the wallet user, this is ok: they still
get their funds back using penalty transactions, which doesn't cost
them more than normal 2nd-stage transactions. The service provider
cannot steal funds, it only lets them grief their users (at the cost
of paying on-chain fees and missing out on future routing fees). Also,
the wallet user can publicly show that the service provider published
a revoked commitment, which is bad for their reputation.

Removing the reserve on the wallet user's side is a risk that the wallet
provider takes in order to guarantee a good UX. The user can grief the
service provider, but the griefing amount is limited. Also, the user has
paid fees to the wallet provider before that, because they must have
used the wallet to get into that state. This makes it an acceptable
trade-off for service providers.

Lastly, we can also argue that LN-penalty without channel reserves is
similar to LN-symmetry (Eltoo). In Eltoo, a cheating node can always
publish a previous commitment: the honest node will simply be able to
replay the latest state on top of that commitment, and the cheating
node's only penalty is the on-chain fees they paid for that commit tx.
Here this is the same when the service provider is trying to cheat,
because they pay the on-chain fees for the commit tx. If this is ok
for Eltoo, why wouldn't it be ok now?

Cheers,
Bastien


[Lightning-dev] Batch exchange withdrawal to lightning requires covenants

2023-10-17 Thread Bastien TEINTURIER
Good morning list,

I've been trying to design a protocol to let users withdraw funds from
exchanges directly into their lightning wallet in an efficient way
(with the smallest on-chain footprint possible).

I've come to the conclusion that this is only possible with some form of
covenants (e.g. `SIGHASH_ANYPREVOUT` would work fine in this case). The
goal of this post is to explain why, and add this usecase to the list of
useful things we could do if we had covenants (insert "wen APO?" meme).

The naive way of enabling lightning withdrawals is to make the user
provide a lightning invoice that the exchange pays over lightning. The
issue is that in most cases, this simply shifts the burden of making an
on-chain transaction to the user's wallet provider: if the user doesn't
have enough inbound liquidity (which is likely), a splice transaction
will be necessary. If N users withdraw funds from an exchange, we most
likely will end up with N separate splice transactions.

Hence the idea of batching those into a single transaction. Since we
don't want to introduce any intermediate transaction, we must be able
to create one transaction that splices multiple channels at once. The
issue is that for each of these channels, we need a signature from the
corresponding wallet user, because we're spending the current funding
output, which is a 2-of-2 multisig between the wallet user and the
wallet provider. So we run into the usual availability problem: we need
signatures from N users who may not be online at the same time, and if
one of those users never comes online or doesn't complete the protocol,
we must discard the whole batch.

There is a workaround though: each wallet user can provide a signature
using `SIGHASH_SINGLE | SIGHASH_ANYONECANPAY` that spends their current
funding output to create a new funding output with the expected amount.
This lets users sign *before* knowing the final transaction, which the
exchange can create by batching pairs of inputs/outputs. But this has
a fatal issue: at that point the wallet user has no way of spending the
new funding output (since it is also a 2-of-2 between the wallet user
and the wallet provider). The wallet provider can now blackmail the user
and force them to pay to get their funds back.
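
To illustrate the batching mechanics (a data-structure sketch only, with
hypothetical types and no real cryptography): SIGHASH_SINGLE makes each
signature commit to the output at the same index as its input, and
SIGHASH_ANYONECANPAY lets other inputs be added freely, so the exchange
can concatenate pre-signed (input, output) pairs as long as it preserves
index alignment:

```python
from dataclasses import dataclass

@dataclass
class SignedContribution:
    funding_input: str       # the user's current funding outpoint (txid:vout)
    new_funding_output: str  # the new 2-of-2 output with the expected amount
    signature: bytes         # SIGHASH_SINGLE | SIGHASH_ANYONECANPAY over the pair

def build_batch(contributions: list[SignedContribution],
                exchange_inputs: list[str],
                change_output: str) -> dict:
    # User pairs must occupy matching indices (input i signs output i);
    # the exchange's own inputs and change are appended after them.
    inputs = [c.funding_input for c in contributions] + exchange_inputs
    outputs = [c.new_funding_output for c in contributions] + [change_output]
    return {"inputs": inputs, "outputs": outputs}
```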

Lightning normally fixes this by exchanging signatures for a commitment
transaction that sends the funds back to their owners *before* signing
the parent funding/splice transaction. But here that is impossible,
because we don't know yet the `txid` of the batch transaction (that's
the whole point, we want to be able to sign before creating the batch)
so we don't know the new `prevout` we should spend from. I couldn't find
a clever way to work around that, and I don't think there is one (but
I would be happy to be wrong).

With `SIGHASH_ANYPREVOUT`, this is immediately fixed: we can exchange
anyprevout signatures for the commitment transaction, and they will be
valid to spend from the batch transaction. We are safe from signature
reuse, because funding keys are rotated at each splice so we will never
create another output that uses the same 2-of-2 script.
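To illustrate what changes (with a toy digest, not the actual BIP 118
sighash algorithm): a regular signature commits to the prevout's txid,
while an anyprevout-style signature only commits to data that is known
in advance, such as the spent script and amount.

    import hashlib
    from typing import Optional

    def toy_sighash(script: bytes, amount_sat: int, tx_body: bytes,
                    prevout_txid: Optional[bytes]) -> bytes:
        # prevout_txid=None models an anyprevout signature: the digest no
        # longer depends on which transaction created the spent output.
        h = hashlib.sha256()
        h.update(script + amount_sat.to_bytes(8, "big") + tx_body)
        if prevout_txid is not None:
            h.update(prevout_txid)
        return h.digest()

    funding_script = b"rotated 2-of-2: wallet user + wallet provider"
    commit_tx = b"commitment tx refunding both parties"
    batch_a = hashlib.sha256(b"batch spliced with users 1..5").digest()
    batch_b = hashlib.sha256(b"batch spliced with users 1..7").digest()

    # A regular signature commits to the batch txid, which is unknown when
    # the user pre-signs, so it cannot be produced in advance:
    assert toy_sighash(funding_script, 50_000, commit_tx, batch_a) != \
           toy_sighash(funding_script, 50_000, commit_tx, batch_b)

    # An anyprevout-style signature is identical whatever batch gets built:
    assert toy_sighash(funding_script, 50_000, commit_tx, None) == \
           toy_sighash(funding_script, 50_000, commit_tx, None)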

I haven't looked at other forms of covenants, but most of them likely
address this problem as well.

Cheers,
Bastien
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Remotely control your lightning node from your favorite HSM

2023-09-08 Thread Bastien TEINTURIER
Hey Christian,

You're right, if we create runes inside the HSM then we end up with the
same security model.
It then boils down to whether we'd rather implement Bolt 8 or rune
management inside an HSM!
I'd prefer Bolt 8, as I think it is more universal (and simpler),
but it could be worth experimenting with both approaches.

It will also be interesting to see how we actually configure rights (access
control) on the lightning node side.
That really deserves some implementation work to flesh out that kind of
detail.

Cheers,
Bastien

Le ven. 8 sept. 2023 à 16:51, Christian Decker 
a écrit :

> Very interesting proposal, though as Will points out we could implement
> the same using runes: have the rune be managed by the hardware wallet, and
> have the rune used to authenticate the RPC call commit to the call's
> payload. That way a potentially compromised client cannot authenticate
> arbitrary calls, since the hardware wallet is required to associate a rune
> with it, giving it a chance for review.
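> To illustrate the idea (purely a toy: real runes have their own encoding,
> and `payload_hash` is a made-up restriction name), committing a rune to
> one specific call could look like:
>
>     import hashlib, json
>
>     def bind_rune_to_call(rune: str, method: str, params: dict) -> str:
>         # Hash the exact call the hardware wallet displayed and approved;
>         # the node then rejects any call whose payload doesn't match.
>         digest = hashlib.sha256(
>             json.dumps({"method": method, "params": params},
>                        sort_keys=True).encode()).hexdigest()
>         return f"{rune}&payload_hash={digest}"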
>
> This is similar to how authentication of RPC calls works in greenlight,
> where the node host is not trusted, and we need to pass the authenticated
> commands forward to the signer for verification before processing any
> signature request from the node. We chose to authenticate the payload
> rather than the transport (which is what partonnere does) because it
> removes the need for a direct connection, and adds flexibility to how we
> can deliver the commands. Functionally they are very similar however.
>
> Cheers,
> Christian
>
> On Thu, Sep 7, 2023, 15:06 Bastien TEINTURIER  wrote:
>
>> Hi William,
>>
>> > What is wrong with runes/macaroons for validating and authenticating
>> > commands?
>>
>> Runes/macaroons don't provide any protection if the machine you are
>> issuing the RPCs from is compromised. The attacker can change the
>> parameters of your RPC call and your lightning node will still gladly
>> execute it.
>>
>> > I can't imagine validating every RPC request with a hardware
>> > device and trusted display, unless you have some specific use case in
>> > mind.
>>
>> I think that this is because you have the wrong idea of which RPCs
>> this is supposed to protect. This is useful for the RPCs that actually
>> involve paying something (channel open, channel close, pay invoice).
>> This isn't useful for "read" RPCs (listing channels).
>>
>> Making an on-chain operation or paying an invoice is something that is
>> infrequent enough for the vast majority of nodes that it makes sense
>> to validate it manually. Also, this is fully configurable: you can
>> choose which RPCs you want to protect that way and which RPCs you want
>> to keep open.
>>
>> Thanks,
>> Bastien
>>
>> Le mer. 6 sept. 2023 à 17:42, William Casarin  a écrit :
>> >
>> > On Wed, Sep 06, 2023 at 03:32:50AM +0200, Bastien TEINTURIER wrote:
>> > >Hey Zman,
>> > >
>> > >I saw the announcement about the commando plugin, and it was actually
>> > >one of the reasons I wanted to write up what I had in mind, because
>> > >while commando also uses a lightning connection to send commands to a
>> > >lightning node, it was missing what in my opinion is the most important
>> > >part: having all of Bolt 8 handled by the HSM and validating commands
>> > >using a trusted display.
>> >
>> > What is wrong with runes/macaroons for validating and authenticating
>> > commands? I can't imagine validating every RPC request with a hardware
>> > device and trusted display, unless you have some specific use case in
>> > mind.
>> >
>> > Will
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Remotely control your lightning node from your favorite HSM

2023-09-07 Thread Bastien TEINTURIER
Hi William,

> What is wrong with runes/macaroons for validating and authenticating
> commands?

Runes/macaroons don't provide any protection if the machine you are
issuing the RPCs from is compromised. The attacker can change the
parameters of your RPC call and your lightning node will still gladly
execute it.

> I can't imagine validating every RPC request with a hardware
> device and trusted display, unless you have some specific use case in
> mind.

I think that this is because you have the wrong idea of which RPCs
this is supposed to protect. This is useful for the RPCs that actually
involve paying something (channel open, channel close, pay invoice).
This isn't useful for "read" RPCs (listing channels).

Making an on-chain operation or paying an invoice is something that is
infrequent enough for the vast majority of nodes that it makes sense
to validate it manually. Also, this is fully configurable: you can
choose which RPCs you want to protect that way and which RPCs you want
to keep open.

Thanks,
Bastien

Le mer. 6 sept. 2023 à 17:42, William Casarin  a écrit :
>
> On Wed, Sep 06, 2023 at 03:32:50AM +0200, Bastien TEINTURIER wrote:
> >Hey Zman,
> >
> >I saw the announcement about the commando plugin, and it was actually
> >one of the reasons I wanted to write up what I had in mind, because
> >while commando also uses a lightning connection to send commands to a
> >lightning node, it was missing what in my opinion is the most important
> >part: having all of Bolt 8 handled by the HSM and validating commands
> >using a trusted display.
>
> What is wrong with runes/macaroons for validating and authenticating
> commands? I can't imagine validating every RPC request with a hardware
> device and trusted display, unless you have some specific use case in
> mind.
>
> Will
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Remotely control your lightning node from your favorite HSM

2023-09-05 Thread Bastien TEINTURIER
Hey Zman,

I saw the announcement about the commando plugin, and it was actually
one of the reasons I wanted to write up what I had in mind, because
while commando also uses a lightning connection to send commands to a
lightning node, it was missing what in my opinion is the most important
part: having all of Bolt 8 handled by the HSM and validating commands
using a trusted display.

That is what really brings additional security without compromising UX,
and enabling secure remote control from a mobile phone.

Cheers,
Bastien

Le mer. 6 sept. 2023 à 00:59, ZmnSCPxj  a écrit :
>
> Good morning t-bast,
>
> CLN already has something similar in standard CLN distrib:
https://docs.corelightning.org/docs/commando
>
> However it is tied specifically to the CLN command set.
> Nevertheless, it is largely the same idea, just CLN-specific.
>
> Regards,
> ZmnSCPxj
>
>
> Sent with Proton Mail secure email.
>
> --- Original Message ---
> On Tuesday, September 5th, 2023 at 5:26 PM, Bastien TEINTURIER <
bast...@acinq.fr> wrote:
>
>
> > Good morning list,
> >
> > I have just opened a PR to the bLIPs repository [1] to document an idea
> > that I started investigating a long time ago and had already discussed
> > with a few people, but never found the time to write it up before.
> >
> > This is a very simple architecture to securely send administrative
> > commands to your lightning node (such as opening a channel or paying
> > an invoice) from an untrusted machine (laptop, mobile phone or even
> > smart watch, let's be crazy), by using an HSM acting as a whitelisted
> > lightning peer (by implementing Bolt 8 entirely inside the HSM). The
> > interesting part is that it requires almost nothing new on the lightning
> > node itself, since we simply use a standard lightning connection as our
> > communication channel and custom lightning messages to send commands.
> >
> > This should be doable for example in a custom application running on a
> > Ledger Nano S [2], which is what I had started investigating.
> >
> > The bLIP still needs some work on the actual commands (and potentially
> > their encoding), but the interesting part is mostly the HSM app (the
> > rest is probably bikeshedding).
> >
> > If someone wants to actually work on implementing this, I think it
> > would be very useful! I'd gladly volunteer to specify this better and
> > review the implementation. Maybe that kind of work could be done under
> > an open-source grant for example.
> >
> > Cheers,
> > Bastien
> >
> > [1] https://github.com/lightning/blips/pull/28
> > [2] https://developers.ledger.com/docs/embedded-app/framework/
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Remotely control your lightning node from your favorite HSM

2023-09-05 Thread Bastien TEINTURIER
Good morning list,

I have just opened a PR to the bLIPs repository [1] to document an idea
that I started investigating a long time ago and had already discussed
with a few people, but never found the time to write it up before.

This is a very simple architecture to securely send administrative
commands to your lightning node (such as opening a channel or paying
an invoice) from an untrusted machine (laptop, mobile phone or even
smart watch, let's be crazy), by using an HSM acting as a whitelisted
lightning peer (by implementing Bolt 8 entirely inside the HSM). The
interesting part is that it requires almost nothing new on the lightning
node itself, since we simply use a standard lightning connection as our
communication channel and custom lightning messages to send commands.
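As a rough sketch of how little is needed on the wire (the message type
and payload encoding below are made up for illustration, they are not
taken from any spec or from the bLIP), the command channel boils down to
framing commands as custom messages:

    import json
    import struct

    # Made-up odd type in the custom-message range: peers that don't
    # understand it simply ignore it ("it's ok to be odd").
    ADMIN_COMMAND_TYPE = 49153

    def encode_admin_command(command: str, params: dict) -> bytes:
        # The HSM builds this message, pushes it through its own Bolt 8
        # session, and the node dispatches it to a command handler.
        payload = json.dumps({"command": command, "params": params}).encode()
        return struct.pack(">H", ADMIN_COMMAND_TYPE) + payload

    msg = encode_admin_command("pay_invoice", {"invoice": "lnbc1..."})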

This should be doable for example in a custom application running on a
Ledger Nano S [2], which is what I had started investigating.

The bLIP still needs some work on the actual commands (and potentially
their encoding), but the interesting part is mostly the HSM app (the
rest is probably bikeshedding).

If someone wants to actually work on implementing this, I think it
would be very useful! I'd gladly volunteer to specify this better and
review the implementation. Maybe that kind of work could be done under
an open-source grant for example.

Cheers,
Bastien

[1] https://github.com/lightning/blips/pull/28
[2] https://developers.ledger.com/docs/embedded-app/framework/
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Disclosure: Fake channel DoS vector

2023-08-28 Thread Bastien TEINTURIER
Hey Matt,

Great work on finding this issue, thoroughly testing it against
implementations, and on the follow-up you did after reporting this to
the various teams.

We all agree that having more people spending time poking at the code
to find issues is very beneficial for the project. I hope your work
will encourage more people to work on similar research! Protecting p2p
networks against DoS vectors is a hard task, and the more eyes on the
project, the better.

Thanks,
Bastien

Le lun. 28 août 2023 à 08:46, Antoine Riard  a
écrit :

> Hi Matt,
>
> > You've definitely done some review for some subset of code, mostly the
> > anchors code which was added not too long ago, but please don't pretend
> > you've reviewed a large volume of the pull requests in LDK, as far as I
> > understand you have several other projects you focus heavily on, which
> > is great, but that's not being a major LDK contributor.
>
> This is insulting, as I remember very well reviewing some hard parts of
> recent ongoing changes, such as some API changes for watchtower integration
> or the state machine for collaborative transaction construction. Adopting a
> purely quantitative measurement of review contributions is short-sighted,
> and I think your position forgets that, in any mature open-source project,
> review of the hard and sensitive parts of the codebase is the bottleneck.
>
> We're working on a decentralized bitcoin open-source project: there is no
> benevolent dictator for life with the legitimacy to qualify who is a major
> or "spokesperson" contributor, and who is not. I understand it is your job
> to work full-time on LDK, though I don't think there is any contractual
> provision in your work contract requesting that you play the BDFL.
>
> If you have a different viewpoint or other professional information to
> communicate to the community, thanks.
>
> > In 2022 and 2023 you:
> >  * landed a PR removing yourself from the security-reporting list (#2323,
> >    no idea why you're trying to speak for the project when you removed
> >    yourself!)
> >  * fixed one bug in the anchors aggregation stuff before it was released
> >    (#1841, thanks!)
> >  * made some constants public (#1839)
> >  * increased a constant (#1532)
> >  * added a trivial double-check of user code (#1531)
>
> If I remember correctly, my anchor output patchset was ready to land in
> the second half of 2020, though at that time we didn't have enough
> qualified review bandwidth, and I spent a hell of a lot of time reviewing
> other people's contributions through 2020/2021 to move the project nearer
> to production-readiness. That lesson about the anchor output patchset,
> i.e. coming with a meaningful code diff to do _interesting_ things and
> solve hard problems, has made me reluctant to show up with big diffs and
> waste my coding time (when I can instead make progress on solving
> LN-related problems in Bitcoin Core).
>
> I think very recently I proposed changes to advance mempool monitoring
> and custom script support, though here again it sounds to me that we're
> still lacking qualified eyes to bring informed technical opinions on the
> areas that require a lot of context.
>
> About #2323, the reason for removing myself from the security-reporting
> list is related to the weak and non-consensual code of conduct you
> introduced last year, which brings severe vulnerabilities to the LDK
> process w.r.t. social attacks by external actors and risks a long-term
> shitshow a la Rust governance; I raised this immediately on the code of
> conduct PR. I think you never answered me, either publicly or privately,
> on the lack of robustness of this code of conduct if we're targeted by
> psyops from NK-hacking groups (sadly, a real-world thing in the
> cryptocurrency world).
>
> I have no doubt we'll be able to rebuild consensus on our
> security-handling / project community process in the future, with calm and
> patience.
>
> > You've also, to my knowledge, never joined the public bi-weekly LDK
> > development calls, don't join the lightning spec meeting, and don't
> > engage in the public discord discussions where development decisions
> > are made.
>
> Again, it is insulting to use the word "never", as I was the original
> host of the LDK development meetings on Slack, and I think I took the
> initiative to launch the LDK review club last year. All those meetings /
> communication spaces have public logs that are interesting to look
> through, and in the end development decisions are made in a continuous
> fashion, with the review and testing process on the repository being the
> main factor. And I'm pretty active reviewing things on the lightning spec
> side, at least as much as you.
>
> > This implies you absolutely don't have a deep understanding of all the
> > things happening in the project, which makes you poorly suited to speak
> > on behalf of the project. I'm not trying to pass judgement on whether
> > you've contributed (you have! thanks for your contributions!), but only
> > suggesting that if you 

Re: [Lightning-dev] Resumable channels using OP_CHECKSIGFROMSTACK

2023-08-18 Thread Bastien TEINTURIER
Hi ghost,

> Note that the "reputation loss" argument does not hold up that well if
> you let Bob connect to arbitrary nodes.

That's true, in that case such nodes don't care as much about their
reputation and could have an incentive to cheat. But even in that
case, the user will detect it easily (because again, the backup
provider doesn't know when the wallet has lost state) and in that
case they shouldn't ignore it but should rather close their channel.

By itself, this is an incentive to not cheat, because even though the
wallet user lost money in on-chain fees, the backup provider loses
potential routing fees and has a very small chance of successfully
stealing funds from the user.

But I agree that in that case, a more complex protocol like what you
and Thomas suggest could make sense. But I'd still be wary about the
additional complexity it would add compared to the benefits it brings.

Thanks,
Bastien

Le jeu. 17 août 2023 à 20:43, SomberNight  a
écrit :
>
> Hi Bastien,
>
> > I don't think this is an attack wallet providers can reasonably attempt.
> > The mobile wallet can check at every connection that the provider isn't
> > trying to cheat, and the provider doesn't have any way of knowing when
> > the mobile wallet has lost data: it would thus be a very risky move to
> > try to cheat, because it is very unlikely to succeed and will result in
> > reputation loss for the provider.
>
> It would be nice to have a peer backup solution that can be provided by
> ~any node, and reasonably used without trust by ~any node [0]. To be more
> precise (and practical), imagine providing the backup "service" by
> default if you run a public forwarding node. E.g. if Alice runs eclair
> on her server and has some public channels, the default config of
> eclair could result in Alice signalling the "backup-provider" feature
> bit. Then, if Bob uses a phone-wallet, and opens to Alice (chosen by Bob
> arbitrarily), Bob could store their backups with her.
>
> Note that the "reputation loss" argument does not hold up that well if
> you let Bob connect to arbitrary nodes. Something stronger, such as
> actual confiscation of funds via Thomas' idea seems more applicable.
> Afterall, what does a random node have to lose if they tried to replay
> an old backup? You said in another email Phoenix atm simply ignores the
> stale backup. I wonder, does the client happily keep using the channel
> after a reconnect? If so, what is there to lose by attempting to replay
> old states? Random nodes don't really have an easy concept of
> reputation.
>
> Obviously the ACINQ node is not random - being hardcoded in your case
> it is anything but. Of course, in the case of an LSP/wallet combo
> such as Phoenix, the reputation argument is sufficient, I agree.
>
> -
>
> [0]: I think the whole backup provider idea could be made symmetric.
> I had previously thought that it is inherently asymmetric but after
> some more thought I think I was wrong:
>
> Symmetric resumable channels
> ============================
>
> Alice and Bob could have a channel where they both provide state backups
> to each other. Regarding who goes first in channel_reestablish, we need
> an extra preliminary round where both Alice and Bob commit to what they
> will send in channel_reestablish:
>
> Round 1:
> 1. Alice sends hashA = hash(bobs_backup, nonceA)
> 2. Bob sends hashB = hash(alices_backup, nonceB)
>
> Alice persists hashB to disk upon receiving it, and enforces that even
> if there is a disconnection, Bob cannot arbitrarily send a different
> commitment next time. (if the channel gets reestablished fully and
> Alice sends a new backup to Bob, the stored hashB can be cleared)
>
> Round 2:
> 3. Alice sends channel_reestablish containing bobs_backup, and nonceA
> 4. Bob sends channel_reestablish containing alices_backup, and nonceB
>
> Alice checks that the commitment received from Bob in round1 matches
> what was received in round2.
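>
> A minimal Python sketch of those two rounds (hash commitments only;
> payloads, nonce sizes and wire format are placeholders):
>
>     import hashlib, os
>
>     def commit(backup: bytes, nonce: bytes) -> bytes:
>         return hashlib.sha256(backup + nonce).digest()
>
>     alices_backup, bobs_backup = b"alice state", b"bob state"
>
>     # Round 1: exchange commitments; Alice persists hashB so that a
>     # disconnection can't let Bob swap in a different backup later.
>     nonce_a, nonce_b = os.urandom(32), os.urandom(32)
>     hash_a = commit(bobs_backup, nonce_a)    # Alice -> Bob
>     hash_b = commit(alices_backup, nonce_b)  # Bob -> Alice (persisted)
>
>     # Round 2: both sides reveal in channel_reestablish, and each side
>     # verifies the reveal against the round-1 commitment.
>     assert commit(alices_backup, nonce_b) == hash_b
>     assert commit(bobs_backup, nonce_a) == hash_a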
>
> Regarding the opening post OP_CHECKSIGFROMSTACK on-chain enforcement,
> that can be made symmetric as well! The channel funding output script
> needs one taproot branch each for both Alice and Bob lying. The protocol
> needs to be tweaked a bit so as to allow a party to legitimately admit
> having lost state and commit to the hash of that in round1, and then
> reveal they lost state in round2 (e.g. send null data). In which case
> the other party would not be able to use the fraud proof taproot branch.
>
> Though note that user-error and manual copying/restoring of DBs could
> lead to catastrophic failure with the on-chain enforcement:
> if you restore an old db and launch your client, the client won't know
> it is running an old state and happily participate in the two round
> dance, giving a fraud proof to the counterparty in the process.
>
> Regards,
> ghost43
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org

Re: [Lightning-dev] Resumable channels using OP_CHECKSIGFROMSTACK

2023-08-17 Thread Bastien TEINTURIER
Hi Thomas,

> That's a pity. I would have expected you to be interested, given that
> Phoenix could benefit from that feature.

I don't think this is an attack wallet providers can reasonably attempt.
The mobile wallet can check at every connection that the provider isn't
trying to cheat, and the provider doesn't have any way of knowing when
the mobile wallet has lost data: it would thus be a very risky move to
try to cheat, because it is very unlikely to succeed and will result in
reputation loss for the provider.

So for this specific backup use case, I think it would simply be adding
complexity to solve an issue that doesn't matter in practice.

> How are the fraud proofs I described more dangerous than revoked states?
> There is no "toxic data" in here.

I didn't say they were more dangerous than revoked states?
What I meant is that signing those "commitments" by the wallet provider
is dangerous for their reputation if that service is only best-effort,
because there are race conditions in the lightning protocol for which
the wallet provider may not have the mobile wallet's latest state (it
is not entirely trivial to keep track of that state).

> Perhaps that PR could benefit from my idea of sending backup data as new
> fields of existing messages? I see that update_channel_backup needs to be
> sent *before* the corresponding change of state. I think using the
> existing messages would be more elegant, because it makes things atomic.

That was the main thing discussed in that PR. See [1] for the end of
that discussion (the earlier comments contain a lot of details on that
design choice). Sending data in existing messages has a length issue,
because `commitment_signed` for example may already fill up the message
which doesn't leave any room for an additional backup.

Cheers,
Bastien

[1] https://github.com/lightning/bolts/pull/881#issuecomment-1132698926

Le jeu. 17 août 2023 à 12:52, Thomas Voegtlin  a
écrit :
>
> Hello Bastien
>
> > I don't think those fraud proofs are necessary at all.
>
> That's a pity. I would have expected you to be interested, given that
> Phoenix could benefit from that feature.
>
> Anyway, since my proposal requires new opcodes, I think of it more as
> a theoretical discussion, rather than a concrete proposal.
>
> > They're also
> > dangerous, because they impose a hard penalty on LSPs for something
> > that should be best effort (and could get desynchronized by connection
> > issues, especially with flaky mobile connections).
> >
>
> How are the fraud proofs I described more dangerous than revoked states?
> There is no "toxic data" in here.
>
> The server must not sign (ctn1, t1), (ctn2, t2) with ctn1 > ctn2 and
> t1 < t2. That is the only constraint, and it does not depend on flaky
> connections.
>
> All the server needs to do is remember the value of the highest timestamp
> signed so far. And, if they need to subtract leap seconds from their
> clock, wait a little bit before resuming the channel. That does not
> sound too hard...
>
>
> > I'm surprised that you don't mention the BOLT PR we created for those
> > backups in [1], I believe that is sufficient. It should probably be
> > moved to a blip instead of a BOLT once we've implemented this version
> > (the approach we use in Phoenix currently is slightly different), but
> > apart from that it contains all the mechanisms necessary to achieve
> > this today.
> >
>
> Sorry, I had seen that PR a few years ago, but in the meantime I had
> forgotten about it. I just had a new look at it now.
>
> Perhaps that PR could benefit from my idea of sending backup data as new
> fields of existing messages? I see that update_channel_backup needs to be
> sent *before* the corresponding change of state. I think using the
> existing messages would be more elegant, because it makes things atomic.
>
> Note that in practice, the client would only need to send his signature
> of the backup, and not the backup itself, which should be reconstructed
> by the server on each new state (see section 'saving bandwidth').
>
> cheers
>
> Thomas
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Resumable channels using OP_CHECKSIGFROMSTACK

2023-08-17 Thread Bastien TEINTURIER
Hi Peter,

> Note that a hard penalty might be more appropriate for a paid service,
> where there's a clear expectation that the service works.

Agreed, and in that case we'd have to design payment for this service,
which isn't completely trivial and I'm not sure users are ready to pay
for it (the free version that mostly works is likely a better fit for
the majority of users).

> Does Phoenix tell the user this has happened? Phoenix being a centralized
> entity, a useful outcome would be people finding out something is wrong
> and saying so on, e.g., social media.

We made sure that the code clearly distinguishes that case: we currently
handle it by simply ignoring the stale server backup, but it should be
easy to surface this to the user (which would also help us figure out
some edge cases where disconnections lead to a desynchronization between
the wallet and the wallet provider). The non-trivial part is to design
the proof that the user could share publicly, without revealing its
private data or leaking its backup encryption keys.

Thanks,
Bastien

Le mer. 16 août 2023 à 15:16, Peter Todd  a écrit :

> On Wed, Aug 16, 2023 at 09:56:21AM +0200, Bastien TEINTURIER wrote:
> > Hi Thomas,
> >
> > I don't think those fraud proofs are necessary at all. They're also
> > dangerous, because they impose a hard penalty on LSPs for something
> > that should be best effort (and could get desynchronized by connection
> > issues, especially with flaky mobile connections).
>
> Note that a hard penalty might be more appropriate for a paid service,
> where there's a clear expectation that the service works.
>
> Also I'll point out to Thomas, that he's actually come up with a generic
> "latest state backup" oracle scheme that might be useful for applications
> outside of Lightning. It might be worthwhile to forward it to bitcoin-dev,
> pointing that out, so it sees a wider audience. I can't recall anyone else
> coming up with that precise mechanism before.
>
> > I agree with Peter Todd that since the mobile wallet can check the state
> > of the returned backup at every connection request, this makes it highly
> > unlikely that the LSP can cheat: that's the approach we've taken for
> > Phoenix.
>
> Does Phoenix tell the user this has happened? Phoenix being a centralized
> entity, a useful outcome would be people finding out something is wrong and
> saying so on, eg, social media.
>
> Similarly, the most likely reason why that would be triggered is Phoenix
> making a mistake, not malice. Having clients loudly shut down probably
> protects Phoenix overall from their own mistakes in that scenario, by
> quickly getting people to stop using the Phoenix wallet temporarily until
> the situation is fixed. Bad for reputation in the short term. But IMO
> better in the long term.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Resumable channels using OP_CHECKSIGFROMSTACK

2023-08-16 Thread Bastien TEINTURIER
Hi Thomas,

I don't think those fraud proofs are necessary at all. They're also
dangerous, because they impose a hard penalty on LSPs for something
that should be best effort (and could get desynchronized by connection
issues, especially with flaky mobile connections).

I agree with Peter Todd that since the mobile wallet can check the state
of the returned backup at every connection request, this makes it highly
unlikely that the LSP can cheat: that's the approach we've taken for
Phoenix.

I'm surprised that you don't mention the BOLT PR we created for those
backups in [1], I believe that is sufficient. It should probably be
moved to a blip instead of a BOLT once we've implemented this version
(the approach we use in Phoenix currently is slightly different), but
apart from that it contains all the mechanisms necessary to achieve
this today.

Cheers,
Bastien

[1] https://github.com/lightning/bolts/pull/881

Le mar. 15 août 2023 à 05:52, Thomas Voegtlin  a
écrit :

> Hello list,
>
> Here is an idea to make lightning channels resumable from wallet seed.
> I have not implemented it yet, and there might be issues I am not
> seeing. Thus, I would be grateful for feedback.
>
> Thanks to SomberNight and Peter Todd for reviewing earlier versions of
> this proposal.
>
> Thomas
>
>
> ---
>
> Resumable channels using OP_CHECKSIGFROMSTACK
> =============================================
>
>
> In order to resume the activity of a Lightning channel, one needs a
> backup that contains all the information about the current channel
> state. The need to perform channel backups has plagued user
> experience, with many implementations reverting to static backups,
> which can be used to recover funds, but not to resume channel
> operations.
>
> Asking your channel counterparty to store your channel state has the
> advantage to make backup operations atomic. However, there is no
> guarantee that this is safe. Indeed, if the other party suspects that
> you have lost your state (for example, because you have been offline
> for a long time, or if they can see that you requested blockchain
> information following a certain pattern), they can try to send you a
> revoked state, and there is no way to punish them for doing that.
>
> Here is a proposal for a new type of channel funding transaction,
> where the redeem script has an additional spending path, that accepts
> a fraud proof: a proof that the channel counterparty has lied about
> the current state. This proposal requires two opcodes that are
> currently not available in Bitcoin: OP_CAT and OP_CHECKSIGFROMSTACK.
>
> Roles are asymmetric in this channel: Alice is a client, and Bob is a
> server, who stores Alice's state. Thus, this proposal is mostly suited
> for private channels with Lightning service providers. During channel
> reestablishment, Bob will send her latest state to Alice, using an
> extra field in the channel_reestablish message. Since Alice cannot
> punish Bob if she has lost her state, she must not let Bob learn
> whether she still has her state. Thus, Alice will never send
> channel_reestablish first.
>
> This proposal assumes that Alice and Bob each have a clock, and that
> these clocks do not drift too much relative to each other. The channel
> may become unusable if clocks differ too much, as discussed below.
>
>
> Simplified description
> ----------------------
>
> The *state* of the channel refers to everything Alice needs in order
> to resume channel operations. With every new commitment, Alice sends
> her current state, with her signature of that state:
>
>   - if Alice sends commitment_signed, the state and signature are
> included in that message.
>
>   - if Alice receives commitment_signed, the state and signature will
> be included in the next revoke_and_ack sent by Alice.
>
> With every new commitment, Bob sends a signed tuple (ctn, timestamp),
> where ctn is the current commitment number (for the moment, forget
> about the distinction between local and remote ctns), and timestamp is
> the current time for Bob.
>
>   - if Bob sends commitment_signed, the signed tuple is included in
> that message.
>
>   - if Bob receives commitment_signed, the signed tuple will be
> included in the next revoke_and_ack sent by Bob.
>
> The private key used by Bob to sign the tuples is constant over the
> lifetime of the channel, and it must not be reused in other
> channels. The corresponding public key will be used in the fraud proof
> spending path of the redeem script.
>
> Alice verifies Bob's signature. She also checks that the received
> timestamps are reasonable (see below) and strictly monotonic.
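>
> As an illustration, Alice's checks on each received tuple could look
> like the following sketch (`verify_sig`, the drift bound and the field
> types are placeholders, not part of the proposal):
>
>     MAX_DRIFT = 600  # seconds of clock skew Alice tolerates (tunable)
>
>     def check_tuple(ctn, timestamp, sig, last_ctn, last_ts, now,
>                     verify_sig):
>         assert verify_sig(sig, (ctn, timestamp)), "not signed by Bob"
>         assert ctn > last_ctn and timestamp > last_ts, "not monotonic"
>         assert abs(timestamp - now) <= MAX_DRIFT, "clock drift too large"
>         return ctn, timestamp  # stored as (last_ctn, last_ts) next time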
>
> With every channel_reestablish message, Bob will send two extra fields:
>   - (ctn, timestamp, bob_signature)
>   - (alice_state, alice_signature).
>
> Alice verifies that the state she received was signed by her, and that
> the (ctn, timestamp) tuple was signed by Bob. She also checks that the
> 

Re: [Lightning-dev] Blinded Paths Doom Scenario

2023-07-27 Thread Bastien TEINTURIER
Hi Tony,

> There's multiple deanonymizing techniques on LN today. Timing, CLTV, etc.

Those techniques only allow deanonymizing the recipient, not the sender?

> Or you could just be a major LSP with direct routes to every node and
> have end users with unannounced channels opened to you. You are aware
> that with Phoenix, Acinq is aware of the sender and destination for
> every outbound payment and they know the destination for every inbound?

But that has nothing to do with blinded paths, and doesn't change with
them? Whenever you're using a mobile wallet, you have to accept that the
nodes you are connected to know that you are the sender/recipient when
they forward an HTLC to/from you (because you'll never be able to hide
that you're using a mobile phone and thus not relaying).

And on the contrary, blinded paths will help here, because a mobile
wallet using trampoline won't have to reveal the payment destination,
just the blinded path introduction node.

Bastien


Le jeu. 27 juil. 2023 à 16:31, Tony Giorgio PM 
a écrit :
>
> Bastien,
>
> > They can't even know that they are the first hop
>
> There's multiple deanonymizing techniques on LN today. Timing, CLTV, etc.
>
> Or you could just be a major LSP with direct routes to every node and
> have end users with unannounced channels opened to you. You are aware
> that with Phoenix, Acinq is aware of the sender and destination for every
> outbound payment and they know the destination for every inbound?
>
> There's no way sphinx routing helps end users with only unannounced
> channels. The direct connection knows when they are the sender or
> receiver. So all it takes is an unannounced channel with one of the hops
> in the blinded route to ruin their privacy if those blinded hops are
> colluding.
>
> How it's different from today is that there are many possible routes to
> any given node. Privacy is significantly more degraded with each hop
> added to the blinded route if those nodes participate in data sharing.
> And the worst part is that senders have no idea who they are forwarding
> the payment down to, so they cannot avoid getting caught up in that.
>
> I think the point is to bring awareness of this, not that we can always
> protect against it. Route Blinding does flip the trade-offs so that
> receivers have great privacy and senders have worse. I think that's clear
> when you consider correlation attacks, unannounced channel assumptions,
> LSPs, and the strict enforcement of many hops being included in the
> blinded part of the route.
>
> Tony
>
> On 7/27/23 02:20, Bastien TEINTURIER wrote:
>
> Hey,
>
> > This breaks down since we have pretty weak anti-correlation mechanisms
> > when a payment is being routed. With every node the recipient adds in
> > the blinded route, there's a higher degree that a user is much closer to
> > one of them without realizing. A sender might try to go for 6 hops, but
> > if it turns out that their first hop is one of the nodes in the blinded
> > route, it ruins the privacy they were trying to attain.
>
> I still don't see why this would expose the sender's identity?
> Even if the sender is close to one of the regulated nodes, how does
> that let them learn who the sender is? We're still using Sphinx, so
> intermediate nodes have no way of knowing how close they are to the
> sender? They can't even know that they are the first hop, and any
> heuristic they'd use to try to infer that can be defeated.
>
> Cheers,
> Bastien
>
> Le jeu. 27 juil. 2023 à 07:35, Tony Giorgio PM 
a écrit :
>>
>> Bastien,
>>
>> > the recipient would only provide blinded paths that go through
>> > "regulated" nodes so that they can witness the payment.
>>
>> Not necessarily just to witness the payment, but to ensure multiple hops
>> away from any given payment. It's very similar to coinjoined funds. Some
>> bitcoin custodians have implemented the concept of "multiple hops away"
>> for on chain payments. Not all, and maybe not anymore, but I believe it
>> was a thing. I know some moved on to "percentage of identified funds" as
>> a risk metric.
>>
>> > what preserves the sender's privacy are the hops before the
>> > introduction point
>>
>> This breaks down since we have pretty weak anti-correlation mechanisms
>> when a payment is being routed. With every node the recipient adds in the
>> blinded route, there's a higher degree that a user is much closer to one
>> of them without realizing. A sender might try to go for 6 hops, but if it
>> turns out that their first hop is one of the nodes in the blinded route,
>> it ruins the privacy they were trying to attain. PTLCs could help, but
>> there's still timing and amount analysis.
>>
>> Tony Giorgio
>>
>> On 7/26/23 11:18, Bastien TEINTURIER wrote:
>>
>> Hey Ben,
>>
>> I'm not sure why it would be dramatically different from today. If I
>> understand your sc

Re: [Lightning-dev] Blinded Paths Doom Scenario

2023-07-27 Thread Bastien TEINTURIER
Hey,

> This breaks down since we have pretty weak anti-correlation mechanisms
> when a payment is being routed. With every node the recipient adds in the
> blinded route, there's a higher degree that a user is much closer to one
> of them without realizing. A sender might try to go for 6 hops, but if it
> turns out that their first hop is one of the nodes in the blinded route,
> it ruins the privacy they were trying to attain.

I still don't see why this would expose the sender's identity?
Even if the sender is close to one of the regulated nodes, how does
that let them learn who the sender is? We're still using Sphinx, so
intermediate nodes have no way of knowing how close they are to the
sender? They can't even know that they are the first hop, and any
heuristic they'd use to try to infer that can be defeated.

Cheers,
Bastien

Le jeu. 27 juil. 2023 à 07:35, Tony Giorgio PM 
a écrit :

> Bastien,
>
> > the recipient would only provide blinded paths that go through
> > "regulated" nodes so that they can witness the payment.
>
> Not necessarily just to witness the payment, but to ensure multiple hops
> away from any given payment. It's very similar to coinjoined funds. Some
> bitcoin custodians have implemented the concept of "multiple hops away" for
> on chain payments. Not all, and maybe not anymore, but I believe it was a
> thing. I know some moved on to "percentage of identified funds" as a risk
> metric.
>
> > what preserves the sender's privacy are the hops before the
> > introduction point
>
> This breaks down since we have pretty weak anti-correlation mechanisms
> when a payment is being routed. With every node the recipient adds in the
> blinded route, there's a higher degree that a user is much closer to one of
> them without realizing. A sender might try to go for 6 hops, but if it
> turns out that their first hop is one of the nodes in the blinded route, it
> ruins the privacy they were trying to attain. PTLCs could help, but there's
> still timing and amount analysis.
>
> Tony Giorgio
> On 7/26/23 11:18, Bastien TEINTURIER wrote:
>
> Hey Ben,
>
> I'm not sure why it would be dramatically different from today. If I
> understand your scenario correctly, the recipient would only provide
> blinded paths that go through "regulated" nodes so that they can
> witness the payment. Since the recipient agrees on doing that, the
> recipient could simply share data with those "regulated" nodes without
> forcing payments to go through them? And they can do that today without
> blinded paths?
>
> Even if the payments go through such "regulated" nodes, what preserves
> the sender's privacy are the hops before the introduction point, that
> they can choose freely. This is exactly the same model as freely chosing
> the hops to the recipient directly (when not using blinded paths). Apart
> from the loss of potential payment routes for the recipient, it doesn't
> look like the sender's privacy is very different?
>
> Cheers,
> Bastien
>
> Le mer. 26 juil. 2023 à 16:56, Ben Carman  a
> écrit :
>
>> Hi list,
>>
>> This is an idea I had the other week about a potential downside of
>> blinded paths that people should be aware of.
>>
>> Blinded paths work by encrypting specific paths to reach the destination
>> node, and each of these paths have an introduction point.
>> This has big privacy benefits for the receiving node as they can hide
>> among an anon set of anyone within X hops of the introduction node (X being
>> the size of the blinded path).
>>
>> However, this can have a potential downside for privacy and
>> decentralization on the network as a whole.
>> With blinded paths since you do not know the destination node the only
>> way to pay them is through one of the given paths.
>> Because of this, they can be used to enforce that "compliant" nodes are
>> the only ways to reach a given destination.
>>
>> In my experience today you can get away with telling your compliance
>> officer you will only open channels with people you trust, and we see this
>> with some regulated businesses today (Cash App & River only open to
> specific peers).
>> However with blinded paths we could have a world where not only do they
>> only open channels to specific peers but they enforce that when paying
>> them, the payment must go through at least N "compliant" nodes first.
>> This would make it so the pleb routing nodes of today would be completely
>> circumvented and users would be forced to route only through large
>> regulated hubs.
>> The receiver would be hurting their payment reliability as they are
>> removing potent

Re: [Lightning-dev] Blinded Paths Doom Scenario

2023-07-26 Thread Bastien TEINTURIER
Hey Ben,

I'm not sure why it would be dramatically different from today. If I
understand your scenario correctly, the recipient would only provide
blinded paths that go through "regulated" nodes so that they can
witness the payment. Since the recipient agrees on doing that, the
recipient could simply share data with those "regulated" nodes without
forcing payments to go through them? And they can do that today without
blinded paths?

Even if the payments go through such "regulated" nodes, what preserves
the sender's privacy are the hops before the introduction point, that
they can choose freely. This is exactly the same model as freely chosing
the hops to the recipient directly (when not using blinded paths). Apart
from the loss of potential payment routes for the recipient, it doesn't
look like the sender's privacy is very different?

Cheers,
Bastien

Le mer. 26 juil. 2023 à 16:56, Ben Carman  a écrit :

> Hi list,
>
> This is an idea I had the other week about a potential downside of blinded
> paths that people should be aware of.
>
> Blinded paths work by encrypting specific paths to reach the destination
> node, and each of these paths have an introduction point.
> This has big privacy benefits for the receiving node as they can hide
> among an anon set of anyone within X hops of the introduction node (X being
> the size of the blinded path).
>
> However, this can have a potential downside for privacy and
> decentralization on the network as a whole.
> With blinded paths since you do not know the destination node the only way
> to pay them is through one of the given paths.
> Because of this, they can be used to enforce that "compliant" nodes are
> the only ways to reach a given destination.
>
> In my experience today you can get away with telling your compliance
> officer you will only open channels with people you trust, and we see this
> with some regulated businesses today (Cash App & River only open to
> specific peers).
> However with blinded paths we could have a world where not only do they
> only open channels to specific peers but they enforce that when paying
> them, the payment must go through at least N "compliant" nodes first.
> This would make it so the pleb routing nodes of today would be completely
> circumvented and users would be forced to route only through large
> regulated hubs.
> The receiver would be hurting their payment reliability as they are
> removing potential paths they can receive from, but this is already the
> case for all blinded paths.
>
> This could hurt sender side privacy as well: since payment reliability
> rapidly falls off the more hops that are needed, it is likely the sender
> would need to be very closely connected to the introduction node or any
> of the nodes along the blinded path, and if all these compliant nodes are
> data sharing they'll be able to track a payment as it happens through the
> network just through basic timing analysis.
>
> My concern is lightning "chain analysis" companies could strong-arm
> businesses into doing things like this under the guise of making sure you
> don't receive OFAC coins. However, I am not sure if this is a "fixable"
> problem or just a trade-off we'll have to make to get receiver privacy in
> lightning, but I wanted to put it out there for people's
> opinions/awareness.
>
> Best,
>
> benthecarman
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Computing Blinding Factors in a PTLC and Trampoline World

2023-07-03 Thread Bastien TEINTURIER
Hey Zman,

I'm a bit confused because you use a different mechanism than the one
specified in the latest schnorr multi-hop locks proposal that I know of,
which can be found in [1].

If we use the construction from [1], it seems straightforward that using
trampoline doesn't have any impact on the protocol: the sender follows
the exact steps of that protocol and sends the left/right locks in the
trampoline onion.

Then intermediate trampoline nodes can draw their blinding factors
randomly, using the exact same steps, but using the right lock they
received as the secret they're trying to learn (from their point of
view, they're simply doing a normal session of this protocol with a
different secret - and don't learn anything about the "real" secret).

If this doesn't seem straightforward, I can create the same kind of
diagram as the one in [1] but for a trampoline scenario to show that
the maths checks out, let me know if that would be useful.
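
In the meantime, the algebra itself is easy to sanity-check with a toy
group, using scalars modulo a prime in place of real curve points (this
only checks the arithmetic, it is not cryptography):

    import secrets

    N = 2**127 - 1  # a prime standing in for the curve group order

    r = secrets.randbelow(N)  # receiver's secret, with R = r*G
    e = secrets.randbelow(N)  # final "error" blinding factor
    d = secrets.randbelow(N)  # Carol's factor, from her onion payload
    c = (e - d) % N           # sender picks c such that c + d = e

    # Representing the point x*G by the scalar x: Carol receives the PTLC
    # point c*G + R and forwards (c + d)*G + R, which equals e*G + R, so
    # the receiver can't tell a direct payment from a forwarded one.
    assert (c + d) % N == e % N

    # The receiver reveals e + r; Carol recovers her incoming secret c + r
    # without ever learning r (she only knows d).
    assert ((e + r) - d) % N == (c + r) % N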

Cheers,
Bastien

[1]
https://github.com/BlockstreamResearch/scriptless-scripts/blob/master/md/multi-hop-locks.md

Le jeu. 29 juin 2023 à 16:49, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

> Good morning list,
>
> Here is a simple mathematical demonstration of a particular way to compute
> blinding factors such that:
>
> * All non-Trampoline intermediate nodes only need to know one blinding
> factor.
> * The receiver only needs to know one blinding factor.
> * Trampoline nodes can provide the nodes on the sub-routes they decide the
> blinding factors without the non-Trampoline intermediate nodes knowing they
> are on a trampoline and not a "direct" source-to-dest route.
>
> First, let us start with:
>
> * The ultimate receiver has a secret `r`.
> * The ultimate receiver gives the ultimate sender the point `R`, such that
> `R = r * G`.
>
> Now, suppose the ultimate sender is directly channeled with the ultimate
> receiver.
>
> The ultimate sender chooses a fresh random scalar `e`, the "error"
> blinding factor that reaches the ultimate receiver.
> It then constructs an onion with `e` decryptable by the ultimate receiver.
> Together with that onion, the ultimate sender offers a PTLC with the point
> `e * G + R`.
>
> The ultimate receiver can claim that PTLC by revealing `e + r`, as it
> learns `e` from the onion, and knows `r` and the contract is to give `r` in
> exchange for payment.
>
> That is the simplest case.
>
> --
>
> Now let us suppose that the ultimate receiver needs to got through an
> intermediate node, Carol.
> The ultimate sender still needs to choose a final error scalar, `e`, by
> random.
> However, it also needs to generate two scalars, `c` and `d`, such that
> `c + d = e`.
> This can be done by selecting a random `d` and computing `c = e - d` (by
> algebra).
> By algebra, `d = e - c`.
> The ultimate sender then encrypts the onion:
>
> * `e` encrypted to the ultimate receiver.
> * The above ciphertext, and `d` encrypted to intermediate node Carol.
>
> The ultimate sender then sends the PTLC with point `c * G + R` to Carol.
>
> Each intermediate non-Trampoline node --- such as Carol --- then gets the
> input point, adds its per-hop blinding factor times `G`, and uses the
> result as the output point to the next hop.
>
> So Carol receives `c * G + R`.
> Carol then adds `d * G` (which is the `d` error it got from the onion).
> Carol then sends a PTLC (with one less onion hop layer as well) with the
> point `c * G + R + d * G`.
>
> Note that `e = c + d`, so we can re-arrange the PTLC sent by Carol to the
> ultimate sender as `(c + d) * G + R`.
> This is equal to `e * G + R`, the exact same as in the case where the
> ultimate sender is directly channeled with ultimate receiver.
>
> The ultimate receiver therefore cannot know whether it received from
> Carol, or from a further node, as in both the direct case and the indirect
> case, the ultimate receiver sees `e * G + R`.
>
> When the ultimate receiver releases `e + r`, Carol can compute `c + r` by
> taking `e + r - d`, and since `c = e - d`, `e + r - d = e - d + r = c + r`.
> It can then claim the incoming `c * G + R` with scalar `c + r`.
> Carol does not know `c`, it only knows `d`, and thus it cannot compute `r`.
>
> --
>
> Now let us instead suppose that Carol is a Trampoline node, and that the
> ultimate sender does not provide a detailed route from Carol to the next
> Trampoline hop.
> In this next example, the ultimate receiver is actually the final
> Trampoline hop after Carol, but Carol does not know this fact (and should
> not be able to learn this fact).
>
> The ultimate sender learns `R`, then selects a random `e`.
> It then selects `c` and `d` such that `c + d = e`, using the same
> technique as above (i.e. random-pick `d` and compute `c = e - d`).
>
> Now the ultimate sender computes a Trampoline-level onion, with the
> following:
>
> * `e` encrypted to the ultimate receiver.
> * the above ciphertext, `d`, and the next Trampoline hop (i.e. the node ID
> of 

Re: [Lightning-dev] Proposal: Bundled payments

2023-06-20 Thread Bastien TEINTURIER
Hi Thomas,

> I believe pre-payment of the mining fee can be combined with 0-conf;
> I am not sure why you picture them as opposed? Even with BOLT-12, I
> don't see 0-conf going away.

Sorry if that was unclear, that's not at all what I meant. What I meant
is that if we *stopped* using 0-conf for some reason, the solution I
described wouldn't work anymore and we would have to use a prepayment.

> Would you care to describe whether bundled payments already would
> work with the current specification, or whether they would require
> changes to BOLT-12?

That would require adding a TLV field to Bolt 12 invoices, or a TLV
field to onion messages. The design space for a prepayment solution
based on Bolt 12 is larger than with Bolt 11: I believe we can come
up with a more satisfying protocol.
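
For illustration, such a field would just be one more TLV record in the
invoice (the type number below is made up, the real one would be
assigned through the spec process):

    def encode_tlv(tlv_type: int, value: bytes) -> bytes:
        # Minimal TLV framing; type and length both fit in a single-byte
        # BigSize as long as they are below 253.
        assert tlv_type < 253 and len(value) < 253
        return bytes([tlv_type, len(value)]) + value

    PREPAYMENT_AMOUNT = 99  # made-up odd type, so readers may ignore it
    record = encode_tlv(PREPAYMENT_AMOUNT, (10_000_000).to_bytes(8, "big"))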

> I believe that it will take years *after it is merged*, until BOLT-12
> actually becomes the dominant payment method on Lightning. OTOH, if
> this feature was adopted in BOLT-11, I think it could be deployed much
> faster.

I'm not sure why you think it would be faster using Bolt 11? It does
require all sender and receiver software to be updated, and implementers
are currently focused on Bolt 12 so I find it less likely that they
will prioritize work on extensions to Bolt 11 (but I could be wrong).

> The goal of my proposal is to level the field of competition between
> Lightning service providers

I agree that it would be great to have a more satisfying solution than
what currently exists, but this is not a reason to rush it. I think
it's worth trying to build this on top of Bolt 12, where we can
probably do something cleaner since invoices are delivered on-the-fly
and short-lived.

Thanks,
Bastien

Le mar. 20 juin 2023 à 10:47, Thomas Voegtlin  a
écrit :

> Hello Dave,
>
> That is an interesting idea; it would indeed save space for the prepayment
> hash.
> I think the invoice would still need a feature bit, so that the receiver
> can decide to make prepayment optional or required.
>
> Note that for the feature to be optional, we need to subtract the
> prepayment amount from the main payment amount. Thus, in your example,
> Alice would expect to receive either:
>   (1 BTC, invoice payment_hash)
> or:
>   (1 BTC minus 10k sats, invoice payment_hash) + (10k sats,
>    prepayment_hash via keysend)
>
> cheers
>
> Thomas
>
>
>
>
> On 19.06.23 22:29, David A. Harding wrote:
> > On 2023-06-12 22:10, Thomas Voegtlin wrote:
> >> The semantics of bundled payments is as follows:
> >>  - 1. the BOLT-11 invoice contains two preimages and two amounts:
> >> prepayment and main payment.
> >>  - 2. the receiver should wait until all the HTLCs of both payments
> >> have arrived, before they fulfill the HTLCs of the pre-payment. If the
> >> main payment does not arrive, they should fail the pre-payment with a
> >> MPP timeout.
> >>  - 3. once the HTLCs of both payments have arrived, the receiver
> >> fulfills the HTLCs of the prepayment, and they broadcast their
> >> on-chain transaction. Note that the main payment can still fail if the
> >> sender never reveals the preimage of the main payment.
> >
> > Hi Thomas,
> >
> > Do you actually require a BOLT11 invoice to contain a payment hash for
> > the prepayment, or would it be acceptable for the prepayment to use a
> > keysend payment with the onion message payload for the receiver
> > indicating what payment hash to associate with the prepayment (e.g.,
> > Alice wants to receive 1 BTC to hash 0123...cdef with a prepayment of
> > 10k sats, so the 10k sats is sent via keysend with metadata indicating
> > the receiver shouldn't claim it until they receive the 1 BTC HTLC to
> > 0123...cdef).
> >
> > If so, I think then you'd only need BOLT11 invoices to be extended with
> > an extra_fee_via_keysend field.  That would be significantly smaller and
> > it also allows encoding the extra_fee_via_keysend field in an existing
> > BOLT11 field like (d) description or the relatively new (m) metadata
> > field, which may allow immediate implementation until an updated version
> > of BOLT11 (or an alternative using offers) becomes widely deployed.
> >
> > Thanks,
> >
> > -Dave
>
> --
> Electrum Technologies GmbH / Paul-Lincke-Ufer 8d / 10999 Berlin / Germany
> Sitz, Registergericht: Berlin, Amtsgericht Charlottenburg, HRB 164636
> Geschäftsführer: Thomas Voegtlin
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Proposal: Bundled payments

2023-06-15 Thread Bastien TEINTURIER
Hi Thomas,

First of all, I'd like to highlight something that may not be obvious
from your email, and is actually pretty important: your proposal
requires *senders* to be aware that the payment will lead to a channel
creation (or a splice) on the *receiver* end. In particular, it requires
all existing software used by senders to be updated. For this reason, I
think extending Bolt 12 (which requires new sender code anyway) makes
more sense than updating Bolt 11.

I see only three strategies to provide JIT liquidity (by opening a new
channel or making a splice, I'll only use the open channel case below
for simplicity):

1. Ask receiver for the preimage and a fee, then open a channel and
   push the HTLC amount minus the fee
2. Open a channel, then forward the HTLC amount minus a fee
3. Pre-pay fee, then open a channel and forward the whole HTLC amount
   on that channel

What is currently deployed on the network is 1) and 2), while you're
proposing 3). Both 1) and 2) have the advantage that the sender doesn't
need to be aware that JIT liquidity is happening, and doesn't need to do
anything special for that payment, which is the main reason those
strategies were chosen.

If all you're concerned about is trust and regulation, solution 2) works
fine as long as the mempool isn't empty: if the user doesn't release the
preimage after you've opened the channel, you should just blacklist that
channel, reject payments made to it, and double-spend it whenever you
have another on-chain transaction to make (and use 1 sat/byte for JIT
liquidity transactions). Even if the mempool is empty, if your LSP has
transactions to make at every block, it's likely that it will succeed
at double-spending the faulty channel, and thus won't lose anything.
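
As a sketch of that policy in Python (the node/wallet APIs are hypothetical
placeholders, the exact hooks are implementation-specific):

    def on_preimage_never_released(node, channel):
        # The buyer kept the 0-conf channel but never revealed the preimage:
        # stop using the channel and reclaim the funds opportunistically.
        node.blacklist_channel(channel.channel_id)
        node.reject_payments_to(channel.channel_id)
        # The JIT funding tx paid 1 sat/byte, so the next on-chain transaction
        # we need to make anyway can double-spend it at virtually no cost.
        node.wallet.queue_double_spend(channel.funding_inputs)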

But I agree that this only works when coupled with 0-conf. If we're not
using 0-conf anymore, pre-paying fees would make more sense. But we will
likely keep on using 0-conf at least until Bolt 12 is deployed, so it
seems more reasonable to include this new feature in Bolt 12 rather than
Bolt 11, since all implementations are actively working on this?

Cheers,
Bastien

Le jeu. 15 juin 2023 à 10:52, Thomas Voegtlin  a
écrit :

> Hello Matt,
>
> I think it is not too late to add a new feature to BOLT-11. In any
> case, the belief that BOLT-11 is ossified should not be a reason to
> make interactive something that fundamentally does not require more
> interactivity than what BOLT-11 already offers. Technical decisions
> should be dictated by technical needs, and I am a minimalist when it
> comes to adding new messages to protocols.
>
> I believe that two major implementations have an incentive to support
> this proposal (although I cannot speak for them):
>   - Lightning Labs could potentially offer their Loop service to
> non-LND users.
>   - ACINQ would be able to open channels to Phoenix users without
> requesting the preimage first. This would put them on the safe side
> of the upcoming MICA regulation; I cannot emphasize enough how
> important that is.
>
> In addition, you could certainly decide to support that feature in
> LDK, and I can speak for Electrum :-)
>
> It is the first time I suggest a change to the Lightning protocol, and
> what I am proposing is really a tiny change. All we need is a new
> invoice feature, that describes the prepayment of a fee using a
> different preimage. This feature does not need to be set on all
> invoices, and it could be made optional during a transition period.
>
> Here is how that feature could possibly be made optional:
>   - a new feature bit is defined, BUNDLE_PREPAYMENT
>   - two extra fields are defined: prepayment_amount, prepayment_hash
>   - if the sender does not support BUNDLE_PREPAYMENT and the feature is
>     optional, it ignores the new fields
>   - if the sender supports BUNDLE_PREPAYMENT:
>  - sender sends (amount - prepayment_amount) with payment_hash
>  - sender sends prepayment_amount with prepayment_hash
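
For illustration, the sender-side logic above could look like this in Python
(all names, including the feature bit value, are hypothetical):

    BUNDLE_PREPAYMENT = 1 << 55  # hypothetical feature bit position

    def pay(sender, invoice):
        if sender.supports(BUNDLE_PREPAYMENT) and invoice.has(BUNDLE_PREPAYMENT):
            # The receiver gets the same total amount either way.
            sender.send(invoice.amount - invoice.prepayment_amount,
                        invoice.payment_hash)
            sender.send(invoice.prepayment_amount, invoice.prepayment_hash)
        else:
            # The feature is optional: legacy senders ignore the new fields.
            sender.send(invoice.amount, invoice.payment_hash)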
>
> The decision to make this feature required or optional remains with
> the service provider. I can see how submarine swap providers who are
> already exposed to the mining fee griefing attack could decide to make
> it optional for a transition period.
>
> cheers,
> Thomas
>
>
> Regarding your question (a) about the distinction between splice-out
> and submarine swaps: Submarine swaps make it possible to add receiving
> capacity to a channel.
>
>
>
>
> On 14.06.23 19:28, Matt Corallo wrote:
> > I think the ship has probably sailed on getting any kind of new
> interoperable change in to BOLT-11.
> >
> > We already can't get amount-less BOLT-11 invoices broadly supported,
> rolling out yet another new incompatible version of BOLT-11 and expecting
> the entire ecosystem to support it doesn't seem all that likely.
> >
> > If we're working towards specifying some "standard" way of doing swaps,
> (a) I'd be curious to understand why the need isn't obviated by splice-out,
> and (b) why it shouldn't be 

Re: [Lightning-dev] Liquidity griefing for 0-conf dual-funded txs

2023-06-07 Thread Bastien TEINTURIER
Hey Antoine,

Sure, I agree with you, the usual mempool pinning issues still apply
regardless of whether we use 0-conf or not. And we must solve them
all at some point!

I think a reasonable mid-term solution is to use v3 transactions for
channel funding and splicing, with the obvious caveat that it makes
them identifiable on-chain (unless in the longer term, everyone moves
to v3 transactions for everything, which doesn't seem crazy to me?).

In the longer term, we know that some kind of anti-DoS token will need
to be exchanged to avoid this whole class of issue, but as we've often
discussed this isn't an easy thing to design and analyze...

> it can still be valuable to disable inbound payments, or require a
> longer `cltv_expiry_delta` than usual, in case of mempool fee spikes
> delaying the 0-conf chain confirmation.

Sure, that's a policy that nodes can decide to apply if they want to,
and it should be simple enough to implement (no protocol changes are
needed).

Thanks,
Bastien

Le mer. 7 juin 2023 à 02:41, Antoine Riard  a
écrit :

> Hi Bastien,
>
> > This can be fixed by using a "soft lock" when selecting utxos for a non
> > 0-conf funding attempt. 0-conf funding attempts must ignore soft locked
> > utxos while non 0-conf funding attempts can (should) reuse soft locked
> > utxos.
>
> If my understanding of the "soft lock" strategy is correct - Only locking
> UTXO when it's a non 0-conf funding attempt - I think you're still exposed
> to liquidity griefing with dual-funding or splicing.
>
> The vector of griefing you're mentioning is the lack of signature release
> for a shared input by your counterparty. However, in the context of
> dual-funding where the counterparty can add any output with
> `tx_add_output`, the transaction can be pinned in the mempool in a very
> sneaky way, e.g. by abusing replacement rule 3.
>
> This latter pinning vector is advantageous to the malicious counterparty
> as I think you can batch your pinning against unrelated dual-fundings,
> linked in the mempool only by a malicious pinning CPFP.
>
> It is left as an exercise to the reader to find other vectors of pinnings
> that can be played out in the dual-funding flow.
>
> In terms of (quick) solutions to prevent liquidity griefing related to
> mempool vectors, the (honest) counterparty can require that any contributed
> output be encumbered by a 1-block CSV, unless it is the 2-of-2 funding output.
> Still, this mitigation can be limited as I think the initial commitment
> transaction must have anchor outputs on each-side, for each party to
> recover its contributed UTXOs in any case.
>
> > Then we immediately send `channel_ready` as well and start using that
> > channel (because we know we won't double spend ourselves). This is nice
> > because it lets us use 0-conf in a way where only one side of the
> > channel needs to trust the other side (instead of both sides trusting
> > each other).
>
> From the 0-conf initiator viewpoint (the one contributing the UTXO(s)), it
> can still be valuable to disable inbound payments, or require a longer
> `cltv_expiry_delta` than usual, in case of mempool fee spikes delaying the
> 0-conf chain confirmation.
>
> Beyond that, it sounds like liquidity griefing provoked by a lack of
> signature release or mempool funny games will always be there? Even for
> the latter, with package relay/nVersion 3 deployment, there is still the
> delay between the pinning happening among network mempools and the moment
> your replacement broadcast kicks in.
>
> As a more long-term solution, we might reuse solutions worked out to
> mitigate channel jamming, as the abstract problem is the same, namely your
> counterparty can lock up scarce resources without (on-chain/off-chain
> whatever) fees paid.
>
> E.g the Staking Credentials framework could be deployed by dual-funding
> market-makers beyond routing hops [0]. The dual-funding initiator would
> pay the maker a fee scaled to the amount of UTXOs contributed and to some
> worst-case liquidity griefing scenario. A privacy-preserving
> credential can be introduced between the payment of the fee and the redeem
> of the service to unlink dual-funding initiators (if the maker has enough
> volume to constitute a reasonable anonymity set).
>
> Best,
> Antoine
>
> [0]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2023-May/003964.html
>
>
> Le sam. 6 mai 2023 à 04:15, Bastien TEINTURIER  a
> écrit :
>
>> Good morning list,
>>
>> One of the challenges created by the introduction of dual funded
>> transactions [1] in lightning is how to protect against liquidity
>> griefing attacks from malicious peers [2].
>>
>> Let's start by reviewing this liquidity griefing issue

Re: [Lightning-dev] Liquidity griefing for 0-conf dual-funded txs

2023-05-10 Thread Bastien TEINTURIER
Hey Matt, Zman,

> I propose that we DO lock our UTXOs after tx_completes have been
> exchanged IF we are the only contributor.  We don't have to worry
> about liquidity griefing in this case, since the peer has no
> tx_signatures to withhold from us.

While this is true for dual funding, this isn't true for splicing, where
we need the remote `tx_signatures` to spend the channel's current
funding output. But it's not an issue, the untrusted peer will send
their `tx_signatures` first (since they're not contributing to the
transaction) and we can `TryLock()` once we receive that.
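
In Python-style pseudocode (hypothetical wallet/splice APIs), that ordering
could look like:

    def on_remote_tx_signatures(wallet, splice):
        # The untrusted peer contributes no inputs, so it must send its
        # tx_signatures first (we need them to spend the current funding
        # output); only then do we lock our own utxos and sign.
        for utxo in splice.our_inputs:
            if not wallet.try_lock(utxo):
                # Already locked by a concurrent attempt: abort safely, we
                # haven't sent our signatures yet.
                return splice.abort("utxo already locked")
        splice.send_tx_signatures()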

> Your proposal basically means "do not dual-fund 0-conf".
> You might as well use the much simpler openv1 flow in that case, just
> because it is simpler.

I also thought that this was the easy way out, but I was hoping we could
do better. The issue with that option (using v1 with locks for 0-conf,
and v2 with soft locks for non 0-conf) is that we need to implement that
soft lock mechanism (we cannot simply rely on bitcoin core, which only
supports hard locks) or use separate bitcoin core wallets for 0-conf and
non 0-conf.

But there is probably no free lunch here! And anyway, this post was also
made to raise awareness for implementers to make sure they don't end up
accidentally double-spending 0-conf channels when implementing dual
funding.

Thanks,
Bastien

Le mer. 10 mai 2023 à 02:07, ZmnSCPxj  a écrit :

> Good morning Matt, and t-bast,
>
> Your proposal basically means "do not dual-fund 0-conf".
> You might as well use the much simpler openv1 flow in that case, just
> because it is simpler.
>
> Regards,
> ZmnSCPxj
>
>
>
>
> Sent with Proton Mail secure email.
>
> --- Original Message ---
> On Tuesday, May 9th, 2023 at 5:38 PM, Matt Morehouse <
> mattmoreho...@gmail.com> wrote:
>
>
> > Hi Bastien,
> >
> > In general, 0-conf is only safe when WE are the only contributor to
> > the channel, otherwise the peer could double spend us.
> >
> > The problem you seem to be describing is that we might double-spend
> > ourselves if we don't lock our 0-conf UTXOs at some point. I propose
> > that we DO lock our UTXOs after tx_completes have been exchanged IF we
> > are the only contributor. We don't have to worry about liquidity
> > griefing in this case, since the peer has no tx_signatures to withhold
> > from us. Of course, the opportunistic upgrade of a regular channel to
> > 0-conf won't work -- we need a way to differentiate 0-conf channels
> > prior to UTXO selection, so that we don't reuse soft-locked UTXOs.
> >
> > All together, what I propose is:
> >
> > 1) If the channel type has option_zeroconf, select UTXOs that are not
> > soft locked.
> > 2) If the peer adds any inputs to the funding transaction, abort
> > (0-conf is unsafe for us in this case).
> > 3) After tx_complete exchange, TryLock() our UTXO inputs and abort if
> > already locked.
> > 4) Broadcast funding transaction and begin using the 0-conf channel.
> >
> > I think this at least enables the common use case for 0-conf: LSPs can
> > use their own funds to open 0-conf channels for clients.
> >
> > - Matt
> >
> >
> >
> >
> > > On Sat, May 6, 2023 at 3:16 AM Bastien TEINTURIER bast...@acinq.fr wrote:
> >
> > > Good morning list,
> > >
> > > One of the challenges created by the introduction of dual funded
> > > transactions [1] in lightning is how to protect against liquidity
> > > griefing attacks from malicious peers [2].
> > >
> > > Let's start by reviewing this liquidity griefing issue. The dual
> > > funding protocol starts by exchanging data about the utxos each peer
> > > adds to the shared transaction, then exchange signatures and broadcast
> > > the resulting transaction. If peers lock their utxos as soon as they've
> > > decided to add them to the shared transaction, the remote node may go
> > > silent. If that happens, the honest node has some liquidity that is
> > > locked and unusable.
> > >
> > > This cannot easily be fixed by simply unlocking utxos after detecting
> > > that the remote node is fishy, because the remote node would still have
> > > succeeded at locking your liquidity for a (small) duration, and could
> > > start other instances of that attack with different node_ids.
> > >
> > > An elegant solution to this issue is to never lock utxos used in dual
> > > funded transactions. If a remote node goes silent in the middle of an
> > > instance of the protocol, your utxos will automatically be re-used in
> > > another instance of the protocol.

[Lightning-dev] Liquidity griefing for 0-conf dual-funded txs

2023-05-05 Thread Bastien TEINTURIER
Good morning list,

One of the challenges created by the introduction of dual funded
transactions [1] in lightning is how to protect against liquidity
griefing attacks from malicious peers [2].

Let's start by reviewing this liquidity griefing issue. The dual funding
protocol starts by exchanging data about the utxos each peer adds to the
shared transaction, then exchange signatures and broadcast the resulting
transaction. If peers lock their utxos as soon as they've decided to add
them to the shared transaction, the remote node may go silent. If that
happens, the honest node has some liquidity that is locked and unusable.

This cannot easily be fixed by simply unlocking utxos *after* detecting
that the remote node is fishy, because the remote node would still have
succeeded at locking your liquidity for a (small) duration, and could
start other instances of that attack with different node_ids.

An elegant solution to this issue is to never lock utxos used in dual
funded transactions. If a remote node goes silent in the middle of an
instance of the protocol, your utxos will automatically be re-used in
another instance of the protocol. The only drawback with that approach
is that when you have multiple concurrent instances of dual funding with
honest peers, some of them may fail because they are double-spent by one
of the concurrent instances. This is acceptable, since the protocol
should complete fairly quickly when peers are honest, and at worst, it
can simply be restarted when failure is detected.

But that solution falls short when using 0-conf, because accidentally
double-spending a 0-conf channel (because of concurrent instances) can
result in loss of funds for one of the peers (if payments were made on
that channel before detecting the double-spend). It seems like using
0-conf forces us to lock utxos to avoid this issue, which means that
nodes offering 0-conf services expose themselves to liquidity griefing.

Another related issue is that nodes that want to offer 0-conf channels
must ensure that the utxos they use for 0-conf are isolated from the
utxos they use for non 0-conf, otherwise it is not possible to properly
lock utxos, because of the following race scenario:

- utxoA is selected for a non 0-conf funding attempt and not locked
  (to protect against liquidity griefing)
- utxoA is also selected for a 0-conf funding attempt (because it is
  found unlocked in the wallet) and then locked
- the funding transaction for the 0-conf channel is successfully
  published first and that channel is instantly used for payments
- the funding transaction for the non 0-conf channel is then published
  and confirms, accidentally double-spending the 0-conf channel

This can be fixed by using a "soft lock" when selecting utxos for a non
0-conf funding attempt. 0-conf funding attempts must ignore soft locked
utxos while non 0-conf funding attempts can (should) reuse soft locked
utxos.
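
A minimal sketch of that selection rule, in Python (hypothetical wallet API):

    def select_funding_utxos(wallet, amount_sat, zero_conf):
        # 0-conf attempts must never reuse a utxo that a concurrent non
        # 0-conf attempt may already be spending.
        candidates = [u for u in wallet.unlocked_utxos()
                      if not (zero_conf and wallet.is_soft_locked(u))]
        selected = wallet.coin_select(candidates, amount_sat)
        for u in selected:
            if zero_conf:
                wallet.hard_lock(u)  # a 0-conf channel must never be double-spent
            else:
                wallet.soft_lock(u)  # reusable, but only by non 0-conf attempts
        return selected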

In eclair, we are currently doing "opportunistic" 0-conf:

- if we receive `channel_ready` immediately (which means that our peer
  trusts us to use 0-conf)
- and we're the only contributor to the funding transaction (our peer
  doesn't have any input that they could use to double-spend)
- and the transaction hasn't been RBF-ed yet

Then we immediately send `channel_ready` as well and start using that
channel (because we know we won't double spend ourselves). This is nice
because it lets us use 0-conf in a way where only one side of the
channel needs to trust the other side (instead of both sides trusting
each other).
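
Expressed as a predicate in Python (hypothetical channel fields), the
opportunistic upgrade boils down to:

    def can_use_zero_conf(channel):
        # All three conditions from the list above must hold.
        return (channel.received_channel_ready_early   # peer trusts us with 0-conf
                and not channel.remote_inputs          # peer cannot double-spend us
                and not channel.funding_tx_rbfed)      # no competing versions exist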

Unfortunately, we cannot do that anymore when mixing 0-conf and non
0-conf funding attempts, because the utxos may be soft locked,
preventing us from "upgrading" to 0-conf.

You have successfully reached the end of this quite technical post,
congrats! My goal with this post is to gather ideas on how we could
improve that situation and offer good enough protections against
liquidity griefing for nodes offering 0-conf services. Please share
your ideas! And yes, I know, 0-conf is a massive implementation pain
point that we would all like to remove from our codebases, but hey,
users like it ¯\_(ツ)_/¯

Cheers,
Bastien

[1] https://github.com/lightning/bolts/pull/851
[2] https://github.com/lightning/bolts/pull/851#discussion_r997537630
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Splice Lock Race Condition Solution

2023-04-09 Thread Bastien TEINTURIER
Hi Dustin,

I believe this is the scenario I described in [1]?

I haven't looked at the `announcement_signatures` case yet, but at least
for the `commit_sig` case this should never be an issue. It only means
that sometimes, after sending `splice_locked`, you will receive some
`commit_sig` messages that are for commitments that you don't care about
anymore. You should be able to safely ignore those `commit_sig`. I have
provided more details in the gist linked.

Let me know if I'm missing something, but I believe this is simply an
edge case that implementations need to correctly handle, not a protocol
issue? Or maybe I'm not understanding the scenario correctly?

By the way, I find your notation a bit hard to follow...I think that we
really need to detail the exact message flow (like I did in the linked
gist) to be able to explain protocol issues, otherwise there's always
a risk that people think about a slightly different message flow, which
means we'll just talk past each other...

Cheers,
Bastien

[1]
https://gist.github.com/t-bast/1ac31f4e27734a10c5b9847d06db8d86#multiple-splices-with-racy-splice_locked


Le jeu. 6 avr. 2023 à 02:40, Dustin Dettmer  a écrit :

> Hey,
>
> In testing the `splice_locked` workflow I discovered a race condition
> that is critical to solve correctly. The core problem happens if any
> channel activity occurs in the time after `splice_locked` is sent and
> before `splice_locked` is received.
>
> `splice_locked` is defined as being locked once it is both sent and
> received. It is fairly trivial to build a test case for this -- have a node
> continually spamming payments while `splice_lock`ing is occurring and the
> race condition will trigger relatively often.
>
> The race condition affects two messages in particular: `commitment_signed`
> and `announcement_signatures`. Below is an example of how it occurs with
> commitment but the flow is essentially the same for announcement:
>
> Legend:
> Item -> means sent
> Item <- means received
> Chan X (implies a channel at block height X)
> (Since these happen at different times)
> Splice locked race condition example
> Node A. Node B.
> * Channel starts at block height 100
> splice_locked ->
> <- splice_locked
> <- commitments_signed (Chan 100)
> -> splice_locked
> Node B now considers splice locked (Chan 106)
> <- commitments_signed (Chan 106)
> splice_locked <-
> Node A now considers splice locked (Chan 106)
> commitments_signed <- (Chan 100)
> commitments_signed <- (Chan 106)
> Node A considers the commitments_signed for Chan 100 invalid.
> The commitments_signed for Chan 106 is, however, valid.
> This example uses commitments_signed but remains a problem for any message
> that depends on channel state.
>
> The solution requires the temporary storing of two items:
> * [scid] last_short_channel_id (the pre-splice short channel id)
> * [bool] splice_await_commitment_success
>
> After sending & receiving `splice_locked` (the so-called 'mutual splice
> lock'), the last_short_channel_id should be set to the pre-splice short
> channel id and splice_await_commitment_success should be flagged to true.
>
> If an `announcement_signatures` is received with an scid matching
> `last_short_channel_id` the message should be ignored and the channel
> connection should not be aborted (as it normally would).
>
> If a `commitment_signed` message is received with the
> tlv splice_info->splice_channel_id set to something other than the
> successfully confirmed splice channel_id, the message should be ignored.
>
> Once a revoke_and_ack is successfully sent OR received,
> `last_short_channel_id` and `splice_await_commitment_success` should be
> reset and normal validation of `announcement_signatures` and
> `commitment_signed` should be resumed.
>
> This solves the race condition while preserving as strict a validation of
> messages as possible and removes the need to add new fields to these
> messages.
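
A sketch of that state machine in Python (hypothetical field and handler
names):

    def on_mutual_splice_locked(chan):
        # `splice_locked` has now been both sent and received.
        chan.last_short_channel_id = chan.pre_splice_short_channel_id
        chan.splice_await_commitment_success = True

    def on_announcement_signatures(chan, msg):
        if (chan.splice_await_commitment_success
                and msg.short_channel_id == chan.last_short_channel_id):
            return  # stale pre-splice message: ignore it, don't abort
        chan.validate_announcement_signatures(msg)

    def on_revoke_and_ack_sent_or_received(chan):
        # Normal validation resumes once a commitment dance completes.
        chan.splice_await_commitment_success = False
        chan.last_short_channel_id = None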
>
> Cheers,
> Dusty
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Proposed changes to the splicing specification

2023-04-02 Thread Bastien TEINTURIER
Good morning list,

As some of you may know, we've been hard at work experimenting with
splicing [1]. Splicing is a complex feature with a large design space.
It was interesting to iterate on two separate implementations (eclair
and cln) and discover the pain points, edge cases and things that could
be improved in the protocol specification.

After a few months trying out different approaches, we'd like to share
changes that we believe make the splicing protocol simpler and more
robust.

We call "active commitments" the set of valid commitment transactions to
which updates must be applied. While one (or more) splices are ongoing,
there is more than one active commitment. When signing updates, we send
one `commitment_signed` message per active commitment. We send those
messages in the order in which the corresponding funding transactions
have been created, which lets the receiver implicitly match every
`commitment_signed` to their respective funding transaction.

Once we've negotiated a new splice and reached the signing steps of the
interactive-tx protocol, we send a single `commitment_signed` for that
new commitment. We don't revoke the previous commitment(s), as this adds
an unnecessary step. Conceptually, we're simply adding a new commitment
to our active commitments set.

A sample flow will look like this:

   Alice   Bob
 | stfu |
 |->|
 | stfu |
 |<-|
 |  splice_init |
 |->|
 |  splice_ack  |
 |<-|
 |<interactive-tx>| Exchange tx_add_input and tx_add_output
 |<-------------->| messages to build the splice transaction.
 | tx_complete  |
 |->|
 | tx_complete  |
 |<-|
 | commit_sig   | Sign the new commitment.
 |->|
 | commit_sig   | Sign the new commitment.
 |<-|
 |tx_signatures |
 |->|
 |tx_signatures |
 |<-|
 |  |
 |   update_add_htlc| Alice and Bob use the channel while
 |                  | the splice transaction is unconfirmed.
 |->|
 |   update_add_htlc|
 |->|
 | commit_sig   | Sign the old commitment.
 |->|
 | commit_sig   | Sign the new commitment.
 |->|
 |   revoke_and_ack |
 |<-|
 | commit_sig   | Sign the old commitment.
 |<-|
 | commit_sig   | Sign the new commitment.
 |<-|
 |   revoke_and_ack |
 |->|
 |  |
 |splice_locked | The splice transaction confirms.
 |->|
 |splice_locked |
 |<-|
 |  |
 |   update_add_htlc| Alice and Bob can use the channel and
 |                  | forget the old commitment.
 |->|
 | commit_sig   | Sign the new commitment.
 |->|
 |   revoke_and_ack |
 |<-|
 | commit_sig   | Sign the new commitment.
 |<-|
 |   revoke_and_ack |
 |->|
 |  |

You can find many more details and sample flows in [2].
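
The signing rule can be summarized with a short Python sketch (hypothetical
channel API):

    def send_commit_sigs(channel):
        # One commit_sig per active commitment, sent in the order in which
        # the corresponding funding transactions were created, so that the
        # receiver can match each signature without an explicit identifier.
        for commitment in sorted(channel.active_commitments,
                                 key=lambda c: c.funding_tx_creation_order):
            channel.send(channel.sign_commitment(commitment))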

We require nodes to store data about the funding transaction as soon as
they send their `commitment_signed` message. This lets us handle every
disconnection scenario safely, allowing us to either resume the signing
steps on reconnection or forget the funding attempt. This is important
because if peers disagree on the set of active commitments, this will
lead to a force-close. In order to achieve that, we only need to add
the `next_funding_txid` to the `channel_reestablish` message, and fill
it when we're missing signatures from our peer. Again, you can find more
details and sample flows in [2].
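
For illustration, a Python sketch of that reconnection rule (hypothetical
persistence API):

    def make_channel_reestablish(chan):
        msg = chan.default_channel_reestablish()
        # Stored as soon as we sent commitment_signed for the funding attempt.
        pending = chan.db.pending_funding_attempt()
        if pending is not None and not pending.received_remote_sigs:
            msg.next_funding_txid = pending.txid  # ask the peer to resume signing
        return msg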

Finally, after trying various approaches, we believe that the funding
amounts that peers exchange in `splice_init` and `splice_ack` should be
relative amounts based on each peer's current channel balance.

If Alice sends `funding_amount = 200_000 sats`, it means she will be
adding 200 000 sats to the channel's capacity (splice-in).

If she sends `funding_amount = -50_000 sats`, it means she will be
removing 50 000 sats from the channel's capacity (splice-out).
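
In other words (Python sketch, hypothetical names):

    def spliced_capacity(capacity_sat, local_funding_sat, remote_funding_sat):
        # funding_amount > 0 adds funds to the channel (splice-in), while
        # funding_amount < 0 removes funds from it (splice-out).
        return capacity_sat + local_funding_sat + remote_funding_sat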

Re: [Lightning-dev] A pragmatic, unsatisfying work-around for anchor outputs fee-bumping reserve requirements

2022-11-07 Thread Bastien TEINTURIER
> > ... in worst-case scenarios, a
> > node may need to fee-bump thousands of HTLC transactions in a short
> > period of time.
>
> IMO these new considerations aren't any worse than needing to predict the
> future fee schedule of the chain to ensure that you can force close in a
> timely manner when you need to. Re fee bumping thousands of HTLCs: anchor
> lets them all be batched in the same transaction, which reduces fees and
> also the worst-case on-chain force close footprint.
>
> > each node can simply sign multiple versions of the HTLC transactions at
> > various feerates
>
> I'm not sure this can be mapped super cleanly to taproot channels that use
> musig2. Today in the spec draft/impl, both sides maintain a pair of nonces
> (one for each commitment transaction). If they need to sign N different
> versions, then they also need to exchange N nonces, both during the initial
> funding process, and also each time a new commitment transaction is created.
> Mo signatures means mo transaction latency. Also how would retransmitting be
> handled? By sending distinct valid signatures for a given fee rate, you're
> effectively creating _even more_ commitments one needs to watch to be able
> to play once they potentially hit the chain.
>
> Ultimately, I'm not sure why implementations that have already rolled out
> anchors by default, and have a satisfactory policy for ensuring fee bumping
> UTXOs are available at all times would implement this. It's just yet another
> option defined in the spec, and prescribes a more restrictive solution to
> what's already possible: being able to dynamically fee bump commitment
> transactions, and aggregate second level spends.
>
> -- Laolu
>
> On Thu, Oct 27, 2022 at 6:51 AM Bastien TEINTURIER  wrote:
>>
>> Good morning list,
>>
>> The lightning network transaction format was updated to leverage CPFP
>> carve-out and allow nodes to set fees at broadcast time, using a feature
>> called anchor outputs [1].
>>
>> While desirable, this change brought a whole new set of challenges, by
>> requiring nodes to maintain a reserve of available utxos for fee-bumping.
>> Correctly managing this fee-bumping reserve involves a lot of complex
>> decisions and dynamic risk assessment, because in worst-case scenarios,
>> a node may need to fee-bump thousands of HTLC transactions in a short
>> period of time.
>>
>> This is especially frustrating because HTLC transactions should not need
>> external inputs, as the whole value of the HTLC is already provided in
>> its input, which means we could in theory "simply" decrease the amount of
>> the corresponding output to set the fees to any desired value. However,
>> we can't do this safely because it doesn't work well with the revocation
>> mechanism, unless we find fancy new sighash flags to add to bitcoin.
>> See [2] for a longer rant on this issue.
>>
>> A very low tech and unsatisfying solution exists, which is what I'm
>> proposing today: each node can simply sign multiple versions of the
>> HTLC transactions at various feerates, and at broadcast time if you're
>> lucky you'll have a pre-signed transaction that approximately matches
>> the feerate you want, so you don't need to add inputs from your fee
>> bumping reserve. This reduces the requirements on your on-chain wallet
>> and simplifies transaction management logic. I believe that it's a
>> pragmatic approach, even though not very elegant, to increase funds
>> safety for existing node operators and wallets. I opened a spec PR
>> that is currently chasing concept ACKs before I refine it [3].
>>
>> Please let me know what you think, and if this is something that you
>> would like your implementation to provide.
>>
>> Thanks,
>> Bastien
>>
>> [1] https://github.com/lightning/bolts/pull/688
>> [2] https://github.com/lightning/bolts/issues/845
>> [3] https://github.com/lightning/bolts/pull/1036
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Watchtower-Free Lightning Channels For Casual Users

2022-11-02 Thread Bastien TEINTURIER
> ... transaction appears
> to have some advantages in terms of capital efficiency. It's also somewhat 
> more compatible with the WF protocol, as the delay for the dedicated user to 
> obtain spliced-out funds is dependent on the actual time until the casual 
> user next comes online, rather than the worst-case delay until the casual 
> user comes online. This could be a big difference if the casual user is 
> typically online every day, but doesn't want the burden of having to be 
> online every day (or every week). I'm interested in your thoughts on this.
>
> Finally, I understand that the ability to quickly splice out channel funds to 
> improve capital efficiency is critical in the current environment. However, 
> if we eventually get to the point where the blockchain is highly-contested 
> and fees are high, it may no longer be worth paying the fees to put a splice 
> transaction on-chain unless a large amount of capital is at stake for a long 
> period of time.
>
> Regards,
> John
>
>
>
>
> Sent with Proton Mail secure email.
>
> --- Original Message ---
> On Monday, October 24th, 2022 at 2:50 AM, Bastien TEINTURIER <
> bast...@acinq.fr> wrote:
>
> Hi John,
>
> > My understanding of the current Lightning protocol is that users specify
> a to_self_delay safety parameter which is typically about 2 weeks and that
> they pay for routing, but not for their partner's cost-of-capital. Is that
> correct?
> >
> > If it is, then when a dedicated user (DLU) partners with a casual user
> (CLU), the DLU can only move liquidity to another Lightning channel by
> either:
> > 1) getting the CLU to sign a cooperative close transaction that enables
> (or directly implements) the desired movement of funds, or
> > 2) putting a non-cooperative close transaction on-chain and waiting
> approximately 2 weeks (based on the to_self_delay parameter set by the CLU)
> before moving the liquidity.
>
> That's correct, but that's something we intend to change to let LSPs
> re-allocate their liquidity more frequently and more efficiently.
>
> We don't have a fully specified proposal yet, but by leveraging a
> mechanism similar to splicing [1], mobile users would pre-sign a
> transaction that keeps the channel open, but lets the LSP get their
> balance (or part of it) out non-interactively. This would be used by
> LSPs if the user isn't using their channel actively enough and the LSP
> is low on available liquidity for other, more active users.
>
> This transaction would need to be revokable and must be delayed, since
> we still need the user to be able to punish malicious LSPs, but ideally
> (from the LSP's point of view) that delay should be at most 1 week, which
> forces users to regularly check the blockchain (which isn't ideal).
>
> It really is a trade-off to be able to lower the fees LSPs make users
> pay for liquidity, because LSPs know they can move it cheaply when it
> becomes necessary. I can see a future where users choose their trade-off:
> pay more to be able to go offline for longer periods or pay less but
> check the blockchain regularly. The same LSP could offer both features,
> if they're able to price them correctly (which isn't trivial).
>
> > My intuition is that in the long run, the cost of bitcoin capital will
> be very low, as it is an inherently deflationary monetary unit (and thus
> its value should increase with time). If this is correct, the long term
> cost-of-capital charges should be very low.
>
> I'm not convinced by that...even though the value of capital increases
> with time, liquidity providers will compete to earn more return on their
> locked capital. If one liquidity provider is able to use their capital
> more efficiently than another, they will be able to offer lower prices
> to their customers to a point that isn't economically viable for the
> other, less efficient liquidity provider?
>
> Since lightning doesn't allow any form of fractional reserve, the total
> available capital needs to be split between all existing users of the
> system, which is very inconvenient when trying to onboard a high number
> of new users.
>
> This is very theoretical though, I have absolutely no idea how those
> dynamics will actually play out, but it will be interesting to watch it
> unfold.
>
> Cheers,
> Bastien
>
> [1] https://github.com/lightning/bolts/pull/863
>
> Le mer. 12 oct. 2022 à 02:11, jlspc  a écrit :
>
>>
>> Hey Bastien,
>>
>> Thanks for your reply.
>>
>> Responses are in-line below:
>>
>> > Hey John,
>> >
>> > Thanks for sharing, this is very interesting.
>> >
>> > There is a good insigh

[Lightning-dev] A pragmatic, unsatisfying work-around for anchor outputs fee-bumping reserve requirements

2022-10-27 Thread Bastien TEINTURIER
Good morning list,

The lightning network transaction format was updated to leverage CPFP
carve-out and allow nodes to set fees at broadcast time, using a feature
called anchor outputs [1].

While desirable, this change brought a whole new set of challenges, by
requiring nodes to maintain a reserve of available utxos for fee-bumping.
Correctly managing this fee-bumping reserve involves a lot of complex
decisions and dynamic risk assessment, because in worst-case scenarios,
a node may need to fee-bump thousands of HTLC transactions in a short
period of time.

This is especially frustrating because HTLC transactions should not need
external inputs, as the whole value of the HTLC is already provided in
its input, which means we could in theory "simply" decrease the amount of
the corresponding output to set the fees to any desired value. However,
we can't do this safely because it doesn't work well with the revocation
mechanism, unless we find fancy new sighash flags to add to bitcoin.
See [2] for a longer rant on this issue.

A very low tech and unsatisfying solution exists, which is what I'm
proposing today: each node can simply sign multiple versions of the
HTLC transactions at various feerates, and at broadcast time if you're
lucky you'll have a pre-signed transaction that approximately matches
the feerate you want, so you don't need to add inputs from your fee
bumping reserve. This reduces the requirements on your on-chain wallet
and simplifies transaction management logic. I believe that it's a
pragmatic approach, even though not very elegant, to increase funds
safety for existing node operators and wallets. I opened a spec PR
that is currently chasing concept ACKs before I refine it [3].
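
At broadcast time, the selection itself is trivial; a Python sketch
(hypothetical storage of the pre-signed versions):

    def pick_htlc_tx(presigned, target_feerate):
        # presigned: non-empty list of (feerate, signed_tx), one per version.
        # Pick the pre-signed version closest to the feerate we want; if none
        # is close enough, fall back to adding inputs from the fee-bumping
        # reserve as before.
        feerate, tx = min(presigned, key=lambda p: abs(p[0] - target_feerate))
        return feerate, tx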

Please let me know what you think, and if this is something that you
would like your implementation to provide.

Thanks,
Bastien

[1] https://github.com/lightning/bolts/pull/688
[2] https://github.com/lightning/bolts/issues/845
[3] https://github.com/lightning/bolts/pull/1036
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Watchtower-Free Lightning Channels For Casual Users

2022-10-24 Thread Bastien TEINTURIER
Hi John,

> My understanding of the current Lightning protocol is that users specify
a to_self_delay safety parameter which is typically about 2 weeks and that
they pay for routing, but not for their partner's cost-of-capital. Is that
correct?
>
> If it is, then when a dedicated user (DLU) partners with a casual user
(CLU), the DLU can only move liquidity to another Lightning channel by
either:
> 1) getting the CLU to sign a cooperative close transaction that enables
(or directly implements) the desired movement of funds, or
> 2) putting a non-cooperative close transaction on-chain and waiting
approximately 2 weeks (based on the to_self_delay parameter set by the CLU)
before moving the liquidity.

That's correct, but that's something we intend to change to let LSPs
re-allocate their liquidity more frequently and more efficiently.

We don't have a fully specified proposal yet, but by leveraging a
mechanism similar to splicing [1], mobile users would pre-sign a
transaction that keeps the channel open, but lets the LSP get their
balance (or part of it) out non-interactively. This would be used by
LSPs if the user isn't using their channel actively enough and the LSP
is low on available liquidity for other, more active users.

This transaction would need to be revokable and must be delayed, since
we still need the user to be able to punish malicious LSPs, but ideally
(from the LSP's point of view) that delay should be at most 1 week, which
forces users to regularly check the blockchain (which isn't ideal).

It really is a trade-off to be able to lower the fees LSPs make users
pay for liquidity, because LSPs know they can move it cheaply when it
becomes necessary. I can see a future where users choose their trade-off:
pay more to be able to go offline for longer periods or pay less but
check the blockchain regularly. The same LSP could offer both features,
if they're able to price them correctly (which isn't trivial).

> My intuition is that in the long run, the cost of bitcoin capital will be
very low, as it is an inherently deflationary monetary unit (and thus its
value should increase with time). If this is correct, the long term
cost-of-capital charges should be very low.

I'm not convinced by that...even though the value of capital increases
with time, liquidity providers will compete to earn more return on their
locked capital. If one liquidity provider is able to use their capital
more efficiently than another, they will be able to offer lower prices
to their customers to a point that isn't economically viable for the
other, less efficient liquidity provider?

Since lightning doesn't allow any form of fractional reserve, the total
available capital needs to be split between all existing users of the
system, which is very inconvenient when trying to onboard a high number
of new users.

This is very theoretical though, I have absolutely no idea how those
dynamics will actually play out, but it will be interesting to watch it
unfold.

Cheers,
Bastien

[1] https://github.com/lightning/bolts/pull/863

Le mer. 12 oct. 2022 à 02:11, jlspc  a écrit :

>
> Hey Bastien,
>
> Thanks for your reply.
>
> Responses are in-line below:
>
> > Hey John,
> >
> > Thanks for sharing, this is very interesting.
> >
> > There is a good insight here that we can remove the intermediate
> > HTLC-timeout transaction for outgoing payments because we are the
> > origin of that payment (and thus don't need to quickly claim the
> > HTLC on-chain to then relay that failure to a matching incoming HTLC).
> >
> > More generally, you have perfectly identified that most of the
> > complexity of today's transactions come from the need to ensure that
> > a failing/malicious downstream channel doesn't negatively impact
> > honest upstream channels when relaying payments, and that some of this
> > complexity can be lifted when nodes don't relay payments.
>
> Thanks!
>
> > However, my main criticism of your proposal is that liquidity isn't free.
> > While your improvements are great from the CLU's point of view, I'm not
> > sure they're acceptable for the DLU. The main (probably only) job of an
> > LSP (DLU in your terminology) is to efficiently allocate their liquidity.
> > In order to do so, they must be able to quickly move liquidity from where
> > it's unused to where it may be better used. That means closely watching
> > the demand for block space and doing on-chain transactions when fees are
> > low (to open/close channels, splice funds in/out [1], make peer swaps [2],
> > etc). With your proposal, DLUs won't be able to quickly move liquidity
> > around, so the only way to make up for this is to charge the CLU for the
> > loss of expected revenue. I'm afraid that the amount DLUs would need to
> > charge CLUs will be prohibitively expensive for most CLUs.
> >
> > I'm curious to get your feedback on that point.
>
> I really appreciate your insight here. I'm just an interested observer who 
> doesn't have experience with creating and deploying 

Re: [Lightning-dev] Fat Errors

2022-10-20 Thread Bastien TEINTURIER
Hi Joost,

I need more time to review your proposed change, but I wanted to quickly
correct a misunderstanding you had in quoting eclair's code:

> Unfortunately it is possible for nodes on the route to hide themselves.
> If they return random data as the failure message, the sender won't know
> where the failure happened. Some senders then penalize all nodes that
> were part of the route [4][5]. This may exclude perfectly reliable nodes
> from being used for future payments.

Eclair's code does not penalize nodes for future payment attempts in this
case. It only ignores them for the retries of that particular payment.

Cheers,
Bastien

Le mer. 19 oct. 2022 à 13:13, Joost Jager  a écrit :

> Hi list,
>
> I wanted to get back to a long-standing issue in Lightning: gaps in error
> attribution. I've posted about this before back in 2019 [1].
>
> Error attribution is important to properly penalize nodes after a payment
> failure occurs. The goal of the penalty is to give the next attempt a
> better chance at succeeding. In the happy failure flow, the sender is able
> to determine the origin of the failure and penalizes a single node or pair
> of nodes.
>
> Unfortunately it is possible for nodes on the route to hide themselves. If
> they return random data as the failure message, the sender won't know where
> the failure happened. Some senders then penalize all nodes that were part
> of the route [4][5]. This may exclude perfectly reliable nodes from being
> used for future payments. Other senders penalize no nodes at all [6][7],
> which allows the offending node to keep the disruption going.
>
> A special case of this is a final node sending back random data. Senders
> that penalize all nodes will keep looking for alternative routes. But
> because each alternative route still ends with that same final node, the
> sender will ultimately penalize all of its peers and possibly a lot of the
> rest of the network too.
>
> I can think of various reasons for exploiting this weakness. One is just
> plain grievance for whatever reason. Another one is to attract more traffic
> by getting competing routing nodes penalized. Or the goal could be to
> sufficiently mess up reputation tracking of a specific sender node to make
> it hard for that node to make further payments.
>
> Related to this are delays in the path. A node can delay propagating back
> a failure message and the sender won't be able to determine which node did
> it.
>
> The link at the top of this post [1] describes a way to address both
> unreadable failure messages as well as delays by letting each node on the
> route append a timestamp and hmac to the failure message. The great
> challenge is to do this in such a way that nodes don’t learn their position
> in the path.
>
> I'm revisiting this idea, and have prototyped various ways to implement
> it. In the remainder of this post, I will describe the variant that I
> thought works best (so far).
>
> # Failure message format
>
> The basic idea of the new format is to let each node (not just the error
> source) commit to the failure message when it passes it back by adding an
> hmac. The sender verifies all hmacs upon receipt of the failure message.
> This makes it impossible for any of the nodes to modify the failure message
> without revealing that they might have played a part in the modification.
> It won’t be possible for the sender to pinpoint an exact node, because
> either end of a communication channel may have modified the message.
> Pinpointing a pair of nodes however is good enough, and is commonly done
> for regular onion failures too.
>
> On the highest level, the new failure message consists of three parts:
>
> `message` (var len) | `payloads` (fixed len) | `hmacs` (fixed len)
>
> * `message` is the standard onion failure message as described in [2], but
> without the hmac. The hmac is now part of `hmacs` and doesn't need to be
> repeated.
>
> * `payloads` is a fixed length array that contains space for each node
> (`hop_payload`) on the route to add data to return to the sender. Ideally
> the contents and size of `hop_payload` is signaled so that future
> extensions don’t require all nodes to upgrade. For now, we’ll assume the
> following 9-byte format:
>
>   `is_final` (1 byte) | `duration` (8 bytes)
>
>   `is_final` indicates whether this node is the failure source. The sender
> uses `is_final` to determine when to stop the decryption/verification
> process.
>
>   `duration` is the time in milliseconds that the node held the htlc. By
> observing the series of reported durations, the sender is able to pinpoint
> a delay down to a pair of nodes.
>
>   The `hop_payload` is repeated 27 times (the maximum route length).
>
>   Every hop shifts `payloads` 9 bytes to the right and puts its own
> `hop_payload` in the 9 left-most bytes.
>
> * `hmacs` is a fixed length array where nodes add their hmacs as the
> failure message travels back to the sender.
>
>   To keep things simple, I'll describe the 

Re: [Lightning-dev] Watchtower-Free Lightning Channels For Casual Users

2022-10-10 Thread Bastien TEINTURIER
Hey John,

Thanks for sharing, this is very interesting.

There is a good insight here that we can remove the intermediate
HTLC-timeout transaction for outgoing payments because we are the
origin of that payment (and thus don't need to quickly claim the
HTLC on-chain to then relay that failure to a matching incoming HTLC).

More generally, you have perfectly identified that most of the
complexity of today's transactions come from the need to ensure that
a failing/malicious downstream channel doesn't negatively impact
honest upstream channels when relaying payments, and that some of this
complexity can be lifted when nodes don't relay payments.

However, my main criticism of your proposal is that liquidity isn't free.
While your improvements are great from the CLU's point of view, I'm not
sure they're acceptable for the DLU. The main (probably only) job of an
LSP (DLU in your terminology) is to efficiently allocate their liquidity.
In order to do so, they must be able to quickly move liquidity from where
it's unused to where it may be better used. That means closely watching
the demand for block space and doing on-chain transactions when fees are
low (to open/close channels, splice funds in/out [1], make peer swaps [2],
etc). With your proposal, DLUs won't be able to quickly move liquidity
around, so the only way to make up for this is to charge the CLU for the
loss of expected revenue. I'm afraid that the amount DLUs would need to
charge CLUs will be prohibitively expensive for most CLUs.

I'm curious to get your feedback on that point.

Thanks again for sharing, and for the inherited IDs [3] proposal as well!

Bastien

[1] https://github.com/lightning/bolts/pull/863
[2] https://www.peerswap.dev/
[3] https://github.com/JohnLaw2/btc-iids


Le lun. 3 oct. 2022 à 18:55, jlspc via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :
>
> This is the first in a series of posts on ideas to improve the usability
> and scalability of the Lightning Network. This post presents a new channel
> protocol that allows casual users to send and receive Lightning payments
> without having to meet onerous availability requirements or use a
> watchtower service. This new Watchtower-Free (WF) protocol can also be
> used to simplify the reception of Lightning payments for casual users. No
> change to the underlying Bitcoin protocol is required.
>
> A paper with a more complete description of the protocol, including
> figures, is available [5].
>
> Properties
> ==========
>
> The user-visible properties of the WF protocol can be expressed using
> two parameters:
> * I_S: a short time interval (e.g., 10 minutes) for communicating with
>   peers, checking the blockchain, and submitting transactions to the
>   blockchain, and
> * I_L: a long time interval (e.g., 1-3 months).
>
> The casual user must be online for up to:
> * I_S every I_L (e.g., 10 minutes every 1-3 months) to safeguard the funds
>   in their Lightning channel.
>
> With the WF protocol, the latency for payments is unchanged from the
> current protocol, but the latency for getting a payment receipt from an
> uncooperative channel partner is increased. In addition, the casual user
> may have to pay their channel partner for the partner's cost of capital
> (which depends on I_L). If the casual user and their channel partner
> follow the protocol, the channel can remain off-chain arbitrarily long.
>
> First Attempt: Use The Current Lightning Protocol
> =================================================
>
> In order to motivate the new protocol, first consider what would happen if
> a casual user attempted to achieve the above properties with the current
> Lightning channel protocol. The casual user would set their
> "to_self_delay" (which controls how quickly their channel partner can
> receive funds from a transaction they put on-chain) and
> "cltv_expiry_delta" (which controls the staggering of timeouts between
> successive hops) parameters to values approaching I_L (because the casual
> user could be unavailable for nearly that long). This would create three
> problems:
>
> * Problem 1: The casual user's proposed channel partner would likely
>   reject the creation of the channel due to the excessive "to_self_delay"
>   value.
>
> * Problem 2: If a channel were created with these parameters, Lightning
>   payments would not be routed through it due to the excessive
>   "cltv_expiry_delta" value.
>
> * Problem 3: If a channel were created with these parameters and if the
>   casual user sent a payment on that channel, their partner could have to
>   go on-chain in order to pull the payment from the casual user. In
>   particular, the casual user could be offline for nearly I_L (e.g., 1-3
>   months) when their partner receives the receipt, thus forcing their
>   partner to go on-chain to receive payment before the expiry of the
>   associated HTLC.
>
> The WF Protocol
> ===============
>
> The WF protocol solves these problems by modifying the Lightning 

Re: [Lightning-dev] Onion messages rate-limiting

2022-07-26 Thread Bastien TEINTURIER
Hey all,

Thanks for the comments!
Here are a few answers inline to some points that aren't fully addressed
yet.

@laolu

> Another question on my mind is: if this works really well for rate
> limiting of onion messages, then why can't we use it for HTLCs as well?

Because HTLC DoS is fundamentally different: the culprit isn't always
upstream, most of the time it's downstream (holding an HTLC), so back
pressure cannot work.

Onion messages don't have this issue at all because there's no
equivalent to holding an onion message downstream, it doesn't have
any impact on previous intermediate nodes.

@ariard

> as the onion messages routing is source-based, an attacker could
> exhaust or reduce targeted onion communication channels to prevent
> invoices exchanges between LN peers

Can you detail how? That's exactly what this scheme is trying to prevent.
This looks similar to Joost's early comment, but I think it's based on a
misunderstanding of the proposal (as Joost then acknowledged). Spammers
will be statistically penalized, which will allow honest messages to go
through. As Joost details below, attackers with perfect information about
the state of rate-limits can in theory saturate links, but in practice I
believe this cannot work for an extended period of time.
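
To make that concrete, here is a rough Python sketch of such a back-pressure
rate limiter (the numbers are made up; the halving and slow recovery follow
the discussion quoted below):

    class IncomingPeerLimiter:
        BASE_RATE = 10.0   # onion messages per second, arbitrary example
        MIN_RATE = 0.1

        def __init__(self):
            self.rate = self.BASE_RATE

        def on_outgoing_limit_hit(self):
            # This peer's traffic made us hit an outgoing rate limit: halve
            # its allowance so the pressure propagates back to the source.
            self.rate = max(self.rate / 2, self.MIN_RATE)

        def on_well_behaved_interval(self):
            # Recover slowly while the peer stays below its limit.
            self.rate = min(self.rate * 2, self.BASE_RATE)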

@joost

Cool work with the simulation, thanks!
Let us know if that yields other interesting results.

Cheers,
Bastien

Le lun. 11 juil. 2022 à 11:09, Joost Jager  a écrit :

> On Sun, Jul 10, 2022 at 9:14 PM Matt Corallo 
> wrote:
>
>> > It can also be considered a bad thing that DoS ability is not based on
>> > a number of messages. It means that for the one time cost of channel
>> > open/close, the attacker can generate spam forever if they stay right
>> > below the rate limit.
>>
>> I don't see why this is a problem? This seems to assume some kind of
>> per-message cost that nodes have to bear, but there is simply no such
>> thing. Indeed, if message spam causes denial of service to other network
>> participants, this would be an issue, but an attacker generating spam
>> from one specific location within the network should not cause that,
>> given some form of backpressure within the network.
>>
>
> It's more a general observation that an attacker can open a set of
> channels in multiple locations once and can use them forever to support
> potential attacks. That is assuming attacks aren't entirely thwarted with
> backpressure.
>
>
>> > Suppose the attacker has enough channels to hit the rate limit on an
>> > important connection some hops away from themselves. They can then
>> > sustain that attack indefinitely, assuming that they stay below the
>> > rate limit on the routes towards the target connection. What will the
>> > response be in that case? Will node operators work together to try to
>> > trace back to the source and take down the attacker? That requires
>> > operators to know each other.
>>
>> No it doesn't, backpressure works totally fine and automatically applies
>> pressure backwards until nodes, in an automated fashion, are appropriately
>> ratelimiting the source of the traffic.
>>
>
> Turns out I did not actually fully understand the proposal. This version
> of backpressure is nice indeed.
>
> To get a better feel for how it works, I've coded up a simple single node
> simulation (
> https://gist.github.com/joostjager/bca727bdd4fc806e4c0050e12838ffa3),
> which produces output like this:
> https://gist.github.com/joostjager/682c4232c69f3c19ec41d7dd4643bb27.
> There are a few spammers and one real user. You can see that after some
> time, the spammers are all throttled down and the user packets keep being
> handled.
>
> If you add enough spammers, they are obviously still able to hit the next
> hop rate limit and affect the user. But because their incoming limits have
> been throttled down, you need a lot of them - depending on the minimum rate
> that the node goes down to.
>
> I am wondering about that spiraling-down effect for legitimate users. Once
> you hit the limit, it is decreased and it becomes easier to hit it again.
> If you don't adapt, you'll end up with a very low rate. You need to take a
> break to recover from that. I guess the assumption is that legitimate users
> never end up there, because the rate limits are much much higher than what
> they need. Even if they'd occasionally hit a limit on a busy connection,
> they can go through a lot of halvings before they'll get close to the rate
> that they require and it becomes a problem.
>
> But how would that work if the user only has a single channel and wants to
> retry? I suppose they need to be careful to use a long enough delay to not
> get into that down-spiral. But how do they determine what is long enough?
> Probably not a real problem in practice with network latency etc, even
> though a concrete value does need to be picked.
>
> Spammers are probably also not going to spam at max speed. They'd want to
> avoid their rate limit being slashed. In 

Re: [Lightning-dev] Inbound channel routing fees

2022-07-01 Thread Bastien TEINTURIER
Hi Joost,

> isn't it the case that it is always possible to DoS your peer by just
> rejecting any forward that comes in from them?

Yes, this is a good point, but there is a difference. If you do that with
inbound fees, the "malicious" peer is able to prevent _everyone_ from even
trying to route through you (because it's advertised).

Whereas if they selectively fail HTLCs you forward to them, only the payer
for that HTLC knows about it, and they can attribute the failure to the
malicious node, not to you.

Of course, that malicious node could also withhold the HTLC or return a
malformed error, but unfortunately we cannot easily protect against this.
My point is that this is bad behavior, and we shouldn't be giving nodes
more tools to misbehave: inbound fees are a very powerful tool for
misbehaving nodes.

> Or indirectly affecting them negatively by setting high fees on all
> outbound channels?

This case is completely different, because the "malicious" node can't
selectively advertise that: it will affect traffic coming from all of their
peers, so they would really be shooting themselves in the foot if they did
that.

> My thinking is that if I accept an incoming htlc, my local balance
> increases on that incoming channel. My money gets locked up in a channel
> that may or may not be interesting to me. Wouldn't it be fair to be
> compensated for that?

If that channel isn't interesting to you, then by all means you should fail
that HTLC or close the channel? Or you shouldn't have accepted it in the
first place?

I understand the desire to optimize revenue here, but I fear this concrete
proposal leads to many kinds of unhealthy incentives. I agree that there is
a risk in accepting channels from unknown nodes, but I think it should be
addressed differently: you could for example make the opener pay a fee when
they open a channel to you to compensate for that risk (some kind of
reversed liquidity ads).

Cheers,
Bastien

Le ven. 1 juil. 2022 à 14:17, Thomas HUET  a écrit :

> Hi Joost,
>
> It was discussed in this issue:
> https://github.com/lightning/bolts/issues/835
>
> On the network, the traffic is not balanced. Some nodes tend to receive
> more than they send, merchants for instance. For the lightning network to
> be reliable, we need to incentivise people to open channels to such nodes,
> or else there won't be enough liquidity available and payments will fail.
> The current fee structure provides this incentive: You pay some onchain
> fees and lock some funds and in exchange you will earn routing fees. My
> concern is that your proposed change would break that incentive and make
> the network less reliable.
>
> Thomas
>
> Le ven. 1 juil. 2022 à 14:02, Joost Jager  a
> écrit :
>
>> Hi Bastien,
>>
>> I vaguely remembered that the idea of inbound fees had been discussed
>> before. Before writing my post, I scanned through old ML posts and bolts
>> issues but couldn't find the discussion. Maybe it was part of a different
>> but related email or a bolts pr?
>>
>> With regards to your objections, isn't it the case that it is always
>> possible to DoS your peer by just rejecting any forward that comes in from
>> them? Or indirectly affecting them negatively by setting high fees on all
>> outbound channels? To me it seems that there is nothing to lose by adding
>> inbound fees.
>>
>> My thinking is that if I accept an incoming htlc, my local balance
>> increases on that incoming channel. My money gets locked up in a channel
>> that may or may not be interesting to me. Wouldn't it be fair to be
>> compensated for that?
>>
>> Any thoughts from routing node operators would be welcome too (or links
>> to previous threads).
>>
>> Joost
>>
>> On Fri, Jul 1, 2022 at 1:19 PM Bastien TEINTURIER 
>> wrote:
>>
>>> Hi Joost,
>>>
>>> As I've already stated every time this has been previously discussed, I
>>> believe
>>> this doesn't make any sense. The funds that are on the other side of the
>>> channel belong to your peer, not you, so they're free to use it however
>>> they
>>> want. If you're not happy with the way your peer is managing their fees,
>>> then
>>> don't open channels to them and let the network decide whether you're
>>> right or
>>> not.
>>>
>>> Moreover, you shouldn't care at all. If all the funds are on your peer's
>>> side,
>>> this isn't your problem, you used up all the money that was yours. As
>>> long as
>>> the channel is open, this is free inbound liquidity for you, so you're
>>> even
>>> benefiting from this.
>>>
>>> If Alice could set fees for Bob

Re: [Lightning-dev] Inbound channel routing fees

2022-07-01 Thread Bastien TEINTURIER
Hi Joost,

As I've already stated every time this has been previously discussed, I
believe this doesn't make any sense. The funds that are on the other side
of the channel belong to your peer, not you, so they're free to use them
however they want. If you're not happy with the way your peer is managing
their fees, then don't open channels to them and let the network decide
whether you're right or not.

Moreover, you shouldn't care at all. If all the funds are on your peer's
side, this isn't your problem: you used up all the money that was yours. As
long as the channel is open, this is free inbound liquidity for you, so
you're even benefiting from this.

If Alice could set fees for Bob's side of the channel, Alice could
arbitrarily DoS Bob's payments by setting a high fee. This is just one
example of the many ways this idea completely breaks the routing
incentives.

Cheers,
Bastien

Le ven. 1 juil. 2022 à 13:10, Joost Jager  a écrit :

> Path-finding algorithms that are currently in use generally don’t support
>> negative fees. But in this case, the sum of inbound and outbound fees is
>> still positive and therefore not a problem. If routing nodes set their
>> policies accidentally or intentionally so that the sum of fees turns out
>> negative, senders can just round up to zero and find a path as normal.
>>
>
> Correction to this:
>
> The sum of inbound and outbound are not the fees set by one single routing
> node. When path-finding considers a candidate hop, this adds the outbound
> fee of the "from" node and the inbound fee of the "to" node. Because those
> nodes don't necessarily coordinate fees, it may happen more often that the
> fee goes negative. Rounding up to zero is still a quick fix and better than
> ignoring inbound fees completely.


Re: [Lightning-dev] Onion messages rate-limiting

2022-06-30 Thread Bastien TEINTURIER
onymous tokens to allow nodes to give them to "good"
> > clients, which is pretty similar to my lofty Forwarding Pass idea as
> relates
> > to onion messaging, and also general HTLC jamming mitigation.
> >
> > In summary, we're not the first to attempt to tackle the problem of rate
> > limiting relayed message spam in an anonymous/pseudonymous network, and
> we
> > can probably learn a lot from what is and isn't working w.r.t how Tor
> > handles things. As you note near the end of your post, this might just be
> > the first avenue in a long line of research to best figure out how to
> handle
> > the spam concerns introduced by onion messaging. From my PoV, it still
> seems
> > to be an open question if the same network can be _both_ a reliable
> > micro-payment system _and_ also a reliable arbitrary message transport
> > layer. I guess only time will tell...
> >
> >  > The `shared_secret_hash` field contains a BIP 340 tagged hash
> >
> > Any reason to use the tagged hash here vs just a plain ol HMAC? Under the
> > hood, they have a pretty similar construction [4].
> >
> > [1]: https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-February/003498.html
> > [2]: https://status.torproject.org/issues/2022-06-09-network-ddos/
> > [3]: https://blog.torproject.org/stop-the-onion-denial/
> > [4]: https://datatracker.ietf.org/doc/html/rfc2104
> >
> > -- Laolu
> >
> >
> >
> > On Wed, Jun 29, 2022 at 1:28 AM Bastien TEINTURIER  wrote:
> >
> > During the recent Oakland Dev Summit, some lightning engineers got
> together to discuss DoS
> > protection for onion messages. Rusty proposed a very simple
> rate-limiting scheme that
> > statistically propagates back to the correct sender, which we
> describe in details below.
> >
> > You can also read this in gist format if that works better for you
> [1].
> >
> > Nodes apply per-peer rate limits on _incoming_ onion messages that
> should be relayed (e.g.
> > N/seconds with some burst tolerance). It is recommended to allow
> more onion messages from
> > peers with whom you have channels, for example 10/seconds when you
> have a channel and 1/second
> > when you don't.
> >
> > When relaying an onion message, nodes keep track of where it came
> from (by using the `node_id` of
> > the peer who sent that message). Nodes only need the last such
> `node_id` per outgoing connection,
> > which ensures the memory footprint is very small. Also, this data
> doesn't need to be persisted.
> >
> > Let's walk through an example to illustrate this mechanism:
> >
> > * Bob receives an onion message from Alice that should be relayed to
> Carol
> > * After relaying that message, Bob stores Alice's `node_id` in its
> per-connection state with Carol
> > * Bob receives an onion message from Eve that should be relayed to
> Carol
> > * After relaying that message, Bob replaces Alice's `node_id` with
> Eve's `node_id` in its
> > per-connection state with Carol
> > * Bob receives an onion message from Alice that should be relayed to
> Dave
> > * After relaying that message, Bob stores Alice's `node_id` in its
> per-connection state with Dave
> > * ...
> >
> > We introduce a new message that will be sent when dropping an
> incoming onion message because it
> > reached rate limits:
> >
> > 1. type: 515 (`onion_message_drop`)
> > 2. data:
> > * [`rate_limited`:`u8`]
> > * [`shared_secret_hash`:`32*byte`]
> >
> > Whenever an incoming onion message reaches the rate limit, the
> receiver sends `onion_message_drop`
> > to the sender. The sender looks at its per-connection state to find
> where the message was coming
> > from and relays `onion_message_drop` to the last sender, halving
> their rate limits with that peer.
> >
> > If the sender doesn't overflow the rate limit again, the receiver
> should double the rate limit
> > after 30 seconds, until it reaches the default rate limit again.
> >
> > The flow will look like:
> >
> > Alice  Bob 

[Lightning-dev] Onion messages rate-limiting

2022-06-29 Thread Bastien TEINTURIER
During the recent Oakland Dev Summit, some lightning engineers got together
to discuss DoS protection for onion messages. Rusty proposed a very simple
rate-limiting scheme that statistically propagates back to the correct
sender, which we describe in detail below.

You can also read this in gist format if that works better for you [1].

Nodes apply per-peer rate limits on _incoming_ onion messages that should
be relayed (e.g. N per second with some burst tolerance). It is recommended
to allow more onion messages from peers with whom you have channels, for
example 10 per second when you have a channel and 1 per second when you
don't.
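
As a rough illustration, the per-peer limit could be implemented as a
standard token bucket (this is only a sketch, the proposal doesn't mandate
any particular implementation; the rate and burst values are illustrative):

    import time

    class TokenBucket:
        # Allows `rate` messages per second with a `burst` allowance.
        def __init__(self, rate: float, burst: float):
            self.rate = rate
            self.burst = burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens based on elapsed time, capped at the burst size.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # drop the message instead of relaying it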

When relaying an onion message, nodes keep track of where it came from (by
using the `node_id` of the peer who sent that message). Nodes only need the
last such `node_id` per outgoing connection, which ensures the memory
footprint is very small. Also, this data doesn't need to be persisted.

Let's walk through an example to illustrate this mechanism:

* Bob receives an onion message from Alice that should be relayed to Carol
* After relaying that message, Bob stores Alice's `node_id` in its
per-connection state with Carol
* Bob receives an onion message from Eve that should be relayed to Carol
* After relaying that message, Bob replaces Alice's `node_id` with
Eve's `node_id` in its
per-connection state with Carol
* Bob receives an onion message from Alice that should be relayed to Dave
* After relaying that message, Bob stores Alice's `node_id` in its
per-connection state with Dave
* ...
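
In pseudo-Python, that bookkeeping could look like the following sketch
(`send` and `halve_rate_limit` are hypothetical helpers standing in for the
node's internals):

    # Map each outgoing connection to the node_id of the last peer whose
    # onion message we relayed on it: O(number of connections) memory.
    last_incoming_peer = {}

    def relay_onion_message(incoming_peer, outgoing_peer, message):
        send(outgoing_peer, message)
        last_incoming_peer[outgoing_peer] = incoming_peer

    def on_onion_message_drop(outgoing_peer, drop_message):
        # A drop came back on an outgoing connection: blame the last peer
        # whose message we relayed there (statistically the spammer).
        culprit = last_incoming_peer.get(outgoing_peer)
        if culprit is not None:
            halve_rate_limit(culprit)
            send(culprit, drop_message)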

We introduce a new message that will be sent when dropping an incoming
onion message because it
reached rate limits:

1. type: 515 (`onion_message_drop`)
2. data:
   * [`rate_limited`:`u8`]
   * [`shared_secret_hash`:`32*byte`]

Whenever an incoming onion message reaches the rate limit, the
receiver sends `onion_message_drop`
to the sender. The sender looks at its per-connection state to find
where the message was coming
from and relays `onion_message_drop` to the last sender, halving their
rate limits with that peer.

If the sender doesn't overflow the rate limit again, the receiver
should double the rate limit
after 30 seconds, until it reaches the default rate limit again.
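
A sketch of the corresponding limit adjustments (the halving and the
30-second doubling delay come from the proposal above; the floor value is
an assumption, the proposal doesn't specify one):

    DEFAULT_RATE = 10.0  # msg/s, e.g. for peers with whom we have channels
    MIN_RATE = 0.1       # assumed floor, not specified by the proposal

    class PeerLimit:
        def __init__(self):
            self.rate = DEFAULT_RATE

        def on_drop_received(self):
            # Halve the limit each time this peer triggers a downstream drop.
            self.rate = max(MIN_RATE, self.rate / 2)

        def on_30s_without_overflow(self):
            # Double back towards the default while the peer behaves.
            self.rate = min(DEFAULT_RATE, self.rate * 2)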

The flow will look like:

Alice                       Bob                       Carol
  |                          |                          |
  |      onion_message       |                          |
  |------------------------->|                          |
  |                          |      onion_message       |
  |                          |------------------------->|
  |                          |    onion_message_drop    |
  |                          |<-------------------------|
  |    onion_message_drop    |                          |
  |<-------------------------|                          |

The `shared_secret_hash` field contains a BIP 340 tagged hash of the
Sphinx shared secret of the
rate limiting peer (in the example above, Carol):

* `shared_secret_hash = SHA256(SHA256("onion_message_drop") ||
SHA256("onion_message_drop") || sphinx_shared_secret)`

This value is known by the node that created the onion message: if
`onion_message_drop` propagates
all the way back to them, it lets them know which part of the route is
congested, allowing them
to retry through a different path.

Whenever there is some latency between nodes and many onion messages,
`onion_message_drop` may
be relayed to the incorrect incoming peer (since we only store the
`node_id` of the _last_ incoming
peer in our outgoing connection state). The following example highlights this:

 Eve                        Bob                       Carol
  |      onion_message       |                          |
  |------------------------->|      onion_message       |
  |      onion_message       |------------------------->|
  |------------------------->|      onion_message       |
  |      onion_message       |------------------------->|
  |------------------------->|      onion_message       |
  |                          |------------------------->|
Alice                        |    onion_message_drop    |
  |      onion_message       |             +------------|
  |------------------------->|             |            |
  |                          |  onion_message           |
  |                          |-------------|----------->|
  |                          |             |            |
  |    onion_message_drop    |<------------+            |
  |<-------------------------|                          |

In this example, Eve is spamming but `onion_message_drop` is
propagated back to Alice instead.
However, this scheme will _statistically_ penalize the right incoming
peer (with a probability
depending on the volume of onion messages that the spamming peer is
generating compared to the
volume of legitimate onion messages).

It is an interesting research problem to find formulas for those
probabilities to evaluate how
efficient this will be against various 

Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-15 Thread Bastien TEINTURIER
Hey Zman and list,

I don't think waxwing's proposal will help us for private gossip.
The rate-limiting it provides doesn't seem to be enough in our case.
The proposal rate-limits token issuance to once every N blocks, where
N is the age of the utxo we prove ownership of. Once the token is
issued and verified, the attacker can spend that utxo, and after N
blocks they're able to get a new token with this new utxo.

That is a good enough rate-limit for some scenarios, but in our case
it means that every N blocks people are able to double the capacity
they advertise without actually having more funds.

We can probably borrow ideas from this proposal, but OTOH I don't
see how to apply it to lightning gossip: what we want isn't really
rate limiting, it's a stronger link between advertised capacity and
real on-chain capacity.

Cheers,
Bastien

Le mer. 15 juin 2022 à 00:01, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

>
> > ## Lightning Gossip
> >
> > # Gossip V2: Now Or Later?
>
> 
>
> > A proposal for the "re-design the entire thing" was floated in the past
> by
> > Rusty [6]. It does away with the strict coupling of channels to channel
> > announcements, and instead moves them to the _node_ level. Each node
> would
> > then advertise the set of "outputs" they have control of, which would
> then
> > be mapped to the total capacity of a node, without requiring that these
> > outputs self identify themselves on-chain as Lightning Channels. This
> also
> > opens up the door to different, potentially more privacy preserving
> > proofs-of-channel-ownership (something something zkp).
>
> waxwing recently posted something interesting over in bitcoin-dev, which
> seems to match the proof-of-channel-ownereship.
>
> https://gist.github.com/AdamISZ/51349418be08be22aa2b4b469e3be92f
>
> I confess to not understanding the mathy bits but it seems to me, naively,
> that the feature set waxwing points out match well with the issues we want
> to have:
>
> * We want to rate-limit gossip somehow.
> * We want to keep the mapping of UTXOs to channels private.
>
> It requires a global network that cuts across all uses of the same
> mechanism (similar to defiads, but more private --- basically this means
> that it cannot be just Lightning which uses this mechanism, at least to
> acquire tokens-to-broadcast-my-channels) to prevent a UTXO from being
> reused across services, a property I believe is vital to the expected
> spam-resistance.
>
> > # Friend-of-a-friend Balance Sharing & Probing
> >
> > A presentation was given on friend-of-a-friend balance sharing [16]. The
> > high level idea is that if we share _some_ information within a local
> > radius, then this gives the sender more information to choose a path
> that's
> > potentially more reliable. The tradeoff here ofc is that nodes will be
> > giving away more information that can potentially be used to ascertain
> > payment flows. In an attempt to minimize the amount of information
> shared,
> > the presenter proposed that just 2 bits of information be shared. Some
> > initial simulations showed that sharing local information actually
> performed
> > better than sharing global information (?). Some were puzzled w.r.t how
> > that's possible, but assuming the slides+methods are published others can
> > dig further into the model/parameter used to signal the inclusion.
> >
> > Arguably, information like this is already available via probing, so one
> > line of thinking is something like: "why not just share _some_ of it"
> that
> > may actually lead to less internal failures? This is related to a sort of
> > tension between probing as a tool to increase payment reliability and
> also
> > as a tool to degrade privacy in the network. On the other hand, others
> > argued that probing provides natural cover traffic, since they actually
> > _are_ payments, though they may not be intended to succeed.
> >
> > On the topic of channel probing, a sort of makeshift protocol was
> devised to
> > make it harder in practice, sacrificing too much on the axis of payment
> > reliability. At a high level it proposes that:
> >
> > * nodes more diligently set both their max_htlc amount, as well as the
> > max_htlc_value_in_flight amount
> >
> > * a 50ms (or select other value) timer should be used when sending out
> > commitment signatures, independent of HTLC arrival
> >
> > * nodes leverage the max_htlc value to set a false ceiling on the max in
> > flight parameter
> >
> > * for each HTLC sent/forwarded, select 2 other channels at random and
> > reduce the "fake" in-flight ceiling for a period of time
> >
> > Some more details still need to be worked out, but some felt that this
> would
> > kick start more research into this area, and also make balance mapping
> > _slightly_ more difficult. From afar, it may be the case that achieving
> > balance privacy while also achieving acceptable levels of payment
> > reliability might be at odds with each other.

Re: [Lightning-dev] #PickhardtPayments implemented in lnd-manageJ

2022-05-17 Thread Bastien TEINTURIER
I completely agree with Matt: these two components are completely
independent and too often conflated. Scoring channels and estimating
liquidity is something that has been regularly discussed by implementations
for the last few years, where every implementation did its own experiments
over time.

Eclair has quite a large, configurable set of heuristics around channel
scoring, along with an A/B testing system that we've been using for a while
on mainnet (see [1] for details). We've also been toying with channel
liquidity estimation for more than half a year, which you can follow in [2]
and [3].

These are heuristics, and it's impossible to judge whether they work until
you've tried them on mainnet with real payments, so I strongly encourage
people to run such experiments. But when you do, you should have enough
volume for the result data to be statistically meaningful, and you should
do A/B testing, otherwise you can make the data say pretty much anything
you want. What I believe is mostly missing is volume: the network doesn't
have enough real payments yet IMHO for this data to accurately say that one
heuristic is better than another.

Using an MCF algorithm instead of Dijkstra is useful when relaying large
payments that will need to be split aggressively to reach the destination.
It does make a lot of sense in that scenario. However, it's important to
also take a step back and look at whether it is economical to make such
payments on lightning.

For a route with an aggregated proportional fee of 1000 ppm, here is a
rough comparison of the fees between on-chain and lightning:

* At 1 sat/byte on-chain, payments above 2 mBTC cost less on-chain than
  off-chain
* At 10 sat/byte on-chain, payments above 20 mBTC cost less on-chain than
  off-chain
* At 25 sat/byte on-chain, payments above 50 mBTC cost less on-chain than
  off-chain
* And so on (just keep multiplying)
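
For reference, here is the arithmetic behind those break-even points,
assuming a ~200-vbyte on-chain transaction (the exact size depends on the
number of inputs and outputs):

    TX_VSIZE = 200  # vbytes, assumed transaction size
    PPM = 1000      # aggregated proportional fee from the example above

    def break_even_sats(feerate_sat_per_vbyte):
        onchain_fee = feerate_sat_per_vbyte * TX_VSIZE
        # The off-chain fee is amount * PPM / 1_000_000; both are equal when:
        return onchain_fee * 1_000_000 / PPM

    break_even_sats(1)   # 200_000 sats   = 2 mBTC
    break_even_sats(10)  # 2_000_000 sats = 20 mBTC
    break_even_sats(25)  # 5_000_000 sats = 50 mBTC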

Of course, making payments on lightning has more benefits than just fees
(they also confirm faster than on-chain payments), but I think it's
important to keep these figures in mind.

It would also be useful to think about the shape of the network. Using an
MCF algorithm makes sense when payments are saturating channels. But if
channels are much bigger than your payment size, this is probably overkill.
If channels are small "at the edges of the network" and bigger than
payments at the "core of the network", and we're using trampoline routing
[4], it makes sense to run different path-finding algorithms depending on
where we are (e.g. MCF at the edges on a small subset of the graph and
Dijkstra inside the core).

I'm very happy that all this research is happening and helping lightning
payments become more reliable, thanks to everyone involved! I think the
design space is still quite large when we take everything into account, so
I expect that we'll see even more innovation in the coming years.

Cheers,
Bastien

[1]
https://github.com/ACINQ/eclair/blob/10eb9e932f9c0de06cc8926230d8ad4e2d1d9e2c/eclair-core/src/main/resources/reference.conf#L237
[2] https://github.com/ACINQ/eclair/pull/2263
[3] https://github.com/ACINQ/eclair/pull/2071
[4] https://github.com/lightning/bolts/pull/829


Le lun. 16 mai 2022 à 22:59, Matt Corallo  a
écrit :

> Its probably worth somewhat disentangling the concept of switching to a
> minimum-cost flow routing
> algorithm from the concept of "scoring based on channel value and
> estimated available liquidity".
>
> These are two largely-unrelated concepts that are being mashed into one in
> this description - the
> first concept needs zero-base-fee to be exact, though its not clear to me
> that a heuristics-based
> approach won't give equivalent results in practice, given the noise in
> success rate compared to
> theory here.
>
> The second concept is something that LDK (and I believe CLN and maybe even
> eclair now) do already,
> though lnd does not last I checked. For payments where MPP does not add
> much to success rate (i.e.
> payments where the amount is relatively "low" compared to available
> network liquidity) dijkstra's
> with a liquidity/channel-size based scoring will give you the exact same
> result.
>
> For cases where you're sending an amount which is "high" compared to
> available network liquidity,
> taking a minimum-cost-flow algorithm becomes important, as you point out.
> Of course you're always
> going to suffer really slow payment and many retires in this case anyway.
>
> Matt
>
> On 5/15/22 1:01 PM, Carsten Otto via Lightning-dev wrote:
> > Dear all,
> >
> > the most recent version of lnd-manageJ [1] now includes basic, but
> usable,
> > support for #PickhardtPayments. I kindly invite you to check out the
> code, give
> > it a try, and use this work for upcoming experiments.
> >
> > Teaser with video:
> https://twitter.com/c_otto83/status/1525879972786749453
> >
> > The problem, heavily summarized:
> >
> > - Sending payments in the LN often fails, especially with larger amounts.
> > - Splitting a 

[Lightning-dev] Security issue in anchor outputs implementations

2022-04-22 Thread Bastien TEINTURIER
Good morning list,

I will describe here a vulnerability found in older versions of some
lightning implementations' support for anchor outputs. As most
implementations have not yet released support for anchor outputs, they
should verify that they are not impacted by this type of vulnerability
while they implement this feature.

I want to thank the impacted implementations for their reactivity in
fixing this issue, which hasn't impacted any user (as far as I know).

## Timeline

- March 23 2021: I discovered an interesting edge case while
implementing anchor outputs in eclair ([1]).
- August 2021: while I was finalizing support for the 0-htlc-fees
variant of anchor outputs in eclair, I was able to do in-depth
interoperability tests with other implementations that supported
anchor outputs (only lnd and c-lightning at that time). These tests
revealed that both implementations were impacted by the edge case
discovered in March and that it could be exploited to steal funds.
- September 2 2021: I notified both development teams.
- October 11 2021: I disclosed the vulnerability to Electrum and LDK
to ensure they would not ship a version of anchor outputs containing
the same issue (anchor outputs wasn't shipped in their software yet).
- November 2021: a fix for this vulnerability was released in lnd 0.14.0
and c-lightning 0.10.2.

## Impacted users

- Users running versions of lnd prior to 0.14.0
- Users running versions of c-lightning prior to 0.10.2 if they have
activated experimental features (and have anchor outputs channels)

## Description of the vulnerability

With anchor outputs, your lightning node doesn't use `SIGHASH_ALL` when
sending its signatures for htlc transactions in `commitment_signed`.
It uses `SIGHASH_SINGLE | SIGHASH_ANYONECANPAY` instead and the other
node is supposed to add a `SIGHASH_ALL` signature when they broadcast
the htlc transaction.

Interestingly, this lets the other node combine multiple htlcs in a
single transaction without invalidating your signatures, as long as the
`nLockTime` of all htlcs match. This has been a known fact for a long
time, which can be used to batch transactions and save on fees.

The vulnerability lies in how *revoked* htlc transactions were handled.
Because we historically used `SIGHASH_ALL`, we could assume that htlc
transactions had a single output. For example, older eclair versions
used that fact, and when presented with a revoked htlc transaction,
would claim a single output of that transaction via a penalty/justice
transaction (see [2]). This was completely ok before anchor outputs.
But after anchor outputs, if the revoked htlc transaction actually
contained many htlcs, your node should claim *all* of the revoked
outputs with penalty/justice transactions.
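
In other words, the penalty logic must iterate over every output of the
revoked htlc transaction instead of assuming a single one. A minimal sketch
(with hypothetical helper names, this is not actual eclair code):

    def claim_revoked_htlc_outputs(htlc_tx, revocation_key, revoked_htlc_scripts):
        # With anchor outputs, a revoked htlc transaction may aggregate many
        # htlcs: claim *all* matching outputs, not just the first one.
        penalty_txs = []
        for index, output in enumerate(htlc_tx.outputs):
            if output.script_pubkey in revoked_htlc_scripts:
                penalty_txs.append(build_penalty_tx(htlc_tx.txid, index, revocation_key))
        return penalty_txs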

When presented with a transaction containing multiple revoked htlcs,
both impacted implementations would fail to claim any output. This means
the attacker could publish a revoked commitment with htlcs that have
been settled since then, and claim these htlcs a second time on-chain,
thus stealing funds.

Let's take a concrete example, where Bob is under attack.
Bob has channels with Alice and Carol: Alice ---> Bob ---> Carol.
Alice sends N htlcs to Carol spending all of her channel balance, which
Carol then fulfills.
Carol has then irrevocably received the funds from Bob.
Then Alice publishes her old commitment where all the htlcs were pending
and aggregates all of her htlc-timeouts in a single transaction.
Bob will fail to claim the revoked htlc outputs which will go back to
Alice's on-chain wallet. Bob has thus lost the full channel amount.

## Caveat

An important caveat is that this attack will not work all the time, so it
carries a risk for the attacker. The reason is that htlc transactions have
a relative delay of 1 block. If the node under attack is able to make their
penalty/justice transactions confirm immediately after the revoked
commitment (by claiming outputs directly from the commitment transaction
with a high enough feerate), the attacker won't be able to broadcast the
aggregated htlc transaction (and loses their channel reserve).

The success of the attack depends on what block target implementations
use for penalty/justice transactions and how congested the mempool is
(unless the attacker notices that their peer is offline, in which case
they can use this opportunity to carry out the attack).

I'm pretty confident all users have already upgraded to newer versions
(particularly since there have been important bug fixes on unrelated
issues since then), but if your node still hasn't upgraded, you should
consider doing it as soon as possible.

Cheers,
Bastien

[1] https://github.com/ACINQ/eclair/pull/1738
[2]
https://github.com/ACINQ/eclair/blob/35b070ee5de2ea3847cf64b86f7e47abcca10b95/eclair-core/src/main/scala/fr/acinq/eclair/transactions/Transactions.scala#L613

Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-15 Thread Bastien TEINTURIER
Good morning Alex,

> I’ve been investigating set reconciliation as a means to reduce bandwidth
> and redundancy of gossip message propagation.

Cool project, glad to see someone working on it! The main difficulty here
will indeed be to ensure that the number of differences between sets is
bounded. We will need to maintain a mechanism to sync the whole graph from
scratch for new nodes, so the minisketch diff must be efficient enough,
otherwise nodes will just fall back to a full sync way too often (which
would waste a lot of bandwidth).

> Picking several offending channel ids, and digging further, the majority
> of these appear to be flapping due to Tor or otherwise intermittent
> connections.

One thing that may help here from an implementation's point of view is to
avoid sending a disabled channel update every time a channel goes offline.
What eclair does to avoid spamming is to only send a disabled channel
update when someone actually tries to use that channel. Of course, if
people choose this offline node in their route, you don't have a choice and
will need to send a disabled channel update, but we've observed that many
channels come back online before we actually need to use them, so we're
saving two channel updates (one to disable the channel and one to re-enable
it). I think all implementations should do this. Is that the case today?
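
A sketch of that behavior (names are illustrative, not eclair's actual
internals):

    def on_relay_failure_peer_offline(channel):
        # Only advertise the channel as disabled once someone actually
        # tries to route through it while the peer is offline.
        if not channel.disabled_update_sent:
            broadcast_channel_update(channel, disabled=True)
            channel.disabled_update_sent = True

    def on_peer_reconnected(channel):
        # Many channels come back online before being used, in which case
        # we never sent the disabled update and save both updates.
        if channel.disabled_update_sent:
            broadcast_channel_update(channel, disabled=False)
            channel.disabled_update_sent = False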

We could go even further: when we receive an htlc that should be relayed to
an offline node, we could wait a bit to give them an opportunity to come
online instead of failing the htlc and sending a disabled channel update.
Eclair currently doesn't do that, but it would be very easy to add.

> - A common listing of current default rate limits across lightning
> network implementations.

Eclair doesn't do any rate-limiting. We wanted to "feel the pain" before
adding
anything, and to be honest we haven't really felt it yet.

> which will use a common, simple heuristic to accept or reject a gossip
> message. (Think one channel update per block, or perhaps one per
> block_height << 5.)

I think it would be easy to come to an agreement between implementations
and restrict channel updates to at most one every N blocks. We simply need
to add the `block_height` in a tlv in `channel_update`, and then we'll be
able to actually rate-limit based on it. Given how much time it takes to
upgrade most of the network, it may be a good idea to add the
`block_height` tlv in the spec now, and act on it later? Unless your work
requires bigger changes to `channel_update`, in which case it will probably
be a new message.

Note that it will never be completely accurate though, as different nodes
can have different blockchain tips. My nodes may be one or two blocks late
compared to the node that emits the channel update. We need to allow a bit
of leeway there.
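
Concretely, the acceptance check could look like this (N and the leeway
values are illustrative, and `block_height` is the proposed tlv):

    RATE_LIMIT_BLOCKS = 6  # illustrative N: at most one update every N blocks
    LEEWAY_BLOCKS = 2      # tolerate slightly different blockchain tips

    def accept_channel_update(update_height, last_accepted_height, our_tip):
        if update_height > our_tip + LEEWAY_BLOCKS:
            return False  # too far ahead of our view of the chain
        return update_height >= last_accepted_height + RATE_LIMIT_BLOCKS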

Cheers,
Bastien




Le jeu. 14 avr. 2022 à 23:06, Alex Myers  a écrit :

> Hello lightning developers,
>
>
> I’ve been investigating set reconciliation as a means to reduce bandwidth
> and redundancy of gossip message propagation. This builds on some earlier work
> from Rusty using the minisketch library [1]. The idea is that each node
> will build a sketch representing it’s own gossip set. Alice’s node will
> encode and transmit this sketch to Bob’s node, where it will be merged with
> his own sketch, and the differences produced. These differences should
> ideally be exactly the latest missing gossip of both nodes. Due to size
> constraints, the set differences will necessarily be encoded, but Bob’s
> node will be able to identify which gossip Alice is missing, and may then
> transmit exactly those messages.
>
>
> This process is relatively straightforward, with the caveat that the sets
> must otherwise match very closely (each sketch has a maximum capacity for
> differences.) The difficulty here is that each node and lightning
> implementation may have its own rules for gossip acceptance and
> propagation. Depending on their gossip partners, not all gossip may
> propagate to the entire network.
>
>
> Core-lightning implements rate limiting for incoming channel updates and
> node announcements. The default rate limit is 1 per day, with a burst of
> 4. I analyzed my node’s gossip over a 14 day period, and found that, of
> all publicly broadcasting half-channels, 18% of them fell afoul of our
> spam-limiting rules at least once. [2]
>
>
> Picking several offending channel ids, and digging further, the majority
> of these appear to be flapping due to Tor or otherwise intermittent
> connections. Well connected nodes may be more susceptible to this due to more
> frequent routing attempts, and failures resulting in a returned channel
> update (which otherwise might not have been broadcast.) A slight
> relaxation of the rate limit resolves the majority of these cases.
>
>
> A smaller subset of channels broadcast frequent channel updates with minor
> adjustments to htlc_maximum_msat and fee_proportional_millionths
> parameters. These nodes appear to be 

[Lightning-dev] Blinded payments and unblinding attacks

2022-04-01 Thread Bastien TEINTURIER
Good morning list,

In the last couple of months, @thomash-acinq and I have spent a lot of time
working on route blinding for payments [1]. As you may know, route blinding
is a prerequisite for onion messages [2] and Bolt 12 offers [3].

Using route blinding to provide anonymity for onion messages is quite
simple, but it is harder to use safely for payments. The reason is that the
lightning network is a very heterogeneous network of channels.

The parameters used to relay payments vary widely from one channel to the
other, and can vary dynamically over time: if not accounted for, this can
provide an easy fingerprint to let malicious actors guess which channels
are actually used inside a blinded route. The ideas behind these probing
attacks are described in more detail in the route blinding proposal [4].

To protect against such attacks, the latest version of the route blinding
specification lets the recipient impose what parameters will be used by
intermediate blinded nodes to relay payments (instead of using the values
they advertise in their `channel_update`). The parameters that matter are:

* `fee_base_msat`
* `fee_proportional_millionths`
* `cltv_expiry_delta`
* `htlc_minimum_msat`
* `features` that impact payment relaying behavior
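
For example, when computing its relay fee inside a blinded route, a node
would apply the recipient-provided values from the encrypted payload
instead of its own advertised `channel_update` (standard fee formula,
sketch only):

    def blinded_relay_fee_msat(amount_msat, fee_base_msat, fee_proportional_millionths):
        # fee_base_msat and fee_proportional_millionths come from the
        # recipient's encrypted payload, not from our channel_update.
        return fee_base_msat + amount_msat * fee_proportional_millionths // 1_000_000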

We'd like help from this list to figure out whether these are the only
parameters that an attacker can use to fingerprint channels, or if there
are others that we need to take into account to guarantee user privacy.

Note that these attacks only work against public channels: wallet users
relying on unannounced channels are not at risk and will more easily
benefit from route blinding.

I spent a lot of time re-working the specification PR to make it as clear
as possible: please have a look at it and let me know if I can do anything
to make it better. Don't hesitate to reach out directly with questions and
feedback. I strongly recommend starting with the high-level design doc [5],
as natural language and detailed examples will help grasp the main ideas
and subtleties of the proposal.

Cheers,
Bastien

[1] https://github.com/lightning/bolts/pull/765
[2] https://github.com/lightning/bolts/pull/759
[3] https://github.com/lightning/bolts/pull/798
[4]
https://github.com/lightning/bolts/blob/route-blinding/proposals/route-blinding.md#attacks
[5]
https://github.com/lightning/bolts/blob/route-blinding/proposals/route-blinding.md


Re: [Lightning-dev] PTLCs early draft specification

2021-12-22 Thread Bastien TEINTURIER
Hey AJ,

Right, I was probably confused between local/remote, especially when
we're talking about our anchor in the remote commitment (should it be
called local anchor, which is from our point of view, or remote?).

Let's call them Alice and Bob, and Bob is publishing a commitment.
Correct me if I'm wrong there, what you're suggesting is that:

* Bob's anchor on Bob's commitment can be spent with revkey
* Alice's anchor on Bob's commitment can be spent with Alice's pubkey

This does ensure that each participant is able to claim their anchor in
the latest commitment, and Alice is able to claim both anchors in any of
Bob's outdated commitments.

But I think it defeats the `OP_16 OP_CHECKSEQUENCEVERIFY` script branch.
We have that branch to allow anyone to spend anchor outputs *after* the
commitment is confirmed, to avoid keeping them around in the utxo set
forever. However, the trick is that the internal pubkey must be set to
something that is publicly revealed when the channel closes. Now that we
put the revkey in internal pubkeys everywhere instead of script branches,
that revkey is *not* revealed when channels close with the latest commit.
So it would prevent people from using that script branch to clean up the
utxo set...

I have currently used  and  because they're revealed whenever
main outputs are claimed, but there is probably a smarter solution (maybe
one that would let us use revkey here as you suggest); this will be worth
thinking about a bit more.

Thanks,
Bastien

Le mar. 21 déc. 2021 à 17:04, Anthony Towns  a écrit :

> On Tue, Dec 21, 2021 at 04:25:41PM +0100, Bastien TEINTURIER wrote:
> > The reason we have "toxic waste" with HTLCs is because we commit to the
> > payment_hash directly inside the transaction scripts, so we need to
> > remember all the payment_hash we've seen to be able to recreate the
> > scripts (and spend the outputs, even if they are revoked).
>
> I think "toxic waste" refers to having old data around that, if used,
> could cause you to lose all the funds in your channel -- that's why it's
> toxic. This is more just regular landfill :)
>
> > *_anchor: dust, who cares -- might be better if local_anchor used key =
> > > revkey
> > I don't think we can use revkey,
>
> musig(revkey, remote_key)
>   --> allows them to spend after you've revealed the secret for revkey
>   you can never spend because you'll never know the secret for
>   remote_key
>
> but if you just say:
>
> (revkey)
>
> then you can spend (because you know revkey) immediately (because it's
> an anchor output, so intended to be immediately spent) or they can spend
> if it's an obsolete commitment and you've revealed the revkey secret.
>
> > this would prevent us from bumping the
> > current remote commitment if it appears on-chain (because we don't know
> > the private revkey yet if this is the latest commitment). Usually the
> > remote peer should bump it, but if they don't, we may want to bump it
> > ourselves instead of publishing our own commitment (where our main
> > output has a long CSV).
>
> If we're going to bump someone else's commitment, we'll use the
> remote_anchor they provided, not the local_anchor, so I think this is
> fine (as long as I haven't gotten local/remote confused somewhere along
> the way).
>
> Cheers,
> aj
>
>


Re: [Lightning-dev] A blame ascribing protocol towards ensuring time limitation of stuck HTLCs in flight.

2021-12-15 Thread Bastien TEINTURIER
Good morning,

I agree, this onion message trick could let us work around this kind of
cheating attempt. However, it becomes quite a complex protocol, and it's
likely that the more we progress towards specifying it, the more subtle
issues we will find that will require making it even more complex.

I'm more hopeful that we'll find channel jamming mitigations that work for
both fast spam and slow spam, and will remove the need for this protocol
(which doesn't protect against fast spam, only against slow spam).

> `D` can present to `B` its own `revoke_and_ack` in the above mentioned
> onion message reply.

A few high-level notes on why I think this is still harder than it looks:

* even if `D` shows `B` its `revoke_and_ack`, it doesn't prove that `D`
  sent it to `C`
* it's impossible for a node to prove that it did *not* receive a message:
  you can prove knowledge, but proving lack of knowledge is much harder
  (impossible?)

Cheers,
Bastien

Le jeu. 16 déc. 2021 à 01:50, lightning developer <
lightning-develo...@protonmail.com> a écrit :

> Good Morning Bastien,
>
> I believe there is another limitation that you're not mentioning: it's
> easy for a malicious node to blame an honest node. I'm afraid this is a
> serious limitation of the proposal.
>
>
> Thank you very much for your review and comments. I have just updated the
> proposal on github with a section "Security Considerations" that is
> equivalent to what I will send in this mail as I believe that the "serious
> limitation" that you pointed out can be resolved with the help of onion
> messages similar to what I tried to communicate in the already existing
> "Extensions" section. BTW before I sent my initial mail I was thinking
> exactly about the example that you mentioned! I elected to not include it
> to keep the text concise and short. Of course I might have back then and
> still a mistake in my thinking and in that case I apologize for asking you
> to review the proposal and my rebuttal.
>
> If we have a payment: A -> B -> C -> D and C is malicious.
> C can forward the payment to D, and even wait for D to correctly settle it
> (with `update_fulfill_htlc` or `update_fail_htlc`), but then withhold that
> message instead of forwarding it to B. Then C blames D, everyone agrees
> that
> D is bad node that must be avoided. Later, C unblocks the `update_*_htlc`
> and everyone thinks that D hodled the HTLC for a long time, which is bad.
>
>
> The above issue can be addressed by `B` verifying the proof it received
> from `C`. This can be done by presenting the proof to `D` via an onion
> message along a different node than `C`. If `D` cannot refute the proof by
> presenting a newer state to `B` then `B` knows that `D` was indeed
> dishonest. Otherwise `D` and `B` have discovered that `C` was misbehaving
> and tried to frame `D`.
>
> `B` indicates to `D` that it is allowed to ask such verification question
> by include the received proof from `C`. Note that `B` could never own such
> proof if `C` has not communicated with `B`. Of course if `C` has never
> talked to `B` in the first place `B` would have send a
> `TEMPORARY_CHANNEL_FAILURE` and if `C` stopped during the update of the
> statemachine to communicate to `B` then `B` can blame `C` via the above
> mechanism and `A` can verify the claim it received from `B`.
>
> Also `B` cannot just send garbage to `D` and try to frame `C` because as
> soon as `B` would frame `C` the upstream node `A` would talk to `C` and
> recognize that it was `B` who was dishonest.
>
> Going back to the situation assuming that `C` and `D` have indeed already
> successfully resolved the HTLC then the node `D` could in the reply to `B`
> even securely include the preimage allowing `B` to reclaim the funds from
> `A` and settle the HTLC in the A->B channel. Only the HTLC in the B->C
> channel would be locked which doesn't have to bother `B` as `B` expects
> that `C` is pulling / settling the HTLC anyway.  Only `C` would have the
> disadvantage as it is not pulling its liquidity as soon as it can.
>
> So far - besides a rather complicated flow of information - I do not see
> why the principles of my suggestion would not be possible to work at any
> other point of the channel state machine. So when queried by `B` the node
>  `D` could always replay with the latest state it has in the C->D channel
> indicating to `B` that `C` was dishonest.
>
> Of course we could ask now what is if `B` is also malicious? In this case
> `B` could propagate the `blame_channel` back but `A` could again use the
> onion trick to verify and discover that `B` and `C` are not following the
> protocol.
>
>
> Apart from this, I think the blame proof isn't that easy to build.
> It cannot simply use `commitment_signed`, because HTLCs are relayed only
> once the previous commitment has been revoked (through `revoke_and_ack`).
> So the proof should contain data from `commitment_signed` and a proof that
> the previous commitment was revoked (and that it was indeed the 

Re: [Lightning-dev] A blame ascribing protocol towards ensuring time limitation of stuck HTLCs in flight.

2021-12-15 Thread Bastien TEINTURIER
Good morning,

Thanks for looking into this!

I believe there is another limitation that you're not mentioning: it's
easy for a malicious node to blame an honest node. I'm afraid this is a
serious limitation of the proposal.

If we have a payment A -> B -> C -> D and C is malicious, C can forward the
payment to D, and even wait for D to correctly settle it (with
`update_fulfill_htlc` or `update_fail_htlc`), but then withhold that
message instead of forwarding it to B. Then C blames D, and everyone agrees
that D is a bad node that must be avoided. Later, C unblocks the
`update_*_htlc` and everyone thinks that D hodled the HTLC for a long time,
which is bad.

Apart from this, I think the blame proof isn't that easy to build.
It cannot simply use `commitment_signed`, because HTLCs are relayed only
once the previous commitment has been revoked (through `revoke_and_ack`).
So the proof should contain data from `commitment_signed` and a proof that
the previous commitment was revoked (and that it was indeed the previous
commitment), which is likely very hard to do securely without disclosing
too much about your channel.

Cheers,
Bastien

Le mer. 15 déc. 2021 à 02:08, lightning developer via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

> Good morning list,
>
> I have just published a proposal to address (but unfortunately not solve)
> the old issue of HTLC spam via onions:
> https://github.com/lightning-developer/lightning-network-documents/blob/main/A%20blame%20ascribing%20protocol%20to%20mitigate%20HTLC%20spam.md
>
> The proposal picks up the early idea by Rusty, AJ and others to ascribe
> blame to a malicious actor but hopefully in a cheaper way than providing
> proof of a channel close by making use of a new lightning message
> `blame_channel` in combination with the proposed onion messages. I guess
> similar ideas and follow ups are already community knowledge (for example
> the local reputation tracking by Jim Posen at:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-May/001232.html)
> However I had the feeling that the current write up might provide some
> additional value to the community.
>
> The proposal also ensures that blame can be ascribed quickly by requiring
> a reply from the downstream onion that is proportional to the `cltv delta`
> at the hop. In this way a sending node will quickly know that a (and more
> importantly which) downstream channel is not working properly.
>
> I will be delighted to read your feedback, thoughts and criticism. For
> your convenience and archiving I also copied the raw markdown file of the
> proposal to the end of this Mail.
>
> Sincerely Lighting Developer
>
>
> - Begin Proposal --
>
> # A blame ascribing protocol towards ensuring time limitation of stuck
> HTLCs in flight.
>
> I was reviewing the [HOLD fee proposal by Joost](
> https://github.com/lightning/bolts/pull/843) and the [excellent summary
> of known mitigation techniques by t-bast](
> https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md)
> when I revisited the very [first idea to mitigate HTLC spam via onions](
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2015-August/000135.html)
> that was discussed back in 2015 by Rusty, AJ and a few others. At that time
> the idea was to ascribe blame to a malicious actor by triggering a force
> close and proofing ones own honesty by providing the force close
> transaction. I think there is a lot of merit to the idea of ascribing blame
> and I think it might be possible with the help of [onion messages](
> https://github.com/lightning/bolts/pull/759) without the necessity to
> trigger full force closes.
>
> As I am not entirely sure if this suggestion is a reasonable improvement
> (it certainly does not resolve all the issues we have) I did not spec out
> the details and message formats and fields but only described the high
> level idea. I hope this is sufficient to discuss the principles and get the
> feedback from you if you consider this to be of use and if you think we
> should work on the details.
>
> Idea / Obervation:
> =
> The key idea is to set a fixed time in seconds (the `reply_interval`)
> after successfully negotiating an HTLC until when a node requires a
> resultion or reply from its peer to which it previously has forwarded a
> downstream onion. If the HTLC is not resolved and no reply was sent the
> downstream peer is considered to be acting maliciously.
>
> The amount in seconds can be proportional to the `cltv_delta` of that hop.
> To me the arbitrary choice of translating 10 blocks of `cltv_delta` to `1`
> second of expected reply time seems reasonable for now but could be chosen
> differently as long as the entire network (or at least every node included
> to the payment attempt) agrees upon the same conversion rate from
> `cltv_delta` to expected response time from downstream nodes.
>
> There are three cases for the reply:
>
> The Good reply case (HTLC 

Re: [Lightning-dev] PTLCs early draft specification

2021-12-08 Thread Bastien TEINTURIER
Hi again AJ and list,

I have slightly re-worked your proposal, and came up with the following
(I also added the musig2 nonces for completeness):

Alice -> Bob: commitment_proposed
    channel id
    adaptor sigs for PTLCs to Bob in Alice's next commitment
    musig nonces for Alice to spend funding tx
    musig nonces for Bob to spend funding tx

Bob -> Alice: commitment_proposed
    channel id
    adaptor sigs for PTLCs to Alice in Bob's next commitment
    musig nonces for Alice to spend funding tx
    musig nonces for Bob to spend funding tx

Bob -> Alice: commitment_signed
    channel id
    signature for Alice to spend funding tx
    sigs for Alice to spend HTLCs and PTLCs from her next commitment

Alice -> Bob: revoke_and_ack
    channel id
    reveal previous commitment secret
    next commitment point

Alice -> Bob: commitment_signed
    channel id
    signature for Bob to spend funding tx
    sigs for Bob to spend HTLCs and PTLCs from his next commitment

Bob -> Alice: revoke_and_ack
    channel id
    reveal previous commitment secret
    next commitment point

I believe it's exactly the same flow of data between peers as your
proposal, but I simply split the data into several messages. Let me
know if that's incorrect or if I missed a subtlety in your proposal.

This has some small advantages:

* commitment_signed and revoke_and_ack are mostly unchanged: we just
  add a new message before the commit / revoke dance. The only change
  happens in commitment_signed, where the signatures for PTLC-success
  transactions will actually become adaptor signatures.
* the new adaptor signatures are in commitment_proposed instead of being
  in commitment_signed, which ensures that we can still have 2*483
  pending (H|P)TLCs: since the message size is limited to 65kB, we would
  otherwise decrease our maximum to ~2*335 with your proposal (very rough
  calculation)
* the messages are now symmetrical, which may be easier to reason about

One thing to note is that we reversed the order in which participants
sign new commitments. We previously had Alice sign first, whereas now
if Alice initiates, Bob will sign the updated commitment first. This is
why we add only 0.5 RTT instead of 1 RTT compared to the current protocol.
I don't think this is an issue, but if someone sees a way to maliciously
exploit this, please share it!

I updated my article [0], people jumping on the thread now may find it
helpful to better understand this discussion.

Thanks,
Bastien

[0] https://github.com/t-bast/lightning-docs/pull/16

Le mer. 8 déc. 2021 à 11:00, Bastien TEINTURIER  a écrit :

> Hi AJ,
>
> > I think the problem t-bast describes comes up here as well when you
> > collapse the fast-forwards (or, anytime you update the commitment
> > transaction even if you don't collapse them).
>
> Yes, exactly.
>
> > I think doing a synchronous update of commitments to the channel state,
> > something like:
> >
> > Alice -> Bob: propose_new_commitment
> >     channel id
> >     adaptor sigs for PTLCs to Bob
> >
> > Bob -> Alice: agree_new_commitment
> >     channel id
> >     adaptor sigs for PTLCs to Alice
> >     sigs for Alice to spend HTLCs and PTLCs to Bob from her own
> >     commitment tx
> >     signature for Alice to spend funding tx
> >
> > Alice -> Bob: finish_new_commitment_1
> >     channel id
> >     sigs for Bob to spend HTLCs and PTLCs to Alice from his own
> >     commitment tx
> >     signature for Bob to spend funding tx
> >     reveal old prior commitment secret
> >     new commitment nonce
> >
> > Bob -> Alice: finish_new_commitment_2
> >     reveal old prior commitment secret
> >     new commitment nonce
> >
> > would work pretty well.
>
>
> I agree, this is better than my naive addition of a `remote_ptlcs_signed`
> message in both directions, and even though it changes the protocol
> messages, it stays very close to the mechanisms we currently have.
>
> I'll spend some time specifying this in more detail, to verify that we're
> not missing anything. What I really like about this proposal is that we
> can probably bundle that protocol change with `option_simplified_update`
> [0] without the adaptor sigs, and simply add the adaptor sigs as tlvs when
> we do PTLCs. That lets us deploy this new update protocol separately from
> PTLCs and ensure it also simplifies the state machine and makes other
> features such as splicing [1] and dynamic channel upgrades [2] easier.
>
> Thanks,
> Bastien
>
> [0] https://github.com/lightning/bolts/pull/867
> [1] https://github.com/lightning/bolts/pull/863
> [2] https://github.com/lightning/bolts/pull/868
>
> On Wed, Dec 8, 2021 at 10:29, Anthony Towns  wrote:
>
>> On Tue, Dec 07, 2021 at 11:52:04PM +0000, ZmnSCPxj via Lightning-dev
>> wrote:
>> > Alternate

Re: [Lightning-dev] PTLCs early draft specification

2021-12-08 Thread Bastien TEINTURIER
Hi AJ,

> I think the problem t-bast describes comes up here as well when you
> collapse the fast-forwards (or, anytime you update the commitment
> transaction even if you don't collapse them).


Yes, exactly.

> I think doing a synchronous update of commitments to the channel state,
> something like:



> Alice -> Bob: propose_new_commitment
> channel id
> adaptor sigs for PTLCs to Bob


> Bob -> Alice: agree_new_commitment
> channel id
> adaptor sigs for PTLCs to Alice
> sigs for Alice to spend HTLCs and PTLCs to Bob from her own
> commitment tx
> signature for Alice to spend funding tx
>
> Alice -> Bob: finish_new_commitment_1
> channel id
> sigs for Bob to spend HTLCs and PTLCs to Alice from his own
> commitment tx
> signature for Bob to spend funding tx
> reveal old prior commitment secret
> new commitment nonce
>
> Bob -> Alice: finish_new_commitment_2
> reveal old prior commitment secret
> new commitment nonce
>
> would work pretty well.


I agree, this is better than my naive addition of a `remote_ptlcs_signed`
message in both directions, and even though it changes the protocol messages,
it stays very close to the mechanisms we currently have.

I'll spend some time specifying this in more detail, to verify that we're
not missing anything. What I really like about this proposal is that we
can probably bundle that protocol change with `option_simplified_update` [0]
without the adaptor sigs, and simply add the adaptor sigs as tlvs when we
do PTLCs. That lets us deploy this new update protocol separately from PTLCs
and ensure it also simplifies the state machine and makes other features
such as splicing [1] and dynamic channel upgrades [2] easier.

Thanks,
Bastien

[0] https://github.com/lightning/bolts/pull/867
[1] https://github.com/lightning/bolts/pull/863
[2] https://github.com/lightning/bolts/pull/868

On Wed, Dec 8, 2021 at 10:29, Anthony Towns  wrote:

> On Tue, Dec 07, 2021 at 11:52:04PM +0000, ZmnSCPxj via Lightning-dev wrote:
> > Alternately, fast-forwards, which avoid this because it does not change
> commitment transactions on the payment-forwarding path.
> > You only change commitment transactions once you have enough changes to
> justify collapsing them.
>
> I think the problem t-bast describes comes up here as well when you
> collapse the fast-forwards (or, anytime you update the commitment
> transaction even if you don't collapse them).
>
> That is, if you have two PTLCs, one from A->B conditional on X, one
> from B->A conditional on Y. Then if A wants to update the commitment tx,
> she needs to
>
>   1) produce a signature to give to B to spend the funding tx
>   2) produce an adaptor signature to authorise B to spend via X from his
>  commitment tx
>   3) produce a signature to allow B to recover Y after timeout from his
>  commitment tx spending to an output she can claim if he cheats
>   4) *receive* an adaptor signature from B to be able to spend the Y output
>  if B posts his commitment tx using A's signature in (1)
>
> The problem is, she can't give B the result of (1) until she's received
> (4) from B.
>
> It doesn't matter if the B->A PTLC conditional on Y is in the commitment
> tx itself or within a fast-forward child-transaction -- any previous
> adaptor sig will be invalidated because there's a new commitment
> transaction, and if you allowed any way of spending without an adaptor
> sig, B wouldn't be able to recover the secret and would lose funds.
>
> It also doesn't matter if the commitment transaction that A and B will
> publish is the same or different, only that it's different from the
> commitment tx that previous adaptor sigs committed to. (So ANYPREVOUT
> would fix this if it were available)
>
> So I think this is still a relevant question, even if fast-forwards
> make it a rare problem, that perhaps is only applicable to very heavily
> used channels.
>
> (I said the following in email to t-bast already)
>
> I think doing a synchronous update of commitments to the channel state,
> something like:
>
>Alice -> Bob: propose_new_commitment
>channel id
>adaptor sigs for PTLCs to Bob
>
>Bob -> Alice: agree_new_commitment
>channel id
>adaptor sigs for PTLCs to Alice
>sigs for Alice to spend HTLCs and PTLCs to Bob from her own
>  commitment tx
>signature for Alice to spend funding tx
>
>Alice -> Bob: finish_new_commitment_1
>channel id
>sigs for Bob to spend HTLCs and PTLCs to Alice from his own
>  commitment tx
>signature for Bob to spend funding tx
>reveal old prior commitment secret
>new commitment nonce
>
>Bob -> Alice: finish_new_commitment_2
>reveal old prior commitment secret
>new commitment nonce
>
> would work pretty well.
>
> This adds half a round-trip compared to now:
>
>Alice -> Bob: commitment_signed
>Bob -> Alice: revoke_and_ack, commitment_signed
>Alice -> Bob: revoke_and_ack
>
> The timings change like 

Re: [Lightning-dev] PTLCs early draft specification

2021-12-08 Thread Bastien TEINTURIER
Hi Z,

> `SIGHASH_NONE | SIGHASH_NOINPUT` (which will take another what, four
> years?) or a similar "covenant" opcode,

> such as `OP_CHECKTEMPLATEVERIFY` without any commitments or an
> `OP_CHECKSIGFROMSTACK` on an empty message.
> All you really need is a signature for an empty message, really...
>

That fails my requirement of "deployable in 2022" :)

Same thing applies to fast-forwards: I do see their value, but I'd like to
focus on a first version with minimal changes to the transaction structure
and the update protocol, to ensure we can actually get agreement on it
somewhat quickly and ship it in 2022. Then we can start working on a
more ambitious rework of the protocol that adds a lot of cool features,
such as what AJ proposed recently.

Cheers,
Bastien

On Wed, Dec 8, 2021 at 00:52, ZmnSCPxj  wrote:

> Good morning t-bast,
>
>
> > I believe these new transactions may require an additional round-trip.
> > Let's take a very simple example, where we have one pending PTLC in each
> > direction: PTLC_AB was offered by A to B and PTLC_BA was offered by B to
> A.
> >
> > Now A makes some unrelated updates and wants to sign a new commitment.
> > A cannot immediately send her `commitment_signed` to B.
> > If she did, B would be able to broadcast this new commitment, and A would
> > not be able to claim PTLC_BA from B's new commitment (even if she knew
> > the payment secret) because she wouldn't have B's signature for the new
> > PTLC-remote-success transaction.
> >
> > So we first need B to send a new message `remote_ptlcs_signed` to A that
> > contains B's adaptor signatures for the PTLC-remote-success transactions
> > that would spend B's future commitment. After that A can safely send her
> > `commitment_signed`. Similarly, A must send `remote_ptlcs_signed` to B
> > before B can send its `commitment_signed`.
> >
> > It's actually not that bad, we're only adding one message in each
> > direction,
> > and we're not adding more data (apart from nonces) to existing messages.
> >
> > If you have ideas on how to avoid this new message, I'd be glad to hear
> > them, hopefully I missed something again and we can make it better!
>
> `SIGHASH_NONE | SIGHASH_NOINPUT` (which will take another what, four
> years?) or a similar "covenant" opcode, such as `OP_CHECKTEMPLATEVERIFY`
> without any commitments or an `OP_CHECKSIGFROMSTACK` on an empty message.
> All you really need is a signature for an empty message, really...
>
> Alternately, fast-forwards, which avoid this because it does not change
> commitment transactions on the payment-forwarding path.
> You only change commitment transactions once you have enough changes to
> justify collapsing them.
> Even in the aj formulation, when A adds a PTLC it only changes the
> transaction that hosts **only** A->B PTLCs as well as the A main output,
> all of which can be sent outright by A without changing any B->A PTLCs.
>
> Basically... instead of a commitment tx like this:
>
>                      +---+
>  funding outpoint -->|   |--> A main
>                      |   |--> B main
>                      |   |--> A->B PTLC
>                      |   |--> B->A PTLC
>                      +---+
>
> We could do this instead:
>
>                      +---+ 2of2   +-+
>  funding outpoint -->|   |------->| |--> A main
>                      |   |        | |--> A->B PTLC
>                      |   |        +-+
>                      |   | 2of2   +-+
>                      |   |------->| |--> B main
>                      |   |        | |--> B->A PTLC
>                      +---+        +-+
>
> Then whenever A wants to add a new A->B PTLC it only changes the tx inputs
> of the *other* A->B PTLCs without affecting the B->A PTLCs.
> Payment forwarding is fast, and you only change the "big" commitment tx
> rarely to clean up claimed and failed PTLCs, moving the extra messages out
> of the forwarding hot path.
>
> But this is basically highly similar to what aj designed anyway, so...
>
> Regards,
> ZmnSCPxj
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Bastien TEINTURIER
Hi Jeremy,

Right now, lightning anchor outputs use a 330 sats amount. Each commitment
transaction has two such outputs, and only one of them is spent to help the
transaction get confirmed, so the other stays there and bloats the utxo set.
We allow anyone to spend them after a csv of 16 blocks, in the hope that
someone will claim a batch of them when the fees are low and remove them
from the utxo set. However, that trick wouldn't work with 0-value outputs,
as no-one would ever claim them (it doesn't make economic sense).
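
For reference, the 330 sats figure matches Bitcoin Core's dust threshold
for a P2WSH output at the default 3 sat/vB dust relay rate; a
back-of-the-envelope reconstruction of that number (sizes in bytes,
following Core's estimate):

    output_size = 8 + 1 + 34                # amount + script length + P2WSH script
    input_size = 32 + 4 + 1 + 107 // 4 + 4  # outpoint + script len + witness / 4 + sequence
    print((output_size + input_size) * 3)   # 3 sat/vB dust relay rate -> 330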

We actually need to have two of them to avoid pinning: each participant is
able to spend only one of these outputs while the parent tx is unconfirmed.
I believe N-party protocols would likely need N such outputs (not sure).

You mention a change to the carve-out rule, can you explain it further?
I believe it would be a necessary step, otherwise 0-value outputs for
CPFP actually seem worse than low-value ones...

Thanks,
Bastien

On Wed, Dec 8, 2021 at 02:29, Jeremy via bitcoin-dev <
bitcoin-...@lists.linuxfoundation.org> wrote:

> Bitcoin Devs (+cc lightning-dev),
>
> Earlier this year I proposed allowing 0 value outputs and that was shot
> down for various reasons, see
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-August/019307.html
>
> I think that there can be a simple carve out now that package relay is
> being launched based on my research into covenants from 2017
> https://rubin.io/public/pdfs/multi-txn-contracts.pdf.
>
> Essentially, if we allow 0 value outputs BUT require as a matter of policy
> (or consensus, but policy has major advantages) that the output be used as
> an Intermediate Output (that is, in order for the transaction creating it
> to be in the mempool, it must be spent by another tx), with the additional
> rule that the package must have a higher feerate after CPFP'ing the parent
> than the parent alone, we can both:
>
> 1) Allow 0 value outputs for things like Anchor Outputs (very good for not
> getting your eltoo/Decker channels pinned by junk witness data using Anchor
> Inputs, very good for not getting your channels drained by at-dust outputs)
> 2) Not allow 0 value utxos to proliferate long term
> 3) It still being valid for a 0 value that somehow gets created to be
> spent by the fee paying txn later
>
> Just doing this as a mempool policy also has the benefits of not
> introducing any new validation rules. Although in general the IUTXO concept
> is very attractive, it complicates mempool :(
>
> I understand this may also be really helpful for CTV based contracts (like
> vault continuation hooks) as well as things like spacechains.
>
> Such a rule -- if it's not clear -- presupposes a fully working package
> relay system.
>
> I believe that this addresses all the issues with allowing 0 value outputs
> to be created for the narrow case of immediately spendable outputs.
>
> Cheers,
>
> Jeremy
>
> p.s. why another post today? Thank Greg
> https://twitter.com/JeremyRubin/status/1468390561417547780
>
>
> --
> @JeremyRubin 
> ___
> bitcoin-dev mailing list
> bitcoin-...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] PTLCs early draft specification

2021-12-07 Thread Bastien TEINTURIER
Hi Z, Lloyd,

Let's ignore the musig nonce exchanges for now. I believe these can be
easily included in existing messages: they probably aren't the reason we
need more round-trips (at least not the one I'm concerned about for now).

> Basically, if my memory and understanding are accurate, in the above,
> it is the *PTLC-offerer* which provides an adaptor signature.
> That adaptor signature would be included in the `update_add_ptlc` message.


Neat, you're completely right: I didn't realize that the adaptor signature
could be completed by the other party, this is a great property I had
missed. Thanks for pointing it out, it does simplify the protocol a lot!

I don't think you can include it in `update_add_ptlc` though, it has to
be in `commitment_signed`, because if you do a batch of updates before
signing, you would immediately invalidate the adaptor signatures you
previously sent.

But it would be a simple change, where the signatures in `commitment_signed`
would actually be adaptor signatures for PTLC-success transactions and
normal signatures for PTLC-timeout transactions.

> Isn't it the case that all previous PTLC adaptor signatures need to be
> re-sent for each update_add_ptlc message because the signatures would
> no longer be valid once the commit tx changes


Yes indeed, whenever the commitment changes, peers need to create new
signatures and adaptor signatures for all pending PTLCs.

This is completely fine for PTLC-success and PTLC-timeout transactions,
but we also need to exchange signatures for the new pre-signed transactions
that spend a PTLC from the remote commitment. Let's call this new pre-signed
transaction PTLC-remote-success (not a great name).

I believe these new transactions may require an additional round-trip.
Let's take a very simple example, where we have one pending PTLC in each
direction: PTLC_AB was offered by A to B and PTLC_BA was offered by B to A.

Now A makes some unrelated updates and wants to sign a new commitment.
A cannot immediately send her `commitment_signed` to B.
If she did, B would be able to broadcast this new commitment, and A would
not be able to claim PTLC_BA from B's new commitment (even if she knew
the payment secret) because she wouldn't have B's signature for the new
PTLC-remote-success transaction.

So we first need B to send a new message `remote_ptlcs_signed` to A that
contains B's adaptor signatures for the PTLC-remote-success transactions
that would spend B's future commitment. After that A can safely send her
`commitment_signed`. Similarly, A must send `remote_ptlcs_signed` to B
before B can send its `commitment_signed`.

It's actually not that bad, we're only adding one message in each direction,
and we're not adding more data (apart from nonces) to existing messages.
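
To make the resulting ordering concrete, the exchange could look like this
(a sketch of one possible ordering, not a finalized message layout):

    Alice                                       Bob
                              remote_ptlcs_signed
          <-------------------------------------
            commitment_signed
          ------------------------------------->
                                  revoke_and_ack
          <-------------------------------------
            remote_ptlcs_signed
          ------------------------------------->
                               commitment_signed
          <-------------------------------------
            revoke_and_ack
          ------------------------------------->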

If you have ideas on how to avoid this new message, I'd be glad to hear
them, hopefully I missed something again and we can make it better!

Thanks,
Bastien

On Tue, Dec 7, 2021 at 09:04, ZmnSCPxj  wrote:

> Good morning LL, and t-bast,
>
> > > Basically, if my memory and understanding are accurate, in the above,
> > > it is the *PTLC-offerer* which provides an adaptor signature.
> > > That adaptor signature would be included in the `update_add_ptlc`
> > > message.
> >
> > Isn't it the case that all previous PTLC adaptor signatures need to be
> > re-sent for each update_add_ptlc message because the signatures would no
> > longer be valid once the commit tx changes. I think it's better to put it
> > in `commitment_signed` if possible. This is what is done with pre-signed
> > HTLC signatures at the moment anyway.
>
> Agreed.
>
> This is also avoided by fast-forwards, BTW, simply because fast-forwards
> delay the change of the commitment tx.
> It is another reason to consider fast-forwards, too
>
> Regards,
> ZmnSCPxj
>
>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] PTLCs early draft specification

2021-12-06 Thread Bastien TEINTURIER
Good morning list,

There was a great recent post on the mailing list detailing how we could
do PTLCs on lightning with a lot of other goodies [0]. This proposal
contained heavy changes to the transaction structure and the update
protocol. While it's certainly something we'll want to do in the long
run, I wanted to explore the minimal set of changes we would need to be
able to deploy PTLCs as soon as possible.

The current result is a somewhat high-level article, where each section
could be a separate update of the lightning protocol [1].

I tried to make PTLCs work with minimal changes to the transaction
structure and the update protocol, but they introduce a fundamental
change which forces us to make more changes than I'd like.

With HTLCs, the payment secret (the preimage of the payment hash) was
directly revealed in the witness of a spending transaction.

With PTLCs, this isn't the case anymore. The payment secret is a private
key, and a spending transaction only reveals that key if you have a
matching adaptor signature. This forces us to make two changes:

1. We must obtain adaptor signatures before sending our commit_sig
2. We must use a pre-signed HTLC-success transaction not only with our
local commit, but also with the remote commit
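
To illustrate the property that drives both changes -- completing an
adaptor signature necessarily reveals the secret to whoever holds the
adaptor signature -- here is a toy model over plain integers (not real
cryptography; the actual scheme lives in the secp256k1 group):

    N = 2**61 - 1  # stand-in modulus; the real scheme uses the secp256k1 group order

    def complete(adaptor_sig: int, secret: int) -> int:
        # the PTLC claimer completes the adaptor signature with the payment secret
        return (adaptor_sig + secret) % N

    def extract(final_sig: int, adaptor_sig: int) -> int:
        # whoever holds the adaptor signature learns the secret from the
        # completed signature that appears on-chain
        return (final_sig - adaptor_sig) % N

    assert extract(complete(1234, 42), 1234) == 42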

This means that we will need more round-trips whenever we update our
commitment. I'd like to find the right design trade-off where we don't
introduce too many changes in the protocol while minimizing the number
of additional round-trips.

We currently exchange the following messages:

    Alice                                       Bob
            update_add_htlc
          ------------------------------------->
            update_add_htlc
          ------------------------------------->
            update_add_htlc
          ------------------------------------->
            commit_sig
          ------------------------------------->
                                  revoke_and_ack
          <-------------------------------------
                                      commit_sig
          <-------------------------------------
            revoke_and_ack
          ------------------------------------->

It works well because the commit_sig sent by Alice only contains signatures
for Bob's transactions (commit and htlc transactions), and the commit_sig
sent by Bob only contains signatures for Alice's transactions, and Alice
and Bob don't need anything else to spend outputs from either commitment.

But with PTLCs, Bob needs a signature from Alice to be able to fulfill a
PTLC from Alice's commitment. And Alice needs Bob to provide an adaptor
signature for that transaction before she can give him her signature.
We don't have the clean ordering that we had before.

The designs I came up with that keep the current messages and just insert
new ones are either too costly (too many additional round-trips) or too
complex (most likely broken in some edge cases).

I believe we need to change the commit_sig / revoke_and_ack protocol if
we want to find the sweet spot I'm looking for. I'd like to collect ideas
from this list's participants on how we could do that. This is probably
something that should be bundled with option_simplified_commitment [2]
(or at least we must ensure that option_simplified_commitment is a first
step towards the protocol we'll need for PTLCs). It's also important to
note that the protocol changes must work for both HTLCs and PTLCs, and
shouldn't change the structure of the transactions (not more than the
simple addition of PTLC outputs done in [1]).

Cheers,
Bastien

[0]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003278.html
[1] https://github.com/t-bast/lightning-docs/pull/16
[2] https://github.com/lightning/bolts/pull/867
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] A Mobile Lightning User Goes to Pay a Mobile Lightning User...

2021-10-19 Thread Bastien TEINTURIER
Hi Matt,

I like this proposal, it's a net improvement compared to hodling HTLCs
at the recipient's LSP. With onion messages, we do have all the tools we
need to build this. I don't think we can do much better than that anyway
if we want to keep payments fully non-custodial. This will be combined
with notifications to try to get the recipient to go online asap.

One thing to note is that the senders also need to come online while
the payment isn't settled, otherwise there is a risk they'll lose their
channels. If the sender's LSP receives the preimage but the sender does
not come online, the sender's LSP will have to force-close to claim the
HTLC on-chain when it gets close to the timeout.

Definitely not a show-stopper, just an implementation detail to keep in
mind.

Bastien

On Thu, Oct 14, 2021 at 02:20, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Matt,
>
> > On 10/13/21 02:58, ZmnSCPxj wrote:
> >
> > > Good morning Matt,
> > >
> > > >  The Obvious (tm) solution here is PTLCs - just have the sender
> > > >  always add some random nonce * G to the PTLC they're paying and
> > > >  send the recipient a random nonce in the onion. I'd generally
> > > >  suggest we just go ahead and do this for every PTLC payment, cause
> > > >  why not? Now the sender and the lnurl endpoint have to collude to
> > > >  steal the funds, but, like, the sender could always just give the
> > > >  lnurl endpoint the money. I'd love suggestions for fixing this
> > > >  short of PTLCs, but it's not immediately obvious to me that this
> > > >  is possible.
> > > >
> > >
> > > Use two hashes in an HTLC instead of one, where the second hash is
> > > from a preimage the sender generates, and which the sender sends
> > > (encrypted via onion) to the receiver.
> > > You might want to do this anyway in HTLC-land, consider that we have a
> > > `payment_secret` in invoices, the second hash could replace that, and
> > > provide similar protection to what `payment_secret` provides (i.e.
> > > resistance against forwarding nodes probing; the information in both
> > > cases is private to the ultimate sender and ultimate receiver).
> >
> > Yes, you could create a construction which does this, sure, but I'm not
> > sure how you'd do this without informing every hop along the path that
> > this is going on, and adapting each hop to handle this as well. I suppose
> > I should have been more clear with the requirements, or can you clarify
> > somewhat what your proposed construction is?
>
> Just that: two hashes instead of one.
> Make *every* HTLC on LN use two hashes, even for current "online RPi user
> pays online RPi user" --- just use the `payment_secret` for the preimage of
> the second hash, the sender needs to send it anyway.
>
> >
> > If you're gonna adapt every node in the path, you might as well just use
> > PTLC.
>
> Correct, we should just do PTLCs now.
> (Basically, my proposal was just a strawman to say "we should just do
> PTLCs now")
>
>
> Regards,
> ZmnSCPxj
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Opening balanced channels using PSBT

2021-09-22 Thread Bastien TEINTURIER
Hi,

This is exactly what the dual funding proposal provides:
https://github.com/lightningnetwork/lightning-rfc/pull/851

Cheers,
Bastien

On Wed, Sep 22, 2021 at 07:29, Ole Henrik Skogstrøm  wrote:

> Hi
>
> I have found a way of opening balanced channels using LND's psbt option
> when opening channels. What I'm doing is essentially just joining funded
> PSBTs before signing and submitting them. This makes it possible to open a
> balanced channel between two nodes or open a ring of balanced channels
> between multiple nodes (ROF).
>
> I found this interesting, however I don't know if this is somehow unsafe
> or for some other reason a bad idea. If not, then it could be an
> interesting alternative to only being able to open unbalanced channels.
>
> To do this efficiently, nodes need to collaborate by sending PSBTs back
> and forth to each other and doing this manually is a pain, so if this makes
> sense to do, it would be best to automate it through a client.
>
> --
> --- Here is an example of the complete flow for a single channel:
> --
>
> * Node A generates a new address and sends the address to Node B
>   (lncli newaddress p2wkh)
>
> * Node A starts an interactive channel open to Node B using psbt
>   (lncli openchannel --psbt  200 100)
>
> * Node A funds the channel address (bitcoin-cli walletcreatefundedpsbt
>   [] '[{"":0.02}]')
>
> * Node B funds the refund transaction to Node A and sends the PSBT back to
>   Node A (bitcoin-cli walletcreatefundedpsbt [] '[{"":0.01}]')
>
> * Node A joins the two PSBTs and sends the result back to Node B
>   (bitcoin-cli joinpsbts '["", ""]')
>
> * Node B verifies the content and signs the joined PSBT before sending it
>   back to Node A (bitcoin-cli walletprocesspsbt )
>
> * Node A verifies the content and signs the joined PSBT (bitcoin-cli
>   walletprocesspsbt )
>
> * Node A completes the channel open by publishing the fully signed PSBT
>
>
> --
> --- Here is an example of the complete flow for a ring of channels between
> multiple nodes:
> --
>
> * Node A starts an interactive channel open to Node B using psbt
>   (lncli openchannel --psbt --no_publish  200 100)
> * Node A funds the channel address (bitcoin-cli walletcreatefundedpsbt
>   [] '[{"":0.02}]')
>
> * Node B starts an interactive channel open to Node C using psbt
>   (lncli openchannel --psbt --no_publish  200 100)
> * Node B funds the channel address (bitcoin-cli walletcreatefundedpsbt
>   [] '[{"":0.02}]')
>
> * Node C starts an interactive channel open to Node A using psbt
>   (lncli openchannel --psbt  200 100)
> * Node C funds the channel address (bitcoin-cli walletcreatefundedpsbt
>   [] '[{"":0.02}]')
>
> * Nodes B and C send Node A their PSBTs
>
> * Node A joins all the PSBTs (bitcoin-cli joinpsbts
>   '["", "", ""]')
>
> Using (bitcoin-cli walletprocesspsbt ):
>
> * Node A verifies and signs the PSBT and sends it to Node B (1/3 signatures)
> * Node B verifies and signs the PSBT and sends it to Node C (2/3 signatures)
> * Node C verifies and signs the PSBT (3/3 signatures) before sending it to
>   Node A and B
>
> * Node A completes the channel open (no_publish)
> * Node B completes the channel open (no_publish)
> * Node C completes the channel open and publishes the transaction
>
> --
> Ole Henrik Skogstrøm
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-21 Thread Bastien TEINTURIER
Hi Joost,

Concept ACK, I had toyed with something similar a while ago, but I hadn't
realized that invoice storage was such a DoS vector for merchants/hubs and
wasn't sure it would be useful.

Do you have an example of what information you would usually put in your
`encoded_order_details`?

I'd imagine that it would usually be simply a skuID from the merchant's
product database, but it could also be fully self-contained data to
identify a "transaction" (probably encrypted with a key belonging to the
payee).

We'd want to ensure that this field is reasonably small, so that it can
fit in onions without forcing the sender to use shorter routes or disable
other features.
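
A minimal sketch of how the receiving side could work without storing
invoices (all names are illustrative; it assumes `encoded_order_details`
travels in an onion TLV record and that the payment_secret is derived
from it):

    import hashlib
    import hmac
    import secrets

    MERCHANT_KEY = secrets.token_bytes(32)  # long-lived key kept by the payee

    def payment_secret_for(encoded_order_details: bytes) -> bytes:
        # derive the payment_secret from the order details, so the invoice
        # can be recreated when the payment arrives instead of being stored
        return hmac.new(MERCHANT_KEY, encoded_order_details, hashlib.sha256).digest()

    def on_payment_received(encoded_order_details: bytes, payment_secret: bytes) -> bool:
        # statelessly verify that we really issued an invoice for this order
        return hmac.compare_digest(payment_secret_for(encoded_order_details), payment_secret)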

Cheers,
Bastien


On Tue, Sep 21, 2021 at 15:17, Joost Jager  wrote:

> On Tue, Sep 21, 2021 at 3:06 PM fiatjaf  wrote:
>
>> I would say, however, that these are two separate proposals:
>>
>>   1. implementations should expose a "stateless invoice" API for
>> receiving using the payment_secret;
>>   2. when sending, implementations should attach a TLV record with
>> encoded order details.
>>
>> Of these, 1 is very simple to do and do not require anyone to cooperate,
>> it just works.
>>
>> 2 requires full network compatibility, so it's harder. But 2 is also very
>> much needed otherwise the payee has to keep track of all the invoice ids
>> related to the orders they refer to, right?
>>
>
> Not completely sure what you mean by full network compatibility, but a
> network-wide upgrade including all routing nodes isn't needed. I think to
> do it cleanly we need a new tag for bolt11 and node implementations that
> carry over the contents of this field to a tlv record. So senders do need
> to support this.
>
>
>> But I think just having 1 already improves the situation a lot, and there
>> are application-specific workarounds that can be applied for 2 (having a
>> fixed, hardcoded set of possible orders, encoding the order very minimally
>> in the payment secret or route hint, storing order details on redis for
>> only 3 minutes and using lnurlpay to reduce the delay between invoice
>> issuance and user confirmation to zero, and so on).
>>
>
> A stateless invoice API would be a great thing to have. I've prototyped
> this in lnd and if you implement it so that a regular invoice is inserted
> 'just in time', it isn't too involved as you say.
>
> Joost
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Asymmetric features

2021-07-08 Thread Bastien TEINTURIER
Good morning list,

I've been mulling over some limitations of our feature bits mechanism and
I'm interested in your ideas and comments.

Our feature bits mechanism works well for symmetric features (where both
peers play the same role) but not so well for asymmetric features (where
there is a client and a service provider). Here is a hypothetical example to
illustrate that. Any similarity to existing wallet features is entirely
coincidental.

Alice has a mobile lightning wallet that can be woken up via push
notifications. Bob runs a lightning node that can send push notifications
to mobile wallets to wake them up on important events (e.g. incoming
htlcs).

We can't use a single feature bit to model that, because what Alice
supports is actually "I can be woken up via push notifications", but she
can't send push notifications to other nodes (and similarly, Bob only
supports waking up other nodes, not receiving push notifications).

So we must use two feature bits: `wake_me_up_plz` and `i_say_wake_up`.
Alice activates `wake_me_up_plz`, Bob activates `i_say_wake_up` and it's
now clear what part of the protocol each node can handle.

But how does Alice require her peers to support `i_say_wake_up`?
She can't turn on the feature with the mandatory bit because then her peers
would be confused and think she can wake up other devices.

I see two potential solutions:

   1. Re-purpose the meaning of `optional` and `mandatory` bits for
      asymmetric features: the odd bit would mean "I support this feature"
      and the even bit would mean "I require my peer to support this
      feature" (a sketch of this option follows the list)
   2. Add a requirement to send a warning and disconnect when a client
      connects to a provider that hasn't activated the provider-side feature
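
To make option 1 concrete, a minimal sketch (the feature indices are made
up for illustration; nothing here is an assigned bit):

    WAKE_ME_UP_PLZ = 100  # hypothetical feature index
    I_SAY_WAKE_UP = 101   # hypothetical feature index

    def supports(features: int, idx: int) -> bool:
        return features & (1 << (2 * idx + 1)) != 0  # odd bit: "I support this"

    def requires_peer(features: int, idx: int) -> bool:
        return features & (1 << (2 * idx)) != 0  # even bit: "peer must support this"

    def compatible(local: int, remote: int) -> bool:
        for idx in (WAKE_ME_UP_PLZ, I_SAY_WAKE_UP):
            if requires_peer(local, idx) and not supports(remote, idx):
                return False
            if requires_peer(remote, idx) and not supports(local, idx):
                return False
        return True

    # Alice can be woken up, and requires a peer that sends wake-ups.
    alice = (1 << (2 * WAKE_ME_UP_PLZ + 1)) | (1 << (2 * I_SAY_WAKE_UP))
    # Bob sends wake-ups (and doesn't require anything from his peers).
    bob = 1 << (2 * I_SAY_WAKE_UP + 1)
    assert compatible(alice, bob)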

Thoughts?

Cheers,
Bastien

Note: I opened an issue for that for those who prefer github:
https://github.com/lightningnetwork/lightning-rfc/issues/885
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-07-02 Thread Bastien TEINTURIER
> actually jump in and use LN.
>
> In the end though, there's no grand global committee that prevents people
> from deploying software they think is interesting or useful. In the long
> run, I guess one simply needs to hope that bad ideas die out, or speak out
> against them to the public. As LN sits a layer above the base protocol,
> widespread global consensus isn't really required to make certain classes
> of changes, and you can't stop people from experimenting on their own.
>
> > We can't have collisions on any of these three things.
>
> Yeah, collisions are def possible. IMO, this is where the interplay with
> BOLTs comes in: BOLTs are the global feature bit/tlv/message namespace.  A
> bLIP might come with the amendment of BOLT 9 to define feature bits they
> used. Of course, this should be done on a best effort basis, as even if you
> assign a bit for your idea, someone can just go ahead and deploy something
> else w/ that same bit, and they may never really intersect depending on the
> nature or how widespread the new feature is.
>
> It's also likely the case that already implementations, or typically forks
> of implementations are already using "undocumented" TLVs or feature bits in
> the wild today. I don't know exactly which TLV type things like
> applications that tunnel messages over the network use, but afaik so far
> there haven't
> been any disastrous collisions in the wild.
>
> -- Laolu
>
> On Thu, Jul 1, 2021 at 2:19 AM Bastien TEINTURIER 
> wrote:
>
>> Thanks for starting that discussion.
>>
>> In my opinion, what we're really trying to address here are the two
>> following points (at least from the point of view of someone who works
>> on the spec and an implementation):
>>
>> - Implementers get frustrated when they've worked on something that they
>> think is useful and they can't get it into the BOLTs (the spec PR isn't
>> reviewed, it progresses too slowly or there isn't enough agreement to
>> merge it)
>> - Implementers expect other implementers to specify the optional features
>> they ship: we don't want to have to reverse-engineer a sub-protocol when
>> users want our implementation to provide support for feature XXX
>>
>> Note that these are two very different concerns.
>>
>> bLIPs/SPARKS/BIPs clearly address the second point, which is good.
>> But they don't address the first point at all, they instead work around
>> it. To be fair, I don't think we can completely address that first point:
>> properly reviewing spec proposals takes a lot of effort and accepting
>> complex changes to the BOLTs shouldn't be done lightly.
>>
>> I am mostly in favor of this solution, but I want to highlight that it
>> isn't only rainbows and unicorns: it will add fragmentation to the
>> network, it will add maintenance costs and backwards-compatibility
>> issues, many bLIPs will be sub-optimal solutions to the problem they try
>> to solve and some bLIPs will be simply insecure and may put users' funds
>> at risk (L2 protocols are hard and have subtle issues that can be easily
>> missed). On the other hand, it allows for real world experimentation and
>> iteration, and it's easier to amend a bLIP than the BOLTs.
>>
>> On the nuts-and-bolts (see the pun?) side, bLIPs cannot embrace a fully
>> bazaar style of evolution. Most of them will need:
>>
>> - to assign feature bit(s)
>> - to insert new tlv fields in existing messages
>> - to create new messages
>>
>> We can't have collisions on any of these three things. bLIP XXX cannot
>> use the same tlv types as bLIP YYY otherwise we're creating network
>> incompatibilities. So they really need to be centralized, and we need a
>> process to assign these and ensure they don't collide. It's not a hard
>> problem, but we need to be clear about the process around those.
>>
>> Regarding the details of where they live, I don't have a strong opinion,
>> but I think they must be easy to find and browse, and I think it's easier
>> for readers if they're inside the spec repository. We already have PRs
>> that use a dedicated "proposals" folder (e.g. [1], [2]).
>>
>> Cheers,
>> Bastien
>>
>> [1] https://github.com/lightningnetwork/lightning-rfc/pull/829
>> [2] https://github.com/lightningnetwork/lightning-rfc/pull/854
>>
>> On Thu, Jul 1, 2021 at 02:31, Ariel Luaces  wrote:
>>
>>> BIPs are already the Baza

Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-07-01 Thread Bastien TEINTURIER
Thanks for starting that discussion.

In my opinion, what we're really trying to address here are the two
following
points (at least from the point of view of someone who works on the spec and
an implementation):

- Implementers get frustrated when they've worked on something that they
think is useful and they can't get it into the BOLTs (the spec PR isn't
reviewed, it progresses too slowly or there isn't enough agreement to
merge it)
- Implementers expect other implementers to specify the optional features
they ship: we don't want to have to reverse-engineer a sub-protocol when
users want our implementation to provide support for feature XXX

Note that these are two very different concerns.

bLIPs/SPARKS/BIPs clearly address the second point, which is good.
But they don't address the first point at all, they instead work around it.
To be fair, I don't think we can completely address that first point:
properly reviewing spec proposals takes a lot of effort and accepting
complex changes to the BOLTs shouldn't be done lightly.

I am mostly in favor of this solution, but I want to highlight that it
isn't only rainbows and unicorns: it will add fragmentation to the network,
it will add maintenance costs and backwards-compatibility issues, many
bLIPs will be sub-optimal solutions to the problem they try to solve and
some bLIPs will be simply insecure and may put users' funds at risk (L2
protocols are hard and have subtle issues that can be easily missed). On
the other hand, it allows for real world experimentation and iteration,
and it's easier to amend a bLIP than the BOLTs.

On the nuts-and-bolts (see the pun?) side, bLIPs cannot embrace a fully
bazaar style of evolution. Most of them will need:

- to assign feature bit(s)
- to insert new tlv fields in existing messages
- to create new messages

We can't have collisions on any of these three things. bLIP XXX cannot use
the same tlv types as bLIP YYY, otherwise we're creating network
incompatibilities. So they really need to be centralized, and we need a
process to assign these and ensure they don't collide. It's not a hard
problem, but we need to be clear about the process around those.

Regarding the details of where they live, I don't have a strong opinion,
but I think they must be easy to find and browse, and I think it's easier
for readers if they're inside the spec repository. We already have PRs
that use a dedicated "proposals" folder (e.g. [1], [2]).

Cheers,
Bastien

[1] https://github.com/lightningnetwork/lightning-rfc/pull/829
[2] https://github.com/lightningnetwork/lightning-rfc/pull/854

On Thu, Jul 1, 2021 at 02:31, Ariel Luaces  wrote:

> BIPs are already the Bazaar style of evolution that simultaneously
> allows flexibility and coordination/interoperability (since anyone can
> create a BIP and they create an environment of discussion).
>
> BOLTs are essentially one big BIP in the sense that they started as a
> place for discussion but are now more rigid. BOLTs must be followed
> strictly to ensure a node is interoperable with the network. And BOLTs
> should be rigid, as rigid as any widely used BIP like 32 for example.
> Even though BOLTs were flexible when being drafted their purpose has
> changed from descriptive to prescriptive.
> Any alternatives, or optional features should be extracted out of
> BOLTs, written as BIPs. The BIP should then reference the BOLT and the
> required flags set, messages sent, or alterations made to signal that
> the BIP's feature is enabled.
>
> A BOLT may at some point organically change to reference a BIP. For
> example if a BIP was drafted as an optional feature but then becomes
> more widespread and then turns out to be crucial for the proper
> operation of the network then a BOLT can be changed to just reference
> the BIP as mandatory. There isn't anything wrong with this.
>
> All of the above would work exactly the same if there was a bLIP
> repository instead. I don't see the value in having both bLIPs and
> BIPs since AFAICT they seem to be functionally equivalent and BIPs are
> not restricted to exclude lightning, and never have been.
>
> I believe the reason this move to BIPs hasn't happened organically is
> because many still perceive the BOLTs as available for editing, so
> changes continue to be made. If instead BOLTs were perceived as more
> "consensus critical", not subject to change, and more people were
> strongly encouraged to write specs for new lightning features
> elsewhere (like the BIP repo) then you would see this issue of growing
> BOLTs resolved.
>
> Cheers
> Ariel Lorenzo-Luaces
>
> On Wed, Jun 30, 2021 at 1:16 PM Olaoluwa Osuntokun 
> wrote:
> >
> > > That being said I think all the points that are addressed in Ryan's
> > > mail could very well be formalized into BOLTs but maybe we just need
> > > to rethink the current process of the BOLTs to make it more accessible
> > > for new ideas to find their way into the BOLTs?
> >
> > I think part of what bLIPs are trying to solve here 

Re: [Lightning-dev] Turbo channels spec?

2021-06-30 Thread Bastien TEINTURIER
>
> - MUST NOT send `announcement_signatures` messages until `funding_locked`
>   has been sent and received AND the funding transaction has at least
> six confirmations.
>
> So still compliant there?
>

Great, I hadn't spotted that one, so we're good on the
`announcement_signatures` side.

I'm wondering if `option_zeroconf` implies that we should set
`min_depth = 0` in `accept_channel`, since that's the number of
confirmations before we can send `funding_locked`.

We need a signal that this channel uses zero-conf, and the two obvious
choices are:

   - set `min_depth = 0`
   - use a `channel_type` that sets `option_zeroconf`

I think the second option is better: this way we can keep a "normal"
`min_depth` set, and when we send `funding_locked`, we know that the
channel is now perfectly safe to use (out of the zero-conf zone).
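
A small sketch of that signalling choice (the `option_zeroconf` bit value
is purely illustrative; none is assigned at this point):

    OPTION_ZEROCONF = 50  # hypothetical channel_type feature bit

    def can_use_channel(channel_type: set[int], confirmations: int,
                        min_depth: int) -> bool:
        # a zero-conf channel is usable immediately; funding_locked is
        # still only sent once min_depth is reached, signalling the end
        # of the zero-conf zone
        if OPTION_ZEROCONF in channel_type:
            return True
        return confirmations >= min_depth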

Cheers,
Bastien



On Wed, Jun 30, 2021 at 02:09, Rusty Russell  wrote:

> Bastien TEINTURIER  writes:
> > Hi Rusty,
> >
> > On the eclair side, we instead send `funding_locked` as soon as we
> > see the funding tx in the mempool.
> >
> > But I think your proposal would work as well.
>
> This would be backward compatible, I think.  Eclair would send
> `funding_locked`, which is perfectly legal, but a normal peer would
> still wait for confirms before also sending `funding_locked`; it's
> just that option_zeroconf_channels would mean it doesn't have to
> wait for that before sending HTLCs?
>
> > We may want to defer sending `announcement_signatures` until
> > after the funding tx has been confirmed? What `min_depth` should
> > we use here? Should we keep a non-zero value in `accept_channel`
> > or should it be zero?
>
> You can't send it before you know the channel_id, so it has to be at
> least 1.  Spec says:
>
>   - MUST NOT send `announcement_signatures` messages until
> `funding_locked`
>   has been sent and received AND the funding transaction has at least
> six confirmations.
>
> So still compliant there?
>
> Cheers,
> Rusty.
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Turbo channels spec?

2021-06-29 Thread Bastien TEINTURIER
Hi Rusty,

On the eclair side, we instead send `funding_locked` as soon as we
see the funding tx in the mempool.

But I think your proposal would work as well.

We may want to defer sending `announcement_signatures` until
after the funding tx has been confirmed? What `min_depth` should
we use here? Should we keep a non-zero value in `accept_channel`
or should it be zero?

Cheers,
Bastien



On Tue, Jun 29, 2021 at 07:34, Rusty Russell  wrote:

> Hi all!
>
> John Carvalo recently pointed out that not every implementation
> accepts zero-conf channels, but they are useful.  Roasbeef also recently
> noted that they're not spec'd.
>
> How do you all do it?  Here's a strawman proposal:
>
> 1. Assign a new feature bit "I accept zeroconf channels".
> 2. If both negotiate this, you can send update_add_htlc (etc) *before*
>funding_locked without the peer getting upset.
> 3. Nodes are advised *not* to forward HTLCs from an unconfirmed channel
>unless they have explicit reason to trust that node (they can still
>send *out* that channel, because that's not their problem!).
>
> It's a pretty simple change, TBH (this zeroconf feature would also
> create a new set of channel_types, altering that PR).
>
> I can draft something this week?
>
> Thanks!
> Rusty.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-26 Thread Bastien TEINTURIER
I looked into this more closely, and as far as I understand it, the spec
already states that you should not count dust HTLCs:

> if result would be offering more than the remote's `max_accepted_htlcs`
> HTLCs, in the remote commitment transaction:
>   - MUST NOT add an HTLC.

Note that it clearly says "in the remote commitment transaction", which
means you don't count HTLCs that are dust or trimmed.

That matches eclair's behavior: we don't count dust HTLCs towards that
limit. Is lnd including them in that count? What about other
implementations? If that's the case, that can simply be fixed in lnd
without any spec change IMHO.

Note that this also excludes trimmed HTLCs from the count, which means that
nodes that set `max_accepted_htlcs` to 483 may be exposed to the issue I
described earlier (impossible to lower the feerate because the HTLC count
would become greater than the limit).
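
A sketch of that counting rule (simplified: a fixed threshold stands in
for the real dust/trimmed computation, which also depends on the feerate):

    def can_offer_htlc(pending_amounts_sat: list[int], new_amount_sat: int,
                       trim_threshold_sat: int, max_accepted_htlcs: int = 483) -> bool:
        # only HTLCs that materialize as outputs on the remote commitment
        # tx count towards max_accepted_htlcs
        amounts = pending_amounts_sat + [new_amount_sat]
        on_commit_tx = [a for a in amounts if a >= trim_threshold_sat]
        return len(on_commit_tx) <= max_accepted_htlcs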

Bastien

On Sat, Apr 24, 2021 at 10:01, Bastien TEINTURIER  wrote:

> You're right, I was thinking about trimmed HTLCs (which can re-appear in
> the commit tx
> if you lower the feerate via update_fee).
>
> Dust HTLCs will never appear in the commit tx regardless of subsequent
> update_fees,
> so Eugene's suggestion could make sense!
>
> On Sat, Apr 24, 2021 at 06:02, Matt Corallo  wrote:
>
>> The update_fee message does not, as far as I recall, change the dust
>> limit for outputs in a channel (though I’ve suggested making such a change).
>>
>> On Apr 23, 2021, at 12:24, Bastien TEINTURIER  wrote:
>>
>> 
>> Hi Eugene,
>>
>> The reason dust HTLCs count for the 483 HTLC limit is because of
>> `update_fee`.
>> If you don't count them and exceed the 483 HTLC limit, you can't lower
>> the fee anymore
>> because some HTLCs that were previously dust won't be dust anymore and
>> you may end
>> up with more than 483 HTLC outputs in your commitment, which opens the
>> door to other
>> kinds of attacks.
>>
>> This is the first issue that comes to mind, but there may be other
>> drawbacks if we dig into
>> this enough with an attacker's mindset.
>>
>> Bastien
>>
>> On Fri, Apr 23, 2021 at 17:58, Eugene Siegel  wrote:
>>
>>> I propose a simple mitigation to increase the capital requirement of
>>> channel-jamming attacks. This would prevent an unsophisticated attacker
>>> with low capital from jamming a target channel.  It seems to me that this
>>> is a *free* mitigation without any downsides (besides code-writing), so I'd
>>> like to hear other opinions.
>>>
>>> In a commitment transaction, we trim dust HTLC outputs.  I believe that
>>> the reason for the 483 HTLC limit each side has in the spec is to prevent
>>> commitment tx's from growing unreasonably large, and to ensure they are
>>> still valid tx's that can be included in a block.  If we don't include dust
>>> HTLCs in this calculation, since they are not on the commitment tx, we
>>> still allow 483 (x2) non-dust HTLCs to be included on the commitment tx.
>>> There could be a configurable limit on the number of outstanding dust
>>> HTLCs, but the point is that it doesn't affect the non-dust throughput of
>>> the channel.  This raises the capital requirement of channel-jamming so
>>> that each HTLC must be non-dust, rather than spamming 1 sat payments.
>>>
>>> Interested in others' thoughts.
>>>
>>> Eugene (Crypt-iQ)
>>> ___
>>> Lightning-dev mailing list
>>> Lightning-dev@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>>
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-24 Thread Bastien TEINTURIER
You're right, I was thinking about trimmed HTLCs (which can re-appear in
the commit tx if you lower the feerate via update_fee).
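
For intuition, a sketch of the trimming threshold using the pre-anchor
second-stage weights from BOLT 3 (simplified; amounts in satoshis):

    HTLC_TIMEOUT_WEIGHT = 663  # offered HTLCs (BOLT 3, pre-anchor outputs)
    HTLC_SUCCESS_WEIGHT = 703  # received HTLCs

    def is_trimmed(offered: bool, amount_sat: int, feerate_per_kw: int,
                   dust_limit_sat: int) -> bool:
        # an HTLC is trimmed when it cannot pay for its second-stage tx fee
        # on top of the dust limit
        weight = HTLC_TIMEOUT_WEIGHT if offered else HTLC_SUCCESS_WEIGHT
        fee_sat = feerate_per_kw * weight // 1000
        return amount_sat < dust_limit_sat + fee_sat

    # lowering the feerate via update_fee lowers the threshold, so an HTLC
    # trimmed at 1000 sat/kw re-appears as an output at 500 sat/kw:
    assert is_trimmed(True, 1000, 1000, 546)
    assert not is_trimmed(True, 1000, 500, 546)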

Dust HTLCs will never appear in the commit tx regardless of subsequent
update_fees, so Eugene's suggestion could make sense!

On Sat, Apr 24, 2021 at 06:02, Matt Corallo  wrote:

> The update_fee message does not, as far as I recall, change the dust limit
> for outputs in a channel (though I’ve suggested making such a change).
>
> On Apr 23, 2021, at 12:24, Bastien TEINTURIER  wrote:
>
> 
> Hi Eugene,
>
> The reason dust HTLCs count for the 483 HTLC limit is because of
> `update_fee`.
> If you don't count them and exceed the 483 HTLC limit, you can't lower the
> fee anymore
> because some HTLCs that were previously dust won't be dust anymore and you
> may end
> up with more than 483 HTLC outputs in your commitment, which opens the
> door to other
> kinds of attacks.
>
> This is the first issue that comes to mind, but there may be other
> drawbacks if we dig into
> this enough with an attacker's mindset.
>
> Bastien
>
> On Fri, Apr 23, 2021 at 17:58, Eugene Siegel  wrote:
>
>> I propose a simple mitigation to increase the capital requirement of
>> channel-jamming attacks. This would prevent an unsophisticated attacker
>> with low capital from jamming a target channel.  It seems to me that this
>> is a *free* mitigation without any downsides (besides code-writing), so I'd
>> like to hear other opinions.
>>
>> In a commitment transaction, we trim dust HTLC outputs.  I believe that
>> the reason for the 483 HTLC limit each side has in the spec is to prevent
>> commitment tx's from growing unreasonably large, and to ensure they are
>> still valid tx's that can be included in a block.  If we don't include dust
>> HTLCs in this calculation, since they are not on the commitment tx, we
>> still allow 483 (x2) non-dust HTLCs to be included on the commitment tx.
>> There could be a configurable limit on the number of outstanding dust
>> HTLCs, but the point is that it doesn't affect the non-dust throughput of
>> the channel.  This raises the capital requirement of channel-jamming so
>> that each HTLC must be non-dust, rather than spamming 1 sat payments.
>>
>> Interested in others' thoughts.
>>
>> Eugene (Crypt-iQ)
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-23 Thread Bastien TEINTURIER
Hi Eugene,

The reason dust HTLCs count for the 483 HTLC limit is because of
`update_fee`. If you don't count them and exceed the 483 HTLC limit, you
can't lower the fee anymore, because some HTLCs that were previously dust
won't be dust anymore and you may end up with more than 483 HTLC outputs
in your commitment, which opens the door to other kinds of attacks.

This is the first issue that comes to mind, but there may be other
drawbacks if we dig into this enough with an attacker's mindset.

Bastien

On Fri, Apr 23, 2021 at 17:58, Eugene Siegel  wrote:

> I propose a simple mitigation to increase the capital requirement of
> channel-jamming attacks. This would prevent an unsophisticated attacker
> with low capital from jamming a target channel.  It seems to me that this
> is a *free* mitigation without any downsides (besides code-writing), so I'd
> like to hear other opinions.
>
> In a commitment transaction, we trim dust HTLC outputs.  I believe that
> the reason for the 483 HTLC limit each side has in the spec is to prevent
> commitment tx's from growing unreasonably large, and to ensure they are
> still valid tx's that can be included in a block.  If we don't include dust
> HTLCs in this calculation, since they are not on the commitment tx, we
> still allow 483 (x2) non-dust HTLCs to be included on the commitment tx.
> There could be a configurable limit on the number of outstanding dust
> HTLCs, but the point is that it doesn't affect the non-dust throughput of
> the channel.  This raises the capital requirement of channel-jamming so
> that each HTLC must be non-dust, rather than spamming 1 sat payments.
>
> Interested in others' thoughts.
>
> Eugene (Crypt-iQ)
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] L2s Onchain Support IRC Workshop

2021-04-23 Thread Bastien TEINTURIER
Great idea, I'll join as well.
Thanks for setting this in motion.

On Fri, Apr 23, 2021 at 17:39, Antoine Riard  wrote:

> Hi Jeremy,
>
> Yes dates are floating for now. After Bitcoin 2021, sounds a good idea.
>
> Awesome, I'll be really interested to review again an improved version of
> sponsorship. And I'll try to sketch out the sighash_no-input fee-bumping
> idea which was floating around last year during pinnings discussions. Yet
> another set of trade-offs :)
>
> On Fri, Apr 23, 2021 at 11:25, Jeremy  wrote:
>
>> I'd be excited to join. Recommend bumping the date  to mid June, if
>> that's ok, as many Americans will be at Bitcoin 2021.
>>
>> I was thinking about reviving the sponsors proposal with a 100 block lock
>> on spending a sponsoring tx which would hopefully make less controversial,
>> this would be a great place to discuss those tradeoffs.
>>
>> On Fri, Apr 23, 2021, 8:17 AM Antoine Riard 
>> wrote:
>>
>>> Hi,
>>>
>>> During the last few years, tx-relay and mempool acceptance rules of the
>>> base layer have been sources of major security and operational concerns for
>>> Lightning and other Bitcoin second-layers [0]. I think those areas require
>>> significant improvements to ease design and deployment of higher Bitcoin
>>> layers and I believe this opinion is shared among the L2 dev community. In
>>> order to make advancements, it has been discussed a few times in the last
>>> months to organize in-person workshops to discuss those issues with the
>>> presence of both L1/L2 devs to make exchange fruitful.
>>>
>>> Unfortunately, I don't think we'll be able to organize such in-person
>>> workshops this year (because you know travel is hard those days...) As a
>>> substitution, I'm proposing a series of one or more irc meetings. That
>>> said, this substitution has the happy benefit to gather far more folks
>>> interested by those issues that you can fit in a room.
>>>
>>> # Scope
>>>
>>> I would like to propose the following 4 items as topics of discussion.
>>>
>>> 1) Package relay design, or another generic L2 fee-bumping primitive
>>> like sponsorship [0]. IMHO, this primitive should at least solve
>>> mempool spikes that make the propagation of transactions with
>>> pre-signed feerates obsolete, solve pinning attacks compromising the
>>> safety of Lightning/multi-party contract protocols, offer a usable and
>>> stable API to the L2 software stack, stay compatible with miner and
>>> full-node operators' incentives, and obviously minimize CPU/memory DoS
>>> vectors.
>>>
>>> 2) Deprecation of opt-in RBF in favor of full-RBF. Opt-in RBF makes it
>>> trivial for an attacker to partition network mempools into divergent
>>> subsets and from there launch advanced security or privacy attacks
>>> against a Lightning node. Note that it might also be a concern for
>>> bandwidth-bleeding attacks against L1 nodes.
>>>
>>> 3) Guidelines for coordinated cross-layer security disclosures.
>>> Mitigating a security issue around tx-relay or the mempool in Core
>>> might have harmful implications for downstream projects. Ideally, L2
>>> project maintainers should be ready to upgrade their protocols in an
>>> emergency, in coordination with base-layer developers.
>>>
>>> 4) Guidelines for the on-chain security design of L2 protocols.
>>> Currently deployed protocols like Lightning make a bunch of assumptions
>>> about tx-relay and mempool acceptance rules. Those rules are
>>> non-normative and unreliable, lack documentation, and are devoid of
>>> tooling to enforce them at runtime [2]. IMHO, it could be preferable to
>>> identify a subset of them on which second-layer protocols can rely
>>> without encroaching too much on nodes' policy realm or making
>>> base-layer development in those areas too cumbersome.
>>>
>>> I'm aware that some folks are interested in other topics, such as
>>> extending Core's mempool package limits or better pricing of RBF
>>> replacements. So I propose a 2-week consultation period to submit other
>>> topics related to tx-relay or mempool improvements for L2s, before
>>> proposing a finalized scope and agenda.
>>>
>>> # Goals
>>>
>>> 1) Reaching technical consensus.
>>> 2) Reaching technical consensus, before seeking community consensus as
>>> it likely has ecosystem-wide implications.
>>> 3) Establishing a security incident response policy which can be applied
>>> by dev teams in the future.
>>> 4) Establishing a design philosophy and associated documentation (BIPs,
>>> best practices, ...)
>>>
>>> # Timeline
>>>
>>> 2021-04-23: Start of consultation period
>>> 2021-05-07: End of consultation period
>>> 2021-05-10: Proposition of workshop agenda and schedule
>>> late 2021-05/2021-06: IRC meetings
>>>
>>> As the problem space is savagely wide, I've started a collection of
>>> documents to assist this workshop: https://github.com/ariard/L2-zoology
>>> Still WIP, but I'll have them in good shape by agenda publication, with
>>> reading suggestions and open questions to structure discussions.

[Lightning-dev] Trampoline routing improvements and updates

2020-12-28 Thread Bastien TEINTURIER
Good morning list,

Before we close this amazing year, I wanted to give you an update and
reboot excitement around trampoline routing for 2021.

Acinq has been running a trampoline node for more than a year to provide
simple and reliable payments for tens of thousands of Phoenix [1] users.
We've learned a lot, and I just opened a new trampoline routing spec PR to
reflect that [2].

The TL;DR is:

* it's simpler than the previous proposal and more flexible
* it makes MPP more cost-efficient and reliable
* it works nicely with rendezvous or route blinding
* it's as private as normal payments (likely more private) if used properly
(details in the PR)

I strongly believe the current state of trampoline routing can provide
great benefits in terms of wallet UX and reliability, but we need more
reviews for the spec to converge before it can be broadly deployed without
fear of moving parts or breaking changes. Please have a look at the
proposal without preconceived ideas; you may be surprised by how simple
and natural it feels.

I also want to stress that the code changes are very reasonable, as it
re-uses a lot of components that are already part of every lightning
implementation and doesn't introduce new assumptions.

As a matter of fact, an independent implementation has been completed by the
Electrum team and has been recently tested E2E on mainnet! Having a spec
agreement on feature bits, invoice hints format and onion error codes would
allow their wallet to fully interoperate with Phoenix and future trampoline
wallets, as well as unblock development of even more improvements.

Happy end of year to all and stay #reckless in 2021!

Bastien

[1] https://phoenix.acinq.co/
[2] https://github.com/lightningnetwork/lightning-rfc/pull/829


Re: [Lightning-dev] Mitigating Channel Jamming with Stake Certificates

2020-11-27 Thread Bastien TEINTURIER
Good morning list,

This is an interesting approach to solve this problem, I really like the
idea. It definitely deserves digging into more: the fact that it doesn't
add an additional payment makes it largely superior to upfront payment
schemes in terms of UX.

If we restrict these stake certificates to LN funding txs, which have a
very specific format (2-of-2 multisig), there are probably smart ways to
achieve this. If, for example, we're able to do it easily with
Schnorr-based funding txs, it may be worth waiting for that to happen.
I'm a bit afraid of having to use ZKPs for general statements; I'd prefer
something tailored to that specific case (it would likely be more
efficient and have fewer new assumptions - even though you're right to
point out that this is a non-critical system, so we're freer to experiment
with hot new stuff).

I completely agree with Z that the requirements should state that a node
cannot reuse another node's stake certificate for itself.

> Another constraint is that the proof has to be small, since we have to
> fit it all in a small onion...

I'm not sure that's necessary. If I understand correctly, you're saying
that because, in your model, the sender (Alice) creates one stake
certificate for each node in the route (Bob, Carol) and puts them all in
the onion.

But instead it could be a point-to-point property: each node provides its
own stake certificate to the next node (and only to that node). Alice
provides a stake certificate to Bob, then Bob provides a stake certificate
to Carol, and so on. If that's the case, it can be in a tlv field in the
`update_add_htlc` message and doesn't need to be inside the onion. This
also makes it less likely that Alice exposes herself to remote nodes in
the route (payer privacy).

Of course, this depends on the implementation details we choose, but I
think it's worth stressing that these two models exist and are quite
different.
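
To illustrate the second model, here is a rough sketch of what a
point-to-point certificate could look like; the message fields mirror
`update_add_htlc`, but the certificate structure and its tlv placement are
purely hypothetical:

```python
# Rough sketch of the point-to-point model: every hop attaches its own
# certificate when forwarding. The certificate fields and the optional
# tlv in `update_add_htlc` are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StakeCertificate:
    ownership_proof: bytes    # proof of control of a funding-like UTXO
    committed_node_id: bytes  # pubkey of the node this proof is issued
                              # to, so that node cannot reuse it

@dataclass
class UpdateAddHtlc:
    channel_id: bytes
    htlc_id: int
    amount_msat: int
    payment_hash: bytes
    cltv_expiry: int
    onion_routing_packet: bytes
    stake_certificate: Optional[StakeCertificate] = None  # hypothetical tlv

# Bob validates Alice's certificate, then forwards with his *own*
# certificate committed to Carol's node_id: nothing about Alice leaks
# further down the route.
```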

Thanks,
Bastien

On Fri, Nov 27, 2020 at 07:46, ZmnSCPxj via Lightning-dev
<lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Gleb,
>
> > Thank you for your interest :)
> >
> > > Quick question: if I am a routing node and receive a valid stake
> certificate, can I reuse this stake certificate on my own outgoing payments?
> >
> > That probably should be avoided, otherwise a mediocre routing node
> > gets a lot of jamming opportunities for no good reason.
> >
> > You are right, that’s a strong argument for proof “interactivity”: every
> Certificate should probably commit to *at least* public key of the routing
> node it is generated for.
>
> Right, it would be better to have the certificate commit to a specific
> routing node rather than the payment hash/point as I proposed.
> Committing to a payment hash/point allows a random forwarding node to
> probe the rest of the network using the same certificate, lowering the
> score for that certificate on much of the network.
>
> Another constraint is that the proof has to be small, since we have to fit
> it all in a small onion...
>
> Presumably we also want the score to eventually "settle to 0" over time.
>
> Regards,
> ZmnSCPxj
>
> >
> > – gleb
> > On Nov 27, 2020, 2:16 AM +0200, ZmnSCPxj ,
> wrote:
> >
> > > Good morning Gleb and Antoine,
> > >
> > > This is certainly interesting!
> > >
> > > Quick question: if I am a routing node and receive a valid stake
> certificate, can I reuse this stake certificate on my own outgoing payments?
> > >
> > > It seems to me that the proof-of-stake-certificate should also somehow
> integrate a detail of the current payment (such as payment hash/point) so
> it cannot be reused by routing nodes for their own outgoing payments.
> > >
> > > For example, looking only at your naive privacy-broken proposal, the
> signature must use a `sign-to-contract` where the `R` in the signature is
> actually `R' + h(R' | payment_hash)` with the `R'` also revealed.
> > >
> > > Regards,
> > > ZmnSCPxj
> > >
> > > > Hello list,
> > > >
> > > > In this post, we explore a different approach to channel jamming
> mitigation.
> > > > We won’t talk about the background here, for the problem description
> as well as some proposed solutions (mainly upfront payment schemes), see
> [1].
> > > >
> > > > We’re suggesting using UTXO ownership proofs (a.k.a. Stake
> Certificates) to solve this problem. Previously, these proofs were only
> used in the Lightning Network at channel announcement time to prevent
> malicious actors from announcing channels they don’t control. One can think
> of it as a “fidelity bond” (as a scarce resource) as a requirement for
> sending HTLCs.
> > > >
> > > > We start by overviewing issues with other solutions, and then
> present a naive, privacy-broken Stake Certificates. Then we examine
> designing a privacy-preserving version, evaluating them. At the end, we
> talk about non-trivial design decisions and open questions.
> > > >
> > > > ## Issues with other proposals
> > > >
> > > > We find unsatisfying that upfront 

Re: [Lightning-dev] Minor tweaks to blinded path proposal

2020-11-19 Thread Bastien TEINTURIER
Hey Rusty,

Good questions.

I think we could use additive tweaks, and they are indeed faster, so it
may be worth doing. We would replace `B(i) = HMAC256("blinded_node_id",
ss(i)) * P(i)` with `B(i) = HMAC256("blinded_node_id", ss(i)) * G + P(i)`.
Intuitively, since the private key of the tweak comes from a hash
function, it should offer the same security. But there may be dragons
lurking there: I don't know how to properly evaluate whether it's as
secure (whereas the multiplicative version is really just Sphinx, so we
know it should be secure).

If we're able to use additive tweaks, we can probably indeed use x-only
pubkeys. That said, we're not storing these on-chain, so the byte saved
isn't worth much. I'd say that if it's trivial to use them, let's do it;
otherwise it's not worth any additional effort.
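
For readers who want to play with the two options, here is a small
sanity-check sketch. It assumes the python-ecdsa package, and it treats
HMAC256("blinded_node_id", ss(i)) as HMAC-SHA256 keyed by that literal
string, which is my assumption rather than the spec's exact construction:

```python
# Sanity check of multiplicative vs additive node id blinding.
import hashlib, hmac
from ecdsa import SECP256k1

G, n = SECP256k1.generator, SECP256k1.order

def blinding_tweak(ss: bytes) -> int:
    return int.from_bytes(
        hmac.new(b"blinded_node_id", ss, hashlib.sha256).digest(),
        "big") % n

p_i = 0xC0FFEE          # illustrative node private key
P_i = G * p_i           # corresponding node pubkey P(i)
ss_i = b"\x02" * 32     # illustrative per-hop shared secret ss(i)
t = blinding_tweak(ss_i)

B_mult = P_i * t        # current proposal: B(i) = HMAC(...) * P(i)
B_add = G * t + P_i     # additive variant: B(i) = HMAC(...) * G + P(i)

# In both cases the node can derive the blinded private key itself:
assert G * ((p_i * t) % n) == B_mult
assert G * ((p_i + t) % n) == B_add
```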

Cheers,
Bastien

On Wed, Nov 18, 2020 at 06:18, Rusty Russell wrote:

>
> See:
>
> https://github.com/lightningnetwork/lightning-rfc/blob/route-blinding/proposals/route-blinding.md
>
> 1. Can we use additive tweaks instead of multiplicative?
>They're slightly faster, and supported by the x-only secp API.
> 2. Can we use x-only pubkeys?  It's generally trivial, and a byte
>shorter.  I'm using them in offers to great effect.
>
> Thanks!
> Rusty.
>


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-11-02 Thread Bastien TEINTURIER via Lightning-dev
Good morning Joost and Z,

> So in your proposal, an htlc that is received by a routing node has the
> following properties:
> * htlc amount
> * forward up-front payment (anti-spam)
> * backward up-front payment (anti-hold)
> * grace period
> The routing node forwards this to the next hop with
> * lower htlc amount (to earn routing fees when the htlc settles)
> * lower forward up-front payment (to make sure that an attacker at the
> other end loses money when failing quickly)
> * higher backward up-front payment (to make sure that an attacker at the
> other end loses money when holding)
> * shorter grace period (so that there is time to fail back and not lose
> the backward up-front payment)


That's exactly it, this is a good summary.

> An issue with the bidirectional upfront/hold fees is related to
> trustless offchain-to-onchain swaps, like Boltz and Lightning Loop.
> As the claiming of the offchain side is dependent on claiming of the
> onchain side of the trustless swap mechanism, which is *definitely*
> slow, the swap service will in general be forced to pay up the hold fees.


Yes, that is a good observation.
But shouldn't the swap service take that into account in the fee it
collects to perform the swap? That way it is in fact the user who pays
for that fee.

Cheers,
Bastien

On Wed, Oct 28, 2020 at 02:13, ZmnSCPxj wrote:

> Good morning Bastien, Joost, and all,
>
> An issue with the bidirectional upfront/hold fees is related to trustless
> offchain-to-onchain swaps, like Boltz and Lightning Loop.
>
> As the claiming of the offchain side is dependent on claiming of the
> onchain side of the trustless swap mechanism, which is *definitely* slow,
> the swap service will in general be forced to pay up the hold fees.
>
> It seems to me that the hold-fees mechanism cannot be ported over in the
> onchain side, so even if you set a "reasonable" grace period at the swap
> service of say 1 hour (and assuming forwarding nodes are OK with that
> humongous grace period!), the onchain side of the swap can delay the
> release of onchain.
>
> To mitigate against this, the swap service would need to issue a separate
> invoice to pay for the hold fee for the "real" swap payment.
> The Boltz protocol supports a separate mining-fee invoice (disabled on the
> Boltz production servers) that is issued after the invoice is "locked in"
> at the swap service, but I think that in view of the use of hold fee, a
> combined mining-fee+hold-fee invoice would have to be issued at the same
> time as the "real" swap invoice.
>
> Regards,
> ZmnSCPxj
>
>


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-23 Thread Bastien TEINTURIER via Lightning-dev
Hey Joost and Z,

> I brought up the question about the amounts because it could be that
> amounts high enough to thwart attacks are too high for honest users or
> certain uses.


I don't think this is a concern for this proposal, unless there's an
attack vector I missed. The reason I claim that is that the backwards
upfront payment can be made somewhat big without any negative impact on
honest nodes. If you're an honest intermediate node, only two cases are
possible:

* your downstream peer settled the HTLC quickly (before the grace period
ends): in that case you refund him his upfront fee, and you have time to
settle the HTLC upstream while still honoring the grace period, so it
will be refunded to you as well (unless you delay the settlement upstream
for whatever reason, in which case you deserve to pay the hold_fee)
* your grace period has expired, so you can't get a refund upstream: if
that happens, the grace period with your downstream node has also
expired, so you're earning money downstream and paying money upstream,
and you'll usually even take a small positive spread, so everything's
good

The only node that can end up losing money on the backwards upfront
payment is the last node in the route. But that node should always settle
the HTLC quickly (or decide to hodl it, but in that case it's normal that
it pays the hold_fee).

> But what happens if the attacker is also on the other end of the
> uncontrolled spam payment? Not holding the payment, but still collecting
> the forward payments?


That's what I call short-lived `controlled spam`. In that case the
attacker pays the forward fee at the beginning of the route but has it
refunded at the end of the route. If the attacker doesn't want to lose
any money, he has to release the HTLC before the grace period ends (which
is going to be short-lived - at least compared to block times). This
gives an opportunity for legitimate payments to use the HTLC slots (but
it's a race between the attacker and the legitimate users).

It's not ideal, because the attacker isn't penalized... The only way I
think we can penalize this kind of attack is if the forward fee
decrements at each hop, but in that case it needs to be in the onion (to
avoid probing) and the delta needs to be high enough to actually penalize
the attacker. Time to bikeshed some numbers!

> C can trivially grief D here, making it look like D is delaying, by
> delaying its own `commitment_signed` containing the *removal* of the
> HTLC.


You're right to dive into these, there may be something here. But I think
your example doesn't work, let me know if I'm mistaken. D is the one who
decides whether he'll be refunded or not, because D is the first to send
the `commit_sig` that removes the HTLC. I think we would extend
`commit_sig` with a tlv field that indicates "I refunded myself for HTLC
N" to help C compute the same commit tx and verify sigs.

I agree with you that the details of how we'll implement the grace period
may open griefing attacks depending on how we do it; it's worth exploring
further.

Cheers,
Bastien

On Fri, Oct 23, 2020 at 12:50, ZmnSCPxj wrote:

> Good morning t-bast,
>
>
> > > And in this case C earns.
> >
> > > Can C delay the refund to D to after the grace period even if D
> settled the HTLC quickly?
> >
> > Yes C earns, but D has misbehaved. As a final recipient, D isn't
> dependent on anyone downstream.
> > An honest D should settle the HTLC before the `grace_period` ends. If D
> chooses to hold the HTLC
> > for a while, then it's fair that he pays C for this.
>
>
> Okay, now let us consider the case where the supposedly-delaying party is
> not the final destination.
>
> So, suppose D indicates to C that it should fail the HTLC.
> In this case, C cannot immediately propagate the `update_fail_htlc`
> upstream, since the latest commitment transaction for the C<->D channel
> still contains the HTLC.
>
> In addition, our state machine is hand-over-hand, i.e. there is a small
> window where there are two valid commitment transactions.
> What happens is we sign the next commitment transaction and *then* revoke
> the previous one.
>
> So I think C can only safely propagate its own upstream `update_fail_htlc`
> once it receives the `revoke_and_ack` from D.
>
> So the time measured for the grace period between C and D should be from C
> sending `update_add_htlc` to C receiving `revoke_and_ack` from D, in case
> the HTLC fails.
> This is the time period that D is allowed to consume, and if it exceeds
> the grace period, it is penalized.
>
> (In this situation, it is immaterial if D is the destination: C cannot
> know this fact.)
>
> So let us diagram this better:
>
>  C                            D
>  |---update_add_htlc--------->|  ---
>  |---commitment_signed------->|   ^
>  |<--commitment_signed--------|   |
>  |---revoke_and_ack---------->|   |
>  |                            | grace period
>  |<--update_fail_htlc---------|   |
>  |<--commitment_signed--------|   |
Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-23 Thread Bastien TEINTURIER via Lightning-dev
Thanks for your answers,

> My first instinct is that additional complications are worse in general.
> However, it looks like simpler solutions are truly not enough, so adding
> the complication may very well be necessary.


I agree with both these statements ;). I'd love to find a simpler
solution, but this is the simplest I've been able to come up with for now
that seems to work without adding griefing vectors...

> The succeeding text refers to HTLCs "settling".


As you noted, settling means getting the HTLC removed from the commitment
transaction. It includes both fulfills and fails; otherwise the proposal
indeed doesn't penalize spam.

> If we also require that the hold fee be funded from the main output,
> then we cannot use single-funded channels, except perhaps with
> `push_msat`.


I see what you mean: the first payment cannot require a hold fee since
the fundee doesn't have a main output. I think it's ok, it's the same
thing as the reserve not being met initially.

But you're right that there are potentially other mechanisms to enforce
the fee (like your suggestion of subtracting from the HTLC output). I
chose the simplest for now, but we can (and will) revisit that choice if
we think that the overall mechanism works!

> And in this case C earns.

> Can C delay the refund to D to after the grace period even if D settled
> the HTLC quickly?


Yes, C earns, but D has misbehaved. As a final recipient, D isn't
dependent on anyone downstream. An honest D should settle the HTLC before
the `grace_period` ends. If D chooses to hold the HTLC for a while, then
it's fair that he pays C for this.

> it is the fault of the peer for getting disconnected and having a delay
> in reconnecting, possibly forfeiting the hold fee because of that.


I think I agree with that, but we'll need to think about the pros and
cons when we get to the details.

> Is 1 msat even going to deter anyone?

> I am wondering though what the values for the fwd and bwd fees should
> be. I agree with ZmnSCPxj that 1 msat for the fwd is probably not going
> to be enough.


These values were only chosen for the sake of the example. If we agree
the proposal works to fight spam, we will do some calculations to figure
out a good value for this. But I think finding the right base values will
not be the hard part, so we'll focus on this once we're convinced the
proposal is worth exploring in full detail.
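
To give a flavor of that calculation, here is a back-of-the-envelope
sketch along the lines of the 5%-per-year scenario Joost describes in the
quoted mail below; all numbers are illustrative, not proposed values:

```python
# Back-of-the-envelope sizing of hold fees (illustrative numbers only).
CAPITAL_MSAT = 100_000_000 * 1000   # 1 BTC channel
TARGET_YEARLY_RETURN = 0.05         # 5% opportunity cost of capital
SLOTS = 483                         # max pending HTLCs per direction
SECONDS_PER_YEAR = 365 * 24 * 3600

# If an attacker jams every slot continuously, hold fees should at least
# match the routing income the victim loses:
target_msat_per_slot_second = (
    CAPITAL_MSAT * TARGET_YEARLY_RETURN / SLOTS / SECONDS_PER_YEAR)
print(f"{target_msat_per_slot_second:.4f} msat per slot-second")
print(f"{target_msat_per_slot_second * 3600:.0f} msat per slot-hour")
# ~0.33 msat per slot-second, i.e. roughly 1.2 sat per slot-hour: a flat
# hold_fee of a few sats already covers multi-hour holds.
```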

> It is interesting that the forward and backward payments are relatively
> independent of each other


To explain this further, I think it's important to highlight that the
forward fee is meant to fight `uncontrolled spam` (where the recipient is
an honest node) while the backward fee is meant to fight `controlled
spam` (where the recipient also belongs to the attacker).

The reason it works is that `uncontrolled spam` requires the attacker to
send a large volume of HTLCs, so a very small forward fee gets magnified.
The backward fee will be much bigger because in `controlled spam`, the
attacker doesn't need a large volume of HTLCs but holds them for a long
time. What I think is nice is that this proposal has only a tiny cost for
honest senders (the forward fee).

What I'd really like to explore is whether there is a type of spam that I
missed or griefing attacks that appear because of the mechanisms I
introduce. TBH I think the implementation details (amounts, grace periods
and their deltas, when to start counting, etc.) are things we'll be able
to figure out collectively later.

Thanks again for your time!
Bastien


On Fri, Oct 23, 2020 at 07:58, Joost Jager wrote:

> Hi Bastien,
>
> We add a forward upfront payment of 1 msat (fixed) that is paid
>> unconditionally when offering an HTLC.
>> We add a backwards upfront payment of `hold_fees` that is paid when
>> receiving an HTLC, but refunded
>> if the HTLC is settled before the `hold_grace_period` ends (see footnotes
>> about this).
>>
>
> It is interesting that the forward and backward payments are relatively
> independent of each other. In particular the forward anti-spam payment
> could quite easily be implemented to help protect the network. As you said,
> just transfer that fixed fee for every `update_add_htlc` message from the
> offerer to the receiver.
>
> I am wondering though what the values for the fwd and bwd fees should be.
> I agree with ZmnSCPxj that 1 msat for the fwd is probably not going to be
> enough.
>
> Maybe a way to approach it is this: suppose routing nodes are able to make
> 5% per year on their committed capital. An aggressive routing node could be
> willing to spend up to that amount to take down a competitor.
>
> Suppose the network consists only of 1 BTC, 483 slot channels. What should
> the fwd and bwd fees be so that even an attacked routing node will still
> earn that 5% (not through forwarding fees, but through hold fees) in both
> the controlled and the uncontrolled spam scenario?
>
> - Joost
>

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-22 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

Sorry in advance for the lengthy email, but I think it's worth detailing
my hybrid proposal (bidirectional upfront payments); it feels to me like
a workable solution that builds on previous proposals. You can safely
ignore the details at the end of the email and focus only on the
high-level mechanism at first.

Let's consider the following route: A -> B -> C -> D

We add a `hold_grace_period_delta` field to `channel_update` (in seconds).
We add two new fields in the tlv extension of `update_add_htlc`:

* `hold_grace_period` (seconds)
* `hold_fees` (msat)

We add an `outgoing_hold_grace_period` field in the onion per-hop payload.

When nodes receive an `update_add_htlc`, they verify that:

* `hold_fees` is not unreasonably large
* `hold_grace_period` is not unreasonably small or large
* `hold_grace_period` - `outgoing_hold_grace_period` >= `hold_grace_period_delta`

Otherwise they immediately fail the HTLC instead of relaying it.
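
Expressed as code, the receiving node's checks above could look like the
following minimal sketch (the bounds are placeholders, not proposed
values):

```python
# Minimal sketch of the receiver-side checks; bounds are placeholders.
HOLD_FEES_MAX_MSAT = 50
GRACE_PERIOD_MIN_SEC = 30
GRACE_PERIOD_MAX_SEC = 600

def accept_htlc(hold_fees_msat: int, hold_grace_period_sec: int,
                outgoing_hold_grace_period_sec: int,
                hold_grace_period_delta_sec: int) -> bool:
    if hold_fees_msat > HOLD_FEES_MAX_MSAT:
        return False  # hold_fees unreasonably large
    if not GRACE_PERIOD_MIN_SEC <= hold_grace_period_sec <= GRACE_PERIOD_MAX_SEC:
        return False  # grace period unreasonably small or large
    # Keep enough margin to settle upstream after the downstream settles:
    if (hold_grace_period_sec - outgoing_hold_grace_period_sec
            < hold_grace_period_delta_sec):
        return False
    return True  # relay; on False, fail the HTLC immediately
```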

For the example we assume all nodes use `hold_grace_period_delta = 10`.

We add a forward upfront payment of 1 msat (fixed) that is paid
unconditionally when offering an HTLC.
We add a backwards upfront payment of `hold_fees` that is paid when
receiving an HTLC, but refunded
if the HTLC is settled before the `hold_grace_period` ends (see footnotes
about this).

* A sends an HTLC to B:
  * `hold_grace_period = 100 sec`
  * `hold_fees = 5 msat`
  * `next_hold_grace_period = 90 sec`
  * forward upfront payment: 1 msat is deduced from A's main output and
    added to B's main output
  * backwards upfront payment: 5 msat are deduced from B's main output
    and added to A's main output
* B forwards the HTLC to C:
  * `hold_grace_period = 90 sec`
  * `hold_fees = 6 msat`
  * `next_hold_grace_period = 80 sec`
  * forward upfront payment: 1 msat is deduced from B's main output and
    added to C's main output
  * backwards upfront payment: 6 msat are deduced from C's main output
    and added to B's main output
* C forwards the HTLC to D:
  * `hold_grace_period = 80 sec`
  * `hold_fees = 7 msat`
  * `next_hold_grace_period = 70 sec`
  * forward upfront payment: 1 msat is deduced from C's main output and
    added to D's main output
  * backwards upfront payment: 7 msat are deduced from D's main output
    and added to C's main output

* Scenario 1: D settles the HTLC quickly:
  * all backwards upfront payments are refunded (returned to the
    respective main outputs)
  * only the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)

* Scenario 2: D settles the HTLC after the grace period:
  * D's backwards upfront payment is not refunded
  * if C and B relay the settlement upstream quickly (before
    `hold_grace_period_delta`), their backwards upfront payments are
    refunded
  * all the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)

* Scenario 3: C delays the HTLC:
  * D settles before its `grace_period`, so its backwards upfront payment
    is refunded by C
  * C delays before settling upstream: it can ensure B will not get
    refunded, but C will not get refunded either, so B gains the
    difference in backwards upfront payments (which protects against
    `controlled spam`)
  * all the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)

* Scenario 4: the channel B <-> C closes:
  * D settles before its `grace_period`, so its backwards upfront payment
    is refunded by C
  * for whatever reason (malicious or not) the B <-> C channel closes
  * this ensures that C's backwards upfront payment is paid to B
  * if C publishes an HTLC-fulfill quickly, B may have his backwards
    upfront payment refunded by A
  * if B is forced to wait for his HTLC-timeout, his backwards upfront
    payment will not be refunded, but it's ok because B got C's backwards
    upfront payment
  * all the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)
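
A small simulation may help check the incentives in the scenarios above;
this is a sketch using the example's amounts, not an implementation:

```python
# Net balance deltas (msat) on A -> B -> C -> D with the example's
# amounts: forward fee 1 msat per hop, backward hold fees of 5/6/7 msat.
# refunded[i] is True when the backward payment on hop i (between node i
# and node i+1) is refunded within the grace period.
def balance_deltas(hold_fees, refunded):
    delta = [0] * (len(hold_fees) + 1)
    for i, fee in enumerate(hold_fees):
        delta[i] -= 1         # forward upfront payment (never refunded)
        delta[i + 1] += 1
        if not refunded[i]:   # upstream node keeps the backward payment
            delta[i + 1] -= fee
            delta[i] += fee
    return delta

print(balance_deltas([5, 6, 7], [True, True, True]))
# Scenario 1 -> [-1, 0, 0, 1]: only the forward fees move.
print(balance_deltas([5, 6, 7], [False, False, True]))
# Scenario 3 -> [4, 1, -6, 1]: C, the delaying node, pays out overall,
# and B pockets the spread between the two backward payments.
```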

If done naively, this mechanism may allow intermediate nodes to
deanonymize the sender/recipient. If the base `grace_period` and
`hold_fees` are randomized, I believe this attack vector disappears, but
it's worth exploring in more detail.

The most painful part of this proposal will be handling the
`grace_period`:

* when do you start counting: when you send/receive `update_add_htlc`,
  `commit_sig` or `revoke_and_ack`?
* what happens if there is a disconnection (how do you account for the
  delay of reconnecting)?
* what happens if the remote settles after the `grace_period`, but
  refunds himself when sending his `commit_sig` (making it look like,
  from his point of view, he settled before the `grace_period`)? I think
  in that case the behavior should be to give your peers some leeway and
  let them get away with it, but record it. If they're doing it too
  often, close channels and ban them; stealing upfront fees should never
  be worth losing channels.

I chose to make the backwards upfront payment fixed instead of scaling it
based on the time an HTLC is left pending; it's slightly less penalizing
for spammers, but is less complex and

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-19 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

I've started summarizing proposals, attacks and threat models on github
[1]. I'm hoping it will help readers get up to speed and avoid falling
into the same pitfalls we already fell into with previous proposals.

I've kept it very high-level for now; we can add nitty-gritty technical
details as we slowly converge towards acceptable solutions. I have
probably missed subtleties from previous proposals; feel free to
contribute to correct my mistakes. I have omitted, for example, the
details of Rusty's previous proposal since he mentioned a new, better one
that will be described soon.

While doing this exercise, I couldn't find a reason why the `reverse
upfront payment` proposal would be broken (notice that I described it
using a flat amount after a grace period, not an amount based on the time
HTLCs are held). Can someone point me to the most obvious attacks on it?

It feels to me that its only issue is that it still allows spamming for
durations smaller than the grace period; my gut feeling is that if we add
a smaller forward-direction upfront payment to complement it, it could be
a working solution.

Pasting it here for completeness:

### Reverse upfront payment

This proposal builds on the previous one, but reverses the flow. Nodes
pay a fee for *receiving* HTLCs instead of *sending* them.

```text
A -> B -> C -> D

B pays A to receive the HTLC.
Then C pays B to receive the forwarded HTLC.
Then D pays C to receive the forwarded HTLC.
```

There must be a grace period during which no fees are paid; otherwise the
`uncontrolled spam` attack allows the attacker to force all nodes in the
route to pay fees while he's not paying anything.

The fee cannot be the same at each hop, otherwise it's free for the
attacker when he is at both ends of the payment route.

This fee must increase as the HTLC travels downstream: this ensures that
nodes that hold HTLCs longer are penalized more than nodes that fail them
fast, and if a node has to hold an HTLC for a long time because it's
stuck downstream, they will receive more fees than what they have to pay.

The grace period cannot be the same at each hop either, otherwise the
attacker can force Bob to be the only one to pay fees. Similarly to how
we have `cltv_expiry_delta`, nodes must have a `grace_period_delta`, and
the `grace_period` must be bigger upstream than downstream.

Drawbacks:

* The attacker can still lock HTLCs for the duration of the
  `grace_period` and repeat the attack continuously

Open questions:

* Does the fee need to be based on the time the HTLC is held?
* What happens when a channel closes and an HTLC-timeout has to be
  redeemed on-chain?
* Can we implement this without exposing the route length to intermediate
  nodes?
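
To make the per-hop structure concrete, here is how the values could be
derived, by analogy with `cltv_expiry_delta`; the base values and deltas
are illustrative, and independent of the open questions above:

```python
# Illustrative per-hop derivation of reverse-upfront parameters:
# fees increase downstream, grace periods decrease downstream.
def hop_parameters(base_fee_msat, base_grace_sec, fee_delta_msat,
                   grace_delta_sec, num_hops):
    hops = []
    fee, grace = base_fee_msat, base_grace_sec
    for _ in range(num_hops):
        hops.append({"receive_fee_msat": fee, "grace_period_sec": grace})
        fee += fee_delta_msat     # downstream nodes pay more to receive
        grace -= grace_delta_sec  # and have less time before fees accrue
    return hops

for hop in hop_parameters(5, 100, 1, 10, 3):  # A->B, B->C, C->D
    print(hop)
```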

Cheers,
Bastien

[1] https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md

On Sun, Oct 18, 2020 at 09:25, Joost Jager wrote:

> > We've looked at all kinds of trustless payment schemes to keep users
>>
>> > honest, but it appears that none of them is satisfactory. Maybe it is
>> even
>> > theoretically impossible to create a scheme that is trustless and has
>> all
>> > the properties that we're looking for. (A proof of that would also be
>>
>> > useful information to have.)
>>
>> I don't think anyone has drawn yet a formal proof of this, but roughly a
>> routing peer Bob, aiming to prevent resource abuse at HTLC relay is seeking
>> to answer the following question "Is this payment coming from Alice and
>> going to Caroll will compensate for my resources consumption ?". With the
>> current LN system, the compensation is conditional on payment settlement
>> success and both Alice and Caroll are distrusted yet discretionary on
>> failure/success. Thus the underscored question is undecidable for a routing
>> peer making relay decisions only on packet observation.
>>
>> One way to mitigate this, is to introduce statistical observation of
>> sender/receiver, namely a reputation system. It can be achieved through a
>> scoring system, web-of-trust, or whatever other solution with the same
>> properties.
>> But still it must be underscored that statistical observations are only
>> probabilistic and don't provide resource consumption security to Bob, the
>> routing peer, in a deterministic way. A well-scored peer may start to
>> suddenly misbehave.
>>
>> In that sense, the efficiency evaluation of a reputation-based solution
>> to deter DoS must be evaluated based based on the loss of the reputation
>> bearer related to the potential damage which can be inflicted. It's just
>> reputation sounds harder to compute accurately than a pure payment-based
>> DoS protection system.
>>
>
> I can totally see the issues and complexity of a reputation-based system.
> With 'trustless payment scheme' I meant indeed a trustless pure
> payment-based DoS protection system and the question whether such a system
> can be proven to not exist. A sender would pay an up-front amount to cover
> the maximum cost, but with the 

Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
To be honest, the current protocol can be hard to grasp at first (mostly
because it's hard to reason about two commit txs being constantly out of
sync), but from an implementation's point of view I'm not sure your
proposals are simpler.

One of the benefits of the current HTLC state machine is that once you
describe your state as a set of local changes (proposed by you) plus a
set of remote changes (proposed by them), where each of these is split
between proposed, signed and acked updates, the flow is straightforward
to implement and deterministic.
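
In code, that bookkeeping is roughly the following (a sketch of the idea
with illustrative names, not any implementation's actual types):

```python
# Sketch of the per-channel bookkeeping described above: two half-states,
# one per proposer, each tracking where its updates sit in the
# proposed -> signed -> acked pipeline.
from dataclasses import dataclass, field

@dataclass
class HalfState:
    proposed: list = field(default_factory=list)  # sent/received, unsigned
    signed: list = field(default_factory=list)    # covered by a commit_sig
    acked: list = field(default_factory=list)     # revoke_and_ack received

@dataclass
class ChannelState:
    local_changes: HalfState = field(default_factory=HalfState)   # ours
    remote_changes: HalfState = field(default_factory=HalfState)  # theirs
    # Both commitment txs are rebuilt deterministically from these six
    # lists, which keeps the flow manageable despite being asynchronous.
```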

The only tricky part (where we've seen recurring compatibility issues) is
what happens on reconnections. But it seems to me that the only missing
requirement in the spec is on the order of messages sent, and more
specifically that if you are supposed to send a `revoke_and_ack`, you
must send that first (or at least before sending any `commit_sig`).
Adding test scenarios in the spec could help implementers get this right.

It's a bit tricky to get right at first, but once you do, you don't need
to touch that code again and everything runs smoothly. We're pretty close
to that state, so why would we want to start from scratch? Or am I
missing something?

Cheers,
Bastien

On Tue, Oct 13, 2020 at 13:58, Christian Decker wrote:

> I wonder if we should just go the tried-and-tested leader-based
> mechanism:
>
>  1. The node with the lexicographically lower node_id is determined to
> be the leader.
>  2. The leader receives proposals for changes from itself and the peer
> and orders them into a logical sequence of changes
>  3. The leader applies the changes locally and streams them to the peer.
>  4. Either node can initiate a commitment by proposing a `flush` change.
>  5. Upon receiving a `flush` the nodes compute the commitment
> transaction and exchange signatures.
>
> This is similar to your proposal, but does away with turn changes (it's
> always the leader's turn), and therefore reduces the state we need to
> keep track of (and re-negotiate on reconnect).
>
> The downside is that we add a constant overhead to one side's
> operations, but since we pipeline changes, and are mostly synchronous
> during the signing of the commitment tx today anyway, this comes out to
> 1 RTT for each commitment.
>
> On the other hand, a token-passing approach (which I think is what you
> propose) requires a synchronous token handover whenever the direction
> of the updates changes. This is assuming I didn't misunderstand the turn
> mechanics of your proposal :-)
>
> Cheers,
> Christian
>
> Rusty Russell  writes:
> > Hi all,
> >
> > Our HTLC state machine is optimal, but complex[1]; the Lightning
> > Labs team recently did some excellent work finding another place the spec
> > is insufficient[2].  Also, the suggestion for more dynamic changes makes
> it
> > more difficult, usually requiring forced quiescence.
> >
> > The following protocol returns to my earlier thoughts, with cost of
> > latency in some cases.
> >
> > 1. The protocol is half-duplex, with each side taking turns; opener
> first.
> > 2. It's still the same form, but it's always one-direction so both sides
> >stay in sync.
> > update+-> commitsig-> <-revocation <-commitsig revocation->
> > 3. A new message pair "turn_request" and "turn_reply" let you request
> >when it's not your turn.
> > 4. If you get an update in reply to your turn_request, you lost the race
> >and have to defer your own updates until after peer is finished.
> > 5. On reconnect, you send two flags: send-in-progress (if you have
> >sent the initial commitsig but not the final revocation) and
> >receive-in-progress (if you have received the initial commitsig
> >not not received the final revocation).  If either is set,
> >the sender (as indicated by the flags) retransmits the entire
> >sequence.
> >Otherwise, (arbitrarily) opener goes first again.
> >
> > Pros:
> > 1. Way simpler.  There is only ever one pair of commitment txs for any
> >given commitment index.
> > 2. Fee changes are now deterministic.  No worrying about the case where
> >the peer's changes are also in flight.
> > 3. Dynamic changes can probably happen more simply, since we always
> >negotiate both sides at once.
> >
> > Cons:
> > 1. If it's not your turn, it adds 1 RTT latency.
> >
> > Unchanged:
> > 1. Database accesses are unchanged; you need to commit when you send or
> >receive a commitsig.
> > 2. You can use the same state machine as before, but one day (when
> >this would be compulsory) you'll be able signficantly simplify;
> >you'll need to record the index at which HTLCs were changed
> >(added/removed) in case peer wants you to rexmit though.
> >
> > Cheers,
> > Rusty.
> >
> > [1] This is my fault; I was persuaded early on that optimality was more
> > important than simplicity in a classic nerd-snipe.
> > [2] 

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
Hey laolu,

> I think this fits in nicely with the "parameter re-negotiation" portion
> of my loose Dynamic commitments proposal.


Yes, maybe it's better not to offer two mechanisms and to wait for
dynamic commitments to offer that flexibility.

> Instead, you may want to only allow them to utilize say 10% of the
> available HTLC bandwidth, slowly increasing based on successful
> payments, and drastically (multiplicatively) decreasing when you
> encounter very long lived HTLCs, or an excessive number of failures.


Exactly, that's the kind of heuristic I had in mind. Peers need to slowly
build trust before you give them access to more resources.
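
That heuristic is essentially TCP-style additive-increase /
multiplicative-decrease applied to HTLC bandwidth; a minimal sketch, with
all parameters assumed for illustration:

```python
# AIMD-style sketch of the "slow start" heuristic discussed above.
# All parameters are illustrative, not proposed values.
class PeerBandwidth:
    def __init__(self, negotiated_max_msat: int):
        self.max_msat = negotiated_max_msat
        self.allowed_msat = negotiated_max_msat // 10  # start at 10%

    def on_successful_payment(self):
        # additive increase, capped at the negotiated maximum
        self.allowed_msat = min(self.max_msat,
                                self.allowed_msat + self.max_msat // 100)

    def on_long_lived_htlc_or_failures(self):
        # multiplicative decrease on suspicious behavior
        self.allowed_msat = max(1, self.allowed_msat // 2)
```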

> This is possible to some degree today (by using an implicit value lower
> than the negotiated values), but the implicit route doesn't give the
> other party any information


Agreed, it's easy to implement locally, but it's not going to be very
nice to your peer, who has no way of knowing why you're rejecting HTLCs
and may end up closing the channel because it sees weird behavior. That's
why we need to offer an explicit re-negotiation of these parameters;
let's keep this use-case in mind when designing dynamic commitments!

Cheers,
Bastien

On Mon, Oct 12, 2020 at 20:59, Olaoluwa Osuntokun wrote:

>
> > I suggest adding tlv records in `commitment_signed` to tell our
> > channel peer that we're changing the values of these fields.
>
> I think this fits in nicely with the "parameter re-negotiation" portion of
> my
> loose Dynamic commitments proposal. Note that in that paradigm, something
> like this would be a distinct message, and also only be allowed with a
> "clean commitment" (as otherwise what if I reduce the number of slots to a
> value that is lower than the number of active slots?). With this, both
> sides
> would be able to propose/accept/deny updates to the flow control parameters
> that can be used to either increase the security of a channel, or implement
> a sort of "slow start" protocol for any new peers that connect to you.
>
> Similar to congestion window expansion/contraction in TCP, when a new peer
> connects to you, you likely don't want to allow them to be able to consume
> all the newly allocated bandwidth in an outgoing direction. Instead, you
> may
> want to only allow them to utilize say 10% of the available HTLC bandwidth,
> slowly increasing based on successful payments, and drastically
> (multiplicatively) decreasing when you encounter very long lived HTLCs, or
> an excessive number of failures.
>
> A dynamic HTLC bandwidth allocation mechanism would serve to mitigate
> several classes of attacks (supplementing any mitigations by "channel
> acceptor" hooks), and also give forwarding nodes more _control_ of exactly
> how their allocated bandwidth is utilized by all connected peers.  This is
> possible to some degree today (by using an implicit value lower than
> the negotiated values), but the implicit route doesn't give the other party
> any information, and may end up in weird re-send loops (as the _why_ an
> HTLC was rejected wasn't communicated). Also if you end up in a half-sign
> state, since we don't have any sort of "unadd", then the channel may end up
> borked if the violating party keeps retransmitting the same update upon
> reconnection.
>
> > Are there other fields you think would need to become dynamic as well?
>
> One other value that IMO should be dynamic to protect against future
> unexpected events is the dust limit. "It Is Known", that this value
> "doesn't
> really change", but we should be able to upgrade _all_ channels on the fly
> if it does for w/e reason.
>
> -- Laolu
>


Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
I totally agree with the simplicity argument. I wanted to raise this
because it's (IMO) an issue today because of the way we deal with
on-chain fees, but it's less impactful once update_fee is scoped to some
min_relay_fee.

Let's put this aside for now, then, and we can revisit later if needed.

Thanks for the feedback everyone!
Bastien

On Mon, Oct 12, 2020 at 20:49, Olaoluwa Osuntokun wrote:

> > It seems to me that the "funder pays all the commit tx fees" rule exists
> > solely for simplicity (which was totally reasonable).
>
> At this stage, I've learned that simplicity (when doing anything that
> involves multi-party on-chain fee negotiation/verification/enforcement)
> can really go a long way. Just think about all the edge cases w.r.t
> _allocating enough funds to pay for fees_ we've discovered over the past
> few years in the state machine. I fear adding a more elaborate fee
> splitting mechanism would only blow up the number of obscure edge cases
> that may lead to a channel temporarily or permanently being "borked".
>
> If we're going to add a "fairer" way of splitting fees, we'll really need
> to
> dig down pre-deployment to ensure that we've explored any resulting edge
> cases within our solution space, as we'll only be _adding_ complexity to
> fee
> splitting.
>
> IMO, anchor commitments in their "final form" (fixed fee rate on the
> commitment transaction, only "emergency" use of update_fee)
> significantly simplify things, as they shift from "funder pays fees" to
> "broadcaster/confirmer pays fees". However, as you note, this doesn't
> fully distribute the worst-case cost of needing to go to chain with a
> "fully loaded" commitment transaction. Even with HTLCs, they could only
> be signed at 1 sat/byte from the funder's perspective, once again
> putting the burden on the broadcaster/confirmer to make up the
> difference.
>
> -- Laolu
>
>
> On Mon, Oct 5, 2020 at 6:13 AM Bastien TEINTURIER via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Good morning list,
>>
>> It seems to me that the "funder pays all the commit tx fees" rule exists
>> solely for simplicity
>> (which was totally reasonable). I haven't been able to find much
>> discussion about this decision
>> on the mailing list nor in the spec commits.
>>
>> At first glance, it's true that at the beginning of the channel lifetime,
>> the funder should be
>> responsible for the fee (it's his decision to open a channel after all).
>> But as time goes by and
>> both peers earn value from this channel, this rule becomes questionable.
>> We've discovered since
>> then that there is some risk associated with having pending HTLCs
>> (flood-and-loot type of attacks,
>> pinning, channel jamming, etc).
>>
>> I think that *in some cases*, fundees should be paying a portion of the
>> commit-tx on-chain fees,
>> otherwise we may end up with a web-of-trust network where channels would
>> only exist between peers
>> that trust each other, which is quite limiting (I'm hoping we can do
>> better).
>>
>> Routing nodes may be at risk when they *receive* HTLCs. All the attacks
>> that steal funds come from
>> the fact that a routing node has paid downstream but cannot claim the
>> upstream HTLCs (correct me
>> if that's incorrect). Thus I'd like nodes to pay for the on-chain fees of
>> the HTLCs they offer
>> while they're pending in the commit-tx, regardless of whether they're
>> funder or fundee.
>>
>> The simplest way to do this would be to deduce the HTLC cost (172 *
>> feerate) from the offerer's
>> main output (instead of the funder's main output, while keeping the base
>> commit tx weight paid
>> by the funder).
>>
>> A more extreme proposal would be to tie the *total* commit-tx fee to the
>> channel usage:
>>
>> * if there are no pending HTLCs, the funder pays all the fee
>> * if there are pending HTLCs, each node pays a proportion of the fee
>> proportional to the number of
>> HTLCs they offered. If Alice offered 1 HTLC and Bob offered 3 HTLCs, Bob
>> pays 75% of the
>> commit-tx fee and Alice pays 25%. When the HTLCs settle, the fee is
>> redistributed.
>>
>> This model uses the on-chain fee as collateral for usage of the channel.
>> If Alice wants to forward
>> HTLCs through this channel (because she has something to gain - routing
>> fees), she should be taking
>> on some of the associated risk, not Bob. Bob will be taking the same risk
>> downstream if he chooses
>> to forw

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-12 Thread Bastien TEINTURIER via Lightning-dev
Good morning,

For instance, Tor is basically two-layer: there is a lower-level TCP/IP
> layer where packets are sent out to specific nodes on the network and this
> layer is completely open about where the packet should go, but there is a
> higher layer where onion routing between nodes is used.
> We could imitate this, with HTLC packets that openly show the next
> destination node, but once all parts reach the destination node, it decodes
> and turns out to be an onion to be sent to the next destination node, and
> the current destination node is just another forwarder.


That's an interesting comment, it may be worth exploring.
IIUC, you're suggesting that payments may look like this:

* Alice wants to reach Dave by going through Bob and Carol
* An onion encodes the route Alice -> Bob -> Carol -> Dave
* When Bob receives that onion and discovers that Carol is the next node,
  he finds a route to Carol and sends it along that route, but it's not
  an onion, it's "clear-text" routing
* When Carol receives that message, she unwraps the Alice -> Bob -> Carol
  -> Dave onion to discover that Dave is the next hop and applies the
  same steps as Bob

It looks a lot like Trampoline, except that Trampoline does onion routing
between intermediate nodes; your proposal would replace that with a
potentially more efficient but less private routing scheme. As long as
the Trampoline route itself does use onion routing, it could make
sense...

> For your proposal, how sure is the receiver that the input end of the
> trampoline node is "nearer" to the payer than itself?


Invoices to the rescue!
Since lightning payments are invoice-based, recipients would add to the
invoice a few nodes that are close to them (or a partial route, which
would probably be better for privacy).

Thanks,
Bastien

On Sun, Oct 11, 2020 at 10:50, ZmnSCPxj wrote:

> Good morning t-bast,
>
> > Hey Zman,
> >
> > > raising the minimum payment size is another headache
> >
> > It's true that it may (depending on the algorithm) lower the success
> rate of MPP-split.
> > But it's already a parameter that node operators can configure at will
> (at channel creation time),
> > so IMO it's a complexity we have to deal with anyway. Making it dynamic
> shouldn't have a high
> > impact on MPP algorithms (apart from failures while `channel_update`s
> are propagating).
>
> Right, it should not have much impact.
>
> For the most part, when considering the possibility of splicing in the
> future, we should consider that such parameters must be made changeable
> largely.
>
>
> >
> > To be fully honest, my (maybe unpopular) opinion about MPP is that it's
> not necessary on the
> > network's backbone, only at its edges. Once the network matures, I
> expect channels between
> > "serious" routing nodes to be way bigger than the size of individual
> payments. The only places
> > where there may be small or almost-empty channels are between end-users
> (wallets) and
> > routing nodes.
> > If something like Trampoline were to be implemented, MPP would only be
> needed to reach a
> > first routing node (short route), that routing node would aggregate the
> parts and forward as a
> > single HTLC to the next routing node. It would be split again once it
> reaches the other edge
> > of the network (for a short route as well). In a network like this, the
> MPP routes would only have
> > to be computed on a small subset of the network, which makes brute-force
> algorithms completely
> > reasonable and the success rate higher.
>
> This makes me wonder if we really need the onions-per-channel model we
> currently use.
>
> For instance, Tor is basically two-layer: there is a lower-level TCP/IP
> layer where packets are sent out to specific nodes on the network and this
> layer is completely open about where the packet should go, but there is a
> higher layer where onion routing between nodes is used.
>
> We could imitate this, with HTLC packets that openly show the next
> destination node, but once all parts reach the destination node, it decodes
> and turns out to be an onion to be sent to the next destination node, and
> the current destination node is just another forwarder.
>
> HTLC packets could be split arbitrarily, and later nodes could potentially
> merge with the lower CLTV used in subsequent hops.
>
> Or not, *shrug*.
> It has the bad problem of being more expensive on average than purely
> source-based routing, and probably having worse payment latency.
>
>
> For your proposal, how sure is the receiver that the input end of the
> trampoline node is "nearer" to the payer than itself?
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-09 Thread Bastien TEINTURIER via Lightning-dev
Hey Zman,

> raising the minimum payment size is another headache

It's true that it may (depending on the algorithm) lower the success rate
of MPP-split. But it's already a parameter that node operators can
configure at will (at channel creation time), so IMO it's a complexity we
have to deal with anyway. Making it dynamic shouldn't have a high impact
on MPP algorithms (apart from failures while `channel_update`s are
propagating).

To be fully honest, my (maybe unpopular) opinion about MPP is that it's
not necessary on the network's backbone, only at its edges. Once the
network matures, I expect channels between "serious" routing nodes to be
way bigger than the size of individual payments. The only places where
there may be small or almost-empty channels are between end-users
(wallets) and routing nodes.

If something like Trampoline were to be implemented, MPP would only be
needed to reach a first routing node (short route); that routing node
would aggregate the parts and forward as a single HTLC to the next
routing node. It would be split again once it reaches the other edge of
the network (for a short route as well). In a network like this, the MPP
routes would only have to be computed on a small subset of the network,
which makes brute-force algorithms completely reasonable and the success
rate higher.

This is an interesting fork of the discussion, but I don't think it's a
good reason to prevent these parameters from being updated on live
channels. What do you think?

Bastien


On Thu, Oct 8, 2020 at 22:05, ZmnSCPxj wrote:

> Good morning t-bast,
>
> > Please forget about channel jamming, upfront fees et al and simply
> consider the parameters I'm
> > mentioning. It feels to me that these are by nature dynamic channel
> parameters (some of them are
> > even present in `channel_update`, but no-one updates them yet because
> direct peers don't take the
> > update into account anyway). I'd like to raise `htlc_minimum_msat` on
> some big channels because
> > I'd like these channels to be used only for big-ish payments. Today I
> can't, I have to close that
> > channel and open a new one for such a trivial configuration update,
> which is sad.
>
> At the risk of once more derailing the conversation: from the MPP
> trenches, raising the minimum payment size is another headache.
> The general assumption with MPP is that smaller amounts are more likely to
> get through, but if anyone is making a significant bump up in
> `htlc_minimum_msat`, that assumption is upended and we have to reconsider
> if we may actually want to merge multiple failing splits into one, as well
> as considering asymmetric splits (in particular asymmetric presplits)
> because maybe the smaller splits will be unable to pass through the bigger
> channels but the bigger-side split *might*.
>
> On the other hand: one can consider that the use of big payments as an
> aggregation.
> For example: a forwarding node might support smaller `htlc_minimum_msat`,
> then after making multiple such forwards, find that a channel is now
> heavily balanced towards one side or another.
> It can then make a single large rebalance via one of the
> high-`htlc_minimum_msat` channels t-bast is running.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-08 Thread Bastien TEINTURIER via Lightning-dev
Thanks (again) Antoine and Zman for your answers,

> On the other hand, a quick skim of your proposal suggests that it still
> respects the "initiator pays" principle.
> Basically, the fundee only pays fees for HTLCs they initiated, which is
> not relevant to the above attack (since in the above attack, my node is a
> dead end, you will never send out an HTLC through my channel to rebalance).
> So it should still be acceptable.


I agree, my proposal would have the same result as today's behavior in
that case. Unless your throw-away node waited for me to add an HTLC to
its channel, in which case I would pay part of the fee (since I'm adding
that HTLC). That leans towards the first of my two proposals, where the
funder always pays the "base" fee and HTLC fees are split depending on
who proposed each HTLC.
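
As a sketch of how that split could be computed (the constants are the
BOLT 3 pre-anchor commitment weights; the function shape itself is just
an illustration, not a spec proposal):

    # Sketch of "funder pays the base fee, each peer pays for the HTLCs
    # it proposed". Weights are the BOLT 3 pre-anchor constants.
    COMMIT_WEIGHT = 724        # base commitment tx weight
    HTLC_OUTPUT_WEIGHT = 172   # weight added per untrimmed HTLC output

    def commit_fee_shares(feerate_sat_per_kw, funder_htlcs, fundee_htlcs):
        base_fee = COMMIT_WEIGHT * feerate_sat_per_kw // 1000
        htlc_fee = HTLC_OUTPUT_WEIGHT * feerate_sat_per_kw // 1000
        funder_pays = base_fee + funder_htlcs * htlc_fee
        fundee_pays = fundee_htlcs * htlc_fee
        return funder_pays, fundee_pays

At 1000 sat/kw, a fundee with two pending HTLCs it proposed would pay
344 sat, while the funder keeps paying the 724 sat base fee plus the fee
for its own HTLCs.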

> The channel initiator shouldn't have to pay for channel-closing as it's
> somehow a liquidity allocation decision


I agree 100%. Especially since mutual closing should be preferred most of
the time.

> That said, a channel closing might be triggered due to a security
> mechanism, like an HTLC that needs to time out on-chain. Thus a malicious
> counterparty can easily loop an HTLC forwarding through an honest peer,
> then not cancel it on time, to force the honest counterparty to pay
> on-chain fees to avoid an offered HTLC not being claimed back in time.


Yes, this is an issue, but the only way to fix it today is to never be
the funder and always be the fundee, and I think that creates unhealthy,
asymmetric incentives.

This is a scenario where the other node will only burn you once; if you
notice that behavior, you'll be forced to pay on-chain fees, but you'll
ban this peer. And if he opened the channel to you, he'll still be paying
the "base" fee. I don't think there's a silver bullet here where you can
completely avoid being bitten by such malicious nodes, but you can reduce
exposure and ban them after the fact.
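
Antoine's suggested mitigation (in his email below) prices withholding
via an upfront fee per block; here is a toy check of that inequality,
with all numbers hypothetical:

    # Toy check: withholding an HTLC should cost the attacker more than
    # simply going on-chain. All figures are hypothetical.
    def withholding_unprofitable(hold_fee_msat_per_block, buffer_blocks,
                                 onchain_close_cost_msat):
        """True if the upfront hold fee accrued over the block buffer
        exceeds the cost of a force-close."""
        return hold_fee_msat_per_block * buffer_blocks > onchain_close_cost_msat

    # e.g. 50 sat/block over a 144-block buffer = 7200 sat, which beats a
    # hypothetical 5000 sat force-close cost, so withholding doesn't pay:
    assert withholding_unprofitable(50_000, 144, 5_000_000)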

Another note on using a minimal relay fee: in a potential future where
on-chain fees are always high and layer 1 is consistently busy, even that
minimal relay fee will be costly. You'll want your peer to pay for the
HTLCs it's responsible for, to split the on-chain fee more fairly. So I
believe moving (slightly) away from the "funder pays all" model is
desirable (or at least it's worth exploring seriously, in order to have a
better reason to dismiss it than "it's simpler").

Does that make sense?

Thanks,
Bastien

Le mar. 6 oct. 2020 à 18:30, Antoine Riard  a
écrit :

> Hello Bastien,
>
> I'm all in for a model where channel transactions are pre-signed with a
> reasonable minimal relay fee and the adjustment is done by the closer. The
> channel initiator shouldn't have to pay for channel-closing as it's somehow
> a liquidity allocation decision ("My balance could be better allocated
> elsewhere than in this channel").
>
> That said, a channel closing might be triggered due to a security
> mechanism, like an HTLC that needs to time out on-chain. Thus a malicious
> counterparty can easily loop an HTLC forwarding through an honest peer,
> then not cancel it on time, to force the honest counterparty to pay
> on-chain fees to avoid an offered HTLC not being claimed back in time.
>
> AFAICT, this issue is not solved by anchor outputs. A way to
> disincentivize this kind of behavior from a malicious counterparty is an
> upfront payment where the upheld-HTLC fee * HTLC block-buffer-before-onchain
> is higher than the cost of going on-chain. It should cost more for the
> counterparty to withhold an HTLC than to pay on-chain fees to close the
> channel.
>
> Or can you think of another mitigation for the issue raised above?
>
> Antoine
>
> Le lun. 5 oct. 2020 à 09:13, Bastien TEINTURIER via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> a écrit :
>
>> Good morning list,
>>
>> It seems to me that the "funder pays all the commit tx fees" rule exists
>> solely for simplicity (which was totally reasonable). I haven't been able
>> to find much discussion about this decision on the mailing list or in the
>> spec commits.
>>
>> At first glance, it's true that at the beginning of the channel lifetime,
>> the funder should be
>> responsible for the fee (it's his decision to open a channel after all).
>> But as time goes by and
>> both peers earn value from this channel, this rule becomes questionable.
>> We've discovered since
>> then that there is some risk associated with having pending HTLCs
>> (flood-and-loot type of attacks,
>> pinning, channel jamming, etc).
>>
>> I think that *in some cases*, fundees should be paying a portion of the
>> commit-tx on-chain fees, otherwise we may end up with a web-of-trust
>> network where channels would only exist between peers that trust each
>> other, which is quite limiting (I'm hoping we can do better).

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-08 Thread Bastien TEINTURIER via Lightning-dev
Good morning Antoine and Zman,

Thanks for your answers!

> I was thinking dynamic policy adjustment would be covered by the dynamic
> commitment mechanism proposed by Laolu


I didn't mention this as I think we still have a long-ish way to go before
dynamic commitments
are spec-ed, implemented and deployed, and I think the parameters I'm
interested in don't require
that complexity to be updated.

Please forget about channel jamming, upfront fees et al and simply consider
the parameters I'm
mentioning. It feels to me that these are by nature dynamic channel
parameters (some of them are
even present in `channel_update`, but no-one updates them yet because
direct peers don't take the
update into account anyway). I'd like to raise `htlc_minimum_msat` on some
big channels because
I'd like these channels to be used only for big-ish payments. Today I
can't, I have to close that
channel and open a new one for such a trivial configuration update, which
is sad.

There is no need to stop the channel's operations while you're updating
these parameters, since
they can be updated unilaterally anyway. The only downside is that if you
make your policy stricter,
your peer may send you some HTLCs that you will immediately fail
afterwards; it's only a minor
inconvenience that won't trigger a channel closure.
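
To illustrate why unilateral updates are safe, here is a minimal sketch
(illustrative names, not any implementation's API) of validating each
incoming HTLC against whatever the current local policy happens to be:

    # Incoming HTLCs are checked against the *current* local policy, so
    # tightening it mid-flight only causes failed HTLCs, never a closure.
    class ChannelPolicy:
        def __init__(self, htlc_minimum_msat, max_accepted_htlcs,
                     max_htlc_value_in_flight_msat):
            self.htlc_minimum_msat = htlc_minimum_msat
            self.max_accepted_htlcs = max_accepted_htlcs
            self.max_htlc_value_in_flight_msat = max_htlc_value_in_flight_msat

    def accept_htlc(policy, amount_msat, pending_htlcs_msat):
        """Return True to accept; False means fail the HTLC (so the
        sender retries elsewhere), not close the channel."""
        if amount_msat < policy.htlc_minimum_msat:
            return False
        if len(pending_htlcs_msat) + 1 > policy.max_accepted_htlcs:
            return False
        if sum(pending_htlcs_msat) + amount_msat > policy.max_htlc_value_in_flight_msat:
            return False
        return True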

I'd like to know if implementations other than eclair have specificities
that would make this feature particularly hard to implement or
undesirable.

Thanks,
Bastien

Le mar. 6 oct. 2020 à 18:43, ZmnSCPxj  a écrit :

> Good morning Antoine, and Bastien,
>
>
> > Instead of relying on reputation, the other alternative is just to have
> > an upfront payment system, where a relay node doesn't have to account
> > for an HTLC issuer's reputation to decide acceptance and can just
> > forward an HTLC as long as it paid enough. More, I think it's better to
> > mitigate jamming with a fees-based system than a web-of-trust one: less
> > burden on network newcomers.
>
> Let us consider some of the complications here.
>
> A newcomer wants to make an outgoing payment.
> Speculatively, it connects to some existing nodes based on some policy.
>
> Now, since forwarding is upfront, the newcomer fears that the node it
> connected to might not even bother forwarding the payment, and instead just
> fail it and claim the upfront fees.
>
> In particular: how would the newcomer offer upfront fees to a node it is
> not directly channeled with?
> In order to do that, we would have to offer the upfront fees for that
> node, to the node we *are* channeled with, so it can forward this as well.
>
> * We can give the upfront fee outright to the first hop, and trust that if
> it forwards, it will also forward the upfront fee for the next hop.
>   * The first hop would then prefer to just fail the HTLC then and there
> and steal all the upfront fees.
> * After all, the offerer is a newcomer, and might be the sybil of a
> hacker that is trying to tie up its liquidity.
>   The first hop would (1) avoid this risk and (2) earn more upfront
> fees because it does not forward those fees to later hops.
>   * This is arguably custodial, and "not your keys, not your coins"
> applies. Thus, it returns us back to tr\*st anyway.
> * We can require that the first hop prove *where* along the route the
> error occurred.
>   If it provably failed at a later hop, then the first hop can claim more
> as upfront fees, since it will forward the upfront fees to the later hops
> as well.
>   * This has to be enforceable onchain in case the channel gets dropped
> onchain.
>     Is there a proposal SCRIPT which can enforce this?
>   * If not enforceable onchain, then there may be onchain shenanigans
> possible, and thus this solution might introduce an attack vector even as
> it fixes another.
>     * On the other hand, sub-satoshi amounts are not enforceable onchain
> either, and nobody cares, so...
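
Here is a toy encoding of the claim rule in the bullets above (purely
illustrative): the first hop keeps the upfront fees of every hop up to
the provable failure point, since it owes those fees onward.

    # Toy rule: fees are owed to every hop up to (and including) the hop
    # that produced the provable error; later hops earned nothing.
    def claimable_upfront_fee(per_hop_fees_msat, failed_hop_index):
        """per_hop_fees_msat[i] is hop i's upfront fee; failed_hop_index
        is the hop the error provably came from (0 = first hop)."""
        return sum(per_hop_fees_msat[: failed_hop_index + 1])

    # A 4-hop route failing provably at hop 2: the first hop can claim
    # three hops' worth of fees and must forward two of them onward.
    assert claimable_upfront_fee([100, 100, 100, 100], 2) == 300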
>
> On the other hand, a web-of-tr\*st might not be *that* bad.
>
> One can say that "tr\*st is risk", and consider that the size and age of a
> channel to a peer represent your tr\*st that that peer will behave
> correctly for fast and timely resolution of payments.
> And anyone can look at the blockchain and the network gossip to get an
> idea of who is generally considered tr\*stworthy, and since that
> information is backed by Bitcoins locked in channels, this is reasonably
> hard to fake.
>
> On the other hand, this risks centralization around existing, long-lived
> nodes.
> *Sigh*.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Incremental Routing (Was: Making (some) channel limits dynamic)

2020-10-08 Thread Bastien TEINTURIER via Lightning-dev
If I remember correctly, it looks very similar to how I2P establishes
tunnels, so it may be worth diving into their documentation to fish for
ideas.

However, in their case the goal is to establish a long-lived tunnel, which
is why it's OK to have a slow and costly protocol. It feels to me that for
payments this is a lot of messages and delays; I'm not sure it's feasible
at a reasonable scale...

Cheers,
Bastien

Le mer. 7 oct. 2020 à 19:34, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

> Good morning Antoine, Bastien, and list,
>
> > Instead of relying on reputation, the other alternative is just to
> > have an upfront payment system, where a relay node doesn't have to
> > account for an HTLC issuer's reputation to decide acceptance and can
> > just forward an HTLC as long as it paid enough. More, I think it's
> > better to mitigate jamming with a fees-based system than a
> > web-of-trust one: less burden on network newcomers.
> >
> > Let us consider some of the complications here.
> >
> > A newcomer wants to make an outgoing payment.
> > Speculatively, it connects to some existing nodes based on some policy.
> >
> > Now, since forwarding is upfront, the newcomer fears that the node it
> > connected to might not even bother forwarding the payment, and instead
> > just fail it and claim the upfront fees.
> >
> > In particular: how would the newcomer offer upfront fees to a node it
> > is not directly channeled with?
> > In order to do that, we would have to offer the upfront fees for that
> > node, to the node we are channeled with, so it can forward this as well.
> >
> > -   We can give the upfront fee outright to the first hop, and trust
> > that if it forwards, it will also forward the upfront fee for the next
> > hop.
> > -   The first hop would then prefer to just fail the HTLC then and
> > there and steal all the upfront fees.
> > -   After all, the offerer is a newcomer, and might be the sybil
> > of a hacker that is trying to tie up its liquidity.
> > The first hop would (1) avoid this risk and (2) earn more upfront
> > fees because it does not forward those fees to later hops.
> >
> > -   This is arguably custodial, and "not your keys, not your coins"
> > applies.
> > Thus, it returns us back to tr\*st anyway.
> >
> > -   We can require that the first hop prove where along the route the
> > error occurred.
> > If it provably failed at a later hop, then the first hop can claim
> > more as upfront fees, since it will forward the upfront fees to the
> > later hops as well.
> > -   This has to be enforceable onchain in case the channel gets
> > dropped onchain.
> > Is there a proposal SCRIPT which can enforce this?
> >
> > -   If not enforceable onchain, then there may be onchain shenanigans
> > possible and thus this solution might introduce an attack vector even
> > as it fixes another.
> > -   On the other hand, sub-satoshi amounts are not enforceable
> > onchain either, and nobody cares, so...
>
> One thing I have been thinking about, but have not proposed seriously yet,
> would be "incremental routing".
>
> Basically, the route of pending HTLCs also doubles as an encrypted
> bidirectional tunnel.
>
> Let me first describe how I imagine this "incremental routing" would
> work.
>
> First, you offer an HTLC with a direct peer.
> The data with this HTLC includes a point, which the peer will ECDH with
> its own privkey, to form a shared secret.
> You can then send additional messages to that node, which it will decrypt
> using the shared secret as the symmetric encryption key.
> The node can also reply to those messages, by encrypting it with the same
> symmetric encryption key.
> Typically this will be via a stream cipher which is XORed with the real
> data.
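
Here is a minimal sketch of that tunnel setup, using X25519 and ChaCha20
from the Python `cryptography` package; the key derivation and message
framing are invented for illustration:

    # Sketch of the tunnel crypto described above: ECDH to a shared
    # secret, then a stream cipher XORed with the payload.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def tunnel_key(my_priv, their_pub):
        shared = my_priv.exchange(their_pub)  # ECDH shared point
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"htlc-tunnel").derive(shared)

    def stream_xor(key, nonce, data):
        # ChaCha20 keystream XORed with the data; the same call decrypts.
        encryptor = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
        return encryptor.update(data)

    # The sender ships its point (public key) along with the HTLC; both
    # ends derive the same key and can exchange tunnel messages.
    sender, hop = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    k_sender = tunnel_key(sender, hop.public_key())
    k_hop = tunnel_key(hop, sender.public_key())
    assert k_sender == k_hop

    nonce = os.urandom(16)  # this ChaCha20 API takes a 16-byte nonce
    ct = stream_xor(k_sender, nonce, b"please extend the HTLC to node X")
    assert stream_xor(k_hop, nonce, ct) == b"please extend the HTLC to node X"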
>
> One of the messages you can send to that node (your direct peer) would be
> "please send out an HTLC to this peer of yours".
> Together with that message, you could also bump up the value of the HTLC
> (and possibly the CLTV delta) you have with that node.
> This bump is the forwarding fee and resolution time you have to give to
> that node in order to have it safely put an HTLC to the next hop.
>
> If there is a problem on the next hop, the node replies back, saying it
> cannot forward the HTLC further.
> Your node can then respond by giving an alternative next hop, which that
> node can reply back is also not available, etc. until you say "give up" and
> that node will just fail the HTLC.
>
> However, suppose the next hop is online and there is enough space in the
> channel.
> That node then establishes the HTLC with the next hop.
>
> At this point, you can then send a message to the direct peer which is
> nothing more than "send the rest of this message as the message to the next
> hop on the same HTLC, then wait for a reply and wrap it and reply to me".
> This is effectively onion-wrapping the message to the peer of your peer,
> and waiting for an onion-wrapped reply from the peer of your peer.
>
> You 

Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-05 Thread Bastien TEINTURIER via Lightning-dev
Hi darosior,

This is true, but we haven't yet solved how to estimate a good enough
`min_relay_fee` that works for end-to-end tx propagation over the
network.

We've discussed this during the last two spec meetings, but it's still
unclear whether we'll be able to solve it before package relay lands in
bitcoin, so I wanted to explore this as a potentially more short-term
solution. But maybe it's not worth the effort and we should focus more on
anchors and `min_relay_fee`; we'll see ;)

Bastien

Le lun. 5 oct. 2020 à 15:25, darosior  a écrit :

> Hi Bastien,
>
>
> > I think that *in some cases*, fundees should be paying a portion of the
> > commit-tx on-chain fees, otherwise we may end up with a web-of-trust
> > network where channels would only exist between peers that trust each
> > other, which is quite limiting (I'm hoping we can do better).
>
>
> Agreed.
> However, in an anchor outputs future, the funder only pays for the
> "backbone" fees of the channel, and the fees necessary to secure the
> confirmation of transactions are paid in a second stage by each
> interested party (*). It seems to me to be a reasonable middle ground.
>
> (*) Credits to ZmnSCPxj for pointing this out to me on IRC.
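
As a sketch of that split (purely illustrative arithmetic): the funder
pre-commits only a minimal relayable fee, and whoever wants confirmation
tops the package up via CPFP on its own anchor.

    # Illustrative arithmetic for the anchor-outputs split: the funder
    # pre-pays a minimal "backbone" fee; each interested party pays the
    # rest through CPFP on its own anchor output.
    def funder_commit_fee(commit_weight, min_relay_feerate_sat_per_kw):
        # Funder's fixed share: just enough for the commit tx to relay.
        return commit_weight * min_relay_feerate_sat_per_kw // 1000

    def cpfp_topup_fee(commit_weight, child_weight,
                       target_feerate_sat_per_kw, commit_fee_paid):
        # Interested party's share: bring the (commit + child) package
        # up to the feerate it wants for confirmation.
        package_fee = ((commit_weight + child_weight)
                       * target_feerate_sat_per_kw // 1000)
        return max(0, package_fee - commit_fee_paid)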
>
> Darosior
>

