Re: [Lightning-dev] #PickhardtPayments implemented in lnd-manageJ

2022-05-17 Thread Bastien TEINTURIER
I completely agree with Matt: these two components are independent and
too often conflated. Scoring channels and estimating liquidity is
something that has been regularly discussed by implementations for the
last few years, and every implementation has run its own experiments
over time.

Eclair has quite a large, configurable set of heuristics around channel
scoring,
along with an A/B testing system that we've been using for a while on
mainnet
(see [1] for details). We've also been toying with channel liquidity
estimation for
more than half a year, which you can follow in [2] and [3].

These are heuristics, and it's impossible to judge whether they work
until you've tried them on mainnet with real payments, so I strongly
encourage people to run such experiments. But when you do, you should
have enough volume for the result data to be statistically meaningful,
and you should do A/B testing, otherwise you can make the data say pretty
much anything you want. What I believe is mostly missing is volume: IMHO
the network doesn't have enough real payments yet for this data to
accurately say that one heuristic is better than another.

Using an MCF algorithm instead of Dijkstra is useful when relaying large
payments that will need to be split aggressively to reach the
destination. It makes a lot of sense in that scenario. However, it's
important to also take a step back and look at whether it is economical
to make such payments on lightning at all.

For a route with an aggregated proportional fee of 1000ppm, here is a rough
comparison of the fees between on-chain and lightning:

* At 1 sat/byte on-chain, payments above 2mBTC cost less on-chain than
off-chain
* At 10 sat/byte on-chain, payments above 20mBTC cost less on-chain than
off-chain
* At 25 sat/byte on-chain, payments above 50mBTC cost less on-chain than
off-chain
* And so on (just keep multiplying)
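These break-even points follow from simple arithmetic; the sketch below reproduces them, assuming a ~200-vbyte on-chain transaction (the transaction size is my assumption, not stated in the mail):

```python
# Break-even payment size: the point where a 1000 ppm Lightning route fee
# equals the on-chain fee. The ~200 vbyte transaction size is an assumption.
TX_VBYTES = 200
ROUTE_FEE_PPM = 1000

def break_even_sats(feerate_sat_per_vbyte: int) -> int:
    """Payment amount (sats) at which lightning and on-chain fees are equal."""
    onchain_fee_sats = feerate_sat_per_vbyte * TX_VBYTES
    # Lightning fee = amount * ROUTE_FEE_PPM / 1_000_000; solve for amount.
    return onchain_fee_sats * 1_000_000 // ROUTE_FEE_PPM

for feerate in (1, 10, 25):
    mbtc = break_even_sats(feerate) / 100_000  # 1 mBTC = 100,000 sats
    print(f"{feerate:>2} sat/vbyte -> break-even around {mbtc:.0f} mBTC")
```

Any larger on-chain transaction shifts the break-even points proportionally upwards, but the "just keep multiplying" relationship between feerate and break-even amount stays linear.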

Of course, making payments on lightning has benefits beyond fees: they
also confirm faster than on-chain payments. But I think it's important
to keep these figures in mind.

It would also be useful to think about the shape of the network. Using an
MCF algorithm makes sense when payments saturate channels. But if
channels are much bigger than your payment size, it is probably overkill.
If channels are small at the "edges" of the network and bigger than
payments at its "core", and we're using trampoline routing [4], it makes
sense to run different path-finding algorithms depending on where we are
(e.g. MCF at the edges on a small subset of the graph and Dijkstra inside
the core).
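As an illustration of that idea, a sender could pick the algorithm based on how close the payment comes to saturating the candidate channels. The threshold and structure below are mine, purely for illustration; they are not from the mail or any implementation:

```python
def pick_pathfinding(amount_msat: int, channel_capacities_msat: list,
                     saturation_ratio: float = 0.25) -> str:
    """Illustrative heuristic: use MCF when the payment is large relative
    to the channels it must traverse, plain Dijkstra otherwise."""
    largest = max(channel_capacities_msat)
    if amount_msat >= saturation_ratio * largest:
        return "mcf"       # payment likely needs aggressive splitting
    return "dijkstra"      # channels dwarf the payment, a single path is fine

# A 5 mBTC payment over channels of at most 10 mBTC saturates them...
print(pick_pathfinding(500_000_000, [1_000_000_000, 200_000_000]))  # mcf
# ...while a 0.1 mBTC payment does not.
print(pick_pathfinding(10_000_000, [1_000_000_000, 200_000_000]))   # dijkstra
```

With trampoline routing, the wallet would apply this check against the small local subset of the graph it knows, while trampoline nodes in the core run their own path-finding.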

I'm very happy that all this research is happening and helping lightning
payments become more reliable, thanks to everyone involved! I think the
design space is still quite large when we take everything into account,
so I expect that we'll see even more innovation in the coming years.

Cheers,
Bastien

[1]
https://github.com/ACINQ/eclair/blob/10eb9e932f9c0de06cc8926230d8ad4e2d1d9e2c/eclair-core/src/main/resources/reference.conf#L237
[2] https://github.com/ACINQ/eclair/pull/2263
[3] https://github.com/ACINQ/eclair/pull/2071
[4] https://github.com/lightning/bolts/pull/829


On Mon, May 16, 2022 at 22:59, Matt Corallo  wrote:

> It's probably worth somewhat disentangling the concept of switching to a
> minimum-cost flow routing algorithm from the concept of "scoring based on
> channel value and estimated available liquidity".
>
> These are two largely-unrelated concepts that are being mashed into one
> in this description - the first concept needs zero-base-fee to be exact,
> though it's not clear to me that a heuristics-based approach won't give
> equivalent results in practice, given the noise in success rate compared
> to theory here.
>
> The second concept is something that LDK (and I believe CLN and maybe
> even eclair now) do already, though lnd does not, last I checked. For
> payments where MPP does not add much to success rate (i.e. payments
> where the amount is relatively "low" compared to available network
> liquidity), Dijkstra's with a liquidity/channel-size based scoring will
> give you the exact same result.
>
> For cases where you're sending an amount which is "high" compared to
> available network liquidity, taking a minimum-cost-flow algorithm becomes
> important, as you point out. Of course you're always going to suffer
> really slow payments and many retries in this case anyway.
>
> Matt
>
> On 5/15/22 1:01 PM, Carsten Otto via Lightning-dev wrote:
> > Dear all,
> >
> > the most recent version of lnd-manageJ [1] now includes basic, but
> usable,
> > support for #PickhardtPayments. I kindly invite you to check out the
> code, give
> > it a try, and use this work for upcoming experiments.
> >
> > Teaser with video:
> https://twitter.com/c_otto83/status/1525879972786749453
> >
> > The problem, heavily summarized:
> >
> > - Sending payments in the LN often fails, especially with larger amounts.
> > - Splitting a 

[Lightning-dev] Security issue in anchor outputs implementations

2022-04-22 Thread Bastien TEINTURIER
Good morning list,

I will describe here a vulnerability found in older versions of the
anchor outputs support of some lightning implementations. As most
implementations have not yet released support for anchor outputs, they
should verify that they are not impacted by this type of vulnerability
while they implement this feature.

I want to thank the impacted implementations for their reactivity in
fixing this issue, which hasn't impacted any user (as far as I know).

## Timeline

- March 23 2021: I discovered an interesting edge case while
implementing anchor outputs in eclair ([1]).
- August 2021: while I was finalizing support for the 0-htlc-fees
variant of anchor outputs in eclair, I was able to do in-depth
interoperability tests with other implementations that supported
anchor outputs (only lnd and c-lightning at that time). These tests
revealed that both implementations were impacted by the edge case
discovered in March and that it could be exploited to steal funds.
- September 2 2021: I notified both development teams.
- October 11 2021: I disclosed the vulnerability to Electrum and LDK
to ensure they would not ship a version of anchor outputs containing
the same issue (anchor outputs wasn't shipped in their software yet).
- November 2021: a fix for this vulnerability was released in lnd 0.14.0
and c-lightning 0.10.2.

## Impacted users

- Users running versions of lnd prior to 0.14.0
- Users running versions of c-lightning prior to 0.10.2 if they have
activated experimental features (and have anchor outputs channels)

## Description of the vulnerability

With anchor outputs, your lightning node doesn't use `SIGHASH_ALL` when
sending its signatures for htlc transactions in `commitment_signed`.
It uses `SIGHASH_SINGLE | SIGHASH_ANYONECANPAY` instead and the other
node is supposed to add a `SIGHASH_ALL` signature when they broadcast
the htlc transaction.

Interestingly, this lets the other node combine multiple htlcs in a
single transaction without invalidating your signatures, as long as the
`nLockTime` of all htlcs match. This has been a known fact for a long
time, which can be used to batch transactions and save on fees.

The vulnerability lies in how *revoked* htlc transactions were handled.
Because we historically used `SIGHASH_ALL`, we could assume that htlc
transactions had a single output. For example, older eclair versions used
that fact: when presented with a revoked htlc transaction, they would
claim a single output of that transaction via a penalty/justice
transaction (see [2]). This was completely fine before anchor outputs.
But with anchor outputs, if the revoked htlc transaction actually
contains many htlcs, your node must claim *all* of the revoked outputs
with penalty/justice transactions.
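In other words, the penalty logic must iterate over every output of the revoked htlc transaction instead of assuming a single one. A minimal sketch of that change (the data types and names are hypothetical, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class TxOutput:
    amount_sat: int

@dataclass
class HtlcTx:
    txid: str
    outputs: list  # may aggregate several revoked HTLCs

@dataclass
class PenaltyTx:
    spent_txid: str
    spent_output_index: int

def claim_revoked_outputs(revoked_htlc_tx: HtlcTx) -> list:
    # Pre-anchor code could assume a single output (index 0). With
    # SIGHASH_SINGLE | SIGHASH_ANYONECANPAY the broadcaster may have
    # aggregated many HTLCs, so claim *every* output.
    return [PenaltyTx(revoked_htlc_tx.txid, i)
            for i in range(len(revoked_htlc_tx.outputs))]

tx = HtlcTx("deadbeef", [TxOutput(10_000), TxOutput(20_000), TxOutput(5_000)])
print(len(claim_revoked_outputs(tx)))  # 3 penalty transactions, not 1
```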

When presented with a transaction containing multiple revoked htlcs,
both impacted implementations would fail to claim any output. This means
the attacker could publish a revoked commitment with htlcs that have
been settled since then, and claim these htlcs a second time on-chain,
thus stealing funds.

Let's take a concrete example, where Bob is under attack.
Bob has channels with Alice and Carol: Alice ---> Bob ---> Carol.
Alice sends N htlcs to Carol spending all of her channel balance, which
Carol then fulfills.
Carol has then irrevocably received the funds from Bob.
Then Alice publishes her old commitment, where all the htlcs were
pending, and aggregates all of her htlc-timeouts in a single transaction.
Bob will fail to claim the revoked htlc outputs, which will go back to
Alice's on-chain wallet. Bob has thus lost the full channel amount.

## Caveat

An important caveat is that this attack will not work all the time, so it
carries a risk for the attacker. The reason is that htlc transactions
have a relative delay of 1 block. If the node under attack is able to get
its penalty/justice transactions confirmed immediately after the revoked
commitment (by claiming outputs directly from the commitment transaction
with a high enough feerate), the attacker won't be able to broadcast the
aggregated htlc transaction (and will lose their channel reserve).

The success of the attack depends on what block target implementations
use for penalty/justice transactions and how congested the mempool is
(unless the attacker notices that their peer is offline, in which case
they can use this opportunity to carry out the attack).

I'm pretty confident all users have already upgraded to newer versions
(particularly since there have been important bug fixes on unrelated
issues since then), but if your node still hasn't upgraded, you should
consider doing so as soon as possible.

Cheers,
Bastien

[1] https://github.com/ACINQ/eclair/pull/1738
[2]
https://github.com/ACINQ/eclair/blob/35b070ee5de2ea3847cf64b86f7e47abcca10b95/eclair-core/src/main/scala/fr/acinq/eclair/transactions/Transactions.scala#L613
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org

Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-15 Thread Bastien TEINTURIER
Good morning Alex,

> I’ve been investigating set reconciliation as a means to reduce bandwidth
> and redundancy of gossip message propagation.

Cool project, glad to see someone working on it! The main difficulty here
will indeed be to ensure that the number of differences between sets is
bounded. We will need to maintain a mechanism to sync the whole graph
from scratch for new nodes, so the minisketch diff must be efficient
enough, otherwise nodes will just fall back to a full sync way too often
(which would waste a lot of bandwidth).

> Picking several offending channel ids, and digging further, the majority
> of these appear to be flapping due to Tor or otherwise intermittent
> connections.

One thing that may help here from an implementation's point of view is to
avoid
sending a disabled channel update every time a channel goes offline. What
eclair does to avoid spamming is to only send a disabled channel update when
someone actually tries to use that channel. Of course, if people choose this
offline node in their route, you don't have a choice and will need to send a
disabled channel update, but we've observed that many channels come back
online before we actually need to use them, so we're saving two channel
updates
(one to disable the channel and one to re-enable it). I think all
implementations
should do this. Is that the case today?

We could go even further, and when we receive an htlc that should be relayed
to an offline node, wait a bit to give them an opportunity to come online
instead
of failing the htlc and sending a disabled channel update. Eclair currently
doesn't
do that, but it would be very easy to add.
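A rough sketch of the lazy disabled-update behavior described above (the names and structure are illustrative, not eclair's actual code):

```python
from dataclasses import dataclass

@dataclass
class Channel:
    channel_id: str
    peer_online: bool = True
    disabled_update_sent: bool = False

broadcast_updates = []  # stand-in for gossip broadcast

def on_peer_disconnected(channel: Channel) -> None:
    # Don't broadcast a disabled channel_update yet: the peer may come
    # back online before anyone actually tries to use the channel.
    channel.peer_online = False

def on_relay_attempt(channel: Channel) -> bool:
    if channel.peer_online:
        return True  # relay normally
    # Someone tried to use the channel while its peer is offline: now we
    # do have to tell the network it is disabled (and only once).
    if not channel.disabled_update_sent:
        broadcast_updates.append((channel.channel_id, "disabled"))
        channel.disabled_update_sent = True
    return False

ch = Channel("683000x1x0")
on_peer_disconnected(ch)
on_relay_attempt(ch)  # peer offline: emits the disabled update
on_relay_attempt(ch)  # second attempt: no duplicate gossip
print(len(broadcast_updates))  # 1
```

If the peer reconnects before any relay attempt, no gossip is emitted at all, saving the disable/re-enable pair of channel updates mentioned above.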

> - A common listing of current default rate limits across lightning
> network implementations.

Eclair doesn't do any rate-limiting. We wanted to "feel the pain" before
adding
anything, and to be honest we haven't really felt it yet.

> which will use a common, simple heuristic to accept or reject a gossip
> message. (Think one channel update per block, or perhaps one per
> block_height << 5.)

I think it would be easy to come to an agreement between implementations
and restrict channel updates to at most one every N blocks. We simply
need to add the `block_height` in a tlv in `channel_update`, and then
we'll be able to actually rate-limit based on it. Given how much time it
takes to upgrade most of the network, it may be a good idea to add the
`block_height` tlv to the spec now and act on it later? Unless your work
requires bigger changes to `channel_update`, in which case it will
probably be a new message.

Note that it will never be completely accurate though, as different nodes
can have different blockchain tips. My node may be one or two blocks
behind the node that emits the channel update, so we need to allow a bit
of leeway there.
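Put together, the rate limit plus tip leeway could look like the sketch below (the value of N and the leeway are placeholders, to be agreed between implementations):

```python
BLOCKS_BETWEEN_UPDATES = 6  # placeholder "N": at most one update per N blocks
TIP_LEEWAY_BLOCKS = 2       # tolerate emitters slightly ahead of our tip

def accept_channel_update(update_height: int,
                          last_accepted_height: int,
                          our_tip_height: int) -> bool:
    # Reject updates claiming a block height far beyond our own tip.
    if update_height > our_tip_height + TIP_LEEWAY_BLOCKS:
        return False
    # Rate-limit: accept at most one update every N blocks per channel.
    return update_height >= last_accepted_height + BLOCKS_BETWEEN_UPDATES

print(accept_channel_update(106, 100, 106))  # True: 6 blocks elapsed
print(accept_channel_update(103, 100, 106))  # False: too soon
print(accept_channel_update(110, 100, 106))  # False: beyond our tip + leeway
```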

Cheers,
Bastien




On Thu, Apr 14, 2022 at 23:06, Alex Myers  wrote:

> Hello lightning developers,
>
>
> I’ve been investigating set reconciliation as a means to reduce bandwidth
> and redundancy of gossip message propagation. This builds on some earlier work
> from Rusty using the minisketch library [1]. The idea is that each node
> will build a sketch representing its own gossip set. Alice’s node will
> encode and transmit this sketch to Bob’s node, where it will be merged with
> his own sketch, and the differences produced. These differences should
> ideally be exactly the latest missing gossip of both nodes. Due to size
> constraints, the set differences will necessarily be encoded, but Bob’s
> node will be able to identify which gossip Alice is missing, and may then
> transmit exactly those messages.
>
>
> This process is relatively straightforward, with the caveat that the sets
> must otherwise match very closely (each sketch has a maximum capacity for
> differences.) The difficulty here is that each node and lightning
> implementation may have its own rules for gossip acceptance and
> propagation. Depending on their gossip partners, not all gossip may
> propagate to the entire network.
>
>
> Core-lightning implements rate limiting for incoming channel updates and
> node announcements. The default rate limit is 1 per day, with a burst of
> 4. I analyzed my node’s gossip over a 14 day period, and found that, of
> all publicly broadcasting half-channels, 18% of them fell afoul of our
> spam-limiting rules at least once. [2]
>
>
> Picking several offending channel ids, and digging further, the majority
> of these appear to be flapping due to Tor or otherwise intermittent
> connections. Well connected nodes may be more susceptible to this due to more
> frequent routing attempts, and failures resulting in a returned channel
> update (which otherwise might not have been broadcast.) A slight
> relaxation of the rate limit resolves the majority of these cases.
>
>
> A smaller subset of channels broadcast frequent channel updates with minor
> adjustments to htlc_maximum_msat and fee_proportional_millionths
> parameters. These nodes appear to be 

[Lightning-dev] Blinded payments and unblinding attacks

2022-04-01 Thread Bastien TEINTURIER
Good morning list,

In the last couple of months, @thomash-acinq and I have spent a lot of time
working on route blinding for payments [1]. As you may know, route blinding
is a prerequisite for onion messages [2] and Bolt 12 offers [3].

Using route blinding to provide anonymity for onion messages is quite
simple, but it is harder to use safely for payments. The reason is that
the lightning network is a very heterogeneous network of channels.

The parameters used to relay payments vary widely from one channel to the
next, and can change dynamically over time: if not accounted for, this
provides an easy fingerprint that lets malicious actors guess which
channels are actually used inside a blinded route. The ideas behind these
probing attacks are described in more detail in the route blinding
proposal [4].

To protect against such attacks, the latest version of the route blinding
specification lets the recipient impose what parameters will be used by
intermediate blinded nodes to relay payments (instead of using the values
they advertise in their `channel_update`). The parameters that matter are:

* `fee_base_msat`
* `fee_proportional_millionths`
* `cltv_expiry_delta`
* `htlc_minimum_msat`
* `features` that impact payment relaying behavior
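Conceptually, the recipient fixes a single set of relay parameters for the whole blinded portion of the route. A minimal illustration (the field names mirror the list above; the record itself is not part of the spec):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BlindedRelayParams:
    """Recipient-imposed values each blinded hop must use when relaying,
    overriding whatever its public channel_update advertises."""
    fee_base_msat: int
    fee_proportional_millionths: int
    cltv_expiry_delta: int
    htlc_minimum_msat: int
    features: bytes  # feature bits that affect payment relay

# The recipient picks values covering every hop in the blinded route, so an
# observer cannot match per-channel fee policies against the payment.
params = BlindedRelayParams(
    fee_base_msat=1_000,
    fee_proportional_millionths=500,
    cltv_expiry_delta=144,
    htlc_minimum_msat=1,
    features=b"",
)
print(params.cltv_expiry_delta)  # 144
```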

We'd like help from this list to figure out whether these are the only
parameters that an attacker can use to fingerprint channels, or if there
are others that we need to take into account to guarantee user privacy.

Note that these attacks only work against public channels: wallet users
relying on unannounced channels are not at risk and will more easily
benefit from route blinding.

I spent a lot of time reworking the specification PR to make it as clear
as possible: please have a look at it and let me know if I can do
anything to make it better. Don't hesitate to reach out directly with
questions and feedback. I strongly recommend starting with the high-level
design doc [5], as natural language and detailed examples will help grasp
the main ideas and subtleties of the proposal.

Cheers,
Bastien

[1] https://github.com/lightning/bolts/pull/765
[2] https://github.com/lightning/bolts/pull/759
[3] https://github.com/lightning/bolts/pull/798
[4]
https://github.com/lightning/bolts/blob/route-blinding/proposals/route-blinding.md#attacks
[5]
https://github.com/lightning/bolts/blob/route-blinding/proposals/route-blinding.md


Re: [Lightning-dev] PTLCs early draft specification

2021-12-22 Thread Bastien TEINTURIER
Hey AJ,

Right, I was probably confused between local/remote, especially when
we're talking about our anchor in the remote commitment (should it be
called local anchor, which is from our point of view, or remote?).

Let's call them Alice and Bob, and Bob is publishing a commitment.
Correct me if I'm wrong there, what you're suggesting is that:

* Bob's anchor on Bob's commitment can be spent with revkey
* Alice's anchor on Bob's commitment can be spent with Alice's pubkey

This does ensure that each participant is able to claim their anchor in
the latest commitment, and Alice is able to claim both anchors in any of
Bob's outdated commitments.

But I think it defeats the `OP_16 OP_CHECKSEQUENCEVERIFY` script branch.
We have that branch to allow anyone to spend anchor outputs *after* the
commitment is confirmed, to avoid keeping them around in the utxo set
forever. However, the trick is that the internal pubkey must be set to
something that is publicly revealed when the channel closes. Now that we
put the revkey in internal pubkeys everywhere instead of script branches,
that revkey is *not* revealed when channels close with the latest commit.
So it would prevent people from using that script branch to clean up the
utxo set...

I have currently used  and  because
they're revealed whenever main outputs are claimed, but there is probably
a smarter solution (maybe one that would let us use revkey here as you
suggest), this will be worth thinking about a bit more.

Thanks,
Bastien

On Tue, Dec 21, 2021 at 17:04, Anthony Towns  wrote:

> On Tue, Dec 21, 2021 at 04:25:41PM +0100, Bastien TEINTURIER wrote:
> > The reason we have "toxic waste" with HTLCs is because we commit to the
> > payment_hash directly inside the transaction scripts, so we need to
> > remember all the payment_hash we've seen to be able to recreate the
> > scripts (and spend the outputs, even if they are revoked).
>
> I think "toxic waste" refers to having old data around that, if used,
> could cause you to lose all the funds in your channel -- that's why it's
> toxic. This is more just regular landfill :)
>
> > > *_anchor: dust, who cares -- might be better if local_anchor used
> > > key = revkey
> >
> > I don't think we can use revkey,
>
> musig(revkey, remote_key)
>   --> allows them to spend after you've revealed the secret for revkey
>   you can never spend because you'll never know the secret for
>   remote_key
>
> but if you just say:
>
> (revkey)
>
> then you can spend (because you know revkey) immediately (because it's
> an anchor output, so intended to be immediately spent) or they can spend
> if it's an obsolete commitment and you've revealed the revkey secret.
>
> > this would prevent us from bumping the
> > current remote commitment if it appears on-chain (because we don't know
> > the private revkey yet if this is the latest commitment). Usually the
> > remote peer should bump it, but if they don't, we may want to bump it
> > ourselves instead of publishing our own commitment (where our main
> > output has a long CSV).
>
> If we're going to bump someone else's commitment, we'll use the
> remote_anchor they provided, not the local_anchor, so I think this is
> fine (as long as I haven't gotten local/remote confused somewhere along
> the way).
>
> Cheers,
> aj
>
>


Re: [Lightning-dev] PTLCs early draft specification

2021-12-21 Thread Bastien TEINTURIER
Hey AJ and list,

That's a very good point, it's really worth highlighting!

The reason we have "toxic waste" with HTLCs is because we commit to the
payment_hash directly inside the transaction scripts, so we need to
remember all the payment_hash we've seen to be able to recreate the
scripts (and spend the outputs, even if they are revoked).

But with PTLCs, we commit to a payment_point outside of the scripts (in
the adaptor signature that is exchanged), so we're able to recreate the
scripts independently of the payment details! This also means that the
payment_point never appears on-chain (which is good for privacy) whereas
the payment_hash does appear on-chain for HTLCs.

> *_anchor: dust, who cares -- might be better if local_anchor used
> key = revkey


I don't think we can use revkey, this would prevent us from bumping the
current remote commitment if it appears on-chain (because we don't know
the private revkey yet if this is the latest commitment). Usually the
remote peer should bump it, but if they don't, we may want to bump it
ourselves instead of publishing our own commitment (where our main
output has a long CSV).

But as you already mentioned, who cares, it's dust and we don't even need
it to CPFP the revoked commitment, we can use any other output since the
revocation path isn't encumbered with a CSV 1.

Cheers,
Bastien

On Sun, Dec 19, 2021 at 23:23, Anthony Towns  wrote:

> On Wed, Dec 08, 2021 at 04:02:02PM +0100, Bastien TEINTURIER wrote:
> > I updated my article [0], people jumping on the thread now may find it
> > helpful to better understand this discussion.
> > [0] https://github.com/t-bast/lightning-docs/pull/16
>
> Since merged, so
> https://github.com/t-bast/lightning-docs/blob/master/taproot-updates.md
>
> So imagine that this proposal is finished and widely adopted/deployed
> and someone adds an additional feature bit that allows a channel to
> forward PTLCs only, no HTLCs.
>
> Then suppose that you forget every old PTLC, because you don't like
> having your channel state grow without bound. What happens if your
> counterparty broadcasts an old state?
>
>  * the musig2 channel funding is irrelevant -- the funding tx has been
>    spent at this point
>
>  * the unspent commitment outputs pay to:
>  to_local: ipk = musig(revkey, mykey) -- known ; scripts also known
>  to_remote: claimable in 1 block, would be better if ipk was also musig
>  *_anchor: dust, who cares -- might be better if local_anchor used
> key = revkey
>  *_htlc: irrelevant by definition
>  local_ptlc: ipk = musig(revkey, mykey) -- known; scripts also known
>
>  * commitment outputs may be immediately spent via layered txs. if so,
>their outputs are: ipk = musig(revkey, mykey); with fixed scripts,
>that include a relative timelock
>
> So provided you know the revocation key (which you do, because it's an
> old transaction and that only requires log(states) data to reconstruct)
> and your own private key, you can reconstruct all the scripts and use
> key path spends for every output immediately (excepting the local_anchor,
> and to_remote is delayed by a block).
>
> So while this doesn't achieve eltoo's goal of "no toxic waste", I believe
> it does achieve the goal of "state information is bounded no matter
> how long you leave the channel open / how many transactions travel over
> the channel".
>
> (Provided you're willing to wait for the other party to attempt to claim
> a htlc via their layered transaction, you can use this strategy for
> htlcs as well as ptlcs -- however this leaves you the risk that they
> never attempt to claim the funds, which may leave you out of pocket,
> and may give them the opportunity to do an attack along the lines of
> "you don't get access to the $10,000 locked in old HTLCs unless you pay
> me $1,000".  So I don't think that's really a smart thing to do)
>
> Cheers,
> aj
>
>


Re: [Lightning-dev] A blame ascribing protocol towards ensuring time limitation of stuck HTLCs in flight.

2021-12-15 Thread Bastien TEINTURIER
Good morning,

I agree, this onion message trick could let us work around this kind of
cheating attempt. However, it becomes quite a complex protocol, and it's
likely that the more we progress towards specifying it, the more subtle
issues we will find that require making it even more complex.

I'm more hopeful that we'll find channel jamming mitigations that work
for both fast spam and slow spam, which would remove the need for this
protocol (it doesn't protect against fast spam, only against slow spam).

> `D` can present to `B` its own `revoke_and_ack` in the above-mentioned
> onion message reply.

A few high-level notes on why I think this is still harder than it looks:

* even if `D` shows `B` its `revoke_and_ack`, it doesn't prove that `D`
  sent it to `C`
* it's impossible for a node to prove that it did *not* receive a message:
  you can prove knowledge, but proving lack of knowledge is much harder
  (impossible?)

Cheers,
Bastien

On Thu, Dec 16, 2021 at 01:50, lightning developer <
lightning-develo...@protonmail.com> wrote:

> Good Morning Bastien,
>
> I believe there is another limitation that you're not mentioning: it's
> easy for a malicious node to blame an honest node. I'm afraid this is a
> serious limitation of the proposal.
>
>
> Thank you very much for your review and comments. I have just updated the
> proposal on github with a section "Security Considerations" that is
> equivalent to what I will send in this mail as I believe that the "serious
> limitation" that you pointed out can be resolved with the help of onion
> messages similar to what I tried to communicate in the already existing
> "Extensions" section. BTW before I sent my initial mail I was thinking
> exactly about the example that you mentioned! I elected to not include it
> to keep the text concise and short. Of course I might have back then and
> still a mistake in my thinking and in that case I apologize for asking you
> to review the proposal and my rebuttal.
>
> If we have a payment: A -> B -> C -> D and C is malicious.
> C can forward the payment to D, and even wait for D to correctly settle it
> (with `update_fulfill_htlc` or `update_fail_htlc`), but then withhold that
> message instead of forwarding it to B. Then C blames D, everyone agrees
> that
> D is a bad node that must be avoided. Later, C unblocks the `update_*_htlc`
> and everyone thinks that D hodled the HTLC for a long time, which is bad.
>
>
> The above issue can be addressed by `B` verifying the proof it received
> from `C`. This can be done by presenting the proof to `D` via an onion
> message along a different node than `C`. If `D` cannot refute the proof by
> presenting a newer state to `B` then `B` knows that `D` was indeed
> dishonest. Otherwise `D` and `B` have discovered that `C` was misbehaving
> and tried to frame `D`.
>
> `B` indicates to `D` that it is allowed to ask such a verification
> question by including the received proof from `C`. Note that `B` could
> never own such a proof if `C` has not communicated with `B`. Of course,
> if `C` has never talked to `B` in the first place, `B` would have sent a
> `TEMPORARY_CHANNEL_FAILURE`, and if `C` stopped communicating with `B`
> in the middle of updating the state machine, then `B` can blame `C` via
> the above mechanism and `A` can verify the claim it received from `B`.
>
> Also, `B` cannot just send garbage to `D` to try to frame `C`, because
> as soon as `B` frames `C`, the upstream node `A` would talk to `C` and
> recognize that it was `B` who was dishonest.
>
> Going back to the situation where `C` and `D` have indeed already
> successfully resolved the HTLC: the node `D` could, in the reply to `B`,
> even securely include the preimage, allowing `B` to reclaim the funds
> from `A` and settle the HTLC in the A->B channel. Only the HTLC in the
> B->C channel would stay locked, which doesn't have to bother `B`, as `B`
> expects `C` to pull / settle the HTLC anyway. Only `C` would be at a
> disadvantage, as it is not pulling its liquidity as soon as it could.
>
> So far - besides a rather complicated flow of information - I do not see
> why the principles of my suggestion would not work at any other point of
> the channel state machine. When queried by `B`, the node `D` could
> always reply with the latest state it has in the C->D channel,
> indicating to `B` that `C` was dishonest.
>
> Of course, we could now ask: what if `B` is also malicious? In this case
> `B` could propagate the `blame_channel` back, but `A` could again use
> the onion trick to verify and discover that `B` and `C` are not
> following the protocol.
>
>
> Apart from this, I think the blame proof isn't that easy to build.
> It cannot simply use `commitment_signed`, because HTLCs are relayed only
> once the previous commitment has been revoked (through `revoke_and_ack`).
> So the proof should contain data from `commitment_signed` and a proof that
> the previous commitment was revoked (and that it was indeed the 

Re: [Lightning-dev] A blame ascribing protocol towards ensuring time limitation of stuck HTLCs in flight.

2021-12-15 Thread Bastien TEINTURIER
Good morning,

Thanks for looking into this!

I believe there is another limitation that you're not mentioning: it's
easy for a malicious node to blame an honest node. I'm afraid this is a
serious limitation of the proposal.

If we have a payment: A -> B -> C -> D and C is malicious.
C can forward the payment to D, and even wait for D to correctly settle it
(with `update_fulfill_htlc` or `update_fail_htlc`), but then withhold that
message instead of forwarding it to B. Then C blames D, everyone agrees that
D is a bad node that must be avoided. Later, C unblocks the `update_*_htlc`
and everyone thinks that D hodled the HTLC for a long time, which is bad.

Apart from this, I think the blame proof isn't that easy to build.
It cannot simply use `commitment_signed`, because HTLCs are relayed only
once the previous commitment has been revoked (through `revoke_and_ack`).
So the proof should contain data from `commitment_signed` and a proof that
the previous commitment was revoked (and that it was indeed the previous
commitment) which is likely very hard to do securely without disclosing
too much about your channel.

Cheers,
Bastien

On Wed, Dec 15, 2021 at 02:08, lightning developer via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning list,
>
> I have just published a proposal to address (but unfortunately not solve)
> the old issue of HTLC spam via onions:
> https://github.com/lightning-developer/lightning-network-documents/blob/main/A%20blame%20ascribing%20protocol%20to%20mitigate%20HTLC%20spam.md
>
> The proposal picks up the early idea by Rusty, AJ and others to ascribe
> blame to a malicious actor but hopefully in a cheaper way than providing
> proof of a channel close by making use of a new lightning message
> `blame_channel` in combination with the proposed onion messages. I guess
> similar ideas and follow ups are already community knowledge (for example
> the local reputation tracking by Jim Posen at:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-May/001232.html)
> However I had the feeling that the current write up might provide some
> additional value to the community.
>
> The proposal also ensures that blame can be ascribed quickly by requiring
> a reply from the downstream onion that is proportional to the `cltv delta`
> at the hop. In this way a sending node will quickly know that a (and more
> importantly which) downstream channel is not working properly.
>
> I will be delighted to read your feedback, thoughts and criticism. For
> your convenience and archiving I also copied the raw markdown file of the
> proposal to the end of this Mail.
>
> Sincerely Lighting Developer
>
>
> - Begin Proposal --
>
> # A blame ascribing protocol towards ensuring time limitation of stuck
> HTLCs in flight.
>
> I was reviewing the [HOLD fee proposal by Joost](
> https://github.com/lightning/bolts/pull/843) and the [excellent summary
> of known mitigation techniques by t-bast](
> https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md)
> when I revisited the very [first idea to mitigate HTLC spam via onions](
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2015-August/000135.html)
> that was discussed back in 2015 by Rusty, AJ and a few others. At that time
> the idea was to ascribe blame to a malicious actor by triggering a force
> close and proving one's own honesty by providing the force close
> transaction. I think there is a lot of merit to the idea of ascribing blame
> and I think it might be possible with the help of [onion messages](
> https://github.com/lightning/bolts/pull/759) without the necessity to
> trigger full force closes.
>
> As I am not entirely sure if this suggestion is a reasonable improvement
> (it certainly does not resolve all the issues we have) I did not spec out
> the details and message formats and fields but only described the high
> level idea. I hope this is sufficient to discuss the principles and get
> your feedback on whether you consider this to be of use and whether we
> should work on the details.
>
> Idea / Observation:
> =
> The key idea is to set a fixed time in seconds (the `reply_interval`),
> starting when an HTLC is successfully negotiated, within which a node
> requires a resolution or reply from the peer to which it previously
> forwarded the downstream onion. If the HTLC is not resolved and no reply
> was sent, the downstream peer is considered to be acting maliciously.
>
> The amount in seconds can be proportional to the `cltv_delta` of that hop.
> To me the arbitrary choice of translating 10 blocks of `cltv_delta` to `1`
> second of expected reply time seems reasonable for now but could be chosen
> differently as long as the entire network (or at least every node included
> to the payment attempt) agrees upon the same conversion rate from
> `cltv_delta` to expected response time from downstream nodes.
>
> There are three cases for the reply:
>
> The Good reply case (HTLC 
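The proposal's `cltv_delta`-to-seconds conversion can be sketched as follows (the 10-blocks-per-second rate is the one assumed in the quoted text; the function names are illustrative, not from any spec):

```python
# Sketch of the reply-interval rule from the proposal above, under the
# assumed conversion rate: 10 blocks of cltv_delta per 1 second of
# expected reply time.
CLTV_BLOCKS_PER_REPLY_SECOND = 10

def reply_interval_seconds(cltv_delta_blocks: int) -> float:
    return cltv_delta_blocks / CLTV_BLOCKS_PER_REPLY_SECOND

def downstream_suspect(elapsed_seconds: float, cltv_delta_blocks: int,
                       replied: bool) -> bool:
    # A downstream peer that neither settled the HTLC nor sent a reply
    # within the interval is considered to be acting maliciously.
    return (not replied
            and elapsed_seconds > reply_interval_seconds(cltv_delta_blocks))

print(reply_interval_seconds(40))          # 4.0: a 40-block delta -> 4 seconds
print(downstream_suspect(5.0, 40, False))  # True: deadline missed, no reply
```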

Re: [Lightning-dev] PTLCs early draft specification

2021-12-08 Thread Bastien TEINTURIER
Hi again AJ and list,

I have slightly re-worked your proposal, and came up with the following
(I also added the musig2 nonces for completeness):

Alice -> Bob: commitment_proposed
channel id
adaptor sigs for PTLCs to Bob in Alice's next commitment
musig nonces for Alice to spend funding tx
musig nonces for Bob to spend funding tx

Bob -> Alice: commitment_proposed
channel id
adaptor sigs for PTLCs to Alice in Bob's next commitment
musig nonces for Alice to spend funding tx
musig nonces for Bob to spend funding tx

Bob -> Alice: commitment_signed
channel id
signature for Alice to spend funding tx
sigs for Alice to spend HTLCs and PTLCs from her next commitment

Alice -> Bob: revoke_and_ack
channel id
reveal previous commitment secret
next commitment point

Alice -> Bob: commitment_signed
channel id
signature for Bob to spend funding tx
sigs for Bob to spend HTLCs and PTLCs from his next commitment

Bob -> Alice: revoke_and_ack
channel id
reveal previous commitment secret
next commitment point

I believe it's exactly the same flow of data between peers as your
proposal, but I simply split the data into several messages. Let me
know if that's incorrect or if I missed a subtlety in your proposal.

This has some small advantages:

* commitment_signed and revoke_and_ack are mostly unchanged, we just
add a new message before the commit / revoke dance. The only change
happens in commitment_signed, where the signatures for PTLC-success
transactions will actually become adaptor signatures.
* the new adaptor signatures are in commitment_proposed instead of being
in commitment_signed, which ensures that we can still have 2*483
pending (H|P)TLCs: since the message size is limited to 65kB, we would
otherwise decrease our maximum to ~2*335 with your proposal (very rough
calculation)
* the messages are now symmetrical, which may be easier to reason about

One thing to note is that we reversed the order in which participants
sign new commitments. We previously had Alice sign first, whereas now
if Alice initiates, Bob will sign the updated commitment first. This is
why we add only 0.5 RTT instead of 1 RTT compared to the current protocol.
I don't think this is an issue, but if someone sees a way to maliciously
exploit this, please share it!
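The 0.5-RTT claim can be checked with a small sketch: consecutive messages in the same direction are pipelined, so latency is half a round trip per change of direction (plus the first leg). The message names follow the flows above; the counting model is a simplification.

```python
# Rough latency model: each change of sender direction costs half a
# round trip; messages sent back-to-back in one direction are pipelined.

current = [("A", "commitment_signed"), ("B", "revoke_and_ack"),
           ("B", "commitment_signed"), ("A", "revoke_and_ack")]

proposed = [("A", "commitment_proposed"), ("B", "commitment_proposed"),
            ("B", "commitment_signed"), ("A", "revoke_and_ack"),
            ("A", "commitment_signed"), ("B", "revoke_and_ack")]

def round_trips(flow):
    direction_changes = sum(1 for m1, m2 in zip(flow, flow[1:])
                            if m1[0] != m2[0])
    return (direction_changes + 1) / 2

print(round_trips(current))   # 1.5
print(round_trips(proposed))  # 2.0 -> only +0.5 RTT vs the current protocol
```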

I updated my article [0], people jumping on the thread now may find it
helpful to better understand this discussion.

Thanks,
Bastien

[0] https://github.com/t-bast/lightning-docs/pull/16

On Wed, Dec 8, 2021 at 11:00, Bastien TEINTURIER wrote:

> Hi AJ,
>
> I think the problem t-bast describes comes up here as well when you
>> collapse the fast-forwards (or, anytime you update the commitment
>> transaction even if you don't collapse them).
>
>
> Yes, exactly.
>
> I think doing a synchronous update of commitments to the channel state,
>> something like:
>
>
>
> Alice -> Bob: propose_new_commitment
>> channel id
>> adaptor sigs for PTLCs to Bob
>
>
>> Bob -> Alice: agree_new_commitment
>> channel id
>> adaptor sigs for PTLCs to Alice
>> sigs for Alice to spend HTLCs and PTLCs to Bob from her own
>> commitment tx
>> signature for Alice to spend funding tx
>>
>> Alice -> Bob: finish_new_commitment_1
>> channel id
>> sigs for Bob to spend HTLCs and PTLCs to Alice from his own
>> commitment tx
>> signature for Bob to spend funding tx
>> reveal old prior commitment secret
>> new commitment nonce
>>
>> Bob -> Alice: finish_new_commitment_2
>> reveal old prior commitment secret
>> new commitment nonce
>>
>> would work pretty well.
>
>
> I agree, this is better than my naive addition of a `remote_ptlcs_signed`
> message in both directions, and even though it changes the protocol
> messages
> it stays very close to the mechanisms we currently have.
>
> I'll spend some time specifying this in more detail, to verify that we're
> not missing anything. What I really like about this proposal is that we
> can probably bundle that protocol change with `option_simplified_update`
> [0]
> without the adaptor sigs, and simply add the adaptor sigs as tlvs when we
> do PTLCs. That lets us deploy this new update protocol separately from
> PTLCs
> and ensure it also simplifies the state machine and makes other features
> such as splicing [1] and dynamic channel upgrades [2] easier.
>
> Thanks,
> Bastien
>
> [0] https://github.com/lightning/bolts/pull/867
> [1] https://github.com/lightning/bolts/pull/863
> [2] https://github.com/lightning/bolts/pull/868
>
> On Wed, Dec 8, 2021 at 10:29, Anthony Towns wrote:
>
>> On Tue, Dec 07, 2021 at 11:52:04PM +, ZmnSCPxj via Lightning-dev
>> wrote:
>> > Alternate

Re: [Lightning-dev] PTLCs early draft specification

2021-12-08 Thread Bastien TEINTURIER
Hi AJ,

I think the problem t-bast describes comes up here as well when you
> collapse the fast-forwards (or, anytime you update the commitment
> transaction even if you don't collapse them).


Yes, exactly.

I think doing a synchronous update of commitments to the channel state,
> something like:



Alice -> Bob: propose_new_commitment
> channel id
> adaptor sigs for PTLCs to Bob


> Bob -> Alice: agree_new_commitment
> channel id
> adaptor sigs for PTLCs to Alice
> sigs for Alice to spend HTLCs and PTLCs to Bob from her own
> commitment tx
> signature for Alice to spend funding tx
>
> Alice -> Bob: finish_new_commitment_1
> channel id
> sigs for Bob to spend HTLCs and PTLCs to Alice from his own
> commitment tx
> signature for Bob to spend funding tx
> reveal old prior commitment secret
> new commitment nonce
>
> Bob -> Alice: finish_new_commitment_2
> reveal old prior commitment secret
> new commitment nonce
>
> would work pretty well.


I agree, this is better than my naive addition of a `remote_ptlcs_signed`
message in both directions, and even though it changes the protocol messages
it stays very close to the mechanisms we currently have.

I'll spend some time specifying this in more detail, to verify that we're
not missing anything. What I really like about this proposal is that we
can probably bundle that protocol change with `option_simplified_update` [0]
without the adaptor sigs, and simply add the adaptor sigs as tlvs when we
do PTLCs. That lets us deploy this new update protocol separately from PTLCs
and ensure it also simplifies the state machine and makes other features
such as splicing [1] and dynamic channel upgrades [2] easier.

Thanks,
Bastien

[0] https://github.com/lightning/bolts/pull/867
[1] https://github.com/lightning/bolts/pull/863
[2] https://github.com/lightning/bolts/pull/868

On Wed, Dec 8, 2021 at 10:29, Anthony Towns wrote:

> On Tue, Dec 07, 2021 at 11:52:04PM +, ZmnSCPxj via Lightning-dev wrote:
> > Alternately, fast-forwards, which avoid this because it does not change
> commitment transactions on the payment-forwarding path.
> > You only change commitment transactions once you have enough changes to
> justify collapsing them.
>
> I think the problem t-bast describes comes up here as well when you
> collapse the fast-forwards (or, anytime you update the commitment
> transaction even if you don't collapse them).
>
> That is, if you have two PTLCs, one from A->B conditional on X, one
> from B->A conditional on Y. Then if A wants to update the commitment tx,
> she needs to
>
>   1) produce a signature to give to B to spend the funding tx
>   2) produce an adaptor signature to authorise B to spend via X from his
>  commitment tx
>   3) produce a signature to allow B to recover Y after timeout from his
>  commitment tx spending to an output she can claim if he cheats
>   4) *receive* an adaptor signature from B to be able to spend the Y output
>  if B posts his commitment tx using A's signature in (1)
>
> The problem is, she can't give B the result of (1) until she's received
> (4) from B.
>
> It doesn't matter if the B->A PTLC conditional on Y is in the commitment
> tx itself or within a fast-forward child-transaction -- any previous
> adaptor sig will be invalidated because there's a new commitment
> transaction, and if you allowed any way of spending without an adaptor
> sig, B wouldn't be able to recover the secret and would lose funds.
>
> It also doesn't matter if the commitment transaction that A and B will
> publish is the same or different, only that it's different from the
> commitment tx that previous adaptor sigs committed to. (So ANYPREVOUT
> would fix this if it were available)
>
> So I think this is still a relevant question, even if fast-forwards
> make it a rare problem, that perhaps is only applicable to very heavily
> used channels.
>
> (I said the following in email to t-bast already)
>
> I think doing a synchronous update of commitments to the channel state,
> something like:
>
>Alice -> Bob: propose_new_commitment
>channel id
>adaptor sigs for PTLCs to Bob
>
>Bob -> Alice: agree_new_commitment
>channel id
>adaptor sigs for PTLCs to Alice
>sigs for Alice to spend HTLCs and PTLCs to Bob from her own
>  commitment tx
>signature for Alice to spend funding tx
>
>Alice -> Bob: finish_new_commitment_1
>channel id
>sigs for Bob to spend HTLCs and PTLCs to Alice from his own
>  commitment tx
>signature for Bob to spend funding tx
>reveal old prior commitment secret
>new commitment nonce
>
>Bob -> Alice: finish_new_commitment_2
>reveal old prior commitment secret
>new commitment nonce
>
> would work pretty well.
>
> This adds half a round-trip compared to now:
>
>Alice -> Bob: commitment_signed
>Bob -> Alice: revoke_and_ack, commitment_signed
>Alice -> Bob: revoke_and_ack
>
> The timings change like 

Re: [Lightning-dev] PTLCs early draft specification

2021-12-08 Thread Bastien TEINTURIER
Hi Z,

> `SIGHASH_NONE | SIGHASH_NOINPUT` (which will take another what, four
> years?) or a similar "covenant" opcode, such as `OP_CHECKTEMPLATEVERIFY`
> without any commitments or an `OP_CHECKSIGFROMSTACK` on an empty message.
> All you really need is a signature for an empty message, really...

That fails my requirement of "deployable in 2022" :)

Same thing applies to fast-forwards: I do see their value, but I'd like to
focus on a first version with minimal changes to the transaction structure
and the update protocol, to ensure we can actually get agreement on it
somewhat quickly and ship it in 2022. Then we can start working on a
more ambitious rework of the protocol that adds a lot of cool features,
such as what AJ proposed recently.

Cheers,
Bastien

On Wed, Dec 8, 2021 at 00:52, ZmnSCPxj wrote:

> Good morning t-bast,
>
>
> > I believe these new transactions may require an additional round-trip.
> > Let's take a very simple example, where we have one pending PTLC in each
> > direction: PTLC_AB was offered by A to B and PTLC_BA was offered by B to
> A.
> >
> > Now A makes some unrelated updates and wants to sign a new commitment.
> > A cannot immediately send her `commitment_signed` to B.
> > If she did, B would be able to broadcast this new commitment, and A would
> > not be able to claim PTLC_BA from B's new commitment (even if she knew
> > the payment secret) because she wouldn't have B's signature for the new
> > PTLC-remote-success transaction.
> >
> > So we first need B to send a new message `remote_ptlcs_signed` to A that
> > contains B's adaptor signatures for the PTLC-remote-success transactions
> > that would spend B's future commitment. After that A can safely send her
> > `commitment_signed`. Similarly, A must send `remote_ptlcs_signed` to B
> > before B can send its `commitment_signed`.
> >
> > It's actually not that bad, we're only adding one message in each
> direction,
> > and we're not adding more data (apart from nonces) to existing messages.
> >
> > If you have ideas on how to avoid this new message, I'd be glad to hear
> > them, hopefully I missed something again and we can make it better!
>
> `SIGHASH_NONE | SIGHASH_NOINPUT` (which will take another what, four
> years?) or a similar "covenant" opcode, such as `OP_CHECKTEMPLATEVERIFY`
> without any commitments or an `OP_CHECKSIGFROMSTACK` on an empty message.
> All you really need is a signature for an empty message, really...
>
> Alternately, fast-forwards, which avoid this because it does not change
> commitment transactions on the payment-forwarding path.
> You only change commitment transactions once you have enough changes to
> justify collapsing them.
> Even in the aj formulation, when A adds a PTLC it only changes the
> transaction that hosts **only** A->B PTLCs as well as the A main output,
> all of which can be sent outright by A without changing any B->A PTLCs.
>
> Basically... instead of a commitment tx like this:
>
> +---+
> funding outpoint -->|   |--> A main
> |   |--> B main
> |   |--> A->B PTLC
> |   |--> B->A PTLC
> +---+
>
> We could do this instead:
>
> +---+2of2  +-+
> funding outpoint -->|   |->| |--> A main
> |   |  | |--> A->B PTLC
> |   |  +-+
> |   |2of2  +-+
> |   |->| |--> B main
> |   |  | |--> B->A PTLC
> +---+  +-+
>
> Then whenever A wants to add a new A->B PTLC it only changes the tx inputs
> of the *other* A->B PTLCs without affecting the B->A PTLCs.
> Payment forwarding is fast, and you only change the "big" commitment tx
> rarely to clean up claimed and failed PTLCs, moving the extra messages out
> of the forwarding hot path.
>
> But this is basically highly similar to what aj designed anyway, so...
>
> Regards,
> ZmnSCPxj
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Bastien TEINTURIER
Hi Jeremy,

Right now, lightning anchor outputs use a 330 sats amount. Each commitment
transaction has two such outputs, and only one of them is spent to help the
transaction get confirmed, so the other stays there and bloats the utxo set.
We allow anyone to spend them after a csv of 16 blocks, in the hope that
someone will claim a batch of them when the fees are low and remove them
from the utxo set. However, that trick wouldn't work with 0-value outputs,
as no one would ever claim them (it doesn't make economic sense).
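A rough back-of-the-envelope sketch of why zero-value anchors would never be swept (the ~100 vbyte per-input cost is an assumption for illustration, not an exact figure):

```python
ANCHOR_SATS = 330        # current lightning anchor output amount
INPUT_VBYTES = 100       # assumed rough cost of spending one such input

def sweep_profit_sats(feerate_sat_per_vbyte: float, n_anchors: int) -> float:
    # Batched sweep: one input per anchor; shared output/overhead ignored.
    return n_anchors * (ANCHOR_SATS - feerate_sat_per_vbyte * INPUT_VBYTES)

print(sweep_profit_sats(1, 10))   # 2300.0 sats: worth sweeping at low fees
print(sweep_profit_sats(4, 10))   # -700.0 sats: a loss at higher feerates
# With 0-value outputs the 330 becomes 0, so sweeping is never profitable.
```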

We actually need to have two of them to avoid pinning: each participant is
able to spend only one of these outputs while the parent tx is unconfirmed.
I believe N-party protocols would likely need N such outputs (not sure).

You mention a change to the carve-out rule, can you explain it further?
I believe it would be a necessary step, otherwise 0-value outputs for
CPFP actually seem worse than low-value ones...

Thanks,
Bastien

On Wed, Dec 8, 2021 at 02:29, Jeremy via bitcoin-dev <
bitcoin-...@lists.linuxfoundation.org> wrote:

> Bitcoin Devs (+cc lightning-dev),
>
> Earlier this year I proposed allowing 0 value outputs and that was shot
> down for various reasons, see
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-August/019307.html
>
> I think that there can be a simple carve out now that package relay is
> being launched based on my research into covenants from 2017
> https://rubin.io/public/pdfs/multi-txn-contracts.pdf.
>
> Essentially, if we allow 0 value outputs BUT require as a matter of policy
> (or consensus, but policy has major advantages) that the output be used as
> an Intermediate Output (that is, in order for the transaction to be
> creating it to be in the mempool it must be spent by another tx)  with the
> additional rule that the parent must have a higher feerate after CPFP'ing
> the parent than the parent alone we can both:
>
> 1) Allow 0 value outputs for things like Anchor Outputs (very good for not
> getting your eltoo/Decker channels pinned by junk witness data using Anchor
> Inputs, very good for not getting your channels drained by at-dust outputs)
> 2) Not allow 0 value utxos to proliferate long
> 3) It still being valid for a 0 value that somehow gets created to be
> spent by the fee paying txn later
>
> Just doing this as a mempool policy also has the benefits of not
> introducing any new validation rules. Although in general the IUTXO concept
> is very attractive, it complicates mempool :(
>
> I understand this may also be really helpful for CTV based contracts (like
> vault continuation hooks) as well as things like spacechains.
>
> Such a rule -- if it's not clear -- presupposes a fully working package
> relay system.
>
> I believe that this addresses all the issues with allowing 0 value outputs
> to be created for the narrow case of immediately spendable outputs.
>
> Cheers,
>
> Jeremy
>
> p.s. why another post today? Thank Greg
> https://twitter.com/JeremyRubin/status/1468390561417547780
>
>
> --
> @JeremyRubin 
> 
> ___
> bitcoin-dev mailing list
> bitcoin-...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>


Re: [Lightning-dev] PTLCs early draft specification

2021-12-07 Thread Bastien TEINTURIER
Hi Z, Lloyd,

Let's ignore the musig nonce exchanges for now. I believe these can be
easily included in existing messages: they probably aren't the reason we
need more round-trips (at least not the one I'm concerned about for now).

Basically, if my memory and understanding are accurate, in the above,
> it is the *PTLC-offerer* which provides an adaptor signature.
> That adaptor signature would be included in the `update_add_ptlc` message.


Neat, you're completely right, I didn't realize that the adaptor signature
could be completed by the other party, this is a great property I had
missed.
Thanks for pointing it out, it does simplify the protocol a lot!

I don't think you can include it in `update_add_ptlc` though, it has to
be in `commitment_signed`, because if you do a batch of updates before
signing, you would immediately invalidate the adaptor signatures you
previously sent.

But it would be a simple change, where the signatures in `commitment_signed`
would actually be adaptor signatures for PTLC-success transactions and
normal signatures for PTLC-timeout transactions.

Isn't it the case that all previous PTLC adaptor signatures need to be
> re-sent for each update_add_ptlc message because the signatures would
> no longer be valid once the commit tx changes


Yes indeed, whenever the commitment changes, peers need to create new
signatures and adaptor signatures for all pending PTLCs.

This is completely fine for PTLC-success and PTLC-timeout transactions,
but we also need to exchange signatures for the new pre-signed transactions
that spend a PTLC from the remote commitment. Let's call this new pre-signed
transaction PTLC-remote-success (not a great name).

I believe these new transactions may require an additional round-trip.
Let's take a very simple example, where we have one pending PTLC in each
direction: PTLC_AB was offered by A to B and PTLC_BA was offered by B to A.

Now A makes some unrelated updates and wants to sign a new commitment.
A cannot immediately send her `commitment_signed` to B.
If she did, B would be able to broadcast this new commitment, and A would
not be able to claim PTLC_BA from B's new commitment (even if she knew
the payment secret) because she wouldn't have B's signature for the new
PTLC-remote-success transaction.

So we first need B to send a new message `remote_ptlcs_signed` to A that
contains B's adaptor signatures for the PTLC-remote-success transactions
that would spend B's future commitment. After that A can safely send her
`commitment_signed`. Similarly, A must send `remote_ptlcs_signed` to B
before B can send its `commitment_signed`.

It's actually not that bad, we're only adding one message in each direction,
and we're not adding more data (apart from nonces) to existing messages.

If you have ideas on how to avoid this new message, I'd be glad to hear
them, hopefully I missed something again and we can make it better!

Thanks,
Bastien

On Tue, Dec 7, 2021 at 09:04, ZmnSCPxj wrote:

> Good morning LL, and t-bast,
>
> > > Basically, if my memory and understanding are accurate, in the above,
> it is the *PTLC-offerer* which provides an adaptor signature.
> > > That adaptor signature would be included in the `update_add_ptlc`
> message.
> >
> > Isn't it the case that all previous PTLC adaptor signatures need to be
> re-sent for each update_add_ptlc message because the signatures would no
> longer be valid once the commit tx changes. I think it's better to put it
> in `commitment_signed` if possible. This is what is done with pre-signed
> HTLC signatures at the moment anyway.
>
> Agreed.
>
> This is also avoided by fast-forwards, BTW, simply because fast-forwards
> delay the change of the commitment tx.
> It is another reason to consider fast-forwards, too
>
> Regards,
> ZmnSCPxj
>
>
>


[Lightning-dev] PTLCs early draft specification

2021-12-06 Thread Bastien TEINTURIER
Good morning list,

There was a great recent post on the mailing list detailing how we could
do PTLCs on lightning with a lot of other goodies [0]. This proposal
contained heavy changes to the transaction structure and the update
protocol. While it's certainly something we'll want to do in the long
run, I wanted to explore the minimal set of changes we would need to be
able to deploy PTLCs as soon as possible.

The current result is a somewhat high-level article, where each section
could be a separate update of the lightning protocol [1].

I tried to make PTLCs work with minimal changes to the transaction
structure and the update protocol, but they introduce a fundamental
change which forces us to make more changes than I'd like.

With HTLCs, the payment secret (the preimage of the payment hash) was
directly revealed in the witness of a spending transaction.

With PTLCs, this isn't the case anymore. The payment secret is a private
key, and a spending transaction only reveals that key if you have a
matching adaptor signature. This forces us to make two changes:

1. We must obtain adaptor signatures before sending our commit_sig
2. We must use a pre-signed HTLC-success transaction not only with our
local commit, but also with the remote commit
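For intuition, here is a toy, insecure sketch of the adaptor-signature property this relies on: completing the adaptor signature into a valid signature necessarily reveals the secret. It uses plain modular arithmetic instead of an elliptic curve; nothing here is secp256k1 or a real musig construction, and all names are illustrative.

```python
# Toy Schnorr-like adaptor signature in a linear "group" where a point is
# just scalar*G mod q. INSECURE -- only illustrates why publishing the
# completed signature s reveals the payment secret t.
import hashlib
import random

q = 2**127 - 1          # toy group order
G = 5                   # toy generator

def pt(k):              # "point" for scalar k
    return (k * G) % q

def challenge(*parts):
    h = hashlib.sha256(repr(parts).encode()).digest()
    return int.from_bytes(h, "big") % q

x = random.randrange(1, q); X = pt(x)   # signer's key
t = random.randrange(1, q); T = pt(t)   # payment secret and its point
r = random.randrange(1, q); R = pt(r)   # signing nonce

e = challenge((R + T) % q, X, b"ptlc")  # challenge commits to R + T
s_adaptor = (r + e * x) % q             # adaptor signature: (R, s_adaptor)

# The receiver, knowing t, completes it into a valid signature...
s = (s_adaptor + t) % q
assert pt(s) == (R + T + e * X) % q     # ...which verifies against R + T

# ...and publishing s (e.g. on-chain) lets the offerer extract the secret:
t_extracted = (s - s_adaptor) % q
assert t_extracted == t
print("secret revealed by completed signature:", t_extracted == t)
```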

This means that we will need more round-trips whenever we update our
commitment. I'd like to find the right design trade-off where we don't
introduce too many changes in the protocol while minimizing the number
of additional round-trips.

We currently exchange the following messages:

Alice                                   Bob
          update_add_htlc
    ----------------------------------->
          update_add_htlc
    ----------------------------------->
          update_add_htlc
    ----------------------------------->
          commit_sig
    ----------------------------------->
          revoke_and_ack
    <-----------------------------------
          commit_sig
    <-----------------------------------
          revoke_and_ack
    ----------------------------------->

It works well because the commit_sig sent by Alice only contains signatures
for Bob's transactions (commit and htlc transactions), and the commit_sig
sent by Bob only contains signatures for Alice's transactions, and Alice
and Bob don't need anything else to spend outputs from either commitment.

But with PTLCs, Bob needs a signature from Alice to be able to fulfill a
PTLC from Alice's commitment. And Alice needs Bob to provide an adaptor
signature for that transaction before she can give him her signature.
We don't have the clean ordering that we had before.

The designs I came up with that keep the current messages and just insert
new ones are either too costly (too many additional round-trips) or too
complex (most likely broken in some edge cases).

I believe we need to change the commit_sig / revoke_and_ack protocol if
we want to find the sweet spot I'm looking for. I'd like to collect ideas
from this list's participants on how we could do that. This is probably
something that should be bundled with option_simplified_commitment [2]
(or at least we must ensure that option_simplified_commitment is a first
step towards the protocol we'll need for PTLCs). It's also important to
note that the protocol changes must work for both HTLCs and PTLCs, and
shouldn't change the structure of the transactions (not more than the
simple addition of PTLC outputs done in [1]).

Cheers,
Bastien

[0]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003278.html
[1] https://github.com/t-bast/lightning-docs/pull/16
[2] https://github.com/lightning/bolts/pull/867


Re: [Lightning-dev] A Mobile Lightning User Goes to Pay a Mobile Lightning User...

2021-10-19 Thread Bastien TEINTURIER
Hi Matt,

I like this proposal, it's a net improvement compared to hodling HTLCs
at the recipient's LSP. With onion messages, we do have all the tools we
need to build this. I don't think we can do much better than that anyway
if we want to keep payments fully non-custodial. This will be combined
with notifications to try to get the recipient to go online asap.

One thing to note is that the senders also need to come online while
the payment isn't settled, otherwise there is a risk they'll lose their
channels. If the sender's LSP receives the preimage but the sender does
not come online, the sender's LSP will have to force-close to claim the
HTLC on-chain when it gets close to the timeout.

Definitely not a show-stopper, just an implementation detail to keep in
mind.
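A sketch of the deadline check the sender's LSP would need (the function name and safety margin are illustrative, not from any implementation):

```python
# The sender's LSP holds the preimage but the sender is offline: once the
# HTLC gets close to its timeout, the LSP must force-close and claim the
# HTLC on-chain rather than keep waiting.
def must_force_close(current_height: int, htlc_cltv_expiry: int,
                     safety_margin_blocks: int = 12) -> bool:
    # Within the safety margin of the timeout, waiting is no longer safe.
    return current_height >= htlc_cltv_expiry - safety_margin_blocks

print(must_force_close(700_000, 700_010))  # True: only 10 blocks left
print(must_force_close(700_000, 700_100))  # False: plenty of time to wait
```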

Bastien

On Thu, Oct 14, 2021 at 02:20, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Matt,
>
> > On 10/13/21 02:58, ZmnSCPxj wrote:
> >
> > > Good morning Matt,
> > >
> > > >  The Obvious (tm) solution here is PTLCs - just have the sender
> always add some random nonce * G to
> > > >  the PTLC they're paying and send the recipient a random nonce
> in the onion. I'd generally suggest we
> > > >  just go ahead and do this for every PTLC payment, cause why
> not? Now the sender and the lnurl
> > > >  endpoint have to collude to steal the funds, but, like, the
> sender could always just give the lnurl
> > > >  endpoint the money. I'd love suggestions for fixing this short
> of PTLCs, but its not immediately
> > > >  obvious to me that this is possible.
> > > >
> > >
> > > Use two hashes in an HTLC instead of one, where the second hash is
> from a preimage the sender generates, and which the sender sends (encrypted
> via onion) to the receiver.
> > > You might want to do this anyway in HTLC-land, consider that we have a
> `payment_secret` in invoices, the second hash could replace that, and
> provide similar protection to what `payment_secret` provides (i.e.
> resistance against forwarding nodes probing; the information in both cases
> is private to the ultimate sender and ultimate receiver).
> >
> > Yes, you could create a construction which does this, sure, but I'm not
> sure how you'd do this
> > without informing every hop along the path that this is going on, and
> adapting each hop to handle
> > this as well. I suppose I should have been more clear with the
> requirements, or can you clarify
> > somewhat what your proposed construction is?
>
> Just that: two hashes instead of one.
> Make *every* HTLC on LN use two hashes, even for current "online RPi user
> pays online RPi user" --- just use the `payment_secret` for the preimage of
> the second hash, the sender needs to send it anyway.
>
> >
> > If you're gonna adapt every node in the path, you might as well just use
> PTLC.
>
> Correct, we should just do PTLCs now.
> (Basically, my proposal was just a strawman to say "we should just do
> PTLCs now")
>
>
> Regards,
> ZmnSCPxj
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


Re: [Lightning-dev] Opening balanced channels using PSBT

2021-09-22 Thread Bastien TEINTURIER
Hi,

This is exactly what the dual funding proposal provides:
https://github.com/lightningnetwork/lightning-rfc/pull/851

Cheers,
Bastien

On Wed, Sep 22, 2021 at 07:29, Ole Henrik Skogstrøm
wrote:

> Hi
>
> I have found a way of opening balanced channels using LND's psbt option
> when opening channels. What I'm doing is essentially just joining funded
> PSBTs before signing and submitting them. This makes it possible to open a
> balanced channel between two nodes or open a ring of balanced channels
> between multiple nodes (ROF).
>
> I found this interesting; however, I don't know whether this is somehow
> unsafe or otherwise a bad idea. If not, it could be an
> interesting alternative to only being able to open unbalanced channels.
>
> To do this efficiently, nodes need to collaborate by sending PSBTs back
> and forth to each other and doing this manually is a pain, so if this makes
> sense to do, it would be best to automate it through a client.
>
> --
> --- Here is an example of the complete flow for a single channel:
> --
>
> * Node A generates a new address and sends it to Node B (lncli
> newaddress p2wkh)
>
> * Node A starts an interactive channel open to Node B using psbt
> (lncli openchannel --psbt  200 100)
>
> * Node A funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node B funds the refund transaction to Node A and sends the PSBT back
> to Node A (bitcoin-cli walletcreatefundedpsbt []
> '[{"":0.01}]')
>
> * Node A joins the two PSBTs and sends the result back to Node B
> (bitcoin-cli joinpsbts '["", ""]')
>
> * Node B verifies the content and signs the joined PSBT before sending it
> back to Node A (bitcoin-cli walletprocesspsbt )
>
> * Node A verifies the content and signs the joined PSBT (bitcoin-cli
> walletprocesspsbt )
>
> * Node A completes the channel open by publishing the fully signed PSBT
>
>
> --
> --- Here is an example of the complete flow for a ring of channels between
> multiple nodes:
> --
>
> * Node A starts an interactive channel open to Node B using psbt (lncli
> openchannel --psbt --no_publish  200 100)
> * Node A funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node B starts an interactive channel open to Node C using psbt (lncli
> openchannel --psbt --no_publish  200 100)
> * Node B funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node C starts an interactive channel open to Node A using psbt (lncli
> openchannel --psbt  200 100)
> * Node C funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node B and Node C send Node A their PSBTs
>
> * Node A joins all the PSBTs (bitcoin-cli joinpsbts
> '["", "",
> ""]')
>
> Using (bitcoin-cli walletprocesspsbt ):
>
> * Node A verifies and signs the PSBT and sends it to Node B (1/3
> signatures)
> * Node B verifies and signs the PSBT and sends it to Node C (2/3
> signatures)
> * Node C verifies and signs the PSBT (3/3 signatures) before sending it
> to Node A and B.
>
> * Node A completes the channel open (no_publish)
> * Node B completes the channel open (no_publish)
> * Node C completes the channel open and publishes the transaction.
>
> --
> Ole Henrik Skogstrøm
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-21 Thread Bastien TEINTURIER
Hi Joost,

Concept ACK, I had toyed with something similar a while ago, but I hadn't
realized
that invoice storage was such a DoS vector for merchants/hubs and wasn't
sure it
would be useful.

Do you have an example of what information you would usually put in your
`encoded_order_details`?

I'd imagine that it would usually be simply a skuID from the merchant's
product
database, but it could also be fully self-contained data to identify a
"transaction"
(probably encrypted with a key belonging to the payee).

We'd want to ensure that this field is reasonably small, to ensure it can
fit in
onions without forcing the sender to use shorter routes or disable other
features.
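
As an illustration of how far the stateless idea can go (a sketch under the assumption that the payee is willing to derive preimages deterministically; nothing here is normative or matches a shipped implementation): the payee can derive the payment preimage from a private key and the `encoded_order_details`, so that the HTLC plus the onion payload carry everything needed to verify the payment with zero invoice storage.

```python
import hashlib
import hmac

MERCHANT_KEY = b"merchant-private-key"  # assumption: known only to the payee

def make_invoice(order_details: bytes) -> bytes:
    # Derive the preimage deterministically: nothing needs to be stored,
    # since the order details regenerate the invoice on demand.
    preimage = hmac.new(MERCHANT_KEY, order_details, hashlib.sha256).digest()
    return hashlib.sha256(preimage).digest()  # payment_hash for the invoice

def on_incoming_htlc(payment_hash: bytes, order_details: bytes):
    # Recompute the preimage from the order details carried in the onion;
    # settle only if it matches the HTLC's payment hash.
    preimage = hmac.new(MERCHANT_KEY, order_details, hashlib.sha256).digest()
    if hashlib.sha256(preimage).digest() == payment_hash:
        return preimage  # release to settle the HTLC (proof-of-payment)
    return None  # unknown or tampered order: fail the HTLC
```

This also shows why the field size matters: everything the payee needs must fit in the onion payload alongside the existing records.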

Cheers,
Bastien


Le mar. 21 sept. 2021 à 15:17, Joost Jager  a écrit :

> On Tue, Sep 21, 2021 at 3:06 PM fiatjaf  wrote:
>
>> I would say, however, that these are two separate proposals:
>>
>>   1. implementations should expose a "stateless invoice" API for
>> receiving using the payment_secret;
>>   2. when sending, implementations should attach a TLV record with
>> encoded order details.
>>
>> Of these, 1 is very simple to do and does not require anyone to
>> cooperate; it just works.
>>
>> 2 requires full network compatibility, so it's harder. But 2 is also very
>> much needed otherwise the payee has to keep track of all the invoice ids
>> related to the orders they refer to, right?
>>
>
> Not completely sure what you mean by full network compatibility, but a
> network-wide upgrade including all routing nodes isn't needed. I think to
> do it cleanly we need a new tag for bolt11 and node implementations that
> carry over the contents of this field to a tlv record. So senders do need
> to support this.
>
>
>> But I think just having 1 already improves the situation a lot, and there
>> are application-specific workarounds that can be applied for 2 (having a
>> fixed, hardcoded set of possible orders, encoding the order very minimally
>> in the payment secret or route hint, storing order details on redis for
>> only 3 minutes and using lnurlpay to reduce the delay between invoice
>> issuance and user confirmation to zero, and so on).
>>
>
> A stateless invoice API would be a great thing to have. I've prototyped
> this in lnd and if you implement it so that a regular invoice is inserted
> 'just in time', it isn't too involved as you say.
>
> Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Asymmetric features

2021-07-08 Thread Bastien TEINTURIER
Good morning list,

I've been mulling over some limitations of our feature bits mechanism and
I'm interested in your ideas and comments.

Our feature bits mechanism works well for symmetric features (where both
peers play the same role) but not so well for asymmetric features (where
there is a client and a service provider). Here is a hypothetical example to
illustrate that. Any similarity to existing wallet features is entirely
coincidental.

Alice has a mobile lightning wallet that can be woken up via push
notifications.
Bob runs a lightning node that can send push notifications to mobile
wallets to
wake them up on important events (e.g. incoming htlcs).

We can't use a single feature bit to model that, because what Alice supports
is actually "I can be woken up via push notifications", but she can't send
push
notifications to other nodes (and similarly, Bob only supports waking up
other
nodes, not receiving push notifications).

So we must use two feature bits: `wake_me_up_plz` and `i_say_wake_up`.
Alice activates `wake_me_up_plz`, Bob activates `i_say_wake_up` and it's
now clear what part of the protocol each node can handle.

But how does Alice require her peers to support `i_say_wake_up`?
She can't turn on the feature with the mandatory bit because then her peers
would be confused and think she can wake up other devices.

I see two potential solutions:

   1. Re-purpose the meaning of `optional` and `mandatory` bits for
   asymmetric features: the odd bit would mean "I support this feature"
   and the even bit would mean "I require my peer to support this feature"
   2. Add a requirement to send a warning and disconnect when a client
   connects to a provider that hasn't activated the provider-side feature
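
To make option 1 concrete, here is a minimal sketch of the repurposed semantics (the feature bit numbers below are invented for illustration): for a feature occupying bits (2k, 2k+1), the odd bit means "I support this" and the even bit means "I require my peer to support this".

```python
# Hypothetical feature bit bases; real assignments would live in BOLT 9.
WAKE_ME_UP_PLZ = 54  # bit 55 (odd) = "I can be woken up via push notification"
I_SAY_WAKE_UP = 56   # bit 57 (odd) = "I can send push notifications"

def compatible(local_bits: set, remote_bits: set) -> bool:
    # Option 1 semantics: every even bit we set requires the peer to have
    # set the corresponding odd (support) bit of the same feature.
    return all(b + 1 in remote_bits for b in local_bits if b % 2 == 0)

# Alice supports being woken up (odd bit) and requires a peer that can
# wake her up (even bit of the provider-side feature).
alice = {WAKE_ME_UP_PLZ + 1, I_SAY_WAKE_UP}
# Bob only advertises that he can wake peers up.
bob = {I_SAY_WAKE_UP + 1}

both_ok = compatible(alice, bob) and compatible(bob, alice)
```

Note that under these semantics neither side is ever "confused" into thinking Alice can send push notifications, since she never sets the odd bit of `i_say_wake_up`.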

Thoughts?

Cheers,
Bastien

Note: I opened an issue for that for those who prefer github:
https://github.com/lightningnetwork/lightning-rfc/issues/885
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-07-02 Thread Bastien TEINTURIER
> actually jump in and use LN.
>
> In the end though, there's no grand global committee that prevents people
> from deploying software they think is interesting or useful. In the long
> run, I guess one simply needs to hope that bad ideas die out, or speak out
> against them to the public. As LN sits a layer above the base protocol,
> widespread global consensus isn't really required to make certain classes
> of
> changes, and you can't stop people from experimenting on their own.
>
> > We can't have collisions on any of these three things.
>
> Yeah, collisions are def possible. IMO, this is where the interplay with
> BOLTs comes in: BOLTs are the global feature bit/tlv/message namespace.  A
> bLIP might come with the amendment of BOLT 9 to define feature bits they
> used. Of course, this should be done on a best effort basis, as even if you
> assign a bit for your idea, someone can just go ahead and deploy something
> else w/ that same bit, and they may never really intersect depending on the
> nature or how widespread the new feature is.
>
> It's also likely that some implementations, or forks of implementations,
> are already using "undocumented" TLVs or feature bits in the wild today.
> I don't know exactly which TLV types things like applications
> that tunnel messages over the network use, but afaik so far there haven't
> been any disastrous collisions in the wild.
>
> -- Laolu
>
> On Thu, Jul 1, 2021 at 2:19 AM Bastien TEINTURIER 
> wrote:
>
>> Thanks for starting that discussion.
>>
>> In my opinion, what we're really trying to address here are the two
>> following
>> points (at least from the point of view of someone who works on the spec
>> and
>> an implementation):
>>
>> - Implementers get frustrated when they've worked on something that they
>> think
>> is useful and they can't get it into the BOLTs (the spec PR isn't
>> reviewed,
>> it progresses too slowly or there isn't enough agreement to merge it)
>> - Implementers expect other implementers to specify the optional features
>> they
>> ship: we don't want to have to reverse-engineer a sub-protocol when users
>> want our implementation to provide support for feature XXX
>>
>> Note that these are two very different concerns.
>>
>> bLIPs/SPARKS/BIPs clearly address the second point, which is good.
>> But they don't address the first point at all, they instead work around
>> it.
>> To be fair, I don't think we can completely address that first point:
>> properly
>> reviewing spec proposals takes a lot of effort and accepting complex
>> changes
>> to the BOLTs shouldn't be done lightly.
>>
>> I am mostly in favor of this solution, but I want to highlight that it
>> isn't
>> only rainbows and unicorns: it will add fragmentation to the network, it
>> will
>> add maintenance costs and backwards-compatibility issues, many bLIPs will
>> be
>> sub-optimal solutions to the problem they try to solve and some bLIPs
>> will be
>> simply insecure and may put users' funds at risk (L2 protocols are hard
>> and have
>> subtle issues that can be easily missed). On the other hand, it allows
>> for real
>> world experimentation and iteration, and it's easier to amend a bLIP than
>> the
>> BOLTs.
>>
>> On the nuts-and-bolts (see the pun?) side, bLIPs cannot embrace a fully
>> bazaar
>> style of evolution. Most of them will need:
>>
>> - to assign feature bit(s)
>> - to insert new tlv fields in existing messages
>> - to create new messages
>>
>> We can't have collisions on any of these three things. bLIP XXX cannot
>> use the
>> same tlv types as bLIP YYY otherwise we're creating network
>> incompatibilities.
>> So they really need to be centralized, and we need a process to assign
>> these
>> and ensure they don't collide. It's not a hard problem, but we need to be
>> clear
>> about the process around those.
>>
>> Regarding the details of where they live, I don't have a strong opinion,
>> but I
>> think they must be easy to find and browse, and I think it's easier for
>> readers
>> if they're inside the spec repository. We already have PRs that use a
>> dedicated
>> "proposals" folder (e.g. [1], [2]).
>>
>> Cheers,
>> Bastien
>>
>> [1] https://github.com/lightningnetwork/lightning-rfc/pull/829
>> [2] https://github.com/lightningnetwork/lightning-rfc/pull/854
>>
>> Le jeu. 1 juil. 2021 à 02:31, Ariel Luaces  a
>> écrit :
>>
>>> BIPs are already the Baza

Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-07-01 Thread Bastien TEINTURIER
Thanks for starting that discussion.

In my opinion, what we're really trying to address here are the two
following
points (at least from the point of view of someone who works on the spec and
an implementation):

- Implementers get frustrated when they've worked on something that they
think
is useful and they can't get it into the BOLTs (the spec PR isn't reviewed,
it progresses too slowly or there isn't enough agreement to merge it)
- Implementers expect other implementers to specify the optional features
they
ship: we don't want to have to reverse-engineer a sub-protocol when users
want our implementation to provide support for feature XXX

Note that these are two very different concerns.

bLIPs/SPARKS/BIPs clearly address the second point, which is good.
But they don't address the first point at all, they instead work around it.
To be fair, I don't think we can completely address that first point:
properly
reviewing spec proposals takes a lot of effort and accepting complex changes
to the BOLTs shouldn't be done lightly.

I am mostly in favor of this solution, but I want to highlight that it isn't
only rainbows and unicorns: it will add fragmentation to the network, it
will
add maintenance costs and backwards-compatibility issues, many bLIPs will be
sub-optimal solutions to the problem they try to solve and some bLIPs will
be
simply insecure and may put users' funds at risk (L2 protocols are hard and
have
subtle issues that can be easily missed). On the other hand, it allows for
real
world experimentation and iteration, and it's easier to amend a bLIP than
the
BOLTs.

On the nuts-and-bolts (see the pun?) side, bLIPs cannot embrace a fully
bazaar
style of evolution. Most of them will need:

- to assign feature bit(s)
- to insert new tlv fields in existing messages
- to create new messages

We can't have collisions on any of these three things. bLIP XXX cannot use
the
same tlv types as bLIP YYY otherwise we're creating network
incompatibilities.
So they really need to be centralized, and we need a process to assign these
and ensure they don't collide. It's not a hard problem, but we need to be
clear
about the process around those.

Regarding the details of where they live, I don't have a strong opinion,
but I
think they must be easy to find and browse, and I think it's easier for
readers
if they're inside the spec repository. We already have PRs that use a
dedicated
"proposals" folder (e.g. [1], [2]).

Cheers,
Bastien

[1] https://github.com/lightningnetwork/lightning-rfc/pull/829
[2] https://github.com/lightningnetwork/lightning-rfc/pull/854

Le jeu. 1 juil. 2021 à 02:31, Ariel Luaces  a écrit :

> BIPs are already the Bazaar style of evolution that simultaneously
> allows flexibility and coordination/interoperability (since anyone can
> create a BIP and they create an environment of discussion).
>
> BOLTs are essentially one big BIP in the sense that they started as a
> place for discussion but are now more rigid. BOLTs must be followed
> strictly to ensure a node is interoperable with the network. And BOLTs
> should be rigid, as rigid as any widely used BIP like 32 for example.
> Even though BOLTs were flexible when being drafted their purpose has
> changed from descriptive to prescriptive.
> Any alternatives, or optional features should be extracted out of
> BOLTs, written as BIPs. The BIP should then reference the BOLT and the
> required flags set, messages sent, or alterations made to signal that
> the BIP's feature is enabled.
>
> A BOLT may at some point organically change to reference a BIP. For
> example if a BIP was drafted as an optional feature but then becomes
> more widespread and then turns out to be crucial for the proper
> operation of the network then a BOLT can be changed to just reference
> the BIP as mandatory. There isn't anything wrong with this.
>
> All of the above would work exactly the same if there was a bLIP
> repository instead. I don't see the value in having both bLIPs and
> BIPs since AFAICT they seem to be functionally equivalent and BIPs are
> not restricted to exclude lightning, and never have been.
>
> I believe the reason this move to BIPs hasn't happened organically is
> because many still perceive the BOLTs available for editing, so
> changes continue to be made. If instead BOLTs were perceived as more
> "consensus critical", not subject to change, and more people were
> strongly encouraged to write specs for new lightning features
> elsewhere (like the BIP repo) then you would see this issue of growing
> BOLTs resolved.
>
> Cheers
> Ariel Lorenzo-Luaces
>
> On Wed, Jun 30, 2021 at 1:16 PM Olaoluwa Osuntokun 
> wrote:
> >
> > > That being said I think all the points that are addressed in Ryan's
> mail
> > > could very well be formalized into BOLTs but maybe we just need to
> rethink
> > > the current process of the BOLTs to make it more accessible for new
> ideas
> > > to find their way into the BOLTs?
> >
> > I think part of what bLIPs are trying to solve here 

Re: [Lightning-dev] Turbo channels spec?

2021-06-30 Thread Bastien TEINTURIER
>
> - MUST NOT send `announcement_signatures` messages until `funding_locked`
>   has been sent and received AND the funding transaction has at least
> six confirmations.
>
> So still compliant there?
>

Great, I hadn't spotted that one, so we're good on the
`announcement_signatures` side.

I'm wondering if `option_zeroconf` implies that we should set `min_depth =
0` in
`accept_channel`, since that's the number of confirmations before we can
send
`funding_locked`.

We need a signal that this channel uses zero-conf, and the two obvious
choices are:

   - set `min_depth = 0`
   - use a `channel_type` that sets `option_zeroconf`

I think the second option is better: this way we can keep a "normal"
`min_depth`, and when we send `funding_locked` we know that the channel is
now perfectly safe to use (out of the zero-conf zone).
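
A sketch of the resulting logic (assuming a `channel_type`-based `option_zeroconf` signal, which is only one of the two options above and not final):

```python
def can_send_funding_locked(confirmations: int, min_depth: int,
                            option_zeroconf: bool) -> bool:
    # With option_zeroconf the channel can be used immediately; min_depth
    # keeps its normal meaning instead of being forced to zero.
    return option_zeroconf or confirmations >= min_depth

def out_of_zeroconf_zone(confirmations: int, min_depth: int) -> bool:
    # Once min_depth is reached, a zero-conf channel is as safe as any
    # other channel (announcement_signatures still waits for 6 confs).
    return confirmations >= min_depth
```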

Cheers,
Bastien



Le mer. 30 juin 2021 à 02:09, Rusty Russell  a
écrit :

> Bastien TEINTURIER  writes:
> > Hi Rusty,
> >
> > On the eclair side, we instead send `funding_locked` as soon as we
> > see the funding tx in the mempool.
> >
> > But I think your proposal would work as well.
>
> This would be backward compatible, I think.  Eclair would send
> `funding_locked`, which is perfectly legal, but a normal peer would
> still wait for confirms before also sending `funding_locked`; it's
> just that option_zeroconf_channels would mean it doesn't have to
> wait for that before sending HTLCs?
>
> > We may want to defer sending `announcement_signatures` until
> > after the funding tx has been confirmed? What `min_depth` should
> > we use here? Should we keep a non-zero value in `accept_channel`
> > or should it be zero?
>
> You can't send it before you know the channel_id, so it has to be at
> least 1.  Spec says:
>
>   - MUST NOT send `announcement_signatures` messages until
> `funding_locked`
>   has been sent and received AND the funding transaction has at least
> six confirmations.
>
> So still compliant there?
>
> Cheers,
> Rusty.
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Turbo channels spec?

2021-06-29 Thread Bastien TEINTURIER
Hi Rusty,

On the eclair side, we instead send `funding_locked` as soon as we
see the funding tx in the mempool.

But I think your proposal would work as well.

We may want to defer sending `announcement_signatures` until
after the funding tx has been confirmed? What `min_depth` should
we use here? Should we keep a non-zero value in `accept_channel`
or should it be zero?

Cheers,
Bastien



Le mar. 29 juin 2021 à 07:34, Rusty Russell  a
écrit :

> Hi all!
>
> John Carvalo recently pointed out that not every implementation
> accepts zero-conf channels, but they are useful.  Roasbeef also recently
> noted that they're not spec'd.
>
> How do you all do it?  Here's a strawman proposal:
>
> 1. Assign a new feature bit "I accept zeroconf channels".
> 2. If both negotiate this, you can send update_add_htlc (etc) *before*
>funding_locked without the peer getting upset.
> 3. Nodes are advised *not* to forward HTLCs from an unconfirmed channel
>unless they have explicit reason to trust that node (they can still
>send *out* that channel, because that's not their problem!).
>
> It's a pretty simple change, TBH (this zeroconf feature would also
> create a new set of channel_types, altering that PR).
>
> I can draft something this week?
>
> Thanks!
> Rusty.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-26 Thread Bastien TEINTURIER
I looked into this more closely, and as far as I understand it, the spec
already states that you should not count dust HTLCs:

  "if result would be offering more than the remote's `max_accepted_htlcs`
  HTLCs, in the remote commitment transaction:
    - MUST NOT add an HTLC."

Note that it clearly says "in the remote commitment transaction", which
means
you don't count HTLCs that are dust or trimmed.

That matches eclair's behavior: we don't count dust HTLCs towards that
limit.
Is lnd including them in that count? What about other implementations?
If that's the case, that can simply be fixed in lnd without any spec change
IMHO.

Note that this also excludes trimmed HTLCs from the count, which means that
nodes that set `max_accepted_htlcs` to 483 may be exposed to the issue I
described earlier (impossible to lower the feerate because the HTLC count
would
become greater than the limit).
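
To illustrate the dust/trimmed distinction and the `update_fee` interaction discussed in this thread, here is a sketch using the pre-anchor second-stage HTLC transaction weights from BOLT 3 (treat the exact constants as an assumption and defer to the spec):

```python
# Pre-anchor second-stage transaction weights from BOLT 3.
HTLC_TIMEOUT_WEIGHT = 663  # spends an offered HTLC output
HTLC_SUCCESS_WEIGHT = 703  # spends a received HTLC output

def is_trimmed(amount_sat: int, dust_limit_sat: int,
               feerate_per_kw: int, offered: bool) -> bool:
    # An HTLC is trimmed (no commitment tx output) when its amount minus
    # the second-stage tx fee would fall below the dust limit.
    weight = HTLC_TIMEOUT_WEIGHT if offered else HTLC_SUCCESS_WEIGHT
    fee = feerate_per_kw * weight // 1000
    return amount_sat < dust_limit_sat + fee

def counted_htlcs(htlcs, dust_limit_sat, feerate_per_kw):
    # Only HTLCs that materialize as outputs in the commitment tx count
    # towards max_accepted_htlcs under the reading quoted above.
    return sum(1 for amount, offered in htlcs
               if not is_trimmed(amount, dust_limit_sat, feerate_per_kw, offered))
```

For example, a 1000 sat offered HTLC is trimmed at 1000 sat/kw but re-appears in the commitment transaction if `update_fee` lowers the feerate to 253 sat/kw, which is why trimmed (as opposed to strictly-dust) HTLCs are the dangerous case.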

Bastien

Le sam. 24 avr. 2021 à 10:01, Bastien TEINTURIER  a
écrit :

> You're right, I was thinking about trimmed HTLCs (which can re-appear in
> the commit tx
> if you lower the feerate via update_fee).
>
> Dust HTLCs will never appear in the commit tx regardless of subsequent
> update_fees,
> so Eugene's suggestion could make sense!
>
> Le sam. 24 avr. 2021 à 06:02, Matt Corallo  a
> écrit :
>
>> The update_fee message does not, as far as I recall, change the dust
>> limit for outputs in a channel (though I’ve suggested making such a change).
>>
>> On Apr 23, 2021, at 12:24, Bastien TEINTURIER  wrote:
>>
>> 
>> Hi Eugene,
>>
>> The reason dust HTLCs count for the 483 HTLC limit is because of
>> `update_fee`.
>> If you don't count them and exceed the 483 HTLC limit, you can't lower
>> the fee anymore
>> because some HTLCs that were previously dust won't be dust anymore and
>> you may end
>> up with more than 483 HTLC outputs in your commitment, which opens the
>> door to other
>> kinds of attacks.
>>
>> This is the first issue that comes to mind, but there may be other
>> drawbacks if we dig into
>> this enough with an attacker's mindset.
>>
>> Bastien
>>
>> Le ven. 23 avr. 2021 à 17:58, Eugene Siegel  a
>> écrit :
>>
>>> I propose a simple mitigation to increase the capital requirement of
>>> channel-jamming attacks. This would prevent an unsophisticated attacker
>>> with low capital from jamming a target channel.  It seems to me that this
>>> is a *free* mitigation without any downsides (besides code-writing), so I'd
>>> like to hear other opinions.
>>>
>>> In a commitment transaction, we trim dust HTLC outputs.  I believe that
>>> the reason for the 483 HTLC limit each side has in the spec is to prevent
>>> commitment tx's from growing unreasonably large, and to ensure they are
>>> still valid tx's that can be included in a block.  If we don't include dust
>>> HTLCs in this calculation, since they are not on the commitment tx, we
>>> still allow 483 (x2) non-dust HTLCs to be included on the commitment tx.
>>> There could be a configurable limit on the number of outstanding dust
>>> HTLCs, but the point is that it doesn't affect the non-dust throughput of
>>> the channel.  This raises the capital requirement of channel-jamming so
>>> that each HTLC must be non-dust, rather than spamming 1 sat payments.
>>>
>>> Interested in others' thoughts.
>>>
>>> Eugene (Crypt-iQ)
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-24 Thread Bastien TEINTURIER
You're right, I was thinking about trimmed HTLCs (which can re-appear in
the commit tx
if you lower the feerate via update_fee).

Dust HTLCs will never appear in the commit tx regardless of subsequent
update_fees,
so Eugene's suggestion could make sense!

Le sam. 24 avr. 2021 à 06:02, Matt Corallo  a
écrit :

> The update_fee message does not, as far as I recall, change the dust limit
> for outputs in a channel (though I’ve suggested making such a change).
>
> On Apr 23, 2021, at 12:24, Bastien TEINTURIER  wrote:
>
> 
> Hi Eugene,
>
> The reason dust HTLCs count for the 483 HTLC limit is because of
> `update_fee`.
> If you don't count them and exceed the 483 HTLC limit, you can't lower the
> fee anymore
> because some HTLCs that were previously dust won't be dust anymore and you
> may end
> up with more than 483 HTLC outputs in your commitment, which opens the
> door to other
> kinds of attacks.
>
> This is the first issue that comes to mind, but there may be other
> drawbacks if we dig into
> this enough with an attacker's mindset.
>
> Bastien
>
> Le ven. 23 avr. 2021 à 17:58, Eugene Siegel  a écrit :
>
>> I propose a simple mitigation to increase the capital requirement of
>> channel-jamming attacks. This would prevent an unsophisticated attacker
>> with low capital from jamming a target channel.  It seems to me that this
>> is a *free* mitigation without any downsides (besides code-writing), so I'd
>> like to hear other opinions.
>>
>> In a commitment transaction, we trim dust HTLC outputs.  I believe that
>> the reason for the 483 HTLC limit each side has in the spec is to prevent
>> commitment tx's from growing unreasonably large, and to ensure they are
>> still valid tx's that can be included in a block.  If we don't include dust
>> HTLCs in this calculation, since they are not on the commitment tx, we
>> still allow 483 (x2) non-dust HTLCs to be included on the commitment tx.
>> There could be a configurable limit on the number of outstanding dust
>> HTLCs, but the point is that it doesn't affect the non-dust throughput of
>> the channel.  This raises the capital requirement of channel-jamming so
>> that each HTLC must be non-dust, rather than spamming 1 sat payments.
>>
>> Interested in others' thoughts.
>>
>> Eugene (Crypt-iQ)
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-23 Thread Bastien TEINTURIER
Hi Eugene,

The reason dust HTLCs count for the 483 HTLC limit is because of
`update_fee`.
If you don't count them and exceed the 483 HTLC limit, you can't lower the
fee anymore
because some HTLCs that were previously dust won't be dust anymore and you
may end
up with more than 483 HTLC outputs in your commitment, which opens the door
to other
kinds of attacks.

This is the first issue that comes to mind, but there may be other
drawbacks if we dig into
this enough with an attacker's mindset.

Bastien

Le ven. 23 avr. 2021 à 17:58, Eugene Siegel  a écrit :

> I propose a simple mitigation to increase the capital requirement of
> channel-jamming attacks. This would prevent an unsophisticated attacker
> with low capital from jamming a target channel.  It seems to me that this
> is a *free* mitigation without any downsides (besides code-writing), so I'd
> like to hear other opinions.
>
> In a commitment transaction, we trim dust HTLC outputs.  I believe that
> the reason for the 483 HTLC limit each side has in the spec is to prevent
> commitment tx's from growing unreasonably large, and to ensure they are
> still valid tx's that can be included in a block.  If we don't include dust
> HTLCs in this calculation, since they are not on the commitment tx, we
> still allow 483 (x2) non-dust HTLCs to be included on the commitment tx.
> There could be a configurable limit on the number of outstanding dust
> HTLCs, but the point is that it doesn't affect the non-dust throughput of
> the channel.  This raises the capital requirement of channel-jamming so
> that each HTLC must be non-dust, rather than spamming 1 sat payments.
>
> Interested in others' thoughts.
>
> Eugene (Crypt-iQ)
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] L2s Onchain Support IRC Workshop

2021-04-23 Thread Bastien TEINTURIER
Great idea, I'll join as well.
Thanks for setting this in motion.

Le ven. 23 avr. 2021 à 17:39, Antoine Riard  a
écrit :

> Hi Jeremy,
>
> Yes, dates are floating for now. After Bitcoin 2021 sounds like a good idea.
>
> Awesome, I'll be really interested to review again an improved version of
> sponsorship. And I'll try to sketch out the sighash_no-input fee-bumping
> idea which was floating around last year during pinning discussions. Yet
> another set of trade-offs :)
>
> Le ven. 23 avr. 2021 à 11:25, Jeremy  a écrit :
>
>> I'd be excited to join. Recommend bumping the date  to mid June, if
>> that's ok, as many Americans will be at Bitcoin 2021.
>>
>> I was thinking about reviving the sponsors proposal with a 100-block lock
>> on spending a sponsoring tx, which would hopefully make it less
>> controversial; this would be a great place to discuss those tradeoffs.
>>
>> On Fri, Apr 23, 2021, 8:17 AM Antoine Riard 
>> wrote:
>>
>>> Hi,
>>>
>>> During the lastest years, tx-relay and mempool acceptances rules of the
>>> base layer have been sources of major security and operational concerns for
>>> Lightning and other Bitcoin second-layers [0]. I think those areas require
>>> significant improvements to ease design and deployment of higher Bitcoin
>>> layers and I believe this opinion is shared among the L2 dev community. To
>>> make progress, it has been suggested a few times in recent months to
>>> organize in-person workshops on those issues with both L1 and L2 devs
>>> present, to make the exchange fruitful.
>>>
>>> Unfortunately, I don't think we'll be able to organize such in-person
>>> workshops this year (because you know travel is hard those days...) As a
>>> substitution, I'm proposing a series of one or more irc meetings. That
>>> said, this substitution has the happy benefit to gather far more folks
>>> interested by those issues that you can fit in a room.
>>>
>>> # Scope
>>>
>>> I would like to propose the following 4 items as topics of discussion.
>>>
>>> 1) Package relay design, or another generic L2 fee-bumping primitive like
>>> sponsorship [0]. IMHO, this primitive should at least handle mempool spikes
>>> that make the propagation of transactions with pre-signed feerates obsolete,
>>> solve pinning attacks compromising the safety of Lightning/multi-party
>>> contract protocols, offer a usable and stable API to the L2 software stack,
>>> stay compatible with miner and full-node operators' incentives, and
>>> obviously minimize CPU/memory DoS vectors.
>>>
>>> 2) Deprecation of opt-in RBF in favor of full-RBF. Opt-in RBF makes it
>>> trivial for an attacker to partition network mempools into divergent subsets
>>> and then launch advanced security or privacy attacks against a
>>> Lightning node. Note that it might also be a concern for bandwidth-bleeding
>>> attacks against L1 nodes.
>>>
>>> 3) Guidelines for coordinated cross-layer security disclosures.
>>> Mitigating a security issue around tx-relay or the mempool in Core might
>>> have harmful implications for downstream projects. Ideally, L2 project
>>> maintainers should be ready to upgrade their protocols in an emergency, in
>>> coordination with base-layer developers.
>>>
>>> 4) Guidelines for the on-chain security design of L2 protocols. Currently
>>> deployed protocols like Lightning make a bunch of assumptions about
>>> tx-relay and mempool acceptance rules. Those rules are non-normative,
>>> unreliable and lack documentation. Further, they're devoid of tooling to
>>> enforce them at runtime [2]. IMHO, it would be preferable to identify a
>>> subset of them on which second-layer protocols can rely without encroaching
>>> too much on nodes' policy realm or making base-layer development in those
>>> areas too cumbersome.
>>>
>>> I'm aware that some folks are interested in other topics, such as
>>> extending Core's mempool package limits or better pricing of RBF
>>> replacements. So I propose a 2-week consultation period to submit other
>>> topics related to tx-relay or mempool improvements for L2s before
>>> proposing a finalized scope and agenda.
>>>
>>> # Goals
>>>
>>> 1) Reaching technical consensus.
>>> 2) Reaching technical consensus, before seeking community consensus as
>>> it likely has ecosystem-wide implications.
>>> 3) Establishing a security incident response policy which can be applied
>>> by dev teams in the future.
>>> 4) Establishing a philosophy design and associated documentations (BIPs,
>>> best practices, ...)
>>>
>>> # Timeline
>>>
>>> 2021-04-23: Start of consultation period
>>> 2021-05-07: End of consultation period
>>> 2021-05-10: Proposition of workshop agenda and schedule
>>> late 2021-05/2021-06: IRC meetings
>>>
>>> As the problem space is savagely wide, I've started a collection of
>>> documents to assist this workshop: https://github.com/ariard/L2-zoology
>>> Still a WIP, but I'll have them in good shape by agenda publication,
>>> with reading suggestions and open questions to structure discussions.

[Lightning-dev] Trampoline routing improvements and updates

2020-12-28 Thread Bastien TEINTURIER
Good morning list,

Before we close this amazing year, I wanted to give you an update and reboot
excitement around trampoline routing for 2021.

Acinq has been running a trampoline node for more than a year to provide
simple and reliable payments for tens of thousands of Phoenix [1] users.
We've learned a lot, and I just opened a new trampoline routing spec PR to
reflect that [2].

The TL;DR is:

* it's simpler than the previous proposal and more flexible
* it makes MPP more cost-efficient and reliable
* it works nicely with rendezvous or route blinding
* it's as private as normal payments (likely more private) if used properly
(details in the PR)

I strongly believe the current state of trampoline routing can provide great
benefits in terms of wallet UX and reliability, but we need more reviews for
the spec to converge before it can be broadly deployed without fear of
moving parts or breaking changes. Please have a look at the proposal without
preconceived ideas; you may be surprised by how simple and natural it feels.

I also want to stress that the code changes are very reasonable, as it
re-uses a lot of components that are already part of every lightning
implementation and doesn't introduce new assumptions.

As a matter of fact, an independent implementation has been completed by the
Electrum team and has been recently tested E2E on mainnet! Having a spec
agreement on feature bits, invoice hints format and onion error codes would
allow their wallet to fully interoperate with Phoenix and future trampoline
wallets, as well as unblock development of even more improvements.

Happy end of year to all and stay #reckless in 2021!

Bastien

[1] https://phoenix.acinq.co/
[2] https://github.com/lightningnetwork/lightning-rfc/pull/829
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning Distributed Routing

2020-11-30 Thread Bastien TEINTURIER
Hi Joao,

Thanks for the time you spent on this; the paper is clear on the trade-offs
(sacrificing some privacy for efficiency).

My main negative feedback here is that you seem to assume that nodes will
honestly cooperate. It feels to me that nodes can cheat and gossip biased or
invalid information to their peers in order to attract more payments through
their nodes (and collect more fees or put honest routing nodes out of
business).

Is that something you've thought about?

Cheers,
Bastien

On Sun, Nov 29, 2020 at 12:46 AM, João Valente wrote:

> Hey!
>
> I've been working on this new concept for routing in the lightning
> network. It leverages the use of the information nodes have on the
> distribution of funds in their channels to try and maximize the probability
> of success for a payment.
> Each node shares with its neighbours the information it has about the
> distribution of funds in its own neighbourhood in the form of a
> routing table. As nodes receive new tables they'll update their own
> locally maintained tables with the new information, periodically sharing
> them with their neighbours.
> Routing tables associate destination addresses (representing nodes in the
> network) with the next hop on the maximum-capacity path to those nodes.
> If a new payment is to be made, a payment probe is forwarded by the payer;
> it travels through every node in the path, collecting the path information
> along the way, and reaches the payee, who returns it to the payer. The
> payer can then confidently use the discovered path to route LN payments.
>
> I wrote a 10 page paper about the subject and would love to get some
> feedback:
>
> https://drive.google.com/file/d/1dahW0X-N59138ZbY-4odpXjpDnX4Gb7Z/view?usp=sharing
>
> Cheers,
> João Valente
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
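The "maximum capacity path" tables described in the quoted proposal would, with perfect information, converge to what a widest-path (maximum-bottleneck) search computes. A minimal sketch of that search, with made-up channel balances that are not from the paper:

```python
# Illustrative widest-path search: among all paths from src to dst, find the
# one maximizing the minimum channel capacity along it (a Dijkstra variant).
import heapq

def widest_path(graph, src, dst):
    """Return (bottleneck_capacity, path) maximizing the minimum edge capacity."""
    best = {src: float("inf")}
    # Max-heap on bottleneck capacity (negated for heapq's min-heap).
    heap = [(-best[src], src, [src])]
    while heap:
        neg_cap, node, path = heapq.heappop(heap)
        cap = -neg_cap
        if node == dst:
            return cap, path
        for nbr, edge_cap in graph.get(node, {}).items():
            new_cap = min(cap, edge_cap)
            if new_cap > best.get(nbr, 0):
                best[nbr] = new_cap
                heapq.heappush(heap, (-new_cap, nbr, path + [nbr]))
    return 0, []

# Hypothetical channel balances (sats) between nodes A..D.
channels = {
    "A": {"B": 50_000, "C": 20_000},
    "B": {"D": 10_000},
    "C": {"D": 30_000},
}
cap, path = widest_path(channels, "A", "D")
print(cap, path)  # 20000 ['A', 'C', 'D']
```

Note that in the proposal no node runs this search globally; the gossiped tables approximate its result hop by hop, which is exactly where dishonest gossip could bias the outcome.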
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Mitigating Channel Jamming with Stake Certificates

2020-11-27 Thread Bastien TEINTURIER
Good morning list,

This is an interesting approach to solving this problem; I really like the
idea. It definitely deserves more digging: the fact that it doesn't add an
additional payment makes it largely superior to upfront payment schemes in
terms of UX.

If we restrict these stake certificates to LN funding txs, which have a
very specific format (2-of-2 multisig), there are probably smart ways to
achieve this. If, for example, we're able to do it easily with Schnorr-based
funding txs, it may be worth waiting for that to happen.
I'm a bit afraid of having to use ZKPs for general statements; I'd prefer
something tailored to that specific case (it would likely be more efficient
and carry fewer new assumptions - even though you're right to point out that
this is a non-critical system, so we're freer to experiment with hot new
stuff).

I completely agree with Z that it should be added to the requirements that
a node cannot reuse a stake certificate from another node for itself.

Another constraint is that the proof has to be small, since we have to fit
> it all in a small onion...
>

I'm not sure that's necessary. If I understand correctly, that's only
because, in your model, the sender (Alice) creates one stake certificate
for each node in the route (Bob, Carol) and puts them all in the onion.

But instead it could be a point-to-point property: each node provides its
own stake certificate
to the next node (and only to that node). Alice provides a stake
certificate to Bob, then Bob
provides a stake certificate to Carol, and so on. If that's the case, it
can be in a tlv field in the
`update_add_htlc` message and doesn't need to be inside the onion. This
also makes it less
likely that Alice is exposing herself to remote nodes in the route (payer
privacy).

Of course, this depends on the implementation details we choose, but I
think it's worth stressing
that these two models exist and are quite different.

Thanks,
Bastien

On Fri, Nov 27, 2020 at 7:46 AM, ZmnSCPxj via Lightning-dev <lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Gleb,
>
> > Thank you for your interest :)
> >
> > > Quick question: if I am a routing node and receive a valid stake
> certificate, can I reuse this stake certificate on my own outgoing payments?
> >
> > That probably should be avoided, otherwise a mediocre routing node gets
> > a lot of jamming opportunities for no good reason.
> >
> > You are right, that’s a strong argument for proof “interactivity”: every
> Certificate should probably commit to *at least* public key of the routing
> node it is generated for.
>
> Right, it would be better to have the certificate commit to a specific
> routing node rather than the payment hash/point as I proposed.
> Committing to a payment hash/point allows a random forwarding node to
> probe the rest of the network using the same certificate, lowering the
> score for that certificate on much of the network.
>
> Another constraint is that the proof has to be small, since we have to fit
> it all in a small onion...
>
> Presumably we also want the score to eventually "settle to 0" over time.
>
> Regards,
> ZmnSCPxj
>
> >
> > – gleb
> > On Nov 27, 2020, 2:16 AM +0200, ZmnSCPxj wrote:
> >
> > > Good morning Gleb and Antoine,
> > >
> > > This is certainly interesting!
> > >
> > > Quick question: if I am a routing node and receive a valid stake
> certificate, can I reuse this stake certificate on my own outgoing payments?
> > >
> > > It seems to me that the proof-of-stake-certificate should also somehow
> integrate a detail of the current payment (such as payment hash/point) so
> it cannot be reused by routing nodes for their own outgoing payments.
> > >
> > > For example, looking only at your naive privacy-broken proposal, the
> signature must use a `sign-to-contract` where the `R` in the signature is
> actually `R' + h(R' | payment_hash)` with the `R'` also revealed.
> > >
> > > Regards,
> > > ZmnSCPxj
> > >
> > > > Hello list,
> > > >
> > > > In this post, we explore a different approach to channel jamming
> mitigation.
> > > > We won’t talk about the background here, for the problem description
> as well as some proposed solutions (mainly upfront payment schemes), see
> [1].
> > > >
> > > > We’re suggesting using UTXO ownership proofs (a.k.a. Stake
> Certificates) to solve this problem. Previously, these proofs were only
> used in the Lightning Network at channel announcement time to prevent
> malicious actors from announcing channels they don’t control. One can think
> of it as a “fidelity bond” (as a scarce resource) as a requirement for
> sending HTLCs.
> > > >
> > > > We start by overviewing issues with other solutions, and then
> present a naive, privacy-broken Stake Certificates. Then we examine
> designing a privacy-preserving version, evaluating them. At the end, we
> talk about non-trivial design decisions and open questions.
> > > >
> > > > ## Issues with other proposals
> > > >
> > > > We find unsatisfying that upfront 

Re: [Lightning-dev] Minor tweaks to blinded path proposal

2020-11-19 Thread Bastien TEINTURIER
Hey Rusty,

Good questions.

I think we could use additive tweaks, and they are indeed faster, so it may
be worth doing. We would replace `B(i) = HMAC256("blinded_node_id", ss(i)) *
P(i)` with `B(i) = HMAC256("blinded_node_id", ss(i)) * G + P(i)`.
Intuitively, since the private key of the tweak comes from a hash function,
it should offer the same security. But there may be dragons lurking there; I
don't know how to properly evaluate whether it's as secure (whereas the
multiplicative version is really just Sphinx, so we know it should be
secure).

If we're able to use additive tweaks, we can probably indeed use x-only
pubkeys. Although since we're not storing these on-chain, the byte saved
isn't worth much. I'd say that if it's trivial to use them, let's do it;
otherwise it's not worth any additional effort.

Cheers,
Bastien

On Wed, Nov 18, 2020 at 6:18 AM, Rusty Russell wrote:

>
> See:
>
> https://github.com/lightningnetwork/lightning-rfc/blob/route-blinding/proposals/route-blinding.md
>
> 1. Can we use additive tweaks instead of multiplicative?
>    They're slightly faster, and supported by the x-only secp API.
> 2. Can we use x-only pubkeys?  It's generally trivial, and a byte
>    shorter.  I'm using them in offers to great effect.
>
> Thanks!
> Rusty.
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-11-02 Thread Bastien TEINTURIER via Lightning-dev
Good morning Joost and Z,

So in your proposal, an htlc that is received by a routing node has the
> following properties:
> * htlc amount
> * forward up-front payment (anti-spam)
> * backward up-front payment (anti-hold)
> * grace period
> The routing node forwards this to the next hop with
> * lower htlc amount (to earn routing fees when the htlc settles)
> * lower forward up-front payment (to make sure that an attacker at the
> other end loses money when failing quickly)
> * higher backward up-front payment (to make sure that an attacker at the
> other end loses money when holding)
> * shorter grace period (so that there is time to fail back and not lose
> the backward up-front payment)


That's exactly it, this is a good summary.

An issue with the bidirectional upfront/hold fees is related to trustless
> offchain-to-onchain swaps, like Boltz and Lightning Loop.
> As the claiming of the offchain side is dependent on claiming of the
> onchain side of the trustless swap mechanism, which is *definitely* slow,
> the swap service will in general be forced to pay up the hold fees.


Yes, that is a good observation.
But shouldn't the swap service take that into account in the fee it collects
to perform the swap? That way it is in fact the user who pays for that fee.

Cheers,
Bastien

On Wed, Oct 28, 2020 at 2:13 AM, ZmnSCPxj wrote:

> Good morning Bastien, Joost, and all,
>
> An issue with the bidirectional upfront/hold fees is related to trustless
> offchain-to-onchain swaps, like Boltz and Lightning Loop.
>
> As the claiming of the offchain side is dependent on claiming of the
> onchain side of the trustless swap mechanism, which is *definitely* slow,
> the swap service will in general be forced to pay up the hold fees.
>
> It seems to me that the hold-fees mechanism cannot be ported over in the
> onchain side, so even if you set a "reasonable" grace period at the swap
> service of say 1 hour (and assuming forwarding nodes are OK with that
> humongous grace period!), the onchain side of the swap can delay the
> release of onchain.
>
> To mitigate against this, the swap service would need to issue a separate
> invoice to pay for the hold fee for the "real" swap payment.
> The Boltz protocol supports a separate mining-fee invoice (disabled on the
> Boltz production servers) that is issued after the invoice is "locked in"
> at the swap service, but I think that in view of the use of hold fee, a
> combined mining-fee+hold-fee invoice would have to be issued at the same
> time as the "real" swap invoice.
>
> Regards,
> ZmnSCPxj
>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-23 Thread Bastien TEINTURIER via Lightning-dev
Hey Joost and Z,

I brought up the question about the amounts because it could be that
> amounts high enough to thwart attacks are too high for honest users or
> certain uses.


I don't think this is a concern for this proposal, unless there's an attack
vector I missed. The reason I claim that is that the backwards upfront
payment can be made somewhat big without any negative impact on honest
nodes. If you're an honest intermediate node, only two cases are possible:

* your downstream peer settled the HTLC quickly (before the grace period
ends): in that case you refund him his upfront fee, and you have time to
settle the HTLC upstream while still honoring the grace period, so it will
be refunded to you as well (unless you delay the settlement upstream for
whatever reason, in which case you deserve to pay the hold_fee)
* your grace period has expired, so you can't get a refund upstream: if that
happens, the grace period with your downstream node has also expired, so
you're earning money downstream and paying money upstream, and you'll
usually even take a small positive spread, so everything's good

The only node that can end up losing money on the backwards upfront payment
is the last node in the route. But that node should always settle the HTLC
quickly (or decide to hodl it, but in that case it's normal that it pays the
hold_fee).

But what happens if the attacker is also on the other end of the
> uncontrolled spam payment? Not holding the payment, but still collecting
> the forward payments?


That's what I call short-lived `controlled spam`. In that case the attacker
pays the forward fee at the beginning of the route but has it refunded at
the end of the route. If the attacker doesn't want to lose any money, he has
to release the HTLC before the grace period ends (which is going to be
short-lived - at least compared to block times). This gives an opportunity
for legitimate payments to use the HTLC slots (but it's a race between the
attacker and the legitimate users).

It's not ideal, because the attacker isn't penalized... The only way I think
we can penalize this kind of attack is if the forward fee decrements at each
hop, but in that case it needs to be in the onion (to avoid probing) and the
delta needs to be high enough to actually penalize the attacker. Time to
bikeshed some numbers!

C can trivially grief D here, making it look like D is delaying, by
> delaying its own `commitment_signed` containing the *removal* of the HTLC.


You're right to dive into these, there may be something here.
But I think your example doesn't work; let me know if I'm mistaken.
D is the one who decides whether he'll be refunded or not, because D is the
first to send the `commit_sig` that removes the HTLC. I think we would
extend `commit_sig` with a tlv field that indicates "I refunded myself for
HTLC N" to help C compute the same commit tx and verify sigs.

I agree with you that the details of how we'll implement the grace period
may open griefing attacks depending on how we do it; it's worth exploring
further.

Cheers,
Bastien

On Fri, Oct 23, 2020 at 12:50 PM, ZmnSCPxj wrote:

> Good morning t-bast,
>
>
> > > And in this case C earns.
> >
> > > Can C delay the refund to D to after the grace period even if D
> settled the HTLC quickly?
> >
> > Yes C earns, but D has misbehaved. As a final recipient, D isn't
> dependent on anyone downstream.
> > An honest D should settle the HTLC before the `grace_period` ends. If D
> chooses to hold the HTLC
> > for a while, then it's fair that he pays C for this.
>
>
> Okay, now let us consider the case where the supposedly-delaying party is
> not the final destination.
>
> So, suppose D indicates to C that it should fail the HTLC.
> In this case, C cannot immediately propagate the `update_fail_htlc`
> upstream, since the latest commitment transaction for the C<->D channel
> still contains the HTLC.
>
> In addition, our state machine is hand-over-hand, i.e. there is a small
> window where there are two valid commitment transactions.
> What happens is we sign the next commitment transaction and *then* revoke
> the previous one.
>
> So I think C can only safely propagate its own upstream `update_fail_htlc`
> once it receives the `revoke_and_ack` from D.
>
> So the time measured for the grace period between C and D should be from C
> sending `update_add_htlc` to C receiving `revoke_and_ack` from D, in case
> the HTLC fails.
> This is the time period that D is allowed to consume, and if it exceeds
> the grace period, it is penalized.
>
> (In this situation, it is immaterial if D is the destination: C cannot
> know this fact.)
>
> So let us diagram this better:
>
>  C   D
>  |update_add_htlc--->| ---
>  |---commitment_signed-->|  ^
>  |  |<--commitment_signed---|  |
>  |-revoke_and_ack--->|  |
>  |   | grace period
>  |<--update_fail_htlc|  |
>  |<--commitment_signed---|  |

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-23 Thread Bastien TEINTURIER via Lightning-dev
Thanks for your answers,

My first instinct is that additional complications are worse in general.
> However, it looks like simpler solutions are truly not enough, so adding
> the complication may very well be necessary.


I agree with both these statements ;). I'd love to find a simpler solution,
but this is the simplest I've been able to come up with for now that seems
to work without adding griefing vectors...

The succeeding text refers to HTLCs "settling".


As you noted, settling means getting the HTLC removed from the commitment
transaction.
It includes both fulfills and fails, otherwise the proposal indeed doesn't
penalize spam.

If we also require that the hold fee be funded from the main output, then
> we cannot use single-funded channels, except perhaps with `push_msat`.


I see what you mean: the first payment cannot require a hold fee since the
fundee doesn't have a main output. I think it's ok; it's the same thing as
the reserve not being met initially.

But you're right that there are potentially other mechanisms to enforce the
fee (like your suggestion of subtracting from the HTLC output). I chose the
simplest for now, but we can (and will) revisit that choice if we think that
the overall mechanism works!

And in this case C earns.

Can C delay the refund to D to after the grace period even if D settled the
> HTLC quickly?


Yes C earns, but D has misbehaved. As a final recipient, D isn't dependent
on anyone downstream. An honest D should settle the HTLC before the
`grace_period` ends. If D chooses to hold the HTLC for a while, then it's
fair that he pays C for this.

it is the fault of the peer for getting disconnected and having a delay in
> reconnecting, possibly forfeiting the hold fee because of that.


I think I agree with that, but we'll need to think about the pros and cons
when we get to details.

Is 1msat going to even deter anyone?

I am wondering though what the values for the fwd and bwd fees should be. I
> agree with ZmnSCPxj that 1 msat for the fwd is probably not going to be
> enough.


These values are only chosen for simplicity's sake in the example. If we
agree the proposal works to fight spam, we will do some calculations to
figure out a good value for this. But I think finding the right base values
will not be the hard part, so we'll focus on this once we're convinced the
proposal is worth exploring in full detail.

It is interesting that the forward and backward payments are relatively
> independent of each other


To explain this further, I think it's important to highlight that the
forward fee is meant to fight `uncontrolled spam` (where the recipient is an
honest node) while the backward fee is meant to fight `controlled spam`
(where the recipient also belongs to the attacker).

The reason it works is that `uncontrolled spam` requires the attacker to
send a large volume of HTLCs, so a very small forward fee gets magnified.
The backward fee will be much bigger because in `controlled spam`, the
attacker doesn't need a large volume of HTLCs but holds them for a long
time. What I think is nice is that this proposal has only a tiny cost for
honest senders (the forward fee).

What I'd really like to explore is whether there is a type of spam that I
missed or griefing attacks that appear because of the mechanisms I
introduce. TBH I think the implementation details (amounts, grace periods
and their deltas, when to start counting, etc.) are things we'll be able to
figure out collectively later.

Thanks again for your time!
Bastien


On Fri, Oct 23, 2020 at 7:58 AM, Joost Jager wrote:

> Hi Bastien,
>
> We add a forward upfront payment of 1 msat (fixed) that is paid
>> unconditionally when offering an HTLC.
>> We add a backwards upfront payment of `hold_fees` that is paid when
>> receiving an HTLC, but refunded
>> if the HTLC is settled before the `hold_grace_period` ends (see footnotes
>> about this).
>>
>
> It is interesting that the forward and backward payments are relatively
> independent of each other. In particular the forward anti-spam payment
> could quite easily be implemented to help protect the network. As you said,
> just transfer that fixed fee for every `update_add_htlc` message from the
> offerer to the receiver.
>
> I am wondering though what the values for the fwd and bwd fees should be.
> I agree with ZmnSCPxj that 1 msat for the fwd is probably not going to be
> enough.
>
> Maybe a way to approach it is this: suppose routing nodes are able to make
> 5% per year on their committed capital. An aggressive routing node could be
> willing to spend up to that amount to take down a competitor.
>
> Suppose the network consists only of 1 BTC, 483 slot channels. What should
> the fwd and bwd fees be so that even an attacked routing node will still
> earn that 5% (not through forwarding fees, but through hold fees) in both
> the controlled and the uncontrolled spam scenario?
>
> - Joost
>
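As a rough back-of-envelope for Joost's sizing question (all figures are his hypothetical assumptions, not a recommendation), here is the hold-fee rate at which a fully jammed node would still earn 5% per year:

```python
# Joost's scenario: a 1 BTC, 483-slot channel whose operator wants hold fees
# alone to yield 5% per year on committed capital if the channel is jammed
# continuously. We express the answer as msat per slot per second of jamming;
# the per-second framing is one possible way to size a flat-after-grace fee.
SATS_PER_BTC = 100_000_000
capital_sats = 1 * SATS_PER_BTC
target_yearly_sats = 0.05 * capital_sats      # 5_000_000 sats/year
seconds_per_year = 365 * 24 * 3600
slots = 483

msat_per_slot_second = target_yearly_sats * 1000 / (slots * seconds_per_year)
print(round(msat_per_slot_second, 3))  # ~0.328 msat per slot-second
```

Under these assumptions, an attacker holding one slot for a minute would owe on the order of 20 msat; the interesting part is how far that is above the 1 msat forward fee floated earlier in the thread.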
___
Lightning-dev mailing 

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-22 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

Sorry in advance for the lengthy email, but I think it's worth detailing my
hybrid proposal (bidirectional upfront payments); it feels to me like a
workable solution that builds on previous proposals. You can safely ignore
the details at the end of the email and focus only on the high-level
mechanism at first.

Let's consider the following route: A -> B -> C -> D

We add a `hold_grace_period_delta` field to `channel_update` (in seconds).
We add two new fields in the tlv extension of `update_add_htlc`:

* `hold_grace_period` (seconds)
* `hold_fees` (msat)

We add an `outgoing_hold_grace_period` field in the onion per-hop payload.

When nodes receive an `update_add_htlc`, they verify that:

* `hold_fees` is not unreasonably large
* `hold_grace_period` is not unreasonably small or large
* `hold_grace_period` - `outgoing_hold_grace_period` >=
`hold_grace_period_delta`

Otherwise they immediately fail the HTLC instead of relaying it.
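As a sketch, the relay-time checks above could look like this (the bound values are placeholder policy numbers I made up, not part of the proposal):

```python
# Relay-time validation of an incoming `update_add_htlc` carrying the
# proposed tlv fields. Bounds are illustrative local policy, not spec values.
MAX_HOLD_FEES_MSAT = 100
MIN_GRACE, MAX_GRACE = 30, 600        # seconds
HOLD_GRACE_PERIOD_DELTA = 10          # advertised in our channel_update

def accept_htlc(hold_fees, hold_grace_period, outgoing_hold_grace_period):
    """Return True if the HTLC may be relayed, False to fail it immediately."""
    if hold_fees > MAX_HOLD_FEES_MSAT:
        return False
    if not MIN_GRACE <= hold_grace_period <= MAX_GRACE:
        return False
    # Keep enough grace time for ourselves after forwarding downstream.
    return hold_grace_period - outgoing_hold_grace_period >= HOLD_GRACE_PERIOD_DELTA

print(accept_htlc(5, 100, 90))   # True  (matches the A -> B hop below)
print(accept_htlc(5, 100, 95))   # False (delta too small, fail the HTLC)
```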

For the example we assume all nodes use `hold_grace_period_delta = 10`.

We add a forward upfront payment of 1 msat (fixed) that is paid
unconditionally when offering an HTLC.
We add a backwards upfront payment of `hold_fees` that is paid when
receiving an HTLC, but refunded
if the HTLC is settled before the `hold_grace_period` ends (see footnotes
about this).

* A sends an HTLC to B:
  * `hold_grace_period = 100 sec`
  * `hold_fees = 5 msat`
  * `next_hold_grace_period = 90 sec`
  * forward upfront payment: 1 msat is deducted from A's main output and
    added to B's main output
  * backwards upfront payment: 5 msat are deducted from B's main output and
    added to A's main output
* B forwards the HTLC to C:
  * `hold_grace_period = 90 sec`
  * `hold_fees = 6 msat`
  * `next_hold_grace_period = 80 sec`
  * forward upfront payment: 1 msat is deducted from B's main output and
    added to C's main output
  * backwards upfront payment: 6 msat are deducted from C's main output and
    added to B's main output
* C forwards the HTLC to D:
  * `hold_grace_period = 80 sec`
  * `hold_fees = 7 msat`
  * `next_hold_grace_period = 70 sec`
  * forward upfront payment: 1 msat is deducted from C's main output and
    added to D's main output
  * backwards upfront payment: 7 msat are deducted from D's main output and
    added to C's main output

* Scenario 1: D settles the HTLC quickly:
  * all backwards upfront payments are refunded (returned to the respective
    main outputs)
  * only the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)

* Scenario 2: D settles the HTLC after the grace period:
  * D's backwards upfront payment is not refunded
  * if C and B relay the settlement upstream quickly (before
    `hold_grace_period_delta`), their backwards upfront payments are refunded
  * all the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)

* Scenario 3: C delays the HTLC:
  * D settles before its `grace_period`, so its backwards upfront payment is
    refunded by C
  * C delays before settling upstream: it can ensure B will not get refunded,
    but C will not get refunded either, so B gains the difference in backwards
    upfront payments (which protects against `controlled spam`)
  * all the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)

* Scenario 4: the channel B <-> C closes:
  * D settles before its `grace_period`, so its backwards upfront payment is
    refunded by C
  * for whatever reason (malicious or not) the B <-> C channel closes
  * this ensures that C's backwards upfront payment is paid to B
  * if C publishes an HTLC-fulfill quickly, B may have his backwards upfront
    payment refunded by A
  * if B is forced to wait for his HTLC-timeout, his backwards upfront
    payment will not be refunded, but it's ok because B got C's backwards
    upfront payment
  * all the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)
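To make the flows above concrete, here is a small sketch that tallies the net upfront-fee balance change per node, using the example's numbers (a minimal model of scenarios 1 and 3; everything else in the proposal is abstracted away):

```python
# Balance flows for A -> B -> C -> D with the example's fees: the forward
# upfront payment is a fixed 1 msat per hop; the backward upfront payments
# are 5/6/7 msat, paid by the receiver and refunded on timely settlement.
ROUTE = ["A", "B", "C", "D"]
FWD_FEE = 1                            # msat, never refunded
HOLD_FEES = {"B": 5, "C": 6, "D": 7}   # msat paid by each receiver, refundable

def settle(refunded):
    """Net balance change (msat) per node; `refunded` lists the receivers
    whose backward upfront payment is returned (settled within grace)."""
    net = {n: 0 for n in ROUTE}
    for sender, receiver in zip(ROUTE, ROUTE[1:]):
        net[sender] -= FWD_FEE               # forward upfront payment
        net[receiver] += FWD_FEE
        if receiver not in refunded:         # backward payment kept by sender
            net[receiver] -= HOLD_FEES[receiver]
            net[sender] += HOLD_FEES[receiver]
    return net

# Scenario 1: D settles quickly, every backward payment is refunded.
print(settle({"B", "C", "D"}))  # {'A': -1, 'B': 0, 'C': 0, 'D': 1}

# Scenario 3: C delays. D was still refunded by C, but neither B nor C gets
# refunded upstream, so B nets the hold-fee difference (6 - 5 = 1 msat).
print(settle({"D"}))            # {'A': 4, 'B': 1, 'C': -6, 'D': 1}
```

The scenario 3 tally shows the `controlled spam` deterrent: the delaying node C strictly loses, and every honest upstream node breaks even or gains.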

If done naively, this mechanism may allow intermediate nodes to deanonymize
the sender/recipient. If the base `grace_period` and `hold_fees` are
randomized, I believe this attack vector disappears, but it's worth
exploring in more detail.

The most painful part of this proposal will be handling the `grace_period`:

* when do you start counting: when you send/receive `update_add_htlc`,
`commit_sig` or
`revoke_and_ack`?
* what happens if there is a disconnection (how do you account for the
delay of reconnecting)?
* what happens if the remote settles after the `grace_period`, but refunds
himself when sending his
`commit_sig` (making it look like from his point of view he settled before
the `grace_period`)?
I think in that case the behavior should be to give your peers some leeway
and let them get away
with it, but record it. If they're doing it too often, close channels and
ban them; stealing
upfront fees should never be worth losing channels.
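The leeway-then-ban policy above could be tracked with something as simple as the following sketch; the threshold, class and method names are hypothetical, not taken from any implementation:

```python
class GracePeriodTracker:
    """Track peers that settle after the grace period but refund themselves.

    Policy sketched above: give peers some leeway, record each occurrence,
    and close channels / ban once it happens too often.
    """

    def __init__(self, max_late_settles: int = 10):
        self.max_late_settles = max_late_settles
        self.late_settles = {}  # peer_id -> count of tolerated violations

    def record_late_settle(self, peer_id: str) -> bool:
        """Record a violation; return True when the peer should be banned."""
        self.late_settles[peer_id] = self.late_settles.get(peer_id, 0) + 1
        return self.late_settles[peer_id] > self.max_late_settles
```

The key property is that the tolerated amount stays small: stealing a few upfront fees is never worth losing the channel.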

I chose to make the backwards upfront payment fixed instead of scaling it
based on the time an HTLC
is left pending; it's slightly less penalizing for spammers, but is less
complex and 

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-19 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

I've started summarizing proposals, attacks and threat models on github [1].
I'm hoping it will help readers get up-to-speed and avoid falling into the
same pitfalls we already fell into with previous proposals.

I've kept it very high-level for now; we can add nitty-gritty technical
details as we slowly
converge towards acceptable solutions. I have probably missed subtleties
from previous proposals;
feel free to contribute to correct my mistakes. I have omitted, for example,
the details of Rusty's previous proposal since he mentioned a new, better
one that will be described soon.

While doing this exercise, I couldn't find a reason why the `reverse
upfront payment` proposal
would be broken (notice that I described it using a flat amount after a
grace period, not an amount
based on the time HTLCs are held). Can someone point me to the most obvious
attacks on it?

It feels to me that its only issue is that it still allows spamming for
durations smaller than the grace period; my gut feeling is that if we add a
smaller forward-direction upfront payment to complement it, it could be a
working solution.

Pasting it here for completeness:

### Reverse upfront payment

This proposal builds on the previous one, but reverses the flow. Nodes pay
a fee for *receiving*
HTLCs instead of *sending* them.

```text
A -> B -> C -> D

B pays A to receive the HTLC.
Then C pays B to receive the forwarded HTLC.
Then D pays C to receive the forwarded HTLC.
```

There must be a grace period during which no fees are paid; otherwise the
`uncontrolled spam` attack
allows the attacker to force all nodes in the route to pay fees while he's
not paying anything.

The fee cannot be the same at each hop, otherwise it's free for the
attacker when he is at both
ends of the payment route.

This fee must increase as the HTLC travels downstream: this ensures that
nodes that hold HTLCs
longer are penalized more than nodes that fail them fast, and if a node has
to hold an HTLC for a
long time because it's stuck downstream, they will receive more fees than
what they have to pay.

The grace period cannot be the same at each hop either, otherwise the
attacker can force Bob to be
the only one to pay fees. Similarly to how we have `cltv_expiry_delta`,
nodes must have a
`grace_period_delta` and the `grace_period` must be bigger upstream than
downstream.
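Putting the two constraints together (hold fees increasing downstream, grace periods increasing upstream), a per-hop schedule could be computed as in this sketch; all constants and names are hypothetical placeholders, not values from the proposal:

```python
HOLD_FEE_BASE_MSAT = 1000    # hypothetical fee paid by the hop closest to the sender
HOLD_FEE_DELTA_MSAT = 200    # each downstream hop pays this much more
GRACE_PERIOD_BASE_S = 30     # hypothetical grace period at the final hop
GRACE_PERIOD_DELTA_S = 10    # each upstream hop gets this much more time

def hold_fee_schedule(num_hops: int) -> list:
    """Per-hop (hold_fee_msat, grace_period_s); index 0 is closest to the sender.

    Invariants from the proposal: fees strictly increase downstream, and
    grace periods are strictly bigger upstream (like cltv_expiry_delta).
    """
    schedule = []
    for i in range(num_hops):
        fee = HOLD_FEE_BASE_MSAT + i * HOLD_FEE_DELTA_MSAT
        grace = GRACE_PERIOD_BASE_S + (num_hops - 1 - i) * GRACE_PERIOD_DELTA_S
        schedule.append((fee, grace))
    return schedule
```

With these constants, a 4-hop route yields fees of 1000/1200/1400/1600 msat and grace periods of 60/50/40/30 seconds, so an intermediate node stuck holding an HTLC receives more from its downstream peer than it pays its upstream peer.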

Drawbacks:

* The attacker can still lock HTLCs for the duration of the `grace_period`
and repeat the attack
continuously

Open questions:

* Does the fee need to be based on the time the HTLC is held?
* What happens when a channel closes and HTLC-timeout has to be redeemed
on-chain?
* Can we implement this without exposing the route length to intermediate
nodes?

Cheers,
Bastien

[1] https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md

On Sun, Oct 18, 2020 at 09:25, Joost Jager wrote:

> > We've looked at all kinds of trustless payment schemes to keep users
>>
>> > honest, but it appears that none of them is satisfactory. Maybe it is
>> even
>> > theoretically impossible to create a scheme that is trustless and has
>> all
>> > the properties that we're looking for. (A proof of that would also be
>>
>> > useful information to have.)
>>
>> I don't think anyone has drawn yet a formal proof of this, but roughly a
>> routing peer Bob, aiming to prevent resource abuse at HTLC relay is seeking
>> to answer the following question: "Will this payment coming from Alice and
>> going to Caroll compensate for my resource consumption?". With the
>> current LN system, the compensation is conditional on payment settlement
>> success, and both Alice and Caroll are distrusted yet discretionary on
>> failure/success. Thus the underscored question is undecidable for a routing
>> peer making relay decisions only on packet observation.
>>
>> One way to mitigate this, is to introduce statistical observation of
>> sender/receiver, namely a reputation system. It can be achieved through a
>> scoring system, web-of-trust, or whatever other solution with the same
>> properties.
>> But still it must be underscored that statistical observations are only
>> probabilistic and don't provide resource consumption security to Bob, the
>> routing peer, in a deterministic way. A well-scored peer may start to
>> suddenly misbehave.
>>
>> In that sense, the efficiency of a reputation-based solution to deter DoS
>> must be evaluated based on the loss to the reputation bearer relative to
>> the potential damage which can be inflicted. It's just that reputation
>> sounds harder to compute accurately than a pure payment-based DoS
>> protection system.
>>
>
> I can totally see the issues and complexity of a reputation-based system.
> With 'trustless payment scheme' I meant indeed a trustless pure
> payment-based DoS protection system and the question whether such a system
> can be proven to not exist. A sender would pay an up-front amount to cover
> the maximum cost, but with the 

Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
To be honest the current protocol can be hard to grasp at first (mostly
because it's hard to reason
about two commit txs being constantly out of sync), but from an
implementation's point of view I'm
not sure your proposals are simpler.

One of the benefits of the current HTLC state machine is that once you
describe your state as a set
of local changes (proposed by you) plus a set of remote changes (proposed
by them), where each of
these is split between proposed, signed and acked updates, the flow is
straightforward to implement
and deterministic.

The only tricky part (where we've seen recurring compatibility issues) is
what happens on
reconnections. But it seems to me that the only missing requirement in the
spec is on the order of
messages sent, and more specifically that if you are supposed to send a
`revoke_and_ack`, you must
send that first (or at least before sending any `commit_sig`). Adding test
scenarios in the spec
could help implementers get this right.

It's a bit tricky to get it right at first, but once you get it right you
don't need to touch that
code again and everything runs smoothly. We're pretty close to that state,
so why would we want to
start from scratch? Or am I missing something?

Cheers,
Bastien

On Tue, Oct 13, 2020 at 13:58, Christian Decker wrote:

> I wonder if we should just go the tried-and-tested leader-based
> mechanism:
>
>  1. The node with the lexicographically lower node_id is determined to
> be the leader.
>  2. The leader receives proposals for changes from itself and the peer
> and orders them into a logical sequence of changes
>  3. The leader applies the changes locally and streams them to the peer.
>  4. Either node can initiate a commitment by proposing a `flush` change.
>  5. Upon receiving a `flush` the nodes compute the commitment
> transaction and exchange signatures.
>
> This is similar to your proposal, but does away with turn changes (it's
> always the leader's turn), and therefore reduces the state we need to
> keep track of (and re-negotiate on reconnect).
>
> The downside is that we add a constant overhead to one side's
> operations, but since we pipeline changes, and are mostly synchronous
> during the signing of the commitment tx today anyway, this comes out to
> 1 RTT for each commitment.
>
> On the other hand a token-passing approach (which I think is what you
> propose) requires a synchronous token handover whenever the direction
> of the updates changes. This is assuming I didn't misunderstand the turn
> mechanics of your proposal :-)
>
> Cheers,
> Christian
>
> Rusty Russell  writes:
> > Hi all,
> >
> > Our HTLC state machine is optimal, but complex[1]; the Lightning
> > Labs team recently did some excellent work finding another place the spec
> > is insufficient[2].  Also, the suggestion for more dynamic changes makes
> it
> > more difficult, usually requiring forced quiescence.
> >
> > The following protocol returns to my earlier thoughts, with cost of
> > latency in some cases.
> >
> > 1. The protocol is half-duplex, with each side taking turns; opener
> first.
> > 2. It's still the same form, but it's always one-direction so both sides
> >stay in sync.
> > update+-> commitsig-> <-revocation <-commitsig revocation->
> > 3. A new message pair "turn_request" and "turn_reply" let you request
> >when it's not your turn.
> > 4. If you get an update in reply to your turn_request, you lost the race
> >and have to defer your own updates until after peer is finished.
> > 5. On reconnect, you send two flags: send-in-progress (if you have
> >sent the initial commitsig but not the final revocation) and
> >receive-in-progress (if you have received the initial commitsig
> >but not received the final revocation).  If either is set,
> >the sender (as indicated by the flags) retransmits the entire
> >sequence.
> >Otherwise, (arbitrarily) opener goes first again.
> >
> > Pros:
> > 1. Way simpler.  There is only ever one pair of commitment txs for any
> >given commitment index.
> > 2. Fee changes are now deterministic.  No worrying about the case where
> >the peer's changes are also in flight.
> > 3. Dynamic changes can probably happen more simply, since we always
> >negotiate both sides at once.
> >
> > Cons:
> > 1. If it's not your turn, it adds 1 RTT latency.
> >
> > Unchanged:
> > 1. Database accesses are unchanged; you need to commit when you send or
> >receive a commitsig.
> > 2. You can use the same state machine as before, but one day (when
> >this would be compulsory) you'll be able to significantly simplify;
> >you'll need to record the index at which HTLCs were changed
> >(added/removed) in case your peer wants you to rexmit, though.
> >
> > Cheers,
> > Rusty.
> >
> > [1] This is my fault; I was persuaded early on that optimality was more
> > important than simplicity in a classic nerd-snipe.
> > [2] 
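As an illustration of point 5 of Rusty's protocol, the reconnection decision could be expressed as below; the flag and return names are mine, not from the spec:

```python
def who_retransmits(sent_initial_commitsig: bool, sent_final_revocation: bool,
                    recv_initial_commitsig: bool, recv_final_revocation: bool) -> str:
    """Decide who retransmits the in-flight sequence after a reconnection.

    Sketch of Rusty's rule: each side derives send-in-progress /
    receive-in-progress flags, and the sender indicated by the flags
    retransmits the entire update sequence.
    """
    send_in_progress = sent_initial_commitsig and not sent_final_revocation
    receive_in_progress = recv_initial_commitsig and not recv_final_revocation
    if send_in_progress:
        return "local"   # we were mid-sequence as sender: we retransmit it all
    if receive_in_progress:
        return "remote"  # peer was mid-sequence as sender: they retransmit
    return "opener"      # nothing in flight: opener (arbitrarily) goes first
```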

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
Hey laolu,

I think this fits in nicely with the "parameter re-negotiation" portion of
> my
> loose Dynamic commitments proposal.


Yes, maybe it's better to not offer two mechanisms and wait for dynamic
commitments to offer that
flexibility.

Instead, you may
> want to only allow them to utilize say 10% of the available HTLC bandwidth,
> slowly increasing based on successful payments, and drastically
> (multiplicatively) decreasing when you encounter very long lived HTLCs, or
> an excessive number of failures.


Exactly, that's the kind of heuristic I had in mind. Peers need to slowly
build trust before you
give them access to more resources.
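The "slow start" heuristic described in the quoted message is essentially additive-increase/multiplicative-decrease, as in TCP congestion control. A minimal sketch with made-up constants (none of this is from an actual implementation):

```python
class HtlcBandwidthAllocator:
    """Additive-increase / multiplicative-decrease 'slow start' for a peer's
    share of a channel's outgoing HTLC bandwidth (constants are illustrative)."""

    def __init__(self, capacity_msat: int):
        self.capacity = capacity_msat
        self.allowed = capacity_msat // 10        # new peers start at ~10%

    def on_successful_payment(self) -> None:
        # additive increase: slowly extend trust after each success
        self.allowed = min(self.capacity, self.allowed + self.capacity // 100)

    def on_bad_behavior(self) -> None:
        # multiplicative decrease: long-held HTLCs or excessive failures
        self.allowed = max(self.capacity // 100, self.allowed // 2)
```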

This is
> possible to some degree today (by using an implicit value lower than
> the negotiated values), but the implicit route doesn't give the other party
> any information


Agreed, it's easy to implement locally but it's not going to be very nice
to your peer, who has
no way of knowing why you're rejecting HTLCs and may end up closing the
channel because it sees
weird behavior. That's why we need to offer an explicit re-negotiation of
these parameters, let's
keep this use-case in mind when designing dynamic commitments!

Cheers,
Bastien

On Mon, Oct 12, 2020 at 20:59, Olaoluwa Osuntokun wrote:

>
> > I suggest adding tlv records in `commitment_signed` to tell our channel
> > peer that we're changing the values of these fields.
>
> I think this fits in nicely with the "parameter re-negotiation" portion of
> my
> loose Dynamic commitments proposal. Note that in that paradigm, something
> like this would be a distinct message, and also only be allowed with a
> "clean commitment" (as otherwise what if I reduce the number of slots to a
> value that is lower than the number of active slots?). With this, both
> sides
> would be able to propose/accept/deny updates to the flow control parameters
> that can be used to either increase the security of a channel, or implement
> a sort of "slow start" protocol for any new peers that connect to you.
>
> Similar to congestion window expansion/contraction in TCP, when a new peer
> connects to you, you likely don't want to allow them to be able to consume
> all the newly allocated bandwidth in an outgoing direction. Instead, you
> may
> want to only allow them to utilize say 10% of the available HTLC bandwidth,
> slowly increasing based on successful payments, and drastically
> (multiplicatively) decreasing when you encounter very long lived HTLCs, or
> an excessive number of failures.
>
> A dynamic HTLC bandwidth allocation mechanism would serve to mitigate
> several classes of attacks (supplementing any mitigations by "channel
> acceptor" hooks), and also give forwarding nodes more _control_ of exactly
> how their allocated bandwidth is utilized by all connected peers.  This is
> possible to some degree today (by using an implicit value lower than
> the negotiated values), but the implicit route doesn't give the other party
> any information, and may end up in weird re-send loops (as the _why_ an
> HTLC was rejected wasn't communicated). Also if you end up in a half-signed
> state, since we don't have any sort of "unadd", then the channel may end up
> borked if the violating party keeps retransmitting the same update upon
> reconnection.
>
> > Are there other fields you think would need to become dynamic as well?
>
> One other value that IMO should be dynamic to protect against future
> unexpected events is the dust limit. "It Is Known", that this value
> "doesn't
> really change", but we should be able to upgrade _all_ channels on the fly
> if it does for w/e reason.
>
> -- Laolu
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
I totally agree with the simplicity argument. I wanted to raise this because
it's (IMO) an issue today due to the way we deal with on-chain fees, but it's
less impactful once update_fee is scoped to some min_relay_fee.

Let's put this aside for now then and we can revisit later if needed.

Thanks for the feedback everyone!
Bastien

On Mon, Oct 12, 2020 at 20:49, Olaoluwa Osuntokun wrote:

> > It seems to me that the "funder pays all the commit tx fees" rule exists
> > solely for simplicity (which was totally reasonable).
>
> At this stage, I've learned that simplicity (when doing anything that
> involves multi-party on-chain fee negotiation/verification/enforcement)
> can really go a long way. Just think about all the edge cases w.r.t
> _allocating
> enough funds to pay for fees_ we've discovered over the past few years in
> the state machine. I fear adding a more elaborate fee splitting mechanism
> would only blow up the number of obscure edge cases that may lead to a
> channel temporarily or permanently being "borked".
>
> If we're going to add a "fairer" way of splitting fees, we'll really need
> to
> dig down pre-deployment to ensure that we've explored any resulting edge
> cases within our solution space, as we'll only be _adding_ complexity to
> fee
> splitting.
>
> IMO, anchor commitments in their "final form" (fixed fee rate on commitment
> transaction, only "emergency" use of update_fee) significantly simplifies
> things as it shifts from "funder pays fees" to "broadcaster/confirmer pays
> fees". However, as you note this doesn't fully distribute the worst-case
> cost of needing to go to chain with a "fully loaded" commitment
> transaction.
> Even with HTLCs, they could only be signed at 1 sat/byte from the funder's
> perspective, once again putting the burden on the broadcaster/confirmer to
> make up the difference.
>
> -- Laolu
>
>
> On Mon, Oct 5, 2020 at 6:13 AM Bastien TEINTURIER via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Good morning list,
>>
>> It seems to me that the "funder pays all the commit tx fees" rule exists
>> solely for simplicity
>> (which was totally reasonable). I haven't been able to find much
>> discussion about this decision
>> on the mailing list nor in the spec commits.
>>
>> At first glance, it's true that at the beginning of the channel lifetime,
>> the funder should be
>> responsible for the fee (it's his decision to open a channel after all).
>> But as time goes by and
>> both peers earn value from this channel, this rule becomes questionable.
>> We've discovered since
>> then that there is some risk associated with having pending HTLCs
>> (flood-and-loot type of attacks,
>> pinning, channel jamming, etc).
>>
>> I think that *in some cases*, fundees should be paying a portion of the
>> commit-tx on-chain fees,
>> otherwise we may end up with a web-of-trust network where channels would
>> only exist between peers
>> that trust each other, which is quite limiting (I'm hoping we can do
>> better).
>>
>> Routing nodes may be at risk when they *receive* HTLCs. All the attacks
>> that steal funds come from
>> the fact that a routing node has paid downstream but cannot claim the
>> upstream HTLCs (correct me
>> if that's incorrect). Thus I'd like nodes to pay for the on-chain fees of
>> the HTLCs they offer
>> while they're pending in the commit-tx, regardless of whether they're
>> funder or fundee.
>>
>> The simplest way to do this would be to deduce the HTLC cost (172 *
>> feerate) from the offerer's
>> main output (instead of the funder's main output, while keeping the base
>> commit tx weight paid
>> by the funder).
>>
>> A more extreme proposal would be to tie the *total* commit-tx fee to the
>> channel usage:
>>
>> * if there are no pending HTLCs, the funder pays all the fee
>> * if there are pending HTLCs, each node pays a proportion of the fee
>> proportional to the number of
>> HTLCs they offered. If Alice offered 1 HTLC and Bob offered 3 HTLCs, Bob
>> pays 75% of the
>> commit-tx fee and Alice pays 25%. When the HTLCs settle, the fee is
>> redistributed.
>>
>> This model uses the on-chain fee as collateral for usage of the channel.
>> If Alice wants to forward
>> HTLCs through this channel (because she has something to gain - routing
>> fees), she should be taking
>> on some of the associated risk, not Bob. Bob will be taking the same risk
>> downstream if he chooses
>> to forw
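The two fee-splitting proposals quoted above can be sketched numerically. Per BOLT 3, each HTLC output adds 172 weight units to the commitment transaction and fees are computed per kiloweight; everything else here (function names, the assumption that Alice is the funder) is illustrative:

```python
HTLC_OUTPUT_WEIGHT = 172  # weight units added per HTLC output (BOLT 3)

def offerer_pays_htlc_fees(base_weight: int, feerate_per_kw: int,
                           alice_offered: int, bob_offered: int):
    """First proposal: the funder pays the base commit-tx fee, and each peer
    pays the on-chain cost of the HTLCs it offered."""
    base_fee = base_weight * feerate_per_kw // 1000
    alice_fee = alice_offered * HTLC_OUTPUT_WEIGHT * feerate_per_kw // 1000
    bob_fee = bob_offered * HTLC_OUTPUT_WEIGHT * feerate_per_kw // 1000
    return base_fee, alice_fee, bob_fee

def proportional_fee_split(total_fee: int, alice_offered: int, bob_offered: int):
    """Second proposal: the whole fee is split in proportion to the pending
    HTLCs each peer offered (Alice is assumed to be the funder)."""
    pending = alice_offered + bob_offered
    if pending == 0:
        return total_fee, 0  # no pending HTLCs: the funder pays everything
    alice_share = total_fee * alice_offered // pending
    return alice_share, total_fee - alice_share
```

With 1 HTLC offered by Alice and 3 by Bob, `proportional_fee_split` gives Alice 25% and Bob 75% of the fee, matching the example in the quoted message.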

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-12 Thread Bastien TEINTURIER via Lightning-dev
Good morning,

For instance, Tor is basically two-layer: there is a lower-level TCP/IP
> layer where packets are sent out to specific nodes on the network and this
> layer is completely open about where the packet should go, but there is a
> higher layer where onion routing between nodes is used.
> We could imitate this, with HTLC packets that openly show the next
> destination node, but once all parts reach the destination node, it decodes
> and turns out to be an onion to be sent to the next destination node, and
> the current destination node is just another forwarder.


That's an interesting comment, it may be worth exploring.
IIUC you're suggesting that payments may look like this:

* Alice wants to reach Dave by going through Bob and Carol
* An onion encodes the route Alice -> Bob -> Carol -> Dave
* When Bob receives that onion and discovers that Carol is the next node,
he finds a route to Carol
and sends it along that route, but it's not an onion, it's "clear-text"
routing
* When Carol receives that message, she unwraps the Alice -> Bob -> Carol
-> Dave onion to discover
that Dave is the next hop and applies the same steps as Bob

It looks a lot like Trampoline, but Trampoline does onion routing between
intermediate nodes.
Your proposal would replace that with a potentially more efficient but less
private routing scheme.
As long as the Trampoline route does use onion routing, it could make
sense...

For your proposal, how sure is the receiver that the input end of the
> trampoline node is "nearer" to the payer than itself?


Invoices to the rescue!
Since lightning payments are invoice-based, recipients would add to the
invoice a few nodes that
are close to them (or a partial route, which would probably be better for
privacy).

Thanks,
Bastien

On Sun, Oct 11, 2020 at 10:50, ZmnSCPxj wrote:

> Good morning t-bast,
>
> > Hey Zman,
> >
> > > raising the minimum payment size is another headache
> >
> > It's true that it may (depending on the algorithm) lower the success
> rate of MPP-split.
> > But it's already a parameter that node operators can configure at will
> (at channel creation time),
> > so IMO it's a complexity we have to deal with anyway. Making it dynamic
> shouldn't have a high
> > impact on MPP algorithms (apart from failures while `channel_update`s
> are propagating).
>
> Right, it should not have much impact.
>
> For the most part, when considering the possibility of splicing in the
> future, we should consider that such parameters must be made changeable
> largely.
>
>
> >
> > To be fully honest, my (maybe unpopular) opinion about MPP is that it's
> not necessary on the
> > network's backbone, only at its edges. Once the network matures, I
> expect channels between
> > "serious" routing nodes to be way bigger than the size of individual
> payments. The only places
> > where there may be small or almost-empty channels are between end-users
> (wallets) and
> > routing nodes.
> > If something like Trampoline were to be implemented, MPP would only be
> needed to reach a
> > first routing node (short route), that routing node would aggregate the
> parts and forward as a
> > single HTLC to the next routing node. It would be split again once it
> reaches the other edge
> > of the network (for a short route as well). In a network like this, the
> MPP routes would only have
> > to be computed on a small subset of the network, which makes brute-force
> algorithms completely
> > reasonable and the success rate higher.
>
> This makes me wonder if we really need the onions-per-channel model we
> currently use.
>
> For instance, Tor is basically two-layer: there is a lower-level TCP/IP
> layer where packets are sent out to specific nodes on the network and this
> layer is completely open about where the packet should go, but there is a
> higher layer where onion routing between nodes is used.
>
> We could imitate this, with HTLC packets that openly show the next
> destination node, but once all parts reach the destination node, it decodes
> and turns out to be an onion to be sent to the next destination node, and
> the current destination node is just another forwarder.
>
> HTLC packets could be split arbitrarily, and later nodes could potentially
> merge with the lower CLTV used in subsequent hops.
>
> Or not, *shrug*.
> It has the bad problem of being more expensive on average than purely
> source-based routing, and probably having worse payment latency.
>
>
> For your proposal, how sure is the receiver that the input end of the
> trampoline node is "nearer" to the payer than itself?
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-09 Thread Bastien TEINTURIER via Lightning-dev
Hey Zman,

raising the minimum payment size is another headache
>

It's true that it may (depending on the algorithm) lower the success rate
of MPP-split.
But it's already a parameter that node operators can configure at will (at
channel creation time),
so IMO it's a complexity we have to deal with anyway. Making it dynamic
shouldn't have a high
impact on MPP algorithms (apart from failures while `channel_update`s are
propagating).
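To make the MPP concern concrete, here is a naive equal-split sketch showing how raising `htlc_minimum_msat` can rule out a split entirely; the function name and splitting strategy are illustrative, not from eclair:

```python
def split_payment(amount_msat: int, max_part_msat: int, htlc_minimum_msat: int):
    """Split a payment into roughly equal parts no larger than max_part_msat.

    Returns None when the channel's htlc_minimum_msat makes the split
    impossible, illustrating how raising that limit constrains MPP.
    """
    num_parts = -(-amount_msat // max_part_msat)  # ceiling division
    base = amount_msat // num_parts
    if base < htlc_minimum_msat:
        return None  # the minimum rules out this fine-grained split
    parts = [base] * num_parts
    parts[0] += amount_msat - base * num_parts  # remainder goes to one part
    return parts
```

A 100,000 msat payment with 30,000 msat parts splits fine against a 1,000 msat minimum, but fails against a 40,000 msat minimum, which is exactly the failure mode MPP algorithms would have to handle when these limits become dynamic.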

To be fully honest, my (maybe unpopular) opinion about MPP is that it's not
necessary on the
network's backbone, only at its edges. Once the network matures, I expect
channels between
"serious" routing nodes to be way bigger than the size of individual
payments. The only places
where there may be small or almost-empty channels are between end-users
(wallets) and
routing nodes.
If something like Trampoline were to be implemented, MPP would only be
needed to reach a
first routing node (short route), that routing node would aggregate the
parts and forward as a
single HTLC to the next routing node. It would be split again once it
reaches the other edge
of the network (for a short route as well). In a network like this, the MPP
routes would only have
to be computed on a small subset of the network, which makes brute-force
algorithms completely
reasonable and the success rate higher.

This is an interesting fork of the discussion, but I don't think it's a good
reason to prevent these parameters from being updated on live channels; what
do you think?

Bastien


On Thu, Oct 8, 2020 at 22:05, ZmnSCPxj wrote:

> Good morning t-bast,
>
> > Please forget about channel jamming, upfront fees et al and simply
> consider the parameters I'm
> > mentioning. It feels to me that these are by nature dynamic channel
> parameters (some of them are
> > even present in `channel_update`, but no-one updates them yet because
> direct peers don't take the
> > update into account anyway). I'd like to raise `htlc_minimum_msat` on
> some big channels because
> > I'd like these channels to be used only for big-ish payments. Today I
> can't, I have to close that
> > channel and open a new one for such a trivial configuration update,
> which is sad.
>
> At the risk of once more derailing the conversation: from the MPP
> trenches, raising the minimum payment size is another headache.
> The general assumption with MPP is that smaller amounts are more likely to
> get through, but if anyone is making a significant bump up in
> `htlc_minimum_msat`, that assumption is upended and we have to reconsider
> if we may actually want to merge multiple failing splits into one, as well
> as considering asymmetric splits (in particular asymmetric presplits)
> because maybe the smaller splits will be unable to pass through the bigger
> channels but the bigger-side split *might*.
>
> On the other hand: one can consider that the use of big payments as an
> aggregation.
> For example: a forwarding node might support smaller `htlc_minimum_msat`,
> then after making multiple such forwards, find that a channel is now
> heavily balanced towards one side or another.
> It can then make a single large rebalance via one of the
> high-`htlc_minimum_msat` channels t-bast is running.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-08 Thread Bastien TEINTURIER via Lightning-dev
Thanks (again) Antoine and Zman for your answers,

On the other hand, a quick skim of your proposal suggests that it still
> respects the "initiator pays" principle.
> Basically, the fundee only pays fees for HTLCs they initiated, which is
> not relevant to the above attack (since in the above attack, my node is a
> dead end, you will never send out an HTLC through my channel to rebalance).
> So it should still be acceptable.


I agree, my proposal would have the same result as today's behavior in that
case.
Unless your throw-away node waited for me to add an HTLC in its channel, in
which case I would pay part of the fee (since I'm adding that HTLC). That
leans towards the first of my two proposals, where the funder always pays
the "base" fee and HTLC fees are split depending on who proposed the HTLC.

The channel initiator shouldn't have to pay for channel-closing as it's
> somehow a liquidity allocation decision


I agree 100%. Especially since mutual closing should be preferred most of
the time.

That said, a channel closing might be triggered due to a security
> mechanism, like a HTLC to timeout onchain. Thus a malicious counterparty
> can easily loop a HTLC forwarding on an honest peer. Then not cancel it
> on-time to force the honest counterparty to pay onchain fees to avoid a
> offered HTLC not being claimed back on time.


Yes, this is an issue, but the only way to fix it today is to never be the
funder and always be the fundee, and I think that creates unhealthy,
asymmetric incentives.

This is a scenario where the other node will only burn you once; if you
notice that behavior you'll
be forced to pay on-chain fees, but you'll ban this peer. And if he opened
the channel to you, he'll
still be paying the "base" fee. I don't think there's a silver bullet here
where you can completely
avoid being bitten by such malicious nodes, but you can reduce exposure and
ban them after the fact.

Another note on using a minimal relay fee; in a potential future where
on-chain fees are always
high and layer 1 is consistently busy, even that minimal relay fee will be
costly. You'll want your
peer to pay for the HTLCs it's responsible for to split the on-chain fee
more fairly. So I believe
moving (slightly) away from the "funder pays all" model is desirable (or at
least it's worth
exploring seriously in order to have a better reason to dismiss it than
"it's simpler").

Does that make sense?

Thanks,
Bastien

On Tue, Oct 6, 2020 at 18:30, Antoine Riard wrote:

> Hello Bastien,
>
> I'm all in for a model where channel transactions are pre-signed with a
> reasonable minimal relay fee and the adjustment is done by the closer. The
> channel initiator shouldn't have to pay for channel-closing as it's somehow
> a liquidity allocation decision ("My balance could be better allocated
> elsewhere than in this channel").
>
> That said, a channel closing might be triggered due to a security
> mechanism, like a HTLC to timeout onchain. Thus a malicious counterparty
> can easily loop a HTLC forwarding on an honest peer. Then not cancel it
> on-time to force the honest counterparty to pay onchain fees to avoid a
> offered HTLC not being claimed back on time.
>
> AFAICT, this issue is not solved by anchor outputs. A way to disincentivize
> this kind of behavior from a malicious counterparty is an upfront payment
> where the upheld HTLC fee * HTLC block-buffer-before-onchain is higher
> than the cost of going onchain. It should cost more for the counterparty
> to withhold a HTLC than paying onchain-fees to close the channel.
>
> Or can you think about another mitigation for the issue raised above ?
>
> Antoine
>
> On Mon, Oct 5, 2020 at 09:13, Bastien TEINTURIER via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Good morning list,
>>
>> It seems to me that the "funder pays all the commit tx fees" rule exists
>> solely for simplicity
>> (which was totally reasonable). I haven't been able to find much
>> discussion about this decision
>> on the mailing list nor in the spec commits.
>>
>> At first glance, it's true that at the beginning of the channel lifetime,
>> the funder should be
>> responsible for the fee (it's his decision to open a channel after all).
>> But as time goes by and
>> both peers earn value from this channel, this rule becomes questionable.
>> We've discovered since
>> then that there is some risk associated with having pending HTLCs
>> (flood-and-loot type of attacks,
>> pinning, channel jamming, etc).
>>
>> I think that *in some cases*, fundees should be paying a portion of the
>> commit-tx on-chain fees,
>> otherwise we may end up with a web-of-trust network where channels wou

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-08 Thread Bastien TEINTURIER via Lightning-dev
Good morning Antoine and Zman,

Thanks for your answers!

I was thinking dynamic policy adjustment would be covered by the dynamic
> commitment mechanism proposed by Laolu


I didn't mention this as I think we still have a long-ish way to go before
dynamic commitments
are spec-ed, implemented and deployed, and I think the parameters I'm
interested in don't require
that complexity to be updated.

Please forget about channel jamming, upfront fees et al and simply consider
the parameters I'm
mentioning. It feels to me that these are by nature dynamic channel
parameters (some of them are
even present in `channel_update`, but no-one updates them yet because
direct peers don't take the
update into account anyway). I'd like to raise `htlc_minimum_msat` on some
big channels because
I'd like these channels to be used only for big-ish payments. Today I
can't, I have to close that
channel and open a new one for such a trivial configuration update, which
is sad.

There is no need to stop the channel's operations while you're updating
these parameters, since
they can be updated unilaterally anyway. The only downside is that if you
make your policy stricter,
your peer may send you some HTLCs that you will immediately fail
afterwards; it's only a minor
inconvenience that won't trigger a channel closure.

I'd like to know if other implementations than eclair have specificities
that would make this
feature particularly hard to implement or undesirable.

Thanks,
Bastien

Le mar. 6 oct. 2020 à 18:43, ZmnSCPxj  a écrit :

> Good morning Antoine, and Bastien,
>
>
> > Instead of relying on reputation, the other alternative is just to have
> an upfront payment system, where a relay node doesn't have to account for an
> HTLC issuer's reputation to decide acceptance and can just forward an HTLC
> as long as it paid enough. Moreover, I think it's better to mitigate jamming
> with a fee-based system than a web-of-trust one, less burden on network newcomers.
>
> Let us consider some of the complications here.
>
> A newcomer wants to make an outgoing payment.
> Speculatively, it connects to some existing nodes based on some policy.
>
> Now, since forwarding is upfront, the newcomer fears that the node it
> connected to might not even bother forwarding the payment, and instead just
> fail it and claim the upfront fees.
>
> In particular: how would the newcomer offer upfront fees to a node it is
> not directly channeled with?
> In order to do that, we would have to offer the upfront fees for that
> node, to the node we *are* channeled with, so it can forward this as well.
>
> * We can give the upfront fee outright to the first hop, and trust that if
> it forwards, it will also forward the upfront fee for the next hop.
>   * The first hop would then prefer to just fail the HTLC then and there
> and steal all the upfront fees.
> * After all, the offerer is a newcomer, and might be the sybil of a
> hacker that is trying to tie up its liquidity.
>   The first hop would (1) avoid this risk and (2) earn more upfront
> fees because it does not forward those fees to later hops.
>   * This is arguably custodial and not your keys not your coins applies.
> Thus, it returns us back to tr\*st anyway.
> * We can require that the first hop prove *where* along the route the error occurred.
>  If it provably failed at a later hop, then the first hop can claim more
> as upfront fees, since it will forward the upfront fees to the later hop as
> well.
>   * This has to be enforceable onchain in case the channel gets dropped
> onchain.
> Is there a proposal SCRIPT which can enforce this?
>   * If not enforceable onchain, then there may be onchain shenanigans
> possible and thus this solution might introduce an attack vector even as it
> fixes another.
> * On the other hand, sub-satoshi amounts are not enforceable onchain
> too, and nobody cares, so...
>
> On the other hand, a web-of-tr\*st might not be *that* bad.
>
> One can say that "tr\*st is risk", and consider that the size and age of a
> channel to a peer represents your tr\*st that that peer will behave
> correctly for fast and timely resolution of payments.
> And anyone can look at the blockchain and the network gossip to get an
> idea of who is generally considered tr\*stworthy, and since that
> information is backed by Bitcoins locked in channels, this is reasonably
> hard to fake.
>
> On the other hand, this risks centralization around existing, long-lived
> nodes.
> *Sigh*.
>
> Regards,
> ZmnSCPxj
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Incremental Routing (Was: Making (some) channel limits dynamic)

2020-10-08 Thread Bastien TEINTURIER via Lightning-dev
If I remember correctly, it looks very similar to how I2P establishes
tunnels, so it may be worth
diving into their documentation to fish for ideas.

However in their case the goal is to establish a long-lived tunnel, which
is why it's ok to have
a slow and costly protocol. It feels to me that for payments this is a lot
of messages and delays;
I'm not sure it's feasible at a reasonable scale...

Cheers,
Bastien

Le mer. 7 oct. 2020 à 19:34, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

> Good morning Antoine, Bastien, and list,
>
> > > Instead of relying on reputation, the other alternative is just to
> have an upfront payment system, where a relay node doesn't have to account
> for an HTLC issuer's reputation to decide acceptance and can just forward an
> HTLC as long as it paid enough. Moreover, I think it's better to mitigate
> jamming with a fee-based system than a web-of-trust one, less burden on network
> newcomers.
> >
> > Let us consider some of the complications here.
> >
> > A newcomer wants to make an outgoing payment.
> > Speculatively, it connects to some existing nodes based on some policy.
> >
> > Now, since forwarding is upfront, the newcomer fears that the node it
> connected to might not even bother forwarding the payment, and instead just
> fail it and claim the upfront fees.
> >
> > In particular: how would the newcomer offer upfront fees to a node it is
> not directly channeled with?
> > In order to do that, we would have to offer the upfront fees for that
> node, to the node we are channeled with, so it can forward this as well.
> >
> > -   We can give the upfront fee outright to the first hop, and trust
> that if it forwards, it will also forward the upfront fee for the next hop.
> > -   The first hop would then prefer to just fail the HTLC then and
> there and steal all the upfront fees.
> > -   After all, the offerer is a newcomer, and might be the
> sybil of a hacker that is trying to tie up its liquidity.
> > The first hop would (1) avoid this risk and (2) earn more
> upfront fees because it does not forward those fees to later hops.
> >
> > -   This is arguably custodial and not your keys not your coins
> applies.
> > Thus, it returns us back to tr\*st anyway.
> >
> > -   We can require that the first hop prove where along the route the
> error occurred.
> > If it provably failed at a later hop, then the first hop can claim
> more as upfront fees, since it will forward the upfront fees to the later
> hop as well.
> > -   This has to be enforceable onchain in case the channel gets
> dropped onchain.
> > Is there a proposal SCRIPT which can enforce this?
> >
> > -   If not enforceable onchain, then there may be onchain shenanigans
> possible and thus this solution might introduce an attack vector even as it
> fixes another.
> > -   On the other hand, sub-satoshi amounts are not enforceable
> onchain too, and nobody cares, so...
>
> One thing I have been thinking about, but have not proposed seriously yet,
> would be "incremental routing".
>
> Basically, the route of pending HTLCs also doubles as an encrypted
> bidirectional tunnel.
>
> Let me first describe how I imagine this "incremental routing" would look
> like.
>
> First, you offer an HTLC with a direct peer.
> The data with this HTLC includes a point, which the peer will ECDH with
> its own privkey, to form a shared secret.
> You can then send additional messages to that node, which it will decrypt
> using the shared secret as the symmetric encryption key.
> The node can also reply to those messages, by encrypting it with the same
> symmetric encryption key.
> Typically this will be via a stream cipher which is XORed with the real
> data.
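
To make the tunnel mechanics concrete, here is a toy sketch in Python. Everything
below is illustrative only: a real implementation would use secp256k1 ECDH and an
authenticated stream cipher such as ChaCha20-Poly1305, not a small modular-arithmetic
group and a SHA256 counter-mode keystream, and the private keys are arbitrary values.

```python
import hashlib

# Toy Diffie-Hellman group (NOT secp256k1; 2**127 - 1 is a Mersenne prime).
P = 2**127 - 1
G = 5

def keystream(key: bytes, length: int) -> bytes:
    """SHA256 in counter mode as a stand-in for a real stream cipher."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# The sender includes a "point" with the HTLC; the peer combines it with
# its own private key to derive the same shared secret (plain DH here).
a_priv, b_priv = 123456789, 987654321   # toy private keys
point_a = pow(G, a_priv, P)             # travels with the HTLC
point_b = pow(G, b_priv, P)
shared_sender = pow(point_b, a_priv, P)
shared_peer = pow(point_a, b_priv, P)   # same value on both sides

# Derive a symmetric key, then XOR messages with the keystream both ways.
key = hashlib.sha256(shared_sender.to_bytes(16, "big")).digest()
msg = b"please send out an HTLC to this peer of yours"
ct = xor_bytes(msg, keystream(key, len(msg)))   # sender encrypts
pt = xor_bytes(ct, keystream(key, len(ct)))     # peer decrypts
```

The same symmetric key serves both directions, which is what makes the pending
HTLC double as a bidirectional tunnel.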
>
> One of the messages you can send to that node (your direct peer) would be
> "please send out an HTLC to this peer of yours".
> Together with that message, you could also bump up the value of the HTLC,
> and possibly the CLTV delta, you have with that node.
> This bumping up is the forwarding fee and resolution time you have to give
> to that node in order to have it safely put an HTLC to the next hop.
>
> If there is a problem on the next hop, the node replies back, saying it
> cannot forward the HTLC further.
> Your node can then respond by giving an alternative next hop, which that
> node can reply back is also not available, etc. until you say "give up" and
> that node will just fail the HTLC.
>
> However, suppose the next hop is online and there is enough space in the
> channel.
> That node then establishes the HTLC with the next hop.
>
> At this point, you can then send a message to the direct peer which is
> nothing more than "send the rest of this message as the message to the next
> hop on the same HTLC, then wait for a reply and wrap it and reply to me".
> This is effectively onion-wrapping the message to the peer of your peer,
> and waiting for an onion-wrapped reply from the peer of your peer.
>
> You 

Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-05 Thread Bastien TEINTURIER via Lightning-dev
Hi darosior,

This is true, but we haven't yet solved how to estimate a good enough
`min_relay_fee` that works
for end-to-end tx propagation over the network.

We've discussed this during the last two spec meetings, but it's still
unclear whether we'll be able to solve
this before package-relay lands in bitcoin, so I wanted to explore this as
a potentially shorter-term
solution. But maybe it's not worth the effort and we should focus more on
anchors and `min_relay_fee`,
we'll see ;)

Bastien

Le lun. 5 oct. 2020 à 15:25, darosior  a écrit :

> Hi Bastien,
>
>
> I think that *in some cases*, fundees should be paying a portion of the
> commit-tx on-chain fees,
> otherwise we may end up with a web-of-trust network where channels would
> only exist between peers
> that trust each other, which is quite limiting (I'm hoping we can do
> better).
>
>
> Agreed.
> However in an anchor outputs future the funder only pays for the
> "backbone" fees of the channel and the fees necessary to secure the
> confirmation of transactions is paid in second stage by each interested
> party (*). It seems to me to be a reasonable middle-ground.
>
> (*) Credits to ZmnSCPxj for pointing this out to me on IRC.
>
> Darosior
>


[Lightning-dev] Why should funders always pay on-chain fees?

2020-10-05 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

It seems to me that the "funder pays all the commit tx fees" rule exists
solely for simplicity
(which was totally reasonable). I haven't been able to find much discussion
about this decision
on the mailing list nor in the spec commits.

At first glance, it's true that at the beginning of the channel lifetime,
the funder should be
responsible for the fee (it's his decision to open a channel after all).
But as time goes by and
both peers earn value from this channel, this rule becomes questionable.
We've discovered since
then that there is some risk associated with having pending HTLCs
(flood-and-loot type of attacks,
pinning, channel jamming, etc).

I think that *in some cases*, fundees should be paying a portion of the
commit-tx on-chain fees,
otherwise we may end up with a web-of-trust network where channels would
only exist between peers
that trust each other, which is quite limiting (I'm hoping we can do
better).

Routing nodes may be at risk when they *receive* HTLCs. All the attacks
that steal funds come from
the fact that a routing node has paid downstream but cannot claim the
upstream HTLCs (correct me
if that's incorrect). Thus I'd like nodes to pay for the on-chain fees of
the HTLCs they offer
while they're pending in the commit-tx, regardless of whether they're
funder or fundee.

The simplest way to do this would be to deduct the HTLC cost (172 *
feerate) from the offerer's
main output (instead of the funder's main output, while keeping the base
commit tx weight paid
by the funder).
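
A rough sketch of that accounting (the 172 weight units per HTLC comes from the
paragraph above; the 724 base commit-tx weight is the pre-anchor BOLT 3 figure,
used here purely for illustration):

```python
HTLC_OUTPUT_WEIGHT = 172   # weight units added to the commit tx per HTLC (see above)
BASE_COMMIT_WEIGHT = 724   # base commit-tx weight (pre-anchor BOLT 3 figure)

def fee_shares(feerate_per_kw: int, funder_offered: int,
               fundee_offered: int) -> tuple[int, int]:
    """Return (funder_fee, fundee_fee) in satoshis: the funder keeps paying
    for the base commit-tx weight, while each HTLC's 172 weight units are
    paid by whoever offered that HTLC."""
    per_htlc = HTLC_OUTPUT_WEIGHT * feerate_per_kw // 1000
    funder_fee = BASE_COMMIT_WEIGHT * feerate_per_kw // 1000 + per_htlc * funder_offered
    fundee_fee = per_htlc * fundee_offered
    return funder_fee, fundee_fee

# At 10_000 sat/kw with 1 funder HTLC and 2 fundee HTLCs pending:
print(fee_shares(10_000, 1, 2))  # (8960, 3440)
```

Each side's share would simply be deducted from its main output when the
commitment transaction is built.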

A more extreme proposal would be to tie the *total* commit-tx fee to the
channel usage:

* if there are no pending HTLCs, the funder pays all the fee
* if there are pending HTLCs, each node pays a share of the fee
proportional to the number of
HTLCs they offered. If Alice offered 1 HTLC and Bob offered 3 HTLCs, Bob
pays 75% of the
commit-tx fee and Alice pays 25%. When the HTLCs settle, the fee is
redistributed.

This model uses the on-chain fee as collateral for usage of the channel. If
Alice wants to forward
HTLCs through this channel (because she has something to gain - routing
fees), she should be taking
on some of the associated risk, not Bob. Bob will be taking the same risk
downstream if he chooses
to forward.

I believe it also forces the fundee to care about on-chain feerates, which
is a healthy incentive.
It may create a feedback loop between on-chain feerates and routing fees,
which I believe is also
a good long-term thing (but it's hard to predict as there may be negative
side-effects as well).

What do you all think? Is this a terrible idea? Is it okay-ish, but not
worth the additional
complexity? Is it an amazing idea worth a lightning nobel? Please don't
take any of my claims
for granted and challenge them, there may be negative side-effects I'm
completely missing, this is
a fragile game of incentives...

Side-note: don't forget to take into account that the fees for HTLC
transactions (second-level txs)
are always paid by the party that broadcasts them (which makes sense). I
still think this is not
enough and can even be abused by fundees in some setups.

Thanks,
Bastien


[Lightning-dev] Making (some) channel limits dynamic

2020-10-05 Thread Bastien TEINTURIER via Lightning-dev
Good evening list,

Recent discussions around channel jamming [1] have highlighted again the
need to think twice when
configuring your channels parameters. There are currently parameters that
are set once at channel
creation that would benefit a lot from being configurable throughout the
lifetime of the channel
to avoid closing channels when we just want to reconfigure them:

* max_htlc_value_in_flight_msat
* max_accepted_htlcs
* htlc_minimum_msat
* htlc_maximum_msat

Nodes can currently unilaterally update these by applying forwarding
heuristics, but it would be
better to tell our peer about the limits we want to put in place (otherwise
we're wasting a whole
cycle of add/commit/revoke/fail messages for no good reason).

I suggest adding tlv records in `commitment_signed` to tell our channel
peer that we're changing
the values of these fields.
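
Such records could reuse the BOLT 1 TLV encoding. A sketch below; note that the
type numbers are invented for illustration, and the actual even/odd assignment
and value formats would be settled in the spec:

```python
import struct

# Hypothetical TLV types for dynamic limits in `commitment_signed`
# (these numbers are made up for illustration).
TLV_HTLC_MINIMUM_MSAT = 32769
TLV_MAX_ACCEPTED_HTLCS = 32771

def bigsize(n: int) -> bytes:
    """BOLT 1 BigSize (varint) encoding."""
    if n < 0xfd:
        return bytes([n])
    if n < 0x10000:
        return b"\xfd" + struct.pack(">H", n)
    if n < 0x1_0000_0000:
        return b"\xfe" + struct.pack(">I", n)
    return b"\xff" + struct.pack(">Q", n)

def tlv_record(t: int, value: bytes) -> bytes:
    """type || length || value, both type and length BigSize-encoded."""
    return bigsize(t) + bigsize(len(value)) + value

# Tell our peer we're raising htlc_minimum_msat to 1_000_000 msat (1000 sat):
record = tlv_record(TLV_HTLC_MINIMUM_MSAT, struct.pack(">Q", 1_000_000))
```

The receiver would simply update its view of the peer's policy; as noted above,
no synchronization is required since these limits can already be tightened
unilaterally.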

Is someone opposed to that?
Are there other fields you think would need to become dynamic as well?
Do you think that needs a new message instead of using extensions of
`commitment_signed`?

Cheers,
Bastien

[1] https://twitter.com/joostjgr/status/1308414364911841281


Re: [Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-21 Thread Bastien TEINTURIER via Lightning-dev
Thanks for sharing this, I think it's the right time to start experimenting
with
that kind of feature (especially in the light of Taproot and the package
relay
work / pinning transactions issue).

we can start to experiment with flow-control like
> ideas such as limiting a new channel peer to only a handful of HTLC slots,
> which is then progressively increased based on "good behavior" (or the
> other
> way around as well)


Note that this is already possible today, a node can unilaterally decide its
internal rules for accepting channels/HTLCs. But it's true that it would be
nicer to communicate these rules with your peer to reduce inefficiencies
(e.g. proposing HTLCs that we know will be rejected).

* `open_channel` and `accept_channel` gain a new `channel_type` TLV field.
> * retroactively the OG commitment format is numbered as `channel_type=0`,
> `static_remote_key`, as `channel_type=1`, and anchors as
> `channel_type=2`


ACK! Internally eclair (and I believe lnd as well) has exactly that field in
its DB, with exactly those values.

* an empty `commit_sig` message (one that covers no updates) is
> disallowed, unless the `commit_sig` has a `channel_type`, `c_n` that
> differs from the channel type of the prior commitment, `c_n-1`.


That sounds reasonable, as changing the `channel_type` is actually an update
(it results in changes in the commit tx and/or htlc txs).

An alternative to attaching the `channel_type` message to the `commit_sig`
> and having _that_ kick off the commitment upgrade, we could instead
> possibly
> add a _new_ update message (like `update_fee`) to make the process more
> explicit. In either case, we may want to restrict things a bit by only
> allowing the initiator to trigger a commitment format update.


I prefer that alternative. I think it's better to explicitly signal that we
want to pause the channel while we upgrade the commitment format (and stop
accepting HTLCs while we're updating, like we do once we've exchanged the
`shutdown` message). Otherwise the asynchronicity of the protocol is likely
to create months (years?) of tracking down unwanted force-closes caused by
races between `commit_sig`s with the new and old commitment format.

Updating the commitment format should be a rare enough operation that we can
afford to synchronize with a two-way `update_commitment_format` handshake,
then
temporarily freeze the channel.
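
A sketch of that freeze as a state machine (the message name
`update_commitment_format` follows the suggestion above; the states and
transitions are illustrative, not a spec proposal):

```python
from enum import Enum, auto

class ChannelState(Enum):
    NORMAL = auto()
    UPGRADE_SENT = auto()   # sent update_commitment_format, awaiting peer's ack
    FROZEN = auto()         # handshake done: no new HTLCs until the upgrade completes

class Channel:
    def __init__(self) -> None:
        self.state = ChannelState.NORMAL

    def send_upgrade(self) -> None:
        assert self.state == ChannelState.NORMAL
        self.state = ChannelState.UPGRADE_SENT

    def on_upgrade_ack(self) -> None:
        # Two-way handshake complete: freeze, like after exchanging `shutdown`.
        assert self.state == ChannelState.UPGRADE_SENT
        self.state = ChannelState.FROZEN

    def can_add_htlc(self) -> bool:
        return self.state == ChannelState.NORMAL

    def on_upgrade_complete(self) -> None:
        # commit_sig exchanged under the new channel_type: resume normal operation.
        assert self.state == ChannelState.FROZEN
        self.state = ChannelState.NORMAL
```

Dangling updates received while in `UPGRADE_SENT` would be ignored or replayed
after completion, as discussed above.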

The tricky part will be how we handle "dangling" operations that were sent
by
the remote peer *after* we sent our `update_commitment_format` but *before*
they received it. The simplest choice is probably to have the initiator just
ignore these messages, and the non-initiator enqueue these un-acked messages
and replay them after the commitment format update completes (or just drop
them
and cancel corresponding upstream HTLCs if needed).

Regarding initiating the commitment format update, how do you see this
happening?
The funder activates a new feature on his (e.g. `option_anchor_outputs`),
and
broadcasts it in `init` and `node_announcement`, then waits until the remote
also activates it in its `init` message and then reacts to this by
triggering
the update process?

Thanks,
Bastien

Le mar. 21 juil. 2020 à 03:18, Olaoluwa Osuntokun  a
écrit :

> Hi y'all,
>
> In this post, I'd like to share an early version of an extension to the
> spec
> and channel state machine that would allow for on-the-fly commitment
> _format/type_ changes. Notably, this would allow for us to _upgrade_
> commitment types without any on-chain activity, executed in a
> de-synchronized and distributed manner. The core realization this proposal
> is based on the fact that the funding output is the _only_ component of a
> channel that's actually set in stone (requires an on-chain transaction to
> modify).
>
>
> # Motivation
>
> (you can skip this section if you already know why something like this is
> important)
>
> First, some motivation. As y'all are likely aware, the current deployed
> commitment format has changed once so far: to introduce the
> `static_remote_key` variant which makes channels safer by sending the funds
> of the party that was force closed on to a plain pubkey w/o any extra
> tweaks
> or derivation. This makes channel recovery safer, as the party that may
> have
> lost data (or can't continue the channel), no longer needs to learn of a
> secret value sent to them by the other party to be able to claim their
> funds. However, as this new format was introduced sometime after the
> initial
> bootstrapping phase of the network, most channels in the wild today _are
> not_ using this safer format.  Transitioning _all_ the existing channels to
> this new format as is, would require closing them _all_, generating tens of
> thousands of on-chain transactions (to close, then re-open), not to mention
> chain fees.
>
> With dynamic commitments, users will be able to upgrade their _existing_
> channels to new safer types, without any new on-chain transactions!
>
> Anchor output based 

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-22 Thread Bastien TEINTURIER via Lightning-dev
Hey ZmnSCPxj,

I agree that in theory this looks possible, but doing it in practice with
accurate control
of what parts of the network get what tx feels impractical to me (but maybe
I'm wrong!).

It feels to me that an attacker who would be able to do this would break
*any* off-chain
construction that relies on absolute timeouts, so I'm hoping this is
insanely hard to
achieve without cooperation from a subset of miners. Let me know if I'm too
optimistic on
this!

Cheers,
Bastien

Le lun. 22 juin 2020 à 10:15, ZmnSCPxj  a écrit :

> Good morning Bastien,
>
> > Thanks for the detailed write-up on how it affects incentives and
> centralization,
> > these are good points. I need to spend more time thinking about them.
> >
> > > This is one reason I suggested using independent pay-to-preimage
> > > transactions[1]
> >
> > While this works as a technical solution, I think it has some incentives
> issues too.
> > In this attack, I believe the miners that hide the preimage tx in their
> mempool have
> > to be accomplice with the attacker, otherwise they would share that tx
> with some of
> > their peers, and some non-miner nodes would get that preimage tx and be
> able to
> > gossip them off-chain (and even relay them to other mempools).
>
> I believe this is technically possible with current mempool rules, without
> miners cooperating with the attacker.
>
> Basically, the attacker releases two transactions with near-equal fees, so
> that neither can RBF the other.
> It releases the preimage tx near miners, and the timelock tx near
> non-miners.
>
> Nodes at the boundaries between those that receive the preimage tx and the
> timelock tx will receive both.
> However, they will receive one or the other first.
> Which one they receive first will be what they keep, and they will reject
> the other (and *not* propagate the other), because the difference in fees
> is not enough to get past the RBF rules (which requires not just a feerate
> increase, but also an increase in absolute fee, of at least the minimum
> relay feerate times transaction size).
>
> Because they reject the other tx, they do not propagate the other tx, so
> the boundary between the two txes is inviolate, neither can get past that
> boundary, this occurs even if everyone is running 100% unmodified Bitcoin
> Core code.
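
ZmnSCPxj's scenario can be checked against a simplified model of the BIP 125
replacement rules (only the higher-feerate and absolute-fee-increment conditions
are modeled here; the real rules include several more):

```python
MIN_RELAY_FEERATE = 1  # sat/vbyte: Bitcoin Core's default incremental relay feerate

def can_replace(old_fee: int, old_vsize: int, new_fee: int, new_vsize: int) -> bool:
    """Simplified BIP 125: the replacement must pay a strictly higher feerate
    AND increase the absolute fee by at least the incremental relay feerate
    times its own virtual size (other BIP 125 conditions omitted)."""
    higher_feerate = new_fee * old_vsize > old_fee * new_vsize
    enough_absolute_fee = new_fee >= old_fee + MIN_RELAY_FEERATE * new_vsize
    return higher_feerate and enough_absolute_fee

# Two conflicting txs with near-equal fees: neither can replace the other,
# so each node keeps whichever it saw first, and the boundary between the
# "preimage" and "timelock" partitions of the network holds.
preimage_fee, timelock_fee, vsize = 2_000, 2_050, 300
assert not can_replace(preimage_fee, vsize, timelock_fee, vsize)  # fee bump too small
assert not can_replace(timelock_fee, vsize, preimage_fee, vsize)  # lower feerate
```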
>
> I am not a mempool expert and my understanding may be incorrect.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-22 Thread Bastien TEINTURIER via Lightning-dev
Thanks for the detailed write-up on how it affects incentives and
centralization,
these are good points. I need to spend more time thinking about them.

This is one reason I suggested using independent pay-to-preimage
> transactions[1]
>

While this works as a technical solution, I think it has some incentives
issues too.
In this attack, I believe the miners that hide the preimage tx in their
mempool have
to be accomplice with the attacker, otherwise they would share that tx with
some of
their peers, and some non-miner nodes would get that preimage tx and be
able to
gossip them off-chain (and even relay them to other mempools).

If they are actively helping the attacker, they wouldn't spend the
pay-to-preimage tx,
unless they gain more from it than the share the attacker gives them. This
becomes
a simple bidding war, and the honest user will always be the losing party
here (the
attacker has nothing to lose). For this reason I'm afraid it wouldn't work
out in practice
as well as we'd hope...what do you think? And even if the honest user wins
the bidding
war, the attack still steals money from that user; it just goes into the
miner's pocket.

But from the perspective of a single LN node, it
> might make more sense to get the information and *not* share it
>

I think it depends. If this attack becomes doable in practice and we see it
happening,
LN routing nodes and service providers have a very high incentive to thwart
these attacks,
because otherwise they'd lose their business as people would leave the
lightning network.

As long as enough nodes think that way (with "enough" being a very
hard-to-define quantity),
this should mitigate the attack. The only risk would be a big "exit scam"
scenario, but the
coordination cost between all these nodes makes that scenario unlikely
(IMHO).

Thanks,
Bastien

Le sam. 20 juin 2020 à 12:37, David A. Harding  a écrit :

> On Sat, Jun 20, 2020 at 10:54:03AM +0200, Bastien TEINTURIER wrote:
> > We're simply missing information, so it looks like the only good
> > solution is to avoid being in that situation by having a foot in
> > miners' mempools.
>
> The problem I have with that approach is that the incentive is to
> connect to the highest hashrate pools and ignore the long tail of
> smaller pools and solo miners.  If miners realize people are doing this,
> they may begin to charge for information about their mempool and the
> largest miners will likely be able to charge more money per hashrate
> than smaller miners, creating a centralization force by increasing
> existing economies of scale.
>
> Worse, information about a node's mempool is partly trusted.  A node can
> easily prove what transactions it has, but it can't prove that it
> doesn't have a certain transaction.  This implies incumbent pools with a
> long record of trustworthy behavior may be able to charge more per
> hashrate than a newer pools, creating a reputation-based centralizing
> force that pushes individual miners towards well-established pools.
>
> This is one reason I suggested using independent pay-to-preimage
> transactions[1].  Anyone who knows the preimage can mine the
> transaction, so it doesn't provide reputational advantage or direct
> economies of scale---pay-to-preimage is incentive equivalent to paying
> normal onchain transaction fees.  There is an indirect economy of
> scale---attackers are most likely to send the low-feerate
> preimage-containing transaction to just the largest pools, so small
> miners are unlikely to learn the preimage and thus unlikely to be able
> to claim the payment.  However, if the defense is effective, the attack
> should rarely happen and so this should not have a significant effect on
> mining profitability---unlike monitoring miner mempools which would have
> to be done continuously and forever.
>
> ZmnSCPxj noted that pay-to-preimage doesn't work with PTLCs.[2]  I was
> hoping one of Bitcoin's several inventive cryptographers would come
> along and describe how someone with an adaptor signature could use that
> information to create a pubkey that could be put into a transaction with
> a second output that OP_RETURN included the serialized adaptor
> signature.  The pubkey would be designed to be spendable by anyone with
> the final signature in a way that revealed the hidden value to the
> pubkey's creator, allowing them to resolve the PTLC.  But if that's
> fundamentally not possible, I think we could advocate for making
> pay-to-revealed-adaptor-signature possible using something like
> OP_CHECKSIGFROMSTACK.[3]
>
> [1]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002664.html
> [2]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002667.html
> [3] https://bitcoinops.org/en/topics/op_checksigfromstack/
>
> > Do 

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-20 Thread Bastien TEINTURIER via Lightning-dev
Hello Dave and list,

Thanks for your quick answers!

The attacker would be broadcasting the latest
> state, so the honest counterparty would only need to send one blind
> child.
>

Exactly, if the attacker submits an outdated transaction he would be
shooting himself in the foot,
as we could claim the revocation paths when seeing the transaction in a
block and get all the
channel funds (since the attacker's outputs will be CSV-locked).

The only way your Bitcoin peer will relay your blind child
> is if it already has the parent transaction.
>

That's an excellent point that I missed in the blind CPFP carve-out trick!
I think this makes the
blind CPFP carve-out quite hard in practice (even using getdata - thanks
for detailing that option)...

In the worst case scenario where most miners' mempools contain the
attacker's tx and the rest of
the network's mempools contains the honest participant's tx, I think there
isn't much we can do.
We're simply missing information, so it looks like the only good solution
is to avoid being in that
situation by having a foot in miners' mempools. Do you think it's
unreasonable to expect at least
some LN nodes to also invest in running nodes in mining pools, ensuring
that they learn about
attackers' txs and can potentially share discovered preimages with the
network off-chain (by
gossiping preimages found in the mempool over LN)? I think that these
recent attacks show that
we need (at least some) off-chain nodes to be somewhat heavily invested in
on-chain operations
(layers can't be fully decoupled with the current security assumptions -
maybe Eltoo will help
change that in the future?).

Thank you for your time!
Bastien



Le ven. 19 juin 2020 à 22:53, David A. Harding  a écrit :

> On Fri, Jun 19, 2020 at 03:58:46PM -0400, David A. Harding via bitcoin-dev
> wrote:
> > I think you're assuming here that the attacker broadcast a particular
> > state.
>
> Whoops, I managed to confuse myself despite looking at Bastien's
> excellent explainer.  The attacker would be broadcasting the latest
> state, so the honest counterparty would only need to send one blind
> child.  However, the blind child will only be relayed by a Bitcoin peer
> if the peer also has the parent transaction (the latest state) and, if
> it has the parent transaction, you should be able to just getdata('tx',
> $txid) that transaction from the peer without CPFPing anything.  That
> will give you the preimage and so you can immediately resolve the HTLC
> with the upstream channel.
>
> Revising my conclusion from the previous post:
>
> I think the strongman argument for the attack would be that the attacker
> will be able to perform a targeted relay of the low-feerate
> preimage-containing transaction to just miners---everyone else on the
> network will receive the honest user's higher-feerate expired-timelock
> transaction.  Unless the honest user happens to have a connection to a
> miner's node, the user will neither be able to CPFP fee bump nor use
> getdata to retrieve the preimage.
>
> Sorry for the confusion.
>
> -Dave
>


Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-19 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

Sorry for being (very) late to the party on that subject, but better late
than never.

A lot of ideas have been thrown at the problem and are scattered across
emails, IRC discussions,
and github issues. I've spent some time putting it all together in one
gist, hoping that it will
help stir the discussion forward as well as give newcomers all the
background they need to ramp up
on this issue and join the discussion, bringing new ideas to the table.

The gist is here, and I'd appreciate your feedback if I have wrongly
interpreted some of the ideas:
https://gist.github.com/t-bast/22320336e0816ca5578fdca4ad824d12

Readers of this list can probably directly skip to the "Future work"
section. I believe my
"alternative proposal" should loosely reflect Matt's proposal from the very
first mail of this
thread; note that I included anchors and new txs only in some places, as I
think they aren't
necessary everywhere.

My current state-of-mind (subject to change as I discover more potential
attacks) is that:

* The proposal to add more anchors and pre-signed txs adds non-negligible
complexity and hurts
small HTLCs, so it would be better if we didn't need it
* The blind CPFP carve-out trick is a one shot, so you'll likely need to
pay a lot of fees for it
to work which still makes you lose money in case an attacker targets you
(but the money goes to
miners, not to the attacker - unless he is the miner). It's potentially
hard to estimate what fee
you should put into that blind CPFP carve-out because you have no idea what
the current fee of the
pinned success transaction package is, so it's unsure if that solution will
really work in practice
* If we take a step back, the only attack we need to protect against is an
attacker pinning a
preimage transaction while preventing us from learning that preimage for at
least `N` blocks
(see the gist for the complete explanation). Please correct me if that
claim is incorrect as it
will invalidate my conclusion! Thus if we have:
* a high enough `cltv_expiry_delta`
* [off-chain preimage broadcast](
https://github.com/lightningnetwork/lightning-rfc/issues/783)
(or David's proposal to do it by sending txs that can be redeemed via only
the preimage)
* LN hubs (or any party commercially investing in running a lightning node)
participating in
various mining pools to help discover preimages
* decent mitigations for eclipse attacks
* then the official anchor outputs proposal should be safe enough and is
much simpler?
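To make the `cltv_expiry_delta` point concrete, here is a rough back-of-the-envelope sketch of how one might size it against a preimage-pinning attack. All parameter names and numbers below are illustrative, not taken from any implementation or from the spec.

```python
# Rough sketch: is a channel's cltv_expiry_delta large enough to survive a
# preimage-pinning attack? All parameters and values are illustrative.

def min_safe_cltv_expiry_delta(
    blocks_to_detect_pin: int,      # N: blocks the attacker can hide the preimage
    blocks_to_confirm_claim: int,   # blocks needed to confirm our claim tx
    reorg_safety_margin: int = 6,   # extra margin against re-orgs
) -> int:
    """Lower bound on cltv_expiry_delta under these (simplified) assumptions."""
    return blocks_to_detect_pin + blocks_to_confirm_claim + reorg_safety_margin

# Example: attacker can pin for ~18 blocks, we need ~12 blocks to confirm.
delta = min_safe_cltv_expiry_delta(18, 12)
assert delta == 36
print(f"cltv_expiry_delta should be at least {delta} blocks")
```

The point is only that the delta must dominate the sum of the worst-case pinning window and our own confirmation time, which is why "high enough" matters in the list above.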

Thank you for reading, I hope the work I put into this gist will be useful
for some of you.

Bastien

On Fri, Apr 24, 2020 at 12:47 AM, Matt Corallo via bitcoin-dev
<bitcoin-...@lists.linuxfoundation.org> wrote:

>
>
> On 4/23/20 8:46 AM, ZmnSCPxj wrote:
> >>> -   Miners, being economically rational, accept this proposal and
> include this in a block.
> >>>
> >>> The proposal by Matt is then:
> >>>
> >>> -   The hashlock branch should instead be:
> >>> -   B and C must agree, and show the preimage of some hash H (hashlock
> branch).
> >>> -   Then B and C agree that B provides a signature spending the
> hashlock branch, to a transaction with the outputs:
> >>> -   Normal payment to C.
> >>> -   Hook output to B, which B can use to CPFP this transaction.
> >>> -   Hook output to C, which C can use to CPFP this transaction.
> >>> -   B can still (somehow) not maintain a mempool, by:
> >>> -   B broadcasts its timelock transaction.
> >>> -   B tries to CPFP the above hashlock transaction.
> >>> -   If CPFP succeeds, it means the above hashlock transaction exists
> and B queries the peer for this transaction, extracting the preimage and
> claiming the A->B HTLC.
> >>
> >> Note that no query is required. The problem has been solved and the
> preimage-containing transaction should now confirm just fine.
> >
> > Ah, right, so it gets confirmed and the `blocksonly` B sees it in a
> block.
> >
> > Even if C hooks a tree of low-fee transactions on its hook output or
> normal payment, miners will still be willing to confirm this and the B hook
> CPFP transaction without, right?
>
> Correct, once it makes it into the mempool we can CPFP it and all the
> regular sub-package CPFP calculation will pick it
> and its descendants up. Of course this relies on it not spending any other
> unconfirmed inputs.
> ___
> bitcoin-dev mailing list
> bitcoin-...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>


Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Bastien TEINTURIER via Lightning-dev
Hi Antoine and list,

Thanks for raising this. There's one step I'd like to understand further:

* Mallory can broadcast its Pinning Preimage Tx on offered HTLC #2 output
> on Alice's transaction,
> feerate is maliciously chosen to get in network mempools but never to
> confirm. Absolute fee must
> be higher than HTLC-timeout #2, a fact known to Mallory. There is no p2p
> race.
>

Can you detail how the "absolute fee" is computed here?
Doesn't that mean that if this has a higher fee than the htlc-timeout, and
the htlc-timeout fee was
chosen to confirm quickly (that's why we have an annoying `update_fee`),
then the htlc-success will also confirm
quickly (which makes the problem disappear)?
Because once the commit tx is confirmed, the "package" consists of only the
htlc-success, doesn't it?

I think the devil will be in the details here, so it's worth expanding on
the fee calculation imho.
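For readers unfamiliar with the rule being invoked here, this is a minimal sketch of the BIP 125 rule 3 comparison, showing why a higher-feerate htlc-timeout can still be rejected as a replacement for a pinned, higher-absolute-fee preimage transaction. The numbers are illustrative only.

```python
# Minimal sketch of BIP 125 rule 3 (absolute-fee comparison). Rules 4 and 5
# are ignored; all numbers are illustrative.

def rule3_allows_replacement(original_fee_sat: int, replacement_fee_sat: int) -> bool:
    # Rule 3: the replacement must pay an absolute fee at least equal to the
    # sum paid by the transactions it would evict.
    return replacement_fee_sat >= original_fee_sat

pinned_preimage_tx = {"fee": 50_000, "vsize": 50_000}  # 1 sat/vB: huge, low-feerate
htlc_timeout_tx = {"fee": 10_000, "vsize": 500}        # 20 sat/vB: small, high-feerate

# The honest tx has a much higher FEERATE...
assert htlc_timeout_tx["fee"] / htlc_timeout_tx["vsize"] > \
       pinned_preimage_tx["fee"] / pinned_preimage_tx["vsize"]
# ...yet is rejected as a replacement, because its ABSOLUTE fee is lower.
assert not rule3_allows_replacement(pinned_preimage_tx["fee"], htlc_timeout_tx["fee"])
```

This is the asymmetry the attack exploits: the attacker pays absolute fees, the victim bids feerate.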

Thanks!
Bastien

On Wed, Apr 22, 2020 at 10:01 AM, Antoine Riard wrote:

> Personally, I would have waited a bit before going public on this, e.g.
> letting some implementations
> increase their CLTV deltas, but anyway, it's here now.
>
> Mempool-pinning attacks were already discussed on this list [0], but what
> we found is that you
> can _reverse_ the scenario: it's not the malicious party delaying
> confirmation of the honest
> party's transactions, but the malicious party deliberately getting its own
> transactions stuck in the mempool to avoid
> confirmation of the timeout, thereby gaming inter-link timelocks to
> provoke an unbalanced
> settlement for the victim ("you pay forward, but don't get paid
> backward").
>
> How practical these attacks are depends on how you can leverage mempool
> rules to pin your own
> transaction. What you're looking for is a _mempool-obstruction_ trick,
> i.e. a way to get the honest party's
> transaction bounced off due to your transaction being already there.
>
> Beyond disabling RBF on your transaction (with current protocol, not
> anchor proposal), there are
> two likely candidates:
> * BIP 125 rule 3: "The replacement transaction pays an absolute fee of at
> least the sum paid by the original transactions."
> * BIP 125 rule 5: "The number of original transactions to be replaced and
> their descendant transactions which will be evicted from the mempool must
> not exceed a total of 100 transactions."
>
> Let's go through whole scenario:
> * Mallory and Eve are colluding
> * Eve and Mallory open channels with Alice; Mallory does a bit of
> rebalancing
> to get full incoming capacity, like receiving funds on an onchain address
> through another Alice
> link
> * Eve sends HTLC #1 to Mallory through Alice, expiring at block 100
> * Eve sends a second HTLC #2 to Mallory through Alice, expiring at block
> 110 on the outgoing link
> (A<->M), 120 on the incoming link (E<->A)
> * Before block 100, without cancellation from Mallory, Alice will
> force-close channel and broadcast
> her local commitment and HTLC-timeout to get back HTLC #1
> * Alice can't broadcast the HTLC-timeout for HTLC #2 as it only expires at
> 110
> * Mallory can broadcast its Pinning Preimage Tx on offered HTLC #2 output
> on Alice's transaction,
> feerate is maliciously chosen to get in network mempools but never to
> confirm. Absolute fee must
> be higher than HTLC-timeout #2, a fact known to Mallory. There is no p2p
> race.
> * As Alice doesn't watch the mempool, she is never going to learn the
> preimage to redeeem incoming
> HTLC #2
> * At block 110, Alice is going to broadcast HTLC-timeout #2, feerate may
> be higher but as absolute
> fee is lower, it's going to be rejected from network mempools as
> replacement for Pinning Preimage
> Tx (BIP 125 rule 3)
> * At block 120, Eve closes channel and HTLC-timeout HTLC #2
> * Mallory can RBF its Pinning Preimage Tx by a high-feerate one and get it
> confirmed
>
> The new anchor_output proposal, by disabling RBF, forces the attacker to
> bid on the absolute fee. There
> is now a risk of losing that fee if the Pinning Tx confirms. You may extend
> your "pinning
> lease" by ejecting your malicious tx, e.g. by conflicting or trimming one
> of its parents out of
> the mempool, and then reannouncing your preimage tx with a
> lower-feerate-but-still-high-fee before a
> new block and an honest HTLC-timeout rebroadcast.
>
> AFAICT, even with anchor_output deployed, and even assuming empty mempools,
> the success rate and economic
> rationality of these attacks comes down to finding such a cheap, reliable
> "pinning lease extension" trick.
>
> I think any mempool-watching mitigation is at best a cat-and-mouse hack.
> Contrary to nodes
> advancing towards a global blockchain view thanks to PoW, network mempools
> don't have a convergence
> guarantee. This means that, in a distributed system like bitcoin, nodes
> don't see events in the same
> order: Alice may observe tx X, tx Y, tx Z and Bob may observe tx Z, tx X,
> tx Y. And the order of events
> affects whether a future event is going to be rejected or not, like if tx Z
> disables RBF and tx X tries to
> replace Z, 

[Lightning-dev] A better encoding for lightning invoices

2020-04-01 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

In Bolt 11 we decided to use bech32 to encode lightning invoices.
While bech32 has some nice properties, it isn't well suited for invoices.
The main drawback of Bolt 11 invoices is their size: when you start adding
routing hints or rendezvous onions, invoices become huge and hard to share.

Empirical evidence shows that most lightning transactions are done over
Twitter
(73.41% of all lightning payments in 2019 were made via tweets).
Since Twitter only allows up to 280 characters per tweet, this has severely
impacted
the development of new features for lightning. Anything that made an
invoice bigger
ended up being unused as users were left without any option to share those
invoices.

After several months of research and experimentation at Acinq Research, we
have come
up with a highly efficient invoice encoding, optimized primarily for
Twitter. This has
caused further delays in the development of our iOS wallet, but we felt
this was a
much higher priority.

Our encoding uses an AI-optimized mapping from 11-bit words to Twitter
emojis.
Early results show that emoji invoices are less than half the size of
legacy invoices.
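In the same spirit, a toy version of the 11-bit-word packing might look like the following. The emoji table here is a throwaway placeholder, not the AI-optimized mapping from the spec PR, and `encode_emoji` is a name invented for this sketch.

```python
# Toy sketch: pack invoice bytes into 11-bit integers, each indexing a
# 2048-symbol emoji alphabet. Placeholder alphabet, zero-padded at the end.

EMOJI_ALPHABET = [chr(0x1F300 + i) for i in range(2048)]  # placeholder table

def encode_emoji(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)
    bits += "0" * (-len(bits) % 11)  # pad to a multiple of 11 bits
    return "".join(EMOJI_ALPHABET[int(bits[i:i + 11], 2)]
                   for i in range(0, len(bits), 11))

encoded = encode_emoji(b"lnbc1...")  # 8 bytes (64 bits) -> 6 emoji "words"
assert len(encoded) == 6
```

Each emoji carries 11 bits instead of bech32's 5 bits per character, which is where the size reduction would come from.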

Reckless users are already using this in production:

https://twitter.com/realtbast/status/1245258812279398400?s=20
https://twitter.com/acinq_co/status/1245258815597096960

There is a spec PR available at [1], along with reference eclair code [2].
We plan to release this feature in the next update of our Phoenix wallet
[3].

We'd like feedback from this list on how to improve this further. We
believe the
same encoding could be used to compress the bitcoin blockchain. With more
training
data, we believe our AI-optimized mapping could allow bitcoin blocks to fit
in a
single tweet; we would then be able to use Twitter feeds to store the whole
blockchain.

Cheers,
Bastien

[1] https://github.com/lightningnetwork/lightning-rfc/pull/762
[2] https://github.com/ACINQ/eclair/tree/emoji-encoding
[3] https://phoenix.acinq.co/


Re: [Lightning-dev] Blind paths revisited

2020-03-13 Thread Bastien TEINTURIER via Lightning-dev
Good morning ZmnSCPxj,

Ok I see what you mean. In that case it would be slightly different from
the current path blinding proposal.
The recipient would need to give the sender all the blinding points for
each hop in the blinded path.
Currently the recipient only gives one blinding point and then each node in
the blinded path is able to
compute the next blinding point and send it to the next node.

This may work, but I think it deserves a closer look. The security
assumption in multi-hop locks is that
the sender can choose every secret from a random distribution. If instead
these secrets are provided by
the recipient, this may open up some attack vectors on the sender. Maybe
the sender can tweak each
recipient secret with a secret of its own, but one would need to write the
exact maths down to verify that
it works end-to-end.
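To make the "tweak each recipient secret" idea concrete, here is a sketch of the scalar arithmetic only (working modulo the secp256k1 group order). Whether this is safe end-to-end is exactly the open question above; this just shows the algebra that would need to be analyzed.

```python
# Sketch: sender tweaks a recipient-provided secret so the final per-hop
# scalar is not fully controlled by the recipient. Scalar arithmetic only.
import secrets

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

recipient_secret = secrets.randbelow(N - 1) + 1  # chosen by the recipient
sender_tweak = secrets.randbelow(N - 1) + 1      # chosen by the sender

# The hop would end up using the combined scalar; the sender's tweak ensures
# the recipient alone no longer knows the final value.
combined = (recipient_secret + sender_tweak) % N

# The recipient can only recover the combined scalar if the tweak is revealed:
assert (combined - sender_tweak) % N == recipient_secret
```

The security question is then whether revealing/committing these partial scalars at the right protocol steps preserves the multi-hop lock guarantees, which is what "writing the exact maths down" would have to settle.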

I'm not saying it's impossible, I'm just saying that it's not trivial at
all and the devil is in the details ;)

Cheers,
Bastien

On Fri, Mar 13, 2020 at 1:42 AM, ZmnSCPxj wrote:

> Good morning tbast, rusty, and list,
>
>
> > As for ZmnSCPxj's suggestion, I think there is the same kind of issue.
> > The secrets we establish with anonymous multi-hops locks are between the
> *sender*
> > and each of the hops. In the route blinding case, what we're adding are
> secrets
> > between the *recipient* and the hops, and we don't want the sender to be
> able to
> > influence those. It's a kind of reverse Sphinx. So I'm not sure yet the
> recipient
> > could safely contribute to those secrets, but maybe we'll find a nice
> trick in
> > the future!
>
> Not quite?
>
> The recipient knows the secrets from the first recipient-selected-hop to
> itself, and, if it knows the payment scalar, can subtract those secrets
> from the receiver scalar.
> Thus the sender only has to arrange to deliver the payment point to the
> first recipient-selected-hop, the rest of the recipient-selected-hops will
> add their blinding scalars (which come from the recipient), and the final
> recipient can linearly deduct those.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Blind paths revisited

2020-03-11 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

Thanks Rusty for following up on this, I'm glad it may be useful for offers!
I certainly want this as well for wallet users' privacy.

I have gathered my proposal in a better format than my previous gist here:
https://github.com/lightningnetwork/lightning-rfc/blob/route-blinding/proposals/route-blinding.md

You will note that I've been able to simplify the scheme a bit compared to
my
gist. It's now very clear that this is exactly the same kind of secret
derivation as what Sphinx does. I still have things I want to add to the
proposal, but at least the crypto part should be ready to review (and I
think
it does need more eyes on it).

Feel free to add comments directly on the branch commits, it may be easier
to
review that way. Let me know if you think I should turn it into a draft PR
to
facilitate discussions. I kept it vague on some specific parts on purpose
(such as invoice fields, encrypted blob format); we will learn from early
prototype implementations and enrich the proposal as we go.

A few comments on your previous mails. I have removed the (ab)use of
`payment_secret`, but I think your comment on using the `blinding` to
replace
it would not work because that blinding is known by the next-to-last node
(which computes it and forwards it to the final node).
The goal of `payment_secret` is explicitly to avoid having the next-to-last
node
discover it to prevent him from probing. But I think that you didn't plan on
doing the blinding the same way I'm doing it, which may explain the
difference.

As for ZmnSCPxj's suggestion, I think there is the same kind of issue.
The secrets we establish with anonymous multi-hops locks are between the
*sender*
and each of the hops. In the route blinding case, what we're adding are
secrets
between the *recipient* and the hops, and we don't want the sender to be
able to
influence those. It's a kind of reverse Sphinx. So I'm not sure yet the
recipient
could safely contribute to those secrets, but maybe we'll find a nice trick
in
the future!

Cheers,
Bastien

On Wed, Mar 11, 2020 at 12:22 AM, Rusty Russell wrote:

> ZmnSCPxj  writes:
> > Good morning Rusty, et al.,
> >
> >
> >> Note that this means no payment secret is necessary, since the incoming
> >> `blinding` serves the same purpose. If we wanted to, we could (ab)use
> >> payment_secret as the first 32-bytes to put in Carol's enc1 (i.e. it's
> >> the ECDH for Carol to decrypt enc1).
> >
> > I confess to not reading everything in detail, but it seems to me that,
> with payment point + scalar and path decorrelation, we need to establish a
> secret with each hop anyway (the blinding scalar for path decorrelation),
> so if you need a secret per hop, possibly this could be reused as well?
>
> Indeed, this could be used the same way, though for that secret it can
> simply be placed inside the onion rather than passed alongside.
>
> Cheers,
> Rusty.


Re: [Lightning-dev] Sphinx Rendezvous Update

2020-02-25 Thread Bastien TEINTURIER via Lightning-dev
erate the prefill stream and insert it in the correct place,
>before the HMAC. This reconstitutes the original routing packet
>  - Swap out the original onion with the reconstituted onion and forward.
>
> My writeup [1] is an early draft, but I wanted to get it out early to
> give the discussion a basis to work off. I'll revisit it a couple of
> times before opening a PR, but feel free to shout at me if I have
> forgotten to consider something :-)
>
> Cheers,
> Christian
>
> [1]
> https://github.com/lightningnetwork/lightning-rfc/blob/rendez-vous/proposals/0001-rendez-vous.md
> [2] https://gist.github.com/cdecker/ec06452bc470749d9f6d2de73651c5fd
>
> Bastien TEINTURIER via Lightning-dev
>  writes:
> > Good morning list,
> >
> > After exploring decoys [1], which is a cheap way of doing route blinding,
> > I'm turning back to exploring rendezvous.
> > The previous mails on the mailing list mentioned that there was a
> > technicality
> > to make the HMACs check out, but didn't provide a lot of details.
> > The issue is that the filler generation needs to take into account some
> hops
> > that will be added *later*, by the payer.
> >
> > However it is quite easy to work-around, with a few space trade-offs.
> > Let's consider a typical rendezvous setup, where Alice wants to be paid
> via
> > rendezvous Bob, and Carol wants to pay that invoice:
> >
> > Carol -> ... -> Bob -> ... -> Alice
> >
> > If Alice knows how many bytes Carol is going to use for her part of the
> > onion
> > payloads, Alice can easily take them into account when generating her
> > filler by
> > pre-pending the same amount of `0` bytes. It seems reasonable to impose a
> > fixed
> > number of onion bytes for each side of the rendezvous (650 each?) so
> Alice
> > would
> > know that amount.
> >
> > When Carol completes the onion with her part of the route, she simply
> needs
> > to
> > generate filler data for her part of the route following the normal
> Sphinx
> > protocol
> > and apply it to the onion she found in the invoice.
> >
> > But the tricky part is that she needs to give Bob a way of generating the
> > same
> > filler data to unapply it. Then all HMACs correctly check out.
> >
> > I see two ways of doing that:
> >
> > * Carol simply sends that filler (650 bytes), probably via a TLV in
> > `update_add_htlc`.
> > This means every intermediate hop needs to forward that, which is painful
> > and
> > potentially leaking too much data.
> > * Carol provides Bob with the rho keys used to generate her filler, and
> the
> > length
> > used by each hop. This leaks to Bob an upper bound on the number of hops
> > and the
> > number of bytes sent to each hop.
> >
> > Since shift-and-xor kind of crypto is hard to read as equations, but very
> > easy to
> > read as diagrams, I spent a bit of time doing beautiful ASCII art [2].
> > Don't hesitate
> > to have a look at it to find more details about how that works. You can
> > also print
> > that on t-shirts to look fancy at conferences. I also have some sample
> code
> > working
> > in eclair [3] for those who can read Scala without getting headaches.
> >
> > Are there other tricks we can use to reconcile both sides of the onion at
> > Bob's?
> > Maybe cdecker (or someone else) has an ace up his sleeve for me there? :)
> >
> > One important thing to note is that rendezvous on normal onions will be
> > costly to
> > integrate into invoices: it takes 1366 bytes to include one onion, and if
> > we want
> > to handle route failures or let the sender use multi-part, we will need
> to
> > have a
> > handful of pre-encrypted onions in the invoice (hence a few kB, which may
> > not be
> > practical for QR codes).
> >
> > But I did mention before that doing rendezvous on the trampoline onion
> > could have
> > better properties [4]. When doing that, having Carol transmit her filler
> > data only
> > to Bob, via the outer onion payload becomes practical and doesn't leak
> > information.
> > Multi-part would work with a single trampoline onion in the invoice (~500
> > bytes),
> > because nodes can do MPP between trampoline nodes thanks to the
> > onion-in-onion
> > construction. We simply need to decide the size of the trampoline onion
> to
> > allow
> > each side of the rendezvous to be able to insert a number of hops we're
> > comfortable
> > with. You can find more details in the "Rendezvous on a trampoline"
> > section of [2].

[Lightning-dev] Sphinx Rendezvous Update

2020-02-24 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

After exploring decoys [1], which is a cheap way of doing route blinding,
I'm turning back to exploring rendezvous.
The previous mails on the mailing list mentioned that there was a
technicality
to make the HMACs check out, but didn't provide a lot of details.
The issue is that the filler generation needs to take into account some hops
that will be added *later*, by the payer.

However it is quite easy to work around, with a few space trade-offs.
Let's consider a typical rendezvous setup, where Alice wants to be paid via
rendezvous Bob, and Carol wants to pay that invoice:

Carol -> ... -> Bob -> ... -> Alice

If Alice knows how many bytes Carol is going to use for her part of the
onion
payloads, Alice can easily take them into account when generating her
filler by
pre-pending the same amount of `0` bytes. It seems reasonable to impose a
fixed
number of onion bytes for each side of the rendezvous (650 each?) so Alice
would
know that amount.
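As a toy illustration of this length accounting (not the real BOLT 4 construction: the keystream below is a stand-in for the actual ChaCha20 `rho` stream, and the per-hop shifting of real Sphinx filler is omitted):

```python
# Toy sketch: Sphinx-style filler is a running XOR of per-hop keystreams over
# an all-zero buffer. Alice sizes her buffer to include the region Carol will
# later fill, so Carol can XOR her own keystreams over that region afterwards
# and the HMACs still line up. SHAKE-256 stands in for the real rho stream.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    return hashlib.shake_256(key).digest(length)

CAROL_BYTES = 650  # bytes reserved for the payer's side of the rendezvous

def alice_filler(alice_hop_keys, alice_payload_len: int) -> bytes:
    # Zero buffer covering Carol's reserved region plus Alice's own payloads.
    filler = bytes(CAROL_BYTES + alice_payload_len)
    for key in alice_hop_keys:
        filler = bytes(a ^ b for a, b in zip(filler, keystream(key, len(filler))))
    return filler

filler = alice_filler([b"hop1-key", b"hop2-key"], alice_payload_len=130)
assert len(filler) == 780
```

The essential property is that XOR streams commute, so the two sides of the rendezvous can each apply their keystreams independently as long as the lengths were agreed in advance.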

When Carol completes the onion with her part of the route, she simply needs
to
generate filler data for her part of the route following the normal Sphinx
protocol
and apply it to the onion she found in the invoice.

But the tricky part is that she needs to give Bob a way of generating the
same
filler data to unapply it. Then all HMACs correctly check out.

I see two ways of doing that:

* Carol simply sends that filler (650 bytes), probably via a TLV in
`update_add_htlc`.
This means every intermediate hop needs to forward that, which is painful
and
potentially leaking too much data.
* Carol provides Bob with the rho keys used to generate her filler, and the
length
used by each hop. This leaks to Bob an upper bound on the number of hops
and the
number of bytes sent to each hop.

Since shift-and-xor kind of crypto is hard to read as equations, but very
easy to
read as diagrams, I spent a bit of time doing beautiful ASCII art [2].
Don't hesitate
to have a look at it to find more details about how that works. You can
also print
that on t-shirts to look fancy at conferences. I also have some sample code
working
in eclair [3] for those who can read Scala without getting headaches.

Are there other tricks we can use to reconcile both sides of the onion at
Bob's?
Maybe cdecker (or someone else) has an ace up his sleeve for me there? :)

One important thing to note is that rendezvous on normal onions will be
costly to
integrate into invoices: it takes 1366 bytes to include one onion, and if
we want
to handle route failures or let the sender use multi-part, we will need to
have a
handful of pre-encrypted onions in the invoice (hence a few kB, which may
not be
practical for QR codes).

But I did mention before that doing rendezvous on the trampoline onion
could have
better properties [4]. When doing that, having Carol transmit her filler
data only
to Bob, via the outer onion payload becomes practical and doesn't leak
information.
Multi-part would work with a single trampoline onion in the invoice (~500
bytes),
because nodes can do MPP between trampoline nodes thanks to the
onion-in-onion
construction. We simply need to decide the size of the trampoline onion to
allow
each side of the rendezvous to be able to insert a number of hops we're
comfortable
with. You can find more details in the "Rendezvous on a trampoline" section
of [2].

I'm really interested in other approaches to making rendezvous work with
the HMACs
correctly checking out. If people on this list have drafts, intuitions or
random
thoughts about possible constructions, please share them, I'd be happy to
dive into
them to explore alternatives to the one I found, hoping we can make this
work and
provide this feature to our users in the near future.

A small side-note on Hornet. Hornet does offer many features that I believe
we will
want in Lightning in the future. It may seem that doing a custom rendezvous
scheme
is a waste of time since we'll ditch it once/if we implement Hornet. While
that is
true in the long run, I believe that if we're able to find a rendezvous
scheme that
isn't too much work to implement, it makes sense to have something
available soon-ish.
Hornet will likely be a longer-term effort that we won't get as soon as
we'd like
(especially since it will probably require a network-wide update). But who
knows, maybe
we may see that we are trying to create many features that are already
built into Hornet
(rendezvous, directed message support, etc) and will decide to implement
Hornet sooner
than expected?

Cheers,
Bastien

[1]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-January/002435.html
[2] https://gist.github.com/t-bast/ab42a7f52eb2e73105557957c8359601
[3] https://github.com/ACINQ/eclair/tree/sphinx-rendezvous
[4]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-October/002237.html

Re: [Lightning-dev] Using libp2p as a communication protocol for Lightning

2020-02-17 Thread Bastien TEINTURIER
Exactly what Matt said.

I would also add that libp2p aims to be a kind of Swiss Army knife for p2p
networking: that's nice for many use-cases, but when security is your main
focus, it's not.
Look at TLS: most attacks are downgrade attacks because the protocol offers
way too many options.
Protocols like Wireguard have perfectly understood this. No options, not
many configuration hooks -> small, auditable codebase.

For lightning it's the same: we prefer a very simple transport that has no
options whatsoever.
Simple to implement, simple to test, and works great in practice.

Bastien

On Mon, Feb 17, 2020 at 6:00 PM, Matt Corallo wrote:

> Because writing connection logic and peer management is really not that
> complicated compared to HTLC state machines and the rest of lightning. For
> crypto, lightning does use the noise framework, though the resulting code is
> so simple (in a good way) that its super easy to just write it yourself
> instead of fighting with a dependency.
>
> Lastly, for self-respecting cryptocurrency developers,
> not-carefully-audited dependencies are security vulnerabilities that will
> expose your users’ funds. By pulling simple connection logic into a
> lightning implementation, it’s easier to test/fuzz/etc with the rest of a
> project.
>
> Matt
>
> On Feb 17, 2020, at 06:12, Alexandr Burdiyan  wrote:
>
> 
> Hi everyone!
>
> Since I recently started digging into all-things-peer-to-peer, I found
> that there’s a lot of fragmentation between many different projects that
> seemingly have a lot of things in common, like networking, encoding
> standards, etc. I suppose there are lots of historical reasons for that.
>
> More concretely for Lightning, I wonder why it couldn’t use some existing
> open source technologies and standards, like libp2p [1] for communication,
> or various multiformats [2] standards for addresses, hashes and encodings?
>
> I do think that building and evolving common toolkits and standards for
> decentralized system like libp2p, or multiformats, or IPLD [3] could be
> something very useful for the whole community. Currently, it feels like
> everyone wants to go so fast, so there’s no time for coordination and
> consensus to build these kinds of specs. That is understandable. But I
> wonder if Lightning community ever looked at projects like libp2p and
> multiformats, or maybe is considering to implement them in lightning. Or
> maybe there was a decision of not using them for some reason that I might
> be missing.
>
> [1]: https://libp2p.io
> [2]: https://multiformats.io
> [3]: https://ipld.io
>
> Thanks!
>
> Alexandr Burdiyan


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-13 Thread Bastien TEINTURIER
Damn you're good.

On Thu, Feb 13, 2020 at 11:44 AM, ZmnSCPxj wrote:

> Good morning t-bast,
>
> > > Propose we take the `z` to use as bolt11 letter, because even the
> French
> > > don't pronounce it in "rendez-vous"!)
> >
> > As long as Z-man didn't want to claim this bolt11 letter for himself or
> his
> > puppet army, that sounds good :).
>
> That would be too obvious.
> What I *am* claiming is `8` for my own use and for my non-existent army of
> city-marching robots, because it does not appear in my public alias at all,
> thus actively misleading surveillors.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-13 Thread Bastien TEINTURIER
Hey Rusty and list,

I was starting to think this whole thing was of marginal benefit: note
> that solving "private channels need a temp scid" is far simpler[1].


That's true, the simpler solution does break the on-chain / off-chain link
but
I think we can take this opportunity to also improve payee privacy to make
sure
two invoices can't leak that they are from the same payee.

Even better, this would be a replacement for current route hints


Definitely, this is clearly a good opportunity to re-work route hints to
something that can fix all the shortcomings of current route hints, thanks
for
your suggestions on that. And if anyone on this list has other fields that
may
be useful in these new route hints, please do say it.

Propose we take the `z` to use as bolt11 letter, because even the French
> don't pronounce it in "rendez-vous"!)


As long as Z-man didn't want to claim this bolt11 letter for himself or his
puppet army, that sounds good :).

I'll get started on an implementation, and will start working on a spec PR
as well. I'm hoping to get more reviews from anyone experienced with both
lightning and cryptography to verify that the scheme isn't broken. I'm
still offering beers and cocktails to anyone who cracks it [1]!

Thanks!
Bastien

[1] https://twitter.com/realtbast/status/1227233654503505925

On Thu, Feb 13, 2020 at 05:49, Rusty Russell  wrote:
>
> Bastien TEINTURIER  writes:
> > Hi Rusty,
> >
> > Thanks for the answer, and good luck with c-lightning 0.8.1-rc1 ;)
>
> ... Now -rc2.  I actually had a RL use for lightning (OMG!), and sure
> enough found a bug.
>
> > I've been thinking more about improving my scheme to not require any
> > sender change, but I don't think that's possible at the moment. As with
> > all Lightning tricks though, once we have Schnorr then it's really easy
> > to do.
> > Alice simply needs to use `s * d_a` as her "preimage" (and the payment
> > point becomes the P_I Bob needs). That may depend on the exact
> > multi-hop locks construction we end up using though, so I'm not 100%
> > sure about that yet.
>
> I was starting to think this whole thing was of marginal benefit: note
> that solving "private channels need a temp scid" is far simpler[1].
>
> But since your scheme extends to rendezvous, it's much more tempting!
>
> We would use this for normal private channels as well as private routes
> aka new rendezvous.  Even better, this would be a replacement for
> current route hints (which lack ability to specify feature bits, which
> we would add here, and is also grossly inefficient if you just want to
> use it for Routeboost[2]).
>
> Propose we take the `z` to use as bolt11 letter, because even the French
> don't pronounce it in "rendez-vous"!)
>
> Then use TLV inside:[3]
>
> * `z` (2): `data_length` variable. One or more entries containing extra
>   routing information; there may be more than one `z` field.  Each entry
>   looks like:
>* `tlv_len` (8 bits)
>* `rendezvous_tlv` (tlv_len bytes)
>
> 1. tlvs: `rendezvous_tlv`
> 2. types:
>1. type: 1 (`pubkey`)
>2. data:
>   * [`point`:`nodeid`]
>1. type: 2 (`short_channel_id`)
>2. data:
>   * [`short_channel_id`:`short_channel_id`]
>1. type: 3 (`fee_base_msat`)
>2. data:
>   * [`tu32`:`fee_base_msat`]
>1. type: 4 (`fee_proportional_millionths`)
>2. data:
>   * [`tu32`:`fee_proportional_millionths`]
>1. type: 5 (`cltv_expiry_delta`)
>2. data:
>   * [`tu16`:`cltv_expiry_delta`]
>1. type: 6 (`features`)
>2. data:
>   * [`...*byte`:`features`]
>
> That probably adds 6 bytes per entry, but worth it I think.
>
> Cheers,
> Rusty.
>
> [1] Add a new field to 'funding_locked': "private_scid".  If both sides
> support 'option_private_scid' (?) then the "real" scid is no longer
> valid for routing, and we use the private scid.
>
> [2] It's enough to give the scid(s) in this case indicating where you
> have incoming capacity.
>
> [3] I'm really starting to dislike my bolt11 format.  We should probably
> start afresh with a TLV-based one, where signature covers the hash
> of each entry (so they can be easily externalized!), but that's a
> big, unrelated task.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-11 Thread Bastien TEINTURIER
Hi Rusty,

Thanks for the answer, and good luck with c-lightning 0.8.1-rc1 ;)

(I think we should probably ban forwarding to private channels,
> too, for similar reasons).


Can you detail why? I believe that forwarding through private channels can
actually be pretty useful in the future for payee privacy (more on that
later).

Note that with any self-assigned SCID schemes, Alice has to respond to
> unknown scids in update_add_htlc with some BADONION code (which makes
> *Bob* give Carol an error response, since Alice can't without revealing
> her identity).


I believe the difference is that in your scheme, Bob would answer with
`unknown_next_peer`. When instead Alice responds with a `BADONION`, the
only thing it reveals is that Alice does use the decoy feature (which
Mallory already knows because she has seen an invoice from Alice). As long
as this behavior is consistent throughout the network, I think both options
offer the same privacy (unless I'm missing something).

I expect such payments to become
> significant, and as long as paying to a temporary id and paying to a
> private channel looks identical, it's too draconian to ban.


True, that must become the default flow for receiving payments on mobile
wallets. Granted, my solution would take longer to deploy because it needs
to be added to sender wallets before receivers can require it.

I've been thinking more about improving my scheme to not require any sender
change, but I don't think that's possible at the moment. As with all
Lightning tricks though, once we have Schnorr then it's really easy to do.
Alice simply needs to use `s * d_a` as her "preimage" (and the payment
point becomes the P_I Bob needs). That may depend on the exact multi-hop
locks construction we end up using though, so I'm not 100% sure about that
yet.

But I did come up with what could be an interesting development.
Nothing prevents the decoy scheme from being used for public channels too,
and for multiple hops: that enables a cheap form of rendezvous that only
costs a few hundred bytes in the invoice.

Alice would select multiple hops to a rendezvous node, and would apply some
blinding to those hops' `node_id` and `scid`. Alice would include these
decoy hops in the invoice `routing_hints` (which only costs 51 bytes per
hop instead of a full onion). Mallory would only learn an upper bound on
the distance between Alice and the rendezvous.
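As a rough illustration only (the real construction is in the gist [1];
this sketch is my own assumption, and it swaps secp256k1 for a toy
modular-exponentiation group), per-hop blinding could look like each hint
carrying an ephemeral point plus an xor-masked scid that only that hop can
unmask:

```python
# Toy sketch: ECDH-style blinding applied independently at each hop on the
# path to the rendezvous node. Group operations use pow() over a toy prime
# in place of secp256k1 scalar multiplication; the algebra is the same.
import hashlib
import secrets

P = 2**127 - 1   # toy prime modulus (NOT the real curve)
G = 3            # toy generator

def smul(k, point):
    # Stands in for EC scalar multiplication k * point.
    return pow(point, k, P)

def h64(x):
    # H(): hash the shared secret down to a 64-bit scid mask.
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:8], "big")

# Public node keys and real scids along Alice's chosen path (illustrative).
hops = []
for real_scid in (0x1111, 0x2222, 0x3333):
    priv = secrets.randbelow(P - 2) + 1
    hops.append({"priv": priv, "pub": smul(priv, G), "scid": real_scid})

# Alice blinds each hop's scid with a fresh ephemeral key per hop.
hints = []
for hop in hops:
    eph = secrets.randbelow(P - 2) + 1
    blinded = h64(smul(eph, hop["pub"])) ^ hop["scid"]
    hints.append({"P_I": smul(eph, G), "decoy_scid": blinded})

# Each hop unmasks its own entry with its private key: priv * P_I equals
# eph * pub, so the xor mask cancels and the real scid comes back.
for hop, hint in zip(hops, hints):
    assert h64(smul(hop["priv"], hint["P_I"])) ^ hint["decoy_scid"] == hop["scid"]
```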

I have a detailed version of the scheme in a gist [1] if people want to
take a deeper look and break it (beer on me to the first one who breaks
the scheme).

[1] https://gist.github.com/t-bast/9972bfe9523bb18395bdedb8dc691faf

Cheers,
Bastien

On Mon, Feb 10, 2020 at 04:40, Rusty Russell  wrote:
>
> Bastien TEINTURIER  writes:
> >> But Mallory can do the same attack, I think.  Just include the P_I from
> >> the wrong invoice for Bob.
> >
> > Good catch, that's true, thanks for keeping me honest there! In that
> > case my proposal would need the same mitigation as yours, Bob will need
> > to include the `scid` he received in `update_add_htlc` (this is in fact
> > not that hard once we allow TLV extensions on every message).
>
> Yes, I've added this to the PR.  Which gives a new validation path, I
> think:
>
> ## Figuring out what nodeid to use to decode onion
>
> 1. Look up scid from HTLC; if it didn't include one, use default.
> 2. Look up payment_hash; if no invoice is found, use default.
> 3. If invoice specified this scid, get nodeid and use that.
> 4. ... and refuse to forward the HTLC (it must terminate here).
>
> My plan is to add an argument to `invoice` which is an array of one or
> more scids: we get a temporary scids for each peer and use them in the
> routehints.  We also assign a random temporary nodeid to that invoice.
>
> The above algo is designed to ensure we behave like any other node which
> has no idea about this nodeid if Mallory:
>
> 1. tries to use a temporary node id on a normal channel to us.
> 2. tries to pay another invoice using this temporary node id.
> 3. tries to probe our outgoing channels using this routing hint
>(I think we should probably ban forwarding to private channels,
>too, for similar reasons).
>
> ---
>
> Note that with any self-assigned SCID schemes, Alice has to respond to
> unknown scids in update_add_htlc with some BADONION code (which makes
> *Bob* give Carol an error response, since Alice can't without revealing
> her identity).
>
> With Bob-assigned SCIDs, Alice simply needs to make him unallocate
> it before forgetting the invoice, so she will simply never see old
> invoices.
>
> (All these schemes give limited privacy, of course: Bob knows who Alice
> is, and fingerprinting and liveness attacks are always possible).
>
> I'm extremely nervous about custodial lightning

Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-05 Thread Bastien TEINTURIER
>
> But Mallory can do the same attack, I think.  Just include the P_I from
> the wrong invoice for Bob.
>

Good catch, that's true, thanks for keeping me honest there! In that case
my proposal would need the same mitigation as yours: Bob will need to
include the `scid` he received in `update_add_htlc` (this is in fact not
that hard once we allow TLV extensions on every message).

I'm extremely nervous about custodial lightning services restricting
> what they will pay to.  This is not theoretical: they will come under
> immense KYC pressure in the near future, which means they cannot pay
> arbitrary invoices.
>

That's a very good point, thanks for raising this. However I believe that
there are (and will be) enough non-custodial wallets to let motivated users
pay whatever they want. Users can even run their own node to pay such
invoices if needed.

If you are using a custodial wallet and KYC pressure kicks in, then
regardless of this feature the law may require users to completely reveal
who they are paying, so even normal payments wouldn't protect them, don't
you think? Regulation could for example disallow paying via unannounced
channels entirely (or require you to show the funding tx associated with
your unannounced channel).

If we're taking such KYC pressure into account, then I believe none of the
solutions we can provide will be useful. It will be up to the recipient to
decide whether he wants to use a normal invoice and reveal his identity, or
pass on that payment.

What do you think? Do you believe `option_scid_assign` can do a better job
in such situations?

Cheers,
Bastien

On Wed, Feb 5, 2020 at 02:44, Rusty Russell  wrote:

> Bastien TEINTURIER  writes:
> > Hey again,
> >
> > Otherwise Mallory gets two invoices, and wants to know if they're
> >> actually the same node.  Inv1 has nodeid N1, routehint Bob->C1, Inv2 has
> >> nodeid N2, routehint Bob->C2.
> >
> > I think this attack is interesting. AFAICT my proposal defends against
> > this because of the way `payment_secret` and `decoy_key` are both used
> > to derive the `decoy_scid` (but don't trust me, do verify that I'm not
> > missing something).
> >
> > If Mallory doesn't use both the right `decoy_node_id` and
> > `payment_secret` to compute `P_I`, Bob will not decode that to a valid
> > real `scid` and will return an `unknown_next_peer` which is good for
> > privacy.
>
> But Mallory can do the same attack, I think.  Just include the P_I from
> the wrong invoice for Bob.
>
> > It seems to me that
> > https://github.com/lightningnetwork/lightning-rfc/pull/681 cannot
> > defend against this attack. If both invoices are currently valid, Bob
> > will forward an HTLC that uses N1 with C2 (because Bob has no way of
> > knowing N1 from the onion, for privacy reasons).
> > The only way I'd see to avoid it would be that Alice needs to share her
> > `decoy_node_id`s with Bob (and the mapping to a `decoy_scid`) which
> > means more state to manage... but maybe I'm just missing a better
> > mitigation?
>
> No, Bob can include the scid he used in the update_add_htlc message, so
> Alice can check.
>
> I'm extremely nervous about custodial lightning services restricting
> what they will pay to.  This is not theoretical: they will come under
> immense KYC pressure in the near future, which means they cannot pay
> arbitrary invoices.
>
> Thus my preference for a system which doesn't add any requirements on
> the payer.
>
> Cheers,
> Rusty.
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-04 Thread Bastien TEINTURIER
Hey again,

Otherwise Mallory gets two invoices, and wants to know if they're
> actually the same node.  Inv1 has nodeid N1, routehint Bob->C1, Inv2 has
> nodeid N2, routehint Bob->C2.
>

I think this attack is interesting. AFAICT my proposal defends against this
because of the way `payment_secret` and `decoy_key` are both used to derive
the `decoy_scid` (but don't trust me, do verify that I'm not missing
something).

If Mallory doesn't use both the right `decoy_node_id` and `payment_secret`
to compute `P_I`, Bob will not decode that to a valid real `scid` and will
return an `unknown_next_peer`, which is good for privacy.

It seems to me that
https://github.com/lightningnetwork/lightning-rfc/pull/681 cannot defend
against this attack. If both invoices are currently valid, Bob will forward
an HTLC that uses N1 with C2 (because Bob has no way of knowing N1 from the
onion, for privacy reasons).
The only way I'd see to avoid it would be for Alice to share her
`decoy_node_id`s with Bob (and the mapping to a `decoy_scid`), which means
more state to manage... but maybe I'm just missing a better mitigation?

Cheers,
Bastien

On Tue, Feb 4, 2020 at 15:09, Bastien TEINTURIER  wrote:

> I'm a bit confused, I don't know if the implementation work you're
> mentioning refers to my proposal
> or yours :).
>
> When you say `temporary id`, could you clarify whether you mean a
> temporary `node_id` or `scid`?
>
> Firstly, need to brute-force the onion against your N keys.
>
>
> This is probably the part that confuses me. Are you talking about Bob or
> Alice there?
> Alice can easily have her `decoy_node_id` be derived from her real
> `node_id`'s privacy key and the
> `payment_hash` or `payment_preimage`. When she receives a payment, she
> knows which `decoy_node_id`
> should have been used so she doesn't need to brute-force.
>
> That means Alice doesn't even have to change how she stores invoices. When
> Alice retrieves the
> invoice from her DB, if it has the `decoy_node_id` feature bit set, she
> knows she needs to derive
> the correct `node_id`. If it doesn't have that feature bit set, it's a
> "legacy" invoice and she has
> to use her real `node_id`.
>
> Now Mallory uses Bob->C2 to pay to N1 for Inv1. If it works, he knows it's
>> the same node issuing both invoices.
>
>
> Same, that wouldn't work because Alice can easily detect the mismatch and
> pretend she can't decrypt
> the onion (the code doesn't even have to pretend: it will use the expected
> `node_id` and use the
> existing error paths).
>
> Actually, that was too hasty.
>
>
> Ok I think your second email came to the same conclusions and clarifies it
> a bit :).
>
> It's true that this is code where the developer may easily get confused
> between keys (but it's a
> lot simpler than the Sphinx or Noise implementation).
>
> However in my opinion it's still simpler than the `scid` state management
> that needs to happen at
> Alice and Bob in
> https://github.com/lightningnetwork/lightning-rfc/pull/681 (but I would
> need to
> implement both E2E to be able to fairly judge that).
>
> Thanks for the feedback, I'll keep working on improving the proposal.
> Bastien
>
> On Tue, Feb 4, 2020 at 05:29, Rusty Russell  wrote:
> >
> > Rusty Russell  writes:
> > > Bastien TEINTURIER  writes:
> > >> That's of course a solution as well. Even with that though, if
> > >> Alice opens multiple channels to each of her Bobs, she should use
> > >> Tor and a different node_id each time for better privacy.
> > >
> > > There are two uses for this feature (both of which I started
> > > implementing):
> > >
> > > 1. Simply always use a temporary id when you have a private channel, to
> > >obscure your onchain footprint.  This is a nobrainer.
> > >
> > > 2. For an extra layer of transience, apply a new temporary id and new
> > >nodeid on every invoice *which applies only for that invoice*.
> > >
> > > But implementing the latter securely is fraught!
> > >
> > > Firstly, need to brute-force the onion against your N keys.  Secondly,
> > > if you use a temporary key, then you *don't* end up using the HTLC to
> > > pay an invoice matching that key, you *MUST* pretend you couldn't
> > > decrypt the onion!  This applies to all code paths between the two,
> > > including parsing the TLV, etc: they must ALL return
> > > WIRE_INVALID_ONION_HMAC.
> > >
> > > Otherwise, Mallory can get an invoice, then send malformed payments
> > > to Alice using the transient key in the invoice and see if she
> > > decrypts it.
> >
> > Actually, that was too hasty.

Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-04 Thread Bastien TEINTURIER
I'm a bit confused, I don't know if the implementation work you're
mentioning refers to my proposal or yours :).

When you say `temporary id`, could you clarify whether you mean a temporary
`node_id` or `scid`?

Firstly, need to brute-force the onion against your N keys.


This is probably the part that confuses me. Are you talking about Bob or
Alice there? Alice can easily have her `decoy_node_id` derived from her
real `node_id`'s private key and the `payment_hash` or `payment_preimage`.
When she receives a payment, she knows which `decoy_node_id` should have
been used, so she doesn't need to brute-force.

That means Alice doesn't even have to change how she stores invoices. When
Alice retrieves the invoice from her DB, if it has the `decoy_node_id`
feature bit set, she knows she needs to derive the correct `node_id`. If it
doesn't have that feature bit set, it's a "legacy" invoice and she has to
use her real `node_id`.
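A minimal sketch of what such a derivation could look like (the exact
construction here is my assumption, since the thread doesn't pin one down,
but it shows why no brute-force is needed): tweak the real node key with a
hash of the `payment_hash`, so the per-invoice key is recomputable on
demand and never stored.

```python
# Hypothetical derivation of a per-invoice decoy key from the real node
# key and the payment_hash. The "decoy" tag and the multiply-by-tweak
# shape are illustrative assumptions, not a specified construction.
import hashlib

ORDER = 2**127 - 2   # toy group order (secp256k1's n in a real implementation)

def derive_decoy_key(node_priv: int, payment_hash: bytes) -> int:
    tweak = int.from_bytes(hashlib.sha256(b"decoy" + payment_hash).digest(), "big")
    return (node_priv * tweak) % ORDER or 1   # avoid the zero key

node_priv = 0x1234_5678_9abc_def0            # illustrative key, not a real one
payment_hash = hashlib.sha256(b"preimage").digest()

# Same inputs always give the same per-invoice key, so on receipt Alice
# looks up the invoice by payment_hash and re-derives: no brute-force.
assert derive_decoy_key(node_priv, payment_hash) == derive_decoy_key(node_priv, payment_hash)
```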

Now Mallory uses Bob->C2 to pay to N1 for Inv1. If it works, he knows it's
> the same node issuing both invoices.


Same, that wouldn't work because Alice can easily detect the mismatch and
pretend she can't decrypt the onion (the code doesn't even have to pretend:
it will use the expected `node_id` and use the existing error paths).

Actually, that was too hasty.


Ok I think your second email came to the same conclusions and clarifies it
a bit :).

It's true that this is code where the developer may easily get confused
between keys (but it's a lot simpler than the Sphinx or Noise
implementation).

However in my opinion it's still simpler than the `scid` state management
that needs to happen at Alice and Bob in
https://github.com/lightningnetwork/lightning-rfc/pull/681 (but I would
need to implement both E2E to be able to fairly judge that).

Thanks for the feedback, I'll keep working on improving the proposal.
Bastien

On Tue, Feb 4, 2020 at 05:29, Rusty Russell  wrote:
>
> Rusty Russell  writes:
> > Bastien TEINTURIER  writes:
> >> That's of course a solution as well. Even with that though, if Alice
> >> opens multiple channels to each of her Bobs, she should use Tor and a
> >> different node_id each time for better privacy.
> >
> > There are two uses for this feature (both of which I started
> > implementing):
> >
> > 1. Simply always use a temporary id when you have a private channel, to
> >obscure your onchain footprint.  This is a nobrainer.
> >
> > 2. For an extra layer of transience, apply a new temporary id and new
> >nodeid on every invoice *which applies only for that invoice*.
> >
> > But implementing the latter securely is fraught!
> >
> > Firstly, need to brute-force the onion against your N keys.  Secondly,
> > if you use a temporary key, then you *don't* end up using the HTLC to
> > pay an invoice matching that key, you *MUST* pretend you couldn't
> > decrypt the onion!  This applies to all code paths between the two,
> > including parsing the TLV, etc: they must ALL return
> > WIRE_INVALID_ONION_HMAC.
> >
> > Otherwise, Mallory can get an invoice, then send malformed payments to
> > Alice using the transient key in the invoice and see if she decrypts it.
>
> Actually, that was too hasty.  You can use the payment_hash as a
> fastpath:
>
> 1. Look up invoice using payment_hash.
>
> 2. If there is an invoice, and it has a temporary id associated with it,
>try using that to decrypt the onion.  If that works, and the onion is
>on the final hop, and the TLV decodes, and the payment_secret is
>correct, you can go back and use this temporary key to decrypt the onion.
>Otherwise, go back and use the normal node key.
>
> That's still quite a bit of tricky code though...
>
> Cheers,
> Rusty.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-03 Thread Bastien TEINTURIER
Hi ZmnSCPxj,

That is precisely what I am referring to, the lowest bits of the node ID
> are embedded in the SCID, which we do not want to openly reveal to Carol.
>

Got it, I wasn't understanding your point correctly. We totally agree on
that.

Though if the point is to prevent Carol from correlating different invoices
> as arising from the same payee, then my scheme fails against that.
>

IMO we should prevent Carol from correlating different invoices by using a
different node_id for each invoice. This requires minimal changes and
happens entirely payee-side (see my initial mail).

Alice would do better to use multiple Bobs in that case.
>

That's of course a solution as well. Even with that though, if Alice opens
multiple channels to each of her Bobs, she should use Tor and a different
node_id each time for better privacy.

Cheers,
Bastien

On Mon, Feb 3, 2020 at 15:51, ZmnSCPxj  wrote:

> Good morning t-bast,
>
>
> > > This is relevant if we ever want to hide the node id of the last
> > > node: Bob could provide a symmetric encryption key to all its peers
> > > with unpublished channels, which the peer can XOR with its own true
> > > node id and use the lowest 40 bits (or 46 bits or 58 bits) in the
> > > SCID.
> >
> > I don't understand your point here. Alice cannot hide her node_id from
> > Bob since the `node_id` is tied to the (unannounced) channel creation.
> >
> > But this is not an issue. What Alice wants to break is the ability to
> > link multiple HTLCs together because they use the same `node_id`. Since
> > Alice can use a different `node_id` in every invoice, it's easy for her
> > to make sure Carol cannot tie those HTLCs together.
>
> That is precisely what I am referring to, the lowest bits of the node ID
> are embedded in the SCID, which we do not want to openly reveal to Carol.
> Though if the point is to prevent Carol from correlating different
> invoices as arising from the same payee, then my scheme fails against that.
>
> >
> > In order to hide from Bob, the best Alice can do is use a different
> > `node_id` for each channel she opens to Bob and use Tor. This way Bob
> > cannot know that node_id_1 and node_id_2 both belong to Alice.
> > I don't think we can do better than that.
>
> Alice would do better to use multiple Bobs in that case.
>
>
> Regards,
> ZmnSCPxj
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-03 Thread Bastien TEINTURIER
Thanks for the feedback and discussion. Here are some more comments.

> This is relevant if we ever want to hide the node id of the last node:
> Bob could provide a symmetric encryption key to all its peers with
> unpublished channels, which the peer can XOR with its own true node id
> and use the lowest 40 bits (or 46 bits or 58 bits) in the SCID.


I don't understand your point here. Alice cannot hide her node_id from Bob
since the `node_id` is tied to the (unannounced) channel creation.

But this is not an issue. What Alice wants to break is the ability to link
multiple HTLCs together because they use the same `node_id`. Since Alice
can use a different `node_id` in every invoice, it's easy for her to make
sure Carol cannot tie those HTLCs together.

In order to hide from Bob, the best Alice can do is use a different
`node_id` for each channel she opens to Bob and use Tor. This way Bob
cannot know that node_id_1 and node_id_2 both belong to Alice. I don't
think we can do better than that.

I really don't want a special marker on Carol; she needs to just pay like
> normal.


I agree that this would be the ideal outcome (and my current proposal
doesn't achieve that, but I'm hoping I can improve it to get there). Do
note that even though my current proposal requires a code update from
Carol, the code change would be very small. Adding support for
`payment_secret` did require a change on Carol's side to improve security;
I'm hoping that a small enough code change with a big enough privacy
improvement would eventually be supported by all three implementations
(and then find its way into wallets).

I must admit I'm a bit turned off by the state management required by your
proposal. I'm afraid it may be complex to get right, or be subject to
fingerprinting, and wouldn't result in the privacy gain we're hoping for.

I think this really needs to be cheap for Bob; if Bob can be DoS-ed by
offering this feature, I don't think the Bobs out there will activate it.

I really feel some cryptography trick can allow us to find a solution that
requires no more than a shared secret kept between Alice and Bob, and no
synchronization/state management. I'd like to explore this option further.

Cheers,
Bastien

On Mon, Feb 3, 2020 at 07:51, m.a.holden via Lightning-dev <lightning-dev@lists.linuxfoundation.org> wrote:

> > (I'm seeking a clever way that Bob can assign them and trivially tell
> > which ID is assigned to which peer, but I can't figure it out, so I
> > guess Bob keeps a mapping and restricts each peer to 256 live scids?).
>
> Hi Rusty.
>
> Here's a potential way for Alice and Bob to agree a set of 256 scids
> without any additional messages or changes to existing messages beyond a
> feature flag and a flag in open_channel, but comes with a computational
> cost.
>
> Alice and Bob agree on a random integer `r`. This could be negotiated on
> `open_channel`, but we shouldn't need to send additional information
> because we already have a random integer we can use: the
> `temporary_channel_id`. This is not known to anybody besides Alice and Bob.
>
> When a channel is locked, Bob computes n=256 scids, using something
> approximating `concat(n, trunc_bytes(sha256(ec_mult(2^n*r, Q)), 7))`, where
> `Q` is Alice's public key for the channel funding transaction.
>
> The chance of scid collisions between channels is about 1 in 2^56, which
> is probably no cause for concern.
>
> Instead of keeping a map of 256 scids for each channel, Bob can use a
> cuckoo filter for efficiency. The filter can be used for a quick membership
> test and also as an associative map from scids to channels. It can also
> support scid deletion in the event of channel closure (at the cost of
> recomputing 256 ec_mults again).
>
> So when Bob receives a new HTLC to forward, he tests it against his cuckoo
> filter and retrieves a candidate set of possible channels to which it may
> refer. For each channel, he takes the most significant byte of the scid as
> `m` and performs `trunc_bytes(sha256(ec_mult(2^m*r, Q)), 7)` and tests the
> least-significant 7 bytes of the result against the scid.
>
> Alice does not need to keep all of the scids she may use for invoices
> because they can be computed on the fly, but she will need to keep a copy
> of the `temporary_channel_id`.
>
> In the reverse direction of Alice forwarding HTLCs to Bob, Bob's public
> key for the funding transaction is used instead.
>
> Regards,
> Mark Holden
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Decoy node_ids and short_channel_ids

2020-01-20 Thread Bastien TEINTURIER
Good morning list,

I'd like to explore some enhancements for unannounced (sometimes called
private) channels. Unannounced channels are really useful for mobile nodes
that won't be online often enough to route payments. That does leak
information to your channel peer, but that's a topic for another post.

One of the nice properties of unannounced channels is that they help break
linkability between on-chain and off-chain payments (because only your
channel peer knows the link between your `funding_key`, your on-chain
UTXOs and your `node_id`).

However the current implementation is broken: invoices leak both your
`node_id` and `short_channel_id` (via the signature and Bolt 11 routing
hints [1]). It doesn't have to be like this though; invoices don't always
require you to use your real `short_channel_id` nor your real `node_id`.

Let's set the scene:

* Alice is our mobile wallet user
* Bob is a normal lightning node connected to Alice via an unannounced
channel
* Carol wants to pay Alice via Bolt 11 invoices

There is already a first proposal to fix this problem [2], with the
following trade-offs:

(-) Adds a new stateful protocol (with new messages) between Alice and Bob
(-) Can't use a unique `short_channel_id` for every invoice
(+) Carol doesn't need to change anything from the existing flow

I'd like to propose an alternative design with the following, different
trade-offs:

(+) No state to synchronize between Alice and Bob
(+) Can use unique `short_channel_id`s and `node_id`s for each invoice
(-) But Carol needs to add a new record in the onion (probably needs a
    feature bit)

## Decoy `node_id`s

Alice currently signs all invoices with the private key associated with
her `node_id`. This makes sense when Alice wants to be reached via public
channels, but it isn't used at all when Alice provides routing hints. In
that case she can generate a one-time private key and sign the invoice
with it. This way Alice doesn't leak her real `node_id` to payers.

## Decoy `short_channel_id`s

Here is a simple construction for generating a `decoy_short_channel_id`:

* Alice draws a random `invoice_key`
* Alice computes the corresponding public key: `P_I = invoice_key * G`
* Alice computes `decoy_short_channel_id = H(invoice_key * bob_node_id) xor short_channel_id`
* Alice provides a routing hint using `decoy_short_channel_id` in the invoice
* Alice provides `P_I` in the invoice

Now when Carol wants to pay, she has to include `P_I` in the onion payload
for Bob. When Bob receives the HTLC, he can compute
`short_channel_id = H(bob_private_key * P_I) xor decoy_short_channel_id`.
That allows Bob to correctly forward the payment to Alice without any
prior negotiation.
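To make the round trip concrete, here is a toy sketch (my own illustration,
not code from any implementation): it replaces secp256k1 with
modular-exponentiation Diffie-Hellman over a toy prime, but the algebra is
the same. `invoice_key * bob_node_id` and `bob_private_key * P_I` land on
the same shared secret, so the xor mask cancels:

```python
# Toy sketch of the decoy short_channel_id construction. pow() over a toy
# prime stands in for secp256k1 scalar multiplication; all parameters here
# are illustrative.
import hashlib
import secrets

P = 2**127 - 1          # toy prime modulus (NOT the real curve)
G = 3                   # toy generator

def scalar_mult(k, point):
    # Stands in for EC scalar multiplication k * point.
    return pow(point, k, P)

def h64(shared):
    # H(): hash the shared secret down to a 64-bit scid mask.
    return int.from_bytes(hashlib.sha256(str(shared).encode()).digest()[:8], "big")

# Bob's long-term keys
bob_priv = secrets.randbelow(P - 2) + 1
bob_node_id = scalar_mult(bob_priv, G)

# Alice builds the invoice
short_channel_id = 0x0001_0203_0405_0607   # the real scid to hide
invoice_key = secrets.randbelow(P - 2) + 1
P_I = scalar_mult(invoice_key, G)          # goes in the invoice / onion
decoy_scid = h64(scalar_mult(invoice_key, bob_node_id)) ^ short_channel_id

# Bob, on receiving the HTLC carrying P_I: same shared secret, mask cancels
recovered = h64(scalar_mult(bob_priv, P_I)) ^ decoy_scid
assert recovered == short_channel_id
```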

## Improvements

The two main drawbacks of this scheme are:

1. It uses 33 bytes in the invoice
2. It uses 33 bytes in the onion payload for Bob

We can easily get rid of (1.) by leveraging the `payment_secret`. The
improved scheme is:

* Alice draws a random `decoy_key`
* Alice computes the corresponding `decoy_node_id = decoy_key * G`
* Alice draws a random `payment_secret`
* Alice computes `decoy_short_channel_id = H(payment_secret * decoy_key * bob_node_id) xor short_channel_id`
* Alice uses the `decoy_key` to sign the invoice
* Carol recovers `decoy_node_id` from the invoice signature
* Carol includes `P_I = payment_secret * decoy_node_id` in the onion payload for Bob
* Bob can compute `short_channel_id = H(bob_private_key * P_I) xor decoy_short_channel_id`

But I don't see how to get rid of (2.). If anyone has a clever idea on how
to do that, I'd love to hear it!
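For completeness, the improved scheme can be sketched the same way (again
my own illustration, with a stand-in modular-exponentiation group instead
of secp256k1): the shared secret is now `payment_secret * decoy_key *
bob_node_id`, which Alice reaches directly, Carol reaches via
`payment_secret * decoy_node_id`, and Bob reaches via
`bob_private_key * P_I`:

```python
# Toy sketch of the improved scheme with the payment_secret folded in.
# pow() over a toy prime stands in for secp256k1 scalar multiplication.
import hashlib
import secrets

P = 2**127 - 1          # toy prime modulus (NOT the real curve)
G = 3                   # toy generator

def scalar_mult(k, point):
    return pow(point, k, P)

def h64(shared):
    return int.from_bytes(hashlib.sha256(str(shared).encode()).digest()[:8], "big")

bob_priv = secrets.randbelow(P - 2) + 1
bob_node_id = scalar_mult(bob_priv, G)
short_channel_id = 0x0001_0203_0405_0607   # the real scid to hide

# Alice, when creating the invoice
decoy_key = secrets.randbelow(P - 2) + 1
decoy_node_id = scalar_mult(decoy_key, G)  # Carol recovers this from the signature
payment_secret = secrets.randbelow(P - 2) + 1
decoy_scid = h64(scalar_mult(payment_secret * decoy_key, bob_node_id)) ^ short_channel_id

# Carol, when paying: derives P_I without ever learning the real scid
P_I = scalar_mult(payment_secret, decoy_node_id)

# Bob, on receiving the HTLC with P_I and decoy_scid in the onion payload
recovered = h64(scalar_mult(bob_priv, P_I)) ^ decoy_scid
assert recovered == short_channel_id
```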

These constructions definitely need more eyes on them; I don't see anything
obviously broken, but neither did the designers of PKCS #1 until
Bleichenbacher ruined the party [3].

Thank you for reading,
Bastien

[1]
https://github.com/lightningnetwork/lightning-rfc/blob/master/11-payment-encoding.md#tagged-fields
[2] https://github.com/lightningnetwork/lightning-rfc/pull/681
[3] http://archiv.infsec.ethz.ch/education/fs08/secsem/bleichenbacher98.pdf
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Pay-to-Open and UX improvements

2019-12-19 Thread Bastien TEINTURIER
Good points, these would be nice optimisations if we propose such a new opcode!
I'm still pondering whether this will be useful enough or whether Finney attacks
completely ruin all use-cases...

On Thu, Dec 19, 2019 at 07:24, ZmnSCPxj wrote:

> Good morning t-bast,
>
> > > -   A script-path spend with the following script (and only that
> script):
> > > OP_SWAP OP_DUP  OP_EQUALVERIFY OP_SWAP  OP_CHECKSIG
> > >
> >
> > Why not this:
> >
> >  OP_SWAP OP_CHECKSPLITSIG
> >
> > ?
> >
> > Since `R` is constrained to be fixed anyway, why repeat `R` twice, once
> in the script and once in the witness stack?
>
> For that matter, since we are far more likely to have a constant `R` than
> a constant `s` maybe you should instead propose that `OP_CHECKSPLITSIG` be
> given `   OP_CHECKSPLITSIG`, so that a fixed-`R` single-show
> signature is just `  OP_CHECKSPLITSIG`.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Pay-to-Open and UX improvements

2019-12-18 Thread Bastien TEINTURIER
Thanks Ethan, I agree on that.

Let me also share additional feedback I received on #bitcoin-wizards from
gmaxwell [1]:

* Changing the behavior of OP_CHECKSIG is a no-go because using two stack
arguments
  instead of one increases the witness size
* This is better done as a new opcode as you suggest
* OP_CAT and friends were intentionally left out of Taproot (too general,
needs more analysis)
* But this OP_CHECKSPLITSIG is very constrained so may be ok?
* It does NOT protect against a finney attack [2]: protocols leveraging
that would need to take
  such attacks into account in the incentive analysis
* It only protects against a double-spend if you disallow Patrick
from "emptying" this UTXO via
  Lightning before double-spending

I still believe there are good use-cases for this for off-chain protocols,
so I'll keep fleshing it out.
I am interested in more feedback about the scheme, potential other attack
vectors, potential other
use-cases, anything you may find relevant to the discussion.

Cheers,
Bastien

[1] https://freenode.irclog.whitequark.org/bitcoin-wizards/2019-12-18
[2] https://bitcoin.stackexchange.com/questions/4942/what-is-a-finney-attack


On Wed, Dec 18, 2019 at 15:35, Ethan Heilman wrote:

> Responding below
>
> The core idea is to modify Tapscript's `OP_CHECKSIG`. Instead of reading
>> the
>> signature as a single 64-bytes stack argument, let's add a small change
>> to read
>> the signature as two 32-bytes stack arguments: `R` first then `s`.
>> Since Taproot already makes changes to this opcode, adding this small
>> change
>> seems to be quite simple and harmless (and this is the right time to
>> propose
>> such a change as we're still in the Taproot review process).
>>
>
> I am very much in favor of a mechanism to enable outputs to enforce ECDSA
> nonce reuse.
>
> However I would argue against changing the behavior of OP_CHECKSIG. Subtly
> changing the stack behavior of perhaps the most widely used and complex OP
> code in Bitcoin is likely to result in bugs in systems that create and sign
> transactions. Additionally making this new behavior only activate based on
> context is even more likely to cause problems.
>
> It would likely be safer to have this as a new OP code, say
> OP_CHECKSPLITSIG.
>
> Alternatively we could try to get OP_CAT approved. It is a very simple OP
> code, which is easy to explain, generally useful and allows this feature
> plus allows many other critical features.
>
>>


Re: [Lightning-dev] Pay-to-Open and UX improvements

2019-12-18 Thread Bastien TEINTURIER
Good morning list,

Thanks again for all the good suggestions, this is awesome.
David and ZmnSCPxj's proposals got me thinking (and I still need to dive
into
Antoine's suggestion as well), and I may have found a very interesting
construction. It's either great or completely dumb. I hope you can help me
figure out which of the two it's going to be.

The core idea is to modify Tapscript's `OP_CHECKSIG`. Instead of reading the
signature as a single 64-byte stack argument, let's add a small change to read
the signature as two 32-byte stack arguments: `R` first then `s`.
Since Taproot already makes changes to this opcode, adding this small change
seems to be quite simple and harmless (and this is the right time to propose
such a change as we're still in the Taproot review process).

This effectively lets us leverage nonce reuse as a feature to prevent double
spending once a signature has been shared off-chain.

Let's set the scene for this use case. We have a service provider Patrick that
wants to offer layer 2 services. Patrick prepares some of his UTXOs to have
the
following spending condition:

* A provably unspendable key-path spend
* A script-path spend with the following script (and only that script):
OP_SWAP OP_DUP  OP_EQUALVERIFY OP_SWAP  OP_CHECKSIG
* Notes:
  * The script could be more fancy (maybe we want to use hash(R) instead of R
directly) but you get the idea
  * The OP_SWAPs are needed because the spending stack will be   
  * P is of course different for every UTXO

This means that Patrick is committing to the nonce he'll be using to spend
that
output.

Now comes our friend Alice. Patrick wants to open a channel to Alice and
wants
to start using this channel without waiting for on-chain confirmation.
Alice and Patrick build the funding transaction as usual; once Alice sees
the
transaction in the mempool, she can verify that the inputs have the right
format.
Now Alice can be sure that Patrick will not double-spend the funding
transaction's inputs: if he does, he will be signing a different message
with
the same nonce. That would allow Alice to extract the private key for `P`
and
spend the UTXO to herself. She has nothing to lose there because it's
Patrick's
UTXO so she has an incentive to use much higher fees than Patrick.
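
The key-extraction step can be sketched with toy Schnorr arithmetic (scalars only, with stand-in byte encodings fed to the challenge hash instead of real curve points; all values are made up):

```python
import hashlib

# secp256k1 group order.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def challenge(r, p, msg):
    # Stand-in for the BIP340 challenge hash H(R || P || m);
    # real code hashes the x-coordinates of the curve points.
    data = r.to_bytes(32, 'big') + p.to_bytes(32, 'big') + msg
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % N

d = 0xC0FFEE    # Patrick's private key for P (toy value)
k = 0x1CEB00DA  # the nonce committed in the script

# Patrick signs two different messages with the same nonce:
e1 = challenge(k, d, b"funding transaction")
e2 = challenge(k, d, b"conflicting double-spend")
s1 = (k + e1 * d) % N
s2 = (k + e2 * d) % N

# Alice sees two signatures sharing R and solves for d:
# s1 - s2 = (e1 - e2) * d  =>  d = (s1 - s2) / (e1 - e2)
extracted = (s1 - s2) * pow((e1 - e2) % N, -1, N) % N
assert extracted == d
```

This is exactly why nonce reuse is normally catastrophic, repurposed here as a commitment device: signing any second message under the committed nonce hands Alice the key.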

It seems to me that this construction can be generalized for many off-chain
protocols that don't want to wait for confirmation. I may be overly
optimistic,
but I think this could enable a whole lot of new use-cases and remove many
pain points in Lightning.

This is only a first draft, and there are things that can be improved. Let's
list what comes to mind (you will probably identify other issues):

* Patrick can't use RBF on transactions that spend this kind of UTXO
because it
would reveal his private key: that's probably ok in practice (we can add an
output for CPFP instead like we're doing for anchor outputs [1])
* These UTXOs are easy to recognize on-chain once spent, which may indicate
that this is spent for an off-chain protocol
* It would be great to have a way to allow key-path spend, but revoke this
capability once the script has been revealed (off-chain): that would allow
Patrick to encumber all his UTXOs with such a script and only use it when
needed for an off-chain scenario (and use normal key-path spend otherwise)

Please let me know if this is completely broken, completely dumb or worth
sharing to the bitcoin-dev mailing list to consider including this
`OP_CHECKSIG`
change in the Taproot soft-fork.

Cheers,
Bastien

[1] https://github.com/lightningnetwork/lightning-rfc/pull/688

On Wed, Dec 18, 2019 at 05:49, Antoine Riard wrote:
>
> Hi Bastien,
>
> The use case you're describing strikes me as similar to a slashing
protocol for a LN node and a watchtower, i.e punishing
> a lazy watchtower for not broadcasting a penalty tx on remote revoked
state. In both case you want "if A don't do X
> unlock some funds for B".
>
> Here a rough slashing protocol I've sketched out to someone else
off-list, it may work for you use case if you replace the penalty tx
> by the funding transaction as a way for the trusted channel funder to
clear his liability. Though you will need onchain interactivity
> before the fact but you may be able to reuse slashing outpoint for
multiple channel funding.
>
> Slashing Protocol
> --
>
> Alice and Bob lock fund in channel outpoint X. They issue commitment tx
N.  Will the accountable watchtower locks fund
> in a 2-of-2 slashing outpoint Y with Bob the client.
>
> When Alice and Bob update channels to N', Bob and Will use some output
from commitment N (like upcoming anchor output)
> to create an accountable tx M. M is paying to Bob after timelock+Bob sig
or is paying to transaction success_penalty P
> with Will sig + Bob sig. Success_penalty P will have 2 inputs, one from M
and from J the justice tx than Bob has given
> to Will. J is spending Alice's revoked commitment N.
>
> So this slashing protocol should avoid Bob making 

Re: [Lightning-dev] Pay-to-Open and UX improvements

2019-12-17 Thread Bastien TEINTURIER
Thanks a lot David for the suggestion and pointers, that's a really
interesting solution.
I will dive into that in-depth, it could be very useful for many layer-2
constructions.

Thanks ZmnSCPxj as well for the quick feedback and the `OP_CAT`
construction,
a lot of cool tricks coming up once (if?) we have such tools in the future
;)

On Tue, Dec 17, 2019 at 16:14, ZmnSCPxj wrote:

> Good morning David, t-bast, and all,
>
>
> > I'm not aware of any way to currently force single-show signatures in
> > Bitcoin, so this is pretty theoretical. Also, single-show signatures
> > add a lot of fragility to any setup and make useful features like RBF
> > fee bumping unavailable.
>
> With `OP_CAT`, we can enforce that a particular `R` is used, which allows
> to implement single-show signatures.
>
> # Assuming signatures are the concatenation of (R,s)
>  OP_SWAP OP_CAT  OP_CHECKSIG
>
> The above would then feed `s` only on the witness stack.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Pay-to-Open and UX improvements

2019-12-17 Thread Bastien TEINTURIER
Hi ZmnSCPxj,

Thanks for your response.

* Once the pre-funding is sufficiently confirmed as per Bob security
> parameter
>

This is the part I'm trying to avoid. If we're ok with waiting for
confirmation, then it's easy to do indeed (and let's just wait for the
funding tx to confirm, I believe we don't even need that pre-funding step).
But if we have to wait for confirmations we're hodling the incoming HTLC
for a few blocks, which I'd like to avoid.

Do you have a smart construction that would allow us to build safely on
that unconfirmed transaction?
Is there maybe a smart trick that would allow the pay-to-open server to
provably lock some UTXOs in advance to prevent
itself from double-spending them?

Cheers,
Bastien



On Tue, Dec 17, 2019 at 10:31, ZmnSCPxj wrote:

>
> Good morning t-bast,
>
> > Good morning list,
> >
> > As everyone who has ever used a Lightning wallet is well aware, the
> onboarding process could be
> > made smoother. With Phoenix [1], we've been experimenting with
> pay-to-open [2].
> >
> > That works well in practice and provides a great UX for newcomers, but
> it requires temporary trust
> > between the user and our node (until the funding tx confirms).
> >
> > That trust relationship appears in two places:
> >
> > a. The user releases the preimage, then we fund the channel [2]
> > b. The user trusts that we won't double-spend the funding transaction
> >
> > We currently need (a) because we can't ensure that the user will reveal
> the preimage once we've
> > funded the channel.
> >
> > It's (somewhat) easy to fix that once Bitcoin supports Schnorr.
> > Let's assume that we're using PTLCs (where the secret is a private key)
> and MuSig for channel
> > funding transactions.
> > When Alice receives a PTLC to forward to Bob, if she doesn't have a
> channel to Bob and Bob supports
> > pay-to-open, she can initiate a tweaked channel opening flow. She can
> use tlv extensions in the
> > `open_channel` message to tell Bob that this channel is linked to a PTLC
> with point `X=x*G`.
> > Bob will tweak the MuSig nonce with `X` and provide Alice with a partial
> signature for that nonce.
> > When Bob then provides the adaptor signature to finalize the funding
> transaction, it reveals `x` to
> > Alice who can now fulfill the PTLC downstream.
> >
> > Note that in this simple version, Alice knows the nonce tweak
> beforehand. This may (or may not,
> > that will need to be investigated thoroughly) be a security issue.
> > Even if it turns out to be an issue, I'm pretty sure we can find a
> secure protocol that will allow
> > this atomicity (let's just add another round of communication, that's
> usually how we fix broken
> > cryptographic protocols).
>
> This can be assured today with HTLC-like constructions, similar to what we
> use in HTLC-success / HTLC-timeout in BOLT 3.
>
> Channel opening *instead* goes this way:
>
> * Alice receives a payment request to Bob with a specific payment hash.
> * Alice creates a transaction from its onchain funds, paying out to an
> HTLC-like construction with logic `(hash_preimage && A && B) || (timelock
> && A)`.
>   * Call this the pre-funding transaction.
>   * Alice does **not** sign and broadcast this *yet*!
>   * The timelock could reuse the same timelock as indicated in the final
> hop to the incoming payment.
> * Alice gives the txid of the pre-funding to Bob.
> * Alice and Bob create a transaction that spends the above output to the
> logic `A && B`.
>   * Call this the funding transaction.
> * Alice and Bob create commitment transactions spending the above funding
> transaction as per usual flow, and exchange signatures, completing up to
> `funding_signed`.
>   * Have it `push_msat` the payment amount to Bob minus the fee to open.
> * Alice and Bob exchange signatures for funding transaction, spending
> using the hashlock branch of the pre-funding transaction HTLC.
> * Alice signs and broadcasts the pre-funding transaction.
> * Once the pre-funding is sufficiently confirmed as per Bob security
> parameter, Bob then broadcasts the funding transaction.
>   * To do so, Bob has to add the preimage to the witness stack in order to
> make-valid the funding transaction.
> * Alice sees the preimage from the broadcasted funding transaction and can
> now continue claiming the incoming HTLC.
>
> >
> > I'm more concerned about fixing (b). As long as the funding transaction
> is unconfirmed, there's a
> > risk of double-spending by the funder. I'm shamelessly trying to use
> this mailing list's brainpower
> > to figure out possible solutions for that. Does someone have ideas that
> could help? Can we setup
> > the incentives so that it's never rational for the funder to
> double-spend?
>
> Above procedure probably fixes this as well?
> It sets things up so that the funder cannot double-spend the funds that
> will eventually get into the channel after it is capable of receiving the
> preimage.
> Funder can double-spend, but then is unable to learn the 

[Lightning-dev] Pay-to-Open and UX improvements

2019-12-17 Thread Bastien TEINTURIER
Good morning list,

As everyone who has ever used a Lightning wallet is well aware, the
onboarding process could be
made smoother. With Phoenix [1], we've been experimenting with pay-to-open
[2].

That works well in practice and provides a great UX for newcomers, but it
requires temporary trust
between the user and our node (until the funding tx confirms).

That trust relationship appears in two places:

a. The user releases the preimage, then we fund the channel [2]
b. The user trusts that we won't double-spend the funding transaction

We currently need (a) because we can't ensure that the user will reveal the
preimage once we've
funded the channel.

It's (somewhat) easy to fix that once Bitcoin supports Schnorr.
Let's assume that we're using PTLCs (where the secret is a private key) and
MuSig for channel
funding transactions.
When Alice receives a PTLC to forward to Bob, if she doesn't have a channel
to Bob and Bob supports
pay-to-open, she can initiate a tweaked channel opening flow. She can use
tlv extensions in the
`open_channel` message to tell Bob that this channel is linked to a PTLC
with point `X=x*G`.
Bob will tweak the MuSig nonce with `X` and provide Alice with a partial
signature for that nonce.
When Bob then provides the adaptor signature to finalize the funding
transaction, it reveals `x` to
Alice who can now fulfill the PTLC downstream.
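
The atomic reveal at the heart of this flow can be sketched with toy scalars (real code would run a full MuSig signing session over curve points; every value below is made up):

```python
# secp256k1 group order; toy scalars stand in for curve points.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d = 0xB0B        # Bob's (aggregate) signing key share
k = 0x4E0        # Bob's signing nonce
x = 0x5EC12E7    # the PTLC secret; X = x*G is known to Bob
e = 0xABCDEF     # challenge hash over the tweaked nonce R + X (fixed for the sketch)

# Bob's adaptor signature: not yet a valid signature, since x is missing.
s_adaptor = (k + e * d) % N

# Completing the signature (to make the funding transaction valid) adds x in,
# so publishing the full signature reveals x to anyone holding the adaptor:
s_full = (s_adaptor + x) % N
revealed_x = (s_full - s_adaptor) % N
assert revealed_x == x
```

The point is that Bob cannot produce a signature the network accepts without simultaneously handing `x` to Alice, which is what makes the channel funding and the PTLC fulfillment atomic.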

Note that in this simple version, Alice knows the nonce tweak beforehand.
This may (or may not,
that will need to be investigated thoroughly) be a security issue.
Even if it turns out to be an issue, I'm pretty sure we can find a secure
protocol that will allow
this atomicity (let's just add another round of communication, that's
usually how we fix broken
cryptographic protocols).

I'm more concerned about fixing (b). As long as the funding transaction is
unconfirmed, there's a
risk of double-spending by the funder. I'm shamelessly trying to use this
mailing list's brainpower
to figure out possible solutions for that. Does someone have ideas that
could help? Can we setup
the incentives so that it's never rational for the funder to double-spend?

Cheers,
Bastien

[1] https://phoenix.acinq.co/
[2] https://medium.com/@ACINQ/phoenix-part-2-pay-to-open-4a8a482dd4d


Re: [Lightning-dev] Rendez-vous on a Trampoline

2019-11-12 Thread Bastien TEINTURIER
Hey Laolu,

Looks like HORNET is back in the game in many recent threads ;)
However the recent paper shared on the tor-dev mailing list [1] mentions
that HORNET and
other onion sessions might be a lot less secure than we thought... so I'd
wait for more academic results before including such a big change in the network.

I totally agree with you though that current rendezvous proposals are
one-shot only.
If that route fails, then you can't do smart retries nor get useful routing
failure data.

It might be an unpopular opinion but I think that this can be addressed
off-protocol though.
In most scenarios I can think of, the user will still interact with a
website to scan a QR code or an API
to get an invoice. On failure, another round of interaction to offer a
different rendezvous onion could
happen between the payer's app and the merchant's website. It's the same
for stuckless' ACK
message. This message could be exchanged outside of the protocol via the
interaction between a
payer's app and the merchant's website.

Cheers,
Bastien

[1] https://arxiv.org/abs/1910.13772


On Wed, Nov 6, 2019 at 00:53, Olaoluwa Osuntokun wrote:

> Hi t-bast,
>
> > She creates a Bolt 11 invoice containing that pre-encrypted onion.
>
> This seems insufficient: if the prescribed route that Alice selects fails,
> then the sender has no further information to go off of (let's say Teddy is
> offline, but there are other paths). cdecker's rendezvous sketch using
> Sphinx you
> linked above also suffers from the same issue: you need some other
> bi-directional communication medium between the sender and receiver in
> order to
> account for payment failures. Beyond that, if any failures occur in the
> latter
> half of the route (the part that's opaque to the sender), then the sender
> isn't
> able to incorporate the failure information into their path finding.  As a
> result, the payer would need to send the error back to the receiver for
> decrypting, possibly ping-ponging several times in a payment attempt.
>
> On the other hand, using HORNET for rendezvous routing as was originally
> intended gives the sender+receiver a communication channel they can use to
> exchange further payment information, and also a channel to use for
> decryption
> of the opaque errors. Amongst many other things, it would also give us a
> payment-level ACK [1], which may be a key component for payment splitting
> (otherwise
> you have no idea if _any_ shards have even arrived at the other side).
>
>
> [1]:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001524.html
>
> -- Laolu
>
> On Tue, Oct 22, 2019 at 5:02 AM Bastien TEINTURIER 
> wrote:
>
>> Good morning everyone,
>>
>> Since I'm a one-trick pony, I'd like to talk to you about...guess what?
>> Trampoline!
>> If you watched my talk at LNConf2019, I mentioned at the end that
>> Trampoline enables high AMP very easily.
>> Every Trampoline node in the route may aggregate an incoming multi-part
>> payment and then decide on how
>> to split the outgoing aggregated payment. It looks like this:
>>
>>         .-- 1mBTC --.        .--- 2mBTC ---.
>>        /             \      /               \
>> Alice --- 3mBTC ---> Ted --- 4mBTC ---> Terry --- 6mBTC ---> Bob
>>        \             /
>>         `-- 2mBTC --'
>>
>> In this example, Alice only has small-ish channels to Ted so she has to
>> split in 3 parts. Ted has good outgoing
>> capacity to Terry so he's able to split in only two parts. And Terry has
>> a big channel to Bob so he doesn't need
>> to split at all.
>> This is interesting because each intermediate Trampoline node has
>> knowledge of his local channels balances,
>> thus can make more informed decisions than Alice on how to efficiently
>> split to reach the next node.
>>
>> But it doesn't stop there. Trampoline also enables a better rendez-vous
>> routing than normal payments.
>> Christian has done most of the hard work to figure out how we could do
>> rendez-vous on top of Sphinx [1]
>> (thanks Christian!), so I won't detail that here (but I do plan on
>> submitting a detailed spec proposal with all
>> the crypto equations and nice diagrams someday, unless Christian does it
>> first).
>>
>> One of the issues with rendez-vous routing is that once Alice (the
>> recipient) has created her part of the onion,
>> she needs to communicate that to Bob (the sender). If we use a Bolt 11
>> invoice for that, it means we need to
>> put 1366 additional bytes to the invoice (plus some additional
>> information for the ephemeral key 

Re: [Lightning-dev] Rendez-vous on a Trampoline

2019-11-12 Thread Bastien TEINTURIER
Hi Antoine,

This delegation trades hardware requirements against privacy leaks
> and higher fees. And we also now have to re-design privacy mechanisms
> to constitute an anonymous network on top of the network one. Rendez-vous
> is one of them, multiple-trampoline hops another one.
>

I'm not sure I understand this correctly. The goal of trampoline is to do
multi-trampoline hops
right from the beginning (when we include it in the spec). In that case I
believe we can make
sure we offer the same privacy as we have today.

That said, the current trampoline proposal, which enables legacy payee doxing
> without any opt-in from its side, is a bit gross.
>

I totally agree and I think that's something that we will fix once we start
brainstorming for
spec inclusion, with more eyes on the proposal.

Overall I agree with your concerns and this is why we want more feedback on
the proposal.
We think that providing a first implementation is a good step towards
getting people onboard
and improving it.

I also think that we're over-estimating the privacy currently offered by
the network (against
powerful adversaries). You mention doing MPP path intersection to expose
senders, but while
the network has low usage, people can be de-anonymized much more easily
with simple
statistical analysis (via cltv and amounts).

But I'm confident that privacy features can be added incrementally, such as
the random_scid
work and rendezvous.

Cheers,
Bastien

On Mon, Oct 28, 2019 at 03:02, Antoine Riard wrote:

> Hi,
>
> Design reason of trampoline routing was to avoid lite nodes having
> to store the whole network graph and compute long-hop route. Trick
> lays in getting away from source-base routing, which has the nice
> property to hide hop position along the payment route (if we forget
> payment hash correleation), by enabling a mechanism for route
> computation delegation.
>
> This delegation trades hardware requirements against privacy leaks
> and higher fees. And we also now have to re-design privacy mechanisms
> to constitute an anonymous network on top of the network one. Rendez-vous
> is one of them, multiple-trampoline hops another one. We may also want to
> be inspired by I2P and its concept of outbound/inbound tunnels, like the payer
> concatenating a second trampoline onion to the rendez-vous onion acquired
> from the payee. Tricks are known but hard and complex to get right in practice.
>
> That said, the current trampoline proposal, which enables legacy payee doxing
> without any opt-in from its side, is a bit gross. Yes, rendez-vous routing by
> the receiver solves it (beyond being cool in itself)! But it is stuck on the
> same requirement to update payee nodes.
> If so, implementing trampoline routing on receiver could be easier and let
> it hide behind the
> feature flag.
>
> If Eclair go forward with trampoline, are you going to enforce that
> trampoline
> routing is only done with payee flagging support ?
>
> That's a slowdown but if not people are going to be upset learning that a
> chunk of their
> incoming payment is potentially logged by some intermediate node.
>
> Also, I'm a bit worried too on how AMP is going to interact with
> trampoline routing.
> Depend on topology, but a naive implementation only using public channels
> and one-hop
> trampoline node would let the trampoline learn who is the payer by doing
> intersection
> of the multiple payment paths.
>
> Long-term we may be pleased to have this flexible tools to enable
> wide-scale
> networking without assessing huge routing tables for everyone but I think
> we
> should be really careful on how we design and deploy this stuff to avoid
> another
> false promise of privacy like we have known on the base layer, e.g
> bloom-filters.
>
> Antoine
>
On Fri, Oct 25, 2019 at 03:20, Corné Plooy via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Cool: non-source rendez-vous routing. Getting closer to 2013 Amiko Pay,
>> with the added experience of 2019 Lightning with Sphinx routing and AMP.
>>
>> https://cornwarecjp.github.io/amiko-pay/doc/amiko_draft_2.pdf
>>
>> (esp. section 2.1.3)
>>
>> Please forgive the use of the term "Ripple". 2013 was a different time.
>>
>>
>> CJP
>>
>>
>> On 22-10-19 14:01, Bastien TEINTURIER wrote:
>> > Good morning everyone,
>> >
>> > Since I'm a one-trick pony, I'd like to talk to you about...guess
>> > what? Trampoline!
>> > If you watched my talk at LNConf2019, I mentioned at the end that
>> > Trampoline enables high AMP very easily.
>> > Every Trampoline node in the route may aggregate an incoming
>> > multi-part payment and then decid

[Lightning-dev] Rendez-vous on a Trampoline

2019-10-22 Thread Bastien TEINTURIER
Good morning everyone,

Since I'm a one-trick pony, I'd like to talk to you about...guess what?
Trampoline!
If you watched my talk at LNConf2019, I mentioned at the end that
Trampoline enables high AMP very easily.
Every Trampoline node in the route may aggregate an incoming multi-part
payment and then decide on how
to split the outgoing aggregated payment. It looks like this:

        .-- 1mBTC --.        .--- 2mBTC ---.
       /             \      /               \
Alice --- 3mBTC ---> Ted --- 4mBTC ---> Terry --- 6mBTC ---> Bob
       \             /
        `-- 2mBTC --'

In this example, Alice only has small-ish channels to Ted so she has to
split in 3 parts. Ted has good outgoing
capacity to Terry so he's able to split in only two parts. And Terry has a
big channel to Bob so he doesn't need
to split at all.
This is interesting because each intermediate Trampoline node has knowledge
of his local channels balances,
thus can make more informed decisions than Alice on how to efficiently
split to reach the next node.
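
The aggregate-and-resplit decision each trampoline node makes can be sketched as a greedy split over local channel balances (a toy heuristic for illustration, not how any implementation actually splits):

```python
def split(amount_msat, channel_balances_msat):
    """Greedily split a payment across local channels (illustration only)."""
    parts = []
    remaining = amount_msat
    # Fill the biggest channels first, carving off what each one can carry.
    for balance in sorted(channel_balances_msat, reverse=True):
        if remaining == 0:
            break
        part = min(balance, remaining)
        parts.append(part)
        remaining -= part
    if remaining > 0:
        raise ValueError("insufficient outgoing capacity")
    return parts

# Alice -> Ted: three small channels force a three-way split of 6 mBTC.
assert split(600_000_000, [100_000_000, 300_000_000, 200_000_000]) == \
    [300_000_000, 200_000_000, 100_000_000]

# Ted -> Terry: bigger channels allow a two-way split of the same amount.
assert split(600_000_000, [400_000_000, 500_000_000]) == \
    [500_000_000, 100_000_000]
```

Each node runs this against balances only it knows, which is why the per-hop decisions can beat whatever split Alice could have computed from public capacity alone.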

But it doesn't stop there. Trampoline also enables a better rendez-vous
routing than normal payments.
Christian has done most of the hard work to figure out how we could do
rendez-vous on top of Sphinx [1]
(thanks Christian!), so I won't detail that here (but I do plan on
submitting a detailed spec proposal with all
the crypto equations and nice diagrams someday, unless Christian does it
first).

One of the issues with rendez-vous routing is that once Alice (the
recipient) has created her part of the onion,
she needs to communicate that to Bob (the sender). If we use a Bolt 11
invoice for that, it means we need to
put 1366 additional bytes to the invoice (plus some additional information
for the ephemeral key switch).
If the amount Alice wants to receive is big and may require multi-part,
Alice has to decide upfront on how to split
and provide multiple pre-encrypted onions (so we need 1366 bytes *per
partial payment*, which kinda sucks).

But guess what? Bitcoin Trampoline fixes that™. Instead of doing the
pre-encryption on a normal onion, Alice
would do the pre-encryption on a Trampoline onion (which is much smaller,
in my prototype it's 466 bytes).
And that allows rendez-vous routing to benefit from Trampoline's ability to
do multi-part at each hop.
Obviously since the onion is smaller, that limits the number of trampoline
hops that can be used, but don't
forget that there are additional "normal" hops between each Trampoline node
(and the final Trampoline spec
can choose the size of the Trampoline onion to enable a good enough
rendez-vous).

Here is what it would look like. Alice chooses to rendez-vous at Terry.
Alice wants the payment to go through Terry
and Teddy so she pre-encrypts a Trampoline onion with that route:

Alice <--- Teddy <--- Terry

She creates a Bolt 11 invoice containing that pre-encrypted onion. Bob
picks up that invoice and can either reach
Terry directly (via a normal payment route) or via another Trampoline node
(Toad?). Bob finalizes the encryption of
the Trampoline onion and sends it onward. Bob can use multi-part and split
the payment however he wishes,
because every Trampoline node in the route will be free to aggregate and
re-split differently.
Terry is the only intermediate node to know that rendez-vous routing was
used. Terry doesn't learn anything about
Alice because the payment still needs to go through Teddy. Teddy only
learns that this is a Trampoline payment, so
he doesn't know his position in the Trampoline route (especially since he
doesn't know that rendez-vous was used).

I believe this makes rendez-vous routing reasonable to implement: the
trade-offs aren't as strong as in the normal
payment case. If I missed something (maybe other issues related to the
current rendez-vous proposal) please let me know.

Of course Trampoline itself also has trade-offs that in some cases may
impact privacy (e.g. when paying to legacy nodes
that don't understand the Trampoline onion). This is why Eclair is
currently implementing it to identify all the places where
it falls short, so that we can then leverage the community's amazing brain
power to converge on a spec that everyone is
happy with and that minimizes the trade-offs we need to make. Stay tuned
for more information and updates to the spec PR
once we make progress on our Trampoline experiments.

Thank you for reading this, don't hesitate to throw ideas and/or criticize
this proposal.
Note that all the cryptographic details are left as an exercise to the
reader.

Bastien

[1]
https://github.com/lightningnetwork/lightning-rfc/wiki/Rendez-vous-mechanism-on-top-of-Sphinx


Re: [Lightning-dev] eltoo implementation in Bitcoin functional test framework

2019-09-04 Thread Bastien TEINTURIER
Good morning Richard,

This is an interesting initiative, I'm curious to see the results!
I know we haven't worked on any Eltoo implementation yet at Acinq and I
don't know if others have attempted it.

However I have a very open question that may impact your project.
I'm starting to look at miniscript [1] (still a total noob though) and
listened to an interview where Pieter Wuille briefly mentioned that using
miniscript for lightning may be more future-proof and extensible than
directly using bitcoin script.
Have you considered first re-writing the Eltoo scripts with miniscript? Or
did someone else on this list attempt this already?
Do people on this list have opinions on whether that is the right direction
for Eltoo scripts (and maybe even for Bolt 1.x scripts if *any_prevout*
never makes it to Bitcoin scripts)?

Cheers,
Bastien

[1] http://bitcoin.sipa.be/miniscript/

On Wed, Sep 4, 2019 at 13:20, Richard Myers wrote:

> Hi All,
>
> To better understand how the eltoo update scheme (
> https://blockstream.com/eltoo.pdf ) works in practice I implemented eltoo
> in the Bitcoin functional test framework. These simulations exercise a
> concrete implementation of the eltoo Bitcoin scripts and explore the data
> flow between nodes that use eltoo to update their channel state.
>
> My motivation for creating these tests is to have a framework for both
> understanding and refining the Bitcoin scripts and message passing protocol
> for eltoo. I’d love to hear what people think of my initial implementation.
>
> This simulation uses a fork of Bitcoin with cdecker’s SIGHASH_NOINPUT
> patch applied to the signet2 fork fjahr created with patches applied for
> signet (kallewoof), taproot (sipa) and anyprevout* (ajtowns).
>
>
> https://github.com/remyers/signet2/blob/eltoo/test/functional/simulate_eltoo.py
>
> Next steps:
>
>-
>
>add bidirectional channel updates
>-
>
>derive public keys for settle transactions from a pre-shared basepoint
>
>
> Does anyone know of any other eltoo implementations? I’d love to compare
> notes and get the ball rolling on a detailed specification.
>
> Special thanks to the Chaincode Summer Residency and Christian Decker for
> their helpful advice and encouragement while I worked on this project.
>
>   -- Richard
>


Re: [Lightning-dev] Fwd: Trampoline Routing

2019-08-05 Thread Bastien TEINTURIER
>
> Anyway, I'm probably missing something, but another way of putting my
> question would be: why does your example use 2 trampolines instead of 1?


Because I wanted to show the generality of the scheme: the number of
trampoline hops is entirely Alice's choice.
If Alice only cares about cost-efficiency, she will choose a single
trampoline hop (in the current network's conditions).
If Alice cares about privacy, she will likely choose more than one
trampoline hop.
The fact that she *may* use multiple trampoline hops is important because
it increases her anonymity set (even if she
uses only one in the end).

You also said we're going to need some hierarchy, but what is that? Is it
> required?


This is not needed in the current network because the routing table is
still small.
If the network eventually reaches a billion channels, we can't expect even
trampoline nodes to sync everything and
be able to find a route to any other node in the network: when/if that
happens, we will need to introduce some kind
of hierarchy / packet-switching as ZmnCSPxj previously mentioned.
But we don't know yet if that will happen, or when it will happen. It's
important to think about it and make sure we can
have a working solution if that happens, but this isn't a short-term need.


On Mon, Aug 5, 2019 at 11:30, fiatjaf wrote:

> No. My question was more like why does Alice decide to build a route that
> goes through T1 and RT2 and not only through one trampoline router she knows.
>
> That makes sense to me in the context of ZmnSCPxj's virtual space idea,
> but not necessarily in the current network conditions. You also said we're
> going to need some hierarchy, but what is that? Is it required?
>
> Anyway, I'm probably missing something, but another way of putting my
> question would be: why does your example use 2 trampolines instead of 1?
>
> On Monday, August 5, 2019, Bastien TEINTURIER  wrote:
> > Good morning fiatjaf,
> > This is a good question, I'm glad you asked.
> > As ZmnSCPxj points out, Alice doesn't know. By not syncing the full
> network graph, Alice has to accept
> > "being in the dark" for some decisions. She is merely hoping that RT2
> can find a route to Bob. Note that
> > it's quite easy to help Alice make informed decisions by providing routing
> hints in the invoice and in gossip
> > messages (which we already do for "normal" routing).
> > The graph today is strongly connected, so it's quite a reasonable
> assumption (and Alice can easily retry
> > with another choice of trampoline node if the first one fails - just
> like we do today with normal payments).
> > I fully agree with ZmnSCPxj though that in the future this might not be
> true anymore. When/if the network
> > becomes too large we will likely lose its strongly connected nature.
> When that happens, the Lightning
> > Network will need some kind of hierarchical / packet switched routing
> architecture and we won't require
> > trampoline nodes to know the whole network graph and be able to route to
> mostly anyone.
> > I argue that trampoline routing is a first step towards enabling that.
> It's a good engineering trade-off between
> > ease of implementation and deployment, fixing a problem we have today
> and enabling future scaling for
> > problems we'll have tomorrow. It's somewhat easy once we have trampoline
> payments to evolve that to a
> > system closer to the internet's packet switching infrastructure, so
> we'll deal with that once the need for it
> > becomes obvious.
> > Does that answer your question?
> > Cheers,
> > Bastien
> > On Sat, Aug 3, 2019 at 05:48, ZmnSCPxj wrote:
> >>
> >> Good morning fiatjaf,
> >>
> >> I proposed before that we could institute a rule where nodes are mapped
> to some virtual space, and nodes should preferably retain the part of the
> network graph that connects itself to those nodes near to it in this
> virtual space (and possibly prefer to channel to those nodes).
> >>
> >>
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-April/001959.html
> >>
> >> Thus Alice might **not** know that some route exists between T1 and T2.
> >>
> >> T1 itself might not know of a route from itself to T2.
> >> But if T1 knows a route to T1.5, and it knows that T1.5 is nearer to T2
> than to itself in the virtual space, it can **try** to route through T1.5
> in the hope T1.5 knows a route from itself to T2.
> >> This can be done if T1 can remove itself from the trampoline route and
> replace itself with T1.5, offerring in exchange some of the fee to T1.5.
> >>
> >> Other ways of knowing some distillation of the public

Re: [Lightning-dev] Trampoline Routing

2019-08-05 Thread Bastien TEINTURIER
Good morning fiatjaf,

This is a good question, I'm glad you asked.

As ZmnSCPxj points out, Alice doesn't know. By not syncing the full network
graph, Alice has to accept
"being in the dark" for some decisions. She is merely hoping that RT2 *can
find a route* to Bob. Note that
it's quite easy to help Alice make informed decisions by providing routing
hints in the invoice and in gossip
messages (which we already do for "normal" routing).

The graph today is strongly connected, so it's quite a reasonable
assumption (and Alice can easily retry
with another choice of trampoline node if the first one fails - just like
we do today with normal payments).

I fully agree with ZmnSCPxj though that in the future this might not be
true anymore. When/if the network
becomes too large we will likely lose its strongly connected nature. When
that happens, the Lightning
Network will need some kind of hierarchical / packet switched routing
architecture and we won't require
trampoline nodes to know the whole network graph and be able to route to
mostly anyone.
I argue that trampoline routing is a first step towards enabling that. It's
a good engineering trade-off between
ease of implementation and deployment, fixing a problem we have today and
enabling future scaling for
problems we'll have tomorrow. It's somewhat easy once we have trampoline
payments to evolve that to a
system closer to the internet's packet switching infrastructure, so we'll
deal with that once the need for it
becomes obvious.

Does that answer your question?

Cheers,
Bastien

On Sat, Aug 3, 2019 at 05:48, ZmnSCPxj wrote:

> Good morning fiatjaf,
>
> I proposed before that we could institute a rule where nodes are mapped to
> some virtual space, and nodes should preferably retain the part of the
> network graph that connects itself to those nodes near to it in this
> virtual space (and possibly prefer to channel to those nodes).
>
>
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-April/001959.html
>
> Thus Alice might **not** know that some route exists between T1 and T2.
>
> T1 itself might not know of a route from itself to T2.
> But if T1 knows a route to T1.5, and it knows that T1.5 is nearer to T2
> than to itself in the virtual space, it can **try** to route through T1.5
> in the hope T1.5 knows a route from itself to T2.
> This can be done if T1 can remove itself from the trampoline route and
> replace itself with T1.5, offerring in exchange some of the fee to T1.5.
>
> Other ways of knowing some distillation of the public network without
> remembering the channel level details are also possible.
> My recent pointlessly long spam email for example has a section on
> Hierarchical Maps.
>
>
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-August/002095.html
>
> Regards,
> ZmnSCPxj
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Saturday, August 3, 2019 12:29 AM, fiatjaf  wrote:
>
> > Ok, since you seem to imply each question is valuable, here's mine: how
> does Alice know RT2 has a route to Bob? If she knows that, can she also
> know T1 has a route to Bob? In any case, why can't she just build her small
> onion with Alice -> T1 -> Bob? I would expect that to be the most common
> case, am I right?
> >
> > On Friday, August 2, 2019, Bastien TEINTURIER  wrote:
> >
> > > Good morning list,
> > >
> > > I realized that trampoline routing has only been briefly described to
> this list (credits to cdecker and pm47 for laying
> > > out the foundations). I just published an updated PR [1] and want to
> take this opportunity to present the high level
> > > view here and the parts that need a concept ACK and more feedback.
> > >
> > > Trampoline routing is conceptually quite simple. Alice wants to send a
> payment to Bob, but she doesn't know a
> > > route to get there because Alice only keeps a small area of the
> routing table locally (Alice has a crappy phone,
> > > damn it Alice sell some satoshis and buy a real phone). However, Alice
> has a few trampoline nodes in her
> > > friends-of-friends and knows some trampoline nodes outside of her
> local area (but she doesn't know how to reach
> > > them). Alice would like to send a payment to a trampoline node she can
> reach and defer calculation of the rest of
> > > the route to that node.
> > >
> > > The onion routing part is very simple now that we have variable-length
> onion payloads (thanks again cdecker!).
> > > Just like russian dolls, we simply put a small onion inside a big
> onion. And the HTLC management forwards very
> > > naturally.
> > >
> > > It's always simpler with 

[Lightning-dev] Trampoline Routing

2019-08-02 Thread Bastien TEINTURIER
Good morning list,

I realized that trampoline routing has only been briefly described to this
list (credits to cdecker and pm47 for laying
out the foundations). I just published an updated PR [1] and want to take
this opportunity to present the high level
view here and the parts that need a concept ACK and more feedback.

Trampoline routing is conceptually quite simple. Alice wants to send a
payment to Bob, but she doesn't know a
route to get there because Alice only keeps a small area of the routing
table locally (Alice has a crappy phone,
damn it Alice sell some satoshis and buy a real phone). However, Alice has
a few trampoline nodes in her
friends-of-friends and knows some trampoline nodes outside of her local
area (but she doesn't know how to reach
them). Alice would like to send a payment to a trampoline node she can
reach and defer calculation of the rest of
the route to that node.

The onion routing part is very simple now that we have variable-length
onion payloads (thanks again cdecker!).
Just like russian dolls, we simply put a small onion inside a big onion.
And the HTLC management forwards very
naturally.

It's always simpler with an example. Let's imagine that Alice can reach
three trampoline nodes: T1, T2 and T3.
She also knows the details of many remote trampoline nodes that she cannot
reach: RT1, RT2, RT3 and RT4.
Alice selects T1 and RT2 to use as trampoline hops. She builds a small
onion that describes the following route:

*Alice -> T1 -> RT2 -> Bob*

She finds a route to T1 and builds a normal onion to send a payment to T1:

*Alice -> N1 -> N2 -> T1*

In the payload for T1, Alice puts the small trampoline onion.
When T1 receives the payment, he is able to peel one layer of the
trampoline onion and discover that he must
forward the payment to RT2. T1 finds a route to RT2 and builds a normal
onion to send a payment to RT2:

*T1 -> N3 -> RT2*

In the payload for RT2, T1 puts the peeled small trampoline onion.
When RT2 receives the payment, he is able to peel one layer of the
trampoline onion and discover that he must
forward the payment to Bob. RT2 finds a route to Bob and builds a normal
onion to send a payment:

*RT2 -> N4 -> N5 -> Bob*

In the payload for Bob, RT2 puts the peeled small trampoline onion.
When Bob receives the payment, he is able to peel the last layer of the
trampoline onion and discover that he is
the final recipient, and fulfills the payment.

Alice has successfully sent a payment to Bob deferring route calculation to
some chosen trampoline nodes.
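The Alice -> T1 -> RT2 -> Bob flow can be sketched as follows. This is a toy model, not the spec: real trampoline onions use Sphinx encryption, while here each "onion" is just a list of hops so the peel-and-forward structure is visible:

```python
# Toy model of the Alice -> T1 -> RT2 -> Bob flow above.
# Real trampoline onions use Sphinx encryption; here each "onion" is
# just a list of hops so the peel/forward structure stays visible.

def peel(trampoline_onion):
    """Return (next_hop, remaining_onion) after peeling one layer."""
    return trampoline_onion[0], trampoline_onion[1:]

# Alice builds the small trampoline onion: T1 -> RT2 -> Bob.
trampoline_onion = ["T1", "RT2", "Bob"]

route = []
node, onion = peel(trampoline_onion)  # Alice sends to T1 via a normal route
while onion:
    route.append(node)
    node, onion = peel(onion)         # each trampoline node peels one layer
route.append(node)                    # Bob peels the last layer and fulfills

assert route == ["T1", "RT2", "Bob"]
```

Each trampoline node only ever sees the next hop and a smaller onion; the normal payment routes between trampoline hops (N1, N2, N3, ...) are chosen independently at each step.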
That part was simple and (hopefully) not controversial, but it left out
some important details:

   1. How do trampoline nodes specify their fees and cltv requirements?
   2. How does Alice sync the fees and cltv requirements for her remote
   trampoline nodes?

To answer 1., trampoline nodes need to estimate a fee and cltv that allow
them to route to (almost) any other
trampoline node. This is likely going to increase the fees paid by
end-users, but they can't eat their cake and
have it too: by not syncing the whole network, users are trading fees for
ease of use and payment reliability.
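As a rough illustration of that estimation, here is a sketch (all names are hypothetical, not part of any spec) of how a trampoline node could derive a single advertised fee/cltv quote from routes it samples, assuming BOLT-style per-hop fees (a base fee in msat plus a proportional fee in parts per million):

```python
# Sketch: how a trampoline node might derive one advertised (fee, cltv)
# quote covering routes to almost any destination. Assumes BOLT-style
# per-hop fees: fee = base_msat + amount * ppm / 1_000_000.
# Names and the percentile heuristic are illustrative only.

def hop_fee(amount_msat, base_msat, ppm):
    return base_msat + amount_msat * ppm // 1_000_000

def route_fee(amount_msat, hops):
    """Total fee for a route, accumulated from the destination backwards."""
    total = amount_msat
    for base_msat, ppm, _cltv in reversed(hops):
        total += hop_fee(total, base_msat, ppm)
    return total - amount_msat

def trampoline_quote(sampled_routes, amount_msat, percentile=0.95):
    """Advertise a fee/cltv high enough to cover most sampled routes."""
    fees = sorted(route_fee(amount_msat, r) for r in sampled_routes)
    cltvs = sorted(sum(h[2] for h in r) for r in sampled_routes)
    i = min(int(len(fees) * percentile), len(fees) - 1)
    return fees[i], cltvs[i]

routes = [
    [(1000, 100, 144), (1000, 200, 40)],               # two-hop route
    [(500, 50, 144), (2000, 500, 144), (0, 10, 40)],   # three-hop route
]
fee, cltv = trampoline_quote(routes, 1_000_000)  # fee=3060 msat, cltv=328
```

Overshooting the quote is what makes the payment more expensive for end-users, but it is also what buys them the reliability mentioned above.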

To answer 2., we can re-use the existing gossip infrastructure to exchange
a new *node_update *message that
contains the trampoline fees and cltv. However Alice doesn't want to
receive every network update because she
doesn't have the bandwidth to support it (damn it again Alice, upgrade your
mobile plan). My suggestion is to
create a filter system (similar to BIP37) where Alice sends gossip filters
to her peers, and peers only forward to
Alice updates that match these filters. This doesn't have the issues BIP37
has for Bitcoin because it has a cost
for Alice: she has to open a channel (and thus lock funds) to get a
connection to a peer. Peers can refuse to serve
filters if they are too expensive to compute, but the filters I propose in
the PR are very cheap (a simple xor or a
node distance comparison).
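A cheap xor-based filter in that spirit could look like the sketch below. The wire encoding and parameters in the actual PR differ; the names and the distance threshold here are purely illustrative:

```python
# Sketch of a cheap xor-distance gossip filter, in the spirit of the
# filters mentioned above. The real PR's encoding differs; names and
# the threshold are illustrative only.

def xor_distance(node_id_a: bytes, node_id_b: bytes) -> int:
    """Kademlia-style distance between two 33-byte node ids."""
    return int.from_bytes(
        bytes(a ^ b for a, b in zip(node_id_a, node_id_b)), "big"
    )

def matches_filter(update_node_id: bytes, filter_node_id: bytes,
                   max_distance: int) -> bool:
    """A peer only forwards updates for nodes 'close' to the filter target."""
    return xor_distance(update_node_id, filter_node_id) <= max_distance

alice = bytes([0x02] + [0x00] * 32)
near = bytes([0x02] + [0x00] * 31 + [0x05])   # differs only in last byte
far = bytes([0x03] + [0xFF] * 32)

assert matches_filter(near, alice, max_distance=2**16)
assert not matches_filter(far, alice, max_distance=2**16)
```

Evaluating such a filter is a single xor and comparison per update, so peers can serve it without meaningful cost.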

If you're interested in the technical details, head over to [1].
I would really like to get feedback from this list on the concept itself,
and especially on the gossip and fee estimation
parts. If you made it that far, I'm sure you have many questions and
suggestions ;).

Cheers,
Bastien

[1] https://github.com/lightningnetwork/lightning-rfc/pull/654


Re: [Lightning-dev] Improving Lightning Network Pathfinding Latency by Path Splicing and Other Real-Time Strategy Game Techniques

2019-08-02 Thread Bastien TEINTURIER
Good morning ZmnSCPxj,

The channel that is failing is then the channel *after* the error-reporting
> node (assuming bit `NODE` (`0x2000`) is not set in the `failure_code`: if
> it is a node-level error we should back off by one node and mark the erring
> node as unreliable).
>
> Indeed, the other insight here is that, if we were able to receive an
> error report from forwarding node N, this implies that every node and
> channel between us and node N is reliable.
> `permuteroute` reuses this prefix, since it is known-reliable.
>

I think this is more subtle than that. The thread I linked provided more
details, but in many cases you can't decide whether you should blacklist
only the channel *after* the failing node or also the channel *before* the
failing node. And it's even worse than that, if a node before the failing
one is malicious, it can force some next node to fail (by simply holding
the HTLC until close to the expiry) and in that case you should also
blacklist
some of the nodes *before* the failing node. And note that malicious nodes
with that behavior would happily forward the error onion because it
directly incriminates someone else.

I agree that we should optimize for the most common use-case (which
probably means ignoring these malicious node scenario for now), but I think
it's important to keep them in mind. At some point people will attack the
network so we need to give some thoughts about potential attacks and make
sure our algorithms can heal properly.
But that's not the most important discussion for this thread so let's
shelve that for now :).

The *real* issue is that costs are *both* fixed and proportional.
> So we need to select a balancing factor between the fixed and proportional
> costs.
>

I fully agree with that.

We can assume "past performance is an indicator of future performance" and
> record the average payment size of the user in order to determine how to
> balance the fixed and proportional costs.
> Picking an example value of say 1mBTC at the start, when the user has not
> used the node yet, seems reasonable.
>

I also agree this sounds reasonable, this is what we had in mind as a
starting point for eclair.
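Concretely, the balancing factor turns the fixed and proportional fees into a single edge weight by evaluating both at a typical payment size (the 1 mBTC default suggested above). A minimal sketch, with illustrative names:

```python
# Sketch: ranking channels by a single cost that blends fixed and
# proportional fees, using a typical payment size as the balancing
# factor (e.g. the 1 mBTC default suggested above). Names illustrative.

TYPICAL_AMOUNT_MSAT = 100_000_000  # 1 mBTC

def channel_cost(base_fee_msat, fee_ppm,
                 typical_amount_msat=TYPICAL_AMOUNT_MSAT):
    """Effective cost used as the edge weight during pathfinding."""
    return base_fee_msat + typical_amount_msat * fee_ppm // 1_000_000

# A high-base / low-ppm channel vs a low-base / high-ppm channel:
a = channel_cost(base_fee_msat=10_000, fee_ppm=1)    # 10_000 + 100
b = channel_cost(base_fee_msat=100, fee_ppm=1_000)   # 100 + 100_000
assert a < b  # for ~1 mBTC payments the proportional fee dominates
```

Updating `TYPICAL_AMOUNT_MSAT` from the user's observed average payment size is then exactly the "past performance" heuristic quoted above.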

Most solutions to the network flow problem seem to require an accurate view
> of flows at each node, which we do not have.
>

Interesting, but for the first hop (local channels) we have the exact
balance available for sending, and for next hops we can consider the
channels
balanced (with a random perturbation of X%). The combination of that
and retries could provide interesting results (I plan on testing that on
realistic simulations of the network, I can't know for sure if this will
work until then).

My first implementation of MPP for eclair uses an algorithm similar to
flocking.
I think your last suggestion of using something similar to `permuteroute`
can be interesting to try too.
I'll give that a shot if we're not satisfied with the results of the
flocking implementation. Is that what you plan on doing for MPP in
c-lightning?

Cheers,
Bastien


On Fri, Aug 2, 2019 at 01:02, ZmnSCPxj wrote:

> Good morning Bastien,
>
> >
> > I think that the main points that make routing in the Lightning Network
> different from game path-finding algorithms are:
> >
> > -   Paths are consumed by payments, effectively moving available balance
> between sides of a channel
> > -   The routing algorithm doesn't know remote channels balance
> allocation (and that changes constantly)
> > -   The cost of a path depends on the value you're sending (proportional
> fees)
> > -   This encourages algorithms not to search for an optimal solution
> (because an optimal solution on outdated/incomplete data doesn't even make
> sense) but rather fast and good enough solutions with retries
>
> I believe the differences are smaller than you might initially think.
>
> Units move around on the map and a pathfinding algorithm cannot predict
> how the *other* units owned by allied players will be, once the current
> units asking for a path have moved along the path.
> i.e. the algorithm does not know how remote tiles are occupied (and that
> changes constantly)
>
> Faster units really should be able to walk around slower units, because
> there is often a tradeoff between speed and combat effectiveness and a
> player asking a faster unit to move probably is depending on their speed.
> i.e. paths can be blocked by slower units, effectively becoming
> slow-moving obstacles that need to be worked around.
>
> And so on.
>
> >
> > There are a few technicalities that might be a problem for some of your
> suggestions, I'm interested in your opinion on how to address them.
> >
> > For `permuteroute`, you mention the following pre-requisite:
> >
> > > the original payer is informed of which node reported the failure and
> which channel failed.
> >
> > We currently don't have a solution for reliable error reporting, as
> pointed out in [1].
> > I think making progress on this thread would be interesting and 

Re: [Lightning-dev] Improving Lightning Network Pathfinding Latency by Path Splicing and Other Real-Time Strategy Game Techniques

2019-08-01 Thread Bastien TEINTURIER
Good morning ZmnSCPxj,

Thanks for sharing this analysis, you're touching on a lot of interesting
points and giving a lot of good resource pointers.
It echoes many ideas we also had to improve eclair's routing algorithm
(which currently uses Yen's k-shortest paths with
Dijkstra, a few configurable heuristics and a compact in-memory
representation of the graph).

I think that the main points that make routing in the Lightning Network
different from game path-finding algorithms are:

   - Paths are consumed by payments, effectively moving available balance
   between sides of a channel
   - The routing algorithm doesn't know remote channels balance allocation
   (and that changes constantly)
   - The cost of a path depends on the value you're sending (proportional
   fees)
   - This encourages algorithms not to search for an optimal solution
   (because an optimal solution on outdated/incomplete data doesn't even make
   sense) but rather fast and good enough solutions with retries

There are a few technicalities that might be a problem for some of your
suggestions, I'm interested in your opinion on how to address them.

For `permuteroute`, you mention the following pre-requisite:

the original payer is informed of which node reported the failure and which
> channel failed.
>

We currently don't have a solution for reliable error reporting, as pointed
out in [1].
I think making progress on this thread would be interesting and useful for
routing heuristics.

I thought about path pre-computation and path caching, but what bothered me
is that the cost depends on the amount you want to send.
When pre-computing / caching, you have to either ignore that completely
(which can be fine, I don't think trying to always find the most
cost-efficient route is a reasonable goal) or take into account some kind
of "universal" factor that works for most amounts. How did you take
that into account in your pre-computation experiments?

I do agree that multi-part payments and trampoline (hierarchical routing)
can offer a lot of room for algorithmic improvements and your
ideas on how to leverage them resonate with mine.

An interesting thing to note is that trampoline (in the current proposal at
least) allows trampoline nodes to leverage multi-part payments
"at each hop", meaning that a trampoline node can arbitrarily join/split an
incoming payment to reach the next trampoline node.

While implementing a first version of multi-part payments, I realized that
they need to be tightly integrated to the routing algorithm.
Since each payment "consumes" a path, potentially "stealing" it from other
payments, a naive implementation of multi-part payments
 would try to use different paths for each sub-payment, but that's an
inefficient way of solving it. Working on multi-part payments made
me think that maybe our routing problem is more similar to a circulation or
network flow problem [2] rather than path-finding. Have you
thought about this? If so what is your opinion?

Thanks again for sharing all this and starting those interesting
discussions.

Cheers,
Bastien

[1]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-June/002015.html
[2] https://en.wikipedia.org/wiki/Circulation_problem

On Thu, Aug 1, 2019 at 07:14, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning laolu,
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Thursday, August 1, 2019 10:29 AM, Olaoluwa Osuntokun <
> laol...@gmail.com> wrote:
>
> > > I found out recently (mid-2019) that mainnet Lightning nodes take an
> > > inordinate amount of time to find a route between themselves and an
> > > arbitrary payee node.
> > > Typical quotes suggested that commodity hardware would take 2 seconds
> to
> > > find a route
> >
> > Can you provide a reproducible benchmark or further qualify this number
> (2
> > seconds)?
>
> No reproducible benchmark.
> However, this is my reference:
> https://medium.com/@cryptotony/why-does-ln-payments-arent-instantaneous-d24f7e5f88cb
> which claims this 2 seconds for LND implementations.
> (It is entirely possible this information is obsolete, as it was published
> a month ago and things move fast in LN.)
>
> As per Rene, from his C-Lightning mainnet node, `getroute` typically takes
> 1.1 to 1.3s to a particular unspecified destination.
> I do not know details of his hardware; it would be better to ask him.
>
> > Not commenting on the rest of this email as I haven't read the
> > rest of it yet, but this sounds like just an issue of engineering
> > optimization.
>
> The rest of the email *is* engineering optimization.
>
> > AFAIK, most implementations are using unoptimized on-disk
> > representations of the graph, do minimal caching, and really haven't made
> > any sort of push to optimize these hot spots.
>
> C-Lightning has always used in-memory representation of the graph (helped
> massively by the low-level nature of C so we can fit a larger graph in 

Re: [Lightning-dev] Proposal for Stuckless Payment

2019-06-25 Thread Bastien TEINTURIER
This is a very good proposal, thanks Hiroki for all those details it helped
me learn a lot.

If I'm not mistaken, https://eprint.iacr.org/2018/472.pdf has shown that we
MUST add another round
of communication if we want to avoid the wormhole attacks (and thus
decorrelate payments). While
I agree that this degrades latency, if it provides a way to "cancel" stuck
payments and retry I think it's
worth it. And I really like the option to make multiple tries in parallel
as you suggest, which would help
with latency (if you have enough outbound capacity).

I agree with ZmnSCPxj that it would be good to keep payer anonymity, and I
may have a solution to
provide this. As ZmnSCPxj explains, the loss of payer anonymity is due to
the ACK message traveling
via the same route (D -> C -> B -> A).

However, there are three interesting things to note about the ACK message
(unless I missed something):

   1. It doesn't need any data from D
   2. It isn't tied to channels and only A needs to receive it (not
   intermediate nodes)
   3. It could use a smaller onion packet than the *add_htlc* onion (fixed
   size but smaller than 1300 bytes)

Given 1., the ACK onion packet can be constructed by A. Given 2., it can
use a different route than the
*add_htlc* onion packet.

A can select another route (e.g. D -> E -> F -> A) and can create the ACK
onion packet during the setup phase.
A can then embed this ACK packet inside the last hop payload of the
*add_htlc* onion packet.
When D receives it, it simply sends that onion to the indicated recipient
(E) which will unwrap and forward.
This way D doesn't learn anything about A, and intermediate nodes aren't
included in the ACK route so
they don't learn anything either.
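The construction can be sketched with a toy model (names and structures hypothetical, crypto elided — a real ACK onion would be Sphinx-encrypted by A during the setup phase):

```python
# Toy model of the ACK scheme described above: Alice builds the ACK
# onion over a different route (D -> E -> F -> Alice) during setup and
# embeds it in D's payload of the add_htlc onion. Crypto elided.

ack_onion = ["E", "F", "Alice"]  # built by Alice, opaque to D

add_htlc_payloads = {
    "B": {"forward_to": "C"},
    "C": {"forward_to": "D"},
    "D": {"final": True, "ack_onion": ack_onion},  # last-hop payload
}

# When D receives the payment, it simply forwards the ACK onion as-is:
first_ack_hop = add_htlc_payloads["D"]["ack_onion"][0]
assert first_ack_hop == "E"  # D learns only E, nothing about Alice
```

Since the ACK route (E, F) is disjoint from the payment route (B, C), no single node sees both directions, which is what preserves A's anonymity.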

Note that nodes in the ACK message route don't have an incentive to forward
ACK messages (apart from
participating honestly in the network). But even if a malicious node drops
an ACK message, it just ends up
being a stuck payment that you can safely retry since you haven't shared
the keys yet.

And if A doesn't care about anonymity at all, A can provide its information
in the onion to let D directly send it
the ACK. I don't know if we want to provide that option or not, but at
least that's possible to do.

Would that be a satisfactory solution to maintain the payer anonymity
property?

Cheers,
Bastien

On Tue, Jun 25, 2019 at 12:16, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Hiroki,
>
> Thank you for this.
> It seems a good solution.
> And, it seems, yet another reason to move to payment point / scalar from
> payment hash / preimage.
>
> As I understand it, the `y0+y1+...` sums are also the blinding tweaks used
> for payment decorrelation.
> My understanding, they will be sent in onion packet.
>
> > Pre-Settlement: Add this new phase after the Update phase. Any route can
> be used.
> >
> > A --> * --> * --> D   # key (`y0+y1+y2`)
> >
> > A <-- * <-- * <-- D   # PoP (`z`)
>
> My poor suggestion, to use same route ->B->C-> and <-B<-C<-.
> Currently, destination D does not know who is payer A.
> So my opinion, it is best to retain this property of "D does not know
> payer A".
>
> Of course, in practice in many case, D knows A already, for example for
> delivery address of product.
> But anonymity should be preserved still.
> For example, perhaps I wish to deliver product to some other entity other
> than me, and wish to remain anonymous to do so.
>
> However, I seem, the detail below, means we should use the same route:
>
> > At the end of this phase, we require the payee return the ACK to the
> payer to notify the completion of this phase. It must be guaranteed that
> the payee himself returns it. This can be achieved by reversing the route
> and wrapping the ACK in the onion sequentially, as the `reason` field of
> the `update_fail_htlc` in BOLT 1.x.
>
>
>
> > These modifications add the cost of three new messages (ACK, key, PoP),
> but it is only three (unaccompanied by other messages). These may also
> reduce other preventive messages.
>
> The added communication round may allow intermediate node to guess the
> payer.
>
> Already in current network, intermediate node can guess the distance to
> payee.
> Distance to payee can be guessed from timelocks.
> Also, distance to payee can be guessed by time from `update_add_htlc` to
> time of `update_fulfill_htlc`/`update_fail_htlc`.
>
> However, there is no information that intermediate node can use to guess
> distance to payer.
>
> With addition of new ACK-key turnaround, intermediate node can measure
> time from send of ACK to receive of key, and guess its distance to payer.
>
> I am uncertain how to weigh this.
> I believe, this idea is very good and stuckless is important feature.
> Getting some information about the payer may allow attempts at censorship,
> however.
> But maybe the information leaked is not significant enough in practice.
>
> Another issue is the added latency of payments.
> 

[Lightning-dev] Trampoline Onion Routing proposal

2019-05-23 Thread Bastien TEINTURIER
Good morning list,

I have been working on formalizing how trampoline onion routing could work
and just published a PR here.
Comments are welcome as this introduces changes at several layers.

Mobile phones are already struggling to keep up with the bandwidth and CPU
requirements to do source-routing.
Trampoline onion routing maintains the same security assumptions (and even
increases the anonymity set) at the
cost of higher fees and timeout values for the payer (and thus more gains
for trampoline nodes).

Here is a high-level summary of the changes required:

   - A new version of the onion packet with a smaller size than v0 (but
   with the same cryptographic operations and security guarantees)
   - A new *node_update* message advertising trampoline fees and cltv
   - New logic to run on trampoline nodes to estimate such fees and cltv
   - A filter system for *channel_updates* and *node_updates*
   - This doesn't change anything about the way HTLCs are forwarded, failed
   or fulfilled

Please read the full document for details (and feel free to comment
directly on the PR).

An interesting side-note is that this also enables rendezvous routing and
many other "onion in an onion" constructions.
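As a toy illustration of the "onion in an onion" layering (XOR is a stand-in for real Sphinx encryption, and all names are hypothetical; this shows only the nesting, not a secure construction):

```python
# The payer builds a small inner onion over the trampoline nodes, then the
# whole inner onion travels as the payload of a normal outer onion to the
# first trampoline node. Each trampoline peels one layer and learns only
# the next trampoline hop.

def xor_layer(data: bytes, key: bytes) -> bytes:
    # Placeholder "encryption": repeat-key XOR, NOT a real onion cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(hops: list[tuple[bytes, bytes]], payload: bytes) -> bytes:
    # hops: (node_id, shared_key) pairs; wrap from destination outwards.
    onion = payload
    for node_id, key in reversed(hops):
        onion = xor_layer(node_id + onion, key)
    return onion

def peel(onion: bytes, key: bytes, id_len: int = 1) -> tuple[bytes, bytes]:
    inner = xor_layer(onion, key)
    return inner[:id_len], inner[id_len:]

# Trampoline node T, then final destination D.
inner_onion = wrap([(b"T", b"k1"), (b"D", b"k2")], b"invoice-payload")
nid, rest = peel(inner_onion, b"k1")
assert nid == b"T"  # T learns it is a trampoline hop, nothing more
```

The real proposal uses the same Sphinx cryptography as the outer onion, just over a smaller packet, which is what keeps the security assumptions unchanged.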

Cheers,
Bastien
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Eltoo, anyprevout and chaperone signatures

2019-05-16 Thread Bastien TEINTURIER
Thanks for your answers and links, the previous discussions probably
happened before I joined this list so I'll go dig into the archive ;)

> I think it makes sense for us to consider both variants, one committing
> to the script and the other not committing to the script, but I think it
> applies rather to the `update_tx` <-> `settlement_tx` link and less to
> the `funding_tx` <-> `update_tx` link and `update_tx` <-> `update_tx`
> link. The reason is that the `settlement_tx` needs to be limited to be
> bindable only to the matching `update_tx` (`anyprevout`), while
> `update_tx` need to be bindable to the `funding_tx` as well as any prior
> `update_tx` which differ in the script by at least the state number
> (hence `anyprevoutanyscript`).

> Like AJ pointed out in another thread, the use of an explicit trigger
> transaction is not really needed since any `update_tx` can act as a
> trigger transaction (i.e., start the relative timeouts to tick).

Thanks for confirming, that was how I understood it too.

> Specifically we can't make use of the collaborative path where
> we override an `update_tx` with a newer one in taproot as far as I can
> see, since the `update_tx` needs to be signed with noinput (for
> rebindability) but there is no way for us to specify the chaperone key
> since we're not revealing the committed script.

Can you expand on that? Why do we need to "make use of the collaborative
path" (maybe it's unclear to me what you mean by collaborative path here)?
When we override an `update_tx` we use a new state number and we derive the
new keys for that state independently of the keys of the previous state
right?
So we would derive new settlement keys and potentially chaperone keys, and
re-create a merkle tree and taproot from scratch.
I don't see where taproot interacts in a negative way with noinput there...

> For that matter the `OP_CHECKMULTISIG`/`OP_CHECKSIGADD` could be reduced
> by using MuSig on the two participants.
> Further, there is no need for an explicit `OP_CHECKSEQUENCEVERIFY` or
> even separate keys for state and update paths.

Thanks for the suggestions, these are good optimizations.
I feel like there will be a few other optimizations that are unlocked by
taproot/tapscript, it will be interesting to dig into that.
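As a side note, the nSequence/nLockTime indexing rules ZmnSCPxj describes below can be sketched as follows (purely illustrative; actual enforcement is the `OP_CHECKLOCKTIMEVERIFY` check plus the peers refusing to sign an invalid `nSequence`):

```python
# Update transactions use nSequence = 0 and can rebind to the funding tx or
# any prior update whose script index is lower (their nLockTime must pass
# the spent script's CLTV check). State/settlement transactions use a
# non-zero nSequence and bind only to their matching update.

def can_spend(spender_index: int, spent_index: int,
              spender_nsequence: int) -> bool:
    if spender_nsequence != 0:
        # State transaction: binds only to its own update (after the delay).
        return spender_index == spent_index
    # Update transaction: only a strictly newer state may replace an older one.
    return spender_index > spent_index

assert can_spend(5, 3, spender_nsequence=0)      # newer update replaces older
assert not can_spend(3, 5, spender_nsequence=0)  # older update cannot rebind
assert can_spend(4, 4, spender_nsequence=144)    # settlement binds to its update
```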

Thanks,
Bastien

On Thu, May 16, 2019 at 03:48, ZmnSCPxj  wrote:

> Good morning,
>
>
> >
> > We could collapse those 1-of-2 multisigs into a single-sig if we just
> > collaboratively create a shared private key that is specific to the
> > instance of the protocol upon setup. That minimizes the extra space
> > needed.
>
> For that matter the `OP_CHECKMULTISIG`/`OP_CHECKSIGADD` could be reduced
> by using MuSig on the two participants.
> Further, there is no need for an explicit `OP_CHECKSEQUENCEVERIFY` or even
> separate keys for state and update paths.
> xref.
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-March/001933.html
>
> The proposal that does not include `OP_CODESEPARATOR` is:
>
>  OP_CHECKLOCKTIMEVERIFY OP_DROP
>  OP_CHECKSIG  OP_CHECKSIG
>
> Where `C` is the common key that Christian described above, and `index` is
> the update number index.
>
> For update transactions, `nSequence` is 0.
> For state transactions, `nSequence` is non-0.
> Both of them will have `nLockTime` equal to the required index.
> The `nSequence` is enforced by the participants refusing to sign invalid
> `nSequence`.
>
> The above seems quite optimized.
>
> > > (I omitted the tapscript changes, i.e. moving to OP_CHECKSIGADD, to
> > > highlight only the chaperone changes)
> > > When updating the channel, Alice and Bob would exchange their
> > > anyprevoutanyscript signatures (for the 2-of-2 multisig).
> > > The chaperone signature can be provided by either Alice or Bob at
> > > transaction broadcast time (so that it commits to a specific input
> > > transaction).
> > > It seems to me that using the same key for both signatures (the
> chaperone
> > > one and the anyprevoutanyscript one) is safe here, but if someone knows
> > > better I'm interested.
> > > If that's unsafe, we simply need to introduce another key-pair
> (chaperone
> > > key).
> > > Is that how you guys understand it too? Do you have other ideas on how
> to
> > > comply with the need for a chaperone signature?
> > > Note that as Anthony said himself, the BIP isn't final and we don't
> know
> > > yet if chaperone signatures will eventually be needed, but I think it's
> > > useful to make sure that Eltoo could support it.
> >
> > I quite like the chaperone idea, however it doesn't really play nice
> > with taproot collaborative spends that require anyprevout /
> > anyprevoutanyscript / noinput, which would make our transactions stand
> > out quite a bit. Then again this is only the case for the unhappy,
> > unilateral close, path of the protocol, which (hopefully) should happen
> > rarely.
>
> The mere use of any `SIGHASH` that is not `SIGHASH_ALL` already stands out.
> So I think this is not a 

[Lightning-dev] Eltoo, anyprevout and chaperone signatures

2019-05-15 Thread Bastien TEINTURIER
Good morning list,

I have been digging into Anthony Towns' anyprevout BIP proposal
to verify that it has everything we need for Eltoo.

The separation between anyprevout and anyprevoutanyscript is very handy
(compared to the previous noinput proposal).
Unless I'm missing something, it would simplify the funding tx (to a simple
multisig without cltv/csv) and remove the need for the trigger tx.
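For intuition, here is a toy sketch of what each sighash mode commits to, per my reading of the draft BIP. The field names are illustrative, not the BIP's exact digest serialization:

```python
# ANYPREVOUT drops the prevout (txid/vout) but still commits to the spent
# script and amount, while ANYPREVOUTANYSCRIPT drops those as well, which
# is what lets an update_tx rebind to any earlier update_tx regardless of
# the state index baked into its script.

def sighash_fields(mode: str) -> set[str]:
    fields = {"version", "locktime", "outputs"}  # always committed
    if mode == "ALL":
        fields |= {"prevout", "amount", "script"}
    elif mode == "ANYPREVOUT":
        fields |= {"amount", "script"}
    elif mode == "ANYPREVOUTANYSCRIPT":
        pass  # rebindable to any output whose script accepts the signature
    else:
        raise ValueError(f"unknown mode: {mode}")
    return fields

# settlement_tx -> update_tx can use ANYPREVOUT (the script is fixed),
# while update_tx -> update_tx needs ANYPREVOUTANYSCRIPT (scripts differ
# by state index).
assert "prevout" not in sighash_fields("ANYPREVOUT")
assert "script" not in sighash_fields("ANYPREVOUTANYSCRIPT")
```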

The trickier part to integrate is the chaperone signature.
If I understand it correctly (which I'm not guaranteeing), we would need to
modify the update transactions to something like:

OP_IF
     10 OP_CSV
     1 A(s,i) B(s,i) 2 OP_CHECKMULTISIGVERIFY  <- public keys' first byte in this line is 0x02 or 0x03
     2 A(s,i) B(s,i) 2 OP_CHECKMULTISIGVERIFY  <- public keys' first byte in this line is 0x00 or 0x01
OP_ELSE
      OP_CLTV
     1 A(u) B(u) 2 OP_CHECKMULTISIGVERIFY  <- public keys' first byte in this line is 0x02 or 0x03
     2 A(u) B(u) 2 OP_CHECKMULTISIGVERIFY  <- public keys' first byte in this line is 0x00 or 0x01
OP_ENDIF


(I omitted the tapscript changes, i.e. moving to OP_CHECKSIGADD, to
highlight only the chaperone changes)

When updating the channel, Alice and Bob would exchange their
anyprevoutanyscript signatures (for the 2-of-2 multisig).
The chaperone signature can be provided by either Alice or Bob at
transaction broadcast time (so that it commits to a specific input
transaction).

It seems to me that using the same key for both signatures (the chaperone
one and the anyprevoutanyscript one) is safe here, but if someone knows
better I'm interested.
If that's unsafe, we simply need to introduce another key-pair (chaperone
key).
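To make the broadcast-time flow concrete, here is a hedged sketch. Signatures are opaque placeholder strings; this only shows what each signature commits to, not real cryptography:

```python
# The anyprevoutanyscript 2-of-2 signatures are exchanged when the channel
# state is updated and commit to the state but not to the input txid; the
# chaperone signature is produced by whoever broadcasts, once the concrete
# input transaction is known. All names are illustrative.

def sign(key: str, message: str) -> str:
    return f"sig({key},{message})"  # placeholder, not a real signature

def build_update_witness(state_msg: str, prev_txid: str,
                         alice_key: str, bob_key: str,
                         broadcaster_key: str) -> list[str]:
    # Exchanged in advance, rebindable to any matching previous output:
    apoas_sigs = [sign(alice_key, state_msg), sign(bob_key, state_msg)]
    # Added at broadcast time, pinning the spend to one input transaction:
    chaperone_sig = sign(broadcaster_key, prev_txid)
    return [chaperone_sig] + apoas_sigs

witness = build_update_witness("state-42", "txid-abc", "A", "B", "A")
assert witness[0] == "sig(A,txid-abc)"
```

Whether the broadcaster can safely reuse their channel key for the chaperone signature (as `"A"` does above) is exactly the open question raised here.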

Is that how you guys understand it too? Do you have other ideas on how to
comply with the need for a chaperone signature?

Note that as Anthony said himself, the BIP isn't final and we don't know
yet if chaperone signatures will eventually be needed, but I think it's
useful to make sure that Eltoo could support it.

Cheers,
Bastien