Re: [Lightning-dev] Forwarding hints in channel update messages

2018-11-15 Thread Joost Jager
On Thu, Nov 15, 2018 at 1:53 AM Rusty Russell  wrote:

> The decision was made to allow additional channel_update in the error
> reply:
>
> DECISION: document that scid is not binding, allow extra
> channel_updates in errors for “upselling”.
>

Yes, I read this. What exactly is "upselling" in this context and how were
extra channel_updates in errors intended to be used for this? Is it useful
for non-strict forwarding nodes?


> AFAICT this is a deeply weird case.  If another channel had capacity you
> would have just used it.  If another channel doesn't, sending a
> channel_update doesn't help.  And if there's a channel available at a
> higher feerate or longer timeout, it raises the question of why you're
> doing that rather than just taking the offer in front of you; that value
> clearly used to be acceptable, and now you risk them routing around you.
>

Good point. If the value is acceptable, but that particular channel happens
to have insufficient balance, it is hard to explain why you wouldn't just
accept. Maybe if you have a large-capacity channel that you want to reserve
for large amounts at a higher fee, and you don't want this channel's
balance to be used up by many non-strictly forwarded small htlcs? This
could also be realized by setting min_htlc, but then the channel can never
be used for small htlcs. This admittedly is pretty specific.

Maybe the forwarding hint that could make more of a difference is just the
information that non-strict forwarding was actually applied. Communicate
this as a node property via a global feature bit. If the sender receives a
TemporaryChannelFailure from a non-strict forwarding node, it doesn't need
to bother with trying all other (equal-policy) channels from that node to
the next hop (see the sketch below).
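
A minimal sketch of that sender-side pruning, assuming a hypothetical
Channel record and that the peer's feature bit is already known:

from dataclasses import dataclass

@dataclass(frozen=True)
class Channel:
    short_id: str
    fee_base_msat: int
    fee_rate_ppm: int
    cltv_delta: int

    @property
    def policy(self):
        return (self.fee_base_msat, self.fee_rate_ppm, self.cltv_delta)

def retry_candidates(parallel, failed, peer_non_strict):
    """Channels to the same next hop still worth trying after a
    TemporaryChannelFailure on `failed`."""
    if not peer_non_strict:
        # Strict forwarding: only the requested channel is known bad.
        return [c for c in parallel if c.short_id != failed.short_id]
    # Non-strict forwarding: the node already tried every channel with
    # an acceptable policy, so equal-policy siblings can be skipped.
    return [c for c in parallel if c.policy != failed.policy]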

Regards,
Joost


[Lightning-dev] Forwarding hints in channel update messages

2018-11-14 Thread Joost Jager
Hello all,

I'd like to bring up an idea that builds on top of "non-strict" forwarding.
I commented about this on Conner's non-strict forwarding lightning-rfc PR,
but it is probably better to discuss it on its own in this list.

A node that forwards non-strictly uses any of its channels to carry the
payment to the next hop. It ignores the channel actually requested in
`update_add_htlc`, except for determining the next-hop pubkey.

When forwarding fails, a `channel_update` message can be returned to the
sender. This raises the question of which channel to return a
`channel_update` for when non-strict forwarding fails.

If the htlc didn't satisfy the policy of the requested channel, the
sender's view on the graph is not up to date and it makes sense to return a
`channel_update` for the requested channel.

However, if the htlc did satisfy the policy, sending back a
`channel_update` does not provide the sender with any new information. In
case of TemporaryChannelFailure, the update is optional. But leaving it out
does not save any bytes because of padding (as pointed out by Pierre in the
PR).

The idea is to repurpose the `channel_update` message in this case as a
'forwarding hint'.

When non-strict forwarding fails, the intermediate node iterates over all
its channels to the next hop and determines what would be the 'best'
channel to use from the sender's point of view. 'Best' could be defined as
a trade-off between minimum fee and time lock delta, similar to the weight
function used during path finding. Only channels that have enough balance
for the amount requested in the htlc are considered in this process.
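
A minimal sketch of that selection, assuming a hypothetical channel record
with a local balance and the usual policy fields; the time lock penalty
factor is an arbitrary assumption:

from dataclasses import dataclass

@dataclass
class Channel:
    short_id: str
    fee_base_msat: int
    fee_rate_ppm: int
    cltv_delta: int
    local_balance_msat: int

def best_channel(channels_to_next_hop, amt_msat, tlock_penalty_msat=10):
    def fee(c):
        return c.fee_base_msat + amt_msat * c.fee_rate_ppm // 1_000_000

    def weight(c):
        # Trade fee off against time lock, as in path finding.
        return fee(c) + c.cltv_delta * tlock_penalty_msat

    viable = [c for c in channels_to_next_hop
              if c.local_balance_msat >= amt_msat]
    return min(viable, key=weight) if viable else None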

If there is no best channel (for example when none of the channels have
enough capacity), the node just returns a `channel_update` for the
requested channel as usual.

If there is a best channel, a `channel_update` is returned for that channel
instead of the requested channel. Senders that are aware of this behavior
will recognize this when reading the failure message and interpret the
`channel_update` as a forwarding hint.

Senders not aware of the forwarding hint will either just apply the channel
update to the graph or reject it. Both are fine, because their copy of the
policy for the requested channel was already up-to-date. This makes this
idea backwards compatible.

What this forwarding hint signals is that an htlc with a fee and time lock
delta that satisfies the policy of the hinted channel will likely be
forwarded successfully. Of course if something changes at the intermediate
node (channel balance) in the meantime, the hint may not be valid anymore.

With the hint in hand, the sender can adjust the route to satisfy the
hinted policy and try again. Alternatively, it could first try a route
through other nodes because the hinted policy increases the total fee
and/or time lock too much. Exactly how to integrate this into path finding
is something to work out further. The sender should be aware that an
intermediate node may try to maximize its earnings by handing out
'expensive' forwarding hints, and therefore should not blindly apply the
new policy to a route.
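
One possible guard, sketched under the assumption that the sender tracks
per-route fee and time lock limits (all names illustrative):

def accept_hint(route_fee, route_delta, old_fee, old_delta,
                hint_fee, hint_delta, fee_limit, delta_limit):
    """True if swapping the hinted policy into the route keeps the
    route within the sender's own limits."""
    return (route_fee - old_fee + hint_fee <= fee_limit
            and route_delta - old_delta + hint_delta <= delta_limit)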

The advantage of having the hint is that the sender likely needs fewer
payment attempts to complete the payment. For the intermediate node, it is
a way to increase its earnings. It gives the sender more certainty about
the parameters that lead to successful forwarding and the sender may choose
to just go with those instead of trying out many other routes, even if one
of those routes could be better. In case the sender wants the absolute best
route, the forwarding hint may still be beneficial to the intermediate
node. When there are multiple routes with identical total fees and time
locks, a sender will likely choose the route for which it has received
forwarding hints.

In case the intermediate node can only forward the payment over a private
channel, it could hint a public channel whose policy also satisfies the
private channel's policy. It doesn't matter if this public channel doesn't
have enough balance, because non-strict forwarding will also be applied on
the next attempt. Alternatively, it could return a `channel_update` for
channel id 0 carrying the private channel's policy; senders aware of
forwarding hints could interpret this properly as well.

To implement this, no onion packet changes are required. An intermediate
node could advertise that it provides forwarding hints through a global
feature bit, but that is optional. The forwarding hints can also be
recognized in the `channel_update` message itself.

Regards,
Joost


[Lightning-dev] Final expiry probing attack

2019-04-09 Thread Joost Jager
Hi all,

In https://github.com/lightningnetwork/lightning-rfc/pull/516,
the incorrect_or_unknown_payment_details failure message is extended with
an htlc_msat field and thereby replaces the former incorrect_payment_amount
message. The objective of this change is to prevent a probing attack that
allows an intermediate node to find out the final destination of the
payment.

Shouldn't the same change be applied to the cltv expiry?

Currently in lnd, we return a final_expiry_too_soon message if the htlc
expiry does not meet the invoice cltv delta requirement. This can be used
for probing by using low expiry values, similar to how this was previously
possible with low amounts.

The proposed change would be: when the htlc expiry doesn't meet the invoice
cltv delta requirement, return an incorrect_or_unknown_payment_details
failure (extended with a new htlc_expiry field) instead
of final_expiry_too_soon.
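
A sketch of the combined final-hop check, with illustrative names (the
exact failure encoding is out of scope here):

def final_hop_failure(known_payment, htlc_amt_msat, invoice_amt_msat,
                      htlc_expiry, height, invoice_cltv_delta):
    """Return the failure for an incoming final htlc, or None to accept.
    All three problem cases map onto the same opaque failure, so a
    probing intermediate node cannot tell an unknown payment hash from
    a bad amount or a too-soon expiry."""
    if (not known_payment
            or htlc_amt_msat < invoice_amt_msat
            or htlc_expiry < height + invoice_cltv_delta):
        return ("incorrect_or_unknown_payment_details",
                htlc_amt_msat, htlc_expiry)
    return None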

Joost.


Re: [Lightning-dev] Improve Lightning payment reliability through better error attribution

2019-06-14 Thread Joost Jager
Hi Bastien,


> What about having each node add some padding along the way? The erring
> node's padding should be bigger than intermediate nodes' padding (ideally
> using a deterministic construction as you suggest) so details need to be
> fleshed out, but it could mitigate even more the possibility of
> intermediate nodes figuring out their approximate position.
> That also mitigates the risk that a network observer correlates error
> messages between hops (because in the variable-length message that you
> propose, a network observer can easily track an error message across the
> whole payment path).
>

Yes we could also do that. Then even if the same person has two different
nodes in the path, they can't know for sure how many hops were in between.

It would be nice if there were a way to add padding such that hops don't
learn anything about their position, but I am not sure whether that is
possible. Having the error node add padding with a random length between 0
and 20 block sizes (a block being the number of bytes a hop would add on
the backward path) still reveals an upper bound for the distance to the
error node. For example: a node receives a failure with a padding of 3
blocks. That means that the distance to the error node is between 0 and 3
hops.

Joost


Re: [Lightning-dev] Improve Lightning payment reliability through better error attribution

2019-06-15 Thread Joost Jager
Hi ZmnSCPxj,

> Since B cannot report the `update_fail_htlc` immediately, its timer should
> still be running.
> Suppose we counted only up to `update_fail_htlc` and not on the
> `revoke_and_ack`.
> If C sends `update_fail_htlc` immediately, then the
> `update_add_htlc`->`update_fail_htlc` time reported by B would be fast.
> But if C then does not send `revoke_and_ack`, B cannot safely propagate
> `update_fail_htlc` to A, so the time reported by A will be slow.
> This sudden transition of time from A to B will be blamed on A and B,
> while C is unpunished.
>
> That is why, for failures, we ***must*** wait for `revoke_and_ack`.
> The node must report the time when it can safely propagate the error
> report upstream, not the time it receives the error report.
> For payment fulfillment, `update_fulfill_htlc` is fine without waiting for
> `revoke_and_ack` since it is always reported immediately upstream anyway.
>

Yes, that is a good point. Indeed, C hasn't completed its duty until it
sends `revoke_and_ack`.


> > I think we could indeed do more with the information that we currently
> have and gather some more by probing. But in the end we would still be
> sampling a noisy signal. More scenarios to take into account, less accurate
> results and probably more non-ideal payment attempts. Failed, slow or stuck
> payments degrade the user experience of lightning, while "fat errors"
> arguably don't impact the user in a noticeable way.
>
> Fat errors just give you more information when a problem happens for a
> "real" payment.
> But the problem still occurs on the "real" payment and user experience is
> still degraded.
>
> Background probing gives you the same information **before** problems
> happen for "real" payments.
>

With probing, I was thinking about probing right before making the actual
payment, so not a process that is continuously running in the background. I
am not sure how that would scale: everyone (assuming mass adoption) probing
the whole network. Also, for private channels, nodes may put rate limits in
place or close channels that are the source of many failures. Endpoints
with only private channels, like a mobile app, would then no longer be able
to probe effectively.

I do think probes are useful, but would only use them sparingly. Sending a
probe before the real payment surely helps to mitigate certain risks. But
then I'd still prefer to also have fat errors, to get maximum value out of
the probe and minimize the number of probes required. Functionally
speaking, I don't see why you wouldn't want to have that extra information.

Joost


[Lightning-dev] Improve Lightning payment reliability through better error attribution

2019-06-12 Thread Joost Jager
Hello list,

In Lightning, the reliability of payments is dependent on the reliability
of the chosen route. Information about previous payment attempts helps to
select better routes and improve the payment experience. Therefore
implementations usually track the past performance of nodes and channels.
This can be as simple as a blacklist of previously failed channels.

In order for this mechanism to be most effective, it is important to know
which node is to blame for a non-ideal payment attempt.

Non-ideal payment attempts are not only failed payment attempts (either
instantly failed or after a delay), but also successful payments for which
it took a long time to receive the `update_fulfill_htlc` message.

For non-ideal payment attempts, we are currently not always able to
determine the node that should be penalized. In particular:
* If an attempt takes a long time to complete (settle or fail), we have no
information that points us to the source of the delay.
* Nodes can return a corrupt failure message. When this message arrives at
the sender after a number of encryption rounds, the sender is no longer
able to pinpoint the node that failed the payment.

A potential solution is to change the failure message such that every hop
along the backward path adds an hmac to the message (currently only the
error source authenticates the message). This allows the source of a
corruption to be narrowed down to a pair of nodes, which is enough to
properly apply a penalty.

In addition to that, all hops could add two timestamps to the failure
message: the htlc add time and the htlc fail time. Using this information,
the sender of the payment can identify the source of the delay down to,
again, a pair of nodes. Those timestamps could be added to the settle
message as well, to also allow diagnostics on slow settled payments.

The challenge here is to design the failure message format in such a way
that hops cannot learn their position in the path. Just appending
timestamps and hmacs to a variable length message would reveal the distance
between a node and the error source.

A fixed length message in which hops shift some previous (unused) data out
from the message to create space to add their own data does not seem to
work. What happens is that the earlier nodes calculate their hmac over data
that is shifted out and cannot be recovered anymore by the sender. The
sender has no way to verify the hmac in that case. Regenerating the shifted
out data (similar to deterministic padding on the forward path) isn't a
solution either, because a node may modify that (unused) data before
passing the message on. This would invalidate all hmacs, preventing the
sender from locating the responsible node.

One space-inefficient solution is to have every hop add hmacs for every
possible (real) message length, but this would require n^2 hmacs in total
(20*20*32 = 12,800 bytes). Half of these could be discarded along the way,
but it would still leave 10*20*32 = 6,400 bytes of hmacs.

Another direction might be to use a variable length message, but have the
error source add seemingly random-length padding. The actual length could
be deterministically derived from the shared secret, so that the erring
node cannot simply omit the padding. This obfuscates the distance to the
error source somewhat, but still reveals a bit of information. If one knows
that the padding length is somewhere between 0 and 20 blocks worth of
bytes, a message length of say 25 blocks would reveal that the error source
is at least 5 hops away. It could be a fair trade-off though.
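
A minimal sketch of such a deterministic padding derivation, assuming the
per-hop shared secret is available on both ends (the key string and sizes
are made-up parameters):

import hashlib
import hmac

BLOCK_SIZE = 32      # bytes a hop would add on the backward path
MAX_PAD_BLOCKS = 20

def padding_blocks(shared_secret: bytes) -> int:
    # Both the error source and the sender can compute this, so the
    # error source cannot silently choose 'no padding'.
    digest = hmac.new(shared_secret, b"error-padding",
                      hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % (MAX_PAD_BLOCKS + 1)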

An alternative to all of this is to try to locate bad nodes by probing with
different route lengths and coming from different angles. This will however
require more attempts and more complex processing of the outcomes. There is
also a level of indirectness because not all information is gathered in a
single roundtrip. And in addition to that, a malicious node may somehow act
differently if it manages to recognize the probes.

I'd be interested to hear your opinions about the importance of being able
to locate bad nodes irrefutably, as well as ideas around the failure
message format.

Joost


Re: [Lightning-dev] Improve Lightning payment reliability through better error attribution

2019-06-14 Thread Joost Jager
Hi ZmnSCPxj,


> > That is definitely a concern. It is up to senders how to interpret the
> received timestamps. They can decide to tolerate slight variations. Or they
> could just look at the difference between the in and out timestamp,
> abandoning the synchronization requirement altogether (a node could also
> just report that difference instead of two timestamps). The held duration
> is enough to identify a pair of nodes from which one of the nodes is
> responsible for the delay.
> >
> > Example (held durations between parenthesis):
> >
> > A (15 secs) -> B (14 secs) -> C (3 secs) -> D (2 secs)
> >
> > In this case either B or C is delaying the payment. We'd penalize the
> channel between B and C.
>
> This seems better.
> If B is at fault, it could lie and reduce its reported delta time, but
> that simply means it will be punished with A.
> If C is at fault, it could lie and increase its reported delta time, but
> that simply means it will be punished with D.
>
> I presume that the delta time is the time difference from when it sends
> `update_add_htlc` and when it receives `update_fulfill_htlc`, or when it
> gets an irrevocably committed `update_fail_htlc` + `revoke_and_ack`.
> Is that accurate?
>

Yes that is accurate, although using the time difference between receiving
the `update_add_htlc` and sending back the `update_fail_htlc` would work
too. It would then include the node's processing time.


> Unit should probably be milliseconds
>

Yes, we probably want sub-second resolution for this.

> An alternative that comes to mind is to use active probing and tracking
> persistent data per node.
>
> For each node we record two pieces of information:
>
> 1.  Total imposed delay.
> 2.  Number of attempts.
>
> Suppose a probe or payment takes N milliseconds on a route with M nodes to
> fulfill or irrevocably fail at the payer.
> For each node on the route, we increase Total imposed delay by N / M
> rounded up, and increment Number of attempts.
> For error reports we can shorten the route if we get an error response
> that points to a specific failing node, or penalize the entire route in
> case of a completely undecodable error response.
>
> When finding a route for a "real" payment, we adjust the cost of
> traversing a node by the ratio Total imposed delay / Number of attempts (we
> can avoid undefined math by starting both fields at 1).
> For probes we can probably ignore this factor in order to give nodes that
> happened to be borked by a different slow node on the trial route another
> chance to exonerate their apparent slowness.
>
> This does not need changes in the current spec.
>
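
For concreteness, a minimal sketch of the bookkeeping described above
(both counters start at 1 to avoid undefined math, as suggested):

from collections import defaultdict
from math import ceil

stats = defaultdict(lambda: {"total_delay_ms": 1, "attempts": 1})

def record_attempt(route_nodes, elapsed_ms):
    share = ceil(elapsed_ms / len(route_nodes))
    for node in route_nodes:
        stats[node]["total_delay_ms"] += share
        stats[node]["attempts"] += 1

def delay_factor(node):
    s = stats[node]
    return s["total_delay_ms"] / s["attempts"]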

I think we could indeed do more with the information that we currently have
and gather some more by probing. But in the end we would still be sampling
a noisy signal. More scenarios to take into account, less accurate results
and probably more non-ideal payment attempts. Failed, slow or stuck
payments degrade the user experience of lightning, while "fat errors"
arguably don't impact the user in a noticeable way.

Joost


[Lightning-dev] Improve Lightning payment reliability through better error attribution

2019-06-14 Thread Joost Jager
Hi ZmnSCPxj,

> Before proceeding with discussing HMACs and preventing nodes from putting
> words in the mouths of other nodes, perhaps we should consider, how we can
> ensure that nodes can be forced to be accurate about what happened.
>
> For instance, a proposal is for nodes to put timestamps for certain events.
> Does this imply that all nodes along the route **MUST** have their clocks
> strongly synchronized to some global clock?
> If a node along the route happens to be 15 seconds early or 15 seconds
> late, can it be erroneously "dinged" for this when a later hop delays a
> successful payment for 20 seconds?
>
> If it requires that hop nodes have strong synchrony with some global clock
> service, why would I want to run a hop node then?
> What happens if some global clock service is attacked in order to convince
> nodes to route to particular nodes (using a competing global clock service)
> on the Lightning network?
>

That is definitely a concern. It is up to senders how to interpret the
received timestamps. They can decide to tolerate slight variations. Or they
could just look at the difference between the in and out timestamp,
abandoning the synchronization requirement altogether (a node could also
just report that difference instead of two timestamps). The held duration
is enough to identify a pair of nodes from which one of the nodes is
responsible for the delay.

Example (held durations in parentheses):

A (15 secs) -> B (14 secs) -> C (3 secs) -> D (2 secs)

In this case either B or C is delaying the payment. We'd penalize the
channel between B and C.
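
A sketch of how the sender could mechanically pick that pair from the
reported held durations (the threshold is an arbitrary assumption):

def blame_pair(route, held_secs, threshold_secs=5):
    """route: node ids; held_secs: reported hold time per node.
    Returns the (upstream, downstream) pair bracketing the delay."""
    deltas = [held_secs[i] - held_secs[i + 1]
              for i in range(len(held_secs) - 1)]
    i, worst = max(enumerate(deltas), key=lambda kv: kv[1])
    return (route[i], route[i + 1]) if worst > threshold_secs else None

assert blame_pair(["A", "B", "C", "D"], [15, 14, 3, 2]) == ("B", "C")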

Joost


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-07 Thread Joost Jager
On Thu, Nov 7, 2019 at 3:36 PM lisa neigut  wrote:

> > Imagine the following setup: a network of nodes that trust each other
>
> The goal of this pre-payment proposal is to remove the need for trusted
> parties
>

Trust isn't the right word. It is a level of service that you provide to
your peers. If nodes are cognizant of the fact that the level of service
they receive goes down if they forward spam, they will be careful on the
incoming side. Require peers to build up a reputation before increasing the
inbound limits that apply to the channels with them.


Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-11-07 Thread Joost Jager
>
> > We could
> > simplify this to a single to_self_delay that is proposed by the
> initiator,
> > but what was the original reason to allow distinct values?
>
> Because I didn't fight hard enough for simplicity :(
>

But the people you were fighting with, what reason did they have? Just
flexibility in general, or was there an actual use case? Maybe these people
are reading this email and can comment?

> There is no "negotiation" on opening; it's accept or error.  That leads
> to a situation where every implementation MUST accept what every
> implementation offers.
>

Agreed that the verb 'negotiate' is a bit misleading, although the
open/accept sequence could be repeated several times to make it more of a
negotiation.


> The unification proposal was to use the max of the two settings.  That's
> fair; if you want me to suffer a 2 week delay, you should too.
>

Yes, we could do that as part of this new commitment format. Make that an
implicit consequence of `option_anchor_outputs` (or whatever its name will
be). The semantics need to change anyway, because we want that CSV lock on
every output.


> >> * Within each version of the commitment transaction, both anchors always
> >> > have equal values and are paid for by the initiator.
> >>
> >> Who pays if they can't afford it?  What if they push_msat everything to
> >> the other side?
> >
> > Similar to how it currently works. There should never be a commitment
> > transaction in which the initiator cannot pay the fee.
>
> Unfortunately, this is not correct (in theory).
>
> We can always get into a case where fees are insufficient (simultanous
> HTLC adds), but it's unusual.  We used to specify that the non-funder
> would pay the remaining fee, but we dropped this in favor of allow
> unilateral close if this ever happened.
>

So because a unilateral close is currently the only way to resolve this,
it is also correct in theory that there will never be a commitment tx where
the non-initiator pays fees? But the point is clear: channels can get
stuck.


> > With anchor outputs
> > there should never be a commitment tx in which the initiator cannot pay
> the
> > fee and the anchors.
>
> There can be, but I think we can simply modify this so you have to pay
> the anchors *first* before fees.
>

That way it seems that adding the anchors doesn't make the stuck channel
problem that you described above worse?


> > If we hard-code a constant, we won't be able to adapt to changes of
> > `dustRelayFee` in the bitcoin network. And we'd also need to deal with a
> > peer picking a value higher than that constant for its regular funding
> flow
> > dust limit parameter.
>
> Note that we can't adapt to dustRelayFee *today*, since we can't change
> it after funding (another thing we probably need to fix).
>

You can't for an existing channel, but at least for a new channel you can
pick a different value, which wouldn't be possible if we put a fixed
(anchor) amount in the spec.


> If we really want to make it adjustable, could we make each side pay for
> its own; if you can't afford it, you don't get one?  There's no point
> the funder paying for a fundee-anchor if the fundee has no skin in the
> game.
>
> That reduces the pressure somewhat, I think?
>

'If you can't afford it, you don't get one': I am not sure about that. I
could open a channel, send out the total capacity in an htlc to myself via
some other hops, force-close with a very low commit fee, then pull in the
htlc (one time the money). The victim then needs to get the commitment tx
confirmed to claim the money, but unfortunately there is no anchor. I wait
for the htlc to expire, then anchor down the commit tx and time out the
htlc (twice the money).


> > In the light of this forgotten insight, is there a reason why the anchor
> > output would need key rotation? Having no rotation makes it easier to let
> > those anchors go straight into the wallet, which may mitigate the dust
> utxo
> > problem a bit. At least then they can be easily coin-selected for any
> > on-chain spent, if the market fees are low enough.
>
> Or what about we rotate the anchors and nothing else, which (assuming we
> make it anyone-can-spend-after-N-blocks) reduces the amount of onchain
> spam if someone completely loses their keys?
>
> That's a bigger change, but maybe it's worth it?
>

We now have David's great proposal to reuse the funding keys for the anchor
output. That allows us to always let anyone spend after confirmation,
because they can reconstruct the spend script. But I think this also means
that we cannot do rotation on the anchor keys. We need to use the funding
pubkey as is.

Joost


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-07 Thread Joost Jager
>
> > Isn't spam something that can also be addressed by using rate limits for
> > failures? If all relevant nodes on the network employ rate limits, they
> can
> > isolate the spammer and diminish their disruptive abilities.
>
> Sure, once the spammer has jammed up the network, he'll be stopped.  So
> will everyone else.  Conner had a proposal like this which didn't work,
> IIRC.
>

Do you have a reference to this proposal?

Imagine the following setup: a network of nodes that trust each other (as
far as spam is concerned) applies a 100 htlc/sec rate limit to the channels
between themselves. Channels to untrusted nodes get a rate of only 1
htlc/sec. Assuming the spammer isn't a trusted node, they can only spam at
1 htlc/s and won't jam up the network?


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-06 Thread Joost Jager
In my opinion, the prepayment should be a last resort. It does take away
some of the attractiveness of the Lightning Network. Especially if you need
to make many payment attempts over long routes, the tiny prepays do add up.
For a $10 payment, it's probably nothing to worry about. But for
micro-payments this can become prohibitively expensive. And it is exactly
the micro-payment use case where Lightning outshines other payment systems.
A not yet imagined micro-payment based service could even be the launchpad
to world domination. So I think we should be very careful about interfering
with that potential.

Isn't spam something that can also be addressed by using rate limits for
failures? If all relevant nodes on the network employ rate limits, they can
isolate the spammer and diminish their disruptive abilities. If a node sees
that its outgoing htlc packets stack up, it can reduce the incoming flow on
the channels where the htlcs originate from. Large routing nodes could
agree with their peers on service levels that define these rate limits.

Joost

On Tue, Nov 5, 2019 at 3:25 AM Rusty Russell  wrote:

> Hi all,
>
> It's been widely known that we're going to have to have up-front
> payments for msgs eventually, to avoid Type 2 spam (I think of Type 1
> link-local, Type 2 though multiple nodes, and Type 3 liquidity-using
> spam).
>
> Since both Offers and Joost's WhatSat are looking at sending
> messages, it's time to float actual proposals.  I've been trying to come
> up with something for several years now, so thought I'd present the best
> I've got in the hope that others can improve on it.
>
> 1. New feature bit, extended messages, etc.
> 2. Adding an HTLC causes a *push* of a number of msat on
>commitment_signed (new field), and a hash.
> 3. Failing/succeeding an HTLC returns some of those msat, and a count
>and preimage (new fields).
>
> How many msat can you take for forwarding?  That depends on you
> presenting a series of preimages (which chain into a final hash given in
> the HTLC add), which you get by decoding the onion.  You get to keep 50
> msat[1] per preimage you present[2].
>
> So, how many preimages does the user have to give to have you forward
> the payment?  That depends.  The base rate is 16 preimages, but subtract
> one for each leading 4 zero bits of the SHA256(blockhash | hmac) of the
> onion.  The blockhash is the hash of the block specified in the onion:
> reject if it's not in the last 3 blocks[3].
>
> This simply adds some payment noise, while allowing a hashcash style
> tradeoff of sats for work.
>
> The final node gets some variable number of preimages, which adds noise.
> It should take all and subtract from the minimum required invoice amount
> on success, or take some random number on failure.
>
> This leaks some forward information, and makes an explicit tradeoff for
> the sender between amount spent and privacy, but it's the best I've been
> able to come up with.
>
> Thoughts?
> Rusty.
>
> [1] If we assume $1 per GB, $10k per BTC and 64k messages, we get about
> 655msat per message.  Flat pricing for simplicity; we're trying to
> prevent spam, not create a spam market.
> [2] Actually, a number and a single preimage; you can check this is
> indeed the n'th preimage.
> [3] This reduces incentive to grind the damn things in advance, though
> maybe that's dumb?  We can also use a shorter hash (siphash?), or
> even truncated SHA256 (128 bits).


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-08 Thread Joost Jager
>
> >> > Isn't spam something that can also be addressed by using rate limits
> for
> >> > failures? If all relevant nodes on the network employ rate limits,
> they
> >> can
> >> > isolate the spammer and diminish their disruptive abilities.
> >>
> >> Sure, once the spammer has jammed up the network, he'll be stopped.  So
> >> will everyone else.  Conner had a proposal like this which didn't work,
> >> IIRC.
> >
> > Do you have ref to this proposal?
> >
> > Imagine the following setup: a network of nodes that trust each other (as
> > far as spam is concerned) applies a 100 htlc/sec rate limit to the
> channels
> > between themselves. Channels to untrusted nodes get a rate of only 1
> > htlc/sec. Assuming the spammer isn't a trusted node, they can only spam
> at
> > 1 htlc/s and won't jam up the network?
>
> Damn, I searched for it but all the obvious keywords turned up blank.
> Conner CC'd in case he remembers the discussion and I'm not imagining it?
>
> Anyway, if there are 100 nodes in the network I can still open a channel
> to each one and jam it up immediately.  And that's not even assuming I
> play nice until you trust me, then attack or get taken over.


At least it has gotten (100 times?) more difficult. I think it is hard to
say upfront how well this setup would work. But I agree that prepay deters
spam in a more fundamental way.

Besides the argument that I brought up earlier of potentially killing
micro-payment-based use cases with prepay, there is also another concern.
It is nothing new, but could be interesting to look at it in the light of
prepayments.

It is currently possible to jam a channel with very limited resources. If
you hold 483 htlcs on a channel, it becomes unusable for up to the cltv
limit, say 1000 blocks. If you allow a route that ping-pongs back and forth
across the channel (say 8 times back and forth), you only need to send 60
htlcs of the minimum amount for the route to jam the channel completely.
With min htlc policies of 1 sat, that will lock up only 60 sats of the
attacker's money (assuming no routing fees).

Let's say the wumbo channel has a capacity of 1 BTC. Locking up 1 BTC for
1000 blocks (~1 week) and assuming a time value of 4% per annum, the cost
to the routing node is 1 BTC * (1.04^(1/52) - 1) ≈ 75,000 sats.
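
The arithmetic above as a quick sketch, with the numbers as assumed in the
text:

SLOTS = 483                 # max in-flight htlcs per channel
PASSES = 8                  # times the route crosses the same channel
MIN_HTLC_SAT = 1
CAPACITY_SAT = 100_000_000  # 1 BTC wumbo channel
ANNUAL_RATE = 0.04
WEEKS_LOCKED = 1            # ~1000 blocks

htlcs_needed = SLOTS // PASSES                     # 60
attacker_locked_sat = htlcs_needed * MIN_HTLC_SAT  # 60
victim_cost_sat = CAPACITY_SAT * (
    (1 + ANNUAL_RATE) ** (WEEKS_LOCKED / 52) - 1)
print(htlcs_needed, attacker_locked_sat, round(victim_cost_sat))
# -> 60 60 75453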

How much prepay would you need to prevent this? I don't think the 50 msat
would cut it.

I know this is the liquidity-using class of spam, but if prepay cannot
prevent this, I think it is better to address this class first. And once
there is a solution for that, see whether the other classes of spam are
still possible.

Joost


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-08 Thread Joost Jager
>
> > The goal of this pre-payment proposal is to remove the need for
> > trusted parties
> >
> > Trust isn't the right word. It is a level of service that you provide
> > to your peers. If nodes are cognizant of the fact that the level of
> > service they receive goes down if they forward spam, they will be
> > careful on the incoming side. Require peers to build up a reputation
> > before increasing the inbound limits that apply to the channels with
> them.

> We can learn from the current situation in emails that a system based
> on reputation tends to concentrate the power in the hands of few big and
> strong actors (gmail and co). If we have from the beginning a mechanism
> to fight against spam by paying to send message, we can perhaps have a
> really distributed system which cannot be censured.
>

Can you elaborate on this a bit further? If you consider rate limiting to
be a form of censoring, then you can still censor if there is a prepay.

I am not too familiar with the current state of email servers, to what
extent power is concentrated now and how that evolution translates to
Lightning. One difference is that afaik emails don't traverse a path
through multiple mail "nodes". Another is that inboxes of users are very
centralized (gmail and co).

What exactly does that undesired situation look like in the Lightning
Network if nodes enforce rate limits based on how they rate their direct
peers?

Joost


Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-10-26 Thread Joost Jager
We started to look at the `push_me` outputs again. We will refer to them as
`anchor` outputs from now on, to prevent confusion with `push_msat` in the
`open_channel` message.

The cpfp carve-out https://github.com/bitcoin/bitcoin/pull/15681 has been
merged and for reasons described earlier in this thread, we now need to add
a csv time lock to every non-anchor output on the commitment transaction.

To realize this, we are currently considering the following changes:

* Add `to_remote_delay OP_CHECKSEQUENCEVERIFY OP_DROP` to the `to_remote`
output. `to_remote_delay` is the csv delay that the remote party accepted
in the funding flow for their outputs. This not only ensures that the
carve-out works as intended, but also removes the incentive to game the
other party into force-closing. If desired, both parties can still agree to
have different `to_self_delay` values.

* Add `1 OP_CHECKSEQUENCEVERIFY OP_DROP` to the non-revocation clause of
the HTLC outputs.

For the anchor outputs we consider:

* Output type: normal P2WKH. At one point, an additional spending path was
proposed that was unconditional except for a 10 block csv lock. The
intention of this was to prevent utxo set pollution by allowing anyone to
clean up. This however also opens up the possibility for an attacker to
'use up' the cpfp carve-out after those 10 blocks. If user A is offline
for that period of time, a malicious peer B may already have broadcast the
commitment tx and pinned down user A's anchor output with a low-fee child.
That way, the commitment tx could still remain unconfirmed while an
important htlc expires.

* For the keys to use for `to_remote_anchor` and `to_local_anchor`, we’d
like to introduce new addresses that both parties communicate in the
`open_channel` and `accept_channel` messages. We don’t want to reuse the
main commitment output addresses, because those may (at some point) be cold
storage addresses and the cpfp is likely to happen from a hot wallet.

* Within each version of the commitment transaction, both anchors always
have equal values and are paid for by the initiator. The value of the
anchors is the dust limit that was negotiated in the `open_channel` or
`accept_channel` message of the party that publishes the transaction. It
means that the definitive balance of an endpoint is dependent on which
version of the commitment transaction confirms. This however is nothing
new. In the current commitment format, there are always two or three valid
versions of the commitment transaction (local, remote and sometimes the not
yet revoked previous remote tx) which can have slightly different balances.
For the initiator, it is important to validate the other party's dust
limit. The initiator pays for it and doesn't want to give away more free
money than necessary.

Furthermore, there doesn’t seem to be a compelling reason anymore for
tweaking the keys (new insights into watchtower designs, encrypt by txid).
Therefore we think we can remove the tweaks entirely in this new commitment
format and require less channel state data to sweep the outputs.

Joost


On Wed, Nov 21, 2018 at 3:17 AM Rusty Russell  wrote:

> I'm also starting to implement this, to see what I missed!
>
> Original at https://github.com/lightningnetwork/lightning-rfc/pull/513
>
> Pasted here for your reading convenience:
>
> - Option is sticky; it set at open time, it stays with channel
>   - I didn't want to have to handle penalty txs on channels which switch
>   - We could, however, upgrade on splice.
> - Feerate is fixed at 253
>   - `feerate_per_kw` is still in open /accept (just ignored): multifund
> may want it.
> - closing tx negotiates *upwards* not *downwards*
>   - Starting from base fee of commitment tx = 282 satoshi.
> - to_remote output is always CSV delayed.
> - pushme outputs are paid for by funder, but only exist if the matching
>   to_local/remote output exists.
> - After 10 blocks, they become anyone-can-spend (they need to see the
>   to-local/remote witness script though).
> - remotepubkey is not rotated.
> - You must spend your pushme output; you may sweep for others.
>
> Signed-off-by: Rusty Russell 
>
> diff --git a/02-peer-protocol.md b/02-peer-protocol.md
> index 7cf9ebf..6ec1155 100644
> --- a/02-peer-protocol.md
> +++ b/02-peer-protocol.md
> @@ -133,7 +133,9 @@ node can offer.
>  (i.e. 1/4 the more normally-used 'satoshi per 1000 vbytes') that this
>  side will pay for commitment and HTLC transactions, as described in
>  [BOLT #3](03-transactions.md#fee-calculation) (this can be adjusted
> -later with an `update_fee` message).
> +later with an `update_fee` message).  Note that if
> +`option_simplified_commitment` is negotiated, this `feerate_per_kw`
> +is treated as 253 for all transactions.
>
>  `to_self_delay` is the number of blocks that the other node's to-self
>  outputs must be delayed, using `OP_CHECKSEQUENCEVERIFY` delays; this
> @@ -208,7 +210,8 @@ The receiving node MUST fail the channel if:
>- `push_msat` is greater than 

Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-10-26 Thread Joost Jager
>
> * Output type: normal P2WKH. At one point, an additional spending path was
> proposed that was unconditional except for a 10 block csv lock. The
> intention of this was to prevent utxo set pollution by allowing anyone to
> clean up. This however also opens up the possibility for an attacker to
> 'use up' the cpfp carve-out after those 10 blocks. If the user A is offline
> for that period of time, a malicious peer B may already have broadcasted
> the commitment tx and pinned down user A's anchor output with a low fee
> child. That way, the commitment tx could still remain unconfirmed while an
> important htlc expires.
>

Ok, this 'attack' scenario doesn't make sense. Of course with a csv lock,
this spend path is closed when the commitment tx is unconfirmed. But it is
still a question whether user A would appreciate their anchor output being
taken by someone else when they are offline for more than 10 blocks.

If we do like this utxo set cleanup path, one could also argue that it
should then be applied to every near-dust output on the commitment tx (e.g.
small htlcs).


Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-10-31 Thread Joost Jager
>
> On Oct 30, 2019, at 06:04, Joost Jager  wrote:
>
> > For the anchor outputs we consider:
>> >
>> > * Output type: normal P2WKH. At one point, an additional spending path
>> was
>> > proposed that was unconditional except for a 10 block csv lock. The
>> > intention of this was to prevent utxo set pollution by allowing anyone
>> to
>> > clean up. This however also opens up the possibility for an attacker to
>> > 'use up' the cpfp carve-out after those 10 blocks. If the user A is
>> offli=
>> ne
>> > for that period of time, a malicious peer B may already have broadcasted
>> > the commitment tx and pinned down user A's anchor output with a low fee
>> > child. That way, the commitment tx could still remain unconfirmed while
>> an
>> > important htlc expires.
>>
>> Agreed, this doesn't really work.  We actually needed a bitcoin rule
>> that allowed a single anyone-can-spend output.  Seems like we didn't get
>> that.
>>
>
> With the mempool acceptance carve-out in bitcoind 0.19, we indeed won't be
> able to safely produce a single OP_TRUE output for anyone to spend. An
> attacker could attach low fee child transactions, reach the limits and
> block further fee bumping.
>
>
> Quick correction. This is only partially true. You can still RBF the
> sub-package, the only issue I see immediately is you have to  pay for the
> otherwise-free relay of everything the attacker relayed.
>

Ok, so this is always possible because the commitment transaction is
signaling opt-in rbf and therefore any child txes are too? From bip125:
"Transactions that don't explicitly signal replaceability are replaceable
under this policy for as long as any one of their ancestors signals
replaceability and remains unconfirmed." But yes, it can get unnecessarily
expensive to replace everything that the attacker added.


> Why not stick with the original design from Adelaide with a spending path
> with a 1CSV that is anyone can spend (or that is revealed by spending
> another output).
>

What script would this be exactly? While still unconfirmed, the anchor
needs to be protected from being spent by anyone for the carve-out to work.
This also means that anyone spending after the csv needs to know that
script too. But how can they know it if there is something like a pubkey
(that protected it during the unconfirmed phase) in it?

Joost.


Re: [Lightning-dev] A proposal for up-front payments.

2020-02-18 Thread Joost Jager
Hi all,

Within our team, we've been discussing the subject of preventing
liquidity-consuming spam (aka channel jamming) further. One idea came up
that I think is worth sharing.

Previous prepay ideas were based on the sender paying something up-front in
case the htlc causes grief on the network. This however leaves the sender
vulnerable to nodes stealing that up-front payment.

Consider what is probably the worst known channel jamming attack: an
attacker sends minimum sized htlcs to fill up the limited number of
commitment slots of channels along the route. Those htlcs will be held as
long as possible by the receiving node (that is also controlled by the
attacker). The hold time per htlc doesn't even need to be very long,
because a fresh htlc can be launched to immediately re-occupy a slot after
it opens up again.

The cost to the network of this attack is mostly dependent on the capacity
of the channels used. The bigger the capacity, the more funds are locked up
if a sufficient number of minimum sized htlcs are pending. The size of the
up-front payment likely needs to be proportional to this cost.

This means that for small htlcs, the up-front payment requirements may
very well far exceed the htlc amount and routing fees paid. At that point,
a routing node may decide to steal the up-front payment rather than earn
the routing fee in an honest way.

A different way of mitigating this is to reverse the direction in which the
bond is paid. So instead of paying to offer an htlc, nodes need to pay to
receive an htlc. This sounds counterintuitive, but for the described
jamming attack there is also an attacker node at the end of the route. The
attacker still pays. The advantage is that for legitimate senders, there is
no up-front payment that can be stolen.

How this would work is that channel peers charge each other for the time
that the other party holds an htlc. So if node A extends an htlc to node B,
node B will pay node A amount x per minute of hold time. If node B doesn't
pay (doesn't hold up the contract), A will close the channel. It can be a
running balance between A and B that materializes as a single htlc per
channel on the commitment transaction.

As long as node B forwards the htlc swiftly to node C, the difference (the
actual cost) between what B needs to pay A and what B receives from C will
be tiny. Only when the htlc reaches the attacker node, or any other node on
the network that is (unintentionally) misbehaving for some reason, does the
delta start to increase quickly for that node. The cost is borne by the
node that should bear it.
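
A minimal sketch of the per-channel running balance described above; the
rate and time unit are placeholder assumptions:

HOLD_FEE_MSAT_PER_MIN = 1

class HoldFeeLedger:
    """Running balance that the downstream node owes its upstream peer
    for actual htlc hold time."""
    def __init__(self):
        self.owed_msat = 0
        self.pending = {}      # htlc_id -> accept time (minutes)

    def htlc_accepted(self, htlc_id, now_min):
        self.pending[htlc_id] = now_min

    def htlc_resolved(self, htlc_id, now_min):
        held = now_min - self.pending.pop(htlc_id)
        self.owed_msat += held * HOLD_FEE_MSAT_PER_MIN

A forwarding node would run one such ledger per channel; its net cost is
what it owes upstream minus what its downstream peer owes it, which stays
tiny as long as it forwards and resolves htlcs swiftly.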

This would also fix concerns that have been voiced around hodl invoices.
With the reverse bond payment as described above, hodling nodes will pay
for the cost of their actions.

Many details skipped over, but interested to hear opinions on the viability
of this variation of up-front payments.

Joost

On Tue, Nov 5, 2019 at 3:25 AM Rusty Russell  wrote:

> Hi all,
>
> It's been widely known that we're going to have to have up-front
> payments for msgs eventually, to avoid Type 2 spam (I think of Type 1
> link-local, Type 2 though multiple nodes, and Type 3 liquidity-using
> spam).
>
> Since both Offers and Joost's WhatSat are looking at sending
> messages, it's time to float actual proposals.  I've been trying to come
> up with something for several years now, so thought I'd present the best
> I've got in the hope that others can improve on it.
>
> 1. New feature bit, extended messages, etc.
> 2. Adding an HTLC causes a *push* of a number of msat on
>commitment_signed (new field), and a hash.
> 3. Failing/succeeding an HTLC returns some of those msat, and a count
>and preimage (new fields).
>
> How many msat can you take for forwarding?  That depends on you
> presenting a series of preimages (which chain into a final hash given in
> the HTLC add), which you get by decoding the onion.  You get to keep 50
> msat[1] per preimage you present[2].
>
> So, how many preimages does the user have to give to have you forward
> the payment?  That depends.  The base rate is 16 preimages, but subtract
> one for each leading 4 zero bits of the SHA256(blockhash | hmac) of the
> onion.  The blockhash is the hash of the block specified in the onion:
> reject if it's not in the last 3 blocks[3].
>
> This simply adds some payment noise, while allowing a hashcash style
> tradeoff of sats for work.
>
> The final node gets some variable number of preimages, which adds noise.
> It should take all and subtract from the minimum required invoice amount
> on success, or take some random number on failure.
>
> This leaks some forward information, and makes an explicit tradeoff for
> the sender between amount spent and privacy, but it's the best I've been
> able to come up with.
>
> Thoughts?
> Rusty.
>
> [1] If we assume $1 per GB, $10k per BTC and 64k messages, we get about
> 655msat per message.  Flat pricing for simplicity; we're trying to
> prevent spam, 

Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2020-01-21 Thread Joost Jager
>
> By my calculations, at minfee it will cost you ~94 satoshis to spend.
> Dust limit is 294 for Segwit outputs (basically assuming 3x minfee).
>

> So I'm actually happy to say "anchor outputs are 294 satoshi".  These
> are simply spendable, and still only $3 each if BTC is $1M.  Lower is
> better (as long as we stick with funder-pays), as long as they do
> eventually get spent.
>

Quick note here: the anchor outputs are P2WSH and for those the default
dust limit is 330 satoshis.
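
For reference, a sketch of where 294 and 330 come from, assuming Bitcoin
Core's dust rule with the default dustRelayFee of 3 sat/vB and its fixed
67-vbyte allowance for spending a segwit output:

DUST_RELAY_SAT_PER_VB = 3
SEGWIT_SPEND_VB = 67

def segwit_dust_limit(script_len):
    output_vb = 8 + 1 + script_len   # value + script length + script
    return (output_vb + SEGWIT_SPEND_VB) * DUST_RELAY_SAT_PER_VB

print(segwit_dust_limit(22))  # P2WPKH -> 294
print(segwit_dust_limit(34))  # P2WSH  -> 330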

Joost


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-12 Thread Joost Jager
>
> > A. Prepayment: node pays an amount to its channel peer (for example via
> keysend) and the channel peer deducts the hold fees from that prepaid
> balance until it is at zero. At that point it somehow (in the htlc fail
> message?) communicates Lightning's version of http 402 to ask for more
> money.
>
> If the node has already forwarded the HTLC onward, what enforcement hold
> does the node have on the sender of the incoming HTLC?
> Presumably the sender of the HTLC has already gotten what it wanted --- an
> outgoing HTLC --- so how can the forwarding node enforce this request to
> get more money.
>

The idea is that the available prepaid hold fee balance is enough to cover
the worst case hold fee. Otherwise the forward won't happen. The main
difference with option B is that you pay a sum upfront which can be used to
cover multiple forwards. And that this payment is a separate Lightning
payment, not integrated with the add/fail/settle flow. I prefer option B,
but implementation effort is also a consideration.

> B. Tightly integrated with the htlc add/fail/settle messages. When an
> htlc is added, the maximum cost (based on maximum lock time) for holding is
> deducted from the sender's channel balance. When the htlc settles, a refund
> is given based on the actual lock time. An additional `update_fee`-like
> message is added for peers to update their hold fee parameters (fee_base
> and fee_rate).
>
> If I am a forwarding node, and I receive the preimage from the outgoing
> HTLC, can I deliberately defer claiming the incoming HTLC (pretending that
> the outgoing HTLC was taking longer than it actually took) in order to
> reduce the amount I have to refund?
>

Yes, you can. That is the trust part: your peer trusts you not to do this.
If they don't trust you, they won't forward to you if you charge a (high)
hold fee.

> In both cases the sender needs to trust its peer to not steal the payment
> and/or artificially delay the forwarding to inflate the hold fee. I think
> that is acceptable given that there is a trust relation between peers
> already anyway.
>
> I am wary of *adding* trust.
> You might trust someone to keep an eye on your snacks while you go refill
> your drink, but not to keep an eye on your hardware wallet when you do the
> same.
> (Since consuming snacks and drinks and hardware wallets are human
> activities, this should show that I am in fact a human.)
>

So I am arguing that there is already trust between peers, quite
considerable trust even under high on-chain fee conditions. The added risk
of being scammed out of these prepay sats may not be significant.

Joost


[Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-12 Thread Joost Jager
Hello list,

Many discussions have taken place on this list on how to prevent undesired
use of the Lightning network. Spamming the network with HTLCs (for probing
purposes or otherwise) or holding HTLCs to incapacitate channels can be
done on today's network at very little cost to an attacker. So far this
doesn't seem to be happening in practice, but I believe that it is only a
matter of time before it becomes a real issue.

Rate limits and other limits such as the maximum number of in-flight HTLCs
increase the cost of an attack, but may also limit the capabilities of
honest users. It works as a mitigation, but it doesn't seem to be the ideal
solution.

We've looked at all kinds of trustless payment schemes to keep users
honest, but it appears that none of them is satisfactory. Maybe it is even
theoretically impossible to create a scheme that is trustless and has all
the properties that we're looking for. (A proof of that would also be
useful information to have.)

Perhaps a small bit of trust isn't so bad. There is trust in Lightning
already. For example when you open a channel, you trust (or hope) that your
peer remains well connected, keeps charging reasonable fees, doesn't
force-close in a bad way, etc.

What I can see working is a system where peers charge each other a hold fee
for forwarded HTLCs based on the actual lock time (not the maximum lock
time) and the htlc value. This is just for the cost of holding and separate
from the routing fee that is earned when the payment settles.

This hold fee could be: lock_time * (fee_base + fee_rate * htlc_value).
fee_base is in there to compensate for the usage of an htlc slot, which is
a scarce resource too.

I think the implementation of this is less interesting at this stage, but
some ideas are:

A. Prepayment: node pays an amount to its channel peer (for example via
keysend) and the channel peer deducts the hold fees from that prepaid
balance until it is at zero. At that point it somehow (in the htlc fail
message?) communicates Lightning's version of http 402 to ask for more
money.

B. Tightly integrated with the htlc add/fail/settle messages. When an htlc
is added, the maximum cost (based on maximum lock time) for holding is
deducted from the sender's channel balance. When the htlc settles, a refund
is given based on the actual lock time. An additional `update_fee`-like
message is added for peers to update their hold fee parameters (fee_base
and fee_rate).
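
As a rough sketch of how option B's charge-and-refund could look at the
balance level (helper names are hypothetical; the actual channel state
machine integration is omitted, and hold_fee_msat is the function sketched
above):

    # Option B sketch: deduct the maximum hold fee when the htlc is added,
    # refund the unused part when it resolves.
    def on_htlc_added(sender_balance, max_lock_time_min, htlc_value_sat):
        reserved = hold_fee_msat(max_lock_time_min, htlc_value_sat)
        return sender_balance - reserved, reserved

    def on_htlc_resolved(sender_balance, reserved, actual_lock_time_min,
                         htlc_value_sat):
        actual = hold_fee_msat(actual_lock_time_min, htlc_value_sat)
        return sender_balance + (reserved - actual)  # refund the difference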

In both cases the sender needs to trust its peer to not steal the payment
and/or artificially delay the forwarding to inflate the hold fee. I think
that is acceptable given that there is a trust relation between peers
already anyway.

A crucial thing is that these hold fees don't need to be symmetric. A new
node for example that opens a channel to a well-known, established routing
node will be forced to pay a hold fee, but won't see any traffic coming in
anymore if it announces a hold fee itself. Nodes will need to build a
reputation before they're able to command hold fees. Similarly, routing
nodes that have a strong relation may decide to not charge hold fees to
each other at all.

This asymmetry is what is supposed to prevent channel jamming attacks. The
attacker needs to pay hold fees to send out the payment, but when it comes
back to the attacker after traversing a circular route, they won't be able
to charge a hold fee to cancel out the hold fee paid at the start of the
route. (Assuming the attacker node is not trusted.)

A consequence for honest users is that payment attempts are no longer free.
The cost should however be negligible for fast-failing attempts. Also
senders will have to be a lot more selective when building a route.
Selecting a 'black hole' hop (a hop that neither forwards nor fails) can
be costly.

The hold fee scheme is a bit looser compared to previously proposed schemes
(as far as I know...). It is purely an arrangement between channel peers
and doesn't try to exactly compensate every hop for its costs. Instead,
trust relations that arguably exist already are leveraged to present a bill
to the actor who deserves it.

Interested to hear opinions about this proposal.

I'd also like to encourage everyone to prioritize this spam/jam issue and
dedicate more time to solving it. Obviously there is a lot more to do in
Lightning, but I am not sure if we can afford to wait for the real
adversaries to show up on this one.

Cheers,
Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-13 Thread Joost Jager
>
> > A crucial thing is that these hold fees don't need to be symmetric. A new
> > node for example that opens a channel to a well-known, established
> routing
> > node will be forced to pay a hold fee, but won't see any traffic coming
> in
> > anymore if it announces a hold fee itself. Nodes will need to build a
> > reputation before they're able to command hold fees. Similarly, routing
> > nodes that have a strong relation may decide to not charge hold fees to
> > each other at all.
>
> I can still establish channels to various low-reputation nodes, and then
> use them to grief a high-reputation node.  Not only do I get to jam up
> the high-reputation channels, as a bonus I get the low-reputation nodes
> to pay for it!
>

So you're saying:

ATTACKER --(no hold fee)--> LOW-REP --(hold fee)--> HIGH-REP

If I were LOW-REP, I'd still charge an unknown node a hold fee. I would
only waive the hold fee for high-reputation nodes. In that case, the
attacker is still paying for the attack. I may be forced to take a small
loss on the difference, but at least the larger part of the pain is felt by
the attacker. The assumption is that this is sufficient to deter the
attacker from even trying.


> Operators of high reputation nodes can even make this profitable; doubly
> so, since they eliminate the chance of any of those low-reputation nodes
> every getting to be high reputation (and thus competing).
>

> AFAICT any scheme which penalizes the direct peer creates a bias against
> forwarding unknown payments, thus is deanonymizing.
>

If you're an honest but unknown sender (initiating the payment) and you
just pay the hold fee, I don't think there is a problem? The unknown
forward will still be carried out by a high-rep node. Also keep in mind
that the hold fee for quick happy-flow payments is going to be tiny
(for example when calculating back from a desired annual return on the
staked channel capacity). And we can finally make these parasitic hodl
invoice users pay for it!

I guess your concern is with trying to become a routing node? If nobody
knows you, you'll be forced to pay hold fees but can't attract traffic if
you charge hold fees yourself. That indeed means that you'll need to be
selective with whom you accept htlcs from. Put limits in place to control
the expenditure. Successful forwards will earn a routing fee which could
compensate for the loss in hold fees too.

I think this mechanism can create interesting dynamics on the network and
eventually reach an equilibrium that is still healthy in terms of
decentralization and privacy.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-13 Thread Joost Jager
>
> > If I were LOW-REP, I'd still charge an unknown node a hold fee. I
> > would only waive the hold fee for high-reputation nodes. In that case,
> > the attacker is still paying for the attack. I may be forced to take a
> > small loss on the difference, but at least the larger part of the pain
> > is felt by the attacker. The assumption is that this is sufficient
> > to deter the attacker from even trying.
>
> The LOW-REP node being out of pocket is the clue here: if one party
> loses funds, even a tiny bit, another party gains some funds. In this
> case the HIGH-REP node collaborating with the ATTACKER can extract some
> funds from the intermediate node, allowing them to dime their way to all
> of LOW-REP's funds. If an attack results in even a tiny loss for an
> intermediary and can be repeated, the intermediary's funds can be
> syphoned by an attacker.
>

The assumption is that HIGH-REP nodes won't do this :) LOW-REP will see all
those failed payments and small losses and start to realize that something
strange is happening. I know the proposal isn't fully trustless, but I
think it can work in practice.


> Another attack that is a spin on ZmnSCPxj's waiting to backpropagate the
> preimage is even worse:
>
>  - Attacker node `A` charging hold fees receives HTLC from victim `V`
>  - `A` does not forward the HTLC, but starts charging hold fees
>  - Just before the timeout for the HTLC would force us to settle onchain
>`A` just removes the HTLC without forwarding it or he can try to
>forward at the last moment, potentially blaming someone else for its
>failure to complete
>
> This results in `A` extracting the maximum hold fee from `V`, without
> the downstream hold fees cutting into their profits. By forwarding as
> late as possible `A` can cause a downstream failure and look innocent,
> and the overall payment has the worst possible outcome: we waited an
> eternity for what turns out to be a failed attempt.
>

The idea is that an attacker node is untrusted and won't be able to charge
hold fees.

- Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-13 Thread Joost Jager
>
> > The idea is that the available prepaid hold fee balance is enough to
> cover the worst case hold fee. Otherwise the forward won't happen. The main
> difference with option B is that you pay a sum upfront which can be used to
> cover multiple forwards. And that this payment is a separate Lightning
> payment, not integrated with the add/fail/settle flow. I prefer option B,
> but implementation effort is also a consideration.
>
> If the above is not done (i.e. if I only prepay Joost but not Rusty) then
> it seems to me that the below remote attack is possible:
>

Indeed, the above isn't done. Z only prepays Joost, not Rusty.


> * I convince Rene to make a channel to me.
>

You may succeed, but Rene is probably not going to pay you a hold fee
because you're untrusted.


> * I connect to Joost.
> * I prepay to Joost.
> * I forward Me->Joost->Rusty->Rene->me.
>   * I am exploiting the pre-existing tr\*st that Rusty has to Joost, and
> the tr\*st that Rene has to Rusty.
> * When the HTLC reaches me, I dicker around and wait until it is about to
> time out before ultimately failing.
> * Rusty loses tr\*st in Joost, and Rene loses tr\*st in Rusty.


But most importantly: you will have paid hold fees to Joost for the long
lock time of the htlc. This should keep you from even trying this attack.

Thinking a little more deeply: it is in principle possible to give a
> financial value to an amount of msat being locked for an amount of time.
> For instance the C-Lightning `getroute` has a `riskfactor` argument which
> is used in this conversion.
> Basically, by being locked in an HTLC and later failing, then the
> forwarding node loses the expected return on investment if instead the
> amount were locked in an HTLC that later succeeds.
>
> Now, the cost on a forwarding node is based on the actual amount of time
> that its outgoing HTLC is locked.
>

That is indeed the proposal, to give financial value to the sats and the
htlc slot being locked for an amount of time.


> When we consider multi-hop payments, then we should consider that the
> initiator of the multi-hop payment is asking multiple nodes to put their
> funds at risk.
>
> Thus, the initiator of a multi-hop payment should, in principle, prepay
> for *all* the risk of *all* the hops.
>
>
> If we do not enforce this, then an initiator of a multi-hop payment can
> pay a small amount relative to the risk that *all* the hops are taking.
>

I understand that, but I think it might still shift the attacker's
incentives enough.


> Secondarily, we currently assume that forwarding nodes will, upon having
> their outgoing HTLC claimed, seek to claim the incoming HTLC as quickly as
> possible.
> This is because the incoming HTLC would be locked and unusable until they
> claim their incoming HTLC, and the liquidity would not be usable for
> earning more fees until the incoming HTLC is claimed and put into its pool
> of liquidity.
>
However, if we make anything that is based on the time that a forwarding
> node claims its incoming HTLC, then this may incentivize the forwarding
> node to delay claiming the incoming HTLC.
>

Yes, that is the trust part again.


> > > > B. Tightly integrated with the htlc add/fail/settle messages. When
> an htlc is added, the maximum cost (based on maximum lock time) for holding
> is deducted from the sender's channel balance. When the htlc settles, a
> refund is given based on the actual lock time. An additional
> `update_fee`-like message is added for peers to update their hold fee
> parameters (fee_base and fee_rate).
> > >
> > > If I am a forwarding node, and I receive the preimage from the
> outgoing HTLC, can I deliberately defer claiming the incoming HTLC
> (pretending that the outgoing HTLC was taking longer than it actually took)
> in order to reduce the amount I have to refund?
> >
> > Yes you can. That is the trust part, your peer trusts you not to do
> this. If they don't trust you, they won't forward to you if you charge a
> (high) hold fee.
>
> What happens if I charge a tiny hold feerate in msats/second, but end up
> locking the funds for a week?
> How does my peer know that even though I charge a tiny hold fee, I will
> hold their funds hostage for a week?
>

That is the trust part also.


> > > > In both cases the sender needs to trust its peer to not steal the
> payment and/or artificially delay the forwarding to inflate the hold fee. I
> think that is acceptable given that there is a trust relation between peers
> already anyway.
> > >
> > > I am wary of *adding* trust.
> > > You might trust someone to keep an eye on your snacks while you go
> refill your drink, but not to keep an eye on your hardware wallet when you
> do the same.
> > > (Since consuming snacks and drinks and hardware wallets are human
> activities, this should show that I am in fact a human.)
> >
> > So I am arguing that there is trust already between peers. Quite
> considerable trust even in case of high on-chain fee conditions. The 

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-18 Thread Joost Jager
>
> > We've looked at all kinds of trustless payment schemes to keep users
>
> > honest, but it appears that none of them is satisfactory. Maybe it is
> even
> > theoretically impossible to create a scheme that is trustless and has
> all
> > the properties that we're looking for. (A proof of that would also be
>
> > useful information to have.)
>
> I don't think anyone has drawn yet a formal proof of this, but roughly a
> routing peer Bob, aiming to prevent resource abuse at HTLC relay is seeking
> to answer the following question "Is this payment coming from Alice and
> going to Caroll will compensate for my resources consumption ?". With the
> current LN system, the compensation is conditional on payment settlement
> success and both Alice and Caroll are distrusted yet discretionary on
> failure/success. Thus the underscored question is undecidable for a routing
> peer making relay decisions only on packet observation.
>
> One way to mitigate this, is to introduce statistical observation of
> sender/receiver, namely a reputation system. It can be achieved through a
> scoring system, web-of-trust, or whatever other solution with the same
> properties.
> But still it must be underscored that statistical observations are only
> probabilistic and don't provide resource consumption security to Bob, the
> routing peer, in a deterministic way. A well-scored peer may start to
> suddenly misbehave.
>
> In that sense, the efficiency evaluation of a reputation-based solution to
> deter DoS must be evaluated based on the loss of the reputation
> bearer related to the potential damage which can be inflicted. It's just
> reputation sounds harder to compute accurately than a pure payment-based
> DoS protection system.
>

I can totally see the issues and complexity of a reputation-based system.
With 'trustless payment scheme' I meant indeed a trustless pure
payment-based DoS protection system, and the question of whether such a
system can be proven not to exist. A sender would pay an up-front amount
to cover
the maximum cost, but with the guarantee that nodes can only take a fair
part of the deposit (based on actual lock time). Perhaps the taproot
upgrade offers new possibilities with adaptor signatures to atomically swap
part of the up-front payment with htlc-received-in-time-signatures from
nodes downstream (random wild idea).


> > What I can see working is a system where peers charge each other a hold
> fee
> > for forwarded HTLCs based on the actual lock time (not the maximum lock
>
> > time) and the htlc value. This is just for the cost of holding and
> separate
> > from the routing fee that is earned when the payment settles
>
> Yes I guess any solution will work as long as it enforces an asymmetry
> between the liquidity requester and a honest routing peer. This asymmetry
> can be defined as guaranteeing that the routing peer's incoming/outgoing
> balance is always increasing, independently of payment success. Obviously
> this increase should be materialized by a payment, while minding it might
> be discounted based on requester reputation ("pay-with-your-reputation").
> This reputation evaluation can be fully delegated to the routing node
> policy, without network-wise guidance.
>
> That said, where I'm skeptical on any reputation-heavy system is on the
> long-term implications.
>
> Either, due to the wants of a subset of actors deliberately willingly to
> trade satoshis against discounted payment flow by buying well-scored
> pubkeys, we see the emergence of a reputation market. Thus enabling
> reputation to be fungible to satoshis, but with now a weird "reputation"
> token to care about.
>
> Or, reputation is too hard to make liquid (e.g hard to disentangle pubkeys
> from channel ownership or export your score across routing peers) and thus
> you now have reputation scarcity which is introducing a bias from a "purer"
> market, where agents are only routing based on advertised fees. IMO, we
> should strive for the more liquid Lightning market we can, as it avoids
> bias towards past actors and thus may contain centralization inertia. I'm
> curious about your opinion on this last point.
>

I am in favor of more liquidity and less centralization, but as far as I
know the reality is that we don't have a good solution yet to achieve this
without being vulnerable to DoS attacks. If those attacks were to happen
on a large scale today, what would we do?

Also peers can implement these trusted upfront payments without protocol
changes. Just stop forwarding when the prepaid forwarding budget is used up
and require a top-up. It may already have been implemented in parts of the
network; I don't think there is a way to know. I've experimented a bit with
the fee model myself (
https://twitter.com/joostjgr/status/1317546071984427009). Node operators
don't need to wait for permission.

To me it seems that the longer it takes to come up with a good anti-DoS
system for Lightning, the further the outside world will have 

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-20 Thread Joost Jager
Hi Bastien,

Thanks for creating the summary!

While doing this exercise, I couldn't find a reason why the `reverse
> upfront payment` proposal
> would be broken (notice that I described it using a flat amount after a
> grace period, not an amount
> based on the time HTLCs are held). Can someone point me to the most
> obvious attacks on it?
>
> It feels to me that its only issue is that it still allows spamming for
> durations smaller than the
> grace period; my gut feeling is that if we add a smaller forward direction
> upfront payment to
> complement it it could be a working solution.
>

The 'uncontrolled spamming' as you called it in your doc is pretty serious.
If you want to have fun, you should really try to spin up a bunch of
threads and keep your outgoing channels fully saturated with max length
routes going nowhere. I tried it on testnet and it was quite bad. All that
traffic is fighting for resources which makes it take even longer to unlock
the htlcs again.

I think that any solution should definitely address this case too.

Wouldn't your proposal to add a small upfront payment allow the
(arbitrary) grace period to be removed? It would mean that routing nodes
always need to pay something for forwarding spam, but if they forward
quickly enough (as honest nodes do), that expense is covered by the
upfront payment.

- Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-23 Thread Joost Jager
>
> It is interesting that the forward and backward payments are relatively
>> independent of each other
>
>
> To explain this further, I think it's important to highlight that the
> forward fee is meant to fight
> `uncontrolled spam` (where the recipient is an honest node) while the
> backward fee is meant to fight
> `controlled spam` (where the recipient also belongs to the attacker).
>

Yes, that was clear. I just meant to say that we could choose to first only
implement the easier uncontrolled spam protection via the forward payment.
Not that any type of protocol upgrade is easy...

What I'd really like to explore is whether there is a type of spam that I
> missed or griefing attacks
> that appear because of the mechanisms I introduce. TBH I think the
> implementation details (amounts,
> grace periods and their deltas, when to start counting, etc) are things
> we'll be able to figure out
> collectively later.
>

I brought up the question about the amounts because it could be that
amounts high enough to thwart attacks are too high for honest users or
certain uses. If that is the case, we don't need to look for other
potential weaknesses. It is just a different order in which to explore
the feasibility of the proposal.

The forward payment can indeed be small, because uncontrolled spam can only
be in-flight for a short time. To get to that annual return of 5% on a 1
BTC / 483 slot channel, it needs to be approx 1 sat/hour (if I calculated
that correctly). Let's say the spam payment is in flight for 30 seconds
on average per hop of a 20-hop route (60 sec at the start, 0 sec at the
end). The total "damage" would then be 600 hop-seconds, requiring a
forward payment of roughly 150 msat to cover it. That still seems
acceptable to me. If an honest user makes a payment and needs 10 attempts,
they will pay an additional 1.5 sats for that. It might be a UX challenge
though to communicate that cost for a failed payment to a normal user.

But what happens if the attacker is also on the other end of the
uncontrolled spam payment? Not holding the payment, but still collecting
the forward payments?

For the backward payment the pricing is different. The max expiry of the
htlc is 2000 blocks, 1000 blocks on average along the route. 1000 blocks is
about 160 hours. So ideally the attacker at the far end of the route should
pay 20 * 160 * 1 sat/hr = 3200 sat. This will also be the cost for a hold
invoice then, but not everybody liked them anyway. The net cost for a
regular (fast) payment will be nothing as you described.
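
As a back-of-envelope check of both numbers, here is a sketch assuming the
~1 sat/hour per-slot cost derived above:

    # Assumed cost per held htlc slot: ~1 sat/hour (5% yearly on a
    # 1 BTC / 483 slot channel).
    sat_per_slot_hour = 1.0

    # Forward payment: 20 hops, ~30 s in flight per hop on average.
    hop_seconds = 20 * 30                             # 600 hop-seconds
    fwd_msat = hop_seconds / 3600 * sat_per_slot_hour * 1000
    print(round(fwd_msat))                            # ~167 msat, same order as ~150 msat

    # Backward payment: htlc held ~1000 blocks (~160 h) on average, 20 hops.
    print(20 * 160 * sat_per_slot_hour)               # 3200.0 sat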

- Joost

>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-24 Thread Joost Jager
Hi Bastien,

I brought up the question about the amounts because it could be that
>> amounts high enough to thwart attacks are too high for honest users or
>> certain uses.
>
>
> I don't think this is a concern for this proposal, unless there's an
> attack vector I missed.
> The reason I claim that is that the backwards upfront payment can be made
> somewhat big without any
> negative impact on honest nodes.
>

Yes, that makes sense.


> But what happens if the attacker is also on the other end of the
>> uncontrolled spam payment? Not holding the payment, but still collecting
>> the forward payments?
>
>
> That's what I call short-lived `controlled spam`. In that case the
> attacker pays the forward fee at
> the beginning of the route but has it refunded at the end of the route. If
> the attacker doesn't
> want to lose any money, he has to release the HTLC before the grace period
> ends (which is going to
> be short-lived - at least compared to block times). This gives an
> opportunity for legitimate payments
> to use the HTLC slots (but it's a race between the attacker and the
> legitimate users).
>

I think indeed that this short-lived controlled spam also needs to be
brought under control. Otherwise it is still easy to jam a channel,
although it would need a continuous process to do it rather than sending a
bunch of 2000-block expiry htlcs. For the short-lived controlled spam there
is still a multiplier possible by making loops in the route. It is a race
with legitimate users, but if the spammer is efficient the probability of a
legitimate payment coming through is low. Similar to DDoS attacks where a
legitimate web request could make it to the server but probably doesn't.


> It's not ideal, because the attacker isn't penalized...the only way I
> think we can penalize this
> kind of attack is if the forward fee decrements at each hop, but in that
> case it needs to be in the
> onion (to avoid probing) and the delta needs to be high enough to actually
> penalize the attacker.
> Time to bikeshed some numbers!
>

So in your proposal, an htlc that is received by a routing node has the
following properties:
* htlc amount
* forward up-front payment (anti-spam)
* backward up-front payment (anti-hold)
* grace period

The routing node forwards this to the next hop with
* lower htlc amount (to earn routing fees when the htlc settles)
* lower forward up-front payment (to make sure that an attacker at the
other end loses money when failing quickly)
* higher backward up-front payment (to make sure that an attacker at the
other end loses money when holding)
* shorter grace period (so that there is time to fail back and not lose the
backward up-front payment)

On a high level, it seems to me that this can actually work.
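
In sketch form, the per-hop transformation could look like this (all
deltas are placeholders still to be bikeshedded, not proposed values):

    # Sketch of how a routing node could derive outgoing htlc parameters
    # from the incoming ones. All constants are placeholders.
    ROUTING_FEE_MSAT = 1_000
    FWD_UPFRONT_DELTA_MSAT = 10
    BWD_UPFRONT_DELTA_MSAT = 100
    GRACE_DELTA_SEC = 10

    def forward_params(incoming: dict) -> dict:
        return {
            "amount_msat": incoming["amount_msat"] - ROUTING_FEE_MSAT,
            "fwd_upfront_msat": incoming["fwd_upfront_msat"] - FWD_UPFRONT_DELTA_MSAT,
            "bwd_upfront_msat": incoming["bwd_upfront_msat"] + BWD_UPFRONT_DELTA_MSAT,
            "grace_period_sec": incoming["grace_period_sec"] - GRACE_DELTA_SEC,
        }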

- Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-22 Thread Joost Jager
Hi Bastien,

We add a forward upfront payment of 1 msat (fixed) that is paid
> unconditionally when offering an HTLC.
> We add a backwards upfront payment of `hold_fees` that is paid when
> receiving an HTLC, but refunded
> if the HTLC is settled before the `hold_grace_period` ends (see footnotes
> about this).
>

It is interesting that the forward and backward payments are relatively
independent of each other. In particular the forward anti-spam payment
could quite easily be implemented to help protect the network. As you said,
just transfer that fixed fee for every `update_add_htlc` message from the
offerer to the receiver.

I am wondering though what the values for the fwd and bwd fees should be. I
agree with ZmnSCPxj that 1 msat for the fwd is probably not going to be
enough.

Maybe a way to approach it is this: suppose routing nodes are able to make
5% per year on their committed capital. An aggressive routing node could be
willing to spend up to that amount to take down a competitor.

Suppose the network consists only of 1 BTC, 483 slot channels. What should
the fwd and bwd fees be so that even an attacked routing node will still
earn that 5% (not through forwarding fees, but through hold fees) in both
the controlled and the uncontrolled spam scenario?

- Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Making unannounced channels harder to probe

2021-04-23 Thread Joost Jager
>
> But Joost pointed out that you need to know the node_id of the next node
> though: this isn't quite true, since if the node_id is wrong the spec
> says you should send an `update_fail_malformed_htlc` with failure code
> invalid_onion_hmac, which node N turns into its own failure message.
> Perhaps it should convert it to `unknown_next_peer` instead?  This isn't
> a common error on the modern network; I think our onion implementations
> have been rock solid.
>

Isn't this what I am suggesting here?
https://twitter.com/joostjgr/status/1385150318959341569

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Hold fee rates as DoS protection (channel spamming and jamming)

2021-02-11 Thread Joost Jager
Hi all,

Things have been quiet around channel jamming lately, but the vulnerability
is still there as much as it was before. I've participated in an (isolated)
mainnet channel jamming experiment (
https://bitcoinmagazine.com/articles/good-griefing-a-lingering-vulnerability-on-lightning-network-that-still-needs-fixing)
which only confirmed the seriousness of the issue.

BIDIRECTIONAL UPFRONT PAYMENT

Of all the proposals that have been presented, t-bast's remix of forward
and backward upfront payments (
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002862.html
,
https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md#bidirectional-upfront-payment)
is in my opinion the most promising direction.

One characteristic of the proposal is that the `hold_fees` are
time-independent. If an htlc doesn't resolve within the `grace_period`, the
receiver of the htlc will be forced to pay the full hold fee. The hold fee
should cover the expenses for locking up an htlc for the maximum duration
(could be 2000 blocks), so this can be a significant penalty. Applications
such as atomic onchain/offchain swaps (Lightning Loop and others) rely on
locking funds for some time and could get expensive with a fixed hold fee.

HOLD FEE RATE

In this post I'd like to present a variation of bidirectional upfront
payments that uses a time-proportional hold fee rate to address the
limitation above. I also tried to come up with a system that aims to relate
the fees paid more directly to the actual costs incurred and thereby reduce
the number of parameters.

In a Lightning channel, the sender of an htlc always incurs the cost. The
htlc value is deducted from their balance and the money can't be used for
other purposes when the htlc is in flight. Therefore ideally a routing node
is compensated for the time that their outgoing htlc is in flight.

To communicate this cost to the outside world, routing nodes advertise a
`hold_fee_rate` as part of their channel forwarding policy. An example
would be "0.3 msat per sat per minute". So if someone wants to forward 10k
sat through that channel and the htlc remains in flight for 5 minutes, the
routing node would like to see a compensation of 0.3msat * 10k sat * 5 mins
= 15 sat. (it is possible to extend the model with a base fee rate to also
cover the cost of an occupied slot on the commitment tx)

The question here again is who is going to pay the hold fee. The answer is
that it is primarily the receiver of the htlc who is going to pay. They are
the ones that can fail or settle the htlc and are therefore in control of
the hold time ("Reverse upfront payment").

But this would also mean that anyone can send out an htlc and collect hold
fees unconditionally. Therefore routing nodes advertise on the network
their `hold_grace_period`. When routing nodes accept an htlc to forward,
they're willing to pay hold fees for it. But only if they added a delay
greater than `hold_grace_period` for relaying the payment and its response.
If they relayed in a timely fashion, they expect the sender of the htlc to
cover those costs themselves. If the sender is also a routing node, the
sender should expect the node before them to cover it. Of course, routing
nodes can't be trusted. So in practice we can just as well assume that
they'll always try to claim from the prior node the maximum amount in
compensation.

This is the basic idea. Routing nodes have real costs for the lock up of
their money and will be compensated for it.

To coordinate the payment of the fees, the `update_add_htlc` message is
extended with:
* `hold_fee_rate`: the fee rate that the sender charges for having the htlc
in-flight (msat per sat per min)
* `hold_fee_discount`: the absolute fee discount (sat) that the receiver
gets as a compensation for hold fees that couldn't be claimed downstream
because of the grace periods (the worst case amount).
(the previous `hold_grace_period` in `update_add_htlc` is no longer needed)

When an htlc is resolved, the receiver of the htlc will pay the sender
the hold fee accrued at `hold_fee_rate` over the actual hold time, minus
`hold_fee_discount` (exact details of how to integrate this into the
channel state machine and how to deal with clock skew tbd).

It is up to the sender of a payment to construct the onion payloads such
that all nodes along the route will have their costs covered.

EXAMPLE

A > B > C > D
Every node charges 0.6 msat/sat/minute with a hold grace period of 1
minute. In this example, the routing fees are zero.
A wants to send 1000 sat to D.

A will charge B a hold fee rate of 0.6 sat/min (1000 sat at 0.6
msat/sat/min). B will charge C a hold fee rate of 1.2 sat/min to cover both
its own cost and what must be paid back to A. C will charge D a hold fee
rate of 1.8 sat/min to cover the costs of A, B and C.
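
A small sketch of this accumulation along the route, with the values taken
from the example:

    # Hold fee rates accumulating along A > B > C > D for a 1000 sat
    # payment, each node charging 0.6 msat/sat/min, zero routing fees.
    amount_sat = 1000
    node_rate_msat_per_sat_min = 0.6
    per_node_msat_per_min = node_rate_msat_per_sat_min * amount_sat  # 600

    cumulative_msat_per_min = 0.0
    for sender in ["A", "B", "C"]:
        cumulative_msat_per_min += per_node_msat_per_min
        print(sender, "charges", cumulative_msat_per_min / 1000,
              "sat/min downstream")
    # A: 0.6, B: 1.2 (own cost + A's), C: 1.8 (A + B + C) sat/min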

D has a grace period of 1 minute. At the 1.8 sat/min fee rate that C
charges, D would need to pay a maximum of 1.8 sat if it meets its grace
deadline just in time. C pays the 1.8 sats to D as 

Re: [Lightning-dev] Hold fee rates as DoS protection (channel spamming and jamming)

2021-02-11 Thread Joost Jager
Hi ZmnSCPxj,

Not quite up-to-speed back into this, but, I believe an issue with using
> feerates rather than fixed fees is "what happens if a channel is forced
> onchain"?
>
> Suppose after C offers the HTLC to D, the C-D channel, for any reason, is
> forced onchain, and the blockchain is bloated and the transaction remains
> floating in mempools until very close to the timeout of C-D.
> C is now liable for a large time the payment is held, and because the C-D
> channel was dropped onchain, presumably any parameters of the HTLC
> (including penalties D owes to C) have gotten fixed at the time the channel
> was dropped onchain.
>

> The simplicity of the fixed fee is that it bounds the amount of risk that C
> has in case its outgoing channel is dropped onchain.
>

The risk is bounded in both cases. If you want, you can cap the variable
fee at a level that isn't considered risky, but then it won't fully cover
the actual cost of the locked-up htlc. Also, any anti-DoS fee could very
well turn out to be insignificant compared to the cost of closing and
reopening a channel, given the state of the mempool these days.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Hold fee rates as DoS protection (channel spamming and jamming)

2021-02-23 Thread Joost Jager
>
> This struck me as an extremely salient point. One thing that has been
> noticeable missing from these discussions is any sort of threat model or
> attacker
> profile. Given this is primarily a griefing attack, and the attacker
> doesn't
> stand any direct gain, how high a fee is considered "adequate" deterrence
> without also dramatically increasing the cost of node operation in the
> average case?
>

I think that the first instances that we'll see of this attack will be
executed by routing node operators that don't mind operating in a gray
area. Apparently the popularity of the Lightning Network has risen to the
point where it is possible for an operator to earn thousands of dollars per
month in routing fees. Imagine how much this could be when you're actively
jamming the competition's channels. Not only can you capture their traffic,
but it will also be possible to command a higher fee rate because senders
probably still want those payments to go through.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Hold fee rates as DoS protection (channel spamming and jamming)

2021-02-12 Thread Joost Jager
Hi Antoine,

That said, routing nodes might still include the risk of hitting the chain
> in the computation of their `hodl_fee_rate` and the corresponding cost of
> having onchain timelocked funds.
>

Yes, that could be another reason to define `hodl_fee_rate` as a base fee
rate plus a component that is proportional to the amount. As mentioned in
my initial email, the base fee can be used to cover the cost of occupying
an htlc slot (which can be a significant factor for a large wumbo channel).
The risk of hitting the chain that you mention can be factored into this
base part as well. The hold fee rate would then be defined in the form (2
sat + 1%) per minute.
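
As a sketch, such a two-component rate could be computed like this (the
"(2 sat + 1%) per minute" example; function and parameter names are
illustrative only):

    # Hold fee rate with a base component (htlc slot, chain risk) plus a
    # component proportional to the amount.
    def hold_fee_sat(amount_sat, minutes, base_sat_per_min=2.0,
                     pct_per_min=0.01):
        return (base_sat_per_min + pct_per_min * amount_sat) * minutes

    print(hold_fee_sat(10_000, 5))  # (2 + 100) sat/min * 5 min = 510.0 sat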


> Given that HTLC deltas are decreasing along the path, it's more likely
> that `hodl_fee_rate` will be decreasing along the path. Even in case of
> lawfully solved hodl HTLC, routing nodes might be at loss for having paid a
> higher hold_fee on their upstream link than received on the downstream one.
>

> Is assuming increasing `hodl_fee_rate` along a payment path at odds with
> the ordering of timelocks ?
>

I don't think it is. There is indeed a time lock delta in case htlcs need
to be settled on-chain. But in the happy offchain scenario, the added (wall
clock) delay of a hop is tiny. So yes, they get paid from downstream a few
seconds less in hold fees than what they need to pay upstream, but I think
this is insignificant compared to the total compensation that they are
getting (which is based on the grace period that is advertised). To be
clear, for the calculation of the hold fee, it is the wall clock time that
is used and not the block height.


> > But this would also mean that anyone can send out an htlc and collect
> hold fees unconditionally. Therefore routing nodes advertise on the network
> their `hold_grace_period`. When routing nodes accept an htlc to forward,
> they're willing to pay hold fees for it. But only if they added a delay
> greater than `hold_grace_period` for relaying the payment and its response.
> If they relayed in a timely fashion, they expect the sender of the htlc
> to cover those costs themselves. If the sender is also a routing node, the
> sender should expect the node before them to cover it. Of course, routing
> nodes can't be trusted. So in practice we can just as well assume that
> they'll always try to claim from the prior node the maximum amount in
> compensation.
>
> Assuming `hodl_fee_rate` are near-similar along the payment path, you have
> a concern when the HTLC settlement happens at period N on the outgoing link
> and at period N+1 on the incoming link due to clock differences. In this
> case, a routing node will pay a higher `hodl_fee_rate` than received.
>
> I think this is okay, that's an edge case, only leaking a few sats.
>

Is this the same concern as above or slightly different? Or do you mean
clock differences between the endpoints of a channel? For that, I'd think
that there needs to be some tolerance to smooth out disagreements. But yes,
in general as long as a node is getting a positive amount, it is probably
okay to tolerate a few rounding errors here and there.


> A more concerning one is when the HTLC settlement happens at period N on
> the outgoing link and your incoming counterparty goes offline. According to
> the HTLC relay contract, the `hodl_fee_rate` will be inflated until the
> counterparty goes back online and thus the routing node is at loss. And
> going offline is a really lawful behavior for mobile clients, even further
> if you consider mailbox-style of HTLC delivery (e.g Lightning Rod). You
> can't simply label such counterparty as malicious.
>


> And I don't think counterparties can trust themselves about their onliness
> to suspend the `hodl_fee_rate` inflation. Both sides have an interest to
> equivocate, the HTLC sender to gain a higher fee, the HTLC relayer to save
> the fee while having received one on the incoming link ?
>

Yes, that is a good point. But I do think that it is reasonable that a node
that can go offline doesn't charge a hodl fee. Those nodes aren't generally
forwarding htlcs anyway, so it would just be for their own outgoing
payments. Without charging a hodl fee for outgoing payments, they risk that
their channel peer delays the htlc for free. So they should choose their
peers carefully. It seems that at the moment mobile nodes are often
connected to a known LSP already, so this may not be a real problem. The
policies for a channel can be asymmetric with the mobile node not charging
hold fees for its outgoing htlcs to the LSP, while the LSP does charge hold
fees for htlcs that it forwards to the mobile node.

For the mailbox scenario, I think it is fair that someone is going to pay
for all those locked htlcs along the route. If the LSP decides to hold the
htlc until the destination comes online, they need to find a way to get the
mailbox bill paid.

All of this indeed also implies that nodes that do charge hold fees, need
to make sure to stay 

Re: [Lightning-dev] Hold fee rates as DoS protection (channel spamming and jamming)

2021-02-14 Thread Joost Jager
I've made a first attempt at projecting this idea onto the existing spec:
https://github.com/lightningnetwork/lightning-rfc/pull/843. This may also
clarify some of the questions that haven't been answered yet.

Joost

On Fri, Feb 12, 2021 at 2:29 PM Antoine Riard 
wrote:

> Hi Joost,
>
> Thanks for working on this and keeping raising awareness about channel
> jamming.
>
> > In this post I'd like to present a variation of bidirectional upfront
> payments that uses a time-proportional hold fee rate to address the
> limitation above. I also tried to come up with a system that aims > relate
> the fees paid more directly to the actual costs incurred and thereby reduce
> the number of parameters.
>
> Not considering hold invoices and other long-term held packets was one of
> my main concerns in the previous bidirectional upfront payments. This new
> "hodl_fee_rate" is better by binding the hold fee to the effectively
> consumed timelocked period of the liquidity and not its potential maximum.
>
> That said, routing nodes might still include the risk of hitting the chain
> in the computation of their `hodl_fee_rate` and the corresponding cost of
> having onchain timelocked funds. Given that HTLC deltas are decreasing
> along the path, it's more likely that `hodl_fee_rate` will be decreasing
> along the path. Even in case of lawfully solved hodl HTLC, routing nodes
> might be at loss for having paid a higher hold_fee on their upstream link
> than received on the downstream one.
>
> Is assuming increasing `hodl_fee_rate` along a payment path at odds with
> the ordering of timelocks ?
>
> > But this would also mean that anyone can send out an htlc and collect
> hold fees unconditionally. Therefore routing nodes advertise on the network
> their `hold_grace_period`. When routing nodes accept an htlc to forward,
> they're willing to pay hold fees for it. But only if they added a delay
> greater than `hold_grace_period` for relaying the payment and its response.
> If they relayed in a timely fashion, they expect the sender of the htlc
> to cover those costs themselves. If the sender is also a routing node, the
> sender should expect the node before them to cover it. Of course, routing
> nodes can't be trusted. So in practice we can just as well assume that
> they'll always try to claim from the prior node the maximum amount in
> compensation.
>
> Assuming `hodl_fee_rate` are near-similar along the payment path, you have
> a concern when the HTLC settlement happens at period N on the outgoing link
> and at period N+1 on the incoming link due to clock differences. In this
> case, a routing node will pay a higher `hodl_fee_rate` than received.
>
> I think this is okay, that's an edge case, only leaking a few sats.
>
> A more concerning one is when the HTLC settlement happens at period N on
> the outgoing link and your incoming counterparty goes offline. According to
> the HTLC relay contract, the `hodl_fee_rate` will be inflated until the
> counterparty goes back online and thus the routing node is at loss. And
> going offline is a really lawful behavior for mobile clients, even further
> if you consider mailbox-style of HTLC delivery (e.g Lightning Rod). You
> can't simply label such counterparty as malicious.
>
> And I don't think counterparties can trust themselves about their onliness
> to suspend the `hodl_fee_rate` inflation. Both sides have an interest to
> equivocate, the HTLC sender to gain a higher fee, the HTLC relayer to save
> the fee while having received one on the incoming link ?
>
> > Even though the proposal above is not fundamentally different from what
> was known already, I do think that it adds the flexibility that we need to
> not take a step back in terms of functionality (fair pricing for hodl
> invoices and its applications). Plus that it simplifies the parameter set.
>
> Minding the concerns raised above, I think this proposal is an improvement
> and would merit a specification draft, at least to ease further reasoning
> on its economic and security soundness. As a side-note, we're working
> further on Stake Certificates, which I believe is better for long-term
> network economics by not adding a new fee burden on payments. We should be
> careful to not economically outlaw micropayments. If we think channel
> jamming is concerning enough in the short-term, we can deploy a
> bidirectional upfront payment-style of proposal now and consider a better
> solution when it's technically mature.
>
>
> Antoine
>
> Le jeu. 11 févr. 2021 à 10:25, Joost Jager  a
> écrit :
>
>> Hi ZmnSCPxj,
>>
>> Not quite up-to-speed back into this, but, I believe an issue with using
>>> feerates rather than fixed fe

Re: [Lightning-dev] Hold fee rates as DoS protection (channel spamming and jamming)

2021-02-20 Thread Joost Jager
To further illustrate the interactions of the hold fee rate proposal, I've
created a spreadsheet that calculates the fees for a three hop route:

https://docs.google.com/spreadsheets/d/1UX3nMl-L9URO3Vd49DBVaubs6fdi6wxSb-ziSVlF6eo/edit?usp=sharing


(if there were a way to paste a working spreadsheet into an email, I would
have done it...)

You can make a copy and try out different values. The actual hold
time is particularly interesting, because it allows you to see how much
holding an htlc is going to cost you.

Example 1:
If all nodes want a 5% yearly return on their held capital and one sends a
1 million sat payment across three hops that is held for one hour by the
recipient, the recipient will be charged about 20 sats for this.

Example 2:
With the same configuration, a 1 sat micropayment that is settled
near-instantly will cost the sender around 130 msat in hold fees.
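
For example 1, the underlying arithmetic is roughly the following; this is
a sketch assuming the hold fee rate is derived directly from the 5% yearly
target, so the spreadsheet may differ in its details:

    # Rough reproduction of example 1: 5% yearly return on held capital,
    # a 1M sat payment held for one hour across three hops.
    yearly_return = 0.05
    amount_sat = 1_000_000
    hours_held = 1.0
    hops = 3

    per_hop_sat = amount_sat * yearly_return / (365 * 24) * hours_held
    print(round(per_hop_sat * hops))  # ~17 sat, in line with "about 20 sats"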

Joost

On Thu, Feb 11, 2021 at 3:28 PM Joost Jager  wrote:

> Hi all,
>
> Things have been quiet around channel jamming lately, but the
> vulnerability is still there as much as it was before. I've participated in
> an (isolated) mainnet channel jamming experiment (
> https://bitcoinmagazine.com/articles/good-griefing-a-lingering-vulnerability-on-lightning-network-that-still-needs-fixing)
> which only confirmed the seriousness of the issue.
>
> BIDIRECTIONAL UPFRONT PAYMENT
>
> Of all the proposals that have been presented, t-bast's remix of forward
> and backward upfront payments (
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002862.html
> ,
> https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md#bidirectional-upfront-payment)
> indicates in my opinion the most promising direction.
>
> One characteristic of the proposal is that the `hold_fees` are
> time-independent. If an htlc doesn't resolve within the `grace_period`, the
> receiver of the htlc will be forced to pay the full hold fee. The hold fee
> should cover the expenses for locking up an htlc for the maximum duration
> (could be 2000 blocks), so this can be a significant penalty. Applications
> such as atomic onchain/offchain swaps (Lightning Loop and others) rely on
> locking funds for some time and could get expensive with a fixed hold fee.
>
> HOLD FEE RATE
>
> In this post I'd like to present a variation of bidirectional upfront
> payments that uses a time-proportional hold fee rate to address the
> limitation above. I also tried to come up with a system that aims to relate
> the fees paid more directly to the actual costs incurred and thereby reduce
> the number of parameters.
>
> In a Lightning channel, the sender of an htlc always incurs the cost. The
> htlc value is deducted from their balance and the money can't be used for
> other purposes when the htlc is in flight. Therefore ideally a routing node
> is compensated for the time that their outgoing htlc is in flight.
>
> To communicate this cost to the outside world, routing nodes advertise a
> `hold_fee_rate` as part of their channel forwarding policy. An example
> would be "0.3 msat per sat per minute". So if someone wants to forward 10k
> sat through that channel and the htlc remains in flight for 5 minutes, the
> routing node would like to see a compensation of 0.3msat * 10k sat * 5 mins
> = 15 sat. (it is possible to extend the model with a base fee rate to also
> cover the cost of an occupied slot on the commitment tx)
>
> The question here again is who is going to pay the hold fee. The answer is
> that it is primarily the receiver of the htlc who is going to pay. They are
> the ones that can fail or settle the htlc and are therefore in control of
> the hold time ("Reverse upfront payment")
>
> But this would also mean that anyone can send out an htlc and collect hold
> fees unconditionally. Therefore routing nodes advertise on the network
> their `hold_grace_period`. When routing nodes accept an htlc to forward,
> they're willing to pay hold fees for it. But only if they added a delay
> greater than `hold_grace_period` for relaying the payment and its response.
> If they relayed in a timely fashion, they expect the sender of the htlc to
> cover those costs themselves. If the sender is also a routing node, the
> sender should expect the node before them to cover it. Of course, routing
> nodes can't be trusted. So in practice we can just as well assume that
> they'll always try to claim from the prior node the maximum amount in
> compensation.
>
> This is the basic idea. Routing nodes have real costs for the lock up of
> their money and will be compensated for it.
>
> To coordinate the payment of the fees, the `update_add_htlc` message is
> extended with:
> * `hold_fee_rate`: the fee rate that the sender charges for having the
> htlc

Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-23 Thread Joost Jager
>
> > The conventional approach is to create a lightning invoice on a node and
> > store the invoice together with order details in a database. If the order
> > then goes unfulfilled, cleaning processes remove the data from the node
> > and database again.
>
> > The problem with this setup is that it needs protection against unbounded
> > generation of payment requests. There are solutions for that such as rate
> > limiting, but wouldn't it be nice if invoices can be generated without
> the
> > need to keep any state at all?
>
> Isn't this ultimately an engineering issue? How much exactly is "too much"
> in this case? Invoices are relatively small, and also don't even
> necessarily
> need to be ever written to disk assuming a slim expiration window. It's
> likely the case that a service can just throw everything in Redis and call
> it a day. In terms of rate limiting a service would likely already need to
> implement that on the API/service level to mitigate app level DoS attacks.
>

It is an engineering issue and indeed, you can use something like Redis to
solve it. Today's internet is probably doing the same thing and it seems to
work so far. With lightning though, there is another option. And it could
be an engineering advantage to not have to keep that state. I can also
imagine that slim expiration windows aren't always desirable. Thinking of
personalized payment requests in mass mailings for example.

So yes, this is not about new functionality, but I still think it is worth
exploring the path.

As far as pre-images go, this can already be "stateless" by generating a
> single random seed (storing that somewhere w/ a counter likely) and then
> using shachain or elkrem to deterministically generate payment hashes. You
> can then either use the payment_addr/secret to index into the hash chain,
> or
> have the user send some counter extracted from the invoice as a custom
> record. Similar schemes have been proposed in the past to support "offline"
> vending machine payments.
>

What would be the advantage of using elkrem or shachain compared to just
using `H(receiver_node_secret | payment_secret)` as the preimage? The
sender knows all the preimages already anyway; they aren't revealed one
by one by another party.

Also I think it could be beneficial to add more data into that hash
function such as the payment amount and the order details. With that, the
receiver knows that they received a payment for something that they offered
in the past, without having the actual record of that stored somewhere.
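
A minimal sketch of such a stateless derivation (the hash construction and
field set are illustrative, not a spec):

    import hashlib

    # Derive the preimage statelessly from a node secret plus invoice
    # data, so no per-invoice state needs to be stored.
    def stateless_preimage(node_secret: bytes, payment_secret: bytes,
                           amount_msat: int, order_details: bytes) -> bytes:
        h = hashlib.sha256()
        h.update(node_secret)
        h.update(payment_secret)
        h.update(amount_msat.to_bytes(8, "big"))
        h.update(order_details)
        return h.digest()

    # The invoice's payment hash is sha256(preimage); on receiving an htlc
    # the node re-derives the preimage from the same inputs and can settle.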


> Taking it one step further, the service could maintain a unique
> elkrem/shachain state for each unique user, which would then allow them to
> also collapse the pre-image into the hash chain, which lets them save space
> and be able to reproduce a given "proof that someone in the world paid"
> (that no service/wallet seems to accept/generate in an
> automated/standardized manner) statement dynamically
>

Can you provide an example to clarify this idea a bit more? If I read it
correctly, the goal is for the receiver to produce a proof that someone in
the world paid. But how can this be proven if the receiver can generate all
the preimages that they want?

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Ask First, Shoot (PTLC/HTLC) Later

2021-10-21 Thread Joost Jager
> If it is a multipart and we have the preimage, wait for all the parts
> to arrive, then say yes to all of them.

Without actual reservations made in the channels, is this going to work?
For example: a 10M payment and a route that contains a channel with only 5M
balance. The sender's multi-path algorithm will try to split and send the
first 5M. Then they'll send the second 5M, but because there is no actual
reservation, the second 5M appears to pass through the bottleneck channel
just fine as well. When the payment is then actually executed, it will
fail.

Or do nodes keep track of all the unresolved probes and deduct the total
amount from the available balance? Of course only for the available balance
for probes. When a real htlc comes through, outstanding probes are ignored.
Although the problem with that could be that someone can spam you with
probes so that your available 'probe' balance is zero and you'll receive no
real traffic anymore.

Perhaps an alternative is to let senders attach a random identifier to a
probe. For multi-part probes, each probe will carry the same identifier.
Routing nodes will deduct the outstanding probe amounts from the available
balance, but only for probes within the same group (same id). That way each
probe group is isolated from everything else that is going on.
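
A sketch of that bookkeeping on a routing node (the structure and names
are hypothetical):

    from collections import defaultdict

    # Reserve balance per probe group id, so probes only compete with
    # probes of their own group and never with real htlcs.
    class ProbeTracker:
        def __init__(self, available_balance_msat: int):
            self.balance = available_balance_msat
            self.reserved = defaultdict(int)  # group id -> outstanding msat

        def accept_probe(self, group_id: str, amount_msat: int) -> bool:
            if self.reserved[group_id] + amount_msat > self.balance:
                return False  # this group's probes exceed the balance
            self.reserved[group_id] += amount_msat
            return True

        def resolve_probe(self, group_id: str, amount_msat: int) -> None:
            self.reserved[group_id] -= amount_msat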

Joost

On Wed, Sep 29, 2021 at 5:40 AM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning list,
>
> While discussing something tangentially related with aj, I wondered this:
>
> > Why do we shoot an HTLC first and then ask the question "can you
> actually resolve this?" later?
>
> Why not something like this instead?
>
> * For a payer:
>   * Generate a path.
>   * Ask first hop if it can resolve an HTLC with those specs (passing the
> encrypted onion).
>   * If first hop says "yes", actually do the `update_add_htlc` dance.
> Otherwise try again.
> * For a forwarder:
>   * If anybody asks "can you resolve this path" (getting an encrypted
> onion):
> * Decrypt one layer to learn the next hop.
> * Check if the next hop is alive and we have the capacity towards it,
> if not, answer no.
> * Ask next hop if it can resolve the next onion layer.
> * Return the response from the next hop.
> * For a payee:
>   * If anybody asks "can you resolve this path":
> * If it is not a multipart and we have the preimage, say yes.
> * If it is a multipart and we have the preimage, wait for all the
> parts to arrive, then say yes to all of them.
> * Otherwise say no.
>
> Now, the most obvious reason against this, that comes to mind, is that
> this is a potential DoS vector.
> Random node can trigger a lot of network activity by asking random stuff
> of random nodes.
> Asking the question is free, after all.
>
> However, we should note that sending *actual* HTLCs is a similar DoS
> vector **today**.
> This is still "free" in that the asker has no need to pay fees for failed
> HTLCs; they just lose the opportunity cost of the amount being locked up in
> the HTLCs.
> And presumably the opportunity cost is low since Lightning forwarding
> earnings are so tiny.
>
> One way to mitigate against this is to make generating an onion costly but
> validating and decrypting it cheap.
> We could use an encryption scheme that is more computationally expensive
> to encrypt but cheap to decrypt, for example.
> Or we could require proof-of-work on the onion: each unwrapped onion
> layer, when hashed, has to have a hash that is less than some threshold
> (this scales according to the number of hops in the onion, as well).
> Ultimate askers need to grind the shared secret until the onion layer hash
> achieves the target.
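
A toy Go version of that grinding loop (buildOnionLayer stands in for the
real onion construction and is assumed to exist):

    // grindSessionKey tries random session keys until the resulting
    // onion layer hashes below the target.
    func grindSessionKey(payload []byte, target [32]byte) []byte {
        for {
            key := make([]byte, 32)
            rand.Read(key) // crypto/rand
            layer := buildOnionLayer(key, payload)
            if h := sha256.Sum256(layer); bytes.Compare(h[:], target[:]) < 0 {
                return key
            }
        }
    }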
>
> Obviously just because you asked a few milliseconds ago if a path is
> viable does not mean that the path *remains* viable right now when you
> actually send out an HTLC, but presumably that risk is now lessened.
> Unexpected shutdowns or loss of connectivity has to appear in a smaller
> and shorter time frame to negatively affect intermediate nodes.
>
> Another thought is: Does the forwarding node have an incentive to lie?
> Suppose the next hop is alive but the forwarding node has insufficient
> capacity towards the next hop.
> Then the forwarding node can lie and claim it can still resolve the HTLC,
> in the hope that a few milliseconds later, when the actual HTLC arrives,
> the capacity towards the next hop has changed.
> Thus, even if the capacity now is insufficient, the forwarding node has an
> incentive to lie and claim sufficient capacity.
>
> Against the above, we can mitigate this by accepting "no" from *any* node
> along the path, but only accepting "yes" from the actual payee.
> We already have a mechanism where any node along a route can report a
> forwarding or other payment error, and the sender is able to identify which
> node along the path raised it.
> Thus, the payer can identify which node along the route responded with a
> "yes", and check that it 

Re: [Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-21 Thread Joost Jager
A potential downside of a dedicated probe message is that it could be used
for free messaging on lightning by including additional data in the payload
for the recipient. Free messaging is already possible today via htlcs, but
a probe message would lower the cost to do so because the sender doesn't
need to lock up liquidity for it. This probably increases the spam
potential. I am wondering if it is possible to design the probe message so
that it is useless for anything other than probing. I guess it is hard
because it would still have that obfuscated 1300-byte block with the
remaining part of the route in it and nodes can't see whether there is
other meaningful data at the end.

On Thu, Oct 14, 2021 at 9:48 AM Joost Jager  wrote:

> A practice that is widely applied by lightning wallets is to probe routes
> with an unknown payment hash before making the actual payment. Probing
> yields an accurate routing fee that can be shown to the user before
> execution of the payment.
>
> The downside of this style of probing is that for a short period of time,
> liquidity is locked up. Not just the sender's liquidity, but also liquidity
> of nodes along the route. And if the probe gets stuck for whatever reason,
> that short period may become longer.
>
> But does this lock up serve a purpose at all? Suppose there would be a
> liquidity probing protocol message similar to `update_add_htlc`
> (`probe_htlc`?) that would skip the whole channel update machinery and is
> only forwarded to the next hop if the link would be able to carry the htlc.
> Won't this work as well as the current probing without the downsides? Nodes
> can obviously reject these probes because they are distinguishable from
> real payments (contrary to unknown hash probing where everything looks the
> same). However if they do so, senders won't use that route and the routing
> node misses out on routing fees.
>
> Another problem of the lightning network is its susceptibility to channel
> jamming. Multiple options have been proposed (see also
> https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md),
> but they all come with downsides.
>
Personally I lean towards solutions that deter the attacker by making them
pay actual satoshis. Lightning itself is a payment system, and paying for
the payments seems a natural solution to the problem.
> Several iterations of this idea have been proposed. One of my own that
> builds on an earlier idea by t-bast is described in
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-February/002958.html
> .
>
> The main criticism that this proposal has received is that it deteriorates
> the user experience for honest users when multiple payment routes need to
> be attempted. Every attempt will have a cost, so the user will see their
> balance go down just by trying to make the payment. How bad this is
> depends on the attempt failure rate. I expect this rate to become really
> low as the network matures and senders hold routing nodes to high
> standards. Others however think otherwise and consider a series of failed
> attempts part of a healthy system.
>
> Custodial wallets could probably just swallow the cost for failures. They
> typically use one pathfinding system for all their users and are therefore
> able to collect a lot of information on routing node performance. This is
> likely to decrease the payment failure rate to an acceptably low level.
>
> For non-custodial nodes, this may be different. They have to map out the
> good routing nodes all by themselves, and this exploration bears a cost.
>
> So how would things work out with a combination of both of the proposals
> described in this mail? First we make probing free (free as in no liquidity
> locked up) and then we'll require senders to pay for failed payment
> attempts too. Failed payment attempts after a successful probe should be
> extremely rare, so doesn't this fix the UX issue with upfront fees?
>
> Joost
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-21 Thread Joost Jager
On Thu, Oct 21, 2021 at 12:00 PM ZmnSCPxj  wrote:

> Good morning Joost,
>
> > A potential downside of a dedicated probe message is that it could be
> used for free messaging on lightning by including additional data in the
> payload for the recipient. Free messaging is already possible today via
> htlcs, but a probe message would lower the cost to do so because the sender
> doesn't need to lock up liquidity for it. This probably increases the spam
> potential. I am wondering if it is possible to design the probe message so
> that it is useless for anything other than probing. I guess it is hard
> because it would still have that obfuscated 1300-byte block with the
> remaining part of the route in it and nodes can't see whether there is
> other meaningful data at the end.
>
> For the probe, the onion max size does not *need* to be 1300, we could
> reduce the size to make it less useable for *remote* messaging.
>

Yes, maybe it can be reduced a bit. But if we want to support 27 hops like
we do for payments, there will be quite some space left for messaging on
real routes which are mostly much shorter.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-14 Thread Joost Jager
A practice that is widely applied by lightning wallets is to probe routes
with an unknown payment hash before making the actual payment. Probing
yields an accurate routing fee that can be shown to the user before
execution of the payment.

The downside of this style of probing is that for a short period of time,
liquidity is locked up. Not just the sender's liquidity, but also liquidity
of nodes along the route. And if the probe gets stuck for whatever reason,
that short period may become longer.

But does this lock up serve a purpose at all? Suppose there would be a
liquidity probing protocol message similar to `update_add_htlc`
(`probe_htlc`?) that would skip the whole channel update machinery and is
only forwarded to the next hop if the link would be able to carry the htlc.
Won't this work as well as the current probing without the downsides? Nodes
can obviously reject these probes because they are distinguishable from
real payments (contrary to unknown hash probing where everything looks the
same). However if they do so, senders won't use that route and the routing
node misses out on routing fees.

Another problem of the lightning network is its susceptibility to channel
jamming. Multiple options have been proposed (see also
https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md),
but they all come with downsides.

Personally I lean towards solutions that deter the attacker by making them
pay actual satoshis. Lightning itself is a payment system, and paying for
the payments seems a natural solution to the problem.
Several iterations of this idea have been proposed. One of my own that
builds on an earlier idea by t-bast is described in
https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-February/002958.html
.

The main criticism that this proposal has received is that it deteriorates
the user experience for honest users when multiple payment routes need to
be attempted. Every attempt will have a cost, so the user will see their
balance go down just by trying to make the payment. How bad this is
depends on the attempt failure rate. I expect this rate to become really
low as the network matures and senders hold routing nodes to high
standards. Others however think otherwise and consider a series of failed
attempts part of a healthy system.

Custodial wallets could probably just swallow the cost for failures. They
typically use one pathfinding system for all their users and are therefore
able to collect a lot of information on routing node performance. This is
likely to decrease the payment failure rate to an acceptably low level.

For non-custodial nodes, this may be different. They have to map out the
good routing nodes all by themselves, and this exploration bears a cost.

So how would things work out with a combination of both of the proposals
described in this mail? First we make probing free (free as in no liquidity
locked up) and then we'll require senders to pay for failed payment
attempts too. Failed payment attempts after a successful probe should be
extremely rare, so doesn't this fix the UX issue with upfront fees?

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-19 Thread Joost Jager
There could be some corners where the incentives may not work out 100%, but
I doubt that any routing node would bother exploiting this. Especially
because there could always be that reputation scheme at the sender side
which may cost the routing node a lot more in lost routing fees than the
marginal gain from the upfront payment.

Another option is that nodes that don't care to be secretive about their
channel balances could include the actual balance in a probe failed
message. Related: https://github.com/lightningnetwork/lightning-rfc/pull/695

Overall it seems that htlc-less probes are an improvement to what we
currently have. Immediate advantages include a reduction of the load on
nodes by cutting out the channel update machinery, better ux (faster
probes) and no locked up liquidity. On the longer term it opens up the
option to charge for failed payments so that we finally have an answer to
channel jamming.

ZmnSCPxj, as the first person to propose the idea (I think?), would you be
interested in opening a draft PR on the spec repository that outlines the
new message(s) that we'd need and continue detailing from there?

Joost

On Sat, Oct 16, 2021 at 12:51 AM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Owen,
>
> > C now notes that B is lying, but is faced with the dilemma:
> >
> > "I could either say 'no' because I can plainly see that B is lying, or
> > I could say 'yes' and get some free sats from the failed payment (or
> > via the hope of a successful payment from a capacity increase in the
> > intervening milliseconds)."
>
> Note that if B cannot forward an HTLC to C later, then C cannot have a
> failed payment and thus cannot earn any money from the upfront payment
> scheme; thus, at least that part of the incentive is impossible.
>
> On the other hand, there is still a positive incentive for continuing the
> lie --- later, maybe the capacity becomes OK and C could earn both the
> upfront fee and the success fee.
>
> > So C decides it's in his interest to keep the lie going. D, the payee,
> > can't tell that it's a lie when it reaches her.
> >
> > If C did want to tattle, it's important that he be able to do so in a
> > way that blames B instead of himself, otherwise payers will assume
> > (incorrectly, and to C's detriment) that the liquidity deficit is with C
> > rather than B.
>
> That is certainly quite possible to do.
>
> Regards,
> ZmnSCPxj
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-15 Thread Joost Jager
On Fri, Oct 15, 2021 at 4:21 PM Owen Gunden  wrote:

> On Thu, Oct 14, 2021 at 09:48:27AM +0200, Joost Jager wrote:
> > So how would things work out with a combination of both of the
> > proposals described in this mail? First we make probing free (free as
> > in no liquidity locked up) and then we'll require senders to pay for
> > failed payment attempts too. Failed payment attempts after a
> > successful probe should be extremely rare, so doesn't this fix the UX
> > issue with upfront fees?
>
> Why couldn't a malicious routing node (or group of colluding routing
> nodes) succeed the probe and then fail the payment in order to collect
> the failed payment fee?
>

Yes they could, but senders should be really suspicious when this happens.
It could happen occasionally because balances may have shifted in between
probe and payment. But if it keeps happening they may want to ban this
routing node for a long time. This may disincentivize the routing node
enough to respond honestly to probes.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-15 Thread Joost Jager
>
> > On Thu, Oct 14, 2021 at 09:48:27AM +0200, Joost Jager wrote:
> >
> > > So how would things work out with a combination of both of the
> > > proposals described in this mail? First we make probing free (free as
> > > in no liquidity locked up) and then we'll require senders to pay for
> > > failed payment attempts too. Failed payment attempts after a
> > > successful probe should be extremely rare, so doesn't this fix the UX
> > > issue with upfront fees?
> >
> > Why couldn't a malicious routing node (or group of colluding routing
> > nodes) succeed the probe and then fail the payment in order to collect
> > the failed payment fee?
>
> Good observation!
>
> I propose substantially the same thing here:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-September/003256.html


I totally missed that thread, but it is indeed the same thing including the
notion that it may make upfront payments palatable! Contains some great
additional ideas too.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Stateless invoices with proof-of-payment

2021-09-21 Thread Joost Jager
Problem

One of the qualities of lightning is that it can provide light-weight,
no-login payments with minimal friction. Games, paywalls, podcasts, etc can
immediately present a QR code that is ready for scan and pay.

Optimistically presenting payment requests does lead to many of those
payment requests going unused. A user visits a news site and decides not to
buy the article. The conventional approach is to create a lightning invoice
on a node and store the invoice together with order details in a database.
If the order then goes unfulfilled, cleaning processes remove the data from
the node and database again.

The problem with this setup is that it needs protection against unbounded
generation of payment requests. There are solutions for that such as rate
limiting, but wouldn't it be nice if invoices could be generated without the
need to keep any state at all?

Stateless invoices

What would happen if a lightning invoice were only generated, and stored
nowhere on the recipient side? To the user, it wouldn't make a difference:
they would still scan and pay the invoice. When the payment arrives at the
recipient though, two problems arise:

1. Recipient doesn't know whom or what the payment is for.

This can be solved by attaching additional custom tlv records to the htlc.
On the wire, this is all arranged for. The only missing piece is the
ability to specify additional data for that custom tlv record in a bolt11
invoice. One way would be to define a new tagged field for this in which
the recipient can encode the order details.

An alternative is to use the existing invoice description field and simply
always pass that along with the htlc as a custom tlv record.

A second alternative that already works today is to use part (for example
16 out of 32 bytes) of the payment_secret (aka payment address) to encode
the order details in. This assumes that the secret is still secret enough
with reduced entropy. Also there may not be enough space for every
application.

2. Recipient doesn't know the preimage that is needed to settle the htlc(s).

One option is to use a keysend payment or AMP payment. In that case, the
sender includes the preimage with the htlc. Unfortunately this doesn't
provide the sender with a proof of payment that they'd get with a regular
lightning payment.

An alternative solution is to use a deterministic preimage based on a
(recipient node key-derived) secret, the payment secret and other relevant
properties. This allows the recipient to derive the same preimage twice:
Once when the lightning invoice is generated and again when a payment
arrives.

It could be something like this:

payment_secret = random
preimage = H(node_secret | payment_secret | payment_amount |
encoded_order_details)
invoice_hash = H(preimage)

The sender sends an htlc locked to invoice_hash for payment_amount and
passes along payment_secret and encoded_order_details in a custom tlv
record.

When the recipient receives the htlc, they reconstruct the preimage
according to the formula above. At this point, all data is available to do
so. When H(preimage) indeed matches the htlc hash, they can settle the
payment knowing that this is an order that they committed to earlier.
Settling could be implemented as a just-in-time inserted invoice to keep
the diff small.

The preimage is returned to the sender and serves as a proof of payment.
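
In Go, using crypto/sha256 and encoding/binary, the derivation could look
roughly like this (a sketch; the exact field encoding would still need to
be specified):

    // derivePreimage recomputes the deterministic preimage from data
    // that is available both at invoice creation and at htlc arrival.
    func derivePreimage(nodeSecret, paymentSecret []byte, amtMsat uint64,
        orderDetails []byte) [32]byte {

        h := sha256.New()
        h.Write(nodeSecret)
        h.Write(paymentSecret)
        binary.Write(h, binary.BigEndian, amtMsat)
        h.Write(orderDetails)

        var preimage [32]byte
        copy(preimage[:], h.Sum(nil))
        return preimage
    }

    // canSettle checks an incoming htlc against the recomputed preimage.
    func canSettle(htlcHash, preimage [32]byte) bool {
        return sha256.Sum256(preimage[:]) == htlcHash
    }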

Resilience

To me it seems that stateless invoices can be a relatively simple way to
improve the resiliency of systems that deal with lightning invoices.
Unlimited amounts of invoices can be generated without worrying about
storage or memory, no matter if the requests are due to popularity of a
service or a deliberate dos attack.

Interested to hear your thoughts.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-21 Thread Joost Jager
On Tue, Sep 21, 2021 at 2:05 PM fiatjaf  wrote:

> What if instead of the payer generating the preimage the payee could
> generate stateless invoices? Basically just use some secret to compute the
> preimage upon receiving the HTLC, for example:
>

Maybe my explanation wasn't clear enough, but this is exactly what I am
proposing. The payee generates a stateless invoice and gives it to the
payer.


> 1. Payer requests an invoice.
> 2. Payee computes hash = sha256(hmac(local_secret, arbitrary_invoice_id)),
> then encodes arbitrary_invoice_id into the invoice somehow.
> 3. Payer sends payment with arbitrary_invoice_id as tlv_record_a.
> 4. Upon receiving the HTLC, payee computes preimage = hmac(local_secret,
> tlv_record_a) and resolves it.
>

One way to do this that I tried to describe in the initial post is via the
payment_secret. This is an arbitrary invoice id that is already sent as a
tlv record.


> I've implemented such a scheme on @lntxbot, but it required low level code
> in a c-lightning plugin and a hack with route hints: since TLV payloads
> were not an option (as payers wouldn't know how to send them) I've used a
> "shadow" route hint to a private channel that didn't exist, so preimage was
> generated on the payee using preimage = hmac(local_secret,
> next_channel_scid).
>

Clever workaround.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-21 Thread Joost Jager
On Tue, Sep 21, 2021 at 3:06 PM fiatjaf  wrote:

> I would say, however, that these are two separate proposals:
>
>   1. implementations should expose a "stateless invoice" API for receiving
> using the payment_secret;
>   2. when sending, implementations should attach a TLV record with encoded
> order details.
>
> Of these, 1 is very simple to do and do not require anyone to cooperate,
> it just works.
>
> 2 requires full network compatibility, so it's harder. But 2 is also very
> much needed otherwise the payee has to keep track of all the invoice ids
> related to the orders they refer to, right?
>

Not completely sure what you mean by full network compatibility, but a
network-wide upgrade including all routing nodes isn't needed. I think to
do it cleanly we need a new tag for bolt11 and node implementations that
carry over the contents of this field to a tlv record. So senders do need
to support this.


> But I think just having 1 already improves the situation a lot, and there
> are application-specific workarounds that can be applied for 2 (having a
> fixed, hardcoded set of possible orders, encoding the order very minimally
> in the payment secret or route hint, storing order details on redis for
> only 3 minutes and using lnurlpay to reduce the delay between invoice
> issuance and user confirmation to zero, and so on).
>

A stateless invoice API would be a great thing to have. I've prototyped
this in lnd and if you implement it so that a regular invoice is inserted
'just in time', it isn't too involved as you say.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-21 Thread Joost Jager
On Tue, Sep 21, 2021 at 5:47 PM Bastien TEINTURIER  wrote:

> Hi Joost,
>
> Concept ACK, I had toyed with something similar a while ago, but I hadn't
> realized
> that invoice storage was such a DoS vector for merchants/hubs and wasn't
> sure it
> would be useful.
>

Yes, I definitely think it is. Especially in a future where LN invoices are
generated casually everywhere.

I started a PR on lightning-rfc to explore the impact points.
https://github.com/lightningnetwork/lightning-rfc/pull/912


> Do you have an example of what information you would usually put in your
> `encoded_order_details`?
>
> I'd imagine that it would usually be simply a skuID from the merchant's
> product
> database, but it could also be fully self-contained data to identify a
> "transaction"
> (probably encrypted with a key belonging to the payee).
>

I think it depends on the application. For a paywall it could be the
article id and an identifier for the user - perhaps a cookie in the user's
browser. But it could indeed go as far as a list of skus and the physical
delivery address for the goods. That obviously won't be suitable for every
application because of the limited space. Passing a full online supermarket
shopping cart in the tlv payload is probably stretching it too far.

Last year, my former colleagues Oliver and Desiree and I ran tlvshop.com.
The site is offline now, but still viewable at
https://joostjager.github.io/tlvshop.com/. If you fill out all the fields,
it encodes the data into a (non-standard) tlv field. In the case of
tlvshop, the payment is truly spontaneous. However the idea of encoding
metadata is the same.

Attaching metadata to a payment is almost impossible with traditional
payments. Lightning changes this and I think that is also a great
opportunity to rethink typical design patterns for ecommerce applications.


> We'd want to ensure that this field is reasonably small, to ensure it can
> fit in
> onions without forcing the sender to use shorter routes or disable other
> features.
>

I don't know if something bad can happen if the sender is forced to use
shorter routes. Maybe as a method to shape traffic in a certain way? Not
totally sure, but perhaps this is already possible today by creating bogus
route hints. In general I'd say that too much metadata just decreases the
payment success rate and therefore something the payee will consider when
creating the invoice. A reasonable cap is an easy fix to address any doubts
on this front though. The only question is what constant to pick...

Joost.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Payment sender authentication

2021-12-17 Thread Joost Jager
Hello list,

In Lightning we have a great scheme to protect the identity of the sender
of a payment. This is awesome, but there are also use cases where opt-in
sender authentication is desired.

An example of such a use case is chat over lightning. If you receive a text
message, you generally want to know whom it originates from. For whatsat
[1], I added a custom record containing an ECDSA signature over `sender |
recipient | timestamp | msg`. I believe other chat protocols on lightning
use a similar construction.

For regular payments however, sender authentication can be useful too. A
donation for example doesn't always need to be anonymous. A donor may want
to reveal themselves. In other cases, sender authentication can add a layer
of protection against payments getting lost. It provides the receiver with
another field that they can use to retrieve lost payment information.

On the receiver side, sender authentication also creates new possibilities.
A receiver could for example create an invoice that is locked to a specific
source node.

Also wanted to note that the sender identity doesn't need to be the actual
node identity. It can also be an unrelated key that could for example be
specific to the service that is being paid to. For example, one registers
the public part of a dedicated key pair with an exchange and the exchange
then only accepts deposits authenticated with that key.

The reason for bringing this up on the list is that I wanted to see what
people think is the best way to do it technically. The whatsat approach
with an ECDSA signature doesn't look ideal to me because of the
non-repudiation property. Also care needs to be taken that whatever data
the sender includes, cannot be reused.

Another option that I can think of is to derive a shared secret using ECDH
with the sender and receiver node keys and then attach a custom record to
the payment containing the sender node key and an HMAC of the payment hash
using the shared secret as a key.
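
A sketch of that construction in Go, assuming compressed 33-byte keys and
some ecdh helper (crypto/hmac and crypto/sha256 are standard library; the
rest is assumption):

    // senderAuthRecord builds the custom record: the sender's node key
    // followed by an HMAC over the payment hash, keyed with the ECDH
    // shared secret between sender and receiver.
    func senderAuthRecord(senderPriv, senderPub, receiverPub,
        paymentHash []byte) []byte {

        mac := hmac.New(sha256.New, ecdh(senderPriv, receiverPub))
        mac.Write(paymentHash)
        return append(senderPub, mac.Sum(nil)...)
    }

    // verifySenderAuth lets the receiver recompute the same HMAC. The
    // receiver could have produced this value itself, so it proves
    // nothing to a third party.
    func verifySenderAuth(receiverPriv, paymentHash, record []byte) bool {
        senderPub, tag := record[:33], record[33:]
        mac := hmac.New(sha256.New, ecdh(receiverPriv, senderPub))
        mac.Write(paymentHash)
        return hmac.Equal(tag, mac.Sum(nil))
    }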

I am sure there are people better versed in cryptography than me who have an
opinion on this. Thoughts?

Joost

[1] https://github.com/joostjager/whatsat
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Route reliability<->fee trade-off control parameter

2021-12-17 Thread Joost Jager
On Mon, Nov 22, 2021 at 7:53 AM ZmnSCPxj  wrote:

> Good morning Dave,
>
> > If LN software speculatively chooses a series of attempts with a similar
> > 95%, accounting for things like the probability of a stuck payment (made
> > worse by longer CLTV timeouts on some paths), it could present users
> > with the same sort of options:
> >
> > ~1 second, x fee
> > ~3 seconds, y fee
> > ~10 seconds, z fee
> >
> > This allows the software to use its reliability scoring efficiently in
> > choosing what series of payment attempts to make and presents to the
> > user the information they need to make a choice appropriate for their
> > situation. As a bonus, it makes it easier for wallet software to move
> > towards a world where there is no user-visible difference between
> > onchain and offchain payments, e.g.:
> >
> > ~1 second, w fee
> > ~15 seconds, x fee
> > ~10 minutes, y fee
> > ~60 minutes, z fee
>
> This may not match ideally, as in the worst case a forwarding might be
> struck by literal lightning and dropped off the network while your HTLC is
> on that node, only for the relevant channel to be dropped onchain days
> later when the timeout comes due.
> Providing this "seconds" estimate does not prepare users for the
> possibility of such black swan events where a high fee transaction gets
> stalled due to an accident on the network.
>

I like the idea of providing the user with such choices, similar to how for
example Uber presents its options. But I also agree with Z that it is
probably impossible to get a number of seconds there that comes close to
the actual time it would take.

For LND, I am currently proposing a fuzzy "time preference" parameter that
is vaguely indicative of the time that the payment is expected to take (
https://github.com/lightningnetwork/lnd/pull/6024). On the UI level this
can either be surfaced as a slider or the cost for three predefined values
for time preference can be shown:

Slow: 10 msat
Normal: 150 msat
Fast: 4000 msat


> Why not just ask for a fee budget for a payment, and avoid committing
> ourselves to paying within some number of seconds, given that the seconds
> estimate may very well vary depending on local CPU load?
>
Would users really complain overmuch if the number of seconds is not
> provided, given that we cannot really estimate this well?
>

A fee budget indeed expresses the time preference indirectly. But how does
the user know what a reasonable value is to set this to? It depends on
network conditions. They may accidentally set it to a value that is too low
and get an unexpectedly slow payment.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Route reliability<->fee trade-off control parameter

2021-12-17 Thread Joost Jager
On Mon, Nov 22, 2021 at 5:11 PM Stefan Richter 
wrote:

> I also don't believe putting a choice of more or less seconds expectation
> in the UI makes for a great user experience. IMHO the goal should just be:
> give the user an estimate of fees necessary to succeed within a reasonable
> time. Maybe give them an option to optimize for fees only if they are
> really cheap and don't care at all if the payment succeeds.
>

In the ideal world, I'd agree to this. But how close to that are we today?
Suppose we'd define reasonable time as 3 seconds to complete the payment.
And to stay below that, we need to use a short, expensive, high success
probability route that amounts to a fee of 1%. Would users be happy with a
take-it-or-leave-it approach, or rather pay 0.1% and wait 30 seconds?

I think that for the state of Lightning as it is currently, some kind of
control lever is useful to bridge the gap to payments that are always fast
and cheap. The checkbox that you propose is also a control lever, but it is
pretty minimalistic.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Payment sender authentication

2021-12-20 Thread Joost Jager
Hi fiatjaf and peter,

On Sat, Dec 18, 2021 at 2:08 PM fiatjaf  wrote:

> As a temporary solution, I suggest taking a look at
> https://github.com/fiatjaf/lnurl-rfc/blob/luds/18.md. The idea is that
> you can either provide
>  - a lone pubkey (to be used for manually identifying the payer later if
> necessary);
>  - a domain-specific pubkey along with a signature of a challenge provided
> by the receiver; or
>  - an unauthenticated name or email (for light things like donations for
> which one may not care very much about cryptographic authentication).
> And then you commit these things, whatever they are, inside the BOLT11
> payment request using the 'h' tag.
>

Interesting read. I see it uses a signature, but is that a good idea? It
could allow the receiver to prove to a 3rd party that the payment was made.
I guess it depends on the application if that is desired or not. The more
privacy conscious users would probably try to avoid committing to data via
a signature.

On Sat, Dec 18, 2021 at 6:56 PM Peter Todd  wrote:

> Lightning already has sender authentication: you simply give someone a
> pre-image hash over an authenticated channel, and the fact that the
> payment was
> made means only they could have realistically made it as they were the only
> person who knew that pre-image hash.
>

This sounds quite similar to what is described above in lnurl-18. I can see
that that works, but I should have added that I was looking for a solution
that exists completely within the protocol without using an additional
channel. Also, routing nodes learn the preimage hash too, so the sender
isn't the only person. But that is solved by the payment secret that is
also part of the invoice.


> Going beyond that is dangerous as you're creating the ability to prove to a
> *third* party who made a particular payment. That raises serious problems
> in
> cases like government raids that need to be considered very carefully.


This is why I proposed to use diffie-hellman to generate a shared secret.
The receiver could then have made up all the proofs themselves and are
therefore of no value to a third party.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Route reliability<->fee trade-off control parameter

2021-11-15 Thread Joost Jager
In Lightning pathfinding the two main variables to optimize for are routing
fee and reliability. Routing fee is concrete. It is the sat amount that is
paid when a payment succeeds. Reliability is a property of a route that can
be expressed as a probability. The probability that a route will be
successful.

During pathfinding, route options are compared against each other. So for
example:

Route A: fee 10 sat, success probability 50%
Route B: fee 20 sat, success probability 80%

Which one is the better route? That depends on user preference. A patient
user will probably go for route A in the hope of saving on fees whereas for
a time-sensitive payment route B looks better.

It would be great to offer this trade-off to the user in a simple way.
Preferably a single [0, 1] value that controls the selection process. At 0,
the route is only optimized for fees and probabilities are ignored
completely. At 1, the route is only optimized for reliability and fees are
ignored completely.

But how to choose between the routes A and B for a value somewhere in
between 0 and 1? For example 0.5 - perfect balance between reliability and
fee. But what does that mean exactly?

Anyone got an idea on how to approach this best? I am looking for a simple
formula to decide between routes, preferably with a reasonably sound
probability-theoretical basis (whatever that means).

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Route reliability<->fee trade-off control parameter

2021-11-15 Thread Joost Jager
One direction that I explored is to start with a statement by the user in
this form:

"If there is a route with a success probability of 50%, then I am willing
to pay up to 1.8x the routing fee for an alternative route that has a 80%
success probability"

I like this because it isn't an abstract weight or factor. It is actually
clear what this means.

What I haven't yet succeeded in is finding a model where I can plug in 50%,
80% and 1.8x and that generalizes to arbitrary inputs A% and B%. But it seems
to me that there must be some probabilistic equation / law / rule / theorem
/ ... that can support this.

Joost.

On Mon, Nov 15, 2021 at 4:25 PM Joost Jager  wrote:

> In Lightning pathfinding the two main variables to optimize for are
> routing fee and reliability. Routing fee is concrete. It is the sat amount
> that is paid when a payment succeeds. Reliability is a property of a route
> that can be expressed as a probability. The probability that a route will
> be successful.
>
> During pathfinding, route options are compared against each other. So for
> example:
>
> Route A: fee 10 sat, success probability 50%
> Route B: fee 20 sat, success probability 80%
>
> Which one is the better route? That depends on user preference. A patient
> user will probably go for route A in the hope of saving on fees whereas for
> a time-sensitive payment route B looks better.
>
> It would be great to offer this trade-off to the user in a simple way.
> Preferably a single [0, 1] value that controls the selection process. At 0,
> the route is only optimized for fees and probabilities are ignored
> completely. At 1, the route is only optimized for reliability and fees are
> ignored completely.
>
> But how to choose between the routes A and B for a value somewhere in
> between 0 and 1? For example 0.5 - perfect balance between reliability and
> fee. But what does that mean exactly?
>
> Anyone got an idea on how to approach this best? I am looking for a simple
> formula to decide between routes, preferably with a reasonably sound
> probability-theoretical basis (whatever that means).
>
> Joost
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Route reliability<->fee trade-off control parameter

2021-11-15 Thread Joost Jager
Hi Rene,


> First I am happy that you also agree that reliability can and should be
> expressed as a probability as discussed in [0].
>

Probability based routing is not new to me. I've implemented a form of that
in lnd in March 2019: https://github.com/lightningnetwork/lnd/pull/2802,
followed by several rounds of refinement.


> The problem that you address is that of feature engineering[1]. Which
> consists of two (or even more) steps:
>
> 1.) Feature selection: That means in payment delivery we will compute a
> min cost flow [2] with a chosen cost function (historically people used
> dijkstra seach for single paths with the cost function representing the
> weights on the edges of the graph -which is what most folks currently still
> do). While [2] and I personally agree with you that the cost function
> should be a combination the two features fees and reliability (as in
> successprobability) Matt Corallo righfully pointed out [3] that other
> features might be chosen in the future to deliver more optimal results. For
> example implementations currently often use CLTV as a feature (which I
> honestly find horrible) and I am currently investigating if one could add
> latency of channels or - for known IP addresses - either the geo distance
> or IP distance.
>

I am aware that there are more candidate features, but my question is
specifically about the ones that I mentioned.

2.) Combining features: This is the question that you are asking. Often
> people use a linear weighted sum to combine features. This is what often
> happens implicitly in neural networks. While this is often good enough and
> while it is often practical to either learn the weights or give users a
> choice there are many situation where the weighted linear sum does not work
> well with the selected features. An example for the weighted sum is the
> risk-factor in c-lightning that could have been used to decide if one
> wanted the dijkstra seach to either optimize for CLTV delta or for paid
> routing fees. Also in our paper [2] in which we discuss the same two
> features that you mentioned we explain how a linear sum of two features can
> be optimal due to the lagrangian bounding principle. However in practice
> (of machine learning) it has been shown that using the harmonic mean [4]
> between features often works very well without the necessity to learn a
> weight / parameter. This has for example been done when c-lightning
> recently switched to probabilistic path finding [5]. In this thread you
> find a long discussion and evaluation how the harmonic mean outperformed
> the linear sum.
>

Obviously features can be combined in a multitude of ways, but I am looking
for something that is anchored to some kind of understandable starting
point. What I did in lnd is to work with so called 'payment attempt cost'.
A virtual satoshi amount that represents the cost of a failed attempt. If
you put a high price on failed attempts, pathfinding will tend towards more
reliable routes even if they require a higher fee. To me, the idea of
putting a (virtual) cost on a payment attempt is tangible and ideally the
math should follow from that. I don't want zero parameters, because I think
that ultimately the fee/reliability trade-off is up to the user to decide
on.
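
As an illustration of how such an attempt cost plugs into route comparison
(one possible formulation, not necessarily lnd's exact formula):

    // routeWeight combines fee and reliability via a virtual cost per
    // failed attempt. With success probability p, the expected number
    // of failures before success is (1-p)/p, so the expected virtual
    // cost comes on top of the fee that is paid on success.
    func routeWeight(feeSat, attemptCostSat, p float64) float64 {
        return feeSat + attemptCostSat*(1-p)/p
    }

With routes A and B from the original post and an attempt cost of 10 sat,
route A still wins (weight 20 vs 22.5 sat); raise the attempt cost to 50
sat and route B takes over (60 vs 32.5 sat).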

Joost

>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [RFC] Lightning gossip alternative

2022-02-15 Thread Joost Jager
Hi Rusty,

Nice to see the proposal in more concrete terms. A few questions:

- The total proved utxo value (not counting any utxos which are spent)
>   is multiplied by 10 to give the "announcable_channel_capacity" for that
> node.
>

Could this work as a dynamic value too, similar to the minimum relay fee on
L1?


> 1. `tlv_stream`: `channel_update_v2_tlvs`
>
2. types:
> 1. type: 4 (`capacity`)
> 2. data:
> * [`tu64`:`satoshis`]
>

What does capacity mean exactly outside the context of a real channel? Will
this be reduced to the maximum htlc amount that the nodes want to route,
to save as much of the announceable budget as possible?

There is also the question of whether 10 x 10k channels should weigh as much
on the budget as a 1 x 100k channel. A spammer may be able to do more harm
with multiple smaller channels because there is more for the sender's
pathfinding algorithms to explore. Maybe it doesn't matter as long as there
is some mechanism to discourage spam.


> 1. type: 5 (`cost`)
> 2. data:
>* [`u16`:`cltv_expiry_delta`]
>* [`u32`:`fee_proportional_millionths`]
>* [`tu32`:`fee_base_msat`]
> 1. type: 6 (`min_msat`)
> 2. data:
> * [`tu64`:`min_htlc_sats`]
>
> - `channel_id_and_claimant` is a 31-bit per-node channel_id which can be
>   used in onion_messages, and a one bit stolen for the `claim` flag.
>

If you'd increase the budget multiplier from 10 to 20, couldn't this be
simplified to always applying the cost to both nodes?


> - A channel is not considered to exist until both peers have sent a
>   channel_update_v2, at least one of which must set the `claim` flag.
> - If a node sets `claim`, the capacity of the channel is subtracted from
>   the remaining announcable_channel_capacity for that node (minimum
>   10,000 sats).
>

Same question about magic value and whether it can be dynamic.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Inbound channel routing fees

2022-07-04 Thread Joost Jager
>
> > isn't it the case that it is always possible to DoS your peer by just
> rejecting any forward that comes in from them?
>
> Yes, this is a good point. But there is a difference though. If you do that
> with inbound fees, the "malicious" peer is able to prevent _everyone_ from
> even trying to route through you (because it's advertised).
>

If I understand it correctly, we're talking about nodes A and B, where B is
malicious and sets a high inbound fee on the A-B channel?

I'd think that for the network, it's actually better if B advertises their
high inbound fee and nobody even tries that route, instead of everyone
trying and having to wait for a failure because B drops packets?


> Whereas if they selectively fail HTLCs you forward to them, only the payer
> for
> that HTLC knows about it, and they can attribute the failure to the
> malicious
> node, not to you.
>

Isn't the same true for a high inbound fee set by B? This would make it
clear to everyone that B is the node that makes the A-B channel too
expensive to be useful?


> Of course, that malicious node could also withhold the HTLC or return a
> malformed error, but unfortunately we cannot easily protect against this.
> My point is that this is bad behavior, and we shouldn't be giving more
> tools for nodes to misbehave, and inbound fees are a very powerful tool
> to help misbehaving nodes.
>

I fundamentally disagree with not giving nodes tools to misbehave. To me
this indicates that the system is fragile. I'd actually rather go the
opposite way: give them tools and show that the system is unaffected.

But on the point of DoS'ing a particular node: I think there are so many
ways to do this already, that inbound fees probably won't be the tool of
choice even if it was available.


> > Or indirectly affecting them negatively by setting high fees on all
> outbound channels?
>
> This case is completely different, because the "malicious" node can't
> selectively
> advertise that, it will affect traffic coming from all of their peers so
> they
> would really be shooting themselves in the foot if they did that.
>

It's different, but in my view not completely different. If a routing node
all of a sudden decides to charge 10% outbound across all channels for
whatever reason, its peers will be affected because their capital will at
that point be misplaced for earning routing fees.

If you say 'shoot themselves in the foot', you seem to have a rational
routing node in mind looking to maximize fees? How does DoS'ing a
particular peer fit in that picture? Why would they do this?


> > My thinking is that if I accept an incoming htlc, my local balance
> increases
> > on that incoming channel. My money gets locked up in a channel that may
> or
> > may not be interesting to me. Wouldn't it be fair to be compensated for
> that?
>
> If that channel isn't interesting to you, then by all means you should fail
> that HTLC or close the channel? Or you shouldn't have accepted it in the
> first place?
>

Agreed, if it isn't interesting at all, you should close. I should have
phrased that with more nuance. Some channels will likely be more interesting than
others and inbound fees could help with keeping the less interesting ones
afloat. It's another option besides plainly closing the channel.

Suppose I have three peers A, B and C. I am routing traffic back and forth
between A and B at a low fee of 0.1%.

Then C comes along and opens a 1 BTC channel with me. They push out the
full balance towards B and pay 0.1% for that. After that, there is very
minimal activity and after a month I decide to close the channel. A big
opportunity cost for me because I could have placed that 1 BTC local
balance in a much better way. With an inbound fee, I could have earned more.


> I understand the will to optimize revenue here, but I fear this concrete
> proposal leads to many kinds of unhealthy incentives. I agree that there
> is a
> risk in accepting channels from unknown nodes
>

I'd say that the lack of inbound fees requires more trust from the acceptor
of the channel and leads to more centralization.


> , but I think it should be
> addressed differently: you could for example make the opener pay a fee when
> they open a channel to you to compensate that risk (some kind of reversed
> liquidity ads).
>

Yes, can see that work too. The advantage of an inbound fee though is that
the fee that you pay is proportional to the balance of the counterparty.
So you only start paying when you actually move the balance and you don't
need to pay everything upfront (which requires some trust from the
initiator).

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Inbound channel routing fees

2022-07-04 Thread Joost Jager
On Fri, Jul 1, 2022 at 2:17 PM Thomas HUET  wrote:

> It was discussed in this issue:
> https://github.com/lightning/bolts/issues/835
>

Ah yes that was it. Thanks for the pointer!


> On the network, the traffic is not balanced. Some nodes tend to receive
> more than they send, merchants for instance. For the lightning network to
> be reliable, we need to incentivise people to open channels to such nodes,
> or else there won't be enough liquidity available and payments will fail.
> The current fee structure provides this incentive: You pay some onchain
> fees and lock some funds and in exchange you will earn routing fees. My
> concern is that your proposed change would break that incentive and make
> the network less reliable.
>

I'd think that if a merchant charges inbound fees, others won't open
channels and the merchant won't have inbound liquidity. So why would they
do this? Also if they wanted to charge inbound fees, they could already do
so today by setting up a fee-charging gateway with a private channel to the
main node that accepts the payments.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Inbound channel routing fees

2022-07-01 Thread Joost Jager
Currently routing nodes on the lightning network charge fees based on a
policy that pertains to the outgoing channel only.

Several mentions have been made by routing node operators that this limits
the control that they can exert over the flow of traffic. The movement of
funds on all of the incoming channels is free of charge, which does not
match the reality of not all inbound liquidity being equal.

One option to fix this is to add two additional fields to the
`channel_update` message:
* `inbound_fee_base_msat`
* `inbound_fee_proportional_millionths`

With the previously introduced tlv message extensions, it should be
possible to let these fields propagate throughout the network without any
upgrades required.

Senders must pay each routing node the sum of its advertised inbound and
outbound fee for the channels used:

outbound_fee(amt_to_fwd) + inbound_fee(amt_to_fwd +
outbound_fee(amt_to_fwd))

So the inbound_fee is calculated based on the actual balance change in the
incoming channel. This includes the amount to forward as well as the
outbound fee.
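
In code form (a sketch; integer rounding details left aside):

    type feePolicy struct {
        baseMsat               int64
        proportionalMillionths int64
    }

    func (p feePolicy) fee(amtMsat int64) int64 {
        return p.baseMsat + amtMsat*p.proportionalMillionths/1_000_000
    }

    // totalHopFee is what a sender owes one routing node. The inbound
    // fee applies to the full balance change on the incoming channel:
    // the forwarded amount plus the outbound fee.
    func totalHopFee(amtToFwd int64, inbound, outbound feePolicy) int64 {
        outFee := outbound.fee(amtToFwd)
        return outFee + inbound.fee(amtToFwd+outFee)
    }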

An important characteristic of any solution that is to be deployed in an
existing network, is that it is backwards compatible. If routing nodes
start to require inbound fees, every sender that hasn’t upgraded their node
software will no longer be able to use that routing node. The routing node
will miss out on routing fees.

One mitigation is to charge zero inbound fees until a sufficiently large
portion of the senders has upgraded. It may be unclear though when this is
the case, and it will likely take a significant amount of time. A test could
be to temporarily charge a minimal inbound fee, and watch for a reduction
in traffic and an increase in `fee_insufficient` failures returned. If there
is little or no effect, then most senders have probably upgraded.

Another way to go about this is to set negative inbound fees during the
transitory phase. It is effectively a discount for using specific inbound
channels. So a routing node that charges 10 sats for forwarding today, may
in the future increase that to 13 sats and set the inbound fee to -3 sats.

Senders ignoring inbound fees will overpay (13 sats whereas 10 sats would
have been sufficient), but are still able to use the routing node. The
routing node may see a reduction in traffic though because it effectively
increased its fee for older senders only. But inbound fees could be
increased (decreased really because they are negative) gradually while
monitoring for fee over-payments. Over-payments are indicative of senders
ignoring the inbound fee discount.

Path-finding algorithms that are currently in use generally don’t support
negative fees. But in this case, the sum of inbound and outbound fees is
still positive and therefore not a problem. If routing nodes set their
policies accidentally or intentionally so that the sum of fees turns out
negative, senders can just round up to zero and find a path as normal.

Overall I think this can be a relatively compact change that may ultimately
lead to better capital placement on the network and lower routing fees.

Looking for feedback on the idea from both lightning devs and routing node
operators.

Joost
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Inbound channel routing fees

2022-07-01 Thread Joost Jager
>
> Path-finding algorithms that are currently in use generally don’t support
> negative fees. But in this case, the sum of inbound and outbound fees is
> still positive and therefore not a problem. If routing nodes set their
> policies accidentally or intentionally so that the sum of fees turns out
> negative, senders can just round up to zero and find a path as normal.
>

Correction to this:

The inbound and outbound fees being summed are not set by one single routing
node. When path-finding considers a candidate hop, it adds the outbound
fee of the "from" node and the inbound fee of the "to" node. Because those
nodes don't necessarily coordinate fees, it may happen more often that the
fee goes negative. Rounding up to zero is still a quick fix and better than
ignoring inbound fees completely.
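
A sketch of that per-hop combination, reusing a feePolicy.fee helper as
sketched in the opening post:

    // hopFee combines the "from" node's outbound policy with the "to"
    // node's inbound policy for one candidate hop. Because the two
    // nodes set these independently, the sum can go negative; round
    // up to zero in that case.
    func hopFee(amtMsat int64, fromOutbound, toInbound feePolicy) int64 {
        outFee := fromOutbound.fee(amtMsat)
        fee := outFee + toInbound.fee(amtMsat+outFee)
        if fee < 0 {
            return 0
        }
        return fee
    }
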
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Inbound channel routing fees

2022-07-01 Thread Joost Jager
Hi Bastien,

I vaguely remembered that the idea of inbound fees had been discussed
before. Before writing my post, I scanned through old ML posts and bolts
issues but couldn't find the discussion. Maybe it was part of a different
but related email or a bolts PR?

With regards to your objections, isn't it the case that it is always
possible to DoS your peer by just rejecting any forward that comes in from
them? Or indirectly affecting them negatively by setting high fees on all
outbound channels? To me it seems that there is nothing to lose by adding
inbound fees.

My thinking is that if I accept an incoming htlc, my local balance
increases on that incoming channel. My money gets locked up in a channel
that may or may not be interesting to me. Wouldn't it be fair to be
compensated for that?

Any thoughts from routing node operators would be welcome too (or links to
previous threads).

Joost

On Fri, Jul 1, 2022 at 1:19 PM Bastien TEINTURIER  wrote:

> Hi Joost,
>
> As I've already stated every time this has been previously discussed, I
> believe
> this doesn't make any sense. The funds that are on the other side of the
> channel belong to your peer, not you, so they're free to use it however
> they
> want. If you're not happy with the way your peer is managing their fees,
> then
> don't open channels to them and let the network decide whether you're
> right or
> not.
>
> Moreover, you shouldn't care at all. If all the funds are on your peer's
> side,
> this isn't your problem, you used up all the money that was yours. As long
> as
> the channel is open, this is free inbound liquidity for you, so you're even
> benefiting from this.
>
> If Alice could set fees for Bob's side of the channel, Alice could
> arbitrarily
> DoS Bob's payments by setting a high fee. This is just one example of the
> many
> ways this idea completely breaks the routing incentives.
>
> Cheers,
> Bastien
>
> Le ven. 1 juil. 2022 à 13:10, Joost Jager  a
> écrit :
>
>> Path-finding algorithms that are currently in use generally don’t support
>>> negative fees. But in this case, the sum of inbound and outbound fees is
>>> still positive and therefore not a problem. If routing nodes set their
>>> policies accidentally or intentionally so that the sum of fees turns out
>>> negative, senders can just round up to zero and find a path as normal.
>>>
>>
>> Correction to this:
>>
>> The sum of inbound and outbound fees is not set by one single
>> routing node. When path-finding considers a candidate hop, it adds the
>> outbound fee of the "from" node and the inbound fee of the "to" node.
>> Because those nodes don't necessarily coordinate their fees, it may
>> happen more often that the total fee goes negative. Rounding up to zero
>> is still a quick fix and better than ignoring inbound fees completely.


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-10 Thread Joost Jager
On Thu, Jun 30, 2022 at 4:19 AM Matt Corallo 
wrote:

> Better yet, as Val points out, requiring a channel to relay onion messages
> puts a very real,
> nontrivial (in a world of msats) cost to getting an onion messaging
> channel. Better yet, with
> backpressure ability to DoS onion message links isn't denominated in
> number of messages, but instead
> in number of channels you are able to create, making the backpressure
> system equivalent to today's
> HTLC DoS considerations, whereas explicit payment allows an attacker to
> pay much less to break the
> system.
>

It can also be considered a bad thing that DoS ability is not based on the
number of messages. It means that for the one-time cost of a channel
open/close, the attacker can generate spam forever if they stay right below
the rate limit.


> Ultimately, paying suffers from the standard PoW-for-spam issue - you
> cannot assign a reasonable
> cost that an attacker cares about without impacting the system's usability
> due to said cost. Indeed,
> making it expensive enough to mount a months-long DDoS without impacting
> legitimate users be pretty
> easy - at 1msat per relay of a 1366 byte onion message you can only
> saturate an average home users'
> 30Mbps connection for 30 minutes before you rack up a dollar in costs, but
> if your concern is
> whether someone can reasonably trivially take out the network for minutes
> at a time to make it have
> perceptibly high failure rates, no reasonable cost scheme will work. Quite
> the opposite - the only
> reasonable way to respond is to respond to a spike in traffic while
> maintaining QoS is to rate-limit
> by inbound edge!
>

Suppose the attacker has enough channels to hit the rate limit on an
important connection some hops away from themselves. They can then sustain
that attack indefinitely, assuming that they stay below the rate limit on
the routes towards the target connection. What will the response be in that
case? Will node operators work together to try to trace back to the source
and take down the attacker? That requires operators to know each other.

Maybe this is a difference between the lightning network and the internet
that is relevant for this discussion: routers on the internet know each
other and have physical links between them, whereas in lightning ties can
be much looser.

Joost


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-11 Thread Joost Jager
On Sun, Jul 10, 2022 at 9:14 PM Matt Corallo 
wrote:

> > It can also be considered a bad thing that DoS ability is not based on
> > the number of messages. It means that for the one-time cost of a channel
> > open/close, the attacker can generate spam forever if they stay right
> > below the rate limit.
>
> I don't see why this is a problem? This seems to assume some kind of
> per-message cost that nodes
> have to bear, but there is simply no such thing. Indeed, if message spam
> causes denial of service to
> other network participants, this would be an issue, but an attacker
> generating spam from one
> specific location within the network should not cause that, given some
> form of backpressure within
> the network.
>

It's more a general observation that an attacker can open a set of channels
in multiple locations once and can use them forever to support potential
attacks. That is assuming attacks aren't entirely thwarted with
backpressure.


> > Suppose the attacker has enough channels to hit the rate limit on an
> > important connection some hops away from themselves. They can then
> > sustain that attack indefinitely, assuming that they stay below the rate
> > limit on the routes towards the target connection. What will the
> > response be in that case? Will node operators work together to try to
> > trace back to the source and take down the attacker? That requires
> > operators to know each other.
>
> No it doesn't, backpressure works totally fine and automatically applies
> pressure backwards until
> nodes, in an automated fashion, are appropriately ratelimiting the source
> of the traffic.
>

Turns out I did not actually fully understand the proposal. This version of
backpressure is nice indeed.

To get a better feel for how it works, I've coded up a simple single node
simulation (
https://gist.github.com/joostjager/bca727bdd4fc806e4c0050e12838ffa3), which
produces output like this:
https://gist.github.com/joostjager/682c4232c69f3c19ec41d7dd4643bb27. There
are a few spammers and one real user. You can see that after some time, the
spammers are all throttled down and the user packets keep being handled.
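
The core mechanism in the simulation is roughly the following (a
stripped-down Go sketch under my own assumptions about the constants;
limits recovering over time after good behavior is omitted here):

    package main

    import (
            "fmt"
            "time"
    )

    const (
            initialRate = 10.0 // msgs/sec granted to a new peer (assumption)
            minRate     = 0.1  // floor, so a peer is never cut off entirely
    )

    // peerLimiter is a token bucket whose rate is halved every time the
    // peer overflows it. This is the backpressure step: a spammer quickly
    // throttles itself down, while a low-rate user never trips the limit.
    type peerLimiter struct {
            rate     float64
            tokens   float64
            lastSeen time.Time
    }

    func (p *peerLimiter) allow(now time.Time) bool {
            // Refill the bucket, capping the burst at one second of traffic.
            p.tokens += p.rate * now.Sub(p.lastSeen).Seconds()
            if p.tokens > p.rate {
                    p.tokens = p.rate
            }
            p.lastSeen = now

            if p.tokens >= 1 {
                    p.tokens--
                    return true
            }

            // Limit tripped: halve this peer's allowance, down to a floor.
            p.rate /= 2
            if p.rate < minRate {
                    p.rate = minRate
            }
            return false
    }

    func main() {
            p := &peerLimiter{rate: initialRate, tokens: initialRate,
                    lastSeen: time.Now()}
            start := time.Now()
            // A peer sending every 10 ms (100 msg/s) against a 10 msg/s
            // limit sees its rate spiral down quickly.
            for i := 0; i < 20; i++ {
                    now := start.Add(time.Duration(i*10) * time.Millisecond)
                    fmt.Printf("msg %2d relayed=%5v rate=%.2f\n",
                            i, p.allow(now), p.rate)
            }
    }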

If you add enough spammers, they are obviously still able to hit the next
hop rate limit and affect the user. But because their incoming limits have
been throttled down, you need a lot of them - depending on the minimum rate
that the node goes down to.

I am wondering about that spiraling-down effect for legitimate users. Once
you hit the limit, it is decreased and it becomes easier to hit it again.
If you don't adapt, you'll end up with a very low rate. You need to take a
break to recover from that. I guess the assumption is that legitimate users
never end up there, because the rate limits are much much higher than what
they need. Even if they'd occasionally hit a limit on a busy connection,
they can go through a lot of halvings before they'll get close to the rate
that they require and it becomes a problem.

But how would that work if the user only has a single channel and wants to
retry? I suppose they need to be careful to use a long enough delay to not
get into that down-spiral. But how do they determine what is long enough?
Probably not a real problem in practice with network latency etc, even
though a concrete value does need to be picked.

Spammers are probably also not going to spam at max speed. They'd want to
avoid their rate limit being slashed. In the simulation, I've added a
`perfectSpammers` mode that creates spammers that have complete information
on the state of the rate limiter. Not possible in reality of course. If you
enable this mode, it does get hard for the user. Spammers keep pushing the
limiter to right below the tripping point and an unknowing user trips it
and spirals down. (
https://gist.github.com/joostjager/6eef1de0cf53b5314f5336acf2b2a48a)

I don't know to what extent spammers without perfect information can still
be smart and optimize their spam rate. They can probably do better than
keep sending at max speed.

> > Maybe this is a difference between the lightning network and the
> > internet that is relevant for this discussion: routers on the internet
> > know each other and have physical links between them, whereas in
> > lightning ties can be much looser.
>
> No? The internet does not work by ISPs calling each other up on the phone
> to apply backpressure
> manually whenever someone sends a lot of traffic? If anything lightning
> ties between nodes are much,
> much stronger than ISPs on the internet - you generally are at least
> loosely trusting your peer with
> your money, not just your customer's customer's bits.
>

Haha, okay, yes, I actually don't know what ISPs do in case of DoS attacks.
Just trying to find differences between lightning and the internet that
could be relevant for this discussion.

Seems to me that lightning's onion routing makes it hard to trace back to
the source without node operators calling each other up. Harder than it is
on the 

[Lightning-dev] Fat Errors

2022-10-19 Thread Joost Jager
Hi list,

I wanted to get back to a long-standing issue in Lightning: gaps in error
attribution. I've posted about this before back in 2019 [1].

Error attribution is important to properly penalize nodes after a payment
failure occurs. The goal of the penalty is to give the next attempt a
better chance at succeeding. In the happy failure flow, the sender is able
to determine the origin of the failure and penalizes a single node or pair
of nodes.

Unfortunately it is possible for nodes on the route to hide themselves. If
they return random data as the failure message, the sender won't know where
the failure happened. Some senders then penalize all nodes that were part
of the route [4][5]. This may exclude perfectly reliable nodes from being
used for future payments. Other senders penalize no nodes at all [6][7],
which allows the offending node to keep the disruption going.

A special case of this is a final node sending back random data. Senders
that penalize all nodes will keep looking for alternative routes. But
because each alternative route still ends with that same final node, the
sender will ultimately penalize all of its peers and possibly a lot of the
rest of the network too.

I can think of various reasons for exploiting this weakness. One is just
plain grievance for whatever reason. Another one is to attract more traffic
by getting competing routing nodes penalized. Or the goal could be to
sufficiently mess up reputation tracking of a specific sender node to make
it hard for that node to make further payments.

Related to this are delays in the path. A node can delay propagating back a
failure message and the sender won't be able to determine which node did
it.

The link at the top of this post [1] describes a way to address both
unreadable failure messages as well as delays by letting each node on the
route append a timestamp and hmac to the failure message. The great
challenge is to do this in such a way that nodes don’t learn their position
in the path.

I'm revisiting this idea, and have prototyped various ways to implement it.
In the remainder of this post, I will describe the variant that I thought
works best (so far).

# Failure message format

The basic idea of the new format is to let each node (not just the error
source) commit to the failure message when it passes it back by adding an
hmac. The sender verifies all hmacs upon receipt of the failure message.
This makes it impossible for any of the nodes to modify the failure message
without revealing that they might have played a part in the modification.
It won’t be possible for the sender to pinpoint an exact node, because
either end of a communication channel may have modified the message.
Pinpointing a pair of nodes however is good enough, and is commonly done
for regular onion failures too.

On the highest level, the new failure message consists of three parts:

`message` (var len) | `payloads` (fixed len) | `hmacs` (fixed len)

* `message` is the standard onion failure message as described in [2], but
without the hmac. The hmac is now part of `hmacs` and doesn't need to be
repeated.

* `payloads` is a fixed length array that contains space for each node
(`hop_payload`) on the route to add data to return to the sender. Ideally
the contents and size of `hop_payload` is signaled so that future
extensions don’t require all nodes to upgrade. For now, we’ll assume the
following 9-byte format:

  `is_final` (1 byte) | `duration` (8 bytes)

  `is_final` indicates whether this node is the failure source. The sender
uses `is_final` to determine when to stop the decryption/verification
process.

  `duration` is the time in milliseconds that the node held the htlc. By
observing the series of reported durations, the sender is able to pinpoint
a delay down to a pair of nodes.

  The `hop_payload` is repeated 27 times (the maximum route length).

  Every hop shifts `payloads` 9 bytes to the right and puts its own
`hop_payload` in the 9 left-most bytes (see the sketch at the end of this
message).

* `hmacs` is a fixed length array where nodes add their hmacs as the
failure message travels back to the sender.

  To keep things simple, I'll describe the format as if the maximum route
length was only three hops (instead of 27):

  `hmac_0_2` | `hmac_0_1`| `hmac_0_0`| `hmac_1_1`| `hmac_1_0`| `hmac_2_0`

  Because nodes don't know their position in the path, it's unclear to them
what part of the failure message they are supposed to include in the hmac.
They can't just include everything, because if part of that data is deleted
later (to keep the message size fixed) it opens up the possibility for
nodes to blame others.

  The solution here is to provide hmacs for all possible positions. The
last node that updated `hmacs` added `hmac_0_2`, `hmac_0_1` and `hmac_0_0`
to the block. Each hmac corresponds to a presumed position in the path,
where `hmac_0_2` is for the longest path (2 downstream hops) and `hmac_0_0`
for the shortest (node is the error source).

  `hmac_x_y` is the hmac added by node x 
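
A minimal Go sketch of the per-hop `payloads` handling described above (the
big-endian encoding of `duration` and the helper name are my own
assumptions; the post doesn't fix a serialization):

    package main

    import (
            "encoding/binary"
            "fmt"
    )

    const (
            maxHops        = 27 // maximum route length
            hopPayloadSize = 9  // is_final (1 byte) | duration (8 bytes)
    )

    // addHopPayload shifts the fixed-size payloads array 9 bytes to the
    // right, dropping the right-most slot, and writes this node's own
    // hop_payload into the 9 left-most bytes.
    func addHopPayload(payloads []byte, isFinal bool, durationMs uint64) {
            copy(payloads[hopPayloadSize:], payloads) // copy handles overlap
            payloads[0] = 0
            if isFinal {
                    payloads[0] = 1
            }
            binary.BigEndian.PutUint64(payloads[1:hopPayloadSize], durationMs)
    }

    func main() {
            payloads := make([]byte, maxHops*hopPayloadSize)
            // The error source adds its payload first; each upstream node
            // then shifts it right as the failure travels back.
            addHopPayload(payloads, true, 20)  // error source held 20 ms
            addHopPayload(payloads, false, 35) // next hop held 35 ms
            fmt.Printf("%x\n", payloads[:2*hopPayloadSize])
    }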

Re: [Lightning-dev] Fat Errors

2022-10-20 Thread Joost Jager
Ah, missed that. Thanks for the correction.
Joost.

On Thu, Oct 20, 2022 at 5:36 PM Bastien TEINTURIER  wrote:

> Hi Joost,
>
> I need more time to review your proposed change, but I wanted to quickly
> correct a misunderstanding you had in quoting eclair's code:
>
> > Unfortunately it is possible for nodes on the route to hide themselves.
> > If they return random data as the failure message, the sender won't know
> > where the failure happened. Some senders then penalize all nodes that
> > were part of the route [4][5]. This may exclude perfectly reliable nodes
> > from being used for future payments.
>
> Eclair's code does not penalize nodes for future payment attempts in this
> case. It only ignores them for the retries of that particular payment.
>
> Cheers,
> Bastien
>
> Le mer. 19 oct. 2022 à 13:13, Joost Jager  a
> écrit :
>
>> Hi list,
>>
>> I wanted to get back to a long-standing issue in Lightning: gaps in error
>> attribution. I've posted about this before back in 2019 [1].
>>
>> Error attribution is important to properly penalize nodes after a payment
>> failure occurs. The goal of the penalty is to give the next attempt a
>> better chance at succeeding. In the happy failure flow, the sender is able
>> to determine the origin of the failure and penalizes a single node or pair
>> of nodes.
>>
>> Unfortunately it is possible for nodes on the route to hide themselves.
>> If they return random data as the failure message, the sender won't know
>> where the failure happened. Some senders then penalize all nodes that were
>> part of the route [4][5]. This may exclude perfectly reliable nodes from
>> being used for future payments. Other senders penalize no nodes at all
>> [6][7], which allows the offending node to keep the disruption going.
>>
>> A special case of this is a final node sending back random data. Senders
>> that penalize all nodes will keep looking for alternative routes. But
>> because each alternative route still ends with that same final node, the
>> sender will ultimately penalize all of its peers and possibly a lot of the
>> rest of the network too.
>>
>> I can think of various reasons for exploiting this weakness. One is just
>> plain grievance for whatever reason. Another one is to attract more traffic
>> by getting competing routing nodes penalized. Or the goal could be to
>> sufficiently mess up reputation tracking of a specific sender node to make
>> it hard for that node to make further payments.
>>
>> Related to this are delays in the path. A node can delay propagating back
>> a failure message and the sender won't be able to determine which node did
>> it.
>>
>> The link at the top of this post [1] describes a way to address both
>> unreadable failure messages as well as delays by letting each node on the
>> route append a timestamp and hmac to the failure message. The great
>> challenge is to do this in such a way that nodes don’t learn their position
>> in the path.
>>
>> I'm revisiting this idea, and have prototyped various ways to implement
>> it. In the remainder of this post, I will describe the variant that I
>> thought works best (so far).
>>
>> # Failure message format
>>
>> The basic idea of the new format is to let each node (not just the error
>> source) commit to the failure message when it passes it back by adding an
>> hmac. The sender verifies all hmacs upon receipt of the failure message.
>> This makes it impossible for any of the nodes to modify the failure message
>> without revealing that they might have played a part in the modification.
>> It won’t be possible for the sender to pinpoint an exact node, because
>> either end of a communication channel may have modified the message.
>> Pinpointing a pair of nodes however is good enough, and is commonly done
>> for regular onion failures too.
>>
>> On the highest level, the new failure message consists of three parts:
>>
>> `message` (var len) | `payloads` (fixed len) | `hmacs` (fixed len)
>>
>> * `message` is the standard onion failure message as described in [2],
>> but without the hmac. The hmac is now part of `hmacs` and doesn't need to
>> be repeated.
>>
>> * `payloads` is a fixed length array that contains space for each node
>> (`hop_payload`) on the route to add data to return to the sender. Ideally
>> the contents and size of `hop_payload` is signaled so that future
>> extensions don’t require all nodes to upgrade. For now, we’ll assume the
>> following 9-byte format:
>>
>>   `is_final` (1 byte)

Re: [Lightning-dev] Fat Errors

2022-11-01 Thread Joost Jager
Hey Rusty,

Great to hear that you want to try to implement the proposal. I can polish
my golang proof of concept code a bit and share if that's useful? It's just
doing the calculation in isolation. My next step after that would be to see
what it looks like integrated in lnd.

16 hops sounds fine to me too, but in general I am not too concerned about
the size of the message. Maybe a scheme is possible where the sender
signals the max number of hops, trading off size against privacy. Probably
an unnecessary complication though.

I remember the prepay scheme, but it sounds quite a bit more invasive than
just touching encode/relay/decode of the failure message. You also won't
have the timing information to identify slow nodes on the path.

Joost.

On Tue, Oct 25, 2022 at 9:58 PM Rusty Russell  wrote:

> Joost Jager  writes:
> > Hi list,
> >
> > I wanted to get back to a long-standing issue in Lightning: gaps in error
> > attribution. I've posted about this before back in 2019 [1].
>
> Hi Joost!
>
> Thanks for writing this up fully.  Core lightning also doesn't
> penalize properly, because of the attribution problem: solving this lets
> us penalize a channel, at least.
>
> I want to implement this too, to make sure I understand it
> correctly, but having read it twice it seems reasonable.
>
> How about 16 hops?  It's the closest power of 2 to the legacy hop
> limit, and makes this 4.5k for payloads and hmacs.
>
> There is, however, a completely different possibility if we want
> to use a pre-pay scheme, which I think I've described previously.  You
> send N sats and a secp point; every chained secret returned earns the
> forwarder 1 sat[1].  The answers of course are placed in each layer of
> the onion.  You know how far the onion got based on how much money you
> got back on failure[2], though the error message may be corrupted.
>
> Cheers,
> Rusty.
> [1] Simplest is truncate the point to a new secret key.  Each node would
> apply a tweak for decorrelation ofc.
> [2] The best scheme is that you don't get paid unless the next node
> decrypts, actually, but that needs more thought.
>


[Lightning-dev] Jamming against Channel Jamming

2022-12-02 Thread Joost Jager
A simple but imperfect way to deal with channel jamming and spamming is to
install a lightning firewall such as circuitbreaker [1]. It allows you to
set limits like a maximum number of pending htlcs (fight jamming) and/or a
rate limit (fight spamming). Incoming htlcs that exceed one of the limits
are failed back.

Unfortunately there are problems with this approach. Failures probably lead
to extra retries, which increase the load on the network as a whole.
Senders are also taking note of the failure, penalizing you and favoring
other nodes that do not apply limits. With a large part of the network
applying limits, it will probably work better because misbehaving nodes
have fewer opportunities to affect distant nodes. Then it also becomes less
likely that limits are applied to traffic coming from well-behaving nodes,
and the reputation of routing nodes isn’t degraded as much.

But how to get to the point where restrictions are applied generally?
Currently there isn’t too much of a reason for routing nodes to constrain
their peers, and as explained above it may even be bad for business.

Instead of failing, an alternative course of action for htlcs that exceed a
limit is to hold and queue them. For example, if htlcs come in at a high
rate, they’ll just be stacking up on the incoming side and are gradually
forwarded when their time has come.
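
As an illustration, the hold-and-queue behavior boils down to something
like this Go sketch (the htlc type, channel-based interception and the
release rate are my own simplifications of what circuitbreaker actually
does via lnd's interceptor API):

    package main

    import (
            "fmt"
            "time"
    )

    type htlc struct {
            id uint64
    }

    // forwardQueued releases queued htlcs at a fixed rate instead of
    // failing them back. Htlcs above the rate simply stack up on the
    // incoming side, occupying the upstream node's slots and funds.
    func forwardQueued(incoming <-chan htlc, forward func(htlc),
            ratePerSec int) {

            ticker := time.NewTicker(time.Second / time.Duration(ratePerSec))
            defer ticker.Stop()

            for h := range incoming {
                    <-ticker.C // wait for the next release slot
                    forward(h)
            }
    }

    func main() {
            incoming := make(chan htlc, 100)
            for i := uint64(0); i < 5; i++ {
                    incoming <- htlc{id: i}
            }
            close(incoming)

            // Release at most 2 htlcs per second.
            forwardQueued(incoming, func(h htlc) {
                    fmt.Println("forwarding htlc", h.id)
            }, 2)
    }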

An advantage of this is that a routing node’s reputation isn’t affected
because there are no failures. This however may change in the future with
fat errors [2]. It will then become possible for senders to identify slow
nodes, and the no-penalty advantage may go away.

A more important effect of holding is that the upstream nodes are punished
for the bad traffic that they facilitate. They see their htlc slots
occupied and funds frozen. They can’t coop close, and a force-close may be
expensive depending on the number of htlcs that materialize on the
commitment transaction. This could be a reason for them to take a careful
look at the source of that traffic, and also start applying limits. Limits
propagating recursively across the network and pushing bad senders into
corners where they can’t do much harm anymore. It’s sort of paradoxical:
jamming channels to stop jamming.

One thing to note is that routing nodes employing the hold strategy are
potentially punishing themselves too. If they are the initiator of a
channel with many pending htlcs, the commit fee for them to pay can be high
in the case of a force-close. They do not need to sweep the htlcs that were
extended by their peer, but still. One way around this is to only use the
hold strategy for channels that the routing node did not initiate, and use
the fail action or no limit at all for self-initiated channels.

Interested to hear opinions on the idea. I’ve also updated circuitbreaker
with a queue mode for anyone willing to experiment with it [3].

[1] https://github.com/lightningequipment/circuitbreaker
[2]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-October/003723.html
[3] https://github.com/lightningequipment/circuitbreaker/pull/14


Re: [Lightning-dev] Jamming against Channel Jamming

2022-12-02 Thread Joost Jager
Hi Antoine,

> If I understand correctly circuitbreaker, it adds new "dynamic" HTLC slot
> limits, in opposition to the "static" ones declared to your counterparty
> during channel opening (within the protocol-limit of 483). On top, you can
> rate-limit the HTLCs forwards based on the incoming source.
>

Correct.


> Effectively, this scheme would penalize your own routing hop as HTLC
> senders routing algorithms would likely cause the hop, in the lack of a new
> gossip message advertising the limits/rates. Further, it sounds for the
> measure to be robust, a circuitbreaking policy should be applied
> recursively by your counterparty on their network topologies. Otherwise,
> I think you'll have the non-constrained links being targeted
> as entry points into the rate-limited, "jamming-safe" subset of the graph.
>

Indeed, the more nodes run it, the harder it becomes for attackers to
attack. You'd only penalize your own routing node if you send back
failures. If you hold the htlc, there is no penalty with the network as it
currently operates.


> The limits could be based on HTLC values, e.g the Xth slots for HTLCs of
> value <1k sats, the Yth slots for HTLC of value <100k sats, the remaining
> Zth slots for HTLC of value <200k sats. IIRC, this jamming countermeasure
> has been implemented by Eclair [0] and discussed more in detail here [1].
> While it increases the liquidity cost for an attacker to launch jamming
> attacks against the high-value slots, it comes at the major downside of
> lowering the cost for jamming low-value slots. Betting on an increasing
> bitcoin price, all other things equals, we'll make simple payments from
> lambda users more and more vulnerable.
>

It is true that the limits make it easier to jam a channel, but the theory
is that if everyone does it, the attacker won't have much reach anymore.


> Beyond that, I think this solution would classify in the reputation-based
> family of solutions, where reputation is local and enforced through
> rate-limiting (from my understanding), I would say there is no economic
> proportionality enforced between the rate-limiting and the cost for an
> attacker. A jamming attacker could open new channels during period of
> low-fees in the edges of the graph, and still launch attacks against
> distant hops by splitting the jamming traffic between many sources,
> therefore avoiding force-closures (e.g 230 HTLCs from channel Mallory, 253
> HTLCs from channel Malicia). Even force-closure in case of observed jamming
> isn't that evident, as the economic traffic could still be opportunistic
> locally but only a jam on a distant hop. So I think the economic
> equilibrium and risk structure of this scheme is still uncertain.
>

The economic proportionality is that an attacker can't do much with a
severely limited channel, and would need many more channels to achieve the
same effect. I don't think it is possible to eliminate all bad behavior;
the goal should just be to make it a lot harder than it currently is.
Not sure how force-closes come into play. I don't think there needs to be
any force-close? I just mentioned them in my original post because they can
happen for independent reasons (bug, node offline), and then the size of
the commitment tx and number of pending htlcs translates to a real cost.


> However, I think the mode of queuing HTLCs is still valuable itself,
> independently of jamming, either a) to increase routed privacy of HTLC (e.g
> "delay my HTLC" option [2]), maybe with double opt-in of both senders/hops
> or b) as a congestion control mechanism where you have >100% of honest
> incoming HTLC traffic and you would like to earn routing fees on all of
> them, in the limit of what the outgoing CLTV allow you. An advanced idea
> could be based on statistics collection, sending back-pressure messages or
> HTLC sending scheduling information to the upstream hops. Let's say in the
> future we have more periodic payments, those ones could be scheduled in
> periods of low-congestions.
>
> So I wonder if we don't have two (or even more) problems when we think
> about jamming, the first one, the HTLC forward "counterparty risk" (the
> real jamming) and the other one, congestion and scheduling of efficient
> HTLC traffic, with some interdependencies between them of course.
>

Yes, so the main idea that I tried to present is that applying congestion
control by holding htlcs may wake up everyone along the path back to the
attacker and move them to apply congestion control too.


> On experimenting with circuitbreaker, I don't know which HTLC intercepting
> interface it does expect, we still have a rudimentary one on the LDK-side
> only supporting JIT channels use-case.
>

It connects to lnd's htlc interceptor and htlc events interfaces.

Joost


Re: [Lightning-dev] Fat Errors

2022-11-10 Thread Joost Jager
Pushed a golang implementation of the fat errors here:
https://github.com/lightningnetwork/lightning-onion/pull/60

Joost.

On Wed, Oct 19, 2022 at 1:12 PM Joost Jager  wrote:

> Hi list,
>
> I wanted to get back to a long-standing issue in Lightning: gaps in error
> attribution. I've posted about this before back in 2019 [1].
>
> Error attribution is important to properly penalize nodes after a payment
> failure occurs. The goal of the penalty is to give the next attempt a
> better chance at succeeding. In the happy failure flow, the sender is able
> to determine the origin of the failure and penalizes a single node or pair
> of nodes.
>
> Unfortunately it is possible for nodes on the route to hide themselves. If
> they return random data as the failure message, the sender won't know where
> the failure happened. Some senders then penalize all nodes that were part
> of the route [4][5]. This may exclude perfectly reliable nodes from being
> used for future payments. Other senders penalize no nodes at all [6][7],
> which allows the offending node to keep the disruption going.
>
> A special case of this is a final node sending back random data. Senders
> that penalize all nodes will keep looking for alternative routes. But
> because each alternative route still ends with that same final node, the
> sender will ultimately penalize all of its peers and possibly a lot of the
> rest of the network too.
>
> I can think of various reasons for exploiting this weakness. One is just
> plain grievance for whatever reason. Another one is to attract more traffic
> by getting competing routing nodes penalized. Or the goal could be to
> sufficiently mess up reputation tracking of a specific sender node to make
> it hard for that node to make further payments.
>
> Related to this are delays in the path. A node can delay propagating back
> a failure message and the sender won't be able to determine which node did
> it.
>
> The link at the top of this post [1] describes a way to address both
> unreadable failure messages as well as delays by letting each node on the
> route append a timestamp and hmac to the failure message. The great
> challenge is to do this in such a way that nodes don’t learn their position
> in the path.
>
> I'm revisiting this idea, and have prototyped various ways to implement
> it. In the remainder of this post, I will describe the variant that I
> thought works best (so far).
>
> # Failure message format
>
> The basic idea of the new format is to let each node (not just the error
> source) commit to the failure message when it passes it back by adding an
> hmac. The sender verifies all hmacs upon receipt of the failure message.
> This makes it impossible for any of the nodes to modify the failure message
> without revealing that they might have played a part in the modification.
> It won’t be possible for the sender to pinpoint an exact node, because
> either end of a communication channel may have modified the message.
> Pinpointing a pair of nodes however is good enough, and is commonly done
> for regular onion failures too.
>
> On the highest level, the new failure message consists of three parts:
>
> `message` (var len) | `payloads` (fixed len) | `hmacs` (fixed len)
>
> * `message` is the standard onion failure message as described in [2], but
> without the hmac. The hmac is now part of `hmacs` and doesn't need to be
> repeated.
>
> * `payloads` is a fixed length array that contains space for each node
> (`hop_payload`) on the route to add data to return to the sender. Ideally
> the contents and size of `hop_payload` is signaled so that future
> extensions don’t require all nodes to upgrade. For now, we’ll assume the
> following 9-byte format:
>
>   `is_final` (1 byte) | `duration` (8 bytes)
>
>   `is_final` indicates whether this node is the failure source. The sender
> uses `is_final` to determine when to stop the decryption/verification
> process.
>
>   `duration` is the time in milliseconds that the node held the htlc. By
> observing the series of reported durations, the sender is able to pinpoint
> a delay down to a pair of nodes.
>
>   The `hop_payload` is repeated 27 times (the maximum route length).
>
>   Every hop shifts `payloads` 9 bytes to the right and puts its own
> `hop_payload` in the 9 left-most bytes.
>
> * `hmacs` is a fixed length array where nodes add their hmacs as the
> failure message travels back to the sender.
>
>   To keep things simple, I'll describe the format as if the maximum route
> length was only three hops (instead of 27):
>
>   `hmac_0_2` | `hmac_0_1`| `hmac_0_0`| `hmac_1_1`| `hmac_1_0`| `hmac_2_0`
>
>   Because nodes don't know their position in the path, it's unclear to

Re: [Lightning-dev] Fat Errors

2022-11-04 Thread Joost Jager
Hi Thomas,

This is a very interesting proposal that elegantly solves the problem, with
> however a very significant size increase. I can see two ways to keep the
> size small:
> - Each node just adds its hmac in a naive way, without deleting any part
> of the message to relay. You seem to have disqualified this option because
> it increases the size of the relayed message but I think it merits more
> consideration. It is much simpler and the size only grows linearly with the
> length of the route. An intermediate node could try to infer its position
> relative to the failing node (which should not be the recipient) but
> without knowing the original message size (which can easily be randomized
> by the final node), is that really such a problem? It may be but I would
> argue it's a good trade-off.
>

That would definitely make the solution a lot simpler. I think that
increasing the message length still does leak some information, even with
randomization by the final node. For example if you know the minimum
message length including random bytes produced by the final node, and a
routing node sees this length, they must be the second-last hop. I tried to
come up with something that stays within the current privacy guarantees,
but it's fair to question the trade-off.

An advantage of the naive hmac append is also that each node can add a
variable (tlv?) payload. In the fixed size proposal that isn't possible
because nodes need to know exactly how many bytes to sign to cover a number
of downstream hop payloads, and some form of signaling would be required to
add flexibility to that. A variable payload makes it easier to add
extensions later on. It also helps with the randomization of the length.
And intermediate nodes could choose to add some random bytes too in an
otherwise unused tlv record.


> - If we really want to keep the constant size property, as you've
> suggested we could use a low limit on the number of nodes. I would put the
> limit even lower, at 8 or less. We could still use longer routes but we
> would only get hmacs for the first 8 hops and revert to the legacy system
> if the failure happens after the first 8 hops. That way we keep the size
> low and 8 hops should be good enough for 99% of the payments, and even when
> there are more hops we would know that the first 7 hops are clean.
>

Sounds like a useful middle road. Each hop will just shift hmacs and the
ones further than 8 hops away will be shifted out completely. Yes, not bad.

The question that is still unanswered for me is how problematic a full size
fat error of 12 kb would really be. Of course small is better than big, but
I am wondering if there would be an actual degradation of the UX or other
significant negative effects in practice.
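
For reference, the 12 kb follows directly from the format: 9 payload bytes
per hop plus one hmac per possible position, i.e. n*(n+1)/2 hmacs for a
maximum route length of n, assuming 32-byte hmacs (which matches Rusty's
~4.5k figure for 16 hops quoted above). A quick Go sketch of the
arithmetic:

    package main

    import "fmt"

    // fatErrorOverhead returns the byte overhead of the payloads and
    // hmacs blocks for a given maximum route length.
    func fatErrorOverhead(maxHops int) int {
            payloads := maxHops * 9
            hmacs := maxHops * (maxHops + 1) / 2 * 32
            return payloads + hmacs
    }

    func main() {
            fmt.Println(fatErrorOverhead(27)) // 12339, the ~12 kb above
            fmt.Println(fatErrorOverhead(16)) // 4496, Rusty's ~4.5k
    }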

Joost


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-17 Thread Joost Jager
>
> Right, that was my above point about fetching scoring data - there's three
> relevant "buckets" of
> nodes, I think - (a) large nodes sending lots of payments, like the above,
> (b) "client nodes" that
> just connect to an LSP or two, (c) nodes that route some but don't send a
> lot of payments (but do
> send *some* payments), and may have lots or not very many channels.
>
> (a) I think we're getting there, and we don't need to add anything extra
> for this use-case beyond
> the network maturing and improving our scoring algorithms.
> (b) I think is trivially solved by downloading the data from a node in
> category (a), presumably the
> LSP(s) in question (see other branch of this thread)
> (c) is trickier, but I think the same solution of just fetching
> semi-trusted data here more than
> sufficies. For most routing nodes that don't send a lot of payments we're
> talking about a very small
> amount of payments, so trusting a third-party for scoring data seems
> reasonable.
>

I see that in your view all nodes will either be large nodes themselves, or
be downloading scoring data from large nodes. I'd argue that that is more
of a move towards centralisation than the `ha` flag is. The flag at least
allows small nodes to build up their view of the network in an efficient
and independent manner.

Joost


[Lightning-dev] Highly Available Lightning Channels

2023-02-13 Thread Joost Jager
Hi,

For a long time I've held the expectation that eventually payers on the
lightning network will become very strict about node performance: that they
will require a routing node to operate flawlessly, or else apply a hefty
penalty such as completely avoiding the node for an extended period of time
- multiple weeks. The consequence of this is that routing nodes would need
to manage their liquidity meticulously because every failure potentially
has a large impact on future routing revenue.

I think movement in this direction is important to guarantee
competitiveness with centralised payment systems and their (at least
theoretical) ability to process a payment in the blink of an eye. A
lightning wallet trying multiple paths to find one that works doesn't help
with this.

A common argument against strict penalisation is that it would lead to less
efficient use of capital. Routing nodes would need to maintain pools of
liquidity to guarantee successes all the time. My opinion on this is that
lightning is already enormously capital efficient at scale and that it is
worth sacrificing a slight part of that efficiency to also achieve the
lowest possible latency.

This brings me to the actual subject of this post. Assuming strict
penalisation is good, it may still not be ideal to flip the switch from one
day to the other. Routing nodes may not offer the required level of service
yet, causing senders to end up with no nodes to choose from.

One option is to gradually increase the strength of the penalties, so that
routing nodes are given time to adapt to the new standards. This does
require everyone to move along and leaves no space for cheap routing nodes
with less leeway in terms of liquidity.

Therefore I am proposing another way to go about it: extend the
`channel_update` field `channel_flags` with a new bit that the sender can
use to signal `highly_available`.

It's then up to payers to decide how to interpret this flag. One way could
be to prefer `highly_available` channels during pathfinding. But if the
routing node then returns a failure, a much stronger than normal penalty
will be applied. For routing nodes this creates an opportunity to attract
more traffic by marking some channels as `highly_available`, but it also
comes with the responsibility to deliver.
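
Sender-side handling could be as simple as the following Go sketch (the
bit position in `channel_flags` and the concrete penalty durations are
illustrative assumptions, not part of the proposal):

    package main

    import (
            "fmt"
            "time"
    )

    // Bits 0 (direction) and 1 (disable) are already assigned in
    // channel_flags; using the next bit is an assumption here.
    const highlyAvailableBit = 1 << 2

    const (
            normalPenalty = 10 * time.Minute    // ordinary failure (assumption)
            haPenalty     = 21 * 24 * time.Hour // "multiple weeks" class penalty
    )

    // penaltyFor returns how long a sender avoids a channel after a
    // failure, depending on whether it was signalled highly available
    // in that direction.
    func penaltyFor(channelFlags byte) time.Duration {
            if channelFlags&highlyAvailableBit != 0 {
                    return haPenalty
            }
            return normalPenalty
    }

    func main() {
            fmt.Println(penaltyFor(0b100)) // 504h0m0s
            fmt.Println(penaltyFor(0b000)) // 10m0s
    }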

Without shadow channels, it is impossible to guarantee liquidity up to the
channel capacity. It might make sense for senders to only assume high
availability for amounts up to `htlc_maximum_msat`.

A variation on this scheme that requires no extension of `channel_update`
is to signal availability implicitly through routing fees. So the more
expensive a channel is, the stronger the penalty that is applied on failure
will be. It seems less ideal though, because it could disincentivize cheap
but reliable channels on high traffic links.

The effort required to implement some form of a `highly_available` flag
seems limited, and it may help to get payment success rates up. Interested to
hear your thoughts.

Joost


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Joost Jager
Hi Matt,

> If nodes start aggressively preferring routes through nodes that reliably
> route payments (which I believe lnd already does, in effect, to some large
> extent), they should do so by measurement, not signaling.
>

The signaling is intended as a way to make measurement more efficient. If a
node signals that a particular channel is HA and it fails, no other
measurements on that same node need to be taken by the sender. They can
skip the node altogether for a longer period of time.


> In practice, many channels on the network are “high availability” today,
> but only in one direction (I.e. they aren’t regularly spliced/rebalanced
> and are regularly unbalanced). A node strongly preferring a high payment
> success rate *should* prefer such a channel, but in your scheme would not.
>

This shouldn't be a problem, because the HA signaling is also directional.
Each end can decide independently on whether to add the flag for a
particular channel.


> This ignores the myriad of “at what threshold do you signal HA” issues,
> which likely make such a signal DOA, anyway.
>

I think this is a product of sender preference for HA channels and the
severity of the penalty if an HA channel fails. Given this, routing nodes
will need to decide whether they can offer a service level that increases
their routing revenue overall if they signal HA. It is indeed
dynamic, but I think the market is able to work it out.


> Finally, I’m very dismayed at this direction in thinking on how ln should
> work - nodes should be measuring the network and routing over paths that it
> thinks are reliable for what it wants, *robustly over an unreliable
> network*. We should absolutely not be expecting the lightning network to be
> built out of high reliability nodes, that creates strong centralization
> pressure. To truly meet a “high availability” threshold, realistically,
> you’d need to be able to JIT 0conf splice-in, which would drive lightning
> to actually being a credit network.
>

Different people can have different opinions about how ln should work, that
is fine. I see a trade-off between the reliability of the network and the
barrier of entry, and I don't think the optimum is on one of the ends of
the scale.


> With reasonable volume, lightning today is very reliable and relatively
> fast, with few retries required. I don’t think we need to change anything
> to fix it. :)
>

How can you be sure about this? This isn't publicly visible data.

Joost


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Joost Jager
Hi Christian,


> And after all this rambling, let's get back to the topic at hand: I
> don't think enshrining the differences of availability in the protocol,
> thus creating two classes of nodes, is a desirable
> feature.


Yes, so to be clear, the HA signaling is not on the node level but on the
channel level. So each node can decide per channel whether they want to
potentially attract additional traffic at the cost of severe penalties (or
avoidance, if you want to use a different wording) if the channel can't be
used. They can still maintain a set of less reliable channels alongside.


> Communicating up-front that I intend to be reliable does
> nothing, and penalizing after the fact isn't worth much due to the
> repeat interactions issue.


I think it is currently quite common for pathfinders to try another channel
of the same node for the payment at hand. Or re-attempt the same channel
for a future payment to the same destination. I understand the repeat
interactions issue, but I'm not sure about the extent to which it applies to
lightning in practice. I think a common pattern for payments in general is
to pay to the same destinations repeatedly, for example for a daily coffee.


> It'd be even worse if now we had to rely on a
> third party to aggregate and track the reliability, in order to get
> enough repeat interactions to build a good model of their liquidity,
> since we're now back in the hearsay world, and the third party can feed
> us wrong information to maximize their profits.
>

Yes, using 3rd party info seems difficult. As mentioned in my reply to
Matt, the idea of HA signaling is to make local reliability tracking more
efficient so that it becomes less likely that senders need to rely on
external aggregators for their view on the network.

Joost


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Joost Jager
>
> But how do you decide to set it without a credit relationship? Do I
> measure my channel and set the
>
bit because the channel is "usually" (at what threshold?) saturating in the
> inbound direction? What
> happens if this changes for an hour and I get unlucky? Did I just screw
> myself?
>

As a node setting the flag, you'll have to make sure you open new channels,
rebalance or swap-in in time to maintain outbound liquidity. That's part of
the game of running an HA channel.


> > How can you be sure about this? This isn't publicly visible data.
>
> Sure it is! https://river.com/learn/files/river-lightning-report.pdf


Some operators publish data, but are the experiences of one of the most
well-connected (custodial) nodes representative of the network as a whole
when evaluating payment success rates? In the end you can't know what's
happening on the lightning network.


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-15 Thread Joost Jager
>
> I think the performance question depends on the type of payment flows
> considered. If you're an
> end-user sending a payment to your local Starbucks for coffee, here fast
> payment sounds the end-goal.
> If you're doing remittance payment, cheap fees might be favored, and in
> function of those flows you're
> probably not going to select the same "performant" routing nodes. I think
> adding latency as a criteria for
> pathfinding construction has already been mentioned in the past for LDK
> [0].
>

My hope is that eventually lightning nodes can run so efficiently that in
practice there is no real trade-off anymore between cost and speed. But of
course hard to say how that's going to play out. I am all for adding
latency as an input to pathfinding. Attributable errors should help with
that too.


> Or there is the direction to build forward-error-correction code on top of
> MPP, like in traditional
> networking [1]. The rough idea, you send more payment shards than the
> requested sum, and then
> you reveal the payment secrets to the receiver after an onion
> interactivity round to finalize payment.
>

This is not very different from payment pre-probing, is it? So try a larger
set of possible routes simultaneously and, when one proves to be open, send
the real payment across that route. Of course a balance may have shifted in
the meantime, but that seems unlikely enough to prevent the approach from being
usable. The obvious downside is that the user needs more total liquidity to
have multiple htlcs outstanding at the same time. Nevertheless an
interesting way to reduce payment latency.
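
A rough Go sketch of that pre-probing flow (the route type and the
probe/pay callbacks are hypothetical placeholders, not an existing API):

    package main

    import (
            "context"
            "errors"
            "fmt"
            "time"
    )

    type route struct {
            hops []string
    }

    // preProbeAndPay probes a set of candidate routes in parallel and
    // sends the real payment over the first route that proves open.
    func preProbeAndPay(ctx context.Context, routes []route,
            probe func(route) bool, pay func(route) error) error {

            ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
            defer cancel()

            open := make(chan route, len(routes))
            for _, r := range routes {
                    go func(r route) {
                            if probe(r) {
                                    open <- r
                            }
                    }(r)
            }

            select {
            case r := <-open:
                    // A balance may have shifted since the probe, but that
                    // is assumed to be rare enough in practice.
                    return pay(r)
            case <-ctx.Done():
                    return errors.New("no open route found")
            }
    }

    func main() {
            routes := []route{
                    {hops: []string{"a", "b"}},
                    {hops: []string{"c"}},
            }
            err := preProbeAndPay(context.Background(), routes,
                    func(r route) bool { return len(r.hops) == 1 }, // stub probe
                    func(r route) error {
                            fmt.Println("paying via", r.hops)
                            return nil
                    })
            if err != nil {
                    fmt.Println(err)
            }
    }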


> At the end of the day, we add more signal channels between HTLC senders
> and the routing
> nodes offering capital liquidity, if the signal mechanisms are efficient,
> I think they should lead
> to better allocation of the capital. So yes, I think more liquidity might
> be used by routing nodes
> to serve finely tailored HTLC requests by senders, however this liquidity
> should be rewarded
> by higher routing fees.
>

This is indeed part of the idea. By signalling HA, you may not only attract
more traffic, but also be able to command a higher fee.


> I think if we have lessons to learn on policy rules design and deployment
> on the base-layer
> (the full-rbf saga), it's to be careful in the initial set of rules, and
> how we ensure smooth
> upgradeability, from one version to another. Otherwise the re-deployment
> cost towards
> the new version might incentive the old routing node to stay on the
> non-optimal versions,
> and as we have historical buckets in routing algorithms, or preference for
> older channels,
> this might lead the end-user to pay higher fees, than they could access to.
>

I see the parallel, but also it seems that we have this situation already
today on lightning. Senders apply penalties and routing nodes need to make
assumptions about how they are penalised. Perhaps more explicit signalling
can actually help to reduce the degree of uncertainty as to how a routing
node is supposed to perform to keep senders happy?


> This is where the open question lies to me - "highly available" can be
> defined with multiple
> senses, like fault-tolerance, latency processing, equilibrated liquidity.
> And a routing node might
> not be able to optimize its architecture for the same end-goal (e.g more
> watchtower on remote
> host probably increases the latency processing).
>

Yes, good point. So maybe a few more bits to signal what a sender may
expect from a channel exactly?


> > Without shadow channels, it is impossible to guarantee liquidity up to
> > the channel capacity. It might make sense for senders to only assume
> > high availability for amounts up to `htlc_maximum_msat`.
>
> As a note, I think "senders assumption" should be well-documented,
> otherwise there will be
> performance discrepancies between node implementations or even versions.
> E.g, an upgraded
> sender penalizing a node for the lack of shadow/parallel channels
> fulfilling HTLC amounts up to
> `htlc_maximum_msat`.
>

Well documented, or maybe even explicit in the name of the feature bit. For
example `htlc_max_guaranteed`.


> I think signal availability should be explicit rather than implicit. Even
> if it's coming with more
> gossip bandwidth data consumed. I would say for bandwidth performance
> management, relying
> on new gossip messages, where they can be filtered in function of the
> level of services required
> is interesting.
>

In terms of implementation, I think this kind of signalling is easier as an
extension of `channel_update`, but it can probably work as a separate
message too.

Joost