Hey all,

Thanks for the comments!
Here are a few inline answers to points that haven't been fully addressed
yet.

@laolu

> Another question on my mind is: if this works really well for rate
> limiting of onion messages, then why can't we use it for HTLCs as well?

Because HTLC DoS is fundamentally different: the culprit isn't always
upstream, most of the time it's downstream (a node holding an HTLC), so
backpressure cannot work.

Onion messages don't have this issue at all: there is no equivalent to
holding an onion message downstream, because a node keeps no state once
it has forwarded one, so nothing a downstream node does can impact
previous intermediate nodes.

@ariard

> as the onion messages routing is source-based, an attacker could
> exhaust or reduce targeted onion communication channels to prevent
> invoices exchanges between LN peers

Can you detail how? That's exactly what this scheme is trying to prevent.
This looks similar to Joost's earlier comment, but I think it's based on a
misunderstanding of the proposal (as Joost then acknowledged). Spammers
will be statistically penalized, which will allow honest messages to go
through. As Joost details below, attackers with perfect information about
the state of rate limits can in theory saturate links, but in practice I
don't believe this can be sustained for an extended period of time.
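
To make this concrete, here is a rough sketch (in Go; all names and
constants are illustrative, not part of the proposal) of the per-peer
limiter discussed here: keep a token bucket per incoming peer, halve
that peer's rate whenever one of its messages is dropped downstream,
and slowly restore it while the peer behaves.

    package limiter

    import "time"

    const (
        defaultRate = 10.0             // msg/s for a well-behaved peer
        minRate     = 0.1              // floor: never cut a peer off entirely
        recovery    = 30 * time.Second // good behaviour needed to fully restore
    )

    // peerLimiter is the state we keep for each incoming peer.
    type peerLimiter struct {
        rate       float64 // current msg/s allowance
        tokens     float64 // token bucket fill
        lastUpdate time.Time
    }

    // allow refills the bucket and consumes one token if available.
    func (p *peerLimiter) allow(now time.Time) bool {
        p.tokens += now.Sub(p.lastUpdate).Seconds() * p.rate
        p.lastUpdate = now
        if p.tokens > p.rate { // cap bursts at ~1s worth of messages
            p.tokens = p.rate
        }
        if p.tokens < 1 {
            return false
        }
        p.tokens--
        return true
    }

    // penalize halves the allowance when a message from this peer was
    // dropped by the outgoing peer: this is the backpressure signal.
    func (p *peerLimiter) penalize() {
        p.rate /= 2
        if p.rate < minRate {
            p.rate = minRate
        }
    }

    // recoverRate slowly restores the allowance while the peer stays
    // under its limit.
    func (p *peerLimiter) recoverRate(elapsed time.Duration) {
        p.rate += defaultRate * elapsed.Seconds() / recovery.Seconds()
        if p.rate > defaultRate {
            p.rate = defaultRate
        }
    }

The important property is that the penalty lands on the incoming link the
spam actually came through, so honest peers keep most of their allowance.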

@joost

Cool work with the simulation, thanks!
Let us know if that yields other interesting results.

Cheers,
Bastien

On Mon, Jul 11, 2022 at 11:09 AM Joost Jager <joost.ja...@gmail.com> wrote:

> On Sun, Jul 10, 2022 at 9:14 PM Matt Corallo <lf-li...@mattcorallo.com>
> wrote:
>
>> > It can also be considered a bad thing that DoS ability is not based on
>> > a number of messages. It means that for the one-time cost of channel
>> > open/close, the attacker can generate spam forever if they stay right
>> > below the rate limit.
>>
>> I don't see why this is a problem? This seems to assume some kind of
>> per-message cost that nodes have to bear, but there is simply no such
>> thing. Indeed, if message spam causes denial of service to other network
>> participants, this would be an issue, but an attacker generating spam
>> from one specific location within the network should not cause that,
>> given some form of backpressure within the network.
>>
>
> It's more a general observation that an attacker can open a set of
> channels in multiple locations once and can use them forever to support
> potential attacks. That is assuming attacks aren't entirely thwarted with
> backpressure.
>
>
>> > Suppose the attacker has enough channels to hit the rate limit on an
>> > important connection some hops away from themselves. They can then
>> > sustain that attack indefinitely, assuming that they stay below the
>> > rate limit on the routes towards the target connection. What will the
>> > response be in that case? Will node operators work together to try to
>> > trace back to the source and take down the attacker? That requires
>> > operators to know each other.
>>
>> No it doesn't, backpressure works totally fine and automatically applies
>> pressure backwards until nodes, in an automated fashion, are appropriately
>> rate-limiting the source of the traffic.
>>
>
> Turns out I did not actually fully understand the proposal. This version
> of backpressure is nice indeed.
>
> To get a better feel for how it works, I've coded up a simple single-node
> simulation (
> https://gist.github.com/joostjager/bca727bdd4fc806e4c0050e12838ffa3),
> which produces output like this:
> https://gist.github.com/joostjager/682c4232c69f3c19ec41d7dd4643bb27.
> There are a few spammers and one real user. You can see that after some
> time, the spammers are all throttled down and the user packets keep being
> handled.
>
> If you add enough spammers, they are obviously still able to hit the
> next-hop rate limit and affect the user. But because their incoming limits
> have been throttled down, you need a lot of them, depending on the minimum
> rate that the node goes down to.
>
> I am wondering about that spiraling-down effect for legitimate users. Once
> you hit the limit, it is decreased, and it becomes easier to hit it again.
> If you don't adapt, you'll end up with a very low rate and need to take a
> break to recover from it. I guess the assumption is that legitimate users
> never end up there, because the rate limits are much, much higher than what
> they need. Even if they occasionally hit a limit on a busy connection, they
> can go through a lot of halvings before they get close to the rate they
> actually require and it becomes a problem.
>
> But how would that work if the user only has a single channel and wants to
> retry? I suppose they need to be careful to use a delay long enough to
> avoid that down-spiral. But how do they determine what is long enough?
> Probably not a real problem in practice with network latency etc., even
> though a concrete value does need to be picked.
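>
> A rough sketch of what a careful retry could look like (entirely
> hypothetical, delays made up; "send" stands for sending the message and
> waiting for the expected reply, returning an error on timeout):
>
>     package client
>
>     import (
>         "errors"
>         "time"
>     )
>
>     // sendWithBackoff retries with a doubling delay, so a user who
>     // trips a limiter waits it out instead of spiraling down.
>     func sendWithBackoff(send func() error) error {
>         delay := time.Second // made-up starting point
>         for attempt := 0; attempt < 5; attempt++ {
>             if err := send(); err == nil {
>                 return nil
>             }
>             time.Sleep(delay)
>             delay *= 2 // give our rate limit time to recover
>         }
>         return errors.New("giving up after repeated rate limiting")
>     }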
>
> Spammers are probably also not going to spam at max speed. They'd want to
> avoid their rate limit being slashed. In the simulation, I've added a
> `perfectSpammers` mode that creates spammers that have complete information
> on the state of the rate limiter. Not possible in reality, of course. If you
> enable this mode, it does get hard for the user. Spammers keep pushing the
> limiter to right below the tripping point and an unknowing user trips it
> and spirals down. (
> https://gist.github.com/joostjager/6eef1de0cf53b5314f5336acf2b2a48a)
>
> I don't know to what extent spammers without perfect information can still
> be smart and optimize their spam rate. They can probably do better than
> keep sending at max speed.
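>
> One thing they could try is a classic AIMD (additive increase,
> multiplicative decrease) rule, assuming they have some imperfect way to
> detect that their messages are being dropped. A rough sketch of such a
> rule for the simulation (hypothetical, not in the gist):
>
>     // aimdRate ramps the send rate up slowly while messages appear to
>     // get through, and backs off hard when throttling is detected.
>     func aimdRate(rate float64, throttled bool) float64 {
>         if throttled {
>             return rate / 2 // multiplicative decrease after tripping a limit
>         }
>         return rate + 0.1 // additive increase (msg/s) while all is well
>     }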
>
>> > Maybe this is a difference between lightning network and the internet
>> > that is relevant for this discussion. That routers on the internet know
>> > each other and have physical links between them, whereas in lightning
>> > ties can be much looser.
>>
>> No? The internet does not work by ISPs calling each other up on the phone
>> to apply backpressure manually whenever someone sends a lot of traffic? If
>> anything, lightning ties between nodes are much, much stronger than between
>> ISPs on the internet - you generally are at least loosely trusting your
>> peer with your money, not just your customer's customer's bits.
>>
>
> Haha, okay, yes, I actually don't know what ISPs do in case of DoS
> attacks. Just trying to find differences between lightning and the internet
> that could be relevant for this discussion.
>
> Seems to me that lightning's onion routing makes it hard to trace back to
> the source without node operators calling each other up. Harder than it is
> on the internet. Of course, if backpressure works, you don't need to trace
> anything, so none of that matters.
>
>> > Another question on my mind is: if this works really well for rate
>> > limiting of onion messages, then why can't we use it for HTLCs as well?
>>
>> We do? 400-some-odd HTLCs in flight at once is a *really* tight rate
>> limit, even! Orders of magnitude tighter than onion message rate limits
>> need to be :)
>
>
> What we don't yet do is create backpressure on the incoming channels by
> lowering the `max_pending_htlc` limit dynamically.
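>
> Roughly like this, mirroring the onion message scheme (hypothetical
> helper; 483 is the BOLT 2 ceiling for `max_accepted_htlcs`):
>
>     // adjustMaxPendingHTLC halves the incoming HTLC slot budget when
>     // the outgoing side is congested, and lets it creep back up while
>     // things are healthy.
>     func adjustMaxPendingHTLC(current int, outgoingCongested bool) int {
>         const maxSlots, minSlots = 483, 5
>         if outgoingCongested {
>             current /= 2 // backpressure: halve the incoming slot budget
>             if current < minSlots {
>                 current = minSlots
>             }
>             return current
>         }
>         if current < maxSlots {
>             current++ // slow recovery
>         }
>         return current
>     }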
>
> The idea could also be extended to HTLC forwarding rate limiters, to
> combat short-lived HTLC spam.
>
> Joost
>