Re: [Lightning-dev] Onion messages rate-limiting

2022-07-26 Thread Bastien TEINTURIER
Hey all,

Thanks for the comments!
Here are a few answers inline to some points that aren't fully addressed
yet.

@laolu

> Another question on my mind is: if this works really well for rate
> limiting of onion messages, then why can't we use it for HTLCs as well?

Because HTLC DoS is fundamentally different: the culprit isn't always
upstream; most of the time it's downstream (holding an HTLC), so
backpressure cannot work.

Onion messages don't have this issue at all because there's no
equivalent to holding an onion message downstream, it doesn't have
any impact on previous intermediate nodes.

@ariard

> as onion message routing is source-based, an attacker could
> exhaust or reduce targeted onion communication channels to prevent
> invoice exchanges between LN peers

Can you detail how? That's exactly what this scheme is trying to prevent.
This looks similar to Joost's early comment, but I think it's based on a
misunderstanding of the proposal (as Joost then acknowledged). Spammers
will be statistically penalized, which will allow honest messages to go
through. As Joost details below, attackers with perfect information about
the state of rate-limits can in theory saturate links, but in practice I
believe this cannot work for an extended period of time.

@joost

Cool work with the simulation, thanks!
Let us know if that yields other interesting results.

Cheers,
Bastien

On Mon, Jul 11, 2022 at 11:09 AM, Joost Jager  wrote:

> On Sun, Jul 10, 2022 at 9:14 PM Matt Corallo 
> wrote:
>
>> > It can also be considered a bad thing that DoS ability is not based on
>> > a number of messages. It means that for the one time cost of channel
>> > open/close, the attacker can generate spam forever if they stay right
>> > below the rate limit.
>>
>> I don't see why this is a problem? This seems to assume some kind of
>> per-message cost that nodes
>> have to bear, but there is simply no such thing. Indeed, if message spam
>> causes denial of service to
>> other network participants, this would be an issue, but an attacker
>> generating spam from one
>> specific location within the network should not cause that, given some
>> form of backpressure within
>> the network.
>>
>
> It's more a general observation that an attacker can open a set of
> channels in multiple locations once and can use them forever to support
> potential attacks. That is assuming attacks aren't entirely thwarted with
> backpressure.
>
>
>> > Suppose the attacker has enough channels to hit the rate limit on an
>> > important connection some hops away from themselves. They can then
>> > sustain that attack indefinitely, assuming that they stay below the
>> > rate limit on the routes towards the target connection. What will the
>> > response be in that case? Will node operators work together to try to
>> > trace back to the source and take down the attacker? That requires
>> > operators to know each other.
>>
>> No it doesn't, backpressure works totally fine and automatically applies
>> pressure backwards until
>> nodes, in an automated fashion, are appropriately ratelimiting the source
>> of the traffic.
>>
>
> Turns out I did not actually fully understand the proposal. This version
> of backpressure is nice indeed.
>
> To get a better feel for how it works, I've coded up a simple single node
> simulation (
> https://gist.github.com/joostjager/bca727bdd4fc806e4c0050e12838ffa3),
> which produces output like this:
> https://gist.github.com/joostjager/682c4232c69f3c19ec41d7dd4643bb27.
> There are a few spammers and one real user. You can see that after some
> time, the spammers are all throttled down and the user packets keep being
> handled.
>
> If you add enough spammers, they are obviously still able to hit the next
> hop rate limit and affect the user. But because their incoming limits have
> been throttled down, you need a lot of them - depending on the minimum rate
> that the node goes down to.
>
> I am wondering about that spiraling-down effect for legitimate users. Once
> you hit the limit, it is decreased and it becomes easier to hit it again.
> If you don't adapt, you'll end up with a very low rate. You need to take a
> break to recover from that. I guess the assumption is that legitimate users
> never end up there, because the rate limits are much much higher than what
> they need. Even if they'd occasionally hit a limit on a busy connection,
> they can go through a lot of halvings before they'll get close to the rate
> that they require and it becomes a problem.
>
> But how would that work if the user only has a single channel and wants to
> retry? I suppose they need to be careful to use a long enough delay to not
> get into that down-spiral. But how do they determine what is long enough?
> Probably not a real problem in practice with network latency etc, even
> though a concrete value does need to be picked.
>
> Spammers are probably also not going to spam at max speed. They'd want to
> avoid their rate limit being slashed. In 

Re: [Lightning-dev] Onion messages rate-limiting

2022-07-11 Thread Joost Jager
On Sun, Jul 10, 2022 at 9:14 PM Matt Corallo 
wrote:

> > It can also be considered a bad thing that DoS ability is not based on a
> > number of messages. It means that for the one time cost of channel
> > open/close, the attacker can generate spam forever if they stay right
> > below the rate limit.
>
> I don't see why this is a problem? This seems to assume some kind of
> per-message cost that nodes
> have to bear, but there is simply no such thing. Indeed, if message spam
> causes denial of service to
> other network participants, this would be an issue, but an attacker
> generating spam from one
> specific location within the network should not cause that, given some
> form of backpressure within
> the network.
>

It's more a general observation that an attacker can open a set of channels
in multiple locations once and can use them forever to support potential
attacks. That is assuming attacks aren't entirely thwarted with
backpressure.


> > Suppose the attacker has enough channels to hit the rate limit on an
> > important connection some hops away from themselves. They can then
> > sustain that attack indefinitely, assuming that they stay below the rate
> > limit on the routes towards the target connection. What will the
> > response be in that case? Will node operators work together to try to
> > trace back to the source and take down the attacker? That requires
> > operators to know each other.
>
> No it doesn't, backpressure works totally fine and automatically applies
> pressure backwards until
> nodes, in an automated fashion, are appropriately ratelimiting the source
> of the traffic.
>

Turns out I did not actually fully understand the proposal. This version of
backpressure is nice indeed.

To get a better feel for how it works, I've coded up a simple single node
simulation (
https://gist.github.com/joostjager/bca727bdd4fc806e4c0050e12838ffa3), which
produces output like this:
https://gist.github.com/joostjager/682c4232c69f3c19ec41d7dd4643bb27. There
are a few spammers and one real user. You can see that after some time, the
spammers are all throttled down and the user packets keep being handled.

If you add enough spammers, they are obviously still able to hit the next
hop rate limit and affect the user. But because their incoming limits have
been throttled down, you need a lot of them - depending on the minimum rate
that the node goes down to.

I am wondering about that spiraling-down effect for legitimate users. Once
you hit the limit, it is decreased and it becomes easier to hit it again.
If you don't adapt, you'll end up with a very low rate. You need to take a
break to recover from that. I guess the assumption is that legitimate users
never end up there, because the rate limits are much much higher than what
they need. Even if they'd occasionally hit a limit on a busy connection,
they can go through a lot of halvings before they'll get close to the rate
that they require and it becomes a problem.
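
To put rough numbers on that headroom (the rates here are my own assumptions for illustration, not values from the proposal): if a peer's default limit is 10 messages/second and a legitimate user only needs 0.1 messages/second, the limit can be halved about 6.6 times before it drops below what the user requires.

```python
import math

# Illustrative values only: an assumed default per-peer rate limit and the
# rate a legitimate user actually needs (neither comes from the proposal).
default_rate = 10.0  # onion messages per second
needed_rate = 0.1    # onion messages per second

# Each onion_message_drop halves the limit, so the headroom in halvings is:
halvings_before_problem = math.log2(default_rate / needed_rate)
print(halvings_before_problem)  # ~6.64 halvings of headroom
```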

But how would that work if the user only has a single channel and wants to
retry? I suppose they need to be careful to use a long enough delay to not
get into that down-spiral. But how do they determine what is long enough?
Probably not a real problem in practice with network latency etc, even
though a concrete value does need to be picked.

Spammers are probably also not going to spam at max speed. They'd want to
avoid their rate limit being slashed. In the simulation, I've added a
`perfectSpammers` mode that creates spammers that have complete information
on the state of the rate limiter. Not possible in reality of course. If you
enable this mode, it does get hard for the user. Spammers keep pushing the
limiter to right below the tripping point and an unknowing user trips it
and spirals down. (
https://gist.github.com/joostjager/6eef1de0cf53b5314f5336acf2b2a48a)

I don't know to what extent spammers without perfect information can still
be smart and optimize their spam rate. They can probably do better than
keep sending at max speed.

> > Maybe this is a difference between the lightning network and the internet
> > that is relevant for this discussion: routers on the internet know each
> > other and have physical links between them, whereas in lightning ties can
> > be much looser.
>
> No? The internet does not work by ISPs calling each other up on the phone
> to apply backpressure
> manually whenever someone sends a lot of traffic? If anything lightning
> ties between nodes are much,
> much stronger than ISPs on the internet - you generally are at least
> loosely trusting your peer with
> your money, not just your customer's customer's bits.
>

Haha, okay, yes, I actually don't know what ISPs do in case of DoS attacks.
Just trying to find differences between lightning and the internet that
could be relevant for this discussion.

Seems to me that lightning's onion routing makes it hard to trace back to
the source without node operators calling each other up. Harder than it is
on the 

Re: [Lightning-dev] Onion messages rate-limiting

2022-07-10 Thread Matt Corallo




On 7/10/22 4:43 AM, Joost Jager wrote:
> It can also be considered a bad thing that DoS ability is not based on a number of messages. It
> means that for the one time cost of channel open/close, the attacker can generate spam forever if
> they stay right below the rate limit.


I don't see why this is a problem? This seems to assume some kind of per-message cost that nodes 
have to bear, but there is simply no such thing. Indeed, if message spam causes denial of service to 
other network participants, this would be an issue, but an attacker generating spam from one 
specific location within the network should not cause that, given some form of backpressure within 
the network.


> Suppose the attacker has enough channels to hit the rate limit on an important connection some hops
> away from themselves. They can then sustain that attack indefinitely, assuming that they stay below
> the rate limit on the routes towards the target connection. What will the response be in that case?
> Will node operators work together to try to trace back to the source and take down the attacker?
> That requires operators to know each other.


No it doesn't, backpressure works totally fine and automatically applies pressure backwards until 
nodes, in an automated fashion, are appropriately ratelimiting the source of the traffic.


> Maybe this is a difference between the lightning network and the internet that is relevant for this
> discussion: routers on the internet know each other and have physical links between them, whereas
> in lightning ties can be much looser.


No? The internet does not work by ISPs calling each other up on the phone to apply backpressure 
manually whenever someone sends a lot of traffic? If anything lightning ties between nodes are much, 
much stronger than ISPs on the internet - you generally are at least loosely trusting your peer with 
your money, not just your customer's customer's bits.


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-10 Thread Joost Jager
On Thu, Jun 30, 2022 at 4:19 AM Matt Corallo 
wrote:

> Better yet, as Val points out, requiring a channel to relay onion messages
> puts a very real,
> nontrivial (in a world of msats) cost to getting an onion messaging
> channel. Better yet, with
> backpressure ability to DoS onion message links isn't denominated in
> number of messages, but instead
> in number of channels you are able to create, making the backpressure
> system equivalent to today's
> HTLC DoS considerations, whereas explicit payment allows an attacker to
> pay much less to break the
> system.
>

It can also be considered a bad thing that DoS ability is not based on a
number of messages. It means that for the one time cost of channel
open/close, the attacker can generate spam forever if they stay right below
the rate limit.


> Ultimately, paying suffers from the standard PoW-for-spam issue - you
> cannot assign a reasonable
> cost that an attacker cares about without impacting the system's usability
> due to said cost. Indeed,
> making it expensive enough to mount a months-long DDoS without impacting
> legitimate users would be pretty
> easy - at 1msat per relay of a 1366 byte onion message you can only
> saturate an average home users'
> 30Mbps connection for 30 minutes before you rack up a dollar in costs, but
> if your concern is
> whether someone can reasonably trivially take out the network for minutes
> at a time to make it have
> perceptibly high failure rates, no reasonable cost scheme will work. Quite
> the opposite - the only reasonable way to respond to a spike in traffic
> while maintaining QoS is to rate-limit by inbound edge!
>
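
The arithmetic in the quoted paragraph holds up to a quick sanity check (assuming 1 msat per relayed message as stated, plus a mid-2022 price of roughly USD 20,000 per bitcoin, which is my own assumption):

```python
# Saturating a 30 Mbps uplink with 1366-byte onion messages for 30 minutes,
# at an assumed fee of 1 msat per relayed message.
link_bytes_per_sec = 30e6 / 8                       # 30 Mbps in bytes/s
onion_msg_size = 1366                               # bytes per onion message
msgs_per_sec = link_bytes_per_sec / onion_msg_size  # ~2745 messages/s
duration_sec = 30 * 60                              # 30 minutes

total_msat = msgs_per_sec * duration_sec * 1        # 1 msat per message
total_sat = total_msat / 1000                       # ~4941 sat
usd = total_sat * 20_000 / 100_000_000              # at $20k/BTC: ~$0.99
```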

Suppose the attacker has enough channels to hit the rate limit on an
important connection some hops away from themselves. They can then sustain
that attack indefinitely, assuming that they stay below the rate limit on
the routes towards the target connection. What will the response be in that
case? Will node operators work together to try to trace back to the source
and take down the attacker? That requires operators to know each other.

Maybe this is a difference between the lightning network and the internet
that is relevant for this discussion: routers on the internet know each
other and have physical links between them, whereas in lightning ties can
be much looser.

Joost


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-05 Thread Antoine Riard
Hi Bastien,

Thanks for the proposal,

While I think establishing the rate limiting based on channel topology
should be effective to mitigate against DoS attackers, there is still a
concern that the damage inflicted might exceed the channel cost. I.e., as
onion message routing is source-based, an attacker could exhaust or
reduce targeted onion communication channels to prevent invoice exchanges
between LN peers, and thus disrupt their HTLC traffic. Moreover, if the
HTLC traffic is substitutable ("Good X sold by Merchant Alice can be
substituted by good Y sold by Merchant Mallory"), the attacker could
extract income from the DoS attack, compensating for the channel cost.

If plausible, such targeted onion bandwidth attacks would be fairly
sophisticated so it might not be a concern for the short-term. Though we
might have to introduce some proportion between onion bandwidth units
across the network and the cost of opening channels in the future...

One further concern, we might have "spontaneous" bandwidth DoS in the
future, if the onion traffic is leveraged beyond offers such as for
discovery of LSP liquidity services (e.g PeerSwap, instant inbound
channels, etc). For confidentiality reasons, an LN node might not use its
Noise connections to learn about such services. The LN node might also be
interested in doing real market discovery by fetching service rates from
all the LSPs while engaging with only one, thereby provoking a spike in
onion bandwidth consumed across the network without symmetric
HTLC traffic. This concern is hypothetical as that class of traffic might
end up announced in gossip.

So I think backpressure-based rate limiting is good to bootstrap as a
"naive" DoS protection for onion messages, though I'm not sure it will be
robust enough in the long-term.

Antoine

On Wed, Jun 29, 2022 at 4:28 AM, Bastien TEINTURIER  wrote:

> During the recent Oakland Dev Summit, some lightning engineers got together 
> to discuss DoS
> protection for onion messages. Rusty proposed a very simple rate-limiting
> scheme that statistically propagates back to the correct sender, which we
> describe in detail below.
>
> You can also read this in gist format if that works better for you [1].
>
> Nodes apply per-peer rate limits on _incoming_ onion messages that should be 
> relayed (e.g.
> N/seconds with some burst tolerance). It is recommended to allow more onion
> messages from peers with whom you have channels, for example 10/second when
> you have a channel and 1/second when you don't.
>
> When relaying an onion message, nodes keep track of where it came from (by 
> using the `node_id` of
> the peer who sent that message). Nodes only need the last such `node_id` per 
> outgoing connection,
> which ensures the memory footprint is very small. Also, this data doesn't 
> need to be persisted.
>
> Let's walk through an example to illustrate this mechanism:
>
> * Bob receives an onion message from Alice that should be relayed to Carol
> * After relaying that message, Bob stores Alice's `node_id` in its 
> per-connection state with Carol
> * Bob receives an onion message from Eve that should be relayed to Carol
> * After relaying that message, Bob replaces Alice's `node_id` with Eve's 
> `node_id` in its
> per-connection state with Carol
> * Bob receives an onion message from Alice that should be relayed to Dave
> * After relaying that message, Bob stores Alice's `node_id` in its 
> per-connection state with Dave
> * ...
>
> We introduce a new message that will be sent when dropping an incoming onion 
> message because it
> reached rate limits:
>
> 1. type: 515 (`onion_message_drop`)
> 2. data:
>* [`rate_limited`:`u8`]
>* [`shared_secret_hash`:`32*byte`]
>
> Whenever an incoming onion message reaches the rate limit, the receiver sends 
> `onion_message_drop`
> to the sender. The sender looks at its per-connection state to find where the 
> message was coming
> from and relays `onion_message_drop` to the last sender, halving their rate 
> limits with that peer.
>
> If the sender doesn't overflow the rate limit again, the receiver should 
> double the rate limit
> after 30 seconds, until it reaches the default rate limit again.
>
> The flow will look like:
>
> Alice  Bob  Carol
>   | | |
>   |  onion_message  | |
>   |>| |
>   | |  onion_message  |
>   | |>|
>   | |onion_message_drop   |
>   | |<|
>   |onion_message_drop   | |
>   |<| |
>
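
A token-bucket sketch of the halving/doubling behaviour described above (the 10/second default and 0.1/second floor are my own assumptions; the proposal only specifies the halving and the 30-second doubling):

```python
import time

class PeerRateLimiter:
    """Per-peer token bucket whose rate halves on each onion_message_drop
    and doubles back toward the default after 30s of good behaviour."""

    def __init__(self, default_rate: float = 10.0, min_rate: float = 0.1):
        self.default_rate = default_rate
        self.min_rate = min_rate
        self.rate = default_rate
        self.tokens = default_rate  # burst tolerance: one second's worth
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if an incoming onion message may be relayed."""
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # drop it and send onion_message_drop back

    def on_drop_received(self) -> None:
        """This peer overflowed our limit: halve their rate."""
        self.rate = max(self.min_rate, self.rate / 2)

    def on_good_interval(self) -> None:
        """Called every 30s the peer stays under the limit: double the rate
        back toward the default."""
        self.rate = min(self.default_rate, self.rate * 2)
```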
> The `shared_secret_hash` field contains a BIP 340 tagged hash of the Sphinx 
> shared secret of the
> rate limiting peer (in the example above, Carol):
>
> * 

Re: [Lightning-dev] Onion messages rate-limiting

2022-07-03 Thread Matt Corallo



On 7/1/22 8:48 PM, Olaoluwa Osuntokun wrote:

> Hi Val,
>
> > Another huge win of backpressure is that it only needs to happen in DoS
> > situations, meaning it doesn’t have to impact users in the normal case.
>
> I agree, I think the same would apply to prepayments as well (0 or 1 msat in
> calm times). My main concern with relying _only_ on backpressure rate
> limiting is that we'd end up w/ your first scenario more often than not,
> which means routine (and more important to the network) things like fetching
> invoices becomes unreliable.


You're still thinking about this in a costing world, but this really is a networking problem, not a 
costing one.



> I'm not saying we should 100% compare onion messages to Tor, but that we might
> be able to learn from what works and what isn't working for them. The systems
> aren't identical, but have some similarities.


To DoS here you have to have *very* asymmetric attack power - regular ol' invoice requests are 
trivial amounts of bandwidth, like, really, really trivial. Like, 1000x less bandwidth than an 
average ol' home node on a DOCSIS high-latency line with 20Mbps up has available. Closer to 
1,000,000x less if we're talking about "real metal".


More importantly, Tor's current attack actually *isn't* a simple DoS attack. The attack there isn't 
relevant to onion messages at all, you're just throwing up roadblocks with nonsense here.




> On the topic of parameters across the network: could we end up in a scenario
> where someone is doing like streaming payments for a live stream (or w/e),
> ends up fetching a ton of invoices (actual traffic leading to payments), but
> then ends up being erroneously rate limited by their peers? Assuming they
> have 1 or 2 channels that have now all been clamped down, is waiting N
> minutes (or w/e) their only option? If so then this might lead to their
> livestream (data being transmitted elsewhere) being shut off. Oops, they just
> missed the greatest World Cup goal in history!  You had to be there, you had to
> be there, you had to *be* there...


You're basically making a "you had to have more inbound capacity" argument, which, sure, yes, you 
do. Even better, though, onion messages are *cheap*, like absurdly cheap, so if you have enough 
inbound capacity you're almost certain to have enough inbound *network* capacity to handle some 
invoice requests, hell, they're a millionth the cost of the HTLCs you're about to receive 
anyway...this argument is just nonsense.




> Another question on my mind is: if this works really well for rate limiting of
> onion messages, then why can't we use it for HTLCs as well?


We do? 400-some-odd HTLCs in flight at once is a *really* tight rate limit, even! Order of 
magnitudes tighter than onion message rate limits need to be :)


Matt


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-03 Thread Matt Corallo




On 7/1/22 9:09 PM, Olaoluwa Osuntokun wrote:

> Hi Matt,
>
> > Ultimately, paying suffers from the standard PoW-for-spam issue - you
> > cannot assign a reasonable cost that an attacker cares about without
> > impacting the system's usability due to said cost.
>
> Applying this statement to a related area


I mean, I think it's only mostly-related, cause HTLCs are pretty different in
cost, but.


> would you also agree that proposals
> to introduce pre-payments for HTLCs to mitigate jamming attacks are similarly
> a dead end?


I dunno if it's a "dead end", but, indeed, the naive proposals I'm definitely no fan of whatsoever. I
certainly remain open to being shown I'm wrong.



> Personally, this has been my opinion for some time now. Which
> is why I advocate for the forwarding pass approach (gracefully degrade to
> stratified topology), which in theory would allow the major flows of the
> network to continue in the face of disruption.


I'm starting to come around to allowing a "pay per HTLC-locked-time" fee, with Rusty's proposal
around allowing someone to force-close a channel to "blame" a hop for not failing back after fees
stop coming in. It's really nifty in theory and doesn't have all the classic issues that
up-front-fees have, but it puts a very, very, very high premium on high uptime, which may be
catastrophic, dunno.


Matt


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-01 Thread Olaoluwa Osuntokun
Hi Matt,

> Ultimately, paying suffers from the standard PoW-for-spam issue - you
> cannot assign a reasonable cost that an attacker cares about without
> impacting the system's usability due to said cost.

Applying this statement to a related area, would you also agree that
proposals to introduce pre-payments for HTLCs to mitigate jamming attacks
are similarly a dead end?  Personally, this has been my opinion for some
time now. Which
is why I advocate for the forwarding pass approach (gracefully degrade to
stratified topology), which in theory would allow the major flows of the
network to continue in the face of disruption.

-- Laolu


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-01 Thread Olaoluwa Osuntokun
Hi Val,

> Another huge win of backpressure is that it only needs to happen in DoS
> situations, meaning it doesn’t have to impact users in the normal case.

I agree, I think the same would apply to prepayments as well (0 or 1 msat in
calm times). My main concern with relying _only_ on backpressure rate
limiting is that we'd end up w/ your first scenario more often than not,
which means routine (and more important to the network) things like fetching
invoices becomes unreliable.

I'm not saying we should 100% compare onion messages to Tor, but that we
might be able to learn from what works and what isn't working for them.
The systems aren't identical, but have some similarities.

On the topic of parameters across the network: could we end up in a scenario
where someone is doing like streaming payments for a live stream (or w/e),
ends up fetching a ton of invoices (actual traffic leading to payments), but
then ends up being erroneously rate limited by their peers? Assuming they
have 1 or 2 channels that have now all been clamped down, is waiting N
minutes (or w/e) their only option? If so then this might lead to their
livestream (data being transmitted elsewhere) being shut off. Oops, they
just missed the greatest World Cup goal in history!  You had to be there,
you had to be there, you had to *be* there...

Another question on my mind is: if this works really well for rate limiting
of onion messages, then why can't we use it for HTLCs as well?

-- Laolu


Re: [Lightning-dev] Onion messages rate-limiting

2022-06-30 Thread Matt Corallo
One further note, I don’t think it makes sense to specify exactly what the 
rate-limiting behavior is here - if a node wants to do something other than the 
general “keep track of last forwarded message source and rate limit them” logic 
they should be free to, there’s no reason that needs to be normative (and there 
may be some reason to think it’s vulnerable to a node deliberately causing one 
inbound edge to be limited even though they’re spamming a different one).

> On Jun 29, 2022, at 04:28, Bastien TEINTURIER  wrote:
> 
> 
> During the recent Oakland Dev Summit, some lightning engineers got together 
> to discuss DoS
> protection for onion messages. Rusty proposed a very simple rate-limiting 
> scheme that
> statistically propagates back to the correct sender, which we describe in 
> details below.
> You can also read this in gist format if that works better for you [1].
> Nodes apply per-peer rate limits on _incoming_ onion messages that should be 
> relayed (e.g.
> N/seconds with some burst tolerance). It is recommended to allow more onion
> messages from peers with whom you have channels, for example 10/second when
> you have a channel and 1/second when you don't.
> 
> When relaying an onion message, nodes keep track of where it came from (by 
> using the `node_id` of
> the peer who sent that message). Nodes only need the last such `node_id` per 
> outgoing connection,
> which ensures the memory footprint is very small. Also, this data doesn't 
> need to be persisted.
> 
> Let's walk through an example to illustrate this mechanism:
> 
> * Bob receives an onion message from Alice that should be relayed to Carol
> * After relaying that message, Bob stores Alice's `node_id` in its 
> per-connection state with Carol
> * Bob receives an onion message from Eve that should be relayed to Carol
> * After relaying that message, Bob replaces Alice's `node_id` with Eve's 
> `node_id` in its
> per-connection state with Carol
> * Bob receives an onion message from Alice that should be relayed to Dave
> * After relaying that message, Bob stores Alice's `node_id` in its 
> per-connection state with Dave
> * ...
> 
> We introduce a new message that will be sent when dropping an incoming onion 
> message because it
> reached rate limits:
> 
> 1. type: 515 (`onion_message_drop`)
> 2. data:
>* [`rate_limited`:`u8`]
>* [`shared_secret_hash`:`32*byte`]
> 
> Whenever an incoming onion message reaches the rate limit, the receiver sends 
> `onion_message_drop`
> to the sender. The sender looks at its per-connection state to find where the 
> message was coming
> from and relays `onion_message_drop` to the last sender, halving their rate 
> limits with that peer.
> 
> If the sender doesn't overflow the rate limit again, the receiver should 
> double the rate limit
> after 30 seconds, until it reaches the default rate limit again.
> 
> The flow will look like:
> 
> Alice  Bob  Carol
>   | | |
>   |  onion_message  | |
>   |>| |
>   | |  onion_message  |
>   | |>|
>   | |onion_message_drop   |
>   | |<|
>   |onion_message_drop   | |
>   |<| |
> 
> The `shared_secret_hash` field contains a BIP 340 tagged hash of the Sphinx 
> shared secret of the
> rate limiting peer (in the example above, Carol):
> 
> * `shared_secret_hash = SHA256(SHA256("onion_message_drop") || 
> SHA256("onion_message_drop") || sphinx_shared_secret)`
> 
> This value is known by the node that created the onion message: if 
> `onion_message_drop` propagates
> all the way back to them, it lets them know which part of the route is 
> congested, allowing them
> to retry through a different path.
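
The BIP 340 tagged hash above is straightforward to compute. A sketch (the all-zero shared secret is a placeholder; a real node would use the Sphinx shared secret it derived for that hop):

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    """BIP 340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)."""
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()

# Placeholder 32-byte Sphinx shared secret, for illustration only.
sphinx_shared_secret = bytes(32)
shared_secret_hash = tagged_hash("onion_message_drop", sphinx_shared_secret)
assert len(shared_secret_hash) == 32
```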
> 
> Whenever there is some latency between nodes and many onion messages, 
> `onion_message_drop` may
> be relayed to the incorrect incoming peer (since we only store the `node_id` 
> of the _last_ incoming
> peer in our outgoing connection state). The following example highlights this:
> 
>  Eve   Bob  Carol
>   |  onion_message  | |
>   |>|  onion_message  |
>   |  onion_message  |>|
>   |>|  onion_message  |
>   |  onion_message  |>|
>   |>|  onion_message  |
> |>|
> Alice   |onion_message_drop   |
>   |  onion_message  |+|
>   |>|  onion_message ||
>   | 

Re: [Lightning-dev] Onion messages rate-limiting

2022-06-30 Thread Christian Decker
Thanks Bastien for writing up the proposal, it is simple but effective I
think.

>> One issue I see w/ the first category is that a single party can flood the
>> network and cause nodes to trigger their rate limits, which then affects
>> the
>> usability of the onion messages for all other well-behaving parties.
>>
>
> But that's exactly what this proposal addresses? That single party can
> only flood for a very small amount of time before being rate-limited for
> a while, so it cannot disrupt other parties that much (to be properly
> quantified by research, but it seems quite intuitive).

Indeed, it creates a tiny bubble (1-2 hops) in which an attacker can
trigger the rate-limiter, but beyond which its messages simply get
dropped. In this respect it is very similar to staggered gossip: a node
may send updates at an arbitrary rate, but each node locally buffers and
aggregates these changes, so the effective rate that is forwarded or
broadcast doesn't overwhelm the network (parametrization and network size
apart ^^).

This is also an argument for not allowing onion messages over
non-channel connections, since otherwise an attacker could arbitrarily
extend their bubble and sybil their way to covering the entire network
(depending on the rate limiter, its parameters and timing, the
attacker's bubble may extend to more than a single hop).

Going back a step, it is also questionable whether non-channel OM
forwarding is usable at all, since nodes usually do not know about the
existence of these connections (they are not gossiped). I'd therefore
not allow non-channel forwarding, with the small exception of some
local applications where local knowledge is required; but in that case
the OM should signal this clearly to the forwarding node, or rely
on direct messaging with the peer (pre-channel negotiation, etc.).

>> W.r.t this topic, one event that imo is worth pointing out is that a very
>> popular onion routing system, Tor, has been facing a severe DDoS attack
>> that
>> has lasted weeks, and isn't yet fully resolved [2].
>>
>
> I don't think we can compare lightning to Tor, the only common design
> is that there is onion encryption, but the networking parts are very
> different (and the attack vectors on Tor are mostly on components that
> don't exist in lightning).

Indeed, a major difference, if we insist on there being a channel, is that
it is no longer easy to sybil the network, and there is no way to just
connect to a node and send it data (which is pretty much the Tor circuit
construction). So we can rely on the topology of the network to keep an
attacker constrained to their local region of the network; extending
the attacker's reach would require opening channels, i.e., wouldn't be
free.

Cheers,
Christian
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Onion messages rate-limiting

2022-06-30 Thread Bastien TEINTURIER
Thanks for your inputs!

> One issue I see w/ the first category is that a single party can flood the
> network and cause nodes to trigger their rate limits, which then affects
> the
> usability of the onion messages for all other well-behaving parties.
>

But that's exactly what this proposal addresses? That single party can
only flood for a very small amount of time before being rate-limited for
a while, so it cannot disrupt other parties that much (to be properly
quantified by research, but it seems quite intuitive).

> W.r.t this topic, one event that imo is worth pointing out is that a very
> popular onion routing system, Tor, has been facing a severe DDoS attack
> that
> has lasted weeks, and isn't yet fully resolved [2].
>

I don't think we can compare lightning to Tor, the only common design
is that there is onion encryption, but the networking parts are very
different (and the attack vectors on Tor are mostly on components that
don't exist in lightning).

> I can only assume that the desire to add a cost to onion messages
> ultimately stems from a desire to ensure every possible avenue for
> value extraction is given to routing nodes
>

I think that Matt is pointing out the main distinction here between the
two proposals. While I can sympathize with that goal, I agree with
Matt that it's probably misplaced here (and that proposal is orders of
magnitude more complex than a simple rate-limit).

> I do think any spec for this shouldn't make any recommendations about
> willingness to relay onion
> messages for anonymous no-channel third parties, if anything deliberately
> staying mum on it and
> allowing nodes to adapt policy (and probably rate-limit no-channel
> third-parties before they rate
> limit any peer they have a channel with). Ultimately, we have to assume
> that nodes will attempt to
> send onion messages by routing through the existing channel graph, so
> there's little reason to worry
> too much about ensuring ability to relay for anonymous parties.
>

Sounds good, I'll do that when actually specifying this in a bolt.


> Better yet, as Val points out, requiring a channel to relay onion
> messages puts a very real, nontrivial (in a world of msats) cost to
> getting an onion messaging channel.
>

Yes, this is the main component that does efficiently protect against DoS.
At some point if a peer keeps exceeding their onion rate-limits and isn't
providing you with enough HTLCs to relay, you can force-close on them,
which makes that cost real and stops the spamming attempts for a while.

Cheers,
Bastien

On Thu, Jun 30, 2022 at 04:19, Matt Corallo wrote:

> Thanks Bastien for writing this up! This is a pretty trivial and
> straightforward way to rate-limit
> onion messages in a way that allows legitimate users to continue using the
> system in spite of some
> bad actors trying (and failing, due to being rate-limited) to DoS the
> network.
>
> I do think any spec for this shouldn't make any recommendations about
> willingness to relay onion
> messages for anonymous no-channel third parties, if anything deliberately
> staying mum on it and
> allowing nodes to adapt policy (and probably rate-limit no-channel
> third-parties before they rate
> limit any peer they have a channel with). Ultimately, we have to assume
> that nodes will attempt to
> send onion messages by routing through the existing channel graph, so
> there's little reason to worry
> too much about ensuring ability to relay for anonymous parties.
>
> Better yet, as Val points out, requiring a channel to relay onion messages
> puts a very real,
> nontrivial (in a world of msats) cost to getting an onion messaging
> channel. Better yet, with
> backpressure ability to DoS onion message links isn't denominated in
> number of messages, but instead
> in number of channels you are able to create, making the backpressure
> system equivalent to today's
> HTLC DoS considerations, whereas explicit payment allows an attacker to
> pay much less to break the
> system.
>
> As for the proposal to charge for onion messages, I'm still not at all
> sure where its coming from.
> It seems to flow from a classic "have a hammer (a system to make
> micropayments for things), better
> turn this problem into a nail (by making users pay for it)" approach, but
> it doesn't actually solve
> the problem at hand.
>
> Even if you charge for onion messages, users may legitimately want to send
> a bunch of payments in
> bulk, and trivially overflow a home or Tor nodes' bandwidth. The only
> response to that, whether its
> a DoS attack or a legitimate user, is to rate-limit, and to rate-limit in
> a way that tells the user
> sending the messages to back off! Sure, you could do that by failing onion
> messages with an error
> that updates the fee you charge, but you're ultimately doing a poor-man's
> (or, I suppose,
> rich-man's) version of what Bastien proposes, not adding some fundamental
> difference.
>
> Ultimately, paying suffers from the standard PoW-for-spam issue - 

Re: [Lightning-dev] Onion messages rate-limiting

2022-06-29 Thread Matt Corallo
Thanks Bastien for writing this up! This is a pretty trivial and straightforward way to rate-limit 
onion messages in a way that allows legitimate users to continue using the system in spite of some 
bad actors trying (and failing, due to being rate-limited) to DoS the network.


I do think any spec for this shouldn't make any recommendations about willingness to relay onion 
messages for anonymous no-channel third parties, if anything deliberately staying mum on it and 
allowing nodes to adapt policy (and probably rate-limit no-channel third-parties before they rate 
limit any peer they have a channel with). Ultimately, we have to assume that nodes will attempt to 
send onion messages by routing through the existing channel graph, so there's little reason to worry 
too much about ensuring ability to relay for anonymous parties.


Better yet, as Val points out, requiring a channel to relay onion messages puts a very real, 
nontrivial (in a world of msats) cost to getting an onion messaging channel. Better yet, with 
backpressure ability to DoS onion message links isn't denominated in number of messages, but instead 
in number of channels you are able to create, making the backpressure system equivalent to today's 
HTLC DoS considerations, whereas explicit payment allows an attacker to pay much less to break the 
system.


As for the proposal to charge for onion messages, I'm still not at all sure where it's coming from. 
It seems to flow from a classic "have a hammer (a system to make micropayments for things), better 
turn this problem into a nail (by making users pay for it)" approach, but it doesn't actually solve 
the problem at hand.


Even if you charge for onion messages, users may legitimately want to send a bunch of payments in 
bulk, and trivially overflow a home or Tor nodes' bandwidth. The only response to that, whether it's 
a DoS attack or a legitimate user, is to rate-limit, and to rate-limit in a way that tells the user 
sending the messages to back off! Sure, you could do that by failing onion messages with an error 
that updates the fee you charge, but you're ultimately doing a poor-man's (or, I suppose, 
rich-man's) version of what Bastien proposes, not adding some fundamental difference.


Ultimately, paying suffers from the standard PoW-for-spam issue - you cannot assign a cost that an 
attacker cares about without impacting the system's usability due to said cost. Indeed, making it 
expensive enough to deter a months-long DDoS without impacting legitimate users would be pretty 
easy - at 1msat per relay of a 1366-byte onion message you can only saturate an average home user's 
30Mbps connection for 30 minutes before you rack up a dollar in costs - but if your concern is 
whether someone can trivially take out the network for minutes at a time, giving it perceptibly 
high failure rates, no reasonable cost scheme will work. Quite the opposite - the only reasonable 
way to respond to a spike in traffic while maintaining QoS is to rate-limit by inbound edge!


Ultimately, what we have here is a networking problem, that has to be solved with networking 
solutions, not a costing problem, which can be solved with payment. I can only assume that the 
desire to add a cost to onion messages ultimately stems from a desire to ensure every possible 
avenue for value extraction is given to routing nodes, but I think that desire is misplaced in this 
case - the cost of bandwidth is diminutive compared to other costs of routing node operation, 
especially when you consider sensible rate-limits as proposed in Bastien's email.


Indeed, if anyone were proposing rate-limits which would allow anything close to enough bandwidth 
usage to cause "lightning is turning into Tor and has Tor's problems" to be a legitimate concern I'd 
totally agree we should charge for its use. But no one is, nor has anyone ever seriously, to my 
knowledge, proposed such a thing. If lightning messages get deployed and start eating up even single 
Mbps's on a consistent basis on nodes, we can totally revisit this, it's not like we are shutting the 
door to any possible costing system if it becomes necessary, but rate-limiting has to happen either 
way, so we should start there and see if we need costing, not jump to costing on day one, hampering 
utility.


Matt

On 6/29/22 8:22 PM, Olaoluwa Osuntokun wrote:

Hi t-bast,

Happy to see this finally written up! With this, we have two classes of
proposals for rate limiting onion messaging:

   1. Back propagation based rate limiting as described here.

   2. Allowing nodes to express a per-message cost for their forwarding
   services, which is described here [1].

I still need to digest everything proposed here, but personally I'm more
optimistic about the 2nd category than the 1st.

One issue I see w/ the first category is that a single party can flood the
network and cause nodes to trigger their rate limits, which then affects the
usability of the 

Re: [Lightning-dev] Onion messages rate-limiting

2022-06-29 Thread vwallace via Lightning-dev
Heya Laolu,

From my PoV, adding prepayments to onion messages is putting the cart before 
the horse a bit, think there's a good amount of recourse before resorting to 
that.

Seems there are two cases to address here:

1. People are trying to stream GoT over lightning

In this case, just rate limiting should disrupt their viewing experience such 
that it becomes unusable. Don’t think LN can be compared to Tor here because 
they explicitly want to support this case and we don’t.

2. An attacker is trying to flood the network with OMs

In this case, IMO LN also can’t be compared to Tor because you can limit your 
OMs to channel partners only, and this in itself provides a “proof of work” 
that an attacker can’t surmount without actually opening channels.

Another huge win of backpressure is that it only needs to happen in DoS 
situations, meaning it doesn’t have to impact users in the normal case.

Cheers —Val

--- Original Message ---
On Wednesday, June 29th, 2022 at 8:22 PM, Olaoluwa Osuntokun 
 wrote:

> Hi t-bast,
>
> Happy to see this finally written up! With this, we have two classes of
> proposals for rate limiting onion messaging:
>
> 1. Back propagation based rate limiting as described here.
>
> 2. Allowing nodes to express a per-message cost for their forwarding
> services, which is described here [1].
>
> I still need to digest everything proposed here, but personally I'm more
> optimistic about the 2nd category than the 1st.
>
> One issue I see w/ the first category is that a single party can flood the
> network and cause nodes to trigger their rate limits, which then affects the
> usability of the onion messages for all other well-behaving parties. An
> example, this might mean I can't fetch invoices, give up after a period of
> time (how long?), then result to a direct connection (perceived payment
> latency accumulated along the way).
>
> With the 2nd route, if an attacker floods the network, they need to directly
> pay for the forwarding usage themselves, though they may also directly cause
> nodes to adjust their forwarding rate accordingly. However in this case, the
> attacker has incurred a concrete cost, and even if the rates rise, then
> those that really need the service (notifying an LSP that a user is online
> or w/e) can continue to pay that new rate. In other words, by _pricing_ the
> resource utilization, demand preferences can be exchanged, leading to more
> efficient long term resource allocation.
>
> W.r.t this topic, one event that imo is worth pointing out is that a very
> popular onion routing system, Tor, has been facing a severe DDoS attack that
> has lasted weeks, and isn't yet fully resolved [2]. The on going flooding
> attack on Tor has actually started to affect LN (iirc over half of all
> public routing nodes w/ an advertised address are tor-only), and other
> related systems like Umbrel that 100% rely on tor for networking traversal.
> Funnily enough, Tor developers have actually suggested adding some PoW to
> attempt to mitigate DDoS attacks [3]. In that same post they throw around
> the idea of using anonymous tokens to allow nodes to give them to "good"
> clients, which is pretty similar to my lofty Forwarding Pass idea as relates
> to onion messaging, and also general HTLC jamming mitigation.
>
> In summary, we're not the first to attempt to tackle the problem of rate
> limiting relayed message spam in an anonymous/pseudonymous network, and we
> can probably learn a lot from what is and isn't working w.r.t how Tor
> handles things. As you note near the end of your post, this might just be
> the first avenue in a long line of research to best figure out how to handle
> the spam concerns introduced by onion messaging. From my PoV, it still seems
> to be an open question if the same network can be _both_ a reliable
> micro-payment system _and_ also a reliable arbitrary message transport
> layer. I guess only time will tell...
>
>> The `shared_secret_hash` field contains a BIP 340 tagged hash
>
> Any reason to use the tagged hash here vs just a plain ol HMAC? Under the
> hood, they have a pretty similar construction [4].
>
> [1]: 
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-February/003498.html
> [2]: https://status.torproject.org/issues/2022-06-09-network-ddos/
> [3]: https://blog.torproject.org/stop-the-onion-denial/
> [4]: https://datatracker.ietf.org/doc/html/rfc2104
>
> -- Laolu
>
> On Wed, Jun 29, 2022 at 1:28 AM Bastien TEINTURIER  wrote:
>
>> During the recent Oakland Dev Summit, some lightning engineers got together 
>> to discuss DoS
>> protection for onion messages. Rusty proposed a very simple rate-limiting 
>> scheme that
>> statistically propagates back to the correct sender, which we describe in 
>> details below.
>>
>> You can also read this in gist format if that works better for you [1].
>>
>> Nodes apply per-peer rate limits on _incoming_ onion messages that should be 
>> relayed (e.g.
>> N/seconds with some burst 

Re: [Lightning-dev] Onion messages rate-limiting

2022-06-29 Thread Olaoluwa Osuntokun
Hi t-bast,

Happy to see this finally written up! With this, we have two classes of
proposals for rate limiting onion messaging:

  1. Back propagation based rate limiting as described here.

  2. Allowing nodes to express a per-message cost for their forwarding
  services, which is described here [1].

I still need to digest everything proposed here, but personally I'm more
optimistic about the 2nd category than the 1st.

One issue I see w/ the first category is that a single party can flood the
network and cause nodes to trigger their rate limits, which then affects the
usability of the onion messages for all other well-behaving parties. As an
example, this might mean I can't fetch invoices, give up after a period of
time (how long?), then resort to a direct connection (perceived payment
latency accumulated along the way).

With the 2nd route, if an attacker floods the network, they need to directly
pay for the forwarding usage themselves, though they may also directly cause
nodes to adjust their forwarding rate accordingly. However in this case, the
attacker has incurred a concrete cost, and even if the rates rise, then
those that really need the service (notifying an LSP that a user is online
or w/e) can continue to pay that new rate. In other words, by _pricing_ the
resource utilization, demand preferences can be exchanged, leading to more
efficient long term resource allocation.

W.r.t this topic, one event that imo is worth pointing out is that a very
popular onion routing system, Tor, has been facing a severe DDoS attack that
has lasted weeks, and isn't yet fully resolved [2]. The ongoing flooding
attack on Tor has actually started to affect LN (iirc over half of all
public routing nodes w/ an advertised address are tor-only), and other
related systems like Umbrel that 100% rely on tor for network traversal.
Funnily enough, Tor developers have actually suggested adding some PoW to
attempt to mitigate DDoS attacks [3]. In that same post they throw around
the idea of using anonymous tokens to allow nodes to give them to "good"
clients, which is pretty similar to my lofty Forwarding Pass idea as relates
to onion messaging, and also general HTLC jamming mitigation.

In summary, we're not the first to attempt to tackle the problem of rate
limiting relayed message spam in an anonymous/pseudonymous network, and we
can probably learn a lot from what is and isn't working w.r.t how Tor
handles things. As you note near the end of your post, this might just be
the first avenue in a long line of research to best figure out how to handle
the spam concerns introduced by onion messaging. From my PoV, it still seems
to be an open question if the same network can be _both_ a reliable
micro-payment system _and_ also a reliable arbitrary message transport
layer. I guess only time will tell...

> The `shared_secret_hash` field contains a BIP 340 tagged hash

Any reason to use the tagged hash here vs just a plain ol HMAC? Under the
hood, they have a pretty similar construction [4].
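For concreteness, here is a sketch of the two constructions side by side (Python standard library only; using the tag string as the HMAC key is purely illustrative, not something the proposal specifies):

```python
import hashlib
import hmac

def tagged_hash(tag: bytes, msg: bytes) -> bytes:
    """BIP 340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)."""
    tag_hash = hashlib.sha256(tag).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    """Plain HMAC-SHA256 (RFC 2104), for comparison."""
    return hmac.new(key, msg, hashlib.sha256).digest()

# Placeholder shared secret, just to exercise both constructions.
sphinx_shared_secret = bytes(32)
h_tagged = tagged_hash(b"onion_message_drop", sphinx_shared_secret)
h_hmac = hmac_sha256(b"onion_message_drop", sphinx_shared_secret)
```

Both are two nested SHA256 invocations with a fixed prefix, which is why they have similar security properties; the tagged hash just fixes the "key" to a hash of the tag.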

[1]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-February/003498.html
[2]: https://status.torproject.org/issues/2022-06-09-network-ddos/
[3]: https://blog.torproject.org/stop-the-onion-denial/
[4]: https://datatracker.ietf.org/doc/html/rfc2104

-- Laolu



On Wed, Jun 29, 2022 at 1:28 AM Bastien TEINTURIER  wrote:

> During the recent Oakland Dev Summit, some lightning engineers got together 
> to discuss DoS
> protection for onion messages. Rusty proposed a very simple rate-limiting 
> scheme that
> statistically propagates back to the correct sender, which we describe in 
> details below.
>
> You can also read this in gist format if that works better for you [1].
>
> Nodes apply per-peer rate limits on _incoming_ onion messages that should be 
> relayed (e.g.
> N/seconds with some burst tolerance). It is recommended to allow more onion 
> messages from
> peers with whom you have channels, for example 10/seconds when you have a 
> channel and 1/second
> when you don't.
>
> When relaying an onion message, nodes keep track of where it came from (by 
> using the `node_id` of
> the peer who sent that message). Nodes only need the last such `node_id` per 
> outgoing connection,
> which ensures the memory footprint is very small. Also, this data doesn't 
> need to be persisted.
>
> Let's walk through an example to illustrate this mechanism:
>
> * Bob receives an onion message from Alice that should be relayed to Carol
> * After relaying that message, Bob stores Alice's `node_id` in its 
> per-connection state with Carol
> * Bob receives an onion message from Eve that should be relayed to Carol
> * After relaying that message, Bob replaces Alice's `node_id` with Eve's 
> `node_id` in its
> per-connection state with Carol
> * Bob receives an onion message from Alice that should be relayed to Dave
> * After relaying that message, Bob stores Alice's `node_id` in its 
> per-connection state with Dave
> * ...
>
> We introduce a new 

[Lightning-dev] Onion messages rate-limiting

2022-06-29 Thread Bastien TEINTURIER
During the recent Oakland Dev Summit, some lightning engineers got
together to discuss DoS
protection for onion messages. Rusty proposed a very simple
rate-limiting scheme that
rate-limiting scheme that
statistically propagates back to the correct sender, which we describe
in detail below.

You can also read this in gist format if that works better for you [1].

Nodes apply per-peer rate limits on _incoming_ onion messages that
should be relayed (e.g.
N per second with some burst tolerance). It is recommended to allow more
onion messages from
peers with whom you have channels, for example 10 per second when you
have a channel and 1 per second
when you don't.
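The per-peer limit with burst tolerance described above is naturally a token bucket; a minimal sketch (the rates mirror the 10/second and 1/second examples above, the burst sizes are arbitrary illustrations):

```python
import time

class TokenBucket:
    """Allow up to `rate` messages per second, with bursts up to `burst`."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # drop the message and send onion_message_drop

channel_peer_limit = TokenBucket(rate=10, burst=20)  # peer with a channel
no_channel_limit = TokenBucket(rate=1, burst=2)      # peer without a channel
```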

When relaying an onion message, nodes keep track of where it came from
(by using the `node_id` of
the peer who sent that message). Nodes only need the last such
`node_id` per outgoing connection,
which ensures the memory footprint is very small. Also, this data
doesn't need to be persisted.

Let's walk through an example to illustrate this mechanism:

* Bob receives an onion message from Alice that should be relayed to Carol
* After relaying that message, Bob stores Alice's `node_id` in its
per-connection state with Carol
* Bob receives an onion message from Eve that should be relayed to Carol
* After relaying that message, Bob replaces Alice's `node_id` with
Eve's `node_id` in its
per-connection state with Carol
* Bob receives an onion message from Alice that should be relayed to Dave
* After relaying that message, Bob stores Alice's `node_id` in its
per-connection state with Dave
* ...
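A sketch of Bob's bookkeeping from the walk-through above (plain Python; the function names are invented for illustration):

```python
# Bob's relay state: for each outgoing peer, the node_id of the last
# incoming peer whose onion message was relayed on that connection.
last_incoming = {}

def relay_onion_message(from_peer, to_peer):
    # Forward the message (not shown), then overwrite the stored sender:
    # only the *last* incoming node_id per outgoing connection is kept,
    # so memory stays O(number of connections) and needs no persistence.
    last_incoming[to_peer] = from_peer

def handle_onion_message_drop(from_peer):
    # An outgoing peer rate-limited us: find who last sent a message on
    # that link, so the drop can be relayed back to them.
    return last_incoming.get(from_peer)

# The example above:
relay_onion_message("alice", "carol")  # state: {"carol": "alice"}
relay_onion_message("eve", "carol")    # state: {"carol": "eve"}
relay_onion_message("alice", "dave")   # state: {"carol": "eve", "dave": "alice"}
```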

We introduce a new message that will be sent when dropping an incoming
onion message because it
reached rate limits:

1. type: 515 (`onion_message_drop`)
2. data:
   * [`rate_limited`:`u8`]
   * [`shared_secret_hash`:`32*byte`]

Whenever an incoming onion message reaches the rate limit, the
receiver sends `onion_message_drop`
to the sender. The sender looks at its per-connection state to find
where the message was coming
from and relays `onion_message_drop` to the last sender, halving their
rate limits with that peer.

If the sender doesn't overflow the rate limit again, the receiver
should double the rate limit
after 30 seconds, until it reaches the default rate limit again.
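The halve-then-recover dynamic is multiplicative decrease with timed doubling back to the default; a minimal sketch (the 30-second interval is the one proposed above, the default rate of 10/second is borrowed from the channel-peer example, and the class and field names are invented for illustration):

```python
DEFAULT_RATE = 10.0   # msgs/sec for a peer we have a channel with
RECOVERY_SECS = 30.0  # doubling interval when the peer behaves

class PeerRateLimit:
    def __init__(self):
        self.rate = DEFAULT_RATE
        self.last_overflow = 0.0

    def on_drop_relayed(self, now: float) -> None:
        """This peer caused an onion_message_drop: halve their limit."""
        self.rate /= 2
        self.last_overflow = now

    def maybe_recover(self, now: float) -> None:
        """No overflow for 30s: double back toward the default."""
        while (self.rate < DEFAULT_RATE
               and now - self.last_overflow >= RECOVERY_SECS):
            self.rate = min(DEFAULT_RATE, self.rate * 2)
            self.last_overflow += RECOVERY_SECS
```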

The flow will look like:

Alice                  Bob                  Carol
  |                     |                     |
  |    onion_message    |                     |
  |-------------------->|                     |
  |                     |    onion_message    |
  |                     |-------------------->|
  |                     | onion_message_drop  |
  |                     |<--------------------|
  | onion_message_drop  |                     |
  |<--------------------|                     |

The `shared_secret_hash` field contains a BIP 340 tagged hash of the
Sphinx shared secret of the
rate limiting peer (in the example above, Carol):

* `shared_secret_hash = SHA256(SHA256("onion_message_drop") ||
SHA256("onion_message_drop") || sphinx_shared_secret)`

This value is known by the node that created the onion message: if
`onion_message_drop` propagates
all the way back to them, it lets them know which part of the route is
congested, allowing them
to retry through a different path.
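A sketch of that derivation in Python (`hashlib` only; the shared secret here is a placeholder value, a real one comes from the Sphinx key exchange with the rate-limiting hop):

```python
import hashlib

def shared_secret_hash(sphinx_shared_secret: bytes) -> bytes:
    """BIP 340 tagged hash with the tag "onion_message_drop"."""
    tag = hashlib.sha256(b"onion_message_drop").digest()
    return hashlib.sha256(tag + tag + sphinx_shared_secret).digest()

# The message originator can precompute this per hop when building the
# onion, then match it against any onion_message_drop it receives to
# locate the congested hop.
h = shared_secret_hash(bytes.fromhex("11" * 32))
```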

Whenever there is some latency between nodes and many onion messages,
`onion_message_drop` may
be relayed to the incorrect incoming peer (since we only store the
`node_id` of the _last_ incoming
peer in our outgoing connection state). The following example highlights this:

 Eve                   Bob                  Carol
  |    onion_message    |                     |
  |-------------------->|    onion_message    |
  |    onion_message    |-------------------->|
  |-------------------->|    onion_message    |
  |    onion_message    |-------------------->|
  |-------------------->|    onion_message    |
                        |-------------------->|
Alice                   | onion_message_drop  |
  |    onion_message    |<--------------------|
  |-------------------->|    onion_message    |
  |                     |-------------------->|
  | onion_message_drop  |                     |
  |<--------------------|                     |

In this example, Eve is spamming but `onion_message_drop` is
propagated back to Alice instead.
However, this scheme will _statistically_ penalize the right incoming
peer (with a probability
depending on the volume of onion messages that the spamming peer is
generating compared to the
volume of legitimate onion messages).
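A toy simulation of that statistical attribution, under the simplifying assumption that the stored `node_id` belongs to the spammer with probability equal to the spammer's share of traffic on the link (the 90/10 split and sample size are arbitrary illustrations, not derived from the proposal):

```python
import random

def simulate(spam_fraction: float, n_drops: int, seed: int = 42) -> float:
    """Fraction of onion_message_drop messages relayed back to the spammer,
    when only the node_id of the *last* relayed message's sender is stored."""
    rng = random.Random(seed)
    blamed_spammer = 0
    for _ in range(n_drops):
        # The last relayed message on this link came from the spammer
        # with probability equal to their share of the traffic.
        if rng.random() < spam_fraction:
            blamed_spammer += 1
    return blamed_spammer / n_drops

print(simulate(0.9, 10_000))  # roughly 0.9: the spammer absorbs most drops
```

Even in this crude model the point is visible: the heavier the spammer's share of traffic, the more of the back-pressure lands on them rather than on honest peers.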

It is an interesting research problem to find formulas for those
probabilities to evaluate how
efficient this will be against various