Hi Joost,

If I understand circuitbreaker correctly, it adds new "dynamic" HTLC slot limits, as opposed to the "static" ones declared to your counterparty during channel opening (within the protocol limit of 483). On top of that, you can rate-limit HTLC forwards based on the incoming source.
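To make sure we're talking about the same mechanism, here is a rough sketch of what I understand the per-peer policy to be: a hypothetical model of my own, not circuitbreaker's actual interface (the names `PeerPolicy`, `max_pending`, and the token-bucket parameters are all made up for illustration):

```python
import time

class PeerPolicy:
    """Hypothetical per-peer HTLC policy: a dynamic pending-slot limit
    below the static 483-slot protocol limit, plus a token-bucket rate
    limit on incoming HTLC forwards from this peer."""

    def __init__(self, max_pending=30, rate_per_sec=0.5, burst=10):
        assert max_pending <= 483  # static per-commitment protocol limit
        self.max_pending = max_pending
        self.pending = 0
        self.rate = rate_per_sec
        self.tokens = float(burst)
        self.burst = float(burst)
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def admit(self):
        """Return True if a new incoming HTLC from this peer may be forwarded."""
        self._refill()
        if self.pending >= self.max_pending or self.tokens < 1:
            return False  # fail back (or queue) the HTLC
        self.tokens -= 1
        self.pending += 1
        return True

    def settle(self):
        """Called when a pending HTLC is resolved, freeing its slot."""
        self.pending -= 1
```

The point being that both limits are purely local policy, invisible to the rest of the network.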
Effectively, this scheme could penalize your own routing hop, as HTLC senders' routing algorithms would likely avoid the hop, in the absence of a new gossip message advertising the limits/rates. Further, it sounds like, for the measure to be robust, a circuit-breaking policy should be applied recursively by your counterparties across their own network topology. Otherwise, I think you'll have the non-constrained links targeted as entry points into the rate-limited, "jamming-safe" subset of the graph.

The limits could be based on HTLC values, e.g. X slots for HTLCs of value <1k sats, Y slots for HTLCs of value <100k sats, and the remaining Z slots for HTLCs of value <200k sats. IIRC, this jamming countermeasure has been implemented by Eclair [0] and is discussed in more detail here [1]. While it increases the liquidity cost for an attacker to jam the high-value slots, it comes with the major downside of lowering the cost of jamming the low-value slots. Betting on an increasing bitcoin price, all other things equal, this makes simple payments from average users more and more vulnerable.

Beyond that, I think this solution belongs to the reputation-based family of solutions, where reputation is local and enforced through rate-limiting (from my understanding). I would say there is no economic proportionality enforced between the rate-limiting and the cost to an attacker. A jamming attacker could open new channels during periods of low fees at the edges of the graph, and still launch attacks against distant hops by splitting the jamming traffic across many sources, thereby avoiding force-closures (e.g. 230 HTLCs from channel Mallory, 253 HTLCs from channel Malicia). Even force-closure in the case of observed jamming isn't that obvious, as the traffic could still look economically opportunistic locally while only being a jam on a distant hop. So I think the economic equilibrium and risk structure of this scheme are still uncertain.
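To make the value-tiered slot limits above concrete, a minimal sketch (the thresholds and slot counts X/Y/Z are made-up illustrative numbers, not Eclair's actual bucket configuration):

```python
# Illustrative value-tiered slot allocation: each incoming HTLC is assigned
# to the bucket whose value ceiling covers it, if that bucket still has a
# free slot. Ceilings and slot counts are hypothetical (summing to 483).
BUCKETS = [
    # (value ceiling in sats, slots reserved for this tier)
    (1_000, 50),      # X slots for HTLCs of value <1k sats
    (100_000, 100),   # Y slots for HTLCs of value <100k sats
    (200_000, 333),   # Z remaining slots for HTLCs of value <200k sats
]

def assign_slot(amount_sat, used):
    """Return the bucket index for this HTLC, or None if it must be failed.
    `used` maps bucket index -> slots currently occupied."""
    for i, (ceiling, slots) in enumerate(BUCKETS):
        if amount_sat < ceiling:
            if used.get(i, 0) < slots:
                return i
            # Tier full: note how cheap it is to fill the low-value tier.
            return None
    return None  # above all ceilings
```

This makes the asymmetry visible: jamming the first tier costs the attacker under 50k sats of liquidity, while honest low-value payments are the ones failed.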
However, I think the mode of queuing HTLCs is still valuable in itself, independently of jamming, either a) to increase the routing privacy of HTLCs (e.g. a "delay my HTLC" option [2]), maybe with a double opt-in of both senders/hops, or b) as a congestion-control mechanism where you have >100% honest incoming HTLC traffic and would like to earn routing fees on all of it, within the limits of what the outgoing CLTVs allow you. A more advanced idea could be based on statistics collection, sending back-pressure messages or HTLC scheduling information to the upstream hops. Say in the future we have more periodic payments; those could be scheduled during periods of low congestion.

So I wonder if we don't have two (or even more) problems when we think about jamming: the first one, the HTLC forward "counterparty risk" (the real jamming), and the other one, the congestion and scheduling of efficient HTLC traffic, with some interdependencies between them of course.

On experimenting with circuitbreaker, I don't know which HTLC-intercepting interface it expects; we still have a rudimentary one on the LDK side, only supporting the JIT channels use-case [3].

Best,
Antoine

[0] https://github.com/ACINQ/eclair/pull/2330
[1] https://jamming-dev.github.io/book/about.html
[2] https://github.com/lightning/bolts/issues/1008
[3] https://github.com/lightningdevkit/rust-lightning/pull/1835

On Fri, Dec 2, 2022 at 13:00, Joost Jager <joost.ja...@gmail.com> wrote:

> A simple but imperfect way to deal with channel jamming and spamming is to install a lightning firewall such as circuitbreaker [1]. It allows you to set limits like a maximum number of pending htlcs (fight jamming) and/or a rate limit (fight spamming). Incoming htlcs that exceed one of the limits are failed back.
>
> Unfortunately there are problems with this approach. Failures probably lead to extra retries which increases the load on the network as a whole.
> Senders are also taking note of the failure, penalizing you and favoring other nodes that do not apply limits. With a large part of the network applying limits, it will probably work better because misbehaving nodes have fewer opportunities to affect distant nodes. Then it also becomes less likely that limits are applied to traffic coming from well-behaving nodes, and the reputation of routing nodes isn’t degraded as much.
>
> But how to get to the point where restrictions are applied generally? Currently there isn’t too much of a reason for routing nodes to constrain their peers, and as explained above it may even be bad for business.
>
> Instead of failing, an alternative course of action for htlcs that exceed a limit is to hold and queue them. For example, if htlcs come in at a high rate, they’ll just be stacking up on the incoming side and are gradually forwarded when their time has come.
>
> An advantage of this is that a routing node’s reputation isn’t affected because there are no failures. This however may change in the future with fat errors [2]. It will then become possible for senders to identify slow nodes, and the no-penalty advantage may go away.
>
> A more important effect of holding is that the upstream nodes are punished for the bad traffic that they facilitate. They see their htlc slots occupied and funds frozen. They can’t coop close, and a force-close may be expensive depending on the number of htlcs that materialize on the commitment transaction. This could be a reason for them to take a careful look at the source of that traffic, and also start applying limits. Limits propagating recursively across the network and pushing bad senders into corners where they can’t do much harm anymore. It’s sort of paradoxical: jamming channels to stop jamming.
>
> One thing to note is that routing nodes employing the hold strategy are potentially punishing themselves too. If they are the initiator of a channel with many pending htlcs, the commit fee for them to pay can be high in the case of a force-close. They do not need to sweep the htlcs that were extended by their peer, but still. One way around this is to only use the hold strategy for channels that the routing node did not initiate, and use the fail action or no limit at all for self-initiated channels.
>
> Interested to hear opinions on the idea. I’ve also updated circuitbreaker with a queue mode for anyone willing to experiment with it [3].
>
> [1] https://github.com/lightningequipment/circuitbreaker
> [2] https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-October/003723.html
> [3] https://github.com/lightningequipment/circuitbreaker/pull/14
>
> _______________________________________________
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
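P.S. For anyone wanting to reason about the hold-and-queue mode discussed in this thread, a minimal sketch of the core constraint: held HTLCs must still respect their outgoing CLTV. This is my own hypothetical model, not circuitbreaker's queue implementation; the class name, `min_cltv_margin`, and the forward/fail return values are all illustrative:

```python
import heapq

class HtlcQueue:
    """Hypothetical hold-and-queue: incoming HTLCs above the forwarding
    rate are held and released most-urgent-first, but an HTLC whose
    outgoing CLTV margin has shrunk too far is failed back instead."""

    def __init__(self, min_cltv_margin=12):
        self.min_cltv_margin = min_cltv_margin  # blocks we refuse to dip below
        self._heap = []  # (outgoing_cltv_expiry, seq, htlc_id)
        self._seq = 0

    def hold(self, htlc_id, outgoing_cltv_expiry):
        heapq.heappush(self._heap, (outgoing_cltv_expiry, self._seq, htlc_id))
        self._seq += 1

    def release(self, current_height):
        """Pop the most urgent held HTLC: forward it if enough CLTV margin
        remains, otherwise fail it back. Returns None if the queue is empty."""
        if not self._heap:
            return None
        expiry, _, htlc_id = heapq.heappop(self._heap)
        if expiry - current_height >= self.min_cltv_margin:
            return ("forward", htlc_id)
        return ("fail", htlc_id)  # held too long; failing back is safer
```

The queue depth a hop can sustain is bounded by exactly the "what the outgoing CLTV allows you" limit mentioned above.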