Instead of trying to make sure everyone’s gossip acceptance matches exactly, 
which, as you point out, seems like a quagmire, why not (a) do a sync on startup 
and (b) do syncs of the *new* things? This way you aren't stuck staring at the 
same channels every time you do a sync. Sure, if you’re rejecting a large % of 
channel updates in total you’re gonna end up hitting degenerate cases, but we 
can consider tuning the sync frequency if that becomes an issue.
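
Concretely, (b) is more or less the existing gossip_timestamp_filter flow from
BOLT 7: remember when you last finished a sync and only ask peers for gossip
newer than that. A rough sketch, assuming we persist that timestamp somewhere;
the function name and the slack constant are made up:

/* Minimal sketch of "sync only the new stuff", assuming we persist the
 * timestamp of the last completed sync. The struct mirrors BOLT 7's
 * gossip_timestamp_filter fields; everything else is illustrative. */
#include <stdint.h>
#include <string.h>

struct gossip_timestamp_filter {
    uint8_t chain_hash[32];   /* genesis hash of the chain in use */
    uint32_t first_timestamp; /* only send gossip with timestamp >= this */
    uint32_t timestamp_range; /* ...and < first_timestamp + this */
};

static void build_startup_filter(struct gossip_timestamp_filter *f,
                                 const uint8_t chain_hash[32],
                                 uint32_t last_sync_timestamp)
{
    /* Illustrative: back off 10 minutes to allow for clock skew between peers. */
    const uint32_t SLACK = 600;

    memcpy(f->chain_hash, chain_hash, 32);
    if (last_sync_timestamp == 0)
        f->first_timestamp = 0;          /* never synced: take the full backlog */
    else
        f->first_timestamp = last_sync_timestamp > SLACK
                                 ? last_sync_timestamp - SLACK : 0;
    f->timestamp_range = UINT32_MAX;     /* then keep streaming anything newer */
}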

Like eclair, we don’t bother to rate limit and don’t see any issues with it, 
though we will skip relaying outbound updates if we’re saturating outbound 
connections.

> On Apr 14, 2022, at 17:06, Alex Myers <a...@endothermic.dev> wrote:
> 
> 
> Hello lightning developers,
> 
> I’ve been investigating set reconciliation as a means to reduce bandwidth and 
> redundancy of gossip message propagation. This builds on some earlier work 
> from Rusty using the minisketch library [1]. The idea is that each node will 
> build a sketch representing its own gossip set. Alice's node will encode and 
> transmit this sketch to Bob’s node, where it will be merged with his own 
> sketch, and the differences produced. These differences should ideally be 
> exactly the latest missing gossip of both nodes. Due to size constraints, the 
> set differences will necessarily be encoded, but Bob’s node will be able to 
> identify which gossip Alice is missing, and may then transmit exactly those 
> messages.
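
For anyone who wants to poke at the mechanics, the Alice/Bob exchange above
maps pretty directly onto the minisketch C API [1]. A toy example along the
lines of the library's own README; the field size, capacity, and small-integer
"gossip IDs" are illustrative, and real elements would presumably be something
like a short hash of scid, direction and timestamp:

/* Toy Alice/Bob reconciliation using the minisketch C API [1].
 * 12-bit elements, capacity for 4 differences: fixed sketch size of
 * 12 * 4 / 8 = 6 bytes on the wire regardless of set size. */
#include <minisketch.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Alice: build a sketch of her gossip set. */
    minisketch *alice = minisketch_create(12, 0, 4);
    for (uint64_t i = 3000; i < 3010; ++i) minisketch_add_uint64(alice, i);

    /* Alice serializes the sketch and sends it to Bob. */
    size_t sersize = minisketch_serialized_size(alice);
    unsigned char *buf = malloc(sersize);
    minisketch_serialize(alice, buf);
    minisketch_destroy(alice);

    /* Bob: his own sketch, overlapping but not identical. */
    minisketch *bob = minisketch_create(12, 0, 4);
    for (uint64_t i = 3002; i < 3012; ++i) minisketch_add_uint64(bob, i);

    /* Bob deserializes Alice's sketch and merges it into his own. */
    minisketch *alice_rx = minisketch_create(12, 0, 4);
    minisketch_deserialize(alice_rx, buf);
    minisketch_merge(bob, alice_rx);

    /* Decoding the merged sketch yields the symmetric difference,
     * i.e. exactly the gossip one side has and the other lacks. */
    uint64_t diff[4];
    ssize_t n = minisketch_decode(bob, 4, diff);
    if (n < 0) {
        /* More differences than the sketch capacity: fall back to a larger
         * sketch or some other sync mechanism. */
    } else {
        for (ssize_t i = 0; i < n; ++i)
            printf("%llu is in only one of the sets\n",
                   (unsigned long long)diff[i]);
    }
    minisketch_destroy(alice_rx);
    minisketch_destroy(bob);
    free(buf);
    return 0;
}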
> 
> This process is relatively straightforward, with the caveat that the sets 
> must otherwise match very closely (each sketch has a maximum capacity for 
> differences). The difficulty here is that each node and lightning 
> implementation may have its own rules for gossip acceptance and propagation. 
> Depending on their gossip partners, not all gossip may propagate to the 
> entire network.
> 
> Core-lightning implements rate limiting for incoming channel updates and node 
> announcements. The default rate limit is 1 per day, with a burst of 4. I 
> analyzed my node’s gossip over a 14-day period and found that, of all 
> publicly broadcasting half-channels, 18% fell afoul of our 
> spam-limiting rules at least once. [2]
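
(For illustration only, not core-lightning's actual code: a "1 per day with a
burst of 4" rule is essentially a per-half-channel token bucket like the
sketch below. The constants mirror the stated defaults; the names and the
floating-point bookkeeping are made up.)

/* Illustrative per-half-channel token bucket: refills at 1 token/day,
 * caps at a burst of 4. Initialize with tokens = BURST, last_refill = now. */
#include <stdbool.h>
#include <stdint.h>

#define TOKENS_PER_DAY  1.0
#define BURST           4.0
#define SECONDS_PER_DAY 86400.0

struct update_limiter {
    double tokens;        /* available update "credits" */
    uint64_t last_refill; /* unix time we last recomputed the bucket */
};

static bool accept_update(struct update_limiter *l, uint64_t now)
{
    /* Refill proportionally to elapsed time, capped at the burst size. */
    l->tokens += (now - l->last_refill) * (TOKENS_PER_DAY / SECONDS_PER_DAY);
    if (l->tokens > BURST)
        l->tokens = BURST;
    l->last_refill = now;

    if (l->tokens < 1.0)
        return false;     /* rate-limited: treated as spam, not relayed */
    l->tokens -= 1.0;
    return true;
}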
> 
> Picking several offending channel ids and digging further, I found that the 
> majority of these appear to be flapping due to Tor or otherwise intermittent 
> connections. Well-connected nodes may be more susceptible to this due to more 
> frequent routing attempts, and to failures resulting in a returned channel 
> update (which otherwise might not have been broadcast). A slight relaxation 
> of the rate limit resolves the majority of these cases.
> 
> A smaller subset of channels broadcast frequent channel updates with minor 
> adjustments to htlc_maximum_msat and fee_proportional_millionths parameters. 
> These nodes appear to be power users, with many channels and large balances. 
> I assume this is automated channel management at work.
> 
> Core-Lightning has updated rate-limiting in the upcoming release to achieve a 
> higher acceptance of incoming gossip; however, it seems that a broader 
> discussion of rate limits may now be worthwhile. A few immediate ideas:
> - A common listing of current default rate limits across lightning network 
> implementations.
> - Internal checks of RPC input to limit or warn of network propagation issues 
> if certain rates are exceeded.
> - A commonly adopted rate-limit standard.
> 
> My aim is a set reconciliation gossip type, which will use a common, simple 
> heuristic to accept or reject a gossip message. (Think one channel update per 
> block, or perhaps one per block_height << 5.) See my github for my current 
> draft. [3] This solution allows tighter consensus, yet suffers from the same 
> problem as the original anti-spam measures – it remains somewhat arbitrary. I 
> would like to start a conversation regarding gossip propagation, 
> channel_update and node_announcement usage, and perhaps even bandwidth goals 
> for syncing gossip in the future (how about a million channels?) This would 
> aid in the development of gossip set reconciliation, but could also benefit 
> current node connection and routing reliability more generally.
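
To make the heuristic concrete, here is one way to read it as code. I'm
assuming "one per block_height << 5" means bucketing by block height so that
only one update per half-channel survives per 32-block window, and that
"height" is whatever the scheme ties an update to (e.g. the chain tip when it
was received); both are assumptions on my part, and BUCKET_SHIFT = 0 gives the
plain one-per-block variant:

/* Sketch of a height-bucketed acceptance rule for the reconciliation set:
 * keep at most one channel_update per half-channel per bucket of blocks.
 * BUCKET_SHIFT = 5 (32-block buckets) is my reading of the "block_height"
 * variant; 0 would be one update per block. */
#include <stdbool.h>
#include <stdint.h>

#define BUCKET_SHIFT 5

static bool accept_into_recon_set(bool have_stored_update,
                                  uint32_t stored_update_height,
                                  uint32_t new_update_height)
{
    if (!have_stored_update)
        return true;
    /* Accept only if the new update lands in a later bucket than the one
     * we already keep for this half-channel. */
    return (new_update_height >> BUCKET_SHIFT)
         > (stored_update_height >> BUCKET_SHIFT);
}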
> 
> Thanks,
> Alex
> 
> [1] https://github.com/sipa/minisketch
> [2] 
> https://github.com/endothermicdev/lnspammityspam/blob/main/sampleoutput.txt
> [3] 
> https://github.com/endothermicdev/lightning-rfc/blob/gossip-minisketch/07-routing-gossip.md#set-reconciliation
> 
_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
