Re: [Lightning-dev] Quick analysis of channel_update data

2019-02-18 Thread Fabrice Drouin
I'll start collecting and checking data again, but from what I see now, using our checksum extension still significantly reduces gossip traffic. I'm not saying that heuristics to reduce the number of updates cannot help, but I just don't think they should be our primary way of handling such traffic.
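The checksum idea referenced above can be sketched as follows: fingerprint a channel_update over everything except its signature and timestamp, so that two updates differing only in timestamp compare equal and need not be re-queried. The field names below follow the BOLT 7 channel_update message, but the serialization and the choice of CRC-32 are illustrative assumptions, not the exact wire format.

```python
# Hypothetical sketch: a timestamp-insensitive checksum for channel_update.
import zlib
from dataclasses import dataclass

@dataclass
class ChannelUpdate:
    channel_id: int
    timestamp: int
    flags: int
    cltv_expiry_delta: int
    htlc_minimum_msat: int
    fee_base_msat: int
    fee_proportional_millionths: int

def checksum(u: ChannelUpdate) -> int:
    # Deliberately exclude the timestamp (and any signature): an update that
    # only refreshes the timestamp produces the same checksum.
    payload = b"".join(
        v.to_bytes(8, "big")
        for v in (u.channel_id, u.flags, u.cltv_expiry_delta,
                  u.htlc_minimum_msat, u.fee_base_msat,
                  u.fee_proportional_millionths)
    )
    return zlib.crc32(payload)

u1 = ChannelUpdate(42, 1_546_300_800, 0, 144, 1000, 1000, 100)
u2 = ChannelUpdate(42, 1_546_905_600, 0, 144, 1000, 1000, 100)  # weekly refresh
assert checksum(u1) == checksum(u2)  # timestamp-only change: skip the query
```

A node comparing checksums this way can recognize the weekly keep-alive refreshes as redundant without fetching the full update.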

Re: [Lightning-dev] Quick analysis of channel_update data

2019-02-18 Thread Rusty Russell
BTW, I took a snapshot of our gossip store from two weeks back, which simply stores all gossip in order (compacting every week or so). channel_updates which updated existing channels: 17766 ... which changed *only* the timestamps: 12644 ... which were a week since the last: 7233 ... which only
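A quick check of the proportions in the counts quoted above (assuming each count is measured against the 17766 total; the categories may overlap in the original data):

```python
# Shares of Rusty's gossip-store counts -- figures from the message above.
total = 17_766            # channel_updates that updated existing channels
timestamp_only = 12_644   # changed *only* the timestamps
weekly = 7_233            # arrived a week since the last update

assert round(100 * timestamp_only / total) == 71  # ~71% changed nothing real
assert round(100 * weekly / total) == 41          # ~41% look like keep-alives
```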

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-20 Thread Fabrice Drouin
Additional info on channel_update traffic: Comparing daily backups of routing tables over the last 2 weeks shows that nearly all channels get at least a new update every day. This means that channel_update traffic is not primarily caused by nodes publishing new updates when channels are about to bec

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-08 Thread Rusty Russell
Christian Decker writes: > Assume that we have a network in which a node D receives the updates > from a node A through two or more separate paths: > > A --- B --- D > \--- C ---/ > > And let's assume that some channel of A (c_A) is flapping (not the ones > to B and C). A will send out two update

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-08 Thread Rusty Russell
Fabrice Drouin writes: > I think there may even be a simpler case where not replacing updates > will result in nodes not knowing that a channel has been re-enabled: > suppose you got 3 updates U1, U2, U3 for the same channel, U2 disables > it, U3 enables it again and is the same as U1. If you disc
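The U1/U2/U3 pitfall quoted above can be demonstrated with a small sketch: if a node de-duplicates updates by content alone (ignoring timestamps), the re-enabling update U3 looks identical to U1 and gets dropped, so peers stay stuck on the disabling update U2. The tuple layout is illustrative, not the wire format.

```python
# Hypothetical sketch of content-only de-duplication losing a re-enable.
def naive_dedup(updates, seen_contents):
    """Keep only updates whose content has not been seen before."""
    kept = []
    for ts, content in updates:
        if content not in seen_contents:
            seen_contents.add(content)
            kept.append((ts, content))
    return kept

u1 = (100, ("enabled", 1000))   # initial state
u2 = (200, ("disabled", 1000))  # channel goes down
u3 = (300, ("enabled", 1000))   # back up -- same content as u1!

kept = naive_dedup([u1, u2, u3], set())
assert kept == [u1, u2]  # u3 discarded: peers never learn of the re-enable
```

This is why the thread suggests only discarding the newer duplicate when it is recent (e.g. under a week old), rather than unconditionally.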

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-08 Thread Fabrice Drouin
On Tue, 8 Jan 2019 at 17:11, Christian Decker wrote: > > Rusty Russell writes: > > Fortunately, this seems fairly easy to handle: discard the newer > > duplicate (unless > 1 week old). For future more advanced > > reconstruction schemes (eg. INV or minisketch), we could remember the > > latest t

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-08 Thread Matt Corallo
I mentioned this on IRC, but note that the flapping is not just useless information to be discarded without consideration. An important use of routing data is providing a "good" subset to nodes like mobile clients that don't want all the bandwidth to stay fully in sync. A pretty good indicator o

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-08 Thread Christian Decker
Fabrice Drouin writes: > I think there may even be a simpler case where not replacing updates > will result in nodes not knowing that a channel has been re-enabled: > suppose you got 3 updates U1, U2, U3 for the same channel, U2 disables > it, U3 enables it again and is the same as U1. If you dis

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-08 Thread Christian Decker
Rusty Russell writes: >> But only 18 000 pairs of channel updates carry actual fee and/or HTLC >> value change. 85% of the time, we just queried information that we >> already had! > > Note that this can happen in two legitimate cases: > 1. The weekly refresh of channel_update. > 2. A node updated

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-07 Thread Rusty Russell
Fabrice Drouin writes: > Follow-up: here's more detailed info on the data I collected and > potential savings we could achieve: > > I made hourly routing table backups for 12 days, and collected routing > information for 17 000 channel ids. > > There are 130 000 different channel updates: on avera

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-04 Thread Fabrice Drouin
On Fri, 4 Jan 2019 at 04:43, ZmnSCPxj wrote: > > - in set reconciliation schemes: we could reconcile [channel id | > > timestamp | checksum] first > > Perhaps I misunderstand how set reconciliation works, but --- if timestamp is > changed while checksum is not, then it would still be seen a

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-03 Thread ZmnSCPxj via Lightning-dev
Good morning, > - in set reconciliation schemes: we could reconcile [channel id | > timestamp | checksum] first Perhaps I misunderstand how set reconciliation works, but --- if timestamp is changed while checksum is not, then it would still be seen as a set difference and still require fu
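ZmnSCPxj's objection can be sketched directly: if the reconciled item is the full triple (channel id, timestamp, checksum), then a timestamp-only refresh still changes the item and surfaces as a set difference, whereas reconciling on (channel id, checksum) alone would not. Values below are illustrative.

```python
# Hypothetical sketch: timestamp in the reconciled item makes a
# timestamp-only refresh look like a genuine set difference.
old = {(42, 100, 0xDEADBEEF)}
new = {(42, 200, 0xDEADBEEF)}          # same checksum, refreshed timestamp

# With timestamps in the item, reconciliation reports a difference:
assert old.symmetric_difference(new)   # non-empty -> full update exchanged

# Dropping the timestamp from the reconciled item hides the refresh:
def strip_ts(items):
    return {(cid, ck) for cid, _, ck in items}

assert not strip_ts(old).symmetric_difference(strip_ts(new))  # empty -> nothing sent
```

The trade-off is that reconciling without timestamps also hides legitimate refreshes, which is exactly the tension discussed in this sub-thread.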

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-03 Thread Fabrice Drouin
Follow-up: here's more detailed info on the data I collected and potential savings we could achieve: I made hourly routing table backups for 12 days, and collected routing information for 17 000 channel ids. There are 130 000 different channel updates: on average each channel has been updated 8 t
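The per-channel average quoted above follows from the two collected figures:

```python
# Checking the averages in Fabrice's collected data (figures from the message).
channels = 17_000   # distinct channel ids observed
updates = 130_000   # distinct channel updates over 12 days

assert round(updates / channels) == 8  # ~7.6, i.e. roughly 8 updates/channel
```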

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-02 Thread Fabrice Drouin
On Wed, 2 Jan 2019 at 18:26, Christian Decker wrote: > > For the ones that flap with a period that is long enough for the > disabling and enabling updates being flushed, we are presented with a > tradeoff. IIRC we (c-lightning) currently hold back disabling > `channel_update`s until someone actual

Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-02 Thread Christian Decker
Hi Fabrice, happy new year to you too :-) Thanks for taking the time to collect that information. It's very much in line with what we were expecting in that most of the updates come from flapping channels. Your second observation that some updates only change the timestamp is likely due to the st

[Lightning-dev] Quick analysis of channel_update data

2019-01-02 Thread Fabrice Drouin
Hello All, and Happy New Year! To understand why there is a steady stream of channel updates, even when fee parameters don't seem to actually change, I made hourly backups of the routing table of one of our nodes, and compared these routing tables to see what exactly was being modified. It turns