I mentioned this on IRC, but note that the flapping is not just useless
information to be discarded without consideration. An important use of
routing data is providing a "good" subset to nodes like mobile clients that
don't want to spend the bandwidth to stay fully in sync. A pretty good
indicator of a useless channel would be flapping, given that it's probably
not very reliable for routing.
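
To illustrate the kind of filtering I have in mind, here's a rough sketch
(my own toy code, not from any implementation): track how often a channel's
disable bit toggles over a window, and let a light client skip the ones
that flip too often.

from collections import defaultdict

FLAP_WINDOW = 24 * 3600   # only look at the last day of updates
MAX_TOGGLES = 4           # more enable<->disable flips than this => "flappy"

class FlapTracker:
    def __init__(self):
        # short_channel_id -> list of (timestamp, disabled) within the window
        self.history = defaultdict(list)

    def record(self, scid, timestamp, disabled):
        hist = self.history[scid]
        hist.append((timestamp, disabled))
        cutoff = timestamp - FLAP_WINDOW
        self.history[scid] = [(t, d) for (t, d) in hist if t >= cutoff]

    def is_flappy(self, scid):
        flags = [d for (_, d) in self.history[scid]]
        toggles = sum(1 for a, b in zip(flags, flags[1:]) if a != b)
        return toggles > MAX_TOGGLES

# A light client could then simply skip channels where tracker.is_flappy(scid)
# is True when assembling its local routing view.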

I'm somewhat unconvinced that we should be optimizing for as little bandwidth 
use as possible here, though wins that don't lose information are nice.

Matt

> On Jan 8, 2019, at 16:28, Christian Decker <decker.christ...@gmail.com> wrote:
> 
> Fabrice Drouin <fabrice.dro...@acinq.fr> writes:
> 
>> I think there may even be a simpler case where not replacing updates
>> will result in nodes not knowing that a channel has been re-enabled:
>> suppose you got three updates U1, U2, U3 for the same channel, where U2
>> disables it and U3 enables it again with the same content as U1. If you
>> discard U3 and just keep U1, and your peer has U2, how will you tell
>> them that the channel has been enabled again? Unless "discard" here
>> means keep the update but don't broadcast it?
> 
> Excellent point, that's a simpler example of how it could break down.
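
To make that concrete, here's a toy model of the breakdown (my own sketch;
it assumes updates are replaced purely by timestamp, which glosses over the
actual gossip rules):

# Toy model of update replacement by timestamp (illustration only; field
# names are simplified, not the actual channel_update wire format).
U1 = {"scid": 42, "timestamp": 100, "disabled": False}  # channel enabled
U2 = {"scid": 42, "timestamp": 200, "disabled": True}   # channel disabled
U3 = {"scid": 42, "timestamp": 300, "disabled": False}  # re-enabled, same policy as U1

def newer(a, b):
    """Keep whichever update has the higher timestamp."""
    return a if a["timestamp"] >= b["timestamp"] else b

# Suppose we "discard" U3 because its content matches U1, keeping only U1.
our_latest = U1
peer_latest = U2  # the peer only ever received the disabling update

# When we sync, the peer compares timestamps and keeps the newer update:
assert newer(our_latest, peer_latest) is U2
# The peer still believes the channel is disabled. Had we kept U3,
# newer(U3, U2) would be U3 and the re-enable would propagate.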
> 
>>> I think all the bolted-on things are pretty much overkill at this
>>> point. It is unlikely that we will get any consistency in our views of
>>> the routing table, but that's actually not needed to route, and we
>>> should consider this a best-effort gossip protocol anyway. If the
>>> routing protocol is too chatty, we should make efforts towards local
>>> policies at the senders of the updates to reduce the number of flapping
>>> updates, not build in-network deduplication. Maybe something like
>>> "eager-disable" and "lazy-enable" is what we should go for, in which
>>> disables are sent right away and enables are put on an exponential
>>> backoff timeout (after all, what use are flappy nodes for routing?).
>> 
>> Yes, there are probably heuristics that would help reduce gossip
>> traffic, and I see your point, but I was thinking about doing the
>> opposite: "eager-enable" and "lazy-disable", because from a sender's
>> point of view, trying to use a disabled channel is better than ignoring
>> an enabled channel.
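
For what it's worth, either policy pair is just a small sender-side state
machine. Here's a rough sketch of the eager-disable / lazy-enable variant
(names and timings are invented; eager-enable / lazy-disable would simply
move the backoff to the disable side instead):

import time

# Rough sketch of an "eager-disable / lazy-enable" sender policy.
# All names and numbers here are made up for illustration.
BASE_ENABLE_DELAY = 60          # seconds before the first re-enable goes out
MAX_ENABLE_DELAY = 6 * 3600     # cap on the exponential backoff

class ChannelGossipPolicy:
    def __init__(self):
        self.enable_delay = BASE_ENABLE_DELAY
        self.pending_enable_at = None

    def on_channel_down(self, broadcast):
        # Eager: announce the disable right away.
        broadcast(disabled=True)
        self.pending_enable_at = None

    def on_channel_up(self):
        # Lazy: schedule the enable instead of broadcasting it immediately,
        # and back off further each time the channel flaps.
        self.pending_enable_at = time.time() + self.enable_delay
        self.enable_delay = min(self.enable_delay * 2, MAX_ENABLE_DELAY)

    def tick(self, broadcast):
        # Called periodically; the enable only goes out once the backoff has
        # expired and the channel hasn't gone down again in the meantime.
        if self.pending_enable_at and time.time() >= self.pending_enable_at:
            broadcast(disabled=False)
            self.pending_enable_at = None
            # (A real policy would presumably also decay enable_delay again
            # after a long stable period.)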
> 
> That depends on what you are trying to optimize. Your solution keeps
> more channels in enabled mode, potentially increasing failures due to
> channels being unavailable. I was approaching it from the other side:
> since failures are on the critical path of the payment flow, they
> result in longer delays and many more retries, which I think is
> annoying too. It probably depends on the network structure. If the
> fanout from the endpoints is large, missing some channels shouldn't be
> a problem, in which case the many failures delaying your payment weigh
> more than the risk of not finding a route (favoring eager-disable &
> lazy-enable). If, on the other hand, we are really relying on a huge
> number of flaky connections, then eager-enable & lazy-disable might get
> lucky and get the payment through. I'm hoping the network will have the
> former structure, since with the latter we'd have really unpredictable
> behavior anyway.
> 
> We'll probably gain more insight once we start probing the network. My
> expectation is that today's network is a baseline whose resiliency and
> redundancy will improve over time, hopefully swinging the tradeoff in
> favor of speed gains over bare routability.
> 
> Cheers,
> Christian

_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
