Hi Rene,

thanks a lot for your contribution! This is exactly what I needed to
start coding again :) I intend to release a somewhat usable version of
your approach in lnd-manageJ [1] soon. Preliminary tests indicate that
the results (i.e. MPPs to try) can be computed in less than a second,
which is great!

Important remark: my code can only be used for real MPPs once a usable
MPP gRPC call is made available, see lnd issue #5746 [2].

While working on my implementation, a few questions came to mind:

1) What's the reasoning behind combining parallel channels?

I agree that parallel channels make things a lot more complicated, but I
also see the benefit from a node operator's point of view. That being
said, wouldn't it suffice to treat parallel channels individually?

1.1) A payment of size 2 needs to be split into 1+1 to fit through
parallel channels of size 1+1. Combining the 1+1 channels into a virtual
channel of size 2 only complicates the code that has to come up with
an MPP that doesn't over-saturate the actual channels. On the other
hand, I don't think the probability for the virtual channel of size 2 is
more realistic than reasoning about two individual channels and their
probabilities - see the quick sanity check below. Please prove me
wrong? :)
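
Using the uniform-liquidity success probability P(X >= a) =
(c + 1 - a) / (c + 1) from your paper (assuming I'm applying it
correctly):

    two channels of capacity 1, sending 1 sat through each:
        P = (1 + 1 - 1)/(1 + 1) * (1 + 1 - 1)/(1 + 1) = 1/2 * 1/2 = 1/4
    one virtual channel of capacity 2, sending 2 sats:
        P = (2 + 1 - 2)/(2 + 1) = 1/3

So the two models disagree, presumably because the virtual channel
ignores that the liquidity is split into two independently-distributed
halves.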

1.2) The Mission Control information provided by lnd can be used to
bound the available balance on each of the parallel channels. If
we know that node A isn't able to forward N sats to node B, we can treat
all parallel channels between A and B (in that direction) as having a
capacity of at most N-1 sats. What would this look like if we combined
the parallel channels into a virtual one? Note that it may still be
possible to route two individual payments/onions of N-1 sats each from A
to B, given two parallel channels with that many sats on A's side.
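
Per channel, the clamping I have in mind is trivial - a minimal sketch
in Java, with made-up names (this is not actual lnd-manageJ code):

    class MissionControlClamp {
        // A failed HTLC of failedAmountSat from A to B implies at most
        // failedAmountSat - 1 sats of liquidity on A's side of EACH
        // parallel channel, so each channel is clamped individually.
        static long effectiveCapacitySat(long channelCapacitySat,
                                         long failedAmountSat) {
            return Math.min(channelCapacitySat, failedAmountSat - 1);
        }
    }

For a combined virtual channel I don't see an equally simple rule.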

2) Optimal Piecewise Linearization

See Twitter [3].

Is it worth cutting a channel into pieces of different sizes, instead
of just having (as per your example) 5 pieces of the same size? If it
makes a noticeable difference, adding some complexity to the code might
be worth it.

3) Size of Piecewise Linearization

My gut feeling is that cutting a 1 BTC channel into 5 pieces is
different from cutting a 0.01 BTC channel into 5 pieces. Would it make
sense to use different values of N depending on the channel size?
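
To make questions 2 and 3 more concrete, here's a toy Java sketch (my
code, not from your paper) that prints the per-piece unit costs, i.e.
the average slopes of the convex cost f(a) = -log((c + 1 - a) / (c + 1))
over N equal pieces:

    class PieceSlopes {
        // negative log of the uniform success probability
        static double f(double amount, double capacity) {
            return -Math.log((capacity + 1 - amount) / (capacity + 1));
        }

        public static void main(String[] args) {
            int n = 5;
            // 0.01 BTC and 1 BTC, in sats
            for (long c : new long[]{1_000_000L, 100_000_000L}) {
                double pieceSize = (double) c / n;
                for (int i = 1; i <= n; i++) {
                    double slope = (f(i * pieceSize, c)
                            - f((i - 1) * pieceSize, c)) / pieceSize;
                    System.out.printf("c=%d, piece %d: unit cost %.3e%n",
                            c, i, slope);
                }
            }
        }
    }

Playing with this, the early slopes scale like 1/c, but the last piece
(which ends at full depletion) also picks up a log(c) factor, so the
shape isn't quite capacity-independent - which is what prompted
question 3.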

4) Leftovers after Piecewise Linearization

If I cut some channel into N pieces, I might end up with up to N-1 sats
that aren't part of any of the N pieces, effectively making the channel
look smaller than it is. For smaller values of N that's obviously not an
issue (given the uncertainty we're dealing with), but it might be more
problematic if quantization is used with larger values. Any thoughts on
this?
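
A concrete example, assuming a hypothetical quantization of 10,000 sats:

    capacity 123,456 sats -> floor(123,456 / 10,000) = 12 units
                             (3,456 sats already lost to quantization)
    N = 5 pieces          -> piece size floor(12 / 5) = 2 units,
                             leftover 12 - 5 * 2 = 2 units
    invisible liquidity   -> 2 * 10,000 + 3,456 = 23,456 sats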

5) Fees (and other properties?)

How can we integrate fees into the cost function? I must admit, I
haven't even thought about that yet. A copy-paste answer would be great,
though! :) Maybe it's also a good idea to punish channels based on their
CLTV delta? Ratio of enabled channels? Age? Manual punishment score? ...
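
My naive guess - please correct me - is a linear combination per
linearized piece, with some trade-off factor. A minimal Java sketch, all
names (including the knob mu) made up by me:

    class CombinedCost {
        // Mix the uncertainty unit cost with a fee term, scaled by a
        // trade-off factor mu. CLTV delta, age, manual scores etc.
        // could be added as further weighted terms in the same way.
        static long combinedUnitCost(long uncertaintyUnitCost,
                                     long feeRatePpm, long mu) {
            return uncertaintyUnitCost + mu * feeRatePpm;
        }
    }

Is that roughly the idea, or is there more to it?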

6) Non-Zero Base Fee

See Twitter [4].

According to Stefan [5] it should be possible to integrate ZmnSCPxj's ideas
to make this work with non-zero base fees. How?
A simpler approach is sketched on Twitter [6].

7) Private Channels

[very niche topic, not really that interesting nor urgent]

I'm a fan of adding private channels to provide more outbound liquidity,
mainly to reduce gossip and hide my intentions. If my total liquidity to
some peer is below the amount announced in public channels, I don't see
any meaningful complication. However, I might have a public channel of
size N and several private channels bringing my local liquidity to some
value >N. It's rather obvious that not announcing this fact is a bad
idea, as any #pickhardtpayments implementation would assume I have
between 0 and N sats on my side. Assuming I'm willing to accept this
tradeoff,
do you see other complications or issues with hidden liquidity?

My gut feeling is that this isn't an issue at all, as channel balances
change all the time, which is something the algorithm already has to
deal with.

8) Quality of Approximation

There are some problems in computer science that are hard/impossible to
approximate, in the sense that any kind of deviation from the optimum
could cause the computed results to be extremely bad. Do you have some
idea (or proof) that your kind of approximation isn't causing a major
issue? I guess a piecewise linearization with an infinite number of
pieces corresponds to the optimal result. Given a finite number of
pieces, how large is the difference to the optimum?
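
For what it's worth, the standard error bound for a secant-line
approximation of a smooth convex f on a piece of width h is

    max |f(a) - linear(a)| <= h^2 / 8 * max |f''(a)|

and for f(a) = -log((c + 1 - a) / (c + 1)) we have
f''(a) = 1 / (c + 1 - a)^2, which blows up as a approaches c. I have no
idea how this per-piece error propagates through the min-cost flow
computation, though - that's the part I'm really asking about.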

Bye,
Carsten

1: https://github.com/C-Otto/lnd-manageJ/issues/6
2: https://github.com/lightningnetwork/lnd/issues/5746
3: https://twitter.com/c_otto83/status/1502329970349248521
4: https://twitter.com/c_otto83/status/1502329271964033027
5: https://twitter.com/stefanwouldgo/status/1502681455918473217
6: https://twitter.com/c_otto83/status/1502330558793363464
-- 
Dr. Carsten Otto
cars...@c-otto.de
https://c-otto.de
