Dear fellow Lightning Developers,

I am pleased (and a bit proud) to be able to inform you that I finally
found a quick way to approximate the slow minimum convex cost flow
computation that is necessary for optimally reliable and cheap payment
flows [0] to deliver large multi-part payments over the Lightning Network.
The proposed solution works via piecewise linearization [1] of the min
cost flow problem on the uncertainty network, which we face when computing
the optimal split and scheduling of large-amount multi-part payments.
The notion of "optimal" is obviously subjective with respect to the chosen
cost function. As previously described, we suggest including the negative
logarithm of success probabilities (based on the likelihood that enough
liquidity is available on a channel) as the dominant feature of the cost
function. We give the background for this in [2], which has since been
picked up by c-lightning and LDK. The c-lightning team even published
benchmarks showing significant improvement in payment speed over their
previously used cost function [2b].
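As a minimal sketch of this cost function (function names are mine, and a
uniform prior on channel liquidity is assumed, as in [2]):

```python
import math

def success_probability(amount_sat: int, capacity_sat: int) -> float:
    # Under a uniform prior on the unknown liquidity of a channel with
    # capacity c, the probability that it can forward amount a is
    # (c + 1 - a) / (c + 1).
    return (capacity_sat + 1 - amount_sat) / (capacity_sat + 1)

def uncertainty_cost(amount_sat: int, capacity_sat: int) -> float:
    # Taking the negative logarithm makes the cost additive along a path
    # and convex in the amount, which the min cost flow formulation needs.
    return -math.log(success_probability(amount_sat, capacity_sat))
```

Note how sending nothing is free and certain, while the cost grows steeply
as the requested amount approaches the channel capacity.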

Let me recall that one of the strongest criticisms of and concerns about
our approach of using minimum cost flows for payment delivery back in July
/ August last year (especially by the folks from Lightning Labs) was that
the min cost flow approach would be impractical due to runtime constraints.
Thus I am delighted that with the now published code [3] (which has exactly
100 lines including data import and parsing, and ignoring comments) we are
able to compute a reasonable-looking approximation of the optimal solution
in sub-second runtime on the complete public channel graph of the
Lightning Network. This is achieved via piecewise linearization of the
convex cost function and invocation of a standard linear min cost flow
solver [4] on the linearized problem. This works quickly despite the fact
that the piecewise linearization adds a significantly higher number of arcs
to the network and blows up the size of the network on which we solve the
min cost flow problem. This makes me fairly certain that with proper
pruning of the graph we might even reach the 100 millisecond runtime
frontier, which would be far faster than I had dreamed and hoped possible.
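To illustrate the idea (a rough sketch with hypothetical names, not the
exact scheme from the notebook): each channel's convex cost curve is cut
into a few equal-width pieces, and the average slope of each piece becomes
the unit cost of one parallel arc that a standard linear min cost flow
solver can consume.

```python
import math

def linearize_channel(capacity: int, n_segments: int = 5):
    """Approximate f(x) = -log((c + 1 - x) / (c + 1)) on [0, capacity]
    by n_segments equal-width linear pieces. Each piece becomes one arc
    (piece_capacity, unit_cost); because f is convex, the unit costs are
    non-decreasing, so the solver naturally fills cheap pieces first."""
    c = capacity
    f = lambda x: -math.log((c + 1 - x) / (c + 1))
    width = c // n_segments
    pieces = []
    for i in range(n_segments):
        lo, hi = i * width, (i + 1) * width
        unit_cost = (f(hi) - f(lo)) / width  # average slope on this piece
        pieces.append((width, unit_cost))
    return pieces
```

In practice these unit costs would additionally be scaled and rounded to
integers, since standard integer min cost flow solvers expect integral
costs.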

The currently widely deployed Dijkstra search to generate a single
candidate path takes roughly 100ms of runtime. It seems that with the
runtime of the piecewise linearized problem the min cost flow approach is
now competitive from a runtime perspective. The flow computation is still a
bit slower than Dijkstra in both theory and practice. However, the
piecewise linearized min cost flow has the huge advantage that it generates
several candidate paths for a solid approximation of the optimal MPP split.
Remember that the exact min cost flow corresponds to the optimal MPP split.
The latter has not been used so far because the min cost flow was
considered too slow. Yet the question of how to split is being raised as
issues in implementations [5][6][7] and acknowledged to be complicated
(especially with respect to fees) in [8]. This result is, by the way, of
particular interest for LSPs: if an LSP has to schedule x payments per
second, it can plan all of those payments with a single min cost flow
computation using several sink nodes. This globally optimizes the
potentially heavy load that LSPs might have, even if all payments were so
small that no splitting was necessary.
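Scheduling several payments in one flow computation just means balancing
the node supplies of a single flow problem; a sketch (all names and
amounts hypothetical):

```python
# An LSP planning several pending payments at once: the LSP node supplies
# the total amount, and each destination demands (negative supply) its
# invoice amount. Any min cost flow solver accepts such balanced supplies.
payments = {"dest_a": 200_000, "dest_b": 50_000, "dest_c": 125_000}

supplies = {dest: -amount for dest, amount in payments.items()}
supplies["lsp_node"] = sum(payments.values())

# A min cost flow problem requires supplies and demands to balance out.
assert sum(supplies.values()) == 0
```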

The iPython notebook which I shared contains about one page explaining how
to conduct a piecewise linear approximation. The quality of the
approximation is not the major goal here, as I am focused on demonstrating
the runtime of the approach and the principle of how to achieve this
runtime. Thus I do the piecewise linear approximation very roughly in the
published code. Selecting the optimal piecewise approximation [9] would not
change the runtime of the flow computation but would only blow up the code
that prepares the solver. This is why I decided to keep the code as simple
and short as possible, even if that means the approximation will not be as
close to the optimum as it could be in practice. For the same reason I did
not include any code to update the uncertainty network from previously
failed or successful attempts by using conditional success probabilities
P(X > a | min_liquidity < X < max_liquidity). People who are interested
might look at my hands-on instructions and explanations for coders in this
rust-lightning issue [10]. The folks from LDK have picked this up and
already implemented it for single path payments in [11], which might be
relevant for people who prefer code over math papers. An obvious
optimization of the piecewise linearization would be to choose the first
segment of the piecewise linear approximation with a capacity equal to the
known (certain) liquidity and a unit cost of 0 for that piece.
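Such an update rule could look like the following sketch (uniform prior
assumed, naming and boundary conventions mine): a failed attempt at amount
a lowers max_liquidity, a successful one raises min_liquidity, and the
success probability is conditioned on those bounds.

```python
def conditional_success_probability(a: int,
                                    min_liquidity: int,
                                    max_liquidity: int) -> float:
    """P(X >= a | min_liquidity <= X <= max_liquidity) for the unknown
    channel liquidity X under a uniform prior. Successful probes raise
    min_liquidity; failed ones lower max_liquidity."""
    if a <= min_liquidity:
        return 1.0  # we already know at least min_liquidity is available
    if a > max_liquidity:
        return 0.0  # we already know the channel cannot forward this much
    return (max_liquidity + 1 - a) / (max_liquidity - min_liquidity + 1)
```

The narrower the known liquidity interval becomes, the more the
probability collapses towards certainty, which is exactly what makes
retries after partial failures cheap in the uncertainty cost model.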

Our original papers describe everything only from a theoretical point of
view and with simulations. However, our mainnet experiments from July last
year [12] indicated that we were easily able to deliver, for example,
0.3679 BTC via a couple of rounds of min cost flow computations through the
Lightning Network (given we accepted the computational waiting time). This
experiment was too slow to be used in practice but confirmed our model and
approach for delivering such large amounts. Given that the min cost flow
computation is now significantly below the median onion round trip time, it
is my understanding that with these newly introduced techniques we should
be able to deliver substantial monetary amounts over the Lightning Network
within a couple of seconds, despite the uncertainty about the liquidity in
remote channels. While I am very excited about the result, I think some
principal roadblocks of the protocol are becoming more and more visible:

1. The well known issue of hanging HTLCs. As of now I believe onion
messages would be great for acknowledging incoming HTLCs of multi-part
payments, but I certainly believe this will not be sufficient if we don't
go for a cancelable & stuckless payment protocol as suggested in [13].
2. I believe the Lightning Network protocol should be able to handle
redundant overpayments as suggested in [14] and [15] (the latter paper
independently derives the same success probabilities and tries to maximize
them!). This would allow transforming the problem of finding a flow that
maximizes the success probability into finding a flow that is expected to
deliver the amount of the invoice (when redundancy is applied) with a
single round of min cost flow computation on average.

Last but not least, please allow me to make a short remark on the (to me
still very surprisingly controversial) base fee discussion: For simplicity
I did not include any fee considerations in the published code (besides a
report on how expensive the computed flow is). However, in practice we wish
to optimize at least for high reliability (via negative log success
probabilities) and cheap fees; the proportional (ppm) fee in particular can
very easily be included in the piecewise linearized cost function. While
for small base fees it seems possible to encode the base fee into the
first segment of the piecewise linearized approximation, I think the base
fee will still be tricky to handle in practice (even with this
approximation). For example, if the base fee is too high, the "base fee
adjusted" unit cost of the first segment of the piecewise linearized
problem might be higher than the unit cost of the second segment, which
would effectively break convexity. Thus I reiterate my earlier point that
from the perspective of the long-pursued goal of optimizing for fees (which
all Dijkstra-based single path implementations do) it seems best if the
non-linearity introduced by the base fee were removed altogether. According
to discussions with people who create Lightning Network explorers (and
according to my last check of gossip) about 90% of channels have a base fee
of 1 sat or lower, and ~38% of all channels have already set their base fee
away from the default value to 0 [16].
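The convexity problem can be seen in a tiny numeric sketch (all numbers
hypothetical): folding the flat base fee into the first segment raises
that segment's unit cost, and past a threshold the first piece becomes
more expensive than the second.

```python
def first_segment_unit_cost(base_fee_msat: int, ppm: int,
                            segment_capacity: int) -> float:
    # Spread the flat base fee over the first segment's capacity and add
    # it to the proportional (ppm) unit cost.
    return ppm + base_fee_msat / segment_capacity

SEGMENT_CAPACITY = 100_000        # hypothetical width of the first piece
SECOND_SEGMENT_UNIT_COST = 1_500  # hypothetical unit cost of piece two

# A small base fee keeps the unit costs non-decreasing across segments...
assert first_segment_unit_cost(1_000, 1_000,
                               SEGMENT_CAPACITY) <= SECOND_SEGMENT_UNIT_COST

# ...but a large one makes piece one costlier than piece two, breaking the
# convexity that a linear min cost flow solver relies on.
assert first_segment_unit_cost(100_000_000, 1_000,
                               SEGMENT_CAPACITY) > SECOND_SEGMENT_UNIT_COST
```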

I hope this mail was useful for you. Feel free to ask me anything if you
have questions or if you need help integrating these results into your
software!

With kind regards, Rene Pickhardt

[0]: https://arxiv.org/abs/2107.05322
[1]: https://en.wikipedia.org/wiki/Piecewise_linear_function
[2]: https://arxiv.org/abs/2103.08576
[2b]: https://medium.com/blockstream/c-lightning-v0-10-2-bitcoin-dust-consensus-rule-33e777d58657
[3]: https://github.com/renepickhardt/mpp-splitter/blob/master/Minimal%20Linearized%20min%20cost%20flow%20example%20for%20MPP.ipynb
[4]: https://developers.google.com/optimization/flow/mincostflow
[5]: https://github.com/lightningnetwork/lnd/issues/4203
[6]: https://github.com/lightningdevkit/rust-lightning/issues/1276
[7]: https://github.com/ElementsProject/lightning/issues/4753
[8]: https://github.com/ACINQ/eclair/pull/1427
[9]: http://www.iaeng.org/publication/WCECS2008/WCECS2008_pp1191-1194.pdf
[10]: https://github.com/lightningdevkit/rust-lightning/issues/1170#issuecomment-972396747
[11]: https://github.com/lightningdevkit/rust-lightning/pull/1227
[12]: https://twitter.com/renepickhardt/status/1418849788531990530
[13]: https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-June/002029.html
[14]: https://berkeley-defi.github.io/assets/material/1910.01834.pdf
[15]: https://dl.acm.org/doi/10.1145/3479722.3480997
[16]: https://lnrouter.app/graph/zero-base-fee

-- 
https://ln.rene-pickhardt.de
_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
