Good morning Stefan,

> > > For myself, I think a variant of Pickhardt-Richter payments can be 
> > > created which adapts to the reality of the current network where 
> > > `base_fee > 0` is common, but is biased against `base_fee > 0`, and can 
> > > be a bridge from the current network with `base_fee > 0` and a future with 
> > > `#zerobasefee`.
> >
> > I have been thinking about your idea (at least what I understood of
> > it) of using amount*prop_fee + amount*base_fee/min_flow_size, where
> > min_flow_size is a suitable quantization constant (say, 10k or 100k
> > sats, and may also be chosen dynamically), as a component of the cost
> > function, and I am pretty sure it is great at achieving exactly what
> > you are proposing here. This is a nicely convex (even linear in this
> > component) function and so it's easy to find min-cost flows for it. It
> > solves the problem (that I hadn't thought about before) that you have
> > pointed out in splitting flows into HTLCs. If you use
> > min_flow_size=max_htlc_size, it is even optimal (for this
> > min_flow_size). If you use a smaller min_flow_size, it is still
> > optimal for channels with base_fee=0 but overestimates the fee for
> > channels with base_fee>0, and is less accurate the smaller the
> > min_flow_size and the larger the base_fee. So it will be biased
> > against channels with larger base_fee. But notice that with min-cost
> > flows, we are rarely looking for the cheapest solution anyway, because
> > these solutions (if they include more than one path) will usually
> > fully saturate the cheapest channels and thus have very low success
> > probability. So all in all, I believe you found a great practical
> > solution for this debate. Everybody is free to use any base_fee they
> > choose, we get a workable cost function, and I conjecture that
> > economics will convince most people to choose a zero or low base_fee.
>
> I am glad that this is helpful.
> Still, I have not really understood the variant problem "min cost flow with 
> gains and losses" very well, and this scheme might not work well there.
> On the other hand, the current algorithms are already known to suck for large 
> payments, so even a not-so-good algorithm based on Pickhardt-Richter may be 
> significantly better than existing deployed code.
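
Stefan's quantization hack above can be sketched as follows; all names and 
numbers here are illustrative, and units are millisatoshis throughout:

```python
# Hedged sketch, not verbatim from the thread: fold base_fee into an
# effective proportional rate using a quantization constant min_flow_size,
# so the fee becomes linear in the amount.

def effective_fee_msat(amount_msat, base_fee_msat, prop_fee_ppm,
                       min_flow_size_msat):
    """Linear-in-amount estimate of the fee, as if every min_flow_size
    slice of the amount paid its own base fee."""
    effective_ppm = prop_fee_ppm + base_fee_msat * 1_000_000 / min_flow_size_msat
    return amount_msat * effective_ppm / 1_000_000

def real_fee_msat(amount_msat, base_fee_msat, prop_fee_ppm):
    """The actual BOLT-style fee: base plus proportional."""
    return base_fee_msat + amount_msat * prop_fee_ppm / 1_000_000

amt = 1_000_000_000  # 1,000,000 sat, in msat

# Exact when base_fee = 0 ...
assert effective_fee_msat(amt, 0, 1000, 10_000_000) == real_fee_msat(amt, 0, 1000)
# ... and an overestimate when base_fee > 0, i.e. biased against base fees.
assert effective_fee_msat(amt, 1000, 1000, 10_000_000) >= real_fee_msat(amt, 1000, 1000)
```

The overestimate grows as min_flow_size shrinks or base_fee grows, which is 
exactly the bias against large base fees described above.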

In yet another thread I proposed the cost function:

    fee + fee_budget * (1 - success_probability)

If the base-to-prop hack (i.e. using a quantization constant as I proposed) is 
applied to the `fee` component above, does the whole expression become convex?

With an amount of 0, `success_probability` is 1, and if we use the base-to-prop 
hack to convert base fees to proportional fees, then the fee, and hence the 
whole cost, is also 0 at `amount = 0`.
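
A quick sketch of that claim, under the uniform liquidity prior from the 
Pickhardt-Richter model (all parameter values illustrative): with the 
base-to-prop hack the fee term is linear in the amount, and under the uniform 
prior so is the failure probability, so the per-channel cost is linear, hence 
convex, and zero at zero.

```python
def success_probability(amount_msat, capacity_msat):
    # Uniform prior on channel liquidity: P(liquidity >= amount).
    return (capacity_msat + 1 - amount_msat) / (capacity_msat + 1)

def cost_msat(amount_msat, capacity_msat, effective_ppm, fee_budget_msat):
    fee = amount_msat * effective_ppm / 1_000_000  # linear after the hack
    failure = 1.0 - success_probability(amount_msat, capacity_msat)  # also linear
    return fee + fee_budget_msat * failure

# Zero cost at zero amount.
assert cost_msat(0, 10_000_000, 1100, 100_000) == 0.0
# Linear in the amount (hence convex): doubling the amount doubles the cost.
c1 = cost_msat(1_000_000, 10_000_000, 1100, 100_000)
c2 = cost_msat(2_000_000, 10_000_000, 1100, 100_000)
assert abs(c2 - 2 * c1) < 1e-6
```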

It can even be made separable by a clever redefinition of addition (as I 
pointed out in that thread), but perhaps that is too clever and breaks other 
properties that a min-cost-flow algorithm needs.

The above is attractive since the cost unit ends up being millisatoshi.
In my experience with CLBOSS, hastily-conceived heuristics kinda suck unless 
they are based on actual economic theory, meaning everything should really be 
in terms of millisatoshis or other financial units.

Would this be workable?
Pardon my complete ignorance of maths.

Presumably, there is a reason why the Pickhardt-Richter paper suggests 
`-log(success_probability) + fudging_factor * fee`.
My initial understanding is that this makes simple addition correct behavior 
(success probabilities along a path multiply, so their negative logarithms 
add), making for a separable cost function that uses "standard" arithmetic 
addition rather than the excessively clever one I came up with.
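
That additivity can be checked in a few lines; `mu` stands in for the 
fudging/conversion factor and all values are illustrative:

```python
import math

def pr_cost(prob, fee_msat, mu):
    # Pickhardt-Richter-style per-channel cost: -log(success_probability)
    # plus a fee term scaled by a conversion factor mu.
    return -math.log(prob) + mu * fee_msat

probs = [0.9, 0.8, 0.95]   # per-channel success probabilities along a path
fees = [100, 250, 50]      # per-channel fees in msat
mu = 0.001                 # illustrative conversion factor

# Summing per-channel costs equals -log of the product of probabilities
# plus mu times the total fee: the cost function is separable.
path_cost = sum(pr_cost(p, f, mu) for p, f in zip(probs, fees))
direct = -math.log(math.prod(probs)) + mu * sum(fees)
assert abs(path_cost - direct) < 1e-9
```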
However, it might affect convexity as well?

(on the other hand, credibility should really be measured in decibels anyway, 
and one is estimating the credibility of the implied claim of a published 
channel that it can actually deliver the money to where it is needed...)

The neglogprob is in units of decibels, and I am not sure how to convert a 
millisatoshi-unit fee into decibels.
In particular the logarithm implies a non-linear relationship between 
probability and fee.

I think it is reasonable for paying users to say "if it will take more than NN 
millisatoshis to pay for it, never mind, I won't continue the transaction 
anymore", which is precisely why I added the fee budget in the C-Lightning 
`pay` implementation long ago.

On the other hand, perhaps the nonlinear relationship between the success 
probability and the fee makes sense.
If the success probability is already fairly high, then any small change in fee 
is significant to the user, but as the success probability drops, then the user 
would be willing to accept up to the fee budget.
This implies that when success probability is high, an increase in fee should 
weigh more heavily than an increase in success probability; but when success 
probability is low, an increase in fee should weigh less than an increase in 
success probability.
I am uncertain if the neglogprob plus some fee times a conversion factor has 
that behavior.
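
A quick numeric check suggests the neglogprob form may indeed behave that way: 
the marginal cost of lost probability under `-log(p)` is `1/p`, small when `p` 
is high (so the fee term dominates the tradeoff) and large when `p` is low (so 
probability dominates). The values below are illustrative only.

```python
def marginal_prob_cost(p):
    # |d/dp of -log(p)| = 1/p: cost saved per unit of probability gained.
    return 1.0 / p

# At high success probability, one "point" of probability is cheap to trade
# against fee; at low success probability, it dominates.
assert marginal_prob_cost(0.95) < marginal_prob_cost(0.10)
# e.g. ~1.05 cost units per unit probability at p = 0.95, 10 at p = 0.10.
```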

Regards,
ZmnSCPxj
_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev