Re: [Lightning-dev] Solving the Price Of Anarchy Problem, Or: Cheap AND Reliable Payments Via Forwarding Fee Economic Rationality

2022-06-29 Thread ZmnSCPxj via Lightning-dev
Good morning aj,


> On Wed, Jun 29, 2022 at 12:38:17PM +0000, ZmnSCPxj wrote:
>
> > > > ### Inverting The Filter: Feerate Cards
> > > > Basically, a feerate card is a mapping between a probability-of-success 
> > > > range and a feerate.
> > > > * 00%->25%: -10ppm
> > > > * 26%->50%: 1ppm
> > > > * 51%->75%: 5ppm
> > > > * 76%->100%: 50ppm
> > > > The economic insight here is this:
> > > > * The payer wants to pay because it values a service / product more 
> > > > highly than the sats they are spending.
>
> > * If payment fails, then the payer incurs an opportunity cost, as it is 
> > unable to utilize the difference in subjective value between the service / 
> > product and the sats being spent.
>
>
> (If payment fails, the only opportunity cost they incur is that they
> can't use the funds that they locked up for the payment. The opportunity
> cost is usually considered to occur when the payment succeeds: at that
> point you've lost the ability to use those funds for any other purpose)

I think you misunderstand me completely.

The "payment fails" term here means that *all possible routes below the fee 
budget have failed*, i.e. a complete payment failure that will cause your `pay` 
command to error out with a frownie face.

In that case, the payer is unable to purchase the service or product.

The opportunity cost they incur is the loss of the service or product: they 
keep the sats that did not get paid, but lose the value of the service or 
product they wanted to buy in the first place.

In that case, the payer loses the subjective difference in value between the 
service / product, and the sats they would have paid.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Solving the Price Of Anarchy Problem, Or: Cheap AND Reliable Payments Via Forwarding Fee Economic Rationality

2022-06-29 Thread Anthony Towns
On Wed, Jun 29, 2022 at 12:38:17PM +0000, ZmnSCPxj wrote:
> > > ### Inverting The Filter: Feerate Cards
> > > Basically, a feerate card is a mapping between a probability-of-success 
> > > range and a feerate.
> > > * 00%->25%: -10ppm
> > > * 26%->50%: 1ppm
> > > * 51%->75%: 5ppm
> > > * 76%->100%: 50ppm
> The economic insight here is this:
> * The payer wants to pay because it values a service / product more highly 
> than the sats they are spending.

> * If payment fails, then the payer incurs an opportunity cost, as it is 
> unable to utilize the difference in subjective value between the service / 
> product and the sats being spent.

(If payment fails, the only opportunity cost they incur is that they
can't use the funds that they locked up for the payment. The opportunity
cost is usually considered to occur when the payment succeeds: at that
point you've lost the ability to use those funds for any other purpose)

>   * Thus, the subjective difference in value between the service / product 
> being bought, and the sats to be paid, is the cost of payment failure.

If you couldn't successfully route the payment at any price, you never
had the opportunity to buy whatever the thing was.

> We can now use the left-hand side of the feerate card table, by multiplying 
> `100% - middle_probability_of_success` (i.e. probability of failure) by the 
> fee budget (i.e. cost of failure), and getting the 
> cost-of-failure-for-this-entry.

I don't think that makes much sense; your expected gain if you just try
one option is:

 (1-p)*0 + p*cost*(benefit/cost - fee)
 
where p is the probability of success that corresponds with the fee.

I don't think you can do that calculation with a range; if I fix the
probabilities as:

  12.5%  -10ppm
  27.5%    1ppm
  62.5%    5ppm
  87.5%   50ppm

then that approach chooses:

  -10 ppm if the benefit/cost is in (-10ppm, 8.77ppm)
    5 ppm if the benefit/cost is in [8.77ppm, 162.52ppm)
   50 ppm if the benefit/cost is >= 162.52ppm

so for that policy, one of those entries is already irrelevant.
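
For what it's worth, a brute-force check of those cutoffs (a Python sketch;
same fixed probabilities as above, scanning at only 1ppm granularity):

  # Pick the rate-card entry maximizing p * (benefit - fee), all in ppm.
  entries = [(0.125, -10), (0.275, 1), (0.625, 5), (0.875, 50)]

  def best_entry(benefit_ppm):
      return max(entries, key=lambda e: e[0] * (benefit_ppm - e[1]))

  prev = None
  for b in range(-9, 400):
      cur = best_entry(b)
      if cur != prev:
          print("benefit/cost >= %dppm: best entry is %dppm" % (b, cur[1]))
          prev = cur

The 1ppm entry never gets printed, which is the sense in which it's
irrelevant.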

But that just feels super unrealistic to me. If your benefit is 8ppm,
and you try at -10ppm, and that fails, why wouldn't you try again at
5ppm? That means the real calculation is:

   p1*(benefit/cost - fee1) 
   + (p2-p1)*(benefit/cost - fee2 - retry_delay)
   - (1-p2)*(2*retry_delay)

Which is:

   p2*(benefit/cost)
 - p1*fee1 - (p2-p1)*fee2
 - (2-p1-p2)*retry_delay

My feeling is that the retry_delay factor's going to dominate...
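
To put rough numbers on that (a Python sketch; the specific benefit and the
pair of entries tried are just for illustration):

  # Expected value (ppm) of "try fee1 first, retry once at fee2 on failure",
  # per the formula above; retry_delay is the delay cost in ppm-equivalents.
  def ev_with_retry(p1, fee1, p2, fee2, benefit, retry_delay):
      return (p1 * (benefit - fee1)
              + (p2 - p1) * (benefit - fee2 - retry_delay)
              - (1 - p2) * 2 * retry_delay)

  # benefit 8ppm, trying -10ppm (12.5%) then 5ppm (62.5%):
  for delay in (0, 1, 5, 20):
      print(delay, ev_with_retry(0.125, -10, 0.625, 5, 8, delay))

A delay cost of just a few ppm already flips the expected value negative.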

That's also only considering one hop; to get the entire path, you
need them all to succeed, giving an expected benefit (for a particular
combination of rate card entries) of:

  (p1*p2*p3*p4*p5)*cost*(benefit/cost - (fee1 + fee2 + fee3 + fee4 + fee5))

And (p1*..*p5) is going to be pretty small in most cases -- 5 hops at
87.5% each already gets you down to only a 51% total chance of success.
And there's an exponential explosion of combinations, if each of the
5 hops has 4 options on their rate card, that's up to 1024 different
options to be evaluated...
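
To illustrate the blow-up (a Python sketch; reusing the example 4-entry card
on every hop of a 5-hop path):

  import math
  from itertools import product

  card = [(0.125, -10), (0.275, 1), (0.625, 5), (0.875, 50)]
  hops = [card] * 5   # 4**5 = 1024 combinations of rate-card entries
  benefit = 200       # ppm

  def ev(combo):
      # All five hops must succeed: probabilities multiply, fees add.
      p = math.prod(p_i for p_i, _ in combo)
      return p * (benefit - sum(fee for _, fee in combo))

  best = max(product(*hops), key=ev)
  print([fee for _, fee in best], round(ev(best), 2))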

> We then evaluate the feerate card by plugging this into each entry and 
> picking the entry that gives the lowest total fee.

I don't think that combines hops correctly. For example, if the rate
cards for hop1 and hop2 are both:

   10%  10ppm
  100%  92ppm

and your expected benefit/cost is 200ppm (so 100ppm per hop), then
treated individually you get:

   10%*(100ppm - 10ppm) = 9ppm  <-- this one!
  100%*(100ppm - 92ppm) = 8ppm

but treated together, you get:

    1%*(200ppm -  20ppm) =  1.8ppm
   10%*(200ppm - 102ppm) =  9.8ppm (twice)
  100%*(200ppm - 184ppm) = 16ppm   <-- this one!

> This is then added as a fee in payment algorithms, thus translated down to 
> "just optimize for low fee".

You're not optimising for low fee though, you're optimising for
maximal expected value, assuming you can't retry. But you can retry,
and probably in reality also want to minimise the chance of failure up
to some threshold.

For example: if I buy a coffee with lightning every week day for a year,
that's 250 days, so maybe I'd like to choose a fee so that my payment
failure rate is <0.4%, to avoid embarrassment and holding up the queue.
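
(That 0.4% threshold is just one expected failure out of 250 attempts:
1/250 = 0.004.)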

> * Nodes utilizing wall strategies and doing lots of rebalancing put low 
> limits on the fee budget of the rebalancing cost.
>   * These nodes are willing to try lots of possible routes, hoping to nab the 
> liquidity of a low-fee node on the cheap in order to resell it later.
>   * Such nodes are fine with low probability of success.

Sure. But in that case, they don't care about delays, so why wouldn't they
just try the lowest fee rates all the time, no matter what their expected
value is? They can retry once an hour indefinitely, and eventually they
should get lucky, if the rate card's even remotely accurate. (Though
chances are they won't get -10ppm lucky for the entire path)

Finding out that you're paying 50ppm at the exact same time someone else
is "payin

Re: [Lightning-dev] Solving the Price Of Anarchy Problem, Or: Cheap AND Reliable Payments Via Forwarding Fee Economic Rationality

2022-06-29 Thread ZmnSCPxj via Lightning-dev
Good morning aj,

> On Sun, Jun 05, 2022 at 02:29:28PM +0000, ZmnSCPxj via Lightning-dev wrote:
>
> Just sharing my thoughts on this.
>
> > Introduction
> > ============
> > 
> >         Optimize for reliability+
> >       uncertainty+fee+drain+uptime...
> >                .--~~--.
> >               /        \
> >              /          \
> >             /            \
> >            /              \
> >           /                \
> >       _--'                  `--_
> >    Just                        Just
> >  optimize                    optimize
> >    for                          for
> >  low fee                      low fee
>
>
> I think ideally you want to optimise for some combination of fee, speed
> and reliability (both likelihood of a clean failure that you can retry
> and of generating stuck payments). As Matt/Peter suggest in another
> thread, maybe for some uses you can accept low speed for low fees,
> while in others you'd rather pay more and get near-instant results. I
> think drain should just go to fee, and uncertainty/uptime are just ways
> of estimating reliability.
>
> It might be reasonable to generate local estimates for speed/reliability
> by regularly sending onion messages or designed-to-fail htlcs.
>
> Sorry if that makes me a midwit :)

Actually feerate cards help with this; it just requires an economic insight to 
translate probability-of-success to an actual cost that the payer incurs.


> > ### Inverting The Filter: Feerate Cards
> > Basically, a feerate card is a mapping between a probability-of-success 
> > range and a feerate.
> > * 00%->25%: -10ppm
> > * 26%->50%: 1ppm
> > * 51%->75%: 5ppm
> > * 76%->100%: 50ppm
>
>
> Feerate cards don't really make sense to me; "probability of success"
> isn't a real measure the payer can use -- naively, if it were, they could
> just retry at 1ppm 10 times and get to 95% chances of success. But if
> they can afford to retry (background rebalancing?), they might as well
> just try at -10ppm, 1ppm, 5ppm, 10ppm (or perhaps with a binary search?),
> and see if they're lucky; but if they want a 1s response time, and can't
> afford retries, what good is even a 75% chance of success if that's the
> individual success rate on each hop of their five hop path?

The economic insight here is this:

* The payer wants to pay because it values a service / product more highly than 
the sats they are spending.
* There is a subjective difference in value between the service / product being 
bought and the amount to be spent.
  * In short, if the payment succeeds and the service / product is acquired, 
then the payer perceives itself as richer (increased utilons) by that 
subjective difference.
* If payment fails, then the payer incurs an opportunity cost, as it is unable 
to utilize the difference in subjective value between the service / product and 
the sats being spent.
  * Thus, the subjective difference in value between the service / product 
being bought, and the sats to be paid, is the cost of payment failure.
* That difference in value is the "fee budget" that Lightning Network payment 
algorithms all require as an argument.
  * If the LN fee total is greater than the fee budget, the payment algorithm 
will reject that path outright.
  * If the LN fee total is greater than the subjective difference in value 
between the service / product being bought and the amount to be delivered at 
the destination, then the payer gets negative utility and would prefer not to 
continue paying --- which is exactly what the payment algorithm does: it 
rejects such paths.

Therefore the fee budget is the cost of failure.

We can now use the left-hand side of the feerate card table, by multiplying 
`100% - middle_probability_of_success` (i.e. probability of failure) by the fee 
budget (i.e. cost of failure), and getting the cost-of-failure-for-this-entry.
We then evaluate the feerate card by plugging this into each entry and picking 
the entry that gives the lowest total fee.
This is then added as a fee in payment algorithms, thus translated down to 
"just optimize for low fee".

If the above logic seems dubious, consider this:

* Nodes utilizing wall strategies and doing lots of rebalancing put low limits 
on the fee budget of the rebalancing cost.
  * These nodes are willing to try lots of possible routes, hoping to nab the 
liquidity of a low-fee node on the cheap in order to resell it later.
  * i.e. those nodes are fine with taking a long time to successfully route a 
payment from themselves to themselves; they absolutely insist on low fees or 
else they will not earn anything.
  * Such nodes are fine with low probability of success.
  * Being fine with low probability of success means that the effect of the 
left-hand side of the feerate card is smaller and such nodes will tend to get 
the low probability of success entries.
* Buyers getting FOMOed into buying some neat new widget want to get their 
grubby hands on the widget ASAP.
  * These nodes are willing to pay a premium to get the neat new widget RIGHT 
NOW.
  * i.e. these nodes will be willing to provide a higher fee budget.
  * Being fine with a higher fee budget means that the effect of the left-hand 
side of the feerate card is larger and such nodes 

Re: [Lightning-dev] Solving the Price Of Anarchy Problem, Or: Cheap AND Reliable Payments Via Forwarding Fee Economic Rationality

2022-06-29 Thread Anthony Towns
On Sun, Jun 05, 2022 at 02:29:28PM +0000, ZmnSCPxj via Lightning-dev wrote:

Just sharing my thoughts on this.

> Introduction
> ============
> 
>         Optimize for reliability+
>       uncertainty+fee+drain+uptime...
>                .--~~--.
>               /        \
>              /          \
>             /            \
>            /              \
>           /                \
>       _--'                  `--_
>    Just                        Just
>  optimize                    optimize
>    for                          for
>  low fee                      low fee

I think ideally you want to optimise for some combination of fee, speed
and reliability (both likelihood of a clean failure that you can retry
and of generating stuck payments). As Matt/Peter suggest in another
thread, maybe for some uses you can accept low speed for low fees,
while in others you'd rather pay more and get near-instant results. I
think drain should just go to fee, and uncertainty/uptime are just ways
of estimating reliability.

It might be reasonable to generate local estimates for speed/reliability
by regularly sending onion messages or designed-to-fail htlcs.

Sorry if that makes me a midwit :)

> Rene Pickhardt also presented the idea of leaking friend-of-a-friend 
> balances, to help payers increase their payment reliability.

I think foaf (as opposed to global) gossip of *fee rates* is a very
interesting approach to trying to give nodes more *current* information,
without flooding the entire network with more traffic than it can
cope with.

> Now we can consider that *every channel is a marketplace*.
> What is being sold is the sats inside the channel.

(Really, the marketplace is a channel pair (the incoming channel and
the outgoing channel), and what's being sold is their relative balance)

> So my concrete proposal is that we can do the same friend-of-a-friend balance 
> leakage proposed by Rene, except we leak it using *existing* mechanisms --- 
> i.e. gossiping a `channel_update` with new feerates adjusted according to the 
> supply on the channel --- rather than having a new message to leak 
> friend-of-a-friend balance directly.

+42

> Because we effectively leak the balance of channels by the feerates on the 
> channel, this totally leaks the balance of channels.

I don't think this is true -- you ideally want to adjust fees not to
maintain a balanced channel (50% on each side), but a balanced *flow*
(1:1 incoming/outgoing payment volume) -- it doesn't really matter if
you get the balanced flow that results in an average of a 50:50, 80:20
or 20:80 ratio of channel balances (at least, it doesn't as long as your
channel capacity is 10 or 100 times the payment size, and your variance
is correspondingly low).

Further, you have two degrees of freedom when setting fee rates: one
is how balanced the flows are, which controls how long your channel can
remain useful, but the other is how *much* flow there is -- if halving
your fee rate doubles the flow rate in sats/hour, then that will still
increase your profit. That also doesn't leak balance information.

> ### Inverting The Filter: Feerate Cards
> Basically, a feerate card is a mapping between a probability-of-success range 
> and a feerate.
> * 00%->25%: -10ppm
> * 26%->50%: 1ppm
> * 51%->75%: 5ppm
> * 76%->100%: 50ppm

Feerate cards don't really make sense to me; "probability of success"
isn't a real measure the payer can use -- naively, if it were, they could
just retry at 1ppm 10 times and get to 95% chances of success. But if
they can afford to retry (background rebalancing?), they might as well
just try at -10ppm, 1ppm, 5ppm, 10ppm (or perhaps with a binary search?),
and see if they're lucky; but if they want a 1s response time, and can't
afford retries, what good is even a 75% chance of success if that's the
individual success rate on each hop of their five hop path?
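
(For concreteness: taking the bottom of the 1ppm bucket's range, 26% per
attempt, ten independent tries give 1 - (1 - 0.26)**10 ~= 95%.)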

And if you're not just going by odds of having to retry, then you need to
get some current information about the channel to plug into the formula;
but if you're getting *current* information, why not let that information
be the feerate directly?

> More concretely, we set some high feerate, impose some kind of constant 
> "gravity" that pulls down the feerate over time, then we measure the relative 
> loss of outgoing liquidity to serve as "lift" to the feerate.

If your current fee rate is F (ppm), and your current volume (flow) is V
(sats forwarded per hour), then your profit is FV. If dropping your fee
rate by dF (<0) results in an increase of V by dV (>0), then you want:

   (F+dF)(V+dV) > FV
   FV + VdF + FdV + dFdV > FV
   FdV > -VdF  (dropping the negligible second-order dFdV term)
   dV/dF < -V/F  (flip the inequality because dF is negative)

   (dV/V)/(dF/F) < -1  (fee-elasticity of volume is in the elastic region)

(<-1 == elastic == flow changes more than the fee does == drop the fee
rate; >-1 == inelastic == flow changes less than the fee does == raise
the fee rate; =-1 == unit elastic == yo
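
A numeric sketch of that decision rule (Python; the fee step and the flow
response are invented numbers, purely illustrative):

  # Fee-elasticity of volume: (dV/V) / (dF/F).
  def elasticity(F, V, dF, dV):
      return (dV / V) / (dF / F)

  F, V = 100.0, 1000.0    # current feerate (ppm) and flow (sats/hour)
  dF, dV = -10.0, 150.0   # candidate fee cut and estimated extra flow

  e = elasticity(F, V, dF, dV)
  if e < -1:
      print("elastic (%.2f): cut the fee, profit rises" % e)
  else:
      print("inelastic or unit elastic (%.2f): keep or raise the fee" % e)

Here (F+dF)(V+dV) = 90*1150 = 103500 > FV = 100*1000 = 100000, matching
e = -1.5 < -1.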

[Lightning-dev] Solving the Price Of Anarchy Problem, Or: Cheap AND Reliable Payments Via Forwarding Fee Economic Rationality

2022-06-05 Thread ZmnSCPxj via Lightning-dev
Introduction
============

Bell Curve Meme (LEET ASCII ART SKILLZ)

        Optimize for reliability+
      uncertainty+fee+drain+uptime...
               .--~~--.
              /        \
             /          \
            /            \
           /              \
          /                \
      _--'                  `--_
   Just                        Just
 optimize                    optimize
   for                          for
 low fee                      low fee

Recently, Rene Pickhardt (Chatham House rules note: I asked for explicit 
permission from Rene to reveal his name here) presented some work-in-progress 
thinking about a concept called "Price of Anarchy".
Roughly speaking, we can consider this "Price of Anarchy" as being similar to 
concepts such as:

* The Cost of Decentralization of Bitcoin.
* The cost of the Tragedy of the Commons.

Briefly, we need to find a "dominant strategy" for payers and forwarding nodes.
The "dominant strategy" is the strategy which optimizes:

* For payers: minimizes their personal payment failures and fees.
* For forwarders: maximizes their net earnings over time.

Worse, the "dominant strategy" is a strategy that STILL works better than other 
strategies even if the other strategies are commonly used on the network, AND 
still works better even if everyone else is using the dominant strategy.
The technical term here is "Nash equilibrium", which is basically the above 
definition.

This will cause some amount of payment failures and impose fees on payers.
Now, we can compare the rate of payment failures and average fees, when 
everyone uses this specific dominant strategy, versus the following 
**imaginary** case:

* There is a perfectly tr\*stable central coordinator with perfect knowledge 
(knows all channel balances and offline/online state of nodes) who decides the 
paths where payments go through, optimizing for reduced payment failures and 
reduced fees.
  * Nobody is seriously proposing to install this; we are just trying to 
imagine how it would work and how high the fees and payment failure rates 
would be **IF** there were such a perfectly tr\*stable coordinator.

The difference in the cost between the "dominant strategy" case and the 
"perfect ***IMAGINARY*** central coordinator", is the Price of Anarchy.
Anarchy here means that the dominant strategy is used due to every actor being 
free to use any strategy, and assuming that each actor is rational and tries to 
improve its goal.

I will present a motivating example first, which was presented to me directly, 
and then present a possibly dominant strategy for forwarding nodes, which *I 
think* causes the dominant strategy for payers to be "just optimize for low 
fees".
And I think this dominant strategy for forwarding nodes will lead to behavior 
that is reasonably close to the perfect-coordinator case.

Braess Paradox
==============

Suppose we have the following network:

S ------> A
|    0    |
|         |
|         |
|2       2|
|         |
|         |
v    0    v
B ------> R

The numbers above are the cost to transfer one satoshi.

Let us suppose that all payers on the LSP `S` just want to send one satoshi 
payments to some merchant on LSP `R`.
In the above case, we can expect that there is no preferred route between 
`S->A->R` vs `S->B->R` so in general, the load will be balanced between both 
possible routes.

Now suppose we want to improve the Lightning Network and add a new channel, 
because obviously adding a new channel can only be a positive good because it 
gives more liquidity to the network amirite?

Suppose A and B create a humongous large channel (because they are routing 
nodes, they want to have lots of liquidity with each other) which "in practice" 
(*cough*) will "never" deplete in one direction or the other (i.e. it is 
perpetually bidirectional), and they set the feerate to 1 both ways.

S ------> A
|    0  / |
|      /  |
|     /   |
|2  1/1  2|
|   /     |
|  /      |
v /  0    v
B ------> R

In the above case, pathfinders from `S`, which use *only* minimize-fees (i.e. 
all pathfinding algorithms that predate Pickhardt-Richter payments), will 
*always* use `S->A->B->R`, which only costs 1, rather than `S->A->R` (which 
costs 2) or `S->B->R` (which costs 2).
The problem is that, in the past, the paths `S->A->R` and `S->B->R` got 
reasonably balanced traffic, and *maybe* each was able to handle half the 
total number of payments.
Now suppose that, with `S->A` having to handle *all* the payments, it reaches 
depletion, and some of the payments fail and have to be retried, increasing 
payment time and making our users mad (we actually have users now???).
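
The shift is easy to see numerically (a Python sketch; just enumerating the
possible routes with the fees from the diagrams above):

  # Per-hop fees from the diagrams above.
  fee = {("S","A"): 0, ("S","B"): 2, ("A","R"): 2, ("B","R"): 0, ("A","B"): 1}

  routes_before = [["S","A","R"], ["S","B","R"]]
  routes_after  = routes_before + [["S","A","B","R"]]

  def cost(route):
      return sum(fee[hop] for hop in zip(route, route[1:]))

  for routes in (routes_before, routes_after):
      best = min(routes, key=cost)
      print(best, "costs", cost(best))

Before the new channel the two routes tie at cost 2; afterwards every min-fee 
pathfinder picks `S->A->B->R` at cost 1, and all load lands on `S->A`.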

This is the Braess Paradox: "Adding more roads can cause more traffic, removing 
roads can cause less traffic".
Naively, we believ