Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-17 Thread Antoine Riard
As long as protocol development and design is done neutrally, I'm all fine!


On Fri, Feb 17, 2023 at 10:48, Joost Jager wrote:

> Right, that was my above point about fetching scoring data - there's three
>> relevant "buckets" of
>> nodes, I think - (a) large nodes sending lots of payments, like the
>> above, (b) "client nodes" that
>> just connect to an LSP or two, (c) nodes that route some but don't send a
>> lot of payments (but do
>> send *some* payments), and may have lots or not very many channels.
>>
>> (a) I think we're getting there, and we don't need to add anything extra
>> for this use-case beyond
>> the network maturing and improving our scoring algorithms.
>> (b) I think is trivially solved by downloading the data from a node in
>> category (a), presumably the
>> LSP(s) in question (see other branch of this thread)
>> (c) is trickier, but I think the same solution of just fetching
>> semi-trusted data here more than
>> suffices. For most routing nodes that don't send a lot of payments we're
>> talking about a very small
>> amount of payments, so trusting a third-party for scoring data seems
>> reasonable.
>>
>
> I see that in your view all nodes will either be large nodes themselves,
> or be downloading scoring data from large nodes. I'd argue that that is
> more of a move towards centralisation than the `ha` flag is. The flag at
> least allows small nodes to build up their view of the network in an
> efficient and independent manner.
>
> Joost
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-17 Thread Joost Jager
>
> Right, that was my above point about fetching scoring data - there's three
> relevant "buckets" of
> nodes, I think - (a) large nodes sending lots of payments, like the above,
> (b) "client nodes" that
> just connect to an LSP or two, (c) nodes that route some but don't send a
> lot of payments (but do
> send *some* payments), and may have lots or not very many channels.
>
> (a) I think we're getting there, and we don't need to add anything extra
> for this use-case beyond
> the network maturing and improving our scoring algorithms.
> (b) I think is trivially solved by downloading the data from a node in
> category (a), presumably the
> LSP(s) in question (see other branch of this thread)
> (c) is trickier, but I think the same solution of just fetching
> semi-trusted data here more than
> suffices. For most routing nodes that don't send a lot of payments we're
> talking about a very small
> amount of payments, so trusting a third-party for scoring data seems
> reasonable.
>

I see that in your view all nodes will either be large nodes themselves, or
be downloading scoring data from large nodes. I'd argue that that is more
of a move towards centralisation than the `ha` flag is. The flag at least
allows small nodes to build up their view of the network in an efficient
and independent manner.

Joost


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-16 Thread Antoine Riard
Yeah, definitely looking forward to talking more about highly available
lightning channels during the next LN channel jamming meetup!

On Thu, Feb 16, 2023 at 00:43, Matt Corallo wrote:

>
>
> On 2/14/23 11:36 PM, Joost Jager wrote:
> > But how do you decide to set it without a credit relationship? Do I
> > measure my channel and set the bit because the channel is "usually" (at
> > what threshold?) saturating in the inbound direction? What happens if
> > this changes for an hour and I get unlucky? Did I just screw myself?
> >
> >
> > As a node setting the flag, you'll have to make sure you open new
> channels, rebalance or swap-in in
> > time to maintain outbound liquidity. That's part of the game of running
> an HA channel.
>
> Define "in time" in a way that results in senders not punishing you for
> not meeting your "HA
> guarantees" due to a large flow. I don't buy that this results in anything
> other than pressure to
> add credit.
>
> >  > How can you be sure about this? This isn't publicly visible data.
> >
> > Sure it is! https://river.com/learn/files/river-lightning-report.pdf
> > 
> >
> >
> > Some operators publish data, but are the experiences of one of the most
> well connected (custodial)
> > nodes representative for the network as a whole when evaluating payment
> success rates? In the end
> > you can't know what's happening on the lightning network.
>
> Right, that was my above point about fetching scoring data - there's three
> relevant "buckets" of
> nodes, I think - (a) large nodes sending lots of payments, like the above,
> (b) "client nodes" that
> just connect to an LSP or two, (c) nodes that route some but don't send a
> lot of payments (but do
> send *some* payments), and may have lots or not very many channels.
>
> (a) I think we're getting there, and we don't need to add anything extra
> for this use-case beyond
> the network maturing and improving our scoring algorithms.
> (b) I think is trivially solved by downloading the data from a node in
> category (a), presumably the
> LSP(s) in question (see other branch of this thread)
> (c) is trickier, but I think the same solution of just fetching
> semi-trusted data here more than
> suffices. For most routing nodes that don't send a lot of payments we're
> talking about a very small
> amount of payments, so trusting a third-party for scoring data seems
> reasonable.
>
> Once we do that, everyone gets a similar experience as the River report :).
>
> Matt


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-15 Thread Matt Corallo



On 2/14/23 11:36 PM, Joost Jager wrote:

But how do you decide to set it without a credit relationship? Do I measure
my channel and set the bit because the channel is "usually" (at what
threshold?) saturating in the inbound direction? What happens if this
changes for an hour and I get unlucky? Did I just screw myself?


As a node setting the flag, you'll have to make sure you open new channels, rebalance or swap-in in 
time to maintain outbound liquidity. That's part of the game of running an HA channel.


Define "in time" in a way that results in senders not punishing you for not meeting your "HA 
guarantees" due to a large flow. I don't buy that this results in anything other than pressure to 
add credit.



 > How can you be sure about this? This isn't publicly visible data.

Sure it is! https://river.com/learn/files/river-lightning-report.pdf



Some operators publish data, but are the experiences of one of the most well connected (custodial) 
nodes representative for the network as a whole when evaluating payment success rates? In the end 
you can't know what's happening on the lightning network.


Right, that was my above point about fetching scoring data - there's three relevant "buckets" of 
nodes, I think - (a) large nodes sending lots of payments, like the above, (b) "client nodes" that 
just connect to an LSP or two, (c) nodes that route some but don't send a lot of payments (but do 
send *some* payments), and may have lots or not very many channels.


(a) I think we're getting there, and we don't need to add anything extra for this use-case beyond 
the network maturing and improving our scoring algorithms.
(b) I think is trivially solved by downloading the data from a node in category (a), presumably the 
LSP(s) in question (see other branch of this thread)
(c) is trickier, but I think the same solution of just fetching semi-trusted data here more than 
suffices. For most routing nodes that don't send a lot of payments we're talking about a very small 
amount of payments, so trusting a third-party for scoring data seems reasonable.


Once we do that, everyone gets a similar experience as the River report :).

Matt


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-15 Thread Joost Jager
>
> I think the performance question depends on the type of payment flows
> considered. If you're an
> end-user sending a payment to your local Starbucks for coffee, fast
> payment sounds like the end-goal.
> If you're doing remittance payment, cheap fees might be favored, and in
> function of those flows you're
> probably not going to select the same "performant" routing nodes. I think
> adding latency as a criterion for
> pathfinding construction has already been mentioned in the past for LDK
> [0].
>

My hope is that eventually lightning nodes can run so efficiently that in
practice there is no real trade-off anymore between cost and speed. But of
course it's hard to say how that's going to play out. I am all for adding
latency as an input to pathfinding. Attributable errors should help with
that too.
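To make the idea concrete, here is a minimal sketch of how a pathfinder might fold per-hop latency into its edge cost alongside fees. This is not from any implementation; the weight and function names are invented for illustration:

```python
# Hypothetical sketch: combine a hop's fee and observed latency into a
# single comparable pathfinding cost. The weight expresses how many msat
# of extra fee the sender would pay to shave one millisecond off a route.

def edge_cost(fee_msat: int, latency_ms: float,
              latency_weight_msat_per_ms: float = 10.0) -> float:
    """Return one scalar cost for a hop, mixing fee and latency."""
    return fee_msat + latency_weight_msat_per_ms * latency_ms

# A fast-but-pricier hop can beat a cheap-but-slow one:
fast = edge_cost(fee_msat=1500, latency_ms=50)    # 1500 + 500  = 2000.0
slow = edge_cost(fee_msat=1000, latency_ms=400)   # 1000 + 4000 = 5000.0
assert fast < slow
```

The weight is a per-sender preference: a point-of-sale wallet would set it high, a remittance sender close to zero.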


> Or there is the direction to build forward-error-correction code on top of
> MPP, like in traditional
> networking [1]. The rough idea: you send more payment shards than the
> requested sum, and then
> you reveal the payment secrets to the receiver after an onion
> interactivity round to finalize payment.
>

This is not very different from payment pre-probing, is it? So try a larger
set of possible routes simultaneously and, when one proves to be open, send
the real payment across that route. Of course a balance may have shifted in
the meantime, but that seems unlikely enough that it wouldn't prevent the
approach from being usable. The obvious downside is that the user needs
more total liquidity to have multiple HTLCs outstanding at the same time.
Nevertheless an interesting way to reduce payment latency.
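The liquidity/latency trade-off of probing several candidate routes in parallel can be put in rough numbers. A small illustrative calculation, assuming (unrealistically) independent per-route success probabilities:

```python
# Illustrative arithmetic only: probability that at least one of several
# simultaneously probed routes turns out to be open, assuming each route
# succeeds independently with the same probability.

def parallel_success_prob(p_route: float, num_routes: int) -> float:
    """P(at least one of num_routes independent probes finds an open route)."""
    return 1.0 - (1.0 - p_route) ** num_routes

# With 60% per-route success, three simultaneous probes find an open route
# ~93.6% of the time -- at the cost of locking up roughly 3x the payment
# amount in outstanding HTLCs while the probes are in flight.
p = parallel_success_prob(0.6, 3)
assert abs(p - 0.936) < 1e-9
```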


> At the end of the day, we add more signal channels between HTLC senders
> and the routing
> nodes offering capital liquidity, if the signal mechanisms are efficient,
> I think they should lead
> to better allocation of the capital. So yes, I think more liquidity might
> be used by routing nodes
> to serve finely tailored HTLC requests by senders, however this liquidity
> should be rewarded
> by higher routing fees.
>

This is indeed part of the idea. By signalling HA, you may not only attract
more traffic, but also be able to command a higher fee.


> I think if we have lessons to learn on policy rules design and deployment
> on the base-layer
> (the full-rbf saga), it's to be careful in the initial set of rules, and
> how we ensure smooth
> upgradeability, from one version to another. Otherwise the re-deployment
> cost towards
> the new version might incentivize old routing nodes to stay on
> non-optimal versions,
> and as we have historical buckets in routing algorithms, or a preference
> for older channels,
> this might lead end-users to pay higher fees than they otherwise could.
>

I see the parallel, but it also seems that we have this situation already
today on lightning. Senders apply penalties, and routing nodes need to make
assumptions about how they are penalised. Perhaps more explicit signalling
can actually help to reduce the degree of uncertainty as to how a routing
node is supposed to perform to keep senders happy?


> This is where the open question lies to me - "highly available" can be
> defined with multiple
> senses, like fault-tolerance, latency processing, equilibrated liquidity.
> And a routing node might
> not be able to optimize its architecture for the same end-goal (e.g more
> watchtower on remote
> host probably increases the latency processing).
>

Yes, good point. So maybe a few more bits to signal what a sender may
expect from a channel exactly?


> > Without shadow channels, it is impossible to guarantee liquidity up to
> > the channel capacity. It might make sense for senders to only assume
> > high availability for amounts up to `htlc_maximum_msat`.
>
> As a note, I think "senders assumption" should be well-documented,
> otherwise there will be
> performance discrepancies between node implementations or even versions.
> E.g, an upgraded
> sender penalizing a node for the lack of shadow/parallel channels
> fulfilling HTLC amounts up to
> `htlc_maximum_msat`.
>

Well documented, or maybe even explicit in the name of the feature bit. For
example `htlc_max_guaranteed`.


> I think signal availability should be explicit rather than implicit. Even
> if it's coming with more
> gossip bandwidth data consumed. I would say for bandwidth performance
> management, relying
> on new gossip messages, where they can be filtered in function of the
> level of services required
> is interesting.
>

In terms of implementation, I think this kind of signalling is easier as an
extension of `channel_update`, but it can probably work as a separate
message too.
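As a sketch of what such a `channel_update` extension could look like: BOLT 7 defines bit 0 (direction) and bit 1 (disable) of the `channel_flags` byte; the bit position used below for the hypothetical `highly_available` signal is an assumption, not a standardized value.

```python
# Hypothetical encoding sketch for an HA signal in a spare bit of the
# BOLT 7 `channel_update` `channel_flags` byte. Bits 0 and 1 are defined
# today; bit 2 is an assumed, non-standardized position.

DIRECTION_BIT = 1 << 0  # which end of the channel this update is from
DISABLE_BIT = 1 << 1    # channel temporarily disabled
HA_BIT = 1 << 2         # hypothetical highly_available signal

def set_highly_available(channel_flags: int) -> int:
    """Set the hypothetical HA bit, leaving the defined bits untouched."""
    return channel_flags | HA_BIT

def is_highly_available(channel_flags: int) -> bool:
    return bool(channel_flags & HA_BIT)

flags = 0b01                        # direction 1, enabled, no HA
flags = set_highly_available(flags)
assert is_highly_available(flags)
assert flags & DISABLE_BIT == 0     # other bits unchanged
```

Because the flag rides in the per-direction `channel_update`, each end of a channel signals it independently, matching the directional signaling discussed above.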

Joost


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Joost Jager
>
> But how do you decide to set it without a credit relationship? Do I
> measure my channel and set the bit because the channel is "usually" (at
> what threshold?) saturating in the inbound direction? What happens if
> this changes for an hour and I get unlucky? Did I just screw myself?
>

As a node setting the flag, you'll have to make sure you open new channels,
rebalance or swap-in in time to maintain outbound liquidity. That's part of
the game of running an HA channel.


> > How can you be sure about this? This isn't publicly visible data.
>
> Sure it is! https://river.com/learn/files/river-lightning-report.pdf


Some operators publish data, but are the experiences of one of the most
well connected (custodial) nodes representative for the network as a whole
when evaluating payment success rates? In the end you can't know what's
happening on the lightning network.


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Matt Corallo



On 2/14/23 1:42 PM, Antoine Riard wrote:

Hi Joost,

> I think movement in this direction is important to guarantee
> competitiveness with centralised payment systems and their (at least
> theoretical) ability to process a payment in the blink of an eye. A
> lightning wallet trying multiple paths to find one that works doesn't
> help with this.


Or there is the direction to build forward-error-correction code on top of MPP, 
like in traditional
networking [1]. The rough idea: you send more payment shards than the requested 
sum, and then
you reveal the payment secrets to the receiver after an onion interactivity 
round to finalize payment.


Ah, thank you for bringing this up! I'd thought about it and then forgot to 
mention it in this thread.

I think this is very important to highlight as we talk about "building a reliable lightning network 
out of unreliable nodes" - this is an *incredibly* powerful feature for this.


While it's much less capital-efficient, the ability to over-commit upfront and then only allow the 
recipient to claim a portion of the total committed funds would substantially reduce the impact of 
failed HTLCs on payment latency. Of course the extra round-trip to request the "unlock keys" for the 
correct set of HTLCs adds a chunk to total latency, so senders will have to be careful about 
deciding when to do this or not.


Still, now that we have onion messages, we should do (well, try) this! It's not super complicated to 
implement (like everything, it seems, the obvious implementation forgoes proof-of-payment, and like 
everything the obvious solution is PTLCs, I think). It's not clear to me how we get good data from 
trials, though; we'd need a sufficient set of the network to support this that we could actually 
test it, which is hard to get for a test.


Maybe someone (anyone?) wants to run some experiments, doing simulations with real probing success 
rates, to figure out how successful this would be and propose a concrete sender strategy that would 
improve success rates.
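As a toy starting point for such an experiment, one could compare over-committed sends against single-attempt sends in simulation. The success probabilities here are invented, not measured, and real shard failures are certainly not independent:

```python
import random

# Rough simulation sketch: a payment succeeds if at least one of the
# committed shards reaches the recipient, who then unlocks only that
# subset. Per-shard success rates are made-up placeholders.

def simulate(p_success: float, overcommit: int, trials: int,
             seed: int = 1) -> float:
    """Fraction of trials in which at least one of `overcommit` shards lands."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        if any(rng.random() < p_success for _ in range(overcommit)):
            wins += 1
    return wins / trials

plain = simulate(0.7, overcommit=1, trials=20000)
over = simulate(0.7, overcommit=3, trials=20000)
assert over > plain  # over-committing raises the observed success rate
```

A real study would replace the independent coin flips with per-route probing data and also account for the liquidity locked up by the extra shards.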


Matt


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Antoine Riard
Hi Joost,

> For a long time I've held the expectation that eventually payers on the
> lightning network will become very strict about node performance. That
> they will require a routing node to operate flawlessly or else apply a
> hefty penalty such as completely avoiding the node for an extended period
> of time - multiple weeks. The consequence of this is that routing nodes
> would need to manage their liquidity meticulously because every failure
> potentially has a large impact on future routing revenue.

I think the performance question depends on the type of payment flows
considered. If you're an end-user sending a payment to your local Starbucks
for coffee, fast payment sounds like the end-goal. If you're doing
remittance payments, cheap fees might be favored, and depending on those
flows you're probably not going to select the same "performant" routing
nodes. I think adding latency as a criterion for pathfinding has already
been mentioned in the past for LDK [0].

> I think movement in this direction is important to guarantee
> competitiveness with centralised payment systems and their (at least
> theoretical) ability to process a payment in the blink of an eye. A
> lightning wallet trying multiple paths to find one that works doesn't
> help with this.

Or there is the direction to build forward-error-correction codes on top of
MPP, like in traditional networking [1]. The rough idea: you send more
payment shards than the requested sum, and then you reveal the payment
secrets to the receiver after an onion interactivity round to finalize the
payment.

> A common argument against strict penalisation is that it would lead to
> less efficient use of capital. Routing nodes would need to maintain pools
> of liquidity to guarantee successes all the time. My opinion on this is
> that lightning is already enormously capital efficient at scale and that
> it is worth sacrificing a slight part of that efficiency to also achieve
> the lowest possible latency.

At the end of the day, we add more signal channels between HTLC senders and
the routing nodes offering capital liquidity; if the signal mechanisms are
efficient, I think they should lead to better allocation of that capital.
So yes, I think more liquidity might be used by routing nodes to serve
finely tailored HTLC requests by senders, but this liquidity should be
rewarded by higher routing fees.

> This brings me to the actual subject of this post. Assuming strict
> penalisation is good, it may still not be ideal to flip the switch from
> one day to the other. Routing nodes may not offer the required level of
> service yet, causing senders to end up with no nodes to choose from.

> One option is to gradually increase the strength of the penalties, so
> that routing nodes are given time to adapt to the new standards. This
> does require everyone to move along and leaves no space for cheap routing
> nodes with less leeway in terms of liquidity.

I think if we have lessons to learn on policy rules design and deployment
on the base layer (the full-rbf saga), it's to be careful with the initial
set of rules, and with how we ensure smooth upgradeability from one version
to another. Otherwise the re-deployment cost towards the new version might
incentivize old routing nodes to stay on non-optimal versions, and as we
have historical buckets in routing algorithms, or a preference for older
channels, this might lead end-users to pay higher fees than they otherwise
could.

> Therefore I am proposing another way to go about it: extend the
> `channel_update` field `channel_flags` with a new bit that the sender can
> use to signal `highly_available`.

> It's then up to payers to decide how to interpret this flag. One way
> could be to prefer `highly_available` channels during pathfinding. But if
> the routing node then returns a failure, a much stronger than normal
> penalty will be applied. For routing nodes this creates an opportunity to
> attract more traffic by marking some channels as `highly_available`, but
> it also comes with the responsibility to deliver.

This is where the open question lies for me - "highly available" can be
understood in multiple senses, like fault tolerance, processing latency, or
equilibrated liquidity. And a routing node might not be able to optimize
its architecture for all of these end-goals at once (e.g. more watchtowers
on remote hosts probably increase processing latency).

> Without shadow channels, it is impossible to guarantee liquidity up to
> the channel capacity. It might make sense for senders to only assume
> high availability for amounts up to `htlc_maximum_msat`.

As a note, I think "sender assumptions" should be well-documented,
otherwise there will be performance discrepancies between node
implementations or even versions. E.g., an upgraded sender penalizing a
node for the lack of shadow/parallel channels fulfilling HTLC amounts up to
`htlc_maximum_msat`.

> A variation on this scheme that requires no extension of `channel_update`
is to signal availability 

Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Matt Corallo



On 2/14/23 2:34 AM, Joost Jager wrote:

Hi Matt,

If nodes start aggressively preferring routes through nodes that reliably 
route payments (which
I believe lnd already does, in effect, to some large extent), they should 
do so by measurement,
not signaling.


The signaling is intended as a way to make measurement more efficient. If a node signals that a 
particular channel is HA and it fails, no other measurements on that same node need to be taken by 
the sender. They can skip the node altogether for a longer period of time.


But as a lightning node I don't actually care if a node is binary good/bad. I care about what 
success rate a node has. If you make the decision binary, suddenly in order for a node to be "good" 
I *have* to establish a credit relationship with my peers (i.e. support 0conf splicing). I think 
that is a very, very bad thing to do to the lightning network.


If someone wants to establish such a relationship with their peers, so be it, but as developers we 
should strongly avoid adding features which push node operators in that direction, and part of that 
is writing good routing scoring so that we aren't boxing ourselves into some binary good/bad idea of 
a node but rather estimating liquidity.


Honestly this just strikes me as developers being too lazy to do things right. If we do things 
carefully and we are seeing issues then we can consider breaking lightning, but until we give it a 
good shot, let's not!



In practice, many channels on the network are “high availability” today, 
but only in one
direction (I.e. they aren’t regularly spliced/rebalanced and are regularly 
unbalanced). A node
strongly preferring a high payment success rate *should* prefer such a 
channel, but in your
scheme would not.


This shouldn't be a problem, because the HA signaling is also directional. Each end can decide 
independently on whether to add the flag for a particular channel.


But how do you decide to set it without a credit relationship? Do I measure my channel and set the 
bit because the channel is "usually" (at what threshold?) saturating in the inbound direction? What 
happens if this changes for an hour and I get unlucky? Did I just screw myself?



This ignores the myriad of “at what threshold do you signal HA” issues, 
which likely make such a
signal DOA, anyway.


I think this is a product of sender preference for HA channels and the severity of the penalty if an 
HA channel fails. Given this, routing nodes will need to decide whether they can offer a service 
level that increases their routing revenue overall if they would signal HA. It is indeed dynamic, 
but I think the market is able to work it out.


I'm afraid this is going to immediately fall into a cargo cult of "set the bit" vs "don't set the 
bit" and we'll never get useful data out of it. But you may be right.



Finally, I’m very dismayed at this direction in thinking on how ln should 
work - nodes should be
measuring the network and routing over paths that it thinks are reliable 
for what it wants,
*robustly over an unreliable network*. We should absolutely not be 
expecting the lightning
network to be built out of high reliability nodes, that creates strong 
centralization pressure.
To truly meet a “high availability” threshold, realistically, you’d need to 
be able to JIT 0conf
splice-in, which would drive lightning to actually being a credit network.


Different people can have different opinions about how ln should work, that is fine. I see a 
trade-off between the reliability of the network and the barrier of entry, and I don't think the 
optimum is on one of the ends of the scale.


My point wasn't that lightning should be unreliable, but rather that it should be a reliable network 
built on unreliable hops. I'm very confident we can accomplish that without falling back to forcing 
nodes to establish credit to meet "reliability requirements".



With reasonable volume, lightning today is very reliable and relatively 
fast, with few retries
required. I don’t think we need to change anything to fix it. :)


How can you be sure about this? This isn't publicly visible data.


Sure it is! https://river.com/learn/files/river-lightning-report.pdf

I'm also quite confident we can do substantially better than this.

Matt


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Matt Corallo



On 2/13/23 7:05 PM, ZmnSCPxj wrote:

Good morning all,


First of all let's see what types of reputation system exist (and yes,
this is my very informal categorization):

- First hand experience
- Inferred experience
- Hearsay

The first two are likely the setup we all are comfortable with: we ourselves
experienced something, and make some decisions based on that
experience. This is probably what we're all doing at the moment: we
attempt a payment, it fails, we back off for a bit from that channel
being used again. This requires either being able to witness the issue
directly (local peer) or infer from unforgeable error messages (the
failing node returns an error, and it can't point the finger at someone
else). Notice that this also includes some transitive constructions,
such as the backpressure mechanism we were discussing for ariard's
credentials proposal.

Ideally we'd only rely on the first two to make decisions, but here's
exactly the issue we ran into with Bittorrent: repeat interactions are
too rare. In addition, our local knowledge gets out of date the longer
we wait, and a previously failing channel may now be good again, and
vice-versa. For us to have sufficient knowledge to make good decisions
we need to repeatedly interact with the same nodes in the network, and
since end-users will be very unlikely to do that, we might end up in a
situation where we instinctively fall back to the hearsay method, either
by sharing our local reputation with peers and then somehow combining that
with our own view. To the best of my knowledge such a system has never
been built successfully, and all attempts have ended in a system that
was either way too simple or is gameable by rational players.



In lightning we have a trivial solution to this - your wallet vendor/LSP is already extracting a fee 
from you for every HTLC routed through it; it has you captive and can set the fee (largely) 
arbitrarily (up to you paying on-chain fees to switch LSPs). They can happily tell you their view of 
the network ~live and you should generally accept it. It's by no means perfect, and there are plenty 
of games they could play on, e.g., your privacy, but it's pretty damned good.

If we care a ton about the risks here, we could have a few altruistic nodes 
that release similar
info and users can median-filter the data in one way or another to reduce risk.
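The median-filter idea can be sketched in a few lines; the channel ids and score values below are invented for illustration:

```python
from statistics import median

# Sketch: combine per-channel success estimates published by several
# altruistic nodes, taking the median per channel so no single rogue
# source can skew the result much. All data here is made up.

def median_filter_scores(per_source_scores: list[dict[str, float]]) -> dict[str, float]:
    """For each channel id, take the median score across all sources."""
    channels = set().union(*per_source_scores)
    return {cid: median(src.get(cid, 0.0) for src in per_source_scores)
            for cid in channels}

sources = [
    {"chan_a": 0.95, "chan_b": 0.40},
    {"chan_a": 0.90, "chan_b": 0.45},
    {"chan_a": 0.10, "chan_b": 0.42},  # one dishonest/rogue source
]
scores = median_filter_scores(sources)
assert scores["chan_a"] == 0.90  # the 0.10 outlier is filtered out
assert scores["chan_b"] == 0.42
```

With an odd number of sources, a single dishonest publisher can never move the median outside the range the honest sources report.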

I just do not buy that this is a difficult problem for the "end user" part of the network. For 
larger nodes it's (obviously, and trivially) not a problem either, which leaves the "middle nodes" 
stranded without good data and without an LSP they want to use for data. I believe that isn't a 
large enough cohort to change the whole network around for, and them asking a few altruistic (let's 
say, developer?) nodes for scoring data seems more than sufficient.


But this is all ultimately hearsay.

LSPs can be bought out, and developers can go rogue.
Neither should be trusted if at all possible.


You're missing the point - if your LSP wants to "go rogue" here, at worst they can charge you more 
fees. They could also do this by...charging you more fees. I'm not really sure what your concern is.



Which is why I think forwardable peerswaps fixes this: it *creates* paths that 
allow payment routing, without requiring pervasive monitoring (which is 
horrible because eventually the network will be large enough that you will 
never encounter the same node twice if you're a plebeian, and if you're an 
aristocrat, you have every incentive to lie to the plebeians to solidify your 
power) of the network.


No, this is much, much worse for the network. In order to do this "live" (i.e. without failing a 
payment) you have to establish trust relationships across the network (i.e. make giving your peers 
credit a requirement to be considered a "robust node" and, thus, receive fee revenue).


Doing splicing/peerswap as a better way to rebalance is, of course, awesome, but it doesn't solve 
the issue of "what do I do if I'm just too low on capacity *right now* to clear this HTLC".


Matt


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Joost Jager
Hi Christian,


> And after all this rambling, let's get back to the topic at hand: I
> don't think enshrining the differences of availability in the protocol,
> thus creating two classes of nodes, is a desirable
> feature.


Yes so to be clear, the HA signaling is not on the node level but on the
channel level. So each node can decide per channel whether they want to
potentially attract additional traffic at the cost of severe penalties (or
avoidance if you want to use a different wording) if the channel can't be
used. They can still maintain a set of less reliable channels alongside.


> Communicating up-front that I intend to be reliable does
> nothing, and penalizing after the fact isn't worth much due to the
> repeat interactions issue.


I think it is currently quite common for pathfinders to try another channel
of the same node for the payment at hand, or to re-attempt the same channel
for a future payment to the same destination. I understand the repeat
interactions issue, but I'm not sure about the extent to which it applies
to lightning in practice. I think a common pattern for payments in general
is to pay to the same destinations repeatedly, for example for a daily
coffee.


> It'd be even worse if now we had to rely on a
> third party to aggregate and track the reliability, in order to get
> enough repeat interactions to build a good model of their liquidity,
> since we're now back in the hearsay world, and the third party can feed
> us wrong information to maximize their profits.
>

Yes, using 3rd party info seems difficult. As mentioned in my reply to
Matt, the idea of HA signaling is to make local reliability tracking more
efficient so that it becomes less likely that senders need to rely on
external aggregators for their view on the network.

Joost


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Joost Jager
Hi Matt,

If nodes start aggressively preferring routes through nodes that reliably
> route payments (which I believe lnd already does, in effect, to some large
> extent), they should do so by measurement, not signaling.
>

The signaling is intended as a way to make measurement more efficient. If a
node signals that a particular channel is HA and it fails, no other
measurements on that same node need to be taken by the sender. They can
skip the node altogether for a longer period of time.
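
The stricter treatment of a failed HA channel described above could look something like the following sketch (the class, data structures, and penalty durations are all invented for illustration; a real pathfinder tracks much more state):

```python
import time

# Hypothetical sender-side tracker: a failure on a channel signalled as
# highly available (HA) lets the sender skip the whole node for much longer
# than a failure on an ordinary channel would.
ORDINARY_PENALTY_SECS = 10 * 60        # back off from the channel for 10 min
HA_PENALTY_SECS = 14 * 24 * 3600       # avoid the entire node for 2 weeks

class ReliabilityTracker:
    def __init__(self):
        self.channel_until = {}   # channel_id -> timestamp until which to avoid
        self.node_until = {}      # node_id -> timestamp until which to avoid

    def record_failure(self, node_id, channel_id, is_ha, now=None):
        now = time.time() if now is None else now
        if is_ha:
            # One failed HA channel is enough: no further probing of this node.
            self.node_until[node_id] = now + HA_PENALTY_SECS
        else:
            self.channel_until[channel_id] = now + ORDINARY_PENALTY_SECS

    def usable(self, node_id, channel_id, now=None):
        now = time.time() if now is None else now
        return (self.node_until.get(node_id, 0) <= now
                and self.channel_until.get(channel_id, 0) <= now)
```

The efficiency gain is in `record_failure`: a single HA failure prunes every channel of that node from the search space at once.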


> In practice, many channels on the network are “high availability” today,
> but only in one direction (I.e. they aren’t regularly spliced/rebalanced
> and are regularly unbalanced). A node strongly preferring a high payment
> success rate *should* prefer such a channel, but in your scheme would not.
>

This shouldn't be a problem, because the HA signaling is also directional.
Each end can decide independently whether to add the flag for a
particular channel.


> This ignores the myriad of “at what threshold do you signal HA” issues,
> which likely make such a signal DOA, anyway.
>

I think this is a product of sender preference for HA channels and the
severity of the penalty if an HA channel fails. Given this, routing nodes
will need to decide whether they can offer a service level that increases
their routing revenue overall if they signal HA. It is indeed
dynamic, but I think the market is able to work it out.


> Finally, I’m very dismayed at this direction in thinking on how ln should
> work - nodes should be measuring the network and routing over paths that it
> thinks are reliable for what it wants, *robustly over an unreliable
> network*. We should absolutely not be expecting the lightning network to be
> built out of high reliability nodes, that creates strong centralization
> pressure. To truly meet a “high availability” threshold, realistically,
> you’d need to be able to JIT 0conf splice-in, which would drive lightning
> to actually being a credit network.
>

Different people can have different opinions about how ln should work, that
is fine. I see a trade-off between the reliability of the network and the
barrier to entry, and I don't think the optimum is on one of the ends of
the scale.


> With reasonable volume, lightning today is very reliable and relatively
> fast, with few retries required. I don’t think we need to change anything
> to fix it. :)
>

How can you be sure about this? This isn't publicly visible data.

Joost


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-13 Thread ZmnSCPxj via Lightning-dev
Good morning all,

> > First of all let's see what types of reputation system exist (and yes,
> > this is my very informal categorization):
> > 
> > - First hand experience
> > - Inferred experience
> > - Hearsay
> > 
> > The first two are likely the setup we all are comfortable with: we ourselves
> > experienced something, and make some decisions based on that
> > experience. This is probably what we're all doing at the moment: we
> > attempt a payment, it fails, we back off for a bit from that channel
> > being used again. This requires either being able to witness the issue
> > directly (local peer) or infer from unforgeable error messages (the
> > failing node returns an error, and it can't point the finger at someone
> > else). Notice that this also includes some transitive constructions,
> > such as the backpressure mechanism we were discussing for ariard's
> > credentials proposal.
> > 
> > Ideally we'd only rely on the first two to make decisions, but here's
> > exactly the issue we ran into with Bittorrent: repeat interactions are
> > too rare. In addition, our local knowledge gets out of date the longer
> > we wait, and a previously failing channel may now be good again, and
> > vice-versa. For us to have sufficient knowledge to make good decisions
> > we need to repeatedly interact with the same nodes in the network, and
> > since end-users will be very unlikely to do that, we might end up in a
> > situation where we instinctively fall back to the hearsay method, either
> > by sharing our local reputation with peers and then somehow combine that
> > with our own view. To the best of my knowledge such a system has never
> > been built successfully, and all attempts have ended in a system that
> > was either way too simple or gameable by rational players.
> 
> 
> In lightning we have a trivial solution to this - your wallet vendor/LSP is 
> already extracting a fee
> from you for every HTLC routed through it, it has you captive and can set the 
> fee (largely)
> arbitrarily (up to you paying on-chain fees to switch LSPs). They can happily 
> tell you their view of
> the network ~live and you should generally accept it. It's by no means 
> perfect, and there's plenty of
> games they could play on, e.g., your privacy, but it's pretty damned good.
> 
> If we care a ton about the risks here, we could have a few altruistic nodes 
> that release similar
> info and users can median-filter the data in one way or another to reduce 
> risk.
> 
> I just do not buy that this is a difficult problem for the "end user" part of 
> the network. For
> larger nodes it's (obviously, and trivially) not a problem either, which 
> leaves the "middle nodes"
> stranded without good data but without an LSP they want to use for data. I 
> believe that isn't a
> large enough cohort to change the whole network around for, and them asking a 
> few altruistic (let's
> say, developer?) nodes for scoring data seems more than sufficient.

But this is all ultimately hearsay.

LSPs can be bought out, and developers can go rogue.
Neither should be trusted if at all possible.

Which is why I think forwardable peerswaps fixes this: it *creates* paths that 
allow payment routing, without requiring pervasive monitoring (which is 
horrible because eventually the network will be large enough that you will 
never encounter the same node twice if you're a plebeian, and if you're an 
aristocrat, you have every incentive to lie to the plebeians to solidify your 
power) of the network.

Ultimately the network gets healthier if flows are bidirectional. Swaps are 
essential to bootstrapping from the starting state where there are distinct 
"customers" and "merchants", but current one-hop-only peerswaps are too local 
for the blockchain cost, and multi-hop source-routed swaps have the same issue 
as standard payments.
The advantage of forwardable peerswaps is that it is specifically not source 
routed --- intermediate nodes make decisions of where to forward, and they are 
thus incentivized to benefit the network because they benefit themselves.


I think it should be a principle of protocol design to embrace a capitalistic 
mindset, by which I mean: ensuring the rules make "beneficial for me" the same 
as "beneficial to everyone".
Certainly I can take a common knife from my kitchen and stick the pointy end 
into my neighbor, then take all their belongings, which would be very 
beneficial to me, but would not be beneficial to everyone, which is why laws 
against manslaughter and theft exist.
Ultimately, protocol design is the laying down of laws, and the proper function 
of this lawmaking position is to ensure that "beneficial for me" will be 
something that is "beneficial to everyone".
Indeed, the entire point of having a punitive Poon-Dryja is to ensure that 
"beneficial for me" does not include theft of the channel funds by using old 
state, and is exemplary of this principle.
"Greed is Good" might not be true, but perhaps: "We should strive to 

Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-13 Thread Matt Corallo

Thanks Christian,

On 2/13/23 7:32 AM, Christian Decker wrote:

Hi Matt,
Hi Joost,

let me chime in here, since we seem to be slowly reinventing all the
research on reputation systems that is already out there. First of all
let me say that I am personally not a fan of reputation systems in
general, just to get my own biases out of the way, now on to the why :-)

Reputation systems are great when they work, but they are horrible to
get right, and certainly the patchworky approach we see being proposed
today will end up with a system that is easy to exploit and hard to
understand. The last time I encountered this kind of scenario was during
my work on Bittorrent, where the often theorized tit-for-tat approach
failed spectacularly, and leeching (i.e., not contributing to other
people's download) is rampant even today (BT only works because a few
don't care about their upload bandwidth).

First of all let's see what types of reputation system exist (and yes,
this is my very informal categorization):

  - First hand experience
  - Inferred experience
  - Hearsay

The first two are likely the setup we all are comfortable with: we ourselves
experienced something, and make some decisions based on that
experience. This is probably what we're all doing at the moment: we
attempt a payment, it fails, we back off for a bit from that channel
being used again. This requires either being able to witness the issue
directly (local peer) or infer from unforgeable error messages (the
failing node returns an error, and it can't point the finger at someone
else). Notice that this also includes some transitive constructions,
such as the backpressure mechanism we were discussing for ariard's
credentials proposal.

Ideally we'd only rely on the first two to make decisions, but here's
exactly the issue we ran into with Bittorrent: repeat interactions are
too rare. In addition, our local knowledge gets out of date the longer
we wait, and a previously failing channel may now be good again, and
vice-versa. For us to have sufficient knowledge to make good decisions
we need to repeatedly interact with the same nodes in the network, and
since end-users will be very unlikely to do that, we might end up in a
situation where we instinctively fall back to the hearsay method, either
by sharing our local reputation with peers and then somehow combine that
with our own view. To the best of my knowledge such a system has never
been built successfully, and all attempts have ended in a system that
was either way too simple or gameable by rational players.


In lightning we have a trivial solution to this - your wallet vendor/LSP is already extracting a fee 
from you for every HTLC routed through it, it has you captive and can set the fee (largely) 
arbitrarily (up to you paying on-chain fees to switch LSPs). They can happily tell you their view of 
the network ~live and you should generally accept it. It's by no means perfect, and there's plenty of 
games they could play on, e.g., your privacy, but it's pretty damned good.


If we care a ton about the risks here, we could have a few altruistic nodes that release similar 
info and users can median-filter the data in one way or another to reduce risk.
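
The median-filtering idea could be sketched as follows (a hypothetical illustration: the provider scores, channel ids, and the 0..1 success-rate scale are all invented):

```python
from statistics import median

# Combine per-channel success-rate estimates published by a few altruistic
# nodes, so that no single provider can unilaterally skew the sender's view.
def combine_scores(provider_scores):
    """provider_scores: list of dicts mapping channel_id -> estimated success rate."""
    channels = set().union(*(s.keys() for s in provider_scores))
    combined = {}
    for cid in channels:
        votes = [s[cid] for s in provider_scores if cid in s]
        combined[cid] = median(votes)  # robust to one outlier/rogue provider
    return combined
```

With three providers, any single rogue provider's score for a channel is discarded by the median as long as the other two roughly agree.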


I just do not buy that this is a difficult problem for the "end user" part of the network. For 
larger nodes it's (obviously, and trivially) not a problem either, which leaves the "middle nodes" 
stranded without good data but without an LSP they want to use for data. I believe that isn't a 
large enough cohort to change the whole network around for, and them asking a few altruistic (let's 
say, developer?) nodes for scoring data seems more than sufficient.



I also object to the wording of penalizing nodes that haven't been as
reliable in the past. It's not penalizing them if, based on our local
information, we decide to route over other nodes for a bit. Our goal is to
optimize the payment process, choosing the best possible routes, not
making a judgement on the honesty or reliability of a node. When talking
about penalizing we see node operators starting to play stupid games to
avoid that perceived penalty, when in reality they should do their best
to route as many payments successfully as possible (the negative fees
for direct peers "exhausting" a balanced flow is one such example of
premature optimization in that direction imho).


Yes! Very much yes! I hate this line of thinking.


So I guess what I'm saying is that we need to get away from this
patchwork mode of building the protocol, and have a much clearer model
for a) what we want to achieve, b) how much untrustworthy information we
want to rely on, and c) how we protect (and possibly prove security)
against manipulation by rational players. For the last question we at
least have one nice feature (for now), namely that the identities are
semi-permanent, and so white-washing attacks at least are not free.

And after all this rambling, let's get back to the topic at hand: I
don't think enshrining 

Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-13 Thread Christian Decker
Hi Matt,
Hi Joost,

let me chime in here, since we seem to be slowly reinventing all the
research on reputation systems that is already out there. First of all
let me say that I am personally not a fan of reputation systems in
general, just to get my own biases out of the way, now on to the why :-)

Reputation systems are great when they work, but they are horrible to
get right, and certainly the patchworky approach we see being proposed
today will end up with a system that is easy to exploit and hard to
understand. The last time I encountered this kind of scenario was during
my work on Bittorrent, where the often theorized tit-for-tat approach
failed spectacularly, and leeching (i.e., not contributing to other
people's download) is rampant even today (BT only works because a few
don't care about their upload bandwidth).

First of all let's see what types of reputation system exist (and yes,
this is my very informal categorization):

 - First hand experience
 - Inferred experience
 - Hearsay

The first two are likely the setup we all are comfortable with: we ourselves
experienced something, and make some decisions based on that
experience. This is probably what we're all doing at the moment: we
attempt a payment, it fails, we back off for a bit from that channel
being used again. This requires either being able to witness the issue
directly (local peer) or infer from unforgeable error messages (the
failing node returns an error, and it can't point the finger at someone
else). Notice that this also includes some transitive constructions,
such as the backpressure mechanism we were discussing for ariard's
credentials proposal.

Ideally we'd only rely on the first two to make decisions, but here's
exactly the issue we ran into with Bittorrent: repeat interactions are
too rare. In addition, our local knowledge gets out of date the longer
we wait, and a previously failing channel may now be good again, and
vice-versa. For us to have sufficient knowledge to make good decisions
we need to repeatedly interact with the same nodes in the network, and
since end-users will be very unlikely to do that, we might end up in a
situation where we instinctively fall back to the hearsay method, either
by sharing our local reputation with peers and then somehow combine that
with our own view. To the best of my knowledge such a system has never
been built successfully, and all attempts have ended in a system that
was either way too simple or gameable by rational players.

I also object to the wording of penalizing nodes that haven't been as
reliable in the past. It's not penalizing them if, based on our local
information, we decide to route over other nodes for a bit. Our goal is to
optimize the payment process, choosing the best possible routes, not
making a judgement on the honesty or reliability of a node. When talking
about penalizing we see node operators starting to play stupid games to
avoid that perceived penalty, when in reality they should do their best
to route as many payments successfully as possible (the negative fees
for direct peers "exhausting" a balanced flow is one such example of
premature optimization in that direction imho).

So I guess what I'm saying is that we need to get away from this
patchwork mode of building the protocol, and have a much clearer model
for a) what we want to achieve, b) how much untrustworthy information we
want to rely on, and c) how we protect (and possibly prove security)
against manipulation by rational players. For the last question we at
least have one nice feature (for now), namely that the identities are
semi-permanent, and so white-washing attacks at least are not free.

And after all this rambling, let's get back to the topic at hand: I
don't think enshrining the differences of availability in the protocol,
thus creating two classes of nodes, is a desirable
feature. Communicating up-front that I intend to be reliable does
nothing, and penalizing after the fact isn't worth much due to the
repeat interactions issue. It'd be even worse if now we had to rely on a
third party to aggregate and track the reliability, in order to get
enough repeat interactions to build a good model of their liquidity,
since we're now back in the hearsay world, and the third party can feed
us wrong information to maximize their profits.

Regards,
Christian


Matt Corallo  writes:
> Hi Joost,
>
> I’m not sure I agree that lightning is “capital efficient” (or even close to 
> it), but more generally I don’t see why this needs a signal.
>
> If nodes start aggressively preferring routes through nodes that reliably 
> route payments (which I believe lnd already does, in effect, to some large 
> extent), they should do so by measurement, not signaling.
>
> In practice, many channels on the network are “high availability” today, but 
> only in one direction (i.e., they aren’t regularly spliced/rebalanced and are 
> regularly unbalanced). A node strongly preferring a high payment success rate 
> *should* 

Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-13 Thread Matt Corallo
Hi Joost,

I’m not sure I agree that lightning is “capital efficient” (or even close to 
it), but more generally I don’t see why this needs a signal.

If nodes start aggressively preferring routes through nodes that reliably route 
payments (which I believe lnd already does, in effect, to some large extent), 
they should do so by measurement, not signaling.

In practice, many channels on the network are “high availability” today, but 
only in one direction (i.e., they aren’t regularly spliced/rebalanced and are 
regularly unbalanced). A node strongly preferring a high payment success rate 
*should* prefer such a channel, but in your scheme would not.

This ignores the myriad of “at what threshold do you signal HA” issues, which 
likely make such a signal DOA, anyway.

Finally, I’m very dismayed at this direction in thinking on how ln should work 
- nodes should be measuring the network and routing over paths that it thinks 
are reliable for what it wants, *robustly over an unreliable network*. We 
should absolutely not be expecting the lightning network to be built out of 
high reliability nodes, that creates strong centralization pressure. To truly 
meet a “high availability” threshold, realistically, you’d need to be able to 
JIT 0conf splice-in, which would drive lightning to actually being a credit 
network.

With reasonable volume, lightning today is very reliable and relatively fast, 
with few retries required. I don’t think we need to change anything to fix it. 
:)

Matt

> On Feb 13, 2023, at 06:46, Joost Jager  wrote:
> 
> 
> Hi,
> 
> For a long time I've held the expectation that eventually payers on the 
> lightning network will become very strict about node performance. That they 
> will require a routing node to operate flawlessly or else apply a hefty 
> penalty such as completely avoiding the node for an extended period of time - 
> multiple weeks. The consequence of this is that routing nodes would need to 
> manage their liquidity meticulously because every failure potentially has a 
> large impact on future routing revenue.
> 
> I think movement in this direction is important to guarantee competitiveness 
> with centralised payment systems and their (at least theoretical) ability to 
> process a payment in the blink of an eye. A lightning wallet trying multiple 
> paths to find one that works doesn't help with this.
> 
> A common argument against strict penalisation is that it would lead to less 
> efficient use of capital. Routing nodes would need to maintain pools of 
> liquidity to guarantee successes all the time. My opinion on this is that 
> lightning is already enormously capital efficient at scale and that it is 
> worth sacrificing a slight part of that efficiency to also achieve the lowest 
> possible latency.
> 
> This brings me to the actual subject of this post. Assuming strict 
> penalisation is good, it may still not be ideal to flip the switch from one 
> day to the next. Routing nodes may not offer the required level of service 
> yet, causing senders to end up with no nodes to choose from.
> 
> One option is to gradually increase the strength of the penalties, so that 
> routing nodes are given time to adapt to the new standards. This does require 
> everyone to move along and leaves no space for cheap routing nodes with less 
> leeway in terms of liquidity.
> 
> Therefore I am proposing another way to go about it: extend the 
> `channel_update` field `channel_flags` with a new bit that the sender can use 
> to signal `highly_available`. 
> 
> It's then up to payers to decide how to interpret this flag. One way could be 
> to prefer `highly_available` channels during pathfinding. But if the routing 
> node then returns a failure, a much stronger than normal penalty will be 
> applied. For routing nodes this creates an opportunity to attract more 
> traffic by marking some channels as `highly_available`, but it also comes 
> with the responsibility to deliver.
> 
> Without shadow channels, it is impossible to guarantee liquidity up to the 
> channel capacity. It might make sense for senders to only assume high 
> availability for amounts up to `htlc_maximum_msat`.
> 
> A variation on this scheme that requires no extension of `channel_update` is 
> to signal availability implicitly through routing fees. So the more expensive 
> a channel is, the stronger the penalty that is applied on failure will be. It 
> seems less ideal though, because it could disincentivize cheap but reliable 
> channels on high traffic links.
> 
> The effort required to implement some form of a `highly_available` flag seems 
> limited, and it may help to get payment success rates up. Interested to hear 
> your thoughts.
> 
> Joost

[Lightning-dev] Highly Available Lightning Channels

2023-02-13 Thread Joost Jager
Hi,

For a long time I've held the expectation that eventually payers on the
lightning network will become very strict about node performance. That they
will require a routing node to operate flawlessly or else apply a hefty
penalty such as completely avoiding the node for an extended period of time
- multiple weeks. The consequence of this is that routing nodes would need
to manage their liquidity meticulously because every failure potentially
has a large impact on future routing revenue.

I think movement in this direction is important to guarantee
competitiveness with centralised payment systems and their (at least
theoretical) ability to process a payment in the blink of an eye. A
lightning wallet trying multiple paths to find one that works doesn't help
with this.

A common argument against strict penalisation is that it would lead to less
efficient use of capital. Routing nodes would need to maintain pools of
liquidity to guarantee successes all the time. My opinion on this is that
lightning is already enormously capital efficient at scale and that it is
worth sacrificing a slight part of that efficiency to also achieve the
lowest possible latency.

This brings me to the actual subject of this post. Assuming strict
penalisation is good, it may still not be ideal to flip the switch from one
day to the next. Routing nodes may not offer the required level of service
yet, causing senders to end up with no nodes to choose from.

One option is to gradually increase the strength of the penalties, so that
routing nodes are given time to adapt to the new standards. This does
require everyone to move along and leaves no space for cheap routing nodes
with less leeway in terms of liquidity.

Therefore I am proposing another way to go about it: extend the
`channel_update` field `channel_flags` with a new bit that the sender can
use to signal `highly_available`.
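
For illustration, the proposal amounts to claiming one more bit of BOLT 7's `channel_flags` byte, which already uses bit 0 for the direction and bit 1 for `disable`. The bit position used below is an arbitrary choice for the sketch, not a spec value:

```python
# Existing BOLT 7 channel_flags bits plus a hypothetical highly_available bit.
DIRECTION_BIT        = 1 << 0  # BOLT 7: which node this update is from
DISABLE_BIT          = 1 << 1  # BOLT 7: channel disabled in this direction
HIGHLY_AVAILABLE_BIT = 1 << 2  # hypothetical, position chosen for illustration

def make_channel_flags(direction, disabled=False, highly_available=False):
    flags = 0
    if direction:
        flags |= DIRECTION_BIT
    if disabled:
        flags |= DISABLE_BIT
    if highly_available:
        flags |= HIGHLY_AVAILABLE_BIT
    return flags

def is_highly_available(channel_flags):
    return bool(channel_flags & HIGHLY_AVAILABLE_BIT)
```

Because each direction of a channel has its own `channel_update`, such a flag would naturally be directional.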

It's then up to payers to decide how to interpret this flag. One way could
be to prefer `highly_available` channels during pathfinding. But if the
routing node then returns a failure, a much stronger than normal penalty
will be applied. For routing nodes this creates an opportunity to attract
more traffic by marking some channels as `highly_available`, but it also
comes with the responsibility to deliver.

Without shadow channels, it is impossible to guarantee liquidity up to the
channel capacity. It might make sense for senders to only assume high
availability for amounts up to `htlc_maximum_msat`.
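
A payer-side interpretation along the lines above might be sketched like this (the discount weight is invented; the only assumption taken from the text is that the HA promise is only assumed up to `htlc_maximum_msat`):

```python
# Invented weight: HA channels look half as "costly" to the route planner,
# so they are preferred during pathfinding.
HA_PREFERENCE_DISCOUNT = 0.5

def pathfinding_cost(base_cost, amount_msat, htlc_maximum_msat, is_ha):
    # Only trust the HA promise for amounts the channel actually advertises.
    if is_ha and amount_msat <= htlc_maximum_msat:
        return base_cost * HA_PREFERENCE_DISCOUNT
    return base_cost
```

The counterpart of the discount is the penalty side (not shown): a failure on a channel that received the discount would be penalized far more heavily than normal.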

A variation on this scheme that requires no extension of `channel_update`
is to signal availability implicitly through routing fees. So the more
expensive a channel is, the stronger the penalty that is applied on failure
will be. It seems less ideal though, because it could disincentivize cheap
but reliable channels on high traffic links.
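
The implicit variant could be sketched as a failure penalty that scales with the advertised fee, so that expensive channels implicitly promise reliability (constants invented for illustration):

```python
BASE_PENALTY_SECS = 600  # penalty for a failure on a 100 ppm channel (invented)

def failure_penalty_secs(fee_ppm):
    # A 1000 ppm channel is penalized 10x longer than a 100 ppm one.
    return BASE_PENALTY_SECS * max(fee_ppm, 1) / 100
```

This also makes the drawback visible: a cheap channel gets a tiny penalty even when it is genuinely unreliable, which is the disincentive for cheap-but-reliable channels mentioned above.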

The effort required to implement some form of a `highly_available` flag
seems limited, and it may help to get payment success rates up. Interested to
hear your thoughts.

Joost