Re: [Lightning-dev] Achieving Zero Downtime Splicing in Practice via Chain Signals

2022-06-28 Thread Rusty Russell
Hi Roasbeef,

This is over-design: if you fail to get reliable gossip, your routing
will suffer anyway.  Nothing new here.

And if you *know* you're missing gossip, you can simply delay acting on
onchain closes for longer, since nodes should respect the old channel ids
for a while anyway.

Matt's proposal to simply defer acting on onchain closes is elegant and
minimal.  We could go further and relax the requirement to detect onchain
closes at all, and optionally add an explicit permanent-close message.
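
A minimal sketch of that deferral in Python (all names and the 12-block
grace period are illustrative assumptions, not any implementation's API):
on seeing the funding output spent, a node only marks the channel as
pending-close, and drops it from routing some blocks later unless a splice
announcement re-links it first.

    # Illustrative sketch only; constants and names are assumptions.
    CLOSE_GRACE_BLOCKS = 12  # e.g. the ~2 hour window from Lisa's PR

    class ChannelGraph:
        def __init__(self):
            self.channels = {}        # scid -> channel data
            self.pending_closes = {}  # scid -> height the close was seen

        def on_funding_spent(self, scid, height):
            # Don't prune immediately; start a grace period instead.
            if scid in self.channels:
                self.pending_closes[scid] = height

        def on_splice_signal(self, old_scid):
            # Splice gossip (or a chain signal) arrived in time: keep
            # routing over the old id until the new channel is announced.
            self.pending_closes.pop(old_scid, None)

        def on_new_block(self, height):
            for scid, seen in list(self.pending_closes.items()):
                if height - seen >= CLOSE_GRACE_BLOCKS:
                    self.channels.pop(scid, None)
                    del self.pending_closes[scid]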

Cheers,
Rusty.

Olaoluwa Osuntokun  writes:
> Hi y'all,
>
> This mail was inspired by this [1] spec PR from Lisa. At a high level, it
> proposes that nodes add a delay between the time they see a channel closed on
> chain and when they remove it from their local channel graph. The motive
> here is to give the gossip message that indicates a splice is in progress
> "enough" time to propagate through the network. If a node sees this
> message before/during the splicing operation, then it'll be able to relate
> the old and the new channels, meaning the channel is usable again by
> senders/receivers _before_ the entire chain of transactions confirms on chain.
>
> IMO, this sort of arbitrary delay (expressed in blocks) won't actually
> address the issue in practice. The proposal suffers from the following
> issues:
>
>   1. 12 blocks is chosen arbitrarily. If for w/e reason an announcement
>   takes longer than 2 hours to reach the "economic majority" of
>   senders/receivers, then the channel won't be able to mask the splicing
>   downtime.
>
>   2. Gossip propagation delay and offline peers. These days most nodes
>   throttle gossip pretty aggressively. As a result, a pair of nodes doing
>   several in-flight splices (inputs become double spent or something, so
>   they need to try a bunch) might end up being rate limited within the
>   network, causing the splice update msg to be lost or delayed significantly
>   (IIRC CLN resets these values after 24 hours). On top of that, if a peer
>   is offline for too long (think mobile senders), then they may miss the
>   update altogether, as most nodes don't do a full historical
>   _channel_update_ dump anymore.
>
> In order to resolve these issues, I think instead we need to rely on the
> primary splicing signal being sourced from the chain itself. In other words,
> if I see a channel close, and a closing transaction "looks" a certain way,
> then I know it's a splice. This would be used in concert w/ any new gossip
> messages, as the chain signal is a 100% foolproof way of letting an aware
> peer know that a splice is actually happening (not a normal close). A chain
> signal doesn't suffer from any of the gossip/time related issues above, as
> the signal is revealed at the same time a peer learns of a channel
> close/splice.
>
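
As a rough illustration of such a chain signal check (the concrete test
sketched here corresponds to option 3 below, re-using the same multi-sig
output; the transaction types and helper names are hypothetical, not an
existing API):

    # Illustrative sketch: classify a spend of a known funding output as a
    # splice if it re-creates an output locked to the same 2-of-2 funding
    # script (option 3 below). Types and names are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TxOut:
        script_pubkey: bytes
        value_sats: int

    @dataclass
    class Tx:
        outputs: List[TxOut] = field(default_factory=list)

    def classify_funding_spend(spending_tx: Tx, funding_script: bytes) -> str:
        """Nodes already know funding_script from the channel_announcement,
        so this signal arrives exactly when the close itself is seen."""
        reuses_funding_script = any(
            out.script_pubkey == funding_script for out in spending_tx.outputs)
        return "splice" if reuses_funding_script else "close"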
> Assuming we agree that a chain signal has some sort of role in the ultimate
> plans for splicing, we'd need to decide on exactly _what_ such a signal
> looks like. Off the top, a few options are:
>
>   1. Stuff something in the annex. Works in theory, but not in practice, as
>   bitcoind (being the dominant full node implementation on the p2p network,
>   as well as what all the miners use) treats annexes as non-standard. Also
>   the annex itself might have some fundamental issues that get in the way of
>   its use altogether [2].
>
>   2. Re-use the anchors for this purpose. Anchors are nice as they allow for
>   1st/2nd/3rd party CPFP. As a splice might have several inputs and outputs,
>   both sides will want to make sure it gets confirmed in a timely manner.
>   Ofc, RBF can be used here, but that requires both sides to be online to
>   make adjustments. Pre-signing can work too, but the effectiveness
>   (minimizing chain cost while expediting confirmation) would be dependent
>   on the fee step size.
>
>   In this case, we'd use a different multi-sig output (both sides can rotate
>   keys if they want to), and then roll the anchors into this splicing
>   transaction. Given that all nodes on the network know what the anchor size
>   is (assuming feature bit understanding), they're able to realize that it's
>   actually a splice, and they don't need to remove it from the channel graph
>   (yet).
>
>   3. Related to the above: just re-use the same multi-sig output. If nodes
>   don't care all that much about rotating these keys, then they can just use
>   the same output. This is trivially recognizable by nodes, as they already
>   know the funding keys used, as they're in the channel_announcement.
>
>   4. OP_RETURN (yeh, I had to list it). Self explanatory, push some bytes in
>   an OP_RETURN and use that as the marker.
>
>   5. Fiddle w/ the locktime+sequence somehow to make it identifiable to
>   verifiers. This might run into some unintended interactions if the inputs
>   provided have either relative or absolute lock times. There might also be
>   some interaction w/ the main construction for eltoo (which uses the locktime).
>
> Of all the options, I think #2 makes the 

Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-28 Thread Peter Todd
On Tue, Jun 28, 2022 at 11:31:54AM -0400, Matt Corallo wrote:
> On 6/28/22 9:05 AM, Christian Decker wrote:
> > It is worth mentioning here that the LN protocol is generally not very
> > latency sensitive, and from my experience can easily handle very slow
> > signers (3-5 seconds delay) without causing too many issues, aside from
> > slower forwards in case we are talking about a routing node. I'd expect
> > routing node signers to be well below the 1 second mark, even when
> > implementing more complex signer logic, including MuSig2 or nested
> > FROST.
> 
> In general, and especially for "edge nodes", yes, but if forwarding nodes
> start taking a full second to forward a payment, we probably need to start
> aggressively avoiding any such nodes - while I'd love for all forwarding
> nodes to take 30 seconds to forward to improve privacy, users ideally expect
> payments to complete in 100ms, with multiple payment retries in between.

Idle question: would it be worthwhile to allow people to opt in to their
payments happening more slowly for privacy? At the very least it'd be fine if
payments done by automation for rebalancing, etc. happened slowly.
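
A toy illustration of that opt-in (nothing like this exists today; the flag
and numbers are made up): payments that are not latency sensitive, such as
automated rebalancing, could tolerate a small random hold before each relay.

    # Toy sketch only: an opt-in random delay before relaying, to trade
    # latency for timing privacy. Not a real option in any implementation.
    import random
    import time

    def forward_htlc(relay_fn, htlc, privacy_delay=False, max_delay_s=2.0):
        if privacy_delay:
            time.sleep(random.uniform(0.0, max_delay_s))  # decorrelate timing
        relay_fn(htlc)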

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-28 Thread Matt Corallo
On 6/28/22 9:05 AM, Christian Decker wrote:

> It is worth mentioning here that the LN protocol is generally not very
> latency sensitive, and from my experience can easily handle very slow
> signers (3-5 seconds delay) without causing too many issues, aside from
> slower forwards in case we are talking about a routing node. I'd expect
> routing node signers to be well below the 1 second mark, even when
> implementing more complex signer logic, including MuSig2 or nested
> FROST.


In general, and especially for "edge nodes", yes, but if forwarding nodes start taking a full second 
to forward a payment, we probably need to start aggressively avoiding any such nodes - while I'd 
love for all forwarding nodes to take 30 seconds to forward to improve privacy, users ideally expect 
payments to complete in 100ms, with multiple payment retries in between.


That obviously isn't ever going to happen in lightning, but getting 95th-percentile 
payments down to one second is probably a good goal, something that requires never having to retry 
payments and also having forwarding nodes take no more than, say, 150ms.


Of course I don't think we should ever introduce a timeout on the peer level: if your peer went 
away for a second and isn't responding quickly to channel updates, that doesn't merit closing the 
channel. But it's something we will eventually want to handle in route selection if it becomes more 
of an issue going forward.
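
A minimal sketch of handling this in route selection (the latency budget and
the conversion factor are made-up assumptions, not any implementation's
defaults): fold observed per-node forwarding latency into the pathfinding
cost so chronically slow forwarders get avoided without closing channels on
them.

    # Illustrative only: latency-aware edge weight for pathfinding.
    LATENCY_BUDGET_MS = 150.0   # per-hop target mentioned above
    MSAT_PER_EXCESS_MS = 10.0   # hypothetical fee-equivalent penalty rate

    def hop_cost(fee_msat, observed_latency_ms):
        """Routing fee plus a penalty for latency beyond the budget."""
        excess = max(0.0, observed_latency_ms - LATENCY_BUDGET_MS)
        return fee_msat + excess * MSAT_PER_EXCESS_MS

    # A node observed at 1000 ms costs an extra (1000 - 150) * 10 = 8500
    # msat-equivalent, so it loses ties against faster peers.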


Matt


Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-28 Thread Christian Decker
Olaoluwa Osuntokun  writes:
>> Rene Pickhardt brought up the issue of latency with regards to
>> nested/recursive MuSig2 (or nested FROST for threshold) on Bitcoin
>> StackExchange
>
> Not explicitly, but that strikes me as more of an implementation-level
> concern. As an example, today more nodes are starting to use replicated
> database backends instead of a local embedded database. Using such a
> database means that _network latency_ is now also a factor, as committing
> new states requires round trips to the DBMS, which will increase the
> perceived latency of payments in practice. The benefit ofc is better support
> for backups/replication.
>
> I think in the multi-signature setting for LN, system designers will also
> need to factor in the added latency due to adding more signers into the mix.
> Also any system that starts to break up the logical portions of a node
> (signing, hosting, etc -- like Blockstream's Greenlight project), will need
> to wrangle with this as well (such is the nature of distributed systems).

It is worth mentioning here that the LN protocol is generally not very
latency sensitive, and from my experience can easily handle very slow
signers (3-5 seconds delay) without causing too many issues, aside from
slower forwards in case we are talking about a routing node. I'd expect
routing node signers to be well below the 1 second mark, even when
implementing more complex signer logic, including MuSig2 or nested
FROST.

In particular remember that the LN protocol implements a batch
mechanism, with changes applied to the commitment transaction as a
batch. Not every change requires a commitment and thus a signature. This
means that while a slow signer may have an impact on payment latency, it
should generally not have an impact on throughput on the routing nodes.
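
A minimal sketch of that batching behaviour (names are hypothetical, not any
implementation's actual API): queued changes are all covered by a single
commitment signature, so a slow signer bounds commitment rounds per second
rather than HTLCs per second.

    # Illustrative sketch of batched commitment signing.
    class ChannelBatcher:
        def __init__(self, sign_fn):
            self.sign_fn = sign_fn  # e.g. a remote signer taking 3-5 s
            self.pending = []       # queued update_add_htlc / fulfill / fail

        def queue_update(self, update):
            self.pending.append(update)

        def flush(self):
            """One commitment_signed covers every queued change."""
            if not self.pending:
                return None
            batch, self.pending = self.pending, []
            return self.sign_fn(batch)

    # With a 3 s signer and 300 updates queued between flushes, throughput
    # is still ~100 changes/s even though each payment sees the 3 s latency.
    batcher = ChannelBatcher(lambda batch: f"sig over {len(batch)} updates")
    for i in range(300):
        batcher.queue_update(f"update_add_htlc {i}")
    print(batcher.flush())  # -> "sig over 300 updates"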

Regards,
Christian


Re: [Lightning-dev] Three Strategies for Lightning Forwarding Nodes

2022-06-28 Thread Michael Folkson via Lightning-dev
Hey ZmnSCPxj

It is an interesting topic. Alex Bosworth did a presentation at the Lightning 
Hack Day last year with a similar attempt at categorizing the different 
strategies for a routing/forwarding node (Ping Pong, Liquidity Battery, Inbound 
Sourcing, Liquidity Trader, Last Mile, Swap etc)

https://btctranscripts.com/lightning-hack-day/2021-03-27-alex-bosworth-lightning-routing/

It seems like your attempt is a little more granular and unstructured (based on 
individual responses), but perhaps it fits into the broad categories Alex 
suggested, maybe with some additional ones?

Thanks
Michael

--
Michael Folkson
Email: michaelfolkson at protonmail.com
Keybase: michaelfolkson
PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3


--- Original Message ---
On Tuesday, June 28th, 2022 at 03:34, ZmnSCPxj via Lightning-dev 
 wrote:


> Good morning list,
>
> This is a short (relative to my typical crap) writeup on some strategies that 
> Lightning forwarding nodes might utilize.
>
> I have been thinking of various strategies that actual node operators (as I 
> understood from discussing with a few of them) use:
>
> * Passive rebalance / feerate by balance (see the sketch after this list)
>   * Set feerates according to balance: increase feerates when our side has low 
>     balance, reduce feerates when our side has high balance.
>   * "passive rebalance" because we are basically encouraging payments via our 
>     channel if the balance is in our favor, and discouraging payments if the 
>     balance is against us, thus typical payments will "normally" rebalance our 
>     node naturally without us spending anything.
> * Low fee
>   * Just fix the fee to a low fee, e.g. base 1 proportional 1 or even the 
>     @zerofeerouting guy of base 0 proportional 0.
>   * Ridiculously simple, no active management, no scripts, no nothing.
> * Wall
>   * Set to a constant (or mostly constant) high feerate.
>   * Actively rebalance, targeting low-fee routes (i.e. less than our earnings), 
>     and constantly probe the network for the rare low-fee routes that we can use 
>     to rebalance.
>   * Basically, buy cheap liquidity and resell it at higher prices.
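
A minimal sketch of the first strategy, "feerate by balance" (the curve and
the constants here are made-up examples, not anyone's real policy):

    # Illustrative: charge more when our local balance is scarce, less when
    # it is plentiful, so ordinary payments rebalance the channel for us.
    def proportional_ppm(local_msat, capacity_msat, min_ppm=10, max_ppm=2000):
        """Feerate (ppm) as a decreasing function of our share of the channel."""
        share = local_msat / capacity_msat   # 0.0 = drained, 1.0 = all on our side
        return round(max_ppm - share * (max_ppm - min_ppm))

    assert proportional_ppm(0, 1_000_000) == 2000          # drained: expensive
    assert proportional_ppm(1_000_000, 1_000_000) == 10    # full: cheap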
>
>
> The interesting thing is how the three interact.
>
> Suppose we have a mixed network composed ONLY of passive rebalancers and 
> walls.
> In that case, the passive rebalancers might occasionally set channels to low 
> fees, in which case the walls buy up their liquidity, but eventually the 
> liquidity of the passive rebalancer is purchased and the passive rebalancer 
> raises their price point.
> The network then settles with every forwarding node having roughly equal 
> balance on their channels, but note that it was the walls who paid to the 
> passive rebalancers to get the channels into a nice balance.
> In particular, if there were only a single wall node, it can stop rebalancing 
> once the price to rebalance exceeds 49% of its earnings, so it pays at most 
> 49% of its earnings to the passive rebalancers and keeps 51% of its earnings, 
> thus earning more than the passive rebalancers do.
> However, once multiple wall nodes exist, they will start bidding for the 
> available liquidity from the passive rebalancers and they may find it 
> difficult to compete once the passive rebalancers set their feerates to more 
> than 50% of the wall feerate, at which point the passive rebalancers now end 
> up earning more than the wall nodes (because the wall nodes now pay more to 
> the passive rebalancers than what they keep).
>
> Thus, it seems to me that passive rebalancers would outcompete wall 
> strategies, if they were the only strategies on the network.
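
A tiny worked example of that break-even (numbers are purely illustrative):

    # With a single wall, the wall keeps its feerate minus what it pays the
    # passive rebalancers; break-even sits at 50% of the wall's feerate.
    def wall_keeps(wall_ppm, rebalance_cost_ppm):
        # Earnings the wall retains per unit of liquidity it resells.
        return wall_ppm - rebalance_cost_ppm

    wall_ppm = 1000.0
    print(wall_keeps(wall_ppm, 490.0))  # 510.0: wall out-earns the rebalancers
    print(wall_keeps(wall_ppm, 510.0))  # 490.0: rebalancers now out-earn the wall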
>
> However, the network as-is contains a LOT of tiny nodes with low feerates.
>
> In such an environment, walls can pick up liquidity for really, really cheap, 
> leaving the low-feerate nodes with no liquidity in the correct direction.
> And thus, it seems plausible that they can resell the liquidity later at much 
> higher feerates, possibly outcompeting the passive rebalancers.
>
> Unfortunately:
>
> * Low-feerate nodes are notoriously unreliable for payments; their channels 
> are usually saturated on one side or the other, since walls keep taking their 
> liquidity.
> * Because of this known unreliability, some payer strategies filter them out 
> via some heuristics (e.g. payment unreliability information).
> Thus, even in the rare case where payment flows change on the network, they 
> are not used by payers --- instead, walls exploit them since walls do not 
> care if rebalancing fails, they will always just retry later.
> * One argument FOR using low-feerate nodes is that it "supports the network".
>   * However, it seems to me that the low-feerate nodes are actually being 
>     exploited by the wall nodes instead, and the low-feerate nodes have too 
>     little payment reliability to actually support payers instead of large-scale 
>     forwarders.
> * Both low-feerate nodes and walls do not leak their channel balances, whereas 
>   passive rebalancers do leak their channel balance.