I’m not sure I understand this - is there much reason to want taproot commitment outputs? I mean they’re cool, and witnesses are a bit smaller, which is nice I guess, but they’re not providing materially new features, AFAIU. Taproot funding, on the other hand, provides a Bitcoin-wide privacy improvement as well as the potential future ability of channel participants to use multisig for their own channel funds transparently.

Sure, if we’re doing taproot funding outputs we should probably just do it for the commitment outputs as well, because why not (and it’s a prereq for PTLCs). But trying to split them up seems like added complexity “just because”? I suppose it tees us up for eventual PTLC support in today’s channels, but we can also consider that separately when we get to that point, IMO.

Am I missing some important utility of taproot commitment transaction outputs?

Matt

On Oct 27, 2022, at 02:17, Johan Torås Halseth <joha...@gmail.com> wrote:


Hi, Laolu.

I think it could be worth considering dividing the taprootyness of a channel into two: 
1) taproot funding output 
2) taproot commitment outputs

That way we could upgrade existing channels at the commitment level only, without needing to close them or re-anchor them using an adapter in order to get many of the taproot benefits.

New channels would use taproot multisig (musig2) for the funding output.

This seems to be less disruptive to the existing network, and we could get features enabled by taproot to larger parts of the network more quickly. And to me this seems to carry less complexity (and closing fees) than an adapter.

One caveat is that this wouldn't work (I think) for Eltoo channels, as the funding output would not be plain multisig anymore.

- Johan

On Sat, Mar 26, 2022 at 1:27 AM Antoine Riard <antoine.ri...@gmail.com> wrote:
Hi Laolu,

Thanks for the proposal, quick feedback.

> It *is* still the case that _ultimately_ the two transactions to close the
> old segwit v0 funding output, and re-open the channel with a new segwit v1
> funding output are unavoidable. However this adapter commitment lets peers
> _defer_ these two transactions until closing time.

I think there is one downside coming with the adapter commitment, which is the uncertainty of the fee overhead at closing time. Instead of closing your segwit v0 channel _now_ with known fees, when your commitment is empty of time-sensitive HTLCs, you're taking the risk of closing during fee spikes, due to a move triggered by your counterparty, when you might have HTLCs at stake.

It might be more economically rational for an LN node operator to pay the upgrade cost now if they wish to benefit from the taproot upgrade early (especially if long-term we expect block fees to increase), or to wait for a "normal" cooperative close.

So it's unclear to me what the economic gain of adapter commitments is.

> In the remainder of this mail, I'll describe an alternative
> approach that would allow upgrading nearly all channel/commitment related
> values (dust limit, max in flight, etc), which is inspired by the way the
> Raft consensus protocol handles configuration/member changes.

Long-term, I think we'll likely need a consensus protocol anyway for multi-party constructions (channel factories/payment pools). AFAIU this proposal doesn't aim to roll out a full-fledged consensus protocol *now*, though it could be wise to ensure what we're building slowly moves in this direction. Less critical code to maintain across bitcoin codebases/toolchains.

> The role of the signature is to prevent "spoofing" by one of the parties
> (it authenticates the param change), and it also serves to convince a party
> that they actually sent a prior commitment propose update during the
> retransmission phase.

What's the purpose of data origin authentication if we assume only two parties running over Noise_XK?

I think it's already a security property we have. Though if we think we're going to reuse these dynamic upgrades for N counterparties communicating through a coordinator, yes I think it's useful.

> In the past, when ideas like this were brought up, some were concerned that
> it wouldn't really be possible to do this type of update while existing
> HTLCs were in flight (hence some of the ideas to clear out the commitment
> beforehand).

The dynamic upgrade might serve in an emergency context where we don't have the leisure to wait for the settlement of the pending HTLCs. The timing of those might be beyond the coordination of the link counterparties. Thus, we have to allow upgrades of non-empty commitments (and if there are undesirable interactions between new commitment types and in-flight HTLCs/PTLCs, deal with them case by case).

Antoine

On Thu, Mar 24, 2022 at 6:53 PM, Olaoluwa Osuntokun <laol...@gmail.com> wrote:
Hi y'all,

## Dynamic Commitments Retrospective

Two years-ish ago I made a mailing list post on some ideas re dynamic
commitments [1], and how the concept can be used to allow us to upgrade
channel types on the fly, and also remove pesky hard coded limits like the
483 HTLC in-flight limit that's present today. Back then my main target was
upgrading all the existing channels over to the anchor output commitment
variant, so the core internal routing network would be more resilient in a
persistent high fee environment (which hasn't really happened over the past
2 years for various reasons tbh). Fast forward to today, and with taproot
now active on mainnet, and some initial design work/sketches for
taproot-native channels underway, I figure it would be good to bump this
concept, as it gives us a way to upgrade all 80k+ public channels to taproot
without any on-chain transactions.

## Updating Across Witness Versions w/ Adapter Commitments

In my original mail, I incorrectly concluded that the dynamic commitments
concept would only really work within the confines of a "static" multi-sig
output, meaning that it couldn't be used to help channels upgrade to future
segwit witness versions. Thankfully, this reply [2] by ZmnSCPxj outlined a
way to achieve this in practice. At a high level, he proposes an "adapter
commitment" (similar to the kickoff transaction in eltoo/duplex), which is
basically an upgrade transaction that spends one witness version type, and
produces an output with the next (upgraded) type. In the context of
converting from segwit v0 to v1 (taproot), two peers would collaboratively
create a new adapter commitment that spends the old v0 multi-sig output, and
produces a _new_ v1 multi-sig output. The new commitment transaction would
then be anchored using this new output.

Here's a rough sequence diagram of the before and after state to better
convey the concept:

  * Before: fundingOutputV0 -> commitmentTransaction

  * After: fundingOutputV0 -> fundingOutputV1 (the adapter) ->
    commitmentTransaction

It *is* still the case that _ultimately_ the two transactions to close the
old segwit v0 funding output, and re-open the channel with a new segwit v1
funding output are unavoidable. However, this adapter commitment lets peers
_defer_ these two transactions until closing time. When force closing, two
transactions need to be confirmed before the commitment outputs can be
resolved. For a co-op close, however, you can just spend the v0 output and
deliver directly to the relevant P2TR outputs. The adapter commitment can
leverage sighash anyonecanpay to let both parties (assuming it's symmetric)
attach additional inputs for fees (avoiding the old update_fee-related
static fee issues), or alternatively inherit the anchor output pattern at
this level.
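
As a rough illustration of the adapter transaction itself (the helper below
is hypothetical, and fee inputs/anchor outputs are elided), it simply spends
the old v0 2-of-2 funding outpoint and creates a single v1 P2TR output for
the aggregated musig2 key, which the new commitment transaction then spends:

    package adapter

    import "github.com/btcsuite/btcd/wire"

    // buildAdapterTx is a hypothetical helper: it spends the old segwit v0
    // 2-of-2 funding outpoint and produces a single segwit v1 (P2TR) output
    // whose key would be the musig2 aggregate of both channel keys. The new
    // commitment transaction is then anchored to this output. Extra fee
    // inputs (via SIGHASH_ALL|ANYONECANPAY) or anchor outputs are omitted.
    func buildAdapterTx(v0FundingOutpoint wire.OutPoint, capacitySats int64,
        aggregatedTaprootKey [32]byte) *wire.MsgTx {

        tx := wire.NewMsgTx(2)

        // Spend the existing segwit v0 multi-sig funding output.
        tx.AddTxIn(wire.NewTxIn(&v0FundingOutpoint, nil, nil))

        // New segwit v1 funding output: OP_1 <32-byte x-only aggregate key>.
        v1PkScript := append([]byte{0x51, 0x20}, aggregatedTaprootKey[:]...)
        tx.AddTxOut(wire.NewTxOut(capacitySats, v1PkScript))

        return tx
    }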

## Existing Dynamic Commitments Proposals

Assuming this concept holds up, then we need an actual concrete protocol to
allow for dynamic commitment updates. Last year, Rusty made a spec PR
outlining a way to upgrade the commitment type (leveraging the new
commitment type feature bits) upon channel re-establish [3]. The proposal
relies on another message that both sides send (`stfu`) to clear the
commitment (similar to the shutdown semantics) before the switch over
happens. However, as this is tied to the channel re-establish flow, it
doesn't allow both sides to do things like only letting your peer attach N
HTLCs to start with, then slowly increasing their allotted slots and possibly
reducing them (TCP AIMD style).
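
To illustrate the TCP-AIMD idea (this is not part of Rusty's proposal or any
spec; the names below are hypothetical), a node could govern the HTLC slots
it grants a peer roughly like this:

    package aimd

    // htlcSlotController is a hypothetical TCP-AIMD-style controller for the
    // number of HTLC slots granted to a peer via dynamic commitment updates.
    type htlcSlotController struct {
        slots    uint16 // current max_accepted_htlcs offered to the peer
        maxSlots uint16 // protocol ceiling (483 today)
    }

    // onGoodInterval additively increases the allotted slots after a period
    // of good behavior.
    func (c *htlcSlotController) onGoodInterval() uint16 {
        if c.slots < c.maxSlots {
            c.slots++
        }
        return c.slots
    }

    // onBadEvent multiplicatively decreases the allotted slots after, e.g.,
    // a run of failed or long-held HTLCs.
    func (c *htlcSlotController) onBadEvent() uint16 {
        if c.slots > 1 {
            c.slots /= 2
        }
        return c.slots
    }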

## A Two-Phase Dynamic Commitment Update Protocol

IMO if we're adding in a way to do commitment/channel upgrades, then it may
be worthwhile to go with a more generalized, but slightly more involved
route instead. In the remainder of this mail, I'll describe an alternative
approach that would allow upgrading nearly all channel/commitment related
values (dust limit, max in flight, etc), which is inspired by the way the
Raft consensus protocol handles configuration/member changes.

For those that aren't aware, Raft is a consensus protocol analogous to Paxos
(but isn't Byzantine fault tolerant out of the box) that was designed as a
more understandable alternative for a pedagogical environment. Typically the
algorithm is run in the context of a fixed cluster with N machines, but it
supports adding/removing machines from the cluster with a configuration
update protocol. At a high level, a new config is sent to the leader, which
synchronizes the config change with the other members of the cluster. Once a
majority threshold is reached, the leader then commits the config change,
with the acknowledged parties using the new config (basically a two-phase
commit). I'm skipping over some edge cases here that can arise if new nodes
participate in consensus too early, which can cause a split majority leading
to two leaders being elected.

Applying this to the LN context is a bit simpler than a generalized
protocol, as we typically just have two parties involved. The initiator is
already naturally a "leader" in our context, as they're the only ones that
can do things like trigger fee updates.

### Message Structure

At a high level I propose we introduce two new messages, with the fields
looking something like this for `commitment_update_propose`:
 * type: 0 (`channel_id`)
   * value: [`32*byte`:`chan_id`]
 * type: 1 (`propose_sig`)
   * value: [`64*byte`:`sig`]
 * type: 2 (`update_payload`)
   * value: [`*byte`:`tlv_payload`]

and this `commitment_update_apply`:
 * type: 0 (`channel_id`)
   * value: [`32*byte`:`chan_id`]
 * type: 1 (`local_propose`)
   * value: [`*byte`:`commitment_update_propose`]
 * type: 2 (`remote_propose`)
   * value: [`*byte`:`commitment_update_propose`]
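
As a rough sketch of how these could look as in-memory structs (field names
are illustrative and the TLV encoding itself is elided):

    package dyncommit

    // CommitmentUpdatePropose is an illustrative representation of the
    // proposed commitment_update_propose message. The signature covers the
    // serialized TLV payload of the new channel parameters.
    type CommitmentUpdatePropose struct {
        ChanID        [32]byte // type 0: channel_id
        ProposeSig    [64]byte // type 1: signature over UpdatePayload
        UpdatePayload []byte   // type 2: opaque TLV blob of new params
    }

    // CommitmentUpdateApply is an illustrative representation of the
    // proposed commitment_update_apply message. It nests up to two
    // serialized propose messages: the initiator's and the responder's.
    type CommitmentUpdateApply struct {
        ChanID        [32]byte // type 0: channel_id
        LocalPropose  []byte   // type 1: serialized commitment_update_propose
        RemotePropose []byte   // type 2: serialized commitment_update_propose
    }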

### Protocol Flow

The core idea here is that either party can propose a commitment/channel
param update, but only the initiator can actually apply it. The
`commitment_update_propose` encodes the new set of updates, with a signature
covering the TLV blob for the new params (more on why that's needed later).
The `commitment_update_apply` includes up to _two_
`commitment_update_propose` messages (one for the initiator and one for the
responder, as nested TLV messages). The `commitment_update_propose` message
would be treated like any other `update_*` message, in that it takes a new
commitment signature to properly commit/apply it.

The normal flow takes the form of both sides sending a
`commitment_update_propose` message, with the initiator finally committing
both by sending a `commitment_update_apply` message. In the event that only
the responder wants to apply a param change/update, then the initiator can
reply immediately with a `commitment_update_apply` message that doesn't
include a param change for their commitment (or they just echo the
parameters if they're acceptable).
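
Building on the struct sketch above, the happy path from the initiator's
side might look roughly like this (the `peerConn` transport and `serialize`
helper are hypothetical stand-ins):

    package dyncommit

    // peerConn is a hypothetical transport abstraction for this sketch.
    type peerConn interface {
        send(msg interface{}) error
        recvPropose() (CommitmentUpdatePropose, error)
        signNextCommitment() error
    }

    // serialize stands in for the TLV encoding of a propose message.
    func serialize(p CommitmentUpdatePropose) []byte { return nil }

    // proposeAndApply sketches the happy path from the initiator's side:
    // both parties exchange commitment_update_propose, the initiator then
    // commits both proposals with a single commitment_update_apply, and a
    // fresh commitment signature locks in the new parameters.
    func proposeAndApply(p peerConn, local CommitmentUpdatePropose) error {
        // 1. Send our proposed parameter change.
        if err := p.send(local); err != nil {
            return err
        }

        // 2. Wait for the responder's propose (it may simply echo params it
        //    finds acceptable if it has no changes of its own).
        remote, err := p.recvPropose()
        if err != nil {
            return err
        }

        // 3. As the initiator, commit both proposals in one apply message.
        apply := CommitmentUpdateApply{
            ChanID:        local.ChanID,
            LocalPropose:  serialize(local),
            RemotePropose: serialize(remote),
        }
        if err := p.send(apply); err != nil {
            return err
        }

        // 4. Like any other update_* message, the change only takes effect
        //    once covered by a new commitment signature exchange.
        return p.signNextCommitment()
    }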

### Handling Retransmissions

The role of the signature is to prevent "spoofing" by one of the parties
(it authenticates the param change), and it also serves to convince a party
that they actually sent a prior commitment propose update during the
retransmission phase. As the `commitment_update_propose` message would be
retransmitted like any other message, if the initiator attempts to commit
the update but the connection dies, they'll retransmit it as normal along
with their latest signature.
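
For illustration, verifying a retransmitted propose could look like the
sketch below, reusing the propose struct from earlier. The choice of a
Schnorr signature over SHA256 of the TLV payload is purely an assumption for
the example; the real signing scheme and digest would be pinned down by the
spec:

    package dyncommit

    import (
        "crypto/sha256"

        "github.com/btcsuite/btcd/btcec/v2"
        "github.com/btcsuite/btcd/btcec/v2/schnorr"
    )

    // verifyPriorPropose checks that a retransmitted
    // commitment_update_propose was genuinely signed by the claimed sender,
    // which is what lets a node convince itself after a reconnect that a
    // given proposal was actually sent. The Schnorr-over-SHA256 scheme here
    // is an assumption made only for this sketch.
    func verifyPriorPropose(p CommitmentUpdatePropose,
        senderPubKey *btcec.PublicKey) bool {

        sig, err := schnorr.ParseSignature(p.ProposeSig[:])
        if err != nil {
            return false
        }
        digest := sha256.Sum256(p.UpdatePayload)
        return sig.Verify(digest[:], senderPubKey)
    }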

### Nested TLV Param Generality

The messages as sketched out here just have an opaque nested TLV field, which
makes them extensible to other things like tweaking the total number of max
HTLCs, the current dust values, min/max HTLC amounts, etc (all things that
are currently hard-coded for the lifetime of the channel). An initial target
would likely just be a `chan_type` field, with future feature bits governing
_what_ types of commitment updates both parties understand.
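
For concreteness, the opaque `update_payload` might carry records along
these lines (all field names and record semantics here are invented for
illustration):

    package dyncommit

    // updatePayload sketches the kind of records the opaque tlv_payload
    // could carry. All field names and TLV types here are hypothetical; an
    // initial deployment might only carry ChanType.
    type updatePayload struct {
        ChanType         uint64 // new channel/commitment type (feature-bit gated)
        MaxAcceptedHTLCs uint16 // replaces the hard-coded 483 in-flight limit
        DustLimitSat     uint64 // commitment dust limit, in satoshis
        MaxHTLCValueMsat uint64 // max HTLC value in flight, in millisatoshis
        HTLCMinimumMsat  uint64 // minimum HTLC value, in millisatoshis
    }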

In the past, when ideas like this were brought up, some were concerned that
it wouldn't really be possible to do this type of update while existing
HTLCs were in flight (hence some of the ideas to clear out the commitment
beforehand). I don't see a reason why this fundamentally _shouldn't_ be
allowed, as from the point of view of the channel update state machine, all
updates (adds/removes) get applied as normal, but with this _new_ commitment
type/params. The main edge case we'll need to consider is cases where the
new params make older HTLCs invalid for some reason.

## Conclusion

Using the adapter commitment idea combined with a protocol for updating
commitments on the fly would potentially allow us to upgrade all 80k+ segwit
v0 channels to the base level of taprooty channels without any on-chain
transactions. The two transactions (open+close) must happen eventually, but
by holding another layer of spends off-chain we can defer them (potentially
indefinitely, as we have channels today that have been open for over a
year).

Deploying a generalised on-the-fly dynamic commitment update protocol gives
us a tool to future-proof the _existing_ anchored multi-sig outputs in the
chain, and also a way to remove many of the hard-coded parameters we have
today in the protocol. One overly inflexible parameter we have in the network
today is the 483 HTLC limit. Allowing this value to float would let peers
apply congestion avoidance algorithms similar to those used in TCP today, and
also give us a way to protect the network against future unforeseen
widespread policy changes (like a raising of the dust limit).

-- Laolu

[1]: https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002763.html
[2]: https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002770.html
[3]: https://github.com/lightning/bolts/pull/868
_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev