Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-30 Thread Matt Corallo


> On Jun 28, 2022, at 19:11, Peter Todd  wrote:
> 
> Idle question: would it be worthwhile to allow people to opt-in to their
> payments happening more slowly for privacy? At the very least it'd be fine if
> payments done by automation for rebalancing, etc. happened slowly.

Yea, actually, I think that’d be a really cool idea. Obviously you don’t want 
to hold onto an HTLC for much longer than you have to or you’re DoS’ing 
yourself by tying up channel capacity, but most channels spend the vast 
majority of their time with zero HTLCs, so waiting a second instead of 100ms 
to batch seems totally reasonable.
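A minimal sketch of such a batching timer (names are hypothetical; a real node would hang this off its commitment_signed scheduling rather than a standalone class):

```python
import threading

class BatchedForwarder:
    """Coalesce incoming HTLCs and sign them as one commitment update.

    A hypothetical sketch of the idea above: flush_interval ~0.1s for a
    latency-sensitive router, ~1s for the privacy/batching mode. The
    sign_and_send callback stands in for building and sending a real
    commitment_signed message.
    """

    def __init__(self, flush_interval=1.0, sign_and_send=print):
        self.flush_interval = flush_interval
        self.sign_and_send = sign_and_send
        self.pending = []
        self.lock = threading.Lock()
        self.timer = None

    def add_htlc(self, htlc):
        # The first HTLC in an empty batch arms the flush timer; later HTLCs
        # arriving within the window ride along in the same commitment.
        with self.lock:
            self.pending.append(htlc)
            if self.timer is None:
                self.timer = threading.Timer(self.flush_interval, self._flush)
                self.timer.start()

    def _flush(self):
        with self.lock:
            batch, self.pending = self.pending, []
            if self.timer is not None:
                self.timer.cancel()
                self.timer = None
        if batch:
            self.sign_and_send(batch)  # one signature covers the whole batch
```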
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-30 Thread Christian Decker
Matt Corallo  writes:

> On 6/28/22 9:05 AM, Christian Decker wrote:
>> It is worth mentioning here that the LN protocol is generally not very
>> latency sensitive, and from my experience can easily handle very slow
>> signers (3-5 seconds delay) without causing too many issues, aside from
>> slower forwards in case we are talking about a routing node. I'd expect
>> routing node signers to be well below the 1 second mark, even when
>> implementing more complex signer logic, including MuSig2 or nested
>> FROST.
>
> In general, and especially for "edge nodes", yes, but if forwarding nodes 
> start taking a full second 
> to forward a payment, we probably need to start aggressively avoiding any 
> such nodes - while I'd 
> love for all forwarding nodes to take 30 seconds to forward to improve 
> privacy, users ideally expect 
> payments to complete in 100ms, with multiple payment retries in between.
>
> This obviously probably isn't ever going to happen in lightning, but getting 
> 95th percentile 
> payments down to one second is probably a good goal, something that requires 
> never having to retry 
> payments and also having forwarding nodes not take more than, say, 150ms.
>
> Of course I don't think we should ever introduce a timeout on the peer level 
> - if your peer went 
> away for a second and isn't responding quickly to channel updates it doesn't 
> merit closing a 
> channel, but it's something we will eventually want to handle in route 
> selection if it becomes more 
> of an issue going forward.
>
> Matt

Absolutely agreed, and I wasn't trying to say that latency is not a
concern; I was merely pointing out that the protocol as-is is very
latency-tolerant. That doesn't mean that routers shouldn't strive to be
as fast as possible, but I think the MuSig schemes, executed over local
links, are unlikely to be problematic when compared to the overall
network latency that we have anyway.

For edge nodes it's rather nice to have relaxed timings, given that they
might be on slow or flaky connections, but routers are a completely
different category.

Christian


Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-28 Thread Peter Todd
On Tue, Jun 28, 2022 at 11:31:54AM -0400, Matt Corallo wrote:
> On 6/28/22 9:05 AM, Christian Decker wrote:
> > It is worth mentioning here that the LN protocol is generally not very
> > latency sensitive, and from my experience can easily handle very slow
> > signers (3-5 seconds delay) without causing too many issues, aside from
> > slower forwards in case we are talking about a routing node. I'd expect
> > routing node signers to be well below the 1 second mark, even when
> > implementing more complex signer logic, including MuSig2 or nested
> > FROST.
> 
> In general, and especially for "edge nodes", yes, but if forwarding nodes
> start taking a full second to forward a payment, we probably need to start
> aggressively avoiding any such nodes - while I'd love for all forwarding
> nodes to take 30 seconds to forward to improve privacy, users ideally expect
> payments to complete in 100ms, with multiple payment retries in between.

Idle question: would it be worthwhile to allow people to opt-in to their
payments happening more slowly for privacy? At the very least it'd be fine if
payments done by automation for rebalancing, etc. happened slowly.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-28 Thread Matt Corallo




On 6/28/22 9:05 AM, Christian Decker wrote:

It is worth mentioning here that the LN protocol is generally not very
latency sensitive, and from my experience can easily handle very slow
signers (3-5 seconds delay) without causing too many issues, aside from
slower forwards in case we are talking about a routing node. I'd expect
routing node signers to be well below the 1 second mark, even when
implementing more complex signer logic, including MuSig2 or nested
FROST.


In general, and especially for "edge nodes", yes, but if forwarding nodes start taking a full second 
to forward a payment, we probably need to start aggressively avoiding any such nodes - while I'd 
love for all forwarding nodes to take 30 seconds to forward to improve privacy, users ideally expect 
payments to complete in 100ms, with multiple payment retries in between.


This obviously probably isn't ever going to happen in lightning, but getting 95th percentile 
payments down to one second is probably a good goal, something that requires never having to retry 
payments and also having forwarding nodes not take more than, say, 150ms.
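As rough arithmetic on that budget (the hop count and timings here are illustrative, not figures from the discussion):

```python
def end_to_end_ms(hops, per_hop_ms, retries=0, retry_cost_ms=0):
    """Rough p95 latency model: a no-retry payment is just the sum of the
    per-hop forwarding delays; each retry re-pays a large chunk of that."""
    return hops * per_hop_ms + retries * retry_cost_ms

# Five forwarding hops at 150 ms each stays within a one-second target...
assert end_to_end_ms(5, 150) == 750
# ...but a single retry blows the budget, hence "never having to retry".
assert end_to_end_ms(5, 150, retries=1, retry_cost_ms=750) == 1500
```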


Of course I don't think we should ever introduce a timeout on the peer level - if your peer went 
away for a second and isn't responding quickly to channel updates it doesn't merit closing a 
channel, but it's something we will eventually want to handle in route selection if it becomes more 
of an issue going forward.


Matt


Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-28 Thread Christian Decker
Olaoluwa Osuntokun  writes:
>> Rene Pickhardt brought up the issue of latency with regards to
>> nested/recursive MuSig2 (or nested FROST for threshold) on Bitcoin
>> StackExchange
>
> Not explicitly, but that strikes me as more of an implementation-level
> concern. As an example, today more nodes are starting to use replicated
> database backends instead of a local embedded database. Using such a
> database means that _network latency_ is now also a factor, as committing
> new states requires round trips to the DBMS that'll increase the
> perceived latency of payments in practice. The benefit ofc is better support
> for backups/replication.
>
> I think in the multi-signature setting for LN, system designers will also
> need to factor in the added latency due to adding more signers into the mix.
> Also any system that starts to break up the logical portions of a node
> (signing, hosting, etc -- like Blockstream's Greenlight project), will need
> to wrangle with this as well (such is the nature of distributed systems).

It is worth mentioning here that the LN protocol is generally not very
latency sensitive, and from my experience can easily handle very slow
signers (3-5 seconds delay) without causing too many issues, aside from
slower forwards in case we are talking about a routing node. I'd expect
routing node signers to be well below the 1 second mark, even when
implementing more complex signer logic, including MuSig2 or nested
FROST.

In particular remember that the LN protocol implements a batch
mechanism, with changes applied to the commitment transaction as a
batch. Not every change requires a commitment and thus a signature. This
means that while a slow signer may have an impact on payment latency, it
should generally not have an impact on throughput on the routing nodes.
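A toy model of that latency-vs-throughput distinction (numbers purely illustrative):

```python
def forwards_per_second(signer_delay_s, htlcs_per_commitment):
    """Throughput of a routing node whose changes are batched per commitment:
    a slow signer stretches each individual payment's latency, but batching
    many HTLC changes under one signature keeps the forwarding rate up."""
    return htlcs_per_commitment / signer_delay_s

# A 3-second signer batching 30 HTLC changes per commitment roughly matches
# the throughput of a 100 ms signer signing for each HTLC individually.
assert abs(forwards_per_second(3.0, 30) - 10.0) < 1e-9
assert abs(forwards_per_second(0.1, 1) - 10.0) < 1e-9
```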

Regards,
Christian


Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-27 Thread Michael Folkson via Lightning-dev
Thanks for the summary Laolu, very informative.

> One other cool topic that came up is the concept of leveraging recursive 
> musig2 (so musig2 within musig2) to make channels even _more_ multi-sigy.

A minor point but terminology can get frustratingly sticky if it isn't agreed 
on early. Can we refer to it as nested MuSig2 going forward rather than 
recursive MuSig2? It is a more accurate description in my opinion, and going 
through some old transcripts the MuSig2 authors [0] also refer to it as nested 
MuSig2 (as far as I can make out).

Rene Pickhardt brought up the issue of latency with regards to nested/recursive 
MuSig2 (or nested FROST for threshold) on Bitcoin StackExchange [1]. Was this 
discussed at the LN Summit? I don't know how all the Lightning implementations 
treat latency currently (how long a channel counterparty has to provide a 
needed signature before moving to an unhappy path), but Rene's concern is that 
delays in the regular completions of a nested MuSig2 or FROST scheme could make 
them unviable for the Lightning channel use case, depending on the exact setup 
and physical location of signers etc.

MuSig2 obviously generates an aggregated Schnorr signature, and so even nested 
MuSig2 requires the Lightning protocol to recognize and verify Schnorr 
signatures, which it currently doesn't, right? So is the current thinking that 
Schnorr signatures will be supported first with a Schnorr 2-of-2 on the funding 
output (using OP_CHECKSIGADD and enabling the nested schemes) before 
potentially supporting non-nested MuSig2 between the channel counterparties on 
the funding output later? Or is this still in the process of being discussed?

[0]: 
https://btctranscripts.com/london-bitcoin-devs/2020-06-17-tim-ruffing-schnorr-multisig/
[1]: 
https://bitcoin.stackexchange.com/questions/114159/how-do-the-various-lightning-implementations-treat-latency-how-long-do-they-wai

--
Michael Folkson
Email: michaelfolkson at protonmail.com
Keybase: michaelfolkson
PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3

--- Original Message ---
On Wednesday, June 8th, 2022 at 03:38, Olaoluwa Osuntokun  
wrote:

> Hi y'all,
>
> Last week nearly 30 (!) Lightning developers and researchers gathered in
> Oakland, California for three days to discuss a number of matters related to
> the current state and evolution of the protocol. This time around, we had
> much better representation for all the major Lightning Node implementations
> compared to the last LN Dev Summit (Zurich, Oct 2021).
>
> Similar to the prior LN Dev Summit, notes were kept throughout the day that
> attempted on a best effort basis to capture the relevant discussions,
> decisions, and new relevant research or follow up areas to circle back on.
> Last time around, I sent out an email that summarized some key takeaways
> (from my PoV) of the last multi-day dev summit [1]. What follows in this
> email is a similar summary/recap of the three day summit. Just like last
> time: if you attended and felt I missed out on a key point, or inadvertently
> misrepresented a statement/idea, please feel free to reply, correcting or
> adding additional detail.
>
> The meeting notes in full can be found here:
> https://docs.google.com/document/d/1KHocBjlvg-XOFH5oG_HwWdvNBIvQgxwAok3ZQ6bnCW0/edit?usp=sharing
>
> # Simple Taproot Channels
>
> During the last summit, Taproot was a major discussion topic as, though the
> soft fork had been deployed, we were all still watching the blocks stack up
> on the road to ultimate activation. Fast forward several months later and
> Taproot has now been fully activated, with the ecosystem starting to
> progressively deploy more and more advanced systems/applications that take
> advantage of the new features.
>
> One key deployment model that came out of the last LN Dev summit was the
> concept of an iterative roadmap that progressively revamped the system to
> use more taprooty features, instead of a "big bang" approach that would
> attempt to package up as many things as possible into one larger update. At
> a high level the iterative roadmap proposed that we unroll an existing
> larger proposal [2] into more bite sized pieces that can be incrementally
> reviewed, implemented, and ultimately deployed (see my post on the LN Dev
> Summit 2021 for more details).
>
> ## Extension BOLTs
>
> Riiight before we started on the first day, I wrote up a minimal proposal
> that attempted to tackle the first two items of the Taproot iterative
> deployment schedule (musig2 funding outputs and simple tapscript mapping)
> [3]. I called the proposal "Simple Taproot Channels" as it set out to do a
> mechanical mapping of the current commitment and script structure to a more
> taprooty domain. Rather than edit 4 or 5 different BOLTs with a series of
> "if this feature bit applies" nested clauses, I instead opted to create a
> new standalone "extension bolt" that defines _new_ behavior on top of the
> existing BOLTs, referring 

Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-23 Thread Olaoluwa Osuntokun
Hi Michael,

> A minor point but terminology can get frustratingly sticky if it isn't
> agreed on early. Can we refer to it as nested MuSig2 going
> forward rather than recursive MuSig2?

No strong feelings on my end; the modifier _nested_ is certainly a bit less
loaded and conceptually simpler, so I'm fine w/ using that going forward if
others are as well.

> Rene Pickhardt brought up the issue of latency with regards to
> nested/recursive MuSig2 (or nested FROST for threshold) on Bitcoin
> StackExchange

Not explicitly, but that strikes me as more of an implementation-level
concern. As an example, today more nodes are starting to use replicated
database backends instead of a local embedded database. Using such a
database means that _network latency_ is now also a factor, as committing
new states requires round trips to the DBMS that'll increase the
perceived latency of payments in practice. The benefit ofc is better support
for backups/replication.

I think in the multi-signature setting for LN, system designers will also
need to factor in the added latency due to adding more signers into the mix.
Also any system that starts to break up the logical portions of a node
(signing, hosting, etc -- like Blockstream's Greenlight project), will need
to wrangle with this as well (such is the nature of distributed systems).

> MuSig2 obviously generates an aggregated Schnorr signature, and so even
> nested MuSig2 requires the Lightning protocol to recognize and verify
> Schnorr signatures, which it currently doesn't, right?

Correct.

> So is the current thinking that Schnorr signatures will be supported first
> with a Schnorr 2-of-2 on the funding output (using OP_CHECKSIGADD and
> enabling the nested schemes) before potentially supporting non-nested
> MuSig2 between the channel counterparties on the funding output later? Or
> is this still in the process of being discussed?

The current plan is to jump straight to using musig2 in the funding output,
so: a single aggregated 2-of-2 key, with a single multi-signature being used
to close the channel (co-op or force close).

Re nested vs non-nested: to my knowledge, if Alice uses the new protocol
extensions to open a taproot channel w/ Bob, then she wouldn't necessarily
be aware that Bob is actually Barol (Bob+Carol). She sees Bob's key (which
might actually be an aggregated key) and his public nonce (which might
actually also be composed of two nonces), and just runs the protocol as
normal. Sure there might be some added latency depending on Barol's system
architecture, but from Alice's PoV that might just be normal network latency
(eg: Barol is connecting over Tor which already adds some additional
latency).
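A toy sketch of why nesting stays invisible to Alice (integers mod a made-up order stand in for EC points; real MuSig2 multiplies each key by a KeyAgg coefficient before summing, for rogue-key protection, which this toy omits):

```python
# Toy stand-in group: key aggregation is just addition mod N. Real MuSig2
# applies per-key KeyAgg coefficients first, but the nesting-transparency
# property illustrated here (aggregation is associative) carries over.
N = 2**256 - 189  # illustrative modulus, NOT the secp256k1 group order

def aggregate(keys):
    return sum(keys) % N

bob, carol = 0xB0B, 0xCAB001
barol = aggregate([bob, carol])          # Bob+Carol nest into "one" key
alice = 0xA11CE
channel_key = aggregate([alice, barol])  # Alice just sees a normal peer key

# Nesting is invisible at the channel layer: the result is identical to
# aggregating all three keys directly.
assert channel_key == aggregate([alice, bob, carol])
```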

-- Laolu


Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-15 Thread Bastien TEINTURIER
Hey Zman and list,

I don't think waxwing's proposal will help us for private gossip.
The rate-limiting it provides doesn't seem to be enough in our case.
The proposal rate-limits token issuance to once every N blocks, where
N is the age of the utxo we prove ownership of. Once the token
is issued and verified, the attacker can spend that utxo, and after N
blocks he's able to get a new token with this new utxo.

That is a good enough rate-limit for some scenarios, but in our case
it means that every N blocks people are able to double the capacity
they advertise without actually having more funds.
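A rough model quantifying that inflation (function name and numbers are hypothetical, just putting arithmetic to the paragraph above):

```python
def advertised_capacity(real_utxo_btc, blocks_elapsed, n):
    """Upper bound on capacity an attacker can advertise by spending the
    utxo and re-proving ownership once per n-block window, in a hypothetical
    model of waxwing's token scheme applied to channel announcements."""
    reissues = blocks_elapsed // n
    return real_utxo_btc * (reissues + 1)

# One 1 BTC utxo with tokens re-issuable every 100 blocks: after ~2 weeks
# (2016 blocks) the attacker has accumulated 21 BTC of fake advertised
# capacity, despite never controlling more than 1 BTC.
assert advertised_capacity(1, 2016, 100) == 21
```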

We can probably borrow ideas from this proposal, but OTOH I don't
see how to apply it to lightning gossip: what we want isn't really rate
limiting; we want a stronger link between advertised capacity and
real on-chain capacity.

Cheers,
Bastien

Le mer. 15 juin 2022 à 00:01, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

>
> > ## Lightning Gossip
> >
> > # Gossip V2: Now Or Later?
>
> 
>
> > A proposal for the "re-design the entire thing" was floated in the past
> by
> > Rusty [6]. It does away with the strict coupling of channels to channel
> > announcements, and instead moves them to the _node_ level. Each node
> would
> > then advertise the set of "outputs" they have control of, which would
> then
> > be mapped to the total capacity of a node, without requiring that these
> > outputs self identify themselves on-chain as Lightning Channels. This
> also
> > opens up the door to different, potentially more privacy preserving
> > proofs-of-channel-ownership (something something zkp).
>
> waxwing recently posted something interesting over in bitcoin-dev, which
> seems to match the proof-of-channel-ownership.
>
> https://gist.github.com/AdamISZ/51349418be08be22aa2b4b469e3be92f
>
> I confess to not understanding the mathy bits, but it seems to me, naively,
> that the feature set waxwing points out matches well with the issues we want
> to address:
>
> * We want to rate-limit gossip somehow.
> * We want to keep the mapping of UTXOs to channels private.
>
> It requires a global network that cuts across all uses of the same
> mechanism (similar to defiads, but more private --- basically this means
> that it cannot be just Lightning which uses this mechanism, at least to
> acquire tokens-to-broadcast-my-channels) to prevent a UTXO from being
> reused across services, a property I believe is vital to the expected
> spam-resistance.
>
> > # Friend-of-a-friend Balance Sharing & Probing
> >
> > A presentation was given on friend-of-a-friend balance sharing [16]. The
> > high level idea is that if we share _some_ information within a local
> > radius, then this gives the sender more information to choose a path
> that's
> > potentially more reliable. The tradeoff here ofc is that nodes will be
> > giving away more information that can potentially be used to ascertain
> > payment flows. In an attempt to minimize the amount of information
> shared,
> > the presenter proposed that just 2 bits of information be shared. Some
> > initial simulations showed that sharing local information actually
> performed
> > better than sharing global information (?). Some were puzzled w.r.t how
> > that's possible, but assuming the slides+methods are published others can
> > dig further into the model/parameter used to signal the inclusion.
> >
> > Arguably, information like this is already available via probing, so one
> > line of thinking is something like: "why not just share _some_ of it"
> that
> > may actually lead to less internal failures? This is related to a sort of
> > tension between probing as a tool to increase payment reliability and
> also
> > as a tool to degrade privacy in the network. On the other hand, others
> > argued that probing provides natural cover traffic, since they actually
> > _are_ payments, though they may not be intended to succeed.
> >
> > On the topic of channel probing, a sort of makeshift protocol was
> > devised to make it harder in practice, without sacrificing too much on
> > the axis of payment reliability. At a high level it proposes that:
> >
> > * nodes more diligently set both their max_htlc amount, as well as the
> > max_htlc_value_in_flight amount
> >
> > * a 50ms (or select other value) timer should be used when sending out
> > commitment signatures, independent of HTLC arrival
> >
> > * nodes leverage the max_htlc value to set a false ceiling on the max in
> > flight parameter
> >
> > * for each HTLC sent/forwarded, select 2 other channels at random and
> > reduce the "fake" in-flight ceiling for a period of time
> >
> > Some more details still need to be worked out, but some felt that this
> would
> > kick start more research into this area, and also make balance mapping
> > _slightly_ more difficult. From afar, it may be the case that achieving
> > balance privacy while also achieving acceptable levels of payment
> > reliability might be at odds with each other.

Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-14 Thread ZmnSCPxj via Lightning-dev


> ## Lightning Gossip
>
> # Gossip V2: Now Or Later?



> A proposal for the "re-design the entire thing" was floated in the past by
> Rusty [6]. It does away with the strict coupling of channels to channel
> announcements, and instead moves them to the _node_ level. Each node would
> then advertise the set of "outputs" they have control of, which would then
> be mapped to the total capacity of a node, without requiring that these
> outputs self identify themselves on-chain as Lightning Channels. This also
> opens up the door to different, potentially more privacy preserving
> proofs-of-channel-ownership (something something zkp).

waxwing recently posted something interesting over in bitcoin-dev, which seems 
to match the proof-of-channel-ownership.

https://gist.github.com/AdamISZ/51349418be08be22aa2b4b469e3be92f

I confess to not understanding the mathy bits, but it seems to me, naively, 
that the feature set waxwing points out matches well with the issues we want 
to address:

* We want to rate-limit gossip somehow.
* We want to keep the mapping of UTXOs to channels private.

It requires a global network that cuts across all uses of the same mechanism 
(similar to defiads, but more private --- basically this means that it cannot 
be just Lightning which uses this mechanism, at least to acquire 
tokens-to-broadcast-my-channels) to prevent a UTXO from being reused across 
services, a property I believe is vital to the expected spam-resistance.

> # Friend-of-a-friend Balance Sharing & Probing
>
> A presentation was given on friend-of-a-friend balance sharing [16]. The
> high level idea is that if we share _some_ information within a local
> radius, then this gives the sender more information to choose a path that's
> potentially more reliable. The tradeoff here ofc is that nodes will be
> giving away more information that can potentially be used to ascertain
> payment flows. In an attempt to minimize the amount of information shared,
> the presenter proposed that just 2 bits of information be shared. Some
> initial simulations showed that sharing local information actually performed
> better than sharing global information (?). Some were puzzled w.r.t how
> that's possible, but assuming the slides+methods are published others can
> dig further into the model/parameter used to signal the inclusion.
>
> Arguably, information like this is already available via probing, so one
> line of thinking is something like: "why not just share _some_ of it" that
> may actually lead to less internal failures? This is related to a sort of
> tension between probing as a tool to increase payment reliability and also
> as a tool to degrade privacy in the network. On the other hand, others
> argued that probing provides natural cover traffic, since they actually
> _are_ payments, though they may not be intended to succeed.
>
On the topic of channel probing, a sort of makeshift protocol was devised to
make it harder in practice, without sacrificing too much on the axis of payment
reliability. At a high level it proposes that:
>
> * nodes more diligently set both their max_htlc amount, as well as the
> max_htlc_value_in_flight amount
>
> * a 50ms (or select other value) timer should be used when sending out
> commitment signatures, independent of HTLC arrival
>
> * nodes leverage the max_htlc value to set a false ceiling on the max in
> flight parameter
>
> * for each HTLC sent/forwarded, select 2 other channels at random and
> reduce the "fake" in-flight ceiling for a period of time
>
> Some more details still need to be worked out, but some felt that this would
> kick start more research into this area, and also make balance mapping
> _slightly_ more difficult. From afar, it may be the case that achieving
> balance privacy while also achieving acceptable levels of payment
> reliability might be at odds with each other.

A point that was brought up is that nodes can lie about their capacity, and 
there would be no way to counteract this.

Even given the above, it would be trivial for a lying node to randomly vary 
their `max_htlc` so that they still look like nodes which honestly update it, 
defeating anyone who tries to filter out nodes that never update their 
`max_htlc`s.
(Maximal lying is to always say 50% of your capacity is in `max_htlc`; your 
node can instead lie by setting `max_htlc` randomly between 35% and 65%, and 
you can even coordinate with another lying peer node, using an odd message 
number to set up the lying protocol, so both of you lie about the channel 
capacity consistently.)

I think your best bet is really to utilize feerates, as lying with those is 
expected to lead to economic loss.



> # Node Fee Optimization & Fee Rate Cards
>
> Over the past few years, a common thread we've seen across successful
> routing nodes is dynamic fee setting as a way to encourage/discourage
> traffic. A routing node can utilize the set of fees of a channel to either
> make it too expensive for other nodes to route through (it's already
> depleted, don't try unless you'll give me 10 mil 

[Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-07 Thread Olaoluwa Osuntokun
Hi y'all,

Last week nearly 30 (!) Lightning developers and researchers gathered in
Oakland, California for three days to discuss a number of matters related to
the current state and evolution of the protocol.  This time around, we had
much better representation for all the major Lightning Node implementations
compared to the last LN Dev Summit (Zurich, Oct 2021).

Similar to the prior LN Dev Summit, notes were kept throughout the day that
attempted on a best effort basis to capture the relevant discussions,
decisions, and new relevant research or follow up areas to circle back on.
Last time around, I sent out an email that summarized some key takeaways
(from my PoV) of the last multi-day dev summit [1]. What follows in this
email is a similar summary/recap of the three day summit. Just like last
time: if you attended and felt I missed out on a key point, or inadvertently
misrepresented a statement/idea, please feel free to reply, correcting or
adding additional detail.

The meeting notes in full can be found here:
https://docs.google.com/document/d/1KHocBjlvg-XOFH5oG_HwWdvNBIvQgxwAok3ZQ6bnCW0/edit?usp=sharing

# Simple Taproot Channels

During the last summit, Taproot was a major discussion topic as, though the
soft fork had been deployed, we were all still watching the blocks stack up
on the road to ultimate activation. Fast forward several months later and
Taproot has now been fully activated, with the ecosystem starting to
progressively deploy more and more advanced systems/applications that take
advantage of the new features.

One key deployment model that came out of the last LN Dev summit was the
concept of an iterative roadmap that progressively revamped the system to
use more taprooty features, instead of a "big bang" approach that would
attempt to package up as many things as possible into one larger update. At
a high level the iterative roadmap proposed that we unroll an existing
larger proposal [2] into more bite sized pieces that can be incrementally
reviewed, implemented, and ultimately deployed (see my post on the LN Dev
Summit 2021 for more details).

## Extension BOLTs

Riiight before we started on the first day, I wrote up a minimal proposal
that attempted to tackle the first two items of the Taproot iterative
deployment schedule (musig2 funding outputs and simple tapscript mapping)
[3]. I called the proposal "Simple Taproot Channels" as it set out to do a
mechanical mapping of the current commitment and script structure to a more
taprooty domain. Rather than edit 4 or 5 different BOLTs with a series of
"if this feature bit applies" nested clauses, I instead opted to create a
new standalone "extension bolt" that defines _new_ behavior on top of the
existing BOLTs, referring to the BOLTs when necessary. The style of the
document was inspired by the "proposals" proposal (very meta), which was
popularized by cdecker and adopted by t-bast with his documents on
Trampoline and Blinded Paths.

If the concept catches on, extension BOLTs provide us with a new way to
extend the spec: rather than insert everything in-line, we could instead
create new standalone documents for larger features. Having a single self
contained document makes the proposal easier to review, and also gives the
author more room to provide background knowledge, primers, and rationale.
Over time, as the new extensions become widespread (eg: taproot is
the default channel type), we can fold the extensions back into the main
set of "core" BOLTs (or make new ones as relevant).

Smaller changes to the spec like deprecating an old field or tightening up
some language will likely still follow the old approach of mutating the
existing BOLTs, but larger overhauls like the planned PTLC update may find
the extension BOLTs to be a better tool.

## Tapscript, Musig2, and Lightning

As mentioned above the Simple Taproot Channels proposal does two main
things:
  1. Move the existing 2-of-2 p2wsh segwit v0 funding output to a _single
  key_ p2tr output, with the single key actually being an aggregated musig2
  key.

  2. Map all our existing scripts to the tapscript domain, using the
  internal key (keyspend path) for things like revocations, which can
  potentially allow nodes to store less state for HTLCs.

Of the two components, #1 is by far the trickiest. Musig2 is a very elegant
protocol (not to mention the spec, which y'all should totally check out), but
as the signatures aren't deterministic (unlike RFC 6979 [5]), both signers
need to "protect themselves at all times" to ensure they don't ever re-use
nonces, which can lead to a private key leak (!!).
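To make that danger concrete, here's a toy Schnorr-style calculation over a small prime (all parameters made up for readability; real signing uses the secp256k1 group order) showing how reusing a nonce across two signatures hands out the private key:

```python
# Toy demonstration of why Schnorr/MuSig2 nonce reuse is fatal: two
# signatures sharing the same nonce k let anyone solve for the key x.
n = 524287          # a small prime standing in for the group order
x = 123457          # private key
k = 98765           # the nonce, reused across two signatures (the bug!)

def sign(e):
    # Schnorr-style response: s = k + e*x (mod n), where e is the challenge.
    return (k + e * x) % n

e1, e2 = 1111, 2222
s1, s2 = sign(e1), sign(e2)

# With both signatures: s1 - s2 = (e1 - e2)*x, so divide to recover x.
recovered = (s1 - s2) * pow(e1 - e2, -1, n) % n
assert recovered == x
```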

Rather than try to create some sort of pseudo-deterministic nonce scheme
(which maaybe works until the Blockstream Research team squints vaguely in
its direction), I opted to just make all nonces 100% ephemeral and tied to
the lifetime of a connection. Musig2 defines something called a public
nonce, which is actually two individual 33-byte nonces. This value needs to
be exchanged