[Lightning-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-07 Thread Jeremy
I made a comment on
https://github.com/bitcoin/bips/pull/943#issuecomment-876034559 but it
occurred to me it is more ML appropriate.

In general, one thing that strikes me is that when anyprevout is used for
eltoo you're generally doing a script like:

```
IF
10 CSV DROP
1::musigkey(As,Bs) CHECKSIG
ELSE
 <N> CLTV DROP
   1::musigkey(Au,Bu) CHECKSIG
ENDIF
```

This means you're overloading the CLTV clause, so it's impossible to use
Eltoo together with an absolute lock time. It also means you have to use
fewer than a billion sequence numbers, and if you pick a random starting
number and random gaps to mask how many payments you've done, that roughly
halves your budget. That may be enough, but it is still relatively
limited. There is also the issue that multiple inputs cannot be combined
into a transaction if they have signed different locktimes.
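As a rough back-of-the-envelope sketch (plain Python; the convention of encoding eltoo states in the timestamp range of nLockTime and the average gap size are illustrative assumptions, not part of the proposal), the state-number budget described above looks like:

```python
import time

# Consensus rule: nLockTime values below this threshold are interpreted
# as block heights, values at or above it as Unix timestamps.
LOCKTIME_THRESHOLD = 500_000_000

# Eltoo (per the paper's convention) rides state numbers in the timestamp
# range and keeps them in the past so update transactions are always
# final: LOCKTIME_THRESHOLD <= state < current time.
now = int(time.time())
max_states = now - LOCKTIME_THRESHOLD  # on the order of a billion today

# Starting at a random state number and leaving random gaps to mask the
# update count (illustrative average gap of 2) halves that budget.
masked_states = max_states // 2
print(f"{masked_states:,} usable masked states")
```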

Since Eltoo is the primary motivation for ANYPREVOUT, it's worth making
sure we have all the parts we'd need bundled together to see it be
successful.

A few options come to mind that might be desirable in order to better serve
the eltoo use case:

1) Define a new CSV type (e.g. dedicate the bits (1<<31 | 1<<30) to eltoo
sequences). This has the benefit of giving a per-input sequence, but the
drawback of using up CSV bits. And because there is only one sequence
field per input, this technique cannot be combined with a sequence tag.
2) CSFS -- it would be possible to take a signature from the stack for an
arbitrarily higher number, e.g.:
```
IF
10 CSV DROP
1::musigkey(As,Bs) CHECKSIG
ELSE
DUP musigkey(Aseq, BSeq) CSFSV <N> GTE VERIFY
   1::musigkey(Au,Bu) CHECKSIG
ENDIF
```
Then, possession of a signature over a higher sequence number would allow
use of the update path. However, the downside is that there would be no
guarantee, without a more advanced covenant, that the new state provided
for the update is higher than the past one.
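A toy model of what the CSFS clause above would check (plain Python with an HMAC as a stand-in for a real CHECKSIGFROMSTACK signature; this is not consensus code, just an illustration of the GTE check and its gap):

```python
import hashlib
import hmac

# Stand-in "signature": a keyed MAC plays the role of a musig(Aseq, Bseq)
# signature over a sequence number.
KEY = b"musig(Aseq, BSeq) stand-in key"

def sign_sequence(n: int) -> bytes:
    return hmac.new(KEY, n.to_bytes(8, "big"), hashlib.sha256).digest()

def csfs_gte_verify(witness_seq: int, witness_sig: bytes,
                    committed_seq: int) -> bool:
    """Model of `DUP musigkey(Aseq,BSeq) CSFSV <N> GTE VERIFY`: the
    witness supplies a signed sequence number, and the script requires
    it to be >= the state committed in the output."""
    sig_ok = hmac.compare_digest(witness_sig, sign_sequence(witness_seq))
    return sig_ok and witness_seq >= committed_seq

# A signature over a higher sequence unlocks the update path...
assert csfs_gte_verify(11, sign_sequence(11), 10)
# ...but over a lower one it does not. Note nothing here constrains the
# state committed by the *new* output -- that's the covenant gap.
assert not csfs_gte_verify(9, sign_sequence(9), 10)
```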
3) Sequenced Signature: it could be set up such that ANYPREVOUT keys are
tagged with an N-byte sequence (instead of 1), and part of the process of
signature verification includes hashing a sequence on the signature itself.

E.g.

```
IF
10 CSV DROP
1::musigkey(As,Bs) CHECKSIG
ELSE
   <N+1>::musigkey(Au,Bu) CHECKSIG
ENDIF
```
To satisfy this clause, a signature `<N+1>::S` would be required. When
validating the signature S, the APO digest would have to include the value
`<N+1>`. It is non-cryptographically checked that N+1 > N.
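A toy model of option 3's digest tagging (plain Python with SHA-256 as a stand-in for the real BIP 341/118 signature message; every name and encoding here is a simplified assumption):

```python
import hashlib

def tagged_digest(seq_tag: int, tx_digest: bytes) -> bytes:
    """Option 3's idea: the ANYPREVOUT signature message commits to the
    sequence tag alongside the usual transaction digest, so a signature
    only matches a key carrying the same tag."""
    return hashlib.sha256(seq_tag.to_bytes(4, "big") + tx_digest).digest()

def update_path_ok(committed_state: int, witness_state: int,
                   witness_digest: bytes, tx_digest: bytes) -> bool:
    # The cryptographic check binds the signature to witness_state; the
    # non-cryptographic check is simply that the new state exceeds the
    # committed one (N+1 > N).
    return (witness_state > committed_state and
            witness_digest == tagged_digest(witness_state, tx_digest))

tx = hashlib.sha256(b"example update tx").digest()
assert update_path_ok(10, 11, tagged_digest(11, tx), tx)      # advances
assert not update_path_ok(10, 10, tagged_digest(10, tx), tx)  # stale
```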
4) Similar to 3, but look at more values off the stack. This is also OK,
but violates the principle of not having opcodes take a variable number of
items off the stack. Verify semantics on the extra data fields could
ameliorate this concern, and it might make sense to do it that way.
5) Something in the Annex: it would also be possible to define a new
generic place for lock times in the annex (to permit dual height/time,
relative/absolute, all per input). The pro of this approach is that it
would solve an outstanding problem for script that we want to solve
anyway; the downside is that the Annex is totally undefined presently, so
it's unclear whether this is an appropriate use for it.
6) Do Nothing :)


Overall I'm somewhat partial to option 3, as it seems closest to making
ANYPREVOUT precisely designed to support Eltoo. It would also be possible
to make it such that if the tag is N=1, the behavior is identical to the
current proposal.

--
@JeremyRubin 

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-07-07 Thread Ryan Gentry via Lightning-dev
Hi all,

Thanks so much for the great feedback over the last week. There seems to be
general agreement that adding a simple home for descriptive design
documents focusing on new LN features would be a good thing, and would
augment the prescriptive BOLTs (which have done a great job getting us this
far!).

If there is a point of contention, it seems to be not only about how this
interacts with the existing BIP system, but also about how the BOLTs
interact with the BIP system. The only problem I have with BOLTs and bLIPs
as BIPs
is that it introduces large scope creep over what was originally a pretty
simple proposal. I don't really care where these design documents exist,
only that there is a standard format and that LN developers and users feel
empowered to create them and share them with the broader ecosystem.

If we proceed with creating bLIPs in the lightning-rfc repo today and later
decide to recreate the BOLTs as BIPs, it will be no trouble at all to
recreate bLIPs as BIPs as well.

The BIP Process Wishlist sounds great and can be addressed independently.
If recruits for merging the BOLTs can be found, we can tackle the mechanics
of a merge then (alongside maybe some of the other bitcoin-related *IP
repos that exist outside the BIPs? [1] [2]).

Best,
Ryan

[1] https://github.com/satoshilabs/slips
[2] https://github.com/rsksmart/RSKIPs

On Fri, Jul 2, 2021 at 1:21 PM Antoine Riard 
wrote:

> Hi Ryan,
>
> Thanks for starting this discussion, I agree it's a good time for the
> Lightning development community to start this self-introspection on its own
> specification process :)
>
> First and foremost, maybe we could take a minute off to celebrate the
> success of the BOLT process and the road traveled so far ? What was a fuzzy
> heap of ideas on a whiteboard a few years ago has bloomed up to a living
> and pulsating distributed ecosystem of thousands of nodes all around the
> world. If the bet was to deliver on fast, instant, cheap, reasonably
> scalable, reasonably confidential Bitcoin payments, it's a won one and
> that's really cool.
>
> Retrospectively, it was a foolhardy bet for a wide diversity of factors.
> One could think about opinionated, early design choices deeply affecting
> protocol safety and efficiency of which the ultimate validity was still a
> function of fluky base layer evolutions [0]. Another could consider the
> communication challenges of softly aligning development teams on the common
> effort of designing and deploying from scratch a cryptographic protocol as
> sophisticated as Lightning. Not an easy task when you're mindful about the
> timezones spread, the diversity of software engineering backgrounds and the
> differing schedules of priorities.
>
> So kudos to everyone who has played a part in the Lightning dev process.
> The OGs who started the tale, the rookies who jumped on the wagon on the
> way and today newcomers showing up with new seeds to nurture the ecosystem
> :)
>
> Now, I would say we more or less all agree that the current BOLT process
> has reached its limits, both from private conversations across the teams
> and from frustrations expressed during the IRC meetings in the past
> months. As a simple data point, the only meaningful spec object we merged
> in the last 18 months is anchor outputs; it consumed a lot of review and
> engineering bandwidth from contributors, took a few refinements to
> finalize (`option_anchors_zero_fee_htlc_tx`), and I believe every
> implementation is still scratching its head over a robust, default
> fee-bumping strategy.
>
> So if we agree about the BOLT process limitations, the next question to
> raise is how to improve it. Though there, as expressed in other replies,
> I'm afraid we're not going to be able to do that much, as ultimately
> we're upper-bounded by a fast-paced, always-growing, permissionless
> ecosystem of applications and experiments moving forward bazaar-style,
> and lower-bounded by a decentralized process across teams allocating
> their engineering resources with different priorities, or even exploring
> Lightning's massive evolution stages in heterogeneous, synergic
> directions.
>
> Breeding another specification process on top of Lightning sounds like a
> good way forward, though I believe it might be better to take the time to
> operate the disentanglement nicely. If we take the list of ideas that
> could be part of such a process, one of them, dynamic commitments, could
> make a lot of sense to be well designed and well supported by every
> implementation. In case of emergency fixes to deploy safer channel
> types, if you have to close all your channels with other
> implementations, on a holistic scale it might clog the mempools and
> spike the feerate, straining the safety of every other channel on the
> network. Yes, we might have safety interdependencies between
> implementations :/
>
> And it's also good to have thoughtful, well-defined specification bounds
> when you're working on coordinated security disclosures to know who has
> imp