Re: [Lightning-dev] Splicing Proposal: Feedback please!
ZmnSCPxj writes:
> Good morning Rusty,
>
> In BOLT #2 we currently impose a 2^24 satoshi limit on total channel
> capacity. Is splicing intended to allow violation of this limit? I do not
> see it mentioned in the proposal. Can I splice 21 million bitcoins on a
> 1-satoshi channel?

Good question! I think that's the kind of thing we should consider
carefully at the Summit.

> It may be good to start brainstorming possible failure modes during splice,
> and how to recover, and also to indicate the expected behavior in the
> proposal, as I believe these will be the points where splicing must be
> designed most precisely. What happens when a splice is ongoing and the
> communication gets disconnected? What happens when some channel failure
> occurs during splicing and we are forced to drop onchain? And so on.

Agreed, but we're now debating two fairly different methods for
splicing. Once we've decided on that, we can try to design the proposals
themselves.

Thanks,
Rusty.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
Re: [Lightning-dev] Splicing Proposal: Feedback please!
Christian Decker writes:
> On Thu, Oct 11, 2018 at 3:40 AM Rusty Russell wrote:
>> > * Once we have enough confirmations we merge the channels (either
>> > automatically or with the next channel update). A new commitment tx is
>> > created which now spends each output of each of the two funding txs
>> > and assigns the channel balance to the channel partners according to
>> > the two independent channels. The old commitment txs are invalidated.
>> > * The disadvantage is that while splicing is not completed, if the
>> > funder of the splicing tx tries to publish an old commitment tx the
>> > node will only be punished by sending all the funds of the first
>> > funding tx to the partner, as the special commitment tx of the 2nd
>> > output has no newer state yet.
>>
>> Yes, this is the alternative method; produce a parallel funding tx
>> (which only needs to support a single revocation, or could even be done
>> by a long timeout) and then join them when it reaches the agreed depth.
>>
>> It has some elegance; particularly because one side doesn't have to do
>> any validation or store anything until it's about to splice in. You get
>> asked for a key and signature, you produce a new one, and sign whatever
>> tx they want. They hand you back the tx and the key you used once it's
>> buried far enough, and you check the tx is indeed buried and the output
>> is the script you're expecting, then you flip the commitment tx.
>>
>> But I chose not to do this because every transaction commitment
>> forever will require 2 signatures, and it doesn't allow us to forget old
>> revocation information.
>>
>> And it has some strange side-effects: onchain this looks like two
>> channels; do we gossip about both? We have to figure out the limit on
>> splice-in to make sure the commitment tx stays under 400kSipa.
> This is a lot closer to my original proposal for splicing, and I
> still like it a lot more since the transition from old to new
> channel is basically atomic (not having to update state on both
> pre-splice and post-splice versions). The new funds will remain
> unavailable for the same time, and since we allow only one
> concurrent splice in your proposal we don't even lose any
> additional time regarding the splice-outs.
>
> So pulling the splice_add_input and splice_add_output up to
> signal the intent of adding funds to a splice. Splice_all_added
> is then used to start moving the funds to a pre-allocated 2-of-2
> output where the funds can mature. Once the funds are
> matured (e.g., 6 confirmations) we can start the transition: both
> parties claim the funding output, and the pre-allocated funds, to
> create a new funding tx which is immediately broadcast, and we
> flip over to the new channel state. No need to keep parallel
> state and then disambiguate which one it was.

If we're going to do side splice-in like this, I would use a very
different protocol: the reason for this protocol was to treat splice-in
and splice-out the same, and inline splice-in requires wait time. Since
splice-out doesn't, we don't need this at all. It would look much more
like:

1. Prepare any output with a script of the specific form, eg:

       OP_DEPTH 3 OP_EQUAL
       OP_IF
           OP_CHECKMULTISIG
       OP_ELSE
           OP_CHECKLOCKTIMEVERIFY OP_DROP OP_CHECKSIG
       OP_ENDIF

1. type: 40 (`splice_in`) (`option_splice`)
2. data:
   * [`32`:`channel_id`]
   * [`8`:`satoshis`]
   * [`32`:`txid`]
   * [`4`:`txoutnum`]
   * [`4`:`blockheight`]
   * [`33`:`myrescue_pubkey`]

1. type: 137 (`update_splice_in_accept`) (`option_splice`)
2. data:
   * [`32`:`channel_id`]
   * [`32`:`txid`]
   * [`4`:`txoutnum`]

1. type: 138 (`update_splice_in_reject`) (`option_splice`)
2. data:
   * [`32`:`channel_id`]
   * [`32`:`txid`]
   * [`2`:`len`]
   * [`len`:`errorstr`]

The recipient of `splice_in` checks that it's happy with the
`blockheight` (far enough in the future).
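To make the field layout concrete, here is a minimal sketch of packing the proposed `splice_in` message, assuming the big-endian encoding Lightning wire messages use; the function name and all values are illustrative, not from any implementation.

```python
# Sketch only: serialize the proposed `splice_in` (type 40) message
# using the field sizes listed above. Big-endian, per the Lightning
# wire format. All inputs below are dummy values.
import struct

def encode_splice_in(channel_id: bytes, satoshis: int, txid: bytes,
                     txoutnum: int, blockheight: int,
                     myrescue_pubkey: bytes) -> bytes:
    assert len(channel_id) == 32 and len(txid) == 32
    assert len(myrescue_pubkey) == 33  # compressed pubkey
    msg = struct.pack(">H", 40)             # 2-byte message type
    msg += channel_id                       # [32:channel_id]
    msg += struct.pack(">Q", satoshis)      # [8:satoshis]
    msg += txid                             # [32:txid]
    msg += struct.pack(">I", txoutnum)      # [4:txoutnum]
    msg += struct.pack(">I", blockheight)   # [4:blockheight]
    msg += myrescue_pubkey                  # [33:myrescue_pubkey]
    return msg

wire = encode_splice_in(b"\x00" * 32, 100_000, b"\x11" * 32,
                        1, 550_000, b"\x02" + b"\x33" * 32)
assert len(wire) == 2 + 32 + 8 + 32 + 4 + 4 + 33  # 115 bytes total
```

The accept/reject messages (types 137/138) would pack analogously, with only the fields listed for each.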
Once it sees the tx referred to buried to its own `minimum_depth`, it
checks the output is what they claimed, then sends
`update_splice_in_accept`; this is followed up by `commitment_signed` as
normal, but from this point onwards, all commitment tx signatures have
one extra sig.

Similarly, splice-out:

1. type: 139 (`update_splice_out`) (`option_splice`)
2. data:
   * [`32`:`channel_id`]
   * [`8`:`satoshis`]
   * [`2`:`scriptlen`]
   * [`scriptlen`:`outscript`]

The recipient checks that the output script is standard, and that the
amount can be afforded by the other side. From then on, each commitment
tx has a new output. Note this doesn't put the splice out on the
blockchain!

1. type: 140 (`propose_reopen`) (`option_splice`)
2. data:
   * [`32`:`channel_id`]
   * [`4`:`feerate_per_kw`]
   * [`33`:`funding_pubkey`]

This initiates a mutually-agreed broadcast of the current state: all
inputs (original and spliced), all spliced outputs, and a funding-style
2-of-2 output which holds all the remaining funds. Call this a 'reopen
tx'. This
Re: [Lightning-dev] Splicing Proposal: Feedback please!
Good morning Rusty and Christian and list,

> This is one of the cases where a simpler solution (relatively
> speaking ^^) is to be preferred imho, allowing for future
> iterations.

I would basically agree here, with the further proviso that I think
splicing is not quite as high a priority as AMP, decorrelation, or
watchtowers.

The simpler solution has the drawback of more transactions onchain, but
massively reduces the complexity of maintaining parallel state updates.
Parallel updates would greatly increase our need to test the feature in
various conditions (and also to specify in the formal spec what the
possible failure modes are and how they should be recovered from, as a
basic safety for users of Lightning). Of course, the same course of
thought is what led to onchain transaction bloat in the first place.

Splicing features might be versioned, with the possibility of better
splicing mechanisms being defined in later BOLT specs. This would allow
us to iterate somewhat and start with the simpler-but-more-onchain-txes
splicev1 feature, possibly getting replaced with a more
thought-out-and-planned splicev2 feature with parallel updates (and
careful analysis of possible failures in parallel updates and how we
should recover from them). The drawback is further complexity later on
from possibly having to support multiple splicing mechanisms (but if we
assign completely separate feature bits, it may be possible for
pragmatic implementations to eventually stop signalling the ability to
splice using older splicing feature versions in favor of newer ones).

Regards,
ZmnSCPxj
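The versioned-feature-bit idea above can be sketched as follows; note that the bit positions are entirely hypothetical (no BOLT assigns splice feature bits), and the function is a toy illustration of picking the newest splice version both peers signal.

```python
# Toy sketch of versioned splice negotiation via feature bits.
# Bit positions below are invented for illustration only.
SPLICE_V1_BIT = 100  # hypothetical: simpler, more on-chain txes
SPLICE_V2_BIT = 102  # hypothetical: parallel-state variant

def supported_splice_version(local_features: set, remote_features: set):
    """Return the newest splice version both peers signal, or None."""
    common = local_features & remote_features
    if SPLICE_V2_BIT in common:
        return 2
    if SPLICE_V1_BIT in common:
        return 1
    return None

# A pragmatic newer implementation may stop signalling v1 entirely:
assert supported_splice_version({100, 102}, {102}) == 2
assert supported_splice_version({100, 102}, {100}) == 1
assert supported_splice_version({102}, {100}) is None
```

Because the versions use completely separate bits, dropping the old bit is a clean deprecation path, as suggested above.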
Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts
Another way would be to always have two update transactions, effectively
creating a larger overall counter:

  [anchor] -> [update highbits] -> [update lobits] -> [settlement]

We normally update [update lobits] until it saturates. If lobits
saturates, we increment [update highbits] and reset [update lobits] to
the lowest valid value. This will provide a single counter with 10^18
possible updates, which should be enough for a while even without
reanchoring.

Regards,
ZmnSCPxj

‐‐‐ Original Message ‐‐‐
On Friday, October 12, 2018 1:37 AM, Christian Decker wrote:

> Thanks Anthony for pointing this out, I was not aware we could
> roll keypairs to reset the state numbers.
>
> I basically thought that 1 billion updates is more than I would
> ever do, since with splice-in / splice-out operations we'd be
> re-anchoring on-chain on a regular basis anyway.
>
> On Wed, Oct 10, 2018 at 10:25 AM Anthony Towns wrote:
>
>> On Mon, Apr 30, 2018 at 05:41:38PM +0200, Christian Decker wrote:
>>> eltoo is a drop-in replacement for the penalty based invalidation
>>> mechanism that is used today in the Lightning specification. [...]
>>
>> Maybe this is obvious, but in case it's not, re: the locktime-based
>> sequencing in eltoo:
>>
>> "any number above 0.500 billion is interpreted as a UNIX timestamp, and
>> with a current timestamp of ~1.5 billion, that leaves about 1 billion
>> numbers that are interpreted as being in the past"
>>
>> I think if you had more than 1B updates to your channel (50 updates
>> per second for 4 months?) you could reset the locktime by rolling
>> over to use new update keys. When unilaterally closing you'd need to
>> use an extra transaction on-chain to do that roll-over, but you'd save
>> a transaction if you did a cooperative close.
>> ie, rather than:
>>
>> [funding] -> [coop close / re-fund] -> [update 23M] -> [HTLCs etc]
>> or
>> [funding] -> [coop close / re-fund] -> [coop close]
>>
>> you could have:
>>
>> [funding] -> [update 1B] -> [update 23,310,561 with key2] -> [HTLCs]
>> or
>> [funding] -> [coop close]
>>
>> You could repeat this when you get another 1B updates, making unilateral
>> closes more painful, but keeping cooperative closes cheap.
>>
>> Cheers,
>> aj
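The highbits/lobits counter described at the top of this message can be sketched as a toy model; the names and the exact lowest valid state number are assumptions for illustration, not eltoo code.

```python
# Toy model of the two-level update counter: [update highbits] carries
# when [update lobits] saturates. Assumes roughly 1 billion usable
# state numbers per level (the eltoo locktime range), giving ~10^18
# total states.
LOBITS_MIN = 1       # assumed lowest valid state number
LOBITS_MAX = 10**9   # ~1 billion usable values per level

def next_state(highbits: int, lobits: int) -> tuple:
    """Advance the combined counter by one channel update."""
    if lobits < LOBITS_MAX:
        return highbits, lobits + 1      # usual case: bump lobits only
    return highbits + 1, LOBITS_MIN      # lobits saturated: carry

def combined(highbits: int, lobits: int) -> int:
    """Total ordering across both levels."""
    return highbits * LOBITS_MAX + lobits

# Saturation rolls over to the next highbits value, and ordering holds:
assert next_state(0, 5) == (0, 6)
assert next_state(0, 10**9) == (1, 1)
assert combined(1, 1) > combined(0, 10**9)
```

The key property is that any later state has a strictly larger combined value, so an [update highbits] spend always supersedes states under the previous highbits value.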
Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts
Thanks Anthony for pointing this out, I was not aware we could roll
keypairs to reset the state numbers.

I basically thought that 1 billion updates is more than I would ever do,
since with splice-in / splice-out operations we'd be re-anchoring
on-chain on a regular basis anyway.

On Wed, Oct 10, 2018 at 10:25 AM Anthony Towns wrote:
> On Mon, Apr 30, 2018 at 05:41:38PM +0200, Christian Decker wrote:
> > eltoo is a drop-in replacement for the penalty based invalidation
> > mechanism that is used today in the Lightning specification. [...]
>
> Maybe this is obvious, but in case it's not, re: the locktime-based
> sequencing in eltoo:
>
> "any number above 0.500 billion is interpreted as a UNIX timestamp, and
> with a current timestamp of ~1.5 billion, that leaves about 1 billion
> numbers that are interpreted as being in the past"
>
> I think if you had more than 1B updates to your channel (50 updates
> per second for 4 months?) you could reset the locktime by rolling
> over to use new update keys. When unilaterally closing you'd need to
> use an extra transaction on-chain to do that roll-over, but you'd save
> a transaction if you did a cooperative close.
>
> ie, rather than:
>
> [funding] -> [coop close / re-fund] -> [update 23M] -> [HTLCs etc]
> or
> [funding] -> [coop close / re-fund] -> [coop close]
>
> you could have:
>
> [funding] -> [update 1B] -> [update 23,310,561 with key2] -> [HTLCs]
> or
> [funding] -> [coop close]
>
> You could repeat this when you get another 1B updates, making unilateral
> closes more painful, but keeping cooperative closes cheap.
>
> Cheers,
> aj
Re: [Lightning-dev] Splicing Proposal: Feedback please!
On Thu, Oct 11, 2018 at 3:40 AM Rusty Russell wrote:
> > * Once we have enough confirmations we merge the channels (either
> > automatically or with the next channel update). A new commitment tx is
> > created which now spends each output of each of the two funding txs
> > and assigns the channel balance to the channel partners according to
> > the two independent channels. The old commitment txs are invalidated.
> > * The disadvantage is that while splicing is not completed, if the
> > funder of the splicing tx tries to publish an old commitment tx the
> > node will only be punished by sending all the funds of the first
> > funding tx to the partner, as the special commitment tx of the 2nd
> > output has no newer state yet.
>
> Yes, this is the alternative method; produce a parallel funding tx
> (which only needs to support a single revocation, or could even be done
> by a long timeout) and then join them when it reaches the agreed depth.
>
> It has some elegance; particularly because one side doesn't have to do
> any validation or store anything until it's about to splice in. You get
> asked for a key and signature, you produce a new one, and sign whatever
> tx they want. They hand you back the tx and the key you used once it's
> buried far enough, and you check the tx is indeed buried and the output
> is the script you're expecting, then you flip the commitment tx.
>
> But I chose not to do this because every transaction commitment
> forever will require 2 signatures, and it doesn't allow us to forget old
> revocation information.
>
> And it has some strange side-effects: onchain this looks like two
> channels; do we gossip about both? We have to figure out the limit on
> splice-in to make sure the commitment tx stays under 400kSipa.
This is a lot closer to my original proposal for splicing, and I still
like it a lot more since the transition from old to new channel is
basically atomic (not having to update state on both pre-splice and
post-splice versions). The new funds will remain unavailable for the
same time, and since we allow only one concurrent splice in your
proposal we don't even lose any additional time regarding the
splice-outs.

So pulling the splice_add_input and splice_add_output up to signal the
intent of adding funds to a splice. Splice_all_added is then used to
start moving the funds to a pre-allocated 2-of-2 output where the funds
can mature. Once the funds are matured (e.g., 6 confirmations) we can
start the transition: both parties claim the funding output, and the
pre-allocated funds, to create a new funding tx which is immediately
broadcast, and we flip over to the new channel state. No need to keep
parallel state and then disambiguate which one it was.

The downsides of this are that we now have 2 on-chain transactions
(pre-allocation and re-open), and splice-outs are no longer immediate if
we have a splice-in in the changeset as well. The latter can be
remediated with one more reanchor that just considers the splice-ins
that were proposed.

> > I believe splicing out is even safer:
> > * One just creates a spend of the funding tx which has two outputs.
> > One output goes to the recipient of the splice out operation and the
> > second output acts as a new funding transaction for the newly spliced
> > channel. Once signatures for the new commitment transaction are
> > exchanged (basically following the protocol to open a channel) the
> > splicing operation can be broadcast.
> >
> > * The old channel MUST NOT be used anymore but the new channel can be
> > operational right away without blockchain confirmation.
> > In case someone tries to publish an old state of the old channel it
> > will be a double spend of the splicing operation, and in the worst
> > case it will be punished and the splicing was not successful.
> >
> > If one publishes an old state of the new channel everything will just
> > work as normal, even if the funding tx is not yet mined. It could only
> > be replaced with an old state of the previous channel (which as we
> > saw is not a larger risk than the usual operation of a lightning
> > node).
>
> Right, you're relying on CPFP pushing through the splice-out tx if it
> gets stuck. This requires that we check carefully for standardness and
> other constraints which might prevent this; for example, we can't allow
> more than 20 (?) of these in a row without being sufficiently buried,
> since I think that's where CPFP calculations top out.

We shouldn't allow more than one pending splice operation anyway, as
stated in your proposal initially. We are already critically reliant on
our transactions being confirmed on-chain, so I don't see this as much
of an added issue.

> > As mentioned, maybe you had this workflow already in your mind but I
> > don't see why we need to send around all the messages twice with my
> > workflow. We only need to maintain double state, but only until it is
> > fair / safe to do so. I would also