Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-05-04 Thread Rusty Russell
Matt Corallo  writes:
> On 4/27/21 01:04, Rusty Russell wrote:
>> Matt Corallo  writes:
 On Apr 24, 2021, at 01:56, Rusty Russell  wrote:

 Matt Corallo  writes:
>>> I promise it’s much less work than it sounds like, and avoids having to 
>>> debug these things based on logs, which is a huge pain :). Definitely less 
>>> work than a new state machine:).
>> 
>> But the entire point of this proposal is that it's a subset of the
>> existing state machine?
>
> Compared to today, it's a good chunk of additional state machine logic to
> enforce when a message can or can not be sent,
> and additional logic for when we can (or can not) flush any pending
> changes buffer(s).

Kind of.  I mean, we can add an "update_noop" message which simply
requests your turn and has no other effects.

>> The only "twist" is that if it's your turn and you receive an update,
>> you can either reply with a "yield" message, or ignore it.
>
> How do you handle the "no changes to make" case - do you send yields back and
> forth every N ms all day long or is there
> some protocol by which you resolve it when both parties try to claim the turn
> at once?

You don't do anything?

If you want to send an update:
1. If it is your turn, send it.
2. If it is not your turn, send it and wait for either a `yield`, or a
   different update.  In the former case, it's now your turn, in the
   latter case it's not and your update was ignored.

If you receive an update when it's your turn:
1. If you've sent an update already, ignore it.
2. Otherwise, send `yield`.
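
To make that concrete, here is a rough sketch of those rules in Python.
Everything here (class names, the `send` callback, the Update/Yield
stand-ins) is illustrative rather than taken from the draft, and
commitment signing is elided:

    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Update:          # stand-in for any update_* message
        name: str

    @dataclass
    class Yield:           # stand-in for the draft's `yield` message
        pass

    class TurnClaiming:
        """Sketch of the turn-claiming rules above.  Commitment signing and
        the end-of-turn handover are elided; `send` is a hypothetical
        transport callback."""

        def __init__(self, our_turn: bool, send: Callable):
            self.our_turn = our_turn
            self.send = send
            self.sent_this_turn = False          # have we started our own turn?
            self.pending: List[Update] = []      # sent out of turn, not yet accepted

        def propose(self, upd: Update) -> None:
            # Rule 1/2: just send it.  If it isn't our turn, the peer either
            # yields (it becomes our turn) or ignores it and sends its own update.
            self.send(upd)
            if self.our_turn:
                self.sent_this_turn = True
            else:
                self.pending.append(upd)

        def on_peer_update(self, upd: Update) -> Optional[Update]:
            if self.our_turn:
                if self.sent_this_turn:
                    return None                  # our turn already started: ignore it
                self.send(Yield())               # nothing in flight: hand over the turn
                self.our_turn = False
            # The peer is taking a turn; anything we sent out of turn was
            # ignored and stays in self.pending to be re-proposed later.
            return upd

        def on_peer_yield(self) -> None:
            # Our out-of-turn updates were accepted: it is now our turn and
            # they count as this turn's updates.
            self.our_turn = True
            self.sent_this_turn = bool(self.pending)
            self.pending.clear()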

>>> Isn’t that pretty similar? Discard one splice proposal deterministically 
>>> (ok that’s new) and the loser has to store their proposal in a holding cell 
>>> for later (which they have to do in turn-based anyway). Logic to check if 
>>> there’s unsettled things in RAA handling is pretty similar to turn-based, 
>>> and logic to reject other messages is the same as shutdown handling today.
>> 
>> Nope, with the simplified protocol you can `update_splice` at any time
>> instead of your normal update, since both sides are already in sync.
>
> Hmm, I'm somewhat failing to understand why it's that different - you can only
> update_splice if it's your turn, which is
> about exactly the same amount of additional logic to check turn conditions as
> just flagging "want to do splice". Either way
> you have the same pending splice buffer.

No, for turn-taking, this case is exactly like any other update.

For non-turn taking, we need an explicit quiescence protocol, and to
handle simultaneous splicing.

 - MUST use the higher of the two `funding_feerate_perkw` as the feerate for
   the splice.
>>>
 If we like turn based, why not just deterministically throw out one splice? :)
>> 
>> Because while I am going to implement turn-based, I'm not sure if anyone
>> else is.  I guess we'll see?
>
> My point was more that it's similar in logic - if you throw out the splice
> deterministically and just keep it in some
> "pending splice" buffer on the sending side, you've just done basically what
> you'd do to implement turns, while keeping
> the non-turn splice protocol a bit easier :).

No, you really haven't.  Right now you can have Alice propose a splice
while Bob proposes at the same time, so we have a tiebreak protocol.
And you can have Alice propose a splice while Bob proposes a different
update which needs to be completely resolved before the splice can
continue.

Whereas in turn taking, when someone proposes a splice, that's what
you're doing, as soon as it is received.  And when someone wants to
propose a splice, they can do it as soon as it's their turn.  If it's
not their turn and the other side proposes a splice, they can jump onto
that (happy days, since the splice proposer pays for 1 input 1 output
and the core of the tx!).

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-05-04 Thread Rusty Russell
Matt Corallo  writes:
> On 4/27/21 17:32, Rusty Russell wrote:
>> OK, draft is up:
>> 
>>  https://github.com/lightningnetwork/lightning-rfc/pull/867
>> 
>> I have to actually implement it now (though the real win comes from
>> making it compulsory, but that's a fair way away).
>> 
>> Notably, I added the requirement that update_fee messages be on their
>> own.  This means there's no debate on the state of the channel when
>> this is being applied.
>
> I do have to admit *that* part I like :).
>
> If we don't do turns for splicing, I wonder if we can take the rules around 
> splicing pausing other HTLC updates, make 
> them generic for future use, and then also use them for update_fee in a 
> simpler-to-make-compulsory change :).

Yes, it is similar to the close requirement, except that one requires all
HTLCs to be absent.

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-27 Thread Matt Corallo




On 4/27/21 17:32, Rusty Russell wrote:

OK, draft is up:

 https://github.com/lightningnetwork/lightning-rfc/pull/867

I have to actually implement it now (though the real win comes from
making it compulsory, but that's a fair way away).

Notably, I added the requirement that update_fee messages be on their
own.  This means there's no debate on the state of the channel when
this is being applied.


I do have to admit *that* part I like :).

If we don't do turns for splicing, I wonder if we can take the rules around splicing pausing other HTLC updates, make 
them generic for future use, and then also use them for update_fee in a simpler-to-make-compulsory change :).


Matt


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-27 Thread Matt Corallo



On 4/27/21 01:04, Rusty Russell wrote:

Matt Corallo  writes:

On Apr 24, 2021, at 01:56, Rusty Russell  wrote:

Matt Corallo  writes:

I promise it’s much less work than it sounds like, and avoids having to debug 
these things based on logs, which is a huge pain :). Definitely less work than 
a new state machine:).


But the entire point of this proposal is that it's a subset of the
existing state machine?


Compared to today, it's a good chunk of additional state machine logic to enforce when a message can or can not be sent, 
and additional logic for when we can (or can not) flush any pending changes buffer(s).




The only "twist" is that if it's your turn and you receive an update,
you can either reply with a "yield" message, or ignore it.


How do you handle the "no changes to make" case - do you send yields back and forth every N ms all day long or is there 
some protocol by which you resolve it when both parties try to claim the turn at once?




Isn’t that pretty similar? Discard one splice proposal deterministically (ok 
that’s new) and the loser has to store their proposal in a holding cell for 
later (which they have to do in turn-based anyway). Logic to check if there’s 
unsettled things in RAA handling is pretty similar to turn-based, and logic to 
reject other messages is the same as shutdown handling today.


Nope, with the simplified protocol you can `update_splice` at any time
instead of your normal update, since both sides are already in sync.


Hmm, I'm somewhat failing to understand why it's that different - you can only update_splice if it's your turn, which is 
about exactly the same amount of additional logic to check turn conditions as just flagging "want to do splice". Either way 
you have the same pending splice buffer.



- MUST use the higher of the two `funding_feerate_perkw` as the feerate for
  the splice.


If we like turn based, why not just deterministically throw out one splice? :)


Because while I am going to implement turn-based, I'm not sure if anyone
else is.  I guess we'll see?


My point was more that it's similar in logic - if you throw out the splice deterministically and just keep it in some 
"pending splice" buffer on the sending side, you've just done basically what you'd do to implement turns, while keeping 
the non-turn splice protocol a bit easier :).


Matt


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-27 Thread Rusty Russell
OK, draft is up:

https://github.com/lightningnetwork/lightning-rfc/pull/867

I have to actually implement it now (though the real win comes from
making it compulsory, but that's a fair way away).

Notably, I added the requirement that update_fee messages be on their
own.  This means there's no debate on the state of the channel when
this is being applied.
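
Purely as an illustration of the "update_fee stands alone" idea (function
and message names here are placeholders, not draft text):

    def may_add_to_turn(batch, new_msg):
        """Sketch only: if update_fee must be the sole update in its batch,
        it may not join a non-empty batch and nothing may join it."""
        if new_msg == "update_fee":
            return len(batch) == 0
        return "update_fee" not in batch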

Cheers,
Rusty.

Rusty Russell  writes:

> Matt Corallo  writes:
>>> On Apr 24, 2021, at 01:56, Rusty Russell  wrote:
>>> 
>>> Matt Corallo  writes:
 Somehow I missed this thread, but I did note in a previous meeting - these 
 issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
 tests for precisely these types of message-non-delivery-and-resending 
 production desync bugs for several years. When it initially landed it 
 forced several rewrites of parts of the state machine, but quickly 
 exhausted the bug fruit (though catches other classes of bugs occasionally 
 as well). The state machine here is really not that big - while I agree 
 simplifying it where possible is nice, ripping things out to replace them 
 with fresh code (which would need similar testing) is probably not the 
 most obvious decrease in complexity.
>>> 
>>> It's historically had more bugs than anything else in the protocol.  We
>>> literally found another one in feerate negotiation since the last
>>> c-lightning release :(
>>> 
>>> I'd rather not have bugs than try to catch them all.
>>
>> I promise it’s much less work than it sounds like, and avoids having to 
>> debug these things based on logs, which is a huge pain :). Definitely less 
>> work than a new state machine:).
>
> But the entire point of this proposal is that it's a subset of the
> existing state machine?
>
>>> You could propose a splice (or update to anchors, or whatever) any time
>>> when it's your turn, as long as you haven't proposed any other updates.
>>> That's simple!
>>
>> I presume you’d need to take it a few steps further - if the last
>> message received required a response CS/RAA, you must still wait until
>> things have settled down. I guess it also depends on the exact
>> semantics of a “turn based” message protocol - if you received some
>> updates and a signature, are you allowed to add more updates after you
>> send your CS/RAA (then you have a good chunk of today’s complexity),
>> or do you have to wait until they send you back their last RAA (in
>> which case presumably they aren’t allowed to include anything else as
>> then they’d be able to monopolize update windows). In the first case
>> you still have the same issues of today, in the second less so, but
>> you’re doing a similar “ok, just pause updates and wait for things to
>> settle “, I think.
>
> Yes, as the original proposal stated: you propose changes, send
> commitment_signed, receive revoke_and_ack and commitment_signed, then
> send revoke_and_ack.  Then both sides are in sync, and the other side
> has a turn.
>
> The only "twist" is that if it's your turn and you receive an update,
> you can either reply with a "yield" message, or ignore it.
>
>>> Instead, *both* sides have to send a splice message to synchronize, and
>>> they can only do so once all in-flight changes have cleared. You have
>>> to resolve simultaneous splice attempts (we use "highest feerate"
>>> tiebreak by node_id), and keep track of this stage while you clear
>>> in-flight changes.
>>
>> Isn’t that pretty similar? Discard one splice proposal deterministically (ok 
>> that’s new) and the loser has to store their proposal in a holding cell for 
>> later (which they have to do in turn-based anyway). Logic to check if 
>> there’s unsettled things in RAA handling is pretty similar to turn-based, 
>> and logic to reject other messages is the same as shutdown handling today.
>
> Nope, with the simplified protocol you can `update_splice` at any time
> instead of your normal update, since both sides are already in sync.
>
>>> Here's the subset of requirements from the draft which relate to this:
>>> 
>>> The sender:
>>> - MUST NOT send another splice message while a splice is being negotiated.
>>> - MUST NOT send a splice message after sending uncommitted changes.
>>> - MUST NOT send other channel updates until splice negotiation has 
>>> completed.
>>> 
>>> The receiver:
>>> - MUST respond with a `splice` message of its own if it has not already.
>>> - MUST NOT reply with `splice` until all commitment updates are resolved by 
>>> both peers.
>>
>> Probably use “committed” not “resolved”. “Resolved” sounds like “no pending 
>> HTLCs left”.
>
> Yes, and in fact this protocol was flawed and had to be revised, as it
> did not actually mean both sides were committed in the case of
> simultaneous splice proposals :(
>
>>> - MUST use the higher of the two `funding_feerate_perkw` as the feerate for
>>>  the splice.
>>
>> If we like turn based, why not just deterministically throw out one splice? :)
>
Because while I am going to implement turn-based, I'm not sure if anyone
else is.  I guess we'll see?

Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-26 Thread Rusty Russell
Matt Corallo  writes:
>> On Apr 24, 2021, at 01:56, Rusty Russell  wrote:
>> 
>> Matt Corallo  writes:
>>> Somehow I missed this thread, but I did note in a previous meeting - these 
>>> issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
>>> tests for precisely these types of message-non-delivery-and-resending 
>>> production desync bugs for several years. When it initially landed it 
>>> forced several rewrites of parts of the state machine, but quickly 
>>> exhausted the bug fruit (though catches other classes of bugs occasionally 
>>> as well). The state machine here is really not that big - while I agree 
>>> simplifying it where possible is nice, ripping things out to replace them 
>>> with fresh code (which would need similar testing) is probably not the most 
>>> obvious decrease in complexity.
>> 
>> It's historically had more bugs than anything else in the protocol.  We
>> literally found another one in feerate negotiation since the last
>> c-lightning release :(
>> 
>> I'd rather not have bugs than try to catch them all.
>
> I promise it’s much less work than it sounds like, and avoids having to debug 
> these things based on logs, which is a huge pain :). Definitely less work 
> than a new state machine:).

But the entire point of this proposal is that it's a subset of the
existing state machine?

>> You could propose a splice (or update to anchors, or whatever) any time
>> when it's your turn, as long as you haven't proposed any other updates.
>> That's simple!
>
> I presume you’d need to take it a few steps further - if the last
> message received required a response CS/RAA, you must still wait until
> things have settled down. I guess it also depends on the exact
> semantics of a “turn based” message protocol - if you received some
> updates and a signature, are you allowed to add more updates after you
> send your CS/RAA (then you have a good chunk of today’s complexity),
> or do you have to wait until they send you back their last RAA (in
> which case presumably they aren’t allowed to include anything else as
> then they’d be able to monopolize update windows). In the first case
> you still have the same issues of today, in the second less so, but
> you’re doing a similar “ok, just pause updates and wait for things to
> settle “, I think.

Yes, as the original proposal stated: you propose changes, send
commitment_signed, receive revoke_and_ack and commitment_signed, then
send revoke_and_ack.  Then both sides are in sync, and the other side
has a turn.

The only "twist" is that if it's your turn and you receive an update,
you can either reply with a "yield" message, or ignore it.
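
Spelled out, a single turn then looks like this (a sketch of the message
ordering only; `conn` is a hypothetical connection object and the
messages are plain strings):

    def run_one_turn(conn, updates):
        """Sketch of one turn: the turn-holder proposes its changes, signs,
        receives the peer's revoke_and_ack plus commitment_signed, and
        finishes with its own revoke_and_ack; then the turn passes over."""
        for upd in updates:
            conn.send(upd)               # e.g. "update_add_htlc", "update_fulfill_htlc"
        conn.send("commitment_signed")

        assert conn.recv() == "revoke_and_ack"
        assert conn.recv() == "commitment_signed"

        conn.send("revoke_and_ack")      # both sides now in sync; peer's turn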

>> Instead, *both* sides have to send a splice message to synchronize, and
>> they can only do so once all in-flight changes have cleared. You have
>> to resolve simultaneous splice attempts (we use "highest feerate"
>> tiebreak by node_id), and keep track of this stage while you clear
>> in-flight changes.
>
> Isn’t that pretty similar? Discard one splice proposal deterministically (ok 
> that’s new) and the loser has to store their proposal in a holding cell for 
> later (which they have to do in turn-based anyway). Logic to check if there’s 
> unsettled things in RAA handling is pretty similar to turn-based, and logic 
> to reject other messages is the same as shutdown handling today.

Nope, with the simplified protocol you can `update_splice` at any time
instead of your normal update, since both sides are already in sync.

>> Here's the subset of requirements from the draft which relate to this:
>> 
>> The sender:
>> - MUST NOT send another splice message while a splice is being negotiated.
>> - MUST NOT send a splice message after sending uncommitted changes.
>> - MUST NOT send other channel updates until splice negotiation has completed.
>> 
>> The receiver:
>> - MUST respond with a `splice` message of its own if it has not already.
>> - MUST NOT reply with `splice` until all commitment updates are resolved by 
>> both peers.
>
> Probably use “committed” not “resolved”. “Resolved” sounds like “no pending 
> HTLCs left”.

Yes, and in fact this protocol was flawed and had to be revised, as it
did not actually mean both sides were committed in the case of
simultaneous splice proposals :(

>> - MUST use the higher of the two `funding_feerate_perkw` as the feerate for
>>  the splice.
>
> If we like turn based, why not just deterministically throw out one splice? :)

Because while I am going to implement turn-based, I'm not sure if anyone
else is.  I guess we'll see?

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-24 Thread Matt Corallo


> On Apr 24, 2021, at 01:56, Rusty Russell  wrote:
> 
> Matt Corallo  writes:
>> Somehow I missed this thread, but I did note in a previous meeting - these 
>> issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
>> tests for precisely these types of message-non-delivery-and-resending 
>> production desync bugs for several years. When it initially landed it forced 
>> several rewrites of parts of the state machine, but quickly exhausted the 
>> bug fruit (though catches other classes of bugs occasionally as well). The 
>> state machine here is really not that big - while I agree simplifying it 
>> where possible is nice, ripping things out to replace them with fresh code 
>> (which would need similar testing) is probably not the most obvious decrease 
>> in complexity.
> 
> It's historically had more bugs than anything else in the protocol.  We
> literally found another one in feerate negotiation since the last
> c-lightning release :(
> 
> I'd rather not have bugs than try to catch them all.

I promise it’s much less work than it sounds like, and avoids having to debug 
these things based on logs, which is a huge pain :). Definitely less work than 
a new state machine:).

> You could propose a splice (or update to anchors, or whatever) any time
> when it's your turn, as long as you haven't proposed any other updates.
> That's simple!

I presume you’d need to take it a few steps further - if the last message 
received required a response CS/RAA, you must still wait until things have 
settled down. I guess it also depends on the exact semantics of a “turn based” 
message protocol - if you received some updates and a signature, are you 
allowed to add more updates after you send your CS/RAA (then you have a good 
chunk of today’s complexity), or do you have to wait until they send you back 
their last RAA (in which case presumably they aren’t allowed to include 
anything else as then they’d be able to monopolize update windows). In the 
first case you still have the same issues of today, in the second less so, but 
you’re doing a similar “ok, just pause updates and wait for things to settle “, 
I think.

> Instead, *both* sides have to send a splice message to synchronize, and
> they can only do so once all in-flight changes have cleared. You have
> to resolve simultaneous splice attempts (we use "highest feerate"
> tiebreak by node_id), and keep track of this stage while you clear
> in-flight changes.

Isn’t that pretty similar? Discard one splice proposal deterministically (ok 
that’s new) and the loser has to store their proposal in a holding cell for 
later (which they have to do in turn-based anyway). Logic to check if there’s 
unsettled things in RAA handling is pretty similar to turn-based, and logic to 
reject other messages is the same as shutdown handling today.

> Here's the subset of requirements from the draft which relate to this:
> 
> The sender:
> - MUST NOT send another splice message while a splice is being negotiated.
> - MUST NOT send a splice message after sending uncommitted changes.
> - MUST NOT send other channel updates until splice negotiation has completed.
> 
> The receiver:
> - MUST respond with a `splice` message of its own if it has not already.
> - MUST NOT reply with `splice` until all commitment updates are resolved by 
> both peers.

Probably use “committed” not “resolved”. “Resolved” sounds like “no pending 
HTLCs left”.

> - MUST use the higher of the two `funding_feerate_perkw` as the feerate for
>  the splice.

If we like turn based, why not just deterministically throw out one splice? :)

> - MUST NOT send other channel updates until splice negotiation has completed.
> 
> Similar requirements exist for other major channel changes.
> 
> Cheers,
> Rusty.
> 


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-23 Thread Rusty Russell
Matt Corallo  writes:
> Somehow I missed this thread, but I did note in a previous meeting - these 
> issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
> tests for precisely these types of message-non-delivery-and-resending 
> production desync bugs for several years. When it initially landed it forced 
> several rewrites of parts of the state machine, but quickly exhausted the bug 
> fruit (though catches other classes of bugs occasionally as well). The state 
> machine here is really not that big - while I agree simplifying it where 
> possible is nice, ripping things out to replace them with fresh code (which 
> would need similar testing) is probably not the most obvious decrease in 
> complexity.

It's historically had more bugs than anything else in the protocol.  We
literally found another one in feerate negotiation since the last
c-lightning release :(

I'd rather not have bugs than try to catch them all.

>> I've been revisiting this because it makes things like splicing easier:
>> the current draft requires stopping changes while splicing is being
>> negotiated, which is not entirely trivial.  With the simplified method,
>> you don't have to wait at all.
>
> Hmm, what’s nontrivial about this? How much more complicated is this than 
> having an alternation to updates and pausing HTLC updates for a cycle or two 
> while splicing is negotiated (I assume it would still need a similar 
> requirement, as otherwise you have the same complexity)? We already have a 
> similar update-stopping process for shutdown, though of course it doesn’t 
> include restarting.

You could propose a splice (or update to anchors, or whatever) any time
when it's your turn, as long as you haven't proposed any other updates.
That's simple!

Instead, *both* sides have to send a splice message to synchronize, and
they can only do so once all in-flight changes have cleared.  You have
to resolve simultaneous splice attempts (we use "highest feerate"
tiebreak by node_id), and keep track of this stage while you clear
in-flight changes.
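
For concreteness, the tiebreak could look roughly like this
(`funding_feerate_perkw` is from the draft; the rest, including which
node_id wins a feerate tie, is my own illustration):

    def surviving_splice(ours, theirs):
        """Pick which of two simultaneous splice proposals survives: the one
        with the higher funding_feerate_perkw, falling back to comparing the
        proposers' node_ids on a tie (sketch; the draft text is authoritative)."""
        if ours["funding_feerate_perkw"] != theirs["funding_feerate_perkw"]:
            return max(ours, theirs, key=lambda s: s["funding_feerate_perkw"])
        # Feerate tie: break it deterministically on the proposer's node_id
        # (direction of the comparison assumed here).
        return max(ours, theirs, key=lambda s: s["node_id"])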

Here's the subset of requirements from the draft which relate to this:

The sender:
- MUST NOT send another splice message while a splice is being negotiated.
- MUST NOT send a splice message after sending uncommitted changes.
- MUST NOT send other channel updates until splice negotiation has completed.

The receiver:
- MUST respond with a `splice` message of its own if it has not already.
- MUST NOT reply with `splice` until all commitment updates are resolved by 
both peers.
- MUST use the higher of the two `funding_feerate_perkw` as the feerate for
  the splice.
- MUST NOT send other channel updates until splice negotiation has completed.

Similar requirements exist for other major channel changes.

Cheers,
Rusty.



Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-23 Thread Matt Corallo


> On Apr 20, 2021, at 17:19, Rusty Russell  wrote:
> 
> After consideration, I prefer alternation.  It fits better with the
> existing implementations, and it is more optimal than reflection for
> optimized implementations.
> 
> In particular, you have a rule that says you can send updates and
> commitment_signed when it's not your turn, and the leader either
> responds with a "giving way" message, or ignores your changes and sends
> its own.
> 
> A simple implementation *never* sends a commitment_signed until it
> receives "giving way" so it doesn't have to deal with orphaned
> commitments.  A more complex implementation sends opportunistically and
> then has to remember that it's committed if it loses the race.  Such an
> implementation is only slower than the current system if that race
> happens.

Somehow I missed this thread, but I did note in a previous meeting - these 
issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
tests for precisely these types of message-non-delivery-and-resending 
production desync bugs for several years. When it initially landed it forced 
several rewrites of parts of the state machine, but quickly exhausted the bug 
fruit (though catches other classes of bugs occasionally as well). The state 
machine here is really not that big - while I agree simplifying it where 
possible is nice, ripping things out to replace them with fresh code (which 
would need similar testing) is probably not the most obvious decrease in 
complexity.

> I've been revisiting this because it makes things like splicing easier:
> the current draft requires stopping changes while splicing is being
> negotiated, which is not entirely trivial.  With the simplified method,
> you don't have to wait at all.

Hmm, what’s nontrivial about this? How much more complicated is this than 
having an alternation to updates and pausing HTLC updates for a cycle or two 
while splicing is negotiated (I assume it would still need a similar 
requirement, as otherwise you have the same complexity)? We already have a 
similar update-stopping process for shutdown, though of course it doesn’t 
include restarting.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-20 Thread Rusty Russell
Christian Decker  writes:
> Rusty Russell  writes:
>>> This is in stark contrast to the leader-based approach, where both
>>> parties can just keep queuing updates without silent times to
>>> transferring the token from one end to the other.
>>
>> You've swayed me, but it needs new wire msgs to indicate "these are
>> your proposals I'm reflecting to you".
>>
>> OTOH they don't need to carry data, so we can probably just have:
>>
>> update_htlcs_ack:
>>* [`channel_id`:`channel_id`]
>>* [`u16`:`num_added`]
>>* [`num_added*u64`:`added`]
>>* [`u16`:`num_removed`]
>>* [`num_removed*u64`:`removed`]
>>
>> update_fee can stay the same.
>>
>> Thoughts?
>
> So this would pretty much be a batch-ack, sent after a whole series of
> changes were proposed to the leader, and referenced by their `htlc_id`,
> correct? This is one optimization step further than what I was thinking,
> but it can work. My proposal would have been to either reflect the whole
> message (nodes need to remember proposals they've sent anyway in case of
> disconnects, so matching incoming changes with the pending ones should
> not be too hard), or send back individual acks, containing the hash of
> the message if we want to save on bytes transferred. Alternatively we
> could also just reference the change by its htlc_id.

[ Following up on an old thread ]

After consideration, I prefer alternation.  It fits better with the
existing implementations, and it is more optimal than reflection for
optimized implementations.

In particular, you have a rule that says you can send updates and
commitment_signed when it's not your turn, and the leader either
responds with a "giving way" message, or ignores your changes and sends
its own.

A simple implementation *never* sends a commitment_signed until it
receives "giving way" so it doesn't have to deal with orphaned
commitments.  A more complex implementation sends opportunistically and
then has to remember that it's committed if it loses the race.  Such an
implementation is only slower than the current system if that race
happens.
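
As a sketch of that trade-off (all names here are illustrative; "giving
way" is the turn-handover message described above):

    class CommitSigPolicy:
        """Sketch: a simple implementation never signs until it holds the
        turn, so it can never orphan a commitment; an opportunistic one may
        sign out of turn but must remember that commitment if it loses the
        race."""

        def __init__(self, opportunistic):
            self.opportunistic = opportunistic
            self.racy_commitment = None   # out-of-turn commitment we must not forget

        def maybe_sign(self, have_turn, sign_commitment):
            if have_turn:
                return sign_commitment()  # safe once "giving way" is received
            if not self.opportunistic:
                return None               # simple impl: just wait for our turn
            # Opportunistic impl: the peer may ignore this and run its own
            # turn, but it could still broadcast what we signed, so keep it.
            self.racy_commitment = sign_commitment()
            return self.racy_commitment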

I've been revisiting this because it makes things like splicing easier:
the current draft requires stopping changes while splicing is being
negotiated, which is not entirely trivial.  With the simplified method,
you don't have to wait at all.

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-20 Thread Rusty Russell
Christian Decker  writes:
>> And you don't get the benefit of the turn-taking approach, which is that
>> you can have a known state for fee changes.  Even if you change it to
>> have opener always the leader, it still has to handle the case where
>> incoming changes are not allowed under the new fee regime (and similar
>> issues for other dynamic updates).
>
> Good point, I hadn't considered that a change from one side might become
> invalid due to a change from the other side. I think however this can only
> affect changes that result in other changes no longer being applicable,
> e.g., changing the number of HTLCs you'll allow on a channel making the
> HTLC we just added and whose update_add is still in flight invalid.

To make dynamic changes in the current system, you need to make them the
same way we make fee changes: first remote, then local (once they ack).

This means you have to handle the cases where this causes the commit
tx to not meet the new restrictions.  It's all possible, it's just
messy.

> I don't think fee changes are impacted here, since the non-leader only
> applies the change to its commitment once it gets back its own change.
> The leader will have inserted your update_add into its stream after the
> fee update, and so you'll first apply the fee update, and then use the
> correct fee to add the HTLC to your commitment, resulting in the same
> state.

Sure, but we still have the (existing) problem where you propose a fee
change you can no longer afford, because the other side is also adding
things.

They can just refuse to reflect the fee in that case, though.

> The remaining edgecases where changes can become invalid if they are in
> flight, can be addressed by bouncing the change through the non-leader,
> telling him that "hey, I'd like to propose this change, if you're good
> with it send it back to me and I'll add it to my stream". This can be
> seen as draining the queue of in-flight changes, however the non-leader
> may pipeline its own changes after it and take the updated parameters
> into consideration. Think of it as a two-phase commit, alerting the peer
> with a proposal, before committing it by adding it to the stream. It
> adds latency (about 1/2RTT over the token-passing approach since we can
> emulate it with the token-passing approach) but these synchronization
> points are rare and not on the critical path when forwarding payments.

You can create a protocol to reject changes, but now we're more complex
than the simply-alternate-leader approach.

> With the leader-based approach, we add 1RTT latency to the updates from
> one side, but the other never has to wait for the token, resulting in
> 1/2RTT per direction as well, since messages are well-balanced.

Good point.

>> Yes, but it alternates because that's optimal for a non-busy channel
>> (since it's usually "Alice adds htlc, Bob completes the htlc").
>
> What's bothering me more about the turn-based approach is that while the
> token is in flight, neither endpoint can make any progress, since the
> one reliquishing the token promised not to say anything and the other
> one hasn't gotten the token yet. This might result in rather a lot of
> dead-air if both sides have a constant stream of changes to add. So we'd
> likely have to add a timeout to defer giving up the token, to counter
> dead-air, further adding delay to the changes from the other end, and
> adding yet another parameter.

I originally allowed optimistically sending commitment_signed.  But it
means there can be more than one commitment tx for any given height (you
have to assume they received the sig and might broadcast it), which
seemed to complicate things.  OTOH this is only true if you choose to do
this.

> This is in stark contrast to the leader-based approach, where both
> parties can just keep queuing updates without silent times to
> transferring the token from one end to the other.

You've swayed me, but it needs new wire msgs to indicate "these are your
proposals I'm reflecting to you".

OTOH they don't need to carry data, so we can probably just have:

update_htlcs_ack:
   * [`channel_id`:`channel_id`]
   * [`u16`:`num_added`]
   * [`num_added*u64`:`added`]
   * [`u16`:`num_removed`]
   * [`num_removed*u64`:`removed`]

update_fee can stay the same.
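
To picture that on the wire, a throwaway encoder might look like the
following.  The type number is a placeholder I made up; only the field
list follows the sketch above, and the big-endian packing just mirrors
the usual BOLT 1 conventions:

    import struct

    UPDATE_HTLCS_ACK_TYPE = 32768   # placeholder type number, not assigned anywhere

    def encode_update_htlcs_ack(channel_id, added, removed):
        """Pack the proposed fields: 32-byte channel_id, a u16 count plus u64
        ids for added HTLCs, then the same for removed HTLCs (illustrative)."""
        assert len(channel_id) == 32
        msg = struct.pack(">H", UPDATE_HTLCS_ACK_TYPE) + channel_id
        msg += struct.pack(">H", len(added))
        msg += b"".join(struct.pack(">Q", i) for i in added)
        msg += struct.pack(">H", len(removed))
        msg += b"".join(struct.pack(">Q", i) for i in removed)
        return msg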

Thoughts?
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-15 Thread Christian Decker


> And you don't get the benefit of the turn-taking approach, which is that
> you can have a known state for fee changes.  Even if you change it to
> have opener always the leader, it still has to handle the case where
> incoming changes are not allowed under the new fee regime (and similar
> issues for other dynamic updates).

Good point, I hadn't considered that a change from one side might become
invalid due to a change from the other side. I think however this can only
affect changes that result in other changes no longer being applicable,
e.g., changing the number of HTLCs you'll allow on a channel making the
HTLC we just added and whose update_add is still in flight invalid.

I don't think fee changes are impacted here, since the non-leader only
applies the change to its commitment once it gets back its own change.
The leader will have inserted your update_add into its stream after the
fee update, and so you'll first apply the fee update, and then use the
correct fee to add the HTLC to your commitment, resulting in the same
state.

The remaining edgecases where changes can become invalid if they are in
flight, can be addressed by bouncing the change through the non-leader,
telling him that "hey, I'd like to propose this change, if you're good
with it send it back to me and I'll add it to my stream". This can be
seen as draining the queue of in-flight changes, however the non-leader
may pipeline its own changes after it and take the updated parameters
into consideration. Think of it as a two-phase commit, alerting the peer
with a proposal, before committing it by adding it to the stream. It
adds latency (about 1/2RTT over the token-passing approach since we can
emulate it with the token-passing approach) but these synchronization
points are rare and not on the critical path when forwarding payments.

>> The downside is that we add a constant overhead to one side's
>> operations, but since we pipeline changes, and are mostly synchronous
>> during the signing of the commitment tx today anyway, this comes out to
>> 1 RTT for each commitment.
>
> Yeah, it adds 1RTT to every hop on the network, vs my proposal which
> adds just over 1/2 RTT on average.

Doesn't that assume a change of turns while the HTLC was in-flight?
Adding and resolving an HTLC requires one change coming from either side
of the channel, implying that a turn change must have been performed,
which itself takes 1 RTT. Thus to add and remove an HTLC we add at least
1 RTT for each hop.

With the leader-based approach, we add 1RTT latency to the updates from
one side, but the other never has to wait for the token, resulting in
1/2RTT per direction as well, since messages are well-balanced.

> Yes, but it alternates because that's optimal for a non-busy channel
> (since it's usually "Alice adds htlc, Bob completes the htlc").

What's bothering me more about the turn-based approach is that while the
token is in flight, neither endpoint can make any progress, since the
one relinquishing the token promised not to say anything and the other
one hasn't gotten the token yet. This might result in rather a lot of
dead-air if both sides have a constant stream of changes to add. So we'd
likely have to add a timeout to defer giving up the token, to counter
dead-air, further adding delay to the changes from the other end, and
adding yet another parameter.

This is in stark contrast to the leader-based approach, where both
parties can just keep queuing updates without silent times to
transferring the token from one end to the other.

Cheers,
Christian


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Rusty Russell
Christian Decker  writes:
> I wonder if we should just go the tried-and-tested leader-based
> mechanism:
>
>  1. The node with the lexicographically lower node_id is determined to
> be the leader.
>  2. The leader receives proposals for changes from itself and the peer
> and orders them into a logical sequence of changes
>  3. The leader applies the changes locally and streams them to the peer.
>  4. Either node can initiate a commitment by proposing a `flush` change.
>  5. Upon receiving a `flush` the nodes compute the commitment
> transaction and exchange signatures.
>
> This is similar to your proposal, but does away with turn changes (it's
> always the leader's turn), and therefore reduces the state we need to
> keep track of (and re-negotiate on reconnect).

But now you need to be able to propose two kinds of things,
update-from-you and update-from-me, which is actually harder to
implement.  This is a deeper protocol change.

And you don't get the benefit of the turn-taking approach, which is that
you can have a known state for fee changes.  Even if you change it to
have opener always the leader, it still has to handle the case where
incoming changes are not allowed under the new fee regime (and similar
issues for other dynamic updates).

> The downside is that we add a constant overhead to one side's
> operations, but since we pipeline changes, and are mostly synchronous
> during the signing of the commitment tx today anyway, this comes out to
> 1 RTT for each commitment.

Yeah, it adds 1RTT to every hop on the network, vs my proposal which
adds just over 1/2 RTT on average.

> On the other hand a token-passing approach (which I think is what you
> propose) requires a synchronous token handover whenever the direction
> of the updates changes. This is assuming I didn't misunderstand the turn
> mechanics of your proposal :-)

Yes, but it alternates because that's optimal for a non-busy channel
(since it's usually "Alice adds htlc, Bob completes the htlc").

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Rusty Russell
Bastien TEINTURIER  writes:
> It's a bit tricky to get it right at first, but once you get it right you
> don't need to touch that
> code again and everything runs smoothly. We're pretty close to that state,
> so why would we want to
> start from scratch? Or am I missing something?

Well, if you've implemented a state-based approach then this is simply a
subset of that so it's simple to implement (I believe, I haven't done it
yet!).

But with a synchronous approach like this, we can do dynamic protocol
updates at any time without having a special "stop and drain" step.

For example, you can decrease the number of HTLCs you accept, without
worrying about the case where there are HTLCs being added right now.  This
solves a similar outstanding problem with update_fee.

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
To be honest the current protocol can be hard to grasp at first (mostly
because it's hard to reason
about two commit txs being constantly out of sync), but from an
implementation's point of view I'm
not sure your proposals are simpler.

One of the benefits of the current HTLC state machine is that once you
describe your state as a set
of local changes (proposed by you) plus a set of remote changes (proposed
by them), where each of
these is split between proposed, signed and acked updates, the flow is
straightforward to implement
and deterministic.

The only tricky part (where we've seen recurring compatibility issues) is
what happens on
reconnections. But it seems to me that the only missing requirement in the
spec is on the order of
messages sent, and more specifically that if you are supposed to send a
`revoke_and_ack`, you must
send that first (or at least before sending any `commit_sig`). Adding test
scenarios in the spec
could help implementers get this right.

It's a bit tricky to get it right at first, but once you get it right you
don't need to touch that
code again and everything runs smoothly. We're pretty close to that state,
so why would we want to
start from scratch? Or am I missing something?

Cheers,
Bastien

On Tue, Oct 13, 2020 at 13:58, Christian Decker
wrote:

> I wonder if we should just go the tried-and-tested leader-based
> mechanism:
>
>  1. The node with the lexicographically lower node_id is determined to
> be the leader.
>  2. The leader receives proposals for changes from itself and the peer
> and orders them into a logical sequence of changes
>  3. The leader applies the changes locally and streams them to the peer.
>  4. Either node can initiate a commitment by proposing a `flush` change.
>  5. Upon receiving a `flush` the nodes compute the commitment
> transaction and exchange signatures.
>
> This is similar to your proposal, but does away with turn changes (it's
> always the leader's turn), and therefore reduces the state we need to
> keep track of (and re-negotiate on reconnect).
>
> The downside is that we add a constant overhead to one side's
> operations, but since we pipeline changes, and are mostly synchronous
> during the signing of the commitment tx today anyway, this comes out to
> 1 RTT for each commitment.
>
> On the other hand a token-passing approach (which I think is what you
> propose) require a synchronous token handover whenever a the direction
> of the updates changes. This is assuming I didn't misunderstand the turn
> mechanics of your proposal :-)
>
> Cheers,
> Christian
>
> Rusty Russell  writes:
> > Hi all,
> >
> > Our HTLC state machine is optimal, but complex[1]; the Lightning
> > Labs team recently did some excellent work finding another place the spec
> > is insufficient[2].  Also, the suggestion for more dynamic changes makes it
> > more difficult, usually requiring forced quiescence.
> >
> > The following protocol returns to my earlier thoughts, with cost of
> > latency in some cases.
> >
> > 1. The protocol is half-duplex, with each side taking turns; opener first.
> > 2. It's still the same form, but it's always one-direction so both sides
> >stay in sync.
> > update+-> commitsig-> <-revocation <-commitsig revocation->
> > 3. A new message pair "turn_request" and "turn_reply" let you request
> >when it's not your turn.
> > 4. If you get an update in reply to your turn_request, you lost the race
> >and have to defer your own updates until after peer is finished.
> > 5. On reconnect, you send two flags: send-in-progress (if you have
> >sent the initial commitsig but not the final revocation) and
> >receive-in-progress (if you have received the initial commitsig
> >but not received the final revocation).  If either is set,
> >the sender (as indicated by the flags) retransmits the entire
> >sequence.
> >Otherwise, (arbitrarily) opener goes first again.
> >
> > Pros:
> > 1. Way simpler.  There is only ever one pair of commitment txs for any
> >given commitment index.
> > 2. Fee changes are now deterministic.  No worrying about the case where
> >the peer's changes are also in flight.
> > 3. Dynamic changes can probably happen more simply, since we always
> >negotiate both sides at once.
> >
> > Cons:
> > 1. If it's not your turn, it adds 1 RTT latency.
> >
> > Unchanged:
> > 1. Database accesses are unchanged; you need to commit when you send or
> >receive a commitsig.
> > 2. You can use the same state machine as before, but one day (when
> >this would be compulsory) you'll be able to significantly simplify;
> >you'll need to record the index at which HTLCs were changed
> >(added/removed) in case peer wants you to rexmit though.
> >
> > Cheers,
> > Rusty.
> >
> > [1] This is my fault; I was persuaded early on that optimality was more
> > important than simplicity in a classic nerd-snipe.
> > [2] https://github.com/lightningnetwork/lightning-rfc/issues/794

Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-13 Thread Christian Decker
I wonder if we should just go the tried-and-tested leader-based
mechanism:

 1. The node with the lexicographically lower node_id is determined to
be the leader.
 2. The leader receives proposals for changes from itself and the peer
and orders them into a logical sequence of changes
 3. The leader applies the changes locally and streams them to the peer.
 4. Either node can initiate a commitment by proposing a `flush` change.
 5. Upon receiving a `flush` the nodes compute the commitment
transaction and exchange signatures.

This is similar to your proposal, but does away with turn changes (it's
always the leader's turn), and therefore reduces the state we need to
keep track of (and re-negotiate on reconnect).
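
A rough sketch of steps 2-4, ignoring signatures entirely (class and
method names are mine, not part of the proposal):

    class Leader:
        """Sketch of the leader-based scheme: the leader merges its own
        changes and the peer's proposals into one ordered stream and echoes
        it back, so both sides apply changes in the same order.  Commitment
        signing on `flush` (step 5) is elided."""

        def __init__(self, send_to_peer):
            self.stream = []                 # the single ordered change log
            self.send_to_peer = send_to_peer

        def propose(self, change):
            # Called both for the leader's own changes and for proposals
            # received from the peer (steps 2 and 3).
            self.stream.append(change)
            self.send_to_peer(change)
            if change == "flush":            # step 4: either side may propose this
                self.compute_commitment()    # step 5: build the tx, exchange sigs

        def compute_commitment(self):
            pass                             # elided in this sketch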

The downside is that we add a constant overhead to one side's
operations, but since we pipeline changes, and are mostly synchronous
during the signing of the commitment tx today anyway, this comes out to
1 RTT for each commitment.

On the other hand a token-passing approach (which I think is what you
propose) requires a synchronous token handover whenever the direction
of the updates changes. This is assuming I didn't misunderstand the turn
mechanics of your proposal :-)

Cheers,
Christian

Rusty Russell  writes:
> Hi all,
>
> Our HTLC state machine is optimal, but complex[1]; the Lightning
> Labs team recently did some excellent work finding another place the spec
> is insufficient[2].  Also, the suggestion for more dynamic changes makes it
> more difficult, usually requiring forced quiescence.
>
> The following protocol returns to my earlier thoughts, with cost of
> latency in some cases.
>
> 1. The protocol is half-duplex, with each side taking turns; opener first.
> 2. It's still the same form, but it's always one-direction so both sides
>stay in sync.
> update+-> commitsig-> <-revocation <-commitsig revocation->
> 3. A new message pair "turn_request" and "turn_reply" let you request
>when it's not your turn.
> 4. If you get an update in reply to your turn_request, you lost the race
>and have to defer your own updates until after peer is finished.
> 5. On reconnect, you send two flags: send-in-progress (if you have
>sent the initial commitsig but not the final revocation) and
>receive-in-progress (if you have received the initial commitsig
> >but not received the final revocation).  If either is set,
>the sender (as indicated by the flags) retransmits the entire
>sequence.
>Otherwise, (arbitrarily) opener goes first again.
>
> Pros:
> 1. Way simpler.  There is only ever one pair of commitment txs for any
>given commitment index.
> 2. Fee changes are now deterministic.  No worrying about the case where
>the peer's changes are also in flight.
> 3. Dynamic changes can probably happen more simply, since we always
>negotiate both sides at once.
>
> Cons:
> 1. If it's not your turn, it adds 1 RTT latency.
>
> Unchanged:
> 1. Database accesses are unchanged; you need to commit when you send or
>receive a commitsig.
> 2. You can use the same state machine as before, but one day (when
> >this would be compulsory) you'll be able to significantly simplify;
>you'll need to record the index at which HTLCs were changed
>(added/removed) in case peer wants you to rexmit though.
>
> Cheers,
> Rusty.
>
> [1] This is my fault; I was persuaded early on that optimality was more
> important than simplicity in a classic nerd-snipe.
> [2] https://github.com/lightningnetwork/lightning-rfc/issues/794


[Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-12 Thread Rusty Russell
Hi all,

Our HTLC state machine is optimal, but complex[1]; the Lightning
Labs team recently did some excellent work finding another place the spec
is insufficient[2].  Also, the suggestion for more dynamic changes makes it
more difficult, usually requiring forced quiescence.

The following protocol returns to my earlier thoughts, with cost of
latency in some cases.

1. The protocol is half-duplex, with each side taking turns; opener first.
2. It's still the same form, but it's always one-direction so both sides
   stay in sync.
   update+    ->
   commitsig  ->
              <- revocation
              <- commitsig
   revocation ->
3. A new message pair "turn_request" and "turn_reply" let you request
   when it's not your turn.
4. If you get an update in reply to your turn_request, you lost the race
   and have to defer your own updates until after peer is finished.
5. On reconnect, you send two flags: send-in-progress (if you have
   sent the initial commitsig but not the final revocation) and
   receive-in-progress (if you have received the initial commitsig
   but not received the final revocation).  If either is set,
   the sender (as indicated by the flags) retransmits the entire
   sequence.
   Otherwise, (arbitrarily) opener goes first again.
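
A rough sketch of that reconnect rule (flag and function names are
illustrative, and "the sender, as indicated by the flags" is my reading
of point 5):

    def reconnect_action(my_send_in_progress, my_receive_in_progress,
                         peer_send_in_progress, peer_receive_in_progress,
                         i_am_opener):
        """Sketch of point 5: if either side was mid-sequence, the side that
        was sending that sequence retransmits it in full; otherwise the
        opener simply takes the first turn again."""
        if my_send_in_progress or peer_receive_in_progress:
            return "retransmit our entire update/commitsig/revocation sequence"
        if peer_send_in_progress or my_receive_in_progress:
            return "expect the peer to retransmit its entire sequence"
        return "take the first turn" if i_am_opener else "wait for the opener's turn"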

Pros:
1. Way simpler.  There is only ever one pair of commitment txs for any
   given commitment index.
2. Fee changes are now deterministic.  No worrying about the case where
   the peer's changes are also in flight.
3. Dynamic changes can probably happen more simply, since we always
   negotiate both sides at once.

Cons:
1. If it's not your turn, it adds 1 RTT latency.

Unchanged:
1. Database accesses are unchanged; you need to commit when you send or
   receive a commitsig.
2. You can use the same state machine as before, but one day (when
   this would be compulsory) you'll be able to significantly simplify;
   you'll need to record the index at which HTLCs were changed
   (added/removed) in case peer wants you to rexmit though.

Cheers,
Rusty.

[1] This is my fault; I was persuaded early on that optimality was more
important than simplicity in a classic nerd-snipe.
[2] https://github.com/lightningnetwork/lightning-rfc/issues/794