Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-14 Thread Rusty Russell
Joost Jager  writes:
>>
>> > A crucial thing is that these hold fees don't need to be symmetric. A new
>> > node for example that opens a channel to a well-known, established
>> routing
>> > node will be forced to pay a hold fee, but won't see any traffic coming
>> in
>> > anymore if it announces a hold fee itself. Nodes will need to build a
>> > reputation before they're able to command hold fees. Similarly, routing
>> > nodes that have a strong relation may decide to not charge hold fees to
>> > each other at all.
>>
>> I can still establish channels to various low-reputation nodes, and then
>> use them to grief a high-reputation node.  Not only do I get to jam up
>> the high-reputation channels, as a bonus I get the low-reputation nodes
>> to pay for it!
>
> So you're saying:
>
> ATTACKER --(no hold fee)--> LOW-REP --(hold fee)--> HIGH-REP
>
> If I were LOW-REP, I'd still charge an unknown node a hold fee. I would
> only waive the hold fee for high-reputation nodes. In that case, the
> attacker is still paying for the attack. I may be forced to take a small
> loss on the difference, but at least the larger part of the pain is felt by
> the attacker. The assumption is that this is sufficient to deter the
> attacker from even trying.

No, because HIGH-REP == ATTACKER and LOW-REP pays.

> I guess your concern is with trying to become a routing node? If nobody
> knows you, you'll be forced to pay hold fees but can't attract traffic if
> you charge hold fees yourself. That indeed means that you'll need to be
> selective with whom you accept htlcs from. Put limits in place to control
> the expenditure. Successful forwards will earn a routing fee which could
> compensate for the loss in hold fees too.

"Be selective with whom you accept HTLCs from"... it always comes back
to incentives to de-anonymize the network :(

> I think this mechanism can create interesting dynamics on the network and
> eventually reach an equilibrium that is still healthy in terms of
> decentralization and privacy.

I suspect that if you try to create a set of actual rules for nodes
using actual numbers, you'll find you enter a complexity spiral
as you try to play whack-a-mole on all the different ways you can
exploit it.

(This is what happened every time I tried to design a peer-penalty
system).

Cheers,
Rusty.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Rusty Russell
Christian Decker  writes:
> I wonder if we should just go the tried-and-tested leader-based
> mechanism:
>
>  1. The node with the lexicographically lower node_id is determined to
> be the leader.
>  2. The leader receives proposals for changes from itself and the peer
> and orders them into a logical sequence of changes
>  3. The leader applies the changes locally and streams them to the peer.
>  4. Either node can initiate a commitment by proposing a `flush` change.
>  5. Upon receiving a `flush` the nodes compute the commitment
> transaction and exchange signatures.
>
> This is similar to your proposal, but does away with turn changes (it's
> always the leader's turn), and therefore reduces the state we need to
> keep track of (and re-negotiate on reconnect).
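The leader-based flow quoted above could be sketched roughly as follows. This is a minimal illustration only; the names (`Change`, `Leader`, `flush`) are mine and not from any spec or proposal.

```python
# Illustrative sketch of the quoted leader-based mechanism: the leader
# serializes proposals from both sides into one ordered stream, applies
# them locally, and streams them to the peer; `flush` marks a commitment.
from dataclasses import dataclass, field

@dataclass
class Change:
    origin: str   # "leader" or "peer"
    desc: str     # e.g. "add_htlc", "fulfill_htlc"

@dataclass
class Leader:
    log: list = field(default_factory=list)     # single logical sequence
    stream: list = field(default_factory=list)  # what gets sent to the peer

    def propose(self, change: Change):
        # Proposals from either side are ordered by the leader alone,
        # so both nodes always agree on the sequence of changes.
        self.log.append(change)
        self.stream.append(change)

    def flush(self):
        # On `flush`, both sides would compute the commitment tx over
        # the log so far and exchange signatures.
        return len(self.log)

leader = Leader()
leader.propose(Change("peer", "add_htlc"))
leader.propose(Change("leader", "fulfill_htlc"))
print(leader.flush())  # -> 2
```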

But now you need to be able to propose two kinds of things, which is
actually harder to implement; update-from-you and update-from-me.  This
is a deeper protocol change.

And you don't get the benefit of the turn-taking approach, which is that
you can have a known state for fee changes.  Even if you change it so
the opener is always the leader, it still has to handle the case where
incoming changes are not allowed under the new fee regime (and similar
issues for other dynamic updates).

> The downside is that we add a constant overhead to one side's
> operations, but since we pipeline changes, and are mostly synchronous
> during the signing of the commitment tx today anyway, this comes out to
> 1 RTT for each commitment.

Yeah, it adds 1RTT to every hop on the network, vs my proposal which
adds just over 1/2 RTT on average.

> On the other hand a token-passing approach (which I think is what you
> propose) requires a synchronous token handover whenever the direction
> of the updates changes. This is assuming I didn't misunderstand the turn
> mechanics of your proposal :-)

Yes, but it alternates because that's optimal for a non-busy channel
(since it's usually "Alice adds htlc, Bob completes the htlc").

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Rusty Russell
Bastien TEINTURIER  writes:
> It's a bit tricky to get it right at first, but once you get it right you
> don't need to touch that
> code again and everything runs smoothly. We're pretty close to that state,
> so why would we want to
> start from scratch? Or am I missing something?

Well, if you've implemented a state-based approach then this is simply a
subset of that so it's simple to implement (I believe, I haven't done it
yet!).

But with a synchronous approach like this, we can do dynamic protocol
updates at any time without having a special "stop and drain" step.

For example, you can decrease the number of HTLCs you accept, without
worrying about the case where HTLCs are being added right now.  This
solves a similar outstanding problem with update_fee.
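To illustrate the point above with a toy sketch (class and field names are mine, not from the spec): because updates are serialized, a limit change takes effect at a known point in the stream, so it can never race with an HTLC that is "in flight".

```python
# Toy sketch: with a half-duplex, serialized update stream, lowering
# the maximum number of accepted HTLCs applies at a known point, so we
# only ever refuse *new* adds, never ones already in flight.
class Channel:
    def __init__(self, max_accepted_htlcs):
        self.max_accepted_htlcs = max_accepted_htlcs
        self.pending = 0

    def add_htlc(self):
        if self.pending >= self.max_accepted_htlcs:
            raise ValueError("too many HTLCs")
        self.pending += 1

    def lower_limit(self, new_limit):
        # We know exactly how many HTLCs are pending when this applies.
        self.max_accepted_htlcs = new_limit

ch = Channel(max_accepted_htlcs=30)
ch.add_htlc()
ch.lower_limit(1)   # the existing HTLC stays; further adds are refused
```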

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
To be honest the current protocol can be hard to grasp at first (mostly
because it's hard to reason
about two commit txs being constantly out of sync), but from an
implementation's point of view I'm
not sure your proposals are simpler.

One of the benefits of the current HTLC state machine is that once you
describe your state as a set
of local changes (proposed by you) plus a set of remote changes (proposed
by them), where each of
these is split between proposed, signed and acked updates, the flow is
straightforward to implement
and deterministic.

The only tricky part (where we've seen recurring compatibility issues) is
what happens on
reconnections. But it seems to me that the only missing requirement in the
spec is on the order of
messages sent, and more specifically that if you are supposed to send a
`revoke_and_ack`, you must
send that first (or at least before sending any `commit_sig`). Adding test
scenarios in the spec
could help implementers get this right.
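The ordering requirement described above can be stated as a tiny helper (illustrative only; real implementations retransmit from persisted state, not a message list):

```python
# Sketch of the reconnection ordering rule: if a `revoke_and_ack` is
# owed, it must be sent before any `commit_sig` we also owe.
def order_retransmissions(pending):
    revokes = [m for m in pending if m == "revoke_and_ack"]
    others = [m for m in pending if m != "revoke_and_ack"]
    return revokes + others

print(order_retransmissions(["commit_sig", "revoke_and_ack"]))
# -> ['revoke_and_ack', 'commit_sig']
```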

It's a bit tricky to get it right at first, but once you get it right you
don't need to touch that
code again and everything runs smoothly. We're pretty close to that state,
so why would we want to
start from scratch? Or am I missing something?

Cheers,
Bastien

Le mar. 13 oct. 2020 à 13:58, Christian Decker 
a écrit :

> I wonder if we should just go the tried-and-tested leader-based
> mechanism:
>
>  1. The node with the lexicographically lower node_id is determined to
> be the leader.
>  2. The leader receives proposals for changes from itself and the peer
> and orders them into a logical sequence of changes
>  3. The leader applies the changes locally and streams them to the peer.
>  4. Either node can initiate a commitment by proposing a `flush` change.
>  5. Upon receiving a `flush` the nodes compute the commitment
> transaction and exchange signatures.
>
> This is similar to your proposal, but does away with turn changes (it's
> always the leader's turn), and therefore reduces the state we need to
> keep track of (and re-negotiate on reconnect).
>
> The downside is that we add a constant overhead to one side's
> operations, but since we pipeline changes, and are mostly synchronous
> during the signing of the commitment tx today anyway, this comes out to
> 1 RTT for each commitment.
>
> On the other hand a token-passing approach (which I think is what you
> > propose) requires a synchronous token handover whenever the direction
> of the updates changes. This is assuming I didn't misunderstand the turn
> mechanics of your proposal :-)
>
> Cheers,
> Christian
>
> Rusty Russell  writes:
> > Hi all,
> >
> > Our HTLC state machine is optimal, but complex[1]; the Lightning
> > Labs team recently did some excellent work finding another place the spec
> > is insufficient[2].  Also, the suggestion for more dynamic changes makes
> > it more difficult, usually requiring forced quiescence.
> >
> > The following protocol returns to my earlier thoughts, at the cost of
> > latency in some cases.
> >
> > 1. The protocol is half-duplex, with each side taking turns; opener
> >    first.
> > 2. It's still the same form, but it's always one-direction so both sides
> >stay in sync.
> >        update+->
> >        commitsig->
> >                   <-revocation
> >                   <-commitsig
> >        revocation->
> > 3. A new message pair "turn_request" and "turn_reply" let you request
> >when it's not your turn.
> > 4. If you get an update in reply to your turn_request, you lost the race
> >and have to defer your own updates until after peer is finished.
> > 5. On reconnect, you send two flags: send-in-progress (if you have
> >sent the initial commitsig but not the final revocation) and
> >receive-in-progress (if you have received the initial commitsig
> >but not received the final revocation).  If either is set,
> >the sender (as indicated by the flags) retransmits the entire
> >sequence.
> >Otherwise, (arbitrarily) opener goes first again.
> >
> > Pros:
> > 1. Way simpler.  There is only ever one pair of commitment txs for any
> >given commitment index.
> > 2. Fee changes are now deterministic.  No worrying about the case where
> >the peer's changes are also in flight.
> > 3. Dynamic changes can probably happen more simply, since we always
> >negotiate both sides at once.
> >
> > Cons:
> > 1. If it's not your turn, it adds 1 RTT latency.
> >
> > Unchanged:
> > 1. Database accesses are unchanged; you need to commit when you send or
> >receive a commitsig.
> > 2. You can use the same state machine as before, but one day (when
> >this would be compulsory) you'll be able to significantly simplify;
> >you'll need to record the index at which HTLCs were changed
> >(added/removed) in case peer wants you to rexmit though.
> >
> > Cheers,
> > Rusty.
> >
> > [1] This is my fault; I was persuaded early on that optimality was more
> > important than simplicity in a classic nerd-snipe.
> > [2] https://github.com/lightningnetwork/lightning-rfc/issues/794
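The reconnect rule in step 5 of the quoted proposal could be sketched like this (flag and function names are illustrative, not wire-format fields):

```python
# Sketch of the quoted reconnection rule: each side reports whether a
# commitment sequence was in flight; if either flag is set, the
# indicated sender retransmits the whole sequence, otherwise the
# opener (arbitrarily) takes the first turn again.
def reconnect_action(send_in_progress, receive_in_progress, we_are_opener):
    if send_in_progress:
        return "we retransmit the entire sequence"
    if receive_in_progress:
        return "peer retransmits the entire sequence"
    return "opener goes first" if we_are_opener else "wait for opener"

print(reconnect_action(False, False, True))  # -> 'opener goes first'
```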

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
Hey laolu,

> I think this fits in nicely with the "parameter re-negotiation" portion of
> my loose Dynamic commitments proposal.


Yes, maybe it's better to not offer two mechanisms and wait for dynamic
commitments to offer that
flexibility.

> Instead, you may
> want to only allow them to utilize say 10% of the available HTLC bandwidth,
> slowly increasing based on successful payments, and drastically
> (multiplicatively) decreasing when you encounter very long lived HTLCs, or
> an excessive number of failures.


Exactly, that's the kind of heuristic I had in mind. Peers need to slowly
build trust before you
give them access to more resources.
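The TCP-like heuristic discussed above can be sketched as additive increase / multiplicative decrease. The constants here (10% starting allowance, halving on failure) are arbitrary assumptions for illustration, not from any proposal.

```python
# Sketch of a "slow start"-style HTLC bandwidth allocator, loosely
# modeled on TCP congestion control: start a new peer at ~10% of the
# negotiated slots, increase additively on successful forwards, and
# decrease multiplicatively on long-held HTLCs or excessive failures.
class PeerBandwidth:
    def __init__(self, max_slots):
        self.max_slots = max_slots
        self.allowed = max(1, max_slots // 10)  # start at ~10%

    def on_success(self):
        # Additive increase, capped at the negotiated maximum.
        self.allowed = min(self.max_slots, self.allowed + 1)

    def on_bad_htlc(self):
        # Multiplicative decrease, never below one slot.
        self.allowed = max(1, self.allowed // 2)

pb = PeerBandwidth(max_slots=100)
pb.on_success()
print(pb.allowed)  # -> 11
pb.on_bad_htlc()
print(pb.allowed)  # -> 5
```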

> This is
> possible to some degree today (by using an implicit value lower than
> the negotiated values), but the implicit route doesn't give the other party
> any information


Agreed, it's easy to implement locally but it's not going to be very nice
to your peer, who has
no way of knowing why you're rejecting HTLCs and may end up closing the
channel because it sees
weird behavior. That's why we need to offer an explicit re-negotiation of
these parameters, let's
keep this use-case in mind when designing dynamic commitments!

Cheers,
Bastien

Le lun. 12 oct. 2020 à 20:59, Olaoluwa Osuntokun  a
écrit :

>
> > I suggest adding tlv records in `commitment_signed` to tell our channel
> > peer that we're changing the values of these fields.
>
> I think this fits in nicely with the "parameter re-negotiation" portion of
> my
> loose Dynamic commitments proposal. Note that in that paradigm, something
> like this would be a distinct message, and also only be allowed with a
> "clean commitment" (as otherwise what if I reduce the number of slots to a
> value that is lower than the number of active slots?). With this, both
> sides
> would be able to propose/accept/deny updates to the flow control parameters
> that can be used to either increase the security of a channel, or implement
> a sort of "slow start" protocol for any new peers that connect to you.
>
> Similar to congestion window expansion/contraction in TCP, when a new peer
> connects to you, you likely don't want to allow them to be able to consume
> all the newly allocated bandwidth in an outgoing direction. Instead, you
> may
> want to only allow them to utilize say 10% of the available HTLC bandwidth,
> slowly increasing based on successful payments, and drastically
> (multiplicatively) decreasing when you encounter very long lived HTLCs, or
> an excessive number of failures.
>
> A dynamic HTLC bandwidth allocation mechanism would serve to mitigate
> several classes of attacks (supplementing any mitigations by "channel
> acceptor" hooks), and also give forwarding nodes more _control_ of exactly
> how their allocated bandwidth is utilized by all connected peers.  This is
> possible to some degree today (by using an implicit value lower than
> the negotiated values), but the implicit route doesn't give the other party
> any information, and may end up in weird re-send loops (as the _why_ an
> HTLC was rejected wasn't communicated). Also if you end up in a half-sign
> state, since we don't have any sort of "unadd", then the channel may end up
> borked if the violating party keeps retransmitting the same update upon
> reconnection.
>
> > Are there other fields you think would need to become dynamic as well?
>
> One other value that IMO should be dynamic to protect against future
> unexpected events is the dust limit. "It Is Known", that this value
> "doesn't
> really change", but we should be able to upgrade _all_ channels on the fly
> if it does for w/e reason.
>
> -- Laolu
>


Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
I totally agree with the simplicity argument, I wanted to raise this
because it's (IMO) an issue
today because of the way we deal with on-chain fees, but it's less
impactful once update_fee is
scoped to some min_relay_fee.

Let's put this aside for now then and we can revisit later if needed.

Thanks for the feedback everyone!
Bastien

Le lun. 12 oct. 2020 à 20:49, Olaoluwa Osuntokun  a
écrit :

> > It seems to me that the "funder pays all the commit tx fees" rule exists
> > solely for simplicity (which was totally reasonable).
>
> At this stage, I've learned that simplicity (when doing anything that
> involves multi-party on-chain fee negotiation/verification/enforcement)
> can really go a long way. Just think about all the edge cases w.r.t
> _allocating
> enough funds to pay for fees_ we've discovered over the past few years in
> the state machine. I fear adding a more elaborate fee splitting mechanism
> would only blow up the number of obscure edge cases that may lead to a
> channel temporarily or permanently being "borked".
>
> If we're going to add a "fairer" way of splitting fees, we'll really need
> to
> dig down pre-deployment to ensure that we've explored any resulting edge
> cases within our solution space, as we'll only be _adding_ complexity to
> fee
> splitting.
>
> IMO, anchor commitments in their "final form" (fixed fee rate on commitment
> transaction, only "emergency" use of update_fee) significantly simplifies
> things as it shifts from "funder pays fees" to "broadcaster/confirmer pays
> fees". However, as you note this doesn't fully distribute the worst-case
> cost of needing to go to chain with a "fully loaded" commitment
> transaction.
> Even with HTLCs, they could only be signed at 1 sat/byte from the funder's
> perspective, once again putting the burden on the broadcaster/confirmer to
> make up the difference.
>
> -- Laolu
>
>
> On Mon, Oct 5, 2020 at 6:13 AM Bastien TEINTURIER via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Good morning list,
>>
>> It seems to me that the "funder pays all the commit tx fees" rule exists
>> solely for simplicity
>> (which was totally reasonable). I haven't been able to find much
>> discussion about this decision
>> on the mailing list nor in the spec commits.
>>
>> At first glance, it's true that at the beginning of the channel lifetime,
>> the funder should be
>> responsible for the fee (it's his decision to open a channel after all).
>> But as time goes by and
>> both peers earn value from this channel, this rule becomes questionable.
>> We've discovered since
>> then that there is some risk associated with having pending HTLCs
>> (flood-and-loot type of attacks,
>> pinning, channel jamming, etc).
>>
>> I think that *in some cases*, fundees should be paying a portion of the
>> commit-tx on-chain fees,
>> otherwise we may end up with a web-of-trust network where channels would
>> only exist between peers
>> that trust each other, which is quite limiting (I'm hoping we can do
>> better).
>>
>> Routing nodes may be at risk when they *receive* HTLCs. All the attacks
>> that steal funds come from
>> the fact that a routing node has paid downstream but cannot claim the
>> upstream HTLCs (correct me
>> if that's incorrect). Thus I'd like nodes to pay for the on-chain fees of
>> the HTLCs they offer
>> while they're pending in the commit-tx, regardless of whether they're
>> funder or fundee.
>>
>> The simplest way to do this would be to deduct the HTLC cost (172 *
>> feerate) from the offerer's
>> main output (instead of the funder's main output, while keeping the base
>> commit tx weight paid
>> by the funder).
>>
>> A more extreme proposal would be to tie the *total* commit-tx fee to the
>> channel usage:
>>
>> * if there are no pending HTLCs, the funder pays all the fee
>> * if there are pending HTLCs, each node pays a proportion of the fee
>> proportional to the number of
>> HTLCs they offered. If Alice offered 1 HTLC and Bob offered 3 HTLCs, Bob
>> pays 75% of the
>> commit-tx fee and Alice pays 25%. When the HTLCs settle, the fee is
>> redistributed.
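The 1-vs-3 split quoted above can be checked with a small illustrative helper (mine, not a spec formula): each peer pays a share of the commit-tx fee proportional to the HTLCs it offered, and with no pending HTLCs the funder pays everything.

```python
# Worked sketch of the proportional fee split: with Alice offering 1
# HTLC and Bob offering 3, Bob pays 75% of the fee and Alice 25%.
def fee_shares(total_fee, htlcs_alice, htlcs_bob, funder="alice"):
    total = htlcs_alice + htlcs_bob
    if total == 0:
        # No pending HTLCs: the funder pays the whole fee.
        return (total_fee, 0) if funder == "alice" else (0, total_fee)
    alice = total_fee * htlcs_alice // total
    return alice, total_fee - alice

print(fee_shares(1000, 1, 3))  # -> (250, 750)
```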
>>
>> This model uses the on-chain fee as collateral for usage of the channel.
>> If Alice wants to forward
>> HTLCs through this channel (because she has something to gain - routing
>> fees), she should be taking
>> on some of the associated risk, not Bob. Bob will be taking the same risk
>> downstream if he chooses
>> to forward.
>>
>> I believe it also forces the fundee to care about on-chain feerates,
>> which is a healthy incentive.
>> It may create a feedback loop between on-chain feerates and routing fees,
>> which I believe is also
>> a good long-term thing (but it's hard to predict as there may be negative
>> side-effects as well).
>>
>> What do you all think? Is this a terrible idea? Is it okay-ish, but not
>> worth the additional
>> complexity? Is it an amazing idea worth a lightning Nobel? Please don't
>> take any of my claims
>> for granted and challenge them, the