Re: [Lightning-dev] Direct Message draft

2020-02-23 Thread Rusty Russell
Christian Decker  writes:
> Rusty Russell  writes:
>
>> I like it!  The lack of "reply" function eliminates all storage
>> requirements for the intermediaries.  Unfortunately it's not currently
>> possible to fit the reply onion inside the existing onion, but I know
>> Christian has a rabbit in his hat for this?
>
> I think circular payment really means an onion that is
>
>> A -> ... -> B -> ... -> A
>
> and not a reply onion inside of a forward onion.
>
> The problem with the circular path is that the "recipient" cannot add
> any reply without invalidating the HMACs on the return leg of the
> onion. The onion is fully predetermined by the sender, any malleability
> introduced in order to allow the recipient to reply poses a threat to
> the integrity of the onion routing, e.g., it opens us up to probing by
> fiddling with parts of the onion until the attacker identifies the
> location the recipient is supposed to put his reply into.
>
> As Rusty mentioned I have a construction of the onion routing packet
> that allows us to compress it in such a way that it fits inside of the
> payload itself.

I think this has the same problem though, that there's no way Alice can
send Bob an onion to use with an arbitrary message?

> Another advantage is that the end-to-end payload is not covered by the
> HMACs in the header, meaning that the recipient can construct a reply
> without having to modify (and invalidate) the routing onion. I guess
> this is going back to the roots of the Sphinx paper :-)

Good point, and it's trivial.  The paper suggests the payload be "final
key" followed by the desired data, providing a simple validation scheme.

We could potentially generalize the HTLC messages like this, but it's
unnecessary at this point.

Thanks,
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2020-02-23 Thread Rusty Russell
Anthony Towns  writes:
> On Fri, Feb 21, 2020 at 12:35:20PM +1030, Rusty Russell wrote:
>> And if there is a grace period, I can just gum up the network with lots
>> of slow-but-not-slow-enough HTLCs.
>
> Well, it reduces the "gum up the network for  blocks" to "gum
> up the network for  seconds", which seems like a pretty
> big win. I think if you had 20 hops each with a 1 minute grace period,
> and each channel had a max_accepted_htlcs of 30, you'd need 25 HTLCs per
> second to block 1000 channels (so 2.7% of the 36k channels 1ml reports),
> so at the very least, successfully performing this attack would be
> demonstrating lightning's solved bitcoin's transactions-per-second
> limitation?

But the comparison here is not with the current state, but with the
"best previous proposal we have", which is:

1. Charge an up-front fee for accepting any HTLC.
2. Hang up after the grace period unless you either prove a channel
   close, or gain another grace period by decrypting the onion.

(There is an obvious extension to this, where you pay another HTLC
first which covers the (larger) up-front fee for the "I know the next
HTLC is going to take a long time" case.)

That proposal is simpler, and covers this case quite nicely.
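
To make the shape of that concrete, here is a loose decision helper for a
forwarder under that earlier proposal (the function, its arguments, and the
60-second figure in the checks below are all illustrative assumptions, not
anything from a spec):

    # Decision rule: after the grace period, keep waiting only if the peer has
    # proved a channel close, or has shown the decrypted onion (which buys it
    # another grace period); otherwise hang up.
    def next_action(seconds_held, grace_period, proved_channel_close, decrypted_onion):
        if seconds_held <= grace_period:
            return "wait"                 # still inside the free grace period
        if proved_channel_close:
            return "wait"                 # an on-chain close excuses the delay
        if decrypted_onion:
            return "wait-new-grace"       # proof of forwarding buys another grace period
        return "fail-htlc"                # no proof, no more waiting

    assert next_action(70, 60, False, False) == "fail-htlc"
    assert next_action(70, 60, False, True) == "wait-new-grace"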

> at which point the best B can do is unilaterally close the B/C channel
> with their pre-HTLC commitment, but they still have to wait for that to
> confirm before they can safely cancel the HTLC with A, and that will
> likely take more than whatever the grace period is, so B will be losing
> money on holding fees.
>
> Whereas:
>
>   A->B: here's a HTLC, locked in
>
>   B->C: HTLC proposal
>   C->B: sure: updated commitment with HTLC locked in
>   B->C: great, corresponding updated commitment, plus revocation
>   C->B: revocation
>
> means that if C goes silent before B receives a new commitment, B can
> cancel the HTLC with A with no risk (B can publish the old commitment
> still even if the new arrives later, and C can only publish the pre-HTLC
> commitment), and if C goes silent after B receives the new commitment, B
> can drop the new commitment to the blockchain and pay A's fees out of it.

Interesting; this adds an extra trip, but no extra latency (since C can
still count on the HTLC being locked in at step 3).

I don't see how it helps B though?  It still ends up paying A, and C
doesn't pay anything?

It forces a liveness check of C, but TBH I dread rewriting the state
machine for this when we can just ping like we do now.

>> There's an old proposal to fast-fail HTLCs: Bob sends a new message "I
>> would fail this HTLC once it's committed, here's the error" 
>
> Yeah, you could do "B->C: proposal, C->B: no way!" instead of "sure" to
> fast fail the above too. 
>
> And I think something like that's necessary (at least with my view of how
> this "keep the HTLC open" payment would work), otherwise B could send C a
> "1 microsecond grace period, rate of 3e11 msat/minute, HTLC for 100 sat,
> timeout of 2016 blocks" and if C couldn't reject it immediately would
> owe B 50c per millisecond it took to cancel.

Well, surely grace period (and penalty rate) are either fixed in the
protocol or negotiated up-front, not per-HTLC.

Cheers,
Rusty.


Re: [Lightning-dev] Direct Message draft

2020-02-23 Thread ZmnSCPxj via Lightning-dev
Good morning Christian, and Rusty,

> > > Would it not be better to create a circular path? By this I mean,
> > > Alice constructs an onion that overall creates a path from herself to
> > > Bob and back, ensuring different nodes on the forward and return
> > > directions. The onion hop at Bob reveals that Bob is the chosen
> > > conversation partner, and Bob forwards its reply via the onion return
> > > path (that Alice prepared herself to get back to her via another
> > > path).
> >
> > I like it! The lack of "reply" function eliminates all storage
> > requirements for the intermediaries. Unfortunately it's not currently
> > possible to fit the reply onion inside the existing onion, but I know
> > Christian has a rabbit in his hat for this?
>
> I think circular payment really means an onion that is
>
> > A -> ... -> B -> ... -> A
>
> and not a reply onion inside of a forward onion.
>
> The problem with the circular path is that the "recipient" cannot add
> any reply without invalidating the HMACs on the return leg of the
> onion. The onion is fully predetermined by the sender, any malleability
> introduced in order to allow the recipient to reply poses a threat to
> the integrity of the onion routing, e.g., it opens us up to probing by
> fiddling with parts of the onion until the attacker identifies the
> location the recipient is supposed to put his reply into.

At the risk of constructing a novel cryptosystem, I think we can separate the 
request/response from the onion.
We effectively treat the onion as establishing a *non*-encrypted temporary 
tunnel, and add an asymmetric encryption between Alice and Bob for the request 
and response.

The onion is kept as-is (except without information about HTLC amounts and 
timelocks).
We add a field *outside* the onion which contains the request/response.
At each hop, we derive a key from the shared secret between the ephemeral 
keypair and the hop keypair, and use the derived key to generate a symmetric 
stream cipher to encrypt the separated request/response.

Alice first creates a *separate* ephemeral key just for communication with Bob.
It encrypts the request using this level 2 ephemeral key and adds a MAC tag as 
well, and treats the encrypted request plus the MAC tag as a single binary blob.
Then, for each hop, it derives the symmetric stream cipher from the onion 
(level 1) ephemeral key and that hop's key, and applies the stream cipher to the blob.
Then it sends out the completed onion plus this encrypted request/response blob.
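
A rough Python sketch of that sender-side wrapping (the XOR-with-SHAKE "stream
cipher", the derivation tag, and the helper names are purely illustrative
stand-ins; `hop_secrets` stands for the per-hop level 1 shared secrets Alice
already derives for the onion, and `bob_secret` for the Alice/Bob level 2
shared secret):

    import hashlib, hmac

    def keystream(secret: bytes, length: int) -> bytes:
        # Toy stream cipher: a SHAKE-256 keystream keyed by the shared secret.
        return hashlib.shake_256(b"dm-stream" + secret).digest(length)

    def xor(data: bytes, stream: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, stream))

    def wrap_request(request: bytes, bob_secret: bytes, hop_secrets: list) -> bytes:
        # Level 2: encrypt for Bob and append a MAC tag over the ciphertext.
        ct = xor(request, keystream(bob_secret, len(request)))
        blob = ct + hmac.new(bob_secret, ct, hashlib.sha256).digest()
        # Level 1: pre-apply one stream-cipher layer per forward hop.  Since
        # XOR is its own inverse, each hop applying its keystream peels one
        # layer, and Bob ends up with exactly the level 2 ciphertext plus tag.
        for secret in reversed(hop_secrets):
            blob = xor(blob, keystream(secret, len(blob)))
        return blob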

Each hop effectively peels a layer of encryption, because it is a symmetric 
stream cipher.
On reaching Bob, the encrypted request/response message plus MAC tag is 
revealed to Bob.
Bob learns it is the true destination from some TLV in the onion part, which 
also includes the (level 2) ephemeral key; it then validates the MAC using the 
Alice/Bob shared secret and, if it is valid, decrypts the request part and 
processes the request.

Bob then generates its reply, and encrypts the reply with the shared secret 
between its static key and the level 2 ephemeral key, then creates a MAC using 
the level 2 ephemeral key and its static key.
Then it sends it together with the rest of the onion onward.

Each hop after Bob effectively adds a layer of encryption, because it is a 
symmetric stream cipher.
On reaching Alice, Alice (who knows the entire route, since it was the one who 
established the route) can peel every layer between Bob and Alice on the return 
route.
It should then be left with the binary blob that is the (level 2) encryption of 
the reply plus a MAC; it validates the MAC, then decrypts the reply.
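
Continuing the same illustrative sketch (same toy primitives as above;
`return_secrets` stands for the shared secrets of the hops between Bob and
Alice on the return leg, all of which Alice knows):

    import hashlib, hmac

    def keystream(secret: bytes, length: int, tag: bytes = b"dm-stream") -> bytes:
        # Same toy primitive as before; a distinct tag is used for Bob's reply
        # layer so the level 2 keystream is not reused for request and reply.
        return hashlib.shake_256(tag + secret).digest(length)

    def xor(data: bytes, stream: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, stream))

    TAG_LEN = 32  # SHA-256 HMAC tag

    def unwrap_reply(blob: bytes, bob_secret: bytes, return_secrets: list) -> bytes:
        # Each hop after Bob added a keystream layer; Alice strips them all off.
        for secret in return_secrets:
            blob = xor(blob, keystream(secret, len(blob)))
        ct, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
        if not hmac.compare_digest(tag, hmac.new(bob_secret, ct, hashlib.sha256).digest()):
            raise ValueError("reply MAC check failed")
        # Finally remove Bob's level 2 encryption of the reply.
        return xor(ct, keystream(bob_secret, len(ct), b"dm-reply"))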

Because the request/response travels outside the onion, we treat the tunnel 
itself as unencrypted, which is why we add the level 2 asymmetric encryption 
between Alice and Bob.
We still bother to do a multilayer encryption so that, in case of a route like 
A->I->J->K->L->B, where I and L are controlled by the same surveillor, the 
request/response is different at each hop and I and L cannot be certain they 
are on the same route (though of course timing and message size can improve 
their accuracy --- we can have a fixed size for the request/response to hide 
message size, but we can do nothing about timing).

My *vague* understanding is that HORNET is effectively a better version of this 
--- it uses a "full" onion to establish a circuit, then uses simpler symmetric 
ciphers during circuit operation.
The scheme above, by contrast, effectively tears down the circuit as soon as the message passes through.


>
> As Rusty mentioned I have a construction of the onion routing packet
> that allows us to compress it in such a way that it fits inside of the
> payload itself. I'll write up a complete proposal over the coming days,
> but the basic idea is to initialize the unused part of the onion in such
> a way that it cancels out the layers of encryption and the fully wrapped
> onion consists of all `0x00` bytes. These can then be removed resulting
> in a compressed onion, and the sender can simply add the padding 0x00

Re: [Lightning-dev] A proposal for up-front payments.

2020-02-23 Thread Anthony Towns
On Fri, Feb 21, 2020 at 12:35:20PM +1030, Rusty Russell wrote:
> > I think the way it would end up working
> > is that the further the route extends, the greater the payments are, so:
> >   A -> B   : B sends A 1msat per minute
> >   A -> B -> C : C sends B 2msat per minute, B forwards 1msat/min to A
> >   A -> B -> C -> D : D sends C 3 msat, etc
> >   A -> B -> C -> D -> E : E sends D 4 msat, etc
> > so each node is receiving +1 msat/minute, except for the last one, who's
> > paying n msat/minute, where n is the number of hops to have gotten up to
> > the last one. There's the obvious privacy issue there, with fairly
> > obvious ways to fudge around it, I think.
> Yes, it needs to scale with distance to work at all.  However, it has
> the same problems with other upfront schemes: how does E know to send
> 4msat per minute?

D tells it "if you want this HTLC, you'll need to pay 4msat/minute after
the grace period of 65 seconds". Which also means A as the originator can
also choose whatever fees they like. The only consequence of choosing too
high a fee is that it's more likely one of the intermediate nodes will
say "screw that!" and abort the HTLC before it gets to the destination.

> > I think it might make sense for the payments to have a grace period --
> > ie, "if you keep this payment open longer than 20 seconds, you have to
> > start paying me x msat/minute, but if it fulfills or cancels before
> > then, it's all good".
> But whatever the grace period, I can just rely on knowing that B is in
> Australia (with a 1 second HTLC commit time) to make that node bleed
> satoshis.  I can send A->B->C, and have C fail the htlc after 19
> seconds for free.  But B has to send 1msat to A.  B can't blame A or C,
> since this attack could come from further away, too.

So A gives B a grace period of 35 seconds, B deducts 5 seconds
processing time and 10 seconds for latency, so gives C a grace period of
20 seconds; C rejects after 19 seconds, and B still has 15 seconds to
notify A before he has to start paying fees. Same setup as decreasing
timelocks when forwarding HTLCs.
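
Spelling out that arithmetic (the 35/5/10/19-second figures are the ones
above; the split into budgets is just how I'd write it down):

    grace_from_A = 35      # seconds A grants B
    processing = 5         # B's own handling time
    latency = 10           # round-trip budget between B and C
    grace_to_C = grace_from_A - processing - latency
    print(grace_to_C)      # 20 seconds for C

    # If C rejects at 19 seconds, just inside its own grace period, whatever is
    # left of B's 35 seconds once the reply comes back (roughly 15 seconds) is
    # B's margin for notifying A before B starts owing holding fees.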

> And if there is a grace period, I can just gum up the network with lots
> of slow-but-not-slow-enough HTLCs.

Well, it reduces the "gum up the network for  blocks" to "gum
up the network for  seconds", which seems like a pretty
big win. I think if you had 20 hops each with a 1 minute grace period,
and each channel had a max_accepted_htlcs of 30, you'd need 25 HTLCs per
second to block 1000 channels (so 2.7% of the 36k channels 1ml reports),
so at the very least, successfully performing this attack would be
demonstrating lightning's solved bitcoin's transactions-per-second
limitation?
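
Working through those figures (all inputs are the ones stated above; 36k is
the 1ml channel count quoted):

    hops = 20                  # route length
    grace_seconds = 60         # 1 minute grace period per hop
    slots_per_channel = 30     # max_accepted_htlcs
    channels_to_block = 1000
    total_channels = 36_000

    slots_to_fill = channels_to_block * slots_per_channel    # 30,000 HTLC slots
    concurrent_htlcs = slots_to_fill / hops                   # 1,500 in flight at once
    htlcs_per_second = concurrent_htlcs / grace_seconds       # 25 new HTLCs per second
    print(htlcs_per_second)                                   # 25.0
    print(channels_to_block / total_channels)                 # ~0.028, the "2.7%" above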

I think you could do better by having the acceptable grace period be
dynamic: both (a) requiring a shorter grace period the more funds a HTLC
locks up, which stops a single HTLC from gumming up the channel, and (b) 
requiring a shorter grace period the more active HTLCs you have (or, the
more active HTLCs you have that are in the grace period, perhaps). That
way if the network is loaded, you're prioritising more efficient routes
(or at least ones that are willing to pay their way), and if it's under
attack, you're dynamically increasing the resources needed to maintain
the attack.
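
One purely illustrative shape such a rule could take (my own toy formula,
with made-up constants, not part of the proposal):

    def acceptable_grace_period(htlc_msat, htlcs_in_grace,
                                base=60.0, value_scale=1_000_000, load_scale=10):
        """Grace period (seconds) we are willing to grant a new HTLC: shorter
        for larger HTLCs, and shorter the more HTLCs are already sitting in
        their grace period."""
        value_factor = 1.0 / (1.0 + htlc_msat / value_scale)
        load_factor = 1.0 / (1.0 + htlcs_in_grace / load_scale)
        return base * value_factor * load_factor

    print(acceptable_grace_period(100_000, 0))     # small HTLC, idle node  -> ~54.5s
    print(acceptable_grace_period(5_000_000, 20))  # big HTLC, under load   -> ~3.3s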

Anyway, that's my hot take; not claiming it's a perfect solution or
final answer, rather that this still seems worth brainstorming out.

My feeling is that this might interact nicely with the sender-initiated
upfront fee. Like, you could replace a grace period of 30 seconds at
2msat/minute by always charging 2msat/minute but doing a forward payment
of 1msat. But at this point I can't keep it all in my head at once to
figure out something that really makes sense.

> > Maybe this also implies a different protocol for HTLC forwarding,
> > something like:
> >   1. A sends the HTLC onion packet to B
> >   2. B decrypts it, makes sure it makes sense
> >   3. B sends a half-signed updated channel state back to A
> >   4. A accepts it, and forwards the other half-signed channel update to B
> > so that at any point before (4) Alice can say "this is taking too long,
> > I'll start losing money" and safely abort the HTLC she was forwarding to
> > Bob to avoid paying fees; while only after (4) can she start the time on
> > expecting Bob to start paying fees that she'll forward back. That means
> > 1.5 round-trips before Bob can really forward the HTLC on to Carol;
> > but maybe it's parallelisable, so Bob/Carol could start at (1) as soon
> > as Alice/Bob has finished (2).
> We added a ping-before-commit[1] to avoid the case where B has disconnected
> and we don't know yet; we have to assume an HTLC is stuck once we send
> commitment_signed.  This would be a formalization of that, but I don't
> think it's any better?

I don't think it's any better as things stand, but with the "B pays A
holding fees" I think it becomes necessary. If you've got a route
A->B->C then from B's perspective I think it