Re: [Lightning-dev] Multi-frame sphinx onion format

2019-02-24 Thread Christian Decker
ZmnSCPxj  writes:
> Good morning Christian, Rusty, and list,
> You can take this a step further and make the realm 0 byte into a
> special type "0" which has a fixed length of 1299 bytes, with the
> length never encoded for this special type.  It would then define the
> next 1299 bytes as the "V", having the format of 64 bytes of the
> current hop format (short channel ID, amount, CLTV, 12-byte padding,
> HMAC), plus 19*65 bytes as the encrypted form of the next hop data.
> This lets us reclaim even the realm byte, removing its overhead by
> re-encoding it as the type in a TLV system, and with the special
> exception of dropping the "L" for the type 0 (== current realm 0)
> case.

I disagree that this would be any clearer than the current proposal
since we completely lose the separation of payload encoding vs. onion
encoding. Let's not mix the concepts of payload and transport onion,
please.

> In short, drop the concept of 65-byte "frames".
>
> We could have another special length-not-encoded type 255, which
> declares the next 32 bytes as HMAC and the rest of the onion packet as
> the data for the next hop.
>
> The above is not a particularly serious proposal.

You had me worried for a second there :-)
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Multi-frame sphinx onion format

2019-02-23 Thread ZmnSCPxj via Lightning-dev
Good morning Christian, Rusty, and list,

> There is, however, a third option: make the entire payload a
> TLV-set and then use the old payload format (`short_channel_id`,
> `amt_to_forward`, `outgoing_cltv_value`) as a single 20-byte
> TLV value. That means we have only 2 bytes of overhead compared to
> the old v0 format (4 bytes less than option 2), and can drop it if we
> require some other payload that doesn't adhere to this format.

You can take this a step further and make the realm 0 byte into a special type 
"0" which has a fixed length of 1299 bytes, with the length never encoded for 
this special type.
It would then define the next 1299 bytes as the "V", having the format of 64 
bytes of the current hop format (short channel ID, amount, CLTV, 12-byte 
padding, HMAC), plus 19*65 bytes as the encrypted form of the next hop data.
This lets us reclaim even the realm byte, removing its overhead by re-encoding 
it as the type in a TLV system, and with the special exception of dropping the 
"L" for the type 0 (== current realm 0) case.

In short, drop the concept of 65-byte "frames".

We could have another special length-not-encoded type 255, which declares the 
next 32 bytes as HMAC and the rest of the onion packet as the data for the next 
hop.

The above is not a particularly serious proposal.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Multi-frame sphinx onion format

2019-02-22 Thread Christian Decker
Rusty Russell  writes:
> There are two ways to add TLV to the onion:
> 1. Leave the existing fields and put TLV in the padding:
>* [`8`:`short_channel_id`]
>* [`8`:`amt_to_forward`]
>* [`4`:`outgoing_cltv_value`]
>* [`12`:`padding`]
> 2. Replace existing fields with TLV (eg. 2=short_channel_id,
>4=amt_to_forward, 6=outgoing_cltv_value) and use realm > 0
>to flag the new TLV format.
>
> The length turns out about the same for intermediary hops, since:
> TLV of short_channel_id => 10 bytes
> TLV of amt_to_forward => probably 5-6 bytes.
> TLV of outgoing_cltv_value => probably 3-4 bytes.
>
> For final hop, we don't use short_channel_id, so we save significantly
> there.  That's also where many proposals to add information go (eg. a
> special "app-level" value), so it sways me in the direction of making
> TLV take the entire room.

I'd definitely vote for making the entire payload a TLV (option 2) since
that allows us to completely redefine the payload. I don't think the
overhead argument really applies since we're currently wasting 12 bytes
of payload anyway, and with option 2 we still fit the current payload in
a single frame.

There is, however, a third option: make the entire payload a
TLV-set and then use the old payload format (`short_channel_id`,
`amt_to_forward`, `outgoing_cltv_value`) as a single 20-byte
TLV value. That means we have only 2 bytes of overhead compared to
the old v0 format (4 bytes less than option 2), and can drop it if we
require some other payload that doesn't adhere to this format.
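A minimal sketch (in Python) of what wrapping the legacy fields in a single
TLV record could look like; the type number is hypothetical, since none is
assigned in this thread:

```python
import struct

# Hypothetical type number for the legacy payload bundle; the actual
# value would be assigned by the spec, not by this sketch.
LEGACY_PAYLOAD_TYPE = 0

def encode_legacy_tlv(short_channel_id, amt_to_forward, outgoing_cltv_value):
    """Wrap the legacy v0 fields in a single 20-byte TLV value."""
    # 8-byte short_channel_id + 8-byte amount + 4-byte CLTV = 20 bytes
    value = struct.pack(">QQI", short_channel_id, amt_to_forward,
                        outgoing_cltv_value)
    assert len(value) == 20
    # 1-byte type + 1-byte length + 20-byte value = 22 bytes total,
    # i.e. 2 bytes of overhead compared to the raw v0 fields.
    return struct.pack(">BB", LEGACY_PAYLOAD_TYPE, len(value)) + value
```

With this encoding the full record is 22 bytes: the 20-byte value plus one
byte each of type and length, which is where the 2-byte overhead figure
comes from.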

Cheers,
Christian


Re: [Lightning-dev] Multi-frame sphinx onion format

2019-02-21 Thread Rusty Russell
Subnote on this: there's an open question on the TLV format (1-byte type,
length of 1 byte or more).

There are two ways to add TLV to the onion:
1. Leave the existing fields and put TLV in the padding:
   * [`8`:`short_channel_id`]
   * [`8`:`amt_to_forward`]
   * [`4`:`outgoing_cltv_value`]
   * [`12`:`padding`]
2. Replace existing fields with TLV (eg. 2=short_channel_id,
   4=amt_to_forward, 6=outgoing_cltv_value) and use realm > 0
   to flag the new TLV format.

The length turns out about the same for intermediary hops, since:
TLV of short_channel_id => 10 bytes
TLV of amt_to_forward => probably 5-6 bytes.
TLV of outgoing_cltv_value => probably 3-4 bytes.
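These estimates assume a 1-byte type, a 1-byte length, and a value that is
either fixed-width (as for `short_channel_id`) or minimally encoded with
leading zero bytes dropped. A small sketch of that arithmetic (my own
illustration, not spec code):

```python
def tlv_size(value, fixed_len=None):
    """Size of a TLV record: 1-byte type + 1-byte length + value bytes.

    The value is either a fixed-width field or minimally encoded
    (leading zero bytes dropped, at least one byte).
    """
    if fixed_len is not None:
        return 2 + fixed_len
    return 2 + max(1, (value.bit_length() + 7) // 8)

# short_channel_id stays fixed-width: 2 + 8 = 10 bytes
assert tlv_size(0, fixed_len=8) == 10
# an amount around 100k msat fits in 3 value bytes: 2 + 3 = 5 bytes,
# in line with the 5-6 byte estimate above
assert tlv_size(100_000) == 5
```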

For final hop, we don't use short_channel_id, so we save significantly
there.  That's also where many proposals to add information go (eg. a
special "app-level" value), so it sways me in the direction of making
TLV take the entire room.

Cheers,
Rusty.

Christian Decker  writes:
> Heya everybody,
>
> during the spec meeting in Adelaide we decided that we'd like to extend
> our current onion-routing capabilities with a couple of new features,
> such as rendez-vous routing, spontaneous payments, multi-part payments,
> etc. These features rely on two changes to the current onion format:
> bigger per-hop payloads (in the form of multi-frame payloads) and a more
> modern encoding (given by the TLV encoding).
>
> In the following I will explain my proposal on how to extend the per-hop
> payload from the current 65 bytes (which include realm and HMAC) to
> multiples.
>
> Until now we had a 1-to-1 relationship between a 65 byte segment of
> payload and a hop in the route. Since this is no longer the case, I
> propose we call the 65 byte segment a frame, to differentiate it from a
> hop in the route, hence the name multi-frame onion. The creation and
> decoding process doesn't really change at all, only some of the
> parameters.
>
> When constructing the onion, the sender currently always right-shifts by
> a single 65 byte frame, serializes the payload, and encrypts using the
> ChaCha20 stream. In parallel it also generates the fillers (basically 0s
> that get appended and encrypted by the processing nodes, in order to get
> matching HMACs), these are also shifted by a single 65 byte frame on
> each hop. The change in the generation comes in the form of variable
> shifts for both the payload serialization and filler generation,
> depending on the payload size. So if the payload fits into 32 bytes
> nothing changes, if the payload is bigger, we just use additional frames
> until it fits. The payload is padded with 0s, the HMAC remains as the
> last 32 bytes of the payload, and the realm stays at the first
> byte. This gives us
>
>> payload_size = num_frames * 65 bytes - 1 byte (realm) - 32 bytes (hmac)
>
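Plugging numbers into the formula quoted above (my own illustration):

```python
FRAME_SIZE = 65   # bytes per frame
REALM_SIZE = 1    # realm byte
HMAC_SIZE = 32    # per-hop HMAC

def payload_size(num_frames):
    """Usable payload bytes for a hop whose data spans num_frames frames."""
    return num_frames * FRAME_SIZE - REALM_SIZE - HMAC_SIZE

assert payload_size(1) == 32   # matches the current single-frame payload
assert payload_size(2) == 97
```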
> The realm byte encodes both the payload format as well as how many
> additional frames were used to encode the payload. The MSB 4 bits encode
> the number of frames used, while the 4 LSB bits encode the realm/payload
> format.
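One way the nibble packing described above could look (my sketch; the
function names are made up):

```python
def encode_realm(num_extra_frames, payload_format):
    """Upper 4 bits: number of additional frames; lower 4 bits: realm/format."""
    assert 0 <= num_extra_frames < 16 and 0 <= payload_format < 16
    return (num_extra_frames << 4) | payload_format

def decode_realm(realm_byte):
    """Return (num_extra_frames, payload_format) from a realm byte."""
    return realm_byte >> 4, realm_byte & 0x0F
```

So a realm byte of 0x10 would mean one additional frame (two frames in
total) with payload format 0.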
>
> The decoding of an onion packet pretty much stays the same, the
> receiving node generates the shared secret, then generates the ChaCha20
> stream, and decrypts the packet (and additional padding that matches the
> filler the sender generated for HMACs). It can then read the realm byte,
> and knows how many frames to read, and how many frames it needs to
> left-shift in order to derive the next onion.
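A rough, non-normative sketch of the receive-side split described above,
assuming the realm-nibble encoding from the previous paragraph (upper nibble
= additional frames) and the HMAC sitting in the last 32 bytes of the
consumed frames:

```python
FRAME_SIZE = 65

def split_payload(decrypted_onion):
    """Split a decrypted onion into this hop's payload, the next HMAC,
    and the left-shifted remainder that becomes the next onion."""
    realm = decrypted_onion[0]
    num_frames = (realm >> 4) + 1            # upper nibble = extra frames
    consumed = num_frames * FRAME_SIZE
    hop_payload = decrypted_onion[1:consumed - 32]   # strip realm + HMAC
    next_hmac = decrypted_onion[consumed - 32:consumed]
    # The remainder was already re-padded by the sender's filler stream.
    next_onion = decrypted_onion[consumed:]
    return hop_payload, next_hmac, next_onion
```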
>
> This is a competing proposal with the proposal by roasbeef on the
> lightning-onion repo [1], but I think it is superior in a number of
> ways. The major advantage of this proposal is that the payload is in one
> contiguous memory region after the decryption, avoiding re-assembly of
> multiple parts and allowing zero-copy processing of the data. It also
> avoids multiple decryption steps, and does not waste space on multiple,
> useless, HMACs. I also believe that this proposal is simpler than [1],
> since it doesn't require re-assembly, and creates a clear distinction
> between payload units and hops.
>
> To show that this proposal actually works, and is rather simple, I went
> ahead and implemented it for c-lightning [2] and lnd [3] (sorry ACINQ,
> my Scala is not sufficient to implement it for eclair). Most of the code
> changes are preparation for variable size payloads alongside the legacy
> v0 payloads we used so far, the relevant commits that actually change
> the generation of the onion are [4] and [5] for c-lightning and lnd
> respectively.
>
> I'm hoping that this proposal proves to be useful, and that you agree
> about the advantages I outlined above. I'd also like to mention that,
> while this is working, I'm open to suggestions :-)
>
> Cheers,
> Christian
>
> [1] https://github.com/lightningnetwork/lightning-onion/pull/31
> [2] https://github.com/ElementsProject/lightning/pull/2363
> [3] https://github.com/lightningnetwork/lightning-onion/pull/33
> [4] 
>