A subnote on this: there's an open question on the TLV format (1-byte
type, 1-byte len).

There are two ways to add TLV to the onion:
1. Leave the existing fields and put TLV in the padding:
   * [`8`:`short_channel_id`]
   * [`8`:`amt_to_forward`]
   * [`4`:`outgoing_cltv_value`]
   * [`12`:`padding`]
2. Replace existing fields with TLV (eg. 2=short_channel_id,
   4=amt_to_forward, 6=outgoing_cltv_value) and use realm > 0
   to flag the new TLV format.

The total length turns out to be about the same for intermediary hops, since:
TLV of short_channel_id => 10 bytes
TLV of amt_to_forward => probably 5-6 bytes.
TLV of outgoing_cltv_value => probably 3-4 bytes.
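A minimal Python sketch of option 2 and the size estimates above. The type numbers (2/4/6) are the examples from this mail; the concrete amount, the use of a small relative cltv, and the minimal big-endian value encoding are illustrative assumptions, not settled spec:

```python
def tlv(t: int, value: bytes) -> bytes:
    """Serialize one TLV record: 1-byte type, 1-byte length, then the value."""
    assert 0 <= t <= 255 and len(value) <= 255
    return bytes([t, len(value)]) + value

def minimal(n: int) -> bytes:
    """Big-endian encoding of n with leading zero bytes stripped."""
    return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

# short_channel_id stays a fixed 8-byte value: 2 + 8 = 10 bytes.
scid = tlv(2, (0x123456).to_bytes(8, "big"))
# amt_to_forward, minimally encoded: e.g. 100000 msat -> 3 value bytes -> 5 total.
amt = tlv(4, minimal(100_000))
# outgoing_cltv_value: e.g. a delta of 144 -> 1 value byte -> 3 total.
cltv = tlv(6, minimal(144))
```

For an intermediary hop the three records together come to roughly 18 bytes, in line with the estimates above; a final-hop stream that omits `short_channel_id` is only about 8 bytes, which is where the savings come from.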

For the final hop, we don't use short_channel_id, so we save
significantly there.  That's also where many proposals to add
information go (e.g. a special "app-level" value), so it sways me in
the direction of making TLV take up the entire room.

Cheers,
Rusty.

Christian Decker <decker.christ...@gmail.com> writes:
> Heya everybody,
>
> during the spec meeting in Adelaide we decided that we'd like to extend
> our current onion-routing capabilities with a couple of new features,
> such as rendez-vous routing, spontaneous payments, multi-part payments,
> etc. These features rely on two changes to the current onion format:
> bigger per-hop payloads (in the form of multi-frame payloads) and a more
> modern encoding (given by the TLV encoding).
>
> In the following I will explain my proposal on how to extend the per-hop
> payload from the current 65 bytes (which include the realm and HMAC) to
> multiples of that size.
>
> Until now we had a 1-to-1 relationship between a 65 byte segment of
> payload and a hop in the route. Since this is no longer the case, I
> propose we call the 65 byte segment a frame, to differentiate it from a
> hop in the route, hence the name multi-frame onion. The creation and
> decoding processes don't really change at all; only some of the
> parameters do.
>
> When constructing the onion, the sender currently always right-shifts by
> a single 65 byte frame, serializes the payload, and encrypts using the
> ChaCha20 stream. In parallel it also generates the fillers (basically 0s
> that get appended and encrypted by the processing nodes, in order to get
> matching HMACs); these are also shifted by a single 65 byte frame on
> each hop. The change in the generation comes in the form of variable
> shifts for both the payload serialization and filler generation,
> depending on the payload size. If the payload fits into 32 bytes,
> nothing changes; if the payload is bigger, we just use additional frames
> until it fits. The payload is padded with 0s, the HMAC remains the
> last 32 bytes of the payload, and the realm stays in the first
> byte. This gives us
>
>> payload_size = num_frames * 65 bytes - 1 byte (realm) - 32 bytes (hmac)
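A quick sketch of the formula quoted above, plus its inverse (the smallest frame count that fits a given payload); the helper names are mine, not from the proposal:

```python
FRAME_SIZE = 65
OVERHEAD = 1 + 32  # realm byte + HMAC, charged once per hop

def payload_size(num_frames: int) -> int:
    """Usable payload bytes available in num_frames frames."""
    return num_frames * FRAME_SIZE - OVERHEAD

def frames_needed(payload_len: int) -> int:
    """Smallest frame count whose capacity fits payload_len."""
    return -(-(payload_len + OVERHEAD) // FRAME_SIZE)  # ceiling division
```

So one frame gives the familiar 32 bytes, two frames give 97, and a 33-byte payload is the first one to spill into a second frame.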
>
> The realm byte encodes both the payload format and how many
> additional frames were used to encode the payload. The 4 MSBs encode
> the number of additional frames used, while the 4 LSBs encode the
> realm/payload format.
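(The realm-byte packing described above can be sketched as follows, assuming the high nibble carries the count of *additional* frames so that a legacy single-frame v0 payload keeps its realm byte of 0x00:)

```python
def pack_realm(extra_frames: int, realm: int) -> int:
    """High nibble: additional frames beyond the first; low nibble: realm."""
    assert 0 <= extra_frames <= 15 and 0 <= realm <= 15
    return (extra_frames << 4) | realm

def unpack_realm(byte: int) -> tuple[int, int]:
    """Return (extra_frames, realm) from a realm byte."""
    return byte >> 4, byte & 0x0F
```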
>
> The decoding of an onion packet pretty much stays the same: the
> receiving node generates the shared secret, then generates the ChaCha20
> stream, and decrypts the packet (plus the additional padding that
> matches the filler the sender generated for the HMACs). It can then read
> the realm byte, which tells it how many frames to read, and how many
> frames it needs to left-shift in order to derive the next onion.
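(A toy sketch of that read-and-shift step, with encryption, HMAC checks, and padding elided; the function name is mine:)

```python
FRAME_SIZE = 65

def split_payload(decrypted: bytes) -> tuple[bytes, bytes]:
    """Split a decrypted per-hop region into (our frames, remainder).

    The high nibble of the realm byte gives the number of additional
    frames, so this hop consumes (1 + extra) frames; the remainder is
    what gets left-shifted to form the next onion.
    """
    extra_frames = decrypted[0] >> 4
    used = (1 + extra_frames) * FRAME_SIZE
    return decrypted[:used], decrypted[used:]
```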
>
> This is a competing proposal with the proposal by roasbeef on the
> lightning-onion repo [1], but I think it is superior in a number of
> ways. The major advantage of this proposal is that the payload is in one
> contiguous memory region after the decryption, avoiding re-assembly of
> multiple parts and allowing zero-copy processing of the data. It also
> avoids multiple decryption steps, and does not waste space on multiple
> useless HMACs. I also believe that this proposal is simpler than [1],
> since it doesn't require re-assembly, and creates a clear distinction
> between payload units and hops.
>
> To show that this proposal actually works, and is rather simple, I went
> ahead and implemented it for c-lightning [2] and lnd [3] (sorry ACINQ,
> my scala is not sufficient to implement it for eclair). Most of the code
> changes are preparation for variable size payloads alongside the legacy
> v0 payloads we used so far, the relevant commits that actually change
> the generation of the onion are [4] and [5] for c-lightning and lnd
> respectively.
>
> I'm hoping that this proposal proves to be useful, and that you agree
> about the advantages I outlined above. I'd also like to mention that,
> while this is working, I'm open to suggestions :-)
>
> Cheers,
> Christian
>
> [1] https://github.com/lightningnetwork/lightning-onion/pull/31
> [2] https://github.com/ElementsProject/lightning/pull/2363
> [3] https://github.com/lightningnetwork/lightning-onion/pull/33
> [4] 
> https://github.com/ElementsProject/lightning/pull/2363/commits/aac29daeeb5965ae407b9588cd599f38291c0c1f
> [5] 
> https://github.com/lightningnetwork/lightning-onion/pull/33/commits/216c09c257d1a342c27c1e85ef6653559ef39314
> _______________________________________________
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev