Re: [Lightning-dev] Data Lightning Atomic Swap (DLAS-down, DLAS-up)

2020-01-19 Thread Takaya Imai
Hi Subhra,

thanks for your question.

> So as of now, if we consider the transfer of a file (maybe a few KB), then you
> split it into several blocks and use atomic multi-path payment, embedding
> the blocks in the preimages in order to obtain payment.
> But it might be the case that you do not have a sufficient number of paths to
> transfer all the blocks in one go because of the preimage size limitation of
> 256 bits (I didn't get the point that there is no limitation on data size;
> can anyone explain that?).

Yes, a large file needs many blocks and paths.
That is not a problem, though, because the same path can be used several times.

Of course, transferring a large file costs a lot in payments, but that is a
good property for keeping the Lightning Network stable.

As for DoS attacks, OG AMP has the same problem. The recipient might need a
limit on how many split blocks it accepts.
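
As a rough sketch of the arithmetic (the 32-byte block size simply matches the
256-bit preimage; names here are illustrative, not from any implementation):

    import math

    PREIMAGE_SIZE = 32  # bytes: one preimage is 256 bits

    def blocks_needed(file_size_bytes: int) -> int:
        # one preimage-sized block, and thus one payment, per block
        return math.ceil(file_size_bytes / PREIMAGE_SIZE)

    # A 4 KB file already needs 128 separate payments, although they can all
    # reuse the same path one after another.
    print(blocks_needed(4 * 1024))  # -> 128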

> So maybe you need several iterations, and I presume that is where the Lightning
> Network will pitch in, with several such microtransactions going on.
> What happens if it fails in an iteration? Does the recipient of the file
> remain happy with the partial content? Or will the payment be revoked
> (not sure how) if the recipient doesn't get the full content?

This is about the DLAS-up protocol.
The protocol uses OG AMP, so the payments and data transfers are revoked if
it fails.
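
For intuition about why the whole transfer is all-or-nothing: OG AMP splits a
secret into additive (XOR) shares, one per partial payment, and the recipient
can settle only after reconstructing the whole secret, which is what lets the
payments and the embedded data be revoked together. A toy sketch of just the
share-splitting step (not the actual OG AMP key derivation):

    import os
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split_secret(secret: bytes, n: int) -> list:
        # n - 1 random shares plus one share that XORs the rest back to `secret`
        shares = [os.urandom(len(secret)) for _ in range(n - 1)]
        shares.append(reduce(xor, shares, secret))
        return shares

    secret = os.urandom(32)
    shares = split_secret(secret, 4)
    assert reduce(xor, shares) == secret       # all shares -> secret recovered
    assert reduce(xor, shares[:-1]) != secret  # a missing share -> no secret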

Thanks,
Takaya Imai

On Thu, Jan 16, 2020 at 16:14 Subhra Mazumdar :

> Hello Takaya,
> I really liked the idea of data atomic swap mentioned over here.
> So as of now, if we consider the transfer of a file (maybe a few KB), then you
> split it into several blocks and use atomic multi-path payment, embedding the
> blocks in the preimages in order to obtain payment. But it might be the case
> that you do not have a sufficient number of paths to transfer all the blocks
> in one go because of the preimage size limitation of 256 bits (I didn't get
> the point that there is no limitation on data size; can anyone explain that?).
> So maybe you need several iterations, and I presume that is where the
> Lightning Network will pitch in, with several such microtransactions going
> on. What happens if it fails in an iteration? Does the recipient of the file
> remain happy with the partial content? Or will the payment be revoked (not
> sure how) if the recipient doesn't get the full content?
>
> On Mon, Nov 11, 2019 at 6:29 AM Takaya Imai <
> takaya.i...@frontier-ptnrs.com> wrote:
>
>> Hi all,
>>
>> I propose Data Lightning Atomic Swap.
>> Does anyone already have the same idea?
>>
>>
>> [Abstract]
>> This proposal is a way to swap data and a lightning payment atomically.
>> It has two patterns: in one, a payer swaps a data download for a lightning
>> payment to a payee (DLAS-down); in the other, a payer swaps a data upload
>> for a lightning payment to a payee (DLAS-up).
>>
>> The data is embedded in the preimage, so sending and receiving the data
>> require a lightning payment at the same time.
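>>
>> As a concrete sketch for a single 32-byte block (illustrative only, not an
>> implementation; names below are hypothetical): the payee uses the data block
>> itself as the preimage, so it can only claim the payment by revealing the
>> block to the payer.
>>
>>     import hashlib
>>
>>     data_block = b"x" * 32  # one 32-byte chunk of the data (= one preimage)
>>     payment_hash = hashlib.sha256(data_block).digest()  # goes in the invoice
>>
>>     # The payer pays an HTLC locked to payment_hash; the payee can settle it
>>     # only by revealing data_block, which the payer thereby learns.
>>     assert hashlib.sha256(data_block).digest() == payment_hash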
>>
>> -
>>
>> [Motivation]
>> Atomic Swaps among cryptocurrencies can be implemented in various ways
>> (on-chain to on-chain[1], on-chain to off-chain (Submarine Swap[2])). And
>> Atomic Swaps between data and cryptocurrencies have also been proposed as
>> part of the TumbleBit mechanism[3], the Storm mechanism[4], and so on.
>>
>> Recently Joost Jager proposed Instant messages with lightning onion
>> routing, whatsat[5], which uses the recent sphinx payload change[6]. This
>> is very awesome but not atomic with the lightning payment.
>>
>> An atomic lightning mechanism for data is useful in the use cases below.
>>
>> -
>>
>> [Pros & Cons]
>>
>> * DLAS-down
>> ** Pros
>> *** Atomic data download exchange with lightning payment
>> ** Cons
>> *** It needs a better mechanism to expand the data size
>>
>> * DLAS-up
>> ** Pros
>> *** Atomic data upload exchange with lightning payment
>> ** Cons
>> *** OG AMP[7] is needed to implement it
>>
>> -
>>
>> [What I describe]
>> * A way to swap data with a lightning payment atomically.
>>
>> -
>>
>> [What I do not describe]
>> * A way to detect whether the data is correct or not, namely a
>> zero-knowledge-proof process.
>>
>> For example, a probabilistically checkable proof as TumbleBit[3] proposed.
>> A plain message as data is no problem because there is no need to check
>> whether the message is correct or not.
>>
>> * A way to handle the case where different preimages are used along a
>> payment route, as with Multi-hop locks.
>>
>> -
>>
>> [Specification]
>>
>> The Lightning Network (LN) handles the preimage roughly as in the brief
>> diagram below.
>>
>> Payer                Mediators           Payee
>> ==================================================
>>                                          Preimage
>> Preimage Hash   <--------- invoice ----- Preimage Hash
>> Preimage Hash   ---> Preimage Hash --->  Preimage Hash
>> Preimage        <--- Preimage      <---  Preimage
>>
>> As you know, the preimage the Payer gets can be 

Re: [Lightning-dev] On Path Privacy

2020-01-19 Thread ZmnSCPxj via Lightning-dev
Good morning list,

Few people have responded to this topic, but when has that ever stopped me from 
spamming the mailing list?

Analysis of Path Extension for Privacy
======================================

As mentioned before, increasing the path length deliberately, by any means, 
intuitively makes it seem that privacy is improved.
I will now show how this is not a panacea and that its cost in terms of 
increased fees, increased risk of stuckness, and increased worst-case stuckness 
might not justify its benefits, which are weaker (at least on the current 
network) than they might seem at first glance.

Let us first start with an example network:


A -- B -- S1 -- C -- D -- G -- H
|      /     /    \      /      \
| /      /           \  /        \
I -- J                  E -- S2-- F

Let us suppose that `A` wishes to make a payment to `F`.
Further, let us assume for simplicity that all nodes charge the same amount for 
forwarding, and that they all charge 0.01 units base and 0 proportional.

Finally, let us also assume that `S1` and `S2` are two nodes run by a single 
surveillor, and that all the other nodes are otherwise not interested in 
destroying Lightning privacy.
Now, obviously `A` and `F` do not know that `S1` and `S2` are run by a
surveillor, and thus cannot identify `S1` and `S2` as belonging to a single
actor that wants to snoop on their payment.


Now let us suppose that A takes the shortest path `A -> B -> S1 -> C -> E -> S2 
-> F`.
Would privacy be improved by artificially increasing the path length?

We can observe that, with the current network, due to the same hash being used 
in the entire route, `S1` and `S2` can easily notice when they are on the same 
route.

Thus, suppose we increased the path length by taking this route instead: `A -> 
B -> S1 -> I -> J -> C -> D -> G -> E -> S2 -> F`.
Then our surveillor gets exactly the same information as in the case `A -> B -> 
S1 -> C -> E -> S2 -> F`.

* An incoming payment went into S1 via B, so the payer must be A or B (taking 
the shortest-path heuristic pre-S1).
* An outgoing payment went out of S2 via F, so the payee must be F (taking the 
shortest-path heuristic post-S2).

Thus, in many cases, it is immaterial whether we actually inserted greater length 
onto the path.
We can generally expect that surveillors can easily just buy a bunch of BTC and 
insert many nodes into the network to act as surveillance nodes, especially 
since forwarding pays fees and thus the requirement to lock funds in channels 
is not in fact an opportunity cost.
Once a path passes through more than one surveillor node, any increase in 
length between the two endmost cooperating surveillor nodes does not improve 
privacy.
Thus `A` would end up paying the costs of increased path length (higher fees, 
higher risk of stuckness, higher worst-case stuckness) *without* any benefit 
to its privacy.
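
To put rough numbers on that cost, under the fee assumption above (0.01 base,
0 proportional at each forwarding node) and using the two example routes:

    BASE_FEE = 0.01  # units charged by each forwarding node, as assumed above

    short_route = ["A", "B", "S1", "C", "E", "S2", "F"]
    long_route = ["A", "B", "S1", "I", "J", "C", "D", "G", "E", "S2", "F"]

    def route_fee(route):
        # every node except the sender and the final recipient forwards once
        return (len(route) - 2) * BASE_FEE

    print(route_fee(short_route))  # 5 forwarders -> 0.05 units
    print(route_fee(long_route))   # 9 forwarders -> 0.09 units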

Indeed, we might point out that if `A` took the path `A -> B -> S1 -> C -> D -> 
G -> H -> F`, then it would have avoided `S2` and the single surveillor would 
have a much larger set of possible destinations to analyze.
But if it instead took the longer path `A -> B -> S1 -> I -> J -> C -> D -> G 
-> E -> S2 -> F`, then `S2` would have helped the surveillor pin down the 
destination that either A or B is paying to.

Which brings up the next point: a longer path also means increased chances of going 
through *two* surveillance nodes, and thus having the payment endpoints 
(ultimate sender, ultimate receiver) be identified by surveillor nodes.
The increased path length also means less reliable forwarding (more nodes can 
fail), meaning it is now more likely that `A` will have to find another route.
This increases the chances that `S1` and `S2` will be on *some* route.
And because the same hash will *also* be used on the alternate routing 
attempts, if on one attempt we went `A -> B -> S1 -> C -> D -> G -> H -> F` 
and on the next attempt we went `A -> I -> J -> C -> E -> S2 -> F`, then 
both `S1` and `S2` can still triangulate the ultimate payer and ultimate payee, 
just as if the shortest path had been taken in the first place!
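
As a toy illustration of that triangulation (a sketch of the inference only,
not of any real node software): because the payment hash is identical across
hops and across attempts, the surveillor simply keys everything it observes on
that hash.

    from collections import defaultdict

    # (payment_hash, surveillance_node, previous_hop, next_hop), as observed
    observations = [
        ("H1", "S1", "B", "C"),  # attempt 1: A -> B -> S1 -> C -> D -> G -> H -> F
        ("H1", "S2", "E", "F"),  # attempt 2: A -> I -> J -> C -> E -> S2 -> F
    ]

    by_hash = defaultdict(list)
    for payment_hash, node, prev_hop, next_hop in observations:
        by_hash[payment_hash].append((node, prev_hop, next_hop))

    for payment_hash, sightings in by_hash.items():
        payer_hints = {prev for node, prev, _ in sightings if node == "S1"}
        payee_hints = {nxt for node, _, nxt in sightings if node == "S2"}
        # prints: H1 payer near {'B'} payee near {'F'}
        print(payment_hash, "payer near", payer_hints, "payee near", payee_hints)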

The intuition that "sub-optimal paths means better privacy" is countered by the 
intuition that "the more people you tell, the less private it is".
At least on the current network, the fact that we can identify a single payment 
(across multiple nodes within an attempt, and across multiple attempts trying 
to forward the same payment) means increased path length does not buy a lot of 
privacy, and the significant losses in usability might not be commensurate with 
the mild increase in privacy it *theoretically* could get.

Fortunately, the PTLC-based path decorrelation fixes most of these, and with 
that, it is at least hard for `S1` and `S2` to identify whether forwards going 
through them are part of the same payment, at least from the payment hash.
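
A toy numeric picture of why that helps (real PTLCs use elliptic-curve points
and adaptor signatures; plain integers modulo a prime are used here only to
show that the lock values seen at `S1` and `S2` no longer match):

    import secrets

    P = 2**127 - 1  # toy modulus standing in for the secp256k1 group order

    payment_secret = secrets.randbelow(P)
    # one sender-chosen blinding factor per hop
    blinding = [secrets.randbelow(P) for _ in range(6)]

    # the lock each hop sees is the secret plus the blinding accumulated so far
    locks, acc = [], 0
    for b in blinding:
        acc = (acc + b) % P
        locks.append((payment_secret + acc) % P)

    lock_at_S1, lock_at_S2 = locks[1], locks[4]
    # Unlike a shared payment hash, these two values look unrelated to anyone
    # who does not know the blinding factors between the two hops.
    assert lock_at_S1 != lock_at_S2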

Digression: `permuteroute` Privacy
----------------------------------

Previously, [I discussed the `permuteroute`