Re: [Lightning-dev] Routemap scaling (was: Just in Time Routing (JIT-Routing) and a channel rebalancing heuristic as an add on for improved routing success in BOLT 1.0)

2019-04-09 Thread ZmnSCPxj via Lightning-dev
Good morning m.a.holden,


> > Let me clarify: When you say "node" here, do you mean Lightning Network 
> > node?
> > Or do you mean instead an in-memory node?
>
> Neither. I meant a node in a tree. I tried to use the term bucket to make the 
> distinction between this and Lightning node.
> The tree is not strictly in-memory. There is a purely conceptual global 
> quadtree.

Can you then be more specific about the behavior of a particular node?
Note I use the term "node" specifically to refer to Lightning Network Node.

Given the node has space for N channels or N nodes (or whatever), how does the 
node prune its routemap?
Which gossip messages does it ignore and which ones does it check?

If that node has to pay to a node that is not in its routemap, how does it 
perform the routemap lookup?

>
> Each bucket is essentially a view over the union of its 4 children. The 
> bucket 00b is (0000b U 0001b U 0010b U 0011b). The bucket 0011b is (001100b U 
> 001101b U 001110b U 001111b), etc. Each client chooses a suitable max_depth <= 
> HASHLEN/2, and the leaves of their tree contain a set of all PKH whose prefix 
> matches the bucket index. A client will keep, at minimum, a subtree of the 
> global quadtree (and both nodes for all channels where at least one of the 
> nodes is in the same subtree).
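
To make sure I am reading the above correctly, here is a small illustrative
sketch (in Python, with names of my own invention, not from any actual
implementation) of how I understand the bucket structure; please correct me if
it does not match your intent:

    def pkh_prefix_bits(pkh: bytes, depth: int) -> str:
        """Return the first 2*depth bits of a PKH as a bit string
        (each quadtree level consumes two bits)."""
        bits = "".join(format(b, "08b") for b in pkh)
        return bits[:2 * depth]

    class Bucket:
        """One bucket of the conceptual global quadtree.

        Bucket 00b is the union of its children 0000b, 0001b, 0010b and
        0011b; leaves at the client's chosen max_depth hold the set of
        PKHs whose prefix matches the bucket index."""

        def __init__(self, index: str):
            self.index = index      # bit-string prefix, two bits per level
            self.members = set()    # PKHs matching this prefix (leaves only)

        def children(self):
            return [Bucket(self.index + s) for s in ("00", "01", "10", "11")]

        def contains(self, pkh: bytes) -> bool:
            return pkh_prefix_bits(pkh, len(self.index) // 2) == self.index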

When you refer to "sub-tree", do you mean each node stores a tree of certain 
maximum height, with the root of that tree being some random node on the global 
tree?

When you refer to "bucket spilling", what does it mean when the buvket has to 
spill beyond the maximum height of the tree that the node is willing to store?
Obviously some kind of pruning has to be done.
That is the whole point of this exercise.

I still do not see the benefit of your lookup scheme if it is only an in-memory 
lookup.
That is not what is interesting at all.
What is interesting is remote lookup, over the network.
It is what must be carefully designed, as the network cannot be trusted and 
anyone can be an attacker and anyone can be a victim.

Suppose a node decides to store the prefix 0b, storing the subtree at 
that point with a height of only 2 (for simplicity).
What happens when it needs to look for a node with prefix 0b01101101?
Does it mean the node has to somehow store nodes that are outside of the prefix 
it decides to store?
Just what data does it have to store?
And why not just use a dist() measure like I proposed?
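
To illustrate the question with purely hypothetical numbers (a node storing
the subtree rooted at prefix 00b with height 2, and a destination whose PKH
prefix is 01101101b), something like this Python check is what I imagine the
node would have to perform:

    def in_stored_subtree(target_prefix: str, stored_prefix: str, height: int) -> bool:
        """True if the lookup target falls inside the locally stored subtree."""
        max_bits = len(stored_prefix) + 2 * height   # two bits per quadtree level
        return (target_prefix.startswith(stored_prefix)
                and len(target_prefix) <= max_bits)

    # The destination is outside the stored prefix, so it cannot be found
    # locally at all, no matter how deep the stored subtree is.
    print(in_stored_subtree("01101101", stored_prefix="00", height=2))   # False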

> > As I understand your proposal, taking over or taking down a node near the 
> > root of the tree will make it difficult to look up several nodes at once.
> > This is because the tree nature makes nodes near the root of the tree (top 
> > of the hierarchy) much more important than leaf nodes, which only worry 
> > about themselves.
>
> If a node advertises itself near the top of the tree then it will be a better 
> information provider than others, but this is never to the exclusion of 
> others below it. All of the information a node in the parent bucket holds is 
> held in part by the nodes in the child buckets. The parent does not know 
> anything that other people don't know too. No nodes are "more important", but 
> they might potentially be "more optimal".

We should face the fact that nodes that know more about the network are "more 
important", really.
Thus, they should be protected.
The simplest protection is to anonymize them by never revealing how much each 
node actually knows of the network.
This ensures that all nodes appear as equal peers, with nobody easily visible as 
more important than anyone else.


Instead, each node that is used in a lookup can simply silently delegate the 
lookup (without informing whoever is asking) to some other arbitrary node that it 
knows is more likely to know the destination.
That node might know more than this node, or less, but that is immaterial --- no 
node knows how much any other node knows.
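
As a toy sketch of this "silent delegation" (in Python, with in-memory dicts
standing in for actual peers and network queries; the delegation policy shown
is purely illustrative):

    def handle_route_lookup(destination, local_routemap, peer_routemaps):
        """Answer a lookup from our own (limited) routemap if possible,
        otherwise silently delegate to a peer.  The asker receives only a
        route or a failure, and cannot tell which case occurred."""
        route = local_routemap.get(destination)
        if route is not None:
            return route
        # Delegate to some peer we believe is more likely to know the
        # destination.  Here we simply try each known peer in turn; a real
        # node would use whatever heuristic it likes, without ever telling
        # the asker that it delegated.
        for peer_map in peer_routemaps:
            route = peer_map.get(destination)
            if route is not None:
                return route
        return None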


>
> If you know about some nodes in bucket 0011b, but none in 0001b where a 
> payment destination is, then you could query any node in 0011b and ask about 
> any nodes in bucket 0001b. Since the buckets are nearby, there is a greater 
> probability that they'll know about more nodes than somebody further away. 
> This would be similar to how your proposal operates.

Then just use my proposal.
It is simpler, and I already presented the algorithms used by each node in 
detail, including how each node limits the memory it uses for its routemap.

As I mentioned, the point is to provide anonymity by not revealing how much you 
know.
There is a tradeoff between anonymity and some measure of efficiency.
Obviously, I prioritize anonymity, else I would not be ZmnSCPxj.


> you could query any node in 0011b and ask about any nodes in bucket 0001b

In order to query a node, you need to know that node, meaning it (and at least 
one route to it) is stored somewhere in your (limited) memory.

Note that even now in the network, some nodes do not provide a way to contact 
them directly --- you can 

[Lightning-dev] Final expiry probing attack

2019-04-09 Thread Joost Jager
Hi all,

In https://github.com/lightningnetwork/lightning-rfc/pull/516,
the incorrect_or_unknown_payment_details failure message is extended with
an htlc_msat field and thereby replaces the former incorrect_payment_amount
message. The objective of this change is to prevent a probing attack that
allows an intermediate node to find out the final destination of the
payment.

Shouldn't the same change be applied to the cltv expiry?

Currently in lnd, we return a final_expiry_too_soon message if the htlc
expiry does not meet the invoice cltv delta requirement. This can be used
for probing by using low expiry values, similar to how this was previously
possible with low amounts.

The proposed change would be: when the htlc expiry doesn't meet the invoice
cltv delta requirement, return an incorrect_or_unknown_payment_details
failure (extended with a new htlc_expiry field) instead
of final_expiry_too_soon.
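
As a rough sketch of the receiver-side check being proposed (not actual lnd
code; parameter names are illustrative):

    def check_final_htlc(htlc_amount_msat, htlc_expiry, current_height,
                         invoice_amount_msat, invoice_min_final_cltv_delta):
        """Return None if the HTLC is acceptable, else the failure to send."""
        if htlc_amount_msat < invoice_amount_msat:
            # Already covered by #516: amount failures fold into the generic
            # failure, carrying the received htlc_msat.
            return ("incorrect_or_unknown_payment_details",
                    {"htlc_msat": htlc_amount_msat})
        if htlc_expiry < current_height + invoice_min_final_cltv_delta:
            # Proposed: do not return final_expiry_too_soon, which would let
            # an intermediate node confirm it is forwarding to the final hop.
            return ("incorrect_or_unknown_payment_details",
                    {"htlc_msat": htlc_amount_msat,
                     "htlc_expiry": htlc_expiry})
        return None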

Joost.
_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev