On Sunday 30 December 2007 13:19, Michael Rogers wrote:
> > The idea was to pick two random nodes from within the cell, and route
> > through them - choosing a new tunnel for every request.
> 
> Why choose a new tunnel for every request? If an attacker inside the
> cell can somehow link messages leaving the cell with messages passing
> through his node, he can gather one predecessor attack sample per
> intercepted message.

True, but:
- It is fast.
- It is simple.
- With such a simple request/response, it should be possible to make it 
extremely difficult for two nodes to realise they are on the same path, 
except where they are actually adjacent.

If:
- The two nodes on the path controlled by the attacker are not adjacent,
- We have a separate return path included in the routing information,
- Each hop waits to receive the full reply before forwarding it,
Then, as far as I can see, it will be very difficult for an attacker to 
identify that the two nodes are on the same path: with bidirectional tunnels 
his only leverage is the reply time, but with unidirectional tunnels he loses 
even that.
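To make the unidirectional case concrete, here is a rough Python sketch of the 
layered wrapping this implies: the originator wraps one layer per hop, and each 
hop strips exactly its own layer, so no hop learns how many layers remain. The 
SHA-256-based keystream is only a stand-in for a real cipher, and the keys and 
payload are illustrative:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a keystream by hashing key||counter - a stand-in for a real
    # stream cipher, NOT something to use in practice.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying the same key twice is a no-op.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(payload: bytes, hop_keys: list[bytes]) -> bytes:
    # The originator adds one layer per hop, innermost layer first.
    for key in reversed(hop_keys):
        payload = xor_layer(payload, key)
    return payload

def unwrap_one(payload: bytes, key: bytes) -> bytes:
    # Each hop strips exactly its own layer.
    return xor_layer(payload, key)

keys = [b"hop1", b"hop2", b"hop3"]   # illustrative per-hop keys
msg = wrap(b"request", keys)
for k in keys:                       # the message traverses the hops in order
    msg = unwrap_one(msg, k)
```

After the last hop strips its layer, the plaintext request emerges at the exit 
node only.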

If the two nodes *are* adjacent, we can severely limit the information 
available. The last node is the second chosen node (assuming we choose two 
nodes including the exit node). The second-last node is either a node between 
the first chosen node and the second chosen node, or is in fact the first 
chosen node, in which case its predecessor is a back-bearing towards the 
originator. We can prevent this latter case by ensuring there is always at 
least one hop between the first chosen node and the second chosen node. Since 
this constraint reveals nothing about the originator, it should be a safe 
addition.
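A sketch of that selection rule, assuming we know the cell's topology as an 
adjacency map (the node names and structure here are made up): simply reject 
any pair of chosen nodes that are direct peers, so there is always at least 
one hop between them.

```python
import random

def choose_tunnel_nodes(neighbours: dict[str, set[str]], origin: str):
    # neighbours: hypothetical adjacency map of the cell's known topology.
    # Reject pairs that are direct peers, guaranteeing at least one hop
    # between the first chosen node and the exit node.
    nodes = [n for n in neighbours if n != origin]
    while True:
        first, exit_node = random.sample(nodes, 2)
        if exit_node not in neighbours[first]:
            return first, exit_node

# A - B - C - D chain: the only non-adjacent pair excluding A is (B, D).
topology = {
    "A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"},
}
first, exit_node = choose_tunnel_nodes(topology, origin="A")
```

(A real implementation would bound the retry loop and fall back if the cell is 
too densely connected for any non-adjacent pair to exist.)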

The separate return path will obviously be unreliable - any node along it 
could refuse it. The same goes for the outward path: any node could refuse 
the path, or sabotage it, and it will be impossible to establish which node 
is at fault, since any node can always blame a later node. However, it *is* 
known that one of the nodes along that path is the problem, and this may be 
enough. With enough redundancy it might even be possible to make passive 
requests work through this, although the main concern here is the real-time 
phase of a request.
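One way to use "one of the nodes on this path is at fault" without knowing 
which: spread suspicion equally over the whole failed path, and prefer 
low-suspicion nodes when building future tunnels. A hedged sketch - the 
scoring scheme is my own illustration, not an established design:

```python
from collections import defaultdict

suspicion = defaultdict(float)

def record_failure(path):
    # We only know the fault lies *somewhere* on the path,
    # so spread the blame equally across every node on it.
    for node in path:
        suspicion[node] += 1.0 / len(path)

def record_success(path):
    # A working path mildly exonerates everyone on it.
    for node in path:
        suspicion[node] = max(0.0, suspicion[node] - 1.0 / len(path))

record_failure(["A", "B", "C"])  # refused or sabotaged somewhere on A-B-C
record_failure(["A", "D", "E"])  # A is the common element, so it accumulates
```

Over many requests the intersection of failed paths singles out persistent 
saboteurs, even though no single failure is attributable.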

Perhaps I am missing something critical about onion routing here - but if so, 
what?
> 
> > The one big weakness is that cells would have to be relatively small 
> > to keep global data available. The complication (IMHO solvable) is bogus 
> > topology attacks.
> 
> And bogus public key attacks (the nodes beyond my neighbour might really
> exist, but my neighbour might MITM their public keys in order to link
> traffic passing through his node with traffic exiting the cell). When
> you say solvable, what do you have in mind?

Each node publishes its links, including its pubkeys, their pubkey hashes, and 
a signature; this is relayed to some distance. We then compute a credibility 
value for each node in the graph: start with a value of 1.0 on each of our 
direct peers, divide each node's value by its number of peers as we move 
outwards, and sum the values arriving from different branches. Nodes with a 
low credibility value can then be ignored. We may not be able to do exactly 
this if we have to have semi-permanent cell structures, but it should be 
possible to find an algorithm serving the same purpose (eigenvector 
centrality? TrustRank etc.?).
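Roughly, in Python - a sketch of the dilution idea only, not a worked-out 
algorithm (flow back towards closer nodes is simply ignored, and the example 
topology is invented):

```python
from collections import deque

def credibility(graph: dict[str, set[str]], direct_peers: set[str]) -> dict[str, float]:
    # Seed each direct peer with 1.0, then push value outwards breadth-first,
    # dividing a node's credibility among its peers and summing what arrives
    # from different branches.
    cred = {p: 1.0 for p in direct_peers}
    dist = {p: 0 for p in direct_peers}
    queue = deque(direct_peers)
    while queue:
        node = queue.popleft()
        if not graph[node]:
            continue
        share = cred[node] / len(graph[node])
        for nxt in graph[node]:
            if nxt in direct_peers:
                continue
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
            if dist[nxt] == dist[node] + 1:  # only flow outwards, never back
                cred[nxt] = cred.get(nxt, 0.0) + share
    return cred

# B and C are our direct peers; E sits one hop beyond D.
graph = {
    "B": {"C", "D"}, "C": {"B", "D"},
    "D": {"B", "C", "E"}, "E": {"D"},
}
cred = credibility(graph, {"B", "C"})
```

A node that a single peer claims many neighbours for thereby dilutes the 
credibility of every node it reports, which is exactly the property we want 
against bogus-topology attacks.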
> 
> Cheers,
> Michael


_______________________________________________
Devl mailing list
Devl@freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
