Matthew Toseland wrote:
> How is that a useful attack on a darknet?

It allows you to identify people, or at least to make a good guess at 
their identities.

Let's say there are 10,000 people in the network and you know 100 
(person,location) pairs. One of these is yourself, the others can be 
people you know (not necessarily your Freenet neighbours), people who've 
revealed personal information about themselves in conversations, etc. By 
measuring the distance between an anonymous person's location and each 
of your (person,location) pairs, you can guess certain things about the 
anonymous person. If the anonymous person is very close to one of your 
(person,location) pairs, you may be able to guess who they are.
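The comparison step is mechanical. Here's a toy sketch in Python (all names and numbers are made up; it assumes Freenet-style locations on the [0,1) circle with wrap-around distance):

```python
import random

def circ_dist(a, b):
    """Distance between two locations on the unit circle [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

random.seed(1)

# Illustrative data: 100 (person, location) pairs the attacker has
# collected from neighbours, conversations, etc.
known = {"person%d" % i: random.random() for i in range(100)}

def best_guess(anon_location):
    """Return the known person whose location is closest to the
    anonymous location, with the distance (smaller = better guess)."""
    name = min(known, key=lambda p: circ_dist(known[p], anon_location))
    return name, circ_dist(known[name], anon_location)

# An anonymous request appears to originate near location 0.42:
guess, dist = best_guess(0.42)
```

If `dist` is much smaller than the typical spacing between locations, the guess is probably right; with 100 known pairs out of 10,000 nodes, most anonymous users won't be near a known pair, but the ones who are can be identified with some confidence.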

Essentially, location-swapping creates a map of the social network, and 
greedy routing gives you a person's coordinates on that map. That's 
great for scalability, but it strikes me as a bad idea from the 
perspective of anonymity.

> True. This is closely related to correlation attacks; if a request is
> from a node which is too far away from the target, it's probably local.
> Premix routing will fix this; a random start point will help.

I'm not sure that a random start point will help. Messages have to 
travel from the source to the start point with the destination address 
visible, because the source doesn't have the start point's public key. 
It also has to be possible to distinguish between messages that are on 
the way from the source to the start point, and messages that are on the 
way from the start point to the destination. So the attacker can just 
guess the source of messages on their way to the start point, and 
correlate that with the destination address.

Using a random start point for each connection could actually make the 
attack much easier, because previously the source's packets would tend 
to follow the same route every time, whereas now they follow a different 
route to each start point, allowing the attacker to gather more samples. 
Each sample rules out half the network on average, so you only have to 
intersect about log2 of the network size - roughly 14 samples for a 
10,000-node network - to identify the source.
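The arithmetic can be checked with a toy simulation (illustrative only; it assumes each observed route independently rules out half of the remaining candidate nodes, and that the real source is never ruled out):

```python
import math
import random

random.seed(2)

def samples_needed(n_nodes, true_source):
    """Count how many halve-the-candidates eliminations it takes to
    narrow the candidate set down to a single node."""
    candidates = set(range(n_nodes))
    rounds = 0
    while len(candidates) > 1:
        # Each sample excludes half the remaining candidates; the
        # evidence never excludes the actual source.
        excluded = random.sample(sorted(candidates - {true_source}),
                                 len(candidates) // 2)
        candidates -= set(excluded)
        rounds += 1
    return rounds

# For a 10,000-node network this comes out to about
# log2(10,000) ~ 14 samples.
rounds = samples_needed(10_000, 1234)
```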

> It is not true that every routing step will always route the request
> closer to the target. We allow it to go for 10 hops without getting a
> closer best-seen-location, before terminating the request.

That could help, although it also reveals information about the previous 
hop's other neighbours (you can tell that the previous hop has no 
neighbour who's online and closer to the destination than you are).
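What you learn at each such hop can be phrased as a constraint on the previous hop's possible neighbour sets. A toy sketch (hypothetical function, reusing circular [0,1) distance, and assuming strictly greedy forwarding to the closest online neighbour):

```python
def circ_dist(a, b):
    """Distance between two locations on the unit circle [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def consistent_neighbour_sets(target, me, candidate_sets):
    """Given that the previous hop forwarded a request for `target` to
    us (at location `me`), keep only the hypothesised neighbour sets
    that contain no online node closer to the target than we are.
    Each candidate set is a list of (location, is_online) pairs."""
    keep = []
    for neighbours in candidate_sets:
        if not any(online and circ_dist(loc, target) < circ_dist(me, target)
                   for loc, online in neighbours):
            keep.append(neighbours)
    return keep
</wbr>```

Each forwarded request prunes the space of hypotheses about the previous hop's neighbourhood, so a long-lived observer gradually maps its neighbours' locations and uptimes.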

Cheers,
Michael
