On Thursday 20 December 2007 13:03, Michael Rogers wrote:
> Matthew Toseland wrote:
> > PROPOSAL:
> > Add a flag RandomRoute. This may be set when a request starts (up to
> > the user). There is a 50% chance of its being unset. So on average it
> > adds 2 hops to the journey - but there is a small chance of requests
> > going much further than that. The advantage is that it greatly obscures
> > the picture for a distant attacker, by starting off in a somewhat
> > random part of the keyspace.
>       
> The second part of Nikita Borisov's thesis [1] is an anonymising DHT
> that does something similar to your proposal: each lookup is forwarded
> randomly for a small number of hops, then forwarded greedily to the
> target. He uses the mixing time of the graph to show how many random
> hops are necessary to make it equally likely that any node was the
> initiator. His DHT is based on Koorde [2] because it has the fastest
> mixing time of any DHT. I don't know how to work out the mixing time of
> Kleinberg graphs but I guess Oskar will, in which case we can work out
> how many random hops we need, at least in an ideal network.

Ooooooooh....!
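For what it's worth, the "adds 2 hops on average" figure checks out: a per-hop 50% unset coin gives a geometric number of random hops with mean 1/0.5 = 2. A throwaway simulation (function names made up, assuming the flag is tested after each hop) confirms it:

```python
import random

def random_hops(p_unset, rng):
    """Hops taken while a hypothetical RandomRoute flag stays set:
    forward one random hop, then clear the flag with probability p_unset."""
    hops = 0
    while True:
        hops += 1                      # take one random-routing hop
        if rng.random() < p_unset:     # coin flip: flag gets unset
            return hops

rng = random.Random(42)
trials = 100_000
mean_extra = sum(random_hops(0.5, rng) for _ in range(trials)) / trials
# mean_extra comes out close to 2, matching the figure in the proposal
```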
> 
> > NOTES:
> > We could not overload HTL=10 because HTL is reset to 10 every time we get 
> > closer to the target: we *do not* want to go into random route mode just 
> > because we got a bit closer to the target!
> 
> Yup, IMHO we should get rid of the hop counter and use a weighted coin
> to decide when to switch from random to greedy forwarding, and another
> weighted coin to decide when the request has run out of hops. (I realise
> this will sometimes lead to long paths, but at least we can analyse how
> much information it leaks. AFAIK that's not possible with the current
> hop counter.)

This will frequently lead to very long paths. Also, I don't know how well 
such unpredictability will work with future load management/queueing 
schemes. It's worth considering, but have a look at the previous 
discussions. It would certainly have been a problem for NGR. For 0.7, 
there are several obvious issues:
- Not compatible afaics with ULPRs / failure tables. ULPRs stop requests 
if we've already routed them (for some period) with the same HTL or less.
- Possible load management issues.
- It's a good "invisible" DoS on a single request.
- The current counter is reset when we get closer to the target, so to 
some degree it adapts to the network size. We'd lose this, so we might 
have to set the probability of termination quite low, and fiddle with 
it...
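On the "very long paths" point: a memoryless stop coin makes the path length geometric, so a non-trivial fraction of requests run to several times the mean. A quick sketch (the 0.1 stop probability is an assumed value, giving a mean path of 10 hops):

```python
import random

def path_length(p_stop, rng):
    """Hops until a per-hop weighted coin says stop (memoryless 'HTL')."""
    hops = 1
    while rng.random() >= p_stop:
        hops += 1
    return hops

rng = random.Random(7)
p_stop = 0.1                        # assumed: mean path length 10 hops
lengths = [path_length(p_stop, rng) for _ in range(100_000)]
mean_len = sum(lengths) / len(lengths)
# fraction of paths longer than 3x the mean: 0.9^30, roughly 4%
frac_long = sum(n > 30 for n in lengths) / len(lengths)
```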

Why can we not analyse how much information the current counter leaks? Because 
of the uncertain topology/average number of resets?
> 
> > PROBLEMS:
> > It reveals that the request is relatively early. This will make local 
> > correlation attacks even easier. So we should do it *after* we have premix 
> > routing, at which point that won't be a problem any more.
> 
> This could actually be used in place of premix routing: instead of
> constructing cells we use random "tunnel IDs". 

No use against a local attacker.

> Messages belonging to the 
> same splitfile are marked by their initiator with the same tunnel ID.

So a local attacker *immediately* knows that the keys are connected. And 
so does a remote attacker if the tunnel IDs leak over long distances - 
which they will, especially with probabilistic termination.

> During the random phase, each hop chooses the next hop or switches to
> greedy forwarding pseudo-randomly based on the tunnel ID, so messages
> with the same tunnel ID follow the same pseudo-random path and switch to
> greedy forwarding at the same point. 

This has been proposed before. One weakness is that you'd need more than 
one tunnel to get good or predictable performance. And as soon as you have 
multiple correlatable or identical tunnels from the same node, it gets 
really easy to identify the requestor - admittedly it may require a 
conspiracy... Am I wrong here? That was always my analysis before, but by 
all means poke holes in it - I'd love to implement something simple!

Also, you have a very small anonymity set: if the tunnel is on average 5 
hops long, there is a 20% chance that the node that sent you the request 
is the originator. This is unacceptable, especially if you can correlate 
those 20% observations across linked but non-identical tunnels.
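The 20% figure follows from the tunnel lengths being memoryless: each tunnel has exactly one first-hop position out of a mean of 5 relay positions, so one in five relay observations has the originator as sender. A sketch checking this (assumed stop probability 0.2, i.e. mean length 5):

```python
import random

def first_hop_fraction(p_stop, tunnels, rng):
    """Over many memoryless tunnels, the fraction of relay observations
    in which the sending node is the tunnel's originator (first hop)."""
    first = total = 0
    for _ in range(tunnels):
        length = 1
        while rng.random() >= p_stop:
            length += 1
        first += 1        # exactly one first-hop position per tunnel
        total += length   # every relay on the path sees the request once
    return first / total

rng = random.Random(3)
frac = first_hop_fraction(0.2, 100_000, rng)   # mean tunnel length 5
# frac comes out close to 0.2 - the 20% chance above
```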

Finally, are there any vulnerabilities in changing paths?

> Thus no predecessor attacks are 
> possible during the random phase. Predecessor attacks during the greedy
> phase will lead the attacker to the last hop of the random phase, which
> will provide no information about the location of the initiator if the
> number of random hops is sufficient.

Indeed, random hops seem to be the way forward. But if you get the request 
during the random hop phase, you can attack the originator, although it will 
take much more time than without random hops.
> 
> Tunnels will sometimes break due to churn, but the same can happen with
> cells - no matter what system we use, churn will always allow
> predecessor attacks because it forces initiators to choose new paths.

The idea was to pick two random nodes from within the cell, and route through 
them - choosing a new tunnel for every request. When a node goes down, we get 
notified pretty quickly, since we're in the same cell. Our anonymity set is 
the cell, and nodes only move between cells by cells splitting when they get 
too big. The one big weakness is that cells would have to be relatively small 
to keep global data available. The complication (IMHO solvable) is bogus 
topology attacks.
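The tunnel-selection step of that scheme is trivial to sketch (names made up, cell maintenance and split logic omitted): pick two distinct random relays from your own cell, fresh for every request:

```python
import random

def build_tunnel(cell_members, originator, rng):
    """Pick two distinct random relays from the originator's cell; a
    fresh tunnel is chosen per request. Illustrative sketch only."""
    candidates = [n for n in cell_members if n != originator]
    relay_a, relay_b = rng.sample(candidates, 2)   # two distinct relays
    return [originator, relay_a, relay_b]

rng = random.Random(1)
cell = [f"n{i}" for i in range(20)]     # our anonymity set is the cell
tunnel = build_tunnel(cell, "n0", rng)
```

Since cell membership is known to all members, a dead relay is noticed quickly and the tunnel can simply be rebuilt with another draw.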
> 
> Cheers,
> Michael
> 
> [1] http://www.crhc.uiuc.edu/~nikita/papers/phd-thesis.pdf
> [2] http://iptps03.cs.berkeley.edu/final-papers/koorde.ps


_______________________________________________
Devl mailing list
Devl@freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
