How about simply encouraging specialisation, and caching only according to that?

On insertion, every node (0-th or n-th hop, all the same) caches the 
content with a likelihood that depends on how near it is to the node's 
specialisation. Each node should have only one specialisation, IMO. If 
that is insufficient, the one specialisation should broaden and shift, 
but there should be only one peak in the key-space concentration.

Each node should always be aware of its current specialisation. As keys 
get cached and dropped, this will change, but the node must keep track 
of it. The further a key is from the node's specialisation, the less 
likely it should be to get cached.
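
To make that concrete, roughly something like this (a Java-ish sketch; 
the class name, the exponential falloff and the drift rate are just 
things I picked for illustration, not anything that exists in the code):

import java.util.Random;

// Rough sketch only: the falloff curve and drift rate are invented for
// this example, not real Freenet code.
class SpecialisationCache {

    private final Random random = new Random();
    private double specialisation = 0.5; // single peak in the circular [0,1) keyspace
    private double tolerance = 0.1;      // how quickly the cache probability falls off

    // Circular distance between two locations in [0,1).
    private static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    // The further a key is from the specialisation, the less likely it is cached.
    boolean shouldCache(double keyLocation) {
        double d = distance(keyLocation, specialisation);
        double p = Math.exp(-d / tolerance); // 1.0 at the peak, falling off with distance
        return random.nextDouble() < p;
    }

    // Drift the peak slowly towards keys that actually get stored, so the node
    // stays aware of its (one) specialisation as the store contents change.
    void onCached(double keyLocation) {
        double d = keyLocation - specialisation;
        if (d > 0.5) d -= 1.0;   // take the short way round the circle
        if (d < -0.5) d += 1.0;
        specialisation = (specialisation + 0.01 * d + 1.0) % 1.0;
    }
}

The exact curve doesn't matter much; what matters is that there is one 
peak, and that it drifts with what the node actually ends up storing.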

If each node specialises, the routing should improve, and so will the 
deniability. The policy of never caching local requests seems crazy. 
They should be treated the same as any other requests WRT caching.

Everything else seems to be an unnecessary complication with benefits 
that are at best academic.

Node specialisation should arise by design, not by coincidence.

Just MHO.

Gordan

Matthew Toseland wrote:

Two possible caching policies for 0.7:
1. Cache everything, including locally requested files.
PRO: Attacker cannot distinguish your local requests from your passed-on
requests.
CON: He can however probe your datastore (either remotely or if it is
seized). (the Register attack)
BETTER FOR: Opennet.
2. Don't cache locally requested files at all. (Best with client-cache).
PRO: Attacker gains no information on your local requests from your store.
PRO: Useful option for debugging, even if not on in production.
CON: If neighbours then request the file, and don't find it, they know
for sure it's local.
BETTER FOR: Darknet. But depends on how much you trust your peers.

Interesting tradeoff. Unacceptable really.

We all know that the long term solution is to implement premix routing,
but that is definitely not going to happen in 0.7.0.

So here are some possibilities:

1. For the first say 3 hops, the data is routed as normal, but is not
cached. This is determined by a flag on the request, which is randomly
turned off with a probability of 33%.
PRO: Provides some plausible deniability even on darknet.
CON: Doesn't work at all on really small darknets, so will need to be
turned off manually on such.
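
Purely as illustration, the per-hop handling could look something like 
this, assuming the 33% is applied independently at each hop (all names 
are invented for the example):

import java.util.Random;

// Illustrative sketch of the "don't cache yet" flag described above.
class NoCacheFlag {

    private static final Random RANDOM = new Random();
    private static final double CLEAR_PROBABILITY = 1.0 / 3.0;

    // Returns true if this hop should cache the data it passes on.
    static boolean shouldCacheAtThisHop(Request req) {
        boolean cacheHere = !req.noCache;
        // If the flag is still set, clear it with probability 33% before
        // forwarding, so on average the data stays uncached for ~3 hops.
        if (req.noCache && RANDOM.nextDouble() < CLEAR_PROBABILITY) {
            req.noCache = false;
        }
        return cacheHere;
    }

    static class Request {
        boolean noCache = true; // set by the originator
    }
}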

2. Permanent, random routed tunnels for the first few hops. So, requests
initially go down the node's current tunnel. This is routed through a
few randomly chosen nodes (chosen on each hop, so no premix). The tunnel is
changed infrequently. A node may have several tunnels, for performance,
but it will generally reduce your anonymity to send correlated requests
down different tunnels.
PRO: More plausible deniability; some level of defence against
correlation attacks even. But anon set is still relatively small.
CON: The number of tunnels is a performance/anonymity tradeoff.
CON: A few extra hops.
CON: Sometimes will get bad tunnels.
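
Again purely as illustration (names and the rotation interval are 
invented for the example):

import java.util.List;
import java.util.Random;

// Sketch of the tunnel idea: the originator keeps one (or a few) entry
// tunnels, each pinned to a randomly chosen first hop, rotated only
// occasionally.
class Tunnel {

    private static final long ROTATE_INTERVAL_MS = 60 * 60 * 1000L; // e.g. hourly

    private final Random random = new Random();
    private final List<Peer> peers;
    private Peer firstHop;
    private long lastRotated;

    Tunnel(List<Peer> peers) {
        this.peers = peers;
        rotate();
    }

    // Correlated local requests all go down the same tunnel; only the first
    // hop is fixed here, each later node picks its own next hop at random,
    // so there is no source-routed premix path.
    Peer currentFirstHop() {
        if (System.currentTimeMillis() - lastRotated > ROTATE_INTERVAL_MS) {
            rotate();
        }
        return firstHop;
    }

    private void rotate() {
        firstHop = peers.get(random.nextInt(peers.size()));
        lastRotated = System.currentTimeMillis();
    }

    interface Peer {}
}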

Anyway, this seems best to me.
