Yes, this was proposed years ago and is in the bug tracker. It's usually
called "bundles" and is regarded as a poor man's tunneling system, in
that it doesn't actually tunnel but maintains many of the advantages of
tunnels.

There are performance problems with it, especially with the current
load-limiting system. Some of these could be mitigated on darknet, e.g.
by allowing more requests to be in flight. A lot of the work around
tunnels is reusable here, e.g. deciding which requests to group together.

And as Arne points out, if the bad guy sees a bundle, there's a high
chance that the directly connected peer is the originator. But this is
already true today, and right now Mallory has not only the HTL but also
the number of requests.

Note that on darknet we could eventually implement a full-blown tunnel
system - if PISCES can be turned into something implementable, which is
unclear to me at the moment. But let's not let the "perfect" be the
enemy of the good.

On 03/05/17 08:16, Arne Babenhauserheide wrote:
> Stefanie Roos <stefanie.r...@uwaterloo.ca> writes:
> 
>> Thanks.
>>
>> Sorry, a bit late in answering.
> 
> I documented an easier-to-implement version of your idea as a note in
> 
>     https://freenet.mantishub.io/view.php?id=3640#c12321
>     (0003640: Bundles (unencrypted tunnels))
> 
> The difference is that it bundles based on the source peer and mixes in
> local requests (routing by peer only uses information we have at
> routing time).
> 
> This is the note I added — does it fit your proposal well enough?
> 
> - For each source peer for which we do not decrement HTL18, select a target 
> peer at random to whom we forward all HTL18 requests of the source peer.
> - Mix all our local requests with those of one of the source peers, selected 
> at random.
> 
> This approach should defeat attacks based on the distribution of requests for 
> chunks from known files in the requests of a given node.
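
As a rough sketch, this is how I read the two rules above (all names below
are made up for illustration, this is not actual fred code):

    import random

    class BundleRouter:
        """Per-source-peer bundling of HTL-18 requests, as described above."""

        def __init__(self, peers):
            self.peers = list(peers)        # currently connected darknet peers
            self.target_for_source = {}     # source peer -> fixed random target
            # local requests ride along with the bundle of one random source peer
            self.local_mix_source = random.choice(self.peers)

        def target_for(self, source):
            # Pick, once, a random target for all HTL-18 requests arriving
            # from `source` (excluding the source itself) and keep reusing it.
            if source not in self.target_for_source:
                candidates = [p for p in self.peers if p != source]
                self.target_for_source[source] = random.choice(candidates)
            return self.target_for_source[source]

        def route(self, request, source=None):
            # Returns the peer this HTL-18 request should be forwarded to.
            # source=None means a local request: it is mixed into the bundle
            # of the randomly chosen source peer, so it looks like forwarding.
            if source is None:
                source = self.local_mix_source
            return self.target_for(source)

    router = BundleRouter(["A", "B", "C", "D", "E"])
    router.route("chunk-1", source="A")   # some fixed target
    router.route("chunk-2", source="A")   # same target as chunk-1
    router.route("local-chunk")           # bundled with one source peer's requests

The point being that, at routing time, the only extra state needed is the
source-peer-to-target map plus the one peer we mix our own requests with.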
> 
> possible drawbacks:
> 
> - disconnecting during the fetch would restart all non-finished blocks on 
> another peer. Local requests would therefore have increased latency, and 
> strongly increased jitter in latency (losing the wrong peer during a 
> transfer would require restarting all non-finished requests)
> 
> - The anonymity set *with complete knowledge* (which might be attainable via 
> timing attacks, by selectively DoSing your peers one by one or in groups) is 
> only about 2-3 against one of the nodes (the number of HTL18 hops). There is 
> one node for which capturing packets from you means that there’s a 30-50% 
> probability that you’re the originator. However, for all other nodes none of 
> your traffic goes through them: you’re merely forwarding. So the actual 
> probability that you’re the originator of any captured stream of randomly 
> sorted request keys is only 6-10% (with 5 peers for whom you do not decrement 
> HTL; see the sketch after this list).
> 
> - A small subset of these bundles might propagate very far (in a network of 
> 1000 nodes the longest forwarding should average about 10 hops, in a network 
> of 16000 about 14 hops, and so on as log2(N)), so peers might have to replace 
> a target if the latency for the bundled requests is very high (this will 
> limit the maximum length of the forwarding). I’m not sure whether our 
> existing connection-dropping conditions will fire here (due to timeouts or a 
> too low success rate). Churn should also limit the length of the tunnels: if 
> the average session uptime of a peer is 2 hours, a 10-hop forwarding should 
> typically break within 12 minutes, while a 2-hop forwarding should live for 
> about an hour.
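
To sanity-check the numbers in the last two points, here is a tiny
back-of-the-envelope script. The exponential-lifetime and log2(N) readings
are my own assumptions; the input figures are the ones quoted above:

    import math

    def originator_probability(p_if_target, peers_not_decrementing):
        # The 30-50% estimate against the one target node, diluted over the
        # peers for whom we do not decrement HTL.
        return p_if_target / peers_not_decrementing

    def expected_longest_forwarding(network_size):
        # log2(N) reading of the "longest forwarding" estimate.
        return math.log2(network_size)

    def expected_break_time_minutes(mean_uptime_hours, hops):
        # Assuming roughly exponential session lifetimes, the first of
        # `hops` nodes to leave does so after mean_uptime / hops on average.
        return mean_uptime_hours * 60 / hops

    print(originator_probability(0.3, 5), originator_probability(0.5, 5))
    # -> 0.06 0.1   (the 6-10% above)
    print(expected_longest_forwarding(1000), expected_longest_forwarding(16000))
    # -> ~10 ~14 hops
    print(expected_break_time_minutes(2, 10), expected_break_time_minutes(2, 2))
    # -> 12.0 60.0 minutes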
> 
> Best wishes,
> Arne
> --
> To be unpolitical
> means to be political
> without noticing it
> 
