JARGON:

Correlation attack: If you are very close to the originator, and can identify a stream of requests, you may be able to identify the originator simply by the fact that a large proportion of the known download is coming from a single peer. This works a few hops away too IMHO. So the threat is both malicious peers and attackers controlling an (as yet unquantified) proportion of the network.
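To make the "large proportion from one peer" test concrete, here is a minimal Python sketch of the attacker's side. All names (`observed`, `suspicious_peers`, the 0.3 threshold) are illustrative inventions, not anything in the Freenet codebase:

```python
# Sketch of the correlation attack: 'observed' maps peer -> number of
# blocks of a *known* file seen arriving from that peer. A peer very
# close to the originator relays a disproportionate share of the
# download, so a simple fraction test flags it.

def suspicious_peers(observed, total_blocks, threshold=0.3):
    """Return peers relaying a large fraction of a known download."""
    return [peer for peer, count in observed.items()
            if count / total_blocks >= threshold]

observed = {"peer_a": 540, "peer_b": 45, "peer_c": 30}
print(suspicious_peers(observed, total_blocks=1000))  # ['peer_a']
```

The threshold an attacker would actually use depends on how many hops away he sits and on the (unquantified) background traffic, which is exactly the open question above.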
MAST: Mobile attacker source tracing. Requires an identifiable stream of requests. Uses the location of the target of a request, combined with the fact that it was routed "here", to eliminate a swathe of the keyspace. The attacker uses this data to guess the location of the originator, announces to it, and hopefully gets more requests from the request stream, so the attack speeds up.

Random route: Route to a random peer for some limited number of hops at the beginning of a request/insert. For inserts the performance cost is negligible; for requests it costs significant latency and bandwidth, but may improve data persistence (because e.g. currently we don't cache on the first few hops).

Age-based routing restrictions: One possible countermeasure for MAST, suggested long ago (the suggester's identity is on the bug tracker), is to not route to nodes which were added more recently than when we started the request.

Rendezvous tunnels: Send two or more "anchor" requests out via different nodes, using a shared secret scheme to split a secret key between them. These are either routed entirely randomly, or routed randomly and then towards a target location. When they meet, the secrets are combined to create an encrypted tunnel, which goes down the most direct path (one of the anchors might be routed directly to the target location); this should be no longer than the typical path for random routing. The main drawbacks are 1) nodes prior to the tunnel exit can't cache anything, which costs some performance, and 2) additional latency. #1 is not important if the data being fetched or inserted is not popular anyway, or is very small.

Mixnet tunnels: For small but important data we can actually afford onion routing, across the entire network, even though it will involve an absurd number of hops. The most obvious problem is how to obtain nodes to put on the chain. Non-telescoping tunnel construction is apparently used widely in other mixnets, so it's not necessarily "bad" to have *seen* the node somehow...
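The "shared secret scheme" for rendezvous tunnels could be as simple as XOR-based n-of-n splitting: each anchor request carries one share, and only the node where all anchors meet can recover the tunnel key. A minimal Python sketch (function names are mine; a real implementation would also need to authenticate the recovered key):

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list:
    """Split secret into n shares; XOR of *all* n shares recovers it.
    Any n-1 shares are indistinguishable from random."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret))  # last share = secret ^ r1 ^ ... ^ r(n-1)
    return shares

def combine(shares: list) -> bytes:
    """Recombine all shares at the rendezvous point."""
    return reduce(xor, shares)

tunnel_key = os.urandom(16)
anchors = split_secret(tunnel_key, 3)      # one share per anchor request
assert combine(anchors) == tunnel_key      # only where all anchors meet
```

The nice property here is that an attacker who intercepts any subset of the anchor requests learns nothing about the tunnel key; the obvious cost is that losing a single anchor kills the tunnel, which feeds into the reliability concern below.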
Both kinds of tunnels could support deliberate delays and telescoping stages for inserts; however, reliability becomes a problem.

ATTACKS:

Firstly, we need to sort out announcement. See the other email. MAST is much slower if you can't announce directly to the target, although sadly it is still possible. Announcement spam is also a serious problem, and it's VITAL for performance that we maintain the seednodes list automatically and have a lot more seeds; and we may be able to avoid shipping a full list to everyone!

Inserts of unpredictable files are not vulnerable to MAST at run-time. An attacker can however gain some location range samples from his logs once the file is identified (published).
=> An initial random route on inserts undermines this. It allows for some samples during the random routing phase, but these are less valuable.
=> No need for age-based routing restrictions on unpredictable inserts.

Small inserts to predictable locations, especially if they are frequent (e.g. forums, freesite inserts), provide location range samples.
=> An initial random route on inserts reduces the value of these samples dramatically.
=> Tunnels are the obvious solution. They prevent both MAST and local correlation attacks.
(Big inserts to predictable locations should not exist; we are talking about individual blocks here.)

Downloads of known files are vulnerable to MAST at run-time. An attacker can quickly approach the originator.
=> An initial random route greatly reduces the value of samples, but a similar, though weaker, attack is possible against the random routing stage, so it does not eliminate the threat completely.
=> Bundling requests together reduces the number of samples for initial random routing, although it may be tricky to get right.
=> Age-based routing restrictions during the random routing phase could be very effective against MAST. The main problem is that big downloads may take so long that they are unrealistic. Solutions:

1) Improving performance improves security.
2) If a request is failing, the cost of using rendezvous tunnels is marginal; clearly the data isn't cached locally. So requests for old content that isn't readily available can just use rendezvous tunnels. We can detect this in the client layer and switch modes.

3) The minimum peer age can be specified per request, based on the size of the file, provided that it has only a limited number of possible values and all of them are popular. It is dropped once we exit the random routing stage. Age limits should depend on when the node was originally seen, how often it's been online recently, and so on, as well as its actual connected time. I don't think using the Bootstrapping Certificate will work, since an attacker doesn't need to keep bootstrapped nodes online, although it could at least cost him a pool of largely idle IP addresses, so maybe...

4) As soon as we reach the end of the random routing stage, we switch to rendezvous tunnels regardless of success rates.

Various crazy bursting schemes have been proposed, but they likely only work on darknet.

After we exit the random routing or rendezvous tunnel phase, we can safely do a number of risky optimisations. One of the most frequently suggested is returning data directly. Clearly we do want the data to be cached on enough nodes, so we need to be careful here; we may want to transfer it just to cache it, but probably not on ALL nodes on the path. Let's say we've decided we don't need to cache it on every node on the path. Path folding also occurs in the routing stage, so we could efficiently combine the two: if we want the node, we could connect to it, transfer the data, and upgrade it to a full connection if the transfer succeeds and validates. If we don't want the node, direct transfer would leak more information than we do already, which may be an issue depending on e.g. the number of end-points.

Downside: this encourages people to use opennet. But if we can make opennet sufficiently secure, is that a problem?
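Solutions 3) and 4) amount to a peer filter plus a mode switch. A hedged Python sketch, in which every name (the field names, the quantised age table, the size buckets) is a hypothetical illustration and not the actual node code:

```python
import time

# Minimum peer ages quantised to a few "popular" values keyed by file
# size bucket, so the age limit itself reveals little about the request.
AGE_LIMITS = {            # file size upper bound (bytes) -> min peer age (s)
    1 << 20: 1 * 86400,   # <= 1 MiB   : 1 day
    1 << 27: 7 * 86400,   # <= 128 MiB : 1 week
    1 << 32: 30 * 86400,  # larger     : 1 month
}

def min_peer_age(file_size: int) -> int:
    """Pick the smallest bucket the file fits in."""
    for bound in sorted(AGE_LIMITS):
        if file_size <= bound:
            return AGE_LIMITS[bound]
    return max(AGE_LIMITS.values())

def eligible_peers(peers, request, now=None):
    """During the random routing stage, refuse to route to peers added
    too recently; once the random stage ends the restriction is dropped
    (and per 4) we would switch to rendezvous tunnels)."""
    now = now or time.time()
    if request["hops_so_far"] >= request["random_route_hops"]:
        return list(peers)                    # restriction dropped
    limit = min_peer_age(request["file_size"])
    return [p for p in peers if now - p["added_at"] >= limit]
```

As noted above, a real age metric should weight time originally seen and recent uptime, not just `added_at`; this sketch only shows the filtering shape.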
Darknet is a fallback, and we need it, but it's not going to happen unless we have a lot of users first. We can further encourage use of darknet by FOAF connections, and possibly by making opennet connections additional to darknet connections. There are also other things we can do on darknet: not only FOAF connections, but also some form of bursting; bloom filter sharing is easier because connections are longer lived (although maybe not dramatically); and if we trust our direct peers, there may be other possible optimisations, e.g. broadcasting to trusted peers only before requesting.
_______________________________________________ Devl mailing list Devl@freenetproject.org https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl