On Thursday 03 January 2008 01:17, Michael Rogers wrote:
> Matthew Toseland wrote:
> > - It should be possible with such a simple request/response to make it
> > extremely difficult for two nodes to realise they are on the same path,
> > except in the case where they are actually adjacent.
>
> I think that's optimistic - they could use the tunnel construction time,
> or the delay between the tunnel being constructed and used, or (for
> two-way tunnels) the delay between the request and the response.
The tunnel is constructed onion-style during the request's forwarding. The
return tunnel likewise. There is no separate tunnel construction time.

> Let's say the attacker controls the exit node of tunnel X, and it
> normally takes 5 seconds to construct a tunnel. If an interesting
> request is sent through tunnel X, all the attacker's nodes record
> predecessor samples for any outgoing tunnels constructed in the 5
> seconds before tunnel X was constructed, and successor samples for any
> incoming tunnels constructed in the 5 seconds after tunnel X was
> constructed. It doesn't matter that most of the samples are wrong - all
> that matters is that the initiator is more likely to be recorded than a
> random node (or a random member of the cell).

Ok, this is possible, although slow. Is it fatal? How much data does the
attacker need? (I've sketched the bookkeeping this implies at the end of
this mail.)

> If the attacker knows that tunnels always contain more than two nodes
> and don't pass through any node more than once, he can also eliminate
> samples from the exit node and its immediate neighbours. But that's not
> vital, it just reduces the level of noise.
>
> > However, it *is*
> > known that it is one of the nodes along that path that is the problem, and
> > this may be enough.
>
> Yup, if we can trust the topological information then we should be able
> to identify unreliable nodes and avoid using them in tunnels (but on the
> other hand that allows an attacker to attract more tunnels and thus
> gather more samples by performing well, as in I2P).
>
> > Each node publishes its links, including its pubkeys, their pubkey hashes, and
> > a signature. This is relayed to some distance. We compute a credibility value
> > for each node on the graph by starting with a value of 1.0 on each node we
> > are a direct peer of, and dividing it by the number of peers on each node
> > going outwards, adding up the values from different branches. We can then
> > ignore nodes which have a low credibility value.
>
> Sounds promising, but the result will be different from each node's
> point of view. How do we ensure that all members of the cell agree about
> which nodes to ignore? If they don't agree then it's easy for an
> attacker to calculate credibility values from every other node's point
> of view in order to work out who would or wouldn't have chosen a given
> exit node.

I don't know. IMHO we need a fixed cell, used consistently by every node
within it, in order to avoid a range of attacks.

> > Now, we may not be able to
> > do exactly this, if we have to have semi-permanent cell structures, but it
> > should be possible to find an algorithm to serve this purpose (eigenvector
> > centrality? TrustRank etc?).
>
> We're caught in a cleft stick here: algorithms that give a universally
> agreed score for each node are vulnerable to Sybil attacks [1], but if a
> node's score depends on who evaluates it (eg the flow-based algorithm
> you sketched above) then how do we agree about which nodes to ignore?

Well, can we just use it from one node's perspective? When I last looked
into it, that looked very risky. (A sketch of that flow computation is also
at the end of this mail.)

> Cheers,
> Michael
>
> [1] http://www.sigcomm.org/sigcomm2005/paper-CheFri.pdf
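To make the sampling bookkeeping I referred to above concrete, here is a rough
Java sketch of what each attacker node would have to do. This is illustrative
only, not Freenet code: the class and method names are made up, and the
5-second window comes straight from your example. Each attacker node remembers
which neighbour handed it each tunnel construction; an interesting request
then turns every build in the preceding window into a predecessor sample, and
the most-sampled neighbour is the attacker's best guess at the initiator.

import java.util.*;

// Illustrative only: all names and the window size are assumptions, not part
// of any existing design.
public class PredecessorSampler {

    private static final long WINDOW_MS = 5_000; // "5 seconds" from the example

    // Recent tunnel constructions seen by this node: build time -> previous hop.
    // (Builds with identical timestamps are collapsed, for simplicity.)
    private final NavigableMap<Long, String> recentBuilds = new TreeMap<>();

    // Accumulated predecessor samples: previous hop -> times sampled.
    private final Map<String, Integer> samples = new HashMap<>();

    // Called whenever a tunnel construction passes through this node.
    public void onTunnelBuilt(long timeMs, String previousHop) {
        recentBuilds.put(timeMs, previousHop);
        // Forget builds too old to fall inside any future sampling window.
        recentBuilds.headMap(timeMs - WINDOW_MS, false).clear();
    }

    // Called when an interesting request is seen at the exit at time t: every
    // build in the preceding window contributes one (mostly wrong) sample.
    public void onInterestingRequest(long timeMs) {
        for (String hop : recentBuilds.subMap(timeMs - WINDOW_MS, true, timeMs, true).values())
            samples.merge(hop, 1, Integer::sum);
    }

    // The most-sampled neighbour is the attacker's best guess at the initiator.
    public Optional<String> bestGuess() {
        return samples.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey);
    }
}

The successor side (incoming tunnels built in the window after the interesting
request) would be symmetrical. How quickly the true initiator's tally pulls
away from the noise is exactly the "how much data" question.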
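And here is an equally rough sketch of the credibility flow, to pin down the
"divide by the number of peers going outwards, add up the branches" step.
Again this is only an illustration under assumptions of my own (string node
identifiers, a fixed depth limit, an arbitrary threshold), not working code.

import java.util.*;

// Illustrative only: a depth-limited flow of credibility outwards from the
// local node's direct peers, as described above. All names are made up.
public class CredibilitySketch {

    // Published links: node id -> ids of its peers.
    private final Map<String, Set<String>> links;

    public CredibilitySketch(Map<String, Set<String>> links) {
        this.links = links;
    }

    public Map<String, Double> compute(String localNode, Set<String> directPeers, int maxDepth) {
        Map<String, Double> credibility = new HashMap<>();
        Map<String, Double> frontier = new HashMap<>();
        Set<String> visited = new HashSet<>();
        visited.add(localNode);
        for (String peer : directPeers) {
            // Each direct peer starts with a credibility of 1.0.
            frontier.put(peer, 1.0);
            credibility.put(peer, 1.0);
            visited.add(peer);
        }
        for (int depth = 1; depth <= maxDepth && !frontier.isEmpty(); depth++) {
            Map<String, Double> next = new HashMap<>();
            for (Map.Entry<String, Double> e : frontier.entrySet()) {
                Set<String> peers = links.getOrDefault(e.getKey(), Collections.emptySet());
                if (peers.isEmpty())
                    continue;
                // Divide the node's credibility by its number of peers...
                double share = e.getValue() / peers.size();
                for (String p : peers) {
                    if (visited.contains(p))
                        continue; // only flow outwards
                    // ...and sum the contributions arriving from different branches.
                    next.merge(p, share, Double::sum);
                }
            }
            visited.addAll(next.keySet());
            for (Map.Entry<String, Double> e : next.entrySet())
                credibility.merge(e.getKey(), e.getValue(), Double::sum);
            frontier = next;
        }
        return credibility;
    }

    // Nodes scoring below the threshold would simply never be used in tunnels.
    public Set<String> lowCredibility(Map<String, Double> credibility, double threshold) {
        Set<String> low = new HashSet<>();
        for (Map.Entry<String, Double> e : credibility.entrySet())
            if (e.getValue() < threshold)
                low.add(e.getKey());
        return low;
    }
}

Note that this is inherently computed from one node's perspective, which is
exactly your objection: two members of the same cell running it from different
vantage points need not agree on which nodes to ignore.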