On Thursday 06 November 2003 04:32 pm, Newsbyte wrote:
> As for the question at hand; I'm no coder, but as far as I've understood,
> there is an estimator of how much time it would cost to retrieve something
> from a node(?)
>
> Can't you make something that
>
> 1) checks if the requested item is in its store
> 2) if not, send it to the next node (and follow the normal way)
> 3.a) if yes, estimate how long it would take to get it from another node
> 3.b) send the data back in the timeframe it has estimated it would take if
> it came from another node (maybe add a bit of random time to it too?)
>
> Something like that?
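For what it's worth, the quoted proposal could be sketched roughly like this (all of the names here, handle_request, estimate_remote_time, forward, are made up for illustration; this is not Freenet's actual API):

```python
import random
import time

def handle_request(key, store, estimate_remote_time, forward):
    """Sketch of the quoted proposal: serve local hits only after
    waiting about as long as a remote fetch would have taken."""
    if key not in store:                       # 1) not in our store
        return forward(key)                    # 2) pass the request along as usual
    est = estimate_remote_time(key)            # 3.a) estimated cost of a remote fetch
    time.sleep(est + random.uniform(0, est))   # 3.b) wait that long, plus some jitter
    return store[key]

# Toy usage with stub callbacks:
store = {"k": b"data"}
print(handle_request("k", store, lambda key: 0.001, lambda key: None))
print(handle_request("x", store, lambda key: 0.001, lambda key: b"from remote"))
```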
The problem with this is that it doesn't really work. For the sake of argument, let's say that the time it takes your node to return some data is T, and the time it would take to get it from the next node is T + 100. Right now, if we return in less than T + 100, they can tell it came from us. If you add another 100 of delay, they can still tell it came from you, because if the data had come from another node, that node would have delayed too; so if you return in less than T + 200 they know it came from you. If you add a random delay between 0 and 200, they can tell it came from you whenever you happen to delay less than 100, and they can't when you delay more than 100, but your average delay is 100. So they can still tell half the time. If you delay randomly between 0 and 1000, they can only be sure when you happen to delay less than 100, because it is conceivable that the other node did not delay at all. So they can tell for certain 1/10th of the time; but even if you only happen to delay 200, the odds are still stacked against you, especially if they probe several content-related values and the responses come back faster than random chance would be likely to produce.

So, to make this secure, you have to implement two-level data stores, you are still vulnerable more than 10% of the time, and in doing so you have more than cut the network performance in half. The problem with adding delays is that it never makes the problem go away, and the relative security you gain is proportional to the multiplier of the time you waste.

A somewhat better idea is to do something like what toad suggested and use upstream nodes to help with the delay. Suppose the last node in the request chain never gets the data back directly, i.e. your node has the data, and so it forwards the request and the data to an upstream node that in turn returns the data to the node that made the request.
That would be helpful in terms of fighting a timing attack, because an attacker would have to know something about the performance of the upstream nodes. However, it has the same problem as Toad's idea: if you forward the request upstream and the upstream node then has to return the data, you are telling that node outright that you have it, which is much worse than the plausible but statistically unlikely deniability.

_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
