Newsbyte wrote:
"The problem with this is that it doesn't really work. For the sake of
argument, let's say that the time it takes your node to return some data is T,
and the time it would take it to get it from the next node is T + 100.


Right now, if we return in less than T + 100, they can tell it came from us.

If you add another 100 of delay in there, then they can still tell it came from
you, because if it had come from another node, it would have been delayed too.
So if you return in less than T + 200, they know it came from you."
I'm not quite following this. I was aware of the problem you describe, but as I understand it, that only holds if you set a fixed time-delay at each node.
What I suggested was adding the time that a node estimates the request would take if it had to fetch the data from the next node it considers the best candidate. Since that estimate differs from node to node, the delay would be variable to begin with. So if a node returned the data in less than T + 200, the attacker wouldn't learn anything, unless they also knew that node's estimate for retrieving the data from another node.
Because, if the time-delay isn't fixed, but depends on the node's estimate for requesting it further, then the analysis you describe above is fruitless (at least for determining with certainty that the data wasn't there before).
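To make the two schemes concrete, here is a small simulation sketch (all names, timings, and functions are invented for illustration, not Freenet's actual code): with a fixed padding the attacker has a clean threshold test, while a per-node estimated padding makes local and remote replies overlap in time.

```python
import random

LOCAL_TIME = 50   # T: time to serve data the node already holds (ms)
HOP_COST = 100    # assumed extra time to fetch from the next node (ms)

def respond_fixed_padding(has_data, padding=100):
    """Fixed padding: a local reply takes T + padding; a remote reply
    takes T + HOP_COST plus padding at both hops."""
    if has_data:
        return LOCAL_TIME + padding
    return LOCAL_TIME + HOP_COST + 2 * padding

def respond_estimated_padding(has_data, estimate):
    """Variable padding: a local reply is delayed by this node's own
    (node-specific, attacker-unknown) estimate of a remote fetch."""
    if has_data:
        return LOCAL_TIME + estimate
    return LOCAL_TIME + HOP_COST  # genuinely remote, no padding added

# Threshold attack on fixed padding: anything faster than
# T + HOP_COST + 2*padding must have been served locally.
assert respond_fixed_padding(True) < LOCAL_TIME + HOP_COST + 200

# With per-node estimates the test breaks down: a local reply padded by
# a large estimate can be slower than a genuine remote reply, and vice
# versa, so no single threshold separates the two cases.
local = respond_estimated_padding(True, estimate=random.uniform(80, 200))
remote = respond_estimated_padding(False, estimate=0)
```

The point of the second function is only that the attacker's decision boundary disappears when the padding is a variable the attacker cannot observe.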
I dunno, maybe I'm missing something.

I thought the same as you at first, but then I realized: I *think* he's saying that if we fake-route it to another node, then the data source of that node would have fake-routed it to another node, increasing the routing time. But you can continue this indefinitely, causing the faked (and also real) time delay to go to infinity. The only way out is to sometimes not fake it, in which case you reveal your DS during the times you don't fake it.
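That regress can be sketched numerically (a hypothetical model, not Freenet code: one hop cost and a per-level faking probability p are assumptions). If every level fakes, the chain never terminates; if each level fakes with probability p < 1, the expected added delay is the geometric sum HOP_DELAY * p / (1 - p), but the unfaked fraction of replies is exactly what exposes the data source.

```python
import random

HOP_DELAY = 100  # assumed cost of one (real or simulated) hop, ms

def fake_chain_delay(p, rng):
    """Simulated extra delay when each level pretends to forward with
    probability p.  With p == 1 this loop never exits (delay diverges)."""
    delay = 0
    while rng.random() < p:
        delay += HOP_DELAY
    return delay

def expected_fake_delay(p):
    """Mean of the geometric chain: HOP_DELAY * p / (1 - p)."""
    return HOP_DELAY * p / (1 - p)

rng = random.Random(42)
samples = [fake_chain_delay(0.5, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# mean is close to expected_fake_delay(0.5) == 100 ms, but roughly half
# of the replies carry zero added delay -- and those fast replies are
# precisely the ones that reveal the node as the data source.
```

So the trade-off Martin describes falls out directly: p must be below 1 for finite delay, and any p below 1 leaks the DS on the unfaked requests.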


Have I got that right, Tom?

-Martin


_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
