Why can't we just use the game theory equation that I suggested? There
is no way we can detect when a node is maliciously returning DNF, and if
it did it would almost certainly only do it on a very few keys. Hence
Expected overall time = Probability of success * Expected time for success
                      + Probability of failure
                        * (Expected time for failure + Expected overall time)

"Expected overall time" is essentially the expected time for that key for
the whole node, whether it succeeds or fails, and we add it on because if
we fail, the time for the request will be the failure time plus the time
we get from the next route. This seems a rigorous and non-arbitrary system
with absolutely no voodoo/alchemy involved.

On Sat, Jun 28, 2003 at 08:32:16PM -0700, Ian Clarke wrote:
> Ok, we need to think about how to account for DNFs in NGrouting - and we
> want to do it with the minimum of alchemy.
>
> The problem is that most DNFs will be "legitimate", meaning that the
> data wasn't in Freenet so the node shouldn't be punished for returning a
> DNF. But how do we know which DNFs are and are not legitimate?
>
> The answer is, we can't - but what we can do is look at a node's DNF
> performance over time, and observe whether it is returning DNFs more
> frequently than the best-performing node in our routing table.
>
> Why the best performing node? Well, we want to estimate the ratio of
> requests for which there is actually data in the network. Imagine an
> ubernode - one which is extremely smart and incredibly effective at
> finding what it is looking for (therefore not DNFing). Well, no matter
> how smart it is - it isn't going to be able to find data that isn't in
> the network - so its DNF ratio is likely to be a good indicator of the
> ratio of requests for which data exists at all.
>
> So, now we know that a node is performing less effectively than the best
> performing node, how should it be penalized when calculating estimated
> routing time? This is a tough one, since it demands that we quantify
> the time-cost of a node failing to find some information which actually
> exists. Ultimately this is impossible to do - but we can take a stab at
> it by pretending that all requests are part of a splitfile download.
> In this case, the cost of a failed request is the cost of having to
> request another chunk of data to replace the one we didn't get. Now,
> clearly this doesn't account for the mental anguish someone might have
> if they don't get their daily dose of TFE - but at least it has some
> basis in reality.
>
> So, to account for the likelihood of unwarranted DNFs - we should add
> the following to our estimate:
>
> timeEstimate += (thisNodesDnfProb - bestDnfProb)*globalAverageResponseTime
>
> Reasonable (at least as a first iteration)?
>
> Ian.
>
> --
> Ian Clarke                          [EMAIL PROTECTED]
> Coordinator, The Freenet Project    http://freenetproject.org/
> Founder, Locutus                    http://locut.us/
> Personal Homepage                   http://locut.us/ian/

--
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
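For what it's worth, the recursive equation above has a closed form: solving
E = p*Ts + (1-p)*(Tf + E) for E gives E = Ts + ((1-p)/p)*Tf. The sketch below
is purely illustrative, not Freenet code - the function names and all the
numbers (success probability, timings) are invented assumptions - but it shows
how that closed form and Ian's proposed DNF penalty would combine into one
time estimate.

```python
# Illustrative sketch only -- names and numbers are invented, not from Freenet.

def expected_overall_time(p_success, t_success, t_failure):
    """Solve E = p*Ts + (1 - p)*(Tf + E) for E.

    Rearranging: E - (1 - p)*E = p*Ts + (1 - p)*Tf
    so           E = Ts + ((1 - p) / p) * Tf
    """
    return t_success + ((1.0 - p_success) / p_success) * t_failure

def dnf_penalty(this_nodes_dnf_prob, best_dnf_prob, global_average_response_time):
    """Ian's proposed first-iteration penalty for excess DNFs."""
    return (this_nodes_dnf_prob - best_dnf_prob) * global_average_response_time

# Hypothetical node: succeeds 80% of the time, 2s on success, 5s on failure.
estimate = expected_overall_time(0.8, 2.0, 5.0)  # 2.0 + 0.25 * 5.0 = 3.25
# It DNFs on 30% of requests vs. 20% for the best node; average response 4s.
estimate += dnf_penalty(0.3, 0.2, 4.0)           # + 0.1 * 4.0 = 0.4
print(round(estimate, 2))                        # 3.65
```

Note that the penalty term only punishes the *excess* DNF rate over the best
node, so a node that merely reflects "data isn't in the network" pays nothing.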
