If, as has been reported, a node which always DNFs gets an increasing number of requests, then there is a bug in NGR - plain and simple - and it should be easy enough to track down.

*This* should be our focus, not iTTL which seems like [yet another] hasty solution to a problem we haven't found yet.

Toad is right to say that calculating tSearchFailed isn't easy, but wrong to say that tSearchFailed should be infinite. Just because something is unknown does not imply that it is infinite (you don't know my birthday; does that mean it's infinite too?).

In fact, tSearchFailed clearly shouldn't be infinite. If it were, the pDNF term would swamp everything else, and we would prefer a slow node with a pDNF of 0.8 over a fast node with a pDNF of 0.8000000000000001. Clearly that would be silly, ergo we want tSearchFailed to be less than infinity.

Estimates are calculated using (simplified):

e = pDF*tDF + pDNF*(tDNF + tSearchFailed)

Provided that tSearchFailed is always > tDF, we should never prefer a node which DNFs over one where requests succeed, especially when pDNF is 1. Zab's experiments with blackholes suggest that even when pDNF is 1, nodes are still routing requests - which suggests a bug. An easy way to debug this might be to include the estimator() calculation in the DataRequest, so that a blackhole node could see why people keep routing to it.
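To make the point concrete, here is a toy sketch of the simplified estimate above. The names follow the post (pDF, tDF, pDNF, tDNF, tSearchFailed); the numeric values are invented for illustration and do not come from any real node stats:

```python
# Toy sketch of the simplified NGR estimate:
#   e = pDF*tDF + pDNF*(tDNF + tSearchFailed)
def estimate(pDF, tDF, pDNF, tDNF, tSearchFailed):
    return pDF * tDF + pDNF * (tDNF + tSearchFailed)

# A fast blackhole node that always DNFs: pDNF = 1.
blackhole = estimate(pDF=0.0, tDF=0.0,
                     pDNF=1.0, tDNF=50.0, tSearchFailed=2000.0)   # 2050.0

# A slow node that always succeeds.
slow_ok = estimate(pDF=1.0, tDF=1500.0,
                   pDNF=0.0, tDNF=0.0, tSearchFailed=2000.0)      # 1500.0

# As long as tSearchFailed > tDF, the blackhole's estimate is worse,
# so a correct router should never pick it over the succeeding node.
assert blackhole > slow_ok
```

If routing nonetheless keeps favouring the blackhole, the bug is somewhere between this arithmetic and the actual routing decision - which is exactly why echoing the estimator() calculation back in the DataRequest would help.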

Let's examine the obvious explanations before we start to question the laws of physics.
Ian.
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl