Martin Stone Davis wrote:

Ian Clarke wrote:

If, as has been reported, a node which always DNFs gets an increasing number of requests, then there is a bug in NGR - plain and simple - and it should be easy enough to track down.

*This* should be our focus, not iTTL, which seems like [yet another] hasty solution to a problem we haven't found yet.

Toad is right to say that calculating tSearchFailed isn't easy, but wrong to say that tSearchFailed should be infinite. Just because something is unknown does not imply that it is infinite (you don't know my birthday; does that mean it's infinite too?).

In fact, tSearchFailed clearly shouldn't be infinite: if it were, we would prefer a slow node with a pDNF of 0.8 over a fast node with a pDNF of 0.8000000000000001, because an infinite tSearchFailed makes pDNF the only term that matters. Clearly that would be silly; ergo we want tSearchFailed to be less than infinity.

Estimates are calculated using (simplified):

e = pDF*tDF + pDNF*(tDNF + tSearchFailed)
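
Plugging in some made-up numbers for the slow and fast nodes above (say tDF = 60s for the slow node, tDF = 1s for the fast one, tDNF = 5s, and a finite tSearchFailed of 300s):

  slow: e = 0.2*60 + 0.8*(5 + 300)                 = 256.0s
  fast: e = 0.2*1  + 0.8000000000000001*(5 + 300) ~= 244.2s

The fast node gets the lower (better) estimate, as it should; with tSearchFailed = infinity, both estimates blow up and the comparison degenerates to pDNF alone.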

Provided that tSearchFailed is always > tDF, a node should never prefer a peer which DNFs over one whose requests succeed, especially when pDNF is 1. Zab's experiments with blackholes suggest that even when pDNF is 1, nodes are still routing requests to them - which suggests a bug. An easy way to debug this might be to include the estimator() calculation in the DataRequest, so that a blackhole node could see why people keep routing to it.
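
A minimal, self-contained sketch of that debugging aid (hypothetical names - this is not Fred's actual estimator API, just the simplified formula above made executable):

  class EstimateDebug {
      // Simplified NGR estimate: e = pDF*tDF + pDNF*(tDNF + tSearchFailed).
      // All times in seconds; lower is better.
      static double estimate(double pDF, double tDF, double pDNF,
                             double tDNF, double tSearchFailed) {
          return pDF * tDF + pDNF * (tDNF + tSearchFailed);
      }

      public static void main(String[] args) {
          double tSearchFailed = 300;   // finite, and > any plausible tDF
          // A black hole: pDNF = 1, never returns data:
          double blackhole = estimate(0.0, 0, 1.0, 5, tSearchFailed);
          // A slowish but working node: pDNF = 0:
          double good = estimate(1.0, 20, 0.0, 0, tSearchFailed);
          // The line one could attach to a DataRequest, so a blackhole
          // node can see the numbers behind each routing decision:
          System.out.println("blackhole e=" + blackhole + ", good e=" + good);
          // prints: blackhole e=305.0, good e=20.0
      }
  }

If routing still prefers the black hole with numbers like these, the bug is in the bookkeeping, not the formula.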

Let's examine the obvious explanations before we start to question the laws of physics.
Ian.


I think you are right that black holes with a pDNF of 1 would die out even on a system without iTTL. However, grey holes which keep their pDNF < 1 could survive. All a grey hole would have to do is purposefully send an illegitimate DNF some of the time. Every illegitimate DNF it produces will propagate further illegitimate DNFs through the system, via the "infected" failure tables.
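
To illustrate with made-up numbers in the estimate formula above: a grey hole that answers fast but deliberately DNFs 30% of requests (pDF = 0.7, tDF = 1s, pDNF = 0.3, tDNF = 1s) gets, with tSearchFailed = 300s,

  e = 0.7*1 + 0.3*(1 + 300) = 91.0s

while an honest but slow node (pDF = 0.95, tDF = 120s, pDNF = 0.05, tDNF = 5s) gets

  e = 0.95*120 + 0.05*(5 + 300) = 129.25s

so NGR keeps routing to the grey hole, and the poison keeps spreading.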

iTTL acts like a vaccine against this contagion, because routed nodes will retry when they hit a black or grey hole.

One concern about iTTL might be that lifting the restriction on the number of hops could lead to the client never fetching the data, since the more hops there are, the greater the likelihood of a transfer failure somewhere along the way. If a transfer is almost certain to eventually fail, won't it just add to network congestion unnecessarily?
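
To make the compounding explicit (illustrative numbers only): if each hop independently completes the transfer with probability p, a route of n hops succeeds with probability p^n. Even a 2% per-hop failure rate gives

  0.98^50 ~= 0.36

over 50 hops, so nearly two thirds of those long-route transfers would be wasted traffic.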

One simple way to deal with this would be to continue to use HTL, but not die on a DNF. This would be just like iTTL, except that HTL < 0 would trigger a QR (not a DNF!). Call it DNDoDNF, for "do not die on DNF".
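
A minimal sketch of the DNDoDNF rule (hypothetical types and names, not Fred's actual request handling - and here every peer DNFs, just to show the request ends in a QR rather than a fatal DNF):

  class DNDoDNF {
      enum Reply { DATA, DNF, QR, RNF }

      // Stand-in for forwarding to a peer; in this toy, every peer DNFs.
      static Reply sendTo(int peer, int htl) { return Reply.DNF; }

      static Reply handle(int htl, int[] peers) {
          for (int peer : peers) {
              if (htl < 0) return Reply.QR;   // HTL exhausted: QR, not DNF!
              Reply r = sendTo(peer, htl--);  // one hop spent per attempt
                                              // (a modeling choice for this toy)
              if (r == Reply.DATA) return Reply.DATA;
              // r == DNF: do not die, just try the next-best peer
          }
          return Reply.RNF;                   // out of candidate peers, as today
      }

      public static void main(String[] args) {
          // HTL of 3, five candidate peers, all of which DNF:
          System.out.println(handle(3, new int[]{1, 2, 3, 4, 5}));  // prints QR
      }
  }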

Another thing I should point out:


  0 = prob(DNFisFatal | DNDoDNF)
    <= prob(DNFisFatal | DNDoDNF + unobtanium)
    <= prob(DNFisFatal | DieOnDNF) = 1

That is, with unobtanium a DNF is sometimes fatal. To take an extreme example, if only one node in every node's RT is suitable for the key (by unobtanium's standards), then DNDoDNF + unobtanium will behave exactly like what we have now.

The greater the load, the closer DNDoDNF + unobtanium comes to DieOnDNF. Under lighter load, however, when unobtanium considers more nodes suitable, we try harder to obtain the key.

What, then, is the objection to DNDoDNF + unobtanium?


Slightly off topic, but: a more sophisticated alternative to HTL would be to control the chance of failure along the route directly.


On every request, a node would send along a pTransferWillSucceed, indicating the chance that, if the key is found, the data will successfully make it back to the original requestor. If the requestee decides to route the request, she uses unobtanium NGR to determine the first candidate.

Then, before sending the request, she multiplies the pTransferWillSucceed she received from the requestor by (1 - pTransferFailGivenDataFound) for the routed node. If that product is below a certain configurable threshold (say, 50%), she considers the node QR:ed and tries the next-best route. If she does send the request, she also sends along the new estimate of pTransferWillSucceed. If all the nodes "QR", then she RNFs, as usual.

The nice thing about this method is that when the # of hops gets large, the nodes go into "desperation" mode and just try to get a successful data transfer, rather than a fast one. Call it STL for success-to-live.
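
A minimal sketch of the STL check (hypothetical names and a made-up threshold; not Fred code):

  class STL {
      static final double THRESHOLD = 0.5;    // the configurable cutoff

      // Returns the pTransferWillSucceed to forward with the request,
      // or -1 to signal that this candidate should be treated as QR:ed.
      static double stlCheck(double pTransferWillSucceed,
                             double pTransferFailGivenDataFound) {
          double p = pTransferWillSucceed * (1 - pTransferFailGivenDataFound);
          return (p < THRESHOLD) ? -1 : p;
      }

      public static void main(String[] args) {
          // Requestor thinks 80% of found keys make it back; the routed
          // node loses 10% of transfers once the data is found:
          System.out.println(stlCheck(0.80, 0.1));  // ~0.72: forward it
          System.out.println(stlCheck(0.55, 0.1));  // 0.495 < 0.5: -1, reroute
      }
  }

The product can only shrink hop by hop, so it plays the role HTL plays now, except that what it counts down is exactly the thing we care about: the chance the transfer eventually succeeds.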

-Martin


