Toad wrote:

On Thu, Dec 04, 2003 at 05:52:50PM -0800, Martin Stone Davis wrote:

Ian Clarke wrote:


If, as has been reported, a node which always DNFs gets an increasing number of requests then there is a bug in NGR - plain and simple - and it should be easy enough to track down.

*This* should be our focus, not iTTL which seems like [yet another] hasty solution to a problem we haven't found yet.

Toad is right to say that calculating tSearchFailed isn't easy, but wrong to say that tSearchFailed should be infinite. Just because something is unknown does not imply that it is infinite (you don't know my birthday; does that mean it's infinite too?).

In fact, tSearchFailed clearly shouldn't be infinite; if it were, then we would prefer a slow node with a pDNF of 0.8 over a fast node with a pDNF of 0.8000000000000001. Clearly that would be silly, ergo we want tSearchFailed to be less than infinity.

Estimates are calculated using (simplified):

e = pDF*tDF + pDNF*(tDNF + tSearchFailed)

Provided that tSearchFailed is always > tDF, we should never prefer a node which DNFs over a node where requests succeed, especially when pDNF is 1. Zab's experiments with blackholes suggest that even when pDNF is 1, nodes still route requests to the blackhole - which suggests a bug. An easy way to debug this might be to include the estimator() calculation in the DataRequest, so that a blackhole node could see why people keep routing to it.
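To make the argument concrete, here is a hedged sketch of the simplified estimate above. The parameter names (pDF, tDF, pDNF, tDNF, tSearchFailed) follow the formula in this post, not the actual Fred code, and the numbers are made up for illustration.

```python
def estimate(pDF, tDF, pDNF, tDNF, tSearchFailed):
    """Simplified NGR estimate: e = pDF*tDF + pDNF*(tDNF + tSearchFailed).

    Assumes pDF + pDNF = 1, and times are in milliseconds."""
    return pDF * tDF + pDNF * (tDNF + tSearchFailed)

# A node that always DNFs (pDNF = 1) versus one that usually succeeds:
blackhole = estimate(pDF=0.0, tDF=0.0, pDNF=1.0, tDNF=200, tSearchFailed=5000)
good_node = estimate(pDF=0.9, tDF=400, pDNF=0.1, tDNF=200, tSearchFailed=5000)

# So long as tSearchFailed > tDF, the blackhole's estimate is far worse
# (here 5200 vs 880), so NGR should never prefer it. If it does, the bug
# is elsewhere, not in the formula.
assert blackhole > good_node
```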

Let's examine the obvious explanations before we start to question the laws of physics.
Ian.

I think you are right that black holes with pDNF=1 would die out even on a system without iTTL. However, grey holes which keep their pDNF<1 could survive. All a grey hole would have to do is occasionally send an illegitimate DNF on purpose. Every illegitimate DNF it produces will propagate further illegitimate DNFs through the system, due to the "infected" failure tables.


iTTL acts like a vaccine to this contagion, because routed nodes will retry in case of a black or grey hole.

One concern about iTTL might be that lifting the # of hops restriction could lead to the client never fetching the data, since the more hops there are, the greater the likelihood of a transfer failure somewhere along the way. If a transfer is almost certain to eventually fail, won't it just add to network congestion unnecessarily?
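The concern above can be quantified with a back-of-the-envelope calculation. Assuming (purely for illustration) that each hop independently relays the data back with the same probability p, the chance of the transfer surviving n hops is p**n, which falls off quickly:

```python
# Illustrative only: assumes a uniform, independent per-hop success
# probability; real per-hop figures on the network would vary.
p_per_hop = 0.97

for hops in (10, 25, 50, 100):
    p_back = p_per_hop ** hops
    print(f"{hops:3d} hops -> P(transfer gets back) = {p_back:.3f}")
```

With these numbers the odds drop below 50% somewhere around 25 hops, which is exactly the kind of cutoff the STL idea discussed below this point would enforce directly.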

One simple way to deal with this would be to continue to use HTL, but not die on a DNF. This would be just like iTTL, except that when HTL reaches 0 the node would QR (not DNF!). Call it DNDoDNF, for "do not die on DNF".


The problem with this is that both QRs and DNFs can be caused by passage
through several nodes. The end result is that we would greatly increase
the number of nodes the request passes through (like iTTL).

<m0davis> I assume you are talking about STL. But I don't understand what you're saying. Elaborate just a bit?

<toad__> Well, if we get a DNF, it could be relayed after passing
through several nodes, right?  And if we get a QR, it could be a route
not found QR, in which case, we tried several nodes and they all QRd.

<m0davis> So how is that an argument against STL? STL would limit the
number of nodes passed through.  Just it would do it in a different
manner from HTL.  The number of nodes passed through would never be so
much that the chance of getting back to the requestor was <50% (or
whatever we configure it to be).


Response, Toad?




Slightly off topic, but: A more sophisticated alternative to HTL would be to directly control the chance of failure along the route.

On every request, a node would send along a pTransferWillSucceed, indicating the chance, given that the key is found, that the data will successfully make it back to the requestor. If the requestee decides to route the request, she uses unobtanium NGR to determine the first candidate.

Then, before sending the request, she multiplies the pTransferWillSucceed she received from the requestor by (1-pTransferFailGivenDataFound) for the routed node. If that value is below a certain configurable threshold (say, 50%), she considers the node QR:ed and tries the next-best route. If she does send the request, she also sends along the new estimate of pTransferWillSucceed. If all the nodes "QR", then she RNF:s, as usual.

The nice thing about this method is that when the # of hops gets large, the nodes go into "desperation" mode and just try to get a successful data transfer, rather than a fast one. Call it STL for success-to-live.
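The routing rule described above can be sketched as follows. All names here (route, candidates, the threshold value) are invented for illustration; this is not Fred code, just the multiply-and-compare logic under the stated assumptions.

```python
THRESHOLD = 0.5  # configurable cutoff below which a candidate counts as QRed

def route(p_transfer_will_succeed, candidates):
    """candidates: list of (node, pTransferFailGivenDataFound) pairs,
    ordered best-first by the usual NGR estimate.

    Returns (node, updated pTransferWillSucceed) for the chosen route,
    or (None, None) if every candidate "QRed" -- i.e. reply with RNF."""
    for node, p_fail_given_found in candidates:
        p_next = p_transfer_will_succeed * (1.0 - p_fail_given_found)
        if p_next >= THRESHOLD:
            # Forward the request, passing the shrunken estimate along.
            return node, p_next
        # Below threshold: treat as QRed, fall through to next-best route.
    return None, None

# The requestor starts at certainty 1.0; each hop multiplies it down.
# Node A would drop the estimate to 0.4 (< 0.5), so B is chosen at 0.8.
node, p = route(1.0, [("A", 0.6), ("B", 0.2)])
assert (node, p) == ("B", 0.8)
```

Note how this naturally produces the "desperation" behaviour: as the running pTransferWillSucceed shrinks over many hops, fewer and fewer candidates clear the threshold, so nodes are forced toward the routes most likely to complete the transfer.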

-Martin




_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl