From: Mark J Roberts <[EMAIL PROTECTED]>
Date: Fri, 22 Nov 2002 06:49:56 -0600


Ian Clarke:

Because when a node forwards a request for a key to a particular reference, it is saying "I think this key will be found if I send it to this node". This will improve the node's impression of which keys it is actually likely to get by forwarding requests to that node, as well as "punishing" that reference for the DNF.

I don't think this is true. Consider:

Ref (key X -> node Y)

|........................X........................|
<- Y's KEYSPACE ->

We route a request for Kreq:

self -> Y -> N1 -> N2 -> N3 -> N4 -> N5

and N5 returns DNF, appending the closest key in its store, Z, which is in
the center of its keyspace graph:


|........................Z........................|
<- N5's KEYSPACE ->

Where does Z fall on Y's keyspace graph? Maybe:


|..Z..............................................|
<- Y's KEYSPACE ->


If we hold the basic premise that "with each hop the request
gets closer to the data", then the fact that Z falls far away
from X is more an indicator of Y's inaccurate picture of the
global keyspace than anything else, since N5 is likely more
specialized in that region. Whether this premise currently
holds is debatable, but I think it is the ultimate goal.

We change (key X -> node Y) to (key Z -> node Y). See? Now we're routing
worse than before. In effect, we've replaced key X with a random other key
Z, subject to the constraint that Z was the closest key to Kreq in N5's
store. Now imagine this happening, repeatedly, to every key. They start to
wiggle and drift around.
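
To make the drift concrete, here is a toy simulation in Python. None of
this is Freenet code; the integer keyspace, the 100-key store, and the
closest() helper are all invented purely for illustration:

    import random

    KEYSPACE = 1_000_000        # pretend keys are integers on a line

    def closest(store, target):
        """Return the key in 'store' nearest to 'target'."""
        return min(store, key=lambda k: abs(k - target))

    x = 500_000                 # the key our reference (X -> Y) started with
    key = x
    for attempt in range(20):
        # The request fails; the terminal node returns the closest key Z
        # in its own store, and we overwrite (key -> Y) with (Z -> Y).
        remote_store = [random.randrange(KEYSPACE) for _ in range(100)]
        key = closest(remote_store, key)
        print(attempt, key, abs(key - x))  # watch the reference drift from X

Each failure nudges the reference by a random offset, so over repeated
failures it random-walks away from the key it was originally accurate for.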

Anyway, these keyspace graphs are pretty silly, and probably don't
correspond much to reality. The key objections I have are:

* Failure of node Y to succeed for key Kreq does not imply that key
X is 'bad' with respect to node Y.

* Key Z is not likely to be 'good' with respect to node Y.

Moving on, your other proposed strategy is roughly, "If we can't put a
(key -> node) reference in our store normally - one that ties a found key to
the node that found it - we'll settle for a nearby key."

Consider what happens when one of these (key -> node) references, which was
put in the store as a result of a failed request for Kreq, is used to route
a request for Kreq in the future. That's right: it fails just like the first
time. So, as I commented in #freenet, we have path compression, alright, but
the path leads straight to failure.
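
A toy illustration of that loop, with invented names (routing_table,
node_store, and request() are mine, not Freenet code):

    # key -> node we would forward a request to
    routing_table = {}
    # which keys each node can actually serve; N5 has none of interest
    node_store = {"N5": set()}

    def request(key):
        node = routing_table.get(key)
        if node is None:
            return "no reference: route normally"
        if key in node_store[node]:
            return f"{node}: found"
        return f"{node}: DNF"            # the compressed path dead-ends

    # The first request for Kreq failed at N5, but we stored a reference anyway:
    routing_table["Kreq"] = "N5"
    print(request("Kreq"))               # "N5: DNF" -- same failure as before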

This is why I would propose attaching a temporary routing table to the
repeated-request mechanism: instead of immediately rejecting recently
requested keys, a node would use these returned best-of-request-chain
references to temporarily shortcut the path (deepening the request's reach
without increasing the HTL, since most of the previously failed nodes are
skipped). If the request comes up dry, or the repeated key expires from the
temporary table, the main routing table is left as if the whole thing never happened.
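
A rough sketch of what I have in mind, under heavy assumptions (the class
name, the TTL policy, and route() are mine, not working Freenet code):

    import time

    class TemporaryRoutes:
        """Side table consulted only for retries of recently failed keys.
        Entries come from best-of-request-chain references returned with
        a DNF; they expire on their own and never touch the main table."""

        def __init__(self, ttl=300):
            self.ttl = ttl
            self.entries = {}                # key -> (node, timestamp)

        def add(self, key, node):
            self.entries[key] = (node, time.time())

        def lookup(self, key):
            hit = self.entries.get(key)
            if hit is None:
                return None
            node, born = hit
            if time.time() - born > self.ttl:
                del self.entries[key]        # expired: as if it never happened
                return None
            return node

    def route(key, temp, main_table):
        # A retry jumps straight to the deepest node the previous attempt
        # reported, skipping the failed chain without touching the HTL.
        node = temp.lookup(key)
        if node is not None:
            return node
        return main_table.get(key, "closest key in main table")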

This might ultimately lead to the *reduction* (or fixing) of the default HTL,
since multiple retries with the same HTL will dig deeper and deeper
*without* involving more and more nodes per request.
So in the end the HTL would be more an indication of how many nodes
to involve in each request than an indication of depth of search.
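
Some back-of-the-envelope arithmetic for that claim, with made-up numbers
(an HTL of 10, and each retry spending about one hop to jump past the chain
the previous attempt already failed through):

    htl = 10            # hypothetical default HTL
    depth = 0           # deepest node reached so far, in hops from us
    for retry in range(1, 4):
        # first attempt probes htl fresh nodes; each later retry spends
        # roughly one hop skipping ahead, then probes new territory
        depth += htl if retry == 1 else htl - 1
        print(f"after retry {retry}: ~{depth} nodes deep, ~{retry * htl} hops spent")

Three retries reach roughly 28 nodes deep while each individual request
still touches only 10.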

We have to judge these routing schemes by looking at the worst-case request
loading that nothing can be done about: repeated requests for random keys
that don't exist. Here we have a mechanism that:
- may allow lowering the default maxHTL (lessening the impact of random requests)
- increases the chance of finding data through multiple retries
- does not mess up the routing table if it finds nothing
- involves no more nodes than a random request does

I would like to see more debate over this concept, since I think
it would bring much success and efficiency to Freenet routing.

Mike


