Toad wrote:
On Wed, Nov 19, 2003 at 04:20:15AM -0800, Martin Stone Davis wrote:
Ian Clarke wrote:
Niklas Bergh wrote:
NGR is designed to take a node's available bw into account and include
that fact in routing decisions. Instead of further overloading an
already overloaded (bw-wise) node, NGR would send the query to
another node.

A bw-caused QR will completely hide the actual bw problem from other
nodes' NGRTs, since the 'load' that actually causes a QR includes
multiple other factors.

I think I agree: NGR should handle bandwidth limitations. Using QRs is too crude a tool and, as Niklas points out, may even hide the problem from NGR, forcing us to rely on the backoff mechanism.
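To make the point concrete, here is a minimal sketch (not Freenet's actual code; all names and the cost model are invented for illustration) of an NGR-style routing decision that folds a peer's available bandwidth into its time estimate, instead of rejecting the query outright with a QR:

```python
# Hypothetical sketch: pick a next hop by estimated completion time,
# where a saturated pipe raises the estimate rather than triggering a QR.

def estimated_time(node, key):
    """Estimate total time to fetch `key` via `node`.

    `base_estimate` stands in for the NGRT's per-key time estimate;
    the queued_bytes / bandwidth term penalizes a node whose pipe is
    already busy, so bw pressure shows up in routing, not as a QR.
    """
    base = node["base_estimate"]            # NGRT estimate, in seconds
    queued = node["queued_bytes"]           # bytes already waiting to send
    bw = node["bandwidth_bytes_per_s"]      # node's available bandwidth
    return base + queued / bw               # busier pipe => higher estimate

def route(nodes, key):
    """Route to the node with the lowest overall time estimate."""
    return min(nodes, key=lambda n: estimated_time(n, key))

peers = [
    {"name": "A", "base_estimate": 1.0, "queued_bytes": 500_000,
     "bandwidth_bytes_per_s": 50_000},      # good estimate, saturated pipe
    {"name": "B", "base_estimate": 2.0, "queued_bytes": 10_000,
     "bandwidth_bytes_per_s": 50_000},      # worse estimate, idle pipe
]
print(route(peers, "some-key")["name"])     # B wins despite the worse base
```

Under this toy model, node A's saturated pipe (11.0 s total) loses to node B (2.2 s), which is the behavior Niklas is describing: the bw problem steers routing instead of being hidden behind a QR.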

Ian.

NGR is just a way for a node to maximize what it can get from other nodes. Pure NGR (like pure capitalism, Ian) does this in a narrowly self-interested manner. It isn't designed to make the node a good citizen, w.r.t. not overloading other nodes.

This is why we need exponential backoff, which handles the socialist
element pretty well IMHO.

I am still not comfortable with the current system - because it makes no effort to balance the set of remote requestors' backoff values.

  QRs are distributed in an essentially random manner. First, a node
discovers it is too busy. It then sends QRs to whichever nodes happen
to query after that local determination of overload. The increasing
degree of backoff (remotely) should *help* ensure that a remote node
doesn't receive too many (more than its fair share of) QRs/backoffs
relative to other requestors. But it is still possible for one
requestor to get backed off *much more* than another, because the
backoff grows exponentially rather than linearly.

  The disadvantage is this: the backoff is a period of time in which
requestor and requestee are "out of touch" with each other, which
decreases the time-granularity of the requestor's measurement of the
requestee's state. It is hard to demonstrate without a simulation or
yet another statistic, but I hope I have described it sufficiently
here.

  Probably our best path is to watch those backoff times; if we ever
see them exceeding 25-50 seconds, we ARE in trouble. Currently, a
significant portion of the routes in my routing table are backed off
for more than 100 seconds. There should probably be an upper limit on
the backoff value. I will suggest 60 seconds, plucking that number
"out of the air." The reason you may see one or two ridiculously high
values is the exponential algorithm. The risk is that we are already
backed off for an extended time - basically by coincidence of
independent yet sequential QRs - and then we get ANOTHER QR! Meanwhile,
the requestee transitions through very many periods of (overloaded,
not overloaded, ...) during our oversized backoff timeframe.
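The capped backoff I'm suggesting could look something like this sketch (the constant names and the 1-second starting value are assumptions; only the 60-second cap comes from the proposal above):

```python
# Illustrative capped exponential backoff - NOT Freenet's actual code.
# Doubling on each successive QR, but saturating at a fixed upper limit
# so a run of coincidental QRs can't push the backoff to 100+ seconds.

INITIAL_BACKOFF = 1.0   # seconds after the first QR (assumed)
BACKOFF_FACTOR = 2.0    # exponential growth per successive QR (assumed)
MAX_BACKOFF = 60.0      # proposed cap, "plucked out of the air"

def next_backoff(current):
    """Return the backoff to apply after receiving another QR."""
    if current == 0.0:
        return INITIAL_BACKOFF
    return min(current * BACKOFF_FACTOR, MAX_BACKOFF)

# Eight successive QRs: growth saturates instead of running away.
backoff = 0.0
history = []
for _ in range(8):
    backoff = next_backoff(backoff)
    history.append(backoff)
print(history)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```

The cap bounds the worst-case "out of touch" window, so the requestor's view of the requestee's state can never go stale for longer than MAX_BACKOFF.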

  It is worth acknowledging that we are still much better off than
we were a few weeks back, with NO backoff :)

Also, if you wish to see WHICH of your requestors are *effective*
NGR nodes, set logInboundRequests=true, and study the numbers at

http://localhost:8899/servlet/nodestatus/inboundRequests.txt

It could be interesting if requestees favored those requestors that
have higher *success* rates.
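One way such favoritism could work, sketched here with invented field names (this is just an illustration of the idea, not anything in the codebase): when overloaded, QR the requestor with the *lowest* observed success rate first, so the effective NGR requestors stay in touch.

```python
# Hypothetical sketch: a requestee choosing whom to QR under load,
# preferring to back off requestors with poor success rates.

def success_rate(stats):
    """Fraction of a requestor's past requests that succeeded."""
    total = stats["successes"] + stats["failures"]
    return stats["successes"] / total if total else 0.0

def pick_victim(requestors):
    """Return the requestor to QR: the one with the lowest success
    rate, so effective requestors keep a fresh view of our state."""
    return min(requestors, key=success_rate)

requestors = [
    {"name": "good", "successes": 90, "failures": 10},   # 90% success
    {"name": "poor", "successes": 20, "failures": 80},   # 20% success
]
print(pick_victim(requestors)["name"])  # poor
```

The inboundRequests.txt numbers mentioned above would be the natural data source for the per-requestor success counts.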


_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
