Niklas Bergh wrote:
NGR is designed to take a node's available bandwidth into account and include that fact in routing decisions. Instead of further overloading an already overloaded (bandwidth-wise) node, NGR would send the query to another node.
A bandwidth-caused QR will completely hide the actual bandwidth problem from other nodes' NGRTs, since the 'load' that actually triggers a QR includes multiple other factors.
I think I agree: NGR should handle bandwidth limitations. Using QRs is too crude a tool and, as Niklas points out, may even hide the problem from NGR, forcing us to rely on the backoff mechanism.
Ian.
NGR is just a way for a node to maximize what it can get from other nodes. Pure NGR (like pure capitalism, Ian) does this in a narrowly self-interested manner. It isn't designed to make the node a good citizen with respect to not overloading other nodes.
Therefore, we need some way of constraining nodes' behavior for the benefit of the network. This leads to an idea: design the constraints such that the total number of queries made by a requesting node matches the capacity of requestee nodes to successfully handle all the queries. A laudable goal, no?
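One way to picture that kind of constraint is a token bucket that caps a node's outgoing query rate at whatever capacity the requestee advertises. This is purely an illustrative sketch, not anything in Freenet today; the class name, the idea of an "advertised capacity," and the constants are all hypothetical.

```python
# Illustrative sketch only -- not Freenet code. A token bucket that limits
# outgoing queries to a peer based on that peer's (hypothetical) advertised
# capacity, so the requester never exceeds what the requestee can handle.
import time

class QueryRateLimiter:
    def __init__(self, queries_per_sec, burst):
        self.rate = queries_per_sec      # peer's advertised capacity (assumed)
        self.burst = burst               # short-term burst allowance
        self.tokens = burst
        self.last = time.monotonic()

    def try_send(self):
        # Refill tokens in proportion to elapsed time, up to the burst cap.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # under capacity: send the query to this peer
        return False      # over capacity: route the query to another node
```

Under this scheme the requester throttles itself up front, instead of finding out about overload only after a QR comes back.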
So, how should we design the constraint? Here are a few options:
1. Use no constraint. This is clearly wrong. NGR will not constrain the number of queries made.
2. Use our current system: return QR messages when a node is overloaded, and respond to QR messages by backing off. Based on a couple of hours of use, I believe there must still be a bug in the implementation (see my "Re: Freenet stable 5038/unstable 6340"). Assuming that is the case, we'll need to fix the bug before we can evaluate how well it's working.
3. Use some other system, such as that proposed in "[Yet] another load-balancing idea" or "Additional ways to reduce load aside from QR." or other threads. These require further exploration, and perhaps someone should summarize all the possible approaches.
For now, I think we should make sure option #2 is bug-free before we evaluate how well/poorly it works. Also, someone (Ken? Ed?) has suggested that we really should be using linear and not exponential backoff. This should also be considered before making drastic changes.
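For concreteness, here is a sketch of the two backoff policies under discussion. The function names, base interval, and cap are hypothetical, not what the current implementation uses.

```python
# Sketch of the two candidate QR backoff policies. Constants are made up
# for illustration; they are not Freenet's actual values.

def exponential_backoff(failures, base=1.0, cap=3600.0):
    # Doubles the wait after each consecutive QR: 1, 2, 4, 8, ... seconds.
    return min(cap, base * (2 ** failures))

def linear_backoff(failures, step=1.0, cap=3600.0):
    # Grows the wait by a fixed step per consecutive QR: 1, 2, 3, ... seconds.
    return min(cap, step * (failures + 1))
```

The practical difference shows up quickly: after five consecutive QRs, the exponential policy waits 32 seconds while the linear one waits only 6, so a briefly-overloaded node comes back into rotation much sooner under linear backoff.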
Missing from the discussion above is how to evaluate the constraint (or lack thereof). Certainly one measure is the load on nodes due to queries, with lower==better. That measures how well it's accomplishing the goal of matching the number of queries made with the number that can be handled. What other measures might be useful?
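One way to make that measure concrete: track mean query load per node alongside the fraction of queries that get rejected (the QR rate). The field names and sample numbers below are invented for illustration.

```python
# Illustrative metrics for evaluating a load-balancing constraint.
# 'queries_received' / 'queries_rejected' are hypothetical per-node counters.

def load_metrics(nodes):
    total_recv = sum(n['queries_received'] for n in nodes)
    total_rej = sum(n['queries_rejected'] for n in nodes)
    mean_load = total_recv / len(nodes)
    qr_rate = total_rej / total_recv if total_recv else 0.0
    return mean_load, qr_rate
```

A constraint that is working should drive the QR rate toward zero without collapsing throughput; mean load alone can be misleading, since a network rejecting most of its queries also shows low successful load.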
-Martin
P.S. Unfortunately, I'll be in New York until Sunday night and won't be able to respond to any follow up. My gf and I are trying to get on "Who Wants to be a Millionaire?" W00t!
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
