Martin Stone Davis wrote:
Ken Corson wrote:
Martin Stone Davis wrote:
Martin Stone Davis wrote:
We start at the top of the list, and see who is going to make our will the fastest. Since our lawyer is "backed off" at the moment, we go with our chef.

important: "at the moment". How big do we consider this "moment" to be? 100ms, 5 seconds, a singular point in time? hmmm....

If what you're saying is that we need to include back-off time in the estimate(), then I agree. Just because the node is backed off doesn't mean we shouldn't consider it for routing.

I think that is right, about including back-off time. To elaborate: a node decides whether it is 'overloaded' MANY TIMES a second, and a remote back-off is only a very rough estimate of that rapidly changing local condition. Yes, even if a route is backed off it should still be considered; meaning, we are willing to make the request wait, for up to X(?) time units, before forwarding it. X should not be a function of the chosen route, or of that route's back-off time; rather, X is an inverse way of specifying a maximum length for the queue of queries waiting locally. We must prevent that queue from ballooning.
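To make the first point concrete, here is a minimal sketch of folding
back-off into the estimate. The names (RouteEstimate, estimateWithBackoff,
etc.) are invented for illustration; this is not code from Fred:

// Hypothetical sketch: fold the remaining back-off time of a peer into
// its routing estimate, so a backed-off route is still comparable with
// the others instead of being skipped outright.
class RouteEstimate {

    /**
     * Expected time (ms) until this peer returns the data if we commit
     * the request to it now. If the peer is currently backed off, we
     * assume the request waits locally until the back-off expires, so
     * that remaining time is simply added to the base estimate.
     */
    static long estimateWithBackoff(long baseEstimateMillis,
                                    long backoffExpiryMillis,
                                    long nowMillis) {
        long backoffRemaining = Math.max(0L, backoffExpiryMillis - nowMillis);
        return baseEstimateMillis + backoffRemaining;
    }
}

With that, the chef only beats the lawyer if the lawyer's remaining
back-off plus his base estimate is actually longer.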

If we have four pending wills that are best suited to a single lawyer,
we could even prioritize those! But we must recognize that (via QR,
then backing off) we are attempting to control our query-emission
rate onto a single route, and some decisions must be made, such as
how long we are willing to queue (or buffer) a request before we
forward it to our best estimate of the appropriate next hop. I.e. we
find many good matches of request:route, but in the process we push
many requests back in line. When do we kick them out of the line?
And how do we handle it - QR or DNF? Or even a PCR (PoorlyChosenRoute)?
 (i'm +only joking+ about PCR)
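
For illustration, a bounded local queue with a maximum wait might look
roughly like this. Everything here (class names, the 2-second X, the
length bound) is an assumption made up for the example, not a proposal
for actual values or existing Fred code:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch: buffer requests toward their best route for at most
// MAX_WAIT_MILLIS, and bound the queue length so it cannot balloon.
// Anything we refuse to buffer, or that times out, goes back upstream
// as QueryRejected (or DNF - that is exactly the open question above).
class PendingQueue {
    static final long MAX_WAIT_MILLIS = 2000;  // "X": max local buffering time (assumed)
    static final int  MAX_QUEUE_LENGTH = 50;   // equivalent bound on queue length (assumed)

    static class Pending {
        final Object request;   // placeholder for the real request type
        final long enqueuedAt;
        Pending(Object request, long enqueuedAt) {
            this.request = request;
            this.enqueuedAt = enqueuedAt;
        }
    }

    private final Deque<Pending> queue = new ArrayDeque<>();

    /** Returns false (caller sends QR) if we are not willing to buffer. */
    synchronized boolean offer(Object request, long now) {
        if (queue.size() >= MAX_QUEUE_LENGTH) return false;
        queue.addLast(new Pending(request, now));
        return true;
    }

    /** Kick out anything that has waited longer than X; caller decides QR vs DNF. */
    synchronized List<Pending> expire(long now) {
        List<Pending> dropped = new ArrayList<>();
        while (!queue.isEmpty()
                && now - queue.peekFirst().enqueuedAt > MAX_WAIT_MILLIS) {
            dropped.add(queue.pollFirst());
        }
        return dropped;
    }
}

Note that bounding by wait time and bounding by queue length are the
same knob seen from two sides, which is the "inverse" relation
mentioned above.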

We would love to handle every request made of us, but that appeared to
be problematic, thus QR. Which then resulted in the QR+backoff form of
rate control. QR+backoff is an incomplete solution, because the
requestors chosen for back-off are selected at random. Yet some
requestors are more responsible than others for placing us in the
overloaded condition (as measured by their query rate, AND by the
number of active trailers to a specific node). This works against
NGR's efficiency. QR+backoff did reduce our incoming rate of requests
(one part of the local load), but it did NOT do so in a "balanced"
fashion! It is "greedy" in that it considers only the local load, not
optimal distributed network routing. And yet I am very happy to see
that it made an improvement on local load!!!
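
The "balanced" alternative could be as simple as weighting the choice
of who gets rejected by how much of the load each requestor is
responsible for. A rough sketch, with invented names and an arbitrary
weighting (none of this is existing code or a measured formula):

import java.util.List;

// Sketch: when overloaded, pick the requestor contributing most to the
// overload (its recent query rate plus its open trailers to us) instead
// of rejecting a requestor at random.
class BackoffTargeting {

    static class Requestor {
        final String id;
        final double recentQueriesPerSec;  // measured incoming query rate
        final int activeTrailers;          // trailers currently open to us

        Requestor(String id, double qps, int trailers) {
            this.id = id;
            this.recentQueriesPerSec = qps;
            this.activeTrailers = trailers;
        }

        double loadShare() {
            // crude responsibility score; the 0.5 weight is purely an assumption
            return recentQueriesPerSec + 0.5 * activeTrailers;
        }
    }

    /** Pick the requestor most responsible for our overload to QR / back off. */
    static Requestor pickForRejection(List<Requestor> requestors) {
        Requestor worst = null;
        for (Requestor r : requestors) {
            if (worst == null || r.loadShare() > worst.loadShare()) {
                worst = r;
            }
        }
        return worst;
    }
}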

This is a form of queueing, even though we use a 'bin' instead of a
queue. It probably has the payoff of vastly better query throughput in
the freenet network, but increases wait time for someone sitting at a
browser.

> I'm having a lot of trouble understanding this paragraph. Try again.


Sorry, I retract this point. It involves more queueing theory than I
am currently brushed up on :) Plus I was probably barking up the wrong
tree anyway!

ken

