Martin Stone Davis wrote:
We start at the top of the list, and see who is going to make our will the fastest. Since our lawyer is "backed off" at the moment, we go with our chef.
Important: "at the moment". How big do we consider this "moment" to be? 100ms, 5 seconds, a singular point in time? Hmmm...
If what you're saying is that we need to include back-off time into the estimate(), then I agree. Just because the node is backed off doesn't mean we shouldn't consider it for routing.
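To make that concrete, here is a rough sketch of what "include back-off time into the estimate()" could mean - illustrative Java only, not the real Freenet code; the Node fields and method names are stand-ins:

// Illustrative sketch only: a backed-off node stays routable, but the time
// left on its back-off is added to its estimate.
public class BackOffAwareEstimator {

    // Minimal stand-in for a routing table entry (fields are made up).
    public static class Node {
        final String name;
        final long backedOffUntilMs;  // absolute time at which back-off expires
        final long rawEstimateMs;     // NGR time-to-success estimate, ignoring back-off

        Node(String name, long backedOffUntilMs, long rawEstimateMs) {
            this.name = name;
            this.backedOffUntilMs = backedOffUntilMs;
            this.rawEstimateMs = rawEstimateMs;
        }
    }

    // estimate() = raw estimate + however long the node will keep rejecting us.
    public static long estimate(Node n, long nowMs) {
        long backOffRemaining = Math.max(0L, n.backedOffUntilMs - nowMs);
        return n.rawEstimateMs + backOffRemaining;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        Node lawyer = new Node("lawyer", now + 4000, 500);  // backed off for 4s, fast otherwise
        Node chef   = new Node("chef", 0, 2000);            // not backed off, slower
        System.out.println("lawyer: " + estimate(lawyer, now)
                + "ms, chef: " + estimate(chef, now) + "ms");
    }
}

With numbers like the above the chef still wins, but a lawyer backed off for only 1s would beat him, which is exactly what "still consider it for routing" should buy us.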
The solution is to look ahead in our list until we find a nice query/node combination. This could be done by sampling a certain number of queries from the ticker randomly. Then, for each one sampled, and for each node in the RT, calculate estimate(). Pick the *combination* with the smallest value.
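A minimal sketch of that look-ahead, assuming estimate() is already available somewhere; all class and method names here are made up for illustration:

import java.util.*;

public class BestCombinationPicker {

    interface Estimator {
        long estimate(String node, String queryKey); // stand-in for NGR's estimate()
    }

    static class Combination {
        final String queryKey;
        final String node;
        final long estimateMs;
        Combination(String queryKey, String node, long estimateMs) {
            this.queryKey = queryKey;
            this.node = node;
            this.estimateMs = estimateMs;
        }
    }

    // Sample up to sampleSize queries from the bin of waiting queries, score
    // every (query, node) pair with estimate(), return the cheapest pair.
    static Combination pickBest(List<String> waitingQueries, List<String> routingTable,
                                Estimator est, int sampleSize, Random rng) {
        List<String> sample = new ArrayList<String>(waitingQueries);
        Collections.shuffle(sample, rng);
        if (sample.size() > sampleSize)
            sample = sample.subList(0, sampleSize);

        Combination best = null;
        for (String q : sample) {
            for (String node : routingTable) {
                long e = est.estimate(node, q);
                if (best == null || e < best.estimateMs)
                    best = new Combination(q, node, e);
            }
        }
        return best; // null only if no queries were waiting
    }
}

The CPU cost per dispatch is roughly sampleSize times the RT size in estimate() calls, which is presumably where the "unknown amount" of extra CPU below comes from.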
This is a form of queueing, even though we use a 'bin' instead of a formal 'queue.' I like it muchly; however, the timeout 'period' on the requesting end needs to be considered and adjusted. What is the upper limit of loiter time on the requestee? This clearly introduces some latency along the query path, at each hop, which probably has the payoff of vastly better query throughput in the freenet network, but increases wait time for someone sitting at a browser.
I'm having a lot of trouble understanding this paragraph. Try again.
Which is more important for this project? Only Ian could tell us - it is (originally) his design, after all. It seems that people (would like to) use this network for file distribution (other than HTML files)... The obvious difference here is that we would be shifting from a "forward instantaneously" strategy to a more efficient queueing model. Plus we are making this tradeoff in order to preserve our "route to the best node with best effort" concept, rather than trading off best routing for best speed.
This has nothing to do with load balancing, but should improve routing, while increasing CPU usage by some unknown amount. Thoughts?
Actually this has A LOT to do with load balancing, indirectly.
Okay, you're right about that. I should have said "this has nothing to do with *reducing* load". QR back-off is for reducing load. This is designed to improve NGR, which indeed has a lot to do with load balancing.
Given that the requestor should have a timeout capability, how would the requestee handle timeouts on his end? Just drop them? Pass back a QRejected message to the requestor? Maybe buffer these rejections and pass back a block of them, every XXX ms? The central issue is efficient state chain maintenance at the requestor.
Those last two choices sound best to me.
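For the buffered option, a rough sketch of the requestee-side bookkeeping - the names, and the flush interval standing in for the "XXX ms" above, are made up:

import java.util.*;

public class RejectionBatcher {

    private final List<Long> timedOutRequestIds = new ArrayList<Long>();
    private final long flushIntervalMs;
    private long lastFlushMs;

    public RejectionBatcher(long flushIntervalMs, long nowMs) {
        this.flushIntervalMs = flushIntervalMs;
        this.lastFlushMs = nowMs;
    }

    // Called when a queued request loiters past its deadline on the requestee.
    public synchronized void recordTimeout(long requestId) {
        timedOutRequestIds.add(requestId);
    }

    // Called periodically; returns the batch to send back as one block of
    // QRejected notices, or null if there is nothing to send yet.
    public synchronized List<Long> maybeFlush(long nowMs) {
        if (nowMs - lastFlushMs < flushIntervalMs || timedOutRequestIds.isEmpty())
            return null;
        lastFlushMs = nowMs;
        List<Long> batch = new ArrayList<Long>(timedOutRequestIds);
        timedOutRequestIds.clear();
        return batch;
    }
}

The requestor would use the ids in each batch to tear down its state chains in one pass, which is the maintenance concern you raised.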
-Martin
_______________________________________________ Devl mailing list [EMAIL PROTECTED] http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
