On Mon, Nov 17, 2003 at 02:47:59PM -0500, Ken Corson wrote:
> Martin Stone Davis wrote:
> >Martin Stone Davis wrote:
> >>We start at the top of the list, and see who is going to make our will 
> >>the fastest.  Since our lawyer is "backed off" at the moment, we go 
> >>with our chef.
> 
> important: "at the moment" . How big do we consider this "moment" to
> be ? 100ms , 5 seconds, a singular point in time ? hmmm....
> 
> >>The solution is to look ahead in our list until we find a nice 
> >>query/node combination.  This could be done by sampling a certain 
> >>number of queries from the ticker randomly.  Then, for each one 
> >>sampled, and for each node in the RT, calculate estimate().  Pick the 
> >>*combination* with the smallest value.
> 
> This is a form of queueing, even though we use a 'bin' instead of
> a formal 'queue.' I like it muchly, however, the timeout 'period'
> on the requesting end needs to be considered / adjusted. What is
> the upper limit of loiter time on the requestee ? This clearly
> introduces some latency along the query path, at each hop. Which
> probably has the payoff of vastly better query throughput in the
> freenet network. But increases wait time for someone sitting at
> a browser. Which is more important for this project ? Only Ian
> could tell us - it is (originally) his design, after all. It
> seems that people (would like to) use this network for file
> distribution (other than HTML files) ... The obvious difference
> here is that we would be shifting from a "forward instantaneously"
> strategy to a more efficient queueing model. Plus we are making
> this tradeoff in order to preserve our "route to the best node
> with best effort" concept, rather than trading off best routing
> for best speed.

What you are suggesting will result in fast queries getting faster, and
slow queries timing out - but producing considerable CPU load in the
process. Is this a good thing?
> 
> 
> >>This has nothing to do with load balancing, but should improve 
> >>routing, while increasing CPU usage by some unknown amount.  Thoughts?
> 
> Actually this has A LOT to do with load balancing, indirectly.
> Given that the requestor should have a timeout capability, how
> would the requestee handle timeouts on his end ? Just drop them?
> Pass back a QRejected message to the requestor ? Maybe buffer
> these rejections and pass back a block of them, every XXX ms ?
> The central issue is efficient state chain maintenance at the
> requestor.
> 
> _______________________________________________
> Devl mailing list
> [EMAIL PROTECTED]
> http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.

Attachment: signature.asc
Description: Digital signature

_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl

Reply via email to