* Matthew Toseland <[EMAIL PROTECTED]> [2007-11-30 01:26:40]:

> PROPOSAL:
> 
> Requests are by default queued. Queuing is in fact a stage which requests go 
> through once it is decided they need to be routed. Requests MAY have 
> deadlines; this is an interesting question which we should examine later: it 
> has some serious disadvantages in terms of traffic identification and 
> applying different policy to different requests, but it may have some major 
> advantages too. In any case, there is initially no timeout for a request, or 
> a very long timeout.
> 
> When a node has spare capacity in terms of the number of requests it can 
> carry, it sends a message to this effect, inviting requests, to each of its 
> peers. A node receiving such a message will normally send the queued request 
> closest location-wise to the sender, although timing concerns may also 
> factor in. Within a short period several requests will have been received, 
> and the node will accept one and reject the others.
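The invite-and-select step described above could be sketched as follows. This is a minimal illustration, not Freenet code: the `Request` type and `pick_request_for_peer` name are assumptions, and it assumes Freenet's circular [0, 1) keyspace for locations.

```python
from collections import namedtuple

# Illustrative request type: 'target' is the key's location on the keyspace.
Request = namedtuple("Request", "target")

def circular_distance(a: float, b: float) -> float:
    """Distance between two locations on the circular [0, 1) keyspace."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def pick_request_for_peer(queue, peer_location):
    """On receiving a 'spare capacity' invitation from a peer, choose the
    queued request whose target location is closest to that peer's location.
    Returns None if nothing is queued."""
    if not queue:
        return None
    return min(queue, key=lambda req: circular_distance(req.target, peer_location))
```

For example, a peer at location 0.95 would be offered a queued request for 0.9 in preference to ones for 0.1 or 0.5, even though 0.1 is numerically closer to 0.95 modulo the wrap-around check.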
> 
> How to define spare capacity? Initially we simply have an arbitrary number of 
> requests. We can tune this, or introduce an algorithm to tune it, later.
> 
> How to prevent all requests from going to the fastest nodes rather than those 
> best suited to them? Calculate the rate of acceptance of requests for each 
> node. Calculate the median, double it, and don't let any node accept requests 
> faster than this. Obviously this is a tunable heuristic! The point is that we 
> want to route to the node with the best location, rather than the fastest 
> node, but we don't want to have to always wait for the slowest node.
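The double-the-median cap above amounts to a one-liner. A hedged sketch, with all function names illustrative and the factor of two being exactly the proposal's own tunable heuristic:

```python
import statistics

def rate_cap(acceptance_rates):
    """acceptance_rates: requests accepted per unit time, one entry per peer.
    Returns the maximum rate any single peer is allowed to sustain: twice
    the median across all peers (the proposal's tunable heuristic)."""
    return 2.0 * statistics.median(acceptance_rates)

def may_route_to(peer_rate, acceptance_rates):
    """True if this peer is still under the cap, so it may be preferred for
    its location even though it is faster than most peers."""
    return peer_rate <= rate_cap(acceptance_rates)
```

With peer rates of [1, 2, 3, 10] the median is 2.5 and the cap is 5.0, so the node accepting 10 requests per unit time would be throttled while the one at 3 would not.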
> 
> One obvious problem is that requests which are not close to any of our nodes 
> may not get sent for a very long time, and worse, they may clog up the node 
> so that requests which are closer to the target don't get through. Part of 
> the answer is that this won't happen in a non-perverse network: a node's 
> location should be reasonably close to those of its peers. The other part of 
> the solution is to implement a timeout: after a request has been queued for a 
> certain period without making any progress, it backtracks to try to find a 
> better node. We can keep the existing HTL mechanism: if we backtrack, our HTL 
> is reduced; if we make progress, our HTL is reset. And a request deadline may 
> be imposed as well, although as I said there are issues with this.
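The timeout-plus-HTL bookkeeping above can be sketched as a small state holder. The constants `MAX_HTL` and `QUEUE_TIMEOUT` and the class name are assumptions for illustration, not Freenet's actual values:

```python
import time

MAX_HTL = 10          # assumed maximum hops-to-live, not Freenet's constant
QUEUE_TIMEOUT = 60.0  # assumed seconds a request may sit queued without progress

class QueuedRequest:
    """Tracks one queued request's HTL and how long it has been stalled."""

    def __init__(self):
        self.htl = MAX_HTL
        self.queued_at = time.monotonic()

    def on_progress(self):
        """Request moved closer to its target: reset HTL and restart the clock."""
        self.htl = MAX_HTL
        self.queued_at = time.monotonic()

    def on_backtrack(self):
        """Request stalled past the timeout: decrement HTL and requeue at a
        better node. Returns False when HTL is exhausted and the request dies."""
        self.htl -= 1
        self.queued_at = time.monotonic()
        return self.htl > 0

    def should_backtrack(self, now=None):
        """True once the request has been queued longer than the timeout."""
        now = time.monotonic() if now is None else now
        return now - self.queued_at > QUEUE_TIMEOUT
```

The key property, matching the text, is that HTL only decays on backtracks, so a request making steady progress toward its target never runs out of hops.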
> 
> 
> 
> Obviously I expect the above will evolve over time. However, IMHO it would be 
> significantly better at matching demand to supply than the current algorithm 
> while not sacrificing routing, and would therefore be a major improvement in 
> performance, which is one of the most important problems Freenet needs to 
> deal with.

Your proposal is a long way from tit-for-tat ... How do you prevent
leechers from "forgetting" to send the "I want to process requests" message?

NextGen$


_______________________________________________
Devl mailing list
[email protected]
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
