On Fri, Apr 21, 2006 at 11:46:58PM +0100, Michael Rogers wrote:
> Matthew Toseland wrote:
> > - If we simply move on to the next node (since backed off nodes are
> > ignored for routing at present and that seems to work), then to some
> > degree the load will go to where it is least overloaded...
>
> ...but at the cost of extra hops. This is a hard one. Longer paths
> waste bandwidth, but so do rejected requests. If the sender retries
> when a request is rejected and the request follows the same path, it
> will just keep getting rejected.
>
> > a very slow node at a good point in the keyspace will then inflict
> > great harm.
>
> Agreed, I think at some point we have to try to route around slow
> nodes. I suppose the question is where on the path the re-routing
> should occur - near the source or near the bottleneck?
Well, in 0.5 we were constantly routing around overloaded nodes, to the
point that it was the dominant force in routing! How do we balance the
two issues? We need to be able to identify abnormally slow nodes and
ignore them... Ideally location swapping would give them a smaller piece
of the keyspace, but if this isn't possible we can just detect nodes
which are way outside the normal performance range and not talk to them.

> > What do you mean by sliding window flow control? Doesn't AIMD
> > suffice?
>
> In TCP the sending rate is limited by the minimum of the congestion
> window and the receiver window. The congestion window represents the
> capacity of the link, and grows and shrinks in response to packet
> loss. The receiver window represents the capacity of the receiver -
> the front of the window is moved forward when the receiving process
> removes data from the buffer, and the back of the window is moved
> forward when incoming data is added to the buffer. The receiver
> specifies the window size in acks and keepalives ("ack, send 3
> more... ack, send 2 more"). If the receiving process stalls then the
> back of the window catches up with the front and the sender can't
> send any more data, no matter how large the congestion window.

Hmmm. How do you propose to apply this to Freenet? How would we
determine the receiver window size? And when we cannot send because of
the receiver window, what then? Ignore it, move on to the next node,
increment a counter which, if it exceeds some value, results in our
dropping the node from routing for a while? I'm not sure this is
applicable to Freenet... it's essential that we reduce the AIMD counter
for the node if it is overloaded; the receiver window idea is designed
to deal with buffers...

I think I have a simple solution with regard to rerouting:

1. Normally, if we cannot send a request to the node which is preferred
for the key, we reject the request in order to reduce the incoming load.

2. If the AIMD congestion window for a node reaches rock bottom and
stays there for more than X period, we back off that node - remove it
from routing entirely for Y period. After that, we try again. If in Z
period (probably equal to Y) it does not recover, we back off again for
double the period. There will be a maximum backoff. If we reach the
maximum backoff, we tell the user - not just on the routing page, but
on the main web interface page as a UserAlert.

3. While a node is backed off, we do not accept any of its requests.
There will be a custom message for this rejection, so the node can tell
the user if all its requests are being rejected because it is too slow.

If we used the receiver window idea, we would have to create some
arbitrary rule like "if more than X% of requests to the node cannot be
routed in Y period..."; there's enough alchemy already. :)

> So instead of pinging your neighbours to see how overloaded they are,
> the receiver window should tell you when they're ready to receive
> more data.

How would a node know how many requests it can process? Would we have
an arbitrary number of in-flight requests? The key question, though, is
how to integrate this with the above mechanism.

> > Maybe... IMHO if we can implement a form of load balancing that
> > doesn't have the problems associated with the current one then it
> > would be worthwhile for me to do it immediately; we just have to
> > thrash out the design... that is assuming the design is 100%
> > bulletproof...
>
> Cool, it would definitely be better if you implemented it. I'm
> writing a simulator at the moment for some other work so hopefully
> I'll be able to do some simulations if you need any.

That would be really useful.

> Cheers,
> Michael

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
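[Editorial sketch: the backoff scheme in points 2 and 3 above could look
roughly like this. All class, method, and constant names are invented
for illustration - they are not taken from the Freenet node source, and
the concrete values of X, Y, and Z are assumptions.]

```java
// Sketch of the proposed per-node backoff state machine.
// Y from the text becomes INITIAL_BACKOFF_MS; the maximum backoff
// period is an assumed value.
public class NodeBackoff {
    static final long INITIAL_BACKOFF_MS = 1000;              // "Y period" (assumed)
    static final long MAX_BACKOFF_MS = 3 * 60 * 60 * 1000L;   // maximum backoff (assumed)

    private long currentBackoffMs = INITIAL_BACKOFF_MS;
    private long backedOffUntil = 0;  // wall-clock millis at which the backoff expires

    /** Point 2: the window hit rock bottom for more than X, so back off,
     *  doubling the period each consecutive time up to the maximum. */
    public void backOff(long now) {
        backedOffUntil = now + currentBackoffMs;
        currentBackoffMs = Math.min(currentBackoffMs * 2, MAX_BACKOFF_MS);
        if (currentBackoffMs == MAX_BACKOFF_MS) {
            // Point 2: at maximum backoff, surface a UserAlert on the
            // main web interface page, not just the routing page.
            System.out.println("UserAlert: peer is persistently too slow");
        }
    }

    /** The node recovered within Z, so reset the backoff length. */
    public void onRecovered() {
        currentBackoffMs = INITIAL_BACKOFF_MS;
    }

    /** Points 2-3: while backed off we neither route to the node
     *  nor accept its requests (the latter with a custom rejection). */
    public boolean isBackedOff(long now) {
        return now < backedOffUntil;
    }
}
```

The doubling-with-a-cap shape is the same trick TCP uses for
retransmission timers: persistent failure costs exponentially more,
while a single bad patch is forgiven quickly.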
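[Editorial sketch: Michael's TCP analogy - sending limited by the
minimum of the congestion window and the receiver window - can be made
concrete with a small sender-side model. Names and the AIMD constants
are illustrative assumptions, not Freenet code.]

```java
// Sender-side flow control combining an AIMD congestion window with a
// receiver-advertised window, as in the TCP description above.
public class WindowedSender {
    private double cwnd = 1.0;  // AIMD congestion window, in requests in flight
    private int rwnd = 4;       // receiver window from acks/keepalives ("send N more")
    private int inFlight = 0;

    /** A new request may be sent only if BOTH windows allow it. */
    public boolean canSend() {
        return inFlight < Math.min((int) cwnd, rwnd);
    }

    public void onSend() { inFlight++; }

    /** An ack carries a fresh receiver window; additive increase of cwnd. */
    public void onAck(int advertisedWindow) {
        inFlight--;
        rwnd = advertisedWindow;
        cwnd += 1.0 / cwnd;  // roughly +1 window per round trip
    }

    /** A rejection/loss halves the congestion window (multiplicative decrease). */
    public void onRejected() {
        inFlight--;
        cwnd = Math.max(1.0, cwnd / 2.0);
    }
}
```

Note the failure mode Matthew raises: if the receiver advertises 0
("receiving process stalls"), `canSend()` stays false no matter how
large `cwnd` grows, and the open question is what the Freenet sender
should do then - wait, reroute, or start counting toward a backoff.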
_______________________________________________
Devl mailing list
Devl@freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl