Ok, thinking about it, here is a proposal, or rather, the beginning of a proposal. I'm assuming that we get rid of NLM, fair sharing, and anything else intended to control load, and replace it with this. We will absolutely need to simulate this before we write a single line of code to deploy it.
The core idea is that a node will include a floating point number in response to any kind of request, indicating how close that node is to being overloaded. 0.0 would mean it's doing nothing; 1.0 would mean it is completely saturated and must reject requests. Clearly the goal is to avoid getting anywhere near 1.0.

A node tracks several things:

- The overall average load reported by responses this node has received
- The average load reported by responses this node has received, per remote node
- The average load reported by responses this node has forwarded, per remote node

Given these metrics, I think we should be able to do the following:

- Limit our overall rate of initiating local requests based on the global average reported load
- Limit our rate of local requests based on the average load of the connection to the peer each one would need to be forwarded to
- Detect when remote peers are abusing the system by disregarding load, as evidenced by a significantly higher average load in replies forwarded to them

Of course, there are lots of questions:

- How do we compute the averages? A decaying running average of some kind? Over what time period?
- How do we translate load into a desired rate of requests?
- What criteria indicate that a peer is abusing the system? What is the remedy?

This is basically control theory, and we'll need robust answers to these questions before we proceed (ideally with a theoretical rather than experimental foundation).

Ian.

--
Ian Clarke
Founder, The Freenet Project
Email: ian at freenetproject.org
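To make the first open question concrete: one natural candidate for the decaying running average is an exponentially weighted moving average (EWMA), where old samples lose half their weight every fixed half-life. This is only a sketch for the simulation, not a design decision; the `LoadEwma` name and the 60-second half-life are illustrative assumptions.

```python
import time


class LoadEwma:
    """Exponentially decaying average of reported load values in [0.0, 1.0]."""

    def __init__(self, half_life_seconds=60.0):
        self.half_life = half_life_seconds
        self.value = 0.0
        self.last_update = None

    def report(self, load, now=None):
        """Fold a newly reported load into the running average."""
        now = time.monotonic() if now is None else now
        if self.last_update is None:
            self.value = load
        else:
            dt = max(0.0, now - self.last_update)
            # The old average's weight decays by half every half_life seconds,
            # so recent reports dominate regardless of report frequency.
            decay = 0.5 ** (dt / self.half_life)
            self.value = decay * self.value + (1.0 - decay) * load
        self.last_update = now
        return self.value
```

A node would keep one such tracker for the global average and one per remote node (for loads received from it, and separately for loads it forwarded replies to). The right half-life is exactly the "what time period?" question and would have to come out of simulation.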
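For the second question, one simple control law (an assumption to seed the simulation, not a settled answer) is to stretch the interval between locally initiated requests as the average reported load approaches some ceiling, refusing to initiate at all beyond it:

```python
def request_interval(avg_load, base_interval=0.1, ceiling=0.9):
    """Map an average reported load in [0.0, 1.0] to a minimum interval
    (in seconds) between locally initiated requests.

    base_interval and ceiling are placeholder parameters: at zero load we
    initiate at the base rate; as load nears the ceiling the interval grows
    without bound, so we back off well before nodes hit saturation at 1.0.
    """
    if avg_load >= ceiling:
        return float("inf")  # stop initiating requests entirely
    # Linear back-off: interval grows as load approaches the ceiling.
    return base_interval / (1.0 - avg_load / ceiling)
```

The same function could be applied twice: once with the global average to cap the overall rate, and once with the per-peer average for the connection a request would be forwarded over, taking whichever interval is longer. Whether a linear law is stable, or whether something like AIMD is needed, is precisely the control-theory question raised above.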
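And for the abuse question, the detection signal described above (replies forwarded to a peer reporting significantly higher load than the global average) might start as simply as a fixed margin test. The margin value is a placeholder; the real threshold, and the remedy once a peer is flagged, would have to come out of the simulation work:

```python
def is_suspected_abuser(peer_forwarded_load, global_load, margin=0.2):
    """Flag a peer if the average load reported in replies we forwarded
    to it sits well above the global average load we observe.

    A peer that disregards load signals keeps sending requests into
    loaded parts of the network, so the replies routed back through us
    to that peer should report systematically higher load. The 0.2
    margin is an illustrative assumption, not a tuned value.
    """
    return peer_forwarded_load > global_load + margin
```

A fixed margin is crude (a noisy EWMA could cross it transiently), so a real test would probably also require the gap to persist over some window before acting.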