Sorry to contradict what I said yesterday, but last night I tried to go through the argument from first principles and came to a different conclusion.
What are we trying to achieve with load balancing / congestion control?

1. Reduce load on overloaded nodes.
2. Don't redirect load down longer paths.
3. When capacity is greater than traffic, give everyone as much as they ask for.
4. When traffic is greater than capacity, give everyone a fair share.

My proposal yesterday tried to solve all four points with one mechanism based on end-to-end feedback, but as Matthew pointed out, there's no point adjusting the rate of a path that's not going to be used again. I'm also not sure the mechanism I proposed yesterday would ensure fairness: the rate of a link changes according to the ratio of successes to failures, and there's no reason to think that a node that produces more than its share of traffic will have a higher proportion of failures on anything except the local link.

So I've come to the conclusion that fairness should be enforced on the link nearest the sender, and flow control should be enforced on the link nearest the (overloaded) receiver. Here's my new suggestion (there's a rough code sketch of points 1-4 at the end of this mail):

1. AIMD flow control on each link. The rate is increased in response to local acks, and decreased in response to local packet loss, local timeouts and local overloads. The rate is not affected by end-to-end replies, failures, overloads or timeouts.

Rationale: this mechanism is meant to deal with two problems: packet loss due to congestion in the underlying network, which should result in a TCP-friendly slowdown, and bandwidth/CPU overload at the neighbour, which should be relieved by finding the maximum rate at which the neighbour can process messages. Instead of pushing load back to the sender, fair queueing limits the bandwidth available to the sender in the first place.

2. A token bucket for each incoming link, filled at equal rates, with the total rate determined by the user-specified outgoing bandwidth limit.

3. A token bucket for each outgoing link, filled at a rate determined by the AIMD flow control on the link (see above).

4. Forward a packet if there are enough tokens in the incoming and outgoing buckets; otherwise send an overload message.

Rationale: the incoming buckets ensure long-term fairness between neighbours while allowing short-term bursts. To avoid wasting bandwidth when a neighbour uses less than its share in the long term, when a bucket overflows the tokens are redistributed to the emptiest bucket. This allows unfairness when there's excess capacity while enforcing fairness when capacity is scarce. Unlike my suggestion yesterday, this mechanism requires two buckets per neighbour rather than one bucket per pair of neighbours.

Open problems:

- Should each local client have its own incoming bucket, or will that just encourage clients to open multiple connections? Should all clients share a single bucket? Should local clients get a fixed share of the bandwidth (say 10%), independent of the number of neighbours?

- Should a well-behaved sender adjust its rate in response to a non-local overload? Or should it keep sending at the same rate, relying on fair queueing and the fact that its next request is unlikely to reach the same overloaded node?

- What about retransmissions?

Cheers,
Michael
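P.S. To make point 1 a bit more concrete, here's a rough, untested Java sketch of what I mean by per-link AIMD rate control. None of this is real code from the node; the class and method names are made up for illustration, and the constants would need tuning.

    // Rough sketch only: one instance per link, protecting the link from
    // both network congestion and an overloaded neighbour. The rate is in
    // tokens per second (bytes or packets, whichever unit the buckets use).
    class AimdRateController {
        private double rate;                 // current allowed send rate
        private final double minRate;        // floor, so a link can always recover
        private final double maxRate;        // cap, e.g. from the bandwidth limit
        private final double increment;      // additive increase per local ack
        private final double decreaseFactor; // multiplicative decrease, e.g. 0.5

        AimdRateController(double initialRate, double minRate, double maxRate,
                           double increment, double decreaseFactor) {
            this.rate = initialRate;
            this.minRate = minRate;
            this.maxRate = maxRate;
            this.increment = increment;
            this.decreaseFactor = decreaseFactor;
        }

        // Called when the neighbour acks a packet on this link.
        synchronized void onLocalAck() {
            rate = Math.min(maxRate, rate + increment);
        }

        // Called on local packet loss, a local timeout or a local overload
        // message. End-to-end replies, failures, overloads and timeouts
        // deliberately do NOT call this.
        synchronized void onLocalCongestion() {
            rate = Math.max(minRate, rate * decreaseFactor);
        }

        synchronized double currentRate() {
            return rate;
        }
    }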
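And here's an equally rough sketch of points 2-4: one bucket per incoming link filled from equal shares of the user-specified limit (with overflow going to the emptiest bucket), one bucket per outgoing link filled at whatever rate the AIMD controller above currently allows, and a forwarding decision that needs tokens from both. Again, the names are invented and the details (units, refill interval, bucket capacities) are up for grabs.

    import java.util.*;

    // Token bucket measured in the same units as the AIMD rate above.
    class TokenBucket {
        private double tokens;
        private final double capacity;

        TokenBucket(double capacity) { this.capacity = capacity; }

        // Add tokens and return the overflow so the caller can redistribute it.
        synchronized double add(double amount) {
            tokens += amount;
            double overflow = Math.max(0, tokens - capacity);
            tokens -= overflow;
            return overflow;
        }

        // Take 'size' tokens if they're all available, otherwise take nothing.
        synchronized boolean tryRemove(double size) {
            if (tokens < size) return false;
            tokens -= size;
            return true;
        }

        synchronized double level() { return tokens; }
    }

    class Forwarder {
        // One bucket per incoming link, one per outgoing link, one AIMD
        // controller per outgoing link. How neighbours get added and
        // removed is not shown.
        private final Map<Peer, TokenBucket> incoming = new HashMap<>();
        private final Map<Peer, TokenBucket> outgoing = new HashMap<>();
        private final Map<Peer, AimdRateController> aimd = new HashMap<>();
        private final double outgoingBandwidthLimit; // user-specified

        Forwarder(double outgoingBandwidthLimit) {
            this.outgoingBandwidthLimit = outgoingBandwidthLimit;
        }

        // Called once per refill interval, e.g. once a second.
        void refill(double elapsedSeconds) {
            if (incoming.isEmpty()) return;
            // Incoming buckets: equal shares of the user-specified limit,
            // with any overflow redistributed to the emptiest bucket.
            double share = outgoingBandwidthLimit * elapsedSeconds / incoming.size();
            double spare = 0;
            for (TokenBucket b : incoming.values()) spare += b.add(share);
            if (spare > 0) {
                TokenBucket emptiest = Collections.min(incoming.values(),
                        Comparator.comparingDouble(TokenBucket::level));
                emptiest.add(spare);
            }
            // Outgoing buckets: filled at whatever rate AIMD currently allows.
            for (Peer p : outgoing.keySet())
                outgoing.get(p).add(aimd.get(p).currentRate() * elapsedSeconds);
        }

        // Point 4: forward only if both buckets have enough tokens;
        // otherwise the caller sends an overload message instead.
        boolean shouldForward(Peer from, Peer to, double packetSize) {
            TokenBucket in = incoming.get(from), out = outgoing.get(to);
            if (!in.tryRemove(packetSize)) return false;
            if (out.tryRemove(packetSize)) return true;
            in.add(packetSize); // give the incoming tokens back
            return false;
        }
    }

    // Stand-in for however the node identifies a neighbour.
    class Peer {}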