Matthew Toseland wrote:

> We do. The link layer uses a protocol which behaves more or less the same as
> TCP with regard to congestion control.
Maybe this has been fixed in 1084, but a couple of days ago my node was
trying to send 107 KB/s through a 16 KB/s pipe, for long enough to kill all
the TCP connections sharing the pipe. I understand that was due to a bug at
the request layer, but it shows that congestion control at the message layer
currently isn't working: we're depending on the request layer to do both
jobs.

> IMHO we would still need rejection, for several reasons:
> 1. Reject due to loop.

Yeah, of course. :-)

> 2. Reject due to overload if something unexpected happens.

I'm not sure what you have in mind...

> 3. Reject due to overload as a normal part of the operation of the node
> because if we just send one token to one node it will not always send a
> request, so IMHO we have to send tokens to several nodes and then reject
> some if we get more requests than we'd expected.

If we still need to reject requests pre-emptively, what's the advantage of
tokens over the current system? On the other hand, I do see your point: it's
possible that several peers will spend their tokens at once.

How about this: instead of tokens, we give each peer an enforced delay
between requests. If the load is light, we gradually decrease the delay; if
the load is heavy, we increase it. Like token buckets, we can enforce
fairness or other policies by giving different peers different delays.
Unlike stop/start signals or tokens, it smooths out bursts, because a peer
can't save up delay: even if there's a long gap between requests A and B,
the peer still has to wait the full delay between B and C.

What happens if a request arrives early? Maybe the peer is ignoring the
delay, or maybe it was just network jitter. Keep the peer's requests in a
queue and process them at the proper time; if the queue gets more than about
one second long, something's wrong - drop the connection and warn the user.

What do you reckon?

Cheers,
Michael
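P.S. To make the pacing idea concrete, here's a rough sketch in Java.
Everything in it is invented for illustration - the class and method names,
the constants, and the overload signal don't correspond to anything in the
node's code, and the timer that actually drains the early-request queue is
left out:

import java.util.ArrayDeque;
import java.util.Queue;

class PeerPacer {
    private long delayMs = 100;       // current enforced delay between requests
    private long nextAllowedTime = 0; // earliest time the next request may run
    private final Queue<Runnable> early = new ArrayDeque<>(); // arrived too soon
    private static final long MAX_BACKLOG_MS = 1000; // ~1s queued => drop peer

    // Returns false if the peer should be dropped (backlog past ~1 second).
    synchronized boolean onRequest(Runnable request, long now) {
        if (now >= nextAllowedTime && early.isEmpty()) {
            // On time: run it immediately. The clock restarts from 'now',
            // not from the old deadline, so idle time never builds up credit.
            nextAllowedTime = now + delayMs;
            request.run();
            return true;
        }
        // Early: queue it; a timer (not shown) runs it at nextAllowedTime.
        early.add(request);
        nextAllowedTime += delayMs;
        // More than ~1s of queued work means the peer is ignoring the delay.
        return (nextAllowedTime - now) <= MAX_BACKLOG_MS;
    }

    // Called by whatever measures local load: back off quickly when
    // overloaded, creep back down slowly when things are quiet.
    synchronized void adjustForLoad(boolean overloaded) {
        if (overloaded)
            delayMs *= 2;
        else
            delayMs = Math.max(10, delayMs - 5);
    }
}

The important line is nextAllowedTime = now + delayMs: restarting the clock
from the arrival time rather than from the previous deadline is exactly the
no-saved-credit property above - a long gap between A and B buys the peer
nothing for C.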
