Hi Matthew,
Let me just make sure I've got this straight before I start writing code.
My proposal (a rough code sketch follows the list):
1) Keep track of the number of tokens available for each incoming and
outgoing link
2) When a request is received, decrement the number of tokens available
for the incoming link - if already zero, reject the request
3) When a request is forwarded or answered locally, generate a token and
give it to a deserving incoming link
4) If a request can't be forwarded due to lack of tokens on the outgoing
link, reject the request and don't generate a token
5) If a request can't be forwarded due to the bandwidth limiter, reject
the request and don't generate a token
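To make sure we mean the same thing, here's a rough sketch of the above
in Java. Everything here is invented for illustration - links are just
strings, and "deserving" is approximated by round-robin - so treat it
as a picture of the scheme, not an implementation:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of the first scheme. Link identities and the round-robin
// "deserving" policy are placeholders.
class TokenLimiter {
    private final Map<String, Integer> tokens = new HashMap<>();
    private final Deque<String> roundRobin = new ArrayDeque<>();

    // (1) One token counter per link.
    void addLink(String link, int initialTokens) {
        tokens.put(link, initialTokens);
        roundRobin.addLast(link);
    }

    // (2) On receiving a request, spend a token from the incoming
    // link; if the link already has none, reject the request.
    boolean acceptRequest(String incoming) {
        int t = tokens.getOrDefault(incoming, 0);
        if (t == 0) return false;
        tokens.put(incoming, t - 1);
        return true;
    }

    // (3) When a request is forwarded or answered locally, mint one
    // token and give it to a deserving incoming link.
    void requestServed() {
        if (roundRobin.isEmpty()) return;
        tokens.merge(nextDeservingLink(), 1, Integer::sum);
    }

    // (4)/(5) A request that can't be forwarded - no tokens on the
    // outgoing link, or the bandwidth limiter says no - is rejected
    // and mints nothing, so no method is needed for those cases.

    private String nextDeservingLink() {
        String link = roundRobin.removeFirst(); // rotate the ring
        roundRobin.addLast(link);
        return link;
    }
}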
Your proposal (again, a sketch follows the list):
1) Keep track of the number of tokens available for each incoming and
outgoing link
2) When a request is received, decrement the number of tokens available
for the incoming link - if already zero, reject the request
3) When a request is forwarded or answered locally, or when it times
out, generate a token and give it to a deserving incoming link
4) If a request can't be forwarded due to lack of tokens on the outgoing
link, queue the request - the queue size is automatically limited by (2)
5) If a request can't be forwarded due to the bandwidth limiter, should
it be rejected or queued?
6) When a token is received, if there are any requests queued for that
outgoing link, send the first one
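Here's the corresponding sketch of the queueing side of your scheme,
i.e. steps (4) and (6). Request and send() are placeholders for
whatever the node really does:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class Request {}

// Sketch of steps (4) and (6) of the second scheme.
class QueueingLimiter {
    private final Map<String, Integer> outgoingTokens = new HashMap<>();
    private final Map<String, Deque<Request>> queues = new HashMap<>();

    // (4) If the outgoing link has no tokens, queue the request. The
    // queue is bounded because step (2) limits how many requests are
    // accepted in the first place.
    void forward(String outgoing, Request req) {
        int t = outgoingTokens.getOrDefault(outgoing, 0);
        if (t > 0) {
            outgoingTokens.put(outgoing, t - 1);
            send(outgoing, req);
        } else {
            queues.computeIfAbsent(outgoing, k -> new ArrayDeque<>())
                  .addLast(req);
        }
    }

    // (6) When a token arrives for an outgoing link, send the first
    // queued request if there is one, otherwise bank the token.
    void tokenReceived(String outgoing) {
        Deque<Request> q = queues.get(outgoing);
        if (q != null && !q.isEmpty()) {
            send(outgoing, q.removeFirst());
        } else {
            outgoingTokens.merge(outgoing, 1, Integer::sum);
        }
    }

    private void send(String outgoing, Request req) {
        // Placeholder: hand the request to the transport layer.
    }
}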
For the moment let's assume no misrouting in either proposal.
In my proposal, tokens are generated at the rate at which requests can
be served (forwarded or answered locally). However, this rate can change
due to peers connecting and disconnecting, congestion, CPU load, etc., so
it's occasionally necessary to probe for available capacity. This can be
done by spontaneously generating a token and giving it to a deserving
incoming link. In equilibrium, the number of rejected requests will
equal the number of spontaneously generated tokens, so there's a
tradeoff between rejecting too many requests and failing to use
available capacity.
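The probe itself could be as simple as a timer that reuses the same
token-minting path as above - the interval is a number I've plucked
out of the air, and the sketch ignores locking between the timer
thread and the request path:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Spontaneously mint one token per interval to probe for capacity.
class CapacityProber {
    static final long PROBE_INTERVAL_MS = 1000; // assumed, needs tuning

    private final TokenLimiter limiter; // the sketch above
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    CapacityProber(TokenLimiter limiter) {
        this.limiter = limiter;
    }

    void start() {
        // requestServed() already credits a deserving incoming link,
        // so it doubles as the spontaneous token generator.
        timer.scheduleAtFixedRate(limiter::requestServed,
                PROBE_INTERVAL_MS, PROBE_INTERVAL_MS,
                TimeUnit.MILLISECONDS);
    }

    void stop() {
        timer.shutdownNow();
    }
}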
In your proposal, tokens seem to be generated at a constant rate (in the
long term), because you eventually generate one token for each request
you accept, regardless of its fate. It seems like this might not control
load unless the source responds to rejections by slowing down, which has
its own problems. So I'd like to suggest a third scheme, which is a
hybrid of the first two: queue requests that can't be forwarded
immediately, but don't generate tokens for requests that time out.
Instead, spontaneously generate a token once in a while to probe for
available capacity.
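In code, the only change from your scheme would be the token-minting
rule - something like the following, where Outcome and the hook name
are invented for illustration:

// Mint a token only when a request was actually served; a timeout
// mints nothing, so the token rate tracks real capacity, and the
// occasional probe recovers capacity that timeouts leave unused.
enum Outcome { FORWARDED, ANSWERED_LOCALLY, TIMED_OUT }

class HybridTokenRule {
    static void onRequestFinished(Outcome outcome, TokenLimiter limiter) {
        if (outcome == Outcome.FORWARDED
                || outcome == Outcome.ANSWERED_LOCALLY) {
            limiter.requestServed();
        }
        // TIMED_OUT: deliberately no token.
    }
}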
What do you reckon?
Michael