On Thu, Jun 22, 2006 at 03:09:34PM +0100, Michael Rogers wrote:
> Hi Matthew,
> 
> Let me just make sure I've got this straight before I start writing code.
> 
> My proposal:
> 
> 1) Keep track of the number of tokens available for each incoming and 
> outgoing link
> 2) When a request is received, decrement the number of tokens available 
> for the incoming link - if already zero, reject the request
> 3) When a request is forwarded or answered locally, generate a token and 
> give it to a deserving incoming link

When a request is *completed*. Otherwise we will be creating too many
tokens when a request is forwarded more than once!
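
Roughly what I have in mind, as a pseudo-Java sketch (all names here
are invented for illustration, nothing from the actual codebase):

    import java.util.HashMap;
    import java.util.Map;

    /** Per-link token accounting: tokens are created on completion,
     *  not on forward, so a request forwarded several times still
     *  yields exactly one token. */
    class TokenAccounting {
        private final Map<String, Integer> tokens = new HashMap<>();

        /** Returns false (reject) if the incoming link's bucket is empty. */
        synchronized boolean onRequestReceived(String incomingLink) {
            int t = tokens.getOrDefault(incomingLink, 0);
            if (t == 0) return false;        // no tokens: reject
            tokens.put(incomingLink, t - 1);
            return true;
        }

        /** Called exactly once per request, when it *completes*
         *  (answered, transferred or timed out) - never when it is
         *  merely forwarded. */
        synchronized void onRequestCompleted(String deservingLink) {
            tokens.merge(deservingLink, 1, Integer::sum);
        }
    }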

> 4) If a request can't be forwarded due to lack of tokens on the outgoing 
> link, reject the request and don't generate a token

Why don't we generate a token? If we don't use one up, then that's fine
I suppose; it does mean reassigning the token we just grabbed from the
incoming node.

> 5) If a request can't be forwarded due to the bandwidth limiter, reject 
> the request and don't generate a token

Nice touch...
> 
> Your proposal:
> 
> 1) Keep track of the number of tokens available for each incoming and 
> outgoing link

Yes.

> 2) When a request is received, decrement the number of tokens available 
> for the incoming link - if already zero, reject the request

Yes, I think so ... we have a bunch of tokens which allow new requests
to be sent from that node, and a bunch of queued or running requests
from that node.

> 3) When a request is forwarded, answered locally, or times out, generate 
> a token and give it to a deserving incoming link

When a request *completes*. Even with no misrouting, we will sometimes
have to forward a request more than once due to loops, small networks,
etc.

> 4) If a request can't be forwarded due to lack of tokens on the outgoing 
> link, queue the request - the queue size is automatically limited by (2)

If it can't be immediately forwarded to its preferred node then we keep
it on the queue for the time being, yes. But if the tokens are separate
from the queue, how is the queue size limited? As far as I can see, the
maximum token bucket size must *include* queued/running requests as
well as the tokens available for new ones.
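
Concretely, I mean a single per-link budget that covers queued and
running requests as well as free capacity - something like this sketch
(invented names, not real code):

    /** One budget per incoming link: a request is accepted only if
     *  queued + running < capacity, so the queue can never grow past
     *  the bucket size. */
    class LinkBudget {
        private final int capacity;
        private int queued, running;

        LinkBudget(int capacity) { this.capacity = capacity; }

        synchronized boolean tryAccept() {
            if (queued + running >= capacity) return false; // bucket full: reject
            queued++;                                       // request joins the queue
            return true;
        }

        synchronized void onStartForwarding() { queued--; running++; }
        synchronized void onCompleted()       { running--; } // frees a slot
    }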

> 5) If a request can't be forwarded due to the bandwidth limiter, should 
> it be rejected or queued?

I don't know that it is necessary to operate on that level, but it's a
useful optimization. I'd be interested to see how it works without that
though; the bwlimiter will limit the actual transfers, and if we have
many requests going then it will take a long time for each to complete,
so it will self-compensate...

Anyway, if we do, the request is queued; the intent was that being
queued is the normal case for an arriving request - it's lucky if it
can be forwarded immediately. So the bandwidth limiter would simply be
a constraint that "we can only forward a request every X millis", where
X is determined from the bandwidth limit and the success ratio.
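
For instance, if the success ratio is p and a successful request
transfers about S bytes on average, then each forward costs roughly
p*S bytes, and X falls straight out of the bandwidth limit. A sketch
with made-up figures:

    /** Minimum interval between forwards, in millis, derived from the
     *  bandwidth limit and the success ratio. All figures here are
     *  invented for illustration. */
    class ForwardInterval {
        static long millisBetweenForwards(double bytesPerSecLimit,
                                          double successRatio,
                                          double avgTransferBytes) {
            // Average bandwidth cost per forwarded request:
            double bytesPerRequest = successRatio * avgTransferBytes;
            // rate <= limit / cost  =>  interval >= cost / limit
            return (long) (1000.0 * bytesPerRequest / bytesPerSecLimit);
        }
    }
    // e.g. millisBetweenForwards(16384, 0.5, 32768) == 1000:
    // one forward per second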

> 6) When a token is received, if there are any requests queued for that 
> outgoing link, send the first one

Yup.
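
I.e. roughly this (sketch only, names invented):

    import java.util.ArrayDeque;
    import java.util.Queue;

    /** When a token arrives for an outgoing link, forward the oldest
     *  queued request; otherwise bank the token for the next arrival. */
    class OutgoingLink {
        private final Queue<Runnable> queue = new ArrayDeque<>(); // pending forwards
        private int tokens;

        synchronized void onTokenReceived() {
            Runnable forward = queue.poll();
            if (forward != null) forward.run(); // spend the token at once
            else tokens++;                      // nothing queued: bank it
        }

        synchronized void queueOrForward(Runnable forward) {
            if (tokens > 0) { tokens--; forward.run(); }
            else queue.add(forward);
        }
    }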
> 
> For the moment let's assume no misrouting in either proposal.

Okay, this means that a slow peer can determine the rate for the whole
node, but it's a useful simplifying assumption for initial simulations
and design...
> 
> In my proposal, tokens are generated at the rate at which requests can 
> be served (forwarded or answered locally). 

In both proposals, tokens are generated at the rate at which requests
can be served. In my proposal, tokens are only *created* when a request
is answered locally, and then they are forwarded backwards. Some of
them are lost along the way because some requests generate more than
one forward (a node that forwards a request twice, e.g. because the
first attempt loops back, spends two outgoing tokens but gets only one
back); I don't _think_ this is a problem...

> However, this rate can change 
> due to peers connecting and disconnecting, congestion, CPU load, etc, so 
> it's occasionally necessary to probe for available capacity.

Hmmm, is it? When a peer connects - at least for the first time - we can
give it a full incoming bucket; this then becomes a full outgoing bucket
on the other end. As far as congestion and CPU load go, there is only a
problem when we actually lose a request, correct? That would be a
timeout, which should still generate a token: we know what requests are
in flight, and if one is lost it times out, so we still get a
completion and still make a token, allowing another request to be sent
- but only after the timeout has expired, which is usually a long time.
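
So a lost request would go through the same completion path as a
successful one, just triggered by a timer - something like this
(sketch, invented names):

    import java.util.Timer;
    import java.util.TimerTask;

    /** Every request in flight carries a timeout; a lost request still
     *  completes - via the timer - and so still generates a token,
     *  just much later than a successful one would. */
    class InFlightRequest {
        private static final Timer TIMER = new Timer(true); // shared daemon timer
        private boolean done;

        void start(long timeoutMillis, Runnable generateToken) {
            TIMER.schedule(new TimerTask() {
                public void run() { complete(generateToken); } // timed out
            }, timeoutMillis);
        }

        synchronized void complete(Runnable generateToken) {
            if (done) return;    // completion and timeout must not both count
            done = true;
            generateToken.run(); // one token per request, success or timeout
        }
    }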

> This can be 
> done by spontaneously generating a token and giving it to a deserving 
> incoming link.

Is this necessary?

> In equilibrium, the number of rejected requests will 
> equal the number of spontaneously generated tokens, so there's a 
> tradeoff between rejecting too many requests and failing to use 
> available capacity.

Hmmm. Okay, what exactly are we talking about with rejected requests?
I had assumed that if a request was rejected, it would just remain on
the previous node's queue; if it keeps being rejected it will
eventually time out. We don't have to keep things as they are...
> 
> In your proposal, tokens seem to be generated at a constant rate (in the 
> long term), because you eventually generate one token for each request 
> you accept, regardless of its fate. It seems like this might not control 
> load unless the source responds to rejections by slowing down, which has 
> its own problems.

Why does it not control load? If it takes ages for requests to complete,
then we are compelled to wait ages between sending requests. This does
indeed propagate back across the network, because of the policy on
deserving nodes. Doesn't it?

If node A sends a request to node B, and it is rejected because of lack
of tokens, then the request remains queued on node A, and eventually
will time out. Only as many requests as there are tokens for will be
forwarded. Because of the "fullest non-full bucket" policy, a node
attempting to spam will not receive many tokens. So load is effectively
limited, and this propagates back to the source, no?
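
For reference, by "fullest non-full bucket" I mean roughly this
(illustrative sketch, names invented): the new token goes to the
incoming link whose bucket is nearest to full without actually being
full, and a spamming peer, whose bucket is always nearly empty, rarely
wins.

    import java.util.List;

    /** Pick the incoming link to credit with a newly generated token. */
    class DeservingPolicy {
        static final class Bucket {
            int tokens;
            final int capacity;
            Bucket(int capacity) { this.capacity = capacity; }
        }

        static Bucket pickDeserving(List<Bucket> incoming) {
            Bucket best = null;
            for (Bucket b : incoming) {
                if (b.tokens >= b.capacity) continue;  // full: not eligible
                if (best == null || b.tokens > best.tokens) best = b;
            }
            return best; // null means every bucket is full: drop the token
        }
    }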

> So I'd like to suggest a third scheme, which is a 
> hybrid of the first two: queue requests that can't be forwarded 
> immediately, but don't generate tokens for requests that time out. 
> Instead, spontaneously generate a token once in a while to probe for 
> available capacity.
> 
> What do you reckon?

I'm not convinced that spontaneous generation of tokens is necessary
except on connection of a new node.
> 
> Michael
-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.