On Sunday 02 December 2007 00:25, Michael Rogers wrote:
> Matthew Toseland wrote:
> > We do. The link layer uses a protocol which behaves more or less
> > the same as TCP with regard to congestion control.
> 
> Maybe this has been fixed in 1084, but a couple of days ago my node was
> trying to send 107 KB/s through a 16 KB/s pipe for long enough to kill
> all the TCP connections sharing the pipe. I understand that was due to a
> bug at the request layer, but it shows that congestion control at the
> message layer currently isn't working: we're depending on the request
> layer to do both jobs.

Well... yes and no. It was a bug at the LINK layer iirc. Remember the 
pathetically low payload percentages?
> 
> > IMHO we would still need rejection, for several reasons:
> > 1. Reject due to loop.
> 
> Yeah, of course. :-)
> 
> > 2. Reject due to overload if something unexpected happens.
> 
> I'm not sure what you have in mind...
> 
> > 3. Reject due to overload as a normal part of the operation of the
> > node because if we just send one token to one node it will not
> > always send a request, so IMHO we have to send tokens to several
> > nodes and then reject some if we get more requests than we'd
> > expected.
> 
> If we still need to reject requests pre-emptively then what's the
> advantage of tokens over the current system? But on the other hand I see
> your point, it's possible that several peers will spend their tokens at
> once.

The point is that we can't let large numbers of tokens accumulate on a
peer: that state is very difficult to manage. Therefore we must send
short-lived tokens to multiple peers whenever there's an opportunity to
do a request, and reject the excess if more of them are spent than we
expected.
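
Roughly, in Java (a minimal sketch to make the idea concrete; the class
and every name in it are hypothetical, not actual Freenet code):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: offer short-lived tokens to several peers,
    // accept at most 'capacity' spent tokens, reject the excess.
    public class TokenIssuer {
        private final long tokenLifetimeMs;
        private int capacity; // requests we can actually accept
        private final Map<String, Long> offered = new HashMap<>();

        public TokenIssuer(int capacity, long tokenLifetimeMs) {
            this.capacity = capacity;
            this.tokenLifetimeMs = tokenLifetimeMs;
        }

        // Offer a short-lived token to each of the given peers.
        public synchronized void offerTokens(Iterable<String> peers) {
            long expiry = System.currentTimeMillis() + tokenLifetimeMs;
            for (String peer : peers)
                offered.put(peer, expiry);
        }

        // A peer spends its token. Returns false (reject with
        // overload) if the token expired, was never offered, or
        // several peers spent tokens at once and capacity ran out.
        public synchronized boolean spendToken(String peer) {
            Long expiry = offered.remove(peer);
            if (expiry == null || expiry < System.currentTimeMillis())
                return false;
            if (capacity <= 0)
                return false;
            capacity--;
            return true;
        }
    }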
> 
> How about this: instead of tokens, we give each peer an enforced delay
> between requests. If the load is light, we gradually decrease the delay.
> If the load is heavy, we increase the delay. Like token buckets, we can
> enforce fairness or other policies by giving different peers different
> delays.

We tried this in 0.5...
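
For reference, here's roughly what your scheme would look like (a
hypothetical sketch with made-up names and constants, not the actual
0.5 code):

    // Per-peer enforced delay: shrink gradually under light load,
    // grow under heavy load. A peer cannot save up delay: the clock
    // for the next request starts when the previous one is accepted,
    // so a long gap between A and B earns no credit towards C.
    public class PeerDelay {
        private long delayMs = 1000;  // current inter-request delay
        private long nextAllowed = 0; // earliest acceptable arrival

        // Returns true if a request arriving now should be accepted.
        public synchronized boolean tryAccept(long nowMs) {
            if (nowMs < nextAllowed)
                return false; // arrived early: reject with overload
            nextAllowed = nowMs + delayMs; // full delay from *now*
            return true;
        }

        // Light load: gradually decrease the delay.
        public synchronized void onLightLoad() {
            delayMs = Math.max(10, delayMs - 10);
        }

        // Heavy load: increase the delay.
        public synchronized void onHeavyLoad() {
            delayMs = Math.min(60_000, delayMs * 2);
        }
    }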
> 
> Unlike stop/start signals or tokens it smooths out bursts, because a
> peer can't save up delay: if there's a long delay between requests A and
> B, you still have to wait the full period between B and C.

I'm not talking about stop/start signals, nor am I talking about tokens in the 
sense that you use the word.
> 
> What happens if a request arrives early? 

If it arrives early, or if we've already accepted a different request,
and so on, we simply reject it with overload. It then remains queued on
the sender.
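
On the sender's side that just means the request stays at the head of
the queue until the next hop accepts it (hypothetical sketch, not our
actual classes):

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.Predicate;

    // If the next hop rejects a request with overload, it simply
    // stays queued here and is retried later; load backs up towards
    // the requester rather than being lost.
    public class SenderQueue {
        private final Deque<String> pending = new ArrayDeque<>();

        public synchronized void enqueue(String request) {
            pending.addLast(request);
        }

        // Try to forward the head-of-line request; 'send' returns
        // false to model a rejection with overload from the next hop.
        public synchronized void pump(Predicate<String> send) {
            String req = pending.peekFirst();
            if (req != null && send.test(req))
                pending.removeFirst(); // accepted downstream
            // else: rejected, request remains queued here
        }
    }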

> Maybe the peer's ignoring the 
> delay or maybe it was network jitter. Keep the peer's requests in a
> queue and process them at the proper time... if the queue gets more than
> about one second long, something's wrong - drop the connection and warn
> the user.

The proposal already queues the requests, but we need to control the rate at 
which requests enter the queue.
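
Something like this, combining the two (hypothetical sketch; the
one-second figure is yours, everything else is made up):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Requests are queued and served at the controlled rate, but
    // admission to the queue is itself bounded: if the backlog
    // represents more than about a second of work, something is
    // wrong and the caller should reject (or drop the connection).
    public class AdmissionQueue {
        private final Deque<String> queue = new ArrayDeque<>();
        private final long perRequestMs; // service time estimate
        private static final long MAX_BACKLOG_MS = 1000;

        public AdmissionQueue(long perRequestMs) {
            this.perRequestMs = perRequestMs;
        }

        // Admit only if the backlog stays under the threshold; false
        // means "reject with overload".
        public synchronized boolean offer(String request) {
            long backlogMs = (queue.size() + 1) * perRequestMs;
            if (backlogMs > MAX_BACKLOG_MS)
                return false;
            queue.addLast(request);
            return true;
        }

        public synchronized String poll() {
            return queue.pollFirst();
        }
    }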
> 
> What do you reckon?
> 
> Cheers,
> Michael