Should we impose an arbitrary upper limit on the number of requests
running at any one time? It might help to smooth over temporary resource
shortages. I believe mrogers' token passing algorithm includes an
element of this, but we don't need to wait for that to land before
implementing it.

If too many requests were already running, we would stop accepting
remote requests and stop starting local ones.
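
For concreteness, here's a minimal sketch in Java of what such a cap
could look like - the class name, method names and the limit are all
made up for illustration, not existing Freenet code:

    public class RequestLimiter {
        private final int maxRunningRequests;
        private int runningRequests = 0;

        public RequestLimiter(int maxRunningRequests) {
            this.maxRunningRequests = maxRunningRequests;
        }

        // Called both when a remote request arrives and before we
        // start a local one. Returns false when we are at the cap,
        // meaning: reject the remote request / defer the local one.
        public synchronized boolean tryStart() {
            if (runningRequests >= maxRunningRequests)
                return false;
            runningRequests++;
            return true;
        }

        // Called when a request completes, freeing its slot.
        public synchronized void onFinished() {
            runningRequests--;
        }
    }

The point is just a hard counter sitting above whatever token passing
eventually does; the cap could later be tuned by the feedback
mechanisms mentioned below rather than staying arbitrary.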

However, this may be over-engineering. We already have a variety of
overlapping feedback mechanisms, some of which don't seem to be
interacting properly - for example, pre-emptive bandwidth allocation
should keep bwlimitDelayTime down, but it doesn't appear to - or does
it?