The current bandwidth limiting mechanism seems to be causing serious
problems.  It works by throttling the transmission of data on a
byte-by-byte basis.  Unfortunately, this slows every transfer down, so
each one takes longer to complete, so more transfers are in flight at
any one time, which slows them all down even further - and the cycle
continues - with the entire Freenet network getting bogged down in a
web of extremely slow transfers.
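To make the feedback loop concrete, here's a rough back-of-the-envelope
sketch (the numbers are made up, and this is just Little's Law:
concurrent transfers = arrival rate x mean transfer time, nothing
Freenet-specific):

```python
def concurrent_transfers(arrival_rate, transfer_size, per_transfer_speed):
    """Steady-state number of simultaneous transfers (Little's Law)."""
    transfer_time = transfer_size / per_transfer_speed  # seconds each
    return arrival_rate * transfer_time

# 2 new requests/sec, 1 MB per request (illustrative figures):
fast = concurrent_transfers(2.0, 1_000_000, 100_000)  # 100 kB/s each
slow = concurrent_transfers(2.0, 1_000_000, 10_000)   # throttled to 10 kB/s
print(fast, slow)  # 20.0 vs 200.0 concurrent transfers
```

Throttle each transfer to a tenth of the speed and you end up juggling
ten times as many of them at once - which is exactly the pile-up we're
seeing.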

The alternative is for a node to maximize per-transfer connection
speed by rejecting new requests when its upstream connection is
saturated.  Some claim that this is a terrible idea and will screw up
routing, because it will become impossible to get a node to accept a
DataRequest, but I disagree.
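Here's a minimal sketch of the admission policy I have in mind - this
is not Freenet's actual code, and the capacity figure and class names
are invented for illustration:

```python
UPSTREAM_CAPACITY = 128_000  # bytes/sec we're willing to send (assumed)

class Node:
    """Toy node that refuses work it can't serve at full speed."""

    def __init__(self, capacity=UPSTREAM_CAPACITY):
        self.capacity = capacity
        self.active = []  # bytes/sec committed to each running transfer

    def current_usage(self):
        return sum(self.active)

    def handle_request(self, estimated_rate):
        """Accept only if the transfer fits in the remaining upstream."""
        if self.current_usage() + estimated_rate > self.capacity:
            return "REJECTED"  # upstream maxed out - try another node
        self.active.append(estimated_rate)
        return "ACCEPTED"

node = Node()
print(node.handle_request(100_000))  # ACCEPTED
print(node.handle_request(100_000))  # REJECTED - would oversubscribe
```

The rejected requester then routes to another node, just as it already
does when a node is unreachable.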

Imagine you go to McDonalds and ask a server for some food, and they 
take your order.  What you didn't know is that the server is actually 
serving 20 other people at the same time, so it takes you ages to get 
your food.  Wouldn't it be better if that server said "Sorry sir, I'm 
really busy - please try another server"?

In short, by making a node service its existing transfers as quickly 
as it can, it gets them out of the way faster, and can thus serve just 
as many requests as a node which accepts every request but takes ages 
to serve each individual one.
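A quick sanity check of that claim (again with made-up figures): with
a fixed upstream pipe, the number of requests completed per second is
the same whether you run 2 transfers fast or 20 transfers slowly - the
only thing that changes is how long each requester waits:

```python
TOTAL_UPSTREAM = 200_000   # bytes/sec the node can send (assumed)
FILE_SIZE = 1_000_000      # bytes per request (assumed)

def requests_per_sec(concurrent):
    """Completions per second with the pipe split evenly."""
    per_transfer_speed = TOTAL_UPSTREAM / concurrent
    time_per_transfer = FILE_SIZE / per_transfer_speed
    return concurrent / time_per_transfer

print(requests_per_sec(2), requests_per_sec(20))  # 0.2 and 0.2
```

Same aggregate throughput either way, but at 2 concurrent transfers
each request finishes in 10 seconds instead of 100.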

Thoughts?

Ian.

-- 
Ian Clarke                                                  [EMAIL PROTECTED]
Coordinator, The Freenet Project              http://freenetproject.org/
Weblog                               http://slashdot.org/~sanity/journal
