On Monday 16 June 2003 07:37 am, Toad wrote:
> No, that information is not available to routing. If large chunks are a
> problem we should impose a limit - we do not want only a few nodes to
> cache a really large chunk.

Yeah, I agree with you there. But would it be possible to do the following:
Suppose larger chunks are, in general, accessed less frequently. (I'd like to 
see some hard evidence of that, but bear with me...) Then when a node receives 
an insert request for a piece of data, it looks at its routing table and 
finds a few nodes that it thinks are fairly good candidates for the data. 
Then it picks one of them, but in its decision it weights its perception of 
that node's available bandwidth. If it thinks that a node has a slow upload 
rate and it is a large datafile, then that node is more likely to be 
forwarded the data. This is because we are assuming large data is less 
popular, so it wouldn't need to be uploaded as often. (I.e.: a node with 1000 
1KB chunks in its data store, each uploaded an average of 10 times, handles a 
LOT more requests than a node with one 1MB chunk in its data store that is 
uploaded 9 times, for roughly the same volume of data.)
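A minimal sketch of the weighting idea, purely for illustration. The node
names, the bandwidth estimates, the "large chunk" threshold, and the weighting
formula are all assumptions of mine, not anything in the actual routing code:

```python
import random

def pick_target(candidates, chunk_size_kb):
    """candidates: list of (node_id, closeness, upload_kbps) tuples.

    For large chunks, bias selection toward slower nodes, on the
    assumption that large data is requested less often; for small
    chunks, prefer faster nodes as usual.
    """
    weights = []
    for node_id, closeness, upload_kbps in candidates:
        # Base weight: how good a routing candidate the node looks.
        w = closeness
        if chunk_size_kb >= 256:  # "large" cutoff is an assumption
            # Slow uploaders get a higher weight for big chunks.
            w *= 1.0 / (1.0 + upload_kbps / 100.0)
        else:
            # Fast uploaders get a higher weight for small chunks.
            w *= 1.0 + upload_kbps / 100.0
        weights.append(w)
    # Weighted random choice keeps routing probabilistic rather than
    # always dumping every large chunk on the slowest node.
    return random.choices(candidates, weights=weights, k=1)[0]

nodes = [("fast", 0.9, 512.0), ("slow", 0.8, 32.0)]
target = pick_target(nodes, chunk_size_kb=1024)
```

With these numbers the slow node ends up with most of the weight for a 1MB
chunk, but the fast node can still be chosen occasionally.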

The number of nodes that have a particular piece of data stays the same. The 
large pieces of data from split files would simply end up more often on slow 
connections. But this doesn't hurt the overall download rate, as the 
requester can spawn more threads. It also means that the low-bandwidth users 
would on average receive fewer requests but still use all of their available 
store space.

Is there any reason that we couldn't/shouldn't make the data size available to 
the requesting node?
_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl