On February 27, 2003 11:39 pm, Ian Clarke wrote:
> I am somewhat concerned that the datastore data size limit of 1/200th of
> the total datastore size is less than an optimal solution.
>
> Current behavior is that if someone decides to have a datastore of less
> than 200MB (or is it 256?), then even 1MB chunks of data won't be cached
> on that node, although the user will still be able to download such
> data.
>
> Let's think about that: any user who initially decides to give Freenet
> less than 200MB will automatically start leeching larger chunks of data
> without storing them locally.  Someone who opted (as I do) to devote
> 50MB to Freenet will not cache anything larger than about 250k, even
> though I will be able to get such data from other users in the network
> who do.
>
> I see this as a problem: it doesn't really disadvantage those who opt
> for smaller datastores, yet it ensures that the network as a whole is
> significantly disadvantaged by such users.
>
> This can't be the only way to do this.  I dislike choosing arbitrary
> limits where they can be avoided, but if we must have one, I think
> setting a fixed maximum size on data which a datastore will cache (say
> 1MB) would be better than setting a variable limit based on the
> somewhat arbitrary 1/200th ratio that we currently employ.
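For concreteness, the two policies being compared can be sketched as follows. This is an illustrative model only, not Freenet's actual code; the function names and the flat 1MB figure are assumptions taken from the proposal above.

```python
# Hypothetical sketch of the two caching policies discussed above
# (not Freenet's implementation).

RATIO = 200          # current rule: largest cacheable item = store size / 200
FIXED_CAP = 1 << 20  # proposed rule: a flat 1MB ceiling

def max_cacheable_ratio(store_bytes):
    """Largest item this node will cache under the 1/200th rule."""
    return store_bytes // RATIO

def max_cacheable_fixed(store_bytes):
    """Largest item under a fixed cap (never more than the store itself)."""
    return min(FIXED_CAP, store_bytes)

MB = 1 << 20
for store in (50 * MB, 200 * MB, 1024 * MB):
    print(f"{store // MB:>5} MB store: "
          f"ratio rule caps at {max_cacheable_ratio(store) // 1024} KB, "
          f"fixed rule caps at {max_cacheable_fixed(store) // 1024} KB")
```

Under the ratio rule a 50MB store refuses anything over 256KB, while under the fixed cap the same node would still cache full 1MB chunks.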

I would rather see a larger minimum datastore size.

Ed Tomlinson


_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl
