How hard would it be to split the storage, so that a node _could_ have 
several stores? Then the limit could be avoided to some degree by simply 
adding a new store file, and people could also spread the datastore 
across several disks without RAID.
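A minimal sketch of that split, assuming blocks are addressed by a single global block number divided across fixed-size store files (the class and method names here are hypothetical, not Freenet code):

```java
// Hypothetical sketch: map a global block number onto one of several
// store files, so no single file ever needs more than Integer.MAX_VALUE
// blocks even though the node's total store can be larger.
public class SplitStore {
    private final long blocksPerStore;

    public SplitStore(long blocksPerStore) {
        this.blocksPerStore = blocksPerStore;
    }

    // Which store file holds this block?
    public int storeIndex(long globalBlock) {
        return (int) (globalBlock / blocksPerStore);
    }

    // Block offset within that store file; fits in an int as long as
    // blocksPerStore <= Integer.MAX_VALUE.
    public int localBlock(long globalBlock) {
        return (int) (globalBlock % blocksPerStore);
    }

    public static void main(String[] args) {
        SplitStore s = new SplitStore(1000L);
        System.out.println(s.storeIndex(2500L)); // prints 2
        System.out.println(s.localBlock(2500L)); // prints 500
    }
}
```

New store files could then be added by raising the store-file count, without rewriting existing files.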

// Dennis (cyberdo)

On Thu, 23 Feb 2006, Jusa Saari wrote:

> On Sat, 18 Feb 2006 01:28:41 +0000, Matthew Toseland wrote:
>
>> Is it worth breaking backwards compatibility for the 0.7 datastore (with
>> prior builds of 0.7) to fix an inherent 64TB limit?
>>
>> The code uses an int for offsets into files, which is easily fixed.
>> However it also uses, on disk, an int for block numbers. This means
>> datastores are limited to 2G * 32K = 64TB. Normally I wouldn't regard this
>> as a big problem, but since we are in pre-alpha, and since there isn't
>> that much content, I'm inclined to make the change...
>
> If you switch to using longs, you will probably lose more disk space from
> having to store an extra 32 bits per chunk than you gain, since it is
> unlikely that anyone has 64TB of disk space, or would donate it to Freenet
> even if they did...
>
> Come to think of it: would it be possible to use PostgreSQL to store the
> datastore, perhaps via a plugin? Would it offer speed advantages?
>
> _______________________________________________
> Tech mailing list
> Tech at freenetproject.org
> http://emu.freenetproject.org/cgi-bin/mailman/listinfo/tech
>
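For reference, Toseland's 64TB figure works out directly from the on-disk format he describes: a signed 32-bit block number gives 2^31 possible blocks of 32KB each.

```java
// Arithmetic behind the 64TB ceiling: 2G block numbers * 32KB per block.
public class StoreLimit {
    public static long maxStoreBytes() {
        long maxBlocks = 1L << 31;    // positive range of a 32-bit int, ~2G
        long blockSize = 32 * 1024;   // 32KB per block
        return maxBlocks * blockSize; // 2^46 bytes = 64TB
    }

    public static void main(String[] args) {
        System.out.println(maxStoreBytes()); // prints 70368744177664
    }
}
```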
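Saari's overhead concern can also be put in numbers. Assuming one index entry per 32KB chunk (an assumption about the store layout, not something stated in the thread), widening the block number from int to long costs 4 extra bytes per chunk:

```java
// Rough per-chunk cost of widening the on-disk block index from int to
// long, assuming one index entry per 32KB data chunk (hypothetical layout).
public class IndexOverhead {
    public static double overheadFraction() {
        double extraBytesPerChunk = 8 - 4; // sizeof(long) - sizeof(int)
        double chunkSize = 32 * 1024;      // 32KB of data per chunk
        return extraBytesPerChunk / chunkSize; // 4/32768, roughly 0.012%
    }
}
```

Under that assumption the space cost is a small fraction of the store, so the trade-off is mainly about compatibility rather than disk usage.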
