If I store a 1MB file in Tahoe, how much total storage (ballpark) does 
it use summed across all nodes?

I'm reading the "expansion factor" section of the architecture.txt file 
and it says:

> In general, small private grids should work well, but the participants will
> have to decide between storage overhead and reliability. Large stable grids
> will be able to reduce the expansion factor down to a bare minimum while
> still retaining high reliability, but large unstable grids (where nodes are
> coming and going very quickly) may require more repair/verification bandwidth
> than actual upload/download traffic.
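
If I'm reading that right, the expansion factor for k-of-N erasure
coding is just N/k: each share holds about 1/k of the file, and N
shares get stored. Here's my quick sanity check in Python, assuming
what I believe are the default encoding parameters (3-of-10, i.e.
shares.needed=3 and shares.total=10 in tahoe.cfg) and ignoring
per-share metadata:

    # Storage used grid-wide by one file under k-of-N erasure coding.
    # k=3, n=10 are (I believe) Tahoe's defaults; small per-share
    # metadata overhead is ignored here.
    k, n = 3, 10
    file_mb = 1.0
    share_mb = file_mb / k       # each share holds ~1/k of the file
    total_mb = share_mb * n      # n shares, ideally one per server
    print(f"{total_mb:.2f} MB")  # ~3.33 MB summed across all nodes

So my 1MB file would occupy about 3.3MB across the grid at the
defaults; please correct me if I've misread the docs.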

What's a reasonable estimate of the total storage capacity of a Tahoe 
grid across 100 servers, each devoting 10GB of storage?  The total disk 
capacity would be 1000GB, but with encoding and replication, does that 
mean we could store 999GB of data?  Or 750?  Or 500?  Less?
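
By the same reasoning, usable capacity should be roughly the raw
capacity times k/N. A sketch over a few plausible parameter choices
(again assuming the N/k model and ignoring metadata and lease
overhead):

    # Usable capacity of a 100-server x 10GB grid under k-of-N coding.
    raw_gb = 100 * 10
    for k, n in [(3, 10), (7, 10), (1, 4)]:
        expansion = n / k
        print(f"{k}-of-{n}: {expansion:.2f}x expansion, "
              f"~{raw_gb / expansion:.0f} GB usable")
    # 3-of-10 (default): ~300 GB; 7-of-10: ~700 GB; 1-of-4: ~250 GB

Is that the right way to think about it?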

Similarly, what are reasonable configuration values for this, assuming 
each server has >90% uptime and the 100 servers are split between 4 
clusters of 25?
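
To make "reasonable" concrete: if placement puts one share per server
and server failures are independent (a big assumption, especially with
correlated failures inside a cluster), a file stays retrievable as long
as at least k of its N servers are up, which is a binomial tail. My
rough numbers at 90% uptime:

    # P(file retrievable) = P(at least k of the n share-holding servers
    # are up), assuming one share per server and independent failures.
    from math import comb

    def availability(k, n, p_up):
        return sum(comb(n, i) * p_up**i * (1 - p_up)**(n - i)
                   for i in range(k, n + 1))

    for k, n in [(3, 10), (7, 10), (1, 4)]:
        print(f"{k}-of-{n}: {availability(k, n, 0.90):.7f}")
    # 3-of-10: ~0.9999996; 7-of-10: ~0.9872; 1-of-4: ~0.9999

If that math holds, the default 3-of-10 looks very safe against
transient downtime, while a lean 7-of-10 buys capacity at a noticeable
availability cost, which at least frames the overhead/reliability
tradeoff the docs describe.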

(Can we configure Tahoe to avoid storing shares on servers inside the 
same cluster?)

Just trying to wrap my head around Tahoe's capabilities while 
considering its utility for harvesting extra space within a large 
server cluster.

Thanks!

-david

