Jan-Benedict Glaw <[email protected]> writes:
> However, with disks that large, you also hit statistical effects,
> which make it even harder to put those into RAIDs. Consider a 2TB
> filesystem on a 2TB disk. Sooner or later, you will face read (or
> even write) errors, which will easily result in a r/o filesystem.
> For reading other shares, that's not much of a problem. But you're
> also instantly losing a huge *writeable* area.
>
> So with disks that large, do you use a small number of large
> partitions/filesystems (or even only one), or do you cut it down to,
> say, 10 filesystems of 200GB each, starting a separate tahoe node for
> each filesystem? Or do you link the individual filesystems into the
> storage directory?
Or do you use a filesystem that can avoid bad blocks natively, and then maybe run tahoe on top of that? I would think ZFS can do this, but I don't actually know.
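
If you go the "link the individual filesystems into the storage directory" route, below is a rough, untested sketch of what that could look like: it pre-creates share prefix directories on each mounted filesystem and symlinks them under storage/shares/. The mount points are made up for the example, and the two-character lowercase base32 prefix layout is my assumption about how the node names its share directories, so check a real storage/shares/ before relying on it.

#!/usr/bin/env python
# Rough sketch, untested: pre-create share prefix directories on several
# mounted filesystems and symlink them into storage/shares/, so one node
# spans multiple disks.  The mount points below are made up, and the
# two-character lowercase base32 prefix layout is an assumption, not
# something I've verified against the Tahoe source.
import itertools
import os

NODE_SHARES = "/var/tahoe/node/storage/shares"        # hypothetical node dir
MOUNTS = ["/mnt/disk0", "/mnt/disk1", "/mnt/disk2"]   # hypothetical filesystems
ALPHABET = "abcdefghijklmnopqrstuvwxyz234567"         # assumed prefix alphabet

prefixes = ["".join(p) for p in itertools.product(ALPHABET, repeat=2)]

for i, prefix in enumerate(prefixes):
    # Round-robin the prefixes across the available filesystems.
    target = os.path.join(MOUNTS[i % len(MOUNTS)], "tahoe-shares", prefix)
    link = os.path.join(NODE_SHARES, prefix)
    if not os.path.isdir(target):
        os.makedirs(target)
    if not os.path.lexists(link):
        os.symlink(target, link)

That way a single read-only or full filesystem only takes out the prefixes that live on it, instead of the whole storage directory, though you'd still want the node's reserved_space setting to reflect the smallest disk.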
