>>> On Sat, 28 Apr 2007 20:42:25 +0100,
>>> [EMAIL PROTECTED] (Peter Grandi) said:

>>> On Fri, 27 Apr 2007 14:13:56 -0600, Andreas Dilger
>>> <[EMAIL PROTECTED]> said:

pg> [ ... 'fsck' times ... ] My main reason to look at Lustre is
pg> not to take advantage of the cluster based parallelism, but
pg> to have 6x2TB OSTs on the same machine and hope that if
pg> there are active updates to only one then only one needs
pg> 'fsck'ing. Basically my main reason is to reduce post-crash
pg> service unavailability due to 'fsck'.

Just noticed a recent thread in 'comp.arch.storage' that
demonstrates the same worry, only on a much, much bigger scale:

  http://groups.google.com/group/comp.arch.storage/browse_thread/thread/d6f10ec24c07ed53/ecd3a745fbf561e6

   «The system's storage is based on code which writes many files
    to the file system, with overall storage needs currently around
    40TB and expected to reach hundreds of TBs. The average file
    size of the system is ~100K, which translates to ~500 million
    files today, and billions of files in the future.»

   «We're looking for an alternative solution, in an attempt to
    improve performance and ability to recover from disasters
    (fsck on 2^42 files isn't practical, and I'm getting pretty
    worried due to this fact - even the smallest filesystem
    inconsistency will leave me lots of useless bits).»
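The file counts quoted above follow directly from the stated
figures; a quick back-of-the-envelope check (assuming decimal TB
and the ~100K average file size from the post; the 40TB/400TB
sizes are the poster's numbers, not measurements):

```python
# Sanity-check the file counts implied by the quoted figures.
TB = 10**12                  # decimal terabyte
avg_file_size = 100 * 10**3  # ~100K average file size (from the post)

# 40TB today -> order of 10^8 files, consistent with "~500 million"
files_today = 40 * TB // avg_file_size
print(f"~{files_today // 10**6} million files at 40TB")   # ~400 million

# "hundreds of TBs" -> billions of files, as the poster says
files_future = 400 * TB // avg_file_size
print(f"~{files_future // 10**9} billion files at 400TB")  # ~4 billion
```

So even the conservative reading of those numbers puts any
whole-filesystem 'fsck' well out of practical reach.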

Ehehehe, «pretty worried» :-). From the numbers I have seen, a
Lustre cluster might support that kind of requirement, even if
the recovery time might be several days. I hope.

_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss