If you are seeing this problem, it means you are using the ext3-based ldiskfs. 
Go back to the download site and get the lustre-ldiskfs and lustre-modules RPMs 
with ext4 in their names. 
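
A quick way to check which flavor you currently have installed (just a sketch, 
assuming an RPM-based system with the rpm tool available; exact package names 
vary by distro and Lustre version):

# Sketch: list installed Lustre/ldiskfs packages and flag whether the
# ext4-based flavor is present. Package name patterns are assumptions.
import subprocess

out = subprocess.run(["rpm", "-qa"], capture_output=True, text=True,
                     check=True).stdout
pkgs = [p for p in out.splitlines() if "lustre" in p or "ldiskfs" in p]
for pkg in pkgs:
    print(pkg)
if not any("ext4" in pkg for pkg in pkgs):
    print("No ext4-flavored packages found -- likely still on ext3-based ldiskfs.")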

That is the code that was tested with LUNs over 8TB. We kept the two flavors 
separate for some time to reduce risk for users who did not need larger LUN 
sizes. The ext4-based code is the default in the recent Whamcloud 1.8.6 release. 

Cheers, Andreas

On 2011-07-14, at 11:15 AM, Theodore Omtzigt <t...@stillwater-sc.com> wrote:

> I configured a Lustre file system on a collection of storage servers 
> that have 12TB raw devices. I configured a combined MGS/MDS with the 
> default configuration. On the OSTs, however, I added force_over_8tb to 
> the mountfsoptions.
> 
> Two-part question:
> 1. Do I need to set that parameter on the MGS/MDS server as well?
> 2. If yes, how do I properly add this parameter to this running Lustre 
> file system (100TB across 9 storage servers)?
> 
> I can't resolve the ambiguity in the documentation, as I can't find a 
> good explanation of the configuration log mechanism referenced in the 
> man pages. Since the documentation for --writeconf states "This is very 
> dangerous", I am hesitant to pull the trigger: there is 60TB of data on 
> this file system that I'd rather not lose.
_______________________________________________
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
