On Fri, May 23, 2008 at 8:01 AM, John Summerfield <
[EMAIL PROTECTED]> wrote:

> Matthias Saou wrote:
>
>> Sandor W. Sklar wrote :
>>
>>> (Not to hijack this subject, but I wish there was some resource or
>>> community that had experience in this area.  I'm liberal in politics, but
>>> uber-conservative at work.  I don't want to be the "first" to do something
>>> weird, like 8 TB filesystems.  I want to learn from the pain and experience
>>> of others.  :-)
>>>
>>
>> I've got a few production 12+ TB filesystems which are working just
>> great. I'm using RHEL5 with XFS. The custom dkms-xfs package and rebuilt
>> xfsprogs I use can be found here :
>>
>> http://ftp.freshrpms.net/pub/freshrpms/redhat/testing/EL5/xfs/
>>
>> Note that you should install the proper kernel devel package for your
>> system for the XFS module to be able to rebuild.
>>
>> Note also that this is completely unsupported by Red Hat or by me ;-)
>>
>> Matthias
>>
>
> Before I used a filesystem not officially supported by RH, I would
> clarify with RH what it does to our support agreement.
>
> In Red Hat's shoes, I might well say, "Go away or pay lots more dollars."
> I'd rather use CentOS than pay money and find there's no support when I need
> it.
>
> As to the size of filesystems, I've not tried it myself, but I saw one
> report of unacceptable e2fsck times on a filesystem less than one TB (i.e.,
> one disk) in size. It's something I intend to try; I don't know whether it's
> the software or the user that's broken.
>
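If we do end up testing Matthias' XFS packages, I assume the install would
look roughly like the following (untested on my end; the package names are
guessed from the freshrpms URL above and the device path is made up):

  # matching kernel-devel so DKMS can rebuild the xfs module for this kernel;
  # dkms itself may need to come from a third-party repo on RHEL5
  yum install kernel-devel-$(uname -r) dkms
  rpm -ivh dkms-xfs-*.rpm xfsprogs-*.rpm   # the rebuilt packages from freshrpms
  modprobe xfs                             # the module should be built by now
  mkfs.xfs /dev/sdb1                       # made-up device, just for illustration
  mount -t xfs /dev/sdb1 /backup
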
Thanks to everyone for their replies.  To answer someone's point about
backups, this is for a disk-based backup system. :)  It can replicate at the
application level to other systems.  I also agree that I want a filesystem
that Red Hat will support.  GFS2 is a possibility; I just have no experience
with it.  On the surface, I am not sure it was designed to be the home of
over 150 million files rather than a smaller number of large files
(databases).  To someone else's point, filesystem checking, journaling, etc.
are all greatly affected by file counts that large.
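
To make that concern concrete on the ext3 side, the sort of sanity check I
have in mind is just this (LV and mount point names are made up):

  # ext3 sizes its inode table at mkfs time, so with ~150 million files it
  # is worth confirming the inode count rather than trusting the defaults
  mkfs.ext3 -i 16384 /dev/vg_backup/backup01   # one inode per 16 KB of space
  tune2fs -l /dev/vg_backup/backup01 | grep -i 'inode count'  # want > 150M
  df -i /backup/01                             # once mounted, watch inode usage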

I am going to hammer out with the software vendor why they require a single
volume.  I have my doubts about this requirement.  If I understand it
correctly, it centers on DR/restore flexibility, which doesn't necessarily
apply given that this is a replicated system.  I would really just prefer to
do multiple 4TB ext3 volumes on top of LVM.  :)
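
For what it's worth, the layout I have in mind is nothing more exotic than
this (devices, volume group, and names are all made up for illustration):

  # several smaller ext3 filesystems instead of one huge one, so any single
  # fsck or restore only has to deal with a 4 TB chunk at a time
  pvcreate /dev/sdb /dev/sdc
  vgcreate vg_backup /dev/sdb /dev/sdc
  for n in 01 02 03 04; do
      lvcreate -L 4T -n backup$n vg_backup
      mkfs.ext3 -L backup$n /dev/vg_backup/backup$n
      mkdir -p /backup/$n
      mount /dev/vg_backup/backup$n /backup/$n
  done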

Just to throw out another thought on fsck and performance, do the user-space
tools for GFS2 behave similarly to those of ext2/3, in that fsck/healing can
only happen while the volume is unmounted?  Is there a way to tune this, like
with tune2fs?
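
On the ext3 side, the tuning I mean is just the usual tune2fs knobs below
(same made-up LV name as above); whether fsck.gfs2 has anything comparable
is exactly what I am wondering:

  tune2fs -l /dev/vg_backup/backup01          # show current settings
  # -c 0 / -i 0 disable the periodic forced fsck at boot time
  tune2fs -c 0 -i 0 /dev/vg_backup/backup01
  # either way, an actual e2fsck repair still wants the filesystem unmounted
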
_______________________________________________
rhelv5-list mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/rhelv5-list
