I believe he meant that people usually refrain from using multiple filesystem
types on one OS, but it is actually better to tailor the FS layout to the
different applications accessing it. One can expect different usage patterns
on /var than on /usr, for example, and the filesystem choice should reflect this.

GFS2 is ready for production and, as mentioned, it can exceed ext3 performance,
so it is a good choice. However, if an application does lots of metadata
operations, the associated locking traffic must be exchanged between the nodes
via Ethernet, which causes some (well, quite some) overhead. Faster Ethernet
helps, but it does not solve everything.
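
One common way to cut that overhead down is to mount GFS2 with noatime (and
nodiratime), so plain reads do not generate inode timestamp updates that have
to be coordinated across the cluster. A minimal sketch, with a made-up device
and mount point:

    # atime updates are cluster-wide locking operations on GFS2, so
    # disabling them noticeably reduces inter-node traffic.
    # /dev/vg_san/lv_shared and /shared are placeholders.
    mount -t gfs2 -o noatime,nodiratime /dev/vg_san/lv_shared /shared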

However, SAN performance with a good disk backend exceeds Ethernet performance
by far (unless we are talking about 10GbE, which is costly too).

CacheFS is another alternative, but for huge shared filesystems where you need
the best performance, GFS2 (or OCFS2) is the better choice. With RHCS you can
even use CLVM and stripe the GFS2 volume over multiple LUNs, increasing
performance further.
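
As a rough sketch (volume group, LUN count, size, cluster name and journal
count below are all placeholders for your environment):

    # Assumes a clustered VG "vg_san" built from 4 SAN LUNs (PVs) and a
    # five-node cluster named "mycluster".
    lvcreate -i 4 -I 64 -L 500G -n lv_gfs2 vg_san    # stripe over 4 PVs, 64 KiB stripe size
    mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 5 /dev/vg_san/lv_gfs2    # one journal per node

If the cluster grows later, journals can be added with gfs2_jadd.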

It all depends on what the application is doing. If you want mostly-read access
and the FS is not too big (TBs of storage), CacheFS is probably the best
solution.
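
If that is the route you take, the client side is roughly this (package
availability on your release and the export path are assumptions on my part):

    # FS-Cache-backed NFS client: reads get cached on local disk.
    yum install cachefilesd
    service cachefilesd start
    # "fsc" enables FS-Cache for the mount; server and export are examples.
    mount -t nfs -o ro,fsc nfs-server:/export/app /app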

Regards,
Morgan


-----Original Message-----
From: [email protected] [mailto:[email protected]] On 
Behalf Of Win Htin
Sent: Thursday, November 04, 2010 7:06 PM
To: [email protected]
Subject: Re: [rhelv5-list] GFS in production environment

Folks,

Thanks for the various suggestions and inputs. I guess I still need to
figure out a few things before finalizing my new design. My answers
follow below.

> you could just install your application via RPMs?
ANSWER: Not possible, because the app is provided by a third-party
vendor in tgz format and the actual app install is done by the app
guys.

> You can also try OCFS2... that is another clustered FS.
ANSWER: Will look into this. Thanks.

> If one is going to look at DRBD, then I would strongly consider GFS2 instead,
> for NFS failover, especially for supportability.  And if you're looking there,
> then just using GFS2 itself is now a re-consideration (without NFS).
ANSWER: Due to the nature of my HW (Blades), DRBD seems a bit complicated.

> GFS2 aggregate data transfer can exceed Ext3 performance, if deployed
> correctly and depending on the SAN topology, leveraging its multiple nodes.
> If GFS2 gets bogged down in meta-data operations and RHCS multicast traffic
> over the Ethernet, that's where most have issues.
>
> I've run into a few cases where binaries are written quite improperly and
> lock files in their respective trees, causing wholly unnecessary overhead
> between nodes.  That's why I'm a fan of read-only NFS for sharing binaries,
> avoiding the issues of locking.  It's compounded by the fact that people use
> the same GFS2 file system for all sorts of other operations (data, temporary,
> etc...) and don't consider their actual usage of the file systems.

> P.S.  There's no law that one has to use GFS2 or NFS, but not both, on a
> system.  I've seen an allergy to doing so for a reason I still don't
> understand.  Same deal with Ext3, Ext4 and/or XFS on the same system; one can
> have multiple file systems in use.
ANSWER: Bryan, do you mean it is not a good idea to have both NFS and
GFS2 running at the same time? E.g. the /app partition mounted as a
GFS2 file system and /home through NFS? Is it better to go with GFS2
for both /app and /home?
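
For what it's worth, the mixed setup I am picturing is just two ordinary
fstab entries side by side (device, server and export names are made up; on
RHEL 5 the gfs2 init script mounts the GFS2 entry once the cluster is up):

    # /etc/fstab -- GFS2 for the shared app tree, plain NFS for home dirs
    /dev/vg_san/lv_app       /app   gfs2  defaults,noatime  0 0
    nfs-server:/export/home  /home  nfs   defaults          0 0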

A rather crazy question, but if the consensus is that GFS2 is not up
to snuff for production, what is it good for?

I currently have a shared disk group on the SAN and, out of my N+1
servers, N of them mount the partition read-only and the remaining
server mounts it read-write. Any time updates are required, they are
done through that server.
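
Concretely, it is just the same LUN mounted with different options on the two
classes of nodes (device and mount point below are placeholders):

    # On the N read-only nodes:
    mount -o ro /dev/mapper/san-appvol /app
    # On the single update node:
    mount -o rw /dev/mapper/san-appvol /app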

Cheers,
Win

_______________________________________________
rhelv5-list mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/rhelv5-list
