If the resource is mounted on only one node at a time, you can use ext4,
XFS, or any ordinary local filesystem.
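For that single-node case, a Pacemaker-managed mount might look like this
in crmsh (a minimal sketch; the resource name, device path, and mount
point below are placeholders for your SAN LUN, not anything from your setup):

  primitive fs_www ocf:heartbeat:Filesystem \
      params device="/dev/disk/by-id/YOUR-SAN-LUN-part1" \
             directory="/srv/www" fstype="ext4" \
      op monitor interval="20s" timeout="40s"

Pacemaker then guarantees the volume is mounted on at most one node; tie it
to the service that uses it with a group or colocation/order constraints.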
If it is mounted on multiple nodes simultaneously, you must use OCFS2,
GFS2, or another cluster filesystem.
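For the multi-node case, here is a sketch of an OCFS2 stack in crmsh. The
DLM is required, and on some distributions the o2cb agent as well; the
device and directory names are again placeholders. Note that a cluster
filesystem also requires working fencing (STONITH):

  primitive dlm ocf:pacemaker:controld \
      op monitor interval="60s" timeout="60s"
  primitive o2cb ocf:ocfs2:o2cb \
      op monitor interval="60s" timeout="60s"
  primitive fs_shared ocf:heartbeat:Filesystem \
      params device="/dev/disk/by-id/YOUR-SAN-LUN-part2" \
             directory="/srv/shared" fstype="ocfs2" \
      op monitor interval="20s" timeout="40s"
  group g_ocfs2 dlm o2cb fs_shared
  clone cl_ocfs2 g_ocfs2 meta interleave="true"

The clone runs the whole group on every node, so the filesystem is mounted
cluster-wide on all of them at once.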
On 20/04/15 at 14:29, Lentes, Bernd wrote:
Hi,
we'd like to create a two-node cluster for our services (web, database, virtual
machines). We will have two servers and a shared Fibre Channel SAN. What would
you do, e.g., with the content of the web pages we offer? Put them on the SAN so
we don't need to synchronize them between the two nodes? Also the database and
the VMs on the SAN? Which filesystem would you recommend for the SAN volumes?
OCFS2? Can I mount the same volume on each node simultaneously? Or do I have to
use OCFS2 as a resource managed by Pacemaker, so that the volume is only
mounted where it is necessary?
Thanks for any hint.
Bernd
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
Helmholtz Zentrum München
bernd.len...@helmholtz-muenchen.de
phone: +49 89 3187 1241
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg
Je suis Charlie
Helmholtz Zentrum München
Deutsches Forschungszentrum für Gesundheit und Umwelt (GmbH)
Ingolstädter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Bärbel Brumme-Bothe
Geschäftsführer: Prof. Dr. Günther Wess, Dr. Nikolaus Blum, Dr. Alfons Enhsen
Registergericht: Amtsgericht München HRB 6466
USt-IdNr: DE 129521671
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org