Each volume has a different dlm domain. Only nodes that mount that volume join that domain.
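You can see the per-volume domains on a node directly. A sketch, assuming the default o2cb cluster stack; the debugfs path is how o2dlm exposes its domains on recent kernels, and may differ on your distro:

```shell
# o2dlm names each domain after the volume UUID; one domain exists
# per mounted ocfs2 volume, and only on the nodes that mounted it.
mount -t debugfs debugfs /sys/kernel/debug 2>/dev/null

# List the dlm domains this node has joined -- one directory per
# mounted volume. Unmounting the volume leaves the domain.
ls /sys/kernel/debug/o2dlm/

# Match domain names back to devices/labels via the volume UUIDs:
mounted.ocfs2 -d
```

So splitting one big filesystem into three volumes does give you three independent lock domains, each involving only its own set of mounting nodes.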
Søren Kröger wrote:
> Hi Sunil
>
> Will the mastery involve all nodes in the cluster, regardless of which
> filesystem they have mounted? Or will it only involve those nodes
> which have mounted the particular filesystem?
>
> Søren
>
> On 28/04/2009, at 22.21, Sunil Mushran <sunil.mush...@oracle.com> wrote:
>
>> Søren Kröger wrote:
>>> I'm trying to split up our big OCFS2 filesystem into 3 separate
>>> LUNs, since only a limited number of nodes need access to the
>>> different parts of the OCFS2 filesystem.
>>> One "master" server with RW access should still be able to mount all
>>> 3 OCFS2 LUNs; all the others would mount only one of the 3 LUNs in
>>> RO mode.
>>
>> In a distributed lock manager, no one node is the master node for
>> all resources. Mastery of resources is distributed amongst all the
>> nodes, irrespective of the fact that a node may have mounted ro.
>>
>> Note that the locks on a resource are only created if a node actually
>> reads that inode. So it could be that you have three nodes reading
>> different directory trees that have little in common with each other.
>> In that case, each node will master its own resources, and the only
>> lock on each resource will be the local one.
>>
>>> What I want is:
>>> - Lowering the impact when one node crashes and causes deadlocks on
>>>   the filesystem for other nodes
>>
>> When a node crashes, the surviving nodes will recover it. Why should
>> there be any deadlocks?
>>
>>> - Lowering the impact when we have to resize an OCFS2 volume
>>
>> ocfs2 has online file system resize. However, we require a cluster
>> volume manager. This should be working in SLES11 when they release
>> the HA extension package. We hope to have this running with (RH)EL6.
>>
>>> - Lowering the mastery time
>>
>> Again, ro nodes have as much of a say in mastery as rw ones.
>>
>>> Is there one master per filesystem or one master per cluster?
>>> Would 3 separate filesystems under the same cluster be "separated"
>>> enough to achieve this?
_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users