Msl,

The HA NFS validation checks that the same file system is mounted on both nodes if you are not using HAStoragePlus (HASP), i.e. if you avoid shared disks. Set the reboot mode to soft to prevent hard failures; that way, if there is a problem with the disks, the failover can still happen. The problem is that the data on the two disks will not be in sync.
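If the validation gets in the way, one rough (untested) way to wire this up is to keep the disks out of the cluster entirely and drive the mount/share/monitor steps through a SUNW.gds (Generic Data Service) resource, so Sun Cluster only fails over the logical hostname and calls your scripts. In the sketch below, the names nfs-rg, nfs-lh, the pool "tank" and the script path /opt/local/bin/nfs-local are all placeholders; substitute your own:

    # Register the generic data service type (once per cluster)
    clresourcetype register SUNW.gds

    # Failover resource group spanning both nodes
    clresourcegroup create -n node1,node2 nfs-rg

    # Logical hostname the NFS clients will mount from
    clreslogicalhostname create -g nfs-rg -h nfs-lh nfs-lh-rs

    # GDS resource: your scripts mount, share and probe the local disk;
    # the cluster framework never touches the devices itself
    clresource create -g nfs-rg -t SUNW.gds \
        -p Start_command="/opt/local/bin/nfs-local start" \
        -p Stop_command="/opt/local/bin/nfs-local stop" \
        -p Probe_command="/opt/local/bin/nfs-local probe" \
        -p Network_aware=false \
        nfs-local-rs

    clresourcegroup online -M nfs-rg

And the glue script itself could look something like this (again just a sketch; it assumes the dataset has mountpoint=legacy set, per the legacy_mount idea in the original mail):

    #!/bin/sh
    # /opt/local/bin/nfs-local -- start/stop/probe glue for the local disk
    FS=tank/export       # local ZFS dataset with mountpoint=legacy
    MNT=/export

    case "$1" in
    start)
        mount -F zfs $FS $MNT || exit 1
        share -F nfs -o rw $MNT
        ;;
    stop)
        unshare -F nfs $MNT
        umount $MNT
        ;;
    probe)
        # A non-zero exit tells GDS the service is unhealthy,
        # which will eventually drive a failover to the other node
        df $MNT >/dev/null 2>&1 || exit 100
        ;;
    esac
    exit 0

A failed probe moves the resource group (and the logical hostname) to the other node, where the start method mounts and shares that node's own disk. But as said above, nothing in this setup keeps the data on discA and discB in sync.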
Others on the alias will correct me if this logic will not work.

Best Regards,
Madhan Kumar

msl wrote:
> Hello all,
> I want to know where I can find information about configuring an HA NFS
> service without shared disks... I will explain better:
> I have a two-node cluster, and each server has a disk (non-shared) that I
> want to use as an NFS share (let's call them discA on node1 and discB on node2).
> So, I need to configure the cluster to start the NFS service and the logical
> hostname on the server where the filesystem is mounted. But I think the
> cluster should not handle the filesystem (devices), because they are not
> shared.
> Something like this:
> 1 - The filesystem is mounted on node1 (the logical hostname and NFS
> services are on node1).
> 2 - discA fails, SC sees that the device has failed and switches the
> logical hostname and NFS services to node2.
> I think I need some "glue" (an agent) to umount the filesystem on node1 (if
> necessary) and mount the filesystem on node2. The filesystem is ZFS, and
> I'm thinking of using the legacy_mount option... but the objective is to know
> how to tell SC 3.2 not to handle the mount/unmount of the filesystems,
> just monitor them and call another app/script in the failover/switch-back scenario.
>
> Thanks a lot!
