Hello,
We have a PCS cluster running on 2 CentOS 7 nodes, exposing 2 NFSv3 volumes which are then mounted to multiple servers (around 8). We want to have 2 more sets of additional shared NFS volumes, for a total of 6. I have successfully configured 3 resource groups, with each group having the following resources: * 1x ocf_heartbeat_IPaddr2 resource for the Virtual IP that exposes the NFS share assigned to its own NIC. * 3x ocf_heartbeat_Filesystem resources (1 is for the nfs_shared_infodir and the other 2 are the ones exposed via the NFS server) * 1x ocf_heartbeat_nfsserver resource that uses the aforementioned nfs_shared_infodir. * 2x ocf_heartbeat_exportfs resources that expose the other 2 filesystems as NFS shares. * 1x ocf_heartbeat_nfsnotify resource that has the Virtual IP set as its own source_host. All 9 filesystem volumes are mounted via iSCSI to the PCS nodes in /dev/mapper/mpathX So the structure is like so: Resource group 1: * /dev/mapper/mpatha - shared volume 1 * /dev/mapper/mpathb - shared volume 2 * /dev/mapper/mpathc - nfs_shared_infodir for resource group 1 Resource group 2: * /dev/mapper/mpathd - shared volume 3 * /dev/mapper/mpathe - shared volume 4 * /dev/mapper/mpathf - nfs_shared_infodir for resource group 2 Resource group 3: * /dev/mapper/mpathg - shared volume 5 * /dev/mapper/mpathh - shared volume 6 * /dev/mapper/mpathi - nfs_shared_infodir for resource group 3 My concern is that when I run a df command on the active node, the last ocf_heartbeat_nfsserver volume (/dev/mapper/mpathi) mounted to /var/lib/nfs. I understand that I cannot change this, but I can change the location of the rpc_pipefs folder. I have had this setup running with 2 resource groups in our development environment, and have not noticed any issues, but since we're planning to move to production and add a 3rd resource group, I want to make sure that this setup will not cause any issues. I am by no means an expert on NFS, so some insight is appreciated. 
If this kind of setup is not supported or recommended, I have 2 alternate plans in mind:

1. Put all resources in a single resource group, in a setup that would look like this:
   a. 1x ocf_heartbeat_IPaddr2 resource for the Virtual IP that exposes the NFS shares.
   b. 7x ocf_heartbeat_Filesystem resources (1 for the nfs_shared_infodir and 6 exposed via the NFS server).
   c. 1x ocf_heartbeat_nfsserver resource that uses the aforementioned nfs_shared_infodir.
   d. 6x ocf_heartbeat_exportfs resources that expose the other 6 filesystems as NFS shares, using the clientspec option to restrict access to specific IPs and prevent unwanted mounts.
   e. 1x ocf_heartbeat_nfsnotify resource with the Virtual IP set as its source_host.
2. Set up 2 more clusters to accommodate our needs.

I really want to avoid #2, as it would be overkill for our case.

Thanks,

Christoforos Christoforou
Senior Systems Administrator
Global Reach Internet Productions
p (515) 996-0996 | globalreach.com