> I had the same problem with NFS HA with RHCS; I solved it by mounting the
> shares over UDP on the client side.

Thank you, but I prefer a TCP NFS mount.
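For reference, this is roughly the kind of TCP client mount I have in mind; the
hostname and paths below are only placeholders:
<<
# client-side test mount over TCP against the floating service address
mount -t nfs -o proto=tcp,hard nfs-vip.example.com:/exports/test /mnt/test
>>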

> On the original question, you need to specify the fsid for the
> file system.  Otherwise you get an fsid that's derived in part
> from the device numbers, so different device numbers on the
> failover leads to a different fsid.

For my tests, I specify an fsid for every NFS export, but that does not seem to
be the root of the problem.
Example:
<<
/exports                *(rw,fsid=0,insecure,no_subtree_check)
/exports/test           192.168.0.0/24(rw,nohide,fsid=1,insecure,no_subtree_check,async)
>>
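To confirm that these fsid values are really what each node exports (before and
after a failover), I check with something like:
<<
# lists active exports with their effective options, including fsid
exportfs -v
>>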

> Having "hard" mounting seems to allow failover to work for us.
> I'd rather not though as we have VPN laptop client machines that we'd
> rather didn't hang if the connection drops (maybe soft with a suitable
> timeo and retrans options would be good for these boxes).

What mount options do you use with "hard" mounting so that the exported NFS
service fails over cleanly?
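For context, here is the kind of client-side /etc/fstab entry I understand you
are describing; the option values are only my guesses, not something I have
validated:
<<
# cluster clients: hard mount, retries indefinitely so I/O survives a failover
nfs-vip:/exports/test  /mnt/test  nfs  proto=tcp,hard                      0 0
# VPN laptops: soft mount with bounded retries so they do not hang if the link drops
nfs-vip:/exports/test  /mnt/test  nfs  proto=tcp,soft,timeo=100,retrans=3  0 0
>>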

Thank you for your response.
