> On Feb 13, 2015, at 16:16, Equipe R&S Netplus <netplus.r...@gmail.com> wrote:
> > On the original question, you need to specify the fsid for the
> > file system.  Otherwise you get an fsid that's derived in part
> > from the device numbers, so different device numbers on the
> > failover lead to a different fsid.
> 
> For my test I specified an fsid for every NFS export, but that doesn't
> seem to be the root of the problem.
> Example:
> <<
> /exports                *(rw,fsid=0,insecure,no_subtree_check)
> /exports/test           192.168.0.0/24(rw,nohide,fsid=1,insecure,no_subtree_check,async)
> >>

How are you managing the failover?  If you are doing failover of the
file system (e.g., via SAN), then I'd expect the HA service manager
(rgmanager, pacemaker, etc.) to handle the export, since you don't
want to do the export until the file system is mounted.  If you are
exporting a replica file system, then I believe it has to be a block-level
replica (e.g., DRBD)--a file-system replica made via rsync or the like
will have different inode numbers, and AFAIR the inode number shows up
in the NFS file handle.
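The inode point is easy to demonstrate locally: a file-level copy lands in a fresh inode, so a handle minted against the original is useless against the copy. A minimal sketch (temp paths only, GNU stat assumed):

```shell
# Make a file and a file-level "replica" of it.
tmp=$(mktemp -d)
echo data > "$tmp/orig"
cp "$tmp/orig" "$tmp/copy"
# The copy lives in a new inode, which is exactly what breaks
# NFS file handles across an rsync-style failover.
ino_orig=$(stat -c %i "$tmp/orig")
ino_copy=$(stat -c %i "$tmp/copy")
echo "orig inode: $ino_orig"
echo "copy inode: $ino_copy"
rm -rf "$tmp"
```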

If you're doing something different, you'll need to give us details.
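For the SAN-failover case, the mount-before-export ordering can be expressed directly in the cluster configuration. A sketch in pacemaker's crm shell syntax--resource names, device path, and client network are illustrative, not taken from your setup:

```
# Mount the shared file system, then export it; never the reverse.
primitive p_fs ocf:heartbeat:Filesystem \
        params device=/dev/mapper/san_vol directory=/exports fstype=ext4
primitive p_export ocf:heartbeat:exportfs \
        params clientspec=192.168.0.0/24 directory=/exports/test \
               fsid=1 options=rw,no_subtree_check
order o_fs_before_export Mandatory: p_fs p_export
colocation c_export_with_fs inf: p_export p_fs
```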

-dan


-- 
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster