On Thu, Dec 18, 2008 at 19:26, Peter Niessen <[email protected]> wrote:
> Hello,
>
> first, thanks to the heartbeat girls and guys for providing a piece of
> easily administrable software!
>
> I'm working on an active/active NFS server, in which two servers have
> access to two common storage arrays. I.e. we have servers nfs1 and
> nfs2 which both can see /dev/sdb1 and /dev/sdc1. In normal operation,
> nfs1 should serve /dev/sdb1 as /scratch1 and nfs2 should serve /dev/sdc1
> as /scratch2. I'm using heartbeat 2.1.3.
>
> With the help of
>
> http://web.archive.org/web/20070630065713/http://chilli.linuxmds.com/~mschilli/NFS/active-active-nfs.html
>
> I created identical /etc/fstab files on nfs1 and nfs2 where /scratch1
> and /scratch2 are listed with the "noauto" option:
>
> /dev/system/scratch1 /scratch1 xfs rw,suid,dev,exec,noauto,nouser,async 1 2
> /dev/system/scratch2 /scratch2 xfs rw,suid,dev,exec,noauto,nouser,async 1 2
>
>
> The /etc/exports on each machine list /scratch1, /scratch2 as
> exportable.
>
> In a first approach, I put together two groups (ordered, colocated) which
> provide an ip-address (ocf IPAddr2) (10.0.0.1, 10.0.0.2) and the
> mountpoint (ocf Filesystem), one group for each server.
> I also created two locations so that the servers run on their defaults.
>
> The nfsserver is created as a cloned resource, one instance running on
> each file server. However, when failover occurs, the surviving clone
> of the nfsserver isn't restarted,

Right.  This is by design.

What the other instance will do, if you configure it to, is receive a
notification that the other side stopped.
So what you need to do is set notify=true on the nfsserver clone
resource and modify the script to restart itself when it is told that a
peer went away.
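For heartbeat 2.1.x, a minimal CIB sketch of such a clone might look like
the following (the ids and the lsb resource class are assumptions, and the
exact attribute placement varies between CRM schema versions — check yours
before pasting):

```xml
<clone id="nfsserver-clone">
  <instance_attributes id="nfsserver-clone-attrs">
    <attributes>
      <!-- tell the CRM to send pre/post notifications to all instances -->
      <nvpair id="nfsserver-notify" name="notify" value="true"/>
      <!-- one instance per node, two nodes -->
      <nvpair id="nfsserver-clone-max" name="clone_max" value="2"/>
      <nvpair id="nfsserver-clone-node-max" name="clone_node_max" value="1"/>
    </attributes>
  </instance_attributes>
  <primitive id="nfsserver" class="lsb" type="nfsserver"/>
</clone>
```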
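And a sketch of the notify handler the modified script would need. This is
an assumption-laden illustration, not the stock nfsserver init script: the
helper name nfs_restart is made up, and in a real agent it would invoke
something like "/etc/init.d/nfsserver restart".

```shell
#!/bin/sh
# Sketch of an OCF "notify" action for a cloned nfsserver resource.
# The CRM passes notification details in OCF_RESKEY_CRM_meta_notify_*
# environment variables.

nfs_restart() {
    # Hypothetical helper; a real agent would restart nfsd here so it
    # reclaims the handles of the failed peer.
    echo "restarting nfsserver"
}

notify() {
    local type="$OCF_RESKEY_CRM_meta_notify_type"       # pre or post
    local op="$OCF_RESKEY_CRM_meta_notify_operation"    # start, stop, ...

    # After a peer instance has stopped, restart our own instance.
    if [ "$type" = "post" ] && [ "$op" = "stop" ]; then
        nfs_restart
    fi
    return 0  # OCF_SUCCESS
}
```

The agent's case statement would then dispatch the "notify" action to this
function.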

> which results in "stale nfs handle".
>
> How can I tell the nfsserver to restart on failover?
>
> Cheers & thanks, Peter.
>
> PS: Why did I not put the nfs servers into the group resources?
>
> When the nfsserver is part of the group, the fail-over works great:
> The IP address is transferred, the partitions are mounted and the nfsserver
> is restarted, so the nfs clients keep seeing the disks.
>
> For example, if nfs2 fails, 10.0.0.2 moves over to nfs1, and /scratch2
> is mounted on nfs1.
>
> Now, fail-back has a problem: when the relocated resource is migrated
> back to the restarted node (nfs2), it stops the nfsserver on the failover
> node (nfs1), and the clients get in trouble because the /scratch1
> filesystem disappears.
>
> --
> Peter Niessen
> FZJ
> Tel.: (+49)2461/61-1753
>
>
>
> -------------------------------------------------------------------
> -------------------------------------------------------------------
> Forschungszentrum Juelich GmbH
> 52425 Juelich
>
> Registered office: Juelich
> Registered in the commercial register of the Amtsgericht Dueren, No. HR B 3498
> Chairwoman of the Supervisory Board: MinDir'in Baerbel Brumme-Bothe
> Management: Prof. Dr. Achim Bachem (Chairman),
> Dr. Ulrich Krafft (Deputy Chairman), Prof. Dr. Harald Bolt,
> Dr. Sebastian M. Schmidt
> -------------------------------------------------------------------
> -------------------------------------------------------------------
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>
