Fajar Priyanto wrote:
> Hi all,
> I'm setting up a 2-node cluster for httpd, pure-ftpd, and nfs failover.
> All seems ok. httpd, pure-ftpd, and nfs fail over in all tests (unplugged
> eth0, reboot, shutdown, etc.).
>
> The problem is, nfs exports a separate partition on each server with no
> shared storage; in this case it's exporting /var/ftp/pub, which is
> softlinked to /home/pub.
>
> The problem arises when failover occurs: the client mounting that export
> has to remount it because of this error:
> cannot access /mnt: Stale NFS file handle
>
> I've been googling around and it seems that the proposed solution is to
> edit /etc/init.d/nfslock with:
>
> and to export with the option fsid=xxxx on both nodes.
>
> But the problem persists.
> So:
> Is it possible to do this nfs failover without shared storage? If it's not
> possible, what is the best approach for this besides using shared storage?
> Actually, at first it was an ftp server cluster only and it worked very
> well, but then they wanted the files to be accessible from an nfs export
> too.
>
> These are my specs and settings:
> Centos 4.3 with no updates
> heartbeat-stonith-2.0.7-1.c4
> heartbeat-pils-2.0.7-1.c4
> heartbeat-2.0.7-1.c4
>
> /etc/hosts:
> 127.0.0.1 ftp1.fuji.local ftp1 localhost.localdomain localhost
> 192.168.0.201 ftp2.fuji.local ftp2
> 192.168.0.200 ftp1.fuji.local ftp1
> 10.0.0.201 hb2.fuji.local hb2
> 10.0.0.200 hb1.fuji.local hb1
>
> /etc/exports:
> /home/pub *(rw,fsid=888)
>
> /etc/init.d/nfslock:
> daemon rpc.statd "$STATDARG" -n ftp1.fuji.local
>
> /etc/ha.d/ha.cf:
> logfacility daemon
> serial /dev/ttyS0
> watchdog /dev/watchdog
> bcast eth1
> keepalive 2
> warntime 5
> deadtime 20
> initdead 100
> baud 19200
> udpport 694
> auto_failback on
> node ftp1.fuji.local ftp2.fuji.local
> #respawn userid cmd
> #respawn hacluster /usr/lib/heartbeat/ccm
> respawn hacluster /usr/lib/heartbeat/ipfail
> ping 192.168.0.254
> #ping_group ftpcluster 192.168.1.70 192.168.1.80
> use_logd yes
> #crm on
> #apiauth mgmtd uid=hacluster
> #respawn root /usr/lib/heartbeat/mgmtd -t
>
> /etc/ha.d/haresources:
> ftp1.fuji.local 192.168.0.203 httpd nfs pure-ftpd rsync2
The two underlying filesystems have to have _exactly_ the same content,
the same inode numbers for every file, etc.
We recommend using DRBD or something similar for keeping the two sides
in sync.
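For reference, a minimal drbd.conf resource definition for a setup like this
might look as follows. The device name, backing partition (/dev/sda5), and
port are assumptions that would need adjusting to the real hosts; the
10.0.0.x addresses are the heartbeat interconnect from the original post:

```
resource r0 {
    protocol C;
    on ftp1.fuji.local {
        device    /dev/drbd0;
        disk      /dev/sda5;        # backing partition (assumed)
        address   10.0.0.200:7788;
        meta-disk internal;
    }
    on ftp2.fuji.local {
        device    /dev/drbd0;
        disk      /dev/sda5;        # backing partition (assumed)
        address   10.0.0.201:7788;
        meta-disk internal;
    }
}
```

The active node mounts /dev/drbd0 at the export point (e.g. /home/pub);
heartbeat then decides which side is primary, and both sides see identical
inode numbers because it is literally the same filesystem image.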
If the two sides are read-only, and you don't want to set up DRBD, then
you _could_ dd the filesystem from one machine to the other. But, then
you can't ever update it, etc. So, that would not be very maintainable.
But, don't misunderstand. You need something like DRBD or an identical
disk image between the two machines.
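With DRBD in place, the failover itself becomes a resource-group change in
heartbeat. A sketch of the haresources line, assuming a DRBD resource named
r0 with an ext3 filesystem mounted at /home/pub (names and fstype are
illustrative):

```
ftp1.fuji.local 192.168.0.203 drbddisk::r0 \
    Filesystem::/dev/drbd0::/home/pub::ext3 httpd nfs pure-ftpd rsync2
```

drbddisk promotes r0 to primary on the takeover node before Filesystem
mounts it, and nfs starts only after the mount is in place.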
--
Alan Robertson <[EMAIL PROTECTED]>
"Openness is the foundation and preservative of friendship... Let me
claim from you at all times your undisguised opinions." - William
Wilberforce
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems