Fajar Priyanto wrote:
> On Tuesday 17 April 2007 20:39, Alan Robertson wrote:
>>> But the problem stays.
>>> So:
>>> Is it possible to do this nfs failover without shared storage? If it's
>>> not possible, what is the best approach for this beside using shared
>>> storage? Actually at first, it's an ftp server cluster only and it works
>>> very well, but then they want the files to be accessible from nfs export
>>> too.
>>>
>>> These are my specs and settings:
>>> CentOS 4.3 with no updates
>>> heartbeat-stonith-2.0.7-1.c4
>>> heartbeat-pils-2.0.7-1.c4
>>> heartbeat-2.0.7-1.c4
>>>
>>> /etc/hosts:
>>> 127.0.0.1 ftp1.fuji.local ftp1 localhost.localdomain localhost
>>> 192.168.0.201 ftp2.fuji.local ftp2
>>> 192.168.0.200 ftp1.fuji.local ftp1
>>> 10.0.0.201 hb2.fuji.local hb2
>>> 10.0.0.200 hb1.fuji.local hb1
>>>
>>> /etc/exports:
>>> /home/pub *(rw,fsid=888)
>>>
>>> /etc/init.d/nfslock:
>>> daemon rpc.statd "$STATDARG" -n ftp1.fuji.local
>>>
>>> /etc/ha.d/ha.cf:
>>> logfacility daemon
>>> serial /dev/ttyS0
>>> watchdog /dev/watchdog
>>> bcast eth1
>>> keepalive 2
>>> warntime 5
>>> deadtime 20
>>> initdead 100
>>> baud 19200
>>> udpport 694
>>> auto_failback on
>>> node ftp1.fuji.local ftp2.fuji.local
>>> #respawn userid cmd
>>> #respawn hacluster /usr/lib/heartbeat/ccm
>>> respawn hacluster /usr/lib/heartbeat/ipfail
>>> ping 192.168.0.254
>>> #ping_group ftpcluster 192.168.1.70 192.168.1.80
>>> use_logd yes
>>> #crm on
>>> #apiauth mgmtd uid=hacluster
>>> #respawn root /usr/lib/heartbeat/mgmtd -t
>>>
>>> /etc/ha.d/haresources:
>>> ftp1.fuji.local 192.168.0.203 httpd nfs pure-ftpd rsync2
>> The two underlying filesystems have to have _exactly_ the same content,
>> the same inode numbers for every file, etc.
>>
>> We recommend using DRBD or something similar for keeping the two sides
>> in sync.
>>
>> If the two sides are read-only, and you don't want to set up DRBD, then
>> you _could_ dd the filesystem from one machine to the other. But, then
>> you can't ever update it, etc. So, that would not be very maintainable.
>>
>> But, don't misunderstand. You need something like DRBD or an identical
>> disk image between the two machines.
>
> Sorry for the late reply,
> I was at the site implementing the above-mentioned cluster. So far the client
> can accept those conditions (they have to remount the NFS export in case of
> failover). Regarding data replication between the two nodes, I set up an
> rsync script that makes sure the data on both machines stays synced.
>
> I'll explore DRBD.
> Thank you very much.
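
[Editor's note: the rsync job described above would be something like `rsync -a --delete /home/pub/ ftp2.fuji.local:/home/pub/` run from cron; that command line is an assumption. A self-contained sketch, using two local directories to stand in for the two nodes:]

```shell
#!/bin/sh
# Sketch of a one-way sync of the exported tree to the standby.
# Local directories stand in for the two nodes for illustration.
mkdir -p /tmp/node1/pub /tmp/node2/pub
echo "hello" > /tmp/node1/pub/file.txt

# -a preserves permissions/times/ownership; --delete removes files
# on the standby that no longer exist on the active node.
rsync -a --delete /tmp/node1/pub/ /tmp/node2/pub/

cat /tmp/node2/pub/file.txt
```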
Let me say very very clearly:
rsync will not work for NFS failover
OK?
--
Alan Robertson <[EMAIL PROTECTED]>
"Openness is the foundation and preservative of friendship... Let me
claim from you at all times your undisguised opinions." - William
Wilberforce
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems