On 2010-06-01 11:33, RaSca wrote:
> On Tue, 01 Jun 2010 08:27:31 CET, Andrew Beekhof wrote:
> [...]
>> Who is in charge of stopping/detaching nfsd? exportfs perhaps?
>> The solution is to get that part working; until it does, the cluster
>> won't work.
> 
> exportfs actually has no way to act on the running nfsd. It would need 
> to be modified to add nfsd handling.
> 
> So, at this moment this resource agent is useless, because it handles 
> none of the tasks described.

OK, another followup here. I have just deployed the exportfs agent on a
Debian squeeze cluster, with the following configuration:

primitive p_exportfs ocf:heartbeat:exportfs \
        params directory="/mnt" fsid="42" \
        clientspec="192.168.122.0/255.255.255.0" \
        options="rw,no_root_squash"
primitive p_fs_nfs ocf:heartbeat:Filesystem \
        params \
        device="/dev/disk/by-uuid/3653065c-dde9-4b92-9aaa-e86844bd5892" \
        directory="/mnt" fstype="ext3" options="noatime"
primitive p_ip_nfs ocf:heartbeat:IPaddr2 \
        params ip="192.168.122.110" cidr_netmask="24"
primitive p_iscsi_nfs ocf:heartbeat:iscsi \
        params iscsiadm="/usr/bin/iscsiadm" \
        target="iqn.2010-02.com.linbit:nfs-ha" \
        portal="10.9.9.91:3260"
primitive p_stonith_alice stonith:meatware \
        params hostlist="alice"
primitive p_stonith_bob stonith:meatware \
        params hostlist="bob"
location l_stonith_alice p_stonith_alice -inf: alice
location l_stonith_bob p_stonith_bob -inf: bob
colocation c_nfs inf: p_exportfs ( p_ip_nfs p_fs_nfs ) p_iscsi_nfs
order o_nfs inf: p_iscsi_nfs ( p_fs_nfs p_ip_nfs ) p_exportfs
property $id="cib-bootstrap-options" \
        stonith-enabled="true" \
        dc-version="1.0.8-f2ca9dd92b1d+ sid tip" \
        cluster-infrastructure="Heartbeat" \
        last-lrm-refresh="1275481912"
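
For reference, once a configuration like this is loaded you can
sanity-check it and exercise a failover from the shell (a sketch using
the node names from the config above; requires a running cluster):

# Check the CIB for configuration errors, verbosely
crm_verify -LV

# Show current resource placement once
crm_mon -1

# Force a failover by putting the active node into standby,
# then bring it back online
crm node standby alice
crm node online alice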

... and then mounted the exported /mnt directory on an Ubuntu lucid box.
Then, in two parallel sessions, I started jobs that wrote to and read
from the NFS mount. On the NFS cluster, I sent one node into standby and
watched the failover complete flawlessly. The jobs kept reading from and
writing to NFS, with an interruption on the order of seconds.
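
The parallel jobs were nothing fancy; something along these lines would
do (hypothetical file names, run on the NFS client):

# Writer: stream data into a file on the NFS mount
dd if=/dev/zero of=/tmp/nfsmount/writetest bs=1M count=2048 conv=fsync &

# Reader: repeatedly read an existing file back
while true; do
    dd if=/tmp/nfsmount/readtest of=/dev/null bs=1M
done &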

Here are my NFS mount options, in case you're interested:
192.168.122.110:/mnt on /tmp/nfsmount type nfs
(rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.110,mountvers=3,mountproto=tcp,addr=192.168.122.110)
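
Most of those options are negotiated defaults; the explicit mount
command behind them boils down to something like this ("hard" is the
important one, as it makes the client retry indefinitely instead of
erroring out while the server fails over):

mount -t nfs \
    -o vers=3,proto=tcp,hard,timeo=600,retrans=2,rsize=65536,wsize=65536 \
    192.168.122.110:/mnt /tmp/nfsmount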

So, I'll file this under "works for me". If it doesn't work for you,
please collect an hb_report.

Cheers,
Florian


_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems