Hello.
As far as I know, you don't need to use controld for gfs2: on the cman
stack gfs_controld is started by cman itself, so only the dlm clone needs
to be managed by Pacemaker.

Read more:
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/ch08s04.html
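
For example, something like this might be enough (a rough, untested sketch
based on your config; keep your own device and mount point, and the
constraint names are just examples):

primitive dlm ocf:pacemaker:controld \
        params daemon="dlm_controld" \
        op monitor interval="120s"
primitive ra_filesystem ocf:heartbeat:Filesystem \
        params device="/dev/sdb1" directory="/home" fstype="gfs2"
clone dlm-clone dlm \
        meta interleave="true"
clone fs-clone ra_filesystem \
        meta interleave="true"
colocation fs-with-dlm inf: fs-clone dlm-clone
order dlm-before-fs inf: dlm-clone fs-clone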


2011/11/7 Eric Mueller <mcss_...@yahoo.com>

> I have followed the Clusters from Scratch tutorial; however, I have some
> discrepancies that I want to make sure are correct. Everything seems to
> be functioning properly, such as gfs2 and dlm, although I am unable to
> clone the IPaddr2 resource agent; it seems to be complaining about
> "globally-unique=true". I am receiving the error messages below. Did this
> setting change in this latest version of Pacemaker?
>
> Failed actions:
>     ra_vip:0_monitor_0 (node=f-gfs01b, call=46, rc=2, status=complete):
> invalid parameter
>     ra_vip:1_monitor_0 (node=f-gfs01b, call=47, rc=2, status=complete):
> invalid parameter
>     ra_vip:0_monitor_0 (node=f-gfs01a, call=5, rc=2, status=complete):
> invalid parameter
>     ra_vip:1_monitor_0 (node=f-gfs01a, call=6, rc=2, status=complete):
> invalid parameter
>
>
> The other setting that is different is that my cluster-infrastructure is
> set to cman and not openais, but it seems to be working. Is this OK?
>
> My dlm and gfs control daemons do not have the ".pcmk" file extension,
> and I did remove the pcmk file from /etc/corosync/service.d, which I
> think is the reason. Is this OK?
>
> Here are my software versions:
> Pacemaker 1.1.5-5.el6
> Corosync Cluster Engine, version '1.2.3'
> dlm_controld 3.0.12
> gfs_controld 3.0.12
> cman_tool 3.0.12
>
> My config:
> node f-gfs01a
> node f-gfs01b
> primitive dlm ocf:pacemaker:controld \
>         params daemon="dlm_controld" \
>         op monitor interval="120s"
> primitive gfs-control ocf:pacemaker:controld \
>         params daemon="gfs_controld" args="-g 0" \
>         op monitor interval="120s"
> primitive ra_filesystem ocf:heartbeat:Filesystem \
>         params device="/dev/sdb1" directory="/home" fstype="gfs2"
> primitive ra_vip ocf:heartbeat:IPaddr2 \
>         params ip="x.x.3.85" cidr_netmask="32" clusterip_hash="sourceip" \
>         op monitor interval="30s"
> clone dlm-clone dlm \
>         meta interleave="true"
> clone fs-clone ra_filesystem
> clone gfs-clone gfs-control \
>         meta interleave="true"
> clone vip-clone ra_vip \
>         meta globally-unique="true" clone-max="2" clone-node-max="2" target-role="Started"
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.5-5.el6-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
>         cluster-infrastructure="cman" \
>         stonith-enabled="false" \
>         last-lrm-refresh="1320355096" \
>         no-quorum-policy="ignore"
>
>
>
> My crm status:
> ============
> Last updated: Mon Nov  7 10:58:25 2011
> Stack: cman
> Current DC: f-gfs01b - partition with quorum
> Version: 1.1.5-5.el6-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
> 2 Nodes configured, unknown expected votes
> 4 Resources configured.
> ============
>
> Online: [ f-gfs01a f-gfs01b ]
>
>  Clone Set: dlm-clone [dlm]
>      Started: [ f-gfs01a f-gfs01b ]
>  Clone Set: gfs-clone [gfs-control]
>      Started: [ f-gfs01a f-gfs01b ]
>  Clone Set: fs-clone [ra_filesystem]
>      Started: [ f-gfs01a f-gfs01b ]
>
> Failed actions:
>     ra_vip:0_monitor_0 (node=f-gfs01b, call=46, rc=2, status=complete):
> invalid parameter
>     ra_vip:1_monitor_0 (node=f-gfs01b, call=47, rc=2, status=complete):
> invalid parameter
>     ra_vip:0_monitor_0 (node=f-gfs01a, call=5, rc=2, status=complete):
> invalid parameter
>     ra_vip:1_monitor_0 (node=f-gfs01a, call=6, rc=2, status=complete):
> invalid parameter
>
>
> Thanks in advance!
> Eric
>
>
>
>



-- 
Viacheslav Biriukov
BR
http://biriukov.com
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
