Hi

Sorry, that was a typo. I use a different name for my resource in the LAB.
I found that installing crmsh fixed my problem, even though I don't use crm itself.

cat /etc/yum.repos.d/ha-cluster.repo

[network_ha-clustering_Stable]
name=Stable High Availability/Clustering packages (CentOS_CentOS-7)
type=rpm-md
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/
gpgcheck=1
gpgkey=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/repodata/repomd.xml.key
enabled=1

yum install -y crmsh

Regards
Jaco van Niekerk

On 25/07/2018 02:40, Igor Cicimov wrote:

Hi Jaco,

On Mon, Jul 23, 2018 at 11:10 PM, Jaco van Niekerk <[email protected]> wrote:

Hi

I am using the following packages:

pcs-0.9.162-5.el7.centos.1.x86_64
kmod-drbd84-8.4.11-1.el7_5.elrepo.x86_64
drbd84-utils-9.3.1-1.el7.elrepo.x86_64
pacemaker-1.1.18-11.el7_5.3.x86_64
corosync-2.4.3-2.el7_5.1.x86_64
targetcli-2.1.fb46-6.el7_5.noarch

my /etc/drbd.conf:

global {
    usage-count no;
}
common {
    protocol C;
}
resource imagesdata {
    on node1.san.localhost {
        device /dev/drbd0;
        disk /dev/vg_drbd/lv_drbd;
        address 192.168.0.2:7789;
        meta-disk internal;
    }
    on node2.san.localhost {
        device /dev/drbd0;
        disk /dev/vg_drbd/lv_drbd;
        address 192.168.0.3:7789;
        meta-disk internal;
    }
}

my /etc/corosync/corosync.conf:

totem {
    version: 2
    secauth: off
    cluster_name: san_cluster
    transport: udpu
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.0.0
        broadcast: yes
        mcastport: 5405
    }
}
nodelist {
    node {
        ring0_addr: node1.san.localhost
        name: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2.san.localhost
        name: node2
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
    wait_for_all: 1
    last_man_standing: 1
    auto_tie_breaker: 0
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

Pacemaker setup:

pcs cluster auth node1.san.localdomain node2.san.localdomain -u hacluster -p PASSWORD
pcs cluster setup --name san_cluster node1.san.localdomain node2.san.localdomain
pcs cluster start --all
pcs cluster enable --all
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore

The following commands don't work:

pcs resource create my_iscsidata ocf:linbit:drbd drbd_resource=iscsidata op monitor interval=10s
pcs resource master MyISCSIClone my_iscsidata master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

I receive the following on pcs status:

* my_iscsidata_monitor_0 on node2.san.localhost 'not configured' (6): call=9, status=complete, exitreason='meta parameter misconfigured, expected clone-max -le 2, but found unset.',

I guess it's a typo. See the drbd_resource parameter above: it has to match the DRBD resource you created, which in your case would be imagesdata.
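For reference, a sketch of the corrected commands with drbd_resource matching the resource name defined in drbd.conf (only that parameter changes from the commands above; this needs a live Pacemaker cluster, so it is a fragment, not something to run blindly):

```shell
# drbd_resource must equal the "resource" name in /etc/drbd.conf (imagesdata here)
pcs resource create my_iscsidata ocf:linbit:drbd drbd_resource=imagesdata op monitor interval=10s
pcs resource master MyISCSIClone my_iscsidata master-max=1 master-node-max=1 \
    clone-max=2 clone-node-max=1 notify=true
```

On a running node, `drbdadm dump` (or simply grepping drbd.conf for lines starting with "resource") is a quick way to double-check the exact resource name before creating the Pacemaker resource.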
_______________________________________________ drbd-user mailing list [email protected] http://lists.linbit.com/mailman/listinfo/drbd-user
