Hi. We are moving from an openSUSE 12.1 system with the packages pacemaker-1.1.6, corosync-1.4.1, ocfs2-tools-o2cb-1.8.0 and ocfs2-tools-1.8.0
to a new server running SUSE SLES 12, with pacemaker-1.1.12, corosync-2.3.3 and ocfs2-tools-1.8.2.

While migrating the configuration from the old system, I noticed that the way OCFS2 is configured has changed, and I managed to get it running on the new host. But when the time came to configure the virtual IP addresses allocated to the loopback interface, I couldn't go ahead: a weird error shows up when we try to clone and start the IP addresses involved.

This is the configuration we have running on the old system:

primitive httpd lsb:apache2 \
        op monitor interval="60s" timeout="20s" \
        op start interval="0" timeout="90s" \
        op stop interval="0" timeout="100s"
primitive ip_ccardbusiness ocf:heartbeat:IPaddr2 \
        params ip="172.31.0.18" cidr_netmask="32" nic="lo" \
        op monitor interval="30s"
primitive ip_ccardgift ocf:heartbeat:IPaddr2 \
        params ip="172.31.0.16" cidr_netmask="32" nic="lo" \
        op monitor interval="30s"
primitive ip_intranet ocf:heartbeat:IPaddr2 \
        params ip="172.31.0.19" cidr_netmask="32" nic="lo" \
        op monitor interval="30s"
group ip_httpd ip_intranet ip_ccardgift ip_ccardbusiness
clone cloneHTTPD httpd \
        meta globally-unique="false" interleave="true" target-role="Started"
clone cloneIP_HTTPD ip_httpd \
        meta globally-unique="false" interleave="true" target-role="Started"
colocation IP_HTTPD-cloneHTTPD inf: cloneHTTPD cloneIP_HTTPD
order httpd-after-ip_httpd inf: cloneIP_HTTPD:start cloneHTTPD:start

This is the configuration I have done on the new system:

primitive ip_ccardbusiness IPaddr2 \
        params ip=172.31.0.148 cidr_netmask=32 nic=lo \
        op monitor interval=10 timeout=20s \
        op start interval=0 timeout=20s \
        op stop interval=0 timeout=20s
primitive ip_ccardgift IPaddr2 \
        params ip=172.31.0.146 cidr_netmask=32 nic=lo \
        op monitor interval=10 timeout=20s \
        op start interval=0 timeout=20s \
        op stop interval=0 timeout=20s
primitive ip_intranet IPaddr2 \
        params ip=172.31.0.149 cidr_netmask=32 nic=lo \
        op monitor interval=10 timeout=20s \
        op start interval=0 timeout=20s \
        op stop interval=0 timeout=20s
group ip-httpd ip_ccardbusiness ip_ccardgift ip_intranet
clone c-ip-httpd ip-httpd \
        meta interleave=true globally-unique=false target-role=Started

While configuring the system, before putting all the addresses into a group, the addresses come up on one of the servers. But once they are grouped and cloned, I get this error:

Last updated: Tue Sep  1 14:02:16 2015
Last change: Fri Aug 21 17:47:01 2015
Stack: corosync
Current DC: apolo (168427777) - partition with quorum
Version: 1.1.12-ad083a8
2 Nodes configured
11 Resources configured

Online: [ apolo diana ]

 stonith_sbd    (stonith:external/sbd): Started apolo
 Clone Set: base-clone [dlm]
     Started: [ apolo diana ]
 Clone Set: c-clusterfs [clusterfs]
     Started: [ apolo diana ]

Failed actions:
    ip_ccardbusiness_start_0 on apolo 'unknown error' (1): call=65, status=complete,
        last-rc-change='Fri Aug 21 17:46:32 2015', queued=0ms, exec=337ms
    ip_ccardbusiness_start_0 on diana 'unknown error' (1): call=64, status=complete,
        last-rc-change='Fri Aug 21 17:44:21 2015', queued=1ms, exec=254ms

I have already tried to set up the clone this way, but without any success:

clone c-ip-httpd ip-httpd \
        meta interleave=true globally-unique=false clone-max=2 clone-node-max=1 target-role=Started

Am I missing something? Please, can someone shed some light on this issue? Any help will be very welcome.

Best regards,
Carlos

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
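[Editor's note: 'unknown error' (1) is the generic OCF failure code (OCF_ERR_GENERIC) returned by the IPaddr2 agent itself rather than by Pacemaker, so the agent's own error message in the system log on apolo/diana is usually more informative than the crm_mon summary. One way to narrow the problem down is to take the group out of the picture and clone a single loopback address on its own. A minimal sketch in the same crm syntax follows; the resource names ip_test/c-ip-test are hypothetical, chosen here only for illustration:]

primitive ip_test IPaddr2 \
        params ip=172.31.0.148 cidr_netmask=32 nic=lo \
        op monitor interval=10 timeout=20s
clone c-ip-test ip_test \
        meta interleave=true globally-unique=false

[If this single clone already fails with the same error, the issue lies in how the newer IPaddr2 agent handles the address/netmask on nic=lo rather than in the group or clone configuration; if it starts cleanly, the grouping itself is the place to look.]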