On Thu, Jul 30, 2009 at 2:03 PM, Ahmed Munir<[email protected]> wrote:
> Hi,
>
> Thanks, Andrew, for advising me to upgrade from heartbeat 2.1.3 to
> pacemaker 1.0.4 along with heartbeat 2.99.2.
>
> After installing and configuring, I ran a couple of tests, like turning the
> nodes off, starting them, and rebooting them. Everything worked as desired at
> that point; I had assigned private IPs to the nodes, i.e. node ha1
> IP: 192.168.0.184 and node ha2 IP: 192.168.0.185.
>
> But when I assigned ha1 and ha2 public IPs, I'm still facing the same problem
> as with the previous version, heartbeat 2.1.3.
> When I turn ha2 off and power it back up, it shows the following status:

You're not creating resources for the nodes' real addresses, are you?
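For comparison, a cluster IP resource should manage a spare, *floating* address that neither node owns for itself. A minimal crm-shell sketch (the 192.168.0.190 address, nic, and monitor interval here are placeholders, not taken from your configuration):

```shell
# Hypothetical example: a virtual IP that floats between ha1 and ha2.
# 192.168.0.190 must NOT be either node's own (real) address.
crm configure primitive IPaddr_1 ocf:heartbeat:IPaddr \
    params ip=192.168.0.190 nic=eth0 \
    op monitor interval=30s
```

If IPaddr_1/IPaddr_2 instead carry ha1's and ha2's own addresses, a failover makes the surviving node take over its peer's identity, and the rebooted peer can then no longer communicate or rejoin the membership, which would match the symptoms below.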

>
> ============
> Last updated: Thu Jul 30 17:46:12 2009
> Stack: Heartbeat
> Current DC: ha2 (70503c2e-bb4a-48f8-aab3-53696656a4d0) - partition with
> quorum
> Version: 1.0.4-6dede86d6105786af3a5321ccf66b44b6914f0aa
> 2 Nodes configured, unknown expected votes
> 4 Resources configured.
> ============
> OFFLINE: [ ha1  ]
> Online: [ ha2  ]
>
> IPaddr_1        (ocf::heartbeat:IPaddr):        Started ha2
> IPaddr_2        (ocf::heartbeat:IPaddr):        Started ha2
> OpenSips_1      (ocf::heartbeat:OpenSips):      Started ha2
> OpenSips_2      (ocf::heartbeat:OpenSips):      Started ha2
>
> On the other hand, when I check node ha1, which had failed over to ha2, it
> shows its status as listed below:
>
> ============
> Last updated: Thu Jul 30 17:46:12 2009
> Stack: Heartbeat
> Current DC: ha1 (e651c120-b9a1-489a-baf7-caf0028ad540) - partition with
> quorum
> Version: 1.0.4-6dede86d6105786af3a5321ccf66b44b6914f0aa
> 2 Nodes configured, unknown expected votes
> 4 Resources configured.
> ============
> OFFLINE: [ ha2  ]
> Online: [ ha1  ]
>
> IPaddr_1        (ocf::heartbeat:IPaddr):        Started ha1
> IPaddr_2        (ocf::heartbeat:IPaddr):        Started ha1
> OpenSips_1      (ocf::heartbeat:OpenSips):      Started ha1
> OpenSips_2      (ocf::heartbeat:OpenSips):      Started ha1
>
> When I checked the logs, they show that ha2 is not a member of ha1's
> partition; I'm listing the logs below:
>
> Jul 30 18:10:34 ha1 crmd: [2841]: WARN: crmd_ha_msg_callback: Ignoring HA
> message (op=vote) from ha2: not in our membership list (size=1)
> Jul 30 18:10:37 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_slave_all message (38) from ha2: not in our membership
> Jul 30 18:10:37 ha1 attrd: [2840]: info: attrd_ha_callback: flush message
> from ha2
> Jul 30 18:10:39 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_replace message (3b) from ha2: not in our membership
> Jul 30 18:10:39 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (3f) from ha2: not in our membership
> Jul 30 18:10:41 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (41) from ha2: not in our membership
> Jul 30 18:10:41 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (43) from ha2: not in our membership
> Jul 30 18:10:41 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (45) from ha2: not in our membership
> Jul 30 18:10:41 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (46) from ha2: not in our membership
> Jul 30 18:10:41 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (47) from ha2: not in our membership
> Jul 30 18:10:41 ha1 attrd: [2840]: info: attrd_ha_callback: flush message
> from ha2
> Jul 30 18:10:42 ha1 last message repeated 3 times
> Jul 30 18:10:42 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (4a) from ha2: not in our membership
> Jul 30 18:10:42 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (4b) from ha2: not in our membership
> Jul 30 18:10:42 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (4c) from ha2: not in our membership
> Jul 30 18:10:42 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (4d) from ha2: not in our membership
> Jul 30 18:10:42 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (4e) from ha2: not in our membership
> Jul 30 18:10:42 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (4f) from ha2: not in our membership
> Jul 30 18:10:42 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (50) from ha2: not in our membership
> Jul 30 18:10:42 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (52) from ha2: not in our membership
> Jul 30 18:10:42 ha1 cib: [2837]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (53) from ha2: not in our membership
> Jul 30 18:10:55 ha1 ccm: [2836]: info: Break tie for 2 nodes cluster
> Jul 30 18:10:55 ha1 crmd: [2841]: info: mem_handle_event: Got an event
> OC_EV_MS_INVALID from ccm
> Jul 30 18:10:55 ha1 cib: [2837]: info: mem_handle_event: Got an event
> OC_EV_MS_INVALID from ccm
> Jul 30 18:10:55 ha1 crmd: [2841]: info: mem_handle_event: no mbr_track info
> Jul 30 18:10:55 ha1 cib: [2837]: info: mem_handle_event: no mbr_track info
> Jul 30 18:10:55 ha1 crmd: [2841]: info: mem_handle_event: Got an event
> OC_EV_MS_NEW_MEMBERSHIP from ccm
> Jul 30 18:10:55 ha1 cib: [2837]: info: mem_handle_event: Got an event
> OC_EV_MS_NEW_MEMBERSHIP from ccm
> Jul 30 18:10:55 ha1 crmd: [2841]: info: mem_handle_event: instance=31,
> nodes=1, new=0, lost=0, n_idx=0, new_idx=1, old_idx=3
> Jul 30 18:10:55 ha1 cib: [2837]: info: mem_handle_event: instance=31,
> nodes=1, new=0, lost=0, n_idx=0, new_idx=1, old_idx=3
> Jul 30 18:10:55 ha1 crmd: [2841]: info: crmd_ccm_msg_callback: Quorum
> (re)attained after event=NEW MEMBERSHIP (id=31)
> Jul 30 18:10:55 ha1 cib: [2837]: info: cib_ccm_msg_callback: Processing CCM
> event=NEW MEMBERSHIP (id=31)
> Jul 30 18:10:55 ha1 crmd: [2841]: info: ccm_event_detail: NEW MEMBERSHIP:
> trans=31, nodes=1, new=0, lost=0 n_idx=0, new_idx=1, old_idx=3
> Jul 30 18:10:55 ha1 crmd: [2841]: info: ccm_event_detail:       CURRENT: ha1
> [nodeid=0, born=31]
> Jul 30 18:10:55 ha1 crmd: [2841]: info: populate_cib_nodes_ha: Requesting
> the list of configured nodes
> Jul 30 18:10:57 ha1 cib: [2837]: info: cib_process_request: Operation
> complete: op cib_modify for section nodes (origin=local/crmd/182,
> version=0.37.66): ok (rc=0)
> Jul 30 18:11:00 ha1 crmd: [2841]: WARN: crmd_ha_msg_callback: Ignoring HA
> message (op=noop) from ha2: not in our membership list (size=1)
>
>
> Kindly review my problem. I'm attaching my ha.cf and cib.xml as well;
> please do reply.
>
> --
> Regards,
>
> Ahmed Munir
>
>
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
