Hello. Over the last couple of days I set up an active/passive NFS server and iSCSI storage using DRBD, Pacemaker, Heartbeat, LIO, and the NFS kernel server. While testing the cluster, I often set it to unmanaged using:
crm configure property maintenance-mode=true

Sometimes when I did that, both nodes, or just the standby node, rebooted themselves because /usr/lib/heartbeat/crmd was crashing. I can reproduce the problem easily; it even happened on a two-node cluster with no resources at all. If you need more information, drop me an e-mail.

Highlights of the log:

Jun 6 10:17:37 astorage1 crmd: [2947]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_FAIL cause=C_FSA_INTERNAL origin=get_lrm_resource ]
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: crm_abort: abort_transition_graph: Triggered assert at te_utils.c:339 : transition_graph != NULL
Jun 6 10:17:37 astorage1 heartbeat: [2863]: WARN: Managed /usr/lib/heartbeat/crmd process 2947 killed by signal 11 [SIGSEGV - Segmentation violation].
Jun 6 10:17:37 astorage1 ccm: [2942]: info: client (pid=2947) removed from ccm
Jun 6 10:17:37 astorage1 heartbeat: [2863]: ERROR: Managed /usr/lib/heartbeat/crmd process 2947 dumped core
Jun 6 10:17:37 astorage1 heartbeat: [2863]: EMERG: Rebooting system.
Reason: /usr/lib/heartbeat/crmd

See the log:

Jun 6 10:17:22 astorage1 crmd: [2947]: info: do_election_count_vote: Election 4 (owner: 56adf229-a1a7-4484-8f18-742ddce19db8) lost: vote from astorage2 (Uptime)
Jun 6 10:17:22 astorage1 crmd: [2947]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Jun 6 10:17:27 astorage1 crmd: [2947]: info: update_dc: Set DC to astorage2 (3.0.6)
Jun 6 10:17:28 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=astorage2/crmd/210, version=0.9.18): ok (rc=0)
Jun 6 10:17:28 astorage1 attrd: [2946]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun 6 10:17:28 astorage1 crmd: [2947]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Jun 6 10:17:28 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd3:0 (10000)
Jun 6 10:17:28 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd10:0 (10000)
Jun 6 10:17:28 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd8:0 (10000)
Jun 6 10:17:28 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd6:0 (10000)
Jun 6 10:17:28 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd5:0 (10000)
Jun 6 10:17:28 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd9:0 (10000)
Jun 6 10:17:28 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun 6 10:17:28 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd4:0 (10000)
Jun 6 10:17:30 astorage1 lrmd: [2944]: info: cancel_op: operation monitor[35] on astorage2-fencing for client 2947, its parameters: hostname=[astorage2] userid=[ADMIN] CRM_meta_timeout=[20000] CRM_meta_name=[monitor] passwd=[ADMIN] crm_feature_set=[3.0.6] ipaddr=[10.10.30.22] CRM_meta_interval=[60000] cancelled
Jun 6 10:17:30 astorage1 crmd: [2947]: info: process_lrm_event: LRM operation astorage2-fencing_monitor_60000 (call=35, status=1, cib-update=0, confirmed=true) Cancelled
Jun 6 10:17:30 astorage1 lrmd: [2944]: info: cancel_op: operation monitor[36] on drbd10:0 for client 2947, its parameters: drbd_resource=[r10] CRM_meta_role=[Slave] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_inactive_resource=[drbd10:0 ] CRM_meta_notify_promote_uname=[ ] CRM_meta_timeout=[20000] CRM_meta_notify_master_uname=[astorage2 ] CRM_meta_name=[monitor] CRM_meta_notify_start_resource=[drbd10:0 ] CRM_meta_notify_start_uname=[astorage1 ] crm_feature_set=[3.0.6] CRM_meta_notify=[true] CRM_meta_notify_promote_resour cancelled
Jun 6 10:17:30 astorage1 crmd: [2947]: info: process_lrm_event: LRM operation drbd10:0_monitor_31000 (call=36, status=1, cib-update=0, confirmed=true) Cancelled
Jun 6 10:17:30 astorage1 lrmd: [2944]: info: cancel_op: operation monitor[37] on drbd3:0 for client 2947, its parameters: drbd_resource=[r3] CRM_meta_role=[Slave] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_inactive_resource=[drbd3:0 ] CRM_meta_notify_promote_uname=[ ] CRM_meta_timeout=[20000] CRM_meta_notify_master_uname=[astorage2 ] CRM_meta_name=[monitor] CRM_meta_notify_start_resource=[drbd3:0 ] CRM_meta_notify_start_uname=[astorage1 ] crm_feature_set=[3.0.6] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[ cancelled
Jun 6 10:17:30 astorage1 crmd: [2947]: info: process_lrm_event: LRM operation drbd3:0_monitor_31000 (call=37, status=1, cib-update=0, confirmed=true) Cancelled
Jun 6 10:17:30 astorage1 lrmd: [2944]: info: cancel_op: operation monitor[38] on drbd4:0 for client 2947, its parameters: drbd_resource=[r4] CRM_meta_role=[Slave] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_inactive_resource=[drbd4:0 ] CRM_meta_notify_promote_uname=[ ] CRM_meta_timeout=[20000] CRM_meta_notify_master_uname=[astorage2 ] CRM_meta_name=[monitor] CRM_meta_notify_start_resource=[drbd4:0 ] CRM_meta_notify_start_uname=[astorage1 ] crm_feature_set=[3.0.6] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[ cancelled
Jun 6 10:17:30 astorage1 crmd: [2947]: info: process_lrm_event: LRM operation drbd4:0_monitor_31000 (call=38, status=1, cib-update=0, confirmed=true) Cancelled
Jun 6 10:17:30 astorage1 lrmd: [2944]: info: cancel_op: operation monitor[39] on drbd5:0 for client 2947, its parameters: drbd_resource=[r5] CRM_meta_role=[Slave] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_inactive_resource=[drbd5:0 ] CRM_meta_notify_promote_uname=[ ] CRM_meta_timeout=[20000] CRM_meta_notify_master_uname=[astorage2 ] CRM_meta_name=[monitor] CRM_meta_notify_start_resource=[drbd5:0 ] CRM_meta_notify_start_uname=[astorage1 ] crm_feature_set=[3.0.6] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[ cancelled
Jun 6 10:17:30 astorage1 crmd: [2947]: info: process_lrm_event: LRM operation drbd5:0_monitor_31000 (call=39, status=1, cib-update=0, confirmed=true) Cancelled
Jun 6 10:17:30 astorage1 lrmd: [2944]: info: cancel_op: operation monitor[40] on drbd6:0 for client 2947, its parameters: drbd_resource=[r6] CRM_meta_role=[Slave] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_inactive_resource=[drbd6:0 ] CRM_meta_notify_promote_uname=[ ] CRM_meta_timeout=[20000] CRM_meta_notify_master_uname=[astorage2 ] CRM_meta_name=[monitor] CRM_meta_notify_start_resource=[drbd6:0 ] CRM_meta_notify_start_uname=[astorage1 ] crm_feature_set=[3.0.6] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[ cancelled
Jun 6 10:17:30 astorage1 crmd: [2947]: info: process_lrm_event: LRM operation drbd6:0_monitor_31000 (call=40, status=1, cib-update=0, confirmed=true) Cancelled
Jun 6 10:17:30 astorage1 lrmd: [2944]: info: cancel_op: operation monitor[41] on drbd8:0 for client 2947, its parameters: drbd_resource=[r8] CRM_meta_role=[Slave] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_inactive_resource=[drbd8:0 ] CRM_meta_notify_promote_uname=[ ] CRM_meta_timeout=[20000] CRM_meta_notify_master_uname=[astorage2 ] CRM_meta_name=[monitor] CRM_meta_notify_start_resource=[drbd8:0 ] CRM_meta_notify_start_uname=[astorage1 ] crm_feature_set=[3.0.6] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[ cancelled
Jun 6 10:17:30 astorage1 crmd: [2947]: info: process_lrm_event: LRM operation drbd8:0_monitor_31000 (call=41, status=1, cib-update=0, confirmed=true) Cancelled
Jun 6 10:17:30 astorage1 lrmd: [2944]: info: cancel_op: operation monitor[42] on drbd9:0 for client 2947, its parameters: drbd_resource=[r9] CRM_meta_role=[Slave] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_inactive_resource=[drbd9:0 ] CRM_meta_notify_promote_uname=[ ] CRM_meta_timeout=[20000] CRM_meta_notify_master_uname=[astorage2 ] CRM_meta_name=[monitor] CRM_meta_notify_start_resource=[drbd9:0 ] CRM_meta_notify_start_uname=[astorage1 ] crm_feature_set=[3.0.6] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[ cancelled
Jun 6 10:17:30 astorage1 crmd: [2947]: info: process_lrm_event: LRM operation drbd9:0_monitor_31000 (call=42, status=1, cib-update=0, confirmed=true) Cancelled
Jun 6 10:17:31 astorage1 crmd: [2947]: notice: crmd_client_status_callback: Status update: Client astorage2/crmd now has status [offline] (DC=false)
Jun 6 10:17:31 astorage1 crmd: [2947]: info: crm_update_peer_proc: astorage2.crmd is now offline
Jun 6 10:17:31 astorage1 crmd: [2947]: notice: crmd_peer_update: Status update: Client astorage2/crmd now has status [offline] (DC=astorage2)
Jun 6 10:17:31 astorage1 crmd: [2947]: info: crmd_peer_update: Got client status callback - our DC is dead
Jun 6 10:17:31 astorage1 crmd: [2947]: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_CRMD_STATUS_CALLBACK origin=crmd_peer_update ]
Jun 6 10:17:31 astorage1 crmd: [2947]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun 6 10:17:31 astorage1 crmd: [2947]: info: do_te_control: Registering TE UUID: 3fa38a9f-5ebc-4a48-bc80-1c95cc6655bc
Jun 6 10:17:31 astorage1 crmd: [2947]: info: set_graph_functions: Setting custom graph functions
Jun 6 10:17:31 astorage1 crmd: [2947]: info: start_subsystem: Starting sub-system "pengine"
Jun 6 10:17:31 astorage1 pengine: [5812]: info: Invoked: /usr/lib/pacemaker/pengine
Jun 6 10:17:31 astorage1 cib: [2943]: info: cib_process_shutdown_req: Shutdown REQ from astorage2
Jun 6 10:17:31 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_shutdown_req for section 'all' (origin=astorage2/astorage2/(null), version=0.9.54): ok (rc=0)
Jun 6 10:17:32 astorage1 cib: [2943]: info: cib_client_status_callback: Status update: Client astorage2/cib now has status [leave]
Jun 6 10:17:32 astorage1 cib: [2943]: info: crm_update_peer_proc: astorage2.cib is now offline
Jun 6 10:17:32 astorage1 cib: [2943]: info: mem_handle_event: Got an event OC_EV_MS_NOT_PRIMARY from ccm
Jun 6 10:17:32 astorage1 cib: [2943]: info: mem_handle_event: instance=12, nodes=2, new=2, lost=0, n_idx=0, new_idx=0, old_idx=4
Jun 6 10:17:32 astorage1 cib: [2943]: info: cib_ccm_msg_callback: Processing CCM event=NOT PRIMARY (id=12)
Jun 6 10:17:35 astorage1 crmd: [2947]: info: do_dc_takeover: Taking over DC status for this partition
Jun 6 10:17:35 astorage1 cib: [2943]: info: cib_process_readwrite: We are now in R/W mode
Jun 6 10:17:35 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/56, version=0.9.55): ok (rc=0)
Jun 6 10:17:35 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/57, version=0.9.56): ok (rc=0)
Jun 6 10:17:35 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/59, version=0.9.57): ok (rc=0)
Jun 6 10:17:35 astorage1 crmd: [2947]: info: join_make_offer: Making join offers based on membership 12
Jun 6 10:17:35 astorage1 crmd: [2947]: info: join_make_offer: Peer process on astorage2 is not active (yet?): 00000002 2
Jun 6 10:17:35 astorage1 crmd: [2947]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun 6 10:17:35 astorage1 crmd: [2947]: info: mem_handle_event: Got an event OC_EV_MS_NOT_PRIMARY from ccm
Jun 6 10:17:35 astorage1 crmd: [2947]: info: mem_handle_event: instance=12, nodes=2, new=2, lost=0, n_idx=0, new_idx=0, old_idx=4
Jun 6 10:17:35 astorage1 crmd: [2947]: info: crmd_ccm_msg_callback: Quorum lost after event=NOT PRIMARY (id=12)
Jun 6 10:17:35 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/61, version=0.9.58): ok (rc=0)
Jun 6 10:17:35 astorage1 crmd: [2947]: info: update_dc: Set DC to astorage1 (3.0.6)
Jun 6 10:17:35 astorage1 crmd: [2947]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun 6 10:17:35 astorage1 crmd: [2947]: info: do_dc_join_finalize: join-1: Syncing the CIB from astorage1 to the rest of the cluster
Jun 6 10:17:35 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/64, version=0.9.58): ok (rc=0)
Jun 6 10:17:35 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/65, version=0.9.59): ok (rc=0)
Jun 6 10:17:36 astorage1 crmd: [2947]: info: do_dc_join_ack: join-1: Updating node state to member for astorage1
Jun 6 10:17:36 astorage1 crmd: [2947]: info: erase_status_tag: Deleting xpath: //node_state[@uname='astorage1']/lrm
Jun 6 10:17:36 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='astorage1']/lrm (origin=local/crmd/66, version=0.9.60): ok (rc=0)
Jun 6 10:17:36 astorage1 crmd: [2947]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun 6 10:17:36 astorage1 crmd: [2947]: info: populate_cib_nodes_ha: Requesting the list of configured nodes
Jun 6 10:17:36 astorage1 attrd: [2946]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun 6 10:17:36 astorage1 crmd: [2947]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Jun 6 10:17:36 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd3:0 (10000)
Jun 6 10:17:36 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/68, version=0.9.62): ok (rc=0)
Jun 6 10:17:37 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/70, version=0.9.64): ok (rc=0)
Jun 6 10:17:37 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd10:0 (10000)
Jun 6 10:17:37 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd8:0 (10000)
Jun 6 10:17:37 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd6:0 (10000)
Jun 6 10:17:37 astorage1 pengine: [5812]: notice: can_be_master: Forcing unmanaged master drbd10:1 to remain promoted on astorage2
Jun 6 10:17:37 astorage1 pengine: [5812]: notice: can_be_master: Forcing unmanaged master drbd3:1 to remain promoted on astorage2
Jun 6 10:17:37 astorage1 pengine: [5812]: notice: can_be_master: Forcing unmanaged master drbd4:1 to remain promoted on astorage2
Jun 6 10:17:37 astorage1 pengine: [5812]: notice: can_be_master: Forcing unmanaged master drbd5:1 to remain promoted on astorage2
Jun 6 10:17:37 astorage1 pengine: [5812]: notice: can_be_master: Forcing unmanaged master drbd6:1 to remain promoted on astorage2
Jun 6 10:17:37 astorage1 pengine: [5812]: notice: can_be_master: Forcing unmanaged master drbd8:1 to remain promoted on astorage2
Jun 6 10:17:37 astorage1 pengine: [5812]: notice: can_be_master: Forcing unmanaged master drbd9:1 to remain promoted on astorage2
Jun 6 10:17:37 astorage1 pengine: [5812]: notice: stage6: Delaying fencing operations until there are resources to manage
Jun 6 10:17:37 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd5:0 (10000)
Jun 6 10:17:37 astorage1 crmd: [2947]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jun 6 10:17:37 astorage1 crmd: [2947]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1370506657-30) derived from /var/lib/pengine/pe-input-496.bz2
Jun 6 10:17:37 astorage1 crmd: [2947]: info: te_rsc_command: Initiating action 4: cancel astorage2-fencing_monitor_60000 on astorage1 (local)
Jun 6 10:17:37 astorage1 crmd: [2947]: info: cancel_op: No pending op found for astorage2-fencing:35
Jun 6 10:17:37 astorage1 lrmd: [2944]: info: on_msg_cancel_op: no operation with id 35
Jun 6 10:17:37 astorage1 crmd: [2947]: info: te_rsc_command: Initiating action 2: cancel drbd10:0_monitor_31000 on astorage1 (local)
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_add_rsc(870): failed to send a addrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: get_lrm_resource: Could not add resource drbd10:0 to LRM
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: do_lrm_invoke: Invalid resource definition
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <create_request_adv origin="te_rsc_command" t="crmd" version="3.0.6" subt="request" reference="lrm_invoke-tengine-1370506657-33" crm_task="lrm_invoke" crm_sys_to="lrmd" crm_sys_from="tengine" crm_host_to="astorage1" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <crm_xml >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <rsc_op id="2" operation="cancel" operation_key="drbd10:0_monitor_31000" on_node="astorage1" on_node_uuid="76bbbf07-3d2d-476d-b758-2a7a4577f162" transition-key="2:0:0:3fa38a9f-5ebc-4a48-bc80-1c95cc6655bc" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <primitive id="drbd10:0" long-id="ma-ms-drbd10:drbd10:0" class="ocf" provider="linbit" type="drbd" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <attributes CRM_meta_call_id="36" CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="31000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_operation="monitor" CRM_meta_role="Slave" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="r10" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </rsc_op>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </crm_xml>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </create_request_adv>
Jun 6 10:17:37 astorage1 crmd: [2947]: info: te_rsc_command: Initiating action 5: cancel drbd3:0_monitor_31000 on astorage1 (local)
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_add_rsc(870): failed to send a addrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: get_lrm_resource: Could not add resource drbd3:0 to LRM
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: do_lrm_invoke: Invalid resource definition
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <create_request_adv origin="te_rsc_command" t="crmd" version="3.0.6" subt="request" reference="lrm_invoke-tengine-1370506657-34" crm_task="lrm_invoke" crm_sys_to="lrmd" crm_sys_from="tengine" crm_host_to="astorage1" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <crm_xml >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <rsc_op id="5" operation="cancel" operation_key="drbd3:0_monitor_31000" on_node="astorage1" on_node_uuid="76bbbf07-3d2d-476d-b758-2a7a4577f162" transition-key="5:0:0:3fa38a9f-5ebc-4a48-bc80-1c95cc6655bc" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <primitive id="drbd3:0" long-id="ma-ms-drbd3:drbd3:0" class="ocf" provider="linbit" type="drbd" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <attributes CRM_meta_call_id="37" CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="31000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_operation="monitor" CRM_meta_role="Slave" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="r3" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </rsc_op>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </crm_xml>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </create_request_adv>
Jun 6 10:17:37 astorage1 crmd: [2947]: info: te_rsc_command: Initiating action 3: cancel drbd4:0_monitor_31000 on astorage1 (local)
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_add_rsc(870): failed to send a addrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: get_lrm_resource: Could not add resource drbd4:0 to LRM
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: do_lrm_invoke: Invalid resource definition
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <create_request_adv origin="te_rsc_command" t="crmd" version="3.0.6" subt="request" reference="lrm_invoke-tengine-1370506657-35" crm_task="lrm_invoke" crm_sys_to="lrmd" crm_sys_from="tengine" crm_host_to="astorage1" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <crm_xml >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <rsc_op id="3" operation="cancel" operation_key="drbd4:0_monitor_31000" on_node="astorage1" on_node_uuid="76bbbf07-3d2d-476d-b758-2a7a4577f162" transition-key="3:0:0:3fa38a9f-5ebc-4a48-bc80-1c95cc6655bc" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <primitive id="drbd4:0" long-id="ma-ms-drbd4:drbd4:0" class="ocf" provider="linbit" type="drbd" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <attributes CRM_meta_call_id="38" CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="31000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_operation="monitor" CRM_meta_role="Slave" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="r4" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </rsc_op>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </crm_xml>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </create_request_adv>
Jun 6 10:17:37 astorage1 crmd: [2947]: info: te_rsc_command: Initiating action 6: cancel drbd5:0_monitor_31000 on astorage1 (local)
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_add_rsc(870): failed to send a addrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: get_lrm_resource: Could not add resource drbd5:0 to LRM
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: do_lrm_invoke: Invalid resource definition
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <create_request_adv origin="te_rsc_command" t="crmd" version="3.0.6" subt="request" reference="lrm_invoke-tengine-1370506657-36" crm_task="lrm_invoke" crm_sys_to="lrmd" crm_sys_from="tengine" crm_host_to="astorage1" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <crm_xml >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <rsc_op id="6" operation="cancel" operation_key="drbd5:0_monitor_31000" on_node="astorage1" on_node_uuid="76bbbf07-3d2d-476d-b758-2a7a4577f162" transition-key="6:0:0:3fa38a9f-5ebc-4a48-bc80-1c95cc6655bc" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <primitive id="drbd5:0" long-id="ma-ms-drbd5:drbd5:0" class="ocf" provider="linbit" type="drbd" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <attributes CRM_meta_call_id="39" CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="31000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_operation="monitor" CRM_meta_role="Slave" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="r5" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </rsc_op>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </crm_xml>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </create_request_adv>
Jun 6 10:17:37 astorage1 crmd: [2947]: info: te_rsc_command: Initiating action 7: cancel drbd6:0_monitor_31000 on astorage1 (local)
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_add_rsc(870): failed to send a addrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: get_lrm_resource: Could not add resource drbd6:0 to LRM
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: do_lrm_invoke: Invalid resource definition
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <create_request_adv origin="te_rsc_command" t="crmd" version="3.0.6" subt="request" reference="lrm_invoke-tengine-1370506657-37" crm_task="lrm_invoke" crm_sys_to="lrmd" crm_sys_from="tengine" crm_host_to="astorage1" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <crm_xml >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <rsc_op id="7" operation="cancel" operation_key="drbd6:0_monitor_31000" on_node="astorage1" on_node_uuid="76bbbf07-3d2d-476d-b758-2a7a4577f162" transition-key="7:0:0:3fa38a9f-5ebc-4a48-bc80-1c95cc6655bc" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <primitive id="drbd6:0" long-id="ma-ms-drbd6:drbd6:0" class="ocf" provider="linbit" type="drbd" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <attributes CRM_meta_call_id="40" CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="31000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_operation="monitor" CRM_meta_role="Slave" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="r6" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </rsc_op>
Jun 6 10:17:37 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd9:0 (10000)
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </crm_xml>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </create_request_adv>
Jun 6 10:17:37 astorage1 crmd: [2947]: info: te_rsc_command: Initiating action 1: cancel drbd8:0_monitor_31000 on astorage1 (local)
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_add_rsc(870): failed to send a addrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: get_lrm_resource: Could not add resource drbd8:0 to LRM
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: do_lrm_invoke: Invalid resource definition
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <create_request_adv origin="te_rsc_command" t="crmd" version="3.0.6" subt="request" reference="lrm_invoke-tengine-1370506657-38" crm_task="lrm_invoke" crm_sys_to="lrmd" crm_sys_from="tengine" crm_host_to="astorage1" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <crm_xml >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <rsc_op id="1" operation="cancel" operation_key="drbd8:0_monitor_31000" on_node="astorage1" on_node_uuid="76bbbf07-3d2d-476d-b758-2a7a4577f162" transition-key="1:0:0:3fa38a9f-5ebc-4a48-bc80-1c95cc6655bc" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <primitive id="drbd8:0" long-id="ma-ms-drbd8:drbd8:0" class="ocf" provider="linbit" type="drbd" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <attributes CRM_meta_call_id="41" CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="31000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_operation="monitor" CRM_meta_role="Slave" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="r8" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </rsc_op>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </crm_xml>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </create_request_adv>
Jun 6 10:17:37 astorage1 crmd: [2947]: info: te_rsc_command: Initiating action 8: cancel drbd9:0_monitor_31000 on astorage1 (local)
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_add_rsc(870): failed to send a addrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: lrm_get_rsc(666): failed to send a getrsc message to lrmd via ch_cmd channel.
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: get_lrm_resource: Could not add resource drbd9:0 to LRM
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: do_lrm_invoke: Invalid resource definition
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <create_request_adv origin="te_rsc_command" t="crmd" version="3.0.6" subt="request" reference="lrm_invoke-tengine-1370506657-39" crm_task="lrm_invoke" crm_sys_to="lrmd" crm_sys_from="tengine" crm_host_to="astorage1" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <crm_xml >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <rsc_op id="8" operation="cancel" operation_key="drbd9:0_monitor_31000" on_node="astorage1" on_node_uuid="76bbbf07-3d2d-476d-b758-2a7a4577f162" transition-key="8:0:0:3fa38a9f-5ebc-4a48-bc80-1c95cc6655bc" >
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <primitive id="drbd9:0" long-id="ma-ms-drbd9:drbd9:0" class="ocf" provider="linbit" type="drbd" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input <attributes CRM_meta_call_id="42" CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="31000" CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" CRM_meta_notify="true" CRM_meta_operation="monitor" CRM_meta_role="Slave" CRM_meta_timeout="20000" crm_feature_set="3.0.6" drbd_resource="r9" />
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </rsc_op>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </crm_xml>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_lrm_invoke: bad input </create_request_adv>
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_log: FSA: Input I_FAIL from get_lrm_resource() received in state S_TRANSITION_ENGINE
Jun 6 10:17:37 astorage1 crmd: [2947]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_FAIL cause=C_FSA_INTERNAL origin=get_lrm_resource ]
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: destroy_action: Cancelling timer for action 4 (src=73)
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: destroy_action: Cancelling timer for action 2 (src=74)
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: destroy_action: Cancelling timer for action 5 (src=75)
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: destroy_action: Cancelling timer for action 3 (src=76)
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: destroy_action: Cancelling timer for action 6 (src=77)
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: destroy_action: Cancelling timer for action 7 (src=78)
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: destroy_action: Cancelling timer for action 1 (src=79)
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: destroy_action: Cancelling timer for action 8 (src=80)
Jun 6 10:17:37 astorage1 crmd: [2947]: info: do_te_control: Transitioner is now inactive
Jun 6 10:17:37 astorage1 crmd: [2947]: WARN: do_log: FSA: Input I_FAIL from get_lrm_resource() received in state S_POLICY_ENGINE
Jun 6 10:17:37 astorage1 crmd: [2947]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_FAIL cause=C_FSA_INTERNAL origin=get_lrm_resource ]
Jun 6 10:17:37 astorage1 crmd: [2947]: ERROR: crm_abort: abort_transition_graph: Triggered assert at te_utils.c:339 : transition_graph != NULL
Jun 6 10:17:37 astorage1 heartbeat: [2863]: WARN: Managed /usr/lib/heartbeat/crmd process 2947 killed by signal 11 [SIGSEGV - Segmentation violation].
Jun 6 10:17:37 astorage1 ccm: [2942]: info: client (pid=2947) removed from ccm
Jun 6 10:17:37 astorage1 heartbeat: [2863]: ERROR: Managed /usr/lib/heartbeat/crmd process 2947 dumped core
Jun 6 10:17:37 astorage1 heartbeat: [2863]: EMERG: Rebooting system. Reason: /usr/lib/heartbeat/crmd
Jun 6 10:17:37 astorage1 cib: [2943]: WARN: send_ipc_message: IPC Channel to 2947 is not connected
Jun 6 10:17:37 astorage1 cib: [2943]: WARN: cib_notify_client: Notification of client 2947/d4332be4-1b1f-42e7-8d6a-4dc79e5a7e07 failed
Jun 6 10:17:37 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun 6 10:17:37 astorage1 cib: [2943]: WARN: send_ipc_message: IPC Channel to 2947 is not connected
Jun 6 10:17:37 astorage1 cib: [2943]: WARN: cib_notify_client: Notification of client 2947/d4332be4-1b1f-42e7-8d6a-4dc79e5a7e07 failed
Jun 6 10:17:37 astorage1 cib: [2943]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='astorage1']//lrm_resource[@id='astorage2-fencing']/lrm_rsc_op[@id='astorage2-fencing_monitor_60000' and @call-id='35'] (origin=local/crmd/72, version=0.9.70): ok (rc=0)
Jun 6 10:17:37 astorage1 cib: [2943]: WARN: send_ipc_message: IPC Channel to 2947 is not connected
Jun 6 10:17:37 astorage1 cib: [2943]: WARN: send_via_callback_channel: Delivery of reply to client 2947/d4332be4-1b1f-42e7-8d6a-4dc79e5a7e07 failed
Jun 6 10:17:37 astorage1 cib: [2943]: WARN: do_local_notify: A-Sync reply to crmd failed: reply failed
Jun 6 10:17:37 astorage1 attrd: [2946]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd4:0 (10000)
Jun 6 10:17:37 astorage1 pengine: [5812]: notice: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/pengine/pe-input-496.bz2

root@astorage1:/var/lib/heartbeat/cores/hacluster# ls -al
total 2024
drwx------ 2 hacluster root         4096 Jun 6 10:17 .
drwxr-xr-x 5 root      root         4096 Jun 5 16:50 ..
-rw------- 1 hacluster haclient  2187264 Jun 6 10:17 core
root@astorage1:/var/lib/heartbeat/cores/hacluster# file core
core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/usr/lib/heartbeat/crmd'
root@astorage1:/var/lib/heartbeat/cores/hacluster# gdb /usr/lib/heartbeat/crmd core
GNU gdb (GDB) 7.4.1-debian
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/lib/heartbeat/crmd...(no debugging symbols found)...done.
[New LWP 2947]

warning: Can't read pathname for load map: Input/output error.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/lib/heartbeat/crmd'.
Program terminated with signal 11, Segmentation fault.
#0  0x0000000000416fd3 in ?? ()
(gdb) bt
#0  0x0000000000416fd3 in ?? ()
#1  0x0000000000406ef4 in ?? ()
#2  0x0000000000407a54 in ?? ()
#3  0x0000000000410a67 in ?? ()
#4  0x00007fd976db4355 in g_main_context_dispatch () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#5  0x00007fd976db4688 in ?? () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#6  0x00007fd976db4a82 in g_main_loop_run () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#7  0x0000000000405763 in ?? ()
#8  0x00007fd97789fead in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6
#9  0x0000000000405589 in ?? ()
#10 0x00007fff1b1d1d28 in ?? ()
#11 0x000000000000001c in ?? ()
#12 0x0000000000000001 in ?? ()
#13 0x00007fff1b1d2aa0 in ?? ()
#14 0x0000000000000000 in ?? ()
(gdb)

Please let me know whether this is a known bug and whether I should file a bug report against Debian Wheezy.
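The trace above has no symbols ("no debugging symbols found"), so the crmd frames are all `?? ()`. If a resolved backtrace would help, this is roughly what I would run to get one; the `pacemaker-dbg` package name is my assumption for where Wheezy keeps the detached debug symbols, so please correct me if it is something else:

```shell
# Assumption: the detached debug symbols for crmd ship in pacemaker-dbg
# (check with: apt-cache search pacemaker | grep dbg)
apt-get install pacemaker-dbg

# Re-open the same core non-interactively and save a full, symbolized
# backtrace suitable for attaching to a bug report
gdb --batch -ex 'bt full' /usr/lib/heartbeat/crmd \
    /var/lib/heartbeat/cores/hacluster/core > crmd-backtrace.txt
```

I can attach the resulting `crmd-backtrace.txt` (and the core itself) if that is useful.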
Cheers,
Thomas
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
